Science.gov

Sample records for common approximations assumed

  1. Selection of Common Items as an Unrecognized Source of Variability in Test Equating: A Bootstrap Approximation Assuming Random Sampling of Common Items

    ERIC Educational Resources Information Center

    Michaelides, Michalis P.; Haertel, Edward H.

    2014-01-01

    The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
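
    The abstract is truncated in this record, but the title's method can be sketched from what is stated: treat the common items themselves as randomly sampled, and bootstrap over items to approximate the standard error of equating. Below is a minimal illustrative sketch in Python; the mean-mean equating function, the synthetic difficulty values, and all names are assumptions, not the paper's data or procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical difficulty estimates of the same 20 common items on two forms.
b_form_x = rng.normal(0.0, 1.0, size=20)
b_form_y = b_form_x + 0.3 + rng.normal(0.0, 0.1, size=20)

def equating_constant(bx, by):
    # Mean-mean equating: the shift mapping form-X difficulties onto form Y.
    return by.mean() - bx.mean()

# Treat common items as randomly sampled: resample items with replacement
# and recompute the equating constant each time.
idx = rng.integers(0, len(b_form_x), size=(2000, len(b_form_x)))
boot = np.array([equating_constant(b_form_x[i], b_form_y[i]) for i in idx])

print("equating constant:", equating_constant(b_form_x, b_form_y))
print("bootstrap SE from item sampling:", boot.std(ddof=1))
```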

  2. Approximate natural vibration analysis of rectangular plates with openings using assumed mode method

    NASA Astrophysics Data System (ADS)

    Cho, Dae Seung; Vladimir, Nikola; Choi, Tae Muk

    2013-09-01

    Natural vibration analysis of plates with openings of different shape represents an important issue in naval architecture and ocean engineering applications. In this paper, a procedure for vibration analysis of plates with openings and arbitrary edge constraints is presented. It is based on the assumed mode method, where natural frequencies and modes are determined by solving an eigenvalue problem of a multi-degree-of-freedom system matrix equation derived by using Lagrange's equations of motion. The presented solution represents an extension of a procedure for natural vibration analysis of rectangular plates without openings, which has been recently presented in the literature. The effect of an opening is taken into account in an intuitive way, i.e. by subtracting its energy from the total plate energy without opening. Illustrative numerical examples include dynamic analysis of rectangular plates with rectangular, elliptic, circular as well as oval openings with various plate thicknesses and different combinations of boundary conditions. The results are compared with those obtained by the finite element method (FEM) as well as those available in the relevant literature, and very good agreement is achieved.
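
    A minimal numerical sketch of the energy-subtraction idea described above, assuming a 1-D beam analogue rather than the paper's plates: assemble stiffness and mass matrices from assumed modes, subtract the opening region's contributions, and solve the eigenvalue problem that follows from Lagrange's equations. All values are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

# Toy 1-D analogue: a simply supported beam with a cutout between
# x = 0.4L and 0.5L, using the intact beam's sine shapes as assumed modes.
L, EI, rhoA, n_modes = 1.0, 1.0, 1.0, 8
x = np.linspace(0.0, L, 2001)
dx = x[1] - x[0]
cut = (x >= 0.4 * L) & (x <= 0.5 * L)

phi = np.array([np.sin((n + 1) * np.pi * x / L) for n in range(n_modes)])
d2phi = np.array([-(((n + 1) * np.pi / L) ** 2) * p for n, p in enumerate(phi)])

def gram(f, g, region=None):
    # Matrix of integrals f_i * g_j over the whole span or a sub-region.
    w = np.ones_like(x) if region is None else region.astype(float)
    return (f[:, None, :] * g[None, :, :] * w).sum(axis=2) * dx

# Energies of the whole "plate" minus the energies of the opening.
K = EI * (gram(d2phi, d2phi) - gram(d2phi, d2phi, cut))
M = rhoA * (gram(phi, phi) - gram(phi, phi, cut))

omega2, _ = eigh(K, M)          # Lagrange's equations give K q = w^2 M q
print("lowest natural frequencies:", np.sqrt(omega2[:3]))
```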

  3. Mapping biological entities using the longest approximately common prefix method

    PubMed Central

    2014-01-01

    Background The significant growth in the volume of electronic biomedical data in recent decades has pointed to the need for approximate string matching algorithms that can expedite tasks such as named entity recognition, duplicate detection, terminology integration, and spelling correction. The task of source integration in the Unified Medical Language System (UMLS) requires considerable expert effort despite the presence of various computational tools. This problem warrants the search for a new method for approximate string matching and its UMLS-based evaluation. Results This paper introduces the Longest Approximately Common Prefix (LACP) method as an algorithm for approximate string matching that runs in linear time. We compare the LACP method for performance, precision and speed to nine other well-known string matching algorithms. As test data, we use two multiple-source samples from the Unified Medical Language System (UMLS) and two SNOMED Clinical Terms-based samples. In addition, we present a spell checker based on the LACP method. Conclusions The Longest Approximately Common Prefix method completes its string similarity evaluations in less time than all nine string similarity methods used for comparison. The Longest Approximately Common Prefix outperforms these nine approximate string matching methods in its Maximum F1 measure when evaluated on three out of the four datasets, and in its average precision on two of the four datasets. PMID:24928653
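
    The abstract does not spell out how the LACP similarity is computed, so the sketch below encodes one plausible linear-time reading: the length of the longest prefix shared by both strings within a small mismatch budget, normalized by string length. The mismatch rule and normalization are assumptions, not the published definition.

```python
def lacp_like_similarity(a: str, b: str, max_mismatch: int = 1) -> float:
    """Length of the longest common prefix of the two strings containing
    at most `max_mismatch` character mismatches, normalized by the longer
    length. A hypothetical reading of 'approximately common prefix'."""
    mismatches, longest = 0, 0
    for i, (ca, cb) in enumerate(zip(a.lower(), b.lower())):
        if ca != cb:
            mismatches += 1
            if mismatches > max_mismatch:
                break
        longest = i + 1
    return longest / max(len(a), len(b), 1)

print(lacp_like_similarity("acetaminophen", "acetominophen"))   # near 1.0
print(lacp_like_similarity("heart attack", "myocardial infarction"))
```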

  4. Performance Improvement Assuming Complexity

    ERIC Educational Resources Information Center

    Rowland, Gordon

    2007-01-01

    Individual performers, work teams, and organizations may be considered complex adaptive systems, while most current human performance technologies appear to assume simple determinism. This article explores the apparent mismatch and speculates on future efforts to enhance performance if complexity rather than simplicity is assumed. Included are…

  5. Investigations of the influence of common approximations in scatterometry for dimensional nanometrology

    NASA Astrophysics Data System (ADS)

    Endres, J.; Diener, A.; Wurm, M.; Bodermann, B.

    2014-04-01

    Scatterometry is a common tool for the dimensional characterization of periodic nanostructures. It is an indirect measurement method, where the dimensions and geometry of the structures under test are reconstructed from the measured scatterograms applying inverse rigorous calculations. This approach is numerically very elaborate so that usually a number of approximations are used. The influence of each approximation has to be analysed to quantify its contribution to the uncertainty budget. This is a fundamental step to achieve traceability. In this paper, we experimentally investigate two common approximations: the effect of a finite illumination spot size and the application of a more advanced structure model for the reconstruction. We show that the illumination spot size affects the sensitivity to sample inhomogeneities but has no influence on the reconstruction parameters, whereas additional corner rounding of the trapezoidal grating profile significantly improves the reconstruction result.

  6. Assume-Guarantee Testing

    NASA Technical Reports Server (NTRS)

    Blundell, Colin; Giannakopoulou, Dimitra; Pasareanu, Corina S.

    2005-01-01

    Verification techniques for component-based systems should ideally be able to predict properties of the assembled system through analysis of individual components before assembly. This work introduces such a modular technique in the context of testing. Assume-guarantee testing relies on the (automated) decomposition of key system-level requirements into local component requirements at design time. Developers can verify the local requirements by checking components in isolation; failed checks may indicate violations of system requirements, while valid traces from different components compose via the assume-guarantee proof rule to potentially provide system coverage. These local requirements also form the foundation of a technique for efficient predictive testing of assembled systems: given a correct system run, this technique can predict violations by alternative system runs without constructing those runs. We discuss the application of our approach to testing a multi-threaded NASA application, where we treat threads as components.
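
    A toy rendering of the assume-guarantee proof rule the abstract relies on, with components reduced to finite trace sets (a drastic simplification of the paper's setting; all names and properties are invented for illustration):

```python
# Classic assume-guarantee rule:
#   <A> M1 <P>  and  <true> M2 <A>  imply  <true> M1 || M2 <P>

def satisfies(traces, prop):
    return all(prop(t) for t in traces)

def compose(m1, m2):
    # Toy composition: the traces both components agree on.
    return m1 & m2

A = lambda t: t.count("ack") >= t.count("send")    # assumption about M2
P = lambda t: not ("send" in t and "crash" in t)   # system-level property

M1 = {("send", "ack"), ("send", "ack", "send", "ack")}
M2 = {("send", "ack"), ("send", "ack", "send", "ack"), ("ack",)}

# Premise 1: every M1 trace satisfying the assumption A also satisfies P.
premise1 = satisfies({t for t in M1 if A(t)}, P)
# Premise 2: M2 guarantees A unconditionally.
premise2 = satisfies(M2, A)
print(premise1 and premise2, "->", satisfies(compose(M1, M2), P))
```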

  7. Common fixed points in best approximation for Banach operator pairs with Ciric type I-contractions

    NASA Astrophysics Data System (ADS)

    Hussain, N.

    2008-02-01

    The common fixed point theorems, similar to those of Ciric [Lj.B. Ciric, On a common fixed point theorem of a Gregus type, Publ. Inst. Math. (Beograd) (N.S.) 49 (1991) 174-178; Lj.B. Ciric, On Diviccaro, Fisher and Sessa open questions, Arch. Math. (Brno) 29 (1993) 145-152; Lj.B. Ciric, On a generalization of Gregus fixed point theorem, Czechoslovak Math. J. 50 (2000) 449-458], Fisher and Sessa [B. Fisher, S. Sessa, On a fixed point theorem of Gregus, Internat. J. Math. Math. Sci. 9 (1986) 23-28], Jungck [G. Jungck, On a fixed point theorem of Fisher and Sessa, Internat. J. Math. Math. Sci. 13 (1990) 497-500] and Mukherjee and Verma [R.N. Mukherjee, V. Verma, A note on fixed point theorem of Gregus, Math. Japon. 33 (1988) 745-749], are proved for a Banach operator pair. As applications, common fixed point and approximation results for Banach operator pair satisfying Ciric type contractive conditions are obtained without the assumption of linearity or affinity of either T or I. Our results unify and generalize various known results to a more general class of noncommuting mappings.

  8. Approximating common fixed points of asymptotically quasi-nonexpansive mappings by a k+1-step iterative scheme with error terms

    NASA Astrophysics Data System (ADS)

    Xiao, Jian-Zhong; Sun, Jing; Huang, Xuan

    2010-02-01

    In this paper a k+1-step iterative scheme with error terms involving k+1 asymptotically quasi-nonexpansive mappings is studied. In usual Banach spaces, some sufficient and necessary conditions are given for the iterative scheme to approximate a common fixed point. In uniformly convex Banach spaces, power equicontinuity for a mapping is introduced and a series of new convergence theorems are established. Several known results in the current literature are extended and refined.
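
    A minimal numerical sketch, not the paper's exact scheme: a multi-step iteration with summable error terms for several self-maps of the real line that share the fixed point 0.

```python
import math

# Three maps sharing the fixed point x* = 0 (illustrative choices).
maps = [lambda x: x / 2.0, lambda x: 0.8 * math.sin(x), lambda x: x / 3.0]

def multi_step(x0, steps=50, alpha=0.5, err=1e-6):
    x = x0
    for n in range(steps):
        y = x
        for T in maps:                    # one sub-step per mapping
            # Convex combination plus a summable error term e_n ~ 1/n^2.
            y = (1 - alpha) * x + alpha * T(y) + err / (n + 1) ** 2
        x = y                             # x_{n+1} after the k+1 sub-steps
    return x

print(multi_step(5.0))   # converges toward the common fixed point 0
```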

  9. Undamped critical speeds of rotor systems using assumed modes

    NASA Astrophysics Data System (ADS)

    Nelson, H. D.; Chen, W. J.

    1993-07-01

    A procedure is presented to reduce the DOF of a discrete rotordynamics model by utilizing an assumed-modes Rayleigh-Ritz approximation. Many possibilities exist for the assumed modes and any reasonable choice will yield a reduced-order model with adequate accuracy for most applications. The procedure provides an option which can be implemented with relative ease and may prove beneficial for many applications where computational efficiency is particularly important.
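
    The reduction itself is standard Rayleigh-Ritz projection: with an assumed-mode matrix Phi, the reduced matrices are Mr = Phi^T M Phi and Kr = Phi^T K Phi. A sketch on a toy spring-mass chain (illustrative values, not a rotor model):

```python
import numpy as np
from scipy.linalg import eigh

n = 50                                   # full model DOF
M = np.eye(n)                            # lumped unit masses
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # spring chain

# Assumed modes: smooth sine shapes sampled at the node locations.
s = np.arange(1, n + 1) / (n + 1)
Phi = np.array([np.sin(j * np.pi * s) for j in range(1, 6)]).T  # n x 5

Mr, Kr = Phi.T @ M @ Phi, Phi.T @ K @ Phi   # 5 x 5 reduced-order model
w2_red, _ = eigh(Kr, Mr)
w2_full, _ = eigh(K, M)
print(np.sqrt(w2_red[:3]))               # reduced-model frequencies...
print(np.sqrt(w2_full[:3]))              # ...agree with the full model
```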

  10. Commonality.

    ERIC Educational Resources Information Center

    Beaton, Albert E., Jr.

    Commonality analysis is an attempt to understand the relative predictive power of the regressor variables, both individually and in combination. The squared multiple correlation is broken up into elements assigned to each individual regressor and to each possible combination of regressors. The elements have the property that the appropriate sums…
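
    For two regressors the decomposition reduces to three elements that sum to the squared multiple correlation. A short illustration with simulated data (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)       # correlated regressors
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def r2(y, X):
    # Squared multiple correlation from an OLS fit with intercept.
    X = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

r2_full, r2_1, r2_2 = r2(y, [x1, x2]), r2(y, [x1]), r2(y, [x2])
unique1 = r2_full - r2_2          # what x1 adds beyond x2
unique2 = r2_full - r2_1          # what x2 adds beyond x1
common = r2_1 + r2_2 - r2_full    # predictive power the two share
print(unique1, unique2, common, "sum:", unique1 + unique2 + common)
```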

  11. Assume-Guarantee Reasoning for Deadlock

    DTIC Science & Technology

    2006-09-01

    and non-circular assume-guarantee rules [Pnueli 85, de Roever 98, Barringer 03]. Amla and colleagues have presented a sound and complete assume-guarantee method in the context of an abstract process composition framework [Amla 03]. However, they do not discuss deadlock detection or explore the use of... NY: Springer-Verlag, July 2005. [Amla 03] Amla, N.; Emerson, E. A.; Namjoshi, K. S.; & Trefler, R. J. "Abstract Patterns of Compositional Reasoning"

  12. Empirical progress and nomic truth approximation revisited.

    PubMed

    Kuipers, Theo A F

    2014-06-01

    In my From Instrumentalism to Constructive Realism (2000) I have shown how an instrumentalist account of empirical progress can be related to nomic truth approximation. However, it was assumed that a strong notion of nomic theories was needed for that analysis. In this paper it is shown, in terms of truth and falsity content, that the analysis already applies when, in line with scientific common sense, nomic theories are merely assumed to exclude certain conceptual possibilities as nomic possibilities.

  13. UN projections assume fertility decline, mortality increase.

    PubMed

    Haub, C

    1998-12-01

    This article summarizes the latest findings from the UN Population Division's 1998 review of World Population Estimates and Projections. The revisions reflect lower future population size and faster rates of fertility and mortality decline. The medium variant of population projection for 2050 indicates 8.9 billion, which is 458 million lower than projected in 1996 and 924 million lower than projected in 1994. The changes are due to changes in developing countries. Africa's changes accounted for over 50% of the change. The UN medium projection assumes that the desire for fewer children and effective contraceptive practice will continue and that the availability of family planning services will increase. The revisions are also attributed to the widespread prevalence of AIDS in sub-Saharan Africa and greater chances for lower fertility in developing countries. AIDS mortality may decrease average life expectancy in 29 African countries by 7 years. The UN medium projection assumes a decline in fertility from 2.7 children/woman during 1995-2000 to 2.0 children/woman by 2050. The UN high variant is 10.7 billion by 2050; the low variant is 7.3 billion. It is concluded that efforts of national governments and international agencies have contributed to increased access to reproductive health services and subsequent fertility decline. Future declines will depend on accessibility issues. Despite declines, world population is still growing by 78 million annually. Even countries such as Botswana, with 25% of the population infected with HIV/AIDS, will double in size by 2050.

  14. 24 CFR 234.66 - Free assumability; exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...

  15. 24 CFR 234.66 - Free assumability; exceptions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...

  16. 24 CFR 203.512 - Free assumability; exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Free assumability; exceptions. 203... AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Servicing Responsibilities General Requirements § 203.512 Free assumability; exceptions. (a) Policy of free assumability with no restrictions. A mortgagee shall not...

  17. 24 CFR 203.512 - Free assumability; exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Free assumability; exceptions. 203... AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Servicing Responsibilities General Requirements § 203.512 Free assumability; exceptions. (a) Policy of free assumability with no restrictions. A mortgagee shall not...

  18. 24 CFR 203.512 - Free assumability; exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Free assumability; exceptions. 203... AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Servicing Responsibilities General Requirements § 203.512 Free assumability; exceptions. (a) Policy of free assumability with no restrictions. A mortgagee shall not...

  19. 24 CFR 234.66 - Free assumability; exceptions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...

  20. 24 CFR 234.66 - Free assumability; exceptions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...

  1. 24 CFR 203.512 - Free assumability; exceptions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Free assumability; exceptions. 203... AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Servicing Responsibilities General Requirements § 203.512 Free assumability; exceptions. (a) Policy of free assumability with no restrictions. A mortgagee shall not...

  2. 24 CFR 234.66 - Free assumability; exceptions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...

  3. Harmonic Vibrational Frequencies: Approximate Global Scaling Factors for TPSS, M06, and M11 Functional Families Using Several Common Basis Sets.

    PubMed

    Kashinski, D O; Chase, G M; Nelson, R G; Di Nallo, O E; Scales, A N; VanderLey, D L; Byrd, E F C

    2017-03-23

    We propose new approximate global multiplicative scaling factors for the DFT calculation of ground-state harmonic vibrational frequencies using functionals from the TPSS, M06, and M11 functional families with the standard correlation-consistent cc-pVxZ and aug-cc-pVxZ (x = D, T, and Q) sets, the 6-311G split-valence family, and the Sadlej and Sapporo polarized triple-ζ basis sets. Results for the B3LYP, CAM-B3LYP, B3PW91, PBE, and PBE0 functionals with these basis sets are also reported. A total of 99 harmonic frequencies were calculated for 26 gas-phase organic and inorganic molecules typically found in detonated solid propellant residue. Our proposed approximate multiplicative scaling factors are determined using a least-squares approach comparing the computed harmonic frequencies to experimental counterparts well established in the scientific literature. A comparison of our work to previously published global scaling factors is made to verify method reliability and the applicability of our molecular test set.
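
    The least-squares determination of a global scaling factor conventionally takes the Scott-Radom form, minimizing the sum of squared deviations between scaled computed harmonics and experimental frequencies; assuming that form (the abstract does not state it), the factor has a closed-form solution. The frequencies below are placeholders, not data from the paper.

```python
import numpy as np

# Minimize sum_i (lam * omega_i - nu_i)^2 over the scalar lam, where
# omega_i are computed harmonics and nu_i experimental fundamentals.
omega = np.array([1650.0, 3750.0, 1180.0, 2990.0])   # computed, cm^-1
nu = np.array([1595.0, 3657.0, 1151.0, 2917.0])      # experimental, cm^-1

lam = np.sum(omega * nu) / np.sum(omega ** 2)        # closed-form optimum
rmse = np.sqrt(np.mean((lam * omega - nu) ** 2))
print(f"scaling factor = {lam:.4f}, rmse = {rmse:.1f} cm^-1")
```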

  4. Perceptual and Emotional Effects of Assuming a Disability.

    ERIC Educational Resources Information Center

    Raines, Shanan R.; And Others

    The effects of assuming a disability in changing attitudes towards persons with disabilities were assessed in 18 undergraduate students who were enrolled in an introductory rehabilitation counseling course. The subjects were instructed to engage in two levels of assumed disability (one-hand bound and two-hands bound) in three settings (private…

  5. Construction and immunogenicity study of a 297-bp humanized HIV V3 DNA of an approximated last common ancestor in mice.

    PubMed

    Sirivichayakul, Sunee; Tirawatnapong, Thaweesak; Ruxrungtham, Kiat; Oelrichs, Robert; Lorenzen, Sven-Iver; Xin, Ke-Qin; Okuda, Kenji; Phanuphak, Praphan

    2004-03-01

    DNA immunization represents one of the promising HIV-1 vaccine approaches. To overcome the obstacle of genetic variation, we used the last common ancestor (LCA) or "center-of-the-tree" approach to study a DNA fragment of the HIV-1 envelope surrounding the V3 region. A humanized codon of the 297-bp consensus ancestral sequence of the HIV-1 envelope (codons 291-391) was derived from the 80 most recent HIV-1 isolates from the 8 circulating HIV-1 subtypes worldwide. This 297-bp humanized "multi-clade" V3 DNA was amplified by a PCR-based technique. The PCR product was well expressed in vitro whereas the corresponding non-humanized V3 DNA (subtype A/E) could not be expressed. However, both V3 DNA constructs as well as the full-length HIV-1 envelope construct (A/E) were found to be immunogenic in mice by the footpad-swelling assay. Moreover, intracellular and extracellular interferon-gamma could be detected upon in vitro stimulation of spleen cells although the response was relatively weak. Further improvement of our humanized V3 DNA is needed.

  6. Assume-Guarantee Abstraction Refinement Meets Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas

    2014-01-01

    Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction-refinement in the context of hybrid automata.

  7. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT out-performs several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.

  8. Study on Beijing University Returned Overseas Students Assuming Leadership Posts

    ERIC Educational Resources Information Center

    Chinese Education and Society, 2004

    2004-01-01

    In response to requests from the Central Committee's Organization Department and the Organization Department of the Beijing Municipal Party Committee, a monographic study on the subject of Beijing University's returned overseas students assuming leadership posts, was conducted. Information was obtained in various quarters by means of informal…

  9. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false Compartments assumed flooded: general. 174.075 Section 174.075 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore...

  10. 46 CFR 174.075 - Compartments assumed flooded: general.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Compartments assumed flooded: general. 174.075 Section 174.075 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore...

  11. Approximation algorithms

    PubMed Central

    Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.

    1997-01-01

    Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate “yes” or “no” decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
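
    A classic concrete instance of such a guarantee, not one discussed in this record: the maximal-matching algorithm for minimum vertex cover always returns a cover at most twice the optimum, because any cover must contain at least one endpoint of every matched edge.

```python
def vertex_cover_2approx(edges):
    # Greedily build a maximal matching, taking both endpoints of each
    # matched edge; the result is a vertex cover of size <= 2 * OPT.
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            cover |= {u, v}
            matched |= {u, v}
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
print(vertex_cover_2approx(edges))
```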

  12. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.

  13. 17. Photographic copy of photograph. Location unknown but assumed to ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    17. Photographic copy of photograph. Location unknown but assumed to be upper end of canal. Features no longer extant. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1925. Vol. I, Narrative and Photographs, Irrigation District #4, California and Southern Arizona, RG 75, Entry 655, Box 28, National Archives, Washington, DC.) Photographer unknown. MAIN (TITLED FLORENCE) CANAL, WASTEWAY, SLUICEWAY, & BRIDGE, 1/26/25. - San Carlos Irrigation Project, Marin Canal, Amhurst-Hayden Dam to Picacho Reservoir, Coolidge, Pinal County, AZ

  14. Statistical motor number estimation assuming a binomial distribution.

    PubMed

    Blok, Joleen H; Visser, Gerhard H; de Graaf, Sándor; Zwarts, Machiel J; Stegeman, Dick F

    2005-02-01

    The statistical method of motor unit number estimation (MUNE) uses the natural stochastic variation in a muscle's compound response to electrical stimulation to obtain an estimate of the number of recruitable motor units. The current method assumes that this variation follows a Poisson distribution. We present an alternative that instead assumes a binomial distribution. Results of computer simulations and of a pilot study on 19 healthy subjects showed that the binomial MUNE values are considerably higher than those of the Poisson method, and in better agreement with the results of other MUNE techniques. In addition, simulation results predict that the performance in patients with severe motor unit loss will be better for the binomial than Poisson method. The adapted method remains closer to physiology, because it can accommodate the increase in activation probability that results from rising stimulus intensity. It does not need recording windows as used with the Poisson method, and is therefore less user-dependent and more objective and quicker in its operation. For these reasons, we believe that the proposed modifications may lead to significant improvements in the statistical MUNE technique.
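
    A toy moment-matching sketch of the binomial idea (the published method estimates activation probability from the rising stimulus-response behavior; here p is treated as known, and all numbers are invented): with N units of size s each firing with probability p, the mean response is N·p·s and the variance is N·p·(1-p)·s², so the two moments recover s and N.

```python
import numpy as np

rng = np.random.default_rng(2)
N_true, p, s = 40, 0.3, 0.05                 # units, firing prob, mV
responses = rng.binomial(N_true, p, size=5000) * s

m, v = responses.mean(), responses.var()
s_hat = v / (m * (1 - p))                    # since var/mean = s*(1-p)
N_hat = m / (p * s_hat)                      # since mean = N*p*s
print(f"estimated unit size {s_hat:.3f} mV, estimated N {N_hat:.1f}")
```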

  15. Organohalogens in nature: More widespread than previously assumed

    SciTech Connect

    Asplund, G.; Grimvall, A.

    1991-08-01

    Although the natural production of organohalogens has been observed in several studies, it is generally assumed to be much smaller than the industrial production of these compounds. Nevertheless, two important natural sources have been known since the 1970s: red algae in marine ecosystems produce large amounts of brominated compounds, and methyl halides of natural origin are present in the atmosphere. During the past few years it has been shown that organohalogens are so widespread in groundwater, surface water, and soil that all samples in the studies referred to contain measurable amounts of absorbable organohalogens (AOX). The authors document the widespread occurrence of organohalogens in unpolluted soil and water and discuss possible sources of these compounds. It has been suggested that these organohalogens originate from long-range atmospheric transport of industrially produced compounds. The authors review existing evidence of enzymatically mediated halogenation of organic matter in soil and show that, most probably, natural halogenation in the terrestrial environment is the largest source.

  16. Systematic approach for simultaneously correcting the band-gap and p-d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    DOE PAGES

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  17. Students Learn Statistics When They Assume a Statistician's Role.

    ERIC Educational Resources Information Center

    Sullivan, Mary M.

    Traditional elementary statistics instruction for non-majors has focused on computation. Rarely have students had an opportunity to interact with real data sets or to use questioning to drive data analysis, common activities among professional statisticians. Inclusion of data gathering and analysis into whole class and small group activities…

  18. Sensitivity of Global Warming Potentials to the assumed background atmosphere

    SciTech Connect

    Wuebbles, D.J.; Patten, K.O.

    1992-03-05

    This is the first in a series of papers in which we will examine various aspects of the Global Warming Potential (GWP) concept and the sensitivity and uncertainties associated with the GWP values derived for the 1992 updated scientific assessment report of the Intergovernmental Panel on Climate Change (IPCC). One of the authors of this report (DJW) helped formulate the GWP concept for the first IPCC report in 1990. The Global Warming Potential concept was developed for that report as an attempt to fulfill the request from policymakers for a way of relating the potential effects on climate from various greenhouse gases, in much the same way as the Ozone Depletion Potential (ODP) concept (Wuebbles, 1981) is used in policy analyses related to concerns about the relative effects of CFCs and other compounds on stratospheric ozone destruction. We are also coauthors of the section on radiative forcing and Global Warming Potentials for the 1992 IPCC update; however, there was too little time to prepare much in the way of new research material for that report. Nonetheless, we have recognized for some time that there are a number of uncertainties and limitations associated with the definition of GWPs used in both the original and new IPCC reports. In this paper, we examine one of those uncertainties, namely, the effect of the assumed background atmospheric concentrations on the derived GWPs. Later papers will examine the sensitivity of GWPs to other uncertainties and limitations in the current concept.

  19. Pythagorean Approximations and Continued Fractions

    ERIC Educational Resources Information Center

    Peralta, Javier

    2008-01-01

    In this article, we will show that the Pythagorean approximations of √2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
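
    The continued fraction in question is √2 = [1; 2, 2, 2, ...], whose convergents reproduce the classical side-and-diagonal approximations. A short sketch computing the first few convergents exactly:

```python
from fractions import Fraction

def sqrt2_convergent(n):
    # Evaluate [1; 2, 2, ..., 2] with n trailing 2s beyond the first.
    x = Fraction(2)
    for _ in range(n):
        x = 2 + 1 / x
    return 1 + 1 / x

for n in range(5):
    print(sqrt2_convergent(n))   # 3/2, 7/5, 17/12, 41/29, 99/70
```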

  20. A 4-node assumed-stress hybrid shell element with rotational degrees of freedom

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad A.

    1990-01-01

    An assumed-stress hybrid/mixed 4-node quadrilateral shell element is introduced that alleviates most of the deficiencies associated with such elements. The formulation of the element is based on the assumed-stress hybrid/mixed method using the Hellinger-Reissner variational principle. The membrane part of the element has 12 degrees of freedom including rotational or drilling degrees of freedom at the nodes. The bending part of the element also has 12 degrees of freedom. The bending part of the element uses the Reissner-Mindlin plate theory which takes into account the transverse shear contributions. The element formulation is derived from an 8-node isoparametric element. This process is accomplished by assuming quadratic variations for both in-plane and out-of-plane displacement fields and linear variations for both in-plane and out-of-plane rotation fields along the edges of the element. In addition, the degrees of freedom at midside nodes are approximated in terms of the degrees of freedom at corner nodes. During this process the rotational degrees of freedom at the corner nodes enter into the formulation of the element. The stress fields are expressed in the element natural-coordinate system such that the element remains invariant with respect to node numbering.

  1. Computing functions by approximating the input

    NASA Astrophysics Data System (ADS)

    Goldberg, Mayer

    2012-12-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.
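
    The abstract does not spell out the construction, but one classic input-side scheme in the same spirit extracts the binary digits of log2(x) exactly, using nothing beyond multiplication, division by two, and comparison:

```python
def log2_bits(x: float, n_bits: int = 20) -> float:
    assert x > 0
    result = 0.0
    # Normalize x into [1, 2); each halving/doubling shifts log2 by 1.
    while x >= 2.0:
        x /= 2.0
        result += 1.0
    while x < 1.0:
        x *= 2.0
        result -= 1.0
    bit = 0.5
    for _ in range(n_bits):      # squaring the input doubles its logarithm
        x *= x
        if x >= 2.0:
            x /= 2.0
            result += bit        # this output bit is exact
        bit /= 2.0
    return result

print(log2_bits(10.0))           # ~3.321928, i.e., log2(10)
```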

  2. Second Approximation to Conical Flows

    DTIC Science & Technology

    1950-12-01

    OCR fragment from the scanned report (Wright Air Development Center; report dated December 1950, released July 1953). The recoverable text indicates that the second approximation, i.e., the second-order terms of the expansion, is computed from the isentropic equations of motion following the scheme shown in Fig. 1; the equations themselves are not recoverable from the scan.

  3. Computing Functions by Approximating the Input

    ERIC Educational Resources Information Center

    Goldberg, Mayer

    2012-01-01

    In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…

  4. Interpolation and Approximation Theory.

    ERIC Educational Resources Information Center

    Kaijser, Sten

    1991-01-01

    Introduced are the basic ideas of interpolation and approximation theory through a combination of theory and exercises written for extramural education at the university level. Topics treated are spline methods, Lagrange interpolation, trigonometric approximation, Fourier series, and polynomial approximation. (MDH)
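
    As a small illustration of one listed topic, Lagrange interpolation builds the unique degree-(n-1) polynomial through n samples; the sketch below fits a cubic through four samples of sin(x):

```python
import numpy as np

def lagrange(xs, ys, t):
    # Evaluate the Lagrange interpolating polynomial at t.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (t - xj) / (xi - xj)
        total += yi * basis
    return total

xs = np.array([0.0, 0.5, 1.0, 1.5])
ys = np.sin(xs)
print(lagrange(xs, ys, 0.75), np.sin(0.75))   # interpolant vs true value
```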

  5. 25 CFR 170.610 - What IRR Program functions may a tribe assume under ISDEAA?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 1 2011-04-01 2011-04-01 false What IRR Program functions may a tribe assume under... Agreements Under Isdeaa § 170.610 What IRR Program functions may a tribe assume under ISDEAA? A tribe may assume all IRR Program functions and activities that are otherwise contractible under a...

  6. Approximate flavor symmetries

    SciTech Connect

    Rasin, A.

    1994-04-01

    We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.

  7. Approximation of Laws

    NASA Astrophysics Data System (ADS)

    Niiniluoto, Ilkka

    2014-03-01

    Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).

  8. 76 FR 4933 - Environmental Review Procedures for Entities Assuming HUD Environmental Review Responsibilities...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-27

    ... Responsibilities; Notice of Proposed Information Collection: Comment Request AGENCY: Office of the Assistant...: Environmental Review Procedures for Entities Assuming HUD Environmental Responsibilities. OMB Control...

  9. The benefits of tight glycemic control in critical illness: Sweeter than assumed?

    PubMed

    Gardner, Andrew John

    2014-12-01

    Hyperglycemia has long been observed amongst critically ill patients and associated with increased mortality and morbidity. Tight glycemic control (TGC) is the clinical practice of controlling blood glucose (BG) down to the "normal" 4.4-6.1 mmol/L range of a healthy adult, aiming to avoid any potential deleterious effects of hyperglycemia. The ground-breaking Leuven trials reported a mortality benefit of approximately 10% when using this technique, which led many to endorse its benefits. In stark contrast, the multi-center Normoglycemia in Intensive Care Evaluation-Survival Using Glucose Algorithm Regulation (NICE-SUGAR) trial not only failed to replicate this outcome but showed that TGC appeared to be harmful. This review re-analyzes the current literature and suggests that hope for a benefit from TGC should not be so hastily abandoned. Inconsistencies in study design make a like-for-like comparison of the Leuven and NICE-SUGAR trials challenging. Inadequate measures for preventing hypoglycemic events are likely to have contributed to the increased mortality observed in the NICE-SUGAR treatment group. New technologies, including predictive models, are being developed to improve the safety of TGC, primarily by minimizing hypoglycemia. Intensive care units lacking trained staff and monitoring capacity would be unwise to attempt TGC, especially considering its as-yet-undefined benefit and the deleterious nature of hypoglycemia. International recommendations now advise clinicians to ensure critically ill patients maintain a BG of <10 mmol/L. Despite encouraging evidence, currently we can only speculate and remain optimistic that the benefit of TGC in clinical practice is sweeter than assumed.

  10. Quantifying the impact on hyporheic flow of assuming homogenous hydraulic conductivity distributions within permeameters

    NASA Astrophysics Data System (ADS)

    Stonedahl, S. H.; Cooper, D. G.; Everingham, J. M.; Kraciun, M. K.; Stonedahl, F.

    2012-12-01

    Hydraulic conductivity (K) is an important sediment property related to the speed with which water flows through sediments. It affects hyporheic uptake and residence time distributions, which are critical to assessing solute transport and nutrient depletion in streams. In this study we investigated the effect of millimeter-scale K variability on measurements that use one of the simplest in situ measurement techniques, the falling-head permeameter test. In a laboratory setting vertical K values and their variability were calculated for a variety of sands. We created composite systems by layering these sands and measured their respective K values. Spatial head distributions for these composite systems were modeled using the finite difference capability of MODFLOW with inputs of head levels, boundaries, and known localized K values. These head distributions were then used to calculate the volumetric flux through the column, which was used in the Hvorslev constant-head equation to calculate vertical K values. We found that these simulated system K values reproduced the same qualitative trends as the laboratory measurements, and provided a good quantitative match in some cases. We then used the model to select distinct heterogeneous K distributions (i.e. layered, randomly distributed, and systematically increasing) that have the same simulated system K value. These K distributions were used in a two-dimensional dune/ripple-scale pumping model to approximate hyporheic residence time distributions and provide estimates of the error associated with the assumed homogeneity of the K distributions. The results have direct implications for both field studies where hydraulic conductivity is being measured and also for determining the level of detail that should be included in computational models.
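
    The layered-composite behavior the study measures has a simple analytic anchor: for purely vertical flow through horizontal layers, the effective K is the thickness-weighted harmonic mean, so assuming a homogeneous arithmetic-mean K overstates flux. A sketch with invented layer values:

```python
import numpy as np

thickness = np.array([0.10, 0.05, 0.10])   # m, three layered sands
K = np.array([1e-3, 1e-5, 5e-4])           # m/s, per-layer conductivity

# Layers in series: harmonic mean governs vertical flow.
K_harmonic = thickness.sum() / np.sum(thickness / K)
K_arith = np.average(K, weights=thickness)
print(f"effective vertical K = {K_harmonic:.2e} m/s")
print(f"arithmetic-mean K    = {K_arith:.2e} m/s (overestimates flow)")
```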

  11. The Motivation of Teachers to Assume the Role of Cooperating Teacher

    ERIC Educational Resources Information Center

    Jonett, Connie L. Foye

    2009-01-01

    This study explored a phenomenological understanding of the motivation and influences that cause experienced teachers to assume pedagogical training of student teachers through the role of cooperating teacher. The research question guiding the study was what motivates teachers to…

  12. Pre-Service Teachers' Personal Epistemic Beliefs and the Beliefs They Assume Their Pupils to Have

    ERIC Educational Resources Information Center

    Rebmann, Karin; Schloemer, Tobias; Berding, Florian; Luttenberger, Silke; Paechter, Manuela

    2015-01-01

    In their workaday life, teachers are faced with multiple complex tasks. How they carry out these tasks is also influenced by their epistemic beliefs and the beliefs they assume their pupils hold. In an empirical study, pre-service teachers' epistemic beliefs and those they assume of their pupils were investigated in the setting of teacher…

  13. 24 CFR 1000.20 - Is an Indian tribe required to assume environmental review responsibilities?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... evaluation of the environmental issues and take responsibility for the scope and content of the EA in... assume environmental review responsibilities? 1000.20 Section 1000.20 Housing and Urban Development... § 1000.20 Is an Indian tribe required to assume environmental review responsibilities? (a) No. It is...

  14. Indexing the approximate number system.

    PubMed

    Inglis, Matthew; Gilmore, Camilla

    2014-01-01

    Much recent research attention has focused on understanding individual differences in the approximate number system, a cognitive system believed to underlie human mathematical competence. To date researchers have used four main indices of ANS acuity, and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects.
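
    Weber fractions are conventionally fitted with the standard ANS psychophysics model, in which accuracy on comparing numerosities n1 and n2 is Phi(|n1 - n2| / (w * sqrt(n1^2 + n2^2))); assuming that model (it is not stated in this abstract), w can be recovered by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(3)
n1 = rng.integers(8, 32, size=400)
n2 = (n1 * rng.choice([0.75, 0.8, 1.25, 1.33], size=400)).astype(int)
w_true = 0.2
p = norm.cdf(np.abs(n1 - n2) / (w_true * np.hypot(n1, n2)))
correct = rng.random(400) < p            # simulated comparison responses

def neg_log_lik(w):
    q = norm.cdf(np.abs(n1 - n2) / (w * np.hypot(n1, n2)))
    q = np.clip(q, 1e-9, 1 - 1e-9)
    return -np.sum(np.where(correct, np.log(q), np.log(1 - q)))

fit = minimize_scalar(neg_log_lik, bounds=(0.05, 1.0), method="bounded")
print(f"true w = {w_true}, fitted w = {fit.x:.3f}")
```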

  15. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.

  16. A genome-wide search for genes predisposing to manic-depression, assuming autosomal dominant inheritance

    SciTech Connect

    Coon, H.; Jensen, S.; Hoff, M.; Holik, J.; Plaetke, R.; Reimherr, F.; Wender, P.; Leppert, M.; Byerley, W.

    1993-06-01

    Manic-depressive illness (MDI), also known as "bipolar affective disorder", is a common and devastating neuropsychiatric illness. Although pivotal biochemical alterations underlying the disease are unknown, results of family, twin, and adoption studies consistently implicate genetic transmission in the pathogenesis of MDI. In order to carry out linkage analysis, the authors ascertained eight moderately sized pedigrees containing multiple cases of the disease. For a four-allele marker mapping at 5 cM from the disease gene, the pedigree sample has >97% power to detect a dominant allele under genetic homogeneity and has >73% power under 20% heterogeneity. To date, the eight pedigrees have been genotyped with 328 polymorphic DNA loci throughout the genome. When autosomal dominant inheritance was assumed, 273 DNA markers gave lod scores < -2.0 at θ = .05, and 4 DNA marker loci yielded lod scores > 1 (chromosome 5: D5S39, D5S43, and D5S62; chromosome 11: D11S85). Of the markers giving lod scores > 1, only D5S62 continued to show evidence for linkage when the affected-pedigree-member method was used. The D5S62 locus maps to distal 5q, a region containing neurotransmitter-receptor genes for dopamine, norepinephrine, glutamate, and gamma-aminobutyric acid. Although additional work in this region may be warranted, the linkage results should be interpreted as preliminary data, as 68 unaffected individuals are not past the age of risk. 72 refs., 2 tabs.

  17. Green Ampt approximations

    NASA Astrophysics Data System (ADS)

    Barry, D. A.; Parlange, J.-Y.; Li, L.; Jeng, D.-S.; Crapper, M.

    2005-10-01

    The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W-1 function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W-1 function and vice versa. An infinite family of asymptotic expansions to W-1 is presented. Although these expansions do not converge near the branch point of the W function (corresponding to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W-1 that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 × 10^-5 %. This error is orders of magnitude lower than any existing analytical approximations.
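
    The exact W-1 form is easy to state: writing S = psi * dtheta, tau = K*t/S, and f = F/S, the implicit Green-Ampt relation tau = f - ln(1 + f) inverts to f = -1 - W_{-1}(-exp(-(1 + tau))). A short numerical check with scipy (parameter values invented):

```python
import numpy as np
from scipy.special import lambertw

def green_ampt_F(t, K=1e-5, psi=0.1, dtheta=0.3):
    # Exact cumulative infiltration under immediate ponding via W_{-1}.
    S = psi * dtheta
    tau = K * t / S
    f = -1.0 - lambertw(-np.exp(-(1.0 + tau)), k=-1).real
    return f * S                           # cumulative infiltration [m]

t = 3600.0                                 # one hour
F = green_ampt_F(t)
S = 0.1 * 0.3
# Verify against the implicit equation K*t = F - S*ln(1 + F/S):
print(F, 1e-5 * t, F - S * np.log(1 + F / S))
```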

  18. Effect of Assumed Damage and Location on the Delamination Onset Predictions for Skin-Stiffener Debonding

    NASA Technical Reports Server (NTRS)

    Paris, Isabelle L.; Krueger, Ronald; O'Brien, T. Kevin

    2004-01-01

    The difference in delamination onset predictions based on the type and location of the assumed initial damage are compared in a specimen consisting of a tapered flange laminate bonded to a skin laminate. From previous experimental work, the damage was identified to consist of a matrix crack in the top skin layer followed by a delamination between the top and second skin layer (+45 deg./-45 deg. interface). Two-dimensional finite elements analyses were performed for three different assumed flaws and the results show a considerable reduction in critical load if an initial delamination is assumed to be present, both under tension and bending loads. For a crack length corresponding to the peak in the strain energy release rate, the delamination onset load for an assumed initial flaw in the bondline is slightly higher than the critical load for delamination onset from an assumed skin matrix crack, both under tension and bending loads. As a result, assuming an initial flaw in the bondline is simpler while providing a critical load relatively close to the real case. For the configuration studied, a small delamination might form at a lower tension load than the critical load calculated for a 12.7 mm (0.5") delamination, but it would grow in a stable manner. For the bending case, assuming an initial flaw of 12.7 mm (0.5") is conservative, the crack would grow unstably.

  19. Preparing for Upheaval in North Korea: Assuming North Korean Regime Collapse

    DTIC Science & Technology

    2013-12-01

    Master's thesis by Kwonwoo Kim, December 2013 (thesis advisor: Wade Huntley). The recoverable snippet argues that not only a defense agreement between North Korea and China but also pro-Chinese North Korean elites' requests for Chinese help are likely to justify Chinese intervention; the remainder of the snippet is report-form residue repeating the title.

  20. Intrinsic Nilpotent Approximation.

    DTIC Science & Technology

    1985-06-01

    OCR fragment from the scanned report cover and abstract (Technical Report LIDS-R-1482, MIT Laboratory for Information and Decision Systems). The recoverable text concerns the approximation of certain infinite-dimensional filtered Lie algebras L by finite-dimensional graded nilpotent Lie algebras; the remaining equations and form fields are not recoverable from the scan.

  1. Anomalous diffraction approximation limits

    NASA Astrophysics Data System (ADS)

    Videen, Gorden; Chýlek, Petr

    It has been reported in a recent article [Liu, C., Jonas, P.R., Saunders, C.P.R., 1996. Accuracy of the anomalous diffraction approximation to light scattering by column-like ice crystals. Atmos. Res., 41, pp. 63-69] that the anomalous diffraction approximation (ADA) accuracy does not depend on particle refractive index, but instead is dependent on the particle size parameter. Since this is at odds with previous research, we thought these results warranted further discussion.

  2. [The relationship between assumed-competence and communication about learning in high school].

    PubMed

    Kodaira, Hideshi; Aoki, Naoko; Matsuoka, Mirei; Hayamizu, Toshihiko

    2008-08-01

    This study investigated the relationship between assumed-competence (a sense of competence based on generally undervaluing others) and learning-related communication. Two hundred seventy-one high school students completed a questionnaire that measured assumed-competence, engagement in study-related conversations with friends (planned courses after high school, students' own achievements in learning, school subjects they like and dislike, anxiety about failure, criticism of others), help-seeking behavior directed towards teachers and friends, and help-giving to friends. Students with high assumed-competence tended to brag about their own achievements, criticize their teachers' methods, and talk negatively about their friends' academic failures. Furthermore, assumed-competence correlated positively with avoidance of help-seeking from friends, avoidance of help-giving to friends, and giving away answers on assignments. These types of help-seeking and help-giving behaviors are apparently not connected with learning, given that students with high assumed-competence tended not to seek help from friends or to help friends in appropriate ways. The present results indicate that assumed-competence could be an obstruction to the formation of good relationships with others.

  3. Approximate spatial reasoning

    NASA Technical Reports Server (NTRS)

    Dutta, Soumitra

    1988-01-01

    Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise and reject the fuzziness of concepts in natural use, replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regarding human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is natural that we try to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
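
    A tiny sketch of the kind of fuzzy spatial predicate the abstract describes, with graded membership for a linguistic term like "near" combined by min/max fuzzy logic (all membership functions are illustrative choices):

```python
def near(distance_m: float) -> float:
    # Fully "near" under 10 m, definitely not "near" beyond 100 m,
    # with a linear ramp in between (an illustrative membership function).
    if distance_m <= 10.0:
        return 1.0
    if distance_m >= 100.0:
        return 0.0
    return (100.0 - distance_m) / 90.0

def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)        # standard min-conjunction

# "The obstacle is near and roughly ahead" -> graded degree of truth.
ahead = 0.8                 # assumed membership of "roughly ahead"
print(fuzzy_and(near(35.0), ahead))
```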

  4. Approximate kernel competitive learning.

    PubMed

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
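
    The paper's AKCL algorithm is not reproduced in the abstract; the sketch below is a hedged stand-in that pairs the stated sampling idea with a Nystrom feature map and then runs plain winner-take-all competitive learning in the approximated kernel space:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(5, 1, (200, 2))])

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Nystrom approximation: feature map from m landmark points, so the
# full n x n kernel matrix is never formed.
landmarks = X[rng.choice(len(X), 30, replace=False)]
U, s, _ = np.linalg.svd(rbf(landmarks, landmarks))
Z = rbf(X, landmarks) @ U / np.sqrt(s)

# Online competitive learning (winner-take-all) in the sampled subspace.
k, lr = 2, 0.1
W = Z[rng.choice(len(Z), k, replace=False)].copy()
for z in Z[rng.permutation(len(Z))]:
    w = np.argmin(((W - z) ** 2).sum(1))      # winning prototype
    W[w] += lr * (z - W[w])                   # move winner toward sample
labels = ((Z[:, None, :] - W[None, :, :]) ** 2).sum(-1).argmin(1)
print(np.bincount(labels))                    # roughly 200 / 200
```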

  5. Common Cold

    MedlinePlus

    ... nose, coughing - everyone knows the symptoms of the common cold. It is probably the most common illness. In ... avoid colds. There is no cure for the common cold. For relief, try Getting plenty of rest Drinking ...

  6. Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders

    PubMed Central

    Shay, Blake; Weber, Robert J.

    2015-01-01

    Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals. PMID:27621512

  7. Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders.

    PubMed

    Shay, Blake; Weber, Robert J

    2015-11-01

    Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals.

  8. Three dimensional potential and current distributions in a Hall generator with assumed velocity profiles

    NASA Technical Reports Server (NTRS)

    Stankiewicz, N.; Palmer, R. W.

    1972-01-01

    Three-dimensional potential and current distributions in a Faraday segmented MHD generator operating in the Hall mode are computed. Constant conductivity and a Hall parameter of 1.0 are assumed. The electric fields and currents are assumed to be coperiodic with the electrode structure. The flow is assumed to be fully developed, and a family of power-law velocity profiles, ranging from parabolic to turbulent, is used to show the effect of the fullness of the velocity profile. Calculation of the square of the current density shows that nonequilibrium heating is not likely to occur along the boundaries. This seems to discount the idea that the generator insulating walls are regions of high conductivity and are therefore responsible for boundary-layer shorting, unless the shorting is a surface phenomenon on the insulating material.

  9. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1998-06-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well-studied problems, and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
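
    Since the technique is closely related to randomized rounding, the flavor of rounding a fractional LP solution can be conveyed in a few lines. The Python sketch below is not the authors' solution-decomposition oracle; it is classic randomized rounding applied to a toy weighted set cover instance, with the sets, costs, and ln(n) scaling chosen purely for illustration.

        import numpy as np
        from scipy.optimize import linprog

        # Toy weighted set cover: solve the LP relaxation, then round randomly.
        sets = [{0, 1, 2}, {1, 3}, {2, 4}, {3, 4, 5}, {0, 5}]
        cost = np.array([3.0, 1.0, 1.0, 2.0, 1.5])
        n_elem = 6

        # Covering constraints: every element must lie in at least one chosen set.
        A = np.array([[1.0 if e in s else 0.0 for s in sets] for e in range(n_elem)])
        res = linprog(cost, A_ub=-A, b_ub=-np.ones(n_elem), bounds=(0, 1))
        x_frac = res.x                       # fractional LP optimum

        # Randomized rounding: pick each set with probability scaled by ln(n),
        # retrying until everything is covered (succeeds with high probability).
        rng = np.random.default_rng(0)
        while True:
            picked = rng.random(len(sets)) < np.minimum(1.0, np.log(n_elem) * x_frac)
            if (A[:, picked].sum(axis=1) >= 1).all():
                break
        print("LP lower bound:", cost @ x_frac)
        print("rounded cover cost:", cost[picked].sum())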

  10. Multicriteria approximation through decomposition

    SciTech Connect

    Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.

    1997-12-01

    The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well-studied problems, and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain the first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.

  11. On Stochastic Approximation.

    ERIC Educational Resources Information Center

    Wolff, Hans

    This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
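
    A minimal sketch of the process in Python, assuming an illustrative regression function M(x) = tanh(x - 1) with root x* = 1, an arbitrary starting point, and the gain sequence a_n = 1/n (which satisfies the classical conditions: the sum of the a_n diverges while the sum of their squares converges):

        import numpy as np

        # Robbins-Monro iteration for the root of M(x) = E[Y(x)], observed
        # only through noisy measurements Y(x) = tanh(x - 1) + noise.
        rng = np.random.default_rng(42)
        x = 5.0                      # arbitrary starting point
        for n in range(1, 20001):
            y_n = np.tanh(x - 1.0) + rng.normal(scale=0.5)
            x -= (1.0 / n) * y_n     # converges to the root with probability one
        print(x)                     # close to 1.0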

  12. Approximating Integrals Using Probability

    ERIC Educational Resources Information Center

    Maruszewski, Richard F., Jr.; Caudle, Kyle A.

    2005-01-01

    As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
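
    The paper's examples are written in Visual Basic; the same expectation trick looks like this in Python, with the integrand and sample size chosen purely for illustration:

        import math
        import random

        # Estimate I = integral of sin(x) from 0 to pi (exactly 2) via probability:
        # if U ~ Uniform(0, pi), then E[sin(U)] = I / pi, so I is approximated by
        # pi times the sample mean of sin(U_i).
        N = 100_000
        total = sum(math.sin(random.uniform(0.0, math.pi)) for _ in range(N))
        print(math.pi * total / N)   # close to 2.0; error shrinks like 1/sqrt(N)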

  13. Optimal Control for TB disease with vaccination assuming endogeneous reactivation and exogeneous reinfection

    NASA Astrophysics Data System (ADS)

    Anggriani, N.; Wicaksono, B. C.; Supriatna, A. K.

    2016-06-01

    Tuberculosis (TB) is one of the deadliest infectious diseases in the world, caused by Mycobacterium tuberculosis. The disease spreads through the air via droplets from infectious persons when they cough. The World Health Organization (WHO) has paid special attention to TB by providing some solutions, for example the BCG vaccine, which prevents an infected person from developing active infectious TB. In this paper we develop a mathematical model of the spread of TB which assumes endogenous reactivation and exogenous reinfection factors. We also assume that some of the susceptible population are vaccinated. Furthermore, we investigate the optimal vaccination level for the disease.
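
    A minimal numerical sketch of this kind of model, assuming an invented S-V-E-I structure and made-up parameter values rather than the authors' actual system, can be integrated with a standard ODE solver:

        import numpy as np
        from scipy.integrate import solve_ivp

        # S-V-E-I sketch with vaccination rate u, endogenous reactivation
        # (rate k: E -> I) and exogenous reinfection (term p*lam*E: latent
        # individuals pushed into active TB by renewed contact).  Recovered
        # individuals simply leave the tracked classes to keep things short.
        beta, mu, k, p, r, u, eps = 0.5, 0.014, 0.005, 0.4, 0.2, 0.3, 0.1

        def rhs(t, y):
            S, V, E, I = y
            N = S + V + E + I
            lam = beta * I / N                      # force of infection
            dS = mu * N - lam * S - (u + mu) * S
            dV = u * S - eps * lam * V - mu * V     # leaky vaccine, efficacy 1 - eps
            dE = lam * S + eps * lam * V - (k + mu) * E - p * lam * E
            dI = (k + p * lam) * E - (r + mu) * I
            return [dS, dV, dE, dI]

        sol = solve_ivp(rhs, (0.0, 200.0), [0.99, 0.0, 0.0, 0.01])
        print("active-TB prevalence at t = 200:", sol.y[3, -1])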

  14. Bowing-reactivity trends in EBR-II assuming zero-swelling ducts

    SciTech Connect

    Meneghetti, D.

    1994-03-01

    Predicted trends of duct-bowing reactivities for the Experimental Breeder Reactor II (EBR-II) are correlated with predicted row-wise duct deflections assuming the use of idealized zero-void-swelling subassembly ducts. These ducts are assumed to undergo no irradiation-induced swelling, but estimates of the effects of irradiation-creep relaxation of thermally induced bowing stresses are included. The results illustrate the manner in which at-power creep may affect subsequent duct deflections at zero power, and thereby the trends of the bowing component of a subsequent power reactivity decrement.

  15. Optimizing the Zeldovich approximation

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.

    1994-01-01

    We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other windows tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/(2 k_G^2)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g., those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
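
    The truncation step itself is easy to demonstrate. The one-dimensional Python sketch below applies the Gaussian window exp(-k^2/(2 k_G^2)) to random initial Fourier amplitudes and then displaces particles with the Zeldovich approximation; the power-law spectrum, box size, and value of k_nl are toy assumptions, not the paper's simulation setup.

        import numpy as np

        # 1-D sketch of the truncated Zeldovich approximation: a Gaussian window
        # is applied to random initial Fourier amplitudes before displacement.
        N, L = 512, 100.0
        rng = np.random.default_rng(0)
        k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)     # wavenumbers
        nz = k != 0.0                                    # skip the k = 0 mode

        # Gaussian random initial density contrast, power-law spectrum P ~ 1/|k|.
        delta_k = np.zeros(N, dtype=complex)
        delta_k[nz] = np.abs(k[nz]) ** -0.5 * (rng.normal(size=nz.sum())
                                               + 1j * rng.normal(size=nz.sum()))

        k_nl = 0.5                                       # assumed nonlinear scale
        k_G = 1.5 * k_nl                                 # best-choice window scale
        delta_k *= np.exp(-(k ** 2) / (2.0 * k_G ** 2))  # Gaussian truncation

        # Zeldovich displacement: div(psi) = -delta, so psi_k = 1j*delta_k/k in 1-D.
        psi_k = np.zeros(N, dtype=complex)
        psi_k[nz] = 1j * delta_k[nz] / k[nz]
        psi = np.fft.ifft(psi_k).real
        q = np.arange(N) * (L / N)                       # unperturbed positions
        x = (q + 1.0 * psi) % L                          # growth factor D = 1
        print(x[:5])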

  16. [Return-to-work results of depressive employees in correlation to assumed chronification].

    PubMed

    Poersch, M

    2007-06-01

    Return-to-work results of 52 depressive employees were examined in 4 subgroups with different assumed chronification. Maximum chronification was assumed if motivation for a return to work was below 5 points (1-8 BWM scale) and sickness absence was longer than 52 weeks ("chronic" group). Minimum chronification was assumed if motivation was 5 points or more (1-8 BWM scale) and sickness absence was below 52 weeks ("motivated" group). The "ambivalently motivated" subgroup had a return-to-work motivation of 5 points or more and a sickness absence longer than 52 weeks; the "ambivalently demotivated" subgroup had a return-to-work motivation of below 5 points and a sickness absence below 52 weeks. The "motivated" subgroup achieved a return-to-work rate of 100%, the "ambivalently motivated" 67%, the "ambivalently demotivated" 33%, and the "chronic" group 9.5%. In spite of the small numbers, the return-to-work results of these four subgroups, divided by (a) duration of sickness absence and (b) motivation for a return to work, seemed to show a notable inverse correlation with the assumed chronification of depressively ill employees.

  17. Assumed strain formulation for the four-node quadrilateral with improved in-plane bending behaviour

    NASA Astrophysics Data System (ADS)

    Stolarski, Henryk K.; Chen, Yung-I.

    1995-04-01

    A new assumed strain quadrilateral element with highly accurate in-plane bending behavior is presented for plane stress and plane strain analysis. The basic idea of the formulation consists in identification of various modes of deformation and then in proper modification of the strain field in some of these modes. In particular, the strain operator corresponding to the in-plane bending modes is modified to simulate the strain field resulting from the assumptions usually made in structural mechanics. The modification of the strain field leads to the assumed strain operator on the element level. As a result, the so-called shear and membrane locking phenomena are alleviated. The element exhibits remarkable success in bending-dominated problems even when severely distorted and high aspect ratio meshes are used. Another advantage of the present assumed strain element is that locking for nearly incompressible materials is also mitigated. While this assumed strain element passes the patch test only for the parallelogram shapes, the element provides convergent solutions as long as the initially general form of the element approaches a parallelogram shape with the refinement of the mesh.

  18. A Model for Teacher Effects from Longitudinal Data without Assuming Vertical Scaling

    ERIC Educational Resources Information Center

    Mariano, Louis T.; McCaffrey, Daniel F.; Lockwood, J. R.

    2010-01-01

    There is an increasing interest in using longitudinal measures of student achievement to estimate individual teacher effects. Current multivariate models assume each teacher has a single effect on student outcomes that persists undiminished to all future test administrations (complete persistence [CP]) or can diminish with time but remains…

  19. 25 CFR 170.610 - What IRR Program functions may a tribe assume under ISDEAA?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false What IRR Program functions may a tribe assume under ISDEAA? 170.610 Section 170.610 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAND AND WATER INDIAN RESERVATION ROADS PROGRAM Service Delivery for Indian Reservation Roads Contracts...

  20. How Public High School Students Assume Cooperative Roles to Develop Their EFL Speaking Skills

    ERIC Educational Resources Information Center

    Parra Espinel, Julie Natalie; Fonseca Canaría, Diana Carolina

    2010-01-01

    This study describes an investigation we carried out in order to identify how the specific roles that 7th grade public school students assumed when they worked cooperatively were related to their development of speaking skills in English. Data were gathered through interviews, field notes, students' reflections and audio recordings. The findings…

  1. Regressive logistic models for familial diseases: a formulation assuming an underlying liability model.

    PubMed Central

    Demenais, F M

    1991-01-01

    Statistical models have been developed to delineate the major-gene and non-major-gene factors accounting for the familial aggregation of complex diseases. The mixed model assumes an underlying liability to the disease, to which a major gene, a multifactorial component, and random environment contribute independently. Affection is defined by a threshold on the liability scale. The regressive logistic models assume that the logarithm of the odds of being affected is a linear function of major genotype, phenotypes of antecedents and other covariates. An equivalence between these two approaches cannot be derived analytically. I propose a formulation of the regressive logistic models on the supposition of an underlying liability model of disease. Relatives are assumed to have correlated liabilities to the disease; affected persons have liabilities exceeding an estimable threshold. Under the assumption that the correlation structure of the relatives' liabilities follows a regressive model, the regression coefficients on antecedents are expressed in terms of the relevant familial correlations. A parsimonious parameterization is a consequence of the assumed liability model, and a one-to-one correspondence with the parameters of the mixed model can be established. The logits, derived under the class A regressive model and under the class D regressive model, can be extended to include a large variety of patterns of family dependence, as well as gene-environment interactions. PMID:1897524

  2. The Impact of Assumed Knowledge Entry Standards on Undergraduate Mathematics Teaching in Australia

    ERIC Educational Resources Information Center

    King, Deborah; Cattlin, Joann

    2015-01-01

    Over the last two decades, many Australian universities have relaxed their selection requirements for mathematics-dependent degrees, shifting from hard prerequisites to assumed knowledge standards which provide students with an indication of the prior learning that is expected. This has been regarded by some as a positive move, since students who…

  3. Applied Routh approximation

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1978-01-01

    The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state variable model of the F100 engine and to a 43rd-order transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived, and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.

  4. Bone marrow mesenchymal stem cells can differentiate and assume corneal keratocyte phenotype

    PubMed Central

    Liu, Hongshan; Zhang, Jianhua; Liu, Chia-Yang; Hayashi, Yasuhito; Kao, Winston W-Y

    2012-01-01

    It remains elusive as to which bone marrow (BM) cell types infiltrate into injured and/or diseased tissues and subsequently differentiate to assume the phenotype of residential cells, for example, neurons, cardiac myocytes, keratocytes, etc., to repair damaged tissue. Here, we examined whether BM cells invading uninjured and injured corneas via the circulation could assume a keratocyte phenotype, using chimeric mice generated by transplantation of enhanced green fluorescent protein (EGFP)+ BM cells into keratocan null (Kera−/−) and lumican null (Lum−/−) mice. EGFP+ BM cells assumed dendritic cell morphology, but failed to synthesize corneal-specific keratan sulfate proteoglycans, that is, KS-lumican and KS-keratocan. In contrast, some EGFP+ BM cells introduced by intrastromal transplantation assumed keratocyte phenotypes. Furthermore, BM cells were isolated from Kera-Cre/ZEG mice, a double transgenic mouse line in which cells expressing keratocan become EGFP+ due to the synthesis of Cre driven by the keratocan promoter. Three days after corneal and conjunctival transplantations of such BM cells into Kera−/− mice, green keratocan-positive cells were found in the cornea, but not in the conjunctiva. It is worth noting that the transplanted BM cells were rejected within 4 weeks. MSC isolated from BM were then used to examine whether BM mesenchymal stem cells (BM-MSC) could assume a keratocyte phenotype. When BM-MSC were intrastromally transplanted into Kera−/− mice, they survived in the cornea without any immune or inflammatory response and expressed keratocan. These observations suggest that corneal intrastromal transplantation of BM-MSC may be an effective treatment regimen for corneal diseases involving dysfunction of keratocytes. PMID:21883890

  5. Topics in Metric Approximation

    NASA Astrophysics Data System (ADS)

    Leeb, William Edward

    This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.

  6. Common cold

    MedlinePlus

    ... this page: //medlineplus.gov/ency/article/000678.htm Common cold To use the sharing features on this page, please enable JavaScript. The common cold most often causes a runny nose, nasal congestion, ...

  7. Approximate option pricing

    SciTech Connect

    Chalasani, P.; Saias, I.; Jha, S.

    1996-04-08

    As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximation algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
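
    The binomial pricing model the abstract refers to is easy to state in code. The Python sketch below prices an n-period American put by backward induction on a Cox-Ross-Rubinstein lattice; it is a textbook illustration of the model, not the authors' approximation algorithm, and the parameter values are arbitrary.

        import math

        # n-period binomial (CRR) valuation of an American put: the stock moves
        # up or down by factors u and d each period; the option value at a node
        # is the larger of immediate exercise and the discounted risk-neutral
        # continuation value.
        def american_put_binomial(S0, K, r, sigma, T, n):
            dt = T / n
            u = math.exp(sigma * math.sqrt(dt))
            d = 1.0 / u
            q = (math.exp(r * dt) - d) / (u - d)   # risk-neutral up probability
            disc = math.exp(-r * dt)
            # payoffs at maturity, node j = number of up moves
            values = [max(K - S0 * u**j * d**(n - j), 0.0) for j in range(n + 1)]
            # backward induction with an early-exercise check at every node
            for step in range(n - 1, -1, -1):
                values = [max(K - S0 * u**j * d**(step - j),
                              disc * (q * values[j + 1] + (1 - q) * values[j]))
                          for j in range(step + 1)]
            return values[0]

        print(american_put_binomial(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                                    T=1.0, n=500))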

  8. Approximate Qualitative Temporal Reasoning

    DTIC Science & Technology

    2001-01-01

    ... i.e., their boundaries can be placed in such a way that they coincide with the cell boundaries of the appropriate partition of the time-line ... with respect to some appropriate partition of the time-line. For example, I felt well on Saturday. When I measured my temperature I had a fever on Monday and on ...

  9. Eight-moment approximation solar wind models

    NASA Technical Reports Server (NTRS)

    Olsen, Espen Lyngdal; Leer, Egil

    1995-01-01

    Heat conduction from the corona is important in the solar wind energy budget. Until now all hydrodynamic solar wind models have been using the collisionally dominated gas approximation for the heat conductive flux. Observations of the solar wind show particle distribution functions which deviate significantly from a Maxwellian, and it is clear that the solar wind plasma is far from collisionally dominated. We have developed a numerical model for the solar wind which solves the full equation for the heat conductive flux together with the conservation equations for mass, momentum, and energy. The equations are obtained by taking moments of the Boltzmann equation, using an 8-moment approximation for the distribution function. For low-density solar winds the 8-moment approximation models give results which differ significantly from the results obtained in models assuming the gas to be collisionally dominated. The two models give more or less the same results in high density solar winds.

  10. ANS shell elements with improved transverse shear accuracy. [Assumed Natural Coordinate Strain

    NASA Technical Reports Server (NTRS)

    Jensen, Daniel D.; Park, K. C.

    1992-01-01

    A method of forming assumed natural coordinate strain (ANS) plate and shell elements is presented. The ANS method uses equilibrium based constraints and kinematic constraints to eliminate hierarchical degrees of freedom which results in lower order elements with improved stress recovery and displacement convergence. These techniques make it possible to easily implement the element into the standard finite element software structure, and a modified shape function matrix can be used to create consistent nodal loads.

  11. Catalogue of maximum crack opening stress for CC(T) specimen assuming large strain condition

    NASA Astrophysics Data System (ADS)

    Graba, Marcin

    2013-06-01

    In this paper, values of the maximum crack opening stress and its distance from the crack tip are presented for various elastic-plastic materials for the centre cracked plate in tension (CC(T) specimen). The influences of the yield strength, the work-hardening exponent and the crack length on the maximum opening stress were examined. The author also provides some comments and suggestions about FEM modelling assuming the large strain formulation.

  12. The impact of assumed knowledge entry standards on undergraduate mathematics teaching in Australia

    NASA Astrophysics Data System (ADS)

    King, Deborah; Cattlin, Joann

    2015-10-01

    Over the last two decades, many Australian universities have relaxed their selection requirements for mathematics-dependent degrees, shifting from hard prerequisites to assumed knowledge standards which provide students with an indication of the prior learning that is expected. This has been regarded by some as a positive move, since students who may be returning to study, or who are changing career paths but do not have particular prerequisite study, now have more flexible pathways. However, there is mounting evidence to indicate that there are also significant negative impacts associated with assumed knowledge approaches, with large numbers of students enrolling in degrees without the stated assumed knowledge. For students, there are negative impacts on pass rates and retention rates and limitations to pathways within particular degrees. For institutions, the necessity to offer additional mathematics subjects at a lower level than normal and more support services for under-prepared students impacts on workloads and resources. In this paper, we discuss early research from the First Year in Maths project, which begins to shed light on the realities of a system that may in fact be too flexible.

  13. Comparing nadir and limb observations of polar mesospheric clouds: The effect of the assumed particle size distribution

    NASA Astrophysics Data System (ADS)

    Bailey, Scott M.; Thomas, Gary E.; Hervig, Mark E.; Lumpe, Jerry D.; Randall, Cora E.; Carstens, Justin N.; Thurairajah, Brentha; Rusch, David W.; Russell, James M.; Gordley, Larry L.

    2015-05-01

    Nadir-viewing observations of Polar Mesospheric Clouds (PMCs) from the Cloud Imaging and Particle Size (CIPS) instrument on the Aeronomy of Ice in the Mesosphere (AIM) spacecraft are compared to Common Volume (CV), limb-viewing observations by the Solar Occultation For Ice Experiment (SOFIE), also on AIM. CIPS makes multiple observations of PMC-scattered UV sunlight from a given location at a variety of geometries and uses the variation of the radiance with scattering angle to determine a cloud albedo, particle size distribution, and Ice Water Content (IWC). SOFIE uses IR solar occultation in 16 channels (0.3-5 μm) to obtain altitude profiles of ice properties including the particle size distribution and IWC, in addition to temperature, water vapor abundance, and other environmental parameters. CIPS and SOFIE made CV observations from 2007 to 2009. In order to compare the CV observations from the two instruments, SOFIE observations are used to predict the mean PMC properties observed by CIPS. Initial agreement is poor, with SOFIE predicting particle size distributions with systematically smaller mean radii and a factor of two more albedo and IWC than observed by CIPS. We show that significantly improved agreement is obtained if the PMC ice is assumed to contain 0.5% meteoric smoke by mass, in agreement with previous studies. We show that the comparison is further improved if an adjustment is made in the CIPS data processing regarding the removal of Rayleigh-scattered sunlight below the clouds. This change has an effect on the CV PMCs, but is negligible for most of the observed clouds outside the CV. Finally, we examine the role of the assumed shape of the ice particle size distribution. Both experiments nominally assume the shape is Gaussian with a width parameter roughly half of the mean radius. We analyze modeled ice particle distributions and show that, for the column-integrated ice distribution, log-normal and exponential distributions better represent the range

  14. Clays, common

    USGS Publications Warehouse

    Virta, R.L.

    1998-01-01

    Part of a special section on the state of industrial minerals in 1997. The state of the common clay industry worldwide for 1997 is discussed. Sales of common clay in the U.S. increased from 26.2 Mt in 1996 to an estimated 26.5 Mt in 1997. The amount of common clay and shale used to produce structural clay products in 1997 was estimated at 13.8 Mt.

  15. Hierarchical Approximate Bayesian Computation

    PubMed Central

    Turner, Brandon M.; Van Zandt, Trisha

    2013-01-01

    Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
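
    The basic ABC recipe underlying these algorithms (plain rejection ABC, not the Gibbs ABC algorithm itself) fits in a few lines: draw parameters from the prior, simulate data, and keep the draws whose simulated summary statistic lands close to the observed one. The prior range, tolerance, and summary choice below are illustrative assumptions.

        import numpy as np

        # Rejection ABC for the mean of a Gaussian with known unit variance:
        # draw theta from the prior, simulate a dataset, accept theta whenever
        # the simulated sample mean lands within tol of the observed one.
        rng = np.random.default_rng(0)
        observed = rng.normal(loc=1.5, scale=1.0, size=50)
        obs_mean = observed.mean()

        n_draws, tol = 100_000, 0.05
        theta = rng.uniform(-5.0, 5.0, n_draws)          # prior draws
        sim_means = rng.normal(theta[:, None], 1.0, (n_draws, 50)).mean(axis=1)
        accepted = theta[np.abs(sim_means - obs_mean) < tol]
        print(accepted.mean(), accepted.std())           # approx. posterior mean/sd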

  16. Systematic approach for simultaneously correcting the band-gap and p-d separation errors of common cation III-V or II-VI binaries in density functional theory calculations within a local density approximation

    SciTech Connect

    Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang

    2015-07-31

    We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds (Ga or In)X with X = N, P, As, Sb, and II-VI compounds (Zn or Cd)X with X = O, S, Se, Te. By correcting (1) the binary band gaps at the high-symmetry points Gamma, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) the conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW-level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.

  17. Does the rapid appearance of life on Earth suggest that life is common in the universe?

    PubMed

    Lineweaver, Charles H; Davis, Tamara M

    2002-01-01

    It is sometimes assumed that the rapidity of biogenesis on Earth suggests that life is common in the Universe. Here we critically examine the assumptions inherent in this if-life-evolved-rapidly-life-must-be-common argument. We use the observational constraints on the rapidity of biogenesis on Earth to infer the probability of biogenesis on terrestrial planets with the same unknown probability of biogenesis as the Earth. We find that on such planets older than approximately 1 Gyr, the probability of biogenesis is > 13% at the 95% confidence level. This quantifies an important term in the Drake Equation but does not necessarily mean that life is common in the Universe.

  18. Traction free finite elements with the assumed stress hybrid model. M.S. Thesis, 1981

    NASA Technical Reports Server (NTRS)

    Kafie, Kurosh

    1991-01-01

    An effective approach to the finite element analysis of the stress field at the traction-free boundary of a solid continuum was studied. Conventional displacement and assumed stress finite elements were used in the determination of stress concentrations around circular and elliptical holes. Specialized hybrid elements were then developed to improve the satisfaction of prescribed traction boundary conditions. Results of the stress analysis indicated that finite elements which exactly satisfy the traction-free boundary conditions are the most accurate and efficient in such problems. A general approach for hybrid finite elements which incorporate traction-free boundaries of arbitrary geometry was formulated.

  19. A variational justification of the assumed natural strain formulation of finite elements

    NASA Technical Reports Server (NTRS)

    Militello, Carmelo; Felippa, Carlos A.

    1991-01-01

    The objective is to study the assumed natural strain (ANS) formulation of finite elements from a variational standpoint. The study is based on two hybrid extensions of the Reissner-type functional that uses strains and displacements as independent fields. One of the forms is a genuine variational principle that contains an independent boundary traction field, whereas the other one represents a restricted variational principle. Two procedures for element level elimination of the strain field are discussed, and one of them is shown to be equivalent to the inclusion of incompatible displacement modes. Also, the 4-node C(exp 0) plate bending quadrilateral element is used to illustrate applications of this theory.

  20. Comparison of symbolic and numerical integration methods for an assumed-stress hybrid shell element

    NASA Technical Reports Server (NTRS)

    Rengarajan, Govind; Knight, Norman F., Jr.; Aminpour, Mohammad A.

    1993-01-01

    Hybrid shell elements have long been regarded with reserve by commercial finite element developers despite the high degree of reliability and accuracy associated with such formulations. The fundamental reason is the inherent higher computational cost of the hybrid approach as compared to displacement-based formulations. However, a noteworthy factor in favor of hybrid elements is that the numerical integration used to generate element matrices can be entirely avoided by the use of symbolic integration. In this paper, the use of the symbolic computational approach is presented for an assumed-stress hybrid shell element with drilling degrees of freedom, and the significant time savings achieved are demonstrated through an example.
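
    The trade-off the abstract describes can be illustrated on a far simpler element than a hybrid shell. The Python/SymPy sketch below builds the stiffness matrix of a 2-node bar element once symbolically and once by Gauss quadrature; the element and all values are illustrative assumptions, not the paper's shell formulation.

        import numpy as np
        import sympy as sp

        # Stiffness matrix of a 2-node bar (stiffness EA, length L), generated
        # once symbolically versus Gauss quadrature repeated per element.
        xi, EA, L = sp.symbols("xi EA L", positive=True)
        N = sp.Matrix([[(1 - xi) / 2, (1 + xi) / 2]])   # shape functions on [-1, 1]
        B = N.diff(xi) * (2 / L)                        # strain-displacement row
        integrand = (B.T * EA * B) * (L / 2)            # Jacobian dx/dxi = L/2
        K_sym = integrand.applyfunc(lambda e: sp.integrate(e, (xi, -1, 1)))
        print(K_sym)        # EA/L * [[1, -1], [-1, 1]], no quadrature needed

        # Same matrix by one-point Gauss quadrature (exact for this element).
        EA_n, L_n = 1.0, 2.0
        Bn = np.array([[-1.0 / L_n, 1.0 / L_n]])
        K_num = 2.0 * (Bn.T * EA_n) @ Bn * (L_n / 2.0)  # weight 2 at xi = 0
        print(K_num)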

  1. Assumed strain distributions for a finite strip plate bending element using Mindlin-Reissner plate theory

    NASA Technical Reports Server (NTRS)

    Chulya, Abhisak; Mullen, Robert L.

    1989-01-01

    A linear finite strip plate element based on Mindlin-Reissner plate theory is developed. The analysis is suitable for both thin and thick plates. In the formulation, new transverse shear strains are introduced and assumed constant in each two-node linear strip. The element stiffness matrix is explicitly formulated for efficient computation and computer implementation. Numerical results showing the efficiency and predictive capability of the element for the analysis of plates are presented for different support and loading conditions and a wide range of thicknesses. No sign of shear locking is observed with the newly developed element.

  2. Student Commons

    ERIC Educational Resources Information Center

    Gordon, Douglas

    2010-01-01

    Student commons are no longer simply congregation spaces for students with time on their hands. They are integral to providing a welcoming environment and effective learning space for students. Many student commons have been transformed into spaces for socialization, an environment for alternative teaching methods, a forum for large group meetings…

  3. Countably QC-Approximating Posets

    PubMed Central

    Mao, Xuxin; Xu, Luoshan

    2014-01-01

    As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σc(L)op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730

  4. An assumed-stress hybrid 4-node shell element with drilling degrees of freedom

    NASA Technical Reports Server (NTRS)

    Aminpour, M. A.

    1992-01-01

    An assumed-stress hybrid/mixed 4-node quadrilateral shell element is introduced that alleviates most of the deficiencies associated with such elements. The formulation of the element is based on the assumed-stress hybrid/mixed method using the Hellinger-Reissner variational principle. The membrane part of the element has 12 degrees of freedom including rotational or 'drilling' degrees of freedom at the nodes. The bending part of the element also has 12 degrees of freedom. The bending part of the element uses the Reissner-Mindlin plate theory which takes into account the transverse shear contributions. The element formulation is derived from an 8-node isoparametric element by expressing the midside displacement degrees of freedom in terms of displacement and rotational degrees of freedom at corner nodes. The element passes the patch test, is nearly insensitive to mesh distortion, does not 'lock', possesses the desirable invariance properties, has no hidden spurious modes, and for the majority of test cases used in this paper produces more accurate results than the other elements employed herein for comparison.

  5. Children's Everyday Learning by Assuming Responsibility for Others: Indigenous Practices as a Cultural Heritage Across Generations.

    PubMed

    Fernández, David Lorente

    2015-01-01

    This chapter uses a comparative approach to examine the maintenance of Indigenous practices related with Learning by Observing and Pitching In in two generations--parent generation and current child generation--in a Central Mexican Nahua community. In spite of cultural changes and the increase of Western schooling experience, these practices persist, to different degrees, as a Nahua cultural heritage with close historical relations to the key value of cuidado (stewardship). The chapter explores how children learn the value of cuidado in a variety of everyday activities, which include assuming responsibility in many social situations, primarily in cultivating corn, raising and protecting domestic animals, health practices, and participating in family ceremonial life. The chapter focuses on three main points: (1) Cuidado (assuming responsibility for), in the Nahua socio-cultural context, refers to the concepts of protection and "raising" as well as fostering other beings, whether humans, plants, or animals, to reach their potential and fulfill their development. (2) Children learn cuidado by contributing to family endeavors: They develop attention and self-motivation; they are capable of responsible actions; and they are able to transform participation to achieve the status of a competent member of local society. (3) This collaborative participation allows children to continue the cultural tradition and to preserve a Nahua heritage at a deeper level in a community in which Nahuatl language and dress have disappeared, and people do not identify themselves as Indigenous.

  6. Aseismic Slips Preceding Ruptures Assumed for Anomalous Seismicities and Crustal Deformations

    NASA Astrophysics Data System (ADS)

    Ogata, Y.

    2007-12-01

    If aseismic slip occurs on a fault or its deeper extension, both seismicity and geodetic records around the source should be affected. Such anomalies are revealed to have occurred during the last several years leading up to the October 2004 Chuetsu Earthquake of M6.8, the March 2007 Noto Peninsula Earthquake of M6.9, and the July 2007 Chuetsu-Oki Earthquake of M6.8, which occurred successively in the near field of central Japan. Seismic zones of negative and positive increments of the Coulomb failure stress, assuming such slips, show seismic quiescence and activation, respectively, relative to the rate predicted by the ETAS model. These findings are further supported by transient crustal movement around the source preceding the rupture. Namely, time series of the baseline distance records between a number of the permanent GPS stations deviated from the predicted trend, with a trend of different slope that is basically consistent with the horizontal displacements of the stations due to the assumed slips. References: Ogata, Y. (2007), Seismicity and geodetic anomalies in a wide area preceding the Niigata-Ken-Chuetsu Earthquake of October 23, 2004, central Japan, J. Geophys. Res. 112, in press.

  7. Perceiving others' personalities: examining the dimensionality, assumed similarity to the self, and stability of perceiver effects.

    PubMed

    Srivastava, Sanjay; Guglielmo, Steve; Beer, Jennifer S

    2010-03-01

    In interpersonal perception, "perceiver effects" are tendencies of perceivers to see other people in a particular way. Two studies of naturalistic interactions examined perceiver effects for personality traits: seeing a typical other as sympathetic or quarrelsome, responsible or careless, and so forth. Several basic questions were addressed. First, are perceiver effects organized as a global evaluative halo, or do perceptions of different traits vary in distinct ways? Second, does assumed similarity (as evidenced by self-perceiver correlations) reflect broad evaluative consistency or trait-specific content? Third, are perceiver effects a manifestation of stable beliefs about the generalized other, or do they form in specific contexts as group-specific stereotypes? Findings indicated that perceiver effects were better described by a differentiated, multidimensional structure with both trait-specific content and a higher order global evaluation factor. Assumed similarity was at least partially attributable to trait-specific content, not just to broad evaluative similarity between self and others. Perceiver effects were correlated with gender and attachment style, but in newly formed groups, they became more stable over time, suggesting that they grew dynamically as group stereotypes. Implications for the interpretation of perceiver effects and for research on personality assessment and psychopathology are discussed.

  8. Effects of assumed tow architecture on the predicted moduli and stresses in woven composites

    NASA Technical Reports Server (NTRS)

    Chapman, Clinton Dane

    1994-01-01

    This study deals with the effect of assumed tow architecture on the elastic material properties and stress distributions of plain weave woven composites. Specifically, how a cross-section is assumed to sweep out the tows of the composite is examined in great detail. The two methods studied are extrusion and translation. This effect is also examined to determine how sensitive this assumption is to changes in waviness ratio. 3D finite elements were used to study a T300/Epoxy plain weave composite with symmetrically stacked mats. 1/32nd of the unit cell is shown to be adequate for analysis of this type of configuration with the appropriate set of boundary conditions. At low waviness, results indicate that for prediction of elastic properties, either method is adequate. At high waviness, certain elastic properties become more sensitive to the method used. Stress distributions at high waviness ratio are shown to vary greatly depending on the type of loading applied. At low waviness, both methods produce similar results.

  9. The effect of Fisher information matrix approximation methods in population optimal design calculations.

    PubMed

    Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C

    2016-12-01

    With the increasing popularity of optimal design in drug development, it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs, based on simulations and estimations, was investigated by computing the bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-diagonal FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
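
    The role of the FIM in ranking designs can be shown with a much-simplified sketch: a fixed-effects exponential model with no random effects (so neither the FO/FOCE population approximations nor the block-diagonal question arises), in which two candidate sampling schedules are compared by D-optimality, i.e. by det(FIM). The model, parameters, and schedules are invented for illustration.

        import numpy as np

        # Fixed-effects model y = a*exp(-b*t) + eps with additive Gaussian
        # error, so FIM = J^T J / sigma^2 where J holds the sensitivities.
        # A spread-out schedule identifies (a, b) better than a clustered one.
        a, b, sigma2 = 10.0, 0.8, 0.25

        def fim(times):
            # Rows of J: [dy/da, dy/db] evaluated at each sampling time.
            J = np.array([[np.exp(-b * t), -a * t * np.exp(-b * t)]
                          for t in times])
            return J.T @ J / sigma2

        for name, design in [("clustered", [0.1, 0.2, 0.3, 0.4]),
                             ("spread", [0.25, 1.0, 2.5, 5.0])]:
            print(name, "log det FIM:", np.log(np.linalg.det(fim(design))))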

  10. Bayesian designs of phase II oncology trials to select maximum effective dose assuming monotonic dose-response relationship

    PubMed Central

    2014-01-01

    Background For many molecularly targeted agents, the probability of response may be assumed to either increase or increase and then plateau in the tested dose range. Therefore, identifying the maximum effective dose, defined as the lowest dose that achieves a pre-specified target response and beyond which improvement in the response is unlikely, becomes increasingly important. Recently, a class of Bayesian designs for single-arm phase II clinical trials based on hypothesis tests and nonlocal alternative prior densities has been proposed and shown to outperform common Bayesian designs based on posterior credible intervals and common frequentist designs. We extend this and related approaches to the design of phase II oncology trials, with the goal of identifying the maximum effective dose among a small number of pre-specified doses. Methods We propose two new Bayesian designs with continuous monitoring of response rates across doses to identify the maximum effective dose, assuming monotonicity of the response rate across doses. The first design is based on Bayesian hypothesis tests. To determine whether each dose level achieves a pre-specified target response rate and whether the response rates between doses are equal, multiple statistical hypotheses are defined using nonlocal alternative prior densities. The second design is based on Bayesian model averaging and also uses nonlocal alternative priors. We conduct simulation studies to evaluate the operating characteristics of the proposed designs, and compare them with three alternative designs. Results In terms of the likelihood of drawing a correct conclusion using similar between-design average sample sizes, the performance of our proposed design based on Bayesian hypothesis tests and nonlocal alternative priors is more robust than that of the other designs. Specifically, the proposed Bayesian hypothesis test-based design has the largest probability of being the best design among all designs under comparison and

  11. Shear viscosity in the postquasistatic approximation

    SciTech Connect

    Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.

    2010-05-15

    We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.

  12. 42 CFR 137.292 - How do Self-Governance Tribes assume environmental responsibilities for construction projects...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false How do Self-Governance Tribes assume environmental...-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the Act ? Self-Governance Tribes assume environmental responsibilities by: (a) Adopting a resolution...

  13. QCI Common

    SciTech Connect

    McCaskey, Alexander J.

    2016-11-18

    There are many common software patterns and utilities for the ORNL Quantum Computing Institute that can and should be shared across projects; otherwise we find duplication of code, which adds unwanted complexity. This software product seeks to alleviate this by providing common utilities such as object factories, graph data structures, parameter input mechanisms, etc., for other software products within the ORNL Quantum Computing Institute. This work enables pure basic research, has no export-controlled utilities, and has no real commercial value.

  14. Analysis of an object assumed to contain “Red Mercury”

    NASA Astrophysics Data System (ADS)

    Obhođaš, Jasmina; Sudac, Davorin; Blagus, Saša; Valković, Vladivoj

    2007-08-01

    After having been informed about an attempt at illicit trafficking, the Organized Crime Division of the Zagreb Police Authority confiscated in November 2003 a hand-sized metal cylinder suspected to contain "Red Mercury" (RM). The sample assumed to contain RM was analyzed with two nondestructive analytical methods, namely activation analysis with 14.1 MeV neutrons and EDXRF analysis, in order to obtain information about the nature of the investigated object. The activation analysis with 14.1 MeV neutrons showed that the container and its contents were characterized by the following chemical elements: Hg, Fe, Cr and Ni. By using EDXRF analysis, it was shown that the elements Fe, Cr and Ni were constituents of the capsule. Therefore, it was concluded that these three elements were present in the capsule only, while the content of the unknown material was Hg. Antimony, as a hypothetical component of red mercury, was not detected.

  15. Distance fields on unstructured grids: Stable interpolation, assumed gradients, collision detection and gap function.

    PubMed

    Wolff, Sebastian; Bucher, Christian

    2013-06-01

    This article presents a novel approach to collision detection based on distance fields. A novel interpolation ensures stability of the distances in the vicinity of complex geometries. An assumed gradient formulation is introduced, leading to a C1-continuous distance function. The gap function is re-expressed, allowing penalty and Lagrange multiplier formulations. The article introduces a node-to-element integration for first-order elements, but also discusses signed distances, partial updates, intermediate surfaces, mortar methods and higher-order elements. The algorithm is fast, simple and robust for complex geometries and self-contact. The computed tractions conserve linear and angular momentum even in infeasible contact. Numerical examples illustrate the new algorithm in three dimensions.
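
    The basic machinery of distance-field collision detection can be sketched briefly: sample a signed distance function on a grid, interpolate it at a query point to obtain the gap, and take the gradient as the contact normal. The sketch below uses plain bilinear interpolation, not the paper's stabilized interpolation or assumed-gradient formulation, and the obstacle, grid resolution, and helper query are invented.

        import numpy as np

        # Signed distance field on a grid: negative inside the obstacle, the
        # gradient gives the contact normal, the interpolated value is the gap.
        n = 64
        h = 2.0 / (n - 1)
        ax = np.linspace(-1.0, 1.0, n)
        X, Y = np.meshgrid(ax, ax, indexing="ij")
        sdf = np.sqrt(X**2 + Y**2) - 0.5                 # circle of radius 0.5

        def query(p):
            # Bilinear interpolation plus a one-sided gradient estimate.
            i = min(int((p[0] + 1.0) / h), n - 2)
            j = min(int((p[1] + 1.0) / h), n - 2)
            fx = (p[0] + 1.0) / h - i
            fy = (p[1] + 1.0) / h - j
            d = (sdf[i, j] * (1 - fx) * (1 - fy) + sdf[i + 1, j] * fx * (1 - fy)
                 + sdf[i, j + 1] * (1 - fx) * fy + sdf[i + 1, j + 1] * fx * fy)
            g = np.array([(sdf[i + 1, j] - sdf[i, j]) / h,
                          (sdf[i, j + 1] - sdf[i, j]) / h])
            return d, g / (np.linalg.norm(g) + 1e-12)

        gap, normal = query(np.array([0.3, 0.2]))
        print("gap:", gap, "normal:", normal)            # gap < 0: penetration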

  16. Analysis of a photonic nanojet assuming a focused incident beam instead of a plane wave

    NASA Astrophysics Data System (ADS)

    Dong, Aotuo; Su, Chin

    2014-12-01

    The analysis of a photonic nanojet formed by dielectric spheres almost always assumes that the incident field is a plane wave. In this work, using vector spherical harmonics representations, we analyze the case of a more realistic incident field consisting of a focused beam formed by a microscope objective. Also included is the situation in which the sphere is not at the focal plane of the focused beam. We find that the dimension of the nanojet beam waist is less sensitive to the azimuthal angle than in the plane wave case. Also, by shifting the particle away from the focal plane, the nanojet beam waist can be positioned outside the particle, which would otherwise be inside or at the particle surface. Inherently, no such adjustment is possible with an incident plane wave assumption.

  17. The sensitivity of latent heat flux to the air humidity approximations used in ocean circulation models

    NASA Technical Reports Server (NTRS)

    Liu, W. Timothy; Niiler, Pearn P.

    1990-01-01

    In deriving the surface latent heat flux with the bulk formula for the thermal forcing of some ocean circulation models, two approximations are commonly made to bypass the use of atmospheric humidity in the formula. The first assumes a constant relative humidity, and the second supposes that the sea-air humidity difference varies linearly with the saturation humidity at sea surface temperature. Using climatological fields derived from the Marine Deck and long time series from ocean weather stations, the errors introduced by these two assumptions are examined. It is shown that the errors reach above 100 W/sq m over western boundary currents and 50 W/sq m over the tropical ocean. The two approximations also introduce erroneous seasonal and spatial variabilities with magnitudes over 50 percent of the observed variabilities.
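    The bulk formula and the first approximation are easy to make concrete. The sketch below (all constants and inputs are illustrative, not taken from the study) compares the latent heat flux computed from an observed air humidity against the constant-relative-humidity shortcut, in which the air humidity is replaced by a fixed fraction of the saturation humidity at the sea surface temperature.

    ```python
    import numpy as np

    def q_sat(T_C, p_hPa=1013.25):
        """Saturation specific humidity via the Tetens formula (kg/kg)."""
        e_s = 6.112 * np.exp(17.67 * T_C / (T_C + 243.5))   # vapor pressure, hPa
        return 0.622 * e_s / (p_hPa - 0.378 * e_s)

    rho_a, L_v, C_E = 1.2, 2.5e6, 1.2e-3    # air density, latent heat, transfer coeff.
    U, sst, q_air = 8.0, 25.0, 0.014        # wind speed (m/s), SST (deg C), observed q_a

    LE_obs = rho_a * L_v * C_E * U * (q_sat(sst) - q_air)
    LE_rh80 = rho_a * L_v * C_E * U * (1.0 - 0.8) * q_sat(sst)  # assumes q_a = 0.8 q_sat(SST)
    print(f"observed-humidity flux {LE_obs:.0f} W/m2 vs constant-RH flux {LE_rh80:.0f} W/m2")
    ```

    With these invented inputs the two estimates differ by roughly 50 W/sq m, the same order as the errors reported in the abstract.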

  18. Estimating option values of solar radiation management assuming that climate sensitivity is uncertain.

    PubMed

    Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa

    2016-05-24

    Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM's actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM's side effects (damages). Reflecting the governance challenges on immediate SRM deployment, we assume scenarios in which SRM could be deployed with only a limited degree of cooling (0.5 °C), only after 2050, when climate sensitivity uncertainty is assumed to be resolved, and only when the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value originates from its rapid cooling capability, which would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990-2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels was in the range of $2.5 to $5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that the lower limits of the option values for temperature targets below +2.4 °C would be greater than $2.5 trillion.

  19. Estimating option values of solar radiation management assuming that climate sensitivity is uncertain

    PubMed Central

    Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa

    2016-01-01

    Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM’s actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM’s side effects (damages). Reflecting the governance challenges on immediate SRM deployment, we assume scenarios in which SRM could be deployed with only a limited degree of cooling (0.5 °C), only after 2050, when climate sensitivity uncertainty is assumed to be resolved, and only when the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value originates from its rapid cooling capability, which would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990–2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels was in the range of $2.5 to $5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that the lower limits of the option values for temperature targets below +2.4 °C would be greater than $2.5 trillion. PMID:27162346

  20. Approximate probability distributions of the master equation.

    PubMed

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous support and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillate with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.

  1. Approximate probability distributions of the master equation

    NASA Astrophysics Data System (ADS)

    Thomas, Philipp; Grima, Ramon

    2015-07-01

    Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous support and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillate with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.
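    The qualitative point about truncated continuous expansions can be reproduced in a few lines. The sketch below uses a Gram-Charlier series (a Gaussian times Hermite-polynomial corrections) as a stand-in for the paper's continuous orthogonal-polynomial construction and applies it to a Poisson distribution; even the first skewness correction dips below zero in the tail, while the exact discrete distribution is nonnegative by definition.

    ```python
    import numpy as np
    from scipy.stats import poisson

    lam = 4.0                          # mean of an illustrative Poisson distribution
    mu, sigma = lam, np.sqrt(lam)
    gamma1 = 1.0 / np.sqrt(lam)        # Poisson skewness

    x = np.linspace(-4.0, 16.0, 401)   # the continuous series lives on all of R
    z = (x - mu) / sigma
    phi = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)

    # Truncated continuous series: Gaussian leading order plus a skewness
    # correction proportional to the Hermite polynomial He3(z) = z^3 - 3z.
    gc = (phi / sigma) * (1.0 + gamma1 / 6.0 * (z**3 - 3.0 * z))

    print("min of truncated continuous series:", gc.min())   # negative in the tail
    print("exact pmf stays nonnegative:", poisson.pmf(np.arange(17), lam).min() >= 0)
    ```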

  2. Radial diffusion in Saturn's radiation belts - A modeling analysis assuming satellite and ring E absorption

    NASA Technical Reports Server (NTRS)

    Hood, L. L.

    1983-01-01

    A modeling analysis is carried out of six experimental phase space density profiles for nearly equatorially mirroring protons using methods based on the approach of Thomsen et al. (1977). The form of the time-averaged radial diffusion coefficient D(L) that gives an optimal fit to the experimental profiles is determined under the assumption that simple satellite plus Ring E absorption of inwardly diffusing particles and steady-state radial diffusion are the dominant physical processes affecting the proton data in the L range that is modeled. An extension of the single-satellite model employed by Thomsen et al. to a model that includes multisatellite and ring absorption is described, and the procedures adopted for estimating characteristic satellite and ring absorption times are defined. The results obtained in applying three representative solid-body absorption models to evaluate D(L) in the range where L is between 4 and 16 are reported, and a study is made of the sensitivity of the preferred amplitude and L dependence for D(L) to the assumed model parameters. The inferred form of D(L) is then compared with that which would be predicted if various proposed physical mechanisms for driving magnetospheric radial diffusion are operative at Saturn.

  3. Defining modeling parameters for juniper trees assuming Pleistocene-like conditions at the NTS

    SciTech Connect

    Tarbox, S.R.; Cochran, J.R.

    1994-12-31

    This paper addresses part of Sandia National Laboratories' (SNL) efforts to assess the long-term performance of the Greater Confinement Disposal (GCD) facility located on the Nevada Test Site (NTS). At issue is whether the GCD site complies with 40 CFR 191 standards set for transuranic (TRU) waste burial. SNL has developed a radionuclide transport model which can be used to assess TRU radionuclide movement away from the GCD facility. An earlier iteration of the model found that radionuclide uptake and release by plants is an important aspect of the system to consider. Currently, the shallow-rooted plants at the NTS do not pose a threat to the integrity of the GCD facility. However, the threat increases substantially if deeper-rooted woodland species migrate to the GCD facility, given a shift to a wetter climate. The model parameters discussed here will be included in the next model iteration, which assumes a climate shift will provide for the growth of juniper trees at the GCD facility. Model parameters were developed using published data; wherever possible, data were taken from juniper and pinyon-juniper studies that mirrored as many aspects of the GCD facility as possible.

  4. Assumed-stress hybrid elements with drilling degrees of freedom for nonlinear analysis of composite structures

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr. (Principal Investigator)

    1996-01-01

    The goal of this research project is to develop assumed-stress hybrid elements with rotational degrees of freedom for analyzing composite structures. During the first year of the three-year activity, the effort was directed to further assess the AQ4 shell element and its extensions to buckling and free vibration problems. In addition, the development of a compatible 2-node beam element was to be accomplished. The extensions and new developments were implemented in the Computational Structural Mechanics Testbed COMET. An assessment was performed to verify the implementation and to assess the performance of these elements in terms of accuracy. During the second and third years, extensions to geometrically nonlinear problems were developed and tested. This effort involved working with the nonlinear solution strategy as well as the nonlinear formulation for the elements. This research has resulted in the development and implementation of two additional element processors (ES22 for the beam element and ES24 for the shell elements) in COMET. The software was developed using a SUN workstation and has been ported to the NASA Langley Convex named blackbird. Both element processors are now part of the baseline version of COMET.

  5. Is the perception of 3D shape from shading based on assumed reflectance and illumination?

    PubMed Central

    Todd, James T.; Egan, Eric J. L.; Phillips, Flip

    2014-01-01

    The research described in the present article was designed to compare three types of image shading: one generated with a Lambertian BRDF and homogeneous illumination such that image intensity was determined entirely by local surface orientation irrespective of position; one that was textured with a linear intensity gradient, such that image intensity was determined entirely by local surface position irrespective of orientation; and another that was generated with a Lambertian BRDF and inhomogeneous illumination such that image intensity was influenced by both position and orientation. A gauge figure adjustment task was used to measure observers' perceptions of local surface orientation on the depicted surfaces, and the probe points included 60 pairs of regions that both had the same orientation. The results show clearly that observers' perceptions of these three types of stimuli were remarkably similar, and that probe regions with similar apparent orientations could have large differences in image intensity. This latter finding is incompatible with any process for computing shape from shading that assumes any plausible reflectance function combined with any possible homogeneous illumination. PMID:26034561

  6. Wetware, Hardware, or Software Incapacitation: Observational Methods to Determine When Autonomy Should Assume Control

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Gregory, Irene M.

    2014-01-01

    Control-theoretic modeling of a human operator's dynamic behavior in manual control tasks has a long, rich history. There has been significant work on techniques used to identify the pilot model of a given structure. This research attempts to go beyond pilot identification based on experimental data to develop a predictor of pilot behavior. Two methods for predicting pilot stick input during changing aircraft dynamics and deducing changes in pilot behavior are presented. This approach may also have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot. With this ability to detect changes in piloting behavior, the possibility now exists to mediate human adverse behaviors, hardware failures, and software anomalies with autonomy that may ameliorate these undesirable effects. However, the appropriate timing of when autonomy should assume control depends on the criticality of actions to safety, the sensitivity of methods to accurately detect these adverse changes, and the effects of changes in levels of automation of the system as a whole.

  7. Epidemiology of child pedestrian casualty rates: can we assume spatial independence?

    PubMed

    Hewson, Paul J

    2005-07-01

    Child pedestrian injuries are often investigated by means of ecological studies, yet are clearly part of a complex spatial phenomenon. Spatial dependence within such ecological analyses has rarely been assessed, yet the validity of basic statistical techniques relies on a number of independence assumptions. Recent work from Canada has highlighted the potential for modelling spatial dependence within data that were aggregated in terms of the number of road casualties resident in a given geographical area. Other jurisdictions aggregate data in terms of the number of casualties in the geographical area in which the collision took place. This paper contrasts child pedestrian casualty data from Devon County, UK, which have been aggregated by both methods. A simple ecological model, with minimally useful covariates relating to measures of child deprivation, provides evidence that data aggregated in terms of the casualty's home location cannot be assumed to be spatially independent, and that for an analysis of these data to be valid there must be some accounting for spatial autocorrelation within the model structure. Conversely, data aggregated in terms of the collision location (as is usual in the UK) were found to be spatially independent. Whilst the spatial model is clearly more complex, it provided a superior fit to that seen with either collision-aggregated or non-spatial models. More importantly, the ecological-level association between deprivation and casualty rate is much lower once the spatial structure is accounted for, highlighting the importance of using appropriately structured models.
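    As a concrete illustration of the kind of diagnostic involved, the following sketch computes Moran's I, a standard statistic for spatial autocorrelation in area-aggregated data. The rates and the adjacency structure are invented toy values, not the Devon data.

    ```python
    import numpy as np

    # Toy data: casualty rates for a 1-D chain of six areas, binary adjacency weights.
    rates = np.array([2.0, 3.0, 2.5, 8.0, 7.5, 9.0])
    n = len(rates)
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0   # neighbouring areas share a border

    dev = rates - rates.mean()
    moran_I = (n / W.sum()) * (dev @ W @ dev) / (dev @ dev)
    print(f"Moran's I = {moran_I:.3f}")
    # Values well above the null expectation E[I] = -1/(n-1) indicate positive
    # spatial autocorrelation, i.e. the independence assumption is suspect.
    ```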

  8. Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Gurfinkel, Arie

    2010-01-01

    We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.

  9. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  10. DALI: Derivative Approximation for LIkelihoods

    NASA Astrophysics Data System (ADS)

    Sellentin, Elena

    2015-07-01

    DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.

  11. Follow and Assume: The Operational Reserve in Security, Stability, Reconstruction, and Transition Operations

    DTIC Science & Technology

    2007-06-15

    …unwritten contract which sets the context for the carrot or the stick. The stick does not have to mean killings or burning of housing and food stores… the commonality between SSRT and COIN, and the lack of planned capability for SSRT and COIN skill sets in either the Active or Reserve component.

  12. Taylor Approximations and Definite Integrals

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2007-01-01

    We investigate the possibility of approximating the value of a definite integral by approximating the integrand rather than using numerical methods to approximate the value of the definite integral. Particular cases considered include examples where the integral is improper, such as an elliptic integral. (Contains 4 tables and 2 figures.)
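    The idea translates directly into code. The sketch below (the integrand is a simple example chosen here, not one of the article's) integrates the Taylor polynomial of exp(-x^2) term by term and compares the result with numerical quadrature.

    ```python
    from math import factorial

    import numpy as np
    from scipy.integrate import quad

    # Approximate I = integral_0^1 exp(-x^2) dx by integrating the Taylor
    # polynomial of the integrand term by term: integral_0^1 x^(2k) dx = 1/(2k+1).
    def taylor_integral(n_terms):
        return sum((-1)**k / (factorial(k) * (2 * k + 1)) for k in range(n_terms))

    exact, _ = quad(lambda x: np.exp(-x**2), 0.0, 1.0)
    for n in (2, 4, 8):
        print(n, taylor_integral(n), abs(taylor_integral(n) - exact))
    ```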

  13. The Effects on Tsunami Hazard Assessment in Chile of Assuming Earthquake Scenarios with Spatially Uniform Slip

    NASA Astrophysics Data System (ADS)

    Carvajal, Matías; Gubler, Alejandra

    2016-12-01

    We investigated the effect that along-dip slip distribution has on the near-shore tsunami amplitudes and on coastal land-level changes in the region of central Chile (29°-37°S). Here and all along the Chilean megathrust, the seismogenic zone extends beneath dry land, and thus, tsunami generation and propagation is limited to its seaward portion, where the sensitivity of the initial tsunami waveform to dislocation model inputs, such as slip distribution, is greater. We considered four distributions of earthquake slip in the dip direction, including a spatially uniform slip source and three others with typical bell-shaped slip patterns that differ in the depth range of slip concentration. We found that a uniform slip scenario predicts much lower tsunami amplitudes and generally less coastal subsidence than scenarios that assume bell-shaped distributions of slip. Although the finding that uniform slip scenarios underestimate tsunami amplitudes is not new, it has been largely ignored for tsunami hazard assessment in Chile. Our simulation results also suggest that uniform slip scenarios tend to predict later arrival times of the leading wave than bell-shaped sources. The timing of the largest wave at a specific site also depends on how the slip is distributed in the dip direction; however, other factors, such as local bathymetric configurations and standing edge waves, are also expected to play a role. Arrival time differences are especially critical in Chile, where tsunamis arrive earlier than elsewhere. We believe that the results of this study will be useful to both public and private organizations for mapping tsunami hazard in coastal areas along the Chilean coast, and, therefore, help reduce the risk of loss and damage caused by future tsunamis.

  14. Engineering evaluation of alternatives: Managing the assumed leak from single-shell Tank 241-T-101

    SciTech Connect

    Brevick, C.H.; Jenkins, C.

    1996-02-01

    At mid-year 1992, the liquid level gage for Tank 241-T-101 indicated that 6,000 to 9,000 gal had leaked. Because of the liquid level anomaly, Tank 241-T-101 was declared an assumed leaker on October 4, 1992. SST liquid level gages have historically been unreliable. False readings can occur because of instrument failures, floating salt cake, and salt encrustation. Gages frequently self-correct and tanks show no indication of a leak. Tank levels cannot be visually inspected and verified because of high radiation fields. The gage in Tank 241-T-101 has largely corrected itself since the mid-year 1992 reading. Therefore, doubt exists that a leak has occurred, or that the magnitude of the leak poses any immediate environmental threat. While reluctance exists to use valuable DST space unnecessarily, there is a large safety and economic incentive to prevent or mitigate release of tank liquid waste into the surrounding environment. During the assessment of the significance of the Tank 241-T-101 liquid level gage readings, the Washington State Department of Ecology determined that Westinghouse Hanford Company was not in compliance with regulatory requirements, and directed transfer of the Tank 241-T-101 liquid contents into a DST. Meanwhile, DOE directed WHC to examine reasonable alternatives/options for safe interim management of Tank 241-T-101 wastes before taking action. The five alternatives that could be used to manage waste from a leaking SST are: (1) No-Action, (2) In-Tank Stabilization, (3) External Tank Stabilization, (4) Liquid Retrieval, and (5) Total Retrieval. The findings of these examinations are reported in this study.

  15. Making the Common Good Common

    ERIC Educational Resources Information Center

    Chase, Barbara

    2011-01-01

    How are independent schools to be useful to the wider world? Beyond their common commitment to educate their students for meaningful lives in service of the greater good, can they educate a broader constituency and, thus, share their resources and skills more broadly? Their answers to this question will be shaped by their independence. Any…

  16. Benthic grazers and suspension feeders: Which one assumes the energetic dominance in Königshafen?

    NASA Astrophysics Data System (ADS)

    Asmus, H.

    1994-06-01

    Size-frequency histograms of biomass, secondary production, respiration and energy flow of 4 dominant macrobenthic communities of the intertidal bay of Königshafen were analysed and compared. In the shallow sandy flats (Nereis-Corophium-belt [N.C.-belt], seagrass-bed and Arenicola-flat) a bimodal size-frequency histogram of biomass, secondary production, respiration and energy flow was found with a first peak formed by individuals within a size range of 0.10 to 0.32 mg ash free dry weight (AFDW). In this size range, the small prosobranch Hydrobia ulvae was the dominant species, showing maximal biomass as well as secondary production, respiration and energy flow in the seagrass-bed. The second peak on the size-frequency histogram was formed by the polychaete Nereis diversicolor with individual weights of 10 to 18 mg AFDW in the N.C.-belt, and by Arenicola marina with individual weights of 100 to 562 mg AFDW in both of the other sand flats. Biomass, productivity, respiration and energy flow of these polychaetes increased from the Nereis-Corophium-belt, to the seagrass-bed, and to the Arenicola-flat. Mussel beds surpassed all other communities in biomass and the functional parameters mentioned above. Size-frequency histograms of these parameters were distinctly unimodal with a maximum at an individual size of 562 to 1000 mg AFDW. This size group was dominated by adult specimens of Mytilus edulis. Averaged over the total area, the size-frequency histogram of energy flow of all intertidal flats of Königshafen showed one peak built by Hydrobia ulvae and a second one, mainly formed by M. edulis. Assuming that up to 10% of the intertidal area is covered by mussel beds, the maximum of the size-specific energy flow will be formed by Mytilus. When only 1% is covered by mussel beds, then the energy flow is dominated by H. ulvae. Both animals represent different trophic types and their dominance in energy flow has consequences for the food web and the carbon flow of the

  17. Internal Structure and Mineralogy of Differentiated Asteroids Assuming Chondritic Bulk Composition: The Case of Vesta

    NASA Technical Reports Server (NTRS)

    Toplis, M. J.; Mizzon, H.; Forni, O.; Monnereau, M.; Prettyman, T. H.; McSween, H. Y.; McCoy, T. J.; Mittlefehldt, D. W.; DeSanctis, M. C.; Raymond, C. A.; Russell, C. T.

    2012-01-01

    Bulk composition (including oxygen content) is a primary control on the internal structure and mineralogy of differentiated asteroids. For example, oxidation state will affect core size, as well as the Mg# and pyroxene content of the silicate mantle. The Howardite-Eucrite-Diogenite class of meteorites (HED) provides an interesting test case of this idea, in particular in light of results of the Dawn mission, which provide information on the size, density and differentiation state of Vesta, the parent body of the HEDs. In this work we explore plausible bulk compositions of Vesta and use mass-balance and geochemical modelling to predict possible internal structures and crust/mantle compositions and mineralogies. Models are constrained to be consistent with known HED samples, but the approach has the potential to extend predictions to thermodynamically plausible rock types that are not necessarily present in the HED collection. Nine chondritic bulk compositions are considered (CI, CV, CO, CM, H, L, LL, EH, EL). For each, relative proportions and densities of the core, mantle, and crust are quantified. Considering that the basaltic crust has the composition of the primitive eucrite Juvinas and assuming that this crust is in thermodynamic equilibrium with the residual mantle, it is possible to calculate how much iron is in metallic form (in the core) and how much in oxidized form (in the mantle and crust) for a given bulk composition. Of the nine bulk compositions tested, solutions corresponding to CI and LL groups predicted a negative metal fraction and were not considered further. Solutions for enstatite chondrites imply significant oxidation relative to the starting materials and these solutions too are considered unlikely. For the remaining bulk compositions, the relative proportion of crust to bulk silicate is typically in the range 15 to 20%, corresponding to crustal thicknesses of 15 to 20 km for a porosity-free Vesta-sized body. The mantle is predicted to be largely

  18. 42 CFR 137.291 - May Self-Governance Tribes carry out construction projects without assuming these Federal...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...-Governance Tribes carry out construction projects without assuming these Federal environmental... 42 Public Health 1 2010-10-01 2010-10-01 false May Self-Governance Tribes carry out construction projects without assuming these Federal environmental responsibilities? 137.291 Section 137.291...

  19. 12 CFR Appendix L to Part 1026 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 1026 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION TRUTH IN LENDING (REGULATION Z) Pt. 1026, App. L Appendix L to Part 1026—Assumed Loan Periods...

  20. 12 CFR Appendix L to Part 1026 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 1026 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION TRUTH IN LENDING (REGULATION Z) Pt. 1026, App. L Appendix L to Part 1026—Assumed Loan Periods...

  1. 12 CFR Appendix L to Part 226 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates...

  2. 12 CFR Appendix L to Part 226 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates...

  3. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... General Payment Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed...

  4. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... General Payment Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed...

  5. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed by Medicare. This...

  6. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 3 2011-10-01 2011-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed by Medicare. This...

  7. 42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 3 2012-10-01 2012-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... General Payment Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed...

  8. The Cell Cycle Switch Computes Approximate Majority

    NASA Astrophysics Data System (ADS)

    Cardelli, Luca; Csikász-Nagy, Attila

    2012-09-01

    Both computational and biological systems have to make decisions about switching from one state to another. The 'Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell-cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and they are exchangeable as components of oscillatory networks.
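    The computational half of the comparison can be stated concretely. The following sketch simulates the standard three-state Approximate Majority population protocol (states X, Y and blank B, with rules X+Y -> X+B, X+B -> X+X, Y+B -> Y+Y); the population sizes are illustrative.

    ```python
    import random

    def approximate_majority(n_x, n_y, seed=1):
        """Simulate the three-state Approximate Majority population protocol."""
        rng = random.Random(seed)
        pop = ['X'] * n_x + ['Y'] * n_y
        while ('X' in pop and 'Y' in pop) or 'B' in pop:
            i, j = rng.sample(range(len(pop)), 2)   # a random interacting pair
            a, b = pop[i], pop[j]
            if {a, b} == {'X', 'Y'}:
                pop[j] = 'B'                        # disagreement blanks one agent
            elif 'B' in (a, b) and a != b:
                winner = a if a != 'B' else b
                pop[i] = pop[j] = winner            # a blank adopts its partner's state
        return pop[0]

    print(approximate_majority(60, 40))  # almost always 'X', the initial majority
    ```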

  9. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
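    The baseline against which the DEB method is compared is easy to reproduce. The sketch below (normalized, invented numbers) applies a linear Taylor approximation to the effect of a beam-height change on cantilever tip displacement; since the exact response scales as h^-3, the linear fit degrades quickly, which is the gap that closed-form approximations such as DEB aim to close. This is the comparison baseline only, not an implementation of DEB itself.

    ```python
    # Exact cantilever tip displacement scales as h**-3 in the beam height h
    # (normalized units, invented numbers).
    h0 = 1.0
    delta = lambda h: 1.0 / h**3     # exact response
    slope = -3.0 / h0**4             # analytic sensitivity d(delta)/dh at h0

    for h in (1.05, 1.2, 1.5):
        taylor = delta(h0) + slope * (h - h0)   # linear Taylor approximation
        print(f"h={h}: exact={delta(h):.3f}, linear Taylor={taylor:.3f}")
        # At h = 1.5 the linear estimate even goes negative, i.e. nonphysical.
    ```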

  10. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.

  11. Combining global and local approximations

    SciTech Connect

    Haftka, R.T. )

    1991-09-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model. 6 refs.
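    The construction described above can be sketched in a few lines: evaluate the ratio of the refined model to the crude model at one point, in value and slope, and let that linearly varying scale factor ride on the crude model everywhere else. Both 'models' below are invented stand-ins, not the paper's beam example.

    ```python
    # Crude (cheap) and refined (expensive) models of the same response; both are
    # invented stand-ins for coarse and fine finite element models.
    def f_crude(h):
        return 1.0 / h**3

    def f_refined(h):
        return 1.15 / h**3 + 0.2 / h

    h0, dh = 1.0, 1e-6
    s0 = f_refined(h0) / f_crude(h0)                        # scale factor at h0
    ds = (f_refined(h0 + dh) / f_crude(h0 + dh) - s0) / dh  # its slope (finite diff.)

    gla = lambda h: (s0 + ds * (h - h0)) * f_crude(h)   # linearly varying scaling
    const = lambda h: s0 * f_crude(h)                   # conventional constant scaling

    for h in (1.2, 1.5):
        print(f"h={h}: refined={f_refined(h):.4f}, "
              f"GLA={gla(h):.4f}, constant={const(h):.4f}")
    ```

    At h = 1.5 the linearly varying scale factor tracks the refined model noticeably better than the constant one, which is the paper's point.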

  12. Phenomenological applications of rational approximants

    NASA Astrophysics Data System (ADS)

    Gonzàlez-Solís, Sergi; Masjuan, Pere

    2016-08-01

    We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z)ln(1+z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from the semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
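    The pedagogical exercise mentioned above can be reproduced with a standard routine. The sketch below builds a [3/2] Padé approximant of (1/z)ln(1+z) from its Taylor coefficients and compares both approximations at z = 1, where the Taylor series converges slowly.

    ```python
    import numpy as np
    from scipy.interpolate import pade

    # Taylor coefficients of f(z) = ln(1+z)/z about z = 0: 1, -1/2, 1/3, -1/4, ...
    an = [(-1) ** k / (k + 1) for k in range(6)]
    p, q = pade(an, 2)                       # [3/2] Padé approximant

    z = 1.0
    taylor = sum(a * z**k for k, a in enumerate(an))
    print("exact    ", np.log(1 + z) / z)    # 0.693147...
    print("Taylor-6 ", taylor)               # slowly converging alternating series
    print("Pade[3/2]", p(z) / q(z))          # far closer for the same information
    ```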

  13. Embedding impedance approximations in the analysis of SIS mixers

    NASA Technical Reports Server (NTRS)

    Kerr, A. R.; Pan, S.-K.; Withington, S.

    1992-01-01

    Future millimeter-wave radio astronomy instruments will use arrays of many SIS receivers, either as focal plane arrays on individual radio telescopes, or as individual receivers on the many antennas of radio interferometers. Such applications will require broadband integrated mixers without mechanical tuners. To produce such mixers, it will be necessary to improve present mixer design techniques, most of which use the three-frequency approximation to Tucker's quantum mixer theory. This paper examines the adequacy of three approximations to Tucker's theory: (1) the usual three-frequency approximation which assumes a sinusoidal LO voltage at the junction, and a short-circuit at all frequencies above the upper sideband; (2) a five-frequency approximation which allows two LO voltage harmonics and five small-signal sidebands; and (3) a quasi five-frequency approximation in which five small-signal sidebands are allowed, but the LO voltage is assumed sinusoidal. These are compared with a full harmonic-Newton solution of Tucker's equations, including eight LO harmonics and their corresponding sidebands, for realistic SIS mixer circuits. It is shown that the accuracy of the three approximations depends strongly on the value of omega R(sub N)C for the SIS junctions used. For large omega R(sub N)C, all three approximations approach the eight-harmonic solution. For omega R(sub N)C values in the range 0.5 to 10, the range of most practical interest, the quasi five-frequency approximation is a considerable improvement over the three-frequency approximation, and should be suitable for much design work. For the realistic SIS mixers considered here, the five-frequency approximation gives results very close to those of the eight-harmonic solution. Use of these approximations, where appropriate, considerably reduces the computational effort needed to analyze an SIS mixer, and allows the design and optimization of mixers using a personal computer.

  14. An evaluation of the assumed beta probability density function subgrid-scale model for large eddy simulation of nonpremixed, turbulent combustion with heat release

    SciTech Connect

    Wall, Clifton; Boersma, Bendiks Jan; Moin, Parviz

    2000-10-01

    The assumed beta distribution model for the subgrid-scale probability density function (PDF) of the mixture fraction in large eddy simulation of nonpremixed, turbulent combustion is tested, a priori, for a reacting jet having significant heat release (density ratio of 5). The assumed beta distribution is tested as a model for both the subgrid-scale PDF and the subgrid-scale Favre PDF of the mixture fraction. The beta model is successful in approximating both types of PDF but is slightly more accurate in approximating the normal (non-Favre) PDF. To estimate the subgrid-scale variance of mixture fraction, which is required by the beta model, both a scale similarity model and a dynamic model are used. Predictions using the dynamic model are found to be more accurate. The beta model is used to predict the filtered value of a function chosen to resemble the reaction rate. When no model is used, errors in the predicted value are of the same order as the actual value. The beta model is found to reduce this error by about a factor of two, providing a significant improvement. (c) 2000 American Institute of Physics.
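    The model itself is two lines of algebra: moment-match a beta distribution to the resolved mean and subgrid variance of the mixture fraction, then integrate the nonlinear function of interest against it. The sketch below uses invented numbers and a sharply peaked stand-in for the reaction-rate-like function to show why the no-model estimate w(mean) can be badly wrong.

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import beta as beta_dist

    zbar, zvar = 0.3, 0.02                   # resolved mean and subgrid variance
    g = zbar * (1.0 - zbar) / zvar - 1.0     # moment matching
    a, b = zbar * g, (1.0 - zbar) * g        # beta shape parameters

    w = lambda Z: np.exp(-(Z - 0.5) ** 2 / 0.005)   # peaked, reaction-rate-like

    filtered, _ = quad(lambda Z: w(Z) * beta_dist.pdf(Z, a, b), 0.0, 1.0)
    print(f"beta-PDF filtered value {filtered:.4f} "
          f"vs no-model estimate {w(zbar):.6f}")
    ```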

  15. An approximate Riemann solver for hypervelocity flows

    NASA Technical Reports Server (NTRS)

    Jacobs, Peter A.

    1991-01-01

    We describe an approximate Riemann solver for the computation of hypervelocity flows in which there are strong shocks and viscous interactions. The scheme has three stages, the first of which computes the intermediate states assuming isentropic waves. A second stage, based on the strong shock relations, may then be invoked if the pressure jump across either wave is large. The third stage interpolates the interface state from the two initial states and the intermediate states. The solver is used as part of a finite-volume code and is demonstrated on two test cases. The first is a high Mach number flow over a sphere while the second is a flow over a slender cone with an adiabatic boundary layer. In both cases the solver performs well.
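    The first stage described above has a classical closed form. The sketch below evaluates the two-rarefaction (isentropic-wave) estimate of the intermediate pressure, here in the textbook form given by Toro rather than necessarily the exact expressions of the paper, on Sod's shock-tube states.

    ```python
    import numpy as np

    g = 1.4                          # ratio of specific heats
    z = (g - 1.0) / (2.0 * g)

    def star_pressure(rhoL, uL, pL, rhoR, uR, pR):
        """Intermediate pressure assuming both nonlinear waves are isentropic."""
        aL, aR = np.sqrt(g * pL / rhoL), np.sqrt(g * pR / rhoR)   # sound speeds
        num = aL + aR - 0.5 * (g - 1.0) * (uR - uL)
        den = aL / pL**z + aR / pR**z
        return (num / den) ** (1.0 / z)

    # Sod shock-tube initial states (density, velocity, pressure)
    print(star_pressure(1.0, 0.0, 1.0, 0.125, 0.0, 0.1))   # ~0.31 vs exact 0.3031
    ```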

  16. Approximating Functions with Exponential Functions

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2005-01-01

    The possibility of approximating a function with a linear combination of exponential functions of the form e^x, e^(2x), … is considered as a parallel development to the notion of Taylor polynomials, which approximate a function with a linear combination of power function terms. The sinusoidal functions sin x and cos x…

  17. Approximate circuits for increased reliability

    DOEpatents

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-12-22

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.

  18. Approximate circuits for increased reliability

    SciTech Connect

    Hamlet, Jason R.; Mayo, Jackson R.

    2015-08-18

    Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
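    The voting logic is easy to make concrete. In the sketch below (all functions invented for illustration), three approximate variants of a reference Boolean function each differ from it on a different single input, so the majority vote reproduces the reference on every input.

    ```python
    from itertools import product

    def reference(a, b, c):
        return (a and b) or c

    # Three approximate variants, each wrong on a different single input.
    def approx1(a, b, c):
        return ((a and b) or c) if (a, b, c) != (0, 0, 0) else 1

    def approx2(a, b, c):
        return ((a and b) or c) if (a, b, c) != (1, 0, 0) else 1

    def approx3(a, b, c):
        return ((a and b) or c) if (a, b, c) != (0, 1, 0) else 1

    def voter(bits):
        return int(sum(bits) >= 2)   # majority of three

    for a, b, c in product((0, 1), repeat=3):
        outs = [f(a, b, c) for f in (approx1, approx2, approx3)]
        assert voter(outs) == reference(a, b, c)
    print("majority vote matches the reference circuit on every input")
    ```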

  19. Approximation Preserving Reductions among Item Pricing Problems

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, the store wishes to set the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has production cost d_i and each customer e_j ∈ E has valuation v_j on the bundle e_j ⊆ V of items. When the store sells an item i ∈ V at price r_i, the profit for the item is p_i = r_i - d_i. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that p_i ≥ 0 for each i ∈ V; however, Balcan, et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss-leader," and showed that the seller can get more total profit when p_i < 0 is allowed than when it is not. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratio.
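    A brute-force toy instance (all numbers invented) makes the loss-leader effect concrete: with the costs and valuations below, the grid search prices item A below its production cost, and the resulting profit beats the best solution with nonnegative margins.

    ```python
    from itertools import product

    # Toy instance: item A costs 10, item B costs 0. One customer wants the
    # bundle {A, B} (valuation 15), another wants {B} alone (valuation 8).
    costs = {'A': 10.0, 'B': 0.0}
    customers = [({'A', 'B'}, 15.0), ({'B'}, 8.0)]

    def profit(prices):
        total = 0.0
        for bundle, valuation in customers:
            if sum(prices[i] for i in bundle) <= valuation:   # customer buys
                total += sum(prices[i] - costs[i] for i in bundle)
        return total

    grid = range(0, 16)
    best = max(product(grid, repeat=2), key=lambda pr: profit(dict(zip('AB', pr))))
    print(best, profit(dict(zip('AB', best))))
    # -> (7, 8) with profit 13: item A is priced below its cost of 10 (a loss
    #    leader), beating the best nonnegative-margin solution (10, 5) at 10.
    ```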

  20. Accidental overdose in the deep shade of night: a warning on the assumed safety of 'natural substances'.

    PubMed

    Chadwick, Andrew; Ash, Abigail; Day, James; Borthwick, Mark

    2015-11-05

    There is an increasing use of herbal remedies and medicines, with a commonly held belief that natural substances are safe. We present the case of a 50-year-old woman who was a trained herbalist and had purchased an 'Atropa belladonna (deadly nightshade) preparation'. Attempting to combat her insomnia, late one evening she deliberately ingested a small portion of this, approximately 50 mL. Unintentionally, this was equivalent to a very large (15 mg) dose of atropine, and she presented to our accident and emergency department in an acute anticholinergic syndrome (confused, tachycardic and hypertensive). She received supportive management in our intensive treatment unit, including mechanical ventilation. Fortunately, there were no long-term sequelae from this episode. However, this dramatic clinical presentation does highlight the potential dangers posed by herbal remedies. Furthermore, this case provides clinicians with an important insight into potentially dangerous products available legally within the UK. To help clinicians' understanding of this, our discussion explains the manufacture and 'dosing' of the A. belladonna preparation.

  1. Self-other agreement and assumed similarity in neuroticism, extraversion, and trait affect: distinguishing the effects of form and content.

    PubMed

    Beer, Andrew; Watson, David; McDade-Montez, Elizabeth

    2013-12-01

    Trait Negative Affect (NA) and Positive Affect (PA) are strongly associated with Neuroticism and Extraversion, respectively. Nevertheless, measures of the former tend to show substantially weaker self-other agreement, and stronger assumed similarity correlations, than scales assessing the latter. The current study separated the effects of item content versus format on agreement and assumed similarity using two different sets of Neuroticism and Extraversion measures and two different indicators of NA and PA (N = 381 newlyweds). Neuroticism and Extraversion consistently showed stronger agreement than NA and PA; in addition, however, scales with more elaborated items yielded significantly higher agreement correlations than those based on single adjectives. Conversely, the trait affect scales yielded stronger assumed similarity correlations than the personality scales; these coefficients were strongest for the adjectival measures of trait affect. Thus, our data establish a significant role for both content and format in assumed similarity and self-other agreement.

  2. Accuracy of approximate inversion schemes in quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Hochuli, Roman; Beard, Paul C.; Cox, Ben

    2014-03-01

    Five numerical phantoms were developed to investigate the accuracy of approximate inversion schemes in the reconstruction of oxygen saturation in photoacoustic imaging. In particular, two types of inversion are considered: Type I, an inversion that assumes the fluence is unchanged between illumination wavelengths, and Type II, a method that assumes known background absorption and scattering coefficients to partially correct for the fluence. These approaches are tested in tomography (PAT) and acoustic-resolution microscopy (AR-PAM) modes. They are found to produce accurate values of oxygen saturation in a blood vessel of interest at shallow depth: less than 3 mm for PAT and less than 1 mm for AR-PAM.
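    A Type I inversion reduces to linear algebra once the fluence is assumed to cancel between wavelengths. In the sketch below, the extinction coefficients and concentrations are illustrative placeholders, not values from the paper.

    ```python
    import numpy as np

    # Rows: two wavelengths (e.g. ~750 nm and ~850 nm); columns: [HbO2, Hb]
    # molar extinction coefficients (illustrative magnitudes only).
    E = np.array([[518.0, 1405.0],
                  [1058.0, 691.0]])

    c_true = np.array([80e-6, 20e-6])   # [HbO2, Hb] concentrations -> sO2 = 0.8
    mu_a = E @ c_true                   # measured absorption; proportional to the
                                        # initial pressure when the fluence cancels
    c_est = np.linalg.solve(E, mu_a)
    print("sO2 =", c_est[0] / c_est.sum())  # recovers 0.8 in this idealized setting
    ```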

  3. Finite difference methods for approximating Heaviside functions

    NASA Astrophysics Data System (ADS)

    Towers, John D.

    2009-05-01

    We present a finite difference method for discretizing a Heaviside function H(u(x)), where u is a level set function u : R^n → R that is positive on a bounded region Ω ⊂ R^n. There are two variants of our algorithm, both of which are adapted from finite difference methods that we proposed for discretizing delta functions in [J.D. Towers, Two methods for discretizing a delta function supported on a level set, J. Comput. Phys. 220 (2007) 915-931; J.D. Towers, Discretizing delta functions via finite differences and gradient normalization, Preprint at http://www.miracosta.edu/home/jtowers/; J.D. Towers, A convergence rate theorem for finite difference approximations to delta functions, J. Comput. Phys. 227 (2008) 6591-6597]. We consider our approximate Heaviside functions as they are used to approximate integrals over Ω. We prove that our first approximate Heaviside function leads to second order accurate quadrature algorithms. Numerical experiments verify this second order accuracy. For our second algorithm, numerical experiments indicate at least third order accuracy if the integrand f and ∂Ω are sufficiently smooth. Numerical experiments also indicate that our approximations are effective when used to discretize certain singular source terms in partial differential equations. We mostly focus on smooth f and u. By this we mean that f is smooth in a neighborhood of Ω, u is smooth in a neighborhood of ∂Ω, and the level set u(x) = 0 is a manifold of codimension one. However, our algorithms still give reasonable results if either f or u has jumps in its derivatives. Numerical experiments indicate approximately second order accuracy for both algorithms if the regularity of the data is reduced in this way, assuming that the level set u(x) = 0 is a manifold. Numerical experiments indicate that the dependence on the placement of Ω with respect to the grid is quite small for our algorithms. Specifically, a grid shift results in an O(h^p) change in the computed solution
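    The basic usage pattern is easy to show with a generic smoothed Heaviside (not Towers' specific first- or second-order construction): regularize H over a band whose width scales with the grid spacing, then sum to approximate an integral over Omega = {u > 0}.

    ```python
    import numpy as np

    def smoothed_heaviside(u, eps):
        """Generic smoothed Heaviside over a band of half-width eps."""
        out = np.where(u > eps, 1.0, 0.0)
        band = np.abs(u) <= eps
        out[band] = 0.5 * (1.0 + u[band] / eps
                           + np.sin(np.pi * u[band] / eps) / np.pi)
        return out

    # Integrate f = 1 over the unit disk Omega = {u > 0}, u = 1 - sqrt(x^2 + y^2).
    h = 0.02
    x = np.arange(-1.5, 1.5 + h, h)
    X, Y = np.meshgrid(x, x)
    u = 1.0 - np.sqrt(X**2 + Y**2)
    approx = np.sum(smoothed_heaviside(u, 1.5 * h)) * h * h
    print(approx, "vs exact", np.pi)
    ```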

  4. Approximate number and approximate time discrimination each correlate with school math abilities in young children.

    PubMed

    Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin

    2016-01-01

    What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics.

  5. Approximating subtree distances between phylogenies.

    PubMed

    Bonet, Maria Luisa; St John, Katherine; Mahindru, Ruchi; Amenta, Nina

    2006-10-01

    We give a 5-approximation algorithm to the rooted Subtree-Prune-and-Regraft (rSPR) distance between two phylogenies, which was recently shown to be NP-complete. This paper presents the first approximation result for this important tree distance. The algorithm follows a standard format for tree distances. The novel ideas are in the analysis. In the analysis, the cost of the algorithm uses a "cascading" scheme that accounts for possible wrong moves. This accounting is missing from previous analysis of tree distance approximation algorithms. Further, we show how all algorithms of this type can be implemented in linear time and give experimental results.

  6. Dual approximations in optimal control

    NASA Technical Reports Server (NTRS)

    Hager, W. W.; Ianculescu, G. D.

    1984-01-01

    A dual approximation for the solution to an optimal control problem is analyzed. The differential equation is handled with a Lagrange multiplier while other constraints are treated explicitly. An algorithm for solving the dual problem is presented.

  7. Mathematical algorithms for approximate reasoning

    NASA Technical Reports Server (NTRS)

    Murphy, John H.; Chay, Seung C.; Downs, Mary M.

    1988-01-01

    Most state-of-the-art expert system environments contain a single, often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
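    Of the techniques listed, certainty factors are the simplest to demonstrate. The sketch below implements the classic MYCIN-style combination rule, shown here as representative background rather than as the authors' proposed algorithms.

    ```python
    def combine_cf(x, y):
        """Combine two certainty factors in [-1, 1] (classic MYCIN-style rule)."""
        if x >= 0 and y >= 0:
            return x + y * (1 - x)          # two supporting pieces of evidence
        if x <= 0 and y <= 0:
            return x + y * (1 + x)          # two opposing pieces of evidence
        return (x + y) / (1 - min(abs(x), abs(y)))   # conflicting evidence

    print(combine_cf(0.6, 0.5))    # 0.8: agreement strengthens belief
    print(combine_cf(0.6, -0.4))   # 0.333...: conflict pulls the estimate back
    ```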

  8. Exponential approximations in optimal design

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.

    1990-01-01

    One-point and two-point exponential functions have been developed and shown to be very effective approximations of structural response. The exponential approximation has been compared with the linear, reciprocal, and quadratic fit methods on four test problems in structural analysis. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.
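
    As a sketch of the idea, assuming the standard one-point exponential construction (match the response value and its derivative at the current design point; the example response function below is hypothetical):

```python
# Sketch of a one-point exponential (power-law) approximation, assuming the
# standard construction: f(x) ~ f(x0) * (x / x0)**p with p chosen so the
# approximation matches both f(x0) and f'(x0).
def exponential_approx(f0, df0, x0):
    p = x0 * df0 / f0
    return lambda x: f0 * (x / x0) ** p

# Example response: stress ~ 1/x for a bar with cross-sectional area x.
f = lambda x: 100.0 / x
approx = exponential_approx(f0=f(2.0), df0=-100.0 / 2.0**2, x0=2.0)

for x in (1.5, 2.0, 3.0):
    print(x, f(x), approx(x))
```

    Note that p = -1 recovers the reciprocal approximation and p = 1 the linear one, which is why the exponential form generalizes both.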

  9. Approximating random quantum optimization problems

    NASA Astrophysics Data System (ADS)

    Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.

    2013-06-01

    We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the quantum satisfiability problem k-body quantum satisfiability (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.

  10. Local discontinuous Galerkin approximations to Richards’ equation

    NASA Astrophysics Data System (ADS)

    Li, H.; Farthing, M. W.; Dawson, C. N.; Miller, C. T.

    2007-03-01

    We consider the numerical approximation of Richards' equation because of its hydrological significance and its intrinsic merit as a nonlinear parabolic model that admits sharp fronts in space and time, which pose a special challenge to conventional numerical methods. We combine a robust and established variable-order, variable-step-size backward difference method for time integration with an evolving spatial discretization approach based upon the local discontinuous Galerkin (LDG) method. We formulate the approximation using a method-of-lines approach to uncouple the time integration from the spatial discretization. The spatial discretization is formulated as a set of four differential algebraic equations, which includes a mass conservation constraint. We demonstrate how this system of equations can be reduced to the solution of a single coupled unknown in space and time and a series of local constraint equations. We examine a variety of approximations at discontinuous element boundaries, permeability approximations, and numerical quadrature schemes. We demonstrate an optimal rate of convergence for smooth problems, and compare accuracy and efficiency for a wide variety of approaches applied to a set of common test problems. We obtain robust and efficient results that improve upon existing methods, and we recommend a future path that should yield significant additional improvements.
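
    For reference, Richards' equation in its common mixed form (a textbook statement, not quoted from the paper) is:

```latex
% Mixed form of Richards' equation: theta = volumetric water content,
% psi = pressure head, K = unsaturated hydraulic conductivity, z = elevation.
\frac{\partial \theta(\psi)}{\partial t}
  = \nabla \cdot \left[ K(\psi)\, \nabla \left( \psi + z \right) \right]
```

    The nonlinear dependence of both θ and K on ψ is what produces the sharp wetting fronts that motivate the LDG treatment.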

  11. Investigating Material Approximations in Spacecraft Radiation Analysis

    NASA Technical Reports Server (NTRS)

    Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.

    2011-01-01

    During the design process, the configuration of space vehicles and habitats changes frequently and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, are investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering was quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm2 exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) are substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.

  12. Planetary ephemerides approximation for radar astronomy

    NASA Technical Reports Server (NTRS)

    Sadr, R.; Shahshahani, M.

    1991-01-01

    The planetary ephemerides approximation for radar astronomy is discussed, and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in the Goldstone Solar System Radar is presented. Four different approaches are considered, and it is shown that Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean-square phase error and the frequency tracking error in the presence of the worst-case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error is less than one hertz when the frequency input to the PLO is updated every millisecond.
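
    A minimal sketch of the Chebyshev baseline is shown below (NumPy ships no Gram-polynomial routine, so only the commonly used approach is illustrated; the frequency profile is hypothetical):

```python
# Sketch of the Chebyshev-baseline idea only (the paper's Gram-polynomial
# variant is not reproduced): fit a smoothly varying Doppler frequency
# profile over a short, normalized update window and check the residual.
import numpy as np
from numpy.polynomial import chebyshev as C

t = np.linspace(-1.0, 1.0, 201)           # normalized time window
f_doppler = 1e5 * np.cos(0.3 * t + 0.1)   # hypothetical frequency profile, Hz

coef = C.chebfit(t, f_doppler, deg=5)     # degree-5 Chebyshev fit
residual = f_doppler - C.chebval(t, coef)

print("max frequency error [Hz]:", np.abs(residual).max())
```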

  13. Approximated solutions to Born-Infeld dynamics

    NASA Astrophysics Data System (ADS)

    Ferraro, Rafael; Nigro, Mauro

    2016-02-01

    The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.

  14. Approximation techniques of a selective ARQ protocol

    NASA Astrophysics Data System (ADS)

    Kim, B. G.

    Approximations to the performance of a selective automatic repeat request (ARQ) protocol with lengthy acknowledgement delays are presented. The discussion is limited to packet-switched communication systems in a single-hop environment such as that found with satellite systems. It is noted that retransmission of erroneous packets under ARQ is a common situation. ARQ techniques, e.g., stop-and-wait and continuous, are outlined. A simplified queueing analysis of the selective ARQ protocol shows that exact solutions with long delays are not feasible. Two approximation models are formulated, based on the known exact behavior of a system with short delays. The buffer size requirements at both ends of a communication channel are cited as a significant factor for accurate analysis, and further examination of buffer overflow and buffer lock-out probability and avoidance is recommended.
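
    For context, the textbook throughput-efficiency approximations for the three ARQ variants can be compared directly (Stallings-style formulas, not the record's queueing model; p and a below are assumed example values):

```python
# Textbook throughput-efficiency approximations, not the paper's analysis:
# p = frame error rate, a = normalized propagation delay (ratio of
# propagation time to frame transmission time).
def stop_and_wait(p, a):
    return (1 - p) / (1 + 2 * a)

def go_back_n(p, a):
    return (1 - p) / (1 + 2 * a * p)   # window assumed >= 2a + 1

def selective_repeat(p, a):
    return 1 - p                       # window and buffers assumed large enough

p, a = 0.01, 10.0   # e.g., a long satellite hop gives a large a
for f in (stop_and_wait, go_back_n, selective_repeat):
    print(f.__name__, round(f(p, a), 3))
```

    The gap between go-back-N and selective repeat widens as the delay a grows, which is why selective ARQ matters for satellite links.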

  15. A method for approximating acoustic-field-amplitude uncertainty caused by environmental uncertainties.

    PubMed

    James, Kevin R; Dowling, David R

    2008-09-01

    In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.

  16. Approximating the Geisser-Greenhouse sphericity estimator and its applications to diffusion tensor imaging.

    PubMed

    Clement-Spychala, Meagan E; Couper, David; Zhu, Hongtu; Muller, Keith E

    2010-01-01

    The diffusion tensor imaging (DTI) protocol characterizes diffusion anisotropy locally in space, thus providing rich detail about white matter tissue structure. Although useful metrics for diffusion tensors have been defined, the statistical properties of these measures have been little studied. Assuming homogeneity within a region makes it possible to apply Wishart distribution theory. First, it is shown that common DTI metrics are simple functions of known test statistics. The average diffusion coefficient (ADC) corresponds to the trace of a Wishart and is also described as the generalized (multivariate) variance, the average variance of the principal components. Therefore ADC has a known exact distribution (a positively weighted quadratic form in Gaussians) as well as a simple and accurate approximation (Satterthwaite) in terms of a scaled chi-square. Of particular interest is that fractional anisotropy (FA) values for given regions of interest are functions of the Geisser-Greenhouse (GG) sphericity estimator. The GG sphericity estimator can be approximated well by a linear transformation of a squared beta random variable. Simulated data demonstrate that the fits work well for simulated diffusion tensors. Applying traditional beta density estimation techniques to histograms of FA values from a region allows the histogram of hundreds or thousands of values to be represented in terms of just two estimates for the beta parameters. Thus, using the approximate distribution eliminates the "curse of dimensionality" for FA values. A parallel result holds for ADC.
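
    The Satterthwaite step mentioned above is simple to reproduce. A hedged sketch (assumed weights, not data from the paper) that matches the first two moments of a positively weighted quadratic form in Gaussians with a scaled chi-square:

```python
# Satterthwaite moment matching: approximate sum_i w_i * chi2(1) by
# g * chi2(h), where g*h and 2*g^2*h match the exact mean and variance.
import numpy as np
from scipy import stats

w = np.array([3.0, 1.5, 0.5])      # hypothetical eigenvalue weights
g = (w**2).sum() / w.sum()         # scale factor
h = w.sum()**2 / (w**2).sum()      # effective degrees of freedom

rng = np.random.default_rng(0)
samples = (w * rng.standard_normal((100_000, w.size))**2).sum(axis=1)

print("MC 95th percentile:      ", np.quantile(samples, 0.95))
print("Satterthwaite percentile:", g * stats.chi2.ppf(0.95, df=h))
```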

  17. Rational approximations for tomographic reconstructions

    NASA Astrophysics Data System (ADS)

    Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas

    2013-06-01

    We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp-Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image.

  18. Gadgets, approximation, and linear programming

    SciTech Connect

    Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.

    1996-12-31

    We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a question posed in earlier work on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45, respectively, is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.

  19. Approximation of Dynamical System's Separatrix Curves

    NASA Astrophysics Data System (ADS)

    Cavoretto, Roberto; Chaudhuri, Sanjay; De Rossi, Alessandra; Menduni, Eleonora; Moretti, Francesca; Rodi, Maria Caterina; Venturino, Ezio

    2011-09-01

    In dynamical systems, saddle points partition the domain into basins of attraction of the remaining locally stable equilibria. This problem is rather common, especially in population dynamics models like prey-predator or competition systems. In this paper we construct programs for the detection of points lying on the separatrix curve, i.e., the curve which partitions the domain. Finally, an efficient algorithm, based on the Partition of Unity method with local approximants given by Wendland's functions, is used for reconstructing the separatrix curve.
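
    The Partition-of-Unity reconstruction itself is not reproduced here, but the preceding detection step can be sketched by brute force: classify each grid point by the equilibrium its trajectory reaches and flag cells where the classification changes (the model and parameters below are illustrative, not the paper's):

```python
# Brute-force separatrix-point detection for a bistable Lotka-Volterra
# competition model (a > 1 and b > 1 give two stable corner equilibria).
import numpy as np
from scipy.integrate import solve_ivp

def competition(t, y, a=1.2, b=1.1):
    x1, x2 = y
    return [x1 * (1 - x1 - a * x2), x2 * (1 - x2 - b * x1)]

def attractor_label(y0):
    sol = solve_ivp(competition, (0, 200), y0, rtol=1e-8)
    x1, x2 = sol.y[:, -1]
    return 0 if x1 > x2 else 1          # which species wins

xs = np.linspace(0.01, 1.0, 40)
labels = np.array([[attractor_label([x, y]) for x in xs] for y in xs])

# Grid cells where the label changes straddle the separatrix curve.
boundary = np.argwhere(np.diff(labels, axis=1) != 0)
print(f"{len(boundary)} grid cells straddle the separatrix")
```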

  20. Adaptive approximation models in optimization

    SciTech Connect

    Voronin, A.N.

    1995-05-01

    The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.

  1. Approximating spatially exclusive invasion processes

    NASA Astrophysics Data System (ADS)

    Ross, Joshua V.; Binder, Benjamin J.

    2014-05-01

    A number of biological processes, such as invasive plant species and cell migration, are composed of two key mechanisms: motility and reproduction. Due to the spatially exclusive interacting behavior of these processes, a cellular automaton (CA) model is specified to simulate a one-dimensional invasion process. Three approximations (independence, Poisson, and 2D-Markov chain) are considered that attempt to capture the average behavior of the CA. We show that our 2D-Markov chain approximation accurately predicts the state of the CA for a wide range of motility and reproduction rates.
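
    A minimal sketch of such a spatially exclusive CA follows (the update scheme and rates are assumed for illustration, not the authors' exact specification):

```python
# One-dimensional exclusion CA: each occupied site may attempt to move into
# an empty neighbor (rate m) or place an offspring there (rate r).
import random

def step(occ, m, r):
    L = len(occ)
    agents = [i for i, o in enumerate(occ) if o]
    random.shuffle(agents)              # random sequential update
    for i in agents:
        if not occ[i]:
            continue                    # already vacated this sweep
        j = i + random.choice((-1, 1))  # target neighbor site
        if 0 <= j < L and not occ[j]:   # exclusion: target must be empty
            if random.random() < m:
                occ[i], occ[j] = False, True   # motility: move
            elif random.random() < r:
                occ[j] = True                  # reproduction: offspring
    return occ

occ = [False] * 200
occ[100] = True                          # single initial invader
for _ in range(500):
    step(occ, m=0.5, r=0.2)
print("occupied sites:", sum(occ))
```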

  2. Heat pipe transient response approximation.

    SciTech Connect

    Reid, R. S.

    2001-01-01

    A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.

  3. 42 CFR 137.300 - Since Federal environmental responsibilities are new responsibilities, which may be assumed by...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Federal environmental responsibilities assumed by the Self-Governance Tribe. ... 42 Public Health 1 2010-10-01 2010-10-01 false Since Federal environmental responsibilities are... additional funds available to Self-Governance Tribes to carry out these formerly inherently...

  4. 42 CFR 137.286 - Do Self-Governance Tribes become Federal agencies when they assume these Federal environmental...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects in... performing these Federal environmental responsibilities, Self-Governance Tribes will be considered the... 42 Public Health 1 2010-10-01 2010-10-01 false Do Self-Governance Tribes become Federal...

  5. Beyond an Assumed Mother-Child Symbiosis in Nutritional Guidelines: The Everyday Reasoning behind Complementary Feeding Decisions

    ERIC Educational Resources Information Center

    Nielsen, Annemette; Michaelsen, Kim F.; Holm, Lotte

    2014-01-01

    Researchers question the implications of the way in which "motherhood" is constructed in public health discourse. Current nutritional guidelines for Danish parents of young children are part of this discourse. They are shaped by an assumed symbiotic relationship between the nutritional needs of the child and the interest and focus of the…

  6. Step 4: Provides the Birthing Woman With Freedom of Movement to Walk, Move, Assume Positions of Her Choice

    PubMed Central

    Storton, Sharon

    2007-01-01

    Step 4 of the Ten Steps of Mother-Friendly Care ensures that women have the freedom to walk, move, and assume positions of their choice during labor and birth. The rationales and the evidence in support of this step are presented. PMID:18523670

  7. 12 CFR Appendix L to Part 226 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a)...

  8. 12 CFR Appendix L to Part 226 - Assumed Loan Periods for Computations of Total Annual Loan Cost Rates

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a)...

  9. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute...

  10. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute...

  11. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute...

  12. 9 CFR 72.15 - Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute...

  13. 25 CFR 1000.87 - How does the AFA specify the services provided, functions performed, and responsibilities assumed...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 2 2012-04-01 2012-04-01 false How does the AFA specify the services provided, functions... Programs May Be Included in An Afa § 1000.87 How does the AFA specify the services provided, functions... AFA must specify in writing the services, functions, and responsibilities to be assumed by the...

  14. 25 CFR 1000.87 - How does the AFA specify the services provided, functions performed, and responsibilities assumed...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false How does the AFA specify the services provided, functions... Programs May Be Included in An Afa § 1000.87 How does the AFA specify the services provided, functions... AFA must specify in writing the services, functions, and responsibilities to be assumed by the...

  15. 25 CFR 1000.87 - How does the AFA specify the services provided, functions performed, and responsibilities assumed...

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 25 Indians 2 2011-04-01 2011-04-01 false How does the AFA specify the services provided, functions... Programs May Be Included in An Afa § 1000.87 How does the AFA specify the services provided, functions... AFA must specify in writing the services, functions, and responsibilities to be assumed by the...

  16. 25 CFR 1000.87 - How does the AFA specify the services provided, functions performed, and responsibilities assumed...

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 2 2014-04-01 2014-04-01 false How does the AFA specify the services provided, functions... Programs May Be Included in An Afa § 1000.87 How does the AFA specify the services provided, functions... AFA must specify in writing the services, functions, and responsibilities to be assumed by the...

  17. 25 CFR 1000.87 - How does the AFA specify the services provided, functions performed, and responsibilities assumed...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 2 2013-04-01 2013-04-01 false How does the AFA specify the services provided, functions... Programs May Be Included in An Afa § 1000.87 How does the AFA specify the services provided, functions... AFA must specify in writing the services, functions, and responsibilities to be assumed by the...

  18. 25 CFR 224.64 - How may a tribe assume management of development of different types of energy resources?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... develop that type of energy resource and will trigger the public notice and opportunity for...

  19. 25 CFR 224.64 - How may a tribe assume management of development of different types of energy resources?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... develop that type of energy resource and will trigger the public notice and opportunity for...

  20. 25 CFR 224.64 - How may a tribe assume management of development of different types of energy resources?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... develop that type of energy resource and will trigger the public notice and opportunity for...

  1. 25 CFR 224.64 - How may a tribe assume management of development of different types of energy resources?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... develop that type of energy resource and will trigger the public notice and opportunity for...

  2. 25 CFR 224.64 - How may a tribe assume management of development of different types of energy resources?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... develop that type of energy resource and will trigger the public notice and opportunity for...

  3. Testing the frozen flow approximation

    NASA Technical Reports Server (NTRS)

    Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro

    1993-01-01

    We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g., in reproducing the counts-in-cell distribution at small scales, but it does poorly in the cross-correlation with the n-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.

  4. Ab initio dynamical vertex approximation

    NASA Astrophysics Data System (ADS)

    Galler, Anna; Thunström, Patrik; Gunacker, Patrik; Tomczak, Jan M.; Held, Karsten

    2017-03-01

    Diagrammatic extensions of dynamical mean-field theory (DMFT), such as the dynamical vertex approximation (DΓA), allow us to include nonlocal correlations beyond DMFT on all length scales and have proved their worth for model calculations. Here, we develop and implement an ab initio DΓA approach (AbinitioDΓA) for electronic structure calculations of materials. The starting point is the two-particle irreducible vertex in the two particle-hole channels, which is approximated by the bare nonlocal Coulomb interaction and all local vertex corrections. From this, we calculate the full nonlocal vertex and the nonlocal self-energy through the Bethe-Salpeter equation. The AbinitioDΓA approach naturally generates all local DMFT correlations and all nonlocal GW contributions, but also further nonlocal correlations beyond: mixed terms of the former two and nonlocal spin fluctuations. We apply this new methodology to the prototypical correlated metal SrVO3.

  5. Potential of the approximation method

    SciTech Connect

    Amano, K.; Maruoka, A.

    1996-12-31

    Developing some techniques for the approximation method, we establish precise versions of the following statements concerning lower bounds for circuits that detect cliques of size s in a graph with m vertices. For 5 ≤ s ≤ m/4, a monotone circuit computing CLIQUE(m, s) contains at least (1/2)·1.8^min(√s − 1/2, m/(4s)) gates; if a non-monotone circuit computes CLIQUE using a "small" amount of negation, then the circuit contains an exponential number of gates. The former is proved very simply using the so-called bottleneck counting argument within the framework of approximation, whereas the latter is verified by introducing a notion of restricting negation and generalizing the sunflower contraction.

  6. Nonlinear Filtering and Approximation Techniques

    DTIC Science & Technology

    1991-09-01


  7. Reliable Function Approximation and Estimation

    DTIC Science & Technology

    2016-08-16

    Journal on Mathematical Analysis 47 (6), 2015, 4606-4629. (P3) The Sample Complexity of Weighted Sparse Approximation, B. Bah and R. Ward, IEEE... Solving systems of quadratic equations, S. Sanghavi, C. White, and R. Ward, Results in Mathematics, 2016. (O5) Relax, no need to round: Integrality of... Theoretical Computer Science. (O6) A unified framework for linear dimensionality reduction in L1, F. Krahmer and R. Ward, Results in Mathematics, 2014, 1-23.

  8. Working Memory in Nonsymbolic Approximate Arithmetic Processing: A Dual-Task Study with Preschoolers

    ERIC Educational Resources Information Center

    Xenidou-Dervou, Iro; van Lieshout, Ernest C. D. M.; van der Schoot, Menno

    2014-01-01

    Preschool children have been proven to possess nonsymbolic approximate arithmetic skills before learning how to manipulate symbolic math and thus before any formal math instruction. It has been assumed that nonsymbolic approximate math tasks necessitate the allocation of Working Memory (WM) resources. WM has been consistently shown to be an…

  9. Approximate Counting of Graphical Realizations.

    PubMed

    Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos

    2015-01-01

    In 1999, Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved in the affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.
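
    The flavor of the underlying Markov chain can be reproduced with standard tools. A sketch using NetworkX (degree-preserving double edge swaps on one realization; the paper's forbidden-edge variant is not implemented here):

```python
# Build one realization of a degree sequence, then randomize it with
# degree-preserving double edge swaps, the basic move of such MCMC samplers.
import networkx as nx

degree_sequence = [3, 3, 2, 2, 2, 2]
G = nx.havel_hakimi_graph(degree_sequence)           # one graphical realization

nx.double_edge_swap(G, nswap=100, max_tries=10_000)  # walk the realization space

assert sorted(d for _, d in G.degree()) == sorted(degree_sequence)
print(sorted(G.edges()))
```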

  10. Approximate Counting of Graphical Realizations

    PubMed Central

    2015-01-01

    In 1999, Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved in the affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations. PMID:26161994

  11. Computer Experiments for Function Approximations

    SciTech Connect

    Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C

    2007-10-15

    This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
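
    Two of the sampling designs above are easy to contrast with a toy surrogate. A hedged sketch (a least-squares quadratic stands in for MARS/SVM, and the test function is assumed):

```python
# Compare plain Monte Carlo and Latin hypercube sampling by how well a
# simple quadratic surrogate fits a known test function on [0, 1]^2.
import numpy as np
from scipy.stats import qmc

def test_function(X):
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

def quad_features(X):
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_error(X_train, rng):
    coef, *_ = np.linalg.lstsq(quad_features(X_train),
                               test_function(X_train), rcond=None)
    X_test = rng.random((2000, 2))
    return np.abs(quad_features(X_test) @ coef - test_function(X_test)).mean()

rng = np.random.default_rng(0)
mc = rng.random((40, 2))
lhs = qmc.LatinHypercube(d=2, seed=0).random(40)
print("Monte Carlo error:    ", fit_error(mc, rng))
print("Latin hypercube error:", fit_error(lhs, rng))
```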

  12. Approximate reasoning using terminological models

    NASA Technical Reports Server (NTRS)

    Yen, John; Vaidya, Nitin

    1992-01-01

    Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.

  13. Methodology for approximating and implementing fixed-point approximations of cosines for order-16 DCT

    NASA Astrophysics Data System (ADS)

    Hinds, Arianne T.

    2011-09-01

    Spatial transformations whose kernels employ sinusoidal functions for the decorrelation of signals remain as fundamental components of image and video coding systems. Practical implementations are designed in fixed precision for which the most challenging task is to approximate these constants with values that are both efficient in terms of complexity and accurate with respect to their mathematical definitions. Scaled architectures, for example, as used in the implementations of the order-8 Discrete Cosine Transform and its corresponding inverse both specified in ISO/IEC 23002-2 (MPEG C Pt. 2), can be utilized to mitigate the complexity of these approximations. That is, the implementation of the transform can be designed such that it is completed in two stages: 1) the main transform matrix in which the sinusoidal constants are roughly approximated, and 2) a separate scaling stage to further refine the approximations. This paper describes a methodology termed the Common Factor Method, for finding fixed-point approximations of such irrational values suitable for use in scaled architectures. The order-16 Discrete Cosine Transform provides a framework in which to demonstrate the methodology, but the methodology itself can be employed to design fixed-point implementations of other linear transformations.
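
    A toy version of the core approximation task follows (dyadic rational fits to the order-16 DCT cosines; the Common Factor Method's separate scaling stage is not reproduced):

```python
# Approximate cos(k*pi/32) by an integer numerator over a power-of-two
# denominator, the basic fixed-point design question for an order-16 DCT.
import math

def best_dyadic(value, bits):
    """Best a / 2**bits approximation to `value`."""
    a = round(value * (1 << bits))
    return a, a / (1 << bits)

for k in range(1, 8):
    c = math.cos(k * math.pi / 32)
    a, approx = best_dyadic(c, bits=8)
    print(f"cos({k}pi/32) ~ {a}/256, error = {abs(c - approx):.2e}")
```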

  14. Detection of the earth with the SETI microwave observing system assumed to be operating out in the Galaxy

    NASA Technical Reports Server (NTRS)

    Billingham, John; Tarter, Jill

    1989-01-01

    The maximum range is calculated at which radar signals from the earth could be detected by a search system similar to the NASA SETI Microwave Observing Project (SETI MOP) assumed to be operating out in the Galaxy. Figures are calculated for the Targeted Search and for the Sky Survey parts of the MOP, both planned to be operating in the 1990s. The probability of detection is calculated for the two most powerful transmitters, the planetary radar at Arecibo (Puerto Rico) and the ballistic missile early warning systems (BMEWSs), assuming that the terrestrial radars are only in the eavesdropping mode. It was found that, for the case of a single transmitter within the maximum range, the highest probability is for the sky survey detecting BMEWSs; this is directly proportional to BMEWS sky coverage and is therefore 0.25.

  15. Civil Aviation: U.S. Efforts Improved Afghan Capabilities, but the Afghan Government Did Not Assume Airspace Management as Planned

    DTIC Science & Technology

    2015-05-01

    Special Inspector General for Afghanistan Reconstruction, SIGAR 15-58 Audit Report: Civil Aviation: U.S. Efforts Improved Afghan Capabilities, but the Afghan Government Did Not Assume Airspace Management as Planned. SIGAR 15-58-AR/Civil Aviation, May 2015.

  16. Detection of the Earth with the SETI microwave observing system assumed to be operating out in the galaxy

    NASA Technical Reports Server (NTRS)

    Billingham, J.; Tarter, J.

    1992-01-01

    This paper estimates the maximum range at which radar signals from the Earth could be detected by a search system similar to the NASA Search for Extraterrestrial Intelligence Microwave Observing Project (SETI MOP) assumed to be operating out in the galaxy. Figures are calculated for the Targeted Search, and for the Sky Survey parts of the MOP, both operating, as currently planned, in the second half of the decade of the 1990s. Only the most powerful terrestrial transmitters are considered, namely, the planetary radar at Arecibo in Puerto Rico, and the ballistic missile early warning systems (BMEWS). In each case the probabilities of detection over the life of the MOP are also calculated. The calculation assumes that we are only in the eavesdropping mode. Transmissions intended to be detected by SETI systems are likely to be much stronger and would of course be found with higher probability to a greater range. Also, it is assumed that the transmitting civilization is at the same level of technological evolution as ours on Earth. This is very improbable. If we were to detect another technological civilization, it would, on statistical grounds, be much older than we are and might well have much more powerful transmitters. Both factors would make detection by the NASA MOP a much more likely outcome.

  17. Detection of the Earth with the SETI microwave observing system assumed to be operating out in the galaxy.

    PubMed

    Billingham, J; Tarter, J

    1992-01-01

    This paper estimates the maximum range at which radar signals from the Earth could be detected by a search system similar to the NASA Search for Extraterrestrial Intelligence Microwave Observing Project (SETI MOP) assumed to be operating out in the galaxy. Figures are calculated for the Targeted Search, and for the Sky Survey parts of the MOP, both operating, as currently planned, in the second half of the decade of the 1990s. Only the most powerful terrestrial transmitters are considered, namely, the planetary radar at Arecibo in Puerto Rico, and the ballistic missile early warning systems (BMEWS). In each case the probabilities of detection over the life of the MOP are also calculated. The calculation assumes that we are only in the eavesdropping mode. Transmissions intended to be detected by SETI systems are likely to be much stronger and would of course be found with higher probability to a greater range. Also, it is assumed that the transmitting civilization is at the same level of technological evolution as ours on Earth. This is very improbable. If we were to detect another technological civilization, it would, on statistical grounds, be much older than we are and might well have much more powerful transmitters. Both factors would make detection by the NASA MOP a much more likely outcome.

  18. New Tests of the Fixed Hotspot Approximation

    NASA Astrophysics Data System (ADS)

    Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.

    2005-05-01

    We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. The results disagree, in particular, with the recent extreme interpretation of

  19. Fermion tunneling beyond semiclassical approximation

    SciTech Connect

    Majhi, Bibhas Ranjan

    2009-02-15

    Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.

  20. Improved non-approximability results

    SciTech Connect

    Bellare, M.; Sudan, M.

    1994-12-31

    We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.

  1. Generalized Gradient Approximation Made Simple

    SciTech Connect

    Perdew, J.P.; Burke, K.; Ernzerhof, M.

    1996-10-01

    Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.

  2. Approximate transferability in conjugated polyalkenes

    NASA Astrophysics Data System (ADS)

    Eskandari, Keiamars; Mandado, Marcos; Mosquera, Ricardo A.

    2007-03-01

    QTAIM-computed atomic and bond properties, as well as delocalization indices (obtained from electron densities computed at the HF, MP2 and B3LYP levels) of several linear and branched conjugated polyalkenes and O- and N-containing conjugated polyenes, have been employed to assess approximately transferable CH groups. The values of these properties indicate that the effects of the functional group extend to four CH groups, whereas those of the terminal carbon affect up to three carbons. Ternary carbons also significantly modify the properties of atoms in α, β and γ positions.

  3. Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices

    PubMed Central

    Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher

    2015-01-01

    We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations. PMID:25919667
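
    The two-matrix closed form mentioned above is short enough to state directly. A sketch with SciPy (the example matrices are assumed):

```python
# Geometric mean of two SPD matrices, the standard closed form:
# A #_{1/2} B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}.
import numpy as np
from scipy.linalg import sqrtm, inv

def geometric_mean(A, B):
    As = sqrtm(A)
    Ais = inv(As)
    return As @ sqrtm(Ais @ B @ Ais) @ As

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
M = geometric_mean(A, B)

# Sanity check of a defining property of the geometric mean: M A^{-1} M = B.
print(np.allclose(M @ inv(A) @ M, B))
```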

  4. Wavelet Approximation in Data Assimilation

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

    Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
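
    The compression experiment is easy to mimic at small scale. A sketch using PyWavelets on a synthetic correlation-like field (the field, wavelet, and truncation level are assumed stand-ins for the paper's setup):

```python
# Keep only the largest 3% of 2D wavelet coefficients of a smooth
# correlation-like field and measure the reconstruction error.
import numpy as np
import pywt

x = np.linspace(-1, 1, 128)
field = np.exp(-(x[:, None] - x[None, :])**2 / 0.1)   # synthetic correlation

coeffs = pywt.wavedec2(field, "db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = np.quantile(np.abs(arr), 0.97)            # keep top 3%
arr[np.abs(arr) < threshold] = 0.0

rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                    "db4")
print("relative L2 error:", np.linalg.norm(rec - field) / np.linalg.norm(field))
```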

  5. Laguerre approximation of random foams

    NASA Astrophysics Data System (ADS)

    Liebscher, André

    2015-09-01

    Stochastic models for the microstructure of foams are valuable tools to study the relations between microstructure characteristics and macroscopic properties. Owing to the physical laws behind the formation of foams, Laguerre tessellations have turned out to be suitable models for foams. Laguerre tessellations are weighted generalizations of Voronoi tessellations, where polyhedral cells are formed through the interaction of weighted generator points. While both share the same topology, the cell curvature of foams allows only an approximation by Laguerre tessellations. This makes the model fitting a challenging task, especially when the preservation of the local topology is required. In this work, we propose an inversion-based approach to fit a Laguerre tessellation model to a foam. The idea is to find a set of generator points whose tessellation best fits the foam's cell system. For this purpose, we transform the model fitting into a minimization problem that can be solved by gradient descent-based optimization. The proposed algorithm restores the generators of a tessellation if it is known to be Laguerre. If, as in the case of foams, no exact solution is possible, an approximative solution is obtained that maintains the local topology.

  6. Common Variable Immunodeficiency.

    PubMed

    Saikia, Biman; Gupta, Sudhir

    2016-04-01

    Common variable immunodeficiency (CVID) is the most common primary immunodeficiency of young adolescents and adults, and it also affects children. The disease remains largely under-diagnosed in India and Southeast Asian countries. Although in the majority of cases it is sporadic, the disease may be inherited in an autosomal recessive pattern and, rarely, in an autosomal dominant pattern. Patients, in addition to frequent sino-pulmonary infections, are also susceptible to various autoimmune diseases and malignancy, predominantly lymphoma and leukemia. Other characteristic lesions include lymphocytic and granulomatous interstitial lung disease and nodular lymphoid hyperplasia of the gut. Diagnosis requires reduced levels of at least two immunoglobulin isotypes (IgG with IgA and/or IgM) and impaired specific antibody response to vaccines. A number of gene mutations have been described in CVID; however, these genetic alterations account for less than 20% of cases. Flow cytometry aptly demonstrates a disturbed B cell homeostasis, with reduced or absent memory B cells and increased CD21(low) B cell and transitional B cell populations. Approximately one-third of patients with CVID also display T cell functional defects. Immunoglobulin therapy remains the mainstay of treatment. Immunologists and other clinicians in India and other Southeast Asian countries need to be aware of CVID so that early diagnoses can be made; currently, the majority of these patients still go undiagnosed.

  7. Analytical approximations for spiral waves

    SciTech Connect

    Löber, Jakob Engel, Harald

    2013-12-15

    We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R_0. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R_+) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R_+ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.

  8. Approximating metal-insulator transitions

    NASA Astrophysics Data System (ADS)

    Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej

    2015-12-01

    We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate Metal-Insulator Transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
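
    For illustration, here is a sketch in the spirit of the construction (not the authors' exact scheme): the quasiperiodic Aubry-André potential is approximated by periodic potentials whose periods are the Fibonacci denominators of the golden-mean convergents, and the mean inverse participation ratio (IPR) of the eigenstates signals the approximate MIT near λ = 2 (hopping set to 1).

    ```python
    import numpy as np

    def mean_ipr(lam, p, q):
        # Tight-binding chain (hopping 1) with periodic approximant potential
        # V(n) = lam * cos(2*pi*(p/q)*n); one period of q sites.
        v = lam * np.cos(2 * np.pi * (p / q) * np.arange(q))
        h = np.diag(v) + np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
        _, vecs = np.linalg.eigh(h)
        return np.mean(np.sum(np.abs(vecs) ** 4, axis=0))  # inverse participation

    fib = [1, 1]
    while fib[-1] < 1000:
        fib.append(fib[-1] + fib[-2])

    for k in (6, 9, 12, 15):                 # increasing spatial period
        p, q = fib[k - 1], fib[k]            # convergents of the golden mean
        for lam in (1.0, 2.0, 3.0):          # extended / critical / localized
            print(f"p/q={p}/{q:4d} lambda={lam}: mean IPR={mean_ipr(lam, p, q):.4f}")
    ```

    Small IPR (of order 1/q) indicates extended, metallic states; an IPR of order one indicates localization, so the crossover near λ = 2 becomes sharper as the period grows.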

  9. Analytical approximations for spiral waves.

    PubMed

    Löber, Jakob; Engel, Harald

    2013-12-01

    We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R(0). For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R(+)) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R(+) with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.

  10. Fast Approximate Quadratic Programming for Graph Matching

    PubMed Central

    Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves good performance. PMID:25886624
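
    For readers who want to try this, an FAQ implementation ships with recent versions of SciPy as scipy.optimize.quadratic_assignment; a minimal usage sketch on synthetic weighted graphs follows (illustrative data, and recovery may be partial since FAQ is a local heuristic):

    ```python
    import numpy as np
    from scipy.optimize import quadratic_assignment

    rng = np.random.default_rng(1)
    n = 30
    A = rng.random((n, n)); A = (A + A.T) / 2        # weighted graph 1
    perm = rng.permutation(n)
    noise = rng.random((n, n))
    B = A[perm][:, perm] + 0.01 * (noise + noise.T)  # noisy relabeling of A

    # method="faq" selects the Fast Approximate QAP (Frank-Wolfe based) solver.
    res = quadratic_assignment(A, B, method="faq", options={"maximize": True})
    print("objective:", round(res.fun, 3))
    print("fraction of vertices recovered:",
          np.mean(res.col_ind == np.argsort(perm)))
    ```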

  11. Approximate Bayesian computation with functional statistics.

    PubMed

    Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K

    2013-03-26

    Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
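
    A minimal sketch of the idea (not the authors' optimized procedure): ABC rejection where the summary is a functional statistic evaluated on a grid and the distance is weighted pointwise, here by inverse variances from a pilot run as a simple stand-in for the optimized weights. The model, prior, and tolerances below are all hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    grid = np.linspace(0, 5, 40)

    def simulate(theta, n=200):
        return rng.exponential(1.0 / theta, size=n)

    def func_stat(x):
        # Empirical CDF on the grid: a simple functional summary statistic.
        return np.mean(x[:, None] <= grid[None, :], axis=0)

    obs = func_stat(simulate(1.7))           # stand-in for the observed data

    # Pilot simulations give pointwise variances for the weights.
    pilot = np.array([func_stat(simulate(rng.uniform(0.5, 3.0)))
                      for _ in range(200)])
    w = 1.0 / (pilot.var(axis=0) + 1e-9)

    draws, dists = [], []
    for _ in range(5000):
        theta = rng.uniform(0.5, 3.0)        # draw from the prior
        d = np.sum(w * (func_stat(simulate(theta)) - obs) ** 2)
        draws.append(theta); dists.append(d)
    draws, dists = np.array(draws), np.array(dists)
    keep = dists <= np.quantile(dists, 0.01)  # keep the closest 1%
    print(f"ABC posterior: {draws[keep].mean():.2f} +/- {draws[keep].std():.2f}")
    ```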

  12. Fast approximate quadratic programming for graph matching.

    PubMed

    Vogelstein, Joshua T; Conroy, John M; Lyzinski, Vince; Podrazik, Louis J; Kratzer, Steven G; Harley, Eric T; Fishkind, Donniell E; Vogelstein, R Jacob; Priebe, Carey E

    2015-01-01

    Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves good performance.

  13. On Integral Upper Limits Assuming Power-law Spectra and the Sensitivity in High-energy Astronomy

    NASA Astrophysics Data System (ADS)

    Ahnen, Max L.

    2017-02-01

    The high-energy non-thermal universe is dominated by power-law-like spectra. Therefore, results in high-energy astronomy are often reported as parameters of power-law fits, or, in the case of a non-detection, as an upper limit assuming the underlying unseen spectrum behaves as a power law. In this paper, I demonstrate a simple and powerful one-to-one relation of the integral upper limit in the two-dimensional power-law parameter space into the spectrum parameter space and use this method to unravel the so-far convoluted question of the sensitivity of astroparticle telescopes.
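
    The core of such a mapping can be made concrete. Assuming a power-law spectrum dN/dE = Φ0 (E/E0)^(-Γ) with Γ > 1, the integral flux above E0 equals Φ0 E0/(Γ - 1), so an integral upper limit N_UL fixes the differential normalization for each assumed index (the numbers below are illustrative, not from the paper):

    ```python
    E0 = 100.0            # GeV: threshold of the integral upper limit
    N_UL = 1e-11          # photons / (cm^2 s) above E0 (hypothetical)

    for gamma in (1.5, 2.0, 2.5, 3.0):
        phi0 = N_UL * (gamma - 1.0) / E0      # differential normalization at E0
        print(f"Gamma={gamma}: phi0={phi0:.2e} photons/(cm^2 s GeV)")
    ```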

  14. Mean-Field Approximation to the Hydrophobic Hydration in the Liquid-Vapor Interface of Water.

    PubMed

    Abe, Kiharu; Sumi, Tomonari; Koga, Kenichiro

    2016-03-03

    A mean-field approximation to the solvation of nonpolar solutes in the liquid-vapor interface of aqueous solutions is proposed. It is first remarked with a numerical illustration that the solvation of a methane-like solute in bulk liquid water is accurately described by the mean-field theory of liquids, the main idea of which is that the probability (Pcav) of finding a cavity in the solvent that can accommodate the solute molecule and the attractive interaction energy (uatt) that the solute would feel if it is inserted in such a cavity are both functions of the solvent density alone. It is then assumed that the basic idea is still valid in the liquid-vapor interface, but Pcav and uatt are separately functions of different coarse-grained local densities, not functions of a common local density. Validity of the assumptions is confirmed for the solvation of the methane-like particle in the interface of model water at temperatures between 253 and 613 K. With the mean-field approximation extended to the inhomogeneous system the local solubility profiles across the interface at various temperatures are calculated from Pcav and uatt obtained at a single temperature. The predicted profiles are in excellent agreement with those obtained by the direct calculation of the excess chemical potential over an interfacial region where the solvent local density varies most rapidly.

  15. Approximate analytic solutions to the NPDD: Short exposure approximations

    NASA Astrophysics Data System (ADS)

    Close, Ciara E.; Sheridan, John T.

    2014-04-01

    There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.

  16. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  17. Validity of the Weizsäcker-Williams approximation and the analysis of beam dump experiments: Production of a new scalar boson

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Sheng; McKeen, David; Miller, Gerald A.

    2017-02-01

    Beam dump experiments have been used to search for new particles with null results interpreted in terms of limits on masses mϕ and coupling constants ɛ . However these limits have been obtained by using approximations [including the Weizsäcker-Williams (WW) approximation] or Monte-Carlo simulations. We display methods, using a new scalar boson as an example, to obtain the cross section and the resulting particle production numbers without using approximations or Monte-Carlo simulations. We show that the approximations cannot be used to obtain accurate values of cross sections. The corresponding exclusion plots differ by substantial amounts when seen on a linear scale. In the event of a discovery, we generate pseudodata (assuming given values of mϕ and ɛ ) in the currently allowed regions of parameter space. The use of approximations to analyze the pseudodata for the future experiments is shown to lead to considerable errors in determining the parameters. Furthermore, a new region of parameter space can be explored without using one of the common approximations, mϕ≫me. Our method can be used as a consistency check for Monte-Carlo simulations.

  18. Approximation methods for combined thermal/structural design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Shore, C. P.

    1979-01-01

    Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
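
    The direct versus reciprocal distinction is easy to see on the textbook case σ(A) = P/A (axial stress versus cross-sectional area), where the reciprocal first-order expansion, linear in 1/A, happens to be exact; a small numeric comparison with illustrative values:

    ```python
    P, A0 = 1000.0, 2.0       # load and nominal cross-sectional area
    f0 = P / A0               # stress at the nominal design
    df = -P / A0**2           # d(sigma)/dA at A0

    for A in (2.2, 2.6, 3.0):
        exact = P / A
        direct = f0 + df * (A - A0)                   # Taylor linear in A
        recip = f0 - df * A0**2 * (1.0/A - 1.0/A0)    # Taylor linear in 1/A
        print(f"A={A}: exact={exact:7.2f} direct={direct:7.2f} reciprocal={recip:7.2f}")
    ```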

  19. Adiabatic approximation and fluctuations in exciton-polariton condensates

    NASA Astrophysics Data System (ADS)

    Bobrovska, Nataliya; Matuszewski, Michał

    2015-07-01

    We study the relation between the models commonly used to describe the dynamics of nonresonantly pumped exciton-polariton condensates, namely the ones described by the complex Ginzburg-Landau equation, and by the open-dissipative Gross-Pitaevskii equation including a separate equation for the reservoir density. In particular, we focus on the validity of the adiabatic approximation and small density fluctuations approximation that allow one to reduce the coupled condensate-reservoir dynamics to a single partial differential equation. We find that the adiabatic approximation consists of three independent analytical conditions that have to be fulfilled simultaneously. By investigating stochastic versions of the two corresponding models, we verify that the breakdown of these approximations can lead to discrepancies in correlation lengths and distributions of fluctuations. Additionally, we consider the phase diffusion and number fluctuations of a condensate in a box, and show that self-consistent description requires treatment beyond the typical Bogoliubov approximation.

  20. Phase field modeling of brittle fracture for enhanced assumed strain shells at large deformations: formulation and finite element implementation

    NASA Astrophysics Data System (ADS)

    Reinoso, J.; Paggi, M.; Linder, C.

    2017-02-01

    Fracture of technological thin-walled components can notably limit the performance of their corresponding engineering systems. With the aim of achieving reliable fracture predictions of thin structures, this work presents a new phase field model of brittle fracture for large deformation analysis of shells relying on a mixed enhanced assumed strain (EAS) formulation. The kinematic description of the shell body is constructed according to the solid shell concept. This enables the use of fully three-dimensional constitutive models for the material. The proposed phase field formulation integrates the use of the (EAS) method to alleviate locking pathologies, especially Poisson thickness and volumetric locking. This technique is further combined with the assumed natural strain method to efficiently derive a locking-free solid shell element. On the computational side, a fully coupled monolithic framework is consistently formulated. Specific details regarding the corresponding finite element formulation and the main aspects associated with its implementation in the general purpose packages FEAP and ABAQUS are addressed. Finally, the applicability of the current strategy is demonstrated through several numerical examples involving different loading conditions, and including linear and nonlinear hyperelastic constitutive models.

  1. A new observation-based fitting method assuming an elliptical CME frontal shape and a variable speed

    NASA Astrophysics Data System (ADS)

    Rollett, T.; Moestl, C.; Isavnin, A.; Boakes, P. D.; Kubicka, M.; Amerstorfer, U. V.

    2015-12-01

    In this study, we present a new method for forecasting arrival times and speeds of coronal mass ejections (CMEs) at any location in the inner heliosphere. This new approach assumes a highly adjustable geometrical shape of the CME front with a variable CME width and a variable curvature of the frontal part, i.e. the assumed geometry is elliptical. An elliptic conversion (ElCon) method is applied to observations from STEREO's heliospheric imagers to convert the angular observations into a unit of radial distance from the Sun. This distance profile of the CME apex is then fitted using the drag-based model (DBM) to capture the deceleration or acceleration CMEs experience during propagation. The outcome of both methods is then utilized as input for the Ellipse Evolution (ElEvo) model, forecasting the shock arrival times and speeds of CMEs at any position in interplanetary space. We introduce the combination of these three methods as the new ElEvoHI method. To demonstrate the applicability of ElEvoHI we present the forecast of 20 CMEs and compare it to the results from other forecasting utilities. Such a forecasting method is going to be useful when STEREO Ahead is again observing the space between the Sun and Earth, or when an L4/L5 space weather mission is in operation.
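
    The drag-based model step can be sketched compactly. Assuming the standard DBM kinematics for a CME decelerating toward the ambient solar-wind speed w (parameter values below are illustrative, not the paper's fits), the arrival time at 1 AU follows from root-finding:

    ```python
    import numpy as np
    from scipy.optimize import brentq

    AU = 1.496e8                   # km
    r0 = 20 * 6.96e5               # launch distance: 20 solar radii, in km
    v0, w = 900.0, 400.0           # initial CME and solar-wind speed, km/s
    gamma = 0.2e-7                 # drag parameter, 1/km

    def r_of_t(t):                 # DBM heliocentric distance for v0 > w
        return r0 + w * t + np.log(1.0 + gamma * (v0 - w) * t) / gamma

    t_arr = brentq(lambda t: r_of_t(t) - AU, 1.0, 10 * 86400.0)
    v_arr = w + (v0 - w) / (1.0 + gamma * (v0 - w) * t_arr)
    print(f"arrival after {t_arr / 3600:.1f} h at {v_arr:.0f} km/s")
    ```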

  2. No Common Opinion on the Common Core

    ERIC Educational Resources Information Center

    Henderson, Michael B.; Peterson, Paul E.; West, Martin R.

    2015-01-01

    According to the three authors of this article, the 2014 "EdNext" poll yields four especially important new findings: (1) Opinion with respect to the Common Core has yet to coalesce. The idea of a common set of standards across the country has wide appeal, and the Common Core itself still commands the support of a majority of the public.…

  3. Generalized stationary phase approximations for mountain waves

    NASA Astrophysics Data System (ADS)

    Knight, H.; Broutman, D.; Eckermann, S. D.

    2016-04-01

    Large altitude asymptotic approximations are derived for vertical displacements due to mountain waves generated by hydrostatic wind flow over arbitrary topography. This leads to new asymptotic analytic expressions for wave-induced vertical displacement for mountains with an elliptical Gaussian shape and with the major axis oriented at any angle relative to the background wind. The motivation is to understand local maxima in vertical displacement amplitude at a given height for elliptical mountains aligned at oblique angles to the wind direction, as identified in Eckermann et al. ["Effects of horizontal geometrical spreading on the parameterization of orographic gravity-wave drag. Part 1: Numerical transform solutions," J. Atmos. Sci. 72, 2330-2347 (2015)]. The standard stationary phase method reproduces one type of local amplitude maximum that migrates downwind with increasing altitude. Another type of local amplitude maximum stays close to the vertical axis over the center of the mountain, and a new generalized stationary phase method is developed to describe this other type of local amplitude maximum and the horizontal variation of wave-induced vertical displacement near the vertical axis of the mountain in the large altitude limit. The new generalized stationary phase method describes the asymptotic behavior of integrals where the asymptotic parameter is raised to two different powers (1/2 and 1) rather than just one power as in the standard stationary phase method. The vertical displacement formulas are initially derived assuming a uniform background wind but are extended to accommodate both vertical shear with a fixed wind direction and vertical variations in the buoyancy frequency.

  4. Multidimensional stochastic approximation Monte Carlo

    NASA Astrophysics Data System (ADS)

    Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, g(E1,E2), where two competing energetic effects are present. We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).
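
    A one-dimensional toy version of the SAMC idea (simplified relative to the full algorithm, with illustrative parameters): estimate log g(E) for a small 1D Ising ring by biasing a Metropolis walk with the running estimate and shrinking the gain γ_t = t0/max(t0, t).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N = 12
    spins = rng.choice([-1, 1], size=N)
    E = -np.sum(spins * np.roll(spins, 1))      # ring energy

    levels = np.arange(-N, N + 1, 4)            # reachable energies of the ring
    idx = {int(e): i for i, e in enumerate(levels)}
    theta = np.zeros(len(levels))               # running estimate of log g(E)

    t0 = 5000
    for t in range(1, 200001):
        i = rng.integers(N)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        # Flat-histogram acceptance: move toward rarely visited energies.
        if np.log(rng.random()) < theta[idx[E]] - theta[idx[E + dE]]:
            spins[i] *= -1
            E += dE
        theta[idx[E]] += t0 / max(t0, t)        # decreasing SAMC gain sequence

    theta -= theta.min()
    print(dict(zip(levels.tolist(), np.round(theta, 2))))  # relative log g(E)
    ```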

  5. Femtolensing: Beyond the semiclassical approximation

    NASA Technical Reports Server (NTRS)

    Ulmer, Andrew; Goodman, Jeremy

    1995-01-01

    Femtolensing is a gravitational lensing effect in which the magnification is a function not only of the positions and sizes of the source and lens, but also of the wavelength of light. Femtolensing is the only known effect of low-mass (10^-13 to 10^-16 solar mass) dark-matter objects and may possibly be detectable in cosmological gamma-ray burst spectra. We present a new and efficient algorithm for femtolensing calculations in general potentials. The physical optics results presented here differ at low frequencies from the semiclassical approximation, in which the flux is attributed to a finite number of mutually coherent images. At higher frequencies, our results agree well with the semiclassical predictions. Applying our method to a point-mass lens with external shear, we find complex events that have structure at both large and small spectral resolution. In this way, we show that femtolensing may be observable for lenses up to 10^-11 solar masses, much larger than previously believed. Additionally, we discuss the possibility of a search for femtolensing of white dwarfs in the Large Magellanic Cloud at optical wavelengths.

  6. A consistent collinear triad approximation for operational wave models

    NASA Astrophysics Data System (ADS)

    Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.

    2016-08-01

    In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.

  7. Improved assumed-stress hybrid shell element with drilling degrees of freedom for linear stress, buckling, and free vibration analyses

    NASA Technical Reports Server (NTRS)

    Rengarajan, Govind; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    An improved four-node quadrilateral assumed-stress hybrid shell element with drilling degrees of freedom is presented. The formulation is based on the Hellinger-Reissner variational principle and the shape functions are formulated directly for the four-node element. The element has 12 membrane degrees of freedom and 12 bending degrees of freedom. It has nine independent stress parameters to describe the membrane stress resultant field and 13 independent stress parameters to describe the moment and transverse shear stress resultant field. The formulation encompasses linear stress, linear buckling, and linear free vibration problems. The element is validated with standard test cases and is shown to be robust. Numerical results are presented for linear stress, buckling, and free vibration analyses.

  8. THOR: A New Higher-Order Closure Assumed PDF Subgrid-Scale Parameterization; Evaluation and Application to Low Cloud Feedbacks

    NASA Astrophysics Data System (ADS)

    Firl, G. J.; Randall, D. A.

    2013-12-01

    The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been

  9. Rates of energy transfer between tryptophans and hemes in hemoglobin, assuming that the heme is a planar oscillator.

    PubMed Central

    Gryczynski, Z; Tenenholz, T; Bucci, E

    1992-01-01

    Using the Förster equations we have estimated the rate of energy transfer from tryptophans to hemes in hemoglobin. Assuming an isotropic distribution of the transition moments of the heme in the plane of the porphyrin, we computed the orientation factors and the consequent transfer rates from the crystallographic coordinates of human oxy- and deoxy-hemoglobin. It appears that the orientation factors do not play a limiting role in regulating the energy transfer and that the rates are controlled almost exclusively by the intrasubunit separations between tryptophans and hemes. In intact hemoglobin tetramers the intrasubunit separations are such as to reduce lifetimes to 5 and 15 ps/ns of tryptophan lifetime. Lifetimes of several hundred picoseconds would be allowed by the intersubunit separations, but intersubunits transfer becomes important only when one heme per tetramer is absent or does not accept transfer. If more than one heme per tetramer is absent lifetimes of more than 1 ns would appear. PMID:1420905
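
    The kinetics behind these statements follow from the Förster rate scaling k_T = (1/τ_D)(R0/r)^6; a back-of-the-envelope sketch with hypothetical values of R0 and the separations (not the paper's numbers) shows how tens-of-ångström separations push lifetimes into the picosecond range:

    ```python
    tau_d = 1.0      # intrinsic tryptophan lifetime in ns (illustrative)
    R0 = 25.0        # Forster radius in Angstrom (hypothetical)

    for r in (12.0, 18.0, 30.0):                # donor-acceptor separations, A
        k_t = (1.0 / tau_d) * (R0 / r) ** 6     # Forster transfer rate, 1/ns
        tau = 1.0 / (1.0 / tau_d + k_t)         # quenched donor lifetime, ns
        print(f"r={r:4.1f} A: k_T={k_t:8.2f}/ns  lifetime={1000 * tau:7.1f} ps")
    ```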

  10. Post-Gaussian approximations in phase ordering kinetics

    NASA Astrophysics Data System (ADS)

    Mazenko, Gene F.

    1994-05-01

    Existing theories for the growth of order in unstable systems have successfully exploited the use of a Gaussian auxiliary field. The limitations imposed on such theories by assuming this field to be Gaussian have recently become clearer. In this paper it is shown how this Gaussian restriction can be removed in order to obtain improved approximations for the scaling properties of such systems. In particular it is shown how the improved theory can explain the recent numerical results of Blundell, Bray, and Sattler [Phys. Rev. E 48, 2476 (1993)] which are in qualitative disagreement with Gaussian theories.

  11. An Examination of New Paradigms for Spline Approximations.

    PubMed

    Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A

    2006-01-01

    Lavery splines are examined in the univariate and bivariate cases. In both instances relaxation based algorithms for approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, the univariate case, however, is investigated in greater detail so as to further the understanding of the bivariate case.

  12. On Statistical Methods for Common Mean and Reference Confidence Intervals in Interlaboratory Comparisons for Temperature

    NASA Astrophysics Data System (ADS)

    Witkovský, Viktor; Wimmer, Gejza; Ďuriš, Stanislav

    2015-08-01

    We consider a problem of constructing the exact and/or approximate coverage intervals for the common mean of several independent distributions. In a metrological context, this problem is closely related to evaluation of the interlaboratory comparison experiments, and in particular, to determination of the reference value (estimate) of a measurand and its uncertainty, or alternatively, to determination of the coverage interval for a measurand at a given level of confidence, based on such comparison data. We present a brief overview of some specific statistical models, methods, and algorithms useful for determination of the common mean and its uncertainty, or alternatively, the proper interval estimator. We illustrate their applicability by a simple simulation study and also by example of interlaboratory comparisons for temperature. In particular, we shall consider methods based on (i) the heteroscedastic common mean fixed effect model, assuming negligible laboratory biases, (ii) the heteroscedastic common mean random effects model with common (unknown) distribution of the laboratory biases, and (iii) the heteroscedastic common mean random effects model with possibly different (known) distributions of the laboratory biases. Finally, we consider a method, recently suggested by Singh et al., for determination of the interval estimator for a common mean based on combining information from independent sources through confidence distributions.
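
    Case (i) above reduces to the inverse-variance weighted (Graybill-Deal) common mean; a minimal sketch with illustrative data:

    ```python
    import numpy as np

    x = np.array([273.160, 273.158, 273.163, 273.161])  # lab results, K
    u = np.array([0.002, 0.004, 0.003, 0.002])          # standard uncertainties, K

    w = 1.0 / u**2                                      # inverse-variance weights
    mean = np.sum(w * x) / np.sum(w)                    # Graybill-Deal estimate
    u_mean = np.sqrt(1.0 / np.sum(w))
    print(f"reference value: {mean:.4f} K, u = {u_mean:.4f} K")
    print(f"approx. 95% interval: [{mean - 1.96 * u_mean:.4f}, {mean + 1.96 * u_mean:.4f}] K")
    ```

    The random effects cases (ii) and (iii) inflate this uncertainty to account for laboratory biases, which is why the simple weighted mean is only the starting point.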

  13. Comparison of the Radiative Two-Flux and Diffusion Approximations

    NASA Technical Reports Server (NTRS)

    Spuckler, Charles M.

    2006-01-01

    Approximate solutions are sometimes used to determine the heat transfer and temperatures in a semitransparent material in which conduction and thermal radiation are acting. A comparison of the Milne-Eddington two-flux approximation and the diffusion approximation for combined conduction and radiation heat transfer in a ceramic material was performed to determine the accuracy of the diffusion solution. A plane gray semitransparent layer without a substrate and a non-gray semitransparent plane layer on an opaque substrate were considered. For the plane gray layer the material is semitransparent for all wavelengths and the scattering and absorption coefficients do not vary with wavelength. For the non-gray plane layer the material is semitransparent with constant absorption and scattering coefficients up to a specified wavelength. At higher wavelengths the non-gray plane layer is assumed to be opaque. The layers are heated on one side and cooled on the other by diffuse radiation and convection. The scattering and absorption coefficients were varied. The error in the diffusion approximation compared to the Milne-Eddington two-flux approximation was obtained as a function of scattering coefficient and absorption coefficient. The percent difference in interface temperatures and heat flux through the layer obtained using the Milne-Eddington two-flux and diffusion approximations are presented as a function of scattering coefficient and absorption coefficient. The largest errors occur for high scattering and low absorption except for the back surface temperature of the plane gray layer where the error is also larger at low scattering and low absorption. It is shown that the accuracy of the diffusion approximation can be improved for some scattering and absorption conditions if a reflectance obtained from a Kubelka-Munk type two-flux theory is used instead of a reflection obtained from the Fresnel equation. The Kubelka-Munk reflectance accounts for surface reflection and…

  14. Calculations of scattered light from rigid polymers by Shifrin and Rayleigh-Debye approximations.

    PubMed Central

    Bishop, M F

    1989-01-01

    We show that the commonly used Rayleigh-Debye method for calculating light scattering can lead to significant errors when used for describing scattering from dilute solutions of long rigid polymers, errors that can be overcome by use of the easily applied Shifrin approximation. In order to show the extent of the discrepancies between the two methods, we have performed calculations at normal incidence both for polarized and unpolarized incident light with the scattering intensity determined as a function of polarization angle and of scattering angle, assuming that the incident light is in a spectral region where the absorption of hemoglobin is small. When the Shifrin method is used, the calculated intensities using either polarized or unpolarized scattered light give information about the alignment of polymers, a feature that is lost in the Rayleigh-Debye approximation because the effect of the asymmetric shape of the scatterer on the incoming polarized electric field is ignored. Using sickle hemoglobin polymers as an example, we have calculated the intensity of light scattering using both approaches and found that, for totally aligned polymers within parallel planes, the difference can be as large as 25%, when the incident electric field is perpendicular to the polymers, for near forward or near backward scattering (0 degrees or 180 degrees scattering angle), but becomes zero as the scattering angle approaches 90 degrees. For randomly oriented polymers within a plane, or for incident unpolarized light for either totally oriented or randomly oriented polymers, the difference between the two results for near forward or near backward scattering is approximately 15%. PMID:2605302

  15. Diffusive approximation for unsteady mud flows with backwater effect

    NASA Astrophysics Data System (ADS)

    Di Cristo, Cristiana; Iervolino, Michele; Vacca, Andrea

    2015-07-01

    The adoption of the Diffusive Wave (DW) instead of the Full Dynamic (FD) model in the analysis of mud flood routing within the shallow-water framework may provide a significant reduction of the computational effort, and the knowledge of the conditions in which this approximation may be employed is therefore important. In this paper, the applicability of the DW approximation of a depth-integrated Herschel-Bulkley model is investigated through linear analysis. Assuming as the initial condition a steady hypocritical decelerated flow, induced by downstream backwater, the propagation characteristics of a small perturbation predicted by the DW and FD models are compared. The results show that the spatial variation on the initial profile may preclude the application of DW model with a prescribed accuracy. Whenever the method is applicable, the rising time of the mud flood must satisfy additional constraints, whose dependence on the flow depth, along with the Froude number and the rheological parameters, is deeply analyzed and discussed.

  16. Approximate likelihood for large irregularly spaced spatial data

    PubMed Central

    Fuentes, Montserrat

    2008-01-01

    Likelihood approaches for large irregularly spaced spatial datasets are often very difficult, if not infeasible, to implement due to computational limitations. Even when we can assume normality, exact calculations of the likelihood for a Gaussian spatial process observed at n locations require O(n^3) operations. We present a version of Whittle's approximation to the Gaussian log likelihood for spatial regular lattices with missing values and for irregularly spaced datasets. This method requires O(n log₂ n) operations and does not involve calculating determinants. We present simulations and theoretical results to show the benefits and the performance of the spatial likelihood approximation method presented here for spatial irregularly spaced datasets and lattices with missing values. We apply these methods to estimate the spatial structure of sea surface temperatures (SST) using satellite data with missing values. PMID:19079638
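
    A one-dimensional sketch of the Whittle approximation (the paper's spatial, missing-data version is more involved): the Gaussian log likelihood is replaced by a sum over Fourier frequencies of log f(ω_k) + I(ω_k)/f(ω_k), with I the periodogram and f the model spectral density, here for an AR(1) process with unit innovation variance.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(4)
    n, phi_true = 2048, 0.6
    x = np.zeros(n)
    for t in range(1, n):                        # simulate an AR(1) series
        x[t] = phi_true * x[t - 1] + rng.standard_normal()

    I = np.abs(np.fft.rfft(x)) ** 2 / n          # periodogram via FFT: O(n log n)
    w = 2 * np.pi * np.arange(len(I)) / n        # Fourier frequencies
    I, w = I[1:], w[1:]                          # drop the zero frequency

    def neg_whittle(phi):                        # negative Whittle log likelihood
        f = 1.0 / (1.0 - 2.0 * phi * np.cos(w) + phi**2)
        return np.sum(np.log(f) + I / f)

    res = minimize_scalar(neg_whittle, bounds=(-0.99, 0.99), method="bounded")
    print("Whittle estimate of phi:", round(res.x, 3))
    ```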

  17. Dynamical observer for a flexible beam via finite element approximations

    NASA Technical Reports Server (NTRS)

    Manitius, Andre; Xia, Hong-Xing

    1994-01-01

    The purpose of this view-graph presentation is a computational investigation of the closed-loop output feedback control of a Euler-Bernoulli beam based on finite element approximation. The observer is part of the classical observer plus state feedback control, but it is finite-dimensional. In the theoretical work on the subject it is assumed (and sometimes proved) that increasing the number of finite elements will improve accuracy of the control. In applications, this may be difficult to achieve because of numerical problems. The main difficulty in computing the observer and simulating its work is the presence of high frequency eigenvalues in the finite-element model and poor numerical conditioning of some of the system matrices (e.g. poor observability properties) when the dimension of the approximating system increases. This work dealt with some of these difficulties.

  18. On the convergence of difference approximations to scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Osher, Stanley; Tadmor, Eitan

    1988-01-01

    A unified treatment is given for time-explicit, two-level, second-order-resolution (SOR), total-variation-diminishing (TVD) approximations to scalar conservation laws. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced to obtain results in terms of the latter. The existence of a cell entropy inequality is discussed, and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first-order accurate in general. Convergence for TVD-SOR schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.
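
    To make the conservation-form setting concrete, a minimal second-order, minmod-limited (MUSCL-type) scheme for Burgers' equation u_t + (u^2/2)_x = 0 is sketched below; forward Euler time stepping is used for brevity, whereas production TVD schemes pair the limiter with a TVD Runge-Kutta step. This is an illustration of the class of schemes discussed, not the paper's analysis.

    ```python
    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0.0,
                        np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    n, cfl, t_end = 200, 0.4, 1.0
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dx = x[1] - x[0]
    u = 1.0 + 0.5 * np.sin(x)                  # smooth periodic initial data

    t = 0.0
    while t < t_end:
        dt = min(cfl * dx / np.max(np.abs(u)), t_end - t)
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
        uL = u + 0.5 * s                        # left state at face i+1/2
        uR = np.roll(u - 0.5 * s, -1)           # right state at face i+1/2
        a = np.maximum(np.abs(uL), np.abs(uR))
        F = 0.25 * (uL**2 + uR**2) - 0.5 * a * (uR - uL)    # Rusanov flux
        u -= dt / dx * (F - np.roll(F, 1))      # conservation-form update
        t += dt

    print("discrete mass (conserved):", np.sum(u) * dx)
    ```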

  19. Energy flow: image correspondence approximation for motion analysis

    NASA Astrophysics Data System (ADS)

    Wang, Liangliang; Li, Ruifeng; Fang, Yajun

    2016-04-01

    We propose a correspondence approximation approach between temporally adjacent frames for motion analysis. First, an energy map is established to represent image spatial features on multiple scales using Gaussian convolution. On this basis, energy flow at each layer is estimated using Gauss-Seidel iteration according to the energy invariance constraint. More specifically, at the core of the energy invariance constraint is the "energy conservation law", which assumes that the spatial energy distribution of an image does not change significantly with time. Finally, the energy flow field at different layers is reconstructed by considering different smoothness degrees. Due to its multiresolution origin and energy-based implementation, our algorithm is able to quickly address correspondence searching issues in spite of background noise or illumination variation. We apply our correspondence approximation method to motion analysis, and experimental results demonstrate its applicability.

  20. On the convergence of difference approximations to scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Osher, S.; Tadmor, E.

    1985-01-01

    A unified treatment of explicit in time, two-level, second-order-resolution, total-variation-diminishing approximations to scalar conservation laws is presented. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced and results in terms of the latter are obtained. The existence of a cell entropy inequality is discussed and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first-order accurate in general. Convergence for total-variation-diminishing, second-order-resolution schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.

  1. Shallow ice approximation, second order shallow ice approximation, and full Stokes models: A discussion of their roles in palaeo-ice sheet modelling and development

    NASA Astrophysics Data System (ADS)

    Kirchner, N.; Ahlkrona, J.; Gowan, E. J.; Lötstedt, P.; Lea, J. M.; Noormets, R.; von Sydow, L.; Dowdeswell, J. A.; Benham, T.

    2016-09-01

    Full Stokes ice sheet models provide the most accurate description of ice sheet flow, and can therefore be used to reduce existing uncertainties in predicting the contribution of ice sheets to future sea level rise on centennial time-scales. The level of accuracy at which millennial time-scale palaeo-ice sheet simulations resolve ice sheet flow lags the standards set by Full Stokes models, especially when Shallow Ice Approximation (SIA) models are used. Most models used in palaeo-ice sheet modelling were developed at a time when computer power was very limited, and rely on several assumptions. At the time there was no means of verifying the assumptions other than by mathematical arguments. However, with the computer power and refined Full Stokes models available today, it is possible to test these assumptions numerically. In this paper, we review Ahlkrona et al. (2013a), where such tests were performed and inaccuracies in commonly used arguments were found. We also summarize Ahlkrona et al. (2013b), where the implications of the inaccurate assumptions are analyzed for two palaeo-models - the SIA and the SOSIA. We review these works without resorting to mathematical detail, in order to make them accessible to a wider audience with a general interest in palaeo-ice sheet modelling. Specifically, we discuss two implications of relevance for palaeo-ice sheet modelling. First, classical SIA models are less accurate than assumed in their original derivation. Secondly, and contrary to previous recommendations, the SOSIA model is ruled out as a practicable tool for palaeo-ice sheet simulations. We conclude with an outlook concerning the new Ice Sheet Coupled Approximation Level (ISCAL) method presented in Ahlkrona et al. (2016), that has the potential to match the accuracy standards of full Stokes models on palaeo-timescales of tens of thousands of years, and to become an alternative to hybrid models currently used in palaeo-ice sheet modelling. The method is applied to an ice…

  2. Cosmic shear covariance: the log-normal approximation

    NASA Astrophysics Data System (ADS)

    Hilbert, S.; Hartlap, J.; Schneider, P.

    2011-12-01

    Context. Accurate estimates of the errors on the cosmological parameters inferred from cosmic shear surveys require accurate estimates of the covariance of the cosmic shear correlation functions. Aims: We seek approximations to the cosmic shear covariance that are as easy to use as the common approximations based on normal (Gaussian) statistics, but yield more accurate covariance matrices and parameter errors. Methods: We derive expressions for the cosmic shear covariance under the assumption that the underlying convergence field follows log-normal statistics. We also derive a simplified version of this log-normal approximation by only retaining the most important terms beyond normal statistics. We use numerical simulations of weak lensing to study how well the normal, log-normal, and simplified log-normal approximations as well as empirical corrections to the normal approximation proposed in the literature reproduce shear covariances for cosmic shear surveys. We also investigate the resulting confidence regions for cosmological parameters inferred from such surveys. Results: We find that the normal approximation substantially underestimates the cosmic shear covariances and the inferred parameter confidence regions, in particular for surveys with small fields of view and large galaxy densities, but also for very wide surveys. In contrast, the log-normal approximation yields more realistic covariances and confidence regions, but also requires evaluating slightly more complicated expressions. However, the simplified log-normal approximation, although as simple as the normal approximation, yields confidence regions that are almost as accurate as those obtained from the log-normal approximation. The empirical corrections to the normal approximation do not yield more accurate covariances and confidence regions than the (simplified) log-normal approximation. Moreover, they fail to produce positive-semidefinite data covariance matrices in certain cases, rendering them…

  3. Common NICU Equipment

    MedlinePlus

    What equipment is commonly used in the NICU? Providers use ...

  4. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Robert; Novack, Steven

    2015-01-01

    Space Launch System (SLS) Agenda: Objective; Key Definitions; Calculating Common Cause; Examples; Defense against Common Cause; Impact of varied Common Cause Failure (CCF) and abortability; Response Surface for various CCF Beta; Takeaways.

  5. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2012-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
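
    A compact sketch of the estimator described above (details such as the adaptive, entry-specific threshold follow the paper only loosely): remove K principal factors from the sample covariance, soft-threshold the residual covariance entrywise, and recombine. Data and the threshold rule of thumb are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    T, p, K = 500, 60, 3
    B = rng.standard_normal((p, K))                  # factor loadings
    F = rng.standard_normal((T, K))                  # common factors
    X = F @ B.T + 0.5 * rng.standard_normal((T, p))  # observed panel

    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)                   # ascending eigenvalues
    L = vecs[:, -K:] * np.sqrt(vals[-K:])            # top-K factor component
    low_rank = L @ L.T
    R = S - low_rank                                 # residual covariance

    tau = 1.5 * np.sqrt(np.log(p) / T)               # threshold level (rule of thumb)
    off = R - np.diag(np.diag(R))
    R_thr = np.sign(off) * np.maximum(np.abs(off) - tau, 0.0) + np.diag(np.diag(R))

    Sigma_hat = low_rank + R_thr                     # final covariance estimate
    print("min eigenvalue of estimate:", np.linalg.eigvalsh(Sigma_hat).min().round(3))
    ```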

  6. HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.

    PubMed

    Fan, Jianqing; Liao, Yuan; Mincheva, Martina

    2011-01-01

    The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.

  7. Controlling a transfer trajectory with realistic impulses assuming perturbations in the Sun-Earth-Moon Quasi-Bicircular Problem

    NASA Astrophysics Data System (ADS)

    Leiva, A. M.; Briozzo, C. B.

    In a previous work we successfully implemented a control algorithm to stabilize unstable periodic orbits in the Sun-Earth-Moon Quasi-Bicircular Problem (QBCP). Applying the same techniques, in this work we stabilize an unstable trajectory performing fast transfers between the Earth and the Moon in a dynamical system similar to the QBCP but incorporating the gravitational perturbation of the planets Mercury, Venus, Mars, Jupiter, Saturn, Uranus, and Neptune, assumed to move on circular coplanar heliocentric orbits. In the control stage we used as a reference trajectory an unstable periodic orbit from the unperturbed QBCP. We performed 400 numerical experiments integrating the trajectories over time spans of ~40 years, taking for each one random values for the initial positions of the planets. In all cases the control impulses applied were larger than 20 cm/s, consistent with realistic implementations. The minimal and maximal yearly mean consumptions were ~10 m/s and ~71 m/s, respectively. FULL TEXT IN SPANISH

  8. Importance of the habitat choice behavior assumed when modeling the effects of food and temperature on fish populations

    USGS Publications Warehouse

    Wildhaber, Mark L.; Lamberson, Peter J.

    2004-01-01

    Various mechanisms of habitat choice in fishes based on food and/or temperature have been proposed: optimal foraging for food alone; behavioral thermoregulation for temperature alone; and behavioral energetics and discounted matching for food and temperature combined. Along with development of habitat choice mechanisms, there has been a major push to develop and apply to fish populations individual-based models that incorporate various forms of these mechanisms. However, it is not known how the wide variation in observed and hypothesized mechanisms of fish habitat choice could alter fish population predictions (e.g. growth, size distributions, etc.). We used spatially explicit, individual-based modeling to compare predicted fish populations using different submodels of patch choice behavior under various food and temperature distributions. We compared predicted growth, temperature experience, food consumption, and final spatial distribution using the different models. Our results demonstrated that the habitat choice mechanism assumed in fish population modeling simulations was critical to predictions of fish distribution and growth rates. Hence, resource managers who use modeling results to predict fish population trends should be very aware of and understand the underlying patch choice mechanisms used in their models to assure that those mechanisms correctly represent the fish populations being modeled.

  9. Regionalism and the Defense of Southeast Asia: An Analysis of ASEAN’s Potential to Assume a Security Dimension.

    DTIC Science & Technology

    2014-09-26

    systems. Potential sources of discord must be minimized and sacrifices must inevitably be made by individual nations in support of more important common... be completed; however, the potential for irrigation, navigation, and hydroelectric power offered by the Mekong River, which flows for 2,600 miles...

  10. Approximate analytic solutions to coupled nonlinear Dirac equations

    NASA Astrophysics Data System (ADS)

    Khare, Avinash; Cooper, Fred; Saxena, Avadh

    2017-03-01

    We consider the coupled nonlinear Dirac equations (NLDEs) in 1+1 dimensions with scalar-scalar self-interactions g_1^2/2 (ψ̄ψ)^2 + g_2^2/2 (ϕ̄ϕ)^2 + g_3^2 (ψ̄ψ)(ϕ̄ϕ) as well as vector-vector interactions of the form g_1^2/2 (ψ̄γ_μψ)(ψ̄γ^μψ) + g_2^2/2 (ϕ̄γ_μϕ)(ϕ̄γ^μϕ) + g_3^2 (ψ̄γ_μψ)(ϕ̄γ^μϕ). Writing the two components of the assumed rest frame solution of the coupled NLDE equations in the form ψ = e^{-iω_1 t} {R_1 cos θ, R_1 sin θ}, ϕ = e^{-iω_2 t} {R_2 cos η, R_2 sin η}, and assuming that θ(x), η(x) have the same functional form they had when g_3 = 0, which is an approximation consistent with the conservation laws, we then find approximate analytic solutions for R_i(x) which are valid for small values of g_3^2/g_2^2 and g_3^2/g_1^2. In the nonrelativistic limit we show that both of these coupled models go over to the same coupled nonlinear Schrödinger equation for which we obtain two exact pulse solutions vanishing at x → ±∞.

  11. Approximate analytic solutions to coupled nonlinear Dirac equations

    DOE PAGES

    Khare, Avinash; Cooper, Fred; Saxena, Avadh

    2017-01-30

    Here, we consider the coupled nonlinear Dirac equations (NLDEs) in 1+1 dimensions with scalar-scalar self-interactions g_1^2/2 (ψ̄ψ)^2 + g_2^2/2 (ϕ̄ϕ)^2 + g_3^2 (ψ̄ψ)(ϕ̄ϕ) as well as vector-vector interactions g_1^2/2 (ψ̄γ_μψ)(ψ̄γ^μψ) + g_2^2/2 (ϕ̄γ_μϕ)(ϕ̄γ^μϕ) + g_3^2 (ψ̄γ_μψ)(ϕ̄γ^μϕ). Writing the two components of the assumed rest frame solution of the coupled NLDE equations in the form ψ = e^{-iω_1 t} {R_1 cos θ, R_1 sin θ}, ϕ = e^{-iω_2 t} {R_2 cos η, R_2 sin η}, and assuming that θ(x), η(x) have the same functional form they had when g_3 = 0, which is an approximation consistent with the conservation laws, we then find approximate analytic solutions for R_i(x) which are valid for small values of g_3^2/g_2^2 and g_3^2/g_1^2. In the nonrelativistic limit we show that both of these coupled models go over to the same coupled nonlinear Schrödinger equation for which we obtain two exact pulse solutions vanishing at x → ±∞.

  12. Examining the exobase approximation: DSMC models of Titan's upper atmosphere

    NASA Astrophysics Data System (ADS)

    Tucker, Orenthal J.; Waalkes, William; Tenishev, Valeriy M.; Johnson, Robert E.; Bieler, Andre; Combi, Michael R.; Nagy, Andrew F.

    2016-07-01

    Chamberlain ([1963] Planet. Space Sci., 11, 901-960) described the use of the exobase layer to determine escape from planetary atmospheres, below which it is assumed that molecular collisions maintain thermal equilibrium and above which collisions are deemed negligible. De La Haye et al. ([2007] Icarus., 191, 236-250) used this approximation to extract the energy deposition and non-thermal escape rates for Titan's atmosphere by fitting the Cassini Ion Neutral Mass Spectrometer (INMS) density data. De La Haye et al. assumed the gas distributions were composed of an enhanced population of super-thermal molecules (E >> kT) that could be described by a kappa energy distribution function (EDF), and they fit the data using the Liouville theorem. Here we fitted the data again, but we used the conventional form of the kappa EDF. The extracted kappa EDFs were then used with the Direct Simulation Monte Carlo (DSMC) technique (Bird [1994] Molecular Gas Dynamics and the Direct Simulation of Gas Flows) to evaluate the effect of collisions on the exospheric profiles. The INMS density data can be fit reasonably well with thermal and various non-thermal EDFs. However, the extracted energy deposition and escape rates are shown to depend significantly on the assumed exobase altitude, and the usefulness of such fits without directly modeling the collisions is unclear. Our DSMC results indicate that the kappa EDFs used in the Chamberlain approximation can lead to errors in determining the atmospheric temperature profiles and escape rates. Gas kinetic simulations are needed to accurately model measured exospheric density profiles, and to determine the altitude ranges where the Liouville method might be applicable.
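
    The two ingredients being compared can be sketched directly: a Maxwell-Boltzmann energy distribution versus a kappa EDF of the conventional form f(E) ∝ √E (1 + E/(κkT))^(-(κ+1)), with the suprathermal tail controlling the fraction of molecules above a given escape energy. The values below are illustrative, not Titan-specific fits.

    ```python
    import numpy as np

    kT, kappa = 1.0, 3.0
    E = np.linspace(0.0, 30.0 * kT, 3000)
    dE = E[1] - E[0]

    f_mb = np.sqrt(E) * np.exp(-E / kT)                                # Maxwell-Boltzmann
    f_kap = np.sqrt(E) * (1.0 + E / (kappa * kT)) ** (-(kappa + 1.0))  # kappa EDF
    f_mb /= f_mb.sum() * dE                                            # normalize on grid
    f_kap /= f_kap.sum() * dE

    E_esc = 10.0 * kT               # hypothetical escape energy
    tail = E >= E_esc
    print(f"fraction above E_esc: MB {f_mb[tail].sum() * dE:.2e}, "
          f"kappa {f_kap[tail].sum() * dE:.2e}")
    ```

    The orders-of-magnitude gap between the two tail fractions is why the assumed EDF so strongly affects the escape rates extracted with the exobase (Liouville) method.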

  13. Common Career Technical Core: Common Standards, Common Vision for CTE

    ERIC Educational Resources Information Center

    Green, Kimberly

    2012-01-01

    This article provides an overview of the National Association of State Directors of Career Technical Education Consortium's (NASDCTEc) Common Career Technical Core (CCTC), a state-led initiative that was created to ensure that career and technical education (CTE) programs are consistent and high quality across the United States. Forty-two states,…

  14. Weight-Bearing Ankle Dorsiflexion Range of Motion—Can Side-to-Side Symmetry Be Assumed?

    PubMed Central

    Rabin, Alon; Kozol, Zvi; Spitzer, Elad; Finestone, Aharon S.

    2015-01-01

    Context: In clinical practice, the range of motion (ROM) of the noninvolved side often serves as the reference for comparison with the injured side. Previous investigations of non–weight-bearing (NWB) ankle dorsiflexion (DF) ROM measurements have indicated bilateral symmetry for the most part. Less is known about ankle DF measured under weight-bearing (WB) conditions. Because WB and NWB ankle DF are not strongly correlated, there is a need to determine whether WB ankle DF is also symmetrical in a healthy population. Objective: To determine whether WB ankle DF is bilaterally symmetrical. A secondary goal was to further explore the correlation between WB and NWB ankle DF ROM. Design: Cross-sectional study. Setting: Training facility of the Israeli Defense Forces. Patients or Other Participants: A total of 64 healthy males (age = 19.6 ± 1.0 years, height = 175.0 ± 6.4 cm, and body mass = 71.4 ± 7.7 kg). Main Outcome Measure(s): Dorsiflexion ROM in WB was measured with an inclinometer and DF ROM in NWB was measured with a universal goniometer. All measurements were taken bilaterally by a single examiner. Results: Weight-bearing ankle DF was greater on the nondominant side compared with the dominant side (P < .001). Non–weight-bearing ankle DF was not different between sides (P = .64). The correlation between WB and NWB DF was moderate, with the NWB DF measurement accounting for 30% to 37% of the variance of the WB measurement. Conclusions: Weight-bearing ankle DF ROM should not be assumed to be bilaterally symmetrical. These findings suggest that side-to-side differences in WB DF may need to be interpreted while considering which side is dominant. The difference in bilateral symmetry between the WB and NWB measurements, as well as the only moderate level of correlation between them, suggests that both measurements should be performed routinely. PMID:25329350

  15. The Canonical Luminous Blue Variable AG Car and Its Neighbor Hen 3-519 are Much Closer than Previously Assumed

    NASA Astrophysics Data System (ADS)

    Smith, Nathan; Stassun, Keivan G.

    2017-03-01

    The strong mass loss of Luminous Blue Variables (LBVs) is thought to play a critical role in massive-star evolution, but their place in the evolutionary sequence remains debated. A key to understanding their peculiar instability is their high observed luminosities, which often depend on uncertain distances. Here we report direct distances and space motions of four canonical Milky Way LBVs—AG Car, HR Car, HD 168607, and (candidate) Hen 3-519—from the Gaia first data release. Whereas the distances of HR Car and HD 168607 are consistent with previous literature estimates within the considerable uncertainties, Hen 3-519 and AG Car, both at ∼2 kpc, are much closer than the 6–8 kpc distances previously assumed. As a result, Hen 3-519 moves far from the locus of LBVs on the Hertzsprung–Russell diagram, making it a much less luminous object. For AG Car, considered a defining example of a classical LBV, its lower luminosity would also move it off the S Dor instability strip. Lower luminosities allow both AG Car and Hen 3-519 to have passed through a previous red supergiant phase, lower the mass estimates for their shell nebulae, and imply that binary evolution is needed to account for their peculiarities. These results may also impact our understanding of LBVs as potential supernova progenitors and their isolated environments. Improved distances will be provided in the Gaia second data release, which will include additional LBVs. AG Car and Hen 3-519 hint that this new information may alter our traditional view of LBVs.

  16. Signal Approximation with a Wavelet Neural Network

    DTIC Science & Technology

    1992-12-01

    specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the... accurately approximated with a WNN trained with irregularly sampled data. Keywords: signal approximation, wavelet neural network.

  17. Rough Set Approximations in Formal Concept Analysis

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Daisuke; Murata, Atsuo; Li, Guo-Dong; Nagai, Masatake

    Conventional set approximations are based on a set of attributes; however, these approximations cannot relate an object to the corresponding attribute. In this study, a new model for set approximation based on individual attributes is proposed for interval-valued data. Defining an indiscernibility relation is omitted since each attribute value itself has a set of values. Two types of approximations, single- and multi-attribute approximations, are presented. A multi-attribute approximation has two solutions: a maximum and a minimum solution. A maximum solution is a set of objects that satisfy the condition of approximation for at least one attribute. A minimum solution is a set of objects that satisfy the condition for all attributes. The proposed set approximation is helpful in finding the features of objects relating to condition attributes when interval-valued data are given. The proposed model contributes to feature extraction in interval-valued information systems.
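
    As a rough illustration of the maximum and minimum solutions described above, here is a minimal Python sketch. The membership rule (containment of an interval value in a target interval) and all names are assumptions for illustration, not the paper's definitions.

        # Multi-attribute approximation sketch: an object "satisfies the
        # condition" for an attribute when its interval value lies inside an
        # assumed target interval.
        def contains(target, value):
            return target[0] <= value[0] and value[1] <= target[1]

        def single_attribute(objects, attr, target):
            # Objects whose interval for one attribute satisfies the condition.
            return {name for name, attrs in objects.items()
                    if contains(target, attrs[attr])}

        def multi_attribute(objects, targets):
            # Maximum solution: condition holds for at least one attribute.
            # Minimum solution: condition holds for all attributes.
            sets = [single_attribute(objects, a, t) for a, t in targets.items()]
            return set().union(*sets), set.intersection(*sets)

        objects = {"x1": {"a": (1, 3), "b": (2, 5)},
                   "x2": {"a": (0, 9), "b": (3, 4)}}
        print(multi_attribute(objects, {"a": (0, 4), "b": (2, 6)}))
        # -> ({'x1', 'x2'}, {'x1'})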

  18. An approximation technique for jet impingement flow

    SciTech Connect

    Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.

    2015-03-10

    The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.

  19. Energy conservation - A test for scattering approximations

    NASA Technical Reports Server (NTRS)

    Acquista, C.; Holland, A. C.

    1980-01-01

    The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for the ensembles of nonspherical particles reveals additional problems with that method.

  20. Gutzwiller approximation in strongly correlated electron systems

    NASA Astrophysics Data System (ADS)

    Li, Chunhua

    Gutzwiller wave function is an important theoretical technique for treating local electron-electron correlations nonperturbatively in condensed matter and materials physics. It is concerned with calculating variationally the ground state wave function by projecting out multi-occupation configurations that are energetically costly. The projection can be carried out analytically in the Gutzwiller approximation, which offers an approximate way of calculating expectation values in the Gutzwiller projected wave function. This approach has proven to be very successful in strongly correlated systems such as the high temperature cuprate superconductors, the sodium cobaltates, and the heavy fermion compounds. In recent years, it has become increasingly evident that strongly correlated systems have a strong propensity towards forming inhomogeneous electronic states with spatially periodic superstructural modulations. A good example is the commonly observed stripes and checkerboard states in high-Tc superconductors under a variety of conditions where superconductivity is weakened. There exists currently a real challenge and demand for new theoretical ideas and approaches that treat strongly correlated inhomogeneous electronic states, which is the subject matter of this thesis. This thesis contains four parts. In the first part of the thesis, the Gutzwiller approach is formulated in the grand canonical ensemble where, for the first time, a spatially (and spin) unrestricted Gutzwiller approximation (SUGA) is developed for studying inhomogeneous (both ordered and disordered) quantum electronic states in strongly correlated electron systems. The second part of the thesis applies the SUGA to the t-J model for doped Mott insulators, which led to the discovery of checkerboard-like inhomogeneous electronic states competing with d-wave superconductivity, consistent with experimental observations made on several families of high-Tc superconductors. In the third part of the thesis, new

  1. Compressive Imaging via Approximate Message Passing

    DTIC Science & Technology

    2015-09-04

    We propose novel compressive imaging algorithms that employ approximate message passing (AMP), which is an iterative signal estimation algorithm that... Final Report: Compressive Imaging via Approximate Message Passing. Keywords: approximate message passing, compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction.

  2. Fractal Trigonometric Polynomials for Restricted Range Approximation

    NASA Astrophysics Data System (ADS)

    Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.

    2016-05-01

    One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.

  3. Confidence and coverage for Bland-Altman limits of agreement and their approximate confidence intervals.

    PubMed

    Carkeet, Andrew; Goh, Yee Teng

    2016-09-01

    Bland and Altman described approximate methods in 1986 and 1999 for calculating confidence limits for their 95% limits of agreement, approximations which assume large subject numbers. In this paper, these approximations are compared with exact confidence intervals calculated using two-sided tolerance intervals for a normal distribution. The approximations are compared in terms of the tolerance factors themselves but also in terms of the exact confidence limits and the exact limits of agreement coverage corresponding to the approximate confidence interval methods. Using similar methods, the 50th percentile of the tolerance interval is compared with the k values of 1.96 and 2, which Bland and Altman used to define limits of agreement (i.e. $\bar{d} \pm 1.96 S_d$ and $\bar{d} \pm 2 S_d$). For limits of agreement outer confidence intervals, Bland and Altman's approximations are too permissive for sample sizes <40 (1999 approximation) and <76 (1986 approximation). For inner confidence limits the approximations are poorer, being permissive for sample sizes of <490 (1986 approximation) and all practical sample sizes (1999 approximation). Exact confidence intervals for 95% limits of agreement, based on two-sided tolerance factors, can be calculated easily based on tables and should be used in preference to the approximate methods, especially for small sample sizes.
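
    A minimal Python sketch of the approximate methods under discussion (the paper's exact tolerance-interval calculation is not reproduced; the sample differences are illustrative):

        # 95% limits of agreement with the approximate standard errors of
        # Bland & Altman; confidence intervals follow as limit +/- t * SE.
        import math
        from statistics import mean, stdev

        def limits_of_agreement(diffs, z=1.96):
            n, d_bar, s = len(diffs), mean(diffs), stdev(diffs)
            loa = (d_bar - z * s, d_bar + z * s)
            se_1999 = s * math.sqrt(1.0 / n + z * z / (2.0 * (n - 1)))  # 1999 form
            se_1986 = s * math.sqrt(3.0 / n)                            # 1986 form
            return loa, se_1999, se_1986

        diffs = [0.3, -0.1, 0.4, 0.0, 0.2, -0.3, 0.5, 0.1]
        print(limits_of_agreement(diffs))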

  4. Gene stacking strategies with doubled haploids derived from biparental crosses: theory and simulations assuming a finite number of loci.

    PubMed

    Melchinger, Albrecht E; Technow, Frank; Dhillon, Baldev S

    2011-12-01

    Recent progress in genotyping and doubled haploid (DH) techniques has created new opportunities for development of improved selection methods in numerous crops. Assuming a finite number of unlinked loci (ℓ) and a given total number (n) of individuals to be genotyped, we compared, by theory and simulations, three methods of marker-assisted selection (MAS) for gene stacking in DH lines derived from biparental crosses: (1) MAS for high values of the marker score (T, corresponding to the total number of target alleles) in the F(2) generation and subsequently among DH lines derived from the selected F(2) individual (Method 1), (2) MAS for augmented F(2) enrichment and subsequently for T among DH lines from the best carrier F(2) individual (Method 2), and (3) MAS for T among DH lines derived from the F(1) generation (Method 3). Our objectives were to (a) determine the optimum allocation of resources to the F(2) ([Formula: see text]) and DH generations [Formula: see text] for Methods 1 and 2 by simulations, (b) compare the efficiency of all three methods for gene stacking by simulations, and (c) develop theory to explain the general effect of selection on the segregation variance and interpret our simulation results. By theory, we proved that for smaller values of ℓ, the segregation variance of T among DH lines derived from F(2) individuals, selected for high values of T, can be much smaller than expected in the absence of selection. This explained our simulation results, showing that for Method 1, it is best to genotype more F(2) individuals than DH lines ([Formula: see text]), whereas under Method 2, the optimal ratio [Formula: see text] was close to 0.5. However, for ratios deviating moderately from the optimum, the mean [Formula: see text] of T in the finally selected DH line ([Formula: see text]) was hardly reduced. Method 3 always had the lowest mean [Formula: see text] of [Formula: see text] except for small numbers of loci (ℓ = 4) and is favorable only if

  5. An Approximation to the True Ability Distribution in the Binomial Error Model and Applications. Research Memorandum 79-5.

    ERIC Educational Resources Information Center

    Huynh, Huynh; Mandeville, Garrett K.

    Assuming that the density p of the true ability theta in the binomial test score model is continuous in the closed interval (0, 1), a Bernstein polynomial can be used to uniformly approximate p. Then via quadratic programming techniques, least-square estimates may be obtained for the coefficients defining the polynomial. The approximation, in turn…
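
    For context, a short Python sketch of the Bernstein polynomial step (the quadratic-programming fit of the coefficients is not shown; the example density is illustrative):

        # Degree-n Bernstein approximant of a continuous p on [0, 1]:
        # B_n(p)(x) = sum_k p(k/n) C(n,k) x^k (1-x)^(n-k)
        from math import comb

        def bernstein(p, n):
            coeffs = [p(k / n) for k in range(n + 1)]
            return lambda x: sum(c * comb(n, k) * x**k * (1 - x)**(n - k)
                                 for k, c in enumerate(coeffs))

        p = lambda t: 6 * t * (1 - t)   # a Beta(2,2)-shaped density
        B10 = bernstein(p, 10)
        print(B10(0.5), p(0.5))         # 1.35 vs 1.5: convergence is uniform but slow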

  6. An Approximate Approach to Automatic Kernel Selection.

    PubMed

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  7. The exact solution of shear-lag problems in flat panels and box beams assumed rigid in the transverse direction

    NASA Technical Reports Server (NTRS)

    Hildebrand, Francis B

    1943-01-01

    A mathematical procedure is herein developed for obtaining exact solutions of shear-lag problems in flat panels and box beams: the method is based on the assumption that the amount of stretching of the sheets in the direction perpendicular to the direction of essential normal stresses is negligible. Explicit solutions, including the treatment of cut-outs, are given for several cases and numerical results are presented in graphic and tabular form. The general theory is presented in a form from which further solutions can be readily obtained. The extension of the theory to cover certain cases of non-uniform cross section is indicated. Although the solutions are obtained in terms of infinite series, the present developments differ from those previously given in that, in practical cases, the series usually converge so rapidly that sufficient accuracy is afforded by a small number of terms. Comparisons are made in several cases between the present results and the corresponding solutions obtained by approximate procedures devised by Reissner and by Kuhn and Chiarito.

  8. Approximate Formula for the Vertical Asymptote of Projectile Motion in Midair

    ERIC Educational Resources Information Center

    Chudinov, Peter Sergey

    2010-01-01

    The classic problem of the motion of a point mass (projectile) thrown at an angle to the horizon is reviewed. The air drag force is taken into account with the drag factor assumed to be constant. An analytical approach is used for the investigation. An approximate formula is obtained for one of the characteristics of the motion--the vertical…

  9. Jacobian transformed and detailed balance approximations for photon induced scattering

    NASA Astrophysics Data System (ADS)

    Wienke, B. R.; Budge, K. G.; Chang, J. H.; Dahl, J. A.; Hungerford, A. L.

    2012-01-01

    Photon emission and scattering are enhanced by the number of photons in the final state, and the photon transport equation reflects this in scattering-emission kernels and source terms. This is often a complication in both theoretical and numerical analyses, requiring approximations and assumptions about background and material temperatures, incident and exiting photon energies, local thermodynamic equilibrium, plus other related aspects of photon scattering and emission. We review earlier schemes parameterizing photon scattering-emission processes, and suggest two alternative schemes. One links the product of photon and electron distributions in the final state to the product in the initial state by Jacobian transformation of kinematical variables (energy and angle), and the other links integrands of scattering kernels in a detailed balance requirement for overall (integrated) induced effects. Compton and inverse Compton differential scattering cross sections are detailed in appropriate limits, numerical integrations are performed over the induced scattering kernel, and for tabulation, induced scattering terms are incorporated into effective cross sections for comparisons and numerical estimates. Relativistic electron distributions are assumed for calculations. Both Wien and Planckian distributions are contrasted for impact on induced scattering as LTE limit points. We find that both transformed and balanced approximations suggest larger induced scattering effects at high photon energies and low electron temperatures, and smaller effects in the opposite limits, compared to previous analyses, with 10-20% increases in effective cross sections. We also note that both approximations can be simply implemented within existing transport modules or opacity processors as an additional term in the effective scattering cross section. Applications and comparisons include effective cross sections, kernel approximations, and impacts on radiative transport solutions in 1D

  10. Common Tests for Arrhythmia

    MedlinePlus

    Updated: Dec 21, 2016. Several tests can help ... Common tests for arrhythmia: Holter monitor (continuous ambulatory electrocardiographic monitor). Suspected arrhythmias ...

  11. Finding Common Ground with the Common Core

    ERIC Educational Resources Information Center

    Moisan, Heidi

    2015-01-01

    This article examines the journey of museum educators at the Chicago History Museum in understanding the Common Core State Standards and implementing them in our work with the school audience. The process raised questions about our teaching philosophy and our responsibility to our audience. Working with colleagues inside and outside of our…

  12. How Common Is the Common Core?

    ERIC Educational Resources Information Center

    Thomas, Amande; Edson, Alden J.

    2014-01-01

    Since the introduction of the Common Core State Standards for Mathematics (CCSSM) in 2010, stakeholders in adopting states have engaged in a variety of activities to understand CCSSM standards and transition from previous state standards. These efforts include research, professional development, assessment and modification of curriculum resources,…

  13. Establishing Conventional Communication Systems: Is Common Knowledge Necessary?

    ERIC Educational Resources Information Center

    Barr, Dale J.

    2004-01-01

    How do communities establish shared communication systems? The Common Knowledge view assumes that symbolic conventions develop through the accumulation of common knowledge regarding communication practices among the members of a community. In contrast with this view, it is proposed that coordinated communication emerges as a by-product of local…

  14. Approximations for column effect in airplane wing spars

    NASA Technical Reports Server (NTRS)

    Warner, Edward P; Short, Mac

    1927-01-01

    The significance attaching to "column effect" in airplane wing spars has been increasingly realized with the passage of time, but exact computations of the corrections to bending moment curves resulting from the existence of end loads are frequently omitted because of the additional labor involved in an analysis by rigorously correct methods. The present report represents an attempt to provide for approximate column effect corrections that can be graphically or otherwise expressed so as to be applied with a minimum of labor. Curves are plotted giving approximate values of the correction factors for single and two bay trusses of varying proportions and with various relationships between axial and lateral loads. It is further shown from an analysis of those curves that rough but useful approximations can be obtained from Perry's formula for corrected bending moment, with the assumed distance between points of inflection arbitrarily modified in accordance with rules given in the report. The discussion of general rules of variation of bending stress with axial load is accompanied by a study of the best distribution of the points of support along a spar for various conditions of loading.

  15. Convergence of finite element approximations of large eddy motion.

    SciTech Connect

    Iliescu, T.; John, V.; Layton, W. J.; Mathematics and Computer Science; Otto-von-Guericke Univ.; Univ. of Pittsburgh

    2002-11-01

    This report considers 'numerical errors' in LES. Specifically, for one family of space filtered flow models, we show convergence of the finite element approximation of the model and give an estimate of the error. Keywords: Navier-Stokes equations, large eddy simulation, finite element method. I. INTRODUCTION. Consider the (turbulent) flow of an incompressible fluid. One promising and common approach to the simulation of the motion of the large fluid structures is Large Eddy Simulation (LES). Various models are used in LES; a common one is to find (w, q), where w : Ω ...

  16. Canonical Commonality Analysis.

    ERIC Educational Resources Information Center

    Leister, K. Dawn

    Commonality analysis is a method of partitioning variance that has advantages over more traditional "OVA" methods. Commonality analysis indicates the amount of explanatory power that is "unique" to a given predictor variable and the amount of explanatory power that is "common" to or shared with at least one predictor…

  17. Knowledge representation for commonality

    NASA Technical Reports Server (NTRS)

    Yeager, Dorian P.

    1990-01-01

    Domain-specific knowledge necessary for commonality analysis falls into two general classes: commonality constraints and costing information. Notations for encoding such knowledge should be powerful and flexible and should appeal to the domain expert. The notations employed by the Commonality Analysis Problem Solver (CAPS) analysis tool are described. Examples are given to illustrate the main concepts.

  18. The JWKB approximation in loop quantum cosmology

    NASA Astrophysics Data System (ADS)

    Craig, David; Singh, Parampreet

    2017-01-01

    We explore the JWKB approximation in loop quantum cosmology in a flat universe with a scalar matter source. Exact solutions of the quantum constraint are studied at small volume in the JWKB approximation in order to assess the probability of tunneling to small or zero volume. Novel features of the approximation are discussed which appear due to the fact that the model is effectively a two-dimensional dynamical system. Based on collaborative work with Parampreet Singh.

  19. Approximate dynamic model of a turbojet engine

    NASA Technical Reports Server (NTRS)

    Artemov, O. A.

    1978-01-01

    An approximate dynamic nonlinear model of a turbojet engine is elaborated on as a tool in studying the aircraft control loop, with the turbojet engine treated as an actuating component. Approximate relationships linking the basic engine parameters and shaft speed are derived to simplify the problem, and to aid in constructing an approximate nonlinear dynamic model of turbojet engine performance useful for predicting aircraft motion.

  20. Bent approximations to synchrotron radiation optics

    SciTech Connect

    Heald, S.

    1981-01-01

    Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors.

  1. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. III. Cylindrical approximations for heat waves traveling inwards

    SciTech Connect

    Berkel, M. van; Tamura, N.; Ida, K.; Hogeweij, G. M. D.; Zwart, H. J.; Inagaki, S.; Baar, M. R. de

    2014-11-15

    ... cylindrical approximations are treated for heat waves traveling towards the plasma edge, assuming a semi-infinite domain.

  2. Parabolic approximation method for the mode conversion-tunneling equation

    SciTech Connect

    Phillips, C.K.; Colestock, P.L.; Hwang, D.Q.; Swanson, D.G.

    1987-07-01

    The derivation of the wave equation which governs ICRF wave propagation, absorption, and mode conversion within the kinetic layer in tokamaks has been extended to include diffraction and focussing effects associated with the finite transverse dimensions of the incident wavefronts. The kinetic layer considered consists of a uniform density, uniform temperature slab model in which the equilibrium magnetic field is oriented in the z-direction and varies linearly in the x-direction. An equivalent dielectric tensor as well as a two-dimensional energy conservation equation are derived from the linearized Vlasov-Maxwell system of equations. The generalized form of the mode conversion-tunneling equation is then extracted from the Maxwell equations, using the parabolic approximation method in which transverse variations of the wave fields are assumed to be weak in comparison to the variations in the primary direction of propagation. Methods of solving the generalized wave equation are discussed. 16 refs.

  3. Parametric study of the Orbiter rollout using an approximate solution

    NASA Technical Reports Server (NTRS)

    Garland, B. J.

    1979-01-01

    An approximate solution to the motion of the Orbiter during rollout is used to perform a parametric study of the rollout distance required by the Orbiter. The study considers the maximum expected dispersions in the landing speed and the touchdown point. These dispersions are assumed to be correlated so that a fast landing occurs before the nominal touchdown point. The maximum rollout distance is required by the maximum landing speed with a 10 knot tailwind and the center of mass at the forward limit of its longitudinal travel. The maximum weight that can be stopped within 15,000 feet on a hot day at Kennedy Space Center is 248,800 pounds. The energy absorbed by the brakes would exceed the limit for reuse of the brakes.

  4. The Average Field Approximation for Almost Bosonic Extended Anyons

    NASA Astrophysics Data System (ADS)

    Lundholm, Douglas; Rougerie, Nicolas

    2015-12-01

    Anyons are 2D or 1D quantum particles with intermediate statistics, interpolating between bosons and fermions. We study the ground state of a large number N of 2D anyons, in a scaling limit where the statistics parameter α is proportional to N^{-1} when N → ∞. This means that the statistics is seen as a "perturbation from the bosonic end". We model this situation in the magnetic gauge picture by bosons interacting through long-range magnetic potentials. We assume that these effective statistical gauge potentials are generated by magnetic charges carried by each particle, smeared over discs of radius R (extended anyons). Our method allows us to take R → 0, not too fast, at the same time as N → ∞. In this limit we rigorously justify the so-called "average field approximation": the particles behave like independent, identically distributed bosons interacting via a self-consistent magnetic field.

  5. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

    A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  6. Approximate methods for equations of incompressible fluid

    NASA Astrophysics Data System (ADS)

    Galkin, V. A.; Dubovik, A. O.; Epifanov, A. A.

    2017-02-01

    Approximate methods based on sequential approximations in the theory of functional solutions to systems of conservation laws are considered, including the model of the dynamics of an incompressible fluid. Test calculations are performed, and a comparison with exact solutions is carried out.

  7. Quirks of Stirling's Approximation

    ERIC Educational Resources Information Center

    Macrae, Roderick M.; Allgeier, Benjamin M.

    2013-01-01

    Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
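
    A quick numerical check of the point above, comparing ln n! with the short form n ln n - n and with the fuller form including the sqrt(2*pi*n) term:

        import math

        for n in (5, 10, 60):
            exact = math.lgamma(n + 1)                       # ln n!
            naive = n * math.log(n) - n                      # short Stirling form
            full = naive + 0.5 * math.log(2 * math.pi * n)   # with sqrt(2 pi n) term
            print(n, round(exact, 3), round(naive, 3), round(full, 3))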

  8. Inversion and approximation of Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.

  9. An approximation for inverse Laplace transforms

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1981-01-01

    Programmable calculator runs simple finite-series approximation for Laplace transform inversions. Utilizing family of orthonormal functions, approximation is used for wide range of transforms, including those encountered in feedback control problems. Method works well as long as F(t) decays to zero as t approaches infinity and so is applicable to most physical systems.
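
    As a rough illustration of the simple-pole idea (this is not the report's orthonormal-function construction; pole locations and sample points are arbitrary assumptions):

        # Fit F(s) on the real axis by a series with simple poles at s = -a_i,
        # then invert term by term, since c/(s + a) inverts to c * exp(-a t).
        import numpy as np

        poles = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # assumed pole magnitudes
        s = np.linspace(0.1, 10.0, 50)                    # sample points on real axis
        F = lambda s: 1.0 / (s + 1.0) ** 2                # transform of t * exp(-t)

        A = 1.0 / (s[:, None] + poles[None, :])           # basis 1/(s + a_i)
        c, *_ = np.linalg.lstsq(A, F(s), rcond=None)      # least-squares coefficients

        f_approx = lambda t: np.sum(c * np.exp(-poles * t))
        print(f_approx(1.0), 1.0 * np.exp(-1.0))          # compare with exact t*exp(-t)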

  10. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  11. Approximating maximum clique with a Hopfield network.

    PubMed

    Jagota, A

    1995-01-01

    In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics; both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
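
    A minimal sketch of one energy-descent dynamic of the kind described above: vertices are switched on only when the active set stays a clique, emulating a simple greedy heuristic for MAX-CLIQUE (an illustration, not the paper's network definition):

        def greedy_clique(adj):
            # adj: dict vertex -> set of neighbours; returns a maximal clique.
            clique = set()
            for v in sorted(adj, key=lambda u: -len(adj[u])):  # high degree first
                if clique <= adj[v]:        # v is adjacent to every clique member
                    clique.add(v)
            return clique

        adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
        print(greedy_clique(adj))           # {1, 2, 3}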

  12. Error assessments of widely-used orbit error approximations in satellite altimetry

    NASA Technical Reports Server (NTRS)

    Tai, Chang-Kou

    1988-01-01

    From simulations, the orbit error can be assumed to be a slowly varying sine wave with a predominant wavelength comparable to the Earth's circumference. Thus, one can derive analytically the error committed in representing the orbit error along a segment of the satellite ground track by a bias; by a bias and tilt (linear approximation); or by a bias, tilt, and curvature (quadratic approximation). The result clearly agrees with what is obvious intuitively, i.e., (1) the fit is better with more parameters, and (2) as the length of the segment increases, the approximation gets worse. But more importantly, it provides a quantitative basis to evaluate the accuracy of past results and, in the future, to select the best approximation according to the required precision and the efficiency of various approximations.
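
    A numerical illustration of these claims (scales are illustrative): fit a fraction of a long-wavelength sine "orbit error" by a bias, a bias and tilt, and a quadratic, and watch the residual grow with segment length:

        import numpy as np

        for frac in (0.05, 0.1, 0.2):              # segment length / wavelength
            x = np.linspace(0.0, frac, 200)
            err = np.sin(2 * np.pi * x + 0.3)      # slowly varying orbit error
            for deg, name in ((0, "bias"), (1, "tilt"), (2, "quadratic")):
                resid = err - np.polyval(np.polyfit(x, err, deg), x)
                print(frac, name, f"{np.abs(resid).max():.2e}")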

  13. Dynamical opacity-sampling models of Mira variables - I. Modelling description and analysis of approximations

    NASA Astrophysics Data System (ADS)

    Ireland, M. J.; Scholz, M.; Wood, P. R.

    2008-12-01

    We describe the Cool Opacity-sampling Dynamic EXtended (CODEX) atmosphere models of Mira variable stars, and examine in detail the physical and numerical approximations that go into the model creation. The CODEX atmospheric models are obtained by computing the temperature and the chemical and radiative states of the atmospheric layers, assuming gas pressure and velocity profiles from Mira pulsation models, which extend from near the H-burning shell to the outer layers of the atmosphere. Although the code uses the approximation of Local Thermodynamic Equilibrium (LTE) and a grey approximation in the dynamical atmosphere code, many key observable quantities, such as infrared diameters and low-resolution spectra, are predicted robustly in spite of these approximations. We show that in visible light, radiation from Mira variables is dominated by fluorescence scattering processes, and that the LTE approximation likely underpredicts visible-band fluxes by a factor of 2.

  14. Polynomial approximations of a class of stochastic multiscale elasticity problems

    NASA Astrophysics Data System (ADS)

    Hoang, Viet Ha; Nguyen, Thanh Chung; Xia, Bingxing

    2016-06-01

    We consider a class of elasticity equations in $\mathbb{R}^d$ whose elastic moduli depend on n separated microscopic scales. The moduli are random and expressed as a linear expansion of a countable sequence of random variables which are independently and identically uniformly distributed in a compact interval. The multiscale Hellinger-Reissner mixed problem that allows for computing the stress directly and the multiscale mixed problem with a penalty term for nearly incompressible isotropic materials are considered. The stochastic problems are studied via deterministic problems that depend on a countable number of real parameters which represent the probabilistic law of the stochastic equations. We study the multiscale homogenized problems that contain all the macroscopic and microscopic information. The solutions of these multiscale homogenized problems are written as generalized polynomial chaos (gpc) expansions. We approximate these solutions by semidiscrete Galerkin approximating problems that project into the spaces of functions with only a finite number of N gpc modes. Assuming summability properties for the coefficients of the elastic moduli's expansion, we deduce bounds and summability properties for the solutions' gpc expansion coefficients. These bounds imply explicit rates of convergence in terms of N when the gpc modes used for the Galerkin approximation are chosen to correspond to the best N terms in the gpc expansion. For the mixed problem with a penalty term for nearly incompressible materials, we show that the rate of convergence for the best N term approximation is independent of the Lamé constants' ratio when it goes to $\infty$. Correctors for the homogenization problem are deduced. From these we establish correctors for the solutions of the parametric multiscale problems in terms of the semidiscrete Galerkin approximations. For two-scale problems, an explicit homogenization error which is uniform with respect to the parameters is deduced. Together

  15. APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD

    SciTech Connect

    Semerák, O.

    2015-02-10

    A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.
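
    For context, the Beloborodov approximation mentioned above is usually quoted in the form (standard expression, shown for orientation)

        $1 - \cos\alpha = \left(1 - \frac{r_s}{R}\right)(1 - \cos\psi), \qquad r_s = \frac{2GM}{c^2},$

    relating the emission angle $\alpha$ at radius R to the total bending angle $\psi$; it is reported to be accurate to a few percent down to R of roughly twice the Schwarzschild radius.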

  16. Approximate Brueckner orbitals in electron propagator calculations

    SciTech Connect

    Ortiz, J.V.

    1999-12-01

    Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with third-order algebraic diagrammatic construction [2ph-TDA, ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.

  17. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.

  18. Information geometry of mean-field approximation.

    PubMed

    Tanaka, T

    2000-08-01

    I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics.
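
    As a concrete instance of the orders mentioned above, for Ising-type models with couplings $w_{ij}$ and fields $h_i$ the first-order (naive) mean-field fixed point and its second-order TAP correction take the familiar forms (standard expressions, quoted for orientation):

        $m_i = \tanh\Big(h_i + \sum_j w_{ij} m_j\Big)$  (naive mean field)

        $m_i = \tanh\Big(h_i + \sum_j w_{ij} m_j - m_i \sum_j w_{ij}^2 (1 - m_j^2)\Big)$  (TAP)

    with the extra Onsager reaction term appearing at second order in the Plefka expansion.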

  19. Campus Common Law

    ERIC Educational Resources Information Center

    Bakken, Gordon Morris

    1976-01-01

    Discusses the legal principle of common law as it applies to the personnel policies of colleges and universities in an attempt to define the parameters of campus common law and to clarify its relationship to written university policies and relevant state laws. (JG)

  20. Conceptualizing an Information Commons.

    ERIC Educational Resources Information Center

    Beagle, Donald

    1999-01-01

    Concepts from Strategic Alignment, a technology-management theory, are used to discuss the Information Commons as a new service-delivery model in academic libraries. The Information Commons, as a conceptual, physical, and instructional space, involves an organizational realignment from print to the digital environment. (Author)

  1. Common Eye Disorders

    MedlinePlus

    ... eye,” is the most common cause of vision impairment in children. Amblyopia is the medical term used ... the most common cause of permanent one-eye vision impairment among children and young and middle-aged adults. ...

  2. A Survey of Techniques for Approximate Computing

    DOE PAGES

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality with the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
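
    A toy Python example of one technique class covered by such surveys, loop perforation, which skips iterations to trade output quality for work (numbers and the stride are illustrative):

        def mean_exact(xs):
            return sum(xs) / len(xs)

        def mean_perforated(xs, stride=4):
            sample = xs[::stride]          # execute only 1 in `stride` iterations
            return sum(sample) / len(sample)

        xs = [float(i % 97) for i in range(100_000)]
        print(mean_exact(xs), mean_perforated(xs))   # close, at ~25% of the work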

  3. AN APPROXIMATE EQUATION OF STATE OF SOLIDS.

    DTIC Science & Technology

    ... research. By generalizing experimental data and obtaining unified relations describing the thermodynamic properties of solids, an approximate equation of state is derived which can be applied to a wide class of materials. (Author)

  4. Approximation concepts for efficient structural synthesis

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Miura, H.

    1976-01-01

    It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.

  5. Approximation methods in gravitational-radiation theory

    NASA Astrophysics Data System (ADS)

    Will, C. M.

    1986-02-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. The author summarizes recent developments in two areas in which approximations are important: (1) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (2) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
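
    For reference, the quadrupole approximation named in (1) gives the radiated power in terms of the third time derivative of the source's reduced quadrupole moment (standard textbook form):

        $P = \frac{G}{5c^5} \left\langle \dddot{Q}_{ij} \dddot{Q}_{ij} \right\rangle, \qquad Q_{ij} = \int \rho \left(x_i x_j - \tfrac{1}{3}\delta_{ij} r^2\right) d^3x,$

    with the angle brackets denoting an average over several wave periods; this is the expression underlying the binary-pulsar damping comparison.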

  6. A Survey of Techniques for Approximate Computing

    SciTech Connect

    Mittal, Sparsh

    2016-03-18

    Approximate computing trades off computation quality with the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide insights to researchers into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.

  7. Computational aspects of pseudospectral Laguerre approximations

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele

    1989-01-01

    Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. Introduced are a scaling function and appropriate numerical procedures in order to limit these unpleasant phenomena.

  8. The closure approximation in the hierarchy equations.

    NASA Technical Reports Server (NTRS)

    Adomian, G.

    1971-01-01

    The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.

  9. Approximate String Matching with Reduced Alphabet

    NASA Astrophysics Data System (ADS)

    Salmela, Leena; Tarhio, Jorma

    We present a method to speed up approximate string matching by mapping the actual alphabet to a smaller alphabet. We apply the alphabet reduction scheme to a tuned version of the approximate Boyer-Moore algorithm utilizing the Four-Russians technique. Our experiments show that the alphabet reduction makes the algorithm faster. Especially in the k-mismatch case, the new variation is faster than earlier algorithms for English data with small values of k.
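
    A minimal Python sketch of the alphabet-reduction idea (the mapping and the naive k-mismatch scan are illustrative; the paper pairs the reduction with a tuned Boyer-Moore variant):

        # Map characters into `buckets` classes; mismatch counts in the reduced
        # alphabet never exceed the true counts, so true matches survive the
        # filter and candidates are then verified on the original text.
        def reduce_alphabet(text, buckets=8):
            return bytes(ord(c) % buckets for c in text)

        def k_mismatch_candidates(text, pattern, k):
            t, p, m = reduce_alphabet(text), reduce_alphabet(pattern), len(pattern)
            return [i for i in range(len(t) - m + 1)
                    if sum(a != b for a, b in zip(t[i:i + m], p)) <= k]

        print(k_mismatch_candidates("approximate matching", "matching", 1))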

  10. Polynomial approximation of functions in Sobolev spaces

    NASA Technical Reports Server (NTRS)

    Dupont, T.; Scott, R.

    1980-01-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  11. Polynomial approximation of functions in Sobolev spaces

    SciTech Connect

    Dupont, T.; Scott, R.

    1980-04-01

    Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomical plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.

  12. Nonadiabatic charged spherical evolution in the postquasistatic approximation

    SciTech Connect

    Rosales, L.; Barreto, W.; Peralta, C.; Rodriguez-Mueller, B.

    2010-10-15

    We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of dissipative and electrically charged distributions in general relativity. The numerical implementation of our approach leads to a solver which is globally second-order convergent. We evolve nonadiabatic distributions assuming an equation of state that accounts for the anisotropy induced by the electric charge. Dissipation is described by streaming-out or diffusion approximations. We match the interior solution, in noncomoving coordinates, with the Vaidya-Reissner-Nordstroem exterior solution. Two models are considered: (i) a Schwarzschild-like shell in the diffusion limit; and (ii) a Schwarzschild-like interior in the free-streaming limit. These toy models tell us something about the nature of the dissipative and electrically charged collapse. Diffusion stabilizes the gravitational collapse producing a spherical shell whose contraction is halted in a short characteristic hydrodynamic time. The streaming-out radiation provides a more efficient mechanism for emission of energy, redistributing the electric charge on the whole sphere, while the distribution collapses indefinitely with a longer hydrodynamic time scale.

  13. Near distance approximation in astrodynamical applications of Lambert's theorem

    NASA Astrophysics Data System (ADS)

    Rauh, Alexander; Parisi, Jürgen

    2014-01-01

    The smallness parameter of the approximation method is defined in terms of the non-dimensional initial distance between target and chaser satellite. In the case of a circular target orbit, compact analytical expressions are obtained for the interception travel time up to third order. For eccentric target orbits, an explicit result is worked out to first order, and the tools are prepared for numerical evaluation of higher order contributions. The possible transfer orbits are examined within Lambert's theorem. For an eventual rendezvous it is assumed that the directions of the angular momenta of the two orbits enclose an acute angle. This assumption, together with the property that the travel time should vanish with vanishing initial distance, leads to a condition on the admissible initial positions of the chaser satellite. The condition is worked out explicitly in the general case of an eccentric target orbit and a non-coplanar transfer orbit. The condition is local. However, since during a rendezvous maneuver, the chaser eventually passes through the local space, the condition propagates to non-local initial distances. As to quantitative accuracy, the third order approximation reproduces the elements of Mars, in the historical problem treated by Gauss, to seven decimals accuracy, and in the case of the International Space Station, the method predicts an encounter error of about 12 m for an initial distance of 70 km.

  14. Approximate controllability of a system of parabolic equations with delay

    NASA Astrophysics Data System (ADS)

    Carrasco, Alexander; Leiva, Hugo

    2008-09-01

    In this paper we give necessary and sufficient conditions for the approximate controllability of the following system of parabolic equations with delay: where Ω is a bounded domain in ..., D is an n×n nondiagonal matrix whose eigenvalues are semi-simple with nonnegative real part, the control ... and B ∈ L(U, Z) with ... . The standard notation z_t(x) defines a function from [-τ, 0] to ... (with x fixed) by z_t(x)(s) = z(t+s, x), -τ ≤ s ≤ 0. Here τ ≥ 0 is the maximum delay, which is supposed to be finite. We assume that the operator ... is linear and bounded, and φ_0 ∈ Z, φ ∈ L²([-τ, 0]; Z). To this end: First, we reformulate this system into a standard first-order delay equation. Secondly, the semigroup associated with the first-order delay equation on an appropriate product space is expressed as a series of strongly continuous semigroups and orthogonal projections related with the eigenvalues of the Laplacian operator; this representation allows us to reduce the controllability of this partial differential equation with delay to a family of ordinary delay equations. Finally, we use the well-known result on the rank condition for the approximate controllability of delay systems to derive our main result.

  15. Communication and common interest.

    PubMed

    Godfrey-Smith, Peter; Martínez, Manolo

    2013-01-01

    Explaining the maintenance of communicative behavior in the face of incentives to deceive, conceal information, or exaggerate is an important problem in behavioral biology. When the interests of agents diverge, some form of signal cost is often seen as essential to maintaining honesty. Here, novel computational methods are used to investigate the role of common interest between the sender and receiver of messages in maintaining cost-free informative signaling in a signaling game. Two measures of common interest are defined. These quantify the divergence between sender and receiver in their preference orderings over acts the receiver might perform in each state of the world. Sampling from a large space of signaling games finds that informative signaling is possible at equilibrium with zero common interest in both senses. Games of this kind are rare, however, and the proportion of games that include at least one equilibrium in which informative signals are used increases monotonically with common interest. Common interest as a predictor of informative signaling also interacts with the extent to which agents' preferences vary with the state of the world. Our findings provide a quantitative description of the relation between common interest and informative signaling, employing exact measures of common interest, information use, and contingency of payoff under environmental variation that may be applied to a wide range of models and empirical systems.

  16. Structural Reliability Analysis and Optimization: Use of Approximations

    NASA Technical Reports Server (NTRS)

    Grandhi, Ramana V.; Wang, Liping

    1999-01-01

    This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
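
    To make the FORM step concrete, here is a minimal sketch of the Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration for the safety index, on an illustrative limit state already expressed in standard normal space; the limit state g and all numbers are assumptions for demonstration, not examples taken from this report.

      import math
      import numpy as np

      def g(u):
          # Illustrative limit state in standard normal space; failure when g(u) <= 0
          return 4.0 - u[0] - u[1] - 0.1 * (u[0] - u[1]) ** 2

      def grad(f, u, h=1e-6):
          # Central-difference gradient, standing in for analytic sensitivities
          return np.array([(f(u + h * e) - f(u - h * e)) / (2 * h) for e in np.eye(len(u))])

      u = np.zeros(2)                        # start the search at the mean point
      for _ in range(100):                   # HL-RF fixed-point iteration for the MPP
          dg = grad(g, u)
          u_next = ((dg @ u - g(u)) / (dg @ dg)) * dg
          if np.linalg.norm(u_next - u) < 1e-10:
              u = u_next
              break
          u = u_next

      beta = float(np.linalg.norm(u))        # safety index: distance from origin to MPP
      pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # first-order estimate Phi(-beta)
      print("beta =", beta, " Pf(FORM) =", pf)

    For this mildly nonlinear limit state the iteration reaches the most probable failure point almost immediately; a SORM curvature correction would then adjust the failure probability beyond the first-order estimate.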

  17. An improved proximity force approximation for electrostatics

    SciTech Connect

    Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.

    2012-08-15

    A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, the contributions of which are approximated as those of pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: • The proximity force approximation (PFA) has been widely used in different areas. • The PFA can be improved using a derivative expansion in the shape of the surfaces. • We use the improved PFA to compute electrostatic forces between conductors. • The results can be used as an analytic benchmark for numerical calculations in AFM. • Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
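
    As a concrete illustration of the zeroth-order PFA itself (not the paper's derivative-expansion improvement), the following sketch integrates the parallel-plate pressure P(d) = eps0*V^2/(2*d^2) over the gap profile of a sphere above a grounded plane; the geometry and voltage are illustrative assumptions.

      import math

      EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

      def pfa_sphere_plane_force(R, a, V, n=100000):
          # Zeroth-order PFA: sum the parallel-plate electrostatic pressure
          # P(d) = eps0*V^2/(2*d^2) over flat annular patches below the sphere
          force, dr = 0.0, R / n
          for i in range(n):
              r = (i + 0.5) * dr                     # midpoint radius of the annulus
              d = a + R - math.sqrt(R * R - r * r)   # local gap under the sphere
              force += EPS0 * V * V / (2.0 * d * d) * 2.0 * math.pi * r * dr
          return force

      R, a, V = 1e-6, 1e-8, 1.0   # a 1 um sphere, 10 nm above the plane, at 1 V
      print(pfa_sphere_plane_force(R, a, V))
      print(math.pi * EPS0 * R * V * V / a)   # leading small-gap result, pi*eps0*R*V^2/a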

  18. Norms of Descriptive Adjective Responses to Common Nouns.

    ERIC Educational Resources Information Center

    Robbins, Janet L.

    This paper gives the results of a controlled experiment on word association. The purpose was to establish norms of commonality of primary descriptive adjective responses to common nouns. The stimuli consisted of 203 common nouns selected from 10 everyday topics of conversation, approximately 20 from each topic. There were 350 subjects, 50% male,…

  19. Legendre-Tau approximation for functional differential equations. Part 3: Eigenvalue approximations and uniform stability

    NASA Technical Reports Server (NTRS)

    Ito, K.

    1984-01-01

    The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation, the uniform exponential stability of the solution semigroup is preserved under approximation. This is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.

  20. Hybrid approximate message passing for generalized group sparsity

    NASA Astrophysics Data System (ADS)

    Fletcher, Alyson K.; Rangan, Sundeep

    2013-09-01

    We consider the problem of estimating a group sparse vector x ∈ R^n under a generalized linear measurement model. Group sparsity of x means that the activity of different components of the vector occurs in groups - a feature common in estimation problems in image processing, simultaneous sparse approximation, and feature selection with grouped variables. Unfortunately, many current group sparse estimation methods require that the groups be non-overlapping. This work considers problems with what we call generalized group sparsity, where the activity of the different components of x is modeled as a function of a small number of Boolean latent variables. We show that this model can incorporate a large class of overlapping group sparse problems, including problems in sparse multivariable polynomial regression and gene expression analysis. To estimate vectors with such group sparse structures, the paper proposes to use a recently developed hybrid generalized approximate message passing (HyGAMP) method. Approximate message passing (AMP) refers to a class of algorithms based on Gaussian and quadratic approximations of loopy belief propagation for estimation of random vectors under linear measurements. The HyGAMP method extends the AMP framework to incorporate priors on x described by graphical models, of which generalized group sparsity is a special case. We show that the HyGAMP algorithm is computationally efficient, general and offers superior performance in certain synthetic data test cases.
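
    For orientation, here is a minimal sketch of plain AMP for an ordinary sparse (not group-sparse) linear-Gaussian problem, using a soft-thresholding denoiser and the Onsager correction term; HyGAMP generalizes this loop with messages from a graphical-model prior, which the sketch does not attempt. The problem sizes, noise level, and threshold rule are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      def soft(x, t):
          # Soft-thresholding denoiser eta(x; t)
          return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

      # Problem setup: y = A @ x0 + noise with a k-sparse x0
      n, m, k = 500, 250, 25
      A = rng.normal(size=(m, n)) / np.sqrt(m)
      x0 = np.zeros(n)
      x0[rng.choice(n, k, replace=False)] = rng.normal(size=k)
      y = A @ x0 + 0.01 * rng.normal(size=m)

      x, z = np.zeros(n), y.copy()
      delta = m / n
      for _ in range(30):
          tau = np.sqrt(np.mean(z ** 2))       # simple noise-level threshold choice
          x_new = soft(x + A.T @ z, tau)
          onsager = (z / delta) * np.mean(np.abs(x_new) > 0)   # average of eta'(.)
          z = y - A @ x_new + onsager          # residual with the Onsager correction
          x = x_new

      print("relative error:", np.linalg.norm(x - x0) / np.linalg.norm(x0))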

  1. On the mathematical treatment of the Born-Oppenheimer approximation

    SciTech Connect

    Jecko, Thierry

    2014-05-15

    Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not correspond exactly to the common use of the approximation in Physics and Chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper contains neither mathematical statements nor proofs. Instead, we try to make mathematically rigorous results on the subject accessible to researchers in Quantum Chemistry or Physics.

  2. Managing the wildlife tourism commons.

    PubMed

    Pirotta, Enrico; Lusseau, David

    2015-04-01

    The nonlethal effects of wildlife tourism can threaten the conservation status of targeted animal populations. In turn, such resource depletion can compromise the economic viability of the industry. Therefore, wildlife tourism exploits resources that can become common pool and that should be managed accordingly. We used a simulation approach to test whether different management regimes (tax, tax and subsidy, cap, cap and trade) could provide socioecologically sustainable solutions. Such schemes are sensitive to errors in estimated management targets. We determined the sensitivity of each scenario to various realistic uncertainties in management implementation and in our knowledge of the population. Scenarios in which time quotas were enforced using a tax-and-subsidy approach, or were traded between operators, were more likely to be sustainable. Importantly, sustainability could be achieved even when operators were assumed to make simple rational economic decisions. We suggest that a combination of the two regimes might offer a robust solution, especially on a small spatial scale and under the control of a self-organized, operator-level institution. Our simulation platform could be parameterized to mimic local conditions and provide a test bed for experimenting with different governance solutions in specific case studies.

  3. ACS: ALMA Common Software

    NASA Astrophysics Data System (ADS)

    Chiozzi, Gianluca; Šekoranja, Matej

    2013-02-01

    ALMA Common Software (ACS) provides a software infrastructure common to all ALMA partners and consists of a documented collection of common patterns and components which implement those patterns. The heart of ACS is based on a distributed Component-Container model, with ACS Components implemented as CORBA objects in any of the supported programming languages. ACS provides common CORBA-based services such as logging, error and alarm management, configuration database, and lifecycle management. Although designed for ALMA, ACS can be, and is being, used in other control systems and distributed software projects, since it implements proven design patterns using state-of-the-art, reliable technology. Through the use of well-known standard constructs and components, it also allows team members who are not authors of ACS to easily understand the architecture of software modules, making maintenance affordable even on a very large project.

  4. Common Misconceptions about Cholesterol

    MedlinePlus

    Cholesterol can be both good and bad. This resource addresses common misconceptions about cholesterol, such as beliefs about diet, and explains the truth behind each one.

  5. How Common Is PTSD?

    MedlinePlus

    Posttraumatic stress disorder (PTSD) can occur after you have been through a trauma.

  6. Barry Commoner Assails Petrochemicals

    ERIC Educational Resources Information Center

    Chemical and Engineering News, 1973

    1973-01-01

    Discusses Commoner's ideas on the social value of the petrochemical industry and his suggestions for curtailment or elimination of its productive operation to produce a higher environmental quality for mankind at a relatively low loss in social benefit. (CC)

  7. Common clay and shale

    USGS Publications Warehouse

    Virta, R.L.

    2004-01-01

    Part of the 2003 industrial minerals review. The legislation, production, and consumption of common clay and shale are discussed. The average prices of the material and outlook for the market are provided.

  8. Common Causes of Stillbirth

    MedlinePlus

    One of the most common placental problems occurs when the placenta separates (partially or completely) from the uterine wall. Abnormal placement of the cord into the placenta can also deprive the baby of oxygen. Infectious causes are covered as well.

  9. Commonly Consumed Food Commodities

    EPA Pesticide Factsheets

    Commonly consumed foods are those ingested for their nutrient properties. Food commodities can be either raw agricultural commodities or processed commodities, provided that they are the forms that are sold or distributed for human consumption.

  10. Common Mental Health Issues

    ERIC Educational Resources Information Center

    Stock, Susan R.; Levine, Heidi

    2016-01-01

    This chapter provides an overview of common student mental health issues and approaches for student affairs practitioners who are working with students with mental illness, and ways to support the overall mental health of students on campus.

  11. Common peroneal nerve dysfunction

    MedlinePlus

    Also called: neuropathy of the common peroneal nerve, peroneal nerve injury, or peroneal nerve palsy. Common peroneal nerve dysfunction is a type of peripheral neuropathy (damage to nerves outside the brain or spinal cord). Damage to the nerve disrupts the myelin sheath that covers it.

  12. Genomic Data Commons launches

    Cancer.gov

    The Genomic Data Commons (GDC), a unified data system that promotes sharing of genomic and clinical data between researchers, launched today with a visit from Vice President Joe Biden to the operations center at the University of Chicago.

  13. Common clay and shale

    USGS Publications Warehouse

    Virta, R.L.

    2011-01-01

    The article discusses the latest developments in the global common clay and shale industry, particularly in the U.S. It claims that common clay and shale is mainly used in the manufacture of heavy clay products like brick, flue tile and sewer pipe. The main producing states in the U.S. include North Carolina, New York and Oklahoma. Among the firms that manufacture clay and shale-based products are Mid America Brick & Structural Clay Products LLC and Boral USA.

  14. Common clay and shale

    USGS Publications Warehouse

    Virta, R.L.

    2006-01-01

    At present, 150 companies produce common clay and shale in 41 US states. According to the United States Geological Survey (USGS), domestic production in 2005 reached 24.8 Mt valued at $176 million. In decreasing order by tonnage, the leading producer states include North Carolina, Texas, Alabama, Georgia and Ohio. For the whole year, residential and commercial building construction remained the major market for common clay and shale products such as brick, drain tile, lightweight aggregate, quarry tile and structural tile.

  15. On uniform approximation of elliptic functions by Padé approximants

    NASA Astrophysics Data System (ADS)

    Khristoforov, Denis V.

    2009-06-01

    Diagonal Padé approximants of elliptic functions are studied. It is known that the absence of uniform convergence of such approximants is related to them having spurious poles that do not correspond to any singularities of the function being approximated. A sequence of piecewise rational functions is proposed, which is constructed from two neighbouring Padé approximants and approximates an elliptic function locally uniformly in the Stahl domain. The proof of the convergence of this sequence is based on deriving strong asymptotic formulae for the remainder function and Padé polynomials and on the analysis of the behaviour of a spurious pole. Bibliography: 23 titles.
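
    For readers who want to experiment, here is a minimal sketch of constructing a diagonal [m/m] Padé approximant from Taylor coefficients; it is demonstrated on exp(x) purely for brevity, with elliptic-function coefficients to be substituted in practice.

      import numpy as np
      from math import factorial

      def pade_diagonal(c, m):
          # With b[0] = 1, the denominator solves sum_j b[j]*c[m+k-j] = 0 for k = 1..m;
          # the numerator is then a[i] = sum_j b[j]*c[i-j] for i = 0..m.
          C = np.array([[c[m + k - j] for j in range(1, m + 1)] for k in range(1, m + 1)])
          rhs = -np.array([c[m + k] for k in range(1, m + 1)])
          b = np.concatenate(([1.0], np.linalg.solve(C, rhs)))
          a = [sum(b[j] * c[i - j] for j in range(i + 1)) for i in range(m + 1)]
          return np.array(a), b

      m = 3
      c = [1.0 / factorial(k) for k in range(2 * m + 1)]   # Taylor coefficients of exp
      a, b = pade_diagonal(c, m)
      x = 1.0
      print(np.polyval(a[::-1], x) / np.polyval(b[::-1], x), np.exp(x))
      # the [3/3] approximant reproduces e with an error of roughly 1e-5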

  16. Estimation of distribution algorithms with Kikuchi approximations.

    PubMed

    Santana, Roberto

    2005-01-01

    The question of finding feasible ways for estimating probability distributions is one of the main challenges for Estimation of Distribution Algorithms (EDAs). To estimate the distribution of the selected solutions, EDAs use factorizations constructed according to graphical models. The class of factorizations that can be obtained from these probability models is highly constrained. Expanding the class of factorizations that could be employed for probability approximation is a necessary step for the conception of more robust EDAs. In this paper we introduce a method for learning a more general class of probability factorizations. The method combines a reformulation of a probability approximation procedure known in statistical physics as the Kikuchi approximation of energy, with a novel approach for finding graph decompositions. We present the Markov Network Estimation of Distribution Algorithm (MN-EDA), an EDA that uses Kikuchi approximations to estimate the distribution, and Gibbs Sampling (GS) to generate new points. A systematic empirical evaluation of MN-EDA is done in comparison with different Bayesian network based EDAs. From our experiments we conclude that the algorithm can outperform other EDAs that use traditional methods of probability approximation in the optimization of functions with strong interactions among their variables.
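
    For contrast with the richer factorizations discussed above, here is a minimal sketch of the simplest EDA, the univariate marginal distribution algorithm, on a toy problem; MN-EDA replaces this fully factorized model with a Kikuchi approximation and Gibbs sampling, which the sketch does not attempt. The fitness function and all parameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(5)

      def onemax(pop):
          # Toy separable fitness; strongly interacting problems motivate MN-EDA
          return pop.sum(axis=1)

      n_bits, pop_size, n_gen = 40, 200, 60
      p = np.full(n_bits, 0.5)                    # univariate marginal probabilities

      for _ in range(n_gen):
          pop = (rng.random((pop_size, n_bits)) < p).astype(int)   # sample the model
          elite = pop[np.argsort(onemax(pop))[-pop_size // 2:]]    # truncation selection
          p = np.clip(elite.mean(axis=0), 0.02, 0.98)              # re-estimate marginals

      print("best solution found:", onemax(elite[-1:])[0], "of", n_bits)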

  17. Approximation of Bivariate Functions via Smooth Extensions

    PubMed Central

    Zhang, Zhihua

    2014-01-01

    For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316

  18. Ancilla-approximable quantum state transformations

    SciTech Connect

    Blass, Andreas; Gurevich, Yuri

    2015-04-15

    We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.

  19. Fast wavelet based sparse approximate inverse preconditioner

    SciTech Connect

    Wan, W.L.

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but unfortunately it is not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal in the sense that the convergence is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically exhibit piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.

  20. Separable approximations of two-body interactions

    NASA Astrophysics Data System (ADS)

    Haidenbauer, J.; Plessas, W.

    1983-01-01

    We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only the on-shell properties of these potentials are considered, but also a comparison is made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme.

  1. Approximate solutions of the hyperbolic Kepler equation

    NASA Astrophysics Data System (ADS)

    Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge

    2015-12-01

    We provide an approximate zero S̃(g, L) for the hyperbolic Kepler equation S − g arcsinh(S) − L = 0 for g ∈ (0, 1) and L ∈ [0, ∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g, L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n − S| ≤ 0.5^{2^n − 1} |S̃ − S|. The approximate zero S̃(g, L) is a piecewise-defined function involving several linear expressions and one expression with cubic and square roots. In bounded regions of (0, 1) × [0, ∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
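
    A minimal sketch of the Newton iteration analyzed in the paper, using a deliberately naive starter S0 = L in place of the paper's piecewise-defined approximate zero:

      import math

      def hyperbolic_kepler_root(g, L, tol=1e-14, max_iter=60):
          # Solve S - g*arcsinh(S) - L = 0 by Newton's method
          S = L   # naive illustrative starter, not the paper's approximate zero
          for _ in range(max_iter):
              f = S - g * math.asinh(S) - L
              fp = 1.0 - g / math.sqrt(1.0 + S * S)   # (d/dS) arcsinh(S) = 1/sqrt(1+S^2)
              step = f / fp
              S -= step
              if abs(step) < tol:
                  break
          return S

      print(hyperbolic_kepler_root(0.5, 2.0))   # root of S - 0.5*arcsinh(S) - 2 = 0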

  2. Approximation methods in gravitational-radiation theory

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1986-01-01

    The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.

  3. Analytical approximations to the Hotelling trace for digital x-ray detectors

    NASA Astrophysics Data System (ADS)

    Clarkson, Eric; Pineda, Angel R.; Barrett, Harrison H.

    2001-06-01

    The Hotelling trace is the signal-to-noise ratio for the ideal linear observer in a detection task. We provide an analytical approximation for this figure of merit when the signal is known exactly, the background is generated by a stationary random process, and the imaging system is an ideal digital x-ray detector. This approximation is based on assuming that the detector is infinite in extent. We test this approximation for finite-size detectors by comparing it to exact calculations using matrix inversion of the data covariance matrix. After verifying the validity of the approximation under a variety of circumstances, we use it to generate plots of the Hotelling trace as a function of pairs of parameters of the system, the signal, and the background.
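
    A minimal numerical sketch of the comparison described, in one dimension: the exact Hotelling SNR^2 = s^T K^{-1} s by a linear solve, versus a stationary (infinite-detector) approximation that diagonalizes the covariance in the Fourier domain. The covariance kernel and signal profile are illustrative assumptions.

      import numpy as np

      n = 256
      x = np.arange(n)

      # Stationary background: exponential autocovariance k(d) = sigma^2 * rho^|d|
      sigma2, rho = 1.0, 0.9
      K = sigma2 * rho ** np.abs(x[:, None] - x[None, :])    # Toeplitz covariance

      # Known signal: a small Gaussian bump at the detector center
      s = 0.1 * np.exp(-0.5 * ((x - n / 2) / 3.0) ** 2)

      snr2_exact = s @ np.linalg.solve(K, s)                 # s^T K^{-1} s

      # Infinite-detector approximation: treat K as circulant, so the DFT
      # diagonalizes it; the eigenvalues are the DFT of the wrapped kernel.
      kernel = sigma2 * rho ** np.minimum(x, n - x)
      lam = np.fft.fft(kernel).real                          # power spectrum
      snr2_approx = np.sum(np.abs(np.fft.fft(s)) ** 2 / lam) / n

      print(snr2_exact, snr2_approx)    # the two figures of merit should agree closely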

  4. Small-angle approximation to the transfer of narrow laser beams in anisotropic scattering media

    NASA Technical Reports Server (NTRS)

    Box, M. A.; Deepak, A.

    1981-01-01

    The broadening and the detected signal power of a laser beam traversing an anisotropic scattering medium were examined using the small-angle approximation to the radiative transfer equation, in which photons suffering large-angle deflections are neglected. To obtain tractable answers, simple Gaussian and non-Gaussian functions are assumed for the scattering phase functions. Two other approximate approaches employed in the field to further simplify the small-angle approximation solutions are described, and the results obtained by one of them are compared with those obtained using the small-angle approximation. An exact method for obtaining the contribution of each higher-order scattering to the radiance field is examined, but no results are presented.

  5. Common ecology quantifies human insurgency.

    PubMed

    Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F

    2009-12-17

    Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties both in whole wars from 1816 to 1980 and terrorist attacks have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour.

  6. Exponential Approximations Using Fourier Series Partial Sums

    NASA Technical Reports Server (NTRS)

    Banerjee, Nana S.; Geer, James F.

    1997-01-01

    The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^{-M-2}), and the associated jump of the k-th derivative of f is approximated to within O(N^{-M-1+k}), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.

  7. Analogy between generalized Coddington equations and thin optical element approximation.

    PubMed

    Golub, Michael A

    2009-05-01

    Local wavefront curvature transformations at an arbitrarily shaped optical surface are commonly determined by generalized Coddington equations that are developed here via a local thin optical element approximation. Eikonal distributions of the incident and refracted beams are calculated and related by an eikonal transfer function of a local thin optical element located in close proximity to a given point at a tangent plane of an optical surface. Main coefficients and terms involved in the generalized Coddington equations are derived and explained as a local nonparaxial generalization for the customary paraxial wavefront transformations.

  8. A Randomized Approximate Nearest Neighbors Algorithm

    DTIC Science & Technology

    2010-09-14

  9. Very fast approximate reconstruction of MR images.

    PubMed

    Angelidis, P A

    1998-11-01

    The ultra fast Fourier transform (UFFT) provides the means for a very fast computation of a magnetic resonance (MR) image, because it is implemented using only additions and no multiplications at all. It achieves this by approximating the complex exponential functions involved in the Fourier transform (FT) sum with computationally simpler periodic functions. This approximation introduces erroneous spectrum peaks of small magnitude. We examine the performance of this transform in some typical MRI signals. The results show that this transform can very quickly provide an MR image. It is proposed to be used as a replacement of the classically used FFT whenever a fast general overview of an image is required.

  10. Congruence Approximations for Entropy Endowed Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Saini, Subhash (Technical Monitor)

    1998-01-01

    Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.

  11. Bronchopulmonary segments approximation using anatomical atlas

    NASA Astrophysics Data System (ADS)

    Busayarat, Sata; Zrimec, Tatjana

    2007-03-01

    Bronchopulmonary segments are valuable as they give more accurate localization than lung lobes. Traditionally, determining the segments requires segmentation and identification of segmental bronchi, which, in turn, require volumetric imaging data. In this paper, we present a method for approximating the bronchopulmonary segments for sparse data by effectively using an anatomical atlas. The atlas is constructed from a volumetric data and contains accurate information about bronchopulmonary segments. A new ray-tracing based image registration is used for transferring the information from the atlas to a query image. Results show that the method is able to approximate the segments on sparse HRCT data with slice gap up to 25 millimeters.

  12. Approximate learning algorithm in Boltzmann machines.

    PubMed

    Yasuda, Muneki; Tanaka, Kazuyuki

    2009-11-01

    Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning in Boltzmann machines is an NP-hard problem. Thus, in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean field methods. Finally, we show the validity of our algorithm using numerical experiments.
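
    To illustrate the structure of such learning rules, here is a minimal sketch of moment matching with a naive mean-field negative phase on synthetic data; the letter's algorithms refine exactly this step with belief propagation and the linear response correction, which the sketch does not implement. The data and all hyperparameters are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)

      def mean_field(W, b, sweeps=200):
          # Naive mean-field fixed point m_i = tanh(b_i + sum_j W_ij m_j)
          m = np.zeros(len(b))
          for _ in range(sweeps):
              m = np.tanh(b + W @ m)
          return m

      # Toy +/-1 data: five independent biased units, 200 samples
      p_on = rng.uniform(0.2, 0.8, size=5)
      data = np.where(rng.random((200, 5)) < p_on, 1.0, -1.0)

      n = data.shape[1]
      W, b, eta = np.zeros((n, n)), np.zeros(n), 0.05
      pos = data.T @ data / len(data)        # positive phase: data correlations

      for _ in range(500):
          m = mean_field(W, b)
          neg = np.outer(m, m)               # negative phase: correlations ~ m_i * m_j
          gW = pos - neg
          np.fill_diagonal(gW, 0.0)          # no self-connections
          W += eta * gW
          W = 0.5 * (W + W.T)                # keep the weights symmetric
          b += eta * (data.mean(axis=0) - m)

      print(np.round(b - np.arctanh(data.mean(axis=0)), 3))   # should be near zero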

  13. Analytical approximations for flow in compressible, saturated, one-dimensional porous media

    NASA Astrophysics Data System (ADS)

    Barry, D. A.; Lockington, D. A.; Jeng, D.-S.; Parlange, J.-Y.; Li, L.; Stagnitti, F.

    2007-04-01

    A nonlinear model for single-phase fluid flow in slightly compressible porous media is presented and solved approximately. The model assumes state equations for density, porosity, viscosity and permeability that are exponential functions of the fluid (either gas or liquid) pressure. The governing equation is transformed into a nonlinear diffusion equation. It is solved for a semi-infinite domain for either constant pressure or constant flux boundary conditions at the surface. The solutions obtained, although approximate, are extremely accurate as demonstrated by comparisons with numerical results. Predictions for the surface pressure resulting from a constant flux into a porous medium are compared with published experimental data.

  14. Dynamics of zonal flows: failure of wave-kinetic theory, and new geometrical optics approximations

    NASA Astrophysics Data System (ADS)

    Parker, Jeffrey B.

    2016-12-01

    The self-organisation of turbulence into regular zonal flows can be fruitfully investigated with quasi-linear methods and statistical descriptions. A wave-kinetic equation that assumes asymptotically large-scale zonal flows leads to ultraviolet divergence. From an exact description of quasi-linear dynamics emerge two better geometrical optics approximations. These involve not only the mean flow shear but also the second and third derivatives of the mean flow. One approximation takes the form of a new wave-kinetic equation, but it is only valid when the zonal flow is quasi-static and wave action is conserved.

  15. Power system commonality study

    NASA Astrophysics Data System (ADS)

    Littman, Franklin D.

    1992-07-01

    A limited top level study was completed to determine the commonality of power system/subsystem concepts within potential lunar and Mars surface power system architectures. A list of power system concepts with high commonality was developed which can be used to synthesize power system architectures which minimize development cost. Examples of potential high commonality power system architectures are given in this report along with a mass comparison. Other criteria such as life cycle cost (which includes transportation cost), reliability, safety, risk, and operability should be used in future, more detailed studies to select optimum power system architectures. Nineteen potential power system concepts were identified and evaluated for planetary surface applications including photovoltaic arrays with energy storage, isotope, and nuclear power systems. A top level environmental factors study was completed to assess environmental impacts on the identified power system concepts for both lunar and Mars applications. Potential power system design solutions for commonality between Mars and lunar applications were identified. Isotope, photovoltaic array (PVA), regenerative fuel cell (RFC), stainless steel liquid-metal cooled reactors (less than 1033 K maximum) with dynamic converters, and in-core thermionic reactor systems were found suitable for both lunar and Mars environments. The use of SP-100 thermoelectric (TE) and SP-100 dynamic power systems in a vacuum enclosure may also be possible for Mars applications although several issues need to be investigated further (potential single point failure of enclosure, mass penalty of enclosure and active pumping system, additional installation time and complexity). There are also technical issues involved with development of thermionic reactors (life, serviceability, and adaptability to other power conversion units). Additional studies are required to determine the optimum reactor concept for Mars applications. Various screening

  16. Fighting Crime by Fighting Misconceptions and Blind Spots in Policy Theories: An Evidence-Based Evaluation of Interventions and Assumed Causal Mechanisms

    ERIC Educational Resources Information Center

    van Noije, Lonneke; Wittebrood, Karin

    2010-01-01

    How effective are policy interventions to fight crime and how valid is the policy theory that underlies them? This is the twofold research question addressed in this article, which presents an evidence-based evaluation of Dutch social safety policy. By bridging the gap between actual effects and assumed effects, this study seeks to make fuller use…

  17. 41 CFR 302-10.206 - May my agency assume direct responsibility for the costs of preparing and transporting my mobile...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 41 Public Contracts and Property Management 4 2012-07-01 May my agency assume direct responsibility for the costs of preparing and transporting my mobile home? Section 302-10.206, Public Contracts and Property Management, Federal Travel Regulation System...

  18. Assumed white blood cell count of 8,000 cells/μL overestimates malaria parasite density in the Brazilian Amazon.

    PubMed

    Alves-Junior, Eduardo R; Gomes, Luciano T; Ribatski-Silva, Daniele; Mendes, Clebson Rodrigues J; Leal-Santos, Fabio A; Simões, Luciano R; Mello, Marcia Beatriz C; Fontes, Cor Jesus F

    2014-01-01

    Quantification of parasite density is an important component in the diagnosis of malaria infection. The accuracy of this estimation varies according to the method used. The aim of this study was to assess the agreement between the parasite density values obtained with the assumed value of 8,000 cells/μL and the automated WBC count. Moreover, the same comparative analysis was carried out for other assumed values of WBCs. The study was carried out in Brazil with 403 malaria patients who were infected in different endemic areas of the Brazilian Amazon. The use of a fixed WBC count of 8,000 cells/μL to quantify parasite density in malaria patients led to overestimated parasitemia and resulted in low reliability when compared to the automated WBC count. Assumed values ranging between 5,000 and 6,000 cells/μL, and 5,500 cells/μL in particular, showed higher reliability and more similar values of parasite density when compared between the 2 methods. The findings show that assumed WBC count of 5,500 cells/μL could lead to a more accurate estimation of parasite density for malaria patients in this endemic region.
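
    For reference, the arithmetic behind these estimates is simple; the sketch below assumes the common thick-smear convention of counting parasites against 200 WBCs, a convention assumed here rather than stated in the abstract.

      def parasite_density(parasites_counted, wbcs_counted=200, assumed_wbc_per_ul=8000):
          # parasites/uL = parasites counted * (assumed WBC count / WBCs counted)
          return parasites_counted * assumed_wbc_per_ul / wbcs_counted

      # The same smear read against two assumed WBC counts:
      for wbc in (8000, 5500):
          print(wbc, parasite_density(120, 200, wbc))
      # 120 parasites per 200 WBCs gives 4800/uL at 8,000 but 3300/uL at 5,500,
      # showing how the 8,000 cells/uL convention inflates the estimate whenever
      # the patient's true WBC count is lower.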

  19. Molecular collisions. 11: Semiclassical approximation to atom-symmetric top rotational excitation

    NASA Technical Reports Server (NTRS)

    Russell, D.; Curtiss, C. F.

    1973-01-01

    In a paper of this series a distorted wave approximation to the T matrix for atom-symmetric top scattering was developed which is correct to first order in the part of the interaction potential responsible for transitions in the component of rotational angular momentum along the symmetry axis of the top. A semiclassical expression for this T matrix is derived by assuming large values of orbital and rotational angular momentum quantum numbers.

  20. Approximation Algorithms for the Highway Problem under the Coupon Model

    NASA Astrophysics Data System (ADS)

    Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei

    When a store sells items to customers, it wishes to set the prices of the items so as to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy the items, and also assume that each item i ∈ V has production cost di and each customer ej ∈ E has valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at price ri, the profit for the item i is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of “loss-leader,” and showed that the seller can obtain more total profit when pi < 0 is allowed than when it is not. In this paper, we consider the line highway problem (in which each customer is interested in an interval on the line of the items) and the cycle highway problem (in which each customer is interested in an interval on the cycle of the items), and show approximation algorithms for the line highway problem and the cycle highway problem in which the smallest valuation is s and the largest valuation is l (this is called an [s, l]-valuation setting) or all valuations are identical (this is called a single valuation setting).

  1. Common Cause Failure Modes

    NASA Technical Reports Server (NTRS)

    Wetherholt, Jon; Heimann, Timothy J.; Anderson, Brenda

    2011-01-01

    High technology industries with high failure costs commonly use redundancy as a means to reduce risk. Redundant systems, whether similar or dissimilar, are susceptible to Common Cause Failures (CCF). CCF is not always considered in the design effort and, therefore, can be a major threat to success. There are several aspects of CCF which must be understood to perform an analysis that will find hidden issues that may negate redundancy. This paper provides a definition, the types, a list of possible causes, and some examples of CCF. Requirements and designs from NASA projects are used in the paper as examples.

  2. Approximate model for laser ablation of carbon

    NASA Astrophysics Data System (ADS)

    Shusser, Michael

    2010-08-01

    The paper presents an approximate kinetic theory model of the ablation of carbon by a nanosecond laser pulse. The model approximates the process as sublimation and combines conduction heat transfer in the target with the gas dynamics of the ablated plume, which are coupled through the boundary conditions at the interface. The ablated mass flux and the temperature of the ablating material are obtained from the assumption that the ablation rate is restricted by the kinetic theory limitation on the maximum mass flux that can be attained in a phase-change process. To account for the non-uniform distribution of the laser intensity while keeping the calculation simple, the quasi-one-dimensional approximation is used in both gas and solid phases. The results are compared with the predictions of the exact axisymmetric model that uses the conservation relations at the interface derived from the momentum solution of the Boltzmann equation for arbitrarily strong evaporation. It is seen that the simpler approximate model provides good accuracy.
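
    As a rough illustration of the kinetic-theory ceiling on evaporative mass flux (a Hertz-Knudsen-type bound), the sketch below uses a Clausius-Clapeyron-style vapor pressure whose constants are placeholders, not values from the paper.

      import math

      K_B = 1.380649e-23     # Boltzmann constant, J/K
      AMU = 1.66053907e-27   # atomic mass unit, kg

      def max_mass_flux(T, m_particle, p_sat):
          # Kinetic-theory upper bound, j_max = p_sat(T) * sqrt(m / (2*pi*k_B*T)),
          # in kg m^-2 s^-1
          return p_sat * math.sqrt(m_particle / (2.0 * math.pi * K_B * T))

      def p_sat(T, A=1.0e15, B=9.0e4):
          # Placeholder Clausius-Clapeyron-type saturation pressure, Pa
          return A * math.exp(-B / T)

      T = 4000.0                 # assumed surface temperature, K
      m_c = 12.011 * AMU         # mass of a carbon atom
      print(max_mass_flux(T, m_c, p_sat(T)))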

  3. Large Hierarchies from Approximate R Symmetries

    SciTech Connect

    Kappl, Rolf; Ratz, Michael; Schmidt-Hoberg, Kai; Nilles, Hans Peter; Ramos-Sanchez, Saul; Vaudrevange, Patrick K. S.

    2009-03-27

    We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales.

  4. Approximating a nonlinear MTFDE from physiology

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena

    2016-12-01

    This paper describes a numerical scheme which approximates the solution of a nonlinear mixed type functional differential equation from nerve conduction theory. The solution of such an equation is defined on the entire real axis and tends to known values at ±∞. A numerical method extended from the linear case is developed and applied to solve a nonlinear equation.

  5. Padé approximations and diophantine geometry

    PubMed Central

    Chudnovsky, D. V.; Chudnovsky, G. V.

    1985-01-01

    Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves. PMID:16593552

  6. Block Addressing Indices for Approximate Text Retrieval.

    ERIC Educational Resources Information Center

    Baeza-Yates, Ricardo; Navarro, Gonzalo

    2000-01-01

    Discusses indexing in large text databases, approximate text searching, and space-time tradeoffs for indexed text searching. Studies the space overhead and retrieval times as functions of the text block size, concludes that an index can be sublinear in space overhead and query time, and applies the analysis to the Web. (Author/LRW)

  7. Approximations of Two-Attribute Utility Functions

    DTIC Science & Technology

    1976-09-01

  8. Can Distributional Approximations Give Exact Answers?

    ERIC Educational Resources Information Center

    Griffiths, Martin

    2013-01-01

    Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…

  9. Kravchuk functions for the finite oscillator approximation

    NASA Technical Reports Server (NTRS)

    Atakishiyev, Natig M.; Wolf, Kurt Bernardo

    1995-01-01

    Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.

  10. An approximate classical unimolecular reaction rate theory

    NASA Astrophysics Data System (ADS)

    Zhao, Meishan; Rice, Stuart A.

    1992-05-01

    We describe a classical theory of unimolecular reaction rates which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix; this approximation is similar to, but extends and improves, the approximations for the separatrix introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.

  11. Sensing Position With Approximately Constant Contact Force

    NASA Technical Reports Server (NTRS)

    Sturdevant, Jay

    1996-01-01

    A computer-controlled electromechanical system uses a number of linear variable-differential transformers (LVDTs) to measure the axial positions of selected points on the surface of a lens, mirror, or other precise optical component with a high finish. Pressures applied to the pneumatically driven LVDTs are adjusted to maintain small, approximately constant contact forces as the positions of the LVDT tips vary.

  12. Approximate Solution to the Generalized Boussinesq Equation

    NASA Astrophysics Data System (ADS)

    Telyakovskiy, A. S.; Mortensen, J.

    2010-12-01

    The traditional Boussinesq equation describes the motion of water in groundwater flows. It models unconfined groundwater flow under the Dupuit assumption that the equipotential lines are vertical, making the flowlines horizontal. The Boussinesq equation is a nonlinear diffusion equation with diffusivity depending linearly on the water head. Here we analyze a generalization of the Boussinesq equation in which the diffusivity is a power-law function of the water head. For example, polytropic gases moving through porous media obey this equation. Solving this equation usually requires numerical approximations, but for certain classes of initial and boundary conditions an approximate analytical solution can be constructed. This work focuses on the latter approach, using the scaling properties of the equation. We consider a one-dimensional semi-infinite initially empty aquifer with boundary conditions at the inlet in the case of cylindrical symmetry. Such a situation represents the case of an injection well. Solutions propagate with finite speed. We construct an approximate scaling function, and we compare the approximate solution with direct numerical solutions obtained by using the scaling properties of the equations.
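
    A minimal numerical sketch of the underlying nonlinear diffusion, written here, as an assumption, in one Cartesian dimension as h_t = (h^n h_x)_x with an initially empty aquifer and a constant-head inlet; the paper's cylindrical injection-well geometry and its scaling construction are not reproduced.

      import numpy as np

      def generalized_boussinesq(n_power=2.0, h0=1.0, L=10.0, nx=400, t_end=2.0):
          # Explicit conservative scheme for h_t = d/dx(h^n * dh/dx),
          # with h(0, t) = h0 and h(x, 0) = 0
          dx = L / nx
          h = np.zeros(nx)
          h[0] = h0
          t = 0.0
          while t < t_end:
              dt = 0.25 * dx * dx / max(h.max() ** n_power, 1e-12)   # stability limit
              d_face = 0.5 * (h[:-1] ** n_power + h[1:] ** n_power)  # face diffusivity
              flux = d_face * (h[1:] - h[:-1]) / dx
              h[1:-1] += dt * (flux[1:] - flux[:-1]) / dx
              h[0], h[-1] = h0, 0.0
              t += dt
          return h

      h = generalized_boussinesq()
      print("wetting-front position ~", (h > 1e-6).sum() * (10.0 / 400))   # finite speed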

  13. Alternative approximation concepts for space frame synthesis

    NASA Technical Reports Server (NTRS)

    Lust, R. V.; Schmit, L. A.

    1985-01-01

    A structural synthesis methodology for the minimum mass design of three-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near-optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.

  14. Quickly Approximating the Distance Between Two Objects

    NASA Technical Reports Server (NTRS)

    Hammen, David

    2009-01-01

    A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.

  15. Approximation algorithms for planning and control

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; Dean, Thomas

    1989-01-01

    A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.

  16. Approximating Confidence Intervals for Factor Loadings.

    ERIC Educational Resources Information Center

    Lambert, Zarrel V.; And Others

    1991-01-01

    A method is presented that eliminates some interpretational limitations arising from assumptions implicit in the use of arbitrary rules of thumb to interpret exploratory factor analytic results. The bootstrap method is presented as a way of approximating sampling distributions of estimated factor loadings. Simulated datasets illustrate the…
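
    A minimal sketch of the bootstrap idea applied to a loading-like statistic: here the first principal-component loadings of a correlation matrix stand in for factor loadings, and the synthetic dataset, replicate count, and 95% level are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(42)

      def first_pc_loadings(X):
          # Loadings of the first principal component of the correlation matrix
          vals, vecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
          v = vecs[:, -1]                      # eigenvector of the largest eigenvalue
          return v if v.sum() >= 0 else -v     # fix the arbitrary sign

      # Toy data: 300 observations of 4 variables driven by one latent factor
      latent = rng.normal(size=(300, 1))
      X = latent @ rng.uniform(0.5, 1.0, size=(1, 4)) + 0.5 * rng.normal(size=(300, 4))

      boot = np.array([
          first_pc_loadings(X[rng.integers(0, len(X), len(X))])   # resample rows
          for _ in range(2000)
      ])
      lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
      print(np.column_stack([first_pc_loadings(X), lo, hi]))      # loading with 95% CI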

  17. Approximated integrability of the Dicke model

    NASA Astrophysics Data System (ADS)

    Relaño, A.; Bastarrachea-Magnani, M. A.; Lerma-Hernández, S.

    2016-12-01

    A very approximate second integral of motion of the Dicke model is identified within a broad energy region above the ground state, and for a wide range of values of the external parameters. This second integral, obtained from a Born-Oppenheimer approximation, classifies the whole regular part of the spectrum in bands, coming from different semi-classical energy surfaces, and labelled by its corresponding eigenvalues. Results obtained from this approximation are compared with exact numerical diagonalization for finite systems in the superradiant phase, obtaining a remarkable accord. The region of validity of our approach in the parameter space, which includes the resonant case, is unveiled. The energy range of validity goes from the ground state up to a certain upper energy where chaos sets in, and extends far beyond the range of applicability of a simple harmonic approximation around the minimal energy configuration. The upper energy validity limit increases for larger values of the coupling constant and the ratio between the level splitting and the frequency of the field. These results show that the Dicke model behaves like a two-degree-of-freedom integrable model for a wide range of energies and values of the external parameters.

  18. Fostering Formal Commutativity Knowledge with Approximate Arithmetic

    PubMed Central

    Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert

    2015-01-01

    How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found out that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated if the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311

  19. Multidimensional stochastic approximation using locally contractive functions

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1975-01-01

    A Robbins-Monro type multidimensional stochastic approximation algorithm which converges in mean square and with probability one to the fixed point of a locally contractive regression function is developed. The algorithm is applied to obtain maximum likelihood estimates of the parameters for a mixture of multivariate normal distributions.
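
    A minimal sketch in Python of a Robbins-Monro iteration of the kind described here; the regression function, noise level, and step-size schedule are illustrative assumptions, not taken from the paper:

        import numpy as np

        rng = np.random.default_rng(0)

        def noisy_g(x):
            # Noisy evaluation of a contractive regression function g(x) = 0.5*x + 1,
            # whose fixed point is x* = (2, 2); the noise stands in for the
            # stochastic observations the scheme must average out.
            return 0.5 * x + 1.0 + rng.normal(scale=0.1, size=x.shape)

        x = np.zeros(2)                        # arbitrary starting point in R^2
        for k in range(1, 5001):
            a_k = 1.0 / k                      # steps with sum a_k = inf, sum a_k^2 < inf
            x = x + a_k * (noisy_g(x) - x)     # move toward the observed fixed point

        print(x)                               # close to the fixed point (2, 2)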

  20. Approximating the efficiency characteristics of blade pumps

    NASA Astrophysics Data System (ADS)

    Shekun, G. D.

    2007-11-01

    Results from a statistical investigation into the experimental efficiency characteristics of commercial type SD centrifugal pumps and type SDS swirl flow pumps are presented. An exponential function for approximating the efficiency characteristics of blade pumps is given. The versatile nature of this characteristic is confirmed by the fact that the use of different systems of relative units gives identical results.
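
    The record does not reproduce the exponential expression itself; as a purely hypothetical illustration, the sketch below fits a generic exponential-type efficiency curve to made-up efficiency data with scipy, where the functional form, data, and parameter names are all assumptions:

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical efficiency-vs-relative-flow data for a centrifugal pump.
        q = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])    # flow relative to design point
        eta = np.array([0.35, 0.58, 0.72, 0.79, 0.80, 0.76])

        def eff(q, eta_max, a, b):
            # Exponential-type efficiency curve peaking near the design flow.
            return eta_max * np.exp(-a * (q - b) ** 2)

        params, _ = curve_fit(eff, q, eta, p0=(0.8, 1.0, 1.0))
        print(params)    # fitted (eta_max, a, b); eff(q, *params) approximates the curve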

  1. Counting independent sets using the Bethe approximation

    SciTech Connect

Chertkov, Michael; Chandrasekaran, V; Gamarnik, D; Shah, D; Shin, J

    2009-01-01

The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^-4 log^3(n ε^-1)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error to the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach by Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^(-γ)) for some γ > 0. As an application, they find that for random 3-regular graphs, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions were expecting an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
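
    A minimal sketch of plain damped belief propagation for the hard-core model on a small graph, with the Bethe free-energy estimate checked against brute-force enumeration. The 6-cycle test graph, activity λ = 1, and damping factor are illustrative assumptions, and this is ordinary BP rather than the authors' modified time-varying scheme:

        import itertools
        import math

        n = 6
        edges = [(i, (i + 1) % n) for i in range(n)]      # test graph: a 6-cycle
        nbrs = {u: set() for u in range(n)}
        for u, v in edges:
            nbrs[u].add(v)
            nbrs[v].add(u)

        lam = 1.0    # hard-core activity; lam = 1 counts independent sets

        # eta[(u, v)]: cavity probability that u is occupied when edge (u, v) is cut.
        eta = {(u, v): 0.5 for u in nbrs for v in nbrs[u]}
        for _ in range(200):                               # damped fixed-point iteration
            new = {}
            for (u, v) in eta:
                r = lam * math.prod(1.0 - eta[(w, u)] for w in nbrs[u] - {v})
                new[(u, v)] = 0.5 * eta[(u, v)] + 0.5 * r / (1.0 + r)
            eta = new

        # Bethe estimate: log Z = sum_u log Z_u - sum_(u,v) log Z_uv.
        logZ = sum(math.log(1.0 + lam * math.prod(1.0 - eta[(w, u)] for w in nbrs[u]))
                   for u in nbrs)
        logZ -= sum(math.log(1.0 - eta[(u, v)] * eta[(v, u)]) for u, v in edges)

        exact = sum(all(not (s[u] and s[v]) for u, v in edges)
                    for s in itertools.product((0, 1), repeat=n))
        print(round(math.exp(logZ), 2), exact)    # Bethe estimate (about 17.9) vs exact 18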

  2. Finding the Common Ground.

    ERIC Educational Resources Information Center

    Wallace, Dawn

    1980-01-01

    Describes an attempt to combine secondary English instruction emphasizing United States literature with science and history by finding "common ground" between these disciplines in (1) the separation of truth from falsehood and (2) logical thinking. Biographies combined history and literature, and science fiction combined science and English;…

  3. Common Standards for All

    ERIC Educational Resources Information Center

    Principal, 2010

    2010-01-01

    About three-fourths of the states have already adopted the Common Core State Standards, which were designed to provide more clarity about and consistency in what is expected of student learning across the country. However, given the brief time since the standards' final release in June, questions persist among educators, who will have the…

  4. Navigating the Common Core

    ERIC Educational Resources Information Center

    McShane, Michael Q.

    2014-01-01

This article presents a debate over the Common Core State Standards Initiative as it has rocketed to the forefront of education policy discussions around the country. The author contends that there is value in having clear cross-state standards that will clarify the new online and blended learning that the growing use of technology has provided…

  5. Information Commons to Go

    ERIC Educational Resources Information Center

    Bayer, Marc Dewey

    2008-01-01

    Since 2004, Buffalo State College's E. H. Butler Library has used the Information Commons (IC) model to assist its 8,500 students with library research and computer applications. Campus Technology Services (CTS) plays a very active role in its IC, with a centrally located Computer Help Desk and a newly created Application Support Desk right in the…

  6. Space station commonality analysis

    NASA Technical Reports Server (NTRS)

    1988-01-01

This study was conducted on the basis of a modification to Contract NAS8-36413, Space Station Commonality Analysis, which was initiated in December 1987 and completed in July 1988. The objective was to investigate the commonality aspects of subsystems and mission support hardware while technology experiments are accommodated on board the Space Station in the mid-to-late 1990s. Two types of mission are considered: (1) Advanced solar arrays and their storage; and (2) Satellite servicing. The point of departure for definition of the technology development missions was a set of missions described in the Space Station Mission Requirements Data Base (MRDB): TDMX 2151 Solar Array/Energy Storage Technology; TDMX 2561 Satellite Servicing and Refurbishment; TDMX 2562 Satellite Maintenance and Repair; TDMX 2563 Materials Resupply (to a free-flyer materials processing platform); TDMX 2564 Coatings Maintenance Technology; and TDMX 2565 Thermal Interface Technology. Issues to be addressed according to the Statement of Work included modularity of programs, data base analysis interactions, user interfaces, and commonality. The study was to consider state-of-the-art advances through the 1990s and to select an appropriate scale for the technology experiments, considering hardware commonality, user interfaces, and mission support requirements. The study was to develop evolutionary plans for the technology advancement missions.

  7. Commonalities across Effective Collaboratives.

    ERIC Educational Resources Information Center

    Russell, Jill F.; Flynn, Richard B.

    2000-01-01

    Examined effective collaborations involving schools and colleges of education and other organizations, identifying commonly voiced reasons for collaboration and factors perceived as important in collaboration. Data come from research, case descriptions, survey responses, and input from collaborators. Willingness to listen, mutual respect,…

  8. The Common School

    ERIC Educational Resources Information Center

    Pring, Richard

    2007-01-01

    The paper is concerned with the conflicting principles revealed respectively by those who argue for the common school and by those who seek to promote a system of schools that, though maintained by the state, might reflect the different religious beliefs within the community. The philosopher, John Dewey, is appealed to in defence of the common…

  9. Solving Common Mathematical Problems

    NASA Technical Reports Server (NTRS)

    Luz, Paul L.

    2005-01-01

    Mathematical Solutions Toolset is a collection of five software programs that rapidly solve some common mathematical problems. The programs consist of a set of Microsoft Excel worksheets. The programs provide for entry of input data and display of output data in a user-friendly, menu-driven format, and for automatic execution once the input data has been entered.

  10. Pleasure: the common currency.

    PubMed

    Cabanac, M

    1992-03-21

At present, as physiologists studying various homeostatic behaviors, such as thermoregulatory behavior and food and fluid intake, we have no common currency that allows us to equate the strength of the motivational drive that accompanies each regulatory need, in terms of how an animal or a person will choose to satisfy his needs when there is a conflict between two or more of them. Yet the behaving organism must rank his priorities and needs a common currency to achieve the ranking (McFarland & Sibly, 1975, Phil. Trans. R. Soc. Lond. B 270, 265-293). A theory is proposed here according to which pleasure is this common currency. The perception of pleasure, as measured operationally and quantitatively by choice behavior (in the case of animals), or by the rating of the intensity of pleasure or displeasure (in the case of humans), can serve as such a common currency. The tradeoffs between various motivations would thus be accomplished by simple maximization of pleasure. In what follows, recent scientific work on this subject will be reviewed briefly and our recent experimental findings will be presented. This will serve as the support for the theoretical position formulated in this essay.

  11. Common Magnets, Unexpected Polarities

    ERIC Educational Resources Information Center

    Olson, Mark

    2013-01-01

    In this paper, I discuss a "misconception" in magnetism so simple and pervasive as to be typically unnoticed. That magnets have poles might be considered one of the more straightforward notions in introductory physics. However, the magnets common to students' experiences are likely different from those presented in educational…

  12. Common Carrier Services.

    ERIC Educational Resources Information Center

    Federal Communications Commission, Washington, DC.

    This bulletin outlines the Federal Communications Commission's (FCC) responsibilities in regulating the interstate and foreign common carrier communication via electrical means. Also summarized are the history, technological development, and current capabilities and prospects of telegraph, wire telephone, radiotelephone, satellite communications,…

  13. Common clay and shale

    USGS Publications Warehouse

    Virta, R.L.

    2003-01-01

    Part of the 2002 industrial minerals review. The production, consumption, and price of shale and common clay in the U.S. during 2002 are discussed. The impact of EPA regulations on brick and structural clay product manufacturers is also outlined.

  14. Human Commonalities and Art

    ERIC Educational Resources Information Center

    Passmore, Kaye

    2008-01-01

    Educator Ernest Boyer believed that well-educated students should do more than master isolated facts. They should understand the "connectedness of things." He suggested organizing curriculum thematically around eight commonalities shared by people around the world. In the book "The Basic School: A Community for Learning," Boyer recommends that…

  15. Does Common Enrollment Work?

    ERIC Educational Resources Information Center

    Carpenter, Dick M., II; Clayton, Grant

    2016-01-01

    In this article, researchers Dick M. Carpenter II and Grant Clayton explore common enrollment systems (CESs)--how they work and what school leaders can learn from districts that have implemented CESs. Denver, New Orleans, and Newark (New Jersey) have rolled out this centralized enrollment process for all district-run and charter schools in their…

  16. Common clay and shale

    USGS Publications Warehouse

    Virta, R.L.

    2001-01-01

    Part of the 2000 annual review of the industrial minerals sector. A general overview of the common clay and shale industry is provided. In 2000, U.S. production increased by 5 percent, while sales or use declined to 23.6 Mt. Despite the slowdown in the economy, no major changes are expected for the market.

  17. Common File Formats.

    PubMed

    Mills, Lauren

    2014-03-21

    An overview of the many file formats commonly used in bioinformatics and genome sequence analysis is presented, including various data file formats, alignment file formats, and annotation file formats. Example workflows illustrate how some of the different file types are typically used.

  18. Common Carrier Services.

    ERIC Educational Resources Information Center

    Federal Communications Commission, Washington, DC.

    After outlining the Federal Communications Commission's (FCC) responsibility for regulating interstate common carrier communication (non-broadcast communication whose carriers are required by law to furnish service at reasonable charges upon request), this information bulletin reviews the history, technological development, and current…

  19. Math, Literacy, & Common Standards

    ERIC Educational Resources Information Center

    Education Week, 2012

    2012-01-01

    Nearly every state has signed on to use the Common Core State Standards as a framework for teaching English/language arts and mathematics to students. Translating them for the classroom, however, requires schools, teachers, and students to change the way they approach teaching and learning. This report examines the progress some states have made…

  20. Approximation of the optimal-time problem for controlled differential inclusions

    SciTech Connect

    Otakulov, S.

    1995-01-01

    One of the common methods for numerical solution of optimal control problems constructs an approximating sequence of discrete control problems. The approximation method is also attractive because it can be used as an effective tool for analyzing optimality conditions and other topics in optimization theory. In this paper, we consider the approximation of optimal-time problems for controlled differential inclusions. The sequence of approximating problems is constructed using a finite-difference scheme, i.e., the differential inclusions are replaced with difference inclusions.

  1. Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods

    NASA Astrophysics Data System (ADS)

    Plantagie, Linda; Batenburg, Kees Joost

    2015-01-01

    We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.

  2. A Partition Function Approximation Using Elementary Symmetric Functions

    PubMed Central

    Anandakrishnan, Ramu

    2012-01-01

In statistical mechanics, the canonical partition function can be used to compute equilibrium properties of a physical system. Calculating the partition function, however, is in general computationally intractable, since the computation scales exponentially with the number of particles in the system. A commonly used method for approximating equilibrium properties is the Monte Carlo (MC) method. For some problems the MC method converges slowly, requiring a very large number of MC steps. For such problems the computational cost of the Monte Carlo method can be prohibitive. Presented here is a deterministic algorithm – the direct interaction algorithm (DIA) – for approximating the canonical partition function. The DIA approximates the partition function as a combinatorial sum of products known as elementary symmetric functions (ESFs). The DIA was used to compute equilibrium properties for the isotropic 2D Ising model, and the accuracy of the DIA was compared to that of the basic Metropolis Monte Carlo method. Our results show that the DIA may be a practical alternative for some problems where the Monte Carlo method converges slowly, and computational speed is a critical constraint, such as for very large systems or web-based applications. PMID:23251504
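
    The ESF building block is easy to sketch: the standard dynamic-programming recurrence below computes e_0..e_k of n numbers in O(nk) operations. The fermionic example is an illustrative use (the canonical partition function of k non-interacting fermions is exactly e_k of the single-level Boltzmann factors), not the paper's Ising application:

        import math

        def elementary_symmetric(xs, kmax):
            # e_k(x_1..x_n) for k = 0..kmax via e_k <- e_k + x * e_{k-1},
            # updated in descending k so each x enters each e_k once: O(n * kmax).
            e = [0.0] * (kmax + 1)
            e[0] = 1.0
            for x in xs:
                for k in range(kmax, 0, -1):
                    e[k] += x * e[k - 1]
            return e

        beta = 1.0
        levels = [0.0, 0.5, 1.0, 1.5, 2.0]                 # single-particle energies
        boltz = [math.exp(-beta * eps) for eps in levels]
        print(elementary_symmetric(boltz, 2)[2])           # Z for 2 fermions in 5 levels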

  3. Legitimacy of the stochastic Michaelis-Menten approximation.

    PubMed

    Sanft, K R; Gillespie, D T; Petzold, L R

    2011-01-01

    Michaelis-Menten kinetics are commonly used to represent enzyme-catalysed reactions in biochemical models. The Michaelis-Menten approximation has been thoroughly studied in the context of traditional differential equation models. The presence of small concentrations in biochemical systems, however, encourages the conversion to a discrete stochastic representation. It is shown that the Michaelis-Menten approximation is applicable in discrete stochastic models and that the validity conditions are the same as in the deterministic regime. The authors then compare the Michaelis-Menten approximation to a procedure called the slow-scale stochastic simulation algorithm (ssSSA). The theory underlying the ssSSA implies a formula that seems in some cases to be different from the well-known Michaelis-Menten formula. Here those differences are examined, and some special cases of the stochastic formulas are confirmed using a first-passage time analysis. This exercise serves to place the conventional Michaelis-Menten formula in a broader rigorous theoretical framework.
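
    A minimal sketch of the reduced discrete-stochastic picture: a Gillespie-style simulation in which a single lumped conversion event fires with the Michaelis-Menten propensity. The parameter values and one-reaction reduction are illustrative assumptions, not the paper's validity analysis:

        import random

        random.seed(1)

        # Reduced stochastic model: S -> P firing with the Michaelis-Menten
        # propensity a(S) = Vmax*S/(Km + S), where Vmax = k2 * E_total.
        Vmax, Km = 2.0, 20.0
        S, P, t = 100, 0, 0.0
        while S > 0:
            a = Vmax * S / (Km + S)        # propensity of the lumped conversion
            t += random.expovariate(a)     # exponential waiting time to the next firing
            S -= 1
            P += 1
        print(t, P)                        # time to exhaust the substrate in this realization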

  4. Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas

    PubMed Central

    Bedford, Tim; Daneshkhah, Alireza

    2015-01-01

Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240

  5. Damping effects in doped graphene: The relaxation-time approximation

    NASA Astrophysics Data System (ADS)

    Kupčić, I.

    2014-11-01

The dynamical conductivity of interacting multiband electronic systems derived by Kupčić et al. [J. Phys.: Condens. Matter 25, 145602 (2013), 10.1088/0953-8984/25/14/145602] is shown to be consistent with the general form of the Ward identity. Using the semiphenomenological form of this conductivity formula, we have demonstrated that the relaxation-time approximation can be used to describe the damping effects in weakly interacting multiband systems only if local charge conservation in the system and gauge invariance of the response theory are properly treated. Such a gauge-invariant response theory is illustrated on the common tight-binding model for conduction electrons in doped graphene. The model predicts two distinctly resolved maxima in the energy-loss-function spectra. The first one corresponds to the intraband plasmons (usually called the Dirac plasmons). On the other hand, the second maximum (π plasmon structure) is simply a consequence of the Van Hove singularity in the single-electron density of states. The dc resistivity and the real part of the dynamical conductivity are found to be well described by the relaxation-time approximation, but only in the parametric space in which the damping is dominated by the direct scattering processes. The ballistic transport and the damping of Dirac plasmons are thus the problems that require abandoning the relaxation-time approximation.

  6. Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas.

    PubMed

    Bedford, Tim; Daneshkhah, Alireza; Wilson, Kevin J

    2016-04-01

Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets.

  7. Common tester platform concept.

    SciTech Connect

    Hurst, Michael James

    2008-05-01

This report summarizes the results of a case study on the doctrine of a common tester platform, a concept of a standardized platform that can be applicable across the broad spectrum of testing requirements throughout the various stages of a weapons program, as well as across the various weapons programs. The common tester concept strives to define an affordable, next-generation design that will meet testing requirements with the flexibility to grow and expand; supporting the initial development stages of a weapons program through to the final production and surveillance stages. This report discusses a concept investigating key leveraging technologies and operational concepts combined with prototype tester-development experiences and practical lessons learned gleaned from past weapons programs.

  8. Common medical pains

    PubMed Central

    Jacobson, Sheila

    2007-01-01

    Pain in infancy and childhood is extremely common. Sources of pain include illness, injury, and medical and dental procedures. Over the past two decades, tremendous progress has been made in the assessment, prevention and treatment of pain. It is important for the paediatric health care provider to be aware of the implications and consequences of pain in childhood. A multitude of interventions are available to reduce or alleviate pain in children of all ages, including neonates. These include behavioural and psychological methods, as well as a host of pharmacological preparations, which are safe and effective when used as indicated. Many complementary and alternative treatments appear to be promising in treating and relieving pain, although further research is required. The present article reviews the most common sources of pain in childhood and infancy, as well as current treatment strategies and options. PMID:19030348

  9. Multiwavelet neural network and its approximation properties.

    PubMed

    Jiao, L; Pan, J; Fang, Y

    2001-01-01

A model of multiwavelet-based neural networks is proposed. Its universal and L(2) approximation properties, together with its consistency, are proved, and the convergence rates associated with these properties are estimated. The structure of this network is similar to that of the wavelet network, except that the orthonormal scaling functions are replaced by orthonormal multiscaling functions. The theoretical analyses show that the multiwavelet network converges more rapidly than the wavelet network, especially for smooth functions. To make a comparison between both networks, experiments are carried out with the Lemarie-Meyer wavelet network, the Daubechies2 wavelet network and the GHM multiwavelet network, and the results support the theoretical analysis well. In addition, the results also illustrate that at the jump discontinuities, the approximation performance of the two networks is about the same.

  10. Flow past a porous approximate spherical shell

    NASA Astrophysics Data System (ADS)

    Srinivasacharya, D.

    2007-07-01

In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier-Stokes equation. The flow within the porous annulus region of the shell is governed by Darcy’s Law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure and the Beavers and Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.

  11. Approximate gauge symmetry of composite vector bosons

    NASA Astrophysics Data System (ADS)

    Suzuki, Mahiko

    2010-08-01

It can be shown in a solvable field theory model that the couplings of the composite vector bosons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to vector bosons made of a fermion pair, we extend it to the case in which the constituents are bosons and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.

  12. Approximate inverse preconditioners for general sparse matrices

    SciTech Connect

    Chow, E.; Saad, Y.

    1994-12-31

Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
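
    A minimal sketch of the column-wise idea: build M ≈ A^-1 one column at a time with a few minimal-residual steps on ||e_j - A m_j||. For clarity the columns are kept dense here and no sparsity pattern is enforced, whereas the preconditioners described above iterate in sparse mode; the 1-D Laplacian test matrix is an illustrative assumption:

        import numpy as np

        def approx_inverse_mr(A, n_iter=10):
            # One minimal-residual descent per column of M, reducing ||e_j - A m_j||.
            n = A.shape[0]
            M = np.zeros((n, n))
            for j in range(n):
                e = np.zeros(n)
                e[j] = 1.0
                m = e / A[j, j]                    # simple diagonal initial guess
                for _ in range(n_iter):
                    r = e - A @ m                  # current column residual
                    Ar = A @ r
                    m += (r @ Ar) / (Ar @ Ar) * r  # optimal step length along the residual
                M[:, j] = m
            return M

        A = 2 * np.eye(8) - np.eye(8, k=1) - np.eye(8, k=-1)   # 1-D Laplacian test matrix
        M = approx_inverse_mr(A)
        print(np.linalg.norm(np.eye(8) - A @ M))   # residual norm of the approximate inverse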

  13. Common drive unit

    NASA Technical Reports Server (NTRS)

    Ellis, R. C.; Fink, R. A.; Moore, E. A.

    1987-01-01

The Common Drive Unit (CDU) is a high-reliability rotary actuator with many versatile applications in mechanism designs. The CDU incorporates a set of redundant motor-brake assemblies driving a single output shaft through a differential. Tachometers provide speed information in the AC version. Operation of both motors, as compared to the operation of one motor, will yield the same output torque with twice the output speed.

  14. Common Skin Cancers

    PubMed Central

    Ho, Vincent C.

    1992-01-01

Melanoma, basal cell carcinoma, and squamous cell carcinoma are the three most common forms of skin cancer. The incidence of skin cancer is increasing at an alarming rate. Early detection is the key to successful management. In this article, the salient clinical features and diagnostic clues for these tumors and their precursor lesions are presented. Current management guidelines are also discussed. PMID:21221380

  15. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2015-01-01

Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a set of dependent failures that can be caused by: system environments; manufacturing; transportation; storage; maintenance; and assembly, as examples. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions to account for a range of values for independent and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.
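
    A worked evaluation of the beta-factor arithmetic behind such a response surface: a one-out-of-two system fails only if both strings fail, either independently or through a single shared cause. The numbers are illustrative, not NRC or launch-vehicle data:

        # Beta-factor model: a fraction beta of the component failure probability p
        # acts as a common cause that fails both strings at once.
        p = 1e-3
        for beta in (0.0, 0.01, 0.05, 0.10):
            independent = ((1.0 - beta) * p) ** 2    # both strings fail on their own
            common = beta * p                        # one shared cause defeats the redundancy
            print(f"beta={beta:.2f}  P(system fails)={independent + common:.2e}")

    Even a small beta dominates: at beta = 0.01 the common-cause term (1e-5) is already an order of magnitude larger than the independent term (about 1e-6), which is why redundancy alone cannot buy down the risk.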

  16. Common neuropathic itch syndromes.

    PubMed

    Oaklander, Anne Louise

    2012-03-01

    Patients with chronic itch are diagnosed and treated by dermatologists. However, itch is a neural sensation and some forms of chronic itch are the presenting symptoms of neurological diseases. Dermatologists need some familiarity with the most common neuropathic itch syndromes to initiate diagnostic testing and to know when to refer to a neurologist. This review summarizes current knowledge, admittedly incomplete, on neuropathic itch caused by diseases of the brain, spinal cord, cranial or spinal nerve-roots, and peripheral nerves.

  17. Common Cause Failure Modeling

    NASA Technical Reports Server (NTRS)

    Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.

    2016-01-01

Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are a set of dependent failures that can be caused by: system environments; manufacturing; transportation; storage; maintenance; and assembly, as examples. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions to account for a range of values for independent and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.

  18. Common Anorectal Disorders

    PubMed Central

    Foxx-Orenstein, Amy E.; Umar, Sarah B.; Crowell, Michael D.

    2014-01-01

Anorectal disorders result in many visits to healthcare specialists. These disorders range from benign conditions such as hemorrhoids to more serious conditions such as malignancy; thus, it is important for the clinician to be familiar with these disorders as well as know how to conduct an appropriate history and physical examination. This article reviews the most common anorectal disorders, including hemorrhoids, anal fissures, fecal incontinence, proctalgia fugax, excessive perineal descent, and pruritus ani, and provides guidelines on comprehensive evaluation and management. PMID:24987313

  19. Approximate active fault detection and control

    NASA Astrophysics Data System (ADS)

    Škach, Jan; Punčochář, Ivo; Šimandl, Miroslav

    2014-12-01

    This paper deals with approximate active fault detection and control for nonlinear discrete-time stochastic systems over an infinite time horizon. Multiple model framework is used to represent fault-free and finitely many faulty models. An imperfect state information problem is reformulated using a hyper-state and dynamic programming is applied to solve the problem numerically. The proposed active fault detector and controller is illustrated in a numerical example of an air handling unit.

  20. Microscopic justification of the equal filling approximation

    SciTech Connect

    Perez-Martin, Sara; Robledo, L. M.

    2008-07-15

    The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.

  1. An Approximation Scheme for Delay Equations.

    DTIC Science & Technology

    1980-06-16

An approximation scheme is developed for delay equations with operators D, L : C([-r, 0]; R^n) → R^n defined by D(φ) = φ(0) - Σ_{j=1}^{m} B_j φ(-r_j) - ∫_{-r}^{0} B(s)φ(s) ds and L(φ) = Σ_{j=0}^{m} A_j φ(-r_j) + ∫_{-r}^{0} A(s)φ(s) ds, where 0 = r_0 < r_1 < ... < r_m = r and the A_j, B_j are n x n matrices.

  2. Solving Math Problems Approximately: A Developmental Perspective

    PubMed Central

    Ganor-Stern, Dana

    2016-01-01

    Although solving arithmetic problems approximately is an important skill in everyday life, little is known about the development of this skill. Past research has shown that when children are asked to solve multi-digit multiplication problems approximately, they provide estimates that are often very far from the exact answer. This is unfortunate as computation estimation is needed in many circumstances in daily life. The present study examined 4th graders, 6th graders and adults’ ability to estimate the results of arithmetic problems relative to a reference number. A developmental pattern was observed in accuracy, speed and strategy use. With age there was a general increase in speed, and an increase in accuracy mainly for trials in which the reference number was close to the exact answer. The children tended to use the sense of magnitude strategy, which does not involve any calculation but relies mainly on an intuitive coarse sense of magnitude, while the adults used the approximated calculation strategy which involves rounding and multiplication procedures, and relies to a greater extent on calculation skills and working memory resources. Importantly, the children were less accurate than the adults, but were well above chance level. In all age groups performance was enhanced when the reference number was smaller (vs. larger) than the exact answer and when it was far (vs. close) from it, suggesting the involvement of an approximate number system. The results suggest the existence of an intuitive sense of magnitude for the results of arithmetic problems that might help children and even adults with difficulties in math. The present findings are discussed in the context of past research reporting poor estimation skills among children, and the conditions that might allow using children estimation skills in an effective manner. PMID:27171224

  3. Oscillation of boson star in Newtonian approximation

    NASA Astrophysics Data System (ADS)

    Jarwal, Bharti; Singh, S. Somorendro

    2017-03-01

Boson star (BS) rotation is studied under the Newtonian approximation. A Coulombian potential term is added as a perturbation to the radial potential of the system without disturbing the angular momentum. The stationary states (the ground state and the first and second excited states) are analyzed with the Coulombian potential correction. It is found that the correction increases the amplitude of oscillation of the BS in comparison to the potential without the perturbative correction.

  4. Approximation methods for stochastic petri nets

    NASA Technical Reports Server (NTRS)

    Jungnitz, Hauke Joerg

    1992-01-01

Stochastic Marked Graphs are a concurrent decision-free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT is much larger than that of marked graphs, e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which has previously been introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.

  5. Three Definitions of Best Linear Approximation

    DTIC Science & Technology

    1976-04-01

Three definitions of best (in the least squares sense) linear approximation to given data points are presented. The relationships among these three are discussed, along with their relationship to basic statistics such as mean values, the covariance matrix, and the (linear) correlation coefficient. For each of the three definitions, the best line is solved in closed form in terms of the data centroid and the covariance matrix.
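
    A small Python illustration of how all three least-squares lines pass through the data centroid and come straight from the covariance matrix; the slopes below minimize vertical, orthogonal, and horizontal offsets respectively, with synthetic data as an assumption:

        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.normal(size=200)
        y = 1.5 * x + rng.normal(scale=0.5, size=200)

        cx, cy = x.mean(), y.mean()         # data centroid: every best line passes through it
        C = np.cov(x, y)                    # 2x2 covariance matrix

        b_yx = C[0, 1] / C[0, 0]            # minimize vertical offsets (regress y on x)
        b_xy = C[1, 1] / C[0, 1]            # minimize horizontal offsets (regress x on y)
        w = np.linalg.eigh(C)[1][:, -1]     # principal eigenvector of the covariance
        b_tls = w[1] / w[0]                 # minimize orthogonal offsets (total least squares)

        print(b_yx, b_tls, b_xy)            # slopes bracket each other; r^2 = b_yx / b_xy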

  6. Nonlinear amplitude approximation for bilinear systems

    NASA Astrophysics Data System (ADS)

    Jung, Chulwoo; D'Souza, Kiran; Epureanu, Bogdan I.

    2014-06-01

    An efficient method to predict vibration amplitudes at the resonant frequencies of dynamical systems with piecewise-linear nonlinearity is developed. This technique is referred to as bilinear amplitude approximation (BAA). BAA constructs a single vibration cycle at each resonant frequency to approximate the periodic steady-state response of the system. It is postulated that the steady-state response is piece-wise linear and can be approximated by analyzing the response over two time intervals during which the system behaves linearly. Overall the dynamics is nonlinear, but the system is in a distinct linear state during each of the two time intervals. Thus, the approximated vibration cycle is constructed using linear analyses. The equation of motion for analyzing the vibration of each state is projected along the overlapping space spanned by the linear mode shapes active in each of the states. This overlapping space is where the vibratory energy is transferred from one state to the other when the system switches from one state to the other. The overlapping space can be obtained using singular value decomposition. The space where the energy is transferred is used together with transition conditions of displacement and velocity compatibility to construct a single vibration cycle and to compute the amplitude of the dynamics. Since the BAA method does not require numerical integration of nonlinear models, computational costs are very low. In this paper, the BAA method is first applied to a single-degree-of-freedom system. Then, a three-degree-of-freedom system is introduced to demonstrate a more general application of BAA. Finally, the BAA method is applied to a full bladed disk with a crack. Results comparing numerical solutions from full-order nonlinear analysis and results obtained using BAA are presented for all systems.

  7. JIMWLK evolution in the Gaussian approximation

    NASA Astrophysics Data System (ADS)

    Iancu, E.; Triantafyllopoulos, D. N.

    2012-04-01

We demonstrate that the Balitsky-JIMWLK equations describing the high-energy evolution of the n-point functions of the Wilson lines (the QCD scattering amplitudes in the eikonal approximation) admit a controlled mean field approximation of the Gaussian type, for any value of the number of colors N_c. This approximation is strictly correct in the weak scattering regime at relatively large transverse momenta, where it reproduces the BFKL dynamics, and in the strong scattering regime deeply at saturation, where it properly describes the evolution of the scattering amplitudes towards the respective black disk limits. The approximation scheme is fully specified by giving the 2-point function (the S-matrix for a color dipole), which in turn can be related to the solution to the Balitsky-Kovchegov equation, including at finite N_c. Any higher n-point function with n ≥ 4 can be computed in terms of the dipole S-matrix by solving a closed system of evolution equations (a simplified version of the respective Balitsky-JIMWLK equations) which are local in the transverse coordinates. For simple configurations of the projectile in the transverse plane, our new results for the 4-point and the 6-point functions coincide with the high-energy extrapolations of the respective results in the McLerran-Venugopalan model. One cornerstone of our construction is a symmetry property of the JIMWLK evolution, that we notice here for the first time: the fact that, with increasing energy, a hadron is expanding its longitudinal support symmetrically around the light-cone. This corresponds to invariance under time reversal for the scattering amplitudes.

  8. Numerical quadratures for approximate computation of ERBS

    NASA Astrophysics Data System (ADS)

    Zanaty, Peter

    2013-12-01

In the foundational paper [3] on expo-rational B-splines (ERBS), the default numerical method for approximate computation of the integral with C∞-smooth integrand in the definition of ERBS is Romberg integration. In the present work, a variety of alternative numerical quadrature methods for computation of ERBS and other integrals with smooth integrands are studied, and their performance is compared on several benchmark examples.
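
    A minimal sketch of the default method, Romberg integration (trapezoid estimates refined by Richardson extrapolation), checked against an alternative quadrature on a smooth benchmark integrand; the Gaussian integrand and the 8-point Gauss-Legendre comparison are illustrative choices, not the paper's benchmark set:

        import numpy as np

        def romberg(f, a, b, levels=6):
            # Romberg tableau R[i, k]: column 0 holds trapezoid estimates on 2^i
            # panels; each further column cancels one more even power of h.
            R = np.zeros((levels, levels))
            h = b - a
            R[0, 0] = 0.5 * h * (f(a) + f(b))
            for i in range(1, levels):
                h /= 2.0
                xs = a + h * np.arange(1, 2 ** i, 2)   # new midpoints at this level
                R[i, 0] = 0.5 * R[i - 1, 0] + h * np.sum(f(xs))
                for k in range(1, i + 1):
                    R[i, k] = R[i, k - 1] + (R[i, k - 1] - R[i - 1, k - 1]) / (4 ** k - 1)
            return R[levels - 1, levels - 1]

        f = lambda x: np.exp(-x ** 2)                  # smooth benchmark integrand
        xg, wg = np.polynomial.legendre.leggauss(8)    # 8-point Gauss-Legendre on [-1, 1]
        print(romberg(f, -1.0, 1.0), np.sum(wg * f(xg)))   # both close to 1.4936483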

  9. Stochastic approximation boosting for incomplete data problems.

    PubMed

    Sexton, Joseph; Laake, Petter

    2009-12-01

    Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.

  10. Capacitor-Chain Successive-Approximation ADC

    NASA Technical Reports Server (NTRS)

    Cunningham, Thomas

    2003-01-01

A proposed successive-approximation analog-to-digital converter (ADC) would contain a capacitively terminated chain of identical capacitor cells. Like a conventional successive-approximation ADC containing a bank of binary-scaled capacitors, the proposed ADC would store an input voltage on a sample-and-hold capacitor and would digitize the stored input voltage by finding the closest match between this voltage and a capacitively generated sum of binary fractions of a reference voltage (Vref). However, the proposed capacitor-chain ADC would offer two major advantages over a conventional binary-scaled-capacitor ADC: (1) In a conventional ADC that digitizes to n bits, the largest capacitor (representing the most significant bit) must have 2^(n-1) times as much capacitance, and hence, approximately 2^(n-1) times as much area as does the smallest capacitor (representing the least significant bit), so that the total capacitor area must be 2^n times that of the smallest capacitor. In the proposed capacitor-chain ADC, there would be three capacitors per cell, each approximately equal to the smallest capacitor in the conventional ADC, and there would be one cell per bit. Therefore, the total capacitor area would be only about 3n times that of the smallest capacitor. The net result would be that the proposed ADC could be considerably smaller than the conventional ADC. (2) Because of edge effects, parasitic capacitances, and manufacturing tolerances, it is difficult to make capacitor banks in which the values of capacitance are scaled by powers of 2 to the required precision. In contrast, because all the capacitors in the proposed ADC would be identical, the problem of precise binary scaling would not arise.
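
    The successive-approximation logic shared by both capacitor topologies is a binary search against Vref; a minimal behavioral sketch with ideal components and no charge-redistribution detail:

        def sar_adc(vin, vref, nbits=8):
            # Binary search: try adding binary fractions of Vref from the most
            # significant bit down, keeping each bit whose trial level fits under vin.
            code, trial = 0, 0.0
            for bit in range(nbits - 1, -1, -1):
                step = vref / (1 << (nbits - bit))   # Vref/2, Vref/4, ..., Vref/2^n
                if trial + step <= vin:
                    trial += step
                    code |= 1 << bit
            return code

        print(sar_adc(0.637, 1.0, nbits=8))   # 163, i.e. floor(0.637 * 256)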

  11. Space-Time Approximation with Sparse Grids

    SciTech Connect

    Griebel, M; Oeltz, D; Vassilevski, P S

    2005-04-14

In this article we introduce approximation spaces for parabolic problems which are based on the tensor product construction of a multiscale basis in space and a multiscale basis in time. Proper truncation then leads to so-called space-time sparse grid spaces. For a uniform discretization of the spatial domain of dimension d with O(N^d) degrees of freedom, these spaces involve for d > 1 also only O(N^d) degrees of freedom for the discretization of the whole space-time problem. But they provide the same approximation rate as classical space-time Finite Element spaces, which need O(N^(d+1)) degrees of freedom. This makes these approximation spaces well suited for conventional parabolic problems and for time-dependent optimization problems. We analyze the approximation properties and the dimension of these sparse grid space-time spaces for general stable multiscale bases. We then restrict ourselves to an interpolatory multiscale basis, i.e., a hierarchical basis. Here, to be able to handle also complicated spatial domains Ω, we construct the hierarchical basis from a given spatial Finite Element basis as follows: First we determine coarse grid points recursively over the levels by the coarsening step of the algebraic multigrid method. Then, we derive interpolatory prolongation operators between the respective coarse and fine grid points by a least squares approach. This way we obtain an algebraic hierarchical basis for the spatial domain which we then use in our space-time sparse grid approach. We give numerical results on the convergence rate of the interpolation error of these spaces for various space-time problems with two spatial dimensions. Implementation issues, data structures, and questions of adaptivity are also addressed to some extent.

  12. Variational Bayesian Approximation methods for inverse problems

    NASA Astrophysics Data System (ADS)

    Mohammad-Djafari, Ali

    2012-09-01

Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computations. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters has also to be addressed. In particular, two specific prior models (Student-t and mixture of Gaussian models) are considered and details of the algorithms are given.

  13. Common Geometry Module

    SciTech Connect

    Tautges, Timothy J.

    2005-01-01

The Common Geometry Module (CGM) is a code library which provides geometry functionality used for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, like geometry creation, query and modification; CGM also includes capabilities not commonly found in solid modeling engines, like geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed beside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers, using MPI-based communication. Future plans for CGM are to port it to different solid modeling engines, including Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.

  14. Large-deformation, elasto-plastic analysis of frames under nonconservative loading, using explicitly derived tangent stiffnesses based on assumed stresses

    NASA Astrophysics Data System (ADS)

    Kondoh, K.; Atluri, S. N.

    1987-03-01

Simple and economical procedures for large-deformation elasto-plastic analysis of frames, whose members can be characterized as beams, are presented. An assumed stress approach is employed to derive the tangent stiffness of the beam, subjected in general to non-conservative type distributed loading. The beam is assumed to undergo arbitrarily large rigid rotations but small axial stretch and relative (non-rigid) point-wise rotations. It is shown that if a plastic-hinge method (with allowance being made for the formation of the hinge at an arbitrary location or locations along the beam) is employed, the tangent stiffness matrix may be derived in an explicit fashion, without numerical integration. Several examples are given to illustrate the relative economy and efficiency of the method in solving large-deformation elasto-plastic problems. The method is of considerable utility in analysing off-shore structures and large structures that are likely to be deployed in outer space.

  15. Normal moveout for long offset in isotropic media using the Padé approximation

    NASA Astrophysics Data System (ADS)

    Song, Han-Jie; Zhang, Jin-Hai; Yao, Zhen-Xing

    2016-12-01

The normal moveout correction is important to long-offset observations, especially deep layers. For isotropic media, the conventional two-term approximation of the normal moveout function assumes a small offset-to-depth ratio and thus fails at large offset-to-depth ratios. We approximate the long-offset moveout using the Padé approximation. This method is superior to typical methods and flattens the seismic gathers over a wide range of offsets in multilayered media. For a four-layer model, traditional methods show traveltime errors of about 5 ms for an offset-to-depth ratio of 2 and greater than 10 ms for an offset-to-depth ratio of 3; in contrast, the maximum traveltime error for the [3, 3]-order Padé approximation is no more than 5 ms at an offset-to-depth ratio of 3. For the Cooper Basin model, the maximum offset-to-depth ratio for the [3, 3]-order Padé approximation is typically double that of typical methods. The [7, 7]-order Padé approximation performs better than the [3, 3]-order Padé approximation.
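
    A small scipy illustration of why a Padé approximant extends validity beyond a truncated series, using (1 + x)^(-1/2), whose Taylor series fails for |x| > 1 much as the two-term moveout fails at large offset-to-depth ratios; the function is an illustrative stand-in for the moveout expression, not the paper's formula:

        import numpy as np
        from scipy.interpolate import pade
        from scipy.special import binom

        an = [binom(-0.5, k) for k in range(7)]   # Taylor coefficients of (1+x)^(-1/2)
        p, q = pade(an, 3)                        # [3, 3]-order Pade approximant p(x)/q(x)

        x = 3.0                                   # far outside the series' radius of convergence
        taylor = sum(c * x ** k for k, c in enumerate(an))
        print(p(x) / q(x), taylor, (1.0 + x) ** -0.5)
        # the Pade value stays near the exact 0.5 while the truncated series has blown up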

  16. Ranking Support Vector Machine with Kernel Approximation

    PubMed Central

    Dou, Yong

    2017-01-01

Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms. PMID:28293256
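
    A minimal sketch of one of the two approximations, random Fourier features, plugged into a pairwise-difference reduction of RankSVM with the squared hinge loss; scikit-learn's RBFSampler and LinearSVC stand in for the paper's primal truncated Newton solver, and the synthetic relevance score is an assumption:

        import numpy as np
        from sklearn.kernel_approximation import RBFSampler
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(3)
        X = rng.normal(size=(300, 5))
        scores = np.sin(X[:, 0]) + X[:, 1] ** 2       # hidden nonlinear relevance score

        # Random Fourier features: z(x).z(y) approximates the RBF kernel k(x, y),
        # so a linear model on z behaves like a kernel model without a kernel matrix.
        rff = RBFSampler(gamma=0.5, n_components=200, random_state=0)
        Z = rff.fit_transform(X)

        # RankSVM reduced to classification on pairwise feature differences:
        # learn w with w.(z_i - z_j) > 0 whenever item i outranks item j.
        i, j = rng.integers(0, 300, 2000), rng.integers(0, 300, 2000)
        keep = scores[i] != scores[j]
        D = Z[i[keep]] - Z[j[keep]]
        y = np.sign(scores[i[keep]] - scores[j[keep]])

        clf = LinearSVC(loss="squared_hinge", C=1.0).fit(D, y)
        print((clf.decision_function(D) * y > 0).mean())   # pairwise ordering accuracy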

  17. Ranking Support Vector Machine with Kernel Approximation.

    PubMed

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. Primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.

  18. Strong washout approximation to resonant leptogenesis

    NASA Astrophysics Data System (ADS)

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj

    2014-09-01

    We show that the effective decay asymmetry for resonant leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y₁|² + |Y₂|²), Δ = 4(M₁ − M₂)/(M₁ + M₂), φ = arg(Y₂/Y₁), and M₁,₂, Y₁,₂ are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y₁,₂|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.
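
    For concreteness, a minimal sketch (our notation, arbitrary toy inputs) evaluating the quoted late-time limit:

    ```python
    # Minimal sketch evaluating the quoted late-time decay asymmetry; the
    # numerical inputs are arbitrary toy values, not fitted parameters.
    import cmath, math

    def epsilon_late(Y1, Y2, M1, M2):
        delta = 4 * (M1 - M2) / (M1 + M2)
        X = 8 * math.pi * delta / (abs(Y1) ** 2 + abs(Y2) ** 2)
        phi = cmath.phase(Y2 / Y1)
        return X * math.sin(2 * phi) / (X ** 2 + math.sin(phi) ** 2)

    print(epsilon_late(1e-4, 1e-4 * cmath.exp(0.3j), 1.0001, 1.0000))
    ```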

  19. An Origami Approximation to the Cosmic Web

    NASA Astrophysics Data System (ADS)

    Neyrinck, Mark C.

    2016-10-01

    The powerful Lagrangian view of structure formation was essentially introduced to cosmology by Zel'dovich. In the current cosmological paradigm, a dark-matter-sheet 3D manifold, inhabiting 6D position-velocity phase space, was flat (with vanishing velocity) at the big bang. Afterward, gravity stretched and bunched the sheet together in different places, forming a cosmic web when projected to the position coordinates. Here, I explain some properties of an origami approximation, in which the sheet does not stretch or contract (an assumption that is false in general), but is allowed to fold. Even without stretching, the sheet can form an idealized cosmic web, with convex polyhedral voids separated by straight walls and filaments, joined by convex polyhedral nodes. The nodes form in `polygonal' or `polyhedral' collapse, somewhat like spherical/ellipsoidal collapse, except incorporating simultaneous filament and wall formation. The origami approximation allows phase-space geometries of nodes, filaments, and walls to be more easily understood, and may aid in understanding spin correlations between nearby galaxies. This contribution explores kinematic origami-approximation models giving velocity fields for the first time.

  20. Approximation abilities of neuro-fuzzy networks

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2010-01-01

    The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems, being a combination of the methodology of artificial neural networks and fuzzy sets, operate on the basis of a set of fuzzy "if-then" rules generated by means of self-organized data grouping and the estimation of relations between fuzzy experiment results. The article describes the Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM) neuro-fuzzy systems and, to complete the picture, a hierarchical structural self-organizing method of teaching a fuzzy network. The multi-layer structure of these systems is analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concern the approximation of functions of several variables for use as algorithms in geographic information systems (approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained are taken into consideration.
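
    As a minimal, hypothetical illustration of the TSK machinery described above, a zero-order system with Gaussian memberships reduces to a weighted average of constant rule consequents; in real systems these parameters are learned from data.

    ```python
    # Minimal zero-order Takagi-Sugeno-Kang (TSK) inference for one input:
    # Gaussian memberships gate constant rule consequents. Parameters are
    # illustrative, not learned as in the systems described above.
    import numpy as np

    centers = np.array([-1.0, 0.0, 1.0])        # membership function centers
    sigma = 0.5                                 # shared membership width
    consequents = np.array([0.2, 1.0, -0.5])    # "then" part of each rule

    def tsk(x):
        w = np.exp(-((x - centers) ** 2) / (2 * sigma**2))   # firing strengths
        return np.sum(w * consequents) / np.sum(w)           # weighted average

    print(tsk(0.3))
    ```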

  1. Approximate Graph Edit Distance in Quadratic Time.

    PubMed

    Riesen, Kaspar; Ferrer, Miquel; Bunke, Horst

    2015-09-14

    Graph edit distance is one of the most flexible and general graph matching models available. The major drawback of graph edit distance, however, is its computational complexity that restricts its applicability to graphs of rather small size. Recently the authors of the present paper introduced a general approximation framework for the graph edit distance problem. The basic idea of this specific algorithm is to first compute an optimal assignment of independent local graph structures (including substitutions, deletions, and insertions of nodes and edges). This optimal assignment is complete and consistent with respect to the involved nodes of both graphs and can thus be used to instantly derive an admissible (yet suboptimal) solution for the original graph edit distance problem in O(n³) time. For large scale graphs or graph sets, however, the cubic time complexity may still be too high. Therefore, we propose to use suboptimal algorithms with quadratic rather than cubic time for solving the basic assignment problem. In particular, the present paper introduces five different greedy assignment algorithms in the context of graph edit distance approximation. In an experimental evaluation we show that these methods have great potential for further speeding up the computation of graph edit distance while the approximated distances remain sufficiently accurate for graph-based pattern classification.
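
    The flavor of the quadratic-time variant can be sketched on a toy cost matrix: replace the optimal (Hungarian) assignment with a single greedy pass over rows. This illustrates the general idea only, not the paper's five specific algorithms.

    ```python
    # Toy contrast between the optimal O(n^3) assignment (Hungarian) and a
    # single greedy pass, illustrating the quadratic-time idea above; the
    # cost matrix is random, not derived from real graph structures.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(1)
    C = rng.uniform(size=(6, 6))                # toy node-substitution costs

    def greedy_assignment(C):
        free = list(range(C.shape[1]))
        total = 0.0
        for i in range(C.shape[0]):             # one pass over rows: O(n^2)
            j = min(free, key=lambda col: C[i, col])
            total += C[i, j]
            free.remove(j)
        return total

    rows, cols = linear_sum_assignment(C)       # optimal assignment
    print(greedy_assignment(C), C[rows, cols].sum())
    ```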

  2. CMB-lensing beyond the Born approximation

    NASA Astrophysics Data System (ADS)

    Marozzi, Giovanni; Fanizza, Giuseppe; Di Dio, Enea; Durrer, Ruth

    2016-09-01

    We investigate the weak lensing corrections to the cosmic microwave background temperature anisotropies considering effects beyond the Born approximation. To this aim, we use the small deflection angle approximation, to connect the lensed and unlensed power spectra, via expressions for the deflection angles up to third order in the gravitational potential. While the small deflection angle approximation has the drawback to be reliable only for multipoles ℓ ≲ 2500, it allows us to consistently take into account the non-Gaussian nature of cosmological perturbation theory beyond the linear level. The contribution to the lensed temperature power spectrum coming from the non-Gaussian nature of the deflection angle at higher order is a new effect which has not been taken into account in the literature so far. It turns out to be the leading contribution among the post-Born lensing corrections. On the other hand, the effect is smaller than corrections coming from non-linearities in the matter power spectrum, and its imprint on CMB lensing is too small to be seen in present experiments.

  3. Green-Ampt approximations: A comprehensive analysis

    NASA Astrophysics Data System (ADS)

    Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.

    2016-04-01

    The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computation time are used to assess model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. The results of this study will be helpful in selecting accurate and simple explicit approximations to the GA model for a variety of hydrological problems.
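
    For reference, the standard implicit GA relation for cumulative infiltration, F = Kt + ψΔθ ln(1 + F/(ψΔθ)), can be solved by fixed-point iteration; the explicit models above are approximations that avoid this iteration. A minimal sketch with placeholder soil parameters:

    ```python
    # Fixed-point solution of the implicit Green-Ampt relation
    #   F = K*t + psi_dtheta * ln(1 + F/psi_dtheta),
    # with placeholder soil parameters (not values from the study).
    import math

    K = 1.0            # saturated hydraulic conductivity (cm/h)
    psi_dtheta = 5.0   # wetting-front suction head x moisture deficit (cm)

    def ga_cumulative_infiltration(t, tol=1e-10):
        F = K * t + psi_dtheta                  # starting guess
        for _ in range(100):                    # contraction: converges fast
            F_new = K * t + psi_dtheta * math.log1p(F / psi_dtheta)
            if abs(F_new - F) < tol:
                break
            F = F_new
        return F

    for t in (0.1, 1.0, 10.0):
        print(t, ga_cumulative_infiltration(t))
    ```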

  4. A coastal ocean model with subgrid approximation

    NASA Astrophysics Data System (ADS)

    Walters, Roy A.

    2016-06-01

    A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume averaged velocity when they are usually referenced to free stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.

  5. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
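
    One crude way to picture such a merit function (our toy construction, not the paper's definitions) is to penalize a surrogate's prediction by a distance-based proxy for approximation error, so that minimizing the merit function balances exploiting the surrogate against improving it:

    ```python
    # Toy merit function for surrogate-based optimization: trade the
    # surrogate's prediction against a distance-based proxy for its error.
    # This is an illustrative construction, not the paper's merit functions.
    import numpy as np

    samples = np.array([0.0, 0.5, 1.0])             # points already evaluated
    surrogate = lambda x: (x - 0.3) ** 2            # hypothetical cheap model

    def merit(x, rho=0.1):
        uncertainty = np.min(np.abs(x - samples))   # distance to nearest sample
        return surrogate(x) - rho * uncertainty     # reward unexplored regions

    xs = np.linspace(0.0, 1.0, 101)
    print(xs[np.argmin([merit(x) for x in xs])])    # next point to evaluate
    ```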

  6. EJ Extra: Mathematical Language and the Common Core State Standards for English

    ERIC Educational Resources Information Center

    Berger, Lisa

    2013-01-01

    The Common Core State Standards (CCSS) urge English language arts teachers to assume responsibility for teaching technical reading, along with literature, poetry, and composition. Ideally, each teacher assumes a share in developing reading proficiency within his or her content area, but state assessments may implicitly compel school districts to…

  7. Approximate analysis of balloting motion of railgun projectiles. Technical report

    SciTech Connect

    Chu, S.H.

    1991-07-01

    This is the final of three reports dealing with the in-bore balloting motion of a projectile fired from an electromagnetic railgun. Knowledge of projectile in-bore motion is important to its design and the design of the railgun. It is a complicated problem since many parameters are involved and it is not easy to determine the interacting relationships between them. To make the problem easier to understand it was analyzed on several levels. Beginning from the basic simple model which computed only the axial motion, more complicated models were introduced in upper levels that included the more significant lateral forces and gun tube vibration effects. This report deals with the approximate analysis of balloting motion. This model considers the effects of the propulsion force, the friction force of the projectile package (projectile and armature), air resistance, gravity, the elastic forces, and the projectile/barrel clearance. To simplify the modeling, a plane motion configuration is assumed. Though the projectile is moving with a varying yaw angle, the axes of the barrel and the projectile package, and the projectile center of gravity are always considered in a plane containing the centerlines of the rails. Equations of motion are derived and solved. A sample computation is performed and the results plotted to give a clearer understanding of projectile in-bore motion.

  8. Breakdown of the adiabatic Born-Oppenheimer approximation in graphene

    NASA Astrophysics Data System (ADS)

    Pisana, Simone; Lazzeri, Michele; Casiraghi, Cinzia; Novoselov, Kostya S.; Geim, A. K.; Ferrari, Andrea C.; Mauri, Francesco

    2007-03-01

    The adiabatic Born-Oppenheimer approximation (ABO) has been the standard ansatz to describe the interaction between electrons and nuclei since the early days of quantum mechanics. ABO assumes that the lighter electrons adjust adiabatically to the motion of the heavier nuclei, remaining at any time in their instantaneous ground state. ABO is well justified when the energy gap between ground and excited electronic states is larger than the energy scale of the nuclear motion. In metals, the gap is zero and phenomena beyond ABO (such as phonon-mediated superconductivity or phonon-induced renormalization of the electronic properties) occur. The use of ABO to describe lattice motion in metals is, therefore, questionable. In spite of this, ABO has proved effective for the accurate determination of chemical reactions, molecular dynamics and phonon frequencies in a wide range of metallic systems. Here, we show that ABO fails in graphene. Graphene, recently discovered in the free state, is a zero-bandgap semiconductor that becomes a metal if the Fermi energy is tuned applying a gate voltage, Vg. This induces a stiffening of the Raman G peak that cannot be described within ABO.

  9. Strong washout approximation to resonant leptogenesis

    SciTech Connect

    Garbrecht, Björn; Gautier, Florian; Klaric, Juraj E-mail: florian.gautier@tum.de

    2014-09-01

    We show that the effective decay asymmetry for resonant leptogenesis in the strong washout regime with two sterile neutrinos and a single active flavour can in wide regions of parameter space be approximated by its late-time limit ε = X sin(2φ)/(X² + sin²φ), where X = 8πΔ/(|Y₁|² + |Y₂|²), Δ = 4(M₁ − M₂)/(M₁ + M₂), φ = arg(Y₂/Y₁), and M₁,₂, Y₁,₂ are the masses and Yukawa couplings of the sterile neutrinos. This approximation in particular extends to parametric regions where |Y₁,₂|² ≫ Δ, i.e. where the width dominates the mass splitting. We generalise the formula for the effective decay asymmetry to the case of several flavours of active leptons and demonstrate how this quantity can be used to calculate the lepton asymmetry for phenomenological scenarios that are in agreement with the observed neutrino oscillations. We establish analytic criteria for the validity of the late-time approximation for the decay asymmetry and compare these with numerical results that are obtained by solving for the mixing and the oscillations of the sterile neutrinos. For phenomenologically viable models with two sterile neutrinos, we find that the flavoured effective late-time decay asymmetry can be applied throughout parameter space.

  10. An approximate projection method for incompressible flow

    NASA Astrophysics Data System (ADS)

    Stevens, David E.; Chan, Stevens T.; Gresho, Phil

    2002-12-01

    This paper presents an approximate projection method for incompressible flows. This method is derived from Galerkin orthogonality conditions using equal-order piecewise linear elements for both velocity and pressure, hereafter Q1Q1. By combining an approximate projection for the velocities with a variational discretization of the continuum pressure Poisson equation, one eliminates the need to filter either the velocity or pressure fields as is often needed with equal-order element formulations. This variational approach extends to multiple types of elements; examples and results for triangular and quadrilateral elements are provided. This method is related to the method of Almgren et al. (SIAM J. Sci. Comput. 2000; 22: 1139-1159) and the PISO method of Issa (J. Comput. Phys. 1985; 62: 40-65). These methods use a combination of two elliptic solves, one to reduce the divergence of the velocities and another to approximate the pressure Poisson equation. Both Q1Q1 and the method of Almgren et al. solve the second Poisson equation with a weak error tolerance to achieve more computational efficiency. A Fourier analysis of Q1Q1 shows that a consistent mass matrix has a positive effect on both accuracy and mass conservation. A numerical comparison with the widely used Q1Q0 (piecewise linear velocities, piecewise constant pressures) on a periodic test case with an analytic solution verifies this analysis. Q1Q1 is shown to have accuracy comparable to Q1Q0 and good agreement with experiment for flow over an isolated cubic obstacle and dispersion of a point source in its wake.
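
    The projection idea itself can be illustrated apart from the finite element setting: solve a pressure Poisson equation for the divergent part of a velocity field and subtract its gradient. The sketch below uses a periodic FFT discretization purely for brevity; the paper's Q1Q1 elements and weak-tolerance solves are not reproduced.

    ```python
    # Generic projection step on a periodic grid (FFT Poisson solve), shown
    # only to illustrate the idea behind projection methods; not the
    # paper's Q1Q1 finite element formulation.
    import numpy as np

    n = 64
    k = 2 * np.pi * np.fft.fftfreq(n)            # wavenumbers, unit spacing
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                               # avoid dividing the mean mode

    rng = np.random.default_rng(0)
    u, v = rng.normal(size=(2, n, n))            # divergent toy velocity field

    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    div_h = 1j * kx * uh + 1j * ky * vh          # Fourier-space divergence
    ph = -div_h / k2                             # solve laplacian(p) = div(u)
    uh, vh = uh - 1j * kx * ph, vh - 1j * ky * ph  # subtract grad(p)

    div = 1j * kx * uh + 1j * ky * vh
    print(np.max(np.abs(np.fft.ifft2(div))))     # ~ machine precision
    ```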

  11. Photoelectron spectroscopy and the dipole approximation

    SciTech Connect

    Hemmers, O.; Hansen, D.L.; Wang, H.

    1997-04-01

    Photoelectron spectroscopy is a powerful technique because it directly probes, via the measurement of photoelectron kinetic energies, orbital and band structure in valence and core levels in a wide variety of samples. The technique becomes even more powerful when it is performed in an angle-resolved mode, where photoelectrons are distinguished not only by their kinetic energy, but by their direction of emission as well. Determining the probability of electron ejection as a function of angle probes the different quantum-mechanical channels available to a photoemission process, because it is sensitive to phase differences among the channels. As a result, angle-resolved photoemission has been used successfully for many years to provide stringent tests of the understanding of basic physical processes underlying gas-phase and solid-state interactions with radiation. One mainstay in the application of angle-resolved photoelectron spectroscopy is the well-known electric-dipole approximation for photon interactions. In this simplification, all higher-order terms, such as those due to electric-quadrupole and magnetic-dipole interactions, are neglected. As the photon energy increases, however, effects beyond the dipole approximation become important. To best determine the range of validity of the dipole approximation, photoemission measurements on a simple atomic system, neon, where extra-atomic effects cannot play a role, were performed at BL 8.0. The measurements show that deviations from "dipole" expectations in angle-resolved valence photoemission are observable for photon energies down to at least 0.25 keV, and are quite significant at energies around 1 keV. From these results, it is clear that non-dipole angular-distribution effects may need to be considered in any application of angle-resolved photoelectron spectroscopy that uses x-ray photons of energies as low as a few hundred eV.

  12. Product-State Approximations to Quantum States

    NASA Astrophysics Data System (ADS)

    Brandão, Fernando G. S. L.; Harrow, Aram W.

    2016-02-01

    We show that for any many-body quantum state there exists an unentangled quantum state such that most of the two-body reduced density matrices are close to those of the original state. This is a statement about the monogamy of entanglement, which cannot be shared without limit in the same way as classical correlation. Our main application is to Hamiltonians that are sums of two-body terms. For such Hamiltonians we show that there exist product states with energy that is close to the ground-state energy whenever the interaction graph of the Hamiltonian has high degree. This proves the validity of mean-field theory and gives an explicitly bounded approximation error. If we allow states that are entangled within small clusters of systems but product across clusters then good approximations exist when the Hamiltonian satisfies one or more of the following properties: (1) high degree, (2) small expansion, or (3) a ground state where the blocks in the partition have sublinear entanglement. Previously this was known only in the case of small expansion or in the regime where the entanglement was close to zero. Our approximations allow an extensive error in energy, which is the scale considered by the quantum PCP (probabilistically checkable proof) and NLTS (no low-energy trivial-state) conjectures. Thus our results put restrictions on the possible Hamiltonians that could be used for a possible proof of the qPCP or NLTS conjectures. By contrast the classical PCP constructions are often based on constraint graphs with high degree. Likewise we show that the parallel repetition that is possible with classical constraint satisfaction problems cannot also be possible for quantum Hamiltonians, unless qPCP is false. The main technical tool behind our results is a collection of new classical and quantum de Finetti theorems which do not make any symmetry assumptions on the underlying states.

  13. Approximations of nonlinear systems having outputs

    NASA Technical Reports Server (NTRS)

    Hunt, L. R.; Su, R.

    1985-01-01

    For a nonlinear system ẋ = f(x) with output y = h(x), two types of linearizations about a point x(0) in state space are considered. One is the usual Taylor series approximation, and the other is defined by linearizing the appropriate Lie derivatives of the output with respect to f about x(0). The latter is called the observation model and appears to be quite natural for observation. It is noted that there is a coordinate system in which these two kinds of linearizations agree. In this coordinate system, a technique to construct an observer is introduced.
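
    A small symbolic sketch (toy system, our choice) contrasts the two linearizations: the Taylor Jacobian of f, and the Jacobian of the stacked Lie derivatives of the output that defines the observation model.

    ```python
    # Symbolic sketch (toy system) contrasting the Taylor linearization of
    # f with the Jacobian of stacked Lie derivatives of the output, which
    # the paper calls the observation model.
    import sympy as sp

    x1, x2 = sp.symbols("x1 x2")
    x = sp.Matrix([x1, x2])
    f = sp.Matrix([x2, -sp.sin(x1)])             # toy dynamics x' = f(x)
    h = sp.Matrix([x1])                          # toy output y = h(x)
    x0 = {x1: 0, x2: 0}

    taylor_A = f.jacobian(x).subs(x0)            # usual Taylor linearization

    Lfh = h.jacobian(x) * f                      # first Lie derivative L_f h
    obs = sp.Matrix.vstack(h, Lfh)               # stack h and L_f h
    obs_lin = obs.jacobian(x).subs(x0)           # observation-model linearization
    print(taylor_A, obs_lin)
    ```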

  14. The monoenergetic approximation in stellarator neoclassical calculations

    NASA Astrophysics Data System (ADS)

    Landreman, Matt

    2011-08-01

    In 'monoenergetic' stellarator neoclassical calculations, to expedite computation, ad hoc changes are made to the kinetic equation so speed enters only as a parameter. Here we examine the validity of this approach by considering the effective particle trajectories in a model magnetic field. We find monoenergetic codes systematically under-predict the true trapped particle fraction. The error in the trapped ion fraction can be of order unity for large but experimentally realizable values of the radial electric field, suggesting some results of these codes may be unreliable in this regime. This inaccuracy is independent of any errors introduced by approximation of the collision operator.

  15. Semiclassical approximations to quantum time correlation functions

    NASA Astrophysics Data System (ADS)

    Egorov, S. A.; Skinner, J. L.

    1998-09-01

    Over the last 40 years several ad hoc semiclassical approaches have been developed in order to obtain approximate quantum time correlation functions, using as input only the corresponding classical time correlation functions. The accuracy of these approaches has been tested for several exactly solvable gas-phase models. In this paper we test the accuracy of these approaches by comparing to an exactly solvable many-body condensed-phase model. We show that in the frequency domain the Egelstaff approach is the most accurate, especially at high frequencies, while in the time domain one of the other approaches is more accurate.

  16. Approximation concepts for numerical airfoil optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1979-01-01

    An efficient algorithm for airfoil optimization is presented. The algorithm utilizes approximation concepts to reduce the number of aerodynamic analyses required to reach the optimum design. Examples are presented and compared with previous results. Optimization efficiency improvements of more than a factor of 2 are demonstrated. Improvements in efficiency are demonstrated when analysis data obtained in previous designs are utilized. The method is a general optimization procedure and is not limited to this application. The method is intended for application to a wide range of engineering design problems.

  17. Approximation Algorithms for Free-Label Maximization

    NASA Astrophysics Data System (ADS)

    de Berg, Mark; Gerrits, Dirk H. P.

    Inspired by air traffic control and other applications where moving objects have to be labeled, we consider the following (static) point labeling problem: given a set P of n points in the plane and labels that are unit squares, place a label with each point in P in such a way that the number of free labels (labels not intersecting any other label) is maximized. We develop efficient constant-factor approximation algorithms for this problem, as well as PTASs, for various label-placement models.

  18. Analytic Approximation to Randomly Oriented Spheroid Extinction

    DTIC Science & Technology

    1993-12-01

    …10⁴ times faster than by the T-matrix code. Since the T-matrix scales as at least the cube of the optical size whereas the analytic approximation is… coefficient estimate, and with the Rayleigh formula. Since it is difficult to estimate the accuracy near the limit of stability of the T-matrix code, some… additional error due to the T-matrix code could be present. [Figure: maximum relative error, analytic approximation vs. T-matrix, r = 1/5.]

  19. Relativistic Random Phase Approximation At Finite Temperature

    SciTech Connect

    Niu, Y. F.; Paar, N.; Vretenar, D.; Meng, J.

    2009-08-26

    The fully self-consistent finite temperature relativistic random phase approximation (FTRRPA) has been established in the single-nucleon basis of the temperature dependent Dirac-Hartree model (FTDH) based on an effective Lagrangian with density dependent meson-nucleon couplings. Illustrative calculations in the FTRRPA framework show the evolution of multipole responses of ¹³²Sn with temperature. With increased temperature, in both monopole and dipole strength distributions additional transitions appear in the low energy region due to the newly opened particle-particle and hole-hole transition channels.

  20. Relativistic equation of state at subnuclear densities in the Thomas-Fermi approximation

    SciTech Connect

    Zhang, Z. W.; Shen, H.

    2014-06-20

    We study the non-uniform nuclear matter using the self-consistent Thomas-Fermi approximation with a relativistic mean-field model. The non-uniform matter is assumed to be composed of a lattice of heavy nuclei surrounded by dripped nucleons. At each temperature T, proton fraction Y_p, and baryon mass density ρ_B, we determine the thermodynamically favored state by minimizing the free energy with respect to the radius of the Wigner-Seitz cell, while the nucleon distribution in the cell can be determined self-consistently in the Thomas-Fermi approximation. A detailed comparison is made between the present results and previous calculations in the Thomas-Fermi approximation with a parameterized nucleon distribution that has been adopted in the widely used Shen equation of state.

  1. Many-body localization phase transition: A simplified strong-randomness approximate renormalization group

    NASA Astrophysics Data System (ADS)

    Zhang, Liangsheng; Zhao, Bo; Devakul, Trithep; Huse, David A.

    2016-06-01

    We present a simplified strong-randomness renormalization group (RG) that captures some aspects of the many-body localization (MBL) phase transition in generic disordered one-dimensional systems. This RG can be formulated analytically and is mathematically equivalent to a domain coarsening model that has been previously solved. The critical fixed-point distribution and critical exponents (that satisfy the Chayes inequality) are thus obtained analytically or to numerical precision. This reproduces some, but not all, of the qualitative features of the MBL phase transition that are indicated by previous numerical work and approximate RG studies: our RG might serve as a "zeroth-order" approximation for future RG studies. One interesting feature that we highlight is that the rare Griffiths regions are fractal. For thermal Griffiths regions within the MBL phase, this feature might be qualitatively correctly captured by our RG. If this is correct beyond our approximations, then these Griffiths effects are stronger than has been previously assumed.

  2. The convergence rate of approximate solutions for nonlinear scalar conservation laws

    NASA Technical Reports Server (NTRS)

    Nessyahu, Haim; Tadmor, Eitan

    1991-01-01

    The convergence rate of approximate solutions for the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, counterexamples show that one must strengthen the linearized L²-stability requirement. It is assumed that the approximate solutions are Lip⁺-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip′-(semi)norm. It is proved that the Lip′-convergence rate of Lip⁺-stable approximate solutions to the entropy solution is of the same order as their Lip′-consistency. The Lip′-convergence rate is then converted into stronger Lᵖ convergence rate estimates.

  3. The convergence rate of approximate solutions for nonlinear scalar conservation laws. Final Report

    SciTech Connect

    Nessyahu, HAIM; Tadmor, EITAN.

    1991-07-01

    The convergence rate of approximate solutions for the nonlinear scalar conservation law is discussed. The linear convergence theory is extended into a weak regime. The extension is based on the usual two ingredients of stability and consistency. On the one hand, counterexamples show that one must strengthen the linearized L²-stability requirement. It is assumed that the approximate solutions are Lip⁺-stable in the sense that they satisfy a one-sided Lipschitz condition, in agreement with Oleinik's E-condition for the entropy solution. On the other hand, the lack of smoothness requires weakening the consistency requirement, which is measured in the Lip′-(semi)norm. It is proved that the Lip′-convergence rate of Lip⁺-stable approximate solutions to the entropy solution is of the same order as their Lip′-consistency. The Lip′-convergence rate is then converted into stronger Lᵖ convergence rate estimates.

  4. Mars Surface Systems Common Capabilities and Challenges for Human Missions

    NASA Technical Reports Server (NTRS)

    Toups, Larry; Hoffman, Stephen J.; Watts, Kevin

    2016-01-01

    This paper describes the current status of common systems and operations as they are applied to actual locations on Mars that are representative of Exploration Zones (EZ) - NASA's term for candidate locations where humans could land, live and work on the martian surface. Given NASA's current concepts for human missions to Mars, an EZ is a collection of Regions of Interest (ROIs) located within approximately 100 kilometers of a centralized landing site. ROIs are areas that are relevant for scientific investigation and/or development/maturation of capabilities and resources necessary for a sustainable human presence. An EZ also contains a habitation site that will be used by multiple human crews during missions to explore and utilize the ROIs within the EZ. The Evolvable Mars Campaign (EMC), a description of NASA's current approach to these human Mars missions, assumes that a single EZ will be identified within which NASA will establish a substantial and durable surface infrastructure that will be used by multiple human crews. The process of identifying and eventually selecting this single EZ will likely take many years to finalize. Because of this extended EZ selection process, it becomes important to evaluate the current suite of surface systems and operations being considered for the EMC as they are likely to perform at a variety of proposed EZ locations and for the types of operations - both scientific and developmental - that are proposed for these candidate EZs. It is also important to evaluate proposed EZs for their suitability to be explored or developed given the range of capabilities and constraints of the types of surface systems and operations being considered within the EMC.

  5. [Hormonal factors in etiology of common acne].

    PubMed

    Bergler-Czop, Beata; Brzezińska-Wcisło, Ligia

    2004-05-01

    Common acne is a chronic seborrhoeic disease characterized by, among other features, blackheads, papulopustular eruptions, purulent cysts, and scarring. Hormonal factors are among the elements inherent in the etiology of the disease. Sebaceous gland cells carry surface receptors for androgens. In the etiopathogenesis of common (simple) acne, a decisive role is played by a derivative of testosterone, 5-alpha-dihydrotestosterone (DHT). However, some experts are of the opinion that there is no correlation between the increased intensity of common acne and other symptoms of hyperandrogenism. Numerous authors assume, however, that in acne patients the sebaceous glands may sometimes react intensely to physiological androgen concentrations. Estrogens can inhibit the release of such androgens. Under physiological conditions, natural progesterone does not intensify seborrhea, but sebum secretion may be triggered by its synthetic counterparts. A hormonal etiology is most distinctly visible in steroid, androgenic, premenstrual, and menopausal acne, as well as in juvenile acne and acne neonatorum. In females affected by acne, hormonal therapy should be managed in consultation with dermatologists, endocrinologists, and gynecologists. Antiandrogenic preparations are applied, such as cyproterone acetate administered concurrently with estrogens and, less frequently, chlormadinone acetate (independently or during estrogenic therapy).

  6. Phase field approximation of dynamic brittle fracture

    NASA Astrophysics Data System (ADS)

    Schlüter, Alexander; Willenbücher, Adrian; Kuhn, Charlotte; Müller, Ralf

    2014-11-01

    Numerical methods that are able to predict the failure of technical structures due to fracture are important in many engineering applications. One of these approaches, the so-called phase field method, represents cracks by means of an additional continuous field variable. This strategy avoids some of the main drawbacks of a sharp interface description of cracks. For example, it is not necessary to track or model crack faces explicitly, which allows a simple algorithmic treatment. The phase field model for brittle fracture presented in Kuhn and Müller (Eng Fract Mech 77(18):3625-3634, 2010) assumes quasi-static loading conditions. However, dynamic effects have a great impact on crack growth in many practical applications. Therefore this investigation presents an extension of the quasi-static phase field model for fracture from Kuhn and Müller (Eng Fract Mech 77(18):3625-3634, 2010) to the dynamic case. First, Hamilton's principle is applied to derive a coupled set of Euler-Lagrange equations that govern the mechanical behaviour of the body as well as the crack growth. Subsequently the model is implemented in a finite element scheme, which allows several test problems to be solved numerically. The numerical examples illustrate the capabilities of the developed approach to dynamic fracture in brittle materials.

  7. Approximate Sensory Data Collection: A Survey

    PubMed Central

    Cheng, Siyao; Cai, Zhipeng; Li, Jianzhong

    2017-01-01

    With the rapid development of the Internet of Things (IoT), wireless sensor networks (WSNs), and related techniques, the amount of sensory data has grown explosively. In some IoT and WSN applications, the size of the sensory data already exceeds several petabytes annually, which poses serious challenges for data collection, a primary operation in IoT and WSN systems. Since exact data collection is not affordable for many WSN and IoT systems due to limitations on bandwidth and energy, many approximate data collection algorithms have been proposed in the last decade. This survey reviews the state of the art of approximate data collection algorithms. We classify them into three categories: model-based, compressive sensing based, and query-driven algorithms. For each category, the advantages and disadvantages are elaborated, some challenges and unsolved problems are pointed out, and the research prospects are forecast. PMID:28287440

  8. Revisiting approximate dynamic programming and its convergence.

    PubMed

    Heydari, Ali

    2014-12-01

    Value iteration-based approximate/adaptive dynamic programming (ADP) as an approximate solution to infinite-horizon optimal control problems with deterministic dynamics and continuous state and action spaces is investigated. The learning iterations are decomposed into an outer loop and an inner loop. A relatively simple proof for the convergence of the outer-loop iterations to the optimal solution is provided using a novel idea with some new features. It presents an analogy between the value function during the iterations and the value function of a fixed-final-time optimal control problem. The inner loop is utilized to avoid the need for solving a set of nonlinear equations or a nonlinear optimization problem numerically, at each iteration of ADP for the policy update. Sufficient conditions for the uniqueness of the solution to the policy update equation and for the convergence of the inner-loop iterations to the solution are obtained. Afterwards, the results are formed as a learning algorithm for training a neurocontroller or creating a look-up table to be used for optimal control of nonlinear systems with different initial conditions. Finally, some of the features of the investigated method are numerically analyzed.
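
    The structure of the outer-loop value iteration can be conveyed by a tabular, discounted stand-in (the paper's setting is continuous-space optimal control with function approximators in place of the table, so this is only an analogy):

    ```python
    # Tabular, discounted stand-in for the outer-loop value iteration; the
    # paper's setting is continuous-state optimal control with function
    # approximators rather than a lookup table.
    import numpy as np

    n_states, n_actions, gamma = 5, 2, 0.9
    rng = np.random.default_rng(0)
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
    R = rng.uniform(size=(n_states, n_actions))                       # rewards

    V = np.zeros(n_states)
    for _ in range(500):                     # outer-loop iterations
        Q = R + gamma * (P @ V)              # Bellman backup for every (s, a)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-12:
            break
        V = V_new
    print(V)
    ```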

  9. Variational extensions of the mean spherical approximation

    NASA Astrophysics Data System (ADS)

    Blum, L.; Ubriaco, M.

    2000-04-01

    In a previous work we proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the variational mean spherical scaling approximation, VMSSA) [Velazquez, Blum, J. Chem. Phys. 110 (1999) 10931; Blum, Velazquez, J. Quantum Chem. (Theochem), in press] can be obtained. The basic idea is that if the interactions can be expressed as a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike equation (OZ) has an analytical solution. This analytical solution is used to construct a robust interpolation scheme, the variational mean spherical scaling approximation (VMSSA). The Helmholtz excess free energy ΔA = ΔE − TΔS is then written as a function of a scaling matrix Γ; both the excess energy ΔE(Γ) and the excess entropy ΔS(Γ) are functionals of Γ. In previous work of this series the form of this functional was found for the two-exponential [Blum, Herrera, Mol. Phys. 96 (1999) 821] and three-exponential closures of the OZ equation [Blum, J. Stat. Phys., submitted for publication]. In this paper we extend this to M Yukawas, a complete basis set: we obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy.

  10. Exact and Approximate Sizes of Convex Datacubes

    NASA Astrophysics Data System (ADS)

    Nedjar, Sébastien

    In various approaches, data cubes are pre-computed in order to efficiently answer OLAP queries. The notion of data cube has been explored in various ways: iceberg cubes, range cubes, differential cubes, and emerging cubes. Previously, we introduced the concept of the convex cube, which generalizes all the quoted variants of cubes. More precisely, the convex cube captures all the tuples satisfying a monotone and/or antimonotone constraint combination. This paper is dedicated to a study of the convex cube size. Knowing the size of such a cube even before computing it has various advantages. First of all, free space can be saved for its storage and data warehouse administration can be improved. However, the main interest of this size knowledge is to choose at best the constraints to apply in order to get a workable result. To aid in calibrating constraints, we propose a sound characterization, based on the inclusion-exclusion principle, of the exact size of the convex cube, as well as an upper bound which can be yielded very quickly. Moreover, we adapt the nearly optimal algorithm HyperLogLog in order to provide a very good approximation of the exact size of convex cubes. Our analytical results are confirmed by experiments: the approximated size of convex cubes is really close to their exact size and can be computed quasi-immediately.

  11. Approximation of Failure Probability Using Conditional Sampling

    NASA Technical Reports Server (NTRS)

    Giesy. Daniel P.; Crespo, Luis G.; Kenney, Sean P.

    2008-01-01

    In analyzing systems which depend on uncertain parameters, one technique is to partition the uncertain parameter domain into a failure set and its complement, and judge the quality of the system by estimating the probability of failure. If this is done by a sampling technique such as Monte Carlo and the probability of failure is small, accurate approximation can require so many sample points that the computational expense is prohibitive. Previous work of the authors has shown how to bound the failure event by sets of such simple geometry that their probabilities can be calculated analytically. In this paper, it is shown how to make use of these failure bounding sets and conditional sampling within them to substantially reduce the computational burden of approximating failure probability. It is also shown how the use of these sampling techniques improves the confidence intervals for the failure probability estimate for a given number of sample points and how they reduce the number of sample point analyses needed to achieve a given level of confidence.
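
    A one-dimensional sketch of the estimator (toy geometry and failure criterion, ours): if the failure set lies inside a bounding set B whose probability is known analytically, sample only inside B and rescale.

    ```python
    # One-dimensional sketch of conditional sampling: the failure set lies
    # in a bounding set B = [2.5, 6.0] whose probability under a standard
    # normal is known analytically, so we sample only inside B and rescale.
    # The failure criterion is a toy stand-in.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    lo, hi = 2.5, 6.0
    p_B = stats.norm.cdf(hi) - stats.norm.cdf(lo)        # analytic P(B)

    # Inverse-CDF sampling of the normal truncated to B.
    u = rng.uniform(stats.norm.cdf(lo), stats.norm.cdf(hi), size=10_000)
    x = stats.norm.ppf(u)

    fails = (x > 3.0) & (x < 5.0) & (np.sin(x) > 0)      # toy failure set in B
    print(p_B * fails.mean())                            # P(B) * P(fail | B)
    ```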

  12. Adaptive Discontinuous Galerkin Approximation to Richards' Equation

    NASA Astrophysics Data System (ADS)

    Li, H.; Farthing, M. W.; Miller, C. T.

    2006-12-01

    Due to the occurrence of large gradients in fluid pressure as a function of space and time, resulting from nonlinearities in closure relations, numerical solutions to Richards' equation are notoriously difficult for certain media properties and auxiliary conditions that occur routinely in describing physical systems of interest. These difficulties have motivated a substantial amount of work aimed at improving numerical approximations to this physically important and mathematically rich model. In this work, we build upon recent advances in temporal and spatial discretization methods by developing spatially and temporally adaptive solution approaches based upon the local discontinuous Galerkin method in space and a higher order backward difference method in time. Spatial step-size adaptation (h-adaptation) approaches are evaluated, and a so-called hp-adaptation strategy, which adjusts both the step size and the order of the approximation, is considered as well. Solution algorithms are advanced and performance is evaluated. The spatially and temporally adaptive approaches are shown to be robust and to offer significant increases in computational efficiency compared to similar state-of-the-art methods that adapt in time alone. In addition, we extend the proposed methods to two dimensions and provide preliminary numerical results.

  13. Perturbed kernel approximation on homogeneous manifolds

    NASA Astrophysics Data System (ADS)

    Levesley, J.; Sun, X.

    2007-02-01

    Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real-world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitalization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal Polynomials of Several Variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that, under some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.

  14. Analytic approximate radiation effects due to Bremsstrahlung

    SciTech Connect

    Ben-Zvi I.

    2012-02-01

    The purpose of this note is to provide analytic approximate expressions that can give quick estimates of the various effects of the Bremsstrahlung radiation produced by relatively low energy electrons, such as the dumping of the beam into the beam stop at the ERL or field emission in superconducting cavities. The purpose of this work is not to replace a dependable calculation or, better yet, a measurement under real conditions, but to provide a quick, approximate estimate for guidance purposes only. These effects include dose to personnel, ozone generation in the air volume exposed to the radiation, hydrogen generation in the beam dump water cooling system, and radiation damage to nearby magnets. These expressions can be used for other purposes, but one should note that the electron beam energy range is limited: in these calculations the valid range is from about 0.5 MeV to 10 MeV. To help in the application of this note, calculations are presented as a worked-out example for the beam dump of the R&D Energy Recovery Linac.

  15. Approximate solutions to fractional subdiffusion equations

    NASA Astrophysics Data System (ADS)

    Hristov, J.

    2011-03-01

    The work presents integral solutions of the fractional subdiffusion equation obtained by an integral method, as an alternative to solutions employing hypergeometric functions. The integral solution assumes a predefined profile with unknown coefficients and the concept of a penetration depth (boundary layer). The prescribed profile satisfies the boundary conditions imposed by the boundary layer, which allows its coefficients to be expressed through its depth as the unique parameter. The integral approach to the fractional subdiffusion equation replaces the real distribution function by the approximate profile. The solution is performed with the Riemann-Liouville time-fractional derivative, since the integral approach avoids the definition of the initial value of the time derivative required by the Laplace-transformed equations, which would lead to a transition to Caputo derivatives. The method is demonstrated by solutions to two simple fractional subdiffusion equations (Dirichlet problems): 1) the time-fractional diffusion equation, and 2) the time-fractional drift equation, both of which have fundamental solutions expressed through the M-Wright function. The solutions demonstrate some basic issues of the suggested integral approach, among them: a) the choice of the profile; b) the integration problem emerging when the distribution (profile) is replaced by a prescribed one with unknown coefficients; c) optimization of the profile so as to minimize the average error of the approximations; d) numerical results allowing comparisons to the known solutions expressed through the M-Wright function and error estimations.

  16. Common hair loss disorders.

    PubMed

    Springer, Karyn; Brown, Matthew; Stulberg, Daniel L

    2003-07-01

    Hair loss (alopecia) affects men and women of all ages and often significantly affects social and psychologic well-being. Although alopecia has several causes, a careful history, close attention to the appearance of the hair loss, and a few simple studies can quickly narrow the potential diagnoses. Androgenetic alopecia, one of the most common forms of hair loss, usually has a specific pattern of temporal-frontal loss in men and central thinning in women. The U.S. Food and Drug Administration has approved topical minoxidil to treat men and women, with the addition of finasteride for men. Telogen effluvium is characterized by the loss of "handfuls" of hair, often following emotional or physical stressors. Alopecia areata, trichotillomania, traction alopecia, and tinea capitis have unique features on examination that aid in diagnosis. Treatment for these disorders and telogen effluvium focuses on resolution of the underlying cause.

  17. [Common anemias in neonatology].

    PubMed

    Humbert, J; Wacker, P

    1999-01-28

    We describe the four most common groups of neonatal anemia and their treatments, with particular emphasis on erythropoietin therapy. The hemolytic anemias include ABO incompatibility (much more frequent nowadays than Rh incompatibility, which has nearly disappeared following the use of anti-D immunoglobulin in postpartum Rh-negative mothers), hereditary spherocytosis, and G-6-PD deficiency. Among hypoplastic anemias, that caused by Parvovirus B19 predominates by far over Diamond-Blackfan anemia, alpha-thalassemia, and the rare sideroblastic anemias. "Hemorrhagic" anemias occur during twin-to-twin transfusions or feto-maternal transfusions. Finally, the multifactorial anemia of prematurity develops principally as a result of the rapid expansion of the blood volume in this group of patients. Erythropoietin therapy, often at doses much higher than those used in adults, should be seriously considered in most cases of non-hypoplastic neonatal anemia, to minimize the use of transfusions.

  18. TMT common software update

    NASA Astrophysics Data System (ADS)

    Gillies, Kim; Brighton, Allan; Buur, Hanne

    2016-08-01

    TMT Common Software (CSW) consists of software services and library code that are used by developers to create the subsystems and components that participate in the software system. CSW also defines the types of components that can be constructed and their functional roles in the software system. TMT CSW has recently passed its preliminary design review. The unique features of CSW include its use of multiple open-source products as the basis for services, and an approach that works to reduce the amount of CSW-provided infrastructure code. Considerable prototyping was completed during this phase to mitigate risk, with results that demonstrate the validity of this design approach and the selected service implementation products. This paper describes the latest design of TMT CSW, its key features, and results from the prototyping effort.

  19. Self-ratings of materialism and status consumption in a Malaysian sample: effects of answering during an assumed recession versus economic growth.

    PubMed

    Jusoh, W J; Heaney, J G; Goldsmith, R E

    2001-06-01

    Consumers' self-assessments of materialism and status consumption may be influenced by external economic conditions. In this study, 239 Malaysian students were asked to describe their levels of materialism using Richins and Dawson's 1992 Materialism scale and status consumption using Eastman, Goldsmith, and Flynn's 1999 Status Consumption Scale. Half the students were told to respond assuming that they were in an expanding economy, and half as if the economy was in a recession. Comparison of the groups' mean scores showed no statistically significant differences.

  20. Common Superficial Bursitis.

    PubMed

    Khodaee, Morteza

    2017-02-15

    Superficial bursitis most often occurs in the olecranon and prepatellar bursae. Less common locations are the superficial infrapatellar and subcutaneous (superficial) calcaneal bursae. Chronic microtrauma (e.g., kneeling on the prepatellar bursa) is the most common cause of superficial bursitis. Other causes include acute trauma/hemorrhage, inflammatory disorders such as gout or rheumatoid arthritis, and infection (septic bursitis). Diagnosis is usually based on clinical presentation, with a particular focus on signs of septic bursitis. Ultrasonography can help distinguish bursitis from cellulitis. Blood testing (white blood cell count, inflammatory markers) and magnetic resonance imaging can help distinguish infectious from noninfectious causes. If infection is suspected, bursal aspiration should be performed and fluid examined using Gram stain, crystal analysis, glucose measurement, blood cell count, and culture. Management depends on the type of bursitis. Acute traumatic/hemorrhagic bursitis is treated conservatively with ice, elevation, rest, and analgesics; aspiration may shorten the duration of symptoms. Chronic microtraumatic bursitis should be treated conservatively, and the underlying cause addressed. Bursal aspiration of microtraumatic bursitis is generally not recommended because of the risk of iatrogenic septic bursitis. Although intrabursal corticosteroid injections are sometimes used to treat microtraumatic bursitis, high-quality evidence demonstrating any benefit is unavailable. Chronic inflammatory bursitis (e.g., gout, rheumatoid arthritis) is treated by addressing the underlying condition, and intrabursal corticosteroid injections are often used. For septic bursitis, antibiotics effective against Staphylococcus aureus are generally the initial treatment, with surgery reserved for bursitis not responsive to antibiotics or for recurrent cases. Outpatient antibiotics may be considered in those who are not acutely ill; patients who are acutely ill