ERIC Educational Resources Information Center
Michaelides, Michalis P.; Haertel, Edward H.
2014-01-01
The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
Approximate natural vibration analysis of rectangular plates with openings using assumed mode method
NASA Astrophysics Data System (ADS)
Cho, Dae Seung; Vladimir, Nikola; Choi, Tae MuK
2013-09-01
Natural vibration analysis of plates with openings of different shapes is an important issue in naval architecture and ocean engineering applications. In this paper, a procedure for vibration analysis of plates with openings and arbitrary edge constraints is presented. It is based on the assumed mode method, where natural frequencies and modes are determined by solving an eigenvalue problem of a multi-degree-of-freedom system matrix equation derived using Lagrange's equations of motion. The presented solution extends a procedure for natural vibration analysis of rectangular plates without openings recently presented in the literature. The effect of an opening is taken into account in an intuitive way, i.e. by subtracting its energy from the total energy of the plate without the opening. Illustrative numerical examples include dynamic analysis of rectangular plates with rectangular, elliptical, circular and oval openings, with various plate thicknesses and different combinations of boundary conditions. The results are compared with those obtained by the finite element method (FEM), as well as with those available in the relevant literature, and very good agreement is achieved.
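The energy-subtraction idea in this abstract can be illustrated with a one-term Rayleigh quotient. This is a hypothetical simplification (a single assumed mode, unit flexural rigidity and mass density, square plate with a central square opening), not the paper's multi-degree-of-freedom procedure:

```python
import numpy as np

# One-term Rayleigh-quotient estimate of a plate natural frequency; the
# opening's strain and kinetic energies are subtracted from the full-plate
# energies (D = 1, rho*h = 1 assumed; all values illustrative).
n = 201
xs = np.linspace(0.0, 1.0, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)

W = X * (1 - X) * Y * (1 - Y)              # assumed admissible mode, w = 0 on edges
lap = -2 * Y * (1 - Y) - 2 * X * (1 - X)   # Laplacian of W

opening = (np.abs(X - 0.5) < 0.1) & (np.abs(Y - 0.5) < 0.1)

U_full = (lap ** 2).sum() * h * h          # bending (strain) energy of full plate
T_full = (W ** 2).sum() * h * h            # kinetic energy coefficient of full plate
U = U_full - (lap[opening] ** 2).sum() * h * h
T = T_full - (W[opening] ** 2).sum() * h * h

omega_full = np.sqrt(U_full / T_full)      # Rayleigh estimate without opening
omega_open = np.sqrt(U / T)                # estimate with opening energy subtracted
```

For this trial function the quotient without the opening evaluates to roughly sqrt(440) ≈ 21, an upper bound on the exact simply supported value 2π² ≈ 19.7, as expected of a Rayleigh estimate.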
Mapping biological entities using the longest approximately common prefix method
2014-01-01
Background The significant growth in the volume of electronic biomedical data in recent decades has pointed to the need for approximate string matching algorithms that can expedite tasks such as named entity recognition, duplicate detection, terminology integration, and spelling correction. The task of source integration in the Unified Medical Language System (UMLS) requires considerable expert effort despite the presence of various computational tools. This problem warrants the search for a new method for approximate string matching and its UMLS-based evaluation. Results This paper introduces the Longest Approximately Common Prefix (LACP) method as an algorithm for approximate string matching that runs in linear time. We compare the LACP method for performance, precision and speed to nine other well-known string matching algorithms. As test data, we use two multiple-source samples from the Unified Medical Language System (UMLS) and two SNOMED Clinical Terms-based samples. In addition, we present a spell checker based on the LACP method. Conclusions The Longest Approximately Common Prefix method completes its string similarity evaluations in less time than all nine string similarity methods used for comparison. The Longest Approximately Common Prefix outperforms these nine approximate string matching methods in its Maximum F1 measure when evaluated on three out of the four datasets, and in its average precision on two of the four datasets. PMID:24928653
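As a rough illustration of the prefix idea (the paper's exact LACP scoring may differ), a linear-time prefix similarity tolerating a bounded number of mismatches might look like:

```python
def lacp_similarity(a: str, b: str, max_mismatches: int = 1) -> float:
    """Length of the longest approximately common prefix, allowing a
    bounded number of character mismatches, normalized by the longer
    string. Runs in linear time: a single left-to-right scan."""
    mismatches = 0
    prefix_len = 0
    for ca, cb in zip(a, b):
        if ca != cb:
            mismatches += 1
            if mismatches > max_mismatches:
                break
        prefix_len += 1
    longest = max(len(a), len(b))
    return prefix_len / longest if longest else 1.0
```

For example, "color" and "colour" share an approximately common prefix of length 5 under one allowed mismatch, giving a similarity of 5/6.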
Performance Improvement Assuming Complexity
ERIC Educational Resources Information Center
Rowland, Gordon
2007-01-01
Individual performers, work teams, and organizations may be considered complex adaptive systems, while most current human performance technologies appear to assume simple determinism. This article explores the apparent mismatch and speculates on future efforts to enhance performance if complexity rather than simplicity is assumed. Included are…
NASA Astrophysics Data System (ADS)
Stanke, Monica; Bubin, Sergiy; Adamowicz, Ludwik
2009-06-01
Very accurate variational calculations of the fundamental pure vibrational transitions of the 3He4He+ and 7LiH+ ions are performed within a framework that does not assume the Born-Oppenheimer (BO) approximation. The non-BO wave functions, expanded in terms of one-center explicitly correlated Gaussian functions multiplied by even powers of the internuclear distance, are used to calculate the leading relativistic corrections. Up to 10000 Gaussian functions are used for each state. It is shown that the experimental 3He4He+ fundamental transition is reproduced within 0.06 cm-1 by the calculations. A similar precision is expected for the calculated, but still unmeasured, fundamental transition of 7LiH+. Thus, three-electron diatomic systems are calculated with an accuracy similar to that of two-electron systems.
NASA Astrophysics Data System (ADS)
Endres, J.; Diener, A.; Wurm, M.; Bodermann, B.
2014-04-01
Scatterometry is a common tool for the dimensional characterization of periodic nanostructures. It is an indirect measurement method, where the dimensions and geometry of the structures under test are reconstructed from the measured scatterograms applying inverse rigorous calculations. This approach is numerically very elaborate so that usually a number of approximations are used. The influence of each approximation has to be analysed to quantify its contribution to the uncertainty budget. This is a fundamental step to achieve traceability. In this paper, we experimentally investigate two common approximations: the effect of a finite illumination spot size and the application of a more advanced structure model for the reconstruction. We show that the illumination spot size affects the sensitivity to sample inhomogeneities but has no influence on the reconstruction parameters, whereas additional corner rounding of the trapezoidal grating profile significantly improves the reconstruction result.
2012-01-01
Background Lysosomal storage disorders (LSD) are a rare cause of non-immunological hydrops fetalis (NIHF) and congenital ascites. The reported incidence is about 1%. The incidence of idiopathic NIHF is estimated to be about 18%. Patients and methods We report four cases of transient hydrops fetalis resulting from LSD and present a literature review of LSD associated with NIHF and congenital ascites. Results At present, 12 different LSDs are described as associated with NIHF or congenital ascites. Most patients had a family history of NIHF in which the preceding sibling had not been examined. A diagnostic approach to the fetus with NIHF due to suspected LSD, either in utero or postnatally, is suggested. Transient forms of NIHF and/or ascites in association with MPS IVA, MPS VII and NPC are described for the first time in this publication. Conclusions LSD should be considered in transient hydrops. Enzymatic studies in chorionic villous samples or cultured amniotic cells, once the most common conditions associated with fetal ascites or hydrops have been ruled out, are important. This paper emphasizes that the incidence of LSD in NIHF is significantly higher than the 1% estimated in previous studies, which is important for genetic counseling, given the high risk of recurrence and the availability of enzyme replacement therapy for an increasing number of LSD. PMID:23137060
Collaboration: Assumed or Taught?
ERIC Educational Resources Information Center
Kaplan, Sandra N.
2014-01-01
The relationship between collaboration and gifted and talented students often is assumed to be an easy and successful learning experience. However, the transition from working alone to working with others necessitates an understanding of issues related to ability, sociability, and mobility. Collaboration has been identified as both an asset and a…
NASA Astrophysics Data System (ADS)
Dai, Liyi
2016-05-01
Stochastic optimization is a fundamental problem that finds applications in many areas, including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information about the sample objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm when the gradient is approximated using finite differences generated through common random numbers. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, the rate can be increased to n^(-2/5) in general, and to n^(-1/2), the best possible rate of stochastic approximation, in Monte Carlo optimization for a broad class of problems, in the iteration number n.
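A minimal sketch of a Kiefer-Wolfowitz step using finite differences generated with common random numbers, assuming a simple quadratic objective with additive noise (the objective, gain sequences, and names here are illustrative, not from the paper):

```python
import random

def kw_step(theta, f, c, a, seed):
    # Finite-difference gradient estimate; both noisy evaluations reuse
    # the same seed (common random numbers), so shared noise cancels.
    g = (f(theta + c, random.Random(seed)) -
         f(theta - c, random.Random(seed))) / (2 * c)
    return theta - a * g

def noisy_quadratic(x, rng):
    # E[f(x)] = (x - 3)^2 with additive Gaussian noise.
    return (x - 3.0) ** 2 + rng.gauss(0.0, 0.1)

theta = 0.0
for n in range(1, 501):
    theta = kw_step(theta, noisy_quadratic,
                    c=0.5 / n ** 0.25, a=1.0 / n, seed=n)
```

Because the noise is additive and both evaluations draw the same random value, the difference is noise-free here and the iterates converge to the minimizer at 3; with independent seeds the same scheme would converge far more slowly.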
NASA Technical Reports Server (NTRS)
Blundell, Colin; Giannakopoulou, Dimitra; Pasareanu, Corina S.
2005-01-01
Verification techniques for component-based systems should ideally be able to predict properties of the assembled system through analysis of individual components before assembly. This work introduces such a modular technique in the context of testing. Assume-guarantee testing relies on the (automated) decomposition of key system-level requirements into local component requirements at design time. Developers can verify the local requirements by checking components in isolation; failed checks may indicate violations of system requirements, while valid traces from different components compose via the assume-guarantee proof rule to potentially provide system coverage. These local requirements also form the foundation of a technique for efficient predictive testing of assembled systems: given a correct system run, this technique can predict violations by alternative system runs without constructing those runs. We discuss the application of our approach to testing a multi-threaded NASA application, where we treat threads as components.
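The assume-guarantee proof rule mentioned above can be illustrated with a toy trace-set model, a deliberate simplification in which components and properties are finite sets of traces over a shared alphabet and composition is trace intersection:

```python
# Toy model of the classic asymmetric assume-guarantee rule:
#   <A> M1 <P>  and  <true> M2 <A>  imply  <true> M1 || M2 <P>.
# Satisfaction is trace containment; composition is intersection.

def satisfies(traces, prop):
    return traces <= prop

M1 = {"ab", "ba"}   # component 1 behaviours
M2 = {"ab", "aa"}   # component 2 behaviours
A  = {"ab", "aa"}   # assumption on M1's environment
P  = {"ab"}         # system-level property

premise1 = satisfies(M1 & A, P)    # <A> M1 <P>
premise2 = satisfies(M2, A)        # <true> M2 <A>
conclusion = satisfies(M1 & M2, P) # follows from the two premises
```

In this model the rule is sound by construction: M1 & M2 is contained in M1 & A whenever M2 is contained in A, so the conclusion follows from the premises.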
Common fixed points in best approximation for Banach operator pairs with Ciric type I-contractions
NASA Astrophysics Data System (ADS)
Hussain, N.
2008-02-01
The common fixed point theorems, similar to those of Ciric [Lj.B. Ciric, On a common fixed point theorem of a Gregus type, Publ. Inst. Math. (Beograd) (N.S.) 49 (1991) 174-178; Lj.B. Ciric, On Diviccaro, Fisher and Sessa open questions, Arch. Math. (Brno) 29 (1993) 145-152; Lj.B. Ciric, On a generalization of Gregus fixed point theorem, Czechoslovak Math. J. 50 (2000) 449-458], Fisher and Sessa [B. Fisher, S. Sessa, On a fixed point theorem of Gregus, Internat. J. Math. Math. Sci. 9 (1986) 23-28], Jungck [G. Jungck, On a fixed point theorem of Fisher and Sessa, Internat. J. Math. Math. Sci. 13 (1990) 497-500] and Mukherjee and Verma [R.N. Mukherjee, V. Verma, A note on fixed point theorem of Gregus, Math. Japon. 33 (1988) 745-749], are proved for a Banach operator pair. As applications, common fixed point and approximation results for Banach operator pair satisfying Ciric type contractive conditions are obtained without the assumption of linearity or affinity of either T or I. Our results unify and generalize various known results to a more general class of noncommuting mappings.
NASA Astrophysics Data System (ADS)
Xiao, Jian-Zhong; Sun, Jing; Huang, Xuan
2010-02-01
In this paper a (k+1)-step iterative scheme with error terms involving k+1 asymptotically quasi-nonexpansive mappings is studied. In general Banach spaces, sufficient and necessary conditions are given for the iterative scheme to approximate a common fixed point. In uniformly convex Banach spaces, power equicontinuity for a mapping is introduced and a series of new convergence theorems is established. Several known results in the current literature are extended and refined.
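A toy one-step analogue of such a scheme, cycling a Mann-type iteration through two mappings that share a fixed point (the paper's (k+1)-step scheme with error terms is far more general; the mappings here are illustrative):

```python
import math

def approximate_common_fixed_point(maps, x0, steps=300, alpha=0.5):
    # Mann-type iteration x_{n+1} = (1 - alpha) * x_n + alpha * T(x_n),
    # cycling through the given mappings.
    x = x0
    for n in range(steps):
        T = maps[n % len(maps)]
        x = (1 - alpha) * x + alpha * T(x)
    return x

T1 = lambda x: x / 2 + 1           # fixed point 2
T2 = lambda x: math.sqrt(2 * x)    # fixed point 2 (for x > 0)

x_star = approximate_common_fixed_point([T1, T2], x0=10.0)
```

Both mappings fix the point 2, and the averaged iteration drives the iterates to that common fixed point from any positive start.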
Undamped critical speeds of rotor systems using assumed modes
NASA Astrophysics Data System (ADS)
Nelson, H. D.; Chen, W. J.
1993-07-01
A procedure is presented to reduce the DOF of a discrete rotordynamics model by utilizing an assumed-modes Rayleigh-Ritz approximation. Many possibilities exist for the assumed modes and any reasonable choice will yield a reduced-order model with adequate accuracy for most applications. The procedure provides an option which can be implemented with relative ease and may prove beneficial for many applications where computational efficiency is particularly important.
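A minimal sketch of the assumed-modes Rayleigh-Ritz reduction, using a simple mass-spring chain as a stand-in for a discrete rotordynamics model (the system and the sine basis are illustrative assumptions, not the paper's rotor model):

```python
import numpy as np

# Full model: a 20-DOF chain of unit masses and unit springs (a real
# application would use a finite element rotor-bearing model).
n = 20
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Assumed modes: three smooth sine shapes form the Rayleigh-Ritz basis.
x = np.arange(1, n + 1) / (n + 1)
Phi = np.column_stack([np.sin(j * np.pi * x) for j in (1, 2, 3)])

# Reduced-order mass and stiffness matrices, and the reduced eigenproblem.
Mr = Phi.T @ M @ Phi
Kr = Phi.T @ K @ Phi
lam = np.sort(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real)
reduced_freqs = np.sqrt(lam)

# First three frequencies of the full model, for comparison (M = I).
full_freqs = np.sqrt(np.sort(np.linalg.eigvalsh(K)))[:3]
```

The reduced 3-DOF model reproduces the lowest three natural frequencies of the 20-DOF chain here because the sine shapes happen to be its exact mode shapes; with a "reasonable choice" of assumed modes, as the abstract puts it, the reduced frequencies are close rather than exact.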
Assume-Guarantee Reasoning for Deadlock
2006-09-01
and non-circular assume-guarantee rules [Pnueli 85, de Roever 98, Barringer 03]. Amla and colleagues have presented a sound and complete assume-guarantee method in the context of an abstract process composition framework [Amla 03]. However, they do not discuss deadlock detection or explore the use of... NY: Springer-Verlag, July 2005. [Amla 03] Amla, N.; Emerson, E. A.; Namjoshi, K. S.; & Trefler, R. J. "Abstract Patterns of Compositional Reasoning"
ERIC Educational Resources Information Center
Beaton, Albert E., Jr.
Commonality analysis is an attempt to understand the relative predictive power of the regressor variables, both individually and in combination. The squared multiple correlation is broken up into elements assigned to each individual regressor and to each possible combination of regressors. The elements have the property that the appropriate sums…
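For two regressors, the decomposition described above reduces to three elements: the part unique to each regressor and the part common to both. A sketch on synthetic data (variable names and the data-generating model are illustrative):

```python
import numpy as np

def r2(X, y):
    # Squared multiple correlation of y on the columns of X (intercept added).
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=500)   # regressors share variance
y = x1 + x2 + rng.normal(size=500)

R2_1 = r2(x1, y)
R2_2 = r2(x2, y)
R2_12 = r2(np.column_stack([x1, x2]), y)

U1 = R2_12 - R2_2          # unique to x1
U2 = R2_12 - R2_1          # unique to x2
C = R2_1 + R2_2 - R2_12    # common to x1 and x2
```

By construction the elements sum to the squared multiple correlation, U1 + U2 + C = R2_12, which is the property the abstract refers to.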
Empirical progress and nomic truth approximation revisited.
Kuipers, Theo A F
2014-06-01
In my From Instrumentalism to Constructive Realism (2000) I have shown how an instrumentalist account of empirical progress can be related to nomic truth approximation. However, it was assumed that a strong notion of nomic theories was needed for that analysis. In this paper it is shown, in terms of truth and falsity content, that the analysis already applies when, in line with scientific common sense, nomic theories are merely assumed to exclude certain conceptual possibilities as nomic possibilities.
Assumed policy similarity and voter preference.
Quist, Ryan M; Crano, William D
2003-04-01
The effects of attitude similarity on voters' preferences were examined. Using secondary analyses, the authors created measures of assumed similarity across 6 issues between voters and U.S. presidential candidates (in 1972). Greater similarity was associated with greater attraction (operationalized in terms of voters' presidential preferences). In 2 independent analyses, perceived similarity resulted in predictive accuracy of 84% to 88%. In a 3rd analysis, the predictive efficiency of each of 6 similarity measures was determined and used to develop a model that accurately predicted voters' actions in a hold-out sample. Findings demonstrate the importance of perceived attitude similarity in determining voter preferences and suggest the utility of earlier similarity-attraction research for the development of models of policy choice behavior.
UN projections assume fertility decline, mortality increase.
Haub, C
1998-12-01
This article summarizes the latest findings from the UN Population Division's 1998 review of World Population Estimates and Projections. The revisions reflect lower future population size and faster rates of fertility and mortality decline. The medium variant of population projection for 2050 indicates 8.9 billion, which is 458 million lower than projected in 1996 and 924 million lower than projected in 1994. The changes are due to changes in developing countries. Africa's changes accounted for over 50% of the change. The UN medium projection assumes that the desire for fewer children and effective contraceptive practice will continue and that the availability of family planning services will increase. The revisions are also attributed to the widespread prevalence of AIDS in sub-Saharan Africa and greater chances for lower fertility in developing countries. AIDS mortality may decrease average life expectancy in 29 African countries by 7 years. The UN medium projection assumes a decline in fertility from 2.7 children/woman during 1995-2000 to 2.0 children/woman by 2050. The UN high variant is 10.7 billion by 2050; the low variant is 7.3 billion. It is concluded that efforts of national governments and international agencies have contributed to increased access to reproductive health services and subsequent fertility decline. Future declines will depend on accessibility issues. Despite declines, world population is still growing by 78 million annually. Even countries such as Botswana, with 25% of the population infected with HIV/AIDS, will double in size by 2050.
Examining roles pharmacists assume in disasters: a content analytic approach.
Ford, Heath; Dallas, Cham E; Harris, Curt
2013-12-01
Numerous practice reports recommend roles pharmacists may adopt during disasters. This study examines the peer-reviewed literature for factors that explain the roles pharmacists assume in disasters and the differences in roles and disasters when stratified by time. Quantitative content analysis was used to gather data consisting of words and phrases from peer-reviewed pharmacy literature regarding pharmacists' roles in disasters. Negative binomial regression and Kruskal-Wallis nonparametric models were applied to the data. Pharmacists' roles in disasters have not changed significantly since the 1960s. Pharmaceutical supply remains their preferred role, while patient management and response integration roles decrease in context of common, geographically widespread disasters. Policy coordination roles, however, significantly increase in nuclear terrorism planning. Pharmacists' adoption of nonpharmaceutical supply roles may represent a problem of accepting a paradigm shift in nontraditional roles. Possible shortages of personnel in future disasters may change the pharmacists' approach to disaster management.
24 CFR 203.512 - Free assumability; exceptions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Free assumability; exceptions. 203... AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Servicing Responsibilities General Requirements § 203.512 Free assumability; exceptions. (a) Policy of free assumability with no restrictions. A mortgagee shall not impose...
24 CFR 234.66 - Free assumability; exceptions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as provided...
24 CFR 203.512 - Free assumability; exceptions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Free assumability; exceptions. 203... AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Servicing Responsibilities General Requirements § 203.512 Free assumability; exceptions. (a) Policy of free assumability with no restrictions. A mortgagee shall not impose...
24 CFR 203.41 - Free assumability; exceptions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Free assumability; exceptions. 203... § 203.41 Free assumability; exceptions. (a) Definitions. As used in this section: (1) Low- or moderate... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with no...
24 CFR 234.66 - Free assumability; exceptions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as provided...
24 CFR 203.512 - Free assumability; exceptions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Free assumability; exceptions. 203... AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Servicing Responsibilities General Requirements § 203.512 Free assumability; exceptions. (a) Policy of free assumability with no restrictions. A mortgagee shall not impose...
24 CFR 203.512 - Free assumability; exceptions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Free assumability; exceptions. 203... AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Servicing Responsibilities General Requirements § 203.512 Free assumability; exceptions. (a) Policy of free assumability with no restrictions. A mortgagee shall not impose...
24 CFR 234.66 - Free assumability; exceptions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as provided...
24 CFR 203.41 - Free assumability; exceptions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 24 Housing and Urban Development 2 2011-04-01 2011-04-01 false Free assumability; exceptions. 203... § 203.41 Free assumability; exceptions. (a) Definitions. As used in this section: (1) Low- or moderate... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with no...
24 CFR 234.66 - Free assumability; exceptions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as provided...
24 CFR 203.41 - Free assumability; exceptions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Free assumability; exceptions. 203... § 203.41 Free assumability; exceptions. (a) Definitions. As used in this section: (1) Low- or moderate... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with no...
24 CFR 234.66 - Free assumability; exceptions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as provided...
24 CFR 203.41 - Free assumability; exceptions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Free assumability; exceptions. 203... § 203.41 Free assumability; exceptions. (a) Definitions. As used in this section: (1) Low- or moderate... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with no...
24 CFR 203.512 - Free assumability; exceptions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Free assumability; exceptions. 203... AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Servicing Responsibilities General Requirements § 203.512 Free assumability; exceptions. (a) Policy of free assumability with no restrictions. A mortgagee shall not impose...
24 CFR 203.41 - Free assumability; exceptions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Free assumability; exceptions. 203... § 203.41 Free assumability; exceptions. (a) Definitions. As used in this section: (1) Low- or moderate... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with no...
Kashinski, D O; Chase, G M; Nelson, R G; Di Nallo, O E; Scales, A N; VanderLey, D L; Byrd, E F C
2017-03-23
We propose new approximate global multiplicative scaling factors for the DFT calculation of ground-state harmonic vibrational frequencies using functionals from the TPSS, M06, and M11 families with the standard correlation-consistent cc-pVxZ and aug-cc-pVxZ (x = D, T, and Q) basis sets, the 6-311G split-valence family, and the Sadlej and Sapporo polarized triple-ζ basis sets. Results for the B3LYP, CAM-B3LYP, B3PW91, PBE, and PBE0 functionals with these basis sets are also reported. A total of 99 harmonic frequencies were calculated for 26 gas-phase organic and inorganic molecules typically found in detonated solid propellant residue. Our proposed approximate multiplicative scaling factors are determined using a least-squares approach comparing the computed harmonic frequencies to experimental counterparts well established in the scientific literature. A comparison of our work to previously published global scaling factors is made to verify method reliability and the applicability of our molecular test set.
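A least-squares multiplicative scaling factor of this kind has a simple closed form; a sketch, assuming the standard formulation that minimizes the sum of squared residuals sum((c*nu_calc - nu_exp)^2), which the abstract does not spell out:

```python
def scaling_factor(calc, exp):
    """Least-squares multiplicative scaling factor c minimizing
    sum((c * calc_i - exp_i)^2), i.e. c = sum(calc*exp) / sum(calc^2)."""
    num = sum(c * e for c, e in zip(calc, exp))
    den = sum(c * c for c in calc)
    return num / den
```

For instance, two computed frequencies of 1000 and 2000 cm-1 against experimental values of 960 and 1940 cm-1 give a factor of 0.968.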
Sirivichayakul, Sunee; Tirawatnapong, Thaweesak; Ruxrungtham, Kiat; Oelrichs, Robert; Lorenzen, Sven-Lver; Xin, Ke-Qin; Okuda, Kenji; Phanuphak, Praphan
2004-03-01
DNA immunization represents one of the promising HIV-1 vaccine approaches. To overcome the obstacle of genetic variation, we used the last common ancestor (LCA) or "center-of-the-tree" approach to study a DNA fragment of the HIV-1 envelope surrounding the V3 region. A humanized codon of the 297-bp consensus ancestral sequence of the HIV-1 envelope (codons 291-391) was derived from the 80 most recent HIV-1 isolates from the 8 circulating HIV-1 subtypes worldwide. This 297-bp humanized "multi-clade" V3 DNA was amplified by a PCR-based technique. The PCR product was well expressed in vitro whereas the corresponding non-humanized V3 DNA (subtype A/E) could not be expressed. However, both V3 DNA constructs as well as the full-length HIV-1 envelope construct (A/E) were found to be immunogenic in mice by the footpad-swelling assay. Moreover, intracellular and extracellular interferon-gamma could be detected upon in vitro stimulation of spleen cells although the response was relatively weak. Further improvement of our humanized V3 DNA is needed.
Perceptual and Emotional Effects of Assuming a Disability.
ERIC Educational Resources Information Center
Raines, Shanan R.; And Others
The effects of assuming a disability in changing attitudes towards persons with disabilities were assessed in 18 undergraduate students who were enrolled in an introductory rehabilitation counseling course. The subjects were instructed to engage in two levels of assumed disability (one-hand bound and two-hands bound) in three settings (private…
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate "yes" or "no" decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that quickly produce solutions that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of the limits to obtaining such performance guarantees; this area has been one of the most flourishing in discrete mathematics and theoretical computer science. PMID:9370525
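A classic example of such a performance guarantee, not specific to this abstract, is the maximal-matching 2-approximation for minimum vertex cover:

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of a greedily built maximal matching.
    The resulting cover is at most twice the size of an optimal
    vertex cover: a provable performance guarantee."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(0, 1), (1, 2), (2, 3)]
cover = vertex_cover_2approx(edges)
```

On this path graph the optimum cover has 2 vertices, and the algorithm returns at most 4, matching the factor-2 guarantee while running in linear time.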
Assume-Guarantee Abstraction Refinement Meets Hybrid Systems
NASA Technical Reports Server (NTRS)
Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas
2014-01-01
Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction refinement in the context of hybrid automata.
Abstraction and Assume-Guarantee Reasoning for Automated Software Verification
NASA Technical Reports Server (NTRS)
Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.
2004-01-01
Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software, and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework, and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
Study on Beijing University Returned Overseas Students Assuming Leadership Posts
ERIC Educational Resources Information Center
Chinese Education and Society, 2004
2004-01-01
In response to requests from the Central Committee's Organization Department and the Organization Department of the Beijing Municipal Party Committee, a monographic study on the subject of Beijing University's returned overseas students assuming leadership posts, was conducted. Information was obtained in various quarters by means of informal…
46 CFR 174.075 - Compartments assumed flooded: general.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 7 2010-10-01 2010-10-01 false Compartments assumed flooded: general. 174.075 Section 174.075 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore Drilling...
46 CFR 174.075 - Compartments assumed flooded: general.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 7 2013-10-01 2013-10-01 false Compartments assumed flooded: general. 174.075 Section 174.075 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) SUBDIVISION AND STABILITY SPECIAL RULES PERTAINING TO SPECIFIC VESSEL TYPES Special Rules Pertaining to Mobile Offshore Drilling...
A Report on Women West Point Graduates Assuming Nontraditional Roles.
ERIC Educational Resources Information Center
Yoder, Janice D.; Adams, Jerome
In 1980 the first women graduated from the military and college training program at West Point. To investigate the progress of both male and female graduates as they assume leadership roles in the regular Army, 35 women and 113 men responded to a survey assessing career involvement and planning, commitment and adjustment, and satisfaction.…
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
Modeling turbulent/chemistry interactions using assumed pdf methods
NASA Technical Reports Server (NTRS)
Gaffney, R. L., Jr.; White, J. A.; Girimaji, S. S.; Drummond, J. P.
1992-01-01
Two assumed probability density functions (pdfs) are employed for computing the effect of temperature fluctuations on chemical reaction. The pdfs assumed for this purpose are the Gaussian and the beta densities of the first kind. The pdfs are first used in a parametric study to determine the influence of temperature fluctuations on the mean reaction-rate coefficients. Results indicate that temperature fluctuations significantly affect the magnitude of the mean reaction-rate coefficients of some reactions depending on the mean temperature and the intensity of the fluctuations. The pdfs are then tested on a high-speed turbulent reacting mixing layer. Results clearly show a decrease in the ignition delay time due to increases in the magnitude of most of the mean reaction rate coefficients.
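The effect described in this abstract, that temperature fluctuations change the mean reaction-rate coefficient, can be illustrated with a minimal numerical sketch. This is not the paper's implementation: it averages a generic Arrhenius-type rate over an assumed Gaussian temperature pdf, and the parameters A and Ta are hypothetical illustrative values.

```python
import numpy as np

def arrhenius(T, A=1.0, Ta=15000.0):
    # Generic Arrhenius-type rate coefficient; A and Ta are illustrative values
    return A * np.exp(-Ta / T)

def mean_rate_gaussian(T_mean, T_rms, n=20001):
    # Average the rate over an assumed Gaussian temperature pdf,
    # clipped to positive temperatures and renormalized
    T = np.linspace(max(1.0, T_mean - 5 * T_rms), T_mean + 5 * T_rms, n)
    w = np.exp(-0.5 * ((T - T_mean) / T_rms) ** 2)
    w /= w.sum()
    return float(np.sum(arrhenius(T) * w))

T_mean = 1000.0
print(f"no fluctuations: k = {arrhenius(T_mean):.3e}")
for intensity in (0.1, 0.2):  # rms fluctuation as a fraction of the mean
    k = mean_rate_gaussian(T_mean, intensity * T_mean)
    print(f"T'/T = {intensity:.1f}    : k = {k:.3e}")
```

Because the rate is strongly convex in T at these conditions, the fluctuation-averaged coefficient exceeds the coefficient at the mean temperature, consistent with the abstract's finding that fluctuations significantly affect mean reaction-rate coefficients.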
Chemically reacting supersonic flow calculation using an assumed PDF model
NASA Technical Reports Server (NTRS)
Farshchi, M.
1990-01-01
This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.
Asynchronous variational integration using continuous assumed gradient elements
Wolff, Sebastian; Bucher, Christian
2013-01-01
Asynchronous variational integration (AVI) is a tool which improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length to each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) (Liu, 2009) [1], exemplified by continuous assumed gradient elements (Wolff and Bucher, 2011) [2]. The article presents the main ideas of the modified AVI, gives implementation notes, and provides a recipe for estimating the critical time step. PMID:23543620
17. Photographic copy of photograph. Location unknown but assumed to ...
17. Photographic copy of photograph. Location unknown but assumed to be upper end of canal. Features no longer extant. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1925. Vol. I, Narrative and Photographs, Irrigation District #4, California and Southern Arizona, RG 75, Entry 655, Box 28, National Archives, Washington, DC.) Photographer unknown. MAIN (TITLED FLORENCE) CANAL, WASTEWAY, SLUICEWAY, & BRIDGE, 1/26/25. - San Carlos Irrigation Project, Marin Canal, Amhurst-Hayden Dam to Picacho Reservoir, Coolidge, Pinal County, AZ
Statistical motor number estimation assuming a binomial distribution.
Blok, Joleen H; Visser, Gerhard H; de Graaf, Sándor; Zwarts, Machiel J; Stegeman, Dick F
2005-02-01
The statistical method of motor unit number estimation (MUNE) uses the natural stochastic variation in a muscle's compound response to electrical stimulation to obtain an estimate of the number of recruitable motor units. The current method assumes that this variation follows a Poisson distribution. We present an alternative that instead assumes a binomial distribution. Results of computer simulations and of a pilot study on 19 healthy subjects showed that the binomial MUNE values are considerably higher than those of the Poisson method, and in better agreement with the results of other MUNE techniques. In addition, simulation results predict that the performance in patients with severe motor unit loss will be better for the binomial than Poisson method. The adapted method remains closer to physiology, because it can accommodate the increase in activation probability that results from rising stimulus intensity. It does not need recording windows as used with the Poisson method, and is therefore less user-dependent and more objective and quicker in its operation. For these reasons, we believe that the proposed modifications may lead to significant improvements in the statistical MUNE technique.
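The contrast between the two distributional assumptions can be illustrated with a toy simulation. This is a hedged sketch, not the clinical MUNE protocol: it assumes n identical motor units that each fire with probability p and contribute equal amplitude s, then shows that moment matching under the binomial assumption recovers the unit count, while the Poisson-assumed single-unit size, var/mean, is biased toward s(1 - p).

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: n_true identical motor units, each firing with
# probability p per stimulus and contributing amplitude s (a strong
# simplification of real motor-unit physiology).
n_true, p, s, n_stim = 100, 0.3, 1.0, 200_000
amps = rng.binomial(n_true, p, size=n_stim) * s

mu, var = amps.mean(), amps.var()

# Poisson assumption: var = mu * s_hat  =>  single-unit size var/mu,
# which for binomial firing tends to the biased value s*(1 - p)
s_poisson = var / mu

# Binomial assumption (s known): mu = n*p*s, var = n*p*(1-p)*s**2
p_hat = 1.0 - var / (mu * s)
n_hat = mu / (p_hat * s)

print(f"Poisson-assumed unit size: {s_poisson:.3f} (true {s})")
print(f"binomial estimate of n:    {n_hat:.1f} (true {n_true})")
```

The binomial moment equations also show why the adapted method can accommodate an activation probability p that rises with stimulus intensity, as the abstract notes.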
Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang
2015-07-31
We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.
Pythagorean Approximations and Continued Fractions
ERIC Educational Resources Information Center
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
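The coincidence described in this abstract can be checked directly. A minimal sketch under the standard recurrences: the continued-fraction convergents of √2 = [1; 2, 2, 2, ...] and the classical Pythagorean side-and-diagonal numbers, (s, d) → (s + d, 2s + d), generate identical sequences of rational approximations.

```python
from fractions import Fraction

def cf_convergents(n):
    """First n convergents of sqrt(2) = [1; 2, 2, 2, ...]."""
    h_prev, h = 1, 1      # numerator recurrence h_i = 2*h_{i-1} + h_{i-2}
    k_prev, k = 0, 1      # denominator recurrence, same form
    out = [Fraction(h, k)]
    for _ in range(n - 1):
        h_prev, h = h, 2 * h + h_prev
        k_prev, k = k, 2 * k + k_prev
        out.append(Fraction(h, k))
    return out

def side_diagonal(n):
    """First n Pythagorean side/diagonal approximations d/s of sqrt(2)."""
    s, d = 1, 1
    out = [Fraction(d, s)]
    for _ in range(n - 1):
        s, d = s + d, 2 * s + d
        out.append(Fraction(d, s))
    return out

assert cf_convergents(8) == side_diagonal(8)
print(*side_diagonal(8))  # 1 3/2 7/5 17/12 41/29 ...
```

The shared recurrence is also what connects these approximations to Fibonacci-like sequences and, via the analogous [1; 1, 1, ...] expansion, to the golden section mentioned in the abstract.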
An assumed partition algorithm for determining processor inter-communication
Baker, A H; Falgout, R D; Yang, U M
2005-09-23
The recent advent of parallel machines with tens of thousands of processors is presenting new challenges for obtaining scalability. A particular challenge for large-scale scientific software is determining the inter-processor communications required by the computation when a global description of the data is unavailable or too costly to store. We present a type of rendezvous algorithm that determines communication partners in a scalable manner by assuming the global distribution of the data. We demonstrate the scaling properties of the algorithm on up to 32,000 processors in the context of determining communication patterns for a matrix-vector multiply in the hypre software library. Our algorithm is very general and is applicable to a variety of situations in parallel computing.
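The core trick, assuming a uniform global distribution so that any processor can compute an index's presumed owner in O(1) with no stored partition table, can be sketched as follows. This is an illustrative sketch, not the hypre implementation; the function names are hypothetical, and the real algorithm adds a rendezvous step to reconcile the assumed partition with the actual one.

```python
def assumed_start(p, N, P):
    # First global index assumed to be owned by processor p
    # (uniform partition of N indices over P processors)
    return (p * N) // P

def assumed_owner(i, N, P):
    # Processor assumed to own global index i: an O(1) closed-form query,
    # so no processor needs a global description of the data
    return ((i + 1) * P - 1) // N

N, P = 10, 3
owners = [assumed_owner(i, N, P) for i in range(N)]
print(owners)  # [0, 0, 0, 1, 1, 1, 2, 2, 2, 2]

# consistency check: each index falls inside its owner's assumed block
for i in range(N):
    p = assumed_owner(i, N, P)
    assert assumed_start(p, N, P) <= i < assumed_start(p + 1, N, P)
```

Because both queries are closed-form, the scheme scales to tens of thousands of processors without the O(P) storage of an explicit partition table.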
Organohalogens in nature: More widespread than previously assumed
Asplund, G.; Grimvall, A.
1991-08-01
Although the natural production of organohalogens has been observed in several studies, it is generally assumed to be much smaller than the industrial production of these compounds. Nevertheless, two important natural sources have been known since the 1970s: red algae in marine ecosystems produce large amounts of brominated compounds, and methyl halides of natural origin are present in the atmosphere. During the past few years it has been shown that organohalogens are so widespread in groundwater, surface water, and soil that all samples in the studies referred to contain measurable amounts of absorbable organohalogens (AOX). The authors document the widespread occurrence of organohalogens in unpolluted soil and water and discuss possible sources of these compounds. It has been suggested that these organohalogens originate from long-range atmospheric transport of industrially produced compounds. The authors review existing evidence of enzymatically mediated halogenation of organic matter in soil and show that, most probably, natural halogenation in the terrestrial environment is the largest source.
Repulsion or attraction? Group membership and assumed attitude similarity.
Chen, Fang Fang; Kenrick, Douglas T
2002-07-01
Three studies investigated group membership effects on similarity-attraction and dissimilarity-repulsion. Membership in an in-group versus out-group was expected to create initially different levels of assumed attitude similarity. In 3 studies, ratings made after participants learned about the target's attitudes were compared with initial attraction based only on knowing target's group membership. Group membership was based on political affiliation in Study 1 and on sexual orientation in Study 2. Study 3 crossed political affiliation with target's obnoxiousness. Attitude dissimilarity produced stronger repulsion effects for in-group than for out-group members in all studies. Attitude similarity produced greater increments in attraction for political out-group members but not for targets with a stigmatic sexual orientation or personality characteristic.
Plasma expansion into vacuum assuming a steplike electron energy distribution.
Kiefer, Thomas; Schlegel, Theodor; Kaluza, Malte C
2013-04-01
The expansion of a semi-infinite plasma slab into vacuum is analyzed with a hydrodynamic model implying a steplike electron energy distribution function. Analytic expressions for the maximum ion energy and the related ion distribution function are derived and compared with one-dimensional numerical simulations. The choice of the specific non-Maxwellian initial electron energy distribution automatically ensures the conservation of the total energy of the system. The estimated ion energies may differ by an order of magnitude from the values obtained with an adiabatic expansion model supposing a Maxwellian electron distribution. Furthermore, good agreement with data from experiments using laser pulses of ultrashort durations τ(L) is found when the steplike electron energy distribution is assumed.
Assumed Probability Density Functions for Shallow and Deep Convection
NASA Astrophysics Data System (ADS)
Bogenschutz, Peter A.; Krueger, Steven K.; Khairoutdinov, Marat
2010-04-01
The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PDF families are based on the double Gaussian form and the remaining two are the single Gaussian and a Double Delta Function (analogous to a mass flux model). The assumed PDF method is tested for grid sizes ranging from 0.4 km to 204.8 km. In addition, studies are performed for PDF sensitivity to errors in the input moments and for how well the PDFs diagnose some higher-order moments. In general, the double Gaussian PDFs more accurately represent SGS cloud structure and turbulence moments in the boundary layer compared to the single Gaussian and Double Delta Function PDFs for the range of grid sizes tested. This is especially true for small SGS cloud fractions. While the most complex PDF, Lewellen-Yoh, better represents shallow convective cloud properties (cloud fraction and liquid water mixing ratio) compared to the less complex Analytic Double Gaussian 1 PDF, there appears to be no advantage in implementing Lewellen-Yoh for deep convection. However, the Analytic Double Gaussian 1 PDF better represents the liquid water flux, is less sensitive to errors in the input moments, and diagnoses higher order moments more accurately. Between the Lewellen-Yoh and Analytic Double Gaussian 1 PDFs, it appears that neither family is distinctly better at representing cloudy layers. However, due to the reduced computational cost and fairly robust results, it appears that the Analytic Double Gaussian 1 PDF could be an ideal family for SGS cloud and turbulence representation in coarse-grid models.
Students Learn Statistics When They Assume a Statistician's Role.
ERIC Educational Resources Information Center
Sullivan, Mary M.
Traditional elementary statistics instruction for non-majors has focused on computation. Rarely have students had an opportunity to interact with real data sets or to use questioning to drive data analysis, common activities among professional statisticians. Inclusion of data gathering and analysis into whole class and small group activities…
Sensitivity of Global Warming Potentials to the assumed background atmosphere
Wuebbles, D.J.; Patten, K.O.
1992-03-05
This is the first in a series of papers in which we will examine various aspects of the Global Warming Potential (GWP) concept and the sensitivity and uncertainties associated with the GWP values derived for the 1992 updated scientific assessment report of the Intergovernmental Panel on Climate Change (IPCC). One of the authors of this report (DJW) helped formulate the GWP concept for the first IPCC report in 1990. The Global Warming Potential concept was developed for that report as an attempt to fulfill the request from policymakers for a way of relating the potential effects on climate from various greenhouse gases, in much the same way as the Ozone Depletion Potential (ODP) concept (Wuebbles, 1981) is used in policy analyses related to concerns about the relative effects of CFCs and other compounds on stratospheric ozone destruction. We are also coauthors of the section on radiative forcing and Global Warming Potentials for the 1992 IPCC update; however, there was too little time to prepare much in the way of new research material for that report. Nonetheless, we have recognized for some time that there are a number of uncertainties and limitations associated with the definition of GWPs used in both the original and new IPCC reports. In this paper, we examine one of those uncertainties, namely, the effect of the assumed background atmospheric concentrations on the derived GWPs. Later papers will examine the sensitivity of GWPs to other uncertainties and limitations in the current concept.
Computing functions by approximating the input
NASA Astrophysics Data System (ADS)
Goldberg, Mayer
2012-12-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their output. Our approach assumes only the most rudimentary knowledge of algebra and trigonometry, and makes no use of calculus.
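One concrete instance of this input-approximation idea can be sketched in a few lines. This is an illustrative example, not necessarily the construction used in the article: to evaluate log2(10), approximate the *input* 10 by an exactly representable power 2^(p/q), i.e. find integers with 2^p ≈ 10^q; the rational output p/q is then exact for the approximated input. Only integer arithmetic is needed.

```python
from fractions import Fraction

def log2_of_10(q):
    """Approximate log2(10) as p/q by approximating the input:
    pick the power 2**p closest (in ratio) to the exactly known 10**q."""
    target = 10 ** q
    p = target.bit_length() - 1          # 2**p <= 10**q < 2**(p+1)
    # choose p or p+1, whichever power of two is closer in ratio,
    # using only integer comparisons: 10**q / 2**p vs 2**(p+1) / 10**q
    if target * target > 2 ** (2 * p + 1):
        p += 1
    return Fraction(p, q)

for q in (3, 28, 100):
    r = log2_of_10(q)
    print(q, r, float(r))
```

Larger q refines the input approximation and hence the answer: q = 3 gives the classic 10/3, while q = 28 gives 93/28 ≈ 3.3214, already close to log2(10) ≈ 3.3219.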
A 4-node assumed-stress hybrid shell element with rotational degrees of freedom
NASA Technical Reports Server (NTRS)
Aminpour, Mohammad A.
1990-01-01
An assumed-stress hybrid/mixed 4-node quadrilateral shell element is introduced that alleviates most of the deficiencies associated with such elements. The formulation of the element is based on the assumed-stress hybrid/mixed method using the Hellinger-Reissner variational principle. The membrane part of the element has 12 degrees of freedom including rotational or drilling degrees of freedom at the nodes. The bending part of the element also has 12 degrees of freedom. The bending part of the element uses the Reissner-Mindlin plate theory which takes into account the transverse shear contributions. The element formulation is derived from an 8-node isoparametric element. This process is accomplished by assuming quadratic variations for both in-plane and out-of-plane displacement fields and linear variations for both in-plane and out-of-plane rotation fields along the edges of the element. In addition, the degrees of freedom at midside nodes are approximated in terms of the degrees of freedom at corner nodes. During this process the rotational degrees of freedom at the corner nodes enter into the formulation of the element. The stress field is expressed in the element natural-coordinate system such that the element remains invariant with respect to node numbering.
Second Approximation to Conical Flows
1950-12-01
[Garbled OCR of the report's front matter and analysis (Wright Air Development Center, approved for public release). The recoverable content: starting from the isentropic equations of motion, each approximation depends on the preceding one; here the second approximation, i.e., the terms in C and δ², is computed.]
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
Interpolation and Approximation Theory.
ERIC Educational Resources Information Center
Kaijser, Sten
1991-01-01
Introduced are the basic ideas of interpolation and approximation theory through a combination of theory and exercises written for extramural education at the university level. Topics treated are spline methods, Lagrange interpolation, trigonometric approximation, Fourier series, and polynomial approximation. (MDH)
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and natural flavor conservation and democracy models are explored. Implications for neutrino physics are also discussed.
An overview of sheet metal forming simulations with enhanced assumed strain elements
Valente, R. A. F.; Sousa, R. J. A. de; Cardoso, R. P. R.; Simoes, F.; Gracio, J.; Jorge, R. M. N.; Yoon, J. W.
2007-05-17
Sheet metal forming operations are characterized by extreme shape changes in initially flat or pre-formed blanks, thus needing complex and robust simulation tools for their correct virtual analysis. Among numerical approaches, finite element procedures are one of the most common techniques in modelling and simulation of such manufacturing applications. However, reliable simulations of sheet forming of complex parts must be able to correctly reproduce the deformation patterns involved, but also accurately predict the appearance of defects after or during forming stages. Among the most common defects in the forming of metallic parts, spring-back and wrinkling are of crucial importance from a manufacturing viewpoint. Spring-back can be traced to the onset of traction instabilities when the tools depart the blank, due to a rearrangement of stress fields after forming (or between forming stages) as the unloaded blank reaches a new equilibrium. On the other hand, wrinkling defects can be seen as compression-dominated defects and, in this sense, be dealt with as buckling-type structural instabilities. In this work, a class of solid-shell finite elements, based on distinct features but relying on the enhanced assumed strain approach, is tested in the simulation of sheet metal forming operations on metallic components. Results obtained from these elements, specially designed to treat transverse shear and volumetric locking effects, are then compared with well-established references in the literature, including experimental and numerical studies where, for the latter case, shell finite elements are dominantly used.
25 CFR 170.610 - What IRR Program functions may a tribe assume under ISDEAA?
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false What IRR Program functions may a tribe assume under... Agreements Under Isdeaa § 170.610 What IRR Program functions may a tribe assume under ISDEAA? A tribe may assume all IRR Program functions and activities that are otherwise contractible under a...
NASA Astrophysics Data System (ADS)
Niiniluoto, Ilkka
2014-03-01
Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).
Approximate symmetries of Hamiltonians
NASA Astrophysics Data System (ADS)
Chubb, Christopher T.; Flammia, Steven T.
2017-08-01
We explore the relationship between approximate symmetries of a gapped Hamiltonian and the structure of its ground space. We start by considering approximate symmetry operators, defined as unitary operators whose commutators with the Hamiltonian have norms that are sufficiently small. We show that approximate symmetry operators can be restricted to the ground space while approximately preserving certain mutual commutation relations. We generalize the Stone-von Neumann theorem to matrices that approximately satisfy the canonical (Heisenberg-Weyl-type) commutation relations and use this to show that approximate symmetry operators can certify the degeneracy of the ground space even though they only approximately form a group. Importantly, the notions of "approximate" and "small" are all independent of the dimension of the ambient Hilbert space and depend only on the degeneracy in the ground space. Our analysis additionally holds for any gapped band of sufficiently small width in the excited spectrum of the Hamiltonian, and we discuss applications of these ideas to topological quantum phases of matter and topological quantum error correcting codes. Finally, in our analysis, we also provide an exponential improvement upon bounds concerning the existence of shared approximate eigenvectors of approximately commuting operators under an added normality constraint, which may be of independent interest.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
Indexing the approximate number system.
Inglis, Matthew; Gilmore, Camilla
2014-01-01
Much recent research attention has focused on understanding individual differences in the approximate number system (ANS), a cognitive system believed to underlie human mathematical competence. To date, researchers have used four main indices of ANS acuity, and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects.
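For readers unfamiliar with the indices being compared, two of them can be sketched in a toy simulation. This is a hedged illustration with hypothetical parameters, using the standard ANS comparison model P(correct) = Φ(|n1 - n2| / (w·sqrt(n1² + n2²))): simulated accuracy on a nonsymbolic comparison task, and the Weber fraction w recovered from that accuracy.

```python
import math
import random

random.seed(1)

def p_correct(n1, n2, w):
    # Standard ANS model: probability of a correct comparison judgment
    z = abs(n1 - n2) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Simulate a comparison task for a hypothetical subject with true w = 0.2
true_w = 0.2
trials = [(8, 10), (10, 12), (12, 16), (16, 20)] * 500
answers = [random.random() < p_correct(a, b, true_w) for a, b in trials]
accuracy = sum(answers) / len(answers)

# Recover w by a simple grid search over the model's predicted accuracy
def predicted_accuracy(w):
    return sum(p_correct(a, b, w) for a, b in trials) / len(trials)

w_hat = min((abs(predicted_accuracy(w) - accuracy), w)
            for w in [i / 1000 for i in range(50, 500)])[1]

print(f"accuracy = {accuracy:.3f}, recovered w = {w_hat:.3f}")
```

Both indices summarize the same simulated behavior here; the study's point is that, on real test-retest data, accuracy is the more reliable of the two.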
NASA Astrophysics Data System (ADS)
Barry, D. A.; Parlange, J.-Y.; Li, L.; Jeng, D.-S.; Crapper, M.
2005-10-01
The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W₋₁ function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W₋₁ function and vice versa. An infinite family of asymptotic expansions to W₋₁ is presented. Although these expansions do not converge near the branch point of the W function (corresponds to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W₋₁ that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 × 10⁻⁵%. This error is orders of magnitude lower than any existing analytical approximations.
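The leading terms of the standard asymptotic expansion for the W₋₁ branch can be sketched and checked numerically. This is a generic illustration, not one of the paper's new approximations: the asymptotic seed W₋₁(x) ≈ L1 - L2 + L2/L1, with L1 = ln(-x) and L2 = ln(-L1), is polished by Newton iteration. It is valid away from the branch point at -1/e, near which such expansions do not converge.

```python
import math

def lambert_w_m1(x, iters=20):
    """W_{-1}(x) on (-1/e, 0): asymptotic seed refined by Newton iteration."""
    if not (-1.0 / math.e < x < 0.0):
        raise ValueError("W_{-1} is real only on (-1/e, 0)")
    L1 = math.log(-x)
    L2 = math.log(-L1)
    w = L1 - L2 + L2 / L1                      # leading asymptotic terms
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))   # Newton step on w*e^w - x
    return w

x = -0.1
w = lambert_w_m1(x)
print(w, w * math.exp(w))  # w*e^w recovers x
```

The defining identity w·e^w = x makes each approximation easy to verify, which is how the relative errors quoted in the abstract are measured.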
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-27
... Responsibilities; Notice of Proposed Information Collection: Comment Request AGENCY: Office of the Assistant...: Environmental Review Procedures for Entities Assuming HUD Environmental Responsibilities. OMB Control...
IONIS: Approximate atomic photoionization intensities
NASA Astrophysics Data System (ADS)
Heinäsmäki, Sami
2012-02-01
A program to compute relative atomic photoionization cross sections is presented. The code applies the output of the multiconfiguration Dirac-Fock method for atoms in the single active electron scheme, by computing the overlap of the bound electron states in the initial and final states. The contribution from the single-particle ionization matrix elements is assumed to be the same for each final state. This method gives rather accurate relative ionization probabilities provided the single-electron ionization matrix elements do not depend strongly on energy in the region considered. The method is especially suited for open shell atoms where electronic correlation in the ionic states is large.
Program summary
Program title: IONIS
Catalogue identifier: AEKK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1149
No. of bytes in distributed program, including test data, etc.: 12 877
Distribution format: tar.gz
Programming language: Fortran 95
Computer: Workstations
Operating system: GNU/Linux, Unix
Classification: 2.2, 2.5
Nature of problem: Photoionization intensities for atoms.
Solution method: The code applies the output of the multiconfiguration Dirac-Fock codes Grasp92 [1] or Grasp2K [2] to compute approximate photoionization intensities. The intensity is computed within the one-electron transition approximation and by assuming that the sum of the single-particle ionization probabilities is the same for all final ionic states.
Restrictions: The program gives nonzero intensities for those transitions where only one electron is removed from the initial configuration(s). Shake-type many-electron transitions are not computed. The ionized shell must be closed in the initial state.
Running time: Few seconds for a
The benefits of tight glycemic control in critical illness: Sweeter than assumed?
Gardner, Andrew John
2014-12-01
Hyperglycemia has long been observed amongst critically ill patients and associated with increased mortality and morbidity. Tight glycemic control (TGC) is the clinical practice of controlling blood glucose (BG) down to the "normal" 4.4-6.1 mmol/L range of a healthy adult, aiming to avoid any potential deleterious effects of hyperglycemia. The ground-breaking Leuven trials reported a mortality benefit of approximately 10% when using this technique, which led many to endorse its benefits. In stark contrast, the multi-center normoglycemia in intensive care evaluation-survival using glucose algorithm regulation (NICE-SUGAR) trial not only failed to replicate this outcome, but showed TGC appeared to be harmful. This review attempts to re-analyze the current literature and suggests that hope for a benefit from TGC should not be so hastily abandoned. Inconsistencies in study design make a like-for-like comparison of the Leuven and NICE-SUGAR trials challenging. Inadequate measures preventing hypoglycemic events are likely to have contributed to the increased mortality observed in the NICE-SUGAR treatment group. New technologies, including predictive models, are being developed to improve the safety of TGC, primarily by minimizing hypoglycemia. Intensive care units lacking trained staff and monitoring capacity would be unwise to attempt TGC, especially considering its as yet undefined benefit and the deleterious nature of hypoglycemia. International recommendations now advise clinicians to ensure critically ill patients maintain a BG of <10 mmol/L. Despite encouraging evidence, currently we can only speculate and remain optimistic that the benefit of TGC in clinical practice is sweeter than assumed.
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Cooper, D. G.; Everingham, J. M.; Kraciun, M. K.; Stonedahl, F.
2012-12-01
Hydraulic conductivity (K) is an important sediment property related to the speed with which water flows through sediments. It affects hyporheic uptake and residence time distributions, which are critical to assessing solute transport and nutrient depletion in streams. In this study we investigated the effect of millimeter-scale K variability on measurements that use one of the simplest in situ measurement techniques, the falling-head permeameter test. In a laboratory setting, vertical K values and their variability were calculated for a variety of sands. We created composite systems by layering these sands and measured their respective K values. Spatial head distributions for these composite systems were modeled using the finite difference capability of MODFLOW with inputs of head levels, boundaries, and known localized K values. These head distributions were then used to calculate the volumetric flux through the column, which was used in the Hvorslev constant-head equation to calculate vertical K values. We found that these simulated system K values reproduced the same qualitative trends as the laboratory measurements, and provided a good quantitative match in some cases. We then used the model to select distinct heterogeneous K distributions (i.e. layered, randomly distributed, and systematically increasing) that have the same simulated system K value. These K distributions were used in a two-dimensional dune/ripple-scale pumping model to approximate hyporheic residence time distributions and provide estimates of the error associated with the assumed homogeneity of the K distributions. The results have direct implications both for field studies where hydraulic conductivity is being measured and for determining the level of detail that should be included in computational models.
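The laboratory step above rests on the classical falling-head permeameter relation, K = (a·L)/(A·t) · ln(h1/h2). The sketch below is an illustrative Python implementation of that textbook formula, not the authors' MODFLOW/Hvorslev workflow; the variable names and the numbers in the example are ours.

```python
import math

def falling_head_K(a, A, L, t, h1, h2):
    """Vertical hydraulic conductivity from a falling-head permeameter test.

    a      -- cross-sectional area of the standpipe (m^2)
    A      -- cross-sectional area of the sediment column (m^2)
    L      -- length of the sediment column (m)
    t      -- elapsed time for the head to fall from h1 to h2 (s)
    h1, h2 -- initial and final head above the outflow level (m), h1 > h2
    """
    return (a * L) / (A * t) * math.log(h1 / h2)

# Hypothetical example: standpipe and column of equal area, 0.2 m of sand,
# head falling from 0.50 m to 0.25 m over 60 s.
K = falling_head_K(a=1e-4, A=1e-4, L=0.2, t=60.0, h1=0.50, h2=0.25)
```

Layered composites like those in the study combine in series: the effective vertical K is the harmonic mean of the layer K values weighted by layer thickness, which is why millimeter-scale low-K layers can dominate the measurement.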
24 CFR 1000.20 - Is an Indian tribe required to assume environmental review responsibilities?
Code of Federal Regulations, 2010 CFR
2010-04-01
... evaluation of the environmental issues and take responsibility for the scope and content of the EA in... assume environmental review responsibilities? 1000.20 Section 1000.20 Housing and Urban Development... § 1000.20 Is an Indian tribe required to assume environmental review responsibilities? (a) No. It is...
Pre-Service Teachers' Personal Epistemic Beliefs and the Beliefs They Assume Their Pupils to Have
ERIC Educational Resources Information Center
Rebmann, Karin; Schloemer, Tobias; Berding, Florian; Luttenberger, Silke; Paechter, Manuela
2015-01-01
In their workaday life, teachers are faced with multiple complex tasks. How they carry out these tasks is also influenced by their epistemic beliefs and the beliefs they assume their pupils hold. In an empirical study, pre-service teachers' epistemic beliefs and those they assume of their pupils were investigated in the setting of teacher…
24 CFR 1000.20 - Is an Indian tribe required to assume environmental review responsibilities?
Code of Federal Regulations, 2011 CFR
2011-04-01
... evaluation of the environmental issues and take responsibility for the scope and content of the EA in... assume environmental review responsibilities? 1000.20 Section 1000.20 Housing and Urban Development... § 1000.20 Is an Indian tribe required to assume environmental review responsibilities? (a) No. It is an...
24 CFR 1000.20 - Is an Indian tribe required to assume environmental review responsibilities?
Code of Federal Regulations, 2014 CFR
2014-04-01
... evaluation of the environmental issues and take responsibility for the scope and content of the EA in... assume environmental review responsibilities? 1000.20 Section 1000.20 Housing and Urban Development... § 1000.20 Is an Indian tribe required to assume environmental review responsibilities? (a) No. It is an...
24 CFR 1000.20 - Is an Indian tribe required to assume environmental review responsibilities?
Code of Federal Regulations, 2012 CFR
2012-04-01
... evaluation of the environmental issues and take responsibility for the scope and content of the EA in... assume environmental review responsibilities? 1000.20 Section 1000.20 Housing and Urban Development... § 1000.20 Is an Indian tribe required to assume environmental review responsibilities? (a) No. It is an...
24 CFR 1000.20 - Is an Indian tribe required to assume environmental review responsibilities?
Code of Federal Regulations, 2013 CFR
2013-04-01
... evaluation of the environmental issues and take responsibility for the scope and content of the EA in... assume environmental review responsibilities? 1000.20 Section 1000.20 Housing and Urban Development... § 1000.20 Is an Indian tribe required to assume environmental review responsibilities? (a) No. It is an...
The Motivation of Teachers to Assume the Role of Cooperating Teacher
ERIC Educational Resources Information Center
Jonett, Connie L. Foye
2009-01-01
This study explored a phenomenological understanding of the motivation and influences that cause experienced teachers to assume pedagogical training of student teachers through the role of cooperating teacher. The research question guiding the study was what motivates teachers to…
Intrinsic Nilpotent Approximation.
1985-06-01
Technical report LIDS-R-1482, Massachusetts Institute of Technology, Laboratory for Information and Decision Systems. The report concerns the approximation of certain infinite-dimensional filtered Lie algebras L by (finite-dimensional) graded nilpotent Lie algebras g. (Remainder of the scanned report-documentation page is illegible.)
Anomalous diffraction approximation limits
NASA Astrophysics Data System (ADS)
Videen, Gorden; Chýlek, Petr
It has been reported in a recent article [Liu, C., Jonas, P.R., Saunders, C.P.R., 1996. Accuracy of the anomalous diffraction approximation to light scattering by column-like ice crystals. Atmos. Res., 41, pp. 63-69] that the anomalous diffraction approximation (ADA) accuracy does not depend on particle refractive index, but instead is dependent on the particle size parameter. Since this is at odds with previous research, we thought these results warranted further discussion.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regard human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning is in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques into our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches.
Estimating average annual percent change for disease rates without assuming constant change.
Fay, Michael P; Tiwari, Ram C; Feuer, Eric J; Zou, Zhaohui
2006-09-01
The annual percent change (APC) is often used to measure trends in disease and mortality rates, and a common estimator of this parameter uses a linear model on the log of the age-standardized rates. Under the assumption of linearity on the log scale, which is equivalent to a constant change assumption, APC can be equivalently defined in three ways as transformations of either (1) the slope of the line that runs through the log of each rate, (2) the ratio of the last rate to the first rate in the series, or (3) the geometric mean of the proportional changes in the rates over the series. When the constant change assumption fails, the first definition cannot be applied as is, while the second and third definitions unambiguously define the same parameter regardless of whether the assumption holds. We call this parameter the percent change annualized (PCA) and propose two new estimators of it. The first, the two-point estimator, uses only the first and last rates, assuming nothing about the rates in between. This estimator requires fewer assumptions and is asymptotically unbiased as the size of the population gets large, but has more variability since it uses no information from the middle rates. The second estimator is an adaptive one and equals the linear model estimator with a high probability when the rates are not significantly different from linear on the log scale, but includes fewer points if there are significant departures from that linearity. For the two-point estimator we can use confidence intervals previously developed for ratios of directly standardized rates. For the adaptive estimator, we show through simulation that the bootstrap confidence intervals give appropriate coverage.
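The two definitions that survive departures from log-linearity can be sketched in a few lines. The code below is an illustrative implementation of the two-point PCA estimator and the conventional log-linear APC estimator (definitions (2) and (1) above); function names and the example rates are ours, and the adaptive estimator is not shown.

```python
import math

def pca_two_point(rates):
    """Percent change annualized from only the first and last rates:
    100 * ((R_last / R_first)^(1/intervals) - 1)."""
    n_intervals = len(rates) - 1
    return 100.0 * ((rates[-1] / rates[0]) ** (1.0 / n_intervals) - 1.0)

def apc_linear(rates):
    """APC from an OLS fit of log(rate) on time; valid under the
    constant-change (log-linear) assumption."""
    n = len(rates)
    xs = list(range(n))
    ys = [math.log(r) for r in rates]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    return 100.0 * (math.exp(slope) - 1.0)

# Hypothetical rates growing by roughly 5% per year.
rates = [50.0, 52.5, 55.1, 57.9, 60.8]
```

When the rates are exactly log-linear the two estimators coincide, which mirrors the equivalence of the three definitions under the constant-change assumption; when they are not, only the two-point form still targets the PCA parameter.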
Coon, H.; Jensen, S.; Hoff, M.; Holik, J.; Plaetke, R.; Reimherr, F.; Wender, P.; Leppert, M.; Byerley, W. )
1993-06-01
Manic-depressive illness (MDI), also known as "bipolar affective disorder", is a common and devastating neuropsychiatric illness. Although pivotal biochemical alterations underlying the disease are unknown, results of family, twin, and adoption studies consistently implicate genetic transmission in the pathogenesis of MDI. In order to carry out linkage analysis, the authors ascertained eight moderately sized pedigrees containing multiple cases of the disease. For a four-allele marker mapping at 5 cM from the disease gene, the pedigree sample has >97% power to detect a dominant allele under genetic homogeneity and has >73% power under 20% heterogeneity. To date, the eight pedigrees have been genotyped with 328 polymorphic DNA loci throughout the genome. When autosomal dominant inheritance was assumed, 273 DNA markers gave lod scores <−2.0 at θ = .05, and 4 DNA marker loci yielded lod scores >1 (chromosome 5: D5S39, D5S43, and D5S62; chromosome 11: D11S85). Of the markers giving lod scores >1, only D5S62 continued to show evidence for linkage when the affected-pedigree-member method was used. The D5S62 locus maps to distal 5q, a region containing neurotransmitter-receptor genes for dopamine, norepinephrine, glutamate, and gamma-aminobutyric acid. Although additional work in this region may be warranted, the linkage results should be interpreted as preliminary data, as 68 unaffected individuals are not past the age of risk. 72 refs., 2 tabs.
NASA Technical Reports Server (NTRS)
Paris, Isabelle L.; Krueger, Ronald; OBrien, T. Kevin
2004-01-01
The difference in delamination onset predictions based on the type and location of the assumed initial damage are compared in a specimen consisting of a tapered flange laminate bonded to a skin laminate. From previous experimental work, the damage was identified to consist of a matrix crack in the top skin layer followed by a delamination between the top and second skin layer (+45 deg./-45 deg. interface). Two-dimensional finite elements analyses were performed for three different assumed flaws and the results show a considerable reduction in critical load if an initial delamination is assumed to be present, both under tension and bending loads. For a crack length corresponding to the peak in the strain energy release rate, the delamination onset load for an assumed initial flaw in the bondline is slightly higher than the critical load for delamination onset from an assumed skin matrix crack, both under tension and bending loads. As a result, assuming an initial flaw in the bondline is simpler while providing a critical load relatively close to the real case. For the configuration studied, a small delamination might form at a lower tension load than the critical load calculated for a 12.7 mm (0.5") delamination, but it would grow in a stable manner. For the bending case, assuming an initial flaw of 12.7 mm (0.5") is conservative, the crack would grow unstably.
A Concept Analysis: Assuming Responsibility for Self-Care among Adolescents with Type 1 Diabetes
Hanna, Kathleen M.; Decker, Carol L.
2009-01-01
Purpose This concept analysis clarifies “assuming responsibility for self-care” by adolescents with type 1 diabetes. Methods Walker and Avant’s (2005) methodology guided the analysis. Results Assuming responsibility for self-care was defined as a process specific to diabetes within the context of development. It is daily, gradual, individualized to person, and unique to task. The goal is ownership that involves autonomy in behaviors and decision-making. Practice Implications Adolescents with type 1 diabetes need to be assessed for assuming responsibility for self-care. This achievement has implications for adolescents’ diabetes management, short- and long-term health, and psychosocial quality of life. PMID:20367781
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf=2 +1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
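The structure of the AMA estimator can be sketched generically: average a cheap approximation over many measurements, then correct the bias with a single paired exact-minus-approximate difference. The function below is a schematic of that idea only; it omits the lattice-specific covariant source transformations that make the correction term cheap and the estimator unbiased in practice, and all names are ours.

```python
def ama_estimate(exact, approx, approx_all):
    """Covariant-approximation-averaging-style estimator.

    exact      -- one expensive, exact measurement
    approx     -- the cheap approximation evaluated on the SAME source as `exact`
    approx_all -- cheap approximations over many (symmetry-translated) sources

    Returns the average of the cheap measurements plus a bias-correction
    term, so the expectation equals that of the exact measurement.
    """
    bias_correction = exact - approx                 # expensive, computed once
    cheap_average = sum(approx_all) / len(approx_all)  # inexpensive, many sources
    return cheap_average + bias_correction
```

The statistical gain comes from `approx_all` being large: the variance of the cheap average shrinks with the number of sources, while the correction term stays small whenever the approximation (here, the relaxed conjugate-gradient solve) tracks the exact result closely.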
Approximate Bayesian Computation
NASA Astrophysics Data System (ADS)
Cisewski, Jessi
2015-08-01
Explicitly specifying a likelihood function is becoming increasingly difficult for many problems in astronomy. Astronomers often specify a simpler approximate likelihood - leaving out important aspects of a more realistic model. Approximate Bayesian computation (ABC) provides a framework for performing inference in cases where the likelihood is not available or intractable. I will introduce ABC and explain how it can be a useful tool for astronomers. In particular, I will focus on the eccentricity distribution for a sample of exoplanets with multiple sub-populations.
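The simplest version of ABC is rejection sampling: draw a parameter from the prior, simulate data, and keep the draw when a summary statistic of the simulation lands close to the observed one. The toy below infers the mean of a Normal(mu, 1) sample; the prior, tolerance, and all names are our illustrative choices, not anything from the talk.

```python
import random

def abc_rejection(observed_mean, n_draws=5000, n_obs=50, tol=0.1, seed=1):
    """Rejection ABC for mu in a Normal(mu, 1) model.

    Draws mu from a flat prior on [-5, 5], simulates n_obs observations,
    and accepts mu when the simulated sample mean is within tol of the
    observed sample mean. The accepted draws approximate the posterior.
    """
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        mu = rng.uniform(-5.0, 5.0)                      # prior draw
        sim_mean = sum(rng.gauss(mu, 1.0) for _ in range(n_obs)) / n_obs
        if abs(sim_mean - observed_mean) < tol:           # summary-statistic match
            accepted.append(mu)
    return accepted

posterior = abc_rejection(observed_mean=1.0)
```

Shrinking `tol` trades acceptance rate for fidelity to the true posterior; in practice (including the exoplanet-eccentricity setting described above) the art lies in choosing informative summary statistics, since ABC conditions on the summaries rather than the full data.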
Zito, Sarah; Morton, John; Vankan, Dianne; Paterson, Mandy; Bennett, Pauleen C; Rand, Jacquie; Phillips, Clive J C
2016-01-01
Most cats surrendered to nonhuman animal shelters are identified as unowned, and the surrender reason for these cats is usually simply recorded as "stray." A cross-sectional study was conducted with people surrendering cats to 4 Australian animal shelters. Surrenderers of unowned cats commonly gave surrender reasons relating to concern for the cat and his/her welfare. Seventeen percent of noncaregivers had considered adopting the cat. Barriers to assuming ownership most commonly related to responsible ownership concerns. Unwanted kittens commonly contributed to the decision to surrender for both caregivers and noncaregivers. Nonowners gave more surrender reasons than owners, although many owners also gave multiple surrender reasons. These findings highlight the multifactorial nature of the decision-making process leading to surrender and demonstrate that recording only one reason for surrender does not capture the complexity of the surrender decision. Collecting information about multiple reasons for surrender, particularly reasons for surrender of unowned cats and barriers to assuming ownership, could help to develop strategies to reduce the number of cats surrendered.
1984-10-26
Garbled scan; recoverable content: the record compares the 100·pth upper percentiles and the mean time to system failure under dependence versus independence, plotted as a function of the correlation between component lifetimes, and cites "A Multivariate Model for Bivariate Lifetables and its Application in Epidemiological Studies of Familial Tendency in Chronic Disease Incidence", Biometrika 65, 141-151.
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1998-06-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
Multicriteria approximation through decomposition
Burch, C. |; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E. |
1997-12-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
ERIC Educational Resources Information Center
Wolff, Hans
This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
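The Robbins-Monro process referred to above iterates x_{n+1} = x_n − a_n(Y_n − α), where Y_n is a noisy observation of the regression function at x_n and the coefficients a_n satisfy Σa_n = ∞, Σa_n² < ∞ (e.g., a_n = 1/n). The snippet below is a minimal illustration under those textbook conditions; the test function and all names are ours.

```python
import random

def robbins_monro(noisy_f, x0, target=0.0, n_steps=5000, seed=0):
    """Robbins-Monro stochastic approximation of the root of f(x) = target.

    noisy_f -- returns f(x) without noise; Gaussian noise is added here
    x0      -- starting point
    Uses step sizes a_n = 1/n, which satisfy sum a_n = inf, sum a_n^2 < inf,
    the classical convergence condition on the iteration coefficients.
    """
    rng = random.Random(seed)
    x = x0
    for n in range(1, n_steps + 1):
        y = noisy_f(x) + rng.gauss(0.0, 1.0)   # noisy observation of f(x_n)
        x = x - (1.0 / n) * (y - target)
    return x

# Find the root of f(x) = 2*(x - 3), observed with unit-variance noise: root at 3.
root = robbins_monro(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

The convergence condition in the abstract is exactly about these a_n: steps must be large enough in total to reach the root from anywhere, yet square-summable so the accumulated noise dies out.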
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, the authors outline how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
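The expectation identity behind the technique is ∫_a^b f(x) dx = (b − a)·E[f(U)] for U uniform on [a, b]. The paper's examples use Visual Basic; the sketch below shows the same estimator in Python, with our own names and a standard test integrand.

```python
import random

def mc_integral(f, a, b, n=100_000, seed=42):
    """Monte Carlo estimate of the definite integral of f over [a, b]:
    (b - a) times the sample mean of f at uniform random points."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# Integral of x^2 over [0, 1]; the true value is 1/3.
estimate = mc_integral(lambda x: x * x, 0.0, 1.0)
```

The error shrinks like 1/sqrt(n) regardless of dimension, which is why the method is taught alongside probability: the estimator is just a sample mean, so its accuracy follows directly from the central limit theorem.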
[The relationship between assumed-competence and communication about learning in high school].
Kodaira, Hideshi; Aoki, Naoko; Matsuoka, Mirei; Hayamizu, Toshihiko
2008-08-01
This study investigated the relationship between assumed-competence (based on undervaluing others in general belief) and learning-related communication. Two hundred seventy-one high school students completed a questionnaire that measured assumed-competence, engagement in study-related conversations with friends (planned courses after high school, students' own achievements in learning, school subjects they like and dislike, anxiety about failure, criticism of others), help-seeking behavior directed towards teachers and friends, and help-giving to friends. Students who had high assumed-competence tended to brag about their own achievements, criticize their teachers' methods, and talk negatively about their friends' academic failures. Furthermore, assumed-competence correlated positively with avoidance of help-seeking from friends, avoidance of help-giving to friends, and giving away answers on assignments. These types of help-seeking and help-giving behaviors are apparently not connected with learning, given that people with high assumed-competence tended not to seek help from friends or help friends in appropriate ways. The present results indicate that assumed-competence could be an obstruction to the formation of good relationships with others.
Optimizing the Zeldovich approximation
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.
1994-01-01
We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/(2 k_G^2)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
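The Gaussian smoothing step described above amounts to multiplying the Fourier amplitudes of the initial density field by exp(-k^2/(2 k_G^2)) before applying the Zeldovich displacement. The sketch below shows that filtering step alone, on a 2-D periodic toy grid with wavenumbers in inverse grid cells; the paper's analysis is 3-D and cosmological, and the function name and grid choices are ours.

```python
import numpy as np

def gaussian_truncate(delta, k_G):
    """Apply the Gaussian window exp(-k^2 / (2 k_G^2)) to the Fourier
    amplitudes of a periodic 2-D density field delta, suppressing power
    above the truncation scale while leaving the k = 0 (mean) mode intact."""
    n = delta.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)          # wavenumbers, inverse grid cells
    kx, ky = np.meshgrid(k, k, indexing="ij")
    window = np.exp(-(kx**2 + ky**2) / (2.0 * k_G**2))
    return np.real(np.fft.ifft2(np.fft.fft2(delta) * window))

# Toy initial field: white noise, then smoothed at k_G = 0.5.
field = np.random.default_rng(0).standard_normal((32, 32))
smooth = gaussian_truncate(field, k_G=0.5)
```

Because the window equals 1 at k = 0 and decays smoothly, the mean density is preserved while small-scale power, the part that makes the Zeldovich map shell-cross too early, is damped without the ringing a sharp k-cutoff introduces.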
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order, state variable model of the F100 engine and to a 43rd-order, transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
Topics in Metric Approximation
NASA Astrophysics Data System (ADS)
Leeb, William Edward
This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.
Stochastic population dynamics: The Poisson approximation
NASA Astrophysics Data System (ADS)
Solari, Hernán G.; Natiello, Mario A.
2003-03-01
We introduce an approximation to stochastic population dynamics based on almost independent Poisson processes whose parameters obey a set of coupled ordinary differential equations. The approximation applies to systems that evolve in terms of events such as death, birth, contagion, emission, absorption, etc., and we assume that the event rates satisfy a generalized mass-action law. The dynamics of the populations is then the result of the projection from the space of events onto the space of populations that determine the state of the system (phase space). The properties of the Poisson approximation are studied in detail. In particular, error bounds for the moment generating function and the generating function receive special attention. The deterministic approximation for the population fractions and the Langevin-type approximation for the fluctuations around the mean value are recovered within the framework of the Poisson approximation as particular limit cases. However, the proposed framework allows one to treat other limit cases and general situations with small populations that lie outside the scope of the standard approaches. The Poisson approximation can be viewed as a general (numerical) integration scheme for this family of problems in population dynamics.
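A minimal sketch of the event-based idea for a one-population birth-death process: over a short step, each event type fires a Poisson-distributed number of times with mean rate x dt, with mass-action rates assumed. All names and parameter values below are hypothetical, and this is only the simplest caricature of the scheme, not the paper's full framework:

```python
import math
import random

def poisson_sample(lam, rng):
    # Knuth's multiplicative method; adequate for the small means used here.
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def step(n, birth_rate, death_rate, dt, rng):
    """One step of the Poisson approximation for a birth-death process:
    each event type fires a Poisson number of times with mean rate * n * dt."""
    births = poisson_sample(birth_rate * n * dt, rng)
    deaths = poisson_sample(death_rate * n * dt, rng)
    # Clamp at zero: for very small populations the approximation can
    # otherwise overshoot into negative counts.
    return max(n + births - deaths, 0)

rng = random.Random(42)
n = 100
for _ in range(200):
    n = step(n, birth_rate=0.1, death_rate=0.1, dt=0.05, rng=rng)
```

Letting dt shrink recovers the deterministic rate equations on average, which is the sense in which the scheme doubles as a numerical integrator.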
Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders.
Shay, Blake; Weber, Robert J
2015-11-01
Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals.
Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders
Shay, Blake; Weber, Robert J.
2015-01-01
Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals. PMID:27621512
NASA Technical Reports Server (NTRS)
Stankiewicz, N.; Palmer, R. W.
1972-01-01
Three-dimensional potential and current distributions in a Faraday segmented MHD generator operating in the Hall mode are computed. Constant conductivity and a Hall parameter of 1.0 are assumed. The electric fields and currents are assumed to be coperiodic with the electrode structure. The flow is assumed to be fully developed, and a family of power-law velocity profiles, ranging from parabolic to turbulent, is used to show the effect of the fullness of the velocity profile. Calculation of the square of the current density shows that nonequilibrium heating is not likely to occur along the boundaries. This seems to discount the idea that the generator insulating walls are regions of high conductivity and are therefore responsible for boundary-layer shorting, unless the shorting is a surface phenomenon on the insulating material.
NASA Astrophysics Data System (ADS)
Anggriani, N.; Wicaksono, B. C.; Supriatna, A. K.
2016-06-01
Tuberculosis (TB) is one of the deadliest infectious diseases in the world and is caused by Mycobacterium tuberculosis. The disease is spread through the air via droplets from infectious persons when they cough. The World Health Organization (WHO) has paid special attention to TB by providing solutions, for example the BCG vaccine, which prevents an infected person from becoming actively infectious. In this paper we develop a mathematical model of the spread of TB which assumes endogenous reactivation and exogenous reinfection factors. We also assume that some of the susceptible population are vaccinated. Furthermore, we investigate the optimal vaccination level for the disease.
Bowing-reactivity trends in EBR-II assuming zero-swelling ducts
Meneghetti, D.
1994-03-01
Predicted trends of duct-bowing reactivities for the Experimental Breeder Reactor II (EBR-II) are correlated with predicted row-wise duct deflections assuming the use of idealized zero-void-swelling subassembly ducts. These assume no irradiation-induced swelling of the ducts but include estimates of the effects of irradiation-creep relaxation of thermally induced bowing stresses. The results illustrate the ways in which at-power creep may affect subsequent duct deflections at zero power, and thereby the trends of the bowing component of a subsequent power reactivity decrement.
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value, since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods, with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
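The binomial pricing model described above is easy to state in code. The sketch below values an n-period put by backward induction under risk-neutral probabilities; this is the standard textbook construction, not the authors' approximation algorithms, and all parameter names are illustrative:

```python
def binomial_put(s0, strike, r, u, d, n, american=False):
    """Value an n-period put in the binomial model: up/down factors u, d,
    per-period interest r (discount factor 1/(1+r)), initial price s0.
    Risk-neutral up-probability: p = (1 + r - d) / (u - d)."""
    p = (1.0 + r - d) / (u - d)
    disc = 1.0 / (1.0 + r)
    # Terminal payoffs, indexed by the number k of up-moves.
    values = [max(strike - s0 * u**k * d**(n - k), 0.0) for k in range(n + 1)]
    for step in range(n - 1, -1, -1):
        new = []
        for k in range(step + 1):
            # Node with k up-moves has children with k+1 and k up-moves.
            cont = disc * (p * values[k + 1] + (1 - p) * values[k])
            if american:
                spot = s0 * u**k * d**(step - k)
                cont = max(cont, strike - spot)  # early-exercise check
            new.append(cont)
        values = new
    return values[0]
```

For a path-dependent (e.g. Asian) option this recombining lattice no longer suffices, because the payoff depends on the whole path; that exponential blow-up is exactly what makes the pricing problem hard.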
Beyond the Kirchhoff approximation
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto
1989-01-01
The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.
How Public High School Students Assume Cooperative Roles to Develop Their EFL Speaking Skills
ERIC Educational Resources Information Center
Parra Espinel, Julie Natalie; Fonseca Canaría, Diana Carolina
2010-01-01
This study describes an investigation we carried out in order to identify how the specific roles that 7th grade public school students assumed when they worked cooperatively were related to their development of speaking skills in English. Data were gathered through interviews, field notes, students' reflections and audio recordings. The findings…
25 CFR 170.610 - What IRR Program functions may a tribe assume under ISDEAA?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false What IRR Program functions may a tribe assume under ISDEAA? 170.610 Section 170.610 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAND AND WATER INDIAN RESERVATION ROADS PROGRAM Service Delivery for Indian Reservation Roads Contracts...
Andrews, Bridget; Aisenberg, Daniela; d'Avossa, Giovanni; Sapir, Ayelet
2013-11-01
When judging the 3D shape of a shaded image, observers generally assume that the light source is placed above and to the left. This leftward bias has been attributed to experiential factors shaped by the observers' handedness or hemispheric dominance. Others have found that experiential factors can rapidly modify the direction of the assumed light source, suggesting a role for learning in shaping perceptual expectations. In the current study, we instead assessed the contribution of cultural factors, namely the way visual scenes are customarily inspected, in determining the assumed light source direction. Left- and right-handed first-language English and Hebrew participants, who read and write from left to right and from right to left, respectively, judged the relative depth of a central hexagon surrounded by six shaded hexagons. We found a left bias in first-language English participants, but a significantly smaller one in Hebrew participants. In neither group was the light direction affected by participants' handedness. We conclude that the bias in the assumed light source direction is affected by cultural factors, likely related to the habitual scanning direction employed by participants when reading and writing their first-language script.
[Return-to-work results of depressive employees in correlation to assumed chronification].
Poersch, M
2007-06-01
Return-to-work results of 52 depressive employees were examined in 4 subgroups with assumed different degrees of chronification. Maximum chronification was assumed if motivation for a return to work was below 5 points (1-8 BWM scale) and sickness absence was longer than 52 weeks ("chronic" group). Minimum chronification was assumed if motivation was 5 points or more and sickness absence was below 52 weeks ("motivated" group). The "ambivalently motivated" subgroup had a return-to-work motivation of 5 points or more and a sickness absence longer than 52 weeks; the "ambivalently demotivated" subgroup had a return-to-work motivation below 5 points and a sickness absence below 52 weeks. The "motivated" subgroup achieved a return-to-work rate of 100%, the "ambivalently motivated" 67%, the "ambivalently demotivated" 33%, and the "chronic" group 9.5%. In spite of the small numbers, the return-to-work results of these four subgroups, divided by (a) duration of sickness absence and (b) motivation to return to work, appear to show a notable inverse correlation with the assumed chronification of depressive employees.
A Model for Teacher Effects from Longitudinal Data without Assuming Vertical Scaling
ERIC Educational Resources Information Center
Mariano, Louis T.; McCaffrey, Daniel F.; Lockwood, J. R.
2010-01-01
There is an increasing interest in using longitudinal measures of student achievement to estimate individual teacher effects. Current multivariate models assume each teacher has a single effect on student outcomes that persists undiminished to all future test administrations (complete persistence [CP]) or can diminish with time but remains…
Kristen M. Fletcher
2000-01-01
While States have initiated their own wetland protection schemes for decades, Congress formally invited States to join the regulatory game under the Clean Water Act (CWA) in 1977. The CWA Amendments provided two ways for States to increase responsibility by assuming some administration of Federal regulatory programs: State programmatic general permits and State...
Teacher Leader Model Standards and the Functions Assumed by National Board Certified Teachers
ERIC Educational Resources Information Center
Swan Dagen, Allison; Morewood, Aimee; Smith, Megan L.
2017-01-01
The Teacher Leader Model Standards (TLMS) were created to stimulate discussion around the leadership responsibilities teachers assume in schools. This study used the TLMS to gauge the self-reported leadership responsibilities of National Board Certified Teachers (NBCTs). The NBCTs reported engaging in all domains of the TLMS, most frequently with…
Assumed strain formulation for the four-node quadrilateral with improved in-plane bending behaviour
NASA Astrophysics Data System (ADS)
Stolarski, Henryk K.; Chen, Yung-I.
1995-04-01
A new assumed strain quadrilateral element with highly accurate in-plane bending behaviour is presented for plane stress and plane strain analysis. The basic idea of the formulation is to identify various modes of deformation and then properly modify the strain field in some of these modes. In particular, the strain operator corresponding to the in-plane bending modes is modified to simulate the strain field resulting from the assumptions usually made in structural mechanics. The modification of the strain field leads to the assumed strain operator on the element level. As a result, the so-called shear and membrane locking phenomena are alleviated. The element exhibits remarkable success in bending-dominated problems even when severely distorted and high-aspect-ratio meshes are used. Another advantage of the present assumed strain element is that locking for nearly incompressible materials is also mitigated. While this assumed strain element passes the patch test only for parallelogram shapes, the element provides convergent solutions as long as the initially general form of the element approaches a parallelogram shape with refinement of the mesh.
Migration Intentions of Rural Youth: Testing an Assumed Benefit of Rapid Growth.
ERIC Educational Resources Information Center
Seyfrit, Carole L.
1986-01-01
Questions one of the assumed benefits of rapid growth in rural areas--the retention of rural youths through finding employment in their home communities. Finds no relationship between migration intentions of 970 high school seniors in rural Utah counties and rapid growth in local energy-related extractive employment. (LFL)
Demenais, F M
1991-01-01
Statistical models have been developed to delineate the major-gene and non-major-gene factors accounting for the familial aggregation of complex diseases. The mixed model assumes an underlying liability to the disease, to which a major gene, a multifactorial component, and random environment contribute independently. Affection is defined by a threshold on the liability scale. The regressive logistic models assume that the logarithm of the odds of being affected is a linear function of major genotype, phenotypes of antecedents and other covariates. An equivalence between these two approaches cannot be derived analytically. I propose a formulation of the regressive logistic models on the supposition of an underlying liability model of disease. Relatives are assumed to have correlated liabilities to the disease; affected persons have liabilities exceeding an estimable threshold. Under the assumption that the correlation structure of the relatives' liabilities follows a regressive model, the regression coefficients on antecedents are expressed in terms of the relevant familial correlations. A parsimonious parameterization is a consequence of the assumed liability model, and a one-to-one correspondence with the parameters of the mixed model can be established. The logits, derived under the class A regressive model and under the class D regressive model, can be extended to include a large variety of patterns of family dependence, as well as gene-environment interactions. PMID:1897524
The Impact of Assumed Knowledge Entry Standards on Undergraduate Mathematics Teaching in Australia
ERIC Educational Resources Information Center
King, Deborah; Cattlin, Joann
2015-01-01
Over the last two decades, many Australian universities have relaxed their selection requirements for mathematics-dependent degrees, shifting from hard prerequisites to assumed knowledge standards which provide students with an indication of the prior learning that is expected. This has been regarded by some as a positive move, since students who…
Bone marrow mesenchymal stem cells can differentiate and assume corneal keratocyte phenotype
Liu, Hongshan; Zhang, Jianhua; Liu, Chia-Yang; Hayashi, Yasuhito; Kao, Winston W-Y
2012-01-01
Abstract It remains elusive as to what bone marrow (BM) cell types infiltrate into injured and/or diseased tissues and subsequently differentiate to assume the phenotype of residential cells, for example, neurons, cardiac myocytes, keratocytes, etc., to repair damaged tissue. Here, we examined whether BM cells invading via circulation into uninjured and injured corneas could assume a keratocyte phenotype, using chimeric mice generated by transplantation of enhanced green fluorescent protein (EGFP)+ BM cells into keratocan null (Kera−/−) and lumican null (Lum−/−) mice. EGFP+ BM cells assumed dendritic cell morphology, but failed to synthesize corneal-specific keratan sulfate proteoglycans, that is, KS-lumican and KS-keratocan. In contrast, some EGFP+ BM cells introduced by intrastromal transplantation assumed keratocyte phenotypes. Furthermore, BM cells were isolated from Kera-Cre/ZEG mice, a double transgenic mouse line in which cells expressing keratocan become EGFP+ due to the synthesis of Cre driven by the keratocan promoter. Three days after corneal and conjunctival transplantations of such BM cells into Kera−/− mice, green keratocan-positive cells were found in the cornea, but not in the conjunctiva. It is worth noting that transplanted BM cells were rejected within 4 weeks. MSC isolated from BM were used to examine whether BM mesenchymal stem cells (BM-MSC) could assume a keratocyte phenotype. When BM-MSC were intrastromally transplanted into Kera−/− mice, they survived in the cornea without any immune and inflammatory responses and expressed keratocan in Kera−/− mice. These observations suggest that corneal intrastromal transplantation of BM-MSC may be an effective treatment regimen for corneal diseases involving dysfunction of keratocytes. PMID:21883890
Hierarchical Approximate Bayesian Computation
Turner, Brandon M.; Van Zandt, Trisha
2013-01-01
Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
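The basic ABC idea the paper builds on can be sketched as a rejection sampler: draw parameters from the prior, simulate data, and keep draws whose simulated summary lies within a tolerance of the observed one. This is plain rejection ABC, not the Gibbs ABC algorithm the paper introduces, and all names and values are illustrative:

```python
import random

def abc_rejection(observed, prior_sample, simulate, distance, eps, n_draws, rng):
    """Basic ABC rejection: keep prior draws whose simulated summary
    statistic falls within eps of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        if distance(simulate(theta, rng), observed) <= eps:
            accepted.append(theta)
    return accepted

# Toy problem: infer the mean of a Gaussian with known unit variance,
# using the sample mean of 20 draws as the summary statistic.
rng = random.Random(0)
post = abc_rejection(
    observed=1.0,                                      # observed sample mean
    prior_sample=lambda r: r.uniform(-5, 5),           # flat prior on the mean
    simulate=lambda th, r: sum(r.gauss(th, 1) for _ in range(20)) / 20,
    distance=lambda a, b: abs(a - b),
    eps=0.2, n_draws=5000, rng=rng)
```

The accepted draws approximate the posterior; no likelihood is ever evaluated, which is the point of ABC. Hierarchical models multiply the parameter dimension, which is why naive rejection becomes infeasible and motivates the Gibbs-style scheme.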
Roy, Swapnoneel; Thakur, Ashok Kumar
2008-01-01
Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such primitive, the strip exchange. A strip-exchanging move interchanges the positions of two chosen strips so that they merge with other strips; the strip exchange problem is to sort a permutation using the minimum number of strip exchanges. We present the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.
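A minimal sketch of the primitive itself, assuming strips are maximal runs of consecutive increasing integers (the representation and function names are illustrative; this is not the paper's 2-approximation algorithm):

```python
def strips(perm):
    """Split a permutation into maximal strips: maximal runs of
    consecutive increasing integers, e.g. [3, 4, 1, 2] -> [[3, 4], [1, 2]]."""
    out = [[perm[0]]]
    for x in perm[1:]:
        if x == out[-1][-1] + 1:
            out[-1].append(x)     # extend the current strip
        else:
            out.append([x])       # start a new strip
    return out

def strip_exchange(perm, i, j):
    """Apply one strip-exchanging move: swap the i-th and j-th strips
    (0-based) and flatten, letting adjacent strips merge."""
    s = strips(perm)
    s[i], s[j] = s[j], s[i]
    return [x for strip in s for x in strip]

print(strip_exchange([3, 4, 1, 2], 0, 1))  # → [1, 2, 3, 4]
```

Minimising the number of such moves over all choices of strip pairs is the hard part; each well-chosen exchange reduces the strip count, which is the intuition behind approximation guarantees for this family of problems.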
Hybrid Approximate Message Passing
NASA Astrophysics Data System (ADS)
Rangan, Sundeep; Fletcher, Alyson K.; Goyal, Vivek K.; Byrne, Evan; Schniter, Philip
2017-09-01
The standard linear regression (SLR) problem is to recover a vector $\mathbf{x}^0$ from noisy linear observations $\mathbf{y}=\mathbf{Ax}^0+\mathbf{w}$. The approximate message passing (AMP) algorithm recently proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d. sub-Gaussian matrices $\mathbf{A}$, its per-iteration behavior is rigorously characterized by a scalar state evolution whose fixed points, when unique, are Bayes optimal. AMP, however, is fragile in that even small deviations from the i.i.d. sub-Gaussian model can cause the algorithm to diverge. This paper considers a "vector AMP" (VAMP) algorithm and shows that VAMP has a rigorous scalar state evolution that holds under a much broader class of large random matrices $\mathbf{A}$: those that are right-rotationally invariant. After performing an initial singular value decomposition (SVD) of $\mathbf{A}$, the per-iteration complexity of VAMP can be made similar to that of AMP. In addition, the fixed points of VAMP's state evolution are consistent with the replica prediction of the minimum mean-squared error recently derived by Tulino, Caire, Verdú, and Shamai. The effectiveness and state evolution predictions of VAMP are confirmed in numerical experiments.
ANS shell elements with improved transverse shear accuracy. [Assumed Natural Coordinate Strain
NASA Technical Reports Server (NTRS)
Jensen, Daniel D.; Park, K. C.
1992-01-01
A method of forming assumed natural coordinate strain (ANS) plate and shell elements is presented. The ANS method uses equilibrium based constraints and kinematic constraints to eliminate hierarchical degrees of freedom which results in lower order elements with improved stress recovery and displacement convergence. These techniques make it possible to easily implement the element into the standard finite element software structure, and a modified shape function matrix can be used to create consistent nodal loads.
Catalogue of maximum crack opening stress for CC(T) specimen assuming large strain condition
NASA Astrophysics Data System (ADS)
Graba, Marcin
2013-06-01
In this paper, values of the maximum crack opening stress and its distance from the crack tip are presented for various elastic-plastic materials for a centre-cracked plate in tension (the CC(T) specimen). The influence of the yield strength, the work-hardening exponent and the crack length on the maximum opening stress was tested. The author provides some comments and suggestions about FEM modelling assuming the large strain formulation.
Countably QC-Approximating Posets
Mao, Xuxin; Xu, Luoshan
2014-01-01
As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σ_c(L)^op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730
The impact of assumed knowledge entry standards on undergraduate mathematics teaching in Australia
NASA Astrophysics Data System (ADS)
King, Deborah; Cattlin, Joann
2015-10-01
Over the last two decades, many Australian universities have relaxed their selection requirements for mathematics-dependent degrees, shifting from hard prerequisites to assumed knowledge standards which provide students with an indication of the prior learning that is expected. This has been regarded by some as a positive move, since students who may be returning to study, or who are changing career paths but do not have particular prerequisite study, now have more flexible pathways. However, there is mounting evidence to indicate that there are also significant negative impacts associated with assumed knowledge approaches, with large numbers of students enrolling in degrees without the stated assumed knowledge. For students, there are negative impacts on pass rates and retention rates and limitations to pathways within particular degrees. For institutions, the necessity to offer additional mathematics subjects at a lower level than normal and more support services for under-prepared students impacts on workloads and resources. In this paper, we discuss early research from the First Year in Maths project, which begins to shed light on the realities of a system that may in fact be too flexible.
Testing the plausibility of several a priori assumed error distributions for discharge measurements
NASA Astrophysics Data System (ADS)
Van Eerdenbrugh, Katrien; Verhoest, Niko E. C.
2017-04-01
Hydrologic measurements are used for a variety of research topics and operational projects. Regardless of the application, it is important to account for measurement uncertainty. In many projects, no local information is available about this uncertainty. Therefore, error distributions and accompanying parameters or uncertainty boundaries are often taken from literature without any knowledge about their applicability in the new context. In this research, an approach is proposed that uses relative differences between simultaneous discharge measurements to test the plausibility of several a priori assumed error distributions. For this test, simultaneous discharge measurements (measured with one type of device) from nine different Belgian rivers were available. This implies the assumption that their error distribution does not depend upon river, measurement location and measurement team. Moreover, it is assumed that the errors of two simultaneous measurements are not mutually dependent. This data set does not allow for a direct assessment of measurement errors. However, independently of the value of the real discharge, the relative difference between two simultaneous measurements can be expressed by their relative measurement errors. If a distribution is assumed for these errors, it is thus possible to test equality between the distributions of both the relative differences of the simultaneously measured discharge pairs and a created set of relative differences based on two equally sized samples of measurement errors from the assumed distribution. If the assumed error distribution is correct, these two data sets will have the same distribution. In this research, equality is tested with a two-sample nonparametric Kolmogorov-Smirnov test. The resulting p-value and the corresponding value of the Kolmogorov-Smirnov statistic (KS statistic) are used for this evaluation. The occurrence of a high p-value (and corresponding small value of the KS statistic) provides no
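The two-sample Kolmogorov-Smirnov statistic used in this plausibility test is simple to compute directly: it is the maximum absolute gap between the two empirical CDFs. A generic stdlib implementation (not the authors' code):

```python
import bisect

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest absolute
    difference between the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for v in sorted(set(a) | set(b)):
        # Empirical CDF value just after v, for each sample.
        fa = bisect.bisect_right(a, v) / len(a)
        fb = bisect.bisect_right(b, v) / len(b)
        d = max(d, abs(fa - fb))
    return d

print(ks_statistic([1, 2], [1, 3]))  # → 0.5
```

In the study's setting, one sample is the observed relative differences between simultaneous discharge measurements and the other is a synthetic set generated from the assumed error distribution; a small KS statistic (large p-value) means the assumed distribution cannot be rejected.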
Huang, J; Vieland, V J
2001-01-01
It is well known that the asymptotic null distribution of the homogeneity lod score (LOD) does not depend on the genetic model specified in the analysis. When appropriately rescaled, the LOD is asymptotically distributed as 0.5 χ²₀ + 0.5 χ²₁, regardless of the assumed trait model. However, because locus heterogeneity is a common phenomenon, the heterogeneity lod score (HLOD), rather than the LOD itself, is often used in gene mapping studies. We show here that, in contrast with the LOD, the asymptotic null distribution of the HLOD does depend upon the genetic model assumed in the analysis. In affected sib pair (ASP) data, this distribution can be worked out explicitly as (0.5 - c) χ²₀ + 0.5 χ²₁ + c χ²₂, where c depends on the assumed trait model. E.g., for a simple dominant model (HLOD/D), c is a function of the disease allele frequency p: for p = 0.01, c = 0.0006; while for p = 0.1, c = 0.059. For a simple recessive model (HLOD/R), c = 0.098 independently of p. This latter (recessive) distribution turns out to be the same as the asymptotic distribution of the MLS statistic under the possible triangle constraint, which is asymptotically equivalent to the HLOD/R. The null distribution of the HLOD/D is close to that of the LOD, because the weight c on the χ²₂ component is small. These results mean that the cutoff value for a test of size alpha will tend to be smaller for the HLOD/D than for the HLOD/R. For example, the alpha = 0.0001 cutoff (on the lod scale) for the HLOD/D with p = 0.05 is 3.01, while for the LOD it is 3.00, and for the HLOD/R it is 3.27. For general pedigrees, an explicit analytical expression of the null HLOD distribution does not appear possible, but it will still depend on the assumed genetic model. Copyright 2001 S. Karger AG, Basel
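The quoted cutoffs can be reproduced from the mixture's tail probability, using erfc for the 1-df chi-square survival function and exp(-t/2) for 2 df. This is a sketch based only on the distribution stated in the abstract; the bisection bracket is an assumption:

```python
import math

def lod_cutoff(alpha, c):
    """Solve for the lod-scale cutoff x whose tail probability is alpha
    under the mixture (0.5 - c)*chi2_0 + 0.5*chi2_1 + c*chi2_2, where the
    chi-square value is t = 2*ln(10)*x.  For x > 0 the point mass at 0
    contributes nothing; sf for 1 df is erfc(sqrt(t/2)), for 2 df exp(-t/2)."""
    def tail(x):
        t = 2.0 * math.log(10.0) * x
        return 0.5 * math.erfc(math.sqrt(t / 2.0)) + c * math.exp(-t / 2.0)
    lo, hi = 0.01, 20.0          # assumed bracket; tail is decreasing in x
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tail(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With the abstract's values, lod_cutoff(1e-4, 0.098) comes out close to the 3.27 quoted for the HLOD/R, and lod_cutoff(1e-4, 0.0) close to the familiar LOD cutoff of 3.00.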
NASA Astrophysics Data System (ADS)
Bailey, Scott M.; Thomas, Gary E.; Hervig, Mark E.; Lumpe, Jerry D.; Randall, Cora E.; Carstens, Justin N.; Thurairajah, Brentha; Rusch, David W.; Russell, James M.; Gordley, Larry L.
2015-05-01
Nadir viewing observations of Polar Mesospheric Clouds (PMCs) from the Cloud Imaging and Particle Size (CIPS) instrument on the Aeronomy of Ice in the Mesosphere (AIM) spacecraft are compared to Common Volume (CV), limb-viewing observations by the Solar Occultation For Ice Experiment (SOFIE) also on AIM. CIPS makes multiple observations of PMC-scattered UV sunlight from a given location at a variety of geometries and uses the variation of the radiance with scattering angle to determine a cloud albedo, particle size distribution, and Ice Water Content (IWC). SOFIE uses IR solar occultation in 16 channels (0.3-5 μm) to obtain altitude profiles of ice properties including the particle size distribution and IWC in addition to temperature, water vapor abundance, and other environmental parameters. CIPS and SOFIE made CV observations from 2007 to 2009. In order to compare the CV observations from the two instruments, SOFIE observations are used to predict the mean PMC properties observed by CIPS. Initial agreement is poor with SOFIE predicting particle size distributions with systematically smaller mean radii and a factor of two more albedo and IWC than observed by CIPS. We show that significantly improved agreement is obtained if the PMC ice is assumed to contain 0.5% meteoric smoke by mass, in agreement with previous studies. We show that the comparison is further improved if an adjustment is made in the CIPS data processing regarding the removal of Rayleigh scattered sunlight below the clouds. This change has an effect on the CV PMC, but is negligible for most of the observed clouds outside the CV. Finally, we examine the role of the assumed shape of the ice particle size distribution. Both experiments nominally assume the shape is Gaussian with a width parameter roughly half of the mean radius. We analyze modeled ice particle distributions and show that, for the column integrated ice distribution, Log-normal and Exponential distributions better represent the range
Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang
2015-07-31
We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.
Shear viscosity in the postquasistatic approximation
Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.
2010-05-15
We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.
Fast approximate stochastic tractography.
Iglesias, Juan Eugenio; Thompson, Paul M; Liu, Cheng-Yi; Tu, Zhuowen
2012-01-01
Many different probabilistic tractography methods have been proposed in the literature to overcome the limitations of classical deterministic tractography: (i) lack of quantitative connectivity information; and (ii) lack of robustness to noise, partial volume effects and selection of the seed region. However, these methods rely on Monte Carlo sampling techniques that are computationally very demanding. This study presents a fast approximate stochastic tractography algorithm (FAST) that can be used interactively, as opposed to having to wait several minutes to obtain the output after marking a seed region. In FAST, tractography is formulated as a Markov chain that relies on a transition tensor. The tensor is designed to mimic the features of a well-known probabilistic tractography method based on a random walk model and Monte Carlo sampling, but can also accommodate other propagation rules. Compared to the baseline algorithm, our method circumvents the sampling process and provides a deterministic solution at the expense of partially sacrificing sub-voxel accuracy. Therefore, the method is strictly speaking not stochastic, but provides a probabilistic output in the spirit of stochastic tractography methods. FAST was compared with the random walk model using real data from 10 patients in two different ways: 1. the probability maps produced by the two methods on five well-known fiber tracts were directly compared using metrics from the image registration literature; and 2. the connectivity measurements between different regions of the brain given by the two methods were compared using the correlation coefficient ρ. The results show that the connectivity measures provided by the two algorithms are well-correlated (ρ = 0.83), and so are the probability maps (normalized cross correlation 0.818 ± 0.081). The maps are also qualitatively (i.e., visually) very similar. The proposed method achieves a 60x speed-up (7 s vs. 7 min) over the Monte Carlo sampling scheme, therefore
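The core idea described above, replacing Monte Carlo sampling of walks with deterministic propagation of a probability vector through a Markov chain, can be sketched in miniature. The toy 1-D "tract" and transition matrix below are hypothetical stand-ins for FAST's transition tensor; this is a sketch of the principle, not the published algorithm.

```python
import numpy as np

# A 1-D toy of the deterministic propagation idea: a 20-voxel "tract"
# with a hypothetical transition matrix standing in for the transition
# tensor. We compare pushing the probability vector through the chain
# (no sampling) against a Monte Carlo random-walk baseline.
rng = np.random.default_rng(0)
n, seed, steps = 20, 2, 30

P = np.zeros((n, n))
for i in range(n):
    if i > 0:
        P[i, i - 1] = 0.3
    if i < n - 1:
        P[i, i + 1] = 0.5
    P[i, i] = 1.0 - P[i].sum()   # remaining probability mass stays put

# Deterministic propagation: repeated matrix-vector products.
p = np.zeros(n)
p[seed] = 1.0
visit_det = np.zeros(n)
for _ in range(steps):
    p = p @ P
    visit_det += p
visit_det /= visit_det.sum()

# Monte Carlo baseline: sample many random walks from the seed.
visit_mc = np.zeros(n)
for _ in range(5000):
    v = seed
    for _ in range(steps):
        v = rng.choice(n, p=P[v])
        visit_mc[v] += 1
visit_mc /= visit_mc.sum()

rho = np.corrcoef(visit_det, visit_mc)[0, 1]
print(f"correlation between deterministic and sampled maps: {rho:.3f}")
```

As in the study, the two visitation maps are highly correlated, while the deterministic version needs no sampling loop at all.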
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimation was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-diagonal FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
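To make the D-criterion comparison concrete, the sketch below assembles an FO-style FIM for a toy one-compartment model from parameter sensitivities at the sampling times, and compares a spread-out schedule with a clustered one. The model, parameter values, error model, and schedules are all invented for illustration; this is a sketch of the criterion, not optimal-design software such as PopED.

```python
import numpy as np

# Toy one-compartment model y(t) = (D/V) * exp(-k t); the Fisher
# information matrix is assembled from parameter sensitivities at the
# sampling times, in the spirit of an FO approximation with additive
# residual error sigma. Model, parameters, and schedules are invented
# for illustration.
D_dose, sigma = 100.0, 1.0
k_el, V = 0.2, 10.0  # hypothetical "true" parameter values

def sensitivities(t):
    y = (D_dose / V) * np.exp(-k_el * t)
    return np.array([-t * y,      # dy/dk
                     -y / V])     # dy/dV

def fim(times):
    M = np.zeros((2, 2))
    for t in times:
        g = sensitivities(t)
        M += np.outer(g, g) / sigma**2
    return M

# D-criterion comparison: spread-out vs clustered sampling schedule.
det_spread = np.linalg.det(fim([0.5, 2.0, 8.0, 24.0]))
det_clustered = np.linalg.det(fim([0.5, 0.6, 0.7, 0.8]))
print(f"det FIM spread={det_spread:.3g}, clustered={det_clustered:.3g}")
```

The clustered schedule's sensitivity vectors are nearly collinear, so its FIM determinant collapses, which is why clustering of support points is a warning sign in the designs discussed above.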
A variational justification of the assumed natural strain formulation of finite elements
NASA Technical Reports Server (NTRS)
Militello, Carmelo; Felippa, Carlos A.
1991-01-01
The objective is to study the assumed natural strain (ANS) formulation of finite elements from a variational standpoint. The study is based on two hybrid extensions of the Reissner-type functional that uses strains and displacements as independent fields. One of the forms is a genuine variational principle that contains an independent boundary traction field, whereas the other one represents a restricted variational principle. Two procedures for element level elimination of the strain field are discussed, and one of them is shown to be equivalent to the inclusion of incompatible displacement modes. Also, the 4-node C^0 plate-bending quadrilateral element is used to illustrate applications of this theory.
Comparison of symbolic and numerical integration methods for an assumed-stress hybrid shell element
NASA Technical Reports Server (NTRS)
Rengarajan, Govind; Knight, Norman F., Jr.; Aminpour, Mohammad A.
1993-01-01
Hybrid shell elements have long been regarded with reserve by commercial finite element developers despite the high degree of reliability and accuracy associated with such formulations. The fundamental reason is the inherently higher computational cost of the hybrid approach as compared to displacement-based formulations. However, a noteworthy factor in favor of hybrid elements is that numerical integration to generate element matrices can be entirely avoided by the use of symbolic integration. In this paper, the use of the symbolic computational approach is presented for an assumed-stress hybrid shell element with drilling degrees of freedom, and the significant time savings achieved are demonstrated through an example.
Traction free finite elements with the assumed stress hybrid model. M.S. Thesis, 1981
NASA Technical Reports Server (NTRS)
Kafie, Kurosh
1991-01-01
An effective approach in the finite element analysis of the stress field at the traction free boundary of a solid continuum was studied. Conventional displacement and assumed stress finite elements were used in the determination of stress concentrations around circular and elliptical holes. Specialized hybrid elements were then developed to improve the satisfaction of prescribed traction boundary conditions. Results of the stress analysis indicated that finite elements which exactly satisfy the free stress boundary conditions are the most accurate and efficient in such problems. A general approach for hybrid finite elements which incorporate traction free boundaries of arbitrary geometry was formulated.
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Mullen, Robert L.
1989-01-01
A linear finite strip plate element based on Mindlin-Reissner plate theory is developed. The analysis is suitable for both thin and thick plates. In the formulation, new transverse shear strains are introduced and assumed constant in each two-node linear strip. The element stiffness matrix is explicitly formulated for efficient computation and computer implementation. Numerical results showing the efficiency and predictive capability of the element for the analysis of plates are presented for different support and loading conditions and a wide range of thicknesses. No sign of shear locking is observed with the newly developed element.
An assumed-stress hybrid 4-node shell element with drilling degrees of freedom
NASA Technical Reports Server (NTRS)
Aminpour, M. A.
1992-01-01
An assumed-stress hybrid/mixed 4-node quadrilateral shell element is introduced that alleviates most of the deficiencies associated with such elements. The formulation of the element is based on the assumed-stress hybrid/mixed method using the Hellinger-Reissner variational principle. The membrane part of the element has 12 degrees of freedom including rotational or 'drilling' degrees of freedom at the nodes. The bending part of the element also has 12 degrees of freedom. The bending part of the element uses the Reissner-Mindlin plate theory which takes into account the transverse shear contributions. The element formulation is derived from an 8-node isoparametric element by expressing the midside displacement degrees of freedom in terms of displacement and rotational degrees of freedom at corner nodes. The element passes the patch test, is nearly insensitive to mesh distortion, does not 'lock', possesses the desirable invariance properties, has no hidden spurious modes, and for the majority of test cases used in this paper produces more accurate results than the other elements employed herein for comparison.
Aseismic Slips Preceding Ruptures Assumed for Anomalous Seismicities and Crustal Deformations
NASA Astrophysics Data System (ADS)
Ogata, Y.
2007-12-01
If aseismic slip occurs on a fault or its deeper extension, both seismicity and geodetic records around the source should be affected. Such anomalies are revealed to have occurred during the last several years leading up to the October 2004 Chuetsu Earthquake of M6.8, the March 2007 Noto Peninsula Earthquake of M6.9, and the July 2007 Chuetsu-Oki Earthquake of M6.8, which occurred successively in the near-field, central Japan. Seismic zones of negative and positive increments of the Coulomb failure stress, assuming such slips, show seismic quiescence and activation, respectively, relative to the rate predicted by the ETAS model. These are further supported by transient crustal movement around the source preceding the rupture. Namely, time series of the baseline distance records between a number of permanent GPS stations deviated from the predicted trend, with a trend of different slope that is basically consistent with the horizontal displacements of the stations due to the assumed slips. References: Ogata, Y. (2007), Seismicity and geodetic anomalies in a wide area preceding the Niigata-Ken-Chuetsu Earthquake of October 23, 2004, central Japan, J. Geophys. Res. 112, in press.
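The sign convention behind the quiescence/activation argument can be written down in a few lines. The friction coefficient and stress increments below are illustrative values, not numbers from the study.

```python
# Coulomb failure stress change on a receiver fault,
#   dCFS = d_tau + mu_eff * d_sigma_n   (unclamping positive),
# with dCFS > 0 promoting failure (seismic activation) and dCFS < 0
# inhibiting it (quiescence). The friction coefficient and stress
# increments are illustrative, not values from the study.
mu_eff = 0.4  # assumed effective friction coefficient

def d_cfs(d_tau, d_sigma_n):
    """Coulomb stress change (MPa) from shear and normal stress changes."""
    return d_tau + mu_eff * d_sigma_n

promoted = d_cfs(0.05, 0.02)     # shear increase + unclamping -> activation
inhibited = d_cfs(-0.03, -0.04)  # shear decrease + clamping -> quiescence
print(f"dCFS promoted zone: {promoted:+.3f} MPa, inhibited zone: {inhibited:+.3f} MPa")
```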
Srivastava, Sanjay; Guglielmo, Steve; Beer, Jennifer S
2010-03-01
In interpersonal perception, "perceiver effects" are tendencies of perceivers to see other people in a particular way. Two studies of naturalistic interactions examined perceiver effects for personality traits: seeing a typical other as sympathetic or quarrelsome, responsible or careless, and so forth. Several basic questions were addressed. First, are perceiver effects organized as a global evaluative halo, or do perceptions of different traits vary in distinct ways? Second, does assumed similarity (as evidenced by self-perceiver correlations) reflect broad evaluative consistency or trait-specific content? Third, are perceiver effects a manifestation of stable beliefs about the generalized other, or do they form in specific contexts as group-specific stereotypes? Findings indicated that perceiver effects were better described by a differentiated, multidimensional structure with both trait-specific content and a higher order global evaluation factor. Assumed similarity was at least partially attributable to trait-specific content, not just to broad evaluative similarity between self and others. Perceiver effects were correlated with gender and attachment style, but in newly formed groups, they became more stable over time, suggesting that they grew dynamically as group stereotypes. Implications for the interpretation of perceiver effects and for research on personality assessment and psychopathology are discussed.
Effects of assumed tow architecture on the predicted moduli and stresses in woven composites
NASA Technical Reports Server (NTRS)
Chapman, Clinton Dane
1994-01-01
This study deals with the effect of assumed tow architecture on the elastic material properties and stress distributions of plain weave woven composites. Specifically, how a cross-section is assumed to sweep out the tows of the composite is examined in great detail. The two methods studied are extrusion and translation. The sensitivity of this assumption to changes in waviness ratio is also examined. 3D finite elements were used to study a T300/Epoxy plain weave composite with symmetrically stacked mats. 1/32nd of the unit cell is shown to be adequate for analysis of this type of configuration with the appropriate set of boundary conditions. At low waviness, results indicate that for prediction of elastic properties, either method is adequate. At high waviness, certain elastic properties become more sensitive to the method used. Stress distributions at high waviness ratio are shown to vary greatly depending on the type of loading applied. At low waviness, both methods produce similar results.
Fernández, David Lorente
2015-01-01
This chapter uses a comparative approach to examine the maintenance of Indigenous practices related to Learning by Observing and Pitching In in two generations--parent generation and current child generation--in a Central Mexican Nahua community. In spite of cultural changes and the increase in Western schooling experience, these practices persist, to different degrees, as a Nahua cultural heritage with close historical relations to the key value of cuidado (stewardship). The chapter explores how children learn the value of cuidado in a variety of everyday activities, which include assuming responsibility in many social situations, primarily in cultivating corn, raising and protecting domestic animals, health practices, and participating in family ceremonial life. The chapter focuses on three main points: (1) Cuidado (assuming responsibility for), in the Nahua socio-cultural context, refers to the concepts of protection and "raising" as well as fostering other beings, whether humans, plants, or animals, to reach their potential and fulfill their development. (2) Children learn cuidado by contributing to family endeavors: They develop attention and self-motivation; they are capable of responsible actions; and they are able to transform participation to achieve the status of a competent member of local society. (3) This collaborative participation allows children to continue the cultural tradition and to preserve a Nahua heritage at a deeper level in a community in which Nahuatl language and dress have disappeared, and people do not identify themselves as Indigenous. © 2015 Elsevier Inc. All rights reserved.
Virta, R.L.
1998-01-01
Part of a special section on the state of industrial minerals in 1997. The state of the common clay industry worldwide for 1997 is discussed. Sales of common clay in the U.S. increased from 26.2 Mt in 1996 to an estimated 26.5 Mt in 1997. The amount of common clay and shale used to produce structural clay products in 1997 was estimated at 13.8 Mt.
NASA Technical Reports Server (NTRS)
Liu, W. Timothy; Niiler, Pearn P.
1990-01-01
In deriving the surface latent heat flux with the bulk formula for the thermal forcing of some ocean circulation models, two approximations are commonly made to bypass the use of atmospheric humidity in the formula. The first assumes a constant relative humidity, and the second supposes that the sea-air humidity difference varies linearly with the saturation humidity at sea surface temperature. Using climatological fields derived from the Marine Deck and long time series from ocean weather stations, the errors introduced by these two assumptions are examined. It is shown that the errors reach above 100 W/sq m over western boundary currents and 50 W/sq m over the tropical ocean. The two approximations also introduce erroneous seasonal and spatial variabilities with magnitudes over 50 percent of the observed variabilities.
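The two shortcuts being tested can be made explicit with the bulk formula itself. In the sketch below, all numbers (transfer coefficient, temperatures, the assumed constant relative humidity of 0.8, and the assumed proportionality factor of 0.2) are illustrative placeholders, not values from the study.

```python
import math

# Bulk formula LH = rho_a * Lv * Ce * U * (q_s - q_a), comparing the
# exact sea-air humidity difference with the two shortcuts discussed
# above: (1) a fixed relative humidity, and (2) a sea-air humidity
# difference proportional to the saturation humidity at the SST.
# All numeric values are illustrative.
rho_a, Lv, Ce = 1.2, 2.5e6, 1.2e-3   # air density, latent heat, transfer coeff.

def q_sat(T):
    # Saturation specific humidity (kg/kg) from a Tetens-style fit, T in K.
    es = 610.8 * math.exp(17.27 * (T - 273.15) / (T - 35.86))  # Pa
    return 0.622 * es / 101325.0

U, SST, Ta, RH_true = 8.0, 300.0, 298.0, 0.75
qs, qa = q_sat(SST), RH_true * q_sat(Ta)

LH_exact = rho_a * Lv * Ce * U * (qs - qa)
LH_rh = rho_a * Lv * Ce * U * (qs - 0.8 * q_sat(Ta))   # assume RH = 0.8
LH_lin = rho_a * Lv * Ce * U * (0.2 * qs)              # assume qs - qa = 0.2 qs

print(f"exact {LH_exact:.0f} W/m^2, const-RH {LH_rh:.0f}, linear {LH_lin:.0f}")
```

Whenever the actual relative humidity departs from the assumed constant, or the sea-air difference departs from strict proportionality, the approximated flux is biased, which is the error the study quantifies over the Marine Deck climatology.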
Does the rapid appearance of life on Earth suggest that life is common in the universe?
Lineweaver, Charles H; Davis, Tamara M
2002-01-01
It is sometimes assumed that the rapidity of biogenesis on Earth suggests that life is common in the Universe. Here we critically examine the assumptions inherent in this if-life-evolved-rapidly-life-must-be-common argument. We use the observational constraints on the rapidity of biogenesis on Earth to infer the probability of biogenesis on terrestrial planets with the same unknown probability of biogenesis as the Earth. We find that on such planets, older than approximately 1 Gyr, the probability of biogenesis is > 13% at the 95% confidence level. This quantifies an important term in the Drake Equation but does not necessarily mean that life is common in the Universe.
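A toy version of this style of inference models the waiting time to biogenesis as exponential with an unknown rate, conditions on biogenesis having happened quickly on Earth, and asks what that implies for a similar planet. The prior, the biogenesis window, and the grid below are illustrative assumptions; the paper's actual likelihood analysis (including anthropic selection effects) is more careful than this sketch.

```python
import numpy as np

# Toy Bayesian update: exponential waiting time to biogenesis with
# unknown rate lam (per Gyr), conditioned on biogenesis within dt.
# Prior, dt, and grid bounds are illustrative assumptions.
dt = 0.2                              # assumed biogenesis window, Gyr
lam = np.linspace(1e-3, 50.0, 20000)  # rate grid, 1/Gyr
dlam = lam[1] - lam[0]

prior = 1.0 / lam                     # scale-invariant prior (assumed)
like = 1.0 - np.exp(-lam * dt)        # P(biogenesis within dt | lam)
post = prior * like
post /= (post * dlam).sum()

# Posterior-averaged probability of biogenesis within 1 Gyr on a
# terrestrial planet with the same unknown rate.
p_life = (post * (1.0 - np.exp(-lam * 1.0)) * dlam).sum()
print(f"implied probability of biogenesis within 1 Gyr: {p_life:.2f}")
```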
2014-01-01
Background For many molecularly targeted agents, the probability of response may be assumed to either increase or increase and then plateau in the tested dose range. Therefore, identifying the maximum effective dose, defined as the lowest dose that achieves a pre-specified target response and beyond which improvement in the response is unlikely, becomes increasingly important. Recently, a class of Bayesian designs for single-arm phase II clinical trials based on hypothesis tests and nonlocal alternative prior densities has been proposed and shown to outperform common Bayesian designs based on posterior credible intervals and common frequentist designs. We extend this and related approaches to the design of phase II oncology trials, with the goal of identifying the maximum effective dose among a small number of pre-specified doses. Methods We propose two new Bayesian designs with continuous monitoring of response rates across doses to identify the maximum effective dose, assuming monotonicity of the response rate across doses. The first design is based on Bayesian hypothesis tests. To determine whether each dose level achieves a pre-specified target response rate and whether the response rates between doses are equal, multiple statistical hypotheses are defined using nonlocal alternative prior densities. The second design is based on Bayesian model averaging and also uses nonlocal alternative priors. We conduct simulation studies to evaluate the operating characteristics of the proposed designs, and compare them with three alternative designs. Results In terms of the likelihood of drawing a correct conclusion using similar between-design average sample sizes, the performance of our proposed design based on Bayesian hypothesis tests and nonlocal alternative priors is more robust than that of the other designs. Specifically, the proposed Bayesian hypothesis test-based design has the largest probability of being the best design among all designs under comparison and
Code of Federal Regulations, 2010 CFR
2010-10-01
42 Public Health 1 2010-10-01 false How do Self-Governance Tribes assume environmental responsibilities for construction projects under section 509 of the Act? Self-Governance Tribes assume environmental responsibilities by: (a) Adopting a resolution...
Approximate probability distributions of the master equation.
Thomas, Philipp; Grima, Ramon
2015-07-01
Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.
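A minimal worked example of a master equation helps fix ideas. The sketch below integrates a truncated birth-death master equation for molecule numbers and checks it against the known Poisson stationary distribution; it illustrates state-space truncation of a master equation, not the orthogonal-polynomial expansion developed in the abstract. Rate constants and the truncation level are illustrative.

```python
import math
import numpy as np

# Birth-death master equation for molecule number n,
#   dP_n/dt = k P_{n-1} - k P_n + g (n+1) P_{n+1} - g n P_n,
# truncated at N molecules and integrated with explicit Euler steps.
# Its stationary solution is Poisson with mean k/g.
k, g, N = 10.0, 1.0, 60

A = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    A[n, n] -= k + g * n
    if n > 0:
        A[n, n - 1] += k            # birth into state n
    if n < N:
        A[n, n + 1] += g * (n + 1)  # death into state n
A[N, N] += k                        # no birth flux out of the boundary

P = np.zeros(N + 1)
P[0] = 1.0
dt = 1e-3
for _ in range(20000):              # integrate to t = 20 >> 1/g
    P = P + dt * (A @ P)

poisson = np.array([math.exp(-k / g) * (k / g) ** n / math.factorial(n)
                    for n in range(N + 1)])
err = np.abs(P - poisson).sum()
print(f"L1 distance from the Poisson stationary distribution: {err:.1e}")
```

For this linear system the discrete distribution is exactly Poisson; the interest of approximation schemes like the one in the abstract is in nonlinear reaction networks where no such closed form exists.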
ERIC Educational Resources Information Center
Payne, William E.; Tyler, Charles R.
1999-01-01
Explains how a commons area can serve both the school and community by becoming a cost-effective, space-saving asset to the school building. Examines the commons area as a place for interaction; discusses subdividing it into smaller functional units, locating it, and related lighting and heating issues. (GR)
ERIC Educational Resources Information Center
Gordon, Douglas
2010-01-01
Student commons are no longer simply congregation spaces for students with time on their hands. They are integral to providing a welcoming environment and effective learning space for students. Many student commons have been transformed into spaces for socialization, an environment for alternative teaching methods, a forum for large group meetings…
Frankenstein's glue: transition functions for approximate solutions
NASA Astrophysics Data System (ADS)
Yunes, Nicolás
2007-09-01
Approximations are commonly employed to find approximate solutions to the Einstein equations. These solutions, however, are usually only valid in some specific spacetime region. A global solution can be constructed by gluing approximate solutions together, but this procedure is difficult because discontinuities can arise, leading to large violations of the Einstein equations. In this paper, we attempt to formalize this gluing scheme by studying transition functions that join approximate analytic solutions together. In particular, we propose certain sufficient conditions on these functions and prove that these conditions guarantee that the joined solution still satisfies the Einstein equations analytically to the same order as the approximate ones. An example is also provided for a binary system of non-spinning black holes, where the approximate solutions are taken to be given by a post-Newtonian expansion and a perturbed Schwarzschild solution. For this specific case, we show that if the transition functions satisfy the proposed conditions, then the joined solution does not contain any violations to the Einstein equations larger than those already inherent in the approximations. We further show that if these functions violate the proposed conditions, then the matter content of the spacetime is modified by the introduction of a matter shell, whose stress energy tensor depends on derivatives of these functions.
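The gluing idea is not specific to relativity and can be demonstrated on a toy function: two local expansions, each accurate only in its own region, are joined by a smooth transition function, and the glued approximation is good everywhere provided the expansions overlap where the transition happens. The function, expansion orders, and transition window below are illustrative choices, not the paper's construction.

```python
import math
import numpy as np

def taylor_sin(x, a, order):
    # Taylor expansion of sin about x = a, up to the given order.
    derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    h = x - a
    return sum(derivs[k % 4] / math.factorial(k) * h**k
               for k in range(order + 1))

x = np.linspace(0.0, 5.0, 2001)
y = np.sin(x)

near = taylor_sin(x, 0.0, 9)   # accurate near x = 0
far = taylor_sin(x, 5.0, 9)    # accurate near x = 5

# Smooth transition function: ~0 in the near zone, ~1 in the far zone,
# switching inside the overlap region where both expansions are good.
f = 0.5 * (1 + np.tanh((x - 2.5) / 0.3))
glued = (1 - f) * near + f * far

err_glued = np.max(np.abs(glued - y))
err_near = np.max(np.abs(near - y))
print(f"glued max error {err_glued:.1e} vs near expansion alone {err_near:.1e}")
```

The analogue of the paper's sufficient conditions is that the transition must be completed inside the overlap region; moving the tanh switch outside it would let one expansion's large error leak into the glued solution.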
On the Assumed Natural Strain method to alleviate locking in solid-shell NURBS-based finite elements
NASA Astrophysics Data System (ADS)
Caseiro, J. F.; Valente, R. A. F.; Reali, A.; Kiendl, J.; Auricchio, F.; Alves de Sousa, R. J.
2014-06-01
In isogeometric analysis (IGA), the functions used to describe the CAD geometry (such as NURBS) are also employed, in an isoparametric fashion, for the approximation of the unknown fields, leading to an exact geometry representation. Since the introduction of IGA, it has been shown that the high regularity properties of the employed functions lead in many cases to superior accuracy per degree of freedom with respect to standard FEM. However, as in Lagrangian elements, NURBS-based formulations can be negatively affected by the appearance of non-physical phenomena that "lock" the solution when constrained problems are considered. In order to alleviate such locking behaviors, the Assumed Natural Strain (ANS) method proposed for Lagrangian formulations is extended to NURBS-based elements in the present work, within the context of solid-shell formulations. The performance of the proposed methodology is assessed by means of a set of numerical examples. The results lead to the conclusion that applying the ANS method to quadratic NURBS-based elements successfully alleviates non-physical phenomena such as shear and membrane locking, significantly improving the element performance.
DALI: Derivative Approximation for LIkelihoods
NASA Astrophysics Data System (ADS)
Sellentin, Elena
2015-07-01
DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.
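The way higher model derivatives extend a Fisher-matrix expansion can be shown in one parameter. For a Gaussian likelihood with a nonlinear model, expanding the model to second order in the parameter offset yields the Fisher term plus higher-order "doublet" corrections. The toy model, fiducial values, and noise level below are invented for illustration; this is a sketch of the expansion idea, not the DALI code itself.

```python
import numpy as np

# One-parameter sketch: Gaussian likelihood with nonlinear model
# m(t; s) = exp(-s t). Expanding m to second order in ds gives
#   chi^2(ds) ~= F ds^2 + G ds^3 + (1/4) H ds^4,
# where F is the Fisher information and G, H involve second derivatives
# of the model. Toy numbers throughout.
t = np.linspace(0.0, 5.0, 50)
s0, sigma = 1.0, 0.05

m0 = np.exp(-s0 * t)
m1 = -t * m0            # dm/ds at the fiducial point
m2 = t**2 * m0          # d2m/ds2 at the fiducial point

F = np.sum(m1 * m1) / sigma**2
G = np.sum(m1 * m2) / sigma**2
H = np.sum(m2 * m2) / sigma**2

def chi2_exact(ds):
    return np.sum((np.exp(-(s0 + ds) * t) - m0) ** 2) / sigma**2

ds = 0.3
exact = chi2_exact(ds)
fisher = F * ds**2
dali = F * ds**2 + G * ds**3 + 0.25 * H * ds**4

print(f"exact {exact:.1f}  Fisher {fisher:.1f}  higher-order {dali:.1f}")
```

The Fisher term alone forces a symmetric Gaussian posterior; the derivative corrections track the true, skewed chi-square surface much further from the fiducial point, which is the wider range of posterior shapes the abstract refers to.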
Analysis of a photonic nanojet assuming a focused incident beam instead of a plane wave
NASA Astrophysics Data System (ADS)
Dong, Aotuo; Su, Chin
2014-12-01
The analysis of a photonic nanojet formed by dielectric spheres almost always assumes that the incident field is a plane wave. In this work, using vector spherical harmonics representations, we analyze the case of a more realistic incident field consisting of a focused beam formed by a microscope objective. Also included is the situation in which the sphere is not at the focal plane of the focus beam. We find that the dimension of the nanojet beam waist is less sensitive with respect to the azimuthal angle when compared with the plane wave case. Also, by shifting the particle away from the focal plane, the nanojet beam waist can be positioned outside the particle which otherwise would be inside or at the particle surface. Inherently, no such adjustment is possible with an incident plane wave assumption.
An assumed pdf approach for the calculation of supersonic mixing layers
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Drummond, J. P.; Hassan, H. A.
1992-01-01
In an effort to predict the effect that turbulent mixing has on the extent of combustion, a one-equation turbulence model is added to an existing Navier-Stokes solver with finite-rate chemistry. To average the chemical-source terms appearing in the species-continuity equations, an assumed pdf approach is also used. This code was used to analyze the mixing and combustion caused by the mixing layer formed by supersonic coaxial H2-air streams. The chemistry model employed allows for the formation of H2O2 and HO2. Comparisons are made with recent measurements using laser Raman diagnostics. Comparisons include temperature and its rms, and concentrations of H2, O2, N2, H2O, and OH. In general, good agreement with experiment was noted.
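The reason the chemical source terms must be averaged over an assumed PDF, rather than evaluated at mean quantities, comes from the strong nonlinearity of Arrhenius kinetics. The sketch below averages a generic Arrhenius-type rate over an assumed Gaussian temperature PDF; all values are illustrative, not the paper's flow conditions or chemistry model.

```python
import numpy as np

# Assumed-PDF averaging in miniature: the mean of an Arrhenius-type
# rate k(T) = A exp(-Ta / T) over an assumed Gaussian temperature PDF
# differs strongly from the rate evaluated at the mean temperature.
# All numeric values are illustrative.
A_pre, Ta = 1.0e8, 15000.0      # pre-exponential factor, activation temperature (K)
Tmean, Trms = 1200.0, 150.0     # assumed mean and rms temperature

T = np.linspace(Tmean - 4 * Trms, Tmean + 4 * Trms, 4001)
dT = T[1] - T[0]
pdf = np.exp(-0.5 * ((T - Tmean) / Trms) ** 2)
pdf /= (pdf * dT).sum()          # normalize the assumed Gaussian PDF

k_mean_T = A_pre * np.exp(-Ta / Tmean)              # rate at the mean temperature
k_pdf = (A_pre * np.exp(-Ta / T) * pdf * dT).sum()  # PDF-averaged rate

print(f"k(Tbar) = {k_mean_T:.3g}, <k> = {k_pdf:.3g}, ratio = {k_pdf / k_mean_T:.1f}")
```

The hot tail of the fluctuating temperature dominates the average, so the PDF-averaged rate exceeds the rate at the mean temperature by a large factor; ignoring this underestimates the effect of turbulent mixing on combustion.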
Helicobacter pylori can be induced to assume the morphology of Helicobacter heilmannii.
Fawcett, P T; Gibney, K M; Vinette, K M
1999-04-01
Cultures of Helicobacter pylori obtained from the American Type Culture Collection (strain 43504) were grown as isolated colonies or lawns on blood agar plates and in broth culture with constant shaking. Examination of bacterial growth with Gram-stained fixed preparation and differential interference contrast microscopy on wet preparations revealed that bacteria grown on blood agar plates had a morphology consistent with that normally reported for H. pylori whereas bacteria from broth cultures had the morphologic appearance of Helicobacter heilmannii. Bacteria harvested from blood agar plates assumed an H. heilmannii-like morphology when transferred to broth cultures, and bacteria from broth cultures grew with morphology typical of H. pylori when grown on blood agar plates. Analysis by PCR of bacteria isolated from blood agar plates and broth cultures indicated that a single strain of bacteria (H. pylori) was responsible for both morphologies.
Analysis of an object assumed to contain “Red Mercury”
NASA Astrophysics Data System (ADS)
Obhođaš, Jasmina; Sudac, Davorin; Blagus, Saša; Valković, Vladivoj
2007-08-01
After having been informed about an attempt of illicit trafficking, the Organized Crime Division of the Zagreb Police Authority confiscated in November 2003 a hand-sized metal cylinder suspected to contain "Red Mercury" (RM). The sample assumed to contain RM was analyzed with two nondestructive analytical methods, activation analysis with 14.1 MeV neutrons and EDXRF analysis, in order to obtain information about the nature of the investigated object. The activation analysis with 14.1 MeV neutrons showed that the container and its contents were characterized by the following chemical elements: Hg, Fe, Cr and Ni. By using EDXRF analysis, it was shown that the elements Fe, Cr and Ni were constituents of the capsule. Therefore, it was concluded that these three elements were present in the capsule only, while the content of the unknown material was Hg. Antimony as a hypothetical component of red mercury was not detected.
Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa
2016-05-24
Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM's actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM's side effects (damages). Reflecting the governance challenges on immediate SRM deployment, we assume scenarios in which SRM could only be deployed with a limited degree of cooling (0.5 °C) only after 2050, when climate sensitivity uncertainty is assumed to be resolved and only when the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value is originated from its rapid cooling capability that would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990-2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels were in the range between $2.5 and $5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that lower limits of the option values for temperature targets below +2.4 °C would be greater than $2.5 trillion.
Taylor Approximations and Definite Integrals
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2007-01-01
We investigate the possibility of approximating the value of a definite integral by approximating the integrand rather than using numerical methods to approximate the value of the definite integral. Particular cases considered include examples where the integral is improper, such as an elliptic integral. (Contains 4 tables and 2 figures.)
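The idea in the abstract can be sketched in a few lines: replace the integrand with a truncated Taylor series, integrate the polynomial term by term, and compare with a direct numerical quadrature. The example below (not from the article; the integrand and bounds are chosen for illustration) applies this to the improper-looking integral of sin(x)/x.

```python
import math

def taylor_integral_sinc(upper, n_terms):
    """Integrate sin(x)/x on [0, upper] by integrating its Taylor series
    term by term: sin(x)/x = sum_{k>=0} (-1)^k x^(2k) / (2k+1)!."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * upper ** (2 * k + 1) / ((2 * k + 1) * math.factorial(2 * k + 1))
    return total

# Reference value via a fine midpoint quadrature (which sidesteps the
# removable singularity at x = 0 because no node lands exactly on 0).
n = 100000
h = 1.0 / n
reference = sum(math.sin((i + 0.5) * h) / ((i + 0.5) * h) * h for i in range(n))

approx = taylor_integral_sinc(1.0, 6)
print(approx, reference)  # both close to 0.946083
```

With six Taylor terms the result already agrees with the quadrature to far better than single precision, which is the article's point: approximating the integrand can be both simpler and more accurate than approximating the integral directly.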
Approximate equilibria for Bayesian games
NASA Astrophysics Data System (ADS)
Mallozzi, Lina; Pusillo, Lucia; Tijs, Stef
2008-07-01
In this paper the problem of the existence of approximate equilibria in mixed strategies is central. Sufficient conditions are given under which approximate equilibria exist for non-finite Bayesian games. Further one possible approach is suggested to the problem of the existence of approximate equilibria for the class of multicriteria Bayesian games.
McCaskey, Alexander J.
2016-11-18
There are many common software patterns and utilities for the ORNL Quantum Computing Institute that can and should be shared across projects. Otherwise, we find duplication of code, which adds unwanted complexity. This software product seeks to alleviate that by providing common utilities such as object factories, graph data structures, parameter input mechanisms, etc., for other software products within the ORNL Quantum Computing Institute. This work enables pure basic research, has no export-controlled utilities, and has no real commercial value.
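The "object factory" pattern mentioned above can be sketched as a registry that maps string keys to constructors. This is an illustrative toy, not the ORNL QCI code itself; all names here are made up.

```python
# Minimal registry-based object factory, a typical shared utility.
class Factory:
    def __init__(self):
        self._creators = {}

    def register(self, name):
        """Decorator that registers a class under a string key."""
        def wrap(cls):
            self._creators[name] = cls
            return cls
        return wrap

    def create(self, name, *args, **kwargs):
        try:
            return self._creators[name](*args, **kwargs)
        except KeyError:
            raise ValueError(f"no type registered under {name!r}") from None

factory = Factory()

@factory.register("graph")
class Graph:
    """Tiny undirected graph, standing in for a shared data structure."""
    def __init__(self):
        self.adj = {}

    def add_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

g = factory.create("graph")
g.add_edge(1, 2)
```

The point of the pattern is that client code depends only on the registry key, so implementations can be swapped without touching call sites.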
NASA Astrophysics Data System (ADS)
Namegaya, Y.; Ueno, T.; Satake, K.; Tanioka, Y.
2010-12-01
Tsunami waveform inversion is often used to study the source of tsunamigenic earthquakes. In this method, subsurface fault planes are divided into small subfaults, and the slip distribution, and hence the seafloor deformation, is estimated. However, it is sometimes difficult to judge the actual fault plane for offshore earthquakes such as those along the eastern margin of the Japan Sea. We developed an inversion method to estimate vertical seafloor deformation directly from observed tsunami waveforms. The tsunami source area is divided into many nodes, and the vertical seafloor deformation is calculated around each node by using B-spline functions. The tsunami waveforms are calculated from each node and used as the Green's functions for inversion. To stabilize the inversion and avoid overestimation of data errors, we introduce smoothing equations similar to Laplace's equation. The optimum smoothing strength is estimated from Akaike's Bayesian Information Criterion (ABIC). The advantage of this method is that the vertical seafloor deformation can be estimated without assuming a fault plane. We applied the method to three recent earthquakes around Japan: the 2007 Chuetsu-oki, 2007 Noto Hanto, and 2003 Tokachi-oki earthquakes. The Chuetsu-oki earthquake (M6.8) occurred off the Japan Sea coast of central Japan on 16 July 2007. For this earthquake, the complicated aftershock distribution makes it difficult to judge whether the southeast-dipping fault or the northwest-dipping fault was the actual fault plane. The tsunami inversion result indicates that the uplifted area extends about 10 km from the coastline, and there are two peaks of uplift: about 40 cm in the south and about 20 cm in the north. The Noto Hanto earthquake (M6.9) occurred off the Noto peninsula, also along the Japan Sea coast of central Japan, on 25 March 2007. The inversion result indicates that the uplifted area extends about 10 km off the coast, and the largest uplift amount is more than 40 cm. Location of
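The combination of Green's functions plus Laplacian smoothing equations is, in structure, a Tikhonov-regularized least-squares problem. The toy below illustrates that structure only: the matrix, the "deformation," and the smoothing strength are all invented (the paper selects the strength via ABIC, which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem standing in for the tsunami case: data d are a
# linear functional (the "Green's functions" G) of a smooth deformation m_true.
n_nodes = 30
x = np.linspace(0.0, 1.0, n_nodes)
m_true = np.exp(-((x - 0.4) / 0.15) ** 2)     # smooth uplift pattern
G = rng.normal(size=(60, n_nodes))            # stand-in Green's functions
d = G @ m_true                                # synthetic "waveform" data

# Smoothing equations: penalize the discrete Laplacian (second differences)
# of the model, appended to the data equations as extra rows.
L = np.diff(np.eye(n_nodes), n=2, axis=0)
alpha = 0.1                                   # smoothing strength (illustrative)
A = np.vstack([G, alpha * L])
b = np.concatenate([d, np.zeros(L.shape[0])])
m_est = np.linalg.lstsq(A, b, rcond=None)[0]

print(np.max(np.abs(m_est - m_true)))
```

Because the true model is smooth, the Laplacian penalty introduces only a small bias while stabilizing the solution, which is the role the smoothing equations play in the inversion described above.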
Approximating W projection as a separable kernel
NASA Astrophysics Data System (ADS)
Merry, Bruce
2016-02-01
W projection is a commonly used approach to allow interferometric imaging to be accelerated by fast Fourier transforms, but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid- to high frequencies. We also show that hybrid imaging algorithms combining W projection with either faceting, snapshotting, or W stacking allow the error to be made arbitrarily small, making the approximation suitable even for high-resolution wide-field instruments.
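The best separable approximation of a sampled 2-D kernel is its rank-1 SVD truncation, and the truncated singular values quantify the error. The snippet below demonstrates this on a deliberately simple, nearly separable kernel; it is not the W-projection kernel itself, which is more involved.

```python
import numpy as np

# A 2-D kernel that is nearly (but not exactly) separable: a Gaussian times
# a small non-separable-looking perturbation.
x = np.linspace(-3, 3, 65)
X, Y = np.meshgrid(x, x)
K = np.exp(-(X**2 + Y**2) / 2) * (1 + 0.01 * X * Y)

U, s, Vt = np.linalg.svd(K)
K_sep = s[0] * np.outer(U[:, 0], Vt[0, :])   # best separable (rank-1) fit

rel_err = np.linalg.norm(K - K_sep) / np.linalg.norm(K)
print(rel_err)
```

Applying a separable kernel costs two 1-D convolutions instead of one 2-D convolution, which is the storage and speed advantage the paper exploits; the question is only whether `rel_err` is small enough for the field of view at hand.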
Local density approximations from finite systems
NASA Astrophysics Data System (ADS)
Entwistle, M. T.; Hodgson, M. J. P.; Wetherell, J.; Longstaff, B.; Ramsden, J. D.; Godby, R. W.
2016-11-01
The local density approximation (LDA) constructed through quantum Monte Carlo calculations of the homogeneous electron gas (HEG) is the most common approximation to the exchange-correlation functional in density functional theory. We introduce an alternative set of LDAs constructed from slablike systems of one, two, and three electrons that resemble the HEG within a finite region, and illustrate the concept in one dimension. Comparing with the exact densities and Kohn-Sham potentials for various test systems, we find that the LDAs give a good account of the self-interaction correction, but are less reliable when correlation is stronger or currents flow.
Logistic Approximation to the Normal: The KL Rationale
ERIC Educational Resources Information Center
Savalei, Victoria
2006-01-01
A rationale is proposed for approximating the normal distribution with a logistic distribution using a scaling constant based on minimizing the Kullback-Leibler (KL) information, that is, the expected amount of information available in a sample to distinguish between two competing distributions using a likelihood ratio (LR) test, assuming one of…
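The KL-minimizing scaling constant can be found numerically: evaluate the KL divergence between the standard normal and a logistic with scale 1/k over a grid of k and take the minimizer. This is a sketch of the idea, not the article's derivation; the quadrature range, grid, and the direction of the KL divergence used here are choices of this example. The minimizer lands in the same neighbourhood as the familiar constants near 1.7 to 1.8.

```python
import math

def kl_normal_vs_logistic(k, lo=-12.0, hi=12.0, n=2000):
    """KL( N(0,1) || logistic with scale 1/k ), by midpoint quadrature."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        p = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
        # Logistic density with location 0, scale 1/k; written with |x| for
        # numerical stability, which is valid because the density is symmetric.
        e = math.exp(-k * abs(x))
        q = k * e / (1.0 + e) ** 2
        total += p * math.log(p / q) * h
    return total

# Grid search for the KL-minimizing scaling constant k.
best_k = min((kl_normal_vs_logistic(k / 1000.0), k / 1000.0)
             for k in range(1400, 2001, 2))[1]
print(best_k)
```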
Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications
NASA Technical Reports Server (NTRS)
Chaki, Sagar; Gurfinkel, Arie
2010-01-01
We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm. We show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.
Bilateral Painful Ophthalmoplegia: A Case of Assumed Tolosa-Hunt Syndrome
Kamusella, Peter; Andresen, Reimer
2016-01-01
We present the case of a 47-year-old man with vertical and horizontal gaze paresis combined with periorbital pain that developed initially on the right side but extended after 3-4 days to the left. Gadolinium-enhancing tissue in the cavernous sinus was shown by MRI of the orbital region in the T1 spin echo sequence with fat saturation (SEfs) with a slice thickness of 2 mm. As no other abnormalities were found and the pain resolved within 72 hours of treatment with cortisone, a bilateral Tolosa-Hunt Syndrome (THS) was assumed. THS is an uncommon cause of Painful Ophthalmoplegia (PO), and only a few cases of bilateral appearance have been reported. Even though the diagnostic criteria for THS require unilateral symptoms, we suggest that in patients with bilateral PO, THS should not be excluded as a differential diagnosis. Furthermore, when using MRI to detect granulomatous tissue in the orbital region, the chosen sequence should be T1 SEfs, and the slice thickness should be as low as 2 mm, as granulomas are often no larger than 1-2 mm. PMID:27134970
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr. (Principal Investigator)
1996-01-01
The goal of this research project is to develop assumed-stress hybrid elements with rotational degrees of freedom for analyzing composite structures. During the first year of the three-year activity, the effort was directed to further assess the AQ4 shell element and its extensions to buckling and free vibration problems. In addition, the development of a compatible 2-node beam element was to be accomplished. The extensions and new developments were implemented in the Computational Structural Mechanics Testbed COMET. An assessment was performed to verify the implementation and to assess the performance of these elements in terms of accuracy. During the second and third years, extensions to geometrically nonlinear problems were developed and tested. This effort involved working with the nonlinear solution strategy as well as the nonlinear formulation for the elements. This research has resulted in the development and implementation of two additional element processors (ES22 for the beam element and ES24 for the shell elements) in COMET. The software was developed using a SUN workstation and has been ported to the NASA Langley Convex named blackbird. Both element processors are now part of the baseline version of COMET.
Oseso, Linda; Magaret, Amalia S; Jerome, Keith R; Fox, Julie; Wald, Anna
2016-09-01
Current treatment of genital herpes is focused on ameliorating signs and symptoms but is not curative. However, as potential herpes simplex virus (HSV) cure approaches are tested in the laboratory, we aimed to assess the interest in such studies among persons with genital herpes and their willingness to assume the risks associated with experimental therapy. We constructed an anonymous online questionnaire that was posted on websites that provide information regarding genital herpes. The questions collected demographic and clinical information on adults who self-reported as having genital herpes, and assessed attitudes toward and willingness to participate in HSV cure clinical research. Seven hundred eleven participants provided sufficient responses to be included in the analysis. Sixty-six percent were women; the median age was 37 years, and the median time since genital HSV diagnosis was 4.7 years. The willingness to participate in trials increased from 59.0% in phase 1 to 68.5% in phase 2 and 81.2% in phase 3 trials, and 40% reported willingness to participate even in the absence of immediate, personal benefits. The most desirable outcome was the elimination of the risk of transmission to a sex partner or neonate. The mean perceived severity of receiving a diagnosis of genital HSV-2 was 4.2 on a scale of 1 to 5. Despite the availability of suppressive therapy, persons with genital herpes are interested in participating in clinical research aimed at curing HSV, especially in more advanced stages of development.
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Gregory, Irene M.
2014-01-01
Control-theoretic modeling of human operators' dynamic behavior in manual control tasks has a long, rich history. There has been significant work on techniques used to identify the pilot model of a given structure. This research attempts to go beyond pilot identification based on experimental data to develop a predictor of pilot behavior. Two methods for predicting pilot stick input during changing aircraft dynamics and deducing changes in pilot behavior are presented. This approach may also have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot. With this ability to detect changes in piloting behavior, the possibility now exists to mediate human adverse behaviors, hardware failures, and software anomalies with autonomy that may ameliorate these undesirable effects. However, appropriate timing of when autonomy should assume control depends on the criticality of actions to safety, the sensitivity of methods to accurately detect these adverse changes, and the effects of changes in levels of automation of the system as a whole.
Epidemiology of child pedestrian casualty rates: can we assume spatial independence?
Hewson, Paul J
2005-07-01
Child pedestrian injuries are often investigated by means of ecological studies, yet are clearly part of a complex spatial phenomenon. Spatial dependence within such ecological analyses has rarely been assessed, yet the validity of basic statistical techniques relies on a number of independence assumptions. Recent work from Canada has highlighted the potential for modelling spatial dependence within data aggregated in terms of the number of road casualties who were resident in a given geographical area. Other jurisdictions aggregate data in terms of the number of casualties in the geographical area in which the collision took place. This paper contrasts child pedestrian casualty data from Devon County, UK, which have been aggregated by both methods. A simple ecological model, with minimally useful covariates relating to measures of child deprivation, provides evidence that data aggregated in terms of the casualty's home location cannot be assumed to be spatially independent, and that for analysis of these data to be valid there must be some accounting for spatial autocorrelation within the model structure. Conversely, data aggregated in terms of the collision location (as is usual in the UK) were found to be spatially independent. Whilst the spatial model is clearly more complex, it provided a superior fit to that seen with either collision-aggregated or non-spatial models. More importantly, the ecological-level association between deprivation and casualty rate is much lower once the spatial structure is accounted for, highlighting the importance of using appropriately structured models.
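A standard first screen for the spatial dependence discussed above is Moran's I, which is near zero for spatially independent data, positive for clustering, and negative for alternation. The toy below computes it on two synthetic lattices (not the Devon data) with rook adjacency.

```python
import numpy as np

def morans_i(grid):
    """Moran's I on a 2-D array with rook (4-neighbour) adjacency and
    binary weights."""
    x = grid.astype(float)
    z = x - x.mean()
    num = 0.0
    w_sum = 0.0
    rows, cols = x.shape
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (0, 1), (-1, 0), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    num += z[i, j] * z[ni, nj]
                    w_sum += 1.0
    return (x.size / w_sum) * num / (z ** 2).sum()

clustered = np.zeros((8, 8))
clustered[:, :4] = 1.0                          # high rates on one side
checker = np.indices((8, 8)).sum(axis=0) % 2    # perfectly alternating pattern

print(morans_i(clustered), morans_i(checker))   # → 0.857... -1.0
```

Clustered rates give a strongly positive I while the checkerboard gives exactly -1, illustrating why an ecological model fitted to spatially clustered casualty counts needs an explicit spatial term.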
From the Kochen-Specker theorem to noncontextuality inequalities without assuming determinism.
Kunjwal, Ravi; Spekkens, Robert W
2015-09-11
The Kochen-Specker theorem demonstrates that it is not possible to reproduce the predictions of quantum theory in terms of a hidden variable model where the hidden variables assign a value to every projector deterministically and noncontextually. A noncontextual value assignment to a projector is one that does not depend on which other projectors (the context) are measured together with it. Using a generalization of the notion of noncontextuality that applies to both measurements and preparations, we propose a scheme for deriving inequalities that test whether a given set of experimental statistics is consistent with a noncontextual model. Unlike previous inequalities inspired by the Kochen-Specker theorem, we do not assume that the value assignments are deterministic, and therefore, in the face of a violation of our inequality, the possibility of salvaging noncontextuality by abandoning determinism is no longer an option. Our approach is operational in the sense that it does not presume quantum theory: a violation of our inequality implies the impossibility of a noncontextual model for any operational theory that can account for the experimental observations, including any successor to quantum theory.
NASA Technical Reports Server (NTRS)
Hood, L. L.
1983-01-01
A modeling analysis is carried out of six experimental phase space density profiles for nearly equatorially mirroring protons using methods based on the approach of Thomsen et al. (1977). The form of the time-averaged radial diffusion coefficient D(L) that gives an optimal fit to the experimental profiles is determined under the assumption that simple satellite plus Ring E absorption of inwardly diffusing particles and steady-state radial diffusion are the dominant physical processes affecting the proton data in the L range that is modeled. An extension of the single-satellite model employed by Thomsen et al. to a model that includes multisatellite and ring absorption is described, and the procedures adopted for estimating characteristic satellite and ring absorption times are defined. The results obtained in applying three representative solid-body absorption models to evaluate D(L) in the range where L is between 4 and 16 are reported, and a study is made of the sensitivity of the preferred amplitude and L dependence for D(L) to the assumed model parameters. The inferred form of D(L) is then compared with that which would be predicted if various proposed physical mechanisms for driving magnetospheric radial diffusion are operative at Saturn.
Defining modeling parameters for juniper trees assuming pleistocene-like conditions at the NTS
Tarbox, S.R.; Cochran, J.R.
1994-12-31
This paper addresses part of Sandia National Laboratories' (SNL) efforts to assess the long-term performance of the Greater Confinement Disposal (GCD) facility located on the Nevada Test Site (NTS). At issue is whether the GCD site complies with 40 CFR 191 standards set for transuranic (TRU) waste burial. SNL has developed a radionuclide transport model which can be used to assess TRU radionuclide movement away from the GCD facility. An earlier iteration of the model found that radionuclide uptake and release by plants is an important aspect of the system to consider. Currently, the shallow-rooted plants at the NTS do not pose a threat to the integrity of the GCD facility. However, the threat increases substantially if deeper-rooted woodland species migrate to the GCD facility, given a shift to a wetter climate. The model parameters discussed here will be included in the next model iteration, which assumes a climate shift will provide for the growth of juniper trees at the GCD facility. Model parameters were developed using published data, and wherever possible, data were taken from juniper and piñon-juniper studies that mirrored as many aspects of the GCD facility as possible.
Is the perception of 3D shape from shading based on assumed reflectance and illumination?
Todd, James T.; Egan, Eric J. L.; Phillips, Flip
2014-01-01
The research described in the present article was designed to compare three types of image shading: one generated with a Lambertian BRDF and homogeneous illumination such that image intensity was determined entirely by local surface orientation irrespective of position; one that was textured with a linear intensity gradient, such that image intensity was determined entirely by local surface position irrespective of orientation; and another that was generated with a Lambertian BRDF and inhomogeneous illumination such that image intensity was influenced by both position and orientation. A gauge figure adjustment task was used to measure observers' perceptions of local surface orientation on the depicted surfaces, and the probe points included 60 pairs of regions that both had the same orientation. The results show clearly that observers' perceptions of these three types of stimuli were remarkably similar, and that probe regions with similar apparent orientations could have large differences in image intensity. This latter finding is incompatible with any process for computing shape from shading that assumes any plausible reflectance function combined with any possible homogeneous illumination. PMID:26034561
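Two of the three stimulus types in the abstract have very compact mathematical forms: Lambertian shading under homogeneous distant illumination depends only on surface orientation, while the linear-gradient texture depends only on position. The sketch below states both; the specific normal, light direction, and gradient width are illustrative, not values from the study.

```python
import math

def lambertian_intensity(normal, light, albedo=1.0):
    """Intensity under homogeneous distant illumination:
    I = albedo * max(0, n . l). Position never enters, only orientation."""
    n_len = math.sqrt(sum(c * c for c in normal))
    l_len = math.sqrt(sum(c * c for c in light))
    ndotl = sum(a * b for a, b in zip(normal, light)) / (n_len * l_len)
    return albedo * max(0.0, ndotl)

def gradient_intensity(x, width=10.0):
    """The linear-gradient texture stimulus: intensity is a linear function
    of horizontal position x, irrespective of orientation."""
    return x / width

light = (0.0, 0.0, 1.0)
n = (0.3, 0.1, 0.95)    # one surface orientation, present at two positions

i_left = lambertian_intensity(n, light)     # point at x = 2
i_right = lambertian_intensity(n, light)    # point at x = 8
print(i_left == i_right, gradient_intensity(2.0) == gradient_intensity(8.0))
# → True False
```

Under the Lambertian model, two probe points with the same orientation must share an intensity regardless of where they sit in the image; under the gradient texture, they generally do not. The finding that observers perceived both stimuli similarly is what rules out shape-from-shading schemes built on the first assumption.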
Assume-Guarantee Verification of Source Code with Design-Level Assumptions
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.
2004-01-01
Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
Joint Modeling of Covariates and Censoring Process Assuming Non-Constant Dropout Hazard.
Jaffa, Miran A; Jaffa, Ayad A
2016-06-01
In this manuscript we propose a novel approach for the analysis of longitudinal data that have informative dropout. We jointly model the slopes of covariates of interest and the censoring process for which we assume a survival model with logistic non-constant dropout hazard in a likelihood function that is integrated over the random effects. Maximization of the marginal likelihood function results in acquiring maximum likelihood estimates for the population slopes and empirical Bayes estimates for the individual slopes that are predicted using Gaussian quadrature. Our simulation study results indicated that the performance of this model is superior in terms of accuracy and validity of the estimates compared to other models such as logistic non-constant hazard censoring model that does not include covariates, logistic constant censoring model with covariates, bootstrapping approach as well as mixed models. Sensitivity analyses for the dropout hazard and non-Gaussian errors were also undertaken to assess robustness of the proposed approach to such violations. Our model was illustrated using a cohort of renal transplant patients with estimated glomerular filtration rate as the outcome of interest.
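The phrase "integrated over the random effects ... using Gaussian quadrature" refers to Gauss-Hermite quadrature, which turns an expectation over a normal random effect into a finite weighted sum. The sketch below shows only that quadrature step on known Gaussian moments, not the authors' likelihood.

```python
import numpy as np

# Gauss-Hermite integrates  ∫ e^{-t^2} f(t) dt; substituting b = sqrt(2)·σ·t
# turns it into an expectation over a N(0, σ²) random effect:
#   E[g(b)] ≈ (1/sqrt(π)) Σ_i w_i g(sqrt(2)·σ·t_i)
nodes, weights = np.polynomial.hermite.hermgauss(20)

def gauss_mean(g, sigma=1.0):
    return (weights * g(np.sqrt(2.0) * sigma * nodes)).sum() / np.sqrt(np.pi)

# Sanity checks against known Gaussian moments:
m2 = gauss_mean(lambda b: b ** 2)        # E[b^2] = σ² = 1 (exact for polynomials)
mexp = gauss_mean(lambda b: np.exp(b))   # E[e^b] = e^{1/2}
print(m2, mexp)
```

In a marginal-likelihood setting, `g` would be the conditional likelihood of a subject's data given its random effect, and the same weighted sum approximates the integral that has no closed form.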
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
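The GLA idea can be shown in a few lines: compute the refined-to-crude scaling factor and its derivative at the nominal design, then let the factor vary linearly instead of holding it constant. The two "models" below are invented analytic stand-ins, not the beam FEM models from the abstract.

```python
# Toy illustration of the global-local approximation (GLA): scale a crude
# model by a scaling factor that varies linearly in the design variable x.
def crude(x):      # cheap, low-fidelity response (illustrative)
    return 1.0 / x

def refined(x):    # expensive, high-fidelity response (illustrative)
    return 1.0 / x + 0.1 / x ** 2

x0, dx = 1.0, 1e-6
beta0 = refined(x0) / crude(x0)
# Finite-difference derivative of the scaling factor at x0.
dbeta = (refined(x0 + dx) / crude(x0 + dx) - beta0) / dx

def gla(x):          # linearly varying scaling factor
    return (beta0 + dbeta * (x - x0)) * crude(x)

def const_scale(x):  # conventional constant scaling factor
    return beta0 * crude(x)

x = 1.3
err_gla = abs(gla(x) - refined(x))
err_const = abs(const_scale(x) - refined(x))
print(err_gla, err_const)
```

At a 30% design change the linearly varying factor tracks the refined model noticeably better than the constant factor, which is precisely the extension of range the method claims.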
Combining global and local approximations
Haftka, R. T.
1991-09-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model. 6 refs.
Phenomenological applications of rational approximants
NASA Astrophysics Data System (ADS)
Gonzàlez-Solís, Sergi; Masjuan, Pere
2016-08-01
We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z) ln(1 + z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract |Vub| from semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
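The pedagogical exercise mentioned above can be reproduced directly: build the [1/1] Padé approximant of f(z) = ln(1+z)/z from its first three Taylor coefficients and compare it with the truncated Taylor series itself. This sketch follows the standard Padé construction, not necessarily the authors' notation.

```python
import math

# Taylor coefficients of f(z) = ln(1+z)/z = 1 - z/2 + z^2/3 - ...
c = [(-1) ** n / (n + 1) for n in range(3)]   # c0, c1, c2

# [1/1] Padé: f ≈ (a0 + a1 z) / (1 + b1 z), matched to c0 + c1 z + c2 z^2.
b1 = -c[2] / c[1]
a0 = c[0]
a1 = c[1] + b1 * c[0]

def pade11(z):
    return (a0 + a1 * z) / (1 + b1 * z)

def taylor2(z):
    return c[0] + c[1] * z + c[2] * z ** 2

z = 1.0
exact = math.log(1 + z) / z      # ln 2 ≈ 0.6931
print(pade11(z), taylor2(z))    # → 0.7 0.8333...
```

Built from the same three coefficients, the Padé value 0.7 is far closer to ln 2 than the Taylor value, which is the "summation" power the abstract refers to: the rational form partially resums the series beyond its truncation order.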
Engineering evaluation of alternatives: Managing the assumed leak from single-shell Tank 241-T-101
Brevick, C.H.; Jenkins, C.
1996-02-01
At mid-year 1992, the liquid level gage for Tank 241-T-101 indicated that 6,000 to 9,000 gal had leaked. Because of the liquid level anomaly, Tank 241-T-101 was declared an assumed leaker on October 4, 1992. SST liquid level gages have historically been unreliable. False readings can occur because of instrument failures, floating salt cake, and salt encrustation. Gages frequently self-correct, and tanks show no indication of a leak. Tank levels cannot be visually inspected and verified because of high radiation fields. The gage in Tank 241-T-101 has largely corrected itself since the mid-year 1992 reading. Therefore, doubt exists that a leak has occurred, or that the magnitude of the leak poses any immediate environmental threat. While reluctance exists to use valuable DST space unnecessarily, there is a large safety and economic incentive to prevent or mitigate release of tank liquid waste into the surrounding environment. During the assessment of the significance of the Tank 241-T-101 liquid level gage readings, the Washington State Department of Ecology determined that Westinghouse Hanford Company was not in compliance with regulatory requirements and directed transfer of the Tank 241-T-101 liquid contents into a DST. Meanwhile, DOE directed WHC to examine reasonable alternatives/options for safe interim management of Tank 241-T-101 wastes before taking action. The five alternatives that could be used to manage waste from a leaking SST are: (1) No-Action, (2) In-Tank Stabilization, (3) External Tank Stabilization, (4) Liquid Retrieval, and (5) Total Retrieval. The findings of these examinations are reported in this study.
NASA Astrophysics Data System (ADS)
Carvajal, Matías; Gubler, Alejandra
2016-12-01
We investigated the effect that along-dip slip distribution has on near-shore tsunami amplitudes and on coastal land-level changes in the region of central Chile (29°-37°S). Here, and all along the Chilean megathrust, the seismogenic zone extends beneath dry land, and thus tsunami generation and propagation are limited to its seaward portion, where the sensitivity of the initial tsunami waveform to dislocation model inputs, such as slip distribution, is greater. We considered four distributions of earthquake slip in the dip direction, including a spatially uniform slip source and three others with typical bell-shaped slip patterns that differ in the depth range of slip concentration. We found that a uniform slip scenario predicts much lower tsunami amplitudes and generally less coastal subsidence than scenarios that assume bell-shaped distributions of slip. Although the finding that uniform slip scenarios underestimate tsunami amplitudes is not new, it has been largely ignored for tsunami hazard assessment in Chile. Our simulation results also suggest that uniform slip scenarios tend to predict later arrival times of the leading wave than bell-shaped sources. The time of occurrence of the largest wave at a specific site also depends on how the slip is distributed in the dip direction; however, other factors, such as local bathymetric configurations and standing edge waves, are also expected to play a role. Arrival time differences are especially critical in Chile, where tsunamis arrive earlier than elsewhere. We believe that the results of this study will be useful to both public and private organizations for mapping tsunami hazard in coastal areas along the Chilean coast and, therefore, will help reduce the risk of loss and damage caused by future tsunamis.
NASA Astrophysics Data System (ADS)
Kraseski, K. A.
2015-12-01
Recently developed conceptual frameworks and new observations have improved our understanding of hyporheic temperature dynamics and their effects on channel temperatures. However, hyporheic temperature models that are both simple and useful remain elusive. As water moves through hyporheic pathways, it exchanges heat with hyporheic sediment through conduction, and this process dampens the diurnal temperature wave of the water entering from the channel. This study examined the mechanisms underlying this behavior and used those findings to create two simple models that predict temperatures of water reentering the channel after traveling through hyporheic pathways for different lengths of time. First, we developed a laboratory experiment to represent this process and determine conduction rates for various sediment size classes (sand, fine gravel, coarse gravel, and a proportional mix of the three) by observing the time series of temperature changes between sediment and water of different initial temperatures. Results indicated that conduction rates were near-instantaneous, with heat transfer completed within seconds to a few minutes of initial contact. Heat conduction between the sediment and water was therefore much faster than hyporheic flux, rendering an assumption of instantaneous conduction reasonable. We then developed two simple models to predict the temperature time series of hyporheic water based on the initial diurnal temperature wave and hyporheic travel distance. The first model estimates a damping coefficient based on the total water-sediment heat exchange through each diurnal cycle. The second model solves the heat transfer equation, assuming instantaneous conduction, using a simple finite difference algorithm. Both models demonstrated nearly complete damping of the sine wave over the distance traveled in four days. If hyporheic exchange is substantial and travel times are long, then hyporheic damping may have large effects on
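The instantaneous-conduction assumption lends itself to a very small numerical sketch: a water parcel advects through a chain of sediment cells and, at each step, fully equilibrates with the local sediment according to the two heat capacities. This is an illustration of the mechanism only; the cell count, heat capacities, and temperatures below are invented, not the study's calibrated values.

```python
import math

n_cells, cw, cs = 100, 1.0, 4.0    # water vs sediment heat capacity per cell
sediment = [15.0] * n_cells        # sediment starts at the mean temperature
water = [15.0] * n_cells
dt_hours = 0.5
outlet = []

for step in range(int(96 / dt_hours)):          # four days of half-hour steps
    t = step * dt_hours
    # Instantaneous conduction: each parcel and its cell's sediment jump to a
    # common temperature weighted by their heat capacities.
    for i in range(n_cells):
        t_eq = (cw * water[i] + cs * sediment[i]) / (cw + cs)
        water[i] = sediment[i] = t_eq
    # Advection: every parcel moves one cell downstream; a new parcel enters
    # from the channel carrying the diurnal (24 h) temperature wave.
    outlet.append(water[-1])
    water = [15.0 + 5.0 * math.sin(2 * math.pi * t / 24.0)] + water[:-1]

amplitude_out = (max(outlet[-48:]) - min(outlet[-48:])) / 2.0
print(amplitude_out)   # a small fraction of the 5-degree inflow amplitude
```

Because each cell pulls the parcel strongly toward the slowly varying sediment temperature, the diurnal amplitude is nearly erased over the travel path, mirroring the near-complete damping over four days reported above.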
The Cell Cycle Switch Computes Approximate Majority
NASA Astrophysics Data System (ADS)
Cardelli, Luca; Csikász-Nagy, Attila
2012-09-01
Both computational and biological systems have to make decisions about switching from one state to another. The 'Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis-promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and that they are exchangeable as components of oscillatory networks.
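The Approximate Majority algorithm itself is a three-state population protocol and is easy to simulate. The sketch below follows the standard formulation (an A-B encounter blanks the responder; a blank responder adopts the initiator's state); it illustrates the algorithm referenced in the abstract, not the paper's chemical-network model of the CDK switch.

```python
import random

def approximate_majority(n_a, n_b, seed=0):
    """Simulate the Approximate Majority population protocol: in an A-B
    encounter the responder becomes undecided ("U"); an undecided responder
    adopts the initiator's decided state. Returns the consensus value."""
    rng = random.Random(seed)
    pop = ["A"] * n_a + ["B"] * n_b
    while len(set(pop)) > 1:
        i, j = rng.sample(range(len(pop)), 2)   # initiator i, responder j
        if pop[i] != "U":
            if pop[j] == "U":
                pop[j] = pop[i]                 # blank adopts decided state
            elif pop[j] != pop[i]:
                pop[j] = "U"                    # conflict: responder goes blank
    return pop[0]

print(approximate_majority(210, 90))   # the initial majority almost surely wins
```

With a clear 70/30 initial majority the population converges to the majority state with overwhelming probability, and the number of interactions needed grows only slightly faster than linearly in the population size, which is the "asymptotically fastest" property the abstract invokes.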
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
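As a hedged illustration of the idea (not the authors' derivation), suppose a sensitivity analysis yields dy/dv = c·y/v at the current design, as is typical of stiffness-like responses. Treating that relation as a differential equation and integrating it in closed form gives an approximation that is exact for power-law behavior, where the linear Taylor series is not:

```python
def deb_power_approx(y0, dy0, v0, v):
    """Closed-form approximation obtained by treating the sensitivity
    relation dy/dv = c*y/v (with c = v0*dy0/y0) as a differential
    equation and integrating it: y(v) = y0 * (v/v0)**c."""
    c = v0 * dy0 / y0
    return y0 * (v / v0) ** c

def taylor_approx(y0, dy0, v0, v):
    """Ordinary first-order Taylor series about v0, for comparison."""
    return y0 + dy0 * (v - v0)

# Tip displacement of a cantilever scales as h**-3 with section height h.
y = lambda h: h ** -3
dy = lambda h: -3.0 * h ** -4
h0, h1 = 1.0, 1.3                 # a 30% design perturbation
print(deb_power_approx(y(h0), dy(h0), h0, h1))  # exact for a power law
print(taylor_approx(y(h0), dy(h0), h0, h1))     # large error at 30%
print(y(h1))
```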
Embedding impedance approximations in the analysis of SIS mixers
NASA Technical Reports Server (NTRS)
Kerr, A. R.; Pan, S.-K.; Withington, S.
1992-01-01
Future millimeter-wave radio astronomy instruments will use arrays of many SIS receivers, either as focal plane arrays on individual radio telescopes, or as individual receivers on the many antennas of radio interferometers. Such applications will require broadband integrated mixers without mechanical tuners. To produce such mixers, it will be necessary to improve present mixer design techniques, most of which use the three-frequency approximation to Tucker's quantum mixer theory. This paper examines the adequacy of three approximations to Tucker's theory: (1) the usual three-frequency approximation, which assumes a sinusoidal LO voltage at the junction and a short-circuit at all frequencies above the upper sideband; (2) a five-frequency approximation, which allows two LO voltage harmonics and five small-signal sidebands; and (3) a quasi five-frequency approximation, in which five small-signal sidebands are allowed but the LO voltage is assumed sinusoidal. These are compared with a full harmonic-Newton solution of Tucker's equations, including eight LO harmonics and their corresponding sidebands, for realistic SIS mixer circuits. It is shown that the accuracy of the three approximations depends strongly on the value of ωR_N C for the SIS junctions used. For large ωR_N C, all three approximations approach the eight-harmonic solution. For ωR_N C values in the range 0.5 to 10, the range of most practical interest, the quasi five-frequency approximation is a considerable improvement over the three-frequency approximation, and should be suitable for much design work. For the realistic SIS mixers considered here, the five-frequency approximation gives results very close to those of the eight-harmonic solution. Use of these approximations, where appropriate, considerably reduces the computational effort needed to analyze an SIS mixer, and allows the design and optimization of mixers using a personal computer.
Approximating Functions with Exponential Functions
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2005-01-01
The possibility of approximating a function with a linear combination of exponential functions of the form e^x, e^(2x), ... is considered as a parallel development to the notion of Taylor polynomials, which approximate a function with a linear combination of power function terms. The sinusoidal functions sin x and cos x…
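A least-squares fit in such an exponential basis can be sketched as follows (an illustrative stdlib-only implementation, not tied to the article):

```python
import math

def fit_exponentials(f, degree, n=200):
    """Least-squares fit of f on [0, 1] by c1*e^x + c2*e^(2x) + ... via
    the normal equations assembled on n sample points."""
    xs = [i / (n - 1) for i in range(n)]
    basis = [[math.exp((k + 1) * x) for k in range(degree)] for x in xs]
    # Normal equations  A c = b  with A = B^T B and b = B^T f
    A = [[sum(basis[i][p] * basis[i][q] for i in range(n)) for q in range(degree)]
         for p in range(degree)]
    b = [sum(basis[i][p] * f(xs[i]) for i in range(n)) for p in range(degree)]
    # Gaussian elimination with partial pivoting
    for col in range(degree):
        piv = max(range(col, degree), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, degree):
            m = A[r][col] / A[col][col]
            for c in range(col, degree):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    coeffs = [0.0] * degree
    for r in range(degree - 1, -1, -1):
        coeffs[r] = (b[r] - sum(A[r][k] * coeffs[k]
                                for k in range(r + 1, degree))) / A[r][r]
    return coeffs

coeffs = fit_exponentials(math.sin, 3)
approx = lambda x: sum(c * math.exp((k + 1) * x) for k, c in enumerate(coeffs))
print(max(abs(approx(x / 100) - math.sin(x / 100)) for x in range(101)))
```

When the target function already lies in the span of the basis, the fit recovers its coefficients exactly (up to rounding), which is a convenient sanity check.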
An approximate Riemann solver for hypervelocity flows
NASA Technical Reports Server (NTRS)
Jacobs, Peter A.
1991-01-01
We describe an approximate Riemann solver for the computation of hypervelocity flows in which there are strong shocks and viscous interactions. The scheme has three stages, the first of which computes the intermediate states assuming isentropic waves. A second stage, based on the strong shock relations, may then be invoked if the pressure jump across either wave is large. The third stage interpolates the interface state from the two initial states and the intermediate states. The solver is used as part of a finite-volume code and is demonstrated on two test cases. The first is a high Mach number flow over a sphere while the second is a flow over a slender cone with an adiabatic boundary layer. In both cases the solver performs well.
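The first, isentropic-wave stage can be illustrated with the standard two-rarefaction estimate of the star-region pressure (a textbook formula, given here as a generic sketch rather than the paper's exact scheme):

```python
def star_pressure_trrs(rho_l, u_l, p_l, rho_r, u_r, p_r, gamma=1.4):
    """Two-rarefaction (isentropic-wave) estimate of the intermediate
    pressure between the two waves of the Riemann problem."""
    a_l = (gamma * p_l / rho_l) ** 0.5      # left sound speed
    a_r = (gamma * p_r / rho_r) ** 0.5      # right sound speed
    z = (gamma - 1.0) / (2.0 * gamma)
    num = a_l + a_r - 0.5 * (gamma - 1.0) * (u_r - u_l)
    den = a_l / p_l ** z + a_r / p_r ** z
    return (num / den) ** (1.0 / z)

# Sod shock-tube initial states: the estimate lands close to the exact
# star pressure (about 0.303) even though the right wave is a shock.
p_star = star_pressure_trrs(1.0, 0.0, 1.0, 0.125, 0.0, 0.1)
print(p_star)
```

A production solver would, as the abstract describes, switch to strong-shock relations when the pressure jump across a wave is large, rather than relying on the isentropic estimate alone.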
The Bloch Approximation in Periodically Perforated Media
Conca, C. Gomez, D. Lobo, M. Perez, E.
2005-06-15
We consider a periodically heterogeneous and perforated medium filling an open domain Ω of R^N. Assuming that the size of the periodicity of the structure and of the holes is O(ε), we study the asymptotic behavior, as ε → 0, of the solution of an elliptic boundary value problem with strongly oscillating coefficients posed in Ω^ε (Ω^ε being Ω minus the holes) with a Neumann condition on the boundary of the holes. We use Bloch wave decomposition to introduce an approximation of the solution in the energy norm which can be computed from the homogenized solution and the first Bloch eigenfunction. We first consider the case where Ω is R^N and then localize the problem for a bounded domain Ω, considering a homogeneous Dirichlet condition on the boundary of Ω.
Benthic grazers and suspension feeders: Which one assumes the energetic dominance in Königshafen?
NASA Astrophysics Data System (ADS)
Asmus, H.
1994-06-01
Size-frequency histograms of biomass, secondary production, respiration and energy flow of 4 dominant macrobenthic communities of the intertidal bay of Königshafen were analysed and compared. In the shallow sandy flats (Nereis-Corophium-belt [N.C.-belt], seagrass-bed and Arenicola-flat) a bimodal size-frequency histogram of biomass, secondary production, respiration and energy flow was found with a first peak formed by individuals within a size range of 0.10 to 0.32 mg ash free dry weight (AFDW). In this size range, the small prosobranch Hydrobia ulvae was the dominant species, showing maximal biomass as well as secondary production, respiration and energy flow in the seagrass-bed. The second peak on the size-frequency histogram was formed by the polychaete Nereis diversicolor with individual weights of 10 to 18 mg AFDW in the N.C.-belt, and by Arenicola marina with individual weights of 100 to 562 mg AFDW in both of the other sand flats. Biomass, productivity, respiration and energy flow of these polychaetes increased from the Nereis-Corophium-belt, to the seagrass-bed, and to the Arenicola-flat. Mussel beds surpassed all other communities in biomass and the functional parameters mentioned above. Size-frequency histograms of these parameters were distinctly unimodal with a maximum at an individual size of 562 to 1000 mg AFDW. This size group was dominated by adult specimens of Mytilus edulis. Averaged over the total area, the size-frequency histogram of energy flow of all intertidal flats of Königshafen showed one peak built by Hydrobia ulvae and a second one, mainly formed by M. edulis. Assuming that up to 10% of the intertidal area is covered by mussel beds, the maximum of the size-specific energy flow will be formed by Mytilus. When only 1% is covered by mussel beds, then the energy flow is dominated by H. ulvae. Both animals represent different trophic types and their dominance in energy flow has consequences for the food web and the carbon flow of the
NASA Technical Reports Server (NTRS)
Toplis, M. J.; Mizzon, H.; Forni, O.; Monnereau, M.; Prettyman, T. H.; McSween, H. Y.; McCoy, T. J.; Mittlefehldt, D. W.; DeSanctis, M. C.; Raymond, C. A.; Russell, C. T.
2012-01-01
Bulk composition (including oxygen content) is a primary control on the internal structure and mineralogy of differentiated asteroids. For example, oxidation state will affect core size, as well as the Mg# and pyroxene content of the silicate mantle. The Howardite-Eucrite-Diogenite (HED) class of meteorites provides an interesting test case of this idea, particularly in light of results from the Dawn mission, which provide information on the size, density and differentiation state of Vesta, the parent body of the HEDs. In this work we explore plausible bulk compositions of Vesta and use mass-balance and geochemical modelling to predict possible internal structures and crust/mantle compositions and mineralogies. Models are constrained to be consistent with known HED samples, but the approach has the potential to extend predictions to thermodynamically plausible rock types that are not necessarily present in the HED collection. Nine chondritic bulk compositions are considered (CI, CV, CO, CM, H, L, LL, EH, EL). For each, the relative proportions and densities of the core, mantle, and crust are quantified. Considering that the basaltic crust has the composition of the primitive eucrite Juvinas and assuming that this crust is in thermodynamic equilibrium with the residual mantle, it is possible to calculate how much iron is in metallic form (in the core) and how much in oxidized form (in the mantle and crust) for a given bulk composition. Of the nine bulk compositions tested, solutions corresponding to the CI and LL groups predicted a negative metal fraction and were not considered further. Solutions for enstatite chondrites imply significant oxidation relative to the starting materials, and these solutions too are considered unlikely. For the remaining bulk compositions, the relative proportion of crust to bulk silicate is typically in the range 15 to 20%, corresponding to crustal thicknesses of 15 to 20 km for a porosity-free Vesta-sized body. The mantle is predicted to be largely
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with calculating the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical computation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well with different nonlinear programming methods, such as the sequence of unconstrained minimizations technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-12-22
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
Approximation Preserving Reductions among Item Pricing Problems
NASA Astrophysics Data System (ADS)
Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei
When a store sells items to customers, the store wishes to set the prices of the items to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store. So it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has production cost d_i and each customer e_j ∈ E has valuation v_j on the bundle e_j ⊆ V of items. When the store sells an item i ∈ V at price r_i, the profit for the item i is p_i = r_i − d_i. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that p_i ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of a "loss-leader," and showed that the seller can obtain more total profit when p_i < 0 is allowed than when it is not. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratios.
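The profit objective and the loss-leader effect can be made concrete with a small brute-force sketch. The instance below is hypothetical, chosen so that allowing p_i < 0 strictly increases the optimum:

```python
from itertools import product

def profit(prices, costs, customers):
    """Total profit when each customer buys their bundle iff its total
    price does not exceed their valuation (unlimited supply)."""
    total = 0
    for bundle, valuation in customers:
        if sum(prices[i] for i in bundle) <= valuation:
            total += sum(prices[i] - costs[i] for i in bundle)
    return total

# Hypothetical instance: item 0 is expensive to produce, items 1 and 2
# cost nothing.  Two customers want {0,1} and {0,2}; a third wants {1}.
costs = [20, 0, 0]
customers = [({0, 1}, 21), ({0, 2}, 21), ({1}, 10)]

best_any = best_nonneg = float('-inf')
for prices in product(range(-20, 26), repeat=3):   # integer price grid
    pr = profit(prices, costs, customers)
    best_any = max(best_any, pr)
    if all(r >= d for r, d in zip(prices, costs)):  # enforce p_i >= 0
        best_nonneg = max(best_nonneg, pr)

# Selling item 0 below cost (e.g. prices 11, 10, 10) beats every
# nonnegative-margin price vector on this instance.
print(best_nonneg, best_any)
```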
42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 3 2014-10-01 2014-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... General Payment Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed...
42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 3 2013-10-01 2013-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... General Payment Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed...
42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 3 2010-10-01 2010-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed by Medicare. This...
42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 3 2011-10-01 2011-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed by Medicare. This...
42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 3 2012-10-01 2012-10-01 false Phased-down State contribution to drug benefit costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID... General Payment Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed...
Code of Federal Regulations, 2010 CFR
2010-10-01
...-Governance Tribes carry out construction projects without assuming these Federal environmental... 42 Public Health 1 2010-10-01 2010-10-01 false May Self-Governance Tribes carry out construction projects without assuming these Federal environmental responsibilities? 137.291 Section 137.291...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 1026 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION TRUTH IN LENDING (REGULATION Z) Pt. 1026, App. L Appendix L to Part 1026—Assumed Loan Periods for...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 1026 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION TRUTH IN LENDING (REGULATION Z) Pt. 1026, App. L Appendix L to Part 1026—Assumed Loan Periods for...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a...
ERIC Educational Resources Information Center
Chase, Barbara
2011-01-01
How are independent schools to be useful to the wider world? Beyond their common commitment to educate their students for meaningful lives in service of the greater good, can they educate a broader constituency and, thus, share their resources and skills more broadly? Their answers to this question will be shaped by their independence. Any…
Approximating subtree distances between phylogenies.
Bonet, Maria Luisa; St John, Katherine; Mahindru, Ruchi; Amenta, Nina
2006-10-01
We give a 5-approximation algorithm for the rooted Subtree-Prune-and-Regraft (rSPR) distance between two phylogenies, a problem recently shown to be NP-complete. This paper presents the first approximation result for this important tree distance. The algorithm follows a standard format for tree distances; the novel ideas are in the analysis, where the cost of the algorithm is bounded using a "cascading" scheme that accounts for possible wrong moves. This accounting is missing from previous analyses of tree distance approximation algorithms. Further, we show how all algorithms of this type can be implemented in linear time, and we give experimental results.
Finite difference methods for approximating Heaviside functions
NASA Astrophysics Data System (ADS)
Towers, John D.
2009-05-01
We present a finite difference method for discretizing a Heaviside function H(u(x)), where u is a level set function u: R^n → R that is positive on a bounded region Ω ⊂ R^n. There are two variants of our algorithm, both of which are adapted from finite difference methods that we proposed for discretizing delta functions in [J.D. Towers, Two methods for discretizing a delta function supported on a level set, J. Comput. Phys. 220 (2007) 915-931; J.D. Towers, Discretizing delta functions via finite differences and gradient normalization, Preprint at http://www.miracosta.edu/home/jtowers/; J.D. Towers, A convergence rate theorem for finite difference approximations to delta functions, J. Comput. Phys. 227 (2008) 6591-6597]. We consider our approximate Heaviside functions as they are used to approximate integrals over Ω. We prove that our first approximate Heaviside function leads to second order accurate quadrature algorithms. Numerical experiments verify this second order accuracy. For our second algorithm, numerical experiments indicate at least third order accuracy if the integrand f and ∂Ω are sufficiently smooth. Numerical experiments also indicate that our approximations are effective when used to discretize certain singular source terms in partial differential equations. We mostly focus on smooth f and u. By this we mean that f is smooth in a neighborhood of Ω, u is smooth in a neighborhood of ∂Ω, and the level set u(x) = 0 is a manifold of codimension one. However, our algorithms still give reasonable results if either f or u has jumps in its derivatives. Numerical experiments indicate approximately second order accuracy for both algorithms if the regularity of the data is reduced in this way, assuming that the level set u(x) = 0 is a manifold. Numerical experiments indicate that dependence on the placement of Ω with respect to the grid is quite small for our algorithms. Specifically, a grid shift results in an O(h^p) change in the computed solution
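A generic smoothed-Heaviside quadrature in the spirit of this approach can be sketched as follows (this uses a standard level-set regularization, not the authors' specific first or second algorithm):

```python
import math

def smoothed_heaviside(u, eps):
    """A standard C^1 regularized Heaviside from the level-set
    literature (an illustrative choice, not the paper's scheme)."""
    if u < -eps:
        return 0.0
    if u > eps:
        return 1.0
    return 0.5 * (1.0 + u / eps + math.sin(math.pi * u / eps) / math.pi)

def area_of_level_set(u, h, eps, lo=-1.0, hi=1.0):
    """Approximate the integral of H(u(x, y)) over [lo, hi]^2 on a
    uniform grid with spacing h (trapezoidal end weights)."""
    n = int(round((hi - lo) / h))
    total = 0.0
    for i in range(n + 1):
        for j in range(n + 1):
            x, y = lo + i * h, lo + j * h
            w = (0.5 if i in (0, n) else 1.0) * (0.5 if j in (0, n) else 1.0)
            total += w * smoothed_heaviside(u(x, y), eps)
    return total * h * h

# u > 0 inside a disc of radius 0.5, so the integral approximates pi/4.
u = lambda x, y: 0.5 - math.hypot(x, y)
h = 0.01
area = area_of_level_set(u, h, eps=1.5 * h)
print(area, math.pi / 4)
```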
Accuracy of approximate inversion schemes in quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Hochuli, Roman; Beard, Paul C.; Cox, Ben
2014-03-01
Five numerical phantoms were developed to investigate the accuracy of approximate inversion schemes in the reconstruction of oxygen saturation in photoacoustic imaging. In particular, two types of inversion are considered: Type I, an inversion that assumes the fluence is unchanged between illumination wavelengths, and Type II, a method that assumes known background absorption and scattering coefficients to partially correct for the fluence. These approaches are tested in photoacoustic tomography (PAT) and acoustic-resolution photoacoustic microscopy (AR-PAM) modes. They are found to produce accurate values of oxygen saturation in a blood vessel of interest at shallow depths: less than 3 mm for PAT and less than 1 mm for AR-PAM.
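In the simplest two-wavelength case, the Type I assumption reduces to a linear unmixing of the measured absorption coefficients. The sketch below uses hypothetical extinction values, not tabulated hemoglobin spectra:

```python
def unmix_so2(mu_a, E):
    """Solve mu_a = E @ [c_HbO2, c_Hb] for the two concentrations and
    return sO2 = c_HbO2 / (c_HbO2 + c_Hb).  mu_a holds the absorption
    at two wavelengths; E is the 2x2 extinction matrix (rows per
    wavelength, columns HbO2 then Hb)."""
    (a, b), (c, d) = E
    det = a * d - b * c
    c_hbo2 = (d * mu_a[0] - b * mu_a[1]) / det
    c_hb = (-c * mu_a[0] + a * mu_a[1]) / det
    return c_hbo2 / (c_hbo2 + c_hb)

# Hypothetical extinction values and a ground-truth 80%-saturated vessel
E = [[2.0, 1.0],
     [1.0, 3.0]]
c_true = [0.8, 0.2]                       # HbO2, Hb (arbitrary units)
mu_a = [E[0][0] * c_true[0] + E[0][1] * c_true[1],
        E[1][0] * c_true[0] + E[1][1] * c_true[1]]
print(unmix_so2(mu_a, E))                 # recovers 0.8
```

When the fluence does change between wavelengths, the measured spectra are distorted and this inversion becomes inaccurate with depth, which is the effect the phantom study quantifies.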
Rytov approximation in electron scattering
NASA Astrophysics Data System (ADS)
Krehl, Jonas; Lubk, Axel
2017-06-01
In this work we introduce the Rytov approximation in the scope of high-energy electron scattering with the motivation of developing better linear models for electron scattering. Such linear models play an important role in tomography and similar reconstruction techniques. Conventional linear models, such as the phase grating approximation, have reached their limits in current and foreseeable applications, most importantly in achieving three-dimensional atomic resolution using electron holographic tomography. The Rytov approximation incorporates propagation effects, which are the most pressing limitation of conventional models. While predominantly used in the weak-scattering regime of light microscopy, we show that the Rytov approximation can give reasonable results in the inherently strong-scattering regime of transmission electron microscopy.
Dual approximations in optimal control
NASA Technical Reports Server (NTRS)
Hager, W. W.; Ianculescu, G. D.
1984-01-01
A dual approximation for the solution to an optimal control problem is analyzed. The differential equation is handled with a Lagrange multiplier while other constraints are treated explicitly. An algorithm for solving the dual problem is presented.
Purvis, A; Bromham, L
1997-01-01
A method is presented for estimating the transition/transversion ratio (TI/TV), based on phylogenetically independent comparisons. TI/TV is a parameter of some models used in phylogeny estimation, intended to reflect the fact that nucleotide substitutions are not all equally likely. Previous attempts to estimate TI/TV have commonly faced three problems: (1) few taxa; (2) nonindependence among pairwise comparisons; and (3) multiple hits make the apparent TI/TV between two sequences decrease over time since their divergence, giving a misleading impression of relative substitution probabilities. We have made use of the time dependency, modeling how the observed TI/TV changes over time and extrapolating to estimate the "instantaneous" TI/TV, the relevant parameter for phylogenetic inference. To illustrate our method, TI/TV was estimated for two mammalian mitochondrial genes. For 26 pairs of cytochrome b sequences, the estimate of TI/TV was 5.5; 16 pairs of 12S rRNA yielded an estimate of 9.5. These estimates are higher than those given by the maximum likelihood method and than those obtained by averaging all possible pairwise comparisons (with or without a two-parameter correction for multiple substitutions). We discuss strengths, weaknesses, and further uses of our method.
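The extrapolation idea can be sketched with a deliberately simplified model, a straight-line fit of observed TI/TV against divergence whose intercept estimates the instantaneous ratio (the authors' actual time-dependency model may differ):

```python
def instantaneous_ti_tv(divergences, observed_ratios):
    """Ordinary least-squares line fit of observed TI/TV against
    pairwise divergence; returns the intercept at divergence zero."""
    n = len(divergences)
    mx = sum(divergences) / n
    my = sum(observed_ratios) / n
    sxx = sum((x - mx) ** 2 for x in divergences)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(divergences, observed_ratios))
    slope = sxy / sxx
    return my - slope * mx

# Synthetic independent pairs whose apparent TI/TV decays with time
# from a true instantaneous value of 5.5 (noiseless, for illustration)
times = [0.02, 0.05, 0.1, 0.2, 0.3]
ratios = [5.5 - 8.0 * t for t in times]
print(instantaneous_ti_tv(times, ratios))   # recovers 5.5
```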
Wall, Clifton; Boersma, Bendiks Jan; Moin, Parviz
2000-10-01
The assumed beta distribution model for the subgrid-scale probability density function (PDF) of the mixture fraction in large eddy simulation of nonpremixed, turbulent combustion is tested, a priori, for a reacting jet having significant heat release (density ratio of 5). The assumed beta distribution is tested as a model for both the subgrid-scale PDF and the subgrid-scale Favre PDF of the mixture fraction. The beta model is successful in approximating both types of PDF but is slightly more accurate in approximating the normal (non-Favre) PDF. To estimate the subgrid-scale variance of mixture fraction, which is required by the beta model, both a scale similarity model and a dynamic model are used. Predictions using the dynamic model are found to be more accurate. The beta model is used to predict the filtered value of a function chosen to resemble the reaction rate. When no model is used, errors in the predicted value are of the same order as the actual value. The beta model is found to reduce this error by about a factor of two, providing a significant improvement. (c) 2000 American Institute of Physics.
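The moment-matching step of the assumed beta model can be sketched as follows (an illustrative stdlib-only implementation, not the authors' code):

```python
import math

def beta_params(mean, var):
    """Shape parameters of a beta distribution matched to the subgrid
    mean and variance of the mixture fraction (requires
    var < mean * (1 - mean))."""
    k = mean * (1.0 - mean) / var - 1.0
    return mean * k, (1.0 - mean) * k

def filtered_value(func, mean, var, n=2000):
    """Approximate the filtered value  ∫ f(Z) P(Z) dZ  under the
    assumed beta subgrid PDF, by midpoint quadrature on (0, 1)."""
    a, b = beta_params(mean, var)
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * h
        total += func(z) * norm * z ** (a - 1.0) * (1.0 - z) ** (b - 1.0) * h
    return total

mean, var = 0.3, 0.02
print(filtered_value(lambda z: 1.0, mean, var))     # normalization, ~1
print(filtered_value(lambda z: z, mean, var))       # recovers the mean
print(filtered_value(lambda z: z * (1 - z) ** 3, mean, var))
```

In an LES closure, the mean would come from the resolved field and the variance from a scale-similarity or dynamic model, as in the a priori test described above.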
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar), and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, comparative psychology, and computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But, the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than due to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6-8 year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math compared to time and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. Copyright
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal and quadratic fit methods. Four test problems in structural analysis have been selected. The use of such approximations is attractive in structural optimization to reduce the number of exact analyses, which involve computationally expensive finite element analysis.
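A common one-point exponential form matches the value and derivative at the current design and is exact when the response is itself exponential (shown here as a representative sketch; the paper also develops two-point variants, whose exact formulas may differ):

```python
import math

def exp_approx(y0, dy0, x0, x):
    """One-point exponential approximation y0 * exp((dy0/y0)*(x - x0)),
    matching value and slope at x0."""
    return y0 * math.exp(dy0 / y0 * (x - x0))

def lin_approx(y0, dy0, x0, x):
    """Linear (first-order Taylor) approximation, for comparison."""
    return y0 + dy0 * (x - x0)

# A response that grows exponentially with the design variable
y = lambda x: 4.0 * math.exp(0.5 * x)
dy = lambda x: 2.0 * math.exp(0.5 * x)
x0, x1 = 1.0, 2.0
print(exp_approx(y(x0), dy(x0), x0, x1), y(x1))  # agree exactly
print(lin_approx(y(x0), dy(x0), x0, x1))         # underestimates
```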
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but must instead include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To distinguish these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst case analysis), optimistic reasoning (i.e., best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
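The combination rules named above have familiar closed forms for two assertions with probabilities p and q. The sketch below is illustrative and not the paper's axiomatic development:

```python
# Combination rules for the probabilities p, q of two assertions
def and_independent(p, q):
    return p * q                       # statistically independent

def or_independent(p, q):
    return p + q - p * q

def or_mutex(p, q):
    return min(p + q, 1.0)             # mutually exclusive assertions

def and_fuzzy(p, q):
    return min(p, q)                   # maximum overlap (fuzzy logic)

def or_fuzzy(p, q):
    return max(p, q)

def and_bounds(p, q):
    """Pessimistic/optimistic (worst/best case) bounds on P(A and B)
    with no knowledge of the dependency between the assertions
    (the Frechet bounds)."""
    return max(0.0, p + q - 1.0), min(p, q)

print(and_independent(0.8, 0.5))   # 0.4
print(or_mutex(0.3, 0.4))          # ≈0.7
print(and_bounds(0.8, 0.5))        # ≈(0.3, 0.5)
```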
Approximation techniques for neuromimetic calculus.
Vigneron, V; Barret, C
1999-06-01
Approximation theory plays a central part in modern statistical methods, in particular in neural network modeling. These models are able to approximate a large class of metric data structures over their entire range of definition, or at least piecewise. We survey most of the known results for networks of neurone-like units. The connections to classical statistical ideas, such as ordinary least squares (LS), are emphasized.
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the k-body quantum satisfiability problem (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Beer, Andrew; Watson, David; McDade-Montez, Elizabeth
2013-12-01
Trait Negative Affect (NA) and Positive Affect (PA) are strongly associated with Neuroticism and Extraversion, respectively. Nevertheless, measures of the former tend to show substantially weaker self-other agreement-and stronger assumed similarity correlations-than scales assessing the latter. The current study separated the effects of item content versus format on agreement and assumed similarity using two different sets of Neuroticism and Extraversion measures and two different indicators of NA and PA (N = 381 newlyweds). Neuroticism and Extraversion consistently showed stronger agreement than NA and PA; in addition, however, scales with more elaborated items yielded significantly higher agreement correlations than those based on single adjectives. Conversely, the trait affect scales yielded stronger assumed similarity correlations than the personality scales; these coefficients were strongest for the adjectival measures of trait affect. Thus, our data establish a significant role for both content and format in assumed similarity and self-other agreement.
Local discontinuous Galerkin approximations to Richards’ equation
NASA Astrophysics Data System (ADS)
Li, H.; Farthing, M. W.; Dawson, C. N.; Miller, C. T.
2007-03-01
We consider the numerical approximation to Richards' equation because of its hydrological significance and intrinsic merit as a nonlinear parabolic model that admits sharp fronts in space and time that pose a special challenge to conventional numerical methods. We combine a robust and established variable order, variable step-size backward difference method for time integration with an evolving spatial discretization approach based upon the local discontinuous Galerkin (LDG) method. We formulate the approximation using a method of lines approach to uncouple the time integration from the spatial discretization. The spatial discretization is formulated as a set of four differential algebraic equations, which includes a mass conservation constraint. We demonstrate how this system of equations can be reduced to the solution of a single coupled unknown in space and time and a series of local constraint equations. We examine a variety of approximations at discontinuous element boundaries, permeability approximations, and numerical quadrature schemes. We demonstrate an optimal rate of convergence for smooth problems, and compare accuracy and efficiency for a wide variety of approaches applied to a set of common test problems. We obtain robust and efficient results that improve upon existing methods, and we recommend a future path that should yield significant additional improvements.
Wavelet Sparse Approximate Inverse Preconditioners
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We shall justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
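As a toy illustration of the idea (my own construction, not the authors' algorithm): the inverse of a 1-D Laplacian is dense but piecewise smooth, so after an orthonormal Haar wavelet transform most of its entries are tiny and simple thresholding leaves a sparse representation.

```python
import numpy as np

def haar_matrix(n):
    # orthonormal Haar wavelet transform matrix; n must be a power of two
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])              # averaging (scaling) rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # finest-level detail rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
# 1-D Laplacian: sparse matrix, dense but smooth inverse
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Ainv = np.linalg.inv(A)

W = haar_matrix(n)
B = W @ Ainv @ W.T                 # the inverse expressed in the wavelet basis
tol = 1e-2 * np.abs(B).max()
Bs = np.where(np.abs(B) > tol, B, 0.0)   # threshold: sparse approximate inverse
M = W.T @ Bs @ W                   # back in the standard basis (for illustration)
sparsity = np.mean(Bs != 0)        # fraction of retained entries
```

In practice one would keep only `Bs` and apply the two fast transforms implicitly; forming `M` densely here is purely for illustration.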
Chadwick, Andrew; Ash, Abigail; Day, James; Borthwick, Mark
2015-11-05
There is an increasing use of herbal remedies and medicines, with a commonly held belief that natural substances are safe. We present the case of a 50-year-old woman who was a trained herbalist and had purchased an 'Atropa belladonna (deadly nightshade) preparation'. Attempting to combat her insomnia, late one evening she deliberately ingested a small portion of this, approximately 50 mL. Unintentionally, this was equivalent to a very large (15 mg) dose of atropine and she presented in an acute anticholinergic syndrome (confused, tachycardic and hypertensive) to our accident and emergency department. She received supportive management in our intensive treatment unit including mechanical ventilation. Fortunately, there were no long-term sequelae from this episode. However, this dramatic clinical presentation does highlight the potential dangers posed by herbal remedies. Furthermore, this case provides clinicians with an important insight into potentially dangerous products available legally within the UK. To help clinicians' understanding of this our discussion explains the manufacture and 'dosing' of the A. belladonna preparation.
Investigating Material Approximations in Spacecraft Radiation Analysis
NASA Technical Reports Server (NTRS)
Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.
2011-01-01
During the design process, the configuration of space vehicles and habitats changes frequently and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, are investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering was quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm2 exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) are substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. This method also answers a previously posed question on how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45 respectively is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of 0.801. This improves upon the previous best bound of 0.7704.
Rational approximations for tomographic reconstructions
NASA Astrophysics Data System (ADS)
Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas
2013-06-01
We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp-Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image.
Approximation techniques of a selective ARQ protocol
NASA Astrophysics Data System (ADS)
Kim, B. G.
Approximations to the performance of the selective automatic repeat request (ARQ) protocol with lengthy acknowledgement delays are presented. The discussion is limited to packet-switched communication systems in a single-hop environment such as that found with satellite systems. It is noted that retransmission of errors after ARQ is a common situation. ARQ techniques, e.g., stop-and-wait and continuous, are outlined. A simplified queueing analysis of the selective ARQ protocol shows that exact solutions with long delays are not feasible. Two approximation models are formulated, based on the known exact behavior of a system with short delays. The buffer size requirements at both ends of a communication channel are cited as a significant factor for accurate analysis, and further examinations of buffer overflow and buffer lock-out probability and avoidance are recommended.
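As a point of reference, the classical textbook throughput approximation for selective-repeat ARQ (not the queueing models developed in the paper) shows how acknowledgement delay and window size interact:

```python
def selective_arq_throughput(p, window, rtt_slots):
    """Textbook approximation for selective-repeat ARQ efficiency.
    p:         packet error probability
    window:    sender window size, in packets
    rtt_slots: acknowledgement round-trip time, in packet slots
    With selective repeat, each packet needs 1/(1-p) transmissions on
    average; the link also idles whenever the window is smaller than
    the number of packets the round trip can hold."""
    pipe = 1 + rtt_slots                     # packets 'in flight' per round trip
    utilization = min(1.0, window / pipe)    # idle fraction from a small window
    return utilization * (1.0 - p)
```

With a long satellite delay (large `rtt_slots`), throughput is window-limited until the window covers the whole round trip, after which only the error rate matters.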
Approximated solutions to Born-Infeld dynamics
NASA Astrophysics Data System (ADS)
Ferraro, Rafael; Nigro, Mauro
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Planetary ephemerides approximation for radar astronomy
NASA Technical Reports Server (NTRS)
Sadr, R.; Shahshahani, M.
1991-01-01
The planetary ephemerides approximation for radar astronomy is discussed, and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in Goldstone Solar System Radar is presented. Four different approaches are considered and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean square, the phase error, and the frequency tracking error in the presence of the worst case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error less than one hertz when the frequency to the PLO is updated every millisecond.
Heat pipe transient response approximation.
Reid, R. S.
2001-01-01
A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.
Adaptive approximation models in optimization
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
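The contraction idea is easy to sketch in one dimension (parameter names and defaults are illustrative; the method described handles up to 10-15 variables):

```python
import numpy as np

def contract_optimize(f, lo, hi, rounds=20, samples=7, shrink=0.5):
    """Sketch of domain-contraction optimization: sample the current
    interval, keep the best point seen so far, and contract the
    interval around it before the next round."""
    lo, hi = float(lo), float(hi)
    best_x, best_f = None, np.inf
    for _ in range(rounds):
        xs = np.linspace(lo, hi, samples)
        fs = np.array([f(x) for x in xs])
        i = int(np.argmin(fs))
        if fs[i] < best_f:
            best_x, best_f = xs[i], fs[i]
        half = shrink * (hi - lo) / 2.0      # contract the approximation domain
        lo, hi = best_x - half, best_x + half
    return best_x, best_f
```

Each round costs `samples` objective evaluations, so the total budget is fixed in advance rather than depending on a user-chosen starting point or step length.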
Approximating spatially exclusive invasion processes.
Ross, Joshua V; Binder, Benjamin J
2014-05-01
A number of biological processes, such as invasive plant species and cell migration, are composed of two key mechanisms: motility and reproduction. Due to the spatially exclusive interacting behavior of these processes, a cellular automaton (CA) model is specified to simulate a one-dimensional invasion process. Three approximations (independence, Poisson, and 2D-Markov chain) are considered that attempt to capture the average behavior of the CA. We show that our 2D-Markov chain approximation accurately predicts the state of the CA for a wide range of motility and reproduction rates.
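A minimal one-dimensional sketch of such an exclusion CA (the update rule and the parameter names `p_move`, `p_repro` are my simplifications, not the authors' exact specification):

```python
import random

def sweep(sites, p_move, p_repro):
    """One update sweep of a 1-D exclusion CA. Each agent picks a random
    neighbouring site and attempts either a motility or a reproduction
    event; exclusion blocks any event targeting an occupied site.
    Requires p_move + p_repro <= 1."""
    n = len(sites)
    agents = [i for i in range(n) if sites[i]]
    random.shuffle(agents)                   # random sequential update order
    for i in agents:
        j = (i + random.choice((-1, 1))) % n  # periodic boundary
        if sites[j]:
            continue                          # exclusion: target occupied
        r = random.random()
        if r < p_move:
            sites[i], sites[j] = 0, 1         # motility event
        elif r < p_move + p_repro:
            sites[j] = 1                      # reproduction event
    return sites

# seed a single agent and let the invasion spread
random.seed(1)
sites = [0] * 50
sites[25] = 1
for _ in range(100):
    sweep(sites, 0.3, 0.3)
```

Averaging the occupancy profile over many such realisations gives the quantity the independence, Poisson, and 2D-Markov chain approximations try to predict.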
Galerkin approximations for dissipative magnetohydrodynamics
NASA Technical Reports Server (NTRS)
Chen, Hudong; Shan, Xiaowen; Montgomery, David
1990-01-01
A Galerkin approximation scheme is proposed for voltage-driven, dissipative magnetohydrodynamics. The trial functions are exact eigenfunctions of the linearized continuum equations and represent helical deformations of the axisymmetric, zero-flow, driven steady state. The lowest nontrivial truncation is explored: one axisymmetric trial function and one helical trial function each for the magnetic and velocity fields. The system resembles the Lorenz approximation to Benard convection, but in the region of believed applicability, its dynamical behavior is rather different, including relaxation to a helically deformed state similar to those that have emerged in the much higher resolution computations of Dahlburg et al.
Commonness and rarity in the marine biosphere.
Connolly, Sean R; MacNeil, M Aaron; Caley, M Julian; Knowlton, Nancy; Cripps, Ed; Hisano, Mizue; Thibaut, Loïc M; Bhattacharya, Bhaskar D; Benedetti-Cecchi, Lisandro; Brainard, Russell E; Brandt, Angelika; Bulleri, Fabio; Ellingsen, Kari E; Kaiser, Stefanie; Kröncke, Ingrid; Linse, Katrin; Maggi, Elena; O'Hara, Timothy D; Plaisance, Laetitia; Poore, Gary C B; Sarkar, Santosh K; Satpathy, Kamala K; Schückel, Ulrike; Williams, Alan; Wilson, Robin S
2014-06-10
Explaining patterns of commonness and rarity is fundamental for understanding and managing biodiversity. Consequently, a key test of biodiversity theory has been how well ecological models reproduce empirical distributions of species abundances. However, ecological models with very different assumptions can predict similar species abundance distributions, whereas models with similar assumptions may generate very different predictions. This complicates inferring processes driving community structure from model fits to data. Here, we use an approximation that captures common features of "neutral" biodiversity models--which assume ecological equivalence of species--to test whether neutrality is consistent with patterns of commonness and rarity in the marine biosphere. We do this by analyzing 1,185 species abundance distributions from 14 marine ecosystems ranging from intertidal habitats to abyssal depths, and from the tropics to polar regions. Neutrality performs substantially worse than a classical nonneutral alternative: empirical data consistently show greater heterogeneity of species abundances than expected under neutrality. Poor performance of neutral theory is driven by its consistent inability to capture the dominance of the communities' most-abundant species. Previous tests showing poor performance of a neutral model for a particular system often have been followed by controversy about whether an alternative formulation of neutral theory could explain the data after all. However, our approach focuses on common features of neutral models, revealing discrepancies with a broad range of empirical abundance distributions. These findings highlight the need for biodiversity theory in which ecological differences among species, such as niche differences and demographic trade-offs, play a central role.
Singularly Perturbed Lie Bracket Approximation
Durr, Hans-Bernd; Krstic, Miroslav; Scheinker, Alexander; Ebenbauer, Christian
2015-03-27
Here, we consider the interconnection of two dynamical systems where one has an input-affine vector field. We show that by employing a singular perturbation analysis and the Lie bracket approximation technique, the stability of the overall system can be analyzed by regarding the stability properties of two reduced, uncoupled systems.
Approximation of Dynamical System's Separatrix Curves
NASA Astrophysics Data System (ADS)
Cavoretto, Roberto; Chaudhuri, Sanjay; De Rossi, Alessandra; Menduni, Eleonora; Moretti, Francesca; Rodi, Maria Caterina; Venturino, Ezio
2011-09-01
In dynamical systems saddle points partition the domain into basins of attractions of the remaining locally stable equilibria. This problem is rather common especially in population dynamics models, like prey-predator or competition systems. In this paper we construct programs for the detection of points lying on the separatrix curve, i.e. the curve which partitions the domain. Finally, an efficient algorithm, which is based on the Partition of Unity method with local approximants given by Wendland's functions, is used for reconstructing the separatrix curve.
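Wendland's compactly supported C² function is the local approximant in question; below is a plain global one-dimensional interpolation sketch built from it (the actual procedure applies a Partition of Unity over local subdomains, which is omitted here):

```python
import numpy as np

def wendland_c2(r):
    # Wendland's compactly supported C^2 radial function, support radius 1
    return np.where(r < 1.0,
                    (1.0 - np.clip(r, 0.0, 1.0)) ** 4 * (4.0 * r + 1.0),
                    0.0)

def rbf_interpolant(centers, values, delta):
    """Global Wendland interpolation in 1-D; delta scales the support.
    The kernel matrix is symmetric positive definite, so the linear
    system always has a unique solution."""
    dist = np.abs(centers[:, None] - centers[None, :]) / delta
    coeff = np.linalg.solve(wendland_c2(dist), values)
    def s(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return wendland_c2(np.abs(x[:, None] - centers[None, :]) / delta) @ coeff
    return s

# reconstruct a separatrix-like curve y = x^2 from 15 sampled points
xs = np.linspace(0.0, 1.0, 15)
s = rbf_interpolant(xs, xs ** 2, delta=0.5)
```

Because the support is compact, the kernel matrix is sparse for small `delta`, which is what makes the Partition of Unity variant efficient on large point sets.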
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
Clement-Spychala, Meagan E; Couper, David; Zhu, Hongtu; Muller, Keith E
2010-01-01
The diffusion tensor imaging (DTI) protocol characterizes diffusion anisotropy locally in space, thus providing rich detail about white matter tissue structure. Although useful metrics for diffusion tensors have been defined, statistical properties of the measures have been little studied. Assuming homogeneity within a region leads to being able to apply Wishart distribution theory. First, it will be shown that common DTI metrics are simple functions of known test statistics. The average diffusion coefficient (ADC) corresponds to the trace of a Wishart, and is also described as the generalized (multivariate) variance, the average variance of the principal components. Therefore ADC has a known exact distribution (a positively weighted quadratic form in Gaussians) as well as a simple and accurate approximation (Satterthwaite) in terms of a scaled chi square. Of particular interest is that fractional anisotropy (FA) values for given regions of interest are functions of the Geisser-Greenhouse (GG) sphericity estimator. The GG sphericity estimator can be approximated well by a linear transformation of a squared beta random variable. Simulated data demonstrates that the fits work well for simulated diffusion tensors. Applying traditional density estimation techniques for a beta to histograms of FA values from a region allow representing the histogram of hundreds or thousands of values in terms of just two estimates for the beta parameters. Thus using the approximate distribution eliminates the "curse of dimensionality" for FA values. A parallel result holds for ADC.
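Both metrics are simple functions of the tensor's eigenvalues; a quick sketch of the standard definitions (not of the paper's Wishart-based distributional approximations):

```python
import numpy as np

def adc(tensor):
    # average diffusion coefficient: one third of the trace
    return np.trace(tensor) / 3.0

def fa(tensor):
    # fractional anisotropy from the eigenvalues of the 3x3 diffusion tensor
    lam = np.linalg.eigvalsh(tensor)
    dev = lam - lam.mean()
    return np.sqrt(1.5 * np.dot(dev, dev) / np.dot(lam, lam))

isotropic = np.eye(3) * 2.0e-3           # free diffusion: FA = 0
stick = np.diag([1.0e-3, 0.0, 0.0])      # perfectly anisotropic: FA = 1
```

FA ranges from 0 (isotropic) to 1 (diffusion along a single axis), which is why histograms of FA over a region are naturally modelled by a distribution on a bounded interval such as the beta.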
Lowry, R. B.
1985-01-01
Congenital anomalies account for a substantial proportion of childhood morbidity and mortality. They have become proportionately larger because of the decline of such other categories as infections or birth trauma. Approximately 3% of newborns have a serious handicapping or potentially lethal condition; in longterm studies the frequency is much higher. There is no good evidence to suggest that the rates of congenital anomalies are increasing, although this is a common perception. This article discusses diagnosis and management (especially genetic implications) of heart defects, neural tube defects, orofacial clefting, dislocated hip, clubfoot, and hypospadias. PMID:21274150
Ab initio dynamical vertex approximation
NASA Astrophysics Data System (ADS)
Galler, Anna; Thunström, Patrik; Gunacker, Patrik; Tomczak, Jan M.; Held, Karsten
2017-03-01
Diagrammatic extensions of dynamical mean-field theory (DMFT) such as the dynamical vertex approximation (DΓA) allow us to include nonlocal correlations beyond DMFT on all length scales and have proved their worth for model calculations. Here, we develop and implement an ab initio DΓA approach (AbinitioDΓA) for electronic structure calculations of materials. The starting point is the two-particle irreducible vertex in the two particle-hole channels, which is approximated by the bare nonlocal Coulomb interaction and all local vertex corrections. From this, we calculate the full nonlocal vertex and the nonlocal self-energy through the Bethe-Salpeter equation. The AbinitioDΓA approach naturally generates all local DMFT correlations and all nonlocal GW contributions, but also further nonlocal correlations beyond: mixed terms of the former two and nonlocal spin fluctuations. We apply this new methodology to the prototypical correlated metal SrVO3.
Random-Phase Approximation Methods
NASA Astrophysics Data System (ADS)
Chen, Guo P.; Voora, Vamsee K.; Agee, Matthew M.; Balasubramani, Sree Ganesh; Furche, Filipp
2017-05-01
Random-phase approximation (RPA) methods are rapidly emerging as cost-effective validation tools for semilocal density functional computations. We present the theoretical background of RPA in an intuitive rather than formal fashion, focusing on the physical picture of screening and simple diagrammatic analysis. A new decomposition of the RPA correlation energy into plasmonic modes leads to an appealing visualization of electron correlation in terms of charge density fluctuations. Recent developments in the areas of beyond-RPA methods, RPA correlation potentials, and efficient algorithms for RPA energy and property calculations are reviewed. The ability of RPA to approximately capture static correlation in molecules is quantified by an analysis of RPA natural occupation numbers. We illustrate the use of RPA methods in applications to small-gap systems such as open-shell d- and f-element compounds, radicals, and weakly bound complexes, where semilocal density functional results exhibit strong functional dependence.
Testing the frozen flow approximation
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1993-01-01
We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese, et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and n-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cell distribution, at small scales, but it does poorly in the crosscorrelation with n-body which means it is generally not moving mass to the right place, especially in models with high small-scale power.
Potential of the approximation method
Amano, K.; Maruoka, A.
1996-12-31
Developing some techniques for the approximation method, we establish precise versions of the following statements concerning lower bounds for circuits that detect cliques of size s in a graph with m vertices. For 5 ≤ s ≤ m/4, a monotone circuit computing CLIQUE(m, s) contains at least (1/2)·1.8^min(√(s-1)/2, m/(4s)) gates; if a non-monotone circuit computes CLIQUE using a "small" amount of negation, then the circuit contains an exponential number of gates. The former is proved very simply using the so-called bottleneck counting argument within the framework of approximation, whereas the latter is verified by introducing a notion of restricting negation and generalizing the sunflower contraction.
Nonlinear Filtering and Approximation Techniques
1991-09-01
Analytical solution approximation for bearing
NASA Astrophysics Data System (ADS)
Hanafi, Lukman; Mufid, M. Syifaul
2017-08-01
The purpose of lubrication is to separate two surfaces sliding past each other with a film of some material which can be sheared without causing any damage to the surfaces. The Reynolds equation is the basic equation of fluid lubrication and is applied here to the bearing problem. It can be derived from the Navier-Stokes equation and the continuity equation. In this paper, the Reynolds equation is solved using an analytical approximation, making simplifications to obtain the pressure distribution.
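For comparison with the analytical route taken in the paper, the same equation is straightforward to solve numerically; a finite-difference sketch for a one-dimensional slider film (geometry and parameter values are illustrative):

```python
import numpy as np

def reynolds_pressure(h, dx, mu, U):
    """Finite-difference solution of the 1-D incompressible Reynolds
    equation d/dx(h^3 dp/dx) = 6*mu*U*dh/dx with zero (ambient) gauge
    pressure imposed at both ends of the bearing."""
    n = len(h)
    h3 = ((h[:-1] + h[1:]) / 2.0) ** 3        # h^3 evaluated at the cell faces
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                 # boundary conditions p = 0
    for i in range(1, n - 1):
        A[i, i - 1] = h3[i - 1]
        A[i, i] = -(h3[i - 1] + h3[i])
        A[i, i + 1] = h3[i]
        b[i] = 6.0 * mu * U * (h[i + 1] - h[i - 1]) * dx / 2.0
    return np.linalg.solve(A, b)

# converging linear wedge: the film thins in the direction of sliding,
# which is what generates a positive load-carrying pressure
x = np.linspace(0.0, 0.1, 101)
h = 2.0e-4 - 1.0e-3 * x                       # 200 um down to 100 um
p = reynolds_pressure(h, x[1] - x[0], mu=0.1, U=1.0)
```

The numerical profile gives a direct check on any closed-form approximation: both should vanish at the ends and peak inside the converging region.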
Ultrafast approximation for phylogenetic bootstrap.
Minh, Bui Quang; Nguyen, Minh Anh Thi; von Haeseler, Arndt
2013-05-01
Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and the Shimodaira-Hasegawa-like approximate likelihood ratio test have been introduced to speed up the bootstrap. Here, we suggest an ultrafast bootstrap approximation approach (UFBoot) to compute the support of phylogenetic groups in maximum likelihood (ML) based trees. To achieve this, we combine the resampling estimated log-likelihood method with a simple but effective collection scheme of candidate trees. We also propose a stopping rule that assesses the convergence of branch support values to automatically determine when to stop collecting candidate trees. UFBoot achieves a median speed up of 3.1 (range: 0.66-33.3) to 10.2 (range: 1.32-41.4) compared with RAxML RBS for real DNA and amino acid alignments, respectively. Moreover, our extensive simulations show that UFBoot is robust against moderate model violations and the support values obtained appear to be relatively unbiased compared with the conservative standard bootstrap. This provides a more direct interpretation of the bootstrap support. We offer an efficient and easy-to-use software (available at http://www.cibiv.at/software/iqtree) to perform the UFBoot analysis with ML tree inference.
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed a MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics on counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting of all realizations.
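The Markov chain underlying such samplers is the standard double edge swap, shown here in a simplified form with the forbidden-edge check (the paper's contribution is the mixing and counting analysis, not this basic move):

```python
import random

def double_edge_swap(edges, steps, forbidden=(), seed=0):
    """Run the double-edge-swap chain on a simple graph: pick edges (a,b)
    and (c,d), rewire to (a,c) and (b,d) when the result is still simple
    and avoids the forbidden edges. Every accepted move preserves the
    degree of every vertex."""
    rng = random.Random(seed)
    E = {frozenset(e) for e in edges}
    F = {frozenset(e) for e in forbidden}
    for _ in range(steps):
        e1, e2 = rng.sample(sorted(E, key=sorted), 2)
        a, b = sorted(e1)
        c, d = sorted(e2)
        new1, new2 = frozenset((a, c)), frozenset((b, d))
        if len(new1) < 2 or len(new2) < 2:
            continue        # rewiring would create a self-loop
        if new1 in E or new2 in E or new1 in F or new2 in F:
            continue        # would create a multi-edge or a forbidden edge
        E -= {e1, e2}
        E |= {new1, new2}
    return sorted(tuple(sorted(e)) for e in E)

# sample a realization of the all-twos degree sequence on six vertices
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
sampled = double_edge_swap(cycle, 200)
```

Rapid mixing of exactly this kind of chain is what the self-reducibility argument converts into an FPRAS for counting realizations.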
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum, and functions with a bounded first derivative.
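A maximin Latin hypercube of the kind compared in this study can be sketched by generating many random Latin hypercubes and keeping the one whose minimum pairwise distance is largest. This is a cheap candidate search for illustration, not the authors' actual implementation:

```python
import random
import itertools

def latin_hypercube(n, d, rng):
    """One random Latin hypercube in [0,1)^d: each dimension is split
    into n strata and every stratum is sampled exactly once."""
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))

def maximin_lhs(n, d, candidates=50, seed=0):
    """Among `candidates` random Latin hypercubes, keep the one that
    maximizes the minimum pairwise Euclidean distance."""
    rng = random.Random(seed)

    def min_dist(pts):
        return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                   for p, q in itertools.combinations(pts, 2))

    return max((latin_hypercube(n, d, rng) for _ in range(candidates)),
               key=min_dist)
```

The stratification property (one point per stratum in every dimension) is what distinguishes a Latin hypercube from plain Monte Carlo sampling; the maximin criterion additionally spreads the points apart.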
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSSs) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have a very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. Finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSSs. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.
Working Memory in Nonsymbolic Approximate Arithmetic Processing: A Dual-Task Study with Preschoolers
ERIC Educational Resources Information Center
Xenidou-Dervou, Iro; van Lieshout, Ernest C. D. M.; van der Schoot, Menno
2014-01-01
Preschool children have been proven to possess nonsymbolic approximate arithmetic skills before learning how to manipulate symbolic math and thus before any formal math instruction. It has been assumed that nonsymbolic approximate math tasks necessitate the allocation of Working Memory (WM) resources. WM has been consistently shown to be an…
NASA Astrophysics Data System (ADS)
Hinds, Arianne T.
2011-09-01
Spatial transformations whose kernels employ sinusoidal functions for the decorrelation of signals remain as fundamental components of image and video coding systems. Practical implementations are designed in fixed precision for which the most challenging task is to approximate these constants with values that are both efficient in terms of complexity and accurate with respect to their mathematical definitions. Scaled architectures, for example, as used in the implementations of the order-8 Discrete Cosine Transform and its corresponding inverse both specified in ISO/IEC 23002-2 (MPEG C Pt. 2), can be utilized to mitigate the complexity of these approximations. That is, the implementation of the transform can be designed such that it is completed in two stages: 1) the main transform matrix in which the sinusoidal constants are roughly approximated, and 2) a separate scaling stage to further refine the approximations. This paper describes a methodology termed the Common Factor Method, for finding fixed-point approximations of such irrational values suitable for use in scaled architectures. The order-16 Discrete Cosine Transform provides a framework in which to demonstrate the methodology, but the methodology itself can be employed to design fixed-point implementations of other linear transformations.
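The baseline task that the Common Factor Method refines, approximating an irrational transform constant by a dyadic rational n/2^k suitable for integer arithmetic, can be sketched as follows. This shows plain nearest-rounding only; the scaled-architecture refinement described in the paper is not reproduced here:

```python
from math import cos, pi

def dyadic_approx(value, bits):
    """Nearest dyadic rational n / 2**bits to an irrational constant,
    the basic quantization used in fixed-point transform design."""
    n = round(value * (1 << bits))
    return n, n / (1 << bits)

# Example: an 8-bit approximation of the DCT constant cos(pi/4)
n, approx = dyadic_approx(cos(pi / 4), 8)
```

In a scaled architecture, constants like this are further factored so that part of the approximation error is absorbed into a separate scaling stage, trading accuracy against multiplier complexity.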
Code of Federal Regulations, 2010 CFR
2010-01-01
... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute and...
Code of Federal Regulations, 2013 CFR
2013-01-01
... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute and...
Code of Federal Regulations, 2012 CFR
2012-01-01
... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute and...
Code of Federal Regulations, 2011 CFR
2011-01-01
... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute and...
Code of Federal Regulations, 2011 CFR
2011-04-01
... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... develop that type of energy resource and will trigger the public notice and opportunity for...
Code of Federal Regulations, 2014 CFR
2014-04-01
... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... develop that type of energy resource and will trigger the public notice and opportunity for...
Code of Federal Regulations, 2012 CFR
2012-04-01
... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... develop that type of energy resource and will trigger the public notice and opportunity for...
Code of Federal Regulations, 2013 CFR
2013-04-01
... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... develop that type of energy resource and will trigger the public notice and opportunity for...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 2 2011-04-01 2011-04-01 false How does the AFA specify the services provided, functions... Programs May Be Included in An Afa § 1000.87 How does the AFA specify the services provided, functions... AFA must specify in writing the services, functions, and responsibilities to be assumed by the...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 2 2014-04-01 2014-04-01 false How does the AFA specify the services provided, functions... Programs May Be Included in An Afa § 1000.87 How does the AFA specify the services provided, functions... AFA must specify in writing the services, functions, and responsibilities to be assumed by the...
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 2 2013-04-01 2013-04-01 false How does the AFA specify the services provided, functions... Programs May Be Included in An Afa § 1000.87 How does the AFA specify the services provided, functions... AFA must specify in writing the services, functions, and responsibilities to be assumed by the...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 2 2012-04-01 2012-04-01 false How does the AFA specify the services provided, functions... Programs May Be Included in An Afa § 1000.87 How does the AFA specify the services provided, functions... AFA must specify in writing the services, functions, and responsibilities to be assumed by the...
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 2 2010-04-01 2010-04-01 false How does the AFA specify the services provided, functions... Programs May Be Included in An Afa § 1000.87 How does the AFA specify the services provided, functions... AFA must specify in writing the services, functions, and responsibilities to be assumed by the...
Storton, Sharon
2007-01-01
Step 4 of the Ten Steps of Mother-Friendly Care ensures that women have the freedom to walk, move, and assume positions of their choice during labor and birth. The rationales and the evidence in support of this step are presented. PMID:18523670
Code of Federal Regulations, 2010 CFR
2010-04-01
... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF... Requirements § 224.64 How may a tribe assume management of development of different types of energy resources... develop that type of energy resource and will trigger the public notice and opportunity for comment...
ERIC Educational Resources Information Center
Nielsen, Annemette; Michaelsen, Kim F.; Holm, Lotte
2014-01-01
Researchers question the implications of the way in which "motherhood" is constructed in public health discourse. Current nutritional guidelines for Danish parents of young children are part of this discourse. They are shaped by an assumed symbiotic relationship between the nutritional needs of the child and the interest and focus of the…
Code of Federal Regulations, 2010 CFR
2010-10-01
... Federal environmental responsibilities assumed by the Self-Governance Tribe. ... 42 Public Health 1 2010-10-01 2010-10-01 false Since Federal environmental responsibilities are... additional funds available to Self-Governance Tribes to carry out these formerly inherently...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects in... performing these Federal environmental responsibilities, Self-Governance Tribes will be considered the... 42 Public Health 1 2010-10-01 2010-10-01 false Do Self-Governance Tribes become Federal...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a) Required...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a) Required...
Neighbourhood approximation using randomized forests.
Konukoglu, Ender; Glocker, Ben; Zikic, Darko; Criminisi, Antonio
2013-10-01
Leveraging available annotated data is an essential component of many modern methods for medical image analysis. In particular, approaches making use of the "neighbourhood" structure between images for this purpose have shown significant potential. Such techniques achieve high accuracy in analysing an image by propagating information from its immediate "neighbours" within an annotated database. Despite their success in certain applications, wide use of these methods is limited due to the challenging task of determining the neighbours for an out-of-sample image. This task is either computationally expensive due to large database sizes and costly distance evaluations, or infeasible due to distance definitions over semantic information, such as ground truth annotations, which is not available for out-of-sample images. This article introduces Neighbourhood Approximation Forests (NAFs), a supervised learning algorithm providing a general and efficient approach for the task of approximate nearest neighbour retrieval for arbitrary distances. Starting from an image training database and a user-defined distance between images, the algorithm learns to use appearance-based features to cluster images approximating the neighbourhood structure induced by the distance. NAF is able to efficiently infer nearest neighbours of an out-of-sample image, even when the original distance is based on semantic information. We perform experimental evaluation in two different scenarios: (i) age prediction from brain MRI and (ii) patch-based segmentation of unregistered, arbitrary field of view CT images. The results demonstrate the performance, computational benefits, and potential of NAF for different image analysis applications.
Topics in Multivariate Approximation Theory.
1982-05-01
…of the Bramble-Hilbert lemma (see Bramble & Hilbert (1971)). Kergin's scheme raises some questions. In contrast to its univariate antecedent, it… J. R. Rice (1979), An adaptive algorithm for multivariate approximation giving optimal convergence rates, J. Approx. Theory 25, 337-359. J. H. Bramble & S. R. Hilbert (1971), Bounds for a class of linear functionals with applications to Hermite interpolation, SIAM J. Numer. Anal. 7, 112-124.
Approximate transferability in conjugated polyalkenes
NASA Astrophysics Data System (ADS)
Eskandari, Keiamars; Mandado, Marcos; Mosquera, Ricardo A.
2007-03-01
QTAIM-computed atomic and bond properties, as well as delocalization indices (obtained from electron densities computed at the HF, MP2 and B3LYP levels) of several linear and branched conjugated polyalkenes and O- and N-containing conjugated polyenes, have been employed to assess approximately transferable CH groups. The values of these properties indicate that the effects of the functional group extend to four CH groups, whereas those of the terminal carbon affect up to three carbons. Ternary carbons also significantly modify the properties of atoms in the α, β and γ positions.
Improved non-approximability results
Bellare, M.; Sudan, M.
1994-12-31
We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.
Approximation for Bayesian Ability Estimation.
1987-02-18
…the posterior pdfs are given by equations (4) and (5). As shown in Tsutakawa and Lin, … the inverse of the Hessian of the log of (27) with respect to θ, evaluated at the mode; then, under regularity conditions, the marginal posterior pdf of θ is… …two-way contingency tables. Journal of Educational Statistics, 11, 33-56. Lindley, D.V. (1980). Approximate Bayesian methods. Trabajos de Estadística, 31.
Fermion tunneling beyond semiclassical approximation
Majhi, Bibhas Ranjan
2009-02-15
Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related to the trace anomaly.
Generalized Gradient Approximation Made Simple
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-10-01
Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
New Tests of the Fixed Hotspot Approximation
NASA Astrophysics Data System (ADS)
Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.
2005-05-01
We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. The results disagree, in particular, with the recent extreme interpretation of
Haggaz, Abdelrahium D; Elbashir, Leana M; Adam, Gamal K; Rayis, Duria A; Adam, Ishag
2014-01-05
Microscopic examination using Giemsa-stained thick blood films remains the reference standard for detection of malaria parasites, and it is the only method that is widely and practically available for quantifying malaria parasite density. There are few published data (and no study during pregnancy) investigating the parasite density (the ratio of parasites counted within a given number of microscopic fields to counted white blood cells (WBCs)) computed using the actual number of WBCs. Parasitaemia estimated using an assumed WBC count (8,000), which was compared to parasitaemia calculated based on each woman's WBC count, was studied in 98 pregnant women with uncomplicated Plasmodium falciparum malaria at Medani Maternity Hospital, Central Sudan. The geometric mean (SD) of the parasite count was 12,014.6 (9,766.5) and 7,870.8 (19,168.8) ring trophozoites/μl, P < 0.001, using the actual and assumed (8,000) WBC count, respectively. The median (range) of the ratio between the two parasitaemias (assumed/actual WBCs) was 1.5 (0.6-5), i.e., parasitaemia calculated with the assumed WBC count was a median (range) of 1.5 (0.6-5) times higher than parasitaemia calculated using actual WBCs. There were 52 out of 98 patients (53%) with a ratio between 0.5 and 1.5. For 21 patients (21%) this ratio was higher than 2, and for five patients (5%) it was higher than 3. The estimated parasite density using actual WBC counts was significantly lower than the parasite density estimated using assumed WBC counts. Therefore, it is recommended to use the patient's actual WBC count in the estimation of the parasite density.
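The density computation underlying the comparison can be sketched as follows; the counts and WBC concentrations below are illustrative values, not data from the study:

```python
def parasite_density(parasites_counted, wbcs_counted, wbc_per_ul):
    """Parasites per microlitre from a thick-film count: parasites are
    tallied against a fixed number of WBCs, then scaled by the
    (actual or assumed) WBC concentration."""
    return parasites_counted * wbc_per_ul / wbcs_counted

# 300 parasites counted against 200 WBCs on the same film:
assumed = parasite_density(300, 200, 8000)  # conventional assumed 8,000 WBC/ul
actual = parasite_density(300, 200, 5300)   # hypothetical measured WBC count
```

Because the two estimates differ only by the factor (assumed WBC)/(actual WBC), a patient whose true WBC count is below 8,000/μl has her parasitaemia overestimated by exactly that ratio.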
Laguerre approximation of random foams
NASA Astrophysics Data System (ADS)
Liebscher, André
2015-09-01
Stochastic models for the microstructure of foams are valuable tools to study the relations between microstructure characteristics and macroscopic properties. Owing to the physical laws behind the formation of foams, Laguerre tessellations have turned out to be suitable models for foams. Laguerre tessellations are weighted generalizations of Voronoi tessellations, where polyhedral cells are formed through the interaction of weighted generator points. While both share the same topology, the cell curvature of foams allows only an approximation by Laguerre tessellations. This makes the model fitting a challenging task, especially when the preservation of the local topology is required. In this work, we propose an inversion-based approach to fit a Laguerre tessellation model to a foam. The idea is to find a set of generator points whose tessellation best fits the foam's cell system. For this purpose, we transform the model fitting into a minimization problem that can be solved by gradient descent-based optimization. The proposed algorithm restores the generators of a tessellation if it is known to be Laguerre. If, as in the case of foams, no exact solution is possible, an approximative solution is obtained that maintains the local topology.
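The cell membership rule of a Laguerre (power) tessellation, which the fitting procedure effectively inverts, is a weighted nearest-generator query. A minimal sketch of that rule only, not the authors' gradient-descent fitting code:

```python
def laguerre_cell(points, weights, x):
    """Index of the Laguerre (power diagram) cell containing x:
    argmin_i ||x - p_i||^2 - w_i. With all weights equal this
    reduces to the ordinary Voronoi assignment."""
    def power_distance(i):
        p = points[i]
        return sum((a - b) ** 2 for a, b in zip(x, p)) - weights[i]
    return min(range(len(points)), key=power_distance)
```

Increasing a generator's weight enlarges its cell, which is how Laguerre tessellations capture the unequal cell sizes of real foams.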
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
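The truncation idea can be illustrated with a one-dimensional Haar transform: transform, zero all but the largest-magnitude coefficients, and invert. This is a toy sketch; the study's wavelet basis and covariance fields are of course different:

```python
def haar_fwd(x):
    """Full Haar decomposition of a sequence whose length is a power of 2:
    repeatedly split into pairwise averages and pairwise differences."""
    x = list(x)
    out = []
    while len(x) > 1:
        avg = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]
        det = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]
        out = det + out
        x = avg
    return x + out  # [coarsest average, coarsest details, ..., finest details]

def haar_inv(c):
    """Exact inverse of haar_fwd."""
    x = c[:1]
    detail = c[1:]
    while detail:
        n = len(x)
        d, detail = detail[:n], detail[n:]
        x = [v for a, w in zip(x, d) for v in (a + w, a - w)]
    return x

def compress(x, keep):
    """Zero all but the `keep` largest-magnitude wavelet coefficients."""
    c = haar_fwd(x)
    thresh = sorted(map(abs, c), reverse=True)[keep - 1]
    return haar_inv([v if abs(v) >= thresh else 0.0 for v in c])
```

A piecewise-constant signal is reproduced exactly from a handful of coefficients, which is the mechanism behind representing 99% of a smooth, localized correlation field with a few percent of the coefficients.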
Rational approximations to fluid properties
Kincaid, J.M.
1990-05-01
The purpose of this report is to summarize some results that were presented at the Spring AIChE meeting in Orlando, Florida (20 March 1990). We report on recent attempts to develop a systematic method, based on the technique of rational approximation, for creating mathematical models of real-fluid equations of state and related properties. Equation-of-state models for real fluids are usually created by selecting a function p̃(T,ρ) that contains a set of parameters {γ_i}; the {γ_i} are chosen such that p̃(T,ρ) provides a good fit to the experimental data. (Here p is the pressure, T the temperature and ρ is the density.) In most cases a nonlinear least-squares numerical method is used to determine {γ_i}. There are several drawbacks to this method: one has essentially to guess what p̃(T,ρ) should be; the critical region is seldom fit very well; and nonlinear numerical methods are time consuming and sometimes not very stable. The rational approximation approach we describe may eliminate all of these drawbacks. In particular it lets the data choose the function p̃(T,ρ), and its numerical implementation involves only linear algorithms. 27 refs., 5 figs.
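The "linear algorithms" point can be illustrated on the smallest rational model, r(x) = (a0 + a1 x)/(1 + b1 x): multiplying through by the denominator turns the fitting conditions into a linear system in (a0, a1, b1). A hypothetical degree-(1,1) sketch, not the report's equation-of-state models:

```python
def rational_interpolate(xs, ys):
    """Fit r(x) = (a0 + a1*x) / (1 + b1*x) through three points by
    solving the *linear* system a0 + a1*x_i - y_i*b1*x_i = y_i."""
    A = [[1.0, x, -y * x] for x, y in zip(xs, ys)]
    b = list(ys)
    # Gaussian elimination with partial pivoting on the 3x3 system
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    sol = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        sol[r] = (b[r] - sum(A[r][c] * sol[c] for c in range(r + 1, 3))) / A[r][r]
    a0, a1, b1 = sol
    return lambda x: (a0 + a1 * x) / (1 + b1 * x)
```

No nonlinear iteration is needed; a higher-degree numerator and denominator lead to the same kind of linear system with more unknowns.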
Introduction to Methods of Approximation in Physics and Astronomy
NASA Astrophysics Data System (ADS)
van Putten, Maurice H. P. M.
2017-04-01
Modern astronomy reveals an evolving Universe rife with transient sources, mostly discovered - few predicted - in multi-wavelength observations. Our window of observations now includes electromagnetic radiation, gravitational waves and neutrinos. For the practicing astronomer, these are highly interdisciplinary developments that pose a novel challenge to be well-versed in astroparticle physics and data analysis. In realizing the full discovery potential of these multimessenger approaches, the latter increasingly involves high-performance supercomputing. These lecture notes developed out of lectures on mathematical-physics in astronomy to advanced undergraduate and beginning graduate students. They are organised to be largely self-contained, starting from basic concepts and techniques in the formulation of problems and methods of approximation commonly used in computation and numerical analysis. This includes root finding, integration, signal detection algorithms involving the Fourier transform and examples of numerical integration of ordinary differential equations and some illustrative aspects of modern computational implementation. In the applications, considerable emphasis is put on fluid dynamical problems associated with accretion flows, as these are responsible for a wealth of high energy emission phenomena in astronomy. The topics chosen are largely aimed at phenomenological approaches, to capture main features of interest by effective methods of approximation at a desired level of accuracy and resolution. Formulated in terms of a system of algebraic, ordinary or partial differential equations, this may be pursued by perturbation theory through expansions in a small parameter or by direct numerical computation. Successful application of these methods requires a robust understanding of asymptotic behavior, errors and convergence. In some cases, the number of degrees of freedom may be reduced, e.g., for the purpose of (numerical) continuation or to identify
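Root finding, the first technique listed in these notes, is typified by Newton-Raphson iteration; a minimal sketch (names and tolerances are illustrative, not taken from the notes):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: iterate x_{k+1} = x_k - f(x_k)/f'(x_k) until the
    step size falls below tol. Converges quadratically near a simple root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")
```

For instance, applying it to f(x) = x^2 - 2 from x0 = 1 recovers sqrt(2) in a handful of iterations, a standard check of the method's asymptotic behavior and convergence.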
Analytical approximations for spiral waves
Löber, Jakob; Engel, Harald
2013-12-15
We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R_0. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R_+) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R_+ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.
Interplay of approximate planning strategies.
Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P
2015-03-10
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options."
Approximating metal-insulator transitions
NASA Astrophysics Data System (ADS)
Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej
2015-12-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
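The iterative idea can be illustrated in a hypothetical Aubry-André setting (not necessarily the authors' exact scheme): replace the irrational wavenumber α = (√5−1)/2 of the quasiperiodic potential by its Fibonacci convergents p_n/q_n, obtaining periodic potentials whose period q_n grows at each step and which converge to the quasiperiodic limit on ever larger lattices:

```python
import numpy as np

# Illustrative construction: periodic approximants of the Aubry-Andre
# potential V(j) = lam*cos(2*pi*alpha*j), alpha = (sqrt(5)-1)/2, obtained
# by replacing alpha with its Fibonacci convergents F_k/F_{k+1}.

def fibonacci_convergents(n):
    """First n rational approximants F_k/F_{k+1} of (sqrt(5)-1)/2."""
    a, b = 1, 1
    out = []
    for _ in range(n):
        out.append((a, b))
        a, b = b, a + b
    return out

alpha = (np.sqrt(5) - 1) / 2
lam, sites = 2.0, 64
j = np.arange(sites)
v_limit = lam * np.cos(2 * np.pi * alpha * j)
for p, q in fibonacci_convergents(8):
    v_n = lam * np.cos(2 * np.pi * (p / q) * j)   # potential of period q
    err = np.max(np.abs(v_n - v_limit))
    print(f"p/q = {p}/{q}:  max deviation on {sites} sites = {err:.3f}")
```

Each approximant is an ordinary periodic lattice, so its spectrum is exactly computable, while the deviation from the quasiperiodic potential is controlled by |p/q − α| and the system size.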
Analytical approximations for spiral waves.
Löber, Jakob; Engel, Harald
2013-12-01
We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R_0. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R_+) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R_+ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.
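For context, the kinematic description starts from the standard linear eikonal relation for a weakly curved front; this is textbook background, not the paper's new implicit Ω(R_0) relation:

```latex
% Linear eikonal relation for the normal velocity of a weakly curved front
% (c_0: plane-wave speed, D: effective diffusivity, \kappa: front curvature):
c_{n} = c_{0} - D\,\kappa .
% For a rigidly rotating spiral of frequency \Omega the front shape is
% stationary in the co-rotating frame; imposing this on the relation above
% yields an ODE for the front shape and, with matching conditions at the
% core, an implicit relation between \Omega and the core radius R_0.
```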
Approximate analytic solutions to the NPDD: Short exposure approximations
NASA Astrophysics Data System (ADS)
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short, low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices
Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher
2015-01-01
We explore the connection between two problems that have arisen independently in the signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is a considerable interest in estimating the geometric mean of a SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms feature at the same time fast convergence, low computational complexity per iteration and guarantee of convergence. For this reason, recently other definitions of geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean as compared to its competitors and verifies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations. PMID:25919667
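For concreteness, here is a sketch of the two standard ingredients mentioned above: the closed-form geometric mean of two SPD matrices, and the naive fixed-point ("Karcher flow") iteration for more than two. This is illustrative background, not the AJD-based approximation proposed in the paper:

```python
import numpy as np

def spd_power(A, p):
    """A**p for a symmetric positive definite A, via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.T

def spd_log(A):
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def sym_exp(A):
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def geometric_mean_pair(A, B):
    """Closed form: A #_{1/2} B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah, Aih = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Aih @ B @ Aih, 0.5) @ Ah

def geometric_mean(mats, n_iter=50):
    """Naive fixed-point iteration for the Fisher-metric (Karcher) mean."""
    G = np.mean(mats, axis=0)                    # arithmetic-mean start
    for _ in range(n_iter):
        Gh, Gih = spd_power(G, 0.5), spd_power(G, -0.5)
        S = np.mean([spd_log(Gih @ C @ Gih) for C in mats], axis=0)
        G = Gh @ sym_exp(S) @ Gh
        G = 0.5 * (G + G.T)                      # kill numerical drift
    return G
```

For two matrices the iteration reproduces the closed form; its drawbacks for larger sets (repeated eigendecompositions per step, no general convergence guarantee) are exactly the limitations of existing iterative algorithms that motivate the AJD-based approximation.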
NASA Technical Reports Server (NTRS)
Billingham, John; Tarter, Jill
1989-01-01
The maximum range is calculated at which radar signals from the earth could be detected by a search system similar to the NASA SETI Microwave Observing Project (SETI MOP) assumed to be operating out in the Galaxy. Figures are calculated for the Targeted Search and for the Sky Survey parts of the MOP, both planned to be operating in the 1990s. The probability of detection is calculated for the two most powerful transmitters, the planetary radar at Arecibo (Puerto Rico) and the ballistic missile early warning systems (BMEWSs), assuming that the terrestrial radars are only in the eavesdropping mode. It was found that, for the case of a single transmitter within the maximum range, the highest probability is for the sky survey detecting BMEWSs; this is directly proportional to BMEWS sky coverage and is therefore 0.25.
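The order of magnitude behind such estimates can be sketched with the isotropic link budget S = EIRP/(4πR²), inverted at the receiver's threshold flux. The EIRP and threshold below are invented placeholders, not the figures used in the study:

```python
import math

# Back-of-envelope eavesdropping range: an isotropic-equivalent radiated
# power EIRP spreads over a sphere, so the flux at range R is
# S = EIRP / (4*pi*R^2); the detectable range follows by inverting at the
# receiver's threshold flux S_min. All numbers are illustrative assumptions.

LY = 9.461e15  # metres per light year

def max_range_ly(eirp_w, s_min_w_m2):
    """Maximum detection range in light years for a given threshold flux."""
    return math.sqrt(eirp_w / (4 * math.pi * s_min_w_m2)) / LY

# Assumed: ~1e13 W EIRP for a powerful planetary radar, and an assumed
# 1e-26 W/m^2 receiver threshold.
print(f"{max_range_ly(1e13, 1e-26):.0f} ly")
```

Because range scales as the square root of EIRP, a hundredfold more powerful transmitter only extends the detectable range tenfold, which is why transmitter assumptions dominate such calculations.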
Fernandes, Ralston; Job, R F Soames; Hatfield, Julie
2007-01-01
In road safety, it may be debated whether all risky behaviors are sufficiently similar to be explained by similar factors. The often assumed generalizability of the factors that influence risky driving behaviors has been inadequately tested. Study 1 (N=116) examined the role of demographic, personality and attitudinal factors in the prediction of a range of risky driving behaviors, for young drivers. Results illustrated that different driving behaviors were predicted by different factors (e.g., speeding was predicted by authority-rebellion, while drink driving was predicted by sensation seeking and optimism bias). Study 2 (N=127) examined the generalizability of these results to the general driving population. Study 1 results did not generalize. Predictive factors remained behavior-specific, but different predictor-behavior relationships were observed in the community sample. Overall, results suggest that future research and practice should focus on a multi-factor framework for specific risky driving behaviors, rather than assuming generalizability across behaviors and driving populations.
Billingham, J; Tarter, J
1992-01-01
This paper estimates the maximum range at which radar signals from the Earth could be detected by a search system similar to the NASA Search for Extraterrestrial Intelligence Microwave Observing Project (SETI MOP) assumed to be operating out in the galaxy. Figures are calculated for the Targeted Search, and for the Sky Survey parts of the MOP, both operating, as currently planned, in the second half of the decade of the 1990s. Only the most powerful terrestrial transmitters are considered, namely, the planetary radar at Arecibo in Puerto Rico, and the ballistic missile early warning systems (BMEWS). In each case the probabilities of detection over the life of the MOP are also calculated. The calculation assumes that we are only in the eavesdropping mode. Transmissions intended to be detected by SETI systems are likely to be much stronger and would of course be found with higher probability to a greater range. Also, it is assumed that the transmitting civilization is at the same level of technological evolution as ours on Earth. This is very improbable. If we were to detect another technological civilization, it would, on statistical grounds, be much older than we are and might well have much more powerful transmitters. Both factors would make detection by the NASA MOP a much more likely outcome.
Bamia, Christina; White, Ian R; Kenward, Michael G
2013-07-10
Linear mixed models are often used for the analysis of data from clinical trials with repeated quantitative outcomes. This paper considers linear mixed models where a particular form is assumed for the treatment effect, in particular constant over time or proportional to time. For simplicity, we assume no baseline covariates and complete post-baseline measures, and we model arbitrary mean responses for the control group at each time. For the variance-covariance matrix, we consider an unstructured model, a random intercepts model and a random intercepts and slopes model. We show that the treatment effect estimator can be expressed as a weighted average of the observed time-specific treatment effects, with weights depending on the covariance structure and the magnitude of the estimated variance components. For an assumed constant treatment effect, under the random intercepts model, all weights are equal, but in the random intercepts and slopes and the unstructured models, we show that some weights can be negative: thus, the estimated treatment effect can be negative, even if all time-specific treatment effects are positive. Our results suggest that particular models for the treatment effect combined with particular covariance structures may result in estimated treatment effects of unexpected magnitude and/or direction. Methods are illustrated using a Parkinson's disease trial. Copyright © 2012 John Wiley & Sons, Ltd.
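The weighted-average result can be reproduced numerically. For a constant treatment effect, the GLS weights on the time-specific effects are w = V⁻¹1 / (1ᵀV⁻¹1), where V is the within-subject covariance matrix. The sketch below uses invented variance components to show equal weights under random intercepts but a negative weight once a slope variance is added:

```python
import numpy as np

# GLS weights on time-specific treatment effects for an assumed constant
# treatment effect: w = V^{-1} 1 / (1' V^{-1} 1). Variance components and
# time points below are invented for illustration.

def gls_weights(V):
    ones = np.ones(V.shape[0])
    u = np.linalg.solve(V, ones)
    return u / u.sum()

t = np.array([0.0, 1.0, 2.0, 3.0])

# Random intercepts (compound symmetry): all weights equal.
V_ri = np.eye(4) + 1.0 * np.ones((4, 4))
print(gls_weights(V_ri))          # [0.25 0.25 0.25 0.25]

# Adding a random slope: V = I + tau1^2 * t t'. The weight on the last
# visit turns negative, so the estimated "constant" effect can be negative
# even when every time-specific effect is positive.
V_rs = np.eye(4) + 4.0 * np.outer(t, t)
print(gls_weights(V_rs))
```

The weights always sum to one, so a negative weight means the estimator extrapolates against the late time points, which is the unexpected-direction phenomenon the paper warns about.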
Bilal, Jalal A; Gasim, Gasim I; Karsani, Amani H; Elbashir, Leana M; Adam, Ishag
2016-04-01
Estimating the malaria parasite count is needed for assessing the severity of the disease and during follow-up. This study was conducted to determine the malaria parasite density among children using the actual white blood cell (WBC) count and an assumed WBC count (8.0 × 10^9/l). A cross-sectional study was conducted at New Halfa Hospital, Sudan. WBC counts and asexual malaria parasite counts were performed on blood films. One hundred and three children were enrolled. The mean (SD) WBC count was 6.2 (2.9) × 10^9 cells/l. The geometric mean (SD) of the parasite count using the assumed WBC count (8.0 × 10^9 cells/l) was significantly higher than that estimated using the actual WBC count [7345.76 (31,038.56) vs. 5965 (28,061.57) rings/μl, p = 0.042]. Malaria parasitemia based on the assumed (8.0 × 10^9/l) WBC count is higher than parasitemia based on the actual WBC count. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
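The two densities being compared come from the standard thick-film formula: parasites are counted against a fixed number of WBCs on the film, then scaled by the WBC count per microlitre. A minimal sketch with illustrative counts:

```python
# Standard thick-film calculation: parasites counted against a number of
# WBCs on the film, scaled by the per-microlitre WBC count. The counts
# below are illustrative, not data from the study.

def parasite_density(parasites_counted, wbc_per_ul, wbcs_counted=200):
    """Asexual parasites per microlitre of blood."""
    return parasites_counted * wbc_per_ul / wbcs_counted

# Same film, two denominators: the assumed 8.0e9/l (= 8000/ul) WBC count
# inflates the density relative to a measured count of 6.2e9/l.
print(parasite_density(50, 8000))   # 2000.0 parasites/ul
print(parasite_density(50, 6200))   # 1550.0 parasites/ul
```

Because the formula is linear in the WBC count, any child whose true count is below the assumed 8.0 × 10^9/l has an overestimated parasite density, matching the direction of the bias reported above.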
Stadler, Tanja; Vaughan, Timothy G; Gavryushkin, Alex; Guindon, Stephane; Kühnert, Denise; Leventhal, Gabriel E; Drummond, Alexei J
2015-05-07
One of the central objectives in the field of phylodynamics is the quantification of population dynamic processes using genetic sequence data or in some cases phenotypic data. Phylodynamics has been successfully applied to many different processes, such as the spread of infectious diseases, within-host evolution of a pathogen, macroevolution and even language evolution. Phylodynamic analysis requires a probability distribution on phylogenetic trees spanned by the genetic data. Because such a probability distribution is not available for many common stochastic population dynamic processes, coalescent-based approximations assuming deterministic population size changes are widely employed. Key to many population dynamic models, in particular epidemiological models, is a period of exponential population growth during the initial phase. Here, we show that the coalescent does not well approximate stochastic exponential population growth, which is typically modelled by a birth-death process. We demonstrate that introducing demographic stochasticity into the population size function of the coalescent improves the approximation for values of R0 close to 1, but substantial differences remain for large R0. In addition, the computational advantage of using an approximation over exact models vanishes when introducing such demographic stochasticity. These results highlight that we need to increase efforts to develop phylodynamic tools that correctly account for the stochasticity of population dynamic models for inference.
Approximate Bayesian computation with functional statistics.
Soubeyrand, Samuel; Carpentier, Florence; Guiton, François; Klein, Etienne K
2013-03-26
Functional statistics are commonly used to characterize spatial patterns in general and spatial genetic structures in population genetics in particular. Such functional statistics also enable the estimation of parameters of spatially explicit (and genetic) models. Recently, Approximate Bayesian Computation (ABC) has been proposed to estimate model parameters from functional statistics. However, applying ABC with functional statistics may be cumbersome because of the high dimension of the set of statistics and the dependences among them. To tackle this difficulty, we propose an ABC procedure which relies on an optimized weighted distance between observed and simulated functional statistics. We applied this procedure to a simple step model, a spatial point process characterized by its pair correlation function and a pollen dispersal model characterized by genetic differentiation as a function of distance. These applications showed how the optimized weighted distance improved estimation accuracy. In the discussion, we consider the application of the proposed ABC procedure to functional statistics characterizing non-spatial processes.
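A minimal ABC-rejection sketch in this spirit, with an invented toy model (a normal mean, and the empirical CDF on a grid as the functional statistic) and simple inverse-variance weights standing in for the optimized weighted distance of the paper:

```python
import numpy as np

# ABC rejection with a variance-weighted distance between functional
# statistics (here: the empirical CDF on a grid). Model, prior, grid and
# tolerance are invented for illustration.

rng = np.random.default_rng(0)
grid = np.linspace(-4, 8, 25)

def summary(x):
    """Functional statistic: empirical CDF evaluated on a fixed grid."""
    return (x[:, None] <= grid).mean(axis=0)

# Observed data from a normal with unknown mean theta (sd fixed at 1).
theta_true = 2.0
s_obs = summary(rng.normal(theta_true, 1.0, size=300))

# Prior-predictive simulations.
thetas = rng.uniform(-5, 5, size=3000)
sims = np.array([summary(rng.normal(th, 1.0, size=300)) for th in thetas])

# Weight each grid point by the inverse prior-predictive variance, so that
# high-variance components do not dominate the distance.
w = 1.0 / (sims.var(axis=0) + 1e-12)
dist = np.sqrt(((sims - s_obs) ** 2 * w).sum(axis=1))

accepted = thetas[dist <= np.quantile(dist, 0.02)]
print(f"posterior mean ~ {accepted.mean():.2f} (true {theta_true})")
```

The weighting step is the crude stand-in for the paper's optimization: without it, the grid points where the CDF varies most across the prior would dominate the acceptance decision.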
Fast Approximate Quadratic Programming for Graph Matching
Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.
2015-01-01
Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves performance. PMID:25886624
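The Frank-Wolfe relaxation at the heart of such a scheme can be sketched as follows (an indicative reimplementation, not the authors' reference code): relax permutations to doubly stochastic matrices, repeatedly linearize the objective f(P) = tr(A P Bᵀ Pᵀ), solve the resulting linear assignment problem, take the best convex step, and finally project back onto the permutations:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def faq_match(A, B, n_iter=30):
    """Frank-Wolfe sketch for max_P tr(A P B' P') over permutations P."""
    n = A.shape[0]
    P = np.full((n, n), 1.0 / n)                 # doubly stochastic start
    for _ in range(n_iter):
        grad = A @ P @ B.T + A.T @ P @ B         # gradient of the objective
        r, c = linear_sum_assignment(-grad)      # best ascent vertex Q
        Q = np.zeros((n, n)); Q[r, c] = 1.0
        D = Q - P                                # search direction
        a = np.trace(A @ D @ B.T @ D.T)          # f(P+sD) = f(P) + b*s + a*s^2
        b = np.trace(A @ D @ B.T @ P.T) + np.trace(A @ P @ B.T @ D.T)
        s = 1.0 if a >= 0 else min(1.0, max(0.0, -b / (2 * a)))
        P = P + s * D                            # exact line search step
    r, c = linear_sum_assignment(-P)             # project to a permutation
    Pi = np.zeros((n, n)); Pi[r, c] = 1.0
    return Pi

# Toy check: match a weighted graph to a permuted copy of itself.
rng = np.random.default_rng(1)
A = rng.random((8, 8)); A = A + A.T
Pi_true = np.eye(8)[rng.permutation(8)]
B = Pi_true @ A @ Pi_true.T
Pi = faq_match(A, B)
print(np.trace(A @ Pi @ B.T @ Pi.T))             # objective at found matching
```

Each iteration costs a few matrix products plus one linear assignment solve, which is the source of the speed advantage over generic quadratic programming.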
Approximate protein structural alignment in polynomial time
Kolodny, Rachel; Linial, Nathan
2004-01-01
Alignment of protein structures is a fundamental task in computational molecular biology. Good structural alignments can help detect distant evolutionary relationships that are hard or impossible to discern from protein sequences alone. Here, we study the structural alignment problem as a family of optimization problems and develop an approximate polynomial-time algorithm to solve them. For a commonly used scoring function, the algorithm runs in O(n^10/ε^6) time, for a globular protein of length n, and it detects alignments that score within an additive error of ε from all optima. Thus, we prove that this task is computationally feasible, although the method that we introduce is too slow to be a useful everyday tool. We argue that such approximate solutions are, in fact, of greater interest than exact ones because of the noisy nature of experimentally determined protein coordinates. The measurement of similarity between a pair of protein structures used by our algorithm involves the Euclidean distance between the structures (appropriately rigidly transformed). We show that an alternative approach, which relies on internal distance matrices, must incorporate sophisticated geometric ingredients if it is to guarantee optimality and run in polynomial time. We use these observations to visualize the scoring function for several real instances of the problem. Our investigations yield insights on the computational complexity of protein alignment under various scoring functions. These insights can be used in the design of scoring functions for which the optimum can be approximated efficiently and perhaps in the development of efficient algorithms for the multiple structural alignment problem. PMID:15304646
Function approximation using adaptive and overlapping intervals
Patil, R.B.
1995-05-01
A problem common to many disciplines is to approximate a function given only its values at various points in the input variable space. A method is proposed for approximating a function mapping several input variables to one output variable. The model takes the form of a weighted average of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are automatically determined from given training data by a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid in the input domain with overlapping cells. The non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique, radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
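A stripped-down version of the model, with fixed rather than learned centers and widths, already shows the weighted-averaging mechanics; the grid, overlap factor, and target function below are invented for illustration:

```python
import numpy as np

# Minimal fixed-grid version of the idea: overlapping Gaussian basis
# functions on intervals, normalized so the prediction is a weighted
# average, with output weights fit by least squares (no learning of
# centers/widths, unlike the proposed method).

def design(x, centers, width):
    phi = np.exp(-0.5 * ((x[:, None] - centers) / width) ** 2)
    return phi / phi.sum(axis=1, keepdims=True)   # overlapping, normalized

x_train = np.linspace(0.0, np.pi, 200)
y_train = np.sin(x_train)

centers = np.linspace(0.0, np.pi, 15)             # uniform grid of cells
width = 1.5 * (centers[1] - centers[0])           # cells overlap
Phi = design(x_train, centers, width)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

x_test = np.linspace(0.1, np.pi - 0.1, 50)
err = np.max(np.abs(design(x_test, centers, width) @ w - np.sin(x_test)))
print(f"max abs error: {err:.4f}")
```

The normalization step is what makes this a weighted average (the basis responses sum to one at every input), which is also the link to Nadaraya-Watson kernel regression and Takagi-Sugeno fuzzy models mentioned in the abstract.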
Mean-Field Approximation to the Hydrophobic Hydration in the Liquid-Vapor Interface of Water.
Abe, Kiharu; Sumi, Tomonari; Koga, Kenichiro
2016-03-03
A mean-field approximation to the solvation of nonpolar solutes in the liquid-vapor interface of aqueous solutions is proposed. It is first remarked with a numerical illustration that the solvation of a methane-like solute in bulk liquid water is accurately described by the mean-field theory of liquids, the main idea of which is that the probability (Pcav) of finding a cavity in the solvent that can accommodate the solute molecule and the attractive interaction energy (uatt) that the solute would feel if it is inserted in such a cavity are both functions of the solvent density alone. It is then assumed that the basic idea is still valid in the liquid-vapor interface, but Pcav and uatt are separately functions of different coarse-grained local densities, not functions of a common local density. Validity of the assumptions is confirmed for the solvation of the methane-like particle in the interface of model water at temperatures between 253 and 613 K. With the mean-field approximation extended to the inhomogeneous system the local solubility profiles across the interface at various temperatures are calculated from Pcav and uatt obtained at a single temperature. The predicted profiles are in excellent agreement with those obtained by the direct calculation of the excess chemical potential over an interfacial region where the solvent local density varies most rapidly.
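One way to write the mean-field relations implied above (a sketch, with symbols as in the abstract):

```latex
% Bulk mean-field relation for the excess chemical potential of the solute,
% with both terms functions of the solvent density \rho:
\mu_{\mathrm{ex}} = -k_{B}T \ln P_{\mathrm{cav}}(\rho) + u_{\mathrm{att}}(\rho),
% and the interfacial extension proposed above, with P_cav and u_att
% evaluated at two *different* coarse-grained local densities:
\mu_{\mathrm{ex}}(z) = -k_{B}T \ln P_{\mathrm{cav}}\big(\bar{\rho}_{1}(z)\big)
                       + u_{\mathrm{att}}\big(\bar{\rho}_{2}(z)\big).
```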
PHRAPL: Phylogeographic Inference Using Approximate Likelihoods.
Jackson, Nathon D; Morales, Ariadna E; Carstens, Bryan C; O'Meara, Brian C
2017-02-16
The demographic history of most species is complex, with multiple evolutionary processes combining to shape the observed patterns of genetic diversity. To infer this history, the discipline of phylogeography has (to date) used models that simplify the historical demography of the focal organism, for example by assuming or ignoring ongoing gene flow between populations or by requiring a priori specification of divergence history. Since no single model incorporates every possible evolutionary process, researchers rely on intuition to choose the models that they use to analyze their data. Here, we describe an approximate likelihood approach that reduces this reliance on intuition. PHRAPL allows users to calculate the probability of a large number of complex demographic histories given a set of gene trees, enabling them to identify the most likely underlying model and estimate parameters for a given system. Available model parameters include coalescence time among populations or species, gene flow, and population size. We describe the method and test its performance in model selection and parameter estimation using simulated data. We also compare model probabilities estimated using our approximate likelihood method to those obtained using standard analytical likelihood. The method performs well under a wide range of scenarios, although this is sometimes contingent on sampling many loci. In most scenarios, as long as there are enough loci and if divergence among populations is sufficiently deep, PHRAPL can return the true model in nearly all simulated replicates. Parameter estimates from the method are also generally accurate in most cases. PHRAPL is a valuable new method for phylogeographic model selection and will be particularly useful as a tool to more extensively explore demographic model space than is typically done or to estimate parameters for complex models that are not readily implemented using current methods. Estimating relevant parameters using the most
Multidimensional stochastic approximation Monte Carlo
NASA Astrophysics Data System (ADS)
Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded, powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E_1, E_2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_1+E_2) from g(E_1, E_2).
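A toy SAMC run on a system whose density of states is known exactly (two dice, "energy" E = sum of faces, so g(E) = 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1) shows the flat-histogram mechanics; the gain schedule and run length are illustrative choices, not tuned values:

```python
import math
import random

# SAMC sketch: the walker accepts moves with min(1, g_est(old)/g_est(new)),
# and after every step adds a decaying gain to ln g_est at the current
# energy. Differences of ln g_est converge to the true ln g.

random.seed(0)
lng = {e: 0.0 for e in range(2, 13)}   # running estimates of ln g(E)
d1, d2 = 1, 1
t0 = 1000.0
for t in range(1, 200001):
    # Propose re-rolling one of the two dice.
    if random.random() < 0.5:
        new = (random.randint(1, 6), d2)
    else:
        new = (d1, random.randint(1, 6))
    e_old, e_new = d1 + d2, new[0] + new[1]
    # Flat-histogram acceptance: biases the walk towards rare energies.
    if random.random() < min(1.0, math.exp(lng[e_old] - lng[e_new])):
        d1, d2 = new
    lng[d1 + d2] += t0 / max(t0, t)    # SAMC gain, decaying after t0 steps

# Report g(E)/g(2); the exact ratios are 1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1.
for e in sorted(lng):
    print(e, round(math.exp(lng[e] - lng[2]), 2))
```

The same update applied to a pair of macroscopic variables (E_1, E_2) instead of a scalar E gives the multidimensional generalization discussed above.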
Randomized approximate nearest neighbors algorithm
Jones, Peter Wilcox; Osipov, Andrei; Rokhlin, Vladimir
2011-01-01
We present a randomized algorithm for the approximate nearest neighbor problem in d-dimensional Euclidean space. Given N points {x_j} in R^d, the algorithm attempts to find k nearest neighbors for each of x_j, where k is a user-specified integer parameter. The algorithm is iterative, and its running time requirements are proportional to T·N·(d·(log d) + k·(d + log k)·(log N)) + N·k²·(d + log k), with T the number of iterations performed. The memory requirements of the procedure are of the order N·(d + k). A by-product of the scheme is a data structure, permitting a rapid search for the k nearest neighbors among {x_j} for an arbitrary point in R^d. The cost of each such query is proportional to T·(d·(log d) + log(N/k)·k·(d + log k)), and the memory requirements for the requisite data structure are of the order N·(d + k) + T·(d + N). The algorithm utilizes random rotations and a basic divide-and-conquer scheme, followed by a local graph search. We analyze the scheme's behavior for certain types of distributions of {x_j} and illustrate its performance via several numerical examples. PMID:21885738
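The rotate/sort/local-search idea can be condensed into a sketch (without the divide-and-conquer recursion or the paper's complexity guarantees): after a random rotation, points close in space are often close in the sorted order of one coordinate, so comparing each point only against a small window of its sorted neighbours, repeated over a few rotations, recovers most true nearest neighbours. Data sizes and window width below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, k, window, T = 400, 5, 3, 12, 6
X = rng.normal(size=(N, d))

best = {i: {} for i in range(N)}        # candidate index -> squared distance
for _ in range(T):
    Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random rotation
    order = np.argsort((X @ Q)[:, 0])              # sort on one coordinate
    for pos, i in enumerate(order):
        for j in order[pos + 1: pos + 1 + window]:  # local window only
            dij = float(((X[i] - X[j]) ** 2).sum())
            best[i][j] = dij
            best[j][i] = dij

approx = {i: set(sorted(best[i], key=best[i].get)[:k]) for i in range(N)}

# Recall against brute force.
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(D, np.inf)
true = {i: set(np.argsort(D[i])[:k]) for i in range(N)}
recall = np.mean([len(approx[i] & true[i]) / k for i in range(N)])
print(f"recall@{k}: {recall:.2f}")
```

Each extra rotation raises recall at linear cost, which is the T-dependence visible in the running-time bound quoted above.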
Femtolensing: Beyond the semiclassical approximation
NASA Technical Reports Server (NTRS)
Ulmer, Andrew; Goodman, Jeremy
1995-01-01
Femtolensing is a gravitational lensing effect in which the magnification is a function not only of the position and sizes of the source and lens, but also of the wavelength of light. Femtolensing is the only known effect of (10^-13 - 10^-16 solar mass) dark-matter objects and may possibly be detectable in cosmological gamma-ray burst spectra. We present a new and efficient algorithm for femtolensing calculations in general potentials. The physical optics results presented here differ at low frequencies from the semiclassical approximation, in which the flux is attributed to a finite number of mutually coherent images. At higher frequencies, our results agree well with the semiclassical predictions. Applying our method to a point-mass lens with external shear, we find complex events that have structure at both large and small spectral resolution. In this way, we show that femtolensing may be observable for lenses up to 10^-11 solar mass, much larger than previously believed. Additionally, we discuss the possibility of a search for femtolensing of white dwarfs in the Large Magellanic Cloud at optical wavelengths.
Generalized stationary phase approximations for mountain waves
NASA Astrophysics Data System (ADS)
Knight, H.; Broutman, D.; Eckermann, S. D.
2016-04-01
Large altitude asymptotic approximations are derived for vertical displacements due to mountain waves generated by hydrostatic wind flow over arbitrary topography. This leads to new asymptotic analytic expressions for wave-induced vertical displacement for mountains with an elliptical Gaussian shape and with the major axis oriented at any angle relative to the background wind. The motivation is to understand local maxima in vertical displacement amplitude at a given height for elliptical mountains aligned at oblique angles to the wind direction, as identified in Eckermann et al. ["Effects of horizontal geometrical spreading on the parameterization of orographic gravity-wave drag. Part 1: Numerical transform solutions," J. Atmos. Sci. 72, 2330-2347 (2015)]. The standard stationary phase method reproduces one type of local amplitude maximum that migrates downwind with increasing altitude. Another type of local amplitude maximum stays close to the vertical axis over the center of the mountain, and a new generalized stationary phase method is developed to describe this other type of local amplitude maximum and the horizontal variation of wave-induced vertical displacement near the vertical axis of the mountain in the large altitude limit. The new generalized stationary phase method describes the asymptotic behavior of integrals where the asymptotic parameter is raised to two different powers (1/2 and 1) rather than just one power as in the standard stationary phase method. The vertical displacement formulas are initially derived assuming a uniform background wind but are extended to accommodate both vertical shear with a fixed wind direction and vertical variations in the buoyancy frequency.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
Adiabatic approximation and fluctuations in exciton-polariton condensates
NASA Astrophysics Data System (ADS)
Bobrovska, Nataliya; Matuszewski, Michał
2015-07-01
We study the relation between the models commonly used to describe the dynamics of nonresonantly pumped exciton-polariton condensates, namely the ones described by the complex Ginzburg-Landau equation, and by the open-dissipative Gross-Pitaevskii equation including a separate equation for the reservoir density. In particular, we focus on the validity of the adiabatic approximation and small density fluctuations approximation that allow one to reduce the coupled condensate-reservoir dynamics to a single partial differential equation. We find that the adiabatic approximation consists of three independent analytical conditions that have to be fulfilled simultaneously. By investigating stochastic versions of the two corresponding models, we verify that the breakdown of these approximations can lead to discrepancies in correlation lengths and distributions of fluctuations. Additionally, we consider the phase diffusion and number fluctuations of a condensate in a box, and show that self-consistent description requires treatment beyond the typical Bogoliubov approximation.
Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
NASA Astrophysics Data System (ADS)
Liu, Yu-Sheng; McKeen, David; Miller, Gerald A.
2017-02-01
Beam dump experiments have been used to search for new particles with null results interpreted in terms of limits on masses mϕ and coupling constants ɛ. However, these limits have been obtained by using approximations [including the Weizsäcker-Williams (WW) approximation] or Monte-Carlo simulations. We display methods, using a new scalar boson as an example, to obtain the cross section and the resulting particle production numbers without using approximations or Monte-Carlo simulations. We show that the approximations cannot be used to obtain accurate values of cross sections. The corresponding exclusion plots differ by substantial amounts when seen on a linear scale. In the event of a discovery, we generate pseudodata (assuming given values of mϕ and ɛ) in the currently allowed regions of parameter space. The use of approximations to analyze the pseudodata for the future experiments is shown to lead to considerable errors in determining the parameters. Furthermore, a new region of parameter space can be explored without using one of the common approximations, mϕ≫me. Our method can be used as a consistency check for Monte-Carlo simulations.
A consistent collinear triad approximation for operational wave models
NASA Astrophysics Data System (ADS)
Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.
2016-08-01
In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.
On Integral Upper Limits Assuming Power-law Spectra and the Sensitivity in High-energy Astronomy
NASA Astrophysics Data System (ADS)
Ahnen, Max L.
2017-02-01
The high-energy non-thermal universe is dominated by power-law-like spectra. Therefore, results in high-energy astronomy are often reported as parameters of power-law fits, or, in the case of a non-detection, as an upper limit assuming the underlying unseen spectrum behaves as a power law. In this paper, I demonstrate a simple and powerful one-to-one relation of the integral upper limit in the two-dimensional power-law parameter space into the spectrum parameter space and use this method to unravel the so-far convoluted question of the sensitivity of astroparticle telescopes.
Post-Gaussian approximations in phase ordering kinetics
NASA Astrophysics Data System (ADS)
Mazenko, Gene F.
1994-05-01
Existing theories for the growth of order in unstable systems have successfully exploited the use of a Gaussian auxiliary field. The limitations imposed on such theories by assuming this field to be Gaussian have recently become clearer. In this paper it is shown how this Gaussian restriction can be removed in order to obtain improved approximations for the scaling properties of such systems. In particular it is shown how the improved theory can explain the recent numerical results of Blundell, Bray, and Sattler [Phys. Rev. E 48, 2476 (1993)] which are in qualitative disagreement with Gaussian theories.
An Examination of New Paradigms for Spline Approximations.
Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A
2006-01-01
Lavery splines are examined in the univariate and bivariate cases. In both instances relaxation-based algorithms for approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, the univariate case, however, is investigated in greater detail so as to further the understanding of the bivariate case.
Common Variable Immunodeficiency.
Saikia, Biman; Gupta, Sudhir
2016-04-01
Common variable immunodeficiency (CVID) is the most common primary immunodeficiency of young adolescents and adults, which also affects children. The disease remains largely under-diagnosed in India and Southeast Asian countries. Although it is sporadic in the majority of cases, the disease may be inherited in an autosomal recessive pattern and, rarely, in an autosomal dominant pattern. Patients, in addition to frequent sino-pulmonary infections, are also susceptible to various autoimmune diseases and malignancy, predominantly lymphoma and leukemia. Other characteristic lesions include lymphocytic and granulomatous interstitial lung disease, and nodular lymphoid hyperplasia of the gut. Diagnosis requires reduced levels of at least two immunoglobulin isotypes: IgG with IgA and/or IgM, and impaired specific antibody response to vaccines. A number of gene mutations have been described in CVID; however, these genetic alterations account for less than 20% of cases of CVID. Flow cytometry aptly demonstrates a disturbed B cell homeostasis with reduced or absent memory B cells and increased CD21(low) B cell and transitional B cell populations. Approximately one-third of patients with CVID also display T cell functional defects. Immunoglobulin therapy remains the mainstay of treatment. Immunologists and other clinicians in India and other Southeast Asian countries need to be aware of CVID so that early diagnosis can be made, as currently the majority of these patients still go undiagnosed.
A new observation-based fitting method assuming an elliptical CME frontal shape and a variable speed
NASA Astrophysics Data System (ADS)
Rollett, T.; Moestl, C.; Isavnin, A.; Boakes, P. D.; Kubicka, M.; Amerstorfer, U. V.
2015-12-01
In this study, we present a new method for forecasting arrival times and speeds of coronal mass ejections (CMEs) at any location in the inner heliosphere. This new approach assumes a highly adjustable geometrical shape of the CME front with a variable CME width and a variable curvature of the frontal part, i.e. the assumed geometry is elliptical. An elliptic conversion (ElCon) method is applied to observations from STEREO's heliospheric imagers to convert the angular observations into a unit of radial distance from the Sun. This distance profile of the CME apex is then fitted using the drag-based model (DBM) to comprise the deceleration or acceleration CMEs experience during propagation. The outcome of both methods is then utilized as input for the Ellipse Evolution (ElEvo) model, forecasting the shock arrival times and speeds of CMEs at any position in interplanetary space. We introduce the combination of these three methods as the new ElEvoHI method. To demonstrate the applicability of ElEvoHI we present the forecast of 20 CMEs and compare it to the results from other forecasting utilities. Such a forecasting method is going to be useful when STEREO Ahead is again observing the space between the Sun and Earth, or when an L4/L5 space weather mission is in operation.
NASA Astrophysics Data System (ADS)
Reinoso, J.; Paggi, M.; Linder, C.
2017-02-01
Fracture of technological thin-walled components can notably limit the performance of their corresponding engineering systems. With the aim of achieving reliable fracture predictions of thin structures, this work presents a new phase field model of brittle fracture for large deformation analysis of shells relying on a mixed enhanced assumed strain (EAS) formulation. The kinematic description of the shell body is constructed according to the solid shell concept. This enables the use of fully three-dimensional constitutive models for the material. The proposed phase field formulation integrates the use of the (EAS) method to alleviate locking pathologies, especially Poisson thickness and volumetric locking. This technique is further combined with the assumed natural strain method to efficiently derive a locking-free solid shell element. On the computational side, a fully coupled monolithic framework is consistently formulated. Specific details regarding the corresponding finite element formulation and the main aspects associated with its implementation in the general purpose packages FEAP and ABAQUS are addressed. Finally, the applicability of the current strategy is demonstrated through several numerical examples involving different loading conditions, and including linear and nonlinear hyperelastic constitutive models.
Comparison of the Radiative Two-Flux and Diffusion Approximations
NASA Technical Reports Server (NTRS)
Spuckler, Charles M.
2006-01-01
Approximate solutions are sometimes used to determine the heat transfer and temperatures in a semitransparent material in which conduction and thermal radiation are acting. A comparison of the Milne-Eddington two-flux approximation and the diffusion approximation for combined conduction and radiation heat transfer in a ceramic material was performed to determine the accuracy of the diffusion solution. A plane gray semitransparent layer without a substrate and a non-gray semitransparent plane layer on an opaque substrate were considered. For the plane gray layer the material is semitransparent for all wavelengths and the scattering and absorption coefficients do not vary with wavelength. For the non-gray plane layer the material is semitransparent with constant absorption and scattering coefficients up to a specified wavelength. At higher wavelengths the non-gray plane layer is assumed to be opaque. The layers are heated on one side and cooled on the other by diffuse radiation and convection. The scattering and absorption coefficients were varied. The error in the diffusion approximation compared to the Milne-Eddington two-flux approximation was obtained as a function of scattering coefficient and absorption coefficient. The percent difference in interface temperatures and heat flux through the layer obtained using the Milne-Eddington two-flux and diffusion approximations are presented as a function of scattering coefficient and absorption coefficient. The largest errors occur for high scattering and low absorption except for the back surface temperature of the plane gray layer where the error is also larger at low scattering and low absorption. It is shown that the accuracy of the diffusion approximation can be improved for some scattering and absorption conditions if a reflectance obtained from a Kubelka-Munk type two flux theory is used instead of a reflection obtained from the Fresnel equation. The Kubelka-Munk reflectance accounts for surface reflection and
On the convergence of difference approximations to scalar conservation laws
NASA Technical Reports Server (NTRS)
Osher, S.; Tadmor, E.
1985-01-01
A unified treatment of explicit-in-time, two-level, second-order-resolution, total-variation-diminishing approximations to scalar conservation laws is presented. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced, and results are obtained in terms of the latter. The existence of a cell entropy inequality is discussed, and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first-order accurate in general. Convergence for total-variation-diminishing second-order-resolution schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.
On the convergence of difference approximations to scalar conservation laws
NASA Technical Reports Server (NTRS)
Osher, Stanley; Tadmor, Eitan
1988-01-01
A unified treatment is given for time-explicit, two-level, second-order-resolution (SOR), total-variation-diminishing (TVD) approximations to scalar conservation laws. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced to obtain results in terms of the latter. The existence of a cell entropy inequality is discussed, and such an inequality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first-order accurate in general. Convergence for TVD-SOR schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.
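The two abstracts above concern TVD schemes in conservation form with limited (modified) fluxes. As a concrete illustration, the following sketch implements one standard member of that family, a minmod-limited second-order upwind update for the linear advection equation on a periodic grid; it is chosen for illustration and is not a scheme taken from the paper:

```python
def minmod(a, b):
    """Minmod slope limiter: zero at extrema, which keeps the scheme TVD."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def tvd_advect(u, c, steps):
    """Second-order TVD (MUSCL-type) update for u_t + a u_x = 0, a > 0,
    on a periodic grid, written in conservation form with limited slopes.
    c = a*dt/dx is the Courant number, required to satisfy 0 < c <= 1."""
    n = len(u)
    for _ in range(steps):
        s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
        # upwind interface value (flux / a) at i+1/2 from the limited
        # piecewise-linear reconstruction in cell i
        f = [u[i] + 0.5 * (1 - c) * s[i] for i in range(n)]
        u = [u[i] - c * (f[i] - f[i - 1]) for i in range(n)]
    return u
```

Because the update is in conservation form, total mass is preserved exactly (the interface fluxes telescope), and the limiter guarantees that the total variation of the solution never increases, the defining TVD property discussed in the abstracts.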
Approximation of the optimal compensator for a large space structure
NASA Technical Reports Server (NTRS)
Mackay, M. K.
1983-01-01
This paper considers the approximation of the optimal compensator for a Large Space Structure. The compensator is based upon a solution to the Linear Stochastic Quadratic Regulator problem. Colocation of sensors and actuators is assumed. A small gain analytical solution for the optimal compensator is obtained for a single input/single output system, i.e., certain terms in the compensator can be neglected for sufficiently small gain. The compensator is calculated in terms of the kernel to a Volterra integral operator using a Neumann series. The calculation of the compensator is based upon the C_0 semigroup for the infinite dimensional system. A finite dimensional approximation of the compensator is, therefore, obtained through analysis of the infinite dimensional compensator which is a compact operator.
Approximate likelihood for large irregularly spaced spatial data
Fuentes, Montserrat
2008-01-01
Likelihood approaches for large irregularly spaced spatial datasets are often very difficult, if not infeasible, to implement due to computational limitations. Even when we can assume normality, exact calculation of the likelihood for a Gaussian spatial process observed at n locations requires O(n^3) operations. We present a version of Whittle's approximation to the Gaussian log likelihood for spatial regular lattices with missing values and for irregularly spaced datasets. This method requires O(n log^2 n) operations and does not involve calculating determinants. We present simulations and theoretical results to show the benefits and the performance of the spatial likelihood approximation method presented here for irregularly spaced spatial datasets and lattices with missing values. We apply these methods to estimate the spatial structure of sea surface temperatures (SST) using satellite data with missing values. PMID:19079638
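Whittle's approximation, central to the abstract above, replaces the O(n^3) Gaussian likelihood with a frequency-domain sum over the periodogram, so no determinant is ever formed. A minimal one-dimensional sketch under stated assumptions (a complete regular grid, an AR(1) spectral density, a naive O(n^2) DFT for clarity; none of this is the paper's spatial formulation) is:

```python
import cmath, math, random

def periodogram(x):
    """I(w_j) = |DFT(x)_j|^2 / (2*pi*n) at Fourier frequencies
    w_j = 2*pi*j/n (naive O(n^2) DFT, for clarity only)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * j * t / n)
                    for t in range(n))) ** 2 / (2 * math.pi * n)
            for j in range(n)]

def ar1_spectrum(phi, sigma2, n):
    """Spectral density of AR(1): f(w) = sigma2 / (2*pi*|1 - phi e^{-iw}|^2)."""
    return [sigma2 / (2 * math.pi *
                      abs(1 - phi * cmath.exp(-2j * math.pi * j / n)) ** 2)
            for j in range(n)]

def whittle_loglik(I, spec):
    """Whittle's approximation to the Gaussian log likelihood:
    -sum over nonzero Fourier frequencies of (log f(w_j) + I(w_j)/f(w_j)).
    The periodogram is computed once; each candidate model only needs
    its spectral density, so no n x n covariance matrix appears."""
    return -sum(math.log(f) + Ij / f for Ij, f in zip(I[1:], spec[1:]))
```

Maximizing this quantity over the model parameters gives an approximate maximum likelihood estimate, as in the usage below where the AR(1) coefficient is recovered by a grid search.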
Energy flow: image correspondence approximation for motion analysis
NASA Astrophysics Data System (ADS)
Wang, Liangliang; Li, Ruifeng; Fang, Yajun
2016-04-01
We propose a correspondence approximation approach between temporally adjacent frames for motion analysis. First, an energy map is established to represent image spatial features on multiple scales using Gaussian convolution. On this basis, energy flow at each layer is estimated using Gauss-Seidel iteration according to the energy invariance constraint. More specifically, at the core of the energy invariance constraint is an "energy conservation law" assuming that the spatial energy distribution of an image does not change significantly with time. Finally, the energy flow field at different layers is reconstructed by considering different smoothness degrees. Due to the multiresolution origin and energy-based implementation, our algorithm is able to quickly address correspondence-searching issues in spite of background noise or illumination variation. We apply our correspondence approximation method to motion analysis, and experimental results demonstrate its applicability.
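The Gauss-Seidel iteration named above as the relaxation engine sweeps through the unknowns, updating each one in place using the latest values of the others. A generic sketch of the iteration for a linear system (not the paper's energy-flow equations, which couple it to the energy invariance constraint) is:

```python
def gauss_seidel(A, b, iters=100):
    """Gauss-Seidel iteration for A x = b: one sweep updates each x[i]
    from the most recent values of the other unknowns.  Converges for
    diagonally dominant (and symmetric positive definite) matrices."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

The in-place update (using new values as soon as they exist) is what distinguishes Gauss-Seidel from the Jacobi iteration and typically roughly halves the number of sweeps needed.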
Dynamical observer for a flexible beam via finite element approximations
NASA Technical Reports Server (NTRS)
Manitius, Andre; Xia, Hong-Xing
1994-01-01
The purpose of this view-graph presentation is a computational investigation of the closed-loop output feedback control of a Euler-Bernoulli beam based on finite element approximation. The observer is part of the classical observer plus state feedback control, but it is finite-dimensional. In the theoretical work on the subject it is assumed (and sometimes proved) that increasing the number of finite elements will improve accuracy of the control. In applications, this may be difficult to achieve because of numerical problems. The main difficulty in computing the observer and simulating its work is the presence of high frequency eigenvalues in the finite-element model and poor numerical conditioning of some of the system matrices (e.g. poor observability properties) when the dimension of the approximating system increases. This work dealt with some of these difficulties.
Diffusive approximation for unsteady mud flows with backwater effect
NASA Astrophysics Data System (ADS)
Di Cristo, Cristiana; Iervolino, Michele; Vacca, Andrea
2015-07-01
The adoption of the Diffusive Wave (DW) instead of the Full Dynamic (FD) model in the analysis of mud flood routing within the shallow-water framework may provide a significant reduction of the computational effort, and the knowledge of the conditions in which this approximation may be employed is therefore important. In this paper, the applicability of the DW approximation of a depth-integrated Herschel-Bulkley model is investigated through linear analysis. Assuming as the initial condition a steady hypocritical decelerated flow, induced by downstream backwater, the propagation characteristics of a small perturbation predicted by the DW and FD models are compared. The results show that the spatial variation on the initial profile may preclude the application of DW model with a prescribed accuracy. Whenever the method is applicable, the rising time of the mud flood must satisfy additional constraints, whose dependence on the flow depth, along with the Froude number and the rheological parameters, is deeply analyzed and discussed.
Calculations of scattered light from rigid polymers by Shifrin and Rayleigh-Debye approximations.
Bishop, M F
1989-01-01
We show that the commonly used Rayleigh-Debye method for calculating light scattering can lead to significant errors when used for describing scattering from dilute solutions of long rigid polymers, errors that can be overcome by use of the easily applied Shifrin approximation. In order to show the extent of the discrepancies between the two methods, we have performed calculations at normal incidence both for polarized and unpolarized incident light with the scattering intensity determined as a function of polarization angle and of scattering angle, assuming that the incident light is in a spectral region where the absorption of hemoglobin is small. When the Shifrin method is used, the calculated intensities using either polarized or unpolarized scattered light give information about the alignment of polymers, a feature that is lost in the Rayleigh-Debye approximation because the effect of the asymmetric shape of the scatterer on the incoming polarized electric field is ignored. Using sickle hemoglobin polymers as an example, we have calculated the intensity of light scattering using both approaches and found that, for totally aligned polymers within parallel planes, the difference can be as large as 25%, when the incident electric field is perpendicular to the polymers, for near forward or near backward scattering (0 degrees or 180 degrees scattering angle), but becomes zero as the scattering angle approaches 90 degrees. For randomly oriented polymers within a plane, or for incident unpolarized light for either totally oriented or randomly oriented polymers, the difference between the two results for near forward or near backward scattering is approximately 15%. PMID:2605302
Producing approximate answers to database queries
NASA Technical Reports Server (NTRS)
Vrbsky, Susan V.; Liu, Jane W. S.
1993-01-01
We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
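The monotone approximation semantics described above, where the answer improves as more data is retrieved and becomes exact when all data is available, can be sketched as a toy selection-query evaluator. The partitioning scheme and class names below are invented for illustration and are not APPROXIMATE's actual architecture:

```python
class ApproximateQuery:
    """Toy monotone approximate answering: the relation is split into
    partitions (e.g. disk pages), some not yet available.  At any moment
    the answer to a selection query is bracketed by a certain set
    (qualifying tuples from partitions read so far) and a possible set
    (certain tuples plus every tuple not yet examined).  Reading more
    data only grows the certain set and shrinks the possible set, so the
    approximation improves monotonically toward the exact answer."""

    def __init__(self, partitions, predicate):
        self.pending = list(partitions)   # unread fragments of the relation
        self.predicate = predicate
        self.certain = set()

    def step(self):
        """Process one more partition of the relation, if any remain."""
        if self.pending:
            for t in self.pending.pop():
                if self.predicate(t):
                    self.certain.add(t)

    def answer(self):
        """Return (certain, possible) bounds on the exact answer."""
        unseen = {t for part in self.pending for t in part}
        return set(self.certain), self.certain | unseen
```

Once `pending` is empty the two sets coincide, reproducing the exact answer, which mirrors the abstract's guarantee that accuracy improves monotonically with the amount of data retrieved.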
Approximate solution for high-frequency Q-switched lasers.
Agnesi, Antonio
2016-06-01
A simple approximation for the energy, pulse width, and build-up time valid for high-repetition-rate Q-switched lasers is discussed. This particular regime of operation is most common in industrial applications where manufacturing time must be minimized. Limits of validity and some considerations for the choice of the most appropriate laser system for specific applications are briefly discussed.
NASA Astrophysics Data System (ADS)
Kirchner, N.; Ahlkrona, J.; Gowan, E. J.; Lötstedt, P.; Lea, J. M.; Noormets, R.; von Sydow, L.; Dowdeswell, J. A.; Benham, T.
2016-09-01
Full Stokes ice sheet models provide the most accurate description of ice sheet flow, and can therefore be used to reduce existing uncertainties in predicting the contribution of ice sheets to future sea level rise on centennial time-scales. The level of accuracy at which millennial time-scale palaeo-ice sheet simulations resolve ice sheet flow lags the standards set by Full Stokes models, especially when Shallow Ice Approximation (SIA) models are used. Most models used in palaeo-ice sheet modelling were developed at a time when computer power was very limited, and rely on several assumptions. At the time there was no means of verifying the assumptions other than by mathematical arguments. However, with the computer power and refined Full Stokes models available today, it is possible to test these assumptions numerically. In this paper, we review (Ahlkrona et al., 2013a) where such tests were performed and inaccuracies in commonly used arguments were found. We also summarize (Ahlkrona et al., 2013b) where the implications of the inaccurate assumptions are analyzed for two palaeo-models - the SIA and the SOSIA. We review these works without resorting to mathematical detail, in order to make them accessible to a wider audience with a general interest in palaeo-ice sheet modelling. Specifically, we discuss two implications of relevance for palaeo-ice sheet modelling. First, classical SIA models are less accurate than assumed in their original derivation. Secondly, and contrary to previous recommendations, the SOSIA model is ruled out as a practicable tool for palaeo-ice sheet simulations. We conclude with an outlook concerning the new Ice Sheet Coupled Approximation Level (ISCAL) method presented in Ahlkrona et al. (2016), that has the potential to match the accuracy standards of full Stokes models on palaeo-timescales of tens of thousands of years, and to become an alternative to hybrid models currently used in palaeo-ice sheet modelling. The method is applied to an ice
Gryczynski, Z; Tenenholz, T; Bucci, E
1992-01-01
Using the Förster equations we have estimated the rate of energy transfer from tryptophans to hemes in hemoglobin. Assuming an isotropic distribution of the transition moments of the heme in the plane of the porphyrin, we computed the orientation factors and the consequent transfer rates from the crystallographic coordinates of human oxy- and deoxy-hemoglobin. It appears that the orientation factors do not play a limiting role in regulating the energy transfer and that the rates are controlled almost exclusively by the intrasubunit separations between tryptophans and hemes. In intact hemoglobin tetramers the intrasubunit separations are such as to reduce lifetimes to 5 and 15 ps/ns of tryptophan lifetime. Lifetimes of several hundred picoseconds would be allowed by the intersubunit separations, but intersubunits transfer becomes important only when one heme per tetramer is absent or does not accept transfer. If more than one heme per tetramer is absent lifetimes of more than 1 ns would appear. PMID:1420905
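The dependence of transfer on separation can be sketched with the standard Förster relations (this is textbook FRET, not the authors' code; the numerical values below are invented for illustration): the transfer rate scales as the inverse sixth power of the donor-acceptor distance relative to the Förster radius R0, which absorbs the orientation factor.

```python
# Standard Förster-transfer relations; tau_d_ns is the unquenched donor
# (tryptophan) lifetime, r0_nm the Förster radius, r_nm the separation.
# Numbers below are illustrative, not the hemoglobin geometry.

def forster_rate(r_nm, r0_nm, tau_d_ns):
    """Donor-to-acceptor transfer rate in 1/ns: k_T = (1/tau_D)(R0/r)^6."""
    return (1.0 / tau_d_ns) * (r0_nm / r_nm) ** 6

def transfer_efficiency(r_nm, r0_nm):
    """Fraction of donor excitations transferred: E = 1/(1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

def quenched_lifetime(r_nm, r0_nm, tau_d_ns):
    """Donor lifetime shortened by transfer: 1/tau = 1/tau_D + k_T."""
    return 1.0 / (1.0 / tau_d_ns + forster_rate(r_nm, r0_nm, tau_d_ns))

# A close pair (r well below R0) quenches the lifetime to picoseconds,
# while a distant pair leaves the donor lifetime nearly unchanged.
short = quenched_lifetime(1.0, 2.5, 3.0)    # strongly quenched
long = quenched_lifetime(10.0, 2.5, 3.0)    # essentially unquenched
```

The steep r^-6 dependence is what makes the intrasubunit separations, rather than the orientation factors, the controlling quantity in the abstract's conclusion.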
NASA Technical Reports Server (NTRS)
Rengarajan, Govind; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
An improved four-node quadrilateral assumed-stress hybrid shell element with drilling degrees of freedom is presented. The formulation is based on the Hellinger-Reissner variational principle, and the shape functions are formulated directly for the four-node element. The element has 12 membrane degrees of freedom and 12 bending degrees of freedom. It has nine independent stress parameters to describe the membrane stress resultant field and 13 independent stress parameters to describe the moment and transverse shear stress resultant field. The formulation encompasses linear stress, linear buckling, and linear free vibration problems. The element is validated with standard test cases and is shown to be robust. Numerical results are presented for linear stress, buckling, and free vibration analyses.
NASA Astrophysics Data System (ADS)
Firl, G. J.; Randall, D. A.
2013-12-01
The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has been shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variances of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been
Cosmic shear covariance: the log-normal approximation
NASA Astrophysics Data System (ADS)
Hilbert, S.; Hartlap, J.; Schneider, P.
2011-12-01
Context. Accurate estimates of the errors on the cosmological parameters inferred from cosmic shear surveys require accurate estimates of the covariance of the cosmic shear correlation functions. Aims: We seek approximations to the cosmic shear covariance that are as easy to use as the common approximations based on normal (Gaussian) statistics, but yield more accurate covariance matrices and parameter errors. Methods: We derive expressions for the cosmic shear covariance under the assumption that the underlying convergence field follows log-normal statistics. We also derive a simplified version of this log-normal approximation by only retaining the most important terms beyond normal statistics. We use numerical simulations of weak lensing to study how well the normal, log-normal, and simplified log-normal approximations as well as empirical corrections to the normal approximation proposed in the literature reproduce shear covariances for cosmic shear surveys. We also investigate the resulting confidence regions for cosmological parameters inferred from such surveys. Results: We find that the normal approximation substantially underestimates the cosmic shear covariances and the inferred parameter confidence regions, in particular for surveys with small fields of view and large galaxy densities, but also for very wide surveys. In contrast, the log-normal approximation yields more realistic covariances and confidence regions, but also requires evaluating slightly more complicated expressions. However, the simplified log-normal approximation, although as simple as the normal approximation, yields confidence regions that are almost as accurate as those obtained from the log-normal approximation. The empirical corrections to the normal approximation do not yield more accurate covariances and confidence regions than the (simplified) log-normal approximation. Moreover, they fail to produce positive-semidefinite data covariance matrices in certain cases, rendering them
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS.
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2011-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods of directly exploiting sparsity are not directly applicable to many financial problems. Classical methods of estimating the covariance matrices are based on the strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming sparse error covariance matrix, we allow the presence of the cross-sectional correlation even after taking out common factors, and it enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied.
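A minimal numerical sketch of the recipe described above, under simplifying assumptions: estimate the common factors by principal components of the sample covariance, then threshold the residual (idiosyncratic) covariance entrywise. The universal threshold used here is a simplification of the entry-adaptive rule of Cai and Liu (2011); all names and data are illustrative.

```python
# POET-style covariance estimate: low-rank factor part from the top-k
# eigenpairs of the sample covariance, plus a sparsified residual.
import numpy as np

def factor_thresholded_cov(X, k, tau):
    """X: n x p data matrix; k: number of factors; tau: threshold."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)            # p x p sample covariance
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1][:k]       # leading k eigenpairs
    low_rank = (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T
    resid = S - low_rank                   # idiosyncratic covariance
    # zero out small off-diagonal entries; keep the diagonal intact
    sparse = np.where(np.abs(resid) >= tau, resid, 0.0)
    np.fill_diagonal(sparse, np.diag(resid))
    return low_rank + sparse

rng = np.random.default_rng(0)
F = rng.standard_normal((500, 2))          # two common factors
B = rng.standard_normal((30, 2))           # factor loadings
X = F @ B.T + 0.5 * rng.standard_normal((500, 30))
Sigma_hat = factor_thresholded_cov(X, k=2, tau=0.05)
```

Unlike a strict factor model, the thresholded residual retains any cross-sectional correlation that survives after the common factors are removed, which is the point of the sparse-error-covariance assumption.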
No Common Opinion on the Common Core
ERIC Educational Resources Information Center
Henderson, Michael B.; Peterson, Paul E.; West, Martin R.
2015-01-01
According to the three authors of this article, the 2014 "EdNext" poll yields four especially important new findings: (1) Opinion with respect to the Common Core has yet to coalesce. The idea of a common set of standards across the country has wide appeal, and the Common Core itself still commands the support of a majority of the public.…
Approximate analytic solutions to coupled nonlinear Dirac equations
NASA Astrophysics Data System (ADS)
Khare, Avinash; Cooper, Fred; Saxena, Avadh
2017-03-01
We consider the coupled nonlinear Dirac equations (NLDEs) in 1+1 dimensions with scalar-scalar self-interactions g_1^2/2 (ψ̄ψ)^2 + g_2^2/2 (φ̄φ)^2 + g_3^2 (ψ̄ψ)(φ̄φ) as well as vector-vector interactions of the form g_1^2/2 (ψ̄γ_μψ)(ψ̄γ^μψ) + g_2^2/2 (φ̄γ_μφ)(φ̄γ^μφ) + g_3^2 (ψ̄γ_μψ)(φ̄γ^μφ). Writing the two components of the assumed rest-frame solution of the coupled NLDE equations in the form ψ = e^{-iω_1 t} {R_1 cos θ, R_1 sin θ}, φ = e^{-iω_2 t} {R_2 cos η, R_2 sin η}, and assuming that θ(x), η(x) have the same functional form they had when g_3 = 0, which is an approximation consistent with the conservation laws, we then find approximate analytic solutions for R_i(x) which are valid for small values of g_3^2/g_2^2 and g_3^2/g_1^2. In the nonrelativistic limit we show that both of these coupled models go over to the same coupled nonlinear Schrödinger equation, for which we obtain two exact pulse solutions vanishing at x → ±∞.
Approximate analytic solutions to coupled nonlinear Dirac equations
Khare, Avinash; Cooper, Fred; Saxena, Avadh
2017-01-30
Here, we consider the coupled nonlinear Dirac equations (NLDEs) in 1+1 dimensions with scalar-scalar self-interactions g_1^2/2 (ψ̄ψ)^2 + g_2^2/2 (Φ̄Φ)^2 + g_3^2 (ψ̄ψ)(Φ̄Φ) as well as vector-vector interactions g_1^2/2 (ψ̄γ_μψ)(ψ̄γ^μψ) + g_2^2/2 (Φ̄γ_μΦ)(Φ̄γ^μΦ) + g_3^2 (ψ̄γ_μψ)(Φ̄γ^μΦ). Writing the two components of the assumed rest-frame solution of the coupled NLDE equations in the form ψ = e^{-iω_1 t} {R_1 cos θ, R_1 sin θ}, Φ = e^{-iω_2 t} {R_2 cos η, R_2 sin η}, and assuming that θ(x), η(x) have the same functional form they had when g_3 = 0, which is an approximation consistent with the conservation laws, we then find approximate analytic solutions for R_i(x) which are valid for small values of g_3^2/g_2^2 and g_3^2/g_1^2. In the nonrelativistic limit we show that both of these coupled models go over to the same coupled nonlinear Schrödinger equation for which we obtain two exact pulse solutions vanishing at x → ±∞.
NASA Astrophysics Data System (ADS)
Duchêne, Vincent
2014-08-01
The rigid-lid approximation is a commonly used simplification in the study of density-stratified fluids in oceanography. Roughly speaking, one assumes that the displacements of the surface are negligible compared with interface displacements. In this paper, we offer a rigorous justification of this approximation in the case of two shallow layers of immiscible fluids with constant and quasi-equal mass density. More precisely, we control the difference between the solutions of the Cauchy problem predicted by the shallow-water (Saint-Venant) system in the rigid-lid and free-surface configuration. We show that in the limit of a small density contrast, the flow may be accurately described as the superposition of a baroclinic (or slow) mode, which is well predicted by the rigid-lid approximation, and a barotropic (or fast) mode, whose initial smallness persists for large time. We also describe explicitly the first-order behavior of the deformation of the surface and discuss the case of a nonsmall initial barotropic mode.
Examining the exobase approximation: DSMC models of Titan's upper atmosphere
NASA Astrophysics Data System (ADS)
Tucker, Orenthal J.; Waalkes, William; Tenishev, Valeriy M.; Johnson, Robert E.; Bieler, Andre; Combi, Michael R.; Nagy, Andrew F.
2016-07-01
Chamberlain ([1963] Planet. Space Sci., 11, 901-960) described the use of the exobase layer to determine escape from planetary atmospheres, below which it is assumed that molecular collisions maintain thermal equilibrium and above which collisions are deemed negligible. De La Haye et al. ([2007] Icarus., 191, 236-250) used this approximation to extract the energy deposition and non-thermal escape rates for Titan's atmosphere by fitting the Cassini Ion Neutral Mass Spectrometer (INMS) density data. De La Haye et al. assumed the gas distributions were composed of an enhanced population of super-thermal molecules (E >> kT) that could be described by a kappa energy distribution function (EDF), and they fit the data using the Liouville theorem. Here we fitted the data again, but we used the conventional form of the kappa EDF. The extracted kappa EDFs were then used with the Direct Simulation Monte Carlo (DSMC) technique (Bird [1994] Molecular Gas Dynamics and the Direct Simulation of Gas Flows) to evaluate the effect of collisions on the exospheric profiles. The INMS density data can be fit reasonably well with thermal and various non-thermal EDFs. However, the extracted energy deposition and escape rates are shown to depend significantly on the assumed exobase altitude, and the usefulness of such fits without directly modeling the collisions is unclear. Our DSMC results indicate that the kappa EDFs used in the Chamberlain approximation can lead to errors in determining the atmospheric temperature profiles and escape rates. Gas kinetic simulations are needed to accurately model measured exospheric density profiles, and to determine the altitude ranges where the Liouville method might be applicable.
Dynamic flow-driven erosion - An improved approximate solution
NASA Astrophysics Data System (ADS)
Yu, Bofu; Guo, Dawei; Rose, Calvin W.
2017-09-01
Rose et al. (2007) published an approximate solution of dynamic sediment concentration for steady and uniform flows, and this approximate solution shows a peak sediment concentration at the early stage of a runoff event, which can be used to describe and explain the first-flush effect, a commonly observed phenomenon, especially in the urban environment. However, the approximate solution does not converge to the steady-state solution, which is known exactly. The purpose of this note is to improve the approximate solution of Rose et al. (2007) by maintaining its functional form while forcing its steady-state behaviour for sediment concentration to converge to the known steady-state solution. The quality of the new approximate solution was assessed by comparing it with an exact solution for the single size class case, and with the numerical solution for multiple size classes. It was found that 1) the relative error, or discrepancy, decreases as the stream power increases for all three soils considered; 2) the largest discrepancy occurs for the peak sediment concentration, and the average discrepancy in the peak concentration is less than 10% for the three soils considered; 3) for the majority of the 27 slope-flow combinations and for the three soils considered, the new approximate solution modestly underestimates the peak sediment concentration.
Signal Approximation with a Wavelet Neural Network
1992-12-01
…specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the … accurately approximated with a WNN trained with irregularly sampled data. Keywords: signal approximation, wavelet neural network.
Rough Set Approximations in Formal Concept Analysis
NASA Astrophysics Data System (ADS)
Yamaguchi, Daisuke; Murata, Atsuo; Li, Guo-Dong; Nagai, Masatake
Conventional set approximations are based on a set of attributes; however, these approximations cannot relate an object to the corresponding attribute. In this study, a new model for set approximation based on individual attributes is proposed for interval-valued data. Defining an indiscernibility relation is omitted, since each attribute value itself has a set of values. Two types of approximations, single- and multi-attribute approximations, are presented. A multi-attribute approximation has two solutions: a maximum and a minimum solution. A maximum solution is the set of objects that satisfy the condition of approximation for at least one attribute. A minimum solution is the set of objects that satisfy the condition for all attributes. The proposed set approximation is helpful in finding the features of objects relating to condition attributes when interval-valued data are given. The proposed model contributes to feature extraction in interval-valued information systems.
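The maximum/minimum solutions described above can be sketched in a few lines (a toy illustration, not the authors' formalism; objects, attributes, and the containment condition below are invented): an object enters the maximum solution if its interval satisfies the query condition for at least one attribute, and the minimum solution if it does so for every attribute.

```python
# Toy multi-attribute approximation for interval-valued data. The
# "condition of approximation" is taken here to be interval containment.

def satisfies(interval, query):
    lo, hi = interval
    qlo, qhi = query
    return qlo <= lo and hi <= qhi

def multi_attribute_approx(objects, query):
    """objects: {name: {attr: (lo, hi)}}; query: {attr: (lo, hi)}."""
    maximum = {name for name, attrs in objects.items()
               if any(satisfies(attrs[a], query[a]) for a in query)}
    minimum = {name for name, attrs in objects.items()
               if all(satisfies(attrs[a], query[a]) for a in query)}
    return maximum, minimum

objects = {
    "x1": {"a": (1, 2), "b": (5, 6)},   # satisfies both attributes
    "x2": {"a": (0, 4), "b": (5, 7)},   # satisfies only attribute b
    "x3": {"a": (8, 9), "b": (0, 1)},   # satisfies neither
}
query = {"a": (1, 3), "b": (4, 7)}
mx, mn = multi_attribute_approx(objects, query)
```

By construction the minimum solution is always contained in the maximum solution, matching the any/all structure of the two definitions.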
An approximation technique for jet impingement flow
Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.
2015-03-10
The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.
Energy conservation - A test for scattering approximations
NASA Technical Reports Server (NTRS)
Acquista, C.; Holland, A. C.
1980-01-01
The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for the ensembles of nonspherical particles reveals additional problems with that method.
Approximation method for the kinetic Boltzmann equation
NASA Technical Reports Server (NTRS)
Shakhov, Y. M.
1972-01-01
The further development of a method for approximating the Boltzmann equation is considered, and the case of pseudo-Maxwellian molecules is treated in detail. A method of approximating the collision frequency is discussed, along with a method for approximating the moments of the Boltzmann collision integral. Since the return collision integral and the collision frequency are expressed through the distribution function moments, use of the proposed methods makes it possible to reduce the Boltzmann equation to a series of approximating equations.
Compressive Imaging via Approximate Message Passing
2015-09-04
We propose novel compressive imaging algorithms that employ approximate message passing (AMP), which is an iterative signal estimation algorithm that… Keywords: approximate message passing, compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction.
Fractal Trigonometric Polynomials for Restricted Range Approximation
NASA Astrophysics Data System (ADS)
Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.
2016-05-01
One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.
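What "one-sided" means can be shown with a deliberately simple, non-fractal stand-in (the FIF-based approximants of the paper refine this idea considerably; the construction below is only illustrative): fit an ordinary least-squares trigonometric polynomial, then shift it up by its worst undershoot so it lies entirely above the target on the sampled grid.

```python
# Upper one-sided trigonometric approximant on [0, 2*pi): least-squares
# fit in the basis {1, cos(kx), sin(kx)}, then a constant upward shift.
import numpy as np

def upper_one_sided_trig(f, degree, n_samples=512):
    x = np.linspace(0.0, 2 * np.pi, n_samples, endpoint=False)
    cols = [np.ones_like(x)]
    for k in range(1, degree + 1):
        cols += [np.cos(k * x), np.sin(k * x)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, f(x), rcond=None)
    approx = A @ coef
    shift = np.max(f(x) - approx)   # worst undershoot on the grid
    return x, approx + shift        # now lies above f at every sample

f = lambda x: np.exp(np.sin(x))
x, g = upper_one_sided_trig(f, degree=3)
```

A lower one-sided approximant follows symmetrically by subtracting the worst overshoot instead.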
On Approximation of Distribution and Density Functions.
ERIC Educational Resources Information Center
Wolff, Hans
Stochastic approximation algorithms for least square error approximation to density and distribution functions are considered. The main results are necessary and sufficient parameter conditions for the convergence of the approximation processes and a generalization to some time-dependent density and distribution functions. (Author)
NASA Astrophysics Data System (ADS)
Damanhuri, Nor Alisa; Ayob, Syafikah
2017-09-01
A general numerical approximation of the stress equilibrium equations for constructing the axisymmetric ideal plastic plane deformation of a granular material is considered. The stress components are assumed to satisfy the Coulomb yield criterion, and the self-weight of the material is neglected. The standard method of numerical approximation leads to the construction of small segments of the stress characteristic field. Using a Matlab program, the method is applied to a problem of granular indentation by a smooth flat surface.
Gutzwiller approximation in strongly correlated electron systems
NASA Astrophysics Data System (ADS)
Li, Chunhua
Gutzwiller wave function is an important theoretical technique for treating local electron-electron correlations nonperturbatively in condensed matter and materials physics. It is concerned with calculating variationally the ground state wave function by projecting out multi-occupation configurations that are energetically costly. The projection can be carried out analytically in the Gutzwiller approximation that offers an approximate way of calculating expectation values in the Gutzwiller projected wave function. This approach has proven to be very successful in strongly correlated systems such as the high temperature cuprate superconductors, the sodium cobaltates, and the heavy fermion compounds. In recent years, it has become increasingly evident that strongly correlated systems have a strong propensity towards forming inhomogeneous electronic states with spatially periodic superstructural modulations. A good example is the commonly observed stripes and checkerboard states in high-Tc superconductors under a variety of conditions where superconductivity is weakened. There exists currently a real challenge and demand for new theoretical ideas and approaches that treat strongly correlated inhomogeneous electronic states, which is the subject matter of this thesis. This thesis contains four parts. In the first part of the thesis, the Gutzwiller approach is formulated in the grand canonical ensemble where, for the first time, a spatially (and spin) unrestricted Gutzwiller approximation (SUGA) is developed for studying inhomogeneous (both ordered and disordered) quantum electronic states in strongly correlated electron systems. The second part of the thesis applies the SUGA to the t-J model for doped Mott insulators which led to the discovery of checkerboard-like inhomogeneous electronic states competing with d-wave superconductivity, consistent with experimental observations made on several families of high-Tc superconductors. In the third part of the thesis, new
NASA Astrophysics Data System (ADS)
Witkovský, Viktor; Wimmer, Gejza; Ďuriš, Stanislav
2015-08-01
We consider the problem of constructing exact and/or approximate coverage intervals for the common mean of several independent distributions. In a metrological context, this problem is closely related to evaluation of interlaboratory comparison experiments, and in particular, to determination of the reference value (estimate) of a measurand and its uncertainty, or alternatively, to determination of the coverage interval for a measurand at a given level of confidence, based on such comparison data. We present a brief overview of some specific statistical models, methods, and algorithms useful for determination of the common mean and its uncertainty, or alternatively, the proper interval estimator. We illustrate their applicability by a simple simulation study and also by an example of interlaboratory comparisons for temperature. In particular, we shall consider methods based on (i) the heteroscedastic common mean fixed effect model, assuming negligible laboratory biases, (ii) the heteroscedastic common mean random effects model with common (unknown) distribution of the laboratory biases, and (iii) the heteroscedastic common mean random effects model with possibly different (known) distributions of the laboratory biases. Finally, we consider a method, recently suggested by Singh et al., for determination of the interval estimator for a common mean based on combining information from independent sources through confidence distributions.
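For case (i), the fixed effects model with negligible laboratory biases, the standard point estimate is the inverse-variance weighted mean (the Graybill-Deal estimator) with its naive standard uncertainty. A minimal sketch follows; the laboratory values are invented for illustration:

```python
# Inverse-variance weighted common mean of independent lab results.
# values: lab estimates x_i; uncertainties: standard uncertainties u_i.
import math

def weighted_common_mean(values, uncertainties):
    weights = [1.0 / u**2 for u in uncertainties]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    return mean, math.sqrt(1.0 / total)   # naive u(mean)

x = [20.02, 19.98, 20.05]   # e.g. temperatures reported by three labs
u = [0.02, 0.01, 0.04]
mean, u_mean = weighted_common_mean(x, u)
```

The combined uncertainty is always smaller than the smallest single-laboratory uncertainty; the random effects models in cases (ii) and (iii) inflate it to account for between-laboratory biases, which this simple estimator ignores.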
Wildhaber, Mark L.; Lamberson, Peter J.
2004-01-01
Various mechanisms of habitat choice in fishes based on food and/or temperature have been proposed: optimal foraging for food alone; behavioral thermoregulation for temperature alone; and behavioral energetics and discounted matching for food and temperature combined. Along with development of habitat choice mechanisms, there has been a major push to develop and apply to fish populations individual-based models that incorporate various forms of these mechanisms. However, it is not known how the wide variation in observed and hypothesized mechanisms of fish habitat choice could alter fish population predictions (e.g. growth, size distributions, etc.). We used spatially explicit, individual-based modeling to compare predicted fish populations using different submodels of patch choice behavior under various food and temperature distributions. We compared predicted growth, temperature experience, food consumption, and final spatial distribution using the different models. Our results demonstrated that the habitat choice mechanism assumed in fish population modeling simulations was critical to predictions of fish distribution and growth rates. Hence, resource managers who use modeling results to predict fish population trends should be very aware of and understand the underlying patch choice mechanisms used in their models to assure that those mechanisms correctly represent the fish populations being modeled.
Rosen, M. J.; Levin, E. C.; Hoy, R. R.
2009-01-01
In the obligatory reproductive dependence of a parasite on its host, the parasite must trade the benefit of ‘outsourcing’ functions like reproduction for the risk of assuming hazards associated with the host. In the present study, we report behavioral adaptations of a parasitic fly, Ormia ochracea, that resemble those of its cricket hosts. Ormia females home in on the male cricket's songs and deposit larvae, which burrow into the cricket, feed and emerge to pupate. Because male crickets call at night, gravid female Ormia in search of hosts are subject to bat predation, in much the same way as female crickets are when responding to male song. We show that Ormia has evolved the same evasive behavior as have crickets: an acoustic startle response to bat-like ultrasound that manifests clearly only during flight. Furthermore, like crickets, Ormia has a sharp response boundary between the frequencies of song and bat cries, resembling categorical perception first described in the context of human speech. PMID:19946084
NASA Astrophysics Data System (ADS)
Leiva, A. M.; Briozzo, C. B.
In a previous work we successfully implemented a control algorithm to stabilize unstable periodic orbits in the Sun-Earth-Moon Quasi-Bicircular Problem (QBCP). Applying the same techniques, in this work we stabilize an unstable trajectory performing fast transfers between the Earth and the Moon in a dynamical system similar to the QBCP but incorporating the gravitational perturbations of the planets Mercury, Venus, Mars, Jupiter, Saturn, Uranus, and Neptune, assumed to move on circular, coplanar heliocentric orbits. In the control stage we used as a reference trajectory an unstable periodic orbit from the unperturbed QBCP. We performed 400 numerical experiments, integrating the trajectories over time spans of ~40 years and taking for each one random values for the initial positions of the planets. In all cases the control impulses applied were larger than 20 cm/s, consistent with realistic implementations. The minimal and maximal yearly mean consumptions were ~10 m/s and ~71 m/s, respectively. FULL TEXT IN SPANISH
Honarvar, Mohammad; Sahebjavaher, Ramin; Sinkus, Ralph; Rohling, Robert; Salcudean, Septimiu E
2013-12-01
In elasticity imaging, the shear modulus is obtained from measured tissue displacement data by solving an inverse problem based on the wave equation describing the tissue motion. In most inversion approaches, the wave equation is simplified using local homogeneity and incompressibility assumptions. This causes a loss of accuracy and therefore imaging artifacts in the resulting elasticity images. In this paper we present a new curl-based finite element method inversion technique that does not rely upon these simplifying assumptions. As done in previous research, we use the curl operator to eliminate the dilatational term in the wave equation, but we do not make the assumption of local homogeneity. We evaluate our approach using simulation data from a virtual tissue phantom assuming time harmonic motion and linear, isotropic, elastic behavior of the tissue. We show that our reconstruction results are superior to those obtained using previous curl-based methods with homogeneity assumption. We also show that with our approach, in the 2-D case, multi-frequency measurements provide better results than single-frequency measurements. Experimental results from magnetic resonance elastography of a CIRS elastography phantom confirm our simulation results and further demonstrate, in a quantitative and repeatable manner, that our method is accurate and robust.
Matrix product approximations to conformal field theories
NASA Astrophysics Data System (ADS)
König, Robert; Scholz, Volkher B.
2017-07-01
We establish rigorous error bounds for approximating correlation functions of conformal field theories (CFTs) by certain finite-dimensional tensor networks. For chiral CFTs, the approximation takes the form of a matrix product state. For full CFTs consisting of a chiral and an anti-chiral part, the approximation is given by a finitely correlated state. We show that the bond dimension scales polynomially in the inverse of the approximation error and sub-exponentially in the inverse of the minimal distance between insertion points. We illustrate our findings using Wess-Zumino-Witten models, and show that there is a one-to-one correspondence between group-covariant MPS and our approximation.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
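The quasi-linear complexity the abstract attributes to multilevel circulant matrices comes from the fact that circulant systems diagonalize under the FFT. A minimal one-level sketch (not the paper's algorithm; the matrix and vector here are illustrative):

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix C (first column c) by x via FFT.

    A circulant matrix is diagonalized by the DFT, so C @ x can be
    computed in O(n log n) instead of O(n^2).
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
x = rng.standard_normal(n)

# Dense reference: column j of C is the first column c rolled down by j.
C = np.column_stack([np.roll(c, j) for j in range(n)])
assert np.allclose(circulant_matvec(c, x), C @ x)
print("FFT matvec matches dense matvec")
```

Multilevel circulant matrices extend this by nesting the circulant structure, so the same FFT trick applies level by level.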
A unified approach to the Darwin approximation
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-10-15
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.
Carkeet, Andrew; Goh, Yee Teng
2016-09-01
Bland and Altman described approximate methods in 1986 and 1999 for calculating confidence limits for their 95% limits of agreement, approximations which assume large subject numbers. In this paper, these approximations are compared with exact confidence intervals calculated using two-sided tolerance intervals for a normal distribution. The approximations are compared in terms of the tolerance factors themselves, but also in terms of the exact confidence limits and the exact limits-of-agreement coverage corresponding to the approximate confidence interval methods. Using similar methods, the 50th percentile of the tolerance interval is compared with the k values of 1.96 and 2, which Bland and Altman used to define limits of agreement (i.e. [Formula: see text] +/- 1.96Sd and [Formula: see text] +/- 2Sd). For limits-of-agreement outer confidence intervals, Bland and Altman's approximations are too permissive for sample sizes <40 (1999 approximation) and <76 (1986 approximation). For inner confidence limits the approximations are poorer, being too permissive for sample sizes <490 (1986 approximation) and for all practical sample sizes (1999 approximation). Exact confidence intervals for 95% limits of agreement, based on two-sided tolerance factors, can be calculated easily from tables and should be used in preference to the approximate methods, especially for small sample sizes.
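The limits of agreement themselves, together with the commonly quoted large-sample approximate standard error from the 1999 Bland and Altman paper, can be sketched as follows. The data values are hypothetical, and the exact tolerance-interval method the abstract recommends is not shown here:

```python
import math
import statistics

def limits_of_agreement(diffs, z=1.96):
    """95% Bland-Altman limits of agreement d_bar +/- z*s, with the
    commonly quoted large-sample approximate standard error of each
    limit, se = s * sqrt(1/n + z^2 / (2*(n-1))).
    """
    n = len(diffs)
    d_bar = statistics.fmean(diffs)
    s = statistics.stdev(diffs)          # sample SD of the paired differences
    lower, upper = d_bar - z * s, d_bar + z * s
    se_loa = s * math.sqrt(1.0 / n + z * z / (2 * (n - 1)))
    return (lower, upper), se_loa

# Hypothetical paired differences between two measurement methods.
diffs = [-1.2, 0.4, 0.9, -0.3, 1.5, -0.8, 0.2, 0.6, -1.1, 0.7]
(lo, hi), se = limits_of_agreement(diffs)
print(f"LoA = ({lo:.2f}, {hi:.2f}); approx se of each limit = {se:.2f}")
```

With n = 10, well below the sample sizes at which the abstract says the approximations become acceptable, this approximate se should be treated as optimistic.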
ERIC Educational Resources Information Center
Huynh, Huynh; Mandeville, Garrett K.
Assuming that the density p of the true ability theta in the binomial test score model is continuous on the closed interval [0, 1], a Bernstein polynomial can be used to approximate p uniformly. Then, via quadratic programming techniques, least-squares estimates may be obtained for the coefficients defining the polynomial. The approximation, in turn…
Crisco, J J; Blume, J; Teeple, E; Fleming, B C; Jay, G D
2007-04-01
A pendulum test with a whole articular joint serving as the fulcrum is commonly used to measure the bulk coefficient of friction (COF). In such tests it is universally assumed that energy loss is due to frictional damping only, and accordingly the decay of pendulum amplitude is linear with time. The purpose of this work was to determine whether the measurement of the COF is improved when viscous damping and exponential decay of pendulum amplitude are incorporated into a lumped-parameter model. Various pendulum models with a range of values for COF and for viscous damping were constructed. The resulting decay was fitted with an exponential function (including both frictional and viscous damping) and with a linear decay function (frictional damping only). The values predicted from the fit of each function were then compared to the known values. It was found that the exponential decay function was able to predict the COF values within 2 per cent error. This error increased for models in which the damping coefficient was relatively small and the COF was relatively large. On the other hand, the linear decay function resulted in large errors in the prediction of the COF, even for small values of viscous damping. The exponential decay function including both frictional and constant viscous damping presented herein dramatically increased the accuracy of measuring the COF in a pendulum test of modelled whole articular joints.
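A toy calculation illustrates the paper's point that a linear (friction-only) fit misestimates the frictional decrement whenever viscous damping is present. The amplitude recursion and the parameter values below are illustrative assumptions, not the authors' model:

```python
import numpy as np

# Amplitude decay per cycle: viscous damping multiplies the amplitude by r,
# Coulomb (frictional) damping subtracts a constant decrement d.
# (Hypothetical parameter values, for illustration only.)
r, d, theta0, n_cycles = 0.97, 0.02, 1.0, 20
k = np.arange(n_cycles)
# Closed form of the recursion theta_{k+1} = r*theta_k - d:
theta = theta0 * r**k - d * (1 - r**k) / (1 - r)

# A friction-only model assumes linear decay theta_k = theta0 - k*d,
# so the fitted slope is read off as the frictional decrement.
slope, intercept = np.polyfit(k, theta, 1)
d_linear = -slope

print(f"true frictional decrement: {d:.3f}")
print(f"linear-fit estimate:       {d_linear:.3f}")  # inflated by viscous loss
```

Even with this small viscous factor, the linear fit attributes all of the decay to friction and overestimates the decrement, which is the error mode the exponential decay function is designed to avoid.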
Approximate Formula for the Vertical Asymptote of Projectile Motion in Midair
ERIC Educational Resources Information Center
Chudinov, Peter Sergey
2010-01-01
The classic problem of the motion of a point mass (projectile) thrown at an angle to the horizon is reviewed. The air drag force is taken into account with the drag factor assumed to be constant. An analytical approach is used for the investigation. An approximate formula is obtained for one of the characteristics of the motion--the vertical…
NASA Astrophysics Data System (ADS)
Smith, Nathan; Stassun, Keivan G.
2017-03-01
The strong mass loss of Luminous Blue Variables (LBVs) is thought to play a critical role in massive-star evolution, but their place in the evolutionary sequence remains debated. A key to understanding their peculiar instability is their high observed luminosities, which often depend on uncertain distances. Here we report direct distances and space motions of four canonical Milky Way LBVs—AG Car, HR Car, HD 168607, and (candidate) Hen 3-519—from the Gaia first data release. Whereas the distances of HR Car and HD 168607 are consistent with previous literature estimates within the considerable uncertainties, Hen 3-519 and AG Car, both at ˜2 kpc, are much closer than the 6-8 kpc distances previously assumed. As a result, Hen 3-519 moves far from the locus of LBVs on the Hertzsprung-Russell diagram, making it a much less luminous object. For AG Car, considered a defining example of a classical LBV, its lower luminosity would also move it off the S Dor instability strip. Lower luminosities allow both AG Car and Hen 3-519 to have passed through a previous red supergiant phase, lower the mass estimates for their shell nebulae, and imply that binary evolution is needed to account for their peculiarities. These results may also impact our understanding of LBVs as potential supernova progenitors and their isolated environments. Improved distances will be provided in the Gaia second data release, which will include additional LBVs. AG Car and Hen 3-519 hint that this new information may alter our traditional view of LBVs.
Weight-Bearing Ankle Dorsiflexion Range of Motion—Can Side-to-Side Symmetry Be Assumed?
Rabin, Alon; Kozol, Zvi; Spitzer, Elad; Finestone, Aharon S.
2015-01-01
Context: In clinical practice, the range of motion (ROM) of the noninvolved side often serves as the reference for comparison with the injured side. Previous investigations of non–weight-bearing (NWB) ankle dorsiflexion (DF) ROM measurements have indicated bilateral symmetry for the most part. Less is known about ankle DF measured under weight-bearing (WB) conditions. Because WB and NWB ankle DF are not strongly correlated, there is a need to determine whether WB ankle DF is also symmetrical in a healthy population. Objective: To determine whether WB ankle DF is bilaterally symmetrical. A secondary goal was to further explore the correlation between WB and NWB ankle DF ROM. Design: Cross-sectional study. Setting: Training facility of the Israeli Defense Forces. Patients or Other Participants: A total of 64 healthy males (age = 19.6 ± 1.0 years, height = 175.0 ± 6.4 cm, and body mass = 71.4 ± 7.7 kg). Main Outcome Measure(s): Dorsiflexion ROM in WB was measured with an inclinometer and DF ROM in NWB was measured with a universal goniometer. All measurements were taken bilaterally by a single examiner. Results: Weight-bearing ankle DF was greater on the nondominant side compared with the dominant side (P < .001). Non–weight-bearing ankle DF was not different between sides (P = .64). The correlation between WB and NWB DF was moderate, with the NWB DF measurement accounting for 30% to 37% of the variance of the WB measurement. Conclusions: Weight-bearing ankle DF ROM should not be assumed to be bilaterally symmetrical. These findings suggest that side-to-side differences in WB DF may need to be interpreted while considering which side is dominant. The difference in bilateral symmetry between the WB and NWB measurements, as well as the only moderate level of correlation between them, suggests that both measurements should be performed routinely. PMID:25329350
Jacobian transformed and detailed balance approximations for photon induced scattering
NASA Astrophysics Data System (ADS)
Wienke, B. R.; Budge, K. G.; Chang, J. H.; Dahl, J. A.; Hungerford, A. L.
2012-01-01
Photon emission and scattering are enhanced by the number of photons in the final state, and the photon transport equation reflects this in scattering-emission kernels and source terms. This is often a complication in both theoretical and numerical analyses, requiring approximations and assumptions about background and material temperatures, incident and exiting photon energies, local thermodynamic equilibrium, plus other related aspects of photon scattering and emission. We review earlier schemes parameterizing photon scattering-emission processes, and suggest two alternative schemes. One links the product of photon and electron distributions in the final state to the product in the initial state by Jacobian transformation of kinematical variables (energy and angle), and the other links integrands of scattering kernels in a detailed balance requirement for overall (integrated) induced effects. Compton and inverse Compton differential scattering cross sections are detailed in appropriate limits, numerical integrations are performed over the induced scattering kernel, and, for tabulation, induced scattering terms are incorporated into effective cross sections for comparisons and numerical estimates. Relativistic electron distributions are assumed for the calculations. Both Wien and Planckian distributions are contrasted for their impact on induced scattering as LTE limit points. We find that both transformed and balanced approximations suggest larger induced scattering effects at high photon energies and low electron temperatures, and smaller effects in the opposite limits, compared to previous analyses, with 10-20% increases in effective cross sections. We also note that both approximations can be simply implemented within existing transport modules or opacity processors as an additional term in the effective scattering cross section. Applications and comparisons include effective cross sections, kernel approximations, and impacts on radiative transport solutions in 1D
Cosmological applications of Padé approximant
Wei, Hao; Yan, Xiao-Peng; Zhou, Ya-Nan E-mail: 764644314@qq.com
2014-01-01
As is well known, in mathematics, any function can be approximated by a Padé approximant. The Padé approximant is the best approximation of a function by a rational function of given order. In fact, the Padé approximant often gives a better approximation of the function than truncating its Taylor series, and it may still work where the Taylor series does not converge. In the present work, we consider the Padé approximant in two issues. First, we obtain the analytical approximation of the luminosity distance for the flat XCDM model, and find that the relative error is fairly small. Second, we propose several parameterizations for the equation-of-state parameter (EoS) of dark energy based on the Padé approximant. They are well motivated from the mathematical and physical points of view. We confront these EoS parameterizations with the latest observational data, and find that they work well. Through these applications, we show that the Padé approximant can be a useful tool in cosmology, and it deserves further investigation.
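As a concrete illustration of the claim that a Padé approximant often beats a truncated Taylor series of the same order, here is a sketch comparing the standard [2/2] Padé approximant of exp(x) with its fourth-order Taylor polynomial (textbook forms, not taken from the paper):

```python
import math

def pade22_exp(x):
    """[2/2] Pade approximant of exp(x):
    (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12)."""
    return (1 + x / 2 + x * x / 12) / (1 - x / 2 + x * x / 12)

def taylor4_exp(x):
    """Fourth-order Taylor polynomial of exp(x) about x = 0."""
    return sum(x**k / math.factorial(k) for k in range(5))

x = 1.0
exact = math.exp(x)
print(f"exact      : {exact:.5f}")
print(f"Taylor O(4): {taylor4_exp(x):.5f}  (error {abs(taylor4_exp(x) - exact):.5f})")
print(f"Pade [2/2] : {pade22_exp(x):.5f}  (error {abs(pade22_exp(x) - exact):.5f})")
```

Both approximations use the same five Taylor coefficients, yet at x = 1 the rational form is noticeably closer to the exact value, which is the behavior the paper exploits for the luminosity distance and EoS parameterizations.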
NASA Technical Reports Server (NTRS)
Hark, Frank; Britton, Paul; Ring, Robert; Novack, Steven
2015-01-01
Space Launch System (SLS) Agenda: Objective; Key Definitions; Calculating Common Cause; Examples; Defense against Common Cause; Impact of varied Common Cause Failure (CCF) and abortability; Response Surface for various CCF Beta; Takeaways.
Approximations for column effect in airplane wing spars
NASA Technical Reports Server (NTRS)
Warner, Edward P; Short, Mac
1927-01-01
The significance attaching to "column effect" in airplane wing spars has been increasingly realized with the passage of time, but exact computations of the corrections to bending moment curves resulting from the existence of end loads are frequently omitted because of the additional labor involved in an analysis by rigorously correct methods. The present report represents an attempt to provide approximate column-effect corrections that can be graphically or otherwise expressed so as to be applied with a minimum of labor. Curves are plotted giving approximate values of the correction factors for single-bay and two-bay trusses of varying proportions and with various relationships between axial and lateral loads. It is further shown from an analysis of those curves that rough but useful approximations can be obtained from Perry's formula for corrected bending moment, with the assumed distance between points of inflection arbitrarily modified in accordance with rules given in the report. The discussion of general rules of variation of bending stress with axial load is accompanied by a study of the best distribution of the points of support along a spar for various conditions of loading.
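The report's correction curves and its modified Perry formula are not reproduced here, but the flavor of a column-effect correction can be seen in the standard beam-column moment-amplification approximation, sketched below with hypothetical spar numbers:

```python
import math

def amplified_moment(M0, P, E, I, L):
    """Approximate beam-column bending moment M = M0 / (1 - P/P_E),
    with Euler load P_E = pi^2 * E * I / L^2.

    This is the textbook amplification-factor approximation, shown only
    to illustrate how an axial end load inflates the lateral-load
    bending moment; it is not the report's own correction curves.
    """
    P_E = math.pi**2 * E * I / L**2
    if P >= P_E:
        raise ValueError("axial load at or beyond the Euler buckling load")
    return M0 / (1 - P / P_E)

# Hypothetical spruce spar segment: E in psi, I in in^4, L in inches, M0 in lb-in.
E, I, L, M0 = 1.3e6, 12.0, 60.0, 5000.0
for frac in (0.0, 0.25, 0.5):
    P = frac * math.pi**2 * E * I / L**2
    print(f"P = {frac:.0%} of Euler load -> M = {amplified_moment(M0, P, E, I, L):,.0f} lb-in")
```

At half the Euler load the primary bending moment has already doubled, which is why neglecting column effect in a heavily loaded spar is unconservative.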
Approximate dynamic model of a turbojet engine
NASA Technical Reports Server (NTRS)
Artemov, O. A.
1978-01-01
An approximate dynamic nonlinear model of a turbojet engine is elaborated on as a tool in studying the aircraft control loop, with the turbojet engine treated as an actuating component. Approximate relationships linking the basic engine parameters and shaft speed are derived to simplify the problem, and to aid in constructing an approximate nonlinear dynamic model of turbojet engine performance useful for predicting aircraft motion.
The JWKB approximation in loop quantum cosmology
NASA Astrophysics Data System (ADS)
Craig, David; Singh, Parampreet
2017-01-01
We explore the JWKB approximation in loop quantum cosmology in a flat universe with a scalar matter source. Exact solutions of the quantum constraint are studied at small volume in the JWKB approximation in order to assess the probability of tunneling to small or zero volume. Novel features of the approximation are discussed which appear due to the fact that the model is effectively a two-dimensional dynamical system. Based on collaborative work with Parampreet Singh.
Approximation by Ridge Functions and Neural Networks
1997-01-01
univariate spaces X_n ... Other authors, most notably Micchelli and Mhaskar, have also considered approximation problems of the type treated here. The work of Micchelli and Mhaskar does not give the best order of approximation; Mhaskar has given best possible results but ... function from its projections, Duke Math. J. ... H. Mhaskar, Neural networks for optimal approximation of smooth and analytic ...
Melchinger, Albrecht E; Technow, Frank; Dhillon, Baldev S
2011-12-01
Recent progress in genotyping and doubled haploid (DH) techniques has created new opportunities for development of improved selection methods in numerous crops. Assuming a finite number of unlinked loci (ℓ) and a given total number (n) of individuals to be genotyped, we compared, by theory and simulations, three methods of marker-assisted selection (MAS) for gene stacking in DH lines derived from biparental crosses: (1) MAS for high values of the marker score (T, corresponding to the total number of target alleles) in the F(2) generation and subsequently among DH lines derived from the selected F(2) individual (Method 1), (2) MAS for augmented F(2) enrichment and subsequently for T among DH lines from the best carrier F(2) individual (Method 2), and (3) MAS for T among DH lines derived from the F(1) generation (Method 3). Our objectives were to (a) determine the optimum allocation of resources to the F(2) ([Formula: see text]) and DH generations [Formula: see text] for Methods 1 and 2 by simulations, (b) compare the efficiency of all three methods for gene stacking by simulations, and (c) develop theory to explain the general effect of selection on the segregation variance and interpret our simulation results. By theory, we proved that for smaller values of ℓ, the segregation variance of T among DH lines derived from F(2) individuals, selected for high values of T, can be much smaller than expected in the absence of selection. This explained our simulation results, showing that for Method 1, it is best to genotype more F(2) individuals than DH lines ([Formula: see text]), whereas under Method 2, the optimal ratio [Formula: see text] was close to 0.5. However, for ratios deviating moderately from the optimum, the mean [Formula: see text] of T in the finally selected DH line ([Formula: see text]) was hardly reduced. Method 3 always had the lowest mean [Formula: see text] of [Formula: see text] except for small numbers of loci (ℓ = 4) and is favorable only if
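Method 3 is the simplest of the three to sketch: with unlinked loci and DH lines derived directly from the F(1), each line fixes the target allele at each locus independently with probability 1/2, so T is binomial. A hedged Monte Carlo illustration (parameter values hypothetical, not from the paper):

```python
import random

random.seed(1)

def best_T_method3(n_loci, n_dh):
    """Method 3 sketch: derive n_dh DH lines directly from an F1 that is
    heterozygous at n_loci unlinked loci. Each locus fixes the target
    allele with probability 1/2, so T ~ Binomial(n_loci, 1/2).
    Return the best (maximum) T among the n_dh lines.
    """
    return max(sum(random.random() < 0.5 for _ in range(n_loci))
               for _ in range(n_dh))

n_loci, n_dh, reps = 8, 50, 2000
mean_best = sum(best_T_method3(n_loci, n_dh) for _ in range(reps)) / reps
print(f"mean best T over {n_dh} DH lines ({n_loci} loci): {mean_best:.2f}")
```

Methods 1 and 2 add an intermediate F(2) selection step, which (as the abstract's theory shows) shrinks the segregation variance among the derived DH lines and so changes the optimal allocation of genotyping resources.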
NASA Astrophysics Data System (ADS)
Zwitter, T.; Matijevič, G.; Breddels, M. A.; Smith, M. C.; Helmi, A.; Munari, U.; Bienaymé, O.; Binney, J.; Bland-Hawthorn, J.; Boeche, C.; Brown, A. G. A.; Campbell, R.; Freeman, K. C.; Fulbright, J.; Gibson, B.; Gilmore, G.; Grebel, E. K.; Navarro, J. F.; Parker, Q. A.; Seabroke, G. M.; Siebert, A.; Siviero, A.; Steinmetz, M.; Watson, F. G.; Williams, M.; Wyse, R. F. G.
2010-11-01
The RAdial Velocity Experiment (RAVE) is a spectroscopic survey of the Milky Way which has already collected over 400 000 spectra of ~ 330 000 different stars. We use the subsample of spectra with spectroscopically determined values of stellar parameters to determine the distances to these stars. The list currently contains 235 064 high quality spectra which show no peculiarities and belong to 210 872 different stars. The numbers will grow as the RAVE survey progresses. The public version of the catalog will be made available through the CDS services along with the ongoing RAVE public data releases. The distances are determined with a method based on the work by Breddels et al. (2010, A&A, 511, A16). Here we assume that the star undergoes a standard stellar evolution and that its spectrum shows no peculiarities. The refinements include: the use of either of the three isochrone sets, a better account of the stellar ages and masses, use of more realistic errors of stellar parameter values, and application to a larger dataset. The derived distances of both dwarfs and giants match within ~ 21% to the astrometric distances of Hipparcos stars and to the distances of observed members of open and globular clusters. Multiple observations of a fraction of RAVE stars show that repeatability of the derived distances is even better, with half of the objects showing a distance scatter of ⪉ 11%. RAVE dwarfs are ~ 300 pc from the Sun, and giants are at distances of 1 to 2 kpc, and up to 10 kpc. This places the RAVE dataset between the more local Geneva-Copenhagen survey and the more distant and fainter SDSS sample. As such it is ideal to address some of the fundamental questions of Galactic structure and evolution in the pre-Gaia era. Individual applications are left to separate papers; here we show that the full 6-dimensional information on position and velocity is accurate enough to discuss the vertical structure and kinematic properties of the thin and thick disks. The catalog is
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1990-01-01
This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.
Bent approximations to synchrotron radiation optics
Heald, S.
1981-01-01
Ideal optical elements can be approximated by bending flats or cylinders. This paper considers the applications of these approximate optics to synchrotron radiation. Analytic and raytracing studies are used to compare their optical performance with the corresponding ideal elements. It is found that for many applications the performance is adequate, with the additional advantages of lower cost and greater flexibility. Particular emphasis is placed on obtaining the practical limitations on the use of the approximate elements in typical beamline configurations. Also considered are the possibilities for approximating very long length mirrors using segmented mirrors.
Common Career Technical Core: Common Standards, Common Vision for CTE
ERIC Educational Resources Information Center
Green, Kimberly
2012-01-01
This article provides an overview of the National Association of State Directors of Career Technical Education Consortium's (NASDCTEc) Common Career Technical Core (CCTC), a state-led initiative that was created to ensure that career and technical education (CTE) programs are consistent and high quality across the United States. Forty-two states,…
NASA Technical Reports Server (NTRS)
Hildebrand, Francis B
1943-01-01
A mathematical procedure is herein developed for obtaining exact solutions of shear-lag problems in flat panels and box beams; the method is based on the assumption that the amount of stretching of the sheets in the direction perpendicular to the direction of essential normal stresses is negligible. Explicit solutions, including the treatment of cut-outs, are given for several cases, and numerical results are presented in graphic and tabular form. The general theory is presented in a form from which further solutions can be readily obtained. The extension of the theory to cover certain cases of non-uniform cross section is indicated. Although the solutions are obtained in terms of infinite series, the present developments differ from those previously given in that, in practical cases, the series usually converge so rapidly that sufficient accuracy is afforded by a small number of terms. Comparisons are made in several cases between the present results and the corresponding solutions obtained by approximate procedures devised by Reissner and by Kuhn and Chiarito.
Vogl, Claus; Clemente, Florian
2012-01-01
We analyze a decoupled Moran model with haploid population size N, a biallelic locus under mutation and drift with scaled forward and backward mutation rates θ1=μ1N and θ0=μ0N, and directional selection with scaled strength γ=sN. With small scaled mutation rates θ0 and θ1, which is appropriate for single nucleotide polymorphism data in highly recombining regions, we derive a simple approximate equilibrium distribution for polymorphic alleles with a constant of proportionality. We also put forth an even simpler model, where all mutations originate from monomorphic states. Using this model we derive the sojourn times, conditional on the ancestral and fixed allele, and under equilibrium the distributions of fixed and polymorphic alleles and fixation rates. Furthermore, we also derive the distribution of small samples in the diffusion limit and provide convenient recurrence relations for calculating this distribution. This enables us to give formulas analogous to the Ewens–Watterson estimator of θ for biased mutation rates and selection. We apply this theory to a polymorphism dataset of fourfold degenerate sites in Drosophila melanogaster. PMID:22269092
Uteshev, V V; Patlak, J B; Pennefather, P S
2000-01-01
Real synaptic systems consist of a nonuniform population of synapses with a broad spectrum of probability and response distributions varying between synapses, and broad amplitude distributions of postsynaptic unitary responses within a given synapse. A common approach to such systems has been to assume identical synapses and recover apparent quantal parameters by deconvolution procedures from measured evoked (ePSC) and unitary evoked postsynaptic current (uePSC) distributions. Here we explicitly consider nonuniform synaptic systems with both intra-synaptic (type I) and inter-synaptic (type II) response variability and formally define an equivalent system of uniform synapses in which both uePSC and ePSC amplitude distributions best approximate those of the actual nonuniform synaptic system. This equivalent system has the advantage of being fully defined by just four quantal parameters: ñ, the number of equivalent synapses; p, the mean probability of quantal release; mu, the mean; and sigma^2, the variance of the uePSC distribution. We show that these equivalent parameters are weighted averages of intrinsic parameters and can be approximated by apparent quantal parameters, therefore establishing a useful analytical link between the apparent and intrinsic parameters. The present study extends previous work on compound binomial analysis of synaptic transmission by highlighting the importance of the product of p and mu, and the variance of that product. Conditions for a unique deconvolution of apparent uniform synaptic parameters have been derived and justified. Our approach does not require independence of synaptic parameters, such as p and mu from each other; therefore the approach will hold even if feedback (i.e., via retrograde transmission) exists between pre- and postsynaptic signals. Using numerical simulations we demonstrate how equivalent parameters are meaningful even when there is considerable variation in intrinsic parameters, including systems where subpopulations of high
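The uniform-synapse moments underlying this kind of quantal analysis are easy to check by simulation: for n identical synapses, each releasing with probability p a quantum of mean amplitude mu and variance sigma^2, the evoked mean is n*p*mu and the evoked variance is n*(p*sigma^2 + p*(1-p)*mu^2). A Monte Carlo sketch (parameter values illustrative, not from the paper):

```python
import random
import statistics

random.seed(2)

def epsc(n, p, mu, sigma):
    """One evoked PSC from n identical synapses: each releases with
    probability p, contributing a Gaussian quantal amplitude (mu, sigma)."""
    return sum(random.gauss(mu, sigma) for _ in range(n) if random.random() < p)

n, p, mu, sigma, trials = 20, 0.3, 1.0, 0.25, 20000
samples = [epsc(n, p, mu, sigma) for _ in range(trials)]

mean_th = n * p * mu
var_th = n * (p * sigma**2 + p * (1 - p) * mu**2)
print(f"mean: {statistics.fmean(samples):.3f} (theory {mean_th:.3f})")
print(f"var : {statistics.pvariance(samples):.3f} (theory {var_th:.3f})")
```

In the nonuniform systems the paper treats, p and mu vary across synapses, and the equivalent uniform parameters recovered by deconvolution are weighted averages of these intrinsic values rather than the values at any single synapse.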
Convergence of finite element approximations of large eddy motion.
Iliescu, T.; John, V.; Layton, W. J.; Mathematics and Computer Science; Otto-von-Guericke Univ.; Univ. of Pittsburgh
2002-11-01
This report considers 'numerical errors' in LES. Specifically, for one family of space-filtered flow models, we show convergence of the finite element approximation of the model and give an estimate of the error. Keywords: Navier-Stokes equations, large eddy simulation, finite element method. I. INTRODUCTION: Consider the (turbulent) flow of an incompressible fluid. One promising and common approach to the simulation of the motion of the large fluid structures is Large Eddy Simulation (LES). Various models are used in LES; a common one is to find (w, q), where w : Ω
Local Approximations to the Gravitational Collapse of Cold Matter
NASA Astrophysics Data System (ADS)
Hui, Lam; Bertschinger, Edmund
1996-11-01
We investigate three different local approximations for nonlinear gravitational instability in the framework of cosmological Lagrangian fluid dynamics of cold dust. By local we mean that the evolution is described by a set of ordinary differential equations in time for each mass element, with no coupling to other mass elements aside from those implied by the initial conditions. We first show that the Zel'dovich approximation (ZA) can be cast in this form. Next, we consider extensions involving the evolution of the Newtonian tidal tensor. We show that two approximations can be found that are exact for plane-parallel and spherical perturbations. The first one ("nonmagnetic" approximation, or NMA) neglects the Newtonian counterpart of the magnetic part of the Weyl tensor in the fluid frame and was investigated previously by Bertschinger & Jain. A new approximation ("local tidal," or LTA) involves neglecting still more terms in the tidal evolution equation. It is motivated by the analytic demonstration that it is exact for any perturbations whose gravitational and velocity equipotentials have the same constant shape with time. Thus, the LTA is exact for spherical, cylindrical, and plane-parallel perturbations. It corresponds physically to neglecting the curl of the magnetic part of the Weyl tensor in the comoving threading as well as an advection term in the tidal evolution equation. All three approximations can be applied up to the point of orbit crossing. We tested them in the case of the collapse of a homogeneous triaxial ellipsoid, for which an exact solution exists for an ellipsoid embedded in empty space and an excellent approximation is known in the cosmological context. We find that the LTA is significantly more accurate in general than the ZA and the NMA. Like the ZA, but unlike the NMA, the LTA generically leads to pancake collapse. For a randomly chosen mass element in an Einstein-de Sitter universe, assuming a Gaussian random field of initial density fluctuations, the
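The local character of the Zel'dovich approximation can be made concrete: each mass element evolves independently, with overdensity 1/prod_i(1 - D*lam_i) in terms of the eigenvalues lam_i of the initial deformation tensor and the linear growth factor D, collapsing to a pancake when the largest eigenvalue reaches D*lam = 1. A sketch with hypothetical eigenvalues:

```python
def za_overdensity(D, lams):
    """Zel'dovich-approximation density of a mass element whose
    deformation tensor has eigenvalues lams:
        rho / rho_bar = 1 / prod_i (1 - D * lam_i).
    Pancake collapse occurs along the axis of the largest eigenvalue,
    when D * max(lams) reaches 1.
    """
    dens = 1.0
    for lam in lams:
        factor = 1.0 - D * lam
        if factor <= 0:
            return float("inf")   # shell crossing / collapse
        dens /= factor
    return dens

# Hypothetical triaxial perturbation: one collapsing axis dominates.
lams = (0.8, 0.3, -0.2)
for D in (0.0, 0.5, 1.0, 1.2):
    print(f"D = {D:.1f}: rho/rho_bar = {za_overdensity(D, lams):.3f}")
```

The NMA and LTA variants described in the abstract keep this per-element structure but evolve the tidal tensor as well, rather than freezing the deformation eigenvalues at their initial values.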
Berkel, M. van; Tamura, N.; Ida, K.; Hogeweij, G. M. D.; Zwart, H. J.; Inagaki, S.; Baar, M. R. de
2014-11-15
…cylindrical approximations are treated for heat waves traveling towards the plasma edge, assuming a semi-infinite domain.
Inversion and approximation of Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the left half plane real axis is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.
Approximate methods for equations of incompressible fluid
NASA Astrophysics Data System (ADS)
Galkin, V. A.; Dubovik, A. O.; Epifanov, A. A.
2017-02-01
Approximate methods based on sequential approximations in the theory of functional solutions to systems of conservation laws are considered, including the model of dynamics of incompressible fluid. Test calculations are performed, and a comparison with exact solutions is carried out.
Quirks of Stirling's Approximation
ERIC Educational Resources Information Center
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
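The pitfall is easy to see numerically: the crude form ln n! ≈ n ln n − n, which students typically apply, omits the ½ ln(2πn) correction of the fuller Stirling formula. A quick illustrative check (not from the article):

```python
import math

def stirling_crude(n):
    """Crude Stirling form: ln n! ~ n ln n - n."""
    return n * math.log(n) - n

def stirling_full(n):
    """Stirling with the 0.5*ln(2*pi*n) correction term."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

exact = math.lgamma(101)  # ln 100! via the log-gamma function
print(exact, stirling_crude(100), stirling_full(100))
```

For n = 100 the crude form is off by more than 3 in ln n! (a factor of over 20 in n! itself), while the corrected form is accurate to better than 0.001.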
Spline approximations for nonlinear hereditary control systems
NASA Technical Reports Server (NTRS)
Daniel, P. L.
1982-01-01
A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.
An approximation for inverse Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1981-01-01
Programmable calculator runs simple finite-series approximation for Laplace transform inversions. Utilizing family of orthonormal functions, approximation is used for wide range of transforms, including those encountered in feedback control problems. Method works well as long as F(t) decays to zero as t approaches infinity and so is applicable to most physical systems.
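The scheme described in these two reports can be sketched as follows: posit simple poles on the negative real axis, fit the pole strengths to sampled values of the transform, and sum the corresponding exponentials. The pole locations and sample points below are arbitrary illustrative choices, not those of the report:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def invert_laplace(F, poles, samples):
    """Fit F(s) ~ sum_k c_k / (s + p_k) at the sample points, then
    return the time function f(t) = sum_k c_k * exp(-p_k * t)."""
    A = [[1.0 / (s + p) for p in poles] for s in samples]
    c = solve(A, [F(s) for s in samples])
    return lambda t: sum(ck * math.exp(-p * t) for ck, p in zip(c, poles))

# Example: F(s) = 1/(s+1), whose exact inverse transform is exp(-t).
f = invert_laplace(lambda s: 1.0 / (s + 1.0),
                   poles=[1, 2, 3, 4], samples=[1.0, 2.0, 3.0, 4.0])
print(f(1.5), math.exp(-1.5))
```

Because the target transform happens to lie exactly in the span of the chosen pole basis, the fit recovers the inverse essentially to machine precision; for general transforms the approximation quality depends on the pole placement.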
The Average Field Approximation for Almost Bosonic Extended Anyons
NASA Astrophysics Data System (ADS)
Lundholm, Douglas; Rougerie, Nicolas
2015-12-01
Anyons are 2D or 1D quantum particles with intermediate statistics, interpolating between bosons and fermions. We study the ground state of a large number N of 2D anyons, in a scaling limit where the statistics parameter α is proportional to N^{-1} when N→ ∞. This means that the statistics is seen as a "perturbation from the bosonic end". We model this situation in the magnetic gauge picture by bosons interacting through long-range magnetic potentials. We assume that these effective statistical gauge potentials are generated by magnetic charges carried by each particle, smeared over discs of radius R (extended anyons). Our method allows us to take R→ 0, not too fast, at the same time as N→ ∞. In this limit we rigorously justify the so-called "average field approximation": the particles behave like independent, identically distributed bosons interacting via a self-consistent magnetic field.
Parametric study of the Orbiter rollout using an approximate solution
NASA Technical Reports Server (NTRS)
Garland, B. J.
1979-01-01
An approximate solution to the motion of the Orbiter during rollout is used to perform a parametric study of the rollout distance required by the Orbiter. The study considers the maximum expected dispersions in the landing speed and the touchdown point. These dispersions are assumed to be correlated so that a fast landing occurs before the nominal touchdown point. The maximum rollout distance is required by the maximum landing speed with a 10 knot tailwind and the center of mass at the forward limit of its longitudinal travel. The maximum weight that can be stopped within 15,000 feet on a hot day at Kennedy Space Center is 248,800 pounds. The energy absorbed by the brakes would exceed the limit for reuse of the brakes.
Parabolic approximation method for the mode conversion-tunneling equation
Phillips, C.K.; Colestock, P.L.; Hwang, D.Q.; Swanson, D.G.
1987-07-01
The derivation of the wave equation which governs ICRF wave propagation, absorption, and mode conversion within the kinetic layer in tokamaks has been extended to include diffraction and focussing effects associated with the finite transverse dimensions of the incident wavefronts. The kinetic layer considered consists of a uniform density, uniform temperature slab model in which the equilibrium magnetic field is oriented in the z-direction and varies linearly in the x-direction. An equivalent dielectric tensor as well as a two-dimensional energy conservation equation are derived from the linearized Vlasov-Maxwell system of equations. The generalized form of the mode conversion-tunneling equation is then extracted from the Maxwell equations, using the parabolic approximation method in which transverse variations of the wave fields are assumed to be weak in comparison to the variations in the primary direction of propagation. Methods of solving the generalized wave equation are discussed. 16 refs.
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1987-01-01
Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both when the cost integral ranges over a finite time interval and when it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
Approximating maximum clique with a Hopfield network.
Jagota, A
1995-01-01
In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theoretical problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics, both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic.
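One of the greedy algorithms that such energy-descent dynamics can emulate is the familiar degree-based heuristic: repeatedly add the candidate vertex with the most neighbours among the remaining candidates. A minimal sketch of that heuristic alone (the Hopfield network itself is not reproduced here):

```python
def greedy_clique(adj):
    """Greedy MAX-CLIQUE heuristic: grow a clique by repeatedly adding
    the candidate vertex with the most neighbours among the remaining
    candidates.  adj maps each vertex to the set of its neighbours."""
    candidates = set(adj)
    clique = set()
    while candidates:
        v = max(candidates, key=lambda u: len(adj[u] & candidates))
        clique.add(v)
        # Keep only vertices adjacent to every vertex chosen so far.
        candidates = candidates & adj[v]
    return clique

# Toy graph: a triangle {0, 1, 2} plus a pendant vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(greedy_clique(adj))
```

On this toy graph the heuristic finds the maximum clique {0, 1, 2}; in general it returns only a maximal clique, which is why the paper's annealed dynamics do better.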
Double power series method for approximating cosmological perturbations
NASA Astrophysics Data System (ADS)
Wren, Andrew J.; Malik, Karim A.
2017-04-01
We introduce a double power series method for finding approximate analytical solutions for systems of differential equations commonly found in cosmological perturbation theory. The method was set out, in a noncosmological context, by Feshchenko, Shkil' and Nikolenko (FSN) in 1966, and is applicable to cases where perturbations are on subhorizon scales. The FSN method is essentially an extension of the well known Wentzel-Kramers-Brillouin (WKB) method for finding approximate analytical solutions for ordinary differential equations. The FSN method we use is applicable well beyond perturbation theory to solve systems of ordinary differential equations, linear in the derivatives, that also depend on a small parameter, which here we take to be related to the inverse wave-number. We use the FSN method to find new approximate oscillating solutions in linear order cosmological perturbation theory for a flat radiation-matter universe. Together with this model's well-known growing and decaying Mészáros solutions, these oscillating modes provide a complete set of subhorizon approximations for the metric potential, radiation and matter perturbations. Comparison with numerical solutions of the perturbation equations shows that our approximations can be made accurate to within a typical error of 1%, or better. We also set out a heuristic method for error estimation. A Mathematica notebook which implements the double power series method is made available online.
Linear-phase approximation in the triangular facet near-field physical optics computer program
NASA Technical Reports Server (NTRS)
Imbriale, W. A.; Hodges, R. E.
1990-01-01
Analyses of reflector antenna surfaces use a computer program based on a discrete approximation of the radiation integral. The calculation replaces the actual surface with a triangular facet representation; the physical optics current is assumed to be constant over each facet. Described here is a method of calculation using a linear-phase approximation of the surface currents of parabolas, ellipses, and shaped subreflectors; results are compared with a previous program that used a constant-phase approximation over the triangular facets. The results show that the linear-phase approximation is a significant improvement over the constant-phase approximation, and enables computation of 100 to 1,000 lambda reflectors within a reasonable time on a Cray computer.
NASA Astrophysics Data System (ADS)
Ireland, M. J.; Scholz, M.; Wood, P. R.
2008-12-01
We describe the Cool Opacity-sampling Dynamic EXtended (CODEX) atmosphere models of Mira variable stars, and examine in detail the physical and numerical approximations that go into the model creation. The CODEX atmospheric models are obtained by computing the temperature and the chemical and radiative states of the atmospheric layers, assuming gas pressure and velocity profiles from Mira pulsation models, which extend from near the H-burning shell to the outer layers of the atmosphere. Although the code uses the approximation of Local Thermodynamic Equilibrium (LTE) and a grey approximation in the dynamical atmosphere code, many key observable quantities, such as infrared diameters and low-resolution spectra, are predicted robustly in spite of these approximations. We show that in visible light, radiation from Mira variables is dominated by fluorescence scattering processes, and that the LTE approximation likely underpredicts visible-band fluxes by a factor of 2.
Error assessments of widely-used orbit error approximations in satellite altimetry
NASA Technical Reports Server (NTRS)
Tai, Chang-Kou
1988-01-01
From simulations, the orbit error can be assumed to be a slowly varying sine wave with a predominant wavelength comparable to the Earth's circumference. Thus, one can derive analytically the error committed in representing the orbit error along a segment of the satellite ground track by a bias; by a bias and tilt (linear approximation); or by a bias, tilt, and curvature (quadratic approximation). The result clearly agrees with what is obvious intuitively, i.e., (1) the fit is better with more parameters, and (2) as the length of the segment increases, the approximation gets worse. But more importantly, it provides a quantitative basis to evaluate the accuracy of past results and, in the future, to select the best approximation according to the required precision and the efficiency of various approximations.
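Both conclusions can be reproduced with a toy computation: model the orbit error as a slowly varying sine wave, fit a segment of it by least squares with one, two, or three parameters, and compare RMS residuals. A sketch under those assumptions (segment lengths, measured in radians of the error wavelength, are illustrative):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rms_residual(deg, length, n=200):
    """RMS residual of a degree-`deg` least-squares polynomial fit to a
    sine-wave 'orbit error' over a ground-track segment of given length.
    deg = 0, 1, 2 correspond to bias, bias+tilt, bias+tilt+curvature."""
    ts = [length * i / (n - 1) for i in range(n)]
    ys = [math.sin(t) for t in ts]
    # Normal equations for the least-squares polynomial fit.
    A = [[sum(t ** (i + j) for t in ts) for j in range(deg + 1)]
         for i in range(deg + 1)]
    b = [sum((t ** i) * y for t, y in zip(ts, ys)) for i in range(deg + 1)]
    c = solve(A, b)
    return math.sqrt(sum((y - sum(c[i] * t ** i for i in range(deg + 1))) ** 2
                         for t, y in zip(ts, ys)) / n)

# More parameters help; longer segments hurt.
print(rms_residual(0, 1.0), rms_residual(1, 1.0),
      rms_residual(2, 1.0), rms_residual(2, 2.0))
```

The printed residuals decrease from the bias fit to the quadratic fit on a fixed segment, and the quadratic fit degrades as the segment doubles, matching the two intuitive conclusions in the abstract.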
Xenidou-Dervou, Iro; van Lieshout, Ernest C D M; van der Schoot, Menno
2014-01-01
Preschool children have been proven to possess nonsymbolic approximate arithmetic skills before learning how to manipulate symbolic math and thus before any formal math instruction. It has been assumed that nonsymbolic approximate math tasks necessitate the allocation of Working Memory (WM) resources. WM has been consistently shown to be an important predictor of children's math development and achievement. The aim of our study was to uncover the specific role of WM in nonsymbolic approximate math. For this purpose, we conducted a dual-task study with preschoolers with active phonological, visual, spatial, and central executive interference during the completion of a nonsymbolic approximate addition dot task. With regard to the role of WM, we found a clear performance breakdown in the central executive interference condition. Our findings provide insight into the underlying cognitive processes involved in storing and manipulating nonsymbolic approximate numerosities during early arithmetic. Copyright © 2013 Cognitive Science Society, Inc.
Approximating Light Rays in the Schwarzschild Field
NASA Astrophysics Data System (ADS)
Semerák, O.
2015-02-01
A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors," namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.
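For reference, the "effective and often employed approximation by Beloborodov" mentioned above is the closed-form relation (Beloborodov 2002) between the emission angle α at radius r and the angular position ψ of the emission point, with r_g the Schwarzschild radius:

```latex
1 - \cos\alpha = \left(1 - \cos\psi\right)\left(1 - \frac{r_g}{r}\right)
```

It is accurate to first order in r_g/r and is widely used for light bending near compact objects because it avoids the elliptic integrals of the exact solution.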
Approximate knowledge compilation: The first order case
Val, A. del
1996-12-31
Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) We present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm. (2) We show that both ground algorithms can be lifted to the first order case preserving their correctness for approximate compilation.
Approximate Brueckner orbitals in electron propagator calculations
Ortiz, J.V.
1999-12-01
Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with third-order algebraic diagrammatic construction [2ph-TDA, ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.
APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD
Semerák, O.
2015-02-10
A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable—and very accurate—for practically solving the ray-deflection exercise.
Information geometry of mean-field approximation.
Tanaka, T
2000-08-01
I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics.
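The "naive" mean-field approximation mentioned here replaces each neighbour's fluctuating state by its mean, yielding a self-consistency equation of the form m = tanh(h + J·Σ m). For a tiny fully connected Ising system this can be compared directly against exact enumeration. A sketch with arbitrarily chosen weak couplings (the parameters are illustrative, not from the paper):

```python
import math
from itertools import product

J, h, n = 0.1, 0.2, 3  # uniform pairwise coupling, uniform field, 3 spins

def mean_field_m(iters=200):
    """Fixed-point iteration of the naive mean-field equation
    m = tanh(h + J*(n-1)*m) for a fully connected Ising system."""
    m = 0.0
    for _ in range(iters):
        m = math.tanh(h + J * (n - 1) * m)
    return m

def exact_m():
    """Exact per-spin magnetisation by enumerating all 2^n spin states."""
    Z = mag = 0.0
    for s in product([-1, 1], repeat=n):
        pairs = sum(s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        w = math.exp(J * pairs + h * sum(s))  # Boltzmann weight, beta = 1
        Z += w
        mag += w * sum(s)
    return mag / (Z * n)

print(mean_field_m(), exact_m())
```

For weak couplings the two magnetisations agree to a few parts in a thousand; the higher-order corrections the paper derives via the Plefka expansion systematically close the remaining gap.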
Polynomial approximations of a class of stochastic multiscale elasticity problems
NASA Astrophysics Data System (ADS)
Hoang, Viet Ha; Nguyen, Thanh Chung; Xia, Bingxing
2016-06-01
We consider a class of elasticity equations in R^d whose elastic moduli depend on n separated microscopic scales. The moduli are random and expressed as a linear expansion of a countable sequence of random variables which are independently and identically uniformly distributed in a compact interval. The multiscale Hellinger-Reissner mixed problem that allows for computing the stress directly and the multiscale mixed problem with a penalty term for nearly incompressible isotropic materials are considered. The stochastic problems are studied via deterministic problems that depend on a countable number of real parameters which represent the probabilistic law of the stochastic equations. We study the multiscale homogenized problems that contain all the macroscopic and microscopic information. The solutions of these multiscale homogenized problems are written as generalized polynomial chaos (gpc) expansions. We approximate these solutions by semidiscrete Galerkin approximating problems that project into the spaces of functions with only a finite number of N gpc modes. Assuming summability properties for the coefficients of the elastic moduli's expansion, we deduce bounds and summability properties for the solutions' gpc expansion coefficients. These bounds imply explicit rates of convergence in terms of N when the gpc modes used for the Galerkin approximation are chosen to correspond to the best N terms in the gpc expansion. For the mixed problem with a penalty term for nearly incompressible materials, we show that the rate of convergence for the best N term approximation is independent of the Lamé constants' ratio when it goes to infinity. Correctors for the homogenization problem are deduced. From these we establish correctors for the solutions of the parametric multiscale problems in terms of the semidiscrete Galerkin approximations. For two-scale problems, an explicit homogenization error which is uniform with respect to the parameters is deduced. Together…
Establishing Conventional Communication Systems: Is Common Knowledge Necessary?
ERIC Educational Resources Information Center
Barr, Dale J.
2004-01-01
How do communities establish shared communication systems? The Common Knowledge view assumes that symbolic conventions develop through the accumulation of common knowledge regarding communication practices among the members of a community. In contrast with this view, it is proposed that coordinated communication emerges as a by-product of local…
Finding Common Ground with the Common Core
ERIC Educational Resources Information Center
Moisan, Heidi
2015-01-01
This article examines the journey of museum educators at the Chicago History Museum in understanding the Common Core State Standards and implementing them in our work with the school audience. The process raised questions about our teaching philosophy and our responsibility to our audience. Working with colleagues inside and outside of our…
How Common Is the Common Core?
ERIC Educational Resources Information Center
Thomas, Amande; Edson, Alden J.
2014-01-01
Since the introduction of the Common Core State Standards for Mathematics (CCSSM) in 2010, stakeholders in adopting states have engaged in a variety of activities to understand CCSSM standards and transition from previous state standards. These efforts include research, professional development, assessment and modification of curriculum resources,…
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality against the effort expended; as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers insights into the working of AC techniques and inspire more efforts in this area to make AC the mainstream computing approach in future systems.
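One concrete AC technique for exploiting "approximable program portions" is loop perforation: execute only every k-th iteration of an error-tolerant loop and rescale the result, trading accuracy for a roughly k-fold reduction in work. A minimal self-contained illustration (not taken from the survey):

```python
def exact_mean(xs):
    """Reference result: visit every element."""
    return sum(xs) / len(xs)

def perforated_mean(xs, k):
    """Loop perforation: visit only every k-th element, accepting a
    small error in exchange for doing ~1/k of the work."""
    kept = xs[::k]
    return sum(kept) / len(kept)

data = [(i % 17) * 0.5 for i in range(10_000)]
print(exact_mean(data), perforated_mean(data, 10))
```

On this regular input the perforated estimate lands within about one percent of the exact mean while touching a tenth of the data; real AC systems pair such transformations with the output-quality monitoring the survey discusses.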
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focussed processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 IC's, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
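The trade made by such a processor can be illustrated directly: replace the quadratic phase by straight-line segments and bound the worst-case phase error, which for linear interpolation of φ(t) = t² on segments of width h is at most max|φ''|·h²/8 = h²/4. A toy illustration (the segment count is arbitrary, not a SEASAT-A design value):

```python
def quadratic_phase(t):
    """Ideal quadratic focusing phase on the normalized interval [0, 1)."""
    return t * t

def segmented_linear_phase(t, n_seg):
    """Piecewise-linear approximation of the quadratic phase using
    n_seg equal-width segments on [0, 1)."""
    h = 1.0 / n_seg
    k = min(int(t / h), n_seg - 1)
    t0, t1 = k * h, (k + 1) * h
    y0, y1 = t0 * t0, t1 * t1
    return y0 + (y1 - y0) * (t - t0) / h

n_seg = 8
errs = [abs(segmented_linear_phase(i / 1000.0, n_seg) - quadratic_phase(i / 1000.0))
        for i in range(1000)]
print(max(errs))  # theory: at most h^2/4 = 1/256 for h = 1/8
```

With only eight segments the phase error stays below 1/256 of a (normalized) radian-squared unit, which is the kind of bounded degradation that lets the multiplier count drop so sharply.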
AN APPROXIMATE EQUATION OF STATE OF SOLIDS.
… By generalizing experimental data and obtaining unified relations describing the thermodynamic properties of solids, an approximate equation of state is derived which can be applied to a wide class of materials. (Author)
Approximate Controllability Results for Linear Viscoelastic Flows
NASA Astrophysics Data System (ADS)
Chowdhury, Shirshendu; Mitra, Debanjana; Ramaswamy, Mythily; Renardy, Michael
2017-09-01
We consider linear viscoelastic flow of a multimode Maxwell or Jeffreys fluid in a bounded domain with smooth boundary, with a distributed control in the momentum equation. We establish results on approximate and exact controllability.
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Computational aspects of pseudospectral Laguerre approximations
NASA Technical Reports Server (NTRS)
Funaro, Daniele
1989-01-01
Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. Introduced are a scaling function and appropriate numerical procedures in order to limit these unpleasant phenomena.
Polynomial approximation of functions in Sobolev spaces
Dupont, T.; Scott, R.
1980-04-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
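The "polynomial plus remainder" representation can be illustrated with the ordinary Taylor series, of which the paper's averaged Taylor series is a generalization: the degree-3 Taylor polynomial of sin about 0 leaves a remainder bounded by |t|⁵/5! on [0, 1]. A toy check of that bound:

```python
import math

def taylor_sin3(t):
    """Degree-3 Taylor polynomial of sin about 0."""
    return t - t ** 3 / 6.0

# Alternating-series remainder bound: |sin t - p3(t)| <= |t|^5 / 120 on [0, 1].
worst = max(abs(math.sin(i / 1000.0) - taylor_sin3(i / 1000.0))
            for i in range(1001))
print(worst, 1.0 / 120.0)
```

The observed worst-case error (attained at t = 1) sits just below the theoretical bound 1/120 ≈ 0.00833, as the remainder analysis predicts.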
Approximate String Matching with Reduced Alphabet
NASA Astrophysics Data System (ADS)
Salmela, Leena; Tarhio, Jorma
We present a method to speed up approximate string matching by mapping the actual alphabet to a smaller alphabet. We apply the alphabet reduction scheme to a tuned version of the approximate Boyer-Moore algorithm utilizing the Four-Russians technique. Our experiments show that the alphabet reduction makes the algorithm faster. Especially in the k-mismatch case, the new variation is faster than earlier algorithms for English data with small values of k.
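The reason alphabet reduction is safe is that mapping into a smaller alphabet can only merge symbols: it may turn a mismatch into a match, never the reverse, so the mismatch count in the reduced strings is a lower bound on the true count. A hedged sketch of that filtering idea (plain window scanning, not the tuned Boyer-Moore/Four-Russians algorithm of the paper; the reduction function is an arbitrary illustrative choice):

```python
def mismatches(a, b, limit):
    """Count mismatches between equal-length sequences, stopping early
    once the count exceeds limit."""
    count = 0
    for x, y in zip(a, b):
        if x != y:
            count += 1
            if count > limit:
                break
    return count

def k_mismatch_search(text, pattern, k, reduce_char=lambda c: ord(c) % 4):
    """k-mismatch search with a reduced-alphabet prefilter: a window whose
    reduced form already has more than k mismatches cannot match, so it
    is rejected without re-examining the original characters."""
    rp = [reduce_char(c) for c in pattern]
    rt = [reduce_char(c) for c in text]
    m, hits = len(pattern), []
    for i in range(len(text) - m + 1):
        if mismatches(rt[i:i + m], rp, k) <= k:            # cheap filter
            if mismatches(text[i:i + m], pattern, k) <= k:  # verify
                hits.append(i)
    return hits

print(k_mismatch_search("abracadabra", "abra", 1))
```

Here the reduced alphabet has only 4 symbols; the filter discards most windows before the verification step touches the full-alphabet strings, which is where the speedup comes from.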
Some Recent Progress for Approximation Algorithms
NASA Astrophysics Data System (ADS)
Kawarabayashi, Ken-ichi
We survey some recent progress on approximation algorithms. Our main focus is the following two problems that have some recent breakthroughs; the edge-disjoint paths problem and the graph coloring problem. These breakthroughs involve the following three ingredients that are quite central in approximation algorithms: (1) Combinatorial (graph theoretical) approach, (2) LP based approach and (3) Semi-definite programming approach. We also sketch how they are used to obtain recent development.
Polynomial approximation of functions in Sobolev spaces
NASA Technical Reports Server (NTRS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
Nonlinear Stochastic PDEs: Analysis and Approximations
2016-05-23
We compare Wiener chaos and stochastic collocation methods for linear advection-reaction equations. Keywords: nonlinear stochastic PDEs (SPDEs), nonlocal SPDEs, Navier-Stokes equations.
Approximations and Solution Estimates in Optimization
2016-04-06
Johannes O. Royset, Operations Research Department, Naval Postgraduate School, joroyset@nps.edu. Abstract: Approximation is central to many optimization problems, and the supporting theory provides insight as well as a foundation for algorithms. … functions quantifies epi-convergence, we are able to obtain estimates of optimal solutions and optimal values through estimates of that distance.
The closure approximation in the hierarchy equations.
NASA Technical Reports Server (NTRS)
Adomian, G.
1971-01-01
The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given, and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.
Canonical Commonality Analysis.
ERIC Educational Resources Information Center
Leister, K. Dawn
Commonality analysis is a method of partitioning variance that has advantages over more traditional "OVA" methods. Commonality analysis indicates the amount of explanatory power that is "unique" to a given predictor variable and the amount of explanatory power that is "common" to or shared with at least one predictor…
Knowledge representation for commonality
NASA Technical Reports Server (NTRS)
Yeager, Dorian P.
1990-01-01
Domain-specific knowledge necessary for commonality analysis falls into two general classes: commonality constraints and costing information. Notations for encoding such knowledge should be powerful and flexible and should appeal to the domain expert. The notations employed by the Commonality Analysis Problem Solver (CAPS) analysis tool are described. Examples are given to illustrate the main concepts.
An improved proximity force approximation for electrostatics
Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.
2012-08-15
A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: The proximity force approximation (PFA) has been widely used in different areas. The PFA can be improved using a derivative expansion in the shape of the surfaces. We use the improved PFA to compute electrostatic forces between conductors. The results can be used as an analytic benchmark for numerical calculations in AFM. Insight is provided for people who use the PFA to compute nuclear and Casimir forces.
Approximating centrality in evolving graphs: toward sublinearity
NASA Astrophysics Data System (ADS)
Priest, Benjamin W.; Cybenko, George
2017-05-01
The identification of important nodes is a ubiquitous problem in the analysis of social networks. Centrality indices (such as degree centrality, closeness centrality, betweenness centrality, PageRank, and others) are used across many domains to accomplish this task. However, the computation of such indices is expensive on large graphs. Moreover, evolving graphs are becoming increasingly important in many applications. It is therefore desirable to develop on-line algorithms that can approximate centrality measures using memory sublinear in the size of the graph. We discuss the challenges facing the semi-streaming computation of many centrality indices. In particular, we apply recent advances in the streaming and sketching literature to provide a preliminary streaming approximation algorithm for degree centrality utilizing CountSketch and a multi-pass semi-streaming approximation algorithm for closeness centrality leveraging a spanner obtained through iteratively sketching the vertex-edge adjacency matrix. We also discuss possible ways forward for approximating betweenness centrality, as well as spectral measures of centrality. We provide a preliminary result using sketched low-rank approximations to approximate the output of the HITS algorithm.
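The CountSketch-based degree estimation mentioned above can be sketched minimally for an edge stream; the hash construction and parameters below are illustrative, not the authors' exact streaming algorithm.

```python
import random
from statistics import median

class CountSketch:
    """Minimal CountSketch for streaming degree estimates.

    An illustrative sketch of the idea, not the paper's construction: each
    node id is hashed into a few signed counters, and the median of the
    signed counters estimates its degree; the table size is fixed, hence
    sublinear in the number of nodes for large skewed streams.
    """

    def __init__(self, rows=7, width=256, seed=0):
        rng = random.Random(seed)
        self.width = width
        self.salts = [(rng.randrange(1 << 30), rng.randrange(1 << 30))
                      for _ in range(rows)]
        self.table = [[0] * width for _ in range(rows)]

    def _cells(self, item):
        for r, (a, b) in enumerate(self.salts):
            bucket = hash((a, item)) % self.width
            sign = 1 if hash((b, item)) % 2 == 0 else -1
            yield r, bucket, sign

    def add(self, item, count=1):
        for r, bucket, sign in self._cells(item):
            self.table[r][bucket] += sign * count

    def estimate(self, item):
        return median(sign * self.table[r][bucket]
                      for r, bucket, sign in self._cells(item))

# Each arriving edge increments the degree counter of both endpoints.
edges = [(0, 1), (0, 2), (0, 3), (1, 2)]
cs = CountSketch()
for u, v in edges:
    cs.add(u)
    cs.add(v)
degree_estimate = cs.estimate(0)  # close to the true degree, 3
```

The median over rows makes the estimate robust to the occasional bucket collision, which is what allows the table to stay small.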
This is a draft of the recommendations that the Assumable Waters Subcommittee will present to NACEPT on May 10. It should be considered a draft until it is approved and transmitted to the EPA by NACEPT.
Efficient crosswell EM tomography using localized nonlinear approximation
Kim, Hee Joon; Song, Yoonho; Lee, Ki Ha; Wilt, Michael J.
2003-07-21
This paper presents a fast and stable imaging scheme using the localized nonlinear (LN) approximation of integral equation (IE) solutions for inverting electromagnetic data obtained in a crosswell survey. The medium is assumed to be cylindrically symmetric about a source borehole, and to maintain the symmetry a vertical magnetic dipole is used as a source. To find an optimum balance between data fitting and the smoothness constraint, we introduce an automatic selection scheme for the Lagrange multiplier, which is sought at each iteration with a least-misfit criterion. In this selection scheme, the IE algorithm is quite attractive in speed because Green's functions, the most time-consuming part of IE methods, are repeatedly reusable throughout the inversion process. The inversion scheme using the LN approximation has been tested to show its stability and efficiency using both synthetic and field data. The inverted image derived from the field data, collected in a pilot experiment of water-flood monitoring in an oil field, compares successfully with that of a 2.5-dimensional inversion scheme.
Nonadiabatic charged spherical evolution in the postquasistatic approximation
Rosales, L.; Barreto, W.; Peralta, C.; Rodriguez-Mueller, B.
2010-10-15
We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of dissipative and electrically charged distributions in general relativity. The numerical implementation of our approach leads to a solver which is globally second-order convergent. We evolve nonadiabatic distributions assuming an equation of state that accounts for the anisotropy induced by the electric charge. Dissipation is described by streaming-out or diffusion approximations. We match the interior solution, in noncomoving coordinates, with the Vaidya-Reissner-Nordstroem exterior solution. Two models are considered: (i) a Schwarzschild-like shell in the diffusion limit; and (ii) a Schwarzschild-like interior in the free-streaming limit. These toy models tell us something about the nature of the dissipative and electrically charged collapse. Diffusion stabilizes the gravitational collapse producing a spherical shell whose contraction is halted in a short characteristic hydrodynamic time. The streaming-out radiation provides a more efficient mechanism for emission of energy, redistributing the electric charge on the whole sphere, while the distribution collapses indefinitely with a longer hydrodynamic time scale.
Near distance approximation in astrodynamical applications of Lambert's theorem
NASA Astrophysics Data System (ADS)
Rauh, Alexander; Parisi, Jürgen
2014-01-01
The smallness parameter of the approximation method is defined in terms of the non-dimensional initial distance between target and chaser satellite. In the case of a circular target orbit, compact analytical expressions are obtained for the interception travel time up to third order. For eccentric target orbits, an explicit result is worked out to first order, and the tools are prepared for numerical evaluation of higher order contributions. The possible transfer orbits are examined within Lambert's theorem. For an eventual rendezvous it is assumed that the directions of the angular momenta of the two orbits enclose an acute angle. This assumption, together with the property that the travel time should vanish with vanishing initial distance, leads to a condition on the admissible initial positions of the chaser satellite. The condition is worked out explicitly in the general case of an eccentric target orbit and a non-coplanar transfer orbit. The condition is local. However, since during a rendezvous maneuver, the chaser eventually passes through the local space, the condition propagates to non-local initial distances. As to quantitative accuracy, the third order approximation reproduces the elements of Mars, in the historical problem treated by Gauss, to seven decimals accuracy, and in the case of the International Space Station, the method predicts an encounter error of about 12 m for an initial distance of 70 km.
Approximate controllability of a system of parabolic equations with delay
NASA Astrophysics Data System (ADS)
Carrasco, Alexander; Leiva, Hugo
2008-09-01
In this paper we give necessary and sufficient conditions for the approximate controllability of a system of parabolic equations with delay, where Ω is a bounded domain in ℝⁿ, D is an n×n nondiagonal matrix whose eigenvalues are semi-simple with nonnegative real part, the control takes values in U, and B ∈ L(U, Z). The standard notation z_t(x) defines a function on [-τ, 0] (with x fixed) by z_t(x)(s) = z(t+s, x), -τ ≤ s ≤ 0. Here τ ≥ 0 is the maximum delay, which is supposed to be finite. We assume that the operator is linear and bounded, and that φ₀ ∈ Z, φ ∈ L²([-τ, 0]; Z). To this end: first, we reformulate this system into a standard first-order delay equation. Second, the semigroup associated with the first-order delay equation on an appropriate product space is expressed as a series of strongly continuous semigroups and orthogonal projections related to the eigenvalues of the Laplacian operator; this representation allows us to reduce the controllability of this partial differential equation with delay to a family of ordinary delay equations. Finally, we use the well-known rank condition for the approximate controllability of delay systems to derive our main result.
NASA Technical Reports Server (NTRS)
Ito, K.
1984-01-01
The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.
NASA Technical Reports Server (NTRS)
Ito, K.
1985-01-01
The stability and convergence properties of the Legendre-tau approximation for hereditary differential systems are analyzed. A characteristic equation is derived for the eigenvalues of the resulting approximate system. As a result of this derivation the uniform exponential stability of the solution semigroup is preserved under approximation. It is the key to obtaining the convergence of approximate solutions of the algebraic Riccati equation in trace norm.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
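For a linear limit state with independent normal variables, the FORM step described above reduces to a closed-form safety index; a minimal sketch with illustrative numbers (not from the report):

```python
import math

def form_linear(mu_r, sigma_r, mu_s, sigma_s):
    """First-Order Reliability Method for the linear limit state g = R - S.

    With independent normal resistance R and load S this case is exact;
    for a nonlinear limit state, FORM replaces g by its first-order Taylor
    expansion at the design point, as described above. The numbers below
    are illustrative, not taken from the report.
    """
    beta = (mu_r - mu_s) / math.hypot(sigma_r, sigma_s)  # safety index
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))          # Phi(-beta)
    return beta, pf

beta, pf = form_linear(mu_r=300.0, sigma_r=30.0, mu_s=200.0, sigma_s=40.0)
# beta = 100 / 50 = 2.0, and pf = Phi(-2) is about 0.023
```

SORM would additionally correct pf for the curvature of the limit state at the design point; for the linear case above there is no curvature and the two methods coincide.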
The tendon approximator device in traumatic injuries.
Forootan, Kamal S; Karimi, Hamid; Forootan, Nazilla-Sadat S
2015-01-01
Precise and tension-free approximation of two tendon endings is the key predictor of outcomes following tendon lacerations and repairs. We evaluate the efficacy of a new tendon approximator device in tendon laceration repairs. In a comparative study, we used our new tendon approximator device in 99 consecutive patients with lacerations of 266 tendons who attended a university hospital, and evaluated the operative time to repair the tendons, surgeons' satisfaction, and patients' outcomes in a long-term follow-up. Data were compared with the data of control patients undergoing tendon repair by the conventional method. In total, 266 tendons were repaired with the approximator device and 199 tendons with the conventional technique. 78.7% of patients in the first group were male and 21.2% were female. In the approximator group 38% of patients had secondary repair of cut tendons and 62% had primary repair. Patients were followed for a mean period of 3 years (14-60 months). Time required for repair of each tendon was significantly reduced with the approximator device (2 min vs. 5.5 min, p<0.0001). After 3-4 weeks of immobilization, passive and active physiotherapy was started. Functional results of tendon repair were identical in the two groups and not significantly different. 1% of tendons in group A and 1.2% in group B had ruptures, a difference that was not significant. The new tendon approximator device is cheap, feasible to use, and reduces the time of tendon repair, with outcomes comparable to the conventional methods.
Common mechanisms of synaptic plasticity in vertebrates and invertebrates
Glanzman, David L.
2016-01-01
Until recently, the literature on learning-related synaptic plasticity in invertebrates has been dominated by models assuming plasticity is mediated by presynaptic changes, whereas the vertebrate literature has been dominated by models assuming it is mediated by postsynaptic changes. Here I will argue that this situation does not reflect a biological reality and that, in fact, invertebrate and vertebrate nervous systems share a common set of mechanisms of synaptic plasticity. PMID:20152143
On the mathematical treatment of the Born-Oppenheimer approximation
Jecko, Thierry
2014-05-15
Motivated by the paper by Sutcliffe and Woolley [“On the quantum theory of molecules,” J. Chem. Phys. 137, 22A544 (2012)], we present the main ideas used by mathematicians to show the accuracy of the Born-Oppenheimer approximation for molecules. Based on mathematical works on this approximation for molecular bound states, in scattering theory, in resonance theory, and for short time evolution, we give an overview of some rigorous results obtained up to now. We also point out the main difficulties mathematicians are trying to overcome and speculate on further developments. The mathematical approach does not exactly fit the common use of the approximation in physics and chemistry. We criticize the latter and comment on the differences, contributing in this way to the discussion on the Born-Oppenheimer approximation initiated by Sutcliffe and Woolley. The paper contains neither mathematical statements nor proofs. Instead, we try to make mathematically rigorous results on the subject accessible to researchers in quantum chemistry or physics.
On uniform approximation of elliptic functions by Padé approximants
NASA Astrophysics Data System (ADS)
Khristoforov, Denis V.
2009-06-01
Diagonal Padé approximants of elliptic functions are studied. It is known that the absence of uniform convergence of such approximants is related to them having spurious poles that do not correspond to any singularities of the function being approximated. A sequence of piecewise rational functions is proposed, which is constructed from two neighbouring Padé approximants and approximates an elliptic function locally uniformly in the Stahl domain. The proof of the convergence of this sequence is based on deriving strong asymptotic formulae for the remainder function and Padé polynomials and on the analysis of the behaviour of a spurious pole. Bibliography: 23 titles.
Approximation of Bivariate Functions via Smooth Extensions
Zhang, Zhihua
2014-01-01
For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316
Recent advances in discrete dipole approximation
NASA Astrophysics Data System (ADS)
Flatau, P. J.
2012-12-01
I will describe recent advances and results related to the Discrete Dipole Approximation. I will concentrate on the Discrete Dipole Scattering (DDSCAT) code, which has been jointly developed by myself and Bruce T. Draine. Discussion will concentrate on calculation of scattering and absorption by isolated particles (e.g., dust grains, ice crystals), calculations of scattering by periodic structures (with applications to studies of scattering and absorption by periodic arrangements of finite cylinders, cubes, etc.), very fast near-field calculation, and ways to display scattering targets and their composition using three-dimensional graphical codes. I will discuss possible extensions. References: Flatau, P. J. and Draine, B. T., 2012, Fast near field calculations in the discrete dipole approximation for regular rectilinear grids, Optics Express, 20, 1247-1252. Draine, B. T. and Flatau, P. J., 2008, Discrete-dipole approximation for periodic targets: theory and tests, J. Opt. Soc. Am. A, 25, 2693-2703. Draine, B. T. and Flatau, P. J., 2012, User Guide for the Discrete Dipole Approximation Code DDSCAT 7.2, arXiv:1202.3424v3.
Estimation of distribution algorithms with Kikuchi approximations.
Santana, Roberto
2005-01-01
The question of finding feasible ways for estimating probability distributions is one of the main challenges for Estimation of Distribution Algorithms (EDAs). To estimate the distribution of the selected solutions, EDAs use factorizations constructed according to graphical models. The class of factorizations that can be obtained from these probability models is highly constrained. Expanding the class of factorizations that could be employed for probability approximation is a necessary step for the conception of more robust EDAs. In this paper we introduce a method for learning a more general class of probability factorizations. The method combines a reformulation of a probability approximation procedure known in statistical physics as the Kikuchi approximation of energy, with a novel approach for finding graph decompositions. We present the Markov Network Estimation of Distribution Algorithm (MN-EDA), an EDA that uses Kikuchi approximations to estimate the distribution, and Gibbs Sampling (GS) to generate new points. A systematic empirical evaluation of MN-EDA is done in comparison with different Bayesian network based EDAs. From our experiments we conclude that the algorithm can outperform other EDAs that use traditional methods of probability approximation in the optimization of functions with strong interactions among their variables.
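The estimate-then-sample loop common to all EDAs, which MN-EDA instantiates with Kikuchi approximations and Gibbs sampling, can be illustrated with the much simpler univariate case; the toy below uses independent bit marginals and direct sampling, so it is emphatically not MN-EDA, only the shared loop.

```python
import random

def umda_onemax(n_bits=20, pop_size=60, n_selected=20, generations=30, seed=3):
    """Univariate EDA on the OneMax function (maximize the number of ones).

    Far simpler than the Kikuchi-based MN-EDA described above: independent
    bit marginals replace the Kikuchi factorization, and direct sampling
    replaces Gibbs sampling, but the estimate-then-sample loop is the same.
    """
    rng = random.Random(seed)
    p = [0.5] * n_bits          # initial marginal probabilities
    best_fitness = 0
    for _ in range(generations):
        # sample a population from the current model
        population = [[int(rng.random() < p[i]) for i in range(n_bits)]
                      for _ in range(pop_size)]
        population.sort(key=sum, reverse=True)
        best_fitness = max(best_fitness, sum(population[0]))
        selected = population[:n_selected]
        # re-estimate marginals from the selected solutions, clamped to
        # keep some diversity in later generations
        p = [min(0.95, max(0.05, sum(ind[i] for ind in selected) / n_selected))
             for i in range(n_bits)]
    return best_fitness

best = umda_onemax()  # typically finds the optimum, 20
```

On OneMax the independent model suffices; the point of MN-EDA's richer factorizations is precisely the functions with strong variable interactions where a univariate model fails.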
Ancilla-approximable quantum state transformations
Blass, Andreas; Gurevich, Yuri
2015-04-15
We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We consider primarily the issue of approximation to within a specified positive ε, but we also address the question of arbitrarily close approximation.
Separable approximations of two-body interactions
NASA Astrophysics Data System (ADS)
Haidenbauer, J.; Plessas, W.
1983-01-01
We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable ³S₁-³D₁ N-N potentials that were constructed via this method by Pieper. Not only are the on-shell properties of these potentials considered, but a comparison is also made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme.
Approximate solutions of the hyperbolic Kepler equation
NASA Astrophysics Data System (ADS)
Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge
2015-12-01
We provide an approximate zero S̃(g, L) for the hyperbolic Kepler equation S − g·arcsinh(S) − L = 0 for g ∈ (0, 1) and L ∈ [0, ∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g, L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n − S| ≤ 0.5^(2^n − 1) |S̃ − S|. The approximate zero S̃(g, L) is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of (0, 1) × [0, ∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
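A minimal Newton solver for the equation in the abstract is easy to write; the sketch below uses a crude starter rather than the paper's carefully constructed piecewise starter, which is adequate away from the corner g → 1, L → 0.

```python
import math

def solve_hyperbolic_kepler(g, L, tol=1e-12, max_iter=50):
    """Newton's method for S - g*arcsinh(S) - L = 0, with g in (0,1), L >= 0.

    The paper supplies a piecewise-defined starter with quadratic-convergence
    guarantees; here a crude starter S0 = L is used instead (an assumption of
    this sketch, not the paper's construction).
    """
    S = L if L > 0.0 else 1e-3
    for _ in range(max_iter):
        f = S - g * math.asinh(S) - L
        fp = 1.0 - g / math.sqrt(S * S + 1.0)  # f'(S) > 1 - g > 0, so no division by zero
        step = f / fp
        S -= step
        if abs(step) < tol:
            break
    return S

S = solve_hyperbolic_kepler(0.5, 2.0)
residual = S - 0.5 * math.asinh(S) - 2.0  # vanishes to machine precision
```

Because f is strictly increasing for g < 1, the root is unique, which is what makes a simple starter workable in the interior of the parameter region.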
Fast wavelet based sparse approximate inverse preconditioner
Wan, W.L.
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems, but unfortunately it is not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative that is readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal, in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically have piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
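In the quadrupole approximation mentioned under (a), the radiated power takes the standard textbook form (stated here for reference; conventions follow the usual general-relativity literature, not necessarily this survey):

```latex
% Quadrupole luminosity: suppressed by G/c^5, hence weak-field, slow-motion.
\[
  L_{\mathrm{GW}} \;=\; \frac{G}{5c^{5}}
  \left\langle \dddot{Q}_{ij}\,\dddot{Q}^{ij} \right\rangle,
  \qquad
  Q_{ij} \;=\; \int \rho \left( x_i x_j - \tfrac{1}{3}\,\delta_{ij}\, r^{2} \right) d^{3}x,
\]
% where Q_ij is the trace-free mass quadrupole moment of the source and the
% angle brackets denote an average over several periods of the motion.
```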
NASA Astrophysics Data System (ADS)
Mielimąka, Ryszard; Orwat, Justyna
2017-07-01
The mean-square approximation of the average course of mining-area subsidence, using a flat spline as the approximating function, was carried out to determine the approximate course of curvatures observed on a measuring line set over an exploitation field of the "Chwałowice" coal mine. The assumed minimising criterion of the loss function was referred to the forecast values of vertical ground displacements calculated via Knothe's formulas. The average courses of inclinations and curvatures of the mining area were obtained by calculating their values by means of empirical formulas, taking into consideration the obtained approximate values of the measured subsidence.
Potential energy changes and the Boussinesq approximation in stratified fluids
NASA Astrophysics Data System (ADS)
Seshadri, K.; Rottman, J. W.; Nomura, K. K.; Stretch, D. D.
2002-11-01
The evolution of the potential energy of an ideal binary fluid mixture that is initially stably stratified is re-examined. The initial stable stratification evolves to a state of uniform density under the influence of molecular diffusion. We derive the appropriate governing equations using either a mass-averaged or a volume-averaged definition of velocity, and develop an energy budget describing the changes between kinetic, potential and internal energies without invoking the Boussinesq approximation. We compare the energy evolution equations with those based on the commonly used Boussinesq approximation and clarify some subtleties associated with the exchanges between the different forms of energy in this problem. In particular, we show that the mass-averaged velocity is nonzero and that all of the increase in potential energy comes from the initial kinetic energy.
A Randomized Approximate Nearest Neighbors Algorithm
2010-09-14
Only fragments of this report's text survive in the record: the performing organization is Yale University, Department of Computer Science, New Haven, CT 06520; cited references include Introduction to Harmonic Analysis, second edition, Dover Publications (1976), and D. Knuth, Seminumerical Algorithms, vol. 2 of The Art of Computer Programming; and a fragment of the analysis reads "we may further assume that t > a² and evaluate the cdf of D − a at t by computing the probability of D − a being smaller than t to obtain F_{D−a}(t) = ∫_{a²}^{t} …".
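The surviving fragments do not specify the algorithm, so the sketch below illustrates the general randomized-ANN idea with sign-of-random-projection signatures; the function name and parameters are hypothetical, not taken from the report.

```python
import random

def hyperplane_signatures(vectors, n_bits=16, seed=1):
    """Sign-of-random-projection signatures for approximate nearest neighbors.

    A minimal illustration of the randomized idea (not this report's exact
    algorithm): nearby vectors agree on most signature bits, so candidate
    neighbors can be short-listed by comparing compact bit signatures
    instead of computing all pairwise distances.
    """
    rng = random.Random(seed)
    dim = len(vectors[0])
    # each random Gaussian vector defines a hyperplane through the origin
    planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_bits)]

    def signature(v):
        # one bit per hyperplane: which side of the plane v falls on
        return tuple(
            int(sum(w * x for w, x in zip(plane, v)) >= 0.0) for plane in planes
        )

    return [signature(v) for v in vectors]

pts = [(1.0, 0.0), (0.99, 0.05), (-1.0, 0.0)]
sig = hyperplane_signatures(pts)
close_agree = sum(a == b for a, b in zip(sig[0], sig[1]))  # most bits agree
far_agree = sum(a == b for a, b in zip(sig[0], sig[2]))    # almost none do
```

The probability that one random hyperplane separates two vectors is proportional to the angle between them, which is what turns bit agreement into a distance proxy.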
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the kth derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
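The Gibbs behaviour that the first reconstruction step exploits is easy to reproduce with the standard square-wave partial sums; the sketch below is illustrative only, not the authors' reconstruction method.

```python
import math

def square_wave_partial_sum(x, n_terms):
    # Fourier partial sum of the odd unit square wave (height +/-1):
    # S_N(x) = (4/pi) * sum_{k=0}^{N-1} sin((2k+1) x) / (2k+1)
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# Near the jump at x = 0 the partial sums overshoot by roughly 9% of the
# jump no matter how many terms are kept -- the Gibbs behaviour whose
# asymptotics the reconstruction method above exploits.
xs = [i * math.pi / 5000 for i in range(1, 2500)]
overshoot = max(square_wave_partial_sum(x, 50) for x in xs)  # about 1.18
```

Away from the discontinuity the partial sum does converge to the wave's value of 1, which is why the troublesome intervals in the theorem can be made small.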
Approximate convective heating equations for hypersonic flows
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.; Sutton, K.
1979-01-01
Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.
Bronchopulmonary segments approximation using anatomical atlas
NASA Astrophysics Data System (ADS)
Busayarat, Sata; Zrimec, Tatjana
2007-03-01
Bronchopulmonary segments are valuable as they give more accurate localization than lung lobes. Traditionally, determining the segments requires segmentation and identification of segmental bronchi, which, in turn, require volumetric imaging data. In this paper, we present a method for approximating the bronchopulmonary segments on sparse data by effectively using an anatomical atlas. The atlas is constructed from volumetric data and contains accurate information about the bronchopulmonary segments. A new ray-tracing based image registration is used for transferring the information from the atlas to a query image. Results show that the method is able to approximate the segments on sparse HRCT data with slice gaps of up to 25 millimeters.
Congruence Approximations for Entropy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
Very fast approximate reconstruction of MR images.
Angelidis, P A
1998-11-01
The ultra fast Fourier transform (UFFT) provides the means for a very fast computation of a magnetic resonance (MR) image, because it is implemented using only additions and no multiplications at all. It achieves this by approximating the complex exponential functions involved in the Fourier transform (FT) sum with computationally simpler periodic functions. This approximation introduces erroneous spectrum peaks of small magnitude. We examine the performance of this transform in some typical MRI signals. The results show that this transform can very quickly provide an MR image. It is proposed to be used as a replacement of the classically used FFT whenever a fast general overview of an image is required.
Characterizing inflationary perturbations: The uniform approximation
Habib, Salman; Heinen, Andreas; Heitmann, Katrin; Jungman, Gerard; Molina-Paris, Carmen
2004-10-15
The spectrum of primordial fluctuations from inflation can be obtained using a mathematically controlled, and systematically extendable, uniform approximation. Closed-form expressions for power spectra and spectral indices may be found without making explicit slow-roll assumptions. Here we provide details of our previous calculations, extend the results beyond leading-order in the approximation, and derive general error bounds for power spectra and spectral indices. Already at next-to-leading-order, the errors in calculating the power spectrum are less than a percent. This meets the accuracy requirement for interpreting next-generation cosmic microwave background observations.
An Approximation Scheme for Delay Equations.
1980-06-16
Brown Univ., Providence, RI, Lefschetz Center for Dynamical Systems; F. Kappel; approved for public release. Fragment from the introduction: "Fundamental for our approach is the following approximation theorem for semigroups of type ω: Theorem 1 ([10]). Let A_N, N = 1, 2, ..., and A" (text truncated in source).
Approximate learning algorithm in Boltzmann machines.
Yasuda, Muneki; Tanaka, Kazuyuki
2009-11-01
Boltzmann machines can be regarded as Markov random fields. In the binary case, they are equivalent to the Ising spin model in statistical mechanics. Exact learning in Boltzmann machines is NP-hard, so in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean-field methods. Finally, we show the validity of our algorithm using numerical experiments.
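As a point of reference for the mean-field family this letter builds on, here is a minimal naive mean-field sketch for an Ising-type Boltzmann machine with ±1 units; the letter's actual algorithms add belief propagation and the linear response correction, which this toy omits, and the couplings and fields below are invented for illustration:

```python
import numpy as np

def mean_field_magnetizations(J, h, iters=200):
    """Naive mean-field fixed point for +/-1 units:
        m_i = tanh(h_i + sum_j J_ij m_j).
    J is the symmetric coupling matrix (zero diagonal), h the bias fields."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m = np.tanh(h + J @ m)
    return m

# Two ferromagnetically coupled units with a small bias field.
J = np.array([[0.0, 0.4],
              [0.4, 0.0]])
h = np.array([0.1, 0.1])
m = mean_field_magnetizations(J, h)
```

For this two-unit instance the exact magnetization can be computed by enumerating the four states; the naive mean-field estimate lands close to it but slightly overestimates, the kind of bias the linear response correction is designed to reduce.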
Approximate Thermodynamics State Relations in Partially Ionized Gas Mixtures
Ramshaw, J D
2003-12-30
In practical applications, the thermodynamic state relations of partially ionized gas mixtures are usually approximated in terms of the state relations of the pure partially ionized constituent gases or materials in isolation. Such approximations are ordinarily based on an artificial partitioning or separation of the mixture into its constituent materials, with material k regarded as being confined by itself within a compartment or subvolume with volume fraction α_k and possessing a fraction β_k of the total internal energy of the mixture. In a mixture of N materials, the quantities α_k and β_k constitute an additional 2N − 2 independent variables. The most common procedure for determining these variables, and hence the state relations for the mixture, is to require that the subvolumes all have the same temperature and pressure. This intuitively reasonable procedure is easily shown to reproduce the correct thermal and caloric state equations for a mixture of neutral (non-ionized) ideal gases. Here we wish to point out that (a) this procedure leads to incorrect state equations for a mixture of partially ionized ideal gases, whereas (b) the alternative procedure of requiring that the subvolumes all have the same temperature and free electron density reproduces the correct thermal and caloric state equations for such a mixture. These results readily generalize to the case of partially degenerate and/or relativistic electrons, to a common approximation used to represent pressure ionization effects, and to two-temperature plasmas. This suggests that equating the subvolume electron number densities or chemical potentials instead of pressures is likely to provide a more accurate approximation even in nonideal plasma mixtures.
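The neutral-ideal-gas case that the equal-temperature, equal-pressure procedure gets right can be checked in a few lines. A small sketch with illustrative values (k_B, T, V, and the particle counts are arbitrary; this is the textbook Dalton check, not the paper's partially ionized analysis):

```python
import numpy as np

# Two neutral ideal gases with particle counts N_k share volume V at a
# common temperature T.  Requiring equal subvolume pressures
# N_k * kB * T / (alpha_k * V) gives alpha_k proportional to N_k, and the
# common subvolume pressure then equals the Dalton total pressure of the
# mixture -- the correct thermal state equation.
kB, T, V = 1.0, 2.0, 1.0
N = np.array([300.0, 700.0])

alpha = N / N.sum()                   # equal-pressure volume fractions
P_sub = N * kB * T / (alpha * V)      # pressure in each subvolume
P_mix = N.sum() * kB * T / V          # Dalton pressure of the mixture
```

The paper's point is that once ionization makes the electron density composition-dependent, this same equal-pressure closure no longer reproduces the correct mixture state equations.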
Analytical approximations to the Hotelling trace for digital x-ray detectors
NASA Astrophysics Data System (ADS)
Clarkson, Eric; Pineda, Angel R.; Barrett, Harrison H.
2001-06-01
The Hotelling trace is the signal-to-noise ratio for the ideal linear observer in a detection task. We provide an analytical approximation for this figure of merit when the signal is known exactly, the background is generated by a stationary random process, and the imaging system is an ideal digital x-ray detector. This approximation is based on assuming that the detector is infinite in extent. We test this approximation for finite-size detectors by comparing it to exact calculations using matrix inversion of the data covariance matrix. After verifying the validity of the approximation under a variety of circumstances, we use it to generate plots of the Hotelling trace as a function of pairs of parameters of the system, the signal and the background.
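The structure of the infinite-detector approximation is easy to illustrate in one dimension: for a stationary covariance on a periodic domain the covariance matrix is circulant, the DFT diagonalizes it, and the matrix-inverse Hotelling SNR equals a Fourier-domain sum over the noise power spectrum. A 1-D toy sketch (the signal, correlation model, and sizes are invented; the paper treats a 2-D x-ray detector):

```python
import numpy as np

# Hotelling SNR^2 = s^T K^{-1} s for a known signal s in stationary noise
# with covariance K.  On a periodic ("infinite") detector K is circulant,
# so SNR^2 also equals sum_f |S(f)|^2 / NPS(f), up to FFT normalization.
n = 64
x = np.arange(n)
s = np.exp(-0.5 * ((x - n / 2) / 3.0) ** 2)     # Gaussian signal profile

# Circulant covariance from an exponential correlation function.
c = 0.2 ** np.minimum(x, n - x)                 # first row of K
K = np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

snr2_exact = s @ np.linalg.solve(K, s)          # matrix-inverse route

S = np.fft.fft(s)
NPS = np.fft.fft(c).real                        # noise power spectrum
snr2_fourier = np.sum(np.abs(S) ** 2 / NPS) / n # Fourier-domain route
```

On a finite, non-periodic detector the two routes differ at the edges, which is precisely the discrepancy the paper quantifies when testing the approximation against exact matrix inversion.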
Small-angle approximation to the transfer of narrow laser beams in anisotropic scattering media
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1981-01-01
The broadening and the detected signal power of a laser beam traversing an anisotropic scattering medium were examined using the small-angle approximation to the radiative transfer equation, in which photons suffering large-angle deflections are neglected. To obtain tractable answers, simple Gaussian and non-Gaussian functions are assumed for the scattering phase function. Two other approximate approaches employed in the field to further simplify the small-angle approximation solutions are described, and the results obtained by one of them are compared with those obtained using the small-angle approximation. An exact method for obtaining the contribution of each higher order of scattering to the radiance field is examined, but no results are presented.
Approximation Algorithms for the Highway Problem under the Coupon Model
NASA Astrophysics Data System (ADS)
Hamane, Ryoso; Itoh, Toshiya; Tomita, Kouhei
When a store sells items to customers, the store wishes to set the prices of the items so as to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy the items, and also assume that each item i ∈ V has production cost di and each customer ej ∈ E has valuation vj on the bundle ej ⊆ V of items. When the store sells an item i ∈ V at price ri, the profit for the item i is pi = ri - di. The goal of the store is to decide the price of each item to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that pi ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of "loss-leader," and showed that the seller can get more total profit in the case that pi < 0 is allowed than in the case that pi < 0 is not allowed. In this paper, we consider the line highway problem (in which each customer is interested in an interval on the line of the items) and the cycle highway problem (in which each customer is interested in an interval on the cycle of the items), and show approximation algorithms for the line highway problem and the cycle highway problem in which the smallest valuation is s and the largest valuation is l (this is called an [s, l]-valuation setting) or all valuations are identical (this is called a single valuation setting).
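To make the setting concrete, here is a brute-force profit maximizer for a tiny line-highway instance. The instance, price grid, and function names are invented for illustration, and production costs are taken as zero; the paper's contribution is approximation algorithms with provable guarantees, not this exhaustive search:

```python
from itertools import product

def profit(prices, customers):
    """Line highway profit: each customer wants the interval [a, b] of
    items and buys it iff its total price is at most her valuation v;
    the store then collects the interval's total price."""
    total = 0.0
    for a, b, v in customers:
        cost = sum(prices[a:b + 1])
        if cost <= v:
            total += cost
    return total

def best_prices(n, customers, grid):
    """Exhaustively try every price vector over the given grid."""
    return max(product(grid, repeat=n),
               key=lambda p: profit(p, customers))

# Three items on a line; three customers with overlapping intervals.
customers = [(0, 1, 3.0), (1, 2, 3.0), (0, 2, 5.0)]
p = best_prices(3, customers, [0.0, 1.0, 2.0, 3.0])
```

Even this toy shows the tension the abstract describes: raising any single price risks pricing an overlapping customer out of the market, and the exhaustive search is exponential in n, which is why approximation algorithms are of interest.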