ERIC Educational Resources Information Center
Michaelides, Michalis P.; Haertel, Edward H.
2014-01-01
The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
Stanke, Monika; Adamowicz, Ludwik
2013-10-01
In this work, we describe how the energies obtained in molecular calculations performed without assuming the Born-Oppenheimer (BO) approximation can be augmented with corrections accounting for the leading relativistic effects. Unlike the conventional BO approach, where these effects only concern the relativistic interactions between the electrons, the non-BO approach also accounts for the relativistic effects due to the nuclei and due to the coupling of the electron and nuclear motions. In the numerical sections, the results obtained with the two approaches are compared. The first comparison concerns the dissociation energies of the two-electron isotopologues of the H2 molecule (H2, HD, D2, and T2) and of the HeH(+) ion. The comparison shows that, as expected, the differences in the relativistic contributions obtained with the two approaches increase as the nuclei become lighter. The second comparison concerns the relativistic corrections to all 23 pure vibrational states of the HD(+) ion. An interesting charge asymmetry caused by the nonadiabatic electron-nucleus interaction appears in this system, and this effect significantly increases with the vibrational excitation. The comparison of the non-BO results with the results obtained with the conventional BO approach, which in the lowest order does not describe the charge-asymmetry effect, reveals how this effect affects the values of the relativistic corrections. PMID:23679131
Approximate natural vibration analysis of rectangular plates with openings using assumed mode method
NASA Astrophysics Data System (ADS)
Cho, Dae Seung; Vladimir, Nikola; Choi, Tae MuK
2013-09-01
Natural vibration analysis of plates with openings of different shapes represents an important issue in naval architecture and ocean engineering applications. In this paper, a procedure for vibration analysis of plates with openings and arbitrary edge constraints is presented. It is based on the assumed mode method, where natural frequencies and modes are determined by solving an eigenvalue problem of a multi-degree-of-freedom system matrix equation derived by using Lagrange's equations of motion. The presented solution extends a procedure for natural vibration analysis of rectangular plates without openings, which has recently been presented in the literature. The effect of an opening is taken into account in an intuitive way, i.e. by subtracting its energy from the total energy of the plate without the opening. Illustrative numerical examples include dynamic analysis of rectangular plates with rectangular, elliptic, circular, and oval openings, with various plate thicknesses and different combinations of boundary conditions. The results are compared with those obtained by the finite element method (FEM) as well as those available in the relevant literature, and very good agreement is achieved.
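The computational core of the assumed mode method described above is a generalized eigenvalue problem K q = w^2 M q assembled from Lagrange's equations. The sketch below solves such a problem for a small hypothetical mass/stiffness pair (the matrices are illustrative values only, not a plate model):

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 3-DOF mass and stiffness matrices (illustrative values only)
M = np.diag([2.0, 1.0, 1.0])
K = np.array([[ 4.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# Generalized eigenvalue problem K q = w^2 M q from Lagrange's equations
w2, modes = eigh(K, M)   # eigenvalues returned in ascending order
freqs = np.sqrt(w2)      # natural circular frequencies (rad/s)
print(freqs)
```

In the paper's setting, K and M would come from the assumed-mode energy integrals, with the opening's contribution subtracted before solving.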
Rocca, Jennifer D; Hall, Edward K; Lennon, Jay T; Evans, Sarah E; Waldrop, Mark P; Cotner, James B; Nemergut, Diana R; Graham, Emily B; Wallenstein, Matthew D
2015-08-01
For any enzyme-catalyzed reaction to occur, the corresponding protein-encoding genes and transcripts are necessary prerequisites. Thus, a positive relationship between the abundance of genes or transcripts and the corresponding process rates is often assumed. To test this assumption, we conducted a meta-analysis of the relationships between gene and/or transcript abundances and corresponding process rates. We identified 415 studies that quantified the abundance of genes or transcripts for enzymes involved in carbon or nitrogen cycling. However, in only 59 of these manuscripts did the authors report both gene or transcript abundance and rates of the appropriate process. We found that within studies there was a significant but weak positive relationship between gene abundance and the corresponding process. Correlations were not strengthened by accounting for habitat type, differences among genes, or reaction products versus reactants, suggesting that other ecological and methodological factors may affect the strength of this relationship. Our findings highlight the need for fundamental research on the factors that control transcription, translation, and enzyme function in natural systems to better link genomic and transcriptomic data to ecosystem processes. PMID:25535936
Mapping biological entities using the longest approximately common prefix method
2014-01-01
Background The significant growth in the volume of electronic biomedical data in recent decades has pointed to the need for approximate string matching algorithms that can expedite tasks such as named entity recognition, duplicate detection, terminology integration, and spelling correction. The task of source integration in the Unified Medical Language System (UMLS) requires considerable expert effort despite the presence of various computational tools. This problem warrants the search for a new method for approximate string matching and its UMLS-based evaluation. Results This paper introduces the Longest Approximately Common Prefix (LACP) method as an algorithm for approximate string matching that runs in linear time. We compare the LACP method for performance, precision and speed to nine other well-known string matching algorithms. As test data, we use two multiple-source samples from the Unified Medical Language System (UMLS) and two SNOMED Clinical Terms-based samples. In addition, we present a spell checker based on the LACP method. Conclusions The Longest Approximately Common Prefix method completes its string similarity evaluations in less time than all nine string similarity methods used for comparison. The Longest Approximately Common Prefix outperforms these nine approximate string matching methods in its Maximum F1 measure when evaluated on three out of the four datasets, and in its average precision on two of the four datasets. PMID:24928653
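As a rough illustration of the idea behind a longest-approximately-common-prefix score (the published LACP scoring may differ in its mismatch handling and normalization; this is only a sketch), one linear-time version is:

```python
def lacp_similarity(a: str, b: str, max_mismatches: int = 1) -> float:
    """Illustrative longest-approximately-common-prefix score.

    Walks both strings from the start, tolerating up to
    `max_mismatches` character mismatches, and normalizes the
    matched prefix length by the longer string's length.
    Runs in linear time, as the LACP method does.
    """
    mismatches = 0
    length = 0
    for ca, cb in zip(a.lower(), b.lower()):
        if ca != cb:
            mismatches += 1
            if mismatches > max_mismatches:
                break
        length += 1
    return length / max(len(a), len(b), 1)

print(lacp_similarity("hypertension", "hypertensive"))
```

The single left-to-right pass is what keeps the evaluation faster than edit-distance-style methods, which are quadratic in string length.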
Performance Improvement Assuming Complexity
ERIC Educational Resources Information Center
Rowland, Gordon
2007-01-01
Individual performers, work teams, and organizations may be considered complex adaptive systems, while most current human performance technologies appear to assume simple determinism. This article explores the apparent mismatch and speculates on future efforts to enhance performance if complexity rather than simplicity is assumed. Included are…
An improved approximation algorithm for scaffold filling to maximize the common adjacencies.
Liu, Nan; Jiang, Haitao; Zhu, Daming; Zhu, Binhai
2013-01-01
Scaffold filling is a new combinatorial optimization problem in genome sequencing. The one-sided scaffold filling problem can be described as follows: given an incomplete genome I and a complete (reference) genome G, fill the missing genes into I such that the number of common (string) adjacencies between the resulting genome I' and G is maximized. This problem is NP-complete for genomes with duplicated genes, and the best known approximation factor is 1.33, obtained with a greedy strategy. In this paper, we prove a better lower bound on the optimal solution and devise a new algorithm, exploiting the maximum matching method and a local improvement technique, which improves the approximation factor to 1.25. For genomes with gene repetitions, this is the only known NP-complete problem that admits an approximation with a small constant factor (less than 1.5). PMID:24334385
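The objective being maximized, common string adjacencies, can be sketched directly. The snippet below counts the multiset intersection of unordered neighbouring gene pairs; it illustrates the objective only, not the 1.25-approximation algorithm, and the genomes are hypothetical:

```python
from collections import Counter

def adjacencies(genome):
    # Unordered neighbouring gene pairs, counted with multiplicity.
    return Counter(tuple(sorted(pair)) for pair in zip(genome, genome[1:]))

def common_adjacencies(g1, g2):
    # Size of the multiset intersection of the two adjacency multisets.
    return sum((adjacencies(g1) & adjacencies(g2)).values())

# Hypothetical example: filling the missing gene 'c' into I increases
# the number of adjacencies shared with the reference G.
I      = list("abdab")
filled = list("abcdab")
G      = list("abcdabc")
print(common_adjacencies(I, G), common_adjacencies(filled, G))
```

The one-sided problem asks where to insert the missing genes so that this count is as large as possible.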
Collaboration: Assumed or Taught?
ERIC Educational Resources Information Center
Kaplan, Sandra N.
2014-01-01
The relationship between collaboration and gifted and talented students often is assumed to be an easy and successful learning experience. However, the transition from working alone to working with others necessitates an understanding of issues related to ability, sociability, and mobility. Collaboration has been identified as both an asset and a…
New officers assume leadership
NASA Astrophysics Data System (ADS)
The 2000-2002 AGU officers assumed their leadership roles on July 1. On the weekend of May 27-29, the new council members and committee chairmen participated in a leadership conference. The primary focus was to set priorities and goals and to exchange information that will help each member of AGU's leadership team contribute effectively throughout his or her term. Participants emphasized the importance of continuing to encourage communication, the need to strengthen ties throughout the world, and the need to address many of the complex problems of interest to us today. Another primary topic was continuing to increase the effectiveness of AGU's electronic communications, both formal electronic publications and informal communication among the leadership and members.
NASA Astrophysics Data System (ADS)
Râsander, M.; Moram, M. A.
2015-10-01
We have performed density functional calculations of the structure and elastic constants of 18 semiconductors and insulators using a range of local, semi-local, and hybrid density functional approximations. We find that most of the approximations have a very small error in the lattice constants, of the order of 1%, while the errors in the elastic constants and bulk modulus are much larger, at about 10% or more. When comparing experimental and theoretical lattice constants and bulk moduli we have included zero-point phonon effects. These effects make the experimental reference lattice constants 0.019 Å smaller on average, while making the bulk modulus 4.3 GPa stiffer on average. According to our study, the overall best performing density functional approximations for determining the structure and elastic properties are the PBEsol functional, the two hybrid density functionals PBE0 and HSE (Heyd, Scuseria, and Ernzerhof), and the AM05 functional.
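The quoted average zero-point corrections act as simple shifts on the experimental reference values before comparison with theory. A minimal worked example, with hypothetical measured values:

```python
# Applying the average zero-point phonon corrections quoted above to a
# hypothetical experimental reference (the measured values below are
# illustrative, not taken from the paper).
a_exp = 5.653   # measured lattice constant (Angstrom)
B_exp = 75.0    # measured bulk modulus (GPa)

a_ref = a_exp - 0.019   # zero-point effects make the reference 0.019 A smaller
B_ref = B_exp + 4.3     # ...and the bulk modulus 4.3 GPa stiffer

a_dft = 5.660           # hypothetical DFT lattice constant
print(f"lattice error: {100 * (a_dft - a_ref) / a_ref:.2f}%")
```

Comparing against the corrected reference rather than the raw measurement is what allows a like-for-like comparison with static-lattice DFT results.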
Achtman, Mark; Zhou, Zhemin; Didelot, Xavier
2015-01-01
In 2013 Zhou et al. concluded that Salmonella enterica serovar Agona represents a genetically monomorphic lineage of recent ancestry, whose most recent common ancestor existed in 1932, or earlier. The Abstract stated ‘Agona consists of three lineages with minimal mutational diversity: only 846 single nucleotide polymorphisms (SNPs) have accumulated in the non-repetitive, core genome since Agona evolved in 1932 and subsequently underwent a major population expansion in the 1960s.’ These conclusions have now been criticized by Pettengill, who claims that the evolutionary models used to date Agona may not have been appropriate, the dating estimates were inaccurate, and the age of emergence of Agona should have been qualified by an upper limit reflecting the date of its divergence from an outgroup, serovar Soerenga. We dispute these claims. Firstly, Pettengill’s analysis of Agona is not justifiable on technical grounds. Secondly, an upper limit for divergence from an outgroup would only be meaningful if the outgroup were closely related to Agona, but close relatives of Agona are yet to be identified. Thirdly, it is not possible to reliably date the time of divergence between Agona and Soerenga. We conclude that Pettengill’s criticism is comparable to a tempest in a teapot. PMID:26274924
Assuming Multiple Roles: The Time Crunch.
ERIC Educational Resources Information Center
McKitric, Eloise J.
Women's increased labor force participation and continued responsibility for most household work and child care have resulted in "time crunch." This strain results from assuming multiple roles within a fixed time period. The existence of an egalitarian family has been assumed by family researchers and writers but has never been verified. Time…
Evolution of assumed stress hybrid finite element
NASA Technical Reports Server (NTRS)
Pian, T. H. H.
1984-01-01
Early versions of the assumed stress hybrid finite elements were based on the a priori satisfaction of stress equilibrium conditions. In the new version such conditions are relaxed but are introduced through additional internal displacement functions acting as Lagrange multipliers. A rational procedure is to choose the displacement terms such that the resulting strains are complete polynomials up to the same degree as that of the assumed stresses. Several example problems indicate that this method yields optimal element properties.
Assumed modes method and flexible multibody dynamics
NASA Technical Reports Server (NTRS)
Tadikonda, S. S. K.; Mordfin, T. G.; Hu, T. G.
1993-01-01
The use of assumed modes in flexible multibody dynamics algorithms requires the evaluation of several domain dependent integrals that are affected by the type of modes used. The implications of these integrals - often called zeroth, first and second order terms - are investigated in this paper, for arbitrarily shaped bodies. Guidelines are developed for the use of appropriate boundary conditions while generating the component modal models. The issue of whether and which higher order terms must be retained is also addressed. Analytical results, and numerical results using the Shuttle Remote Manipulator System as the multibody system, are presented to qualitatively and quantitatively address these issues.
Unit hydrograph approximations assuming linear flow through topologically random channel networks.
Troutman, B.M.; Karlinger, M.R.
1985-01-01
The instantaneous unit hydrograph (IUH) of a drainage basin is derived in terms of fundamental basin characteristics (Z, alpha, beta), where alpha parameterizes the link (channel segment) length distribution, and beta is a vector of hydraulic parameters. -from Authors
46 CFR 174.075 - Compartments assumed flooded: general.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Units § 174.075 Compartments assumed flooded: general. The individual flooding of each of the... § 174.065 (a). Simultaneous flooding of more than one compartment must be assumed only when indicated...
24 CFR 203.41 - Free assumability; exceptions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Free assumability; exceptions. 203... § 203.41 Free assumability; exceptions. (a) Definitions. As used in this section: (1) Low- or moderate... benefit of any member, founder, contributor or individual. (b) Policy of free assumability with...
24 CFR 234.66 - Free assumability; exceptions.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Free assumability; exceptions. 234... CONDOMINIUM OWNERSHIP MORTGAGE INSURANCE Eligibility Requirements-Individually Owned Units § 234.66 Free assumability; exceptions. For purposes of HUD's policy of free assumability with no restrictions, as...
24 CFR 203.512 - Free assumability; exceptions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Free assumability; exceptions. 203... AUTHORITIES SINGLE FAMILY MORTGAGE INSURANCE Servicing Responsibilities General Requirements § 203.512 Free assumability; exceptions. (a) Policy of free assumability with no restrictions. A mortgagee shall not...
Spring Allergies? Don't Assume It's Only Pollen
Identifying your triggers is the first step toward reducing your symptoms, experts say. You may believe pollen is the culprit. But other substances such as ...
Abstraction and Assume-Guarantee Reasoning for Automated Software Verification
NASA Technical Reports Server (NTRS)
Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.
2004-01-01
Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software, and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework, and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
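The simplest assume-guarantee rule underlying such frameworks can be illustrated in a toy trace semantics: if M1 under assumption A satisfies P, and every behaviour of M2 is allowed by A, then the composition M1 || M2 satisfies P. A minimal sketch with hypothetical finite trace sets (real model checkers work on automata, not explicit sets):

```python
# Toy trace semantics: a component is a set of traces; composition is
# the set of traces both sides agree on; "satisfies" is containment.
def satisfies(behaviours, prop):
    return behaviours <= prop

def compose(m, env):
    return m & env

M1 = {"send.ack", "send.err"}   # hypothetical component behaviours
M2 = {"send.ack"}
A  = {"send.ack"}               # assumption about M1's environment
P  = {"send.ack"}               # safety property

premise1 = satisfies(compose(M1, A), P)   # <A> M1 <P>
premise2 = satisfies(M2, A)               # <true> M2 <A>
conclusion = satisfies(compose(M1, M2), P)
print(premise1, premise2, conclusion)
```

In this semantics the rule is sound: M2 being a subset of A gives compose(M1, M2) a subset of compose(M1, A), which the first premise places inside P. The payoff in practice is that A is typically much smaller than M2.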
Assume-Guarantee Abstraction Refinement Meets Hybrid Systems
NASA Technical Reports Server (NTRS)
Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas
2014-01-01
Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction refinement in the context of hybrid automata.
Solutions of contact problems by the assumed stress hybrid model
NASA Technical Reports Server (NTRS)
Kubomura, K.; Pian, T. H. H.
1980-01-01
A method was developed for contact problems which may be either frictional or frictionless and may involve extensive sliding between deformable bodies. It was based on an assumed stress hybrid approach and on an incremental variational principle for which the Euler's equations of the functional include the equilibrium and compatibility conditions at the contact surface. The tractions at an assumed contact surface were introduced as Lagrangian multipliers in the formulation. It was concluded from the results of several example solutions that the extensive sliding contact between deformable bodies can be solved by the present method.
A Report on Women West Point Graduates Assuming Nontraditional Roles.
ERIC Educational Resources Information Center
Yoder, Janice D.; Adams, Jerome
In 1980 the first women graduated from the military and college training program at West Point. To investigate the progress of both male and female graduates as they assume leadership roles in the regular Army, 35 women and 113 men responded to a survey assessing career involvement and planning, commitment and adjustment, and satisfaction.…
Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang
2015-07-31
We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA), simultaneously, for a set of common-cation binary semiconductors, such as III-V compounds (Ga or In)X with X = N, P, As, Sb, and II-VI compounds (Zn or Cd)X with X = O, S, Se, Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, and X, (2) the separation of p- and d-orbital-derived valence bands, and (3) the conduction band effective masses to experimental values, and doing so simultaneously for common-cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.
Pixelwise-adaptive blind optical flow assuming nonstationary statistics.
Foroosh, Hassan
2005-02-01
In this paper, we address some of the major issues in optical flow within a new framework assuming nonstationary statistics for the motion field and for the errors. Problems addressed include the preservation of discontinuities, model/data errors, outliers, confidence measures, and performance evaluation. In solving these problems, we assume that the statistics of the motion field and the errors are not only spatially varying, but also unknown. We thus derive a blind adaptive technique based on generalized cross validation for estimating an independent regularization parameter for each pixel. Our formulation is pixelwise and combines existing first- and second-order constraints with a new second-order temporal constraint. We derive a new confidence measure for an adaptive rejection of erroneous and outlying motion vectors, and compare our results to other techniques in the literature. A new performance measure is also derived for estimating the signal-to-noise ratio for real sequences when the ground truth is unknown. PMID:15700527
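Generalized cross validation chooses a regularization parameter by minimizing GCV(lambda) = n * ||(I - A(lambda)) y||^2 / tr(I - A(lambda))^2, where A(lambda) is the influence (hat) matrix. A hedged ridge-regression sketch of the criterion follows; this is the general GCV recipe on synthetic data, not the paper's pixelwise flow formulation:

```python
import numpy as np

def gcv_score(X, y, lam):
    """GCV(lam) for ridge regression, where the influence matrix is
    A = X (X^T X + lam I)^-1 X^T. Smaller is better."""
    n, p = X.shape
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - A @ y
    return n * (resid @ resid) / np.trace(np.eye(n) - A) ** 2

# Synthetic data: y is a noisy linear function of X.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=50)

lams = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
best = min(lams, key=lambda l: gcv_score(X, y, l))
print("lambda chosen by GCV:", best)
```

The "blind" aspect in the paper is that this selection needs no knowledge of the noise variance; the abstract's contribution is doing it independently per pixel.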
Modeling turbulent/chemistry interactions using assumed pdf methods
NASA Technical Reports Server (NTRS)
Gaffney, R. L, Jr.; White, J. A.; Girimaji, S. S.; Drummond, J. P.
1992-01-01
Two assumed probability density functions (pdfs) are employed for computing the effect of temperature fluctuations on chemical reaction. The pdfs assumed for this purpose are the Gaussian and the beta densities of the first kind. The pdfs are first used in a parametric study to determine the influence of temperature fluctuations on the mean reaction-rate coefficients. Results indicate that temperature fluctuations significantly affect the magnitude of the mean reaction-rate coefficients of some reactions depending on the mean temperature and the intensity of the fluctuations. The pdfs are then tested on a high-speed turbulent reacting mixing layer. Results clearly show a decrease in the ignition delay time due to increases in the magnitude of most of the mean reaction rate coefficients.
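The mean rate coefficient under an assumed Gaussian temperature PDF can be estimated by integrating an Arrhenius expression k(T) = A T^b exp(-Ta/T) against the PDF. The sketch below uses Gauss-Hermite quadrature with illustrative parameters, not a specific reaction from the paper:

```python
import numpy as np

def mean_rate(Tmean, Trms, A=1.0e8, b=0.0, Ta=1.5e4, npts=40):
    """Mean Arrhenius rate coefficient k(T) = A T^b exp(-Ta/T) under an
    assumed Gaussian temperature PDF N(Tmean, Trms^2), via probabilists'
    Gauss-Hermite quadrature (weight exp(-x^2/2))."""
    x, w = np.polynomial.hermite_e.hermegauss(npts)
    T = np.clip(Tmean + Trms * x, 300.0, None)  # clip unphysical cold tail
    k = A * T**b * np.exp(-Ta / T)
    return (w @ k) / np.sqrt(2.0 * np.pi)       # normalize to an expectation

k_lam = mean_rate(1200.0, 0.0)     # no fluctuations
k_turb = mean_rate(1200.0, 150.0)  # 150 K rms fluctuations
print(k_turb / k_lam)
```

Because k(T) is strongly convex near these parameters, the fluctuating mean exceeds the laminar value, which is the mechanism behind the reduced ignition delay reported above.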
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.
Chemically reacting supersonic flow calculation using an assumed PDF model
NASA Technical Reports Server (NTRS)
Farshchi, M.
1990-01-01
This work is motivated by the need to develop accurate models for chemically reacting compressible turbulent flow fields that are present in a typical supersonic combustion ramjet (SCRAMJET) engine. In this paper the development of a new assumed probability density function (PDF) reaction model for supersonic turbulent diffusion flames and its implementation into an efficient Navier-Stokes solver are discussed. The application of this model to a supersonic hydrogen-air flame will be considered.
17. Photographic copy of photograph. Location unknown but assumed to ...
17. Photographic copy of photograph. Location unknown but assumed to be upper end of canal. Features no longer extant. (Source: U.S. Department of Interior. Office of Indian Affairs. Indian Irrigation Service. Annual Report, Fiscal Year 1925. Vol. I, Narrative and Photographs, Irrigation District #4, California and Southern Arizona, RG 75, Entry 655, Box 28, National Archives, Washington, DC.) Photographer unknown. MAIN (TITLED FLORENCE) CANAL, WASTEWAY, SLUICEWAY, & BRIDGE, 1/26/25. - San Carlos Irrigation Project, Marin Canal, Amhurst-Hayden Dam to Picacho Reservoir, Coolidge, Pinal County, AZ
Finite elements based on consistently assumed stresses and displacements
NASA Technical Reports Server (NTRS)
Pian, T. H. H.
1985-01-01
Finite element stiffness matrices are derived using an extended Hellinger-Reissner principle in which internal displacements are added to serve as Lagrange multipliers to introduce the equilibrium constraint in each element. In a consistent formulation the assumed stresses are initially unconstrained and complete polynomials and the total displacements are also complete such that the corresponding strains are complete in the same order as the stresses. Several examples indicate that resulting properties for elements constructed by this consistent formulation are ideal and are less sensitive to distortions of element geometries. The method has been used to find the optimal stress terms for plane elements, 3-D solids, axisymmetric solids, and plate bending elements.
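For reference, the classical two-field Hellinger-Reissner functional on which the extended principle builds can be written as follows (notation assumed here: S the compliance tensor, ∇^s u the symmetric displacement gradient, t̄ the prescribed tractions on S_t):

```latex
\Pi_{HR}(\boldsymbol{\sigma}, \mathbf{u}) =
  \int_{V} \left[ -\tfrac{1}{2}\,\boldsymbol{\sigma} : \mathbf{S} : \boldsymbol{\sigma}
  + \boldsymbol{\sigma} : \nabla^{s}\mathbf{u} \right] dV
  - \int_{S_t} \bar{\mathbf{t}} \cdot \mathbf{u} \, dS
```

In the extended principle of the abstract, additional internal displacements enter as Lagrange multipliers whose stationarity conditions impose the element-wise equilibrium constraint on the assumed stresses.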
Organohalogens in nature: More widespread than previously assumed
Asplund, G.; Grimvall, A.
1991-08-01
Although the natural production of organohalogens has been observed in several studies, it is generally assumed to be much smaller than the industrial production of these compounds. Nevertheless, two important natural sources have been known since the 1970s: red algae in marine ecosystems produce large amounts of brominated compounds, and methyl halides of natural origin are present in the atmosphere. During the past few years it has been shown that organohalogens are so widespread in groundwater, surface water, and soil that all samples in the studies referred to contain measurable amounts of absorbable organohalogens (AOX). The authors document the widespread occurrence of organohalogens in unpolluted soil and water and discuss possible sources of these compounds. It has been suggested that these organohalogens originate from long-range atmospheric transport of industrially produced compounds. The authors review existing evidence of enzymatically mediated halogenation of organic matter in soil and show that, most probably, natural halogenation in the terrestrial environment is the largest source.
An assumed partition algorithm for determining processor inter-communication
Baker, A H; Falgout, R D; Yang, U M
2005-09-23
The recent advent of parallel machines with tens of thousands of processors is presenting new challenges for obtaining scalability. A particular challenge for large-scale scientific software is determining the inter-processor communications required by the computation when a global description of the data is unavailable or too costly to store. We present a type of rendezvous algorithm that determines communication partners in a scalable manner by assuming the global distribution of the data. We demonstrate the scaling properties of the algorithm on up to 32,000 processors in the context of determining communication patterns for a matrix-vector multiply in the hypre software library. Our algorithm is very general and is applicable to a variety of situations in parallel computing.
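The assumed-partition idea can be sketched in a few lines. The function below (names hypothetical; the actual hypre algorithm layers a distributed rendezvous on top of this query to recover the true owners) computes in O(1), with no global data structure, which processor is assumed to own a global index under a contiguous block partition:

```python
def assumed_owner(i, N, P):
    """Assumed owner of global index i when N items are block-partitioned
    over P processors, block sizes differing by at most one."""
    q, r = divmod(N, P)         # first r processors hold q + 1 items
    cut = r * (q + 1)           # indices covered by the larger blocks
    if i < cut:
        return i // (q + 1)
    return r + (i - cut) // q

# Every processor can answer ownership queries locally, so communication
# partners can be determined without storing a global description.
owners = [assumed_owner(i, 10, 4) for i in range(10)]
```

Because the assumed distribution is computable everywhere, the rendezvous step only has to reconcile the (usually small) difference between assumed and actual ownership.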
Students Learn Statistics When They Assume a Statistician's Role.
ERIC Educational Resources Information Center
Sullivan, Mary M.
Traditional elementary statistics instruction for non-majors has focused on computation. Rarely have students had an opportunity to interact with real data sets or to use questioning to drive data analysis, common activities among professional statisticians. Inclusion of data gathering and analysis into whole class and small group activities…
Pythagorean Approximations and Continued Fractions
ERIC Educational Resources Information Center
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
Assumed Probability Density Functions for Shallow and Deep Convection
NASA Astrophysics Data System (ADS)
Bogenschutz, Peter A.; Krueger, Steven K.; Khairoutdinov, Marat
2010-04-01
The assumed joint probability density function (PDF) between vertical velocity and conserved temperature and total water scalars has been suggested to be a relatively computationally inexpensive and unified subgrid-scale (SGS) parameterization for boundary layer clouds and turbulent moments. This paper analyzes the performance of five families of PDFs using large-eddy simulations of deep convection, shallow convection, and a transition from stratocumulus to trade wind cumulus. Three of the PDF families are based on the double Gaussian form and the remaining two are the single Gaussian and a Double Delta Function (analogous to a mass flux model). The assumed PDF method is tested for grid sizes as small as 0.4 km to as large as 204.8 km. In addition, studies are performed for PDF sensitivity to errors in the input moments and for how well the PDFs diagnose some higher-order moments. In general, the double Gaussian PDFs more accurately represent SGS cloud structure and turbulence moments in the boundary layer compared to the single Gaussian and Double Delta Function PDFs for the range of grid sizes tested. This is especially true for small SGS cloud fractions. While the most complex PDF, Lewellen-Yoh, better represents shallow convective cloud properties (cloud fraction and liquid water mixing ratio) compared to the less complex Analytic Double Gaussian 1 PDF, there appears to be no advantage in implementing Lewellen-Yoh for deep convection. However, the Analytic Double Gaussian 1 PDF better represents the liquid water flux, is less sensitive to errors in the input moments, and diagnoses higher order moments more accurately. Between the Lewellen-Yoh and Analytic Double Gaussian 1 PDFs, it appears that neither family is distinctly better at representing cloudy layers. However, due to the reduced computational cost and fairly robust results, it appears that the Analytic Double Gaussian 1 PDF could be an ideal family for SGS cloud and turbulence representation in coarse
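As a minimal illustration of the assumed PDF method, the sketch below diagnoses a subgrid cloud fraction from the simplest family discussed above, a single Gaussian, as the probability that total water exceeds saturation. This is a standard single-Gaussian closure written with hypothetical variable names, not the paper's code.

```python
import math

def cloud_fraction_single_gaussian(qt_mean, q_sat, sigma_qt):
    """SGS cloud fraction under a single-Gaussian PDF of total water:
    the probability that total water qt exceeds saturation q_sat."""
    s = (qt_mean - q_sat) / sigma_qt
    return 0.5 * (1.0 + math.erf(s / math.sqrt(2.0)))

# A grid box that is exactly saturated on average is half cloudy
cf_mid = cloud_fraction_single_gaussian(10.0, 10.0, 1.0)
```

The double Gaussian families evaluated in the paper generalize this by mixing two such modes, which is what lets them capture small cloud fractions and skewed drafts.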
Collisionless tearing in a field-reversed sheet pinch assuming nonparallel propagation
NASA Technical Reports Server (NTRS)
Quest, K. B.; Coroniti, F. V.
1985-01-01
The problem of collisionless linear tearing is examined assuming a wave vector with a component normal to the equilibrium field. The geometry is defined and the general form of the linear dispersion equation is calculated. The linear theory results when k is parallel to B are reviewed, and Ampere's law is calculated for the external adiabatic region when k times B does not equal zero, using two-fluid theory. A solution is obtained for the approximate form of the perturbed currents and vector potential assuming quasi-parallel k. The resonant current contributions within the singular layer are calculated, obtaining an estimate of the dispersion equation. The form of the adiabatic currents within the singular layer is calculated, showing that an x-z current system persists even in the limit k perpendicular to B goes to zero. Finally, the perturbed vector potential solutions across the singular layer are matched to obtain the shape of the complete eigenfunction.
One sign ion mobile approximation
NASA Astrophysics Data System (ADS)
Barbero, G.
2011-12-01
The electrical response of an electrolytic cell to an external excitation is discussed in the simple case where only one group of positive and negative ions is present. The particular case where the diffusion coefficient of the negative ions, Dm, is very small with respect to that of the positive ions, Dp, is considered. In this framework, it is discussed under what conditions the one mobile ion approximation, in which the negative ions are assumed fixed, works well. The analysis assumes that the external excitation is sinusoidal with circular frequency ω, as used in the impedance spectroscopy technique. We show that there exists a circular frequency, ω*, such that for ω > ω*, the one mobile ion approximation works well. We also show that for Dm ≪ Dp, ω* is independent of Dm.
Computing Functions by Approximating the Input
ERIC Educational Resources Information Center
Goldberg, Mayer
2012-01-01
In computing real-valued functions, it is ordinarily assumed that the input to the function is known, and it is the output that we need to approximate. In this work, we take the opposite approach: we show how to compute the values of some transcendental functions by approximating the input to these functions, and obtaining exact answers for their…
Approximation by hinge functions
Faber, V.
1997-05-01
Breiman has defined "hinge functions" for use as basis functions in least squares approximations to data. A hinge function is the max (or min) function of two linear functions. In this paper, the author assumes the existence of a smooth function f(x) and a set of samples of the form (x, f(x)) drawn from a probability distribution ρ(x). The author hopes to find the best fitting hinge function h(x) in the least squares sense. There are two problems with this plan. First, Breiman has suggested an algorithm to perform this fit. The author shows that this algorithm is not robust and also shows how to create examples on which the algorithm diverges. Second, if the author tries to use the data to minimize the fit in the usual discrete least squares sense, the functional that must be minimized is continuous in the variables, but has a derivative which jumps at the data. This paper takes a different approach. This approach is an example of a method that the author has developed called "Monte Carlo Regression". (A paper on the general theory is in preparation.) The author shall show that since the function f is continuous, the analytic form of the least squares equation is continuously differentiable. A local minimum is solved for by using Newton's method, where the entries of the Hessian are estimated directly from the data by Monte Carlo. The algorithm has the desirable properties that it is quadratically convergent from any starting guess sufficiently close to a solution and that each iteration requires only a linear system solve.
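A minimal version of the alternating fit that the paper critiques can be sketched as follows. This is a toy implementation under simplifying assumptions, shown on noiseless data where it behaves; the paper demonstrates that this style of algorithm is not robust in general.

```python
import numpy as np

def hinge(x, w):
    """Hinge function: the max of two affine pieces a1 + b1*x, a2 + b2*x."""
    a1, b1, a2, b2 = w
    return np.maximum(a1 + b1 * x, a2 + b2 * x)

def fit_hinge(x, y, w0, iters=50):
    """Breiman-style alternating fit: assign each sample to the currently
    active piece, refit each piece by least squares, repeat."""
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        left = (w[0] + w[1] * x) >= (w[2] + w[3] * x)
        for mask, k in ((left, 0), (~left, 2)):
            if mask.sum() >= 2:
                A = np.c_[np.ones(mask.sum()), x[mask]]
                w[k:k + 2], *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    return w

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 400)
y = np.maximum(1.0 + 0.5 * x, -1.0 + 3.0 * x)   # noiseless hinge data
w = fit_hinge(x, y, w0=[0.0, 0.0, 0.0, 1.0])
mse = float(np.mean((hinge(x, w) - y) ** 2))
```

The jump in the discrete objective's derivative at the data, which the paper's Monte Carlo Regression avoids, corresponds here to samples flipping between the two masks.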
Common Space, Common Time, Common Work
ERIC Educational Resources Information Center
Shank, Melody J.
2005-01-01
The most valued means of support and learning cited by new teachers at Poland Regional High School in rural Maine are the collegial interactions that common workspace, common planning time, and common tasks make possible. The school has used these everyday structures to enable new and veteran teachers to converse about curricular and pedagogical…
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. Relations between approximate flavor symmetries and models of natural flavor conservation and democracy are explored. Implications for neutrino physics are also discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-27
... Responsibilities; Notice of Proposed Information Collection: Comment Request AGENCY: Office of the Assistant...: Environmental Review Procedures for Entities Assuming HUD Environmental Responsibilities. OMB Control...
Common Schools for Common Education.
ERIC Educational Resources Information Center
Callan, Eamonn
1995-01-01
A vision of common education for citizens of a liberal democracy warrants faith in common schools as an instrument of social good. Some kinds of separate schooling are not inconsistent with common schooling and are even desirable. Equal respect, as defined by J. Rawls, is a basis for common education. (SLD)
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
The Motivation of Teachers to Assume the Role of Cooperating Teacher
ERIC Educational Resources Information Center
Jonett, Connie L. Foye
2009-01-01
The Motivation of Teachers to Assume the Role of Cooperating Teacher This study explored a phenomenological understanding of the motivation and influences that cause experienced teachers to assume pedagogical training of student teachers through the role of cooperating teacher. The research question guiding the study was what motivates teachers to…
Code of Federal Regulations, 2013 CFR
2013-07-01
... must Federal agencies assume historic preservation responsibilities? 102-78.55 Section 102-78.55 Public... MANAGEMENT REGULATION REAL PROPERTY 78-HISTORIC PRESERVATION Historic Preservation § 102-78.55 For which properties must Federal agencies assume historic preservation responsibilities? Federal agencies must...
Code of Federal Regulations, 2012 CFR
2012-01-01
... must Federal agencies assume historic preservation responsibilities? 102-78.55 Section 102-78.55 Public... MANAGEMENT REGULATION REAL PROPERTY 78-HISTORIC PRESERVATION Historic Preservation § 102-78.55 For which properties must Federal agencies assume historic preservation responsibilities? Federal agencies must...
Code of Federal Regulations, 2011 CFR
2011-01-01
... must Federal agencies assume historic preservation responsibilities? 102-78.55 Section 102-78.55 Public... MANAGEMENT REGULATION REAL PROPERTY 78-HISTORIC PRESERVATION Historic Preservation § 102-78.55 For which properties must Federal agencies assume historic preservation responsibilities? Federal agencies must...
Code of Federal Regulations, 2010 CFR
2010-07-01
... must Federal agencies assume historic preservation responsibilities? 102-78.55 Section 102-78.55 Public... MANAGEMENT REGULATION REAL PROPERTY 78-HISTORIC PRESERVATION Historic Preservation § 102-78.55 For which properties must Federal agencies assume historic preservation responsibilities? Federal agencies must...
Code of Federal Regulations, 2014 CFR
2014-01-01
... must Federal agencies assume historic preservation responsibilities? 102-78.55 Section 102-78.55 Public... MANAGEMENT REGULATION REAL PROPERTY 78-HISTORIC PRESERVATION Historic Preservation § 102-78.55 For which properties must Federal agencies assume historic preservation responsibilities? Federal agencies must...
39 CFR 3060.40 - Calculation of the assumed Federal income tax.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 39 Postal Service 1 2013-07-01 2013-07-01 false Calculation of the assumed Federal income tax... Federal income tax. (a) The assumed Federal income tax on competitive products income shall be based on... income tax on competitive products income shall be September 30. (c) The calculation of the...
39 CFR 3060.40 - Calculation of the assumed Federal income tax.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 39 Postal Service 1 2012-07-01 2012-07-01 false Calculation of the assumed Federal income tax... Federal income tax. (a) The assumed Federal income tax on competitive products income shall be based on... income tax on competitive products income shall be September 30. (c) The calculation of the...
39 CFR 3060.40 - Calculation of the assumed Federal income tax.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 39 Postal Service 1 2014-07-01 2014-07-01 false Calculation of the assumed Federal income tax... Federal income tax. (a) The assumed Federal income tax on competitive products income shall be based on... income tax on competitive products income shall be September 30. (c) The calculation of the...
Pre-Service Teachers' Personal Epistemic Beliefs and the Beliefs They Assume Their Pupils to Have
ERIC Educational Resources Information Center
Rebmann, Karin; Schloemer, Tobias; Berding, Florian; Luttenberger, Silke; Paechter, Manuela
2015-01-01
In their workaday life, teachers are faced with multiple complex tasks. How they carry out these tasks is also influenced by their epistemic beliefs and the beliefs they assume their pupils hold. In an empirical study, pre-service teachers' epistemic beliefs and those they assume of their pupils were investigated in the setting of teacher…
... coughing - everyone knows the symptoms of the common cold. It is probably the most common illness. In ... people in the United States suffer 1 billion colds. You can get a cold by touching your ...
Coon, H.; Jensen, S.; Hoff, M.; Holik, J.; Plaetke, R.; Reimherr, F.; Wender, P.; Leppert, M.; Byerley, W.
1993-06-01
Manic-depressive illness (MDI), also known as "bipolar affective disorder," is a common and devastating neuropsychiatric illness. Although pivotal biochemical alterations underlying the disease are unknown, results of family, twin, and adoption studies consistently implicate genetic transmission in the pathogenesis of MDI. In order to carry out linkage analysis, the authors ascertained eight moderately sized pedigrees containing multiple cases of the disease. For a four-allele marker mapping at 5 cM from the disease gene, the pedigree sample has >97% power to detect a dominant allele under genetic homogeneity and has >73% power under 20% heterogeneity. To date, the eight pedigrees have been genotyped with 328 polymorphic DNA loci throughout the genome. When autosomal dominant inheritance was assumed, 273 DNA markers gave lod scores < -2.0 at theta = .0, 174 DNA loci produced lod scores < -2.0 at theta = .05, and 4 DNA marker loci yielded lod scores >1 (chromosome 5 -- D5S39, D5S43, and D5S62; chromosome 11 -- D11S85). Of the markers giving lod scores >1, only D5S62 continued to show evidence for linkage when the affected-pedigree-member method was used. The D5S62 locus maps to distal 5q, a region containing neurotransmitter-receptor genes for dopamine, norepinephrine, glutamate, and gamma-aminobutyric acid. Although additional work in this region may be warranted, the linkage results should be interpreted as preliminary data, as 68 unaffected individuals are not past the age of risk. 72 refs., 2 tabs.
Coon, H; Jensen, S; Hoff, M; Holik, J; Plaetke, R; Reimherr, F; Wender, P; Leppert, M; Byerley, W
1993-01-01
Manic-depressive illness (MDI), also known as "bipolar affective disorder," is a common and devastating neuropsychiatric illness. Although pivotal biochemical alterations underlying the disease are unknown, results of family, twin, and adoption studies consistently implicate genetic transmission in the pathogenesis of MDI. In order to carry out linkage analysis, we ascertained eight moderately sized pedigrees containing multiple cases of the disease. For a four-allele marker mapping 5 cM from the disease gene, the pedigree sample has > 97% power to detect a dominant allele under genetic homogeneity and has > 73% power under 20% heterogeneity. To date, the eight pedigrees have been genotyped with 328 polymorphic DNA loci throughout the genome. When autosomal dominant inheritance was assumed, 273 DNA markers gave lod scores < -2.0 at recombination fraction (theta) = .0, 174 DNA loci produced lod scores < -2.0 at theta = .05, and 4 DNA marker loci yielded lod scores > 1 (chromosome 5--D5S39, D5S43, and D5S62; chromosome 11--D11S85). Of the markers giving lod scores > 1, only D5S62 continued to show evidence for linkage when the affected-pedigree-member method was used. The D5S62 locus maps to distal 5q, a region containing neurotransmitter-receptor genes for dopamine, norepinephrine, glutamate, and gamma-aminobutyric acid. Although additional work in this region may be warranted, our linkage results should be interpreted as preliminary data, as 68 unaffected individuals are not past the age of risk. PMID:8503452
Zito, Sarah; Morton, John; Vankan, Dianne; Paterson, Mandy; Bennett, Pauleen C; Rand, Jacquie; Phillips, Clive J C
2016-01-01
Most cats surrendered to nonhuman animal shelters are identified as unowned, and the surrender reason for these cats is usually simply recorded as "stray." A cross-sectional study was conducted with people surrendering cats to 4 Australian animal shelters. Surrenderers of unowned cats commonly gave surrender reasons relating to concern for the cat and his/her welfare. Seventeen percent of noncaregivers had considered adopting the cat. Barriers to assuming ownership most commonly related to responsible ownership concerns. Unwanted kittens commonly contributed to the decision to surrender for both caregivers and noncaregivers. Nonowners gave more surrender reasons than owners, although many owners also gave multiple surrender reasons. These findings highlight the multifactorial nature of the decision-making process leading to surrender and demonstrate that recording only one reason for surrender does not capture the complexity of the surrender decision. Collecting information about multiple reasons for surrender, particularly reasons for surrender of unowned cats and barriers to assuming ownership, could help to develop strategies to reduce the number of cats surrendered. PMID:27045191
NASA Technical Reports Server (NTRS)
Paris, Isabelle L.; Krueger, Ronald; OBrien, T. Kevin
2004-01-01
The differences in delamination onset predictions based on the type and location of the assumed initial damage are compared in a specimen consisting of a tapered flange laminate bonded to a skin laminate. From previous experimental work, the damage was identified to consist of a matrix crack in the top skin layer followed by a delamination between the top and second skin layer (+45 deg./-45 deg. interface). Two-dimensional finite element analyses were performed for three different assumed flaws, and the results show a considerable reduction in critical load if an initial delamination is assumed to be present, both under tension and bending loads. For a crack length corresponding to the peak in the strain energy release rate, the delamination onset load for an assumed initial flaw in the bondline is slightly higher than the critical load for delamination onset from an assumed skin matrix crack, both under tension and bending loads. As a result, assuming an initial flaw in the bondline is simpler while providing a critical load relatively close to the real case. For the configuration studied, a small delamination might form at a lower tension load than the critical load calculated for a 12.7 mm (0.5") delamination, but it would grow in a stable manner. For the bending case, assuming an initial flaw of 12.7 mm (0.5") is conservative, as the crack would grow unstably.
Calculator Function Approximation.
ERIC Educational Resources Information Center
Schelin, Charles W.
1983-01-01
The general algorithm used in most hand calculators to approximate elementary functions is discussed. Comments on tabular function values and on computer function evaluation are given first; then the CORDIC (Coordinate Rotation Digital Computer) scheme is described. (MNS)
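The CORDIC idea can be sketched in floating point (real calculators use fixed-point integer arithmetic; this is an illustrative version): each step rotates by ±atan(2^-i) using only shifts and adds, driving the residual angle to zero.

```python
import math

def cordic_sin_cos(theta, n=32):
    """Compute (sin(theta), cos(theta)) for |theta| <= pi/2 via CORDIC:
    n micro-rotations by +/- atan(2**-i), with precomputed gain K."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # total rotation gain
    x, y, z = K, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0           # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return y, x
```

Arguments outside [-pi/2, pi/2] are first range-reduced in a real implementation; the angle table and gain K are constants that a calculator stores once.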
A Concept Analysis: Assuming Responsibility for Self-Care among Adolescents with Type 1 Diabetes
Hanna, Kathleen M.; Decker, Carol L.
2009-01-01
Purpose: This concept analysis clarifies “assuming responsibility for self-care” by adolescents with type 1 diabetes. Methods: Walker and Avant’s (2005) methodology guided the analysis. Results: Assuming responsibility for self-care was defined as a process specific to diabetes within the context of development. It is daily, gradual, individualized to person, and unique to task. The goal is ownership that involves autonomy in behaviors and decision-making. Practice Implications: Adolescents with type 1 diabetes need to be assessed for assuming responsibility for self-care. This achievement has implications for adolescents’ diabetes management, short- and long-term health, and psychosocial quality of life. PMID:20367781
Automatic Registration of Approximately Leveled Point Clouds of Urban Scenes
NASA Astrophysics Data System (ADS)
Moussa, A.; Elsheimy, N.
2015-08-01
Registration of point clouds is a necessary step to obtain a complete overview of scanned objects of interest. The majority of current registration approaches target the general case, where the full search space of registration parameters is assumed and searched. In urban object scanning it is very common to obtain leveled point clouds with small roll and pitch angles and small height differences. For such scenarios the registration search problem can be handled faster to obtain a coarse registration of two point clouds. In this paper, a fully automatic approach is proposed for registration of approximately leveled point clouds. The proposed approach estimates a coarse registration based on three registration parameters and then conducts a fine registration step using the iterative closest point approach. The approach has been tested on three data sets of different areas, and the achieved registration results validate the significance of the proposed approach.
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise, rejecting the fuzziness of concepts in natural use and replacing them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that rather than regard human reasoning processes as themselves approximating to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lies in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such intersections without harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable to incorporate such approximate reasoning techniques in our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
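A toy sketch of the linguistic-description idea: fuzzy membership functions for terms like "near", combined with min as the conjunction operator. All terms, breakpoints, and names here are hypothetical illustrations, not the paper's model.

```python
def trapezoid(d, a, b, c, e):
    """Trapezoidal fuzzy membership: 0 outside (a, e), 1 on [b, c],
    linear ramps in between."""
    if d <= a or d >= e:
        return 0.0
    if b <= d <= c:
        return 1.0
    if d < b:
        return (d - a) / (b - a)
    return (e - d) / (e - c)

# Hypothetical linguistic terms: distance in meters, speed in m/s
near = lambda d: trapezoid(d, -1.0, 0.0, 5.0, 15.0)
slow = lambda v: trapezoid(v, -1.0, 0.0, 1.0, 3.0)

# "The obstacle is near AND slow", via min-conjunction
hazard_degree = min(near(8.0), slow(0.5))
```

Graded memberships like these let an agent act on statements such as "the car is near and moving slowly" without ever computing exact distances.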
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. PMID:25528318
Virta, R.L.
1998-01-01
Part of a special section on the state of industrial minerals in 1997. The state of the common clay industry worldwide for 1997 is discussed. Sales of common clay in the U.S. increased from 26.2 Mt in 1996 to an estimated 26.5 Mt in 1997. The amount of common clay and shale used to produce structural clay products in 1997 was estimated at 13.8 Mt.
25 CFR 117.5 - Procedure for hearings to assume supervision of expenditure of allowance funds.
Code of Federal Regulations, 2010 CFR
2010-04-01
... INDIANS WHO DO NOT HAVE CERTIFICATES OF COMPETENCY § 117.5 Procedure for hearings to assume supervision of... not having certificates of competency, including amounts paid for each minor, shall, in case...
ERIC Educational Resources Information Center
Gordon, Douglas
2010-01-01
Student commons are no longer simply congregation spaces for students with time on their hands. They are integral to providing a welcoming environment and effective learning space for students. Many student commons have been transformed into spaces for socialization, an environment for alternative teaching methods, a forum for large group meetings…
Covariant approximation averaging
NASA Astrophysics Data System (ADS)
Shintani, Eigo; Arthur, Rudy; Blum, Thomas; Izubuchi, Taku; Jung, Chulwoo; Lehner, Christoph
2015-06-01
We present a new class of statistical error reduction techniques for Monte Carlo simulations. Using covariant symmetries, we show that correlation functions can be constructed from inexpensive approximations without introducing any systematic bias in the final result. We introduce a new class of covariant approximation averaging techniques, known as all-mode averaging (AMA), in which the approximation takes account of contributions of all eigenmodes through the inverse of the Dirac operator computed from the conjugate gradient method with a relaxed stopping condition. In this paper we compare the performance and computational cost of our new method with traditional methods using correlation functions and masses of the pion, nucleon, and vector meson in Nf = 2+1 lattice QCD using domain-wall fermions. This comparison indicates that AMA significantly reduces statistical errors in Monte Carlo calculations over conventional methods for the same cost.
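The estimator structure can be illustrated with a toy Monte Carlo example outside QCD: combine many cheap approximate evaluations with a bias-correction term measured on a small subset, so the result stays unbiased while the cost per effective sample drops. All functions below are hypothetical stand-ins, not lattice observables.

```python
import math
import random
import statistics

random.seed(1)

def exact(x):            # stand-in for the exact, expensive observable
    return math.sin(x) + 0.1 * x

def approx(x):           # stand-in for the cheap, relaxed approximation
    return math.sin(x)   # strongly correlated with exact(), but biased

# AMA-style estimator:
#   <O> ~ mean(approx over many points) + mean(exact - approx over a subset)
xs = [random.uniform(0.0, 1.0) for _ in range(10000)]
subset = xs[:100]
estimate = (statistics.fmean(approx(x) for x in xs)
            + statistics.fmean(exact(x) - approx(x) for x in subset))
```

The correction term has small variance precisely because the approximation is strongly correlated with the exact observable, which is the role the covariant symmetry plays in AMA.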
Fast approximate motif statistics.
Nicodème, P
2001-01-01
We present in this article a fast approximate method for computing the statistics of a number of non-self-overlapping matches of motifs in a random text in the nonuniform Bernoulli model. This method is well suited for protein motifs where the probability of self-overlap of motifs is small. For 96% of the PROSITE motifs, the expectations of occurrences of the motifs in a 7-million-amino-acid random database are computed by the approximate method with less than 1% error when compared with the exact method. Processing of the whole PROSITE takes about 30 seconds with the approximate method. We apply this new method to a comparison of the C. elegans and S. cerevisiae proteomes. PMID:11535175
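For the simplest case of an exact-word motif with no self-overlap, the expected number of occurrences in a Bernoulli text follows from linearity of expectation over the text positions. The sketch below is our illustration under assumed uniform residue frequencies, not the article's algorithm (which handles general PROSITE patterns):

```python
def expected_occurrences(motif, text_len, freqs):
    """Expected number of matches of an exact motif in a random text of
    length text_len under the Bernoulli model with letter frequencies freqs."""
    p = 1.0
    for a in motif:
        p *= freqs[a]          # probability the motif starts at a fixed position
    return (text_len - len(motif) + 1) * p

# assumed uniform frequencies over the 20-letter amino-acid alphabet
freqs = {a: 1 / 20 for a in "ACDEFGHIKLMNPQRSTVWY"}
print(expected_occurrences("CHC", 7_000_000, freqs))
```

The article's contribution is computing the full distribution (not just the mean) quickly for general motifs; the mean above is the sanity check any such method must reproduce.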
The Guiding Center Approximation
NASA Astrophysics Data System (ADS)
Pedersen, Thomas Sunn
The guiding center approximation for charged particles in strong magnetic fields is introduced here. This approximation is very useful in situations where the charged particles are very well magnetized, such that the gyration (Larmor) radius is small compared to relevant length scales of the confinement device, and the gyration is fast relative to relevant timescales in an experiment. The basics of motion in a straight, uniform, static magnetic field are reviewed and used as a starting point for analyzing more complicated situations where more forces are present, as well as inhomogeneities in the magnetic field: magnetic curvature as well as gradients in the magnetic field strength. The first and second adiabatic invariants are introduced, and slowly time-varying fields are also covered. As an example of the use of the guiding center approximation, the confinement concept of the cylindrical magnetic mirror is analyzed.
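The validity condition (small Larmor radius, fast gyration) is easy to check numerically from the standard formulas r_L = m v_perp / (q B) and omega_c = q B / m. The parameter values below are our own assumed example, not taken from the text:

```python
# Larmor radius and gyrofrequency for a proton (SI units); the guiding
# center approximation applies when r_L is small compared to device
# length scales and 1/omega_c is short compared to experiment timescales.
Q = 1.602176634e-19   # elementary charge [C]
M = 1.67262192e-27    # proton mass [kg]

def gyrofrequency(q, m, b):
    return q * b / m              # angular gyrofrequency [rad/s]

def larmor_radius(m, v_perp, q, b):
    return m * v_perp / (q * b)   # gyration radius [m]

B = 2.0        # assumed magnetic field strength [T]
V_PERP = 1e5   # assumed perpendicular speed [m/s]
print(gyrofrequency(Q, M, B), larmor_radius(M, V_PERP, Q, B))
```

For these assumed values the radius comes out well under a millimeter, far smaller than any laboratory confinement device, so the approximation holds comfortably.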
Monotone Boolean approximation
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
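The best possible monotone bounds described above can be computed by brute force for small numbers of variables. The sketch below is our illustration of that idea, not the report's algorithms: the tightest monotone increasing upper bound takes the maximum of f over all points below x, and the tightest monotone lower bound the minimum over all points above x.

```python
from itertools import product

def monotone_bounds(f, n):
    """Tightest monotone increasing bounds of a Boolean function on {0,1}^n.

    upper(x) = max over y <= x of f(y)  (smallest monotone g with g >= f)
    lower(x) = min over y >= x of f(y)  (largest monotone g with g <= f)
    """
    pts = list(product((0, 1), repeat=n))
    leq = lambda y, x: all(a <= b for a, b in zip(y, x))
    upper = {x: max(f(y) for y in pts if leq(y, x)) for x in pts}
    lower = {x: min(f(y) for y in pts if leq(x, y)) for x in pts}
    return upper, lower

# a noncoherent example: f uses a complemented variable
f = lambda x: x[0] & (1 - x[1])
upper, lower = monotone_bounds(f, 2)
print([upper[x] for x in sorted(upper)])  # truth table of the upper bound
```

Both bounds are monotone by construction and sandwich f, which is exactly the guarantee the report's formulas provide without enumerating all 2^n points.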
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, this paper outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
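The technique in question writes a definite integral as (b - a) times the expectation of f over a uniform sample on [a, b]. A minimal sketch (in Python rather than the paper's Visual Basic):

```python
import random

def mc_integral(f, a, b, n=100_000, seed=1):
    """Approximate the integral of f on [a, b] as (b - a) * E[f(U)],
    where U is uniform on [a, b], using n random samples."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# integral of x^2 on [0, 1] is exactly 1/3
print(mc_integral(lambda x: x * x, 0.0, 1.0))
```

The error shrinks like 1/sqrt(n), so each extra decimal digit of accuracy costs roughly a hundredfold more samples.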
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1997-12-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of the technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. The method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) The authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing. (2) They show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1998-06-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
... are the most common reason that children miss school and parents miss work. Parents often get colds ... other children. A cold can spread quickly through schools or daycares. Colds can occur at any time ...
NASA Technical Reports Server (NTRS)
Stankiewicz, N.; Palmer, R. W.
1972-01-01
Three-dimensional potential and current distributions in a Faraday segmented MHD generator operating in the Hall mode are computed. Constant conductivity and a Hall parameter of 1.0 are assumed. The electric fields and currents are assumed to be coperiodic with the electrode structure. The flow is assumed to be fully developed and a family of power-law velocity profiles, ranging from parabolic to turbulent, is used to show the effect of the fullness of the velocity profile. Calculation of the square of the current density shows that nonequilibrium heating is not likely to occur along the boundaries. This seems to discount the idea that the generator insulating walls are regions of high conductivity and are therefore responsible for boundary-layer shorting, unless the shorting is a surface phenomenon on the insulating material.
Assuming a Pharmacy Organization Leadership Position: A Guide for Pharmacy Leaders.
Shay, Blake; Weber, Robert J
2015-11-01
Important and influential pharmacy organization leadership positions, such as president, board member, or committee chair, are volunteer positions and require a commitment of personal and professional time. These positions provide excellent opportunities for leadership development, personal promotion, and advancement of the profession. In deciding to assume a leadership position, interested individuals must consider the impact on their personal and professional commitments and relationships, career planning, employer support, current and future department projects, employee support, and personal readiness. This article reviews these factors and also provides an assessment tool that leaders can use to determine their readiness to assume leadership positions. By using an assessment tool, pharmacy leaders can better understand their ability to assume an important and influential leadership position while achieving job and personal goals. PMID:27621512
Collisional tearing in a field-reversed sheet pinch assuming nonparallel propagation
NASA Technical Reports Server (NTRS)
Quest, K. B.; Coroniti, F. V.
1985-01-01
Linear tearing in a collisional reversed-field sheet pinch is examined assuming that the wave vector k is not parallel to the equilibrium magnetic field. Equilibrium and magnetic geometry are defined, and a set of perturbed moment equations is derived assuming quasi-parallel propagation. It is shown that the usual expression for collisional growth is recovered when k_y = 0. It is also shown that the y component of momentum balance requires the generation of a nonzero perturbed current δJ_x well away from the null, and an inertial coupling when z ≠ 0. The effects of k_y ≠ 0 on the growth rate are discussed.
Optimizing the Zeldovich approximation
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.
1994-01-01
We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (σ ≈ 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/2k_G^2) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g., those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
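Applying the Gaussian window exp(-k^2/2k_G^2) to the initial Fourier amplitudes can be sketched as follows. This is our 2-D illustration with an assumed grid and an arbitrary k_G, not the authors' code:

```python
import numpy as np

def gaussian_truncate(field, k_g):
    """Smooth an initial density field by multiplying its Fourier
    amplitudes by exp(-k^2 / (2 k_g^2)), suppressing small scales."""
    n = field.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n)          # angular wavenumbers (grid units)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    window = np.exp(-(kx**2 + ky**2) / (2 * k_g**2))
    return np.real(np.fft.ifft2(np.fft.fft2(field) * window))

rng = np.random.default_rng(0)
delta = rng.standard_normal((64, 64))          # toy initial density field
smoothed = gaussian_truncate(delta, k_g=0.5)
print(smoothed.std() < delta.std())            # small-scale power is removed
```

The smoothed field would then be fed to the Zeldovich displacement map in place of the raw initial conditions; the paper's finding is that this window, with k_G between 1 and 1.5 times k_nl, outperforms sharp k-truncation.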
NASA Astrophysics Data System (ADS)
Anggriani, N.; Wicaksono, B. C.; Supriatna, A. K.
2016-06-01
Tuberculosis (TB) is one of the deadliest infectious diseases in the world and is caused by Mycobacterium tuberculosis. The disease spreads through the air via droplets from infectious persons when they are coughing. The World Health Organization (WHO) has paid special attention to TB by providing some solutions, for example the BCG vaccine, which prevents an infected person from becoming an active infectious TB case. In this paper we develop a mathematical model of the spread of TB which assumes endogenous reactivation and exogenous reinfection factors. We also assume that some of the susceptible population are vaccinated. Furthermore, we investigate the optimal vaccination level for the disease.
Assumed process of piping failure in nuclear power plants under destructive earthquake conditions
Shibata, H. . Inst. of Industrial Science)
1991-05-01
This paper deals with an assumed process of piping failure in nuclear power plants which may cause a catastrophic accident under destructive earthquake conditions. The type of failure discussed is the so-called double-ended guillotine break, DEGB. As a safety problem, we aim to eliminate this type of failure by leak-before-break (LBB) arguments, and we have assumed that it would then not occur in an earthquake. The author tries to clarify the possibility of failure during earthquakes. He reviews his related papers since 1973 and discusses zipping failure of snubbers and supporting devices. He shows a procedure to simulate the zipping failure of a piping system supported by snubbers.
Wang, Jianwei; Zhang, Yong; Wang, Lin-Wang
2015-07-31
We propose a systematic approach that can empirically correct three major errors typically found in a density functional theory (DFT) calculation within the local density approximation (LDA) simultaneously for a set of common cation binary semiconductors, such as III-V compounds, (Ga or In)X with X = N, P, As, Sb, and II-VI compounds, (Zn or Cd)X, with X = O, S, Se, Te. By correcting (1) the binary band gaps at the high-symmetry points Γ, L, and X, (2) the separation of p- and d-orbital-derived valence bands, and (3) conduction band effective masses to experimental values, and doing so simultaneously for common cation binaries, the resulting DFT-LDA-based quasi-first-principles method can be used to predict the electronic structure of complex materials involving multiple binaries with comparable accuracy but much less computational cost than a GW level theory. This approach provides an efficient way to evaluate the electronic structures and other material properties of complex systems, much needed for material discovery and design.
13 CFR 120.1718 - SBA's right to assume Seller's responsibilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false SBA's right to assume Seller's responsibilities. 120.1718 Section 120.1718 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION BUSINESS LOANS Establishment of SBA Secondary Market Guarantee Program for First Lien Position 504 Loan Pools § 120.1718 SBA's right to...
13 CFR 120.1718 - SBA's right to assume Seller's responsibilities.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 15 calendar days of Seller's receipt of such request. SBA will notify the Obligor of the change in... 13 Business Credit and Assistance 1 2011-01-01 2011-01-01 false SBA's right to assume Seller's... LOANS Establishment of SBA Secondary Market Guarantee Program for First Lien Position 504 Loan...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-09
...., Washington, DC 20420 or e-mail nancy.kessinger@va.gov . Please refer to ``OMB Control No. 2900-0111'' in any... AFFAIRS Proposed Information Collection (Statement of Purchaser or Owner Assuming Seller's Loans, VA Form... Affairs (VA), is announcing an opportunity for public comment on the proposed collection of...
The Impact of Assumed Knowledge Entry Standards on Undergraduate Mathematics Teaching in Australia
ERIC Educational Resources Information Center
King, Deborah; Cattlin, Joann
2015-01-01
Over the last two decades, many Australian universities have relaxed their selection requirements for mathematics-dependent degrees, shifting from hard prerequisites to assumed knowledge standards which provide students with an indication of the prior learning that is expected. This has been regarded by some as a positive move, since students who…
22 CFR 72.21 - Consular officer may not assume financial responsibility for the estate.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 22 Foreign Relations 1 2011-04-01 2011-04-01 false Consular officer may not assume financial responsibility for the estate. 72.21 Section 72.21 Foreign Relations DEPARTMENT OF STATE PROTECTION AND WELFARE... next of kin)....
22 CFR 72.21 - Consular officer may not assume financial responsibility for the estate.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Consular officer may not assume financial responsibility for the estate. 72.21 Section 72.21 Foreign Relations DEPARTMENT OF STATE PROTECTION AND WELFARE... next of kin)....
A Model for Teacher Effects from Longitudinal Data without Assuming Vertical Scaling
ERIC Educational Resources Information Center
Mariano, Louis T.; McCaffrey, Daniel F.; Lockwood, J. R.
2010-01-01
There is an increasing interest in using longitudinal measures of student achievement to estimate individual teacher effects. Current multivariate models assume each teacher has a single effect on student outcomes that persists undiminished to all future test administrations (complete persistence [CP]) or can diminish with time but remains…
Taylor, Z.T.; Pratt, R.G.
1990-09-01
The analysis in this report was driven by two primary objectives: to determine whether and to what extent the lighting and miscellaneous equipment electricity consumption measured by metering in real buildings differs from the levels assumed in the various prototypes used in power forecasting; and to determine the reasons for those differences if, in fact, differences were found. 13 refs., 47 figs., 4 tabs.
42 CFR 476.82 - Continuation of functions not assumed by QIOs.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 4 2013-10-01 2013-10-01 false Continuation of functions not assumed by QIOs. 476.82 Section 476.82 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS UTILIZATION AND QUALITY CONTROL REVIEW Review Responsibilities of Utilization and...
42 CFR 476.82 - Continuation of functions not assumed by QIOs.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 4 2012-10-01 2012-10-01 false Continuation of functions not assumed by QIOs. 476.82 Section 476.82 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS UTILIZATION AND QUALITY CONTROL REVIEW Review Responsibilities of Utilization and...
42 CFR 476.82 - Continuation of functions not assumed by QIOs.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Continuation of functions not assumed by QIOs. 476.82 Section 476.82 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS UTILIZATION AND QUALITY CONTROL REVIEW Review Responsibilities of Utilization and...
42 CFR 476.82 - Continuation of functions not assumed by QIOs.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 4 2014-10-01 2014-10-01 false Continuation of functions not assumed by QIOs. 476.82 Section 476.82 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) QUALITY IMPROVEMENT ORGANIZATIONS QUALITY IMPROVEMENT ORGANIZATION REVIEW Review Responsibilities of Quality...
39 CFR 3060.40 - Calculation of the assumed Federal income tax.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 39 Postal Service 1 2010-07-01 2010-07-01 false Calculation of the assumed Federal income tax. 3060.40 Section 3060.40 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL ACCOUNTING PRACTICES AND TAX RULES FOR THE THEORETICAL COMPETITIVE PRODUCTS ENTERPRISE § 3060.40 Calculation of the...
39 CFR 3060.40 - Calculation of the assumed Federal income tax.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 39 Postal Service 1 2011-07-01 2011-07-01 false Calculation of the assumed Federal income tax. 3060.40 Section 3060.40 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL ACCOUNTING PRACTICES AND... liability on the taxable income from the competitive products of the Postal Service theoretical...
49 CFR 568.7 - Requirements for manufacturers who assume legal responsibility for a vehicle.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 49 CFR 567.5(f). (b) If an intermediate manufacturer of a vehicle assumes legal responsibility for... intermediate manufacturer shall ensure that a label is affixed to the final vehicle in conformity with 49 CFR... responsibility for a vehicle. 568.7 Section 568.7 Transportation Other Regulations Relating to...
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximate algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
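The binomial model underlying these algorithms prices a path-independent option as the discounted risk-neutral expectation of its terminal payoff. The sketch below is our illustration of that baseline (the paper's contribution concerns the much harder path-dependent case, where the payoff depends on the whole path):

```python
from math import comb

def binomial_call(s0, k, u, d, r, n):
    """European call in the n-period binomial model: the discounted
    risk-neutral expectation of the terminal payoff max(S_n - k, 0)."""
    q = (1 + r - d) / (u - d)  # risk-neutral probability of an up move
    value = sum(comb(n, j) * q**j * (1 - q)**(n - j)
                * max(s0 * u**j * d**(n - j) - k, 0.0)
                for j in range(n + 1))
    return value / (1 + r) ** n

# one-period check: u=2, d=0.5, r=0 gives q = 1/3 and value 100/3
print(binomial_call(100.0, 100.0, 2.0, 0.5, 0.0, 1))
```

Because the terminal price depends only on the number of up moves, this sum has n + 1 terms; a path-dependent payoff would instead range over all 2^n paths, which is the source of the hardness result.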
Beyond the Kirchhoff approximation
NASA Technical Reports Server (NTRS)
Rodriguez, Ernesto
1989-01-01
The three most successful models for describing scattering from random rough surfaces are the Kirchhoff approximation (KA), the small-perturbation method (SPM), and the two-scale-roughness (or composite roughness) surface-scattering (TSR) models. In this paper it is shown how these three models can be derived rigorously from one perturbation expansion based on the extinction theorem for scalar waves scattering from a perfectly rigid surface. It is also shown how corrections to the KA proportional to the surface curvature and higher-order derivatives may be obtained. Using these results, the scattering cross section is derived for various surface models.
Börsch, G; Jahnke, A; Bergbauer, M; Nebel, W
1983-11-01
We present a case of solitary nonspecific ileal ulcer found by coloileoscopy in a patient with previously assumed irritable bowel syndrome. Follow-up endoscopies two weeks after initiation of short-term prednisone therapy, and again four months later, demonstrated rapid and persistent healing. This observation raises the question of whether or not primary ileal ulcers are indeed as rare as previously assumed when only surgical and autopsy findings were taken into consideration. Also, the natural history of this clinical entity, in general, could be somewhat more benign than suggested by those ulcers in which complications make surgery necessary, since these cases may not adequately reflect the full clinical spectrum of nonspecific small-bowel ulcers. PMID:6628147
ANS shell elements with improved transverse shear accuracy. [Assumed Natural Coordinate Strain
NASA Technical Reports Server (NTRS)
Jensen, Daniel D.; Park, K. C.
1992-01-01
A method of forming assumed natural coordinate strain (ANS) plate and shell elements is presented. The ANS method uses equilibrium based constraints and kinematic constraints to eliminate hierarchical degrees of freedom which results in lower order elements with improved stress recovery and displacement convergence. These techniques make it possible to easily implement the element into the standard finite element software structure, and a modified shape function matrix can be used to create consistent nodal loads.
The impact of assumed knowledge entry standards on undergraduate mathematics teaching in Australia
NASA Astrophysics Data System (ADS)
King, Deborah; Cattlin, Joann
2015-10-01
Over the last two decades, many Australian universities have relaxed their selection requirements for mathematics-dependent degrees, shifting from hard prerequisites to assumed knowledge standards which provide students with an indication of the prior learning that is expected. This has been regarded by some as a positive move, since students who may be returning to study, or who are changing career paths but do not have particular prerequisite study, now have more flexible pathways. However, there is mounting evidence to indicate that there are also significant negative impacts associated with assumed knowledge approaches, with large numbers of students enrolling in degrees without the stated assumed knowledge. For students, there are negative impacts on pass rates and retention rates and limitations to pathways within particular degrees. For institutions, the necessity to offer additional mathematics subjects at a lower level than normal and more support services for under-prepared students impacts on workloads and resources. In this paper, we discuss early research from the First Year in Maths project, which begins to shed light on the realities of a system that may in fact be too flexible.
ERIC Educational Resources Information Center
Chase, Barbara
2011-01-01
How are independent schools to be useful to the wider world? Beyond their common commitment to educate their students for meaningful lives in service of the greater good, can they educate a broader constituency and, thus, share their resources and skills more broadly? Their answers to this question will be shaped by their independence. Any…
NASA Astrophysics Data System (ADS)
Bailey, Scott M.; Thomas, Gary E.; Hervig, Mark E.; Lumpe, Jerry D.; Randall, Cora E.; Carstens, Justin N.; Thurairajah, Brentha; Rusch, David W.; Russell, James M.; Gordley, Larry L.
2015-05-01
Nadir viewing observations of Polar Mesospheric Clouds (PMCs) from the Cloud Imaging and Particle Size (CIPS) instrument on the Aeronomy of Ice in the Mesosphere (AIM) spacecraft are compared to Common Volume (CV), limb-viewing observations by the Solar Occultation For Ice Experiment (SOFIE) also on AIM. CIPS makes multiple observations of PMC-scattered UV sunlight from a given location at a variety of geometries and uses the variation of the radiance with scattering angle to determine a cloud albedo, particle size distribution, and Ice Water Content (IWC). SOFIE uses IR solar occultation in 16 channels (0.3-5 μm) to obtain altitude profiles of ice properties including the particle size distribution and IWC in addition to temperature, water vapor abundance, and other environmental parameters. CIPS and SOFIE made CV observations from 2007 to 2009. In order to compare the CV observations from the two instruments, SOFIE observations are used to predict the mean PMC properties observed by CIPS. Initial agreement is poor with SOFIE predicting particle size distributions with systematically smaller mean radii and a factor of two more albedo and IWC than observed by CIPS. We show that significantly improved agreement is obtained if the PMC ice is assumed to contain 0.5% meteoric smoke by mass, in agreement with previous studies. We show that the comparison is further improved if an adjustment is made in the CIPS data processing regarding the removal of Rayleigh scattered sunlight below the clouds. This change has an effect on the CV PMC, but is negligible for most of the observed clouds outside the CV. Finally, we examine the role of the assumed shape of the ice particle size distribution. Both experiments nominally assume the shape is Gaussian with a width parameter roughly half of the mean radius. We analyze modeled ice particle distributions and show that, for the column integrated ice distribution, Log-normal and Exponential distributions better represent the range
NASA Astrophysics Data System (ADS)
Chubb, Scott
2007-03-01
Only recently (talk by P.A. Mosier-Boss et al, in this session) has it become possible to trigger high energy particle emission and Excess Heat, on demand, in LENR involving PdD. Also, most nuclear physicists are bothered by the fact that the dominant reaction appears to be related to the least common deuteron (d) fusion reaction, d + d → α + γ. A clear consensus about the underlying effect has also been elusive. One reason for this involves confusion about the approximate (SU2) symmetry: the fact that all d-d fusion reactions conserve isospin has been widely assumed to mean the dynamics is driven by the strong force interaction (SFI), NOT EMI. Thus, most nuclear physicists assume: 1. EMI is static; 2. Dominant reactions have smallest changes in incident kinetic energy (T); and (because of 2), d + d → α + γ is suppressed. But this assumes a stronger form of SU2 symmetry than is present; d + d → α + γ reactions are suppressed not because of large changes in T but because the interaction potential involves EMI, is dynamic (not static), the SFI is static, and because the two incident deuterons must have approximate Bose Exchange symmetry and vanishing spin. A generalization of this idea involves a resonant form of reaction, similar to the de-excitation of an atom. These and related (broken gauge) symmetry EMI effects on LENR are discussed.
Countably QC-Approximating Posets
Mao, Xuxin; Xu, Luoshan
2014-01-01
As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σ_c(L)^op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730
Approximate Bayesian multibody tracking.
Lanz, Oswald
2006-09-01
Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but are yet impractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The Hybrid Joint-Separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically-based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance. PMID:16929730
Traction free finite elements with the assumed stress hybrid model. M.S. Thesis, 1981
NASA Technical Reports Server (NTRS)
Kafie, Kurosh
1991-01-01
An effective approach in the finite element analysis of the stress field at the traction free boundary of a solid continuum was studied. Conventional displacement and assumed stress finite elements were used in the determination of stress concentrations around circular and elliptical holes. Specialized hybrid elements were then developed to improve the satisfaction of prescribed traction boundary conditions. Results of the stress analysis indicated that finite elements which exactly satisfy the free stress boundary conditions are the most accurate and efficient in such problems. A general approach for hybrid finite elements which incorporate traction free boundaries of arbitrary geometry was formulated.
Comparison of symbolic and numerical integration methods for an assumed-stress hybrid shell element
NASA Technical Reports Server (NTRS)
Rengarajan, Govind; Knight, Norman F., Jr.; Aminpour, Mohammad A.
1993-01-01
Hybrid shell elements have long been regarded with reserve by the commercial finite element developers despite the high degree of reliability and accuracy associated with such formulations. The fundamental reason is the inherent higher computational cost of the hybrid approach as compared to the displacement-based formulations. However, a noteworthy factor in favor of hybrid elements is that numerical integration to generate element matrices can be entirely avoided by the use of symbolic integration. In this paper, the use of the symbolic computational approach is presented for an assumed-stress hybrid shell element with drilling degrees of freedom, and the significant time savings achieved are demonstrated through an example.
Improved finite strip Mindlin plate bending element using assumed shear strain distributions
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Thompson, Robert L.
1988-01-01
A linear finite strip plate element based on Mindlin/Reissner plate theory is developed. The analysis is suitable for both thin and thick plates. In the formulation, new transverse shear strains are introduced and assumed constant in each two-node linear strip. The element stiffness matrix is explicitly formulated for efficient computation and computer implementation. Numerical results showing the efficiency and predictive capability of the element for the analysis of plates are presented for different support and loading conditions and a wide range of thicknesses. No sign of the shear locking phenomenon was observed with the newly developed element.
A variational justification of the assumed natural strain formulation of finite elements
NASA Technical Reports Server (NTRS)
Militello, Carmelo; Felippa, Carlos A.
1991-01-01
The objective is to study the assumed natural strain (ANS) formulation of finite elements from a variational standpoint. The study is based on two hybrid extensions of the Reissner-type functional that uses strains and displacements as independent fields. One of the forms is a genuine variational principle that contains an independent boundary traction field, whereas the other one represents a restricted variational principle. Two procedures for element level elimination of the strain field are discussed, and one of them is shown to be equivalent to the inclusion of incompatible displacement modes. Also, the 4-node C^0 plate bending quadrilateral element is used to illustrate applications of this theory.
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Mullen, Robert L.
1989-01-01
A linear finite strip plate element based on Mindlin-Reissner plate theory is developed. The analysis is suitable for both thin and thick plates. In the formulation, new transverse shear strains are introduced and assumed constant in each two-node linear strip. The element stiffness matrix is explicitly formulated for efficient computation and computer implementation. Numerical results showing the efficiency and predictive capability of the element for the analysis of plates are presented for different support and loading conditions and a wide range of thicknesses. No sign of shear locking is observed with the newly developed element.
'LTE-diffusion approximation' for arc calculations
NASA Astrophysics Data System (ADS)
Lowke, J. J.; Tanaka, M.
2006-08-01
This paper proposes the use of the 'LTE-diffusion approximation' for predicting the properties of electric arcs. Under this approximation, local thermodynamic equilibrium (LTE) is assumed, with a particular mesh size near the electrodes chosen to be equal to the 'diffusion length', based on De/W, where De is the electron diffusion coefficient and W is the electron drift velocity. This approximation overcomes the problem that the equilibrium electrical conductivity in the arc near the electrodes is almost zero, which makes accurate calculations using LTE impossible in the limit of small mesh size, as then voltages would tend towards infinity. Use of the LTE-diffusion approximation for a 200 A arc with a thermionic cathode gives predictions of total arc voltage, electrode temperatures, arc temperatures and radial profiles of heat flux density and current density at the anode that are in approximate agreement with more accurate calculations which include an account of the diffusion of electric charges to the electrodes, and also with experimental results. Calculations, which include diffusion of charges, agree with experimental results of current and heat flux density as a function of radius if the Milne boundary condition is used at the anode surface rather than imposing zero charge density at the anode.
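The mesh-size criterion described above is a simple ratio of transport coefficients. A minimal sketch follows; the numerical values in the comment are illustrative placeholders, not data from the paper:

```python
def diffusion_length(d_e: float, w: float) -> float:
    """Mesh size near the electrodes under the 'LTE-diffusion approximation':
    the diffusion length L = De / W, where De is the electron diffusion
    coefficient (m^2/s) and W is the electron drift velocity (m/s)."""
    return d_e / w

# Illustrative (hypothetical) values, not taken from the paper:
# De = 0.12 m^2/s, W = 2.0e3 m/s  ->  L = 6.0e-5 m, i.e. a 0.06 mm mesh cell
```

Choosing the near-electrode cell size equal to this length is what lets the LTE calculation remain finite as the equilibrium electrical conductivity vanishes at the electrode surface.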
Shear viscosity in the postquasistatic approximation
Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.
2010-05-15
We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.
Analytical Study on Fire and Explosion Accidents Assumed in HTGR Hydrogen Production System
Inaba, Yoshitomo; Nishihara, Tetsuo; Nitta, Yoshikazu
2004-04-15
One of the most important safety design issues for a hydrogen production system coupled with a high-temperature gas-cooled reactor (HTGR) is to ensure reactor safety against fire and explosion accidents, because a large amount of combustible fluid is handled in the system. The Japan Atomic Energy Research Institute has a demonstration test plan for a hydrogen production system based on steam reforming of methane, coupled with the high-temperature engineering test reactor (HTTR). In the plan, we developed the P2A code system to analyze in detail the event sequences and consequences of the fire and explosion accidents assumed in the HTGR or HTTR hydrogen production system. This paper describes the three accident scenarios assumed in the system, the structure of P2A, the analysis procedure with P2A, and the results of the numerical analyses based on the accident scenarios. It is shown that P2A is a useful tool for accident analysis in the system.
Effects of assumed tow architecture on the predicted moduli and stresses in woven composites
NASA Technical Reports Server (NTRS)
Chapman, Clinton Dane
1994-01-01
This study deals with the effect of assumed tow architecture on the elastic material properties and stress distributions of plain weave woven composites. Specifically, how a cross-section is assumed to sweep out the tows of the composite is examined in great detail. The two methods studied are extrusion and translation. The sensitivity of this assumption to changes in waviness ratio is also examined. 3D finite elements were used to study a T300/Epoxy plain weave composite with symmetrically stacked mats. 1/32nd of the unit cell is shown to be adequate for analysis of this type of configuration with the appropriate set of boundary conditions. At low waviness, results indicate that for prediction of elastic properties, either method is adequate. At high waviness, certain elastic properties become more sensitive to the method used. Stress distributions at high waviness ratio are shown to vary greatly depending on the type of loading applied. At low waviness, both methods produce similar results.
Factors that affect action possibility judgments: the assumed abilities of other people.
Welsh, Timothy N; Wong, Lokman; Chandrasekharan, Sanjay
2013-06-01
Judging which actions are possible and impossible to complete is a skill that is critical for planning and executing movements in both individual and joint action contexts. The present experiments explored the ability to adapt action possibility judgments to the assumed characteristics of another person. Participants watched alternating pictures of a person's hand moving at different speeds between targets of different indexes of difficulty (according to Fitts' Law) and judged whether or not it was possible for individuals with different characteristics to maintain movement accuracy at the presented speed. Across four studies, the person in the pictures and the background information about the person were manipulated to determine how and under what conditions participants adapted their judgments. Results revealed that participants adjusted their possibility judgments to the assumed motor capabilities of the individual they were judging. However, these adjustments occurred only when participants were instructed to take the other person into consideration, suggesting that the adaptation process is voluntary. Further, it was observed that the slopes of the regression equations relating movement time and index of difficulty did not differ across conditions. All differences between conditions were in the y-intercept of the regression lines. This pattern of findings suggests that participants formed action possibility judgments by first simulating their own performance and then adjusting the "possibility" threshold, adding or subtracting a correction factor to determine what is and is not possible for the other person to perform. PMID:23644579
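The judgment pattern reported above, a shared Fitts' Law slope with a person-specific intercept shift, can be sketched as follows. The slope, intercept, and correction values are hypothetical placeholders, not estimates from the study:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    # Fitts' Law index of difficulty (bits): ID = log2(2D / W)
    return math.log2(2.0 * distance / width)

def judged_possible(presented_mt: float, distance: float, width: float,
                    slope: float, intercept: float) -> bool:
    """Judge whether a presented movement time is achievable, assuming
    judgments track the linear Fitts relation MT = intercept + slope * ID.
    Per the findings above, only the intercept shifts when judging another
    person's assumed abilities; the slope stays fixed."""
    threshold_mt = intercept + slope * index_of_difficulty(distance, width)
    return presented_mt >= threshold_mt
```

For a less capable other, the model would simply add a positive correction to `intercept`, leaving `slope` untouched, which mirrors the y-intercept-only differences observed across conditions.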
Srivastava, Sanjay; Guglielmo, Steve; Beer, Jennifer S
2010-03-01
In interpersonal perception, "perceiver effects" are tendencies of perceivers to see other people in a particular way. Two studies of naturalistic interactions examined perceiver effects for personality traits: seeing a typical other as sympathetic or quarrelsome, responsible or careless, and so forth. Several basic questions were addressed. First, are perceiver effects organized as a global evaluative halo, or do perceptions of different traits vary in distinct ways? Second, does assumed similarity (as evidenced by self-perceiver correlations) reflect broad evaluative consistency or trait-specific content? Third, are perceiver effects a manifestation of stable beliefs about the generalized other, or do they form in specific contexts as group-specific stereotypes? Findings indicated that perceiver effects were better described by a differentiated, multidimensional structure with both trait-specific content and a higher order global evaluation factor. Assumed similarity was at least partially attributable to trait-specific content, not just to broad evaluative similarity between self and others. Perceiver effects were correlated with gender and attachment style, but in newly formed groups, they became more stable over time, suggesting that they grew dynamically as group stereotypes. Implications for the interpretation of perceiver effects and for research on personality assessment and psychopathology are discussed. PMID:20175628
Fernández, David Lorente
2015-01-01
This chapter uses a comparative approach to examine the maintenance of Indigenous practices related to Learning by Observing and Pitching In in two generations--parent generation and current child generation--in a Central Mexican Nahua community. In spite of cultural changes and the increase in Western schooling experience, these practices persist, to different degrees, as a Nahua cultural heritage with close historical relations to the key value of cuidado (stewardship). The chapter explores how children learn the value of cuidado in a variety of everyday activities, which include assuming responsibility in many social situations, primarily in cultivating corn, raising and protecting domestic animals, health practices, and participating in family ceremonial life. The chapter focuses on three main points: (1) Cuidado (assuming responsibility for), in the Nahua socio-cultural context, refers to the concepts of protection and "raising" as well as fostering other beings, whether humans, plants, or animals, to reach their potential and fulfill their development. (2) Children learn cuidado by contributing to family endeavors: They develop attention and self-motivation; they are capable of responsible actions; and they are able to transform participation to achieve the status of a competent member of local society. (3) This collaborative participation allows children to continue the cultural tradition and to preserve a Nahua heritage at a deeper level in a community in which Nahuatl language and dress have disappeared, and people do not identify themselves as Indigenous. PMID:26955923
NASA Technical Reports Server (NTRS)
Liu, W. Timothy; Niiler, Pearn P.
1990-01-01
In deriving the surface latent heat flux with the bulk formula for the thermal forcing of some ocean circulation models, two approximations are commonly made to bypass the use of atmospheric humidity in the formula. The first assumes a constant relative humidity, and the second supposes that the sea-air humidity difference varies linearly with the saturation humidity at sea surface temperature. Using climatological fields derived from the Marine Deck and long time series from ocean weather stations, the errors introduced by these two assumptions are examined. It is shown that the errors reach above 100 W/sq m over western boundary currents and 50 W/sq m over the tropical ocean. The two approximations also introduce erroneous seasonal and spatial variabilities with magnitudes over 50 percent of the observed variabilities.
Bergstra, A; van Dijk, R B; Hillege, H L; Lie, K I; Mook, G A
1995-05-01
This study was performed because of observed differences between dye dilution cardiac output and the Fick cardiac output, calculated from estimated oxygen consumption according to LaFarge and Miettinen, and to find a better formula for assumed oxygen consumption. In 250 patients who underwent left and right heart catheterization, the oxygen consumption VO2 (ml.min-1) was calculated using Fick's principle. Either pulmonary or systemic flow, as measured by dye dilution, was used in combination with the concordant arteriovenous oxygen concentration difference. In 130 patients, who matched the age of the LaFarge and Miettinen population, the obtained values of oxygen consumption VO2(dd) were compared with the estimated oxygen consumption values VO2(lfm), found using the LaFarge and Miettinen formulae. The VO2(lfm) was significantly lower than VO2(dd); -21.8 +/- 29.3 ml.min-1 (mean +/- SD), P < 0.001, 95% confidence interval (95% CI) -26.9 to -16.7, limits of agreement (LA) -80.4 to 36.9. A new regression formula for the assumed oxygen consumption VO2(ass) was derived in 250 patients by stepwise multiple regression analysis. The VO2(dd) was used as the dependent variable, with body surface area BSA (m2), Sex (0 for female, 1 for male), Age (years), Heart rate (min-1), and the presence of a left-to-right shunt as independent variables. The best-fitting formula is expressed as: VO2(ass) = (157.3 x BSA + 10.0 x Sex - 10.5 x ln Age + 4.8) ml.min-1, where ln Age = the natural logarithm of the age. This formula was validated prospectively in 60 patients. A non-significant difference between VO2(ass) and VO2(dd) was found; mean 2.0 +/- 23.4 ml.min-1, P = 0.771, 95% CI -4.0 to +8.0, LA -44.7 to +48.7. In conclusion, assumed oxygen consumption values obtained using our new formula are in better agreement with the actual values than those found according to LaFarge and Miettinen's formulae. PMID:7588904
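The reported regression can be written directly as a function. The coefficients are those quoted in the abstract; `assumed_vo2` is a hypothetical helper name, not part of the study's software:

```python
import math

def assumed_vo2(bsa_m2: float, male: bool, age_years: float) -> float:
    """Assumed oxygen consumption (ml/min) per the regression reported above:
    VO2(ass) = 157.3 * BSA + 10.0 * Sex - 10.5 * ln(Age) + 4.8,
    where BSA is body surface area (m^2) and Sex is 0 for female, 1 for male."""
    sex = 1 if male else 0
    return 157.3 * bsa_m2 + 10.0 * sex - 10.5 * math.log(age_years) + 4.8
```

For example, a male patient with BSA 1.8 m^2 aged 50 years gives roughly 257 ml/min; note the age term uses the natural logarithm, not the age itself.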
Analysing organic transistors based on interface approximation
Akiyama, Yuto; Mori, Takehiko
2014-01-15
Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region.
Approximate probability distributions of the master equation
NASA Astrophysics Data System (ADS)
Thomas, Philipp; Grima, Ramon
2015-07-01
Master equations are common descriptions of mesoscopic systems. Analytical solutions to these equations can rarely be obtained. We here derive an analytical approximation of the time-dependent probability distribution of the master equation using orthogonal polynomials. The solution is given in two alternative formulations: a series with continuous and a series with discrete support, both of which can be systematically truncated. While both approximations satisfy the system size expansion of the master equation, the continuous distribution approximations become increasingly negative and tend to oscillations with increasing truncation order. In contrast, the discrete approximations rapidly converge to the underlying non-Gaussian distributions. The theory is shown to lead to particularly simple analytical expressions for the probability distributions of molecule numbers in metabolic reactions and gene expression systems.
On the Assumed Natural Strain method to alleviate locking in solid-shell NURBS-based finite elements
NASA Astrophysics Data System (ADS)
Caseiro, J. F.; Valente, R. A. F.; Reali, A.; Kiendl, J.; Auricchio, F.; Alves de Sousa, R. J.
2014-06-01
In isogeometric analysis (IGA), the functions used to describe the CAD geometry (such as NURBS) are also employed, in an isoparametric fashion, for the approximation of the unknown fields, leading to an exact geometry representation. Since the introduction of IGA, it has been shown that the high regularity properties of the employed functions lead in many cases to superior accuracy per degree of freedom with respect to standard FEM. However, as in Lagrangian elements, NURBS-based formulations can be negatively affected by the appearance of non-physical phenomena that "lock" the solution when constrained problems are considered. In order to alleviate such locking behaviors, the Assumed Natural Strain (ANS) method proposed for Lagrangian formulations is extended to NURBS-based elements in the present work, within the context of solid-shell formulations. The performance of the proposed methodology is assessed by means of a set of numerical examples. The results allow us to conclude that the application of the ANS method to quadratic NURBS-based elements successfully alleviates non-physical phenomena such as shear and membrane locking, significantly improving the element performance.
An efficient Mindlin finite strip plate element based on assumed strain distribution
NASA Technical Reports Server (NTRS)
Chulya, Abhisak; Thompson, Robert L.
1988-01-01
A simple two-node, linear, finite strip plate bending element based on Mindlin-Reissner plate theory for the analysis of very thin to thick bridges, plates, and axisymmetric shells is presented. The new transverse shear strains are assumed to have a constant distribution in each two-node linear strip. The important aspect is the choice of the points that relate the nodal displacements and rotations through the locking transverse shear strains. The element stiffness matrix is explicitly formulated for efficient computation and ease of computer implementation. Numerical results showing the efficiency and predictive capability of the element for analyzing plates with different supports, loading conditions, and a wide range of thicknesses are given. The results show no sign of the shear locking phenomenon.
An assumed pdf approach for the calculation of supersonic mixing layers
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Drummond, J. P.; Hassan, H. A.
1992-01-01
In an effort to predict the effect that turbulent mixing has on the extent of combustion, a one-equation turbulence model is added to an existing Navier-Stokes solver with finite-rate chemistry. To average the chemical-source terms appearing in the species-continuity equations, an assumed pdf approach is also used. This code was used to analyze the mixing and combustion caused by the mixing layer formed by supersonic coaxial H2-air streams. The chemistry model employed allows for the formation of H2O2 and HO2. Comparisons are made with recent measurements using laser Raman diagnostics. Comparisons include temperature and its rms, and concentrations of H2, O2, N2, H2O, and OH. In general, good agreement with experiment was noted.
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents a novel approach to collision detection based on distance fields. A novel interpolation ensures stability of the distances in the vicinity of complex geometries. An assumed gradient formulation is introduced leading to a C1-continuous distance function. The gap function is re-expressed allowing penalty and Lagrange multiplier formulations. The article introduces a node-to-element integration for first order elements, but also discusses signed distances, partial updates, intermediate surfaces, mortar methods and higher order elements. The algorithm is fast, simple and robust for complex geometries and self contact. The computed tractions conserve linear and angular momentum even in infeasible contact. Numerical examples illustrate the new algorithm in three dimensions. PMID:23888088
Challenging residents to assume maximal responsibilities in homes for the aged.
Rodstein, M
1975-07-01
A program for activating residents of homes for the aged to assume maximal responsibilities is described. Promoting maximal physical and mental health through various modalities including activity programs, appropriate exercise and participation in democratic self-government mechanisms, will result in a happier, healthier population of residents in institutions for the aged. The increased demands on staff time and patience will be compensated for by relief of the too-frequent feelings of hopelessness and boredom endemic among the staff of long-term care facilities. Such programs demand constant effort by all staff members, patients, volunteers and relatives because if they succumb to the usual human dislike of persistency, short-term gains can easily be lost. PMID:1141631
Analysis of an object assumed to contain “Red Mercury”
NASA Astrophysics Data System (ADS)
Obhođaš, Jasmina; Sudac, Davorin; Blagus, Saša; Valković, Vladivoj
2007-08-01
After having been informed about an attempt of illicit trafficking, the Organized Crime Division of the Zagreb Police Authority confiscated in November 2003 a hand size metal cylinder suspected to contain "Red Mercury" (RM). The sample assumed to contain RM was analyzed with two nondestructive analytical methods in order to obtain information about the nature of the investigated object, namely, activation analysis with 14.1 MeV neutrons and EDXRF analysis. The activation analysis with 14.1 MeV neutrons showed that the container and its contents were characterized by the following chemical elements: Hg, Fe, Cr and Ni. By using EDXRF analysis, it was shown that the elements Fe, Cr and Ni were constituents of the capsule. Therefore, it was concluded that these three elements were present in the capsule only, while the content of the unknown material was Hg. Antimony as a hypothetical component of red mercury was not detected.
Arino, Yosuke; Akimoto, Keigo; Sano, Fuminori; Homma, Takashi; Oda, Junichiro; Tomoda, Toshimasa
2016-05-24
Although solar radiation management (SRM) might play a role as an emergency geoengineering measure, its potential risks remain uncertain, and hence there are ethical and governance issues in the face of SRM's actual deployment. By using an integrated assessment model, we first present one possible methodology for evaluating the value arising from retaining an SRM option given the uncertainty of climate sensitivity, and also examine sensitivities of the option value to SRM's side effects (damages). Reflecting the governance challenges of immediate SRM deployment, we assume scenarios in which SRM could be deployed only after 2050, with a limited degree of cooling (0.5 °C), when climate sensitivity uncertainty is assumed to be resolved, and only if the sensitivity is found to be high (T2x = 4 °C). We conduct a cost-effectiveness analysis with constraining temperature rise as the objective. The SRM option value originates from its rapid cooling capability, which would alleviate the mitigation requirement under climate sensitivity uncertainty and thereby reduce mitigation costs. According to our estimates, the option value during 1990-2049 for a +2.4 °C target (the lowest temperature target level for which there were feasible solutions in this model study) relative to preindustrial levels was in the range between $2.5 and $5.9 trillion, taking into account the maximum level of side effects shown in the existing literature. The result indicates that lower limits of the option values for temperature targets below +2.4 °C would be greater than $2.5 trillion. PMID:27162346
LINE-BY-LINE CALCULATION OF SPECTRA FROM DIATOMIC MOLECULES AND ATOMS ASSUMING A VOIGT LINE PROFILE
NASA Technical Reports Server (NTRS)
Whiting, E. E.
1994-01-01
This program predicts the spectra resulting from electronic transitions of diatomic molecules and atoms in local thermodynamic equilibrium. The program produces a spectrum by accounting for the contribution of each rotational and atomic line considered. The integrated intensity of each line is distributed in the spectrum by an approximate Voigt profile. The program can produce spectra for optically thin gases or for cases where simultaneous emission and absorption occur. In addition, the program can compute the spectrum resulting from the absorption of incident radiation by a column of cold gas or the high-temperature, self-absorbed emission spectrum from a nonisothermal gas. The computed spectrum can be output directly or combined with a slit function and sensitivity calibration to predict the output of a grating spectrograph or a fixed wavelength radiometer. Specifically, the program has the capability to include the following features in any computations: (1) Parallel transitions, in which spin splitting and lambda doubling are ignored (ignoring spin splitting and/or lambda doubling means that the total multiplet strength is assumed to reside in a single "effective" line), (2) Perpendicular transitions, in which spin splitting and lambda doubling are ignored, (3) Sigma Pi transitions, in which lambda doubling is ignored, (4) Atomic lines, (5) Option to terminate rotational line calculations when the molecule dissociates due to rotation, (6) Option to include the alternation of line intensities for homonuclear molecules, (7) Use of an approximate Voigt profile for the line shape, and (8) Radiative energy transport in a nonisothermal gas. The output options available in the program are: (1) Tabulation of the spontaneous emission spectrum (i.e., optically thin spectrum) for a 1.0 cm path length, (2) Tabulation of the "true" spectrum, which incorporates spontaneous emission, induced emission, absorption, and externally incident radiation through the equation of radiative transfer.
Turkoski, B B
2000-01-01
Herbal remedies are becoming increasingly popular as people seek more effective, natural, or safer methods for treating a variety of complaints. As a result, nurses in every setting may expect to see increased numbers of patients who are using herbal products. When patients assume that the nurses will be critical of their use of herbals, they may withhold such information to avoid unpleasantness. This could place patients at risk for adverse effects, drug interactions, and complications related to ineffective treatment. Nurses who are knowledgeable about herbal products and who are open to discussion about these products can provide information and advice about safe use. The discussion in this article addresses actions, possible benefits, and dangers of the most common herbal products. Guidelines for assessing and teaching clients about herbal use are included. PMID:11062629
Commonness and rarity in the marine biosphere.
Connolly, Sean R; MacNeil, M Aaron; Caley, M Julian; Knowlton, Nancy; Cripps, Ed; Hisano, Mizue; Thibaut, Loïc M; Bhattacharya, Bhaskar D; Benedetti-Cecchi, Lisandro; Brainard, Russell E; Brandt, Angelika; Bulleri, Fabio; Ellingsen, Kari E; Kaiser, Stefanie; Kröncke, Ingrid; Linse, Katrin; Maggi, Elena; O'Hara, Timothy D; Plaisance, Laetitia; Poore, Gary C B; Sarkar, Santosh K; Satpathy, Kamala K; Schückel, Ulrike; Williams, Alan; Wilson, Robin S
2014-06-10
Explaining patterns of commonness and rarity is fundamental for understanding and managing biodiversity. Consequently, a key test of biodiversity theory has been how well ecological models reproduce empirical distributions of species abundances. However, ecological models with very different assumptions can predict similar species abundance distributions, whereas models with similar assumptions may generate very different predictions. This complicates inferring processes driving community structure from model fits to data. Here, we use an approximation that captures common features of "neutral" biodiversity models--which assume ecological equivalence of species--to test whether neutrality is consistent with patterns of commonness and rarity in the marine biosphere. We do this by analyzing 1,185 species abundance distributions from 14 marine ecosystems ranging from intertidal habitats to abyssal depths, and from the tropics to polar regions. Neutrality performs substantially worse than a classical nonneutral alternative: empirical data consistently show greater heterogeneity of species abundances than expected under neutrality. Poor performance of neutral theory is driven by its consistent inability to capture the dominance of the communities' most-abundant species. Previous tests showing poor performance of a neutral model for a particular system often have been followed by controversy about whether an alternative formulation of neutral theory could explain the data after all. However, our approach focuses on common features of neutral models, revealing discrepancies with a broad range of empirical abundance distributions. These findings highlight the need for biodiversity theory in which ecological differences among species, such as niche differences and demographic trade-offs, play a central role. PMID:24912168
Epidemiology of child pedestrian casualty rates: can we assume spatial independence?
Hewson, Paul J
2005-07-01
Child pedestrian injuries are often investigated by means of ecological studies, yet are clearly part of a complex spatial phenomenon. Spatial dependence within such ecological analyses has rarely been assessed, yet the validity of basic statistical techniques relies on a number of independence assumptions. Recent work from Canada has highlighted the potential for modelling spatial dependence within data that were aggregated in terms of the number of road casualties resident in a given geographical area. Other jurisdictions aggregate data in terms of the number of casualties in the geographical area in which the collision took place. This paper contrasts child pedestrian casualty data from Devon County, UK, which have been aggregated by both methods. A simple ecological model, with minimally useful covariates relating to measures of child deprivation, provides evidence that data aggregated by the casualty's home location cannot be assumed to be spatially independent, and that for analysis of these data to be valid there must be some accounting for spatial autocorrelation within the model structure. Conversely, data aggregated by collision location (as is usual in the UK) were found to be spatially independent. Whilst the spatial model is clearly more complex, it provided a superior fit to that seen with either collision-aggregated or non-spatial models. More importantly, the ecological-level association between deprivation and casualty rate is much lower once the spatial structure is accounted for, highlighting the importance of using appropriately structured models. PMID:15949456
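The spatial-independence check at the heart of this analysis can be illustrated with Moran's I, the standard autocorrelation statistic for area-level rates. The sketch below is not from the paper: the casualty rates and the simple chain-contiguity weight matrix are hypothetical values, chosen so that a clustered high/low pattern yields a clearly positive I.

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I spatial autocorrelation statistic.

    values  : 1-D array of area-level rates
    weights : n x n spatial weight matrix (weights[i, j] > 0 if areas
              i and j are neighbours)
    """
    z = values - values.mean()
    n = len(values)
    num = n * (weights * np.outer(z, z)).sum()
    den = weights.sum() * (z ** 2).sum()
    return num / den

# Toy example: casualty rates on a chain of 6 areas, with adjacent
# areas as neighbours. High rates cluster in areas 0-2, low in 3-5.
rates = np.array([5.0, 6.0, 5.5, 1.0, 0.8, 1.2])
W = np.zeros((6, 6))
for i in range(5):
    W[i, i + 1] = W[i + 1, i] = 1.0

print(round(morans_i(rates, W), 3))  # clearly positive: spatial clustering
```

A value near zero would be consistent with spatial independence; a clearly positive value, as here, signals the kind of spatial dependence that requires an explicitly spatial model structure.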
Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications
NASA Technical Reports Server (NTRS)
Chaki, Sagar; Gurfinkel, Arie
2010-01-01
We develop a learning-based automated Assume-Guarantee (AG) reasoning framework for verifying omega-regular properties of concurrent systems. We study the applicability of the non-circular (AG-NC) and circular (AG-C) AG proof rules in the context of systems with infinite behaviors. In particular, we show that AG-NC is incomplete when assumptions are restricted to strictly infinite behaviors, while AG-C remains complete. We present a general formalization, called LAG, of the learning-based automated AG paradigm, and show how existing approaches for automated AG reasoning are special instances of LAG. We develop two learning algorithms for a class of systems, called infinite regular systems, that combine finite and infinite behaviors. We show that for infinite regular systems, both AG-NC and AG-C are sound and complete. Finally, we show how to instantiate LAG to do automated AG reasoning for infinite regular, and omega-regular, systems using both AG-NC and AG-C as proof rules.
Assume-Guarantee Verification of Source Code with Design-Level Assumptions
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.
2004-01-01
Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
iRhom2 (Uncv) mutation blocks bulge stem cells assuming the fate of hair follicle.
Yang, Leilei; Li, Wenlong; Liu, Bing; Wang, Shaoxia; Zeng, Lin; Zhang, Cuiping; Li, Yang
2016-09-01
iRhom2 is necessary for the maturation of TNFα-converting enzyme, which is required for the release of tumor necrosis factor. In a previous study, we found that the iRhom2 (Uncv) mutation in the N-terminal cytoplasmic domain-encoding region leads to aberrant hair shaft and inner root sheath differentiation, and thus results in a hairless phenotype in homozygous iRhom2 (Uncv/Uncv) BALB/c mice. In this study, we found that the iRhom2 (Uncv) mutation decreased hair matrix proliferation, whereas iRhom2 (Uncv/Uncv) mice displayed hyperproliferation and hyperkeratosis in the interfollicular epidermis along with hypertrophy of the sebaceous glands. The number of bulge stem cells was not altered and the hair follicle cycle was normal in iRhom2 (Uncv/Uncv) mice. The decreased proliferation in the hair matrix but increased proliferation in the epidermis and sebaceous glands of iRhom2 (Uncv/Uncv) mice may imply that the iRhom2 (Uncv) mutation blocks bulge stem cells from assuming the fate of the hair follicle. This study identifies iRhom2 as a novel regulator of keratinocyte lineage determination. PMID:27393687
Bilateral Painful Ophthalmoplegia: A Case of Assumed Tolosa-Hunt Syndrome.
Kastirr, Ilko; Kamusella, Peter; Andresen, Reimer
2016-03-01
We present the case of a 47-year-old man with vertical and horizontal gaze paresis combined with periorbital pain that developed initially on the right side but extended after 3-4 days to the left. Gadolinium-enhancing tissue in the cavernous sinus was shown by MRI of the orbital region in the T1 spin echo sequence with fat saturation (SEfs) with a slice thickness of 2 mm. As no other abnormalities were found and the pain resolved within 72 hours of treatment with cortisone, a bilateral Tolosa-Hunt Syndrome (THS) was assumed. THS is an uncommon cause of Painful Ophthalmoplegia (PO), and only a few cases of bilateral involvement have been reported. Even though the diagnostic criteria for THS require unilateral symptoms, we suggest that in patients with bilateral PO, THS should not be excluded as a differential diagnosis. Furthermore, when using MRI to detect granulomatous tissue in the orbital region, the chosen sequence should be T1 SEfs and the slice thickness should be as low as 2 mm, as granulomas are often no larger than 1-2 mm. PMID:27134970
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr. (Principal Investigator)
1996-01-01
The goal of this research project is to develop assumed-stress hybrid elements with rotational degrees of freedom for analyzing composite structures. During the first year of the three-year activity, the effort was directed to further assess the AQ4 shell element and its extensions to buckling and free vibration problems. In addition, the development of a compatible 2-node beam element was to be accomplished. The extensions and new developments were implemented in the Computational Structural Mechanics Testbed COMET. An assessment was performed to verify the implementation and to assess the performance of these elements in terms of accuracy. During the second and third years, extensions to geometrically nonlinear problems were developed and tested. This effort involved working with the nonlinear solution strategy as well as the nonlinear formulation for the elements. This research has resulted in the development and implementation of two additional element processors (ES22 for the beam element and ES24 for the shell elements) in COMET. The software was developed using a SUN workstation and has been ported to the NASA Langley Convex named blackbird. Both element processors are now part of the baseline version of COMET.
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Gregory, Irene M.
2014-01-01
Control-theoretic modeling of the human operator's dynamic behavior in manual control tasks has a long, rich history. There has been significant work on techniques used to identify the pilot model of a given structure. This research attempts to go beyond pilot identification based on experimental data to develop a predictor of pilot behavior. Two methods for predicting pilot stick input during changing aircraft dynamics and deducing changes in pilot behavior are presented. This approach may also have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot. With this ability to detect changes in piloting behavior, the possibility now exists to mediate adverse human behaviors, hardware failures, and software anomalies with autonomy that may ameliorate these undesirable effects. However, the appropriate timing of when autonomy should assume control is dependent on the criticality of actions to safety, the sensitivity of methods to accurately detect these adverse changes, and the effects of changes in levels of automation of the system as a whole.
Defining modeling parameters for juniper trees assuming Pleistocene-like conditions at the NTS
Tarbox, S.R.; Cochran, J.R.
1994-12-31
This paper addresses part of Sandia National Laboratories' (SNL) efforts to assess the long-term performance of the Greater Confinement Disposal (GCD) facility located on the Nevada Test Site (NTS). At issue is whether the GCD site complies with the 40 CFR 191 standards set for transuranic (TRU) waste burial. SNL has developed a radionuclide transport model which can be used to assess TRU radionuclide movement away from the GCD facility. An earlier iteration of the model found that radionuclide uptake and release by plants is an important aspect of the system to consider. Currently, the shallow-rooted plants at the NTS do not pose a threat to the integrity of the GCD facility. However, the threat increases substantially if deeper-rooted woodland species migrate to the GCD facility, given a shift to a wetter climate. The model parameters discussed here will be included in the next model iteration, which assumes a climate shift will provide for the growth of juniper trees at the GCD facility. Model parameters were developed using published data, and wherever possible, data were taken from juniper and pinon-juniper studies that mirrored as many aspects of the GCD facility as possible.
From the Kochen-Specker theorem to noncontextuality inequalities without assuming determinism.
Kunjwal, Ravi; Spekkens, Robert W
2015-09-11
The Kochen-Specker theorem demonstrates that it is not possible to reproduce the predictions of quantum theory in terms of a hidden variable model where the hidden variables assign a value to every projector deterministically and noncontextually. A noncontextual value assignment to a projector is one that does not depend on which other projectors-the context-are measured together with it. Using a generalization of the notion of noncontextuality that applies to both measurements and preparations, we propose a scheme for deriving inequalities that test whether a given set of experimental statistics is consistent with a noncontextual model. Unlike previous inequalities inspired by the Kochen-Specker theorem, we do not assume that the value assignments are deterministic and therefore in the face of a violation of our inequality, the possibility of salvaging noncontextuality by abandoning determinism is no longer an option. Our approach is operational in the sense that it does not presume quantum theory: a violation of our inequality implies the impossibility of a noncontextual model for any operational theory that can account for the experimental observations, including any successor to quantum theory. PMID:26406812
Modeling of Hydraulic Fractures with Poromechanical Coupling Using an Assumed Enhanced Strain Method
NASA Astrophysics Data System (ADS)
Wang, W.; White, J. A.
2015-12-01
When modeling hydraulic fractures, it is often necessary to include tightly coupled interaction between fluid-filled fractures and the porous host rock. Further, the numerical scheme must accurately discretize processes taking place both in the rock volume and along growing fracture surfaces. This work presents a three-dimensional scheme for handling these challenging numerical issues. Solid deformation and fluid pressure in the host rock are modeled using a mixed finite-element/finite-volume scheme. The continuum formulation is enriched with an assumed enhanced strain (AES) method to represent discontinuities in the displacement field due to fractures. Fractures can be arbitrarily oriented and located with respect to the underlying mesh, and no re-meshing is necessary during fracture propagation. Flow along the fracture is modeled using a locally conservative finite volume scheme. Leak-off coupling allows for fluid exchange between the porous matrix and the fracture. We describe an efficient and scalable preconditioning process that leads to rapid convergence of the resulting discrete system. The scheme is validated using analytical examples and monitoring data from a real fractured reservoir.
Engineering evaluation of alternatives: Managing the assumed leak from single-shell Tank 241-T-101
Brevick, C.H.; Jenkins, C.
1996-02-01
At mid-year 1992, the liquid level gage for Tank 241-T-101 indicated that 6,000 to 9,000 gal had leaked. Because of the liquid level anomaly, Tank 241-T-101 was declared an assumed leaker on October 4, 1992. SST liquid level gages have historically been unreliable. False readings can occur because of instrument failures, floating salt cake, and salt encrustation. Gages frequently self-correct, and tanks show no indication of a leak. Tank levels cannot be visually inspected and verified because of high radiation fields. The gage in Tank 241-T-101 has largely corrected itself since the mid-year 1992 reading. Therefore, doubt exists that a leak has occurred, or that the magnitude of the leak poses any immediate environmental threat. While reluctance exists to use valuable DST space unnecessarily, there is a large safety and economic incentive to prevent or mitigate release of tank liquid waste into the surrounding environment. During the assessment of the significance of the Tank 241-T-101 liquid level gage readings, the Washington State Department of Ecology determined that Westinghouse Hanford Company (WHC) was not in compliance with regulatory requirements, and directed transfer of the Tank 241-T-101 liquid contents into a DST. Meanwhile, DOE directed WHC to examine reasonable alternatives/options for safe interim management of Tank 241-T-101 wastes before taking action. The five alternatives that could be used to manage waste from a leaking SST are: (1) No-Action, (2) In-Tank Stabilization, (3) External Tank Stabilization, (4) Liquid Retrieval, and (5) Total Retrieval. The findings of these examinations are reported in this study.
NASA Astrophysics Data System (ADS)
Carvajal, Matías; Gubler, Alejandra
2016-06-01
We investigated the effect that along-dip slip distribution has on near-shore tsunami amplitudes and on coastal land-level changes in the region of central Chile (29°-37°S). Here, and all along the Chilean megathrust, the seismogenic zone extends beneath dry land, and thus tsunami generation and propagation are limited to its seaward portion, where the sensitivity of the initial tsunami waveform to dislocation model inputs, such as slip distribution, is greater. We considered four distributions of earthquake slip in the dip direction, including a spatially uniform slip source and three others with typical bell-shaped slip patterns that differ in the depth range of slip concentration. We found that a uniform slip scenario predicts much lower tsunami amplitudes and generally less coastal subsidence than scenarios that assume bell-shaped distributions of slip. Although the finding that uniform slip scenarios underestimate tsunami amplitudes is not new, it has been largely ignored for tsunami hazard assessment in Chile. Our simulation results also suggest that uniform slip scenarios tend to predict later arrival times of the leading wave than bell-shaped sources. The time of occurrence of the largest wave at a specific site also depends on how the slip is distributed in the dip direction; however, other factors, such as local bathymetric configurations and standing edge waves, are also expected to play a role. Arrival time differences are especially critical in Chile, where tsunamis arrive earlier than elsewhere. We believe that the results of this study will be useful to both public and private organizations for mapping tsunami hazard in coastal areas along the Chilean coast and, therefore, help reduce the risk of loss and damage caused by future tsunamis.
NASA Astrophysics Data System (ADS)
Kraseski, K. A.
2015-12-01
Recently developed conceptual frameworks and new observations have improved our understanding of hyporheic temperature dynamics and their effects on channel temperatures. However, hyporheic temperature models that are both simple and useful remain elusive. As water moves through hyporheic pathways, it exchanges heat with hyporheic sediment through conduction, and this process dampens the diurnal temperature wave of the water entering from the channel. This study examined the mechanisms underlying this behavior, and utilized those findings to create two simple models that predict temperatures of water reentering the channel after traveling through hyporheic pathways for different lengths of time. First, we developed a laboratory experiment to represent this process and determine conduction rates for various sediment size classes (sand, fine gravel, coarse gravel, and a proportional mix of the three) by observing the time series of temperature changes between sediment and water of different initial temperatures. Results indicated that conduction rates were near-instantaneous, with heat transfer being completed within seconds to a few minutes of the initial interaction. Heat conduction rates between the sediment and water were therefore much faster than hyporheic flux rates, making an assumption of instantaneous conduction reasonable. Then, we developed two simple models to predict temperature time series of hyporheic water based on the initial diurnal temperature wave and hyporheic travel distance. The first model estimates a damping coefficient based on the total water-sediment heat exchange through each diurnal cycle. The second model solves the heat transfer equation assuming instantaneous conduction using a simple finite difference algorithm. Both models demonstrated nearly complete damping of the sine wave over the distance traveled in four days. If hyporheic exchange is substantial and travel times are long, then hyporheic damping may have large effects on
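The first model's damping-coefficient idea can be sketched as follows. Everything here is a hypothetical illustration: the function name, mean temperature, amplitude, and damping rate are made-up values, not calibrated results from the study; the only structural assumption taken from the abstract is that the diurnal sine wave is damped exponentially with travel time.

```python
import math

def hyporheic_temperature(t_hours, travel_days, mean_temp=15.0,
                          amplitude=5.0, damping_rate=1.2):
    """Diurnal temperature of hyporheic water after `travel_days` in the
    subsurface, assuming instantaneous water-sediment conduction that
    damps the channel's sine wave exponentially with travel time.

    damping_rate (1/day) is a hypothetical calibration constant.
    """
    damped_amp = amplitude * math.exp(-damping_rate * travel_days)
    return mean_temp + damped_amp * math.sin(2.0 * math.pi * t_hours / 24.0)

# At entry (travel_days=0) the full 5-degree diurnal wave is present;
# after four days of travel the residual amplitude is ~0.04 degrees,
# i.e. the wave is almost completely damped.
print(hyporheic_temperature(6, 0), hyporheic_temperature(6, 4))
```

With these illustrative constants, the model reproduces the abstract's qualitative finding of nearly complete damping over a four-day travel time.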
The Cell Cycle Switch Computes Approximate Majority
NASA Astrophysics Data System (ADS)
Cardelli, Luca; Csikász-Nagy, Attila
2012-09-01
Both computational and biological systems have to make decisions about switching from one state to another. The 'Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates mitotic entry of the cell cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis-promoting Cyclin-Dependent Kinases is driven by a system that is related in both structure and dynamics to the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and that they are exchangeable as components of oscillatory networks.
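The Approximate Majority computation referred to above can be sketched as a stochastic simulation of the generic three-state population protocol: two committed states and one undecided state, with conflicts demoting an agent to undecided and decided agents recruiting undecided ones. This is a textbook formulation for illustration, not the authors' cell-cycle model; the function name and seed are hypothetical.

```python
import random

def approximate_majority(n_a, n_b, seed=1):
    """Simulate a three-state Approximate Majority population protocol.

    States are 'A', 'B', and undecided '_'. Interaction rules:
      A meets B          -> the second agent becomes undecided;
      decided meets '_'  -> the undecided agent adopts the decided state.
    The population converges to an all-A or all-B consensus that matches
    the initial majority with high probability.
    """
    rng = random.Random(seed)
    pop = ['A'] * n_a + ['B'] * n_b
    while len(set(pop)) > 1:                    # stop at consensus
        i, j = rng.sample(range(len(pop)), 2)   # random interacting pair
        x, y = pop[i], pop[j]
        if {x, y} == {'A', 'B'}:
            pop[j] = '_'                        # conflict: j becomes undecided
        elif x in ('A', 'B') and y == '_':
            pop[j] = x                          # j adopts i's decided state
        elif x == '_' and y in ('A', 'B'):
            pop[i] = y                          # i adopts j's decided state
    return pop[0]

# With a large initial gap, the initial majority almost surely wins.
print(approximate_majority(90, 10))
```

The number of pairwise interactions needed to converge is O(n log n) with high probability, which is what the abstract means by "asymptotically fastest".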
Investigating nitrogen deficiency in common beans
Technology Transfer Automated Retrieval System (TEKTRAN)
Phaseolus vulgaris (common bean) and soybean diverged from a common ancestor approximately 19 million years ago. The genome of P. vulgaris is approximately half the size of soybean, making it an excellent model for soybean genetics. Nitrogen (N) is often a growth-limiting nutrient, and N deficiency ...
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
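The contrast between the DEB and linear Taylor approximations can be illustrated on the tip displacement of a cantilever under a height perturbation, where delta is proportional to 1/h**3 because the bending inertia scales as h**3. This is a hypothetical worked case in the spirit of the paper's beam tests, not the paper's own numbers; for this power-law response the DEB closed form happens to be exact.

```python
def exact_tip_displacement(h, c=1.0):
    """Cantilever tip displacement: delta = c / h**3 (bending inertia ~ h**3)."""
    return c / h ** 3

def taylor_approx(h, h0, c=1.0):
    """Linear Taylor series about h0: delta0 + d(delta)/dh * (h - h0)."""
    delta0 = c / h0 ** 3
    ddelta_dh = -3.0 * c / h0 ** 4          # sensitivity evaluated at h0
    return delta0 + ddelta_dh * (h - h0)

def deb_approx(h, h0, c=1.0):
    """DEB: read the sensitivity relation d(delta)/dh = -3 * delta / h as a
    differential equation in h and solve it in closed form:
    delta(h) = delta0 * (h0 / h)**3."""
    delta0 = c / h0 ** 3
    return delta0 * (h0 / h) ** 3

h0, h = 1.0, 1.3                            # a 30% height perturbation
print(exact_tip_displacement(h), deb_approx(h, h0), taylor_approx(h, h0))
```

Here the DEB estimate matches the exact value (about 0.455) to machine precision, while the Taylor line gives 0.10, an error of roughly 0.36: the same sensitivity information, interpreted as a differential equation rather than a slope, yields a far better approximation.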
NASA Technical Reports Server (NTRS)
Toplis, M. J.; Mizzon, H.; Forni, O.; Monnereau, M.; Prettyman, T. H.; McSween, H. Y.; McCoy, T. J.; Mittlefehldt, D. W.; DeSanctis, M. C.; Raymond, C. A.; Russell, C. T.
2012-01-01
Bulk composition (including oxygen content) is a primary control on the internal structure and mineralogy of differentiated asteroids. For example, oxidation state will affect core size, as well as the Mg# and pyroxene content of the silicate mantle. The Howardite-Eucrite-Diogenite (HED) class of meteorites provides an interesting test case of this idea, in particular in light of results of the Dawn mission, which provide information on the size, density and differentiation state of Vesta, the parent body of the HEDs. In this work we explore plausible bulk compositions of Vesta and use mass-balance and geochemical modelling to predict possible internal structures and crust/mantle compositions and mineralogies. Models are constrained to be consistent with known HED samples, but the approach has the potential to extend predictions to thermodynamically plausible rock types that are not necessarily present in the HED collection. Nine chondritic bulk compositions are considered (CI, CV, CO, CM, H, L, LL, EH, EL). For each, relative proportions and densities of the core, mantle, and crust are quantified. Considering that the basaltic crust has the composition of the primitive eucrite Juvinas, and assuming that this crust is in thermodynamic equilibrium with the residual mantle, it is possible to calculate how much iron is in metallic form (in the core) and how much is in oxidized form (in the mantle and crust) for a given bulk composition. Of the nine bulk compositions tested, solutions corresponding to the CI and LL groups predicted a negative metal fraction and were not considered further. Solutions for enstatite chondrites imply significant oxidation relative to the starting materials, and these solutions too are considered unlikely. For the remaining bulk compositions, the relative proportion of crust to bulk silicate is typically in the range 15 to 20%, corresponding to crustal thicknesses of 15 to 20 km for a porosity-free Vesta-sized body. The mantle is predicted to be largely
Benthic grazers and suspension feeders: Which one assumes the energetic dominance in Königshafen?
NASA Astrophysics Data System (ADS)
Asmus, H.
1994-06-01
Size-frequency histograms of biomass, secondary production, respiration and energy flow of 4 dominant macrobenthic communities of the intertidal bay of Königshafen were analysed and compared. In the shallow sandy flats ( Nereis-Corophium-belt [ N.C.-belt], seagrass-bed and Arenicola-flat) a bimodal size-frequency histogram of biomass, secondary production, respiration and energy flow was found with a first peak formed by individuals within a size range of 0.10 to 0.32 mg ash free dry weight (AFDW). In this size range, the small prosobranch Hydrobia ulvae was the dominant species, showing maximal biomass as well as secondary production, respiration and energy flow in the seagrass-bed. The second peak on the size-frequency histogram was formed by the polychaete Nereis diversicolor with individual weights of 10 to 18 mg AFDW in the N.C.-belt, and by Arenicola marina with individual weights of 100 to 562 mg AFDW in both of the other sand flats. Biomass, productivity, respiration and energy flow of these polychaetes increased from the Nereis-Corophium-belt, to the seagrass-bed, and to the Arenicola-flat. Mussel beds surpassed all other communities in biomass and the functional parameters mentioned above. Size-frequency histograms of these parameters were distinctly unimodal with a maximum at an individual size of 562 to 1000 mg AFDW. This size group was dominated by adult specimens of Mytilus edulis. Averaged over the total area, the size-frequency histogram of energy flow of all intertidal flats of Königshafen showed one peak built by Hydrobia ulvae and a second one, mainly formed by M. edulis. Assuming that up to 10% of the intertidal area is covered by mussel beds, the maximum of the size-specific energy flow will be formed by Mytilus. When only 1% is covered by mussel beds, then the energy flow is dominated by H. ulvae. Both animals represent different trophic types and their dominance in energy flow has consequences for the food web and the carbon flow of the
Limitations of the acoustic approximation for seismic crosshole tomography
NASA Astrophysics Data System (ADS)
Marelli, Stefano; Maurer, Hansruedi
2010-05-01
Modelling and inversion of seismic crosshole data is a challenging task in terms of computational resources. Even with the significant increase in power of modern supercomputers, full three-dimensional elastic modelling of high-frequency waveforms generated from hundreds of source positions in several boreholes is still an intractable task. However, it has been recognised that full waveform inversion offers substantially more information compared with traditional travel time tomography. A common strategy to reduce the computational burden for tomographic inversion is to approximate the true elastic wave propagation by acoustic modelling. This approximation assumes that the solid rock units can be treated like fluids (with no shear wave propagation) and is generally considered to be satisfactory so long as only the earliest portions of the recorded seismograms are considered. The main assumption is that most of the energy in the early parts of the recorded seismograms is carried by the faster compressional (P-) waves. Although a limited number of studies exist on the effects of this approximation for surface/marine synthetic reflection seismic data, and show it to be generally acceptable for models with low to moderate impedance contrasts, to our knowledge no comparable studies have been published on the effects for cross-borehole transmission data. An obvious question is whether transmission tomography should be less affected by elastic effects than surface reflection data when only short time windows are applied to primarily capture the first arriving wavetrains. To answer this question we have performed 2D and 3D investigations on the validity of the acoustic approximation for an elastic medium and using crosshole source-receiver configurations. In order to generate consistent acoustic and elastic data sets, we ran the synthetic tests using the same finite-differences time-domain elastic modelling code for both types of simulations. The acoustic approximation was
ERIC Educational Resources Information Center
Hayamizu, Toshihiko; Kino, Kazuyo; Takagi, Kuniko; Tan, Eng-Hai
2004-01-01
The purpose of this study is to examine whether a new construct "Assumed-Competence based on undervaluing others (AC)" could be a determinant of anger and sadness for contemporary Japanese adolescents. A set of questionnaires was administered to 584 high school students, who rated ACS-2 (Assumed-Competence Scale, second version), Rosenberg's…
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 3 2013-01-01 2013-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 1026 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION TRUTH IN LENDING (REGULATION Z) Pt. 1026, App. L Appendix L to Part 1026—Assumed Loan Periods...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 3 2014-01-01 2014-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED..., App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Assumed Loan Periods for Computations of Total Annual Loan Cost Rates L Appendix L to Part 1026 Banks and Banking BUREAU OF CONSUMER FINANCIAL PROTECTION TRUTH IN LENDING (REGULATION Z) Pt. 1026, App. L Appendix L to Part 1026—Assumed Loan Periods...
42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.
Code of Federal Regulations, 2010 CFR
2010-10-01
... costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM VOLUNTARY MEDICARE... Provisions § 423.908. Phased-down State contribution to drug benefit costs assumed by Medicare. This...
Embedding impedance approximations in the analysis of SIS mixers
NASA Technical Reports Server (NTRS)
Kerr, A. R.; Pan, S.-K.; Withington, S.
1992-01-01
Future millimeter-wave radio astronomy instruments will use arrays of many SIS receivers, either as focal plane arrays on individual radio telescopes, or as individual receivers on the many antennas of radio interferometers. Such applications will require broadband integrated mixers without mechanical tuners. To produce such mixers, it will be necessary to improve present mixer design techniques, most of which use the three-frequency approximation to Tucker's quantum mixer theory. This paper examines the adequacy of three approximations to Tucker's theory: (1) the usual three-frequency approximation, which assumes a sinusoidal LO voltage at the junction and a short-circuit at all frequencies above the upper sideband; (2) a five-frequency approximation which allows two LO voltage harmonics and five small-signal sidebands; and (3) a quasi five-frequency approximation in which five small-signal sidebands are allowed, but the LO voltage is assumed sinusoidal. These are compared with a full harmonic-Newton solution of Tucker's equations, including eight LO harmonics and their corresponding sidebands, for realistic SIS mixer circuits. It is shown that the accuracy of the three approximations depends strongly on the value of ωR_N C for the SIS junctions used. For large ωR_N C, all three approximations approach the eight-harmonic solution. For ωR_N C values in the range 0.5 to 10, the range of most practical interest, the quasi five-frequency approximation is a considerable improvement over the three-frequency approximation, and should be suitable for much design work. For the realistic SIS mixers considered here, the five-frequency approximation gives results very close to those of the eight-harmonic solution. Use of these approximations, where appropriate, considerably reduces the computational effort needed to analyze an SIS mixer, and allows the design and optimization of mixers using a personal computer.
Cavity approximation for graphical models.
Rizzo, T; Wemmenhove, B; Kappen, H J
2007-07-01
We reformulate the cavity approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message passing algorithm that implements the first order correction to the Bethe approximation is described. Furthermore, we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing k provides a sequence of approximations of markedly increasing precision. Furthermore, in some cases we could also confirm the general expectation that the approximation of order k, whose computational complexity is O(N^(k+1)), has an error that scales as 1/N^(k+1) with the size of the system. We discuss the relation between this approach and some recent developments in the field. PMID:17677405
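The Bethe estimates that CA[k] corrects can be illustrated at their simplest: on a tree, sum-product message passing reproduces the exact marginals. Below is a minimal sketch on a two-variable binary pairwise model; the factor values are invented for illustration and nothing here is taken from the paper's implementation.

```python
from itertools import product

# Pairwise factors for a two-variable binary model (illustrative values):
# p(x1, x2) is proportional to phi1[x1] * phi2[x2] * psi[x1][x2]
phi1 = [0.6, 0.4]
phi2 = [0.3, 0.7]
psi = [[1.0, 0.5],
       [0.5, 2.0]]

def marginal_bp(x1_val):
    """Sum-product message from node 2 into node 1; exact on a tree."""
    msg = [sum(phi2[x2] * psi[x1][x2] for x2 in (0, 1)) for x1 in (0, 1)]
    belief = [phi1[x1] * msg[x1] for x1 in (0, 1)]
    z = sum(belief)
    return belief[x1_val] / z

def marginal_brute(x1_val):
    """Brute-force marginal by summing the joint distribution."""
    joint = {(a, b): phi1[a] * phi2[b] * psi[a][b]
             for a, b in product((0, 1), repeat=2)}
    z = sum(joint.values())
    return sum(v for (a, _), v in joint.items() if a == x1_val) / z
```

On loopy graphs the two computations no longer agree, which is exactly the gap the cavity corrections address.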
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
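The claimed scheme is easy to sketch: several cheap circuits, each allowed to disagree with the reference on some inputs, feed a bitwise majority voter that still reproduces the reference everywhere. The three "approximate circuits" below are invented stand-ins (the reference is a 2-input AND), arranged so that no two of them are wrong on the same input.

```python
def reference(a, b):
    """Reference circuit: 2-input AND (illustrative choice)."""
    return a & b

# Three hypothetical approximate circuits: each deviates from the
# reference on a different single input, so no two fail together.
def approx1(a, b):
    return 1 if (a, b) == (0, 0) else a & b   # wrong only on (0,0)

def approx2(a, b):
    return 0 if (a, b) == (1, 1) else a & b   # wrong only on (1,1)

def approx3(a, b):
    return 1 if (a, b) == (0, 1) else a & b   # wrong only on (0,1)

def voter(bits):
    """Majority vote over an odd number of single-bit outputs."""
    return 1 if sum(bits) * 2 > len(bits) else 0

def voted_circuit(a, b):
    return voter([approx1(a, b), approx2(a, b), approx3(a, b)])
```

Because the individual errors never coincide on an input, the voted output matches the reference for every possible input value, which is the property the claim requires.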
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with calculating the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one-third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation traditionally associated with structural optimization.
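The abstract does not spell out its gradient formula; as a generic sketch of replacing exact sensitivities with cheap approximate ones, here is a forward-difference gradient of a behavior-constraint function. The constraint itself is an invented example, not one of the paper's problems.

```python
def approx_gradient(g, x, h=1e-6):
    """Forward-difference approximation to the gradient of constraint g at x.

    Costs len(x) + 1 evaluations of g instead of a closed-form derivation.
    """
    g0 = g(x)
    grad = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += h
        grad.append((g(xp) - g0) / h)
    return grad

# Illustrative behavior constraint: g(x) = x0^2 + 3*x1
g = lambda x: x[0] ** 2 + 3.0 * x[1]
```

In a real structural problem each evaluation of `g` would be a finite element analysis, which is why reducing the number of such evaluations dominates the CPU-time savings the paper reports.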
Approximation Schemes for Scheduling with Availability Constraints
NASA Astrophysics Data System (ADS)
Fu, Bin; Huo, Yumei; Zhao, Hairong
We investigate the problems of scheduling n weighted jobs to m identical machines with availability constraints. We consider two different models of availability constraints: the preventive model, where the unavailability is due to preventive machine maintenance, and the fixed job model, where the unavailability is due to an a priori assignment of some of the n jobs to certain machines at certain times. Both models have applications such as turnaround scheduling or overlay computing. In both models, the objective is to minimize the total weighted completion time. We assume that m is a constant and the jobs are non-resumable. For the preventive model, it has been shown that there is no approximation algorithm if all machines have unavailable intervals, even when w_i = p_i for all jobs. In this paper, we assume there is one machine permanently available and the processing time of each job is equal to its weight for all jobs. We develop the first PTAS when there is a constant number of unavailable intervals. One main feature of our algorithm is that the classification of large and small jobs is with respect to each individual interval, and thus not fixed. This classification allows us (1) to enumerate the assignments of large jobs efficiently, and (2) to move small jobs around without increasing the objective value too much, and thus to derive our PTAS. Then we show that there is no FPTAS in this case unless P = NP.
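The objective and the non-resumable constraint can be made concrete with a small evaluator: given a job order on one machine with a fixed unavailable interval, compute the total weighted completion time under w_i = p_i. The data are illustrative, and this only evaluates a given schedule; it is not the paper's PTAS.

```python
def weighted_completion_time(jobs, unavailable):
    """Total weighted completion time of jobs run in the given order on one
    machine with a single fixed unavailable interval. Jobs are non-resumable:
    a job that would overlap the interval waits until the interval ends.
    Weight equals processing time (the paper's w_i = p_i setting).

    jobs: list of processing times; unavailable: (start, end) pair.
    """
    start, end = unavailable
    t, total = 0.0, 0.0
    for p in jobs:
        if t < end and t + p > start:   # would overlap the downtime
            t = end                     # non-resumable: wait it out
        t += p
        total += p * t                  # weight w = p
    return total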
Common problems in gastrointestinal radiology
Thompson, W.M.
1989-01-01
This book covers approximately 70 common diagnostic problems in gastrointestinal radiology. Each problem includes a short illustrated case history, a discussion of the radiologic findings, a general discussion of the case, the differential diagnosis, a description of the management of the problem or procedure used, and, where appropriate, the results of the therapy suggested.
Odic, Darko; Lisboa, Juan Valle; Eisinger, Robert; Olivera, Magdalena Gonzalez; Maiche, Alejandro; Halberda, Justin
2016-01-01
What is the relationship between our intuitive sense of number (e.g., when estimating how many marbles are in a jar) and our intuitive sense of other quantities, including time (e.g., when estimating how long it has been since we last ate breakfast)? Recent work in cognitive, developmental, and comparative psychology and in computational neuroscience has suggested that our representations of approximate number, time, and spatial extent are fundamentally linked and constitute a "generalized magnitude system". But the shared behavioral and neural signatures between number, time, and space may alternatively be due to similar encoding and decision-making processes, rather than to shared domain-general representations. In this study, we investigate the relationship between approximate number and time in a large sample of 6- to 8-year-old children in Uruguay by examining how individual differences in the precision of number and time estimation correlate with school mathematics performance. Over four testing days, each child completed an approximate number discrimination task, an approximate time discrimination task, a digit span task, and a large battery of symbolic math tests. We replicate previous reports showing that symbolic math abilities correlate with approximate number precision and extend those findings by showing that math abilities also correlate with approximate time precision. But, contrary to approximate number and time sharing common representations, we find that each of these dimensions uniquely correlates with formal math: approximate number correlates more strongly with formal math than time does and continues to correlate with math even when precision in time and individual differences in working memory are controlled for. These results suggest that there are important differences in the mental representations of approximate number and approximate time and further clarify the relationship between quantity representations and mathematics. PMID:26587963
Approximate Genealogies Under Genetic Hitchhiking
Pfaffelhuber, P.; Haubold, B.; Wakolbinger, A.
2006-01-01
The rapid fixation of an advantageous allele leads to a reduction in linked neutral variation around the target of selection. The genealogy at a neutral locus in such a selective sweep can be simulated by first generating a random path of the advantageous allele's frequency and then a structured coalescent in this background. Usually the frequency path is approximated by a logistic growth curve. We discuss an alternative method that approximates the genealogy by a random binary splitting tree, a so-called Yule tree that does not require first constructing a frequency path. Compared to the coalescent in a logistic background, this method gives a slightly better approximation for identity by descent during the selective phase and a much better approximation for the number of lineages that stem from the founder of the selective sweep. In applications such as the approximation of the distribution of Tajima's D, the two approximation methods perform equally well. For relevant parameter ranges, the Yule approximation is faster. PMID:17182733
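The random binary splitting (Yule) construction is short to simulate: starting from a single lineage, repeatedly pick a uniformly random extant lineage and split it in two. This is only the tree-topology sketch; no coalescent times or sweep parameters are modeled, and the labeling scheme is an arbitrary choice of mine.

```python
import random

def yule_tree(n_leaves, seed=0):
    """Grow a random binary splitting (Yule) tree topology.

    Start from one lineage; while fewer than n_leaves lineages exist,
    split a uniformly chosen extant lineage in two.
    Returns (children, leaves): children maps each split node to its
    (left, right) child labels; leaves lists the extant lineages.
    """
    rng = random.Random(seed)
    children = {}
    active = [0]        # currently extant lineages
    nxt = 1             # next fresh node label
    while len(active) < n_leaves:
        node = active.pop(rng.randrange(len(active)))
        children[node] = (nxt, nxt + 1)
        active.extend((nxt, nxt + 1))
        nxt += 2
    return children, active
```

A binary tree with n leaves always has n - 1 internal splits, which gives a quick structural check on the output.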
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e., fuzzy logic), pessimistic reasoning (i.e., worst-case analysis), optimistic reasoning (i.e., best-case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away from the conclusion, that is, inferring the negation of the premise from the negation of the conclusion.
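Several of the listed regimes correspond to one-line combination formulas under the standard probability axioms. The sketch below names them descriptively (the function names are mine, not the paper's), with the no-dependency-knowledge case expressed as the Fréchet bounds.

```python
def and_independent(p, q):
    """P(A and B) for statistically independent assertions."""
    return p * q

def or_mutually_exclusive(p, q):
    """P(A or B) for mutually exclusive assertions."""
    return p + q

def and_fuzzy(p, q):
    """Maximum-overlap ('fuzzy logic') conjunction: the min rule."""
    return min(p, q)

def and_bounds(p, q):
    """With no knowledge of dependency, P(A and B) is only bounded
    (Frechet bounds): max(0, p + q - 1) <= P(A and B) <= min(p, q)."""
    return max(0.0, p + q - 1.0), min(p, q)
```

The bounds version makes the "absolutely no knowledge of dependency" case concrete: the pessimistic (worst-case) and optimistic (best-case) answers are the two endpoints.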
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and proved to be very effective approximations of structural response. The exponential fit has been compared to the linear, reciprocal, and quadratic fit methods. Four test problems in structural analysis were selected. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.
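A common way to write a one-point exponential approximation is as a power law matched to the value and slope at the current design point. The recipe below follows that generic form, which is not necessarily the paper's exact formulation, and uses an invented reciprocal-type response for which the power-law fit happens to be exact.

```python
def exp_approx(g0, dg0, x0):
    """One-point exponential (power-law) approximation around x0:
    g(x) ~ g0 * (x / x0)**p with p = dg0 * x0 / g0,
    matching both the value g0 and the slope dg0 at x0.
    """
    p = dg0 * x0 / g0
    return lambda x: g0 * (x / x0) ** p

# Illustrative response: g(x) = 5 / x**2 (a reciprocal-type term, for
# which the power-law form reproduces g exactly everywhere).
g = lambda x: 5.0 / x ** 2
dg = lambda x: -10.0 / x ** 3
approx = exp_approx(g(2.0), dg(2.0), 2.0)
```

For responses that behave like powers of the design variables (common for stresses and displacements in sizing problems), this form stays accurate far from the expansion point, which is why it can beat linear and reciprocal fits.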
Approximate factorization with source terms
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Chyu, W. J.
1991-01-01
A comparative evaluation is made of three methodologies to determine which offers the smallest approximate factorization error. While two of these methods lead to more efficient algorithms in cases where factors that do not contain source terms can be diagonalized, the third method generates the lowest approximate factorization error. This third method may be preferred when the norms of the source terms are large and transient solutions are of interest.
Llinares, Claudio; Mota, David F
2013-04-19
Several extensions of general relativity and high energy physics include scalar fields as extra degrees of freedom. In the search for predictions in the nonlinear regime of cosmological evolution, the community makes use of numerical simulations in which the quasistatic limit is assumed when solving the equation of motion of the scalar field. In this Letter, we propose a method to solve the full equations of motion for scalar degrees of freedom coupled to matter. We run cosmological simulations which track the full time and space evolution of the scalar field, and find striking differences with respect to the commonly used quasistatic approximation. This novel procedure reveals new physical properties of the scalar field and uncovers concealed astrophysical phenomena which were hidden in the old approach. PMID:23679591
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the k-body quantum satisfiability problem (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and that its performance reflects the structure of the solution space of random k-QSAT, while simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions and insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Chiral Magnetic Effect in Hydrodynamic Approximation
NASA Astrophysics Data System (ADS)
Zakharov, Valentin I.
We review derivations of the chiral magnetic effect (ChME) in the hydrodynamic approximation. The reader is assumed to be familiar with the basics of the effect. The main challenge now is to account for the strong interactions between the constituents of the fluid. The main result is that the ChME is not renormalized: in the hydrodynamic approximation it remains the same as for non-interacting chiral fermions moving in an external magnetic field. The key ingredients in the proof are the general laws of thermodynamics and the Adler-Bardeen theorem for the chiral anomaly in external electromagnetic fields. The chiral magnetic effect in hydrodynamics represents a macroscopic manifestation of a quantum phenomenon (the chiral anomaly). Moreover, one can argue that the current induced by the magnetic field is dissipation-free and speak of a kind of "chiral superconductivity". A more precise description is quantum ballistic transport along the magnetic field, taking place in equilibrium and in the absence of a driving force. The basic limitation is the exact chiral limit, while temperature, excitingly enough, does not seem to matter. What is still lacking is a detailed quantum microscopic picture of the ChME in hydrodynamics. Probably, the chiral currents propagate through lower-dimensional defects, like vortices in a superfluid. In the case of a superfluid, the prediction for the chiral magnetic effect remains unmodified, although the emerging dynamical picture differs from the standard one.
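For reference, the free-fermion result that the review argues survives strong-interaction corrections is the standard chiral-magnetic-effect current (natural units, chiral chemical potential mu_5); this is the familiar textbook form, quoted here as an aid to the reader rather than taken from the text:

```latex
\vec{j} \;=\; \frac{e^{2}}{2\pi^{2}}\,\mu_{5}\,\vec{B}
```

The non-renormalization statement is that this coefficient, fixed by the anomaly, is unchanged in the interacting hydrodynamic fluid.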
Investigating Material Approximations in Spacecraft Radiation Analysis
NASA Technical Reports Server (NTRS)
Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.
2011-01-01
During the design process, the configuration of space vehicles and habitats changes frequently, and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the errors associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, are investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering is quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields less than 30 g/cm^2 exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and the average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) is substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.
Approximated solutions to Born-Infeld dynamics
NASA Astrophysics Data System (ADS)
Ferraro, Rafael; Nigro, Mauro
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Planetary ephemerides approximation for radar astronomy
NASA Technical Reports Server (NTRS)
Sadr, R.; Shahshahani, M.
1991-01-01
The planetary ephemerides approximation for radar astronomy is discussed; in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in the Goldstone Solar System Radar is presented. Four different approaches are considered, and it is shown that the Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean-square phase error and the frequency tracking error in the presence of the worst-case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error is less than one hertz when the frequency to the PLO is updated every millisecond.
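The kind of polynomial approximation being compared can be sketched with NumPy's Chebyshev fit. The Doppler-like profile below is synthetic, invented for illustration rather than real Goldstone data; a Gram (discrete orthogonal) polynomial fit, which the paper favors, would follow the same pattern over the discrete sample points.

```python
import numpy as np

# Synthetic stand-in for a smooth Doppler frequency profile over one
# tracking pass, normalized to the interval [-1, 1].
t = np.linspace(-1.0, 1.0, 200)
doppler = 1e4 * np.sin(0.5 * t) + 3e3 * t ** 2   # Hz, illustrative

# Low-degree Chebyshev least-squares approximation of the profile,
# evaluated at the points where the PLO frequency would be updated.
cheb = np.polynomial.Chebyshev.fit(t, doppler, deg=5)
max_err = float(np.max(np.abs(cheb(t) - doppler)))
```

Because the profile is smooth, a degree-5 fit already tracks it to a small fraction of a hertz over the whole pass, which is the regime the sub-hertz tracking requirement lives in.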
Common pediatric epilepsy syndromes.
Park, Jun T; Shahid, Asim M; Jammoul, Adham
2015-02-01
Benign rolandic epilepsy (BRE), childhood idiopathic occipital epilepsy (CIOE), childhood absence epilepsy (CAE), and juvenile myoclonic epilepsy (JME) are some of the common epilepsy syndromes in the pediatric age group. Among the four, BRE is the most commonly encountered. BRE remits by age 16 years, with many children requiring no treatment. Seizures in CAE also remit in approximately 80% of cases, whereas JME is considered a lifelong condition even with the use of antiepileptic drugs (AEDs). Neonates and infants may also present with seizures that are self-limited with no associated psychomotor disturbances. Benign familial neonatal convulsions, caused by a channelopathy and inherited in an autosomal dominant manner, have a favorable outcome with spontaneous resolution. Benign idiopathic neonatal seizures, also referred to as "fifth-day fits," are an example of another epilepsy syndrome in infants that carries a good prognosis. BRE, CIOE, benign familial neonatal convulsions, benign idiopathic neonatal seizures, and benign myoclonic epilepsy in infancy are characterized as "benign" idiopathic age-related epilepsies because they have favorable implications and no structural brain abnormality, are sensitive to AEDs, have a high remission rate, and have no associated psychomotor disturbances. However, selected patients may have associated comorbidities, such as cognitive and language delay, for which the term "benign" may not be appropriate. PMID:25658216
Common Variable Immunodeficiency.
Saikia, Biman; Gupta, Sudhir
2016-04-01
Common variable immunodeficiency (CVID) is the most common primary immunodeficiency of young adolescents and adults, and it also affects children. The disease remains largely under-diagnosed in India and Southeast Asian countries. Although it is sporadic in the majority of cases, the disease may be inherited in an autosomal recessive pattern and, rarely, in an autosomal dominant pattern. Patients, in addition to frequent sino-pulmonary infections, are also susceptible to various autoimmune diseases and malignancy, predominantly lymphoma and leukemia. Other characteristic lesions include lymphocytic and granulomatous interstitial lung disease and nodular lymphoid hyperplasia of the gut. Diagnosis requires reduced levels of at least two immunoglobulin isotypes (IgG with IgA and/or IgM) and an impaired specific antibody response to vaccines. A number of gene mutations have been described in CVID; however, these genetic alterations account for less than 20% of cases. Flow cytometry aptly demonstrates a disturbed B cell homeostasis, with reduced or absent memory B cells and increased CD21(low) B cell and transitional B cell populations. Approximately one-third of patients with CVID also display T cell functional defects. Immunoglobulin therapy remains the mainstay of treatment. Immunologists and other clinicians in India and other Southeast Asian countries need to be aware of CVID so that early diagnosis can be made, as currently the majority of these patients still go undiagnosed. PMID:26868026
Reinhardt, V; Winckler, M; Lebiedz, D
2008-02-28
Many common kinetic model reduction approaches are explicitly based on inherent multiple time scales and often assume and directly exploit a clear time scale separation into fast and slow reaction processes. They approximate the system dynamics with a dimension-reduced model after eliminating the fast modes by enslaving them to the slow ones. The corresponding restrictive assumption of full relaxation of fast modes often renders the resulting approximation of slow attracting manifolds inaccurate as a representation of the reduced model and makes the numerical solution of the nonlinear "reduction equations" particularly difficult in many cases where the gap in intrinsic time scales is not large enough. We demonstrate that trajectory optimization approaches can avoid such severe restrictions by computing numerical solutions that correspond to "maximally relaxed" dynamical modes in a suitable sense. We present a framework of trajectory-based optimization for model reduction in chemical kinetics and a general class of reduction criteria characterizing the relaxation of chemical forces along reaction trajectories. These criteria can be motivated geometrically exploiting ideas from differential geometry and fundamental physics and turn out to be highly successful in example applications. Within this framework, we provide results for the computational approximation of slow attracting low-dimensional manifolds in terms of families of optimal trajectories for a six-component hydrogen combustion mechanism. PMID:18247506
Wavelet Sparse Approximate Inverse Preconditioners
NASA Technical Reports Server (NTRS)
Chan, Tony F.; Tang, W.-P.; Wan, W. L.
1996-01-01
There is an increasing interest in using sparse approximate inverses as preconditioners for Krylov subspace iterative methods. Recent studies of Grote and Huckle and of Chow and Saad also show that sparse approximate inverse preconditioners can be effective for a variety of matrices, e.g. the Harwell-Boeing collections. Nonetheless, a drawback is that they require rapid decay of the inverse entries so that a sparse approximate inverse is possible. However, for the class of matrices that come from elliptic PDE problems, this assumption may not necessarily hold. Our main idea is to look for a basis, other than the standard one, such that a sparse representation of the inverse is feasible. A crucial observation is that the kind of matrices we are interested in typically have a piecewise smooth inverse. We exploit this fact by applying wavelet techniques to construct a better sparse approximate inverse in the wavelet basis. We justify theoretically and numerically that our approach is effective for matrices with smooth inverse. We emphasize that in this paper we have only presented the idea of wavelet approximate inverses and demonstrated its potential but have not yet developed a highly refined and efficient algorithm.
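As a minimal sketch of the standard-basis approach the authors start from, before moving to wavelets: a sparse approximate inverse obtained by minimizing ||AM - I||_F column by column over a prescribed sparsity pattern. The tridiagonal pattern, the small test matrix, and the dense least-squares solver are illustrative assumptions, not the Grote-Huckle or Chow-Saad algorithms.

```python
import numpy as np

def spai_fixed_pattern(A, pattern):
    """Sparse approximate inverse with a fixed sparsity pattern.

    For each column j, solve the small least-squares problem
    min ||A[:, Jj] m - e_j||_2 over the allowed nonzero rows Jj,
    i.e. minimize ||A M - I||_F one column at a time.
    """
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    for j in range(n):
        Jj = pattern[j]                      # allowed nonzero rows of column j
        ej = np.zeros(n)
        ej[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, Jj], ej, rcond=None)
        M[Jj, j] = m
    return M

# Small test matrix (1D Laplacian) with a tridiagonal pattern for M.
n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
pattern = [[i for i in (j - 1, j, j + 1) if 0 <= i < n] for j in range(n)]
M = spai_fixed_pattern(A, pattern)
```

The true inverse of this Laplacian is dense, so the tridiagonal M cannot make the residual vanish; that slow decay of inverse entries is exactly the failure mode for elliptic problems that motivates the wavelet basis.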
Approximate entropy of network parameters.
West, James; Lacasa, Lucas; Severini, Simone; Teschendorff, Andrew
2012-04-01
We study the notion of approximate entropy within the framework of network theory. Approximate entropy is an uncertainty measure originally proposed in the context of dynamical systems and time series. We first define a purely structural entropy obtained by computing the approximate entropy of the so-called slide sequence. This is a surrogate of the degree sequence and it is suggested by the frequency partition of a graph. We examine this quantity for standard scale-free and Erdös-Rényi networks. By using classical results of Pincus, we show that our entropy measure often converges with network size to a certain binary Shannon entropy. As a second step, with specific attention to networks generated by dynamical processes, we investigate approximate entropy of horizontal visibility graphs. Visibility graphs allow us to naturally associate with a network the notion of temporal correlations, therefore providing the measure a dynamical garment. We show that approximate entropy distinguishes visibility graphs generated by processes with different complexity. The result probes to a greater extent these networks for the study of dynamical systems. Applications to certain biological data arising in cancer genomics are finally considered in the light of both approaches. PMID:22680542
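The quantity itself is compact enough to state in code. Below is a minimal pure-Python version of Pincus's ApEn(m, r) with the usual Chebyshev (max-coordinate) distance and self-matches included; the parameter defaults are conventional choices, not taken from the paper.

```python
import math

def approx_entropy(series, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a sequence (Pincus):
    Phi_m - Phi_{m+1}, where Phi_k is the average log-frequency of
    k-length template matches within tolerance r (max-coordinate
    distance), self-matches included.
    """
    def phi(mm):
        n = len(series) - mm + 1
        templates = [series[i:i + mm] for i in range(n)]
        total = 0.0
        for t1 in templates:
            matches = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            total += math.log(matches / n)
        return total / n

    return phi(m) - phi(m + 1)
```

A perfectly regular sequence gives ApEn of zero, and more irregular sequences give larger values, which is the property the slide-sequence and visibility-graph analyses exploit.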
Relativistic regular approximations revisited: An infinite-order relativistic approximation
Dyall, K.G.; van Lenthe, E.
1999-07-01
The concept of the regular approximation is presented as the neglect of the energy dependence of the exact Foldy-Wouthuysen transformation of the Dirac Hamiltonian. Expansion of the normalization terms leads immediately to the zeroth-order regular approximation (ZORA) and first-order regular approximation (FORA) Hamiltonians as the zeroth- and first-order terms of the expansion. The expansion may be taken to infinite order by using an un-normalized Foldy-Wouthuysen transformation, which results in the ZORA Hamiltonian and a nonunit metric. This infinite-order regular approximation, IORA, has eigenvalues which differ from the Dirac eigenvalues by order E^3/c^4 for a hydrogen-like system, which is a considerable improvement over the ZORA eigenvalues, and similar to the nonvariational FORA energies. A further perturbation analysis yields a third-order correction to the IORA energies, TIORA. Results are presented for several systems including the neutral U atom. The IORA eigenvalues for all but the 1s spinor of the neutral system are superior even to the scaled ZORA energies, which are exact for the hydrogenic system. The third-order correction reduces the IORA error for the inner orbitals to a very small fraction of the Dirac eigenvalue.
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming based method for finding "gadgets", i.e., combinatorial structures reducing constraints of one optimization problem to constraints of another. A key step in this method is a simple observation which limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. The method also answers a previously posed question of how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45, respectively, is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of .801. This improves upon the previous best bound of .7704.
Heat pipe transient response approximation
NASA Astrophysics Data System (ADS)
Reid, Robert S.
2002-01-01
A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.
Does the rapid appearance of life on Earth suggest that life is common in the universe?
Lineweaver, Charles H; Davis, Tamara M
2002-01-01
It is sometimes assumed that the rapidity of biogenesis on Earth suggests that life is common in the Universe. Here we critically examine the assumptions inherent in this if-life-evolved-rapidly-life-must-be-common argument. We use the observational constraints on the rapidity of biogenesis on Earth to infer the probability of biogenesis on terrestrial planets with the same unknown probability of biogenesis as the Earth. We find that on such planets, older than approximately 1 Gyr, the probability of biogenesis is > 13% at the 95% confidence level. This quantifies an important term in the Drake Equation but does not necessarily mean that life is common in the Universe. PMID:12530239
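As an illustrative back-of-the-envelope analog (this is a toy calculation, not the authors' method, which handles the observation-selection effects more carefully): treating biogenesis on a suitable planet as a single Bernoulli trial with unknown probability p and a uniform prior, one observed success yields a Beta(2, 1) posterior, whose CDF is p², giving a naive 95% lower credible bound of about 0.22; the paper's >13% bound is weaker precisely because of the extra assumptions it refuses to make.

```python
import math

# Toy Bayesian analog (hypothetical, not the paper's calculation):
# one Bernoulli "success" (life arose on Earth) with a uniform prior on p
# gives a Beta(2, 1) posterior, whose CDF is F(p) = p**2.
def lower_credible_bound(conf: float = 0.95) -> float:
    """Smallest p0 with P(p > p0) = conf under the Beta(2, 1) posterior."""
    return math.sqrt(1.0 - conf)   # solve p0**2 = 1 - conf

print(round(lower_credible_bound(), 3))  # → 0.224
```

The gap between 0.22 and the paper's 13% shows how much the careful treatment of selection effects costs in inferential strength.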
Erickson, K.L.; Chu, M.S.Y.; Siegel, M.D.; Beyeler, W.
1986-12-31
Three approximate methods appear useful for calculating radionuclide discharges in fractured, porous rock: (1) a semi-infinite-medium approximation where radionuclide diffusion rates into the matrix are calculated assuming a semi-infinite matrix; (2) a linear-driving-force approximation where radionuclide diffusion rates into the matrix are assumed to be proportional to the difference between bulk concentrations in the fracture fluid and in the matrix pore water; and (3) an equivalent-porous-medium approximation where radionuclide diffusion rates into the matrix are calculated assuming that the time rate of change of the bulk radionuclide concentration in the matrix is proportional to the time rate of change of the radionuclide concentration in the fracture fluid. A preliminary evaluation of these approximations was made by considering transport of a single radionuclide in saturated, porous rock containing uniform, parallel fractures.
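A minimal sketch of approximation (2), the linear-driving-force model, under assumed first-order kinetics (the rate constant k, concentrations, and step size below are hypothetical illustration values, not from the report):

```python
# Linear-driving-force approximation (sketch): the matrix uptake rate is taken
# proportional to the difference between the fracture-fluid concentration c_f
# and the matrix pore-water concentration c_m:  dc_m/dt = k * (c_f - c_m)
def step_matrix_conc(c_m: float, c_f: float, k: float, dt: float) -> float:
    """One explicit-Euler step of dc_m/dt = k * (c_f - c_m)."""
    return c_m + k * (c_f - c_m) * dt

c_m, c_f, k, dt = 0.0, 1.0, 0.5, 0.01
for _ in range(1000):            # integrate 10 time units
    c_m = step_matrix_conc(c_m, c_f, k, dt)
print(round(c_m, 3))             # → 0.993; c_m relaxes exponentially toward c_f
```

The exponential relaxation toward the fracture-fluid concentration is what distinguishes this approximation from the diffusion-based semi-infinite-medium model (1).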
An approximate solution for interlaminar stresses in composite laminates
NASA Technical Reports Server (NTRS)
Rose, Cheryl A.; Herakovich, Carl T.
1993-01-01
An efficient approximate solution for interlaminar stresses in finite width, symmetric and unsymmetric laminated composites subjected to axial and/or bending loads is presented. The solution is based upon statically admissible stress fields which take into consideration local property mismatch effects and global equilibrium requirements. Unknown constants in the assumed stress states are determined through minimization of the laminate complementary energy. Typical results are presented for through-thickness and interlaminar stress distributions for angle-ply and cross-ply laminates subjected to axial loading. It is shown that the present formulation represents an improved, efficient approximate solution for interlaminar stresses.
No Common Opinion on the Common Core
ERIC Educational Resources Information Center
Henderson, Michael B.; Peterson, Paul E.; West, Martin R.
2015-01-01
According to the three authors of this article, the 2014 "EdNext" poll yields four especially important new findings: (1) Opinion with respect to the Common Core has yet to coalesce. The idea of a common set of standards across the country has wide appeal, and the Common Core itself still commands the support of a majority of the public.…
ERIC Educational Resources Information Center
Kennedy, Nadia Stoyanova
2012-01-01
Students are often encouraged to work on problems "like mathematicians"--to be persistent, to investigate different approaches, and to evaluate solutions. This behavior, regarded as problem solving, is an essential component of mathematical practice. Some crucial aspects of problem solving include defining and interpreting problems, working with…
Examining the exobase approximation: DSMC models of Titan's upper atmosphere
NASA Astrophysics Data System (ADS)
Tucker, O. J.; Waalkes, W.; Tenishev, V.; Johnson, R. E.; Bieler, A. M.; Nagy, A. F.
2015-12-01
Chamberlain (1963) developed the so-called exobase approximation for planetary atmospheres, below which molecular collisions are assumed to maintain thermal equilibrium and above which collisions are negligible. Here we examine the exobase approximation as applied in the DeLaHaye et al. (2007) study, which extracted energy deposition and non-thermal escape rates from Titan's atmosphere using INMS data for the TA and T5 Cassini encounters. In that study a Liouville-theorem-based approach was used to fit the density data for N2 and CH4, assuming an enhanced population of suprathermal molecules (E >> kT) was present at the exobase. The density data were fit in the altitude region of 1450-2000 km using a kappa energy distribution to characterize the non-thermal component. Here we again fit the data using the conventional kappa energy distribution function, and then use the Direct Simulation Monte Carlo (DSMC) technique (Bird 1994) to determine the effect of molecular collisions. The resulting fits improve on those of DeLaHaye et al. (2007). In addition, the collisional and collisionless DSMC results are compared to evaluate the validity of the assumed energy distribution function and of the collisionless approximation. We find that differences between fitting procedures applied to the INMS data within a scale height of the assumed exobase can result in the extraction of very different energy deposition and escape rates. DSMC simulations performed with and without collisions to test the Liouville-theorem-based approximation show that collisions affect the density and temperature profiles well above the exobase, as well as the escape rate. This research was supported by grant NNH12ZDA001N from the NASA ROSES OPR program. The computations were made with NAS computer resources at NASA Ames under GID 26135.
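One common form of the kappa energy distribution used for such fits is sketched below (the parameter values are illustrative, not the paper's fitted values, and normalization is omitted):

```python
import math

def kappa_distribution(E: float, kT: float, kappa: float) -> float:
    """Un-normalized kappa energy distribution, one common convention:
    f(E) ∝ (1 + E/(kappa*kT))**-(kappa+1).
    It approaches the Maxwell-Boltzmann factor exp(-E/kT) as kappa → ∞
    and has an enhanced suprathermal (E >> kT) tail for small kappa."""
    return (1.0 + E / (kappa * kT)) ** (-(kappa + 1.0))

kT, E = 1.0, 10.0                         # a suprathermal energy, E >> kT
maxwellian = math.exp(-E / kT)
print(kappa_distribution(E, kT, 3.0) > maxwellian)  # → True: heavier tail
```

The heavier high-energy tail is exactly the "enhanced population of suprathermal molecules" the fits postulate at the exobase.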
Chemical Laws, Idealization and Approximation
NASA Astrophysics Data System (ADS)
Tobin, Emma
2013-07-01
This paper examines the notion of laws in chemistry. Vihalemm (Found Chem 5(1):7-22, 2003) argues that the laws of chemistry are fundamentally the same as the laws of physics: they are all ceteris paribus laws which are true "in ideal conditions". In contrast, Scerri (2000) contends that the laws of chemistry are fundamentally different to the laws of physics, because they involve approximations. Christie (Stud Hist Philos Sci 25:613-629, 1994) and Christie and Christie (Of minds and molecules. Oxford University Press, New York, pp. 34-50, 2000) agree that the laws of chemistry are operationally different to the laws of physics, but claim that the distinction between exact and approximate laws is too simplistic to taxonomise them. Approximations in chemistry involve diverse kinds of activity, and often what counts as a scientific law in chemistry is dictated by the context of its use in scientific practice. This paper addresses the question of what makes chemical laws distinctive independently of the separate question as to how they are related to the laws of physics. From an analysis of some candidate ceteris paribus laws in chemistry, this paper argues that there are two distinct kinds of ceteris paribus laws in chemistry: idealized and approximate chemical laws. Thus, while Christie (1994) and Christie and Christie (2000) are correct to point out that the candidate generalisations in chemistry are diverse and heterogeneous, a distinction between idealizations and approximations can nevertheless be used to successfully taxonomise them.
Code of Federal Regulations, 2011 CFR
2011-01-01
... CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving all claims against United States. When the cattle are to be dipped under APHIS supervision the owner of the cattle, offered for shipment, or his agent duly authorized thereto, shall first execute...
ERIC Educational Resources Information Center
Nielsen, Annemette; Michaelsen, Kim F.; Holm, Lotte
2014-01-01
Researchers question the implications of the way in which "motherhood" is constructed in public health discourse. Current nutritional guidelines for Danish parents of young children are part of this discourse. They are shaped by an assumed symbiotic relationship between the nutritional needs of the child and the interest and focus of the…
Code of Federal Regulations, 2013 CFR
2013-04-01
... new TERA covering the authority for the development of another energy resource it wishes to assume... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS TRIBAL ENERGY RESOURCE AGREEMENTS UNDER THE INDIAN TRIBAL...
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false How may a tribe assume management of development of different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS TRIBAL ENERGY RESOURCE AGREEMENTS UNDER THE INDIAN TRIBAL ENERGY DEVELOPMENT AND SELF DETERMINATION ACT...
Code of Federal Regulations, 2011 CFR
2011-04-01
... new TERA covering the authority for the development of another energy resource it wishes to assume... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS TRIBAL ENERGY RESOURCE AGREEMENTS UNDER THE INDIAN TRIBAL...
Code of Federal Regulations, 2010 CFR
2010-04-01
... new TERA covering the authority for the development of another energy resource it wishes to assume... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS TRIBAL ENERGY RESOURCE AGREEMENTS UNDER THE INDIAN TRIBAL...
Code of Federal Regulations, 2012 CFR
2012-04-01
... new TERA covering the authority for the development of another energy resource it wishes to assume... different types of energy resources? 224.64 Section 224.64 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR ENERGY AND MINERALS TRIBAL ENERGY RESOURCE AGREEMENTS UNDER THE INDIAN TRIBAL...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Federal environmental responsibilities assumed by the Self-Governance Tribe. ... 42 Public Health 1 2010-10-01 2010-10-01 false Since Federal environmental responsibilities are... additional funds available to Self-Governance Tribes to carry out these formerly inherently...
Code of Federal Regulations, 2012 CFR
2012-01-01
... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a)...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a)...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Annual Loan Cost Rates L Appendix L to Part 226 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM TRUTH IN LENDING (REGULATION Z) Pt. 226, App. L Appendix L to Part 226—Assumed Loan Periods for Computations of Total Annual Loan Cost Rates (a)...
42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.
Code of Federal Regulations, 2012 CFR
2012-10-01
... costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) VOLUNTARY MEDICARE PRESCRIPTION DRUG BENEFIT Special Rules for States-Eligibility Determinations for Subsidies...
42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.
Code of Federal Regulations, 2013 CFR
2013-10-01
... costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) VOLUNTARY MEDICARE PRESCRIPTION DRUG BENEFIT Special Rules for States-Eligibility Determinations for Subsidies...
42 CFR 423.908. - Phased-down State contribution to drug benefit costs assumed by Medicare.
Code of Federal Regulations, 2014 CFR
2014-10-01
... costs assumed by Medicare. 423.908. Section 423.908. Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM (CONTINUED) VOLUNTARY MEDICARE PRESCRIPTION DRUG BENEFIT Special Rules for States-Eligibility Determinations for Subsidies...
Code of Federal Regulations, 2013 CFR
2013-01-01
... INTERSTATE TRANSPORTATION OF ANIMALS (INCLUDING POULTRY) AND ANIMAL PRODUCTS TEXAS (SPLENETIC) FEVER IN CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving... to their interstate shipment, or resulting from the fact that they are later found to be still...
Code of Federal Regulations, 2010 CFR
2010-01-01
... INTERSTATE TRANSPORTATION OF ANIMALS (INCLUDING POULTRY) AND ANIMAL PRODUCTS TEXAS (SPLENETIC) FEVER IN CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving... to their interstate shipment, or resulting from the fact that they are later found to be still...
Code of Federal Regulations, 2012 CFR
2012-01-01
... INTERSTATE TRANSPORTATION OF ANIMALS (INCLUDING POULTRY) AND ANIMAL PRODUCTS TEXAS (SPLENETIC) FEVER IN CATTLE § 72.15 Owners assume responsibility; must execute agreement prior to dipping or treatment waiving... to their interstate shipment, or resulting from the fact that they are later found to be still...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false May Self-Governance Tribes carry out construction... OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.291 May Self-Governance Tribes carry out construction projects without assuming these Federal...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 1 2012-10-01 2012-10-01 false May Self-Governance Tribes carry out construction... OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.291 May Self-Governance Tribes carry out construction projects without assuming these Federal...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects...
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 1 2013-10-01 2013-10-01 false May Self-Governance Tribes carry out construction... OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.291 May Self-Governance Tribes carry out construction projects without assuming these Federal...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false Do Self-Governance Tribes become Federal agencies... HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.286 Do Self-Governance... Self-Governance Tribes are required to assume Federal environmental responsibilities for projects...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 1 2011-10-01 2011-10-01 false May Self-Governance Tribes carry out construction... OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.291 May Self-Governance Tribes carry out construction projects without assuming these Federal...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 1 2014-10-01 2014-10-01 false May Self-Governance Tribes carry out construction... OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Construction Nepa Process § 137.291 May Self-Governance Tribes carry out construction projects without assuming these Federal...
Testing the frozen flow approximation
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1993-01-01
We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and N-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cells distribution at small scales, but it does poorly in the cross-correlation with N-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.
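In the FFA, particles are advected along the velocity field frozen at its initial, linear-theory value; a schematic 1-D update rule (the toy velocity field and step size are hypothetical) is:

```python
import math

def v0(x: float) -> float:
    """Frozen (initial, linear-theory) peculiar velocity field; a toy 1-D example."""
    return math.sin(x)

def frozen_flow_step(x: float, dt: float) -> float:
    """Advance a particle along the *initial* velocity field evaluated at its
    current position -- the defining rule of the frozen-flow approximation
    (in contrast to Zel'dovich, where each particle keeps its own initial velocity)."""
    return x + v0(x) * dt

x = 1.0
for _ in range(100):        # integrate one time unit in steps of 0.01
    x = frozen_flow_step(x, 0.01)
print(round(x, 3))
```

Because the velocity field is never updated by the evolving density, trajectories eventually stop tracking the true mass flow, which is consistent with the poor cross-correlation reported above.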
EPR Correlations, Bell Inequalities and Common Cause Systems
NASA Astrophysics Data System (ADS)
Hofer-Szabó, Gábor
2014-03-01
Standard common causal explanations of the EPR situation assume a so-called joint common cause system that is a common cause for all correlations. However, the assumption of a joint common cause system together with some other physically motivated assumptions concerning locality and no-conspiracy results in various Bell inequalities. Since Bell inequalities are violated for appropriate measurement settings, a local, non-conspiratorial joint common causal explanation of the EPR situation is ruled out. But why do we assume that a common causal explanation of a set of correlations consists in finding a joint common cause system for all correlations and not just in finding separate common cause systems for the different correlations? What are the perspectives of a local, non-conspiratorial separate common causal explanation for the EPR scenario? And finally, how do Bell inequalities relate to the weaker assumption of separate common cause systems?
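As a concrete check of the Bell-inequality violation invoked above (the angles are the standard optimal CHSH settings, assumed here rather than taken from the abstract): for the singlet state the measured correlation is E(a, b) = -cos(a - b), and the CHSH combination reaches 2√2, above the local bound of 2.

```python
import math

def E(a: float, b: float) -> float:
    """Singlet-state quantum correlation for spin measurements along angles a, b."""
    return -math.cos(a - b)

# Standard optimal settings: a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(round(S, 3))  # → 2.828, i.e. 2*sqrt(2); any local model obeys S <= 2
```

It is this quantitative gap that rules out the local, non-conspiratorial joint common causal explanation discussed in the abstract.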
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
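A minimal sketch of the basic Latin hypercube construction underlying two of the sampling methods mentioned (pure Python, without the maximin or orthogonal-array refinements; function and variable names are our own):

```python
import random

def latin_hypercube(n_samples: int, n_dims: int, rng: random.Random) -> list[list[float]]:
    """Basic Latin hypercube sample in [0, 1)^n_dims: each dimension is split
    into n_samples equal strata, and each stratum is used exactly once."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)              # random pairing of strata to samples
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples  # point within stratum
    return samples

pts = latin_hypercube(5, 2, random.Random(0))
# Stratification check: each dimension has exactly one point per fifth of [0, 1).
for d in range(2):
    assert sorted(int(p[d] * 5) for p in pts) == [0, 1, 2, 3, 4]
```

The one-point-per-stratum guarantee is what gives Latin hypercubes better space-filling behavior than plain Monte Carlo at the same sample count.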
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSSs) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSSs have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in a TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999 Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved affirmative for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem; besides the graphical degree sequence a small set of forbidden edges is also given. We show that for the general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations. PMID:26161994
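Chains of this kind walk over realizations via degree-preserving double edge swaps; a minimal sketch of one such swap move (the graph representation and names are our own, and this omits the paper's forbidden-edge handling) is:

```python
import random

def double_edge_swap(edges: set[frozenset[int]], rng: random.Random) -> None:
    """One step of a degree-preserving swap chain: pick edges {a,b}, {c,d}
    and replace them with {a,c}, {b,d} unless a loop or multi-edge would
    result. Every move keeps the degree sequence invariant."""
    e1, e2 = rng.sample(sorted(map(tuple, edges)), 2)
    a, b = e1
    c, d = e2
    new1, new2 = frozenset((a, c)), frozenset((b, d))
    if len(new1) == 2 and len(new2) == 2 and new1 not in edges and new2 not in edges:
        edges -= {frozenset(e1), frozenset(e2)}
        edges |= {new1, new2}

rng = random.Random(1)
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}  # a 4-cycle
degrees_before = sorted(sum(v in e for e in edges) for v in range(4))
for _ in range(100):
    double_edge_swap(edges, rng)
degrees_after = sorted(sum(v in e for e in edges) for v in range(4))
print(degrees_before == degrees_after)  # → True: the degree sequence is invariant
```

Rapid mixing of such a chain is exactly what the conjecture and the proofs cited above concern; the counting result then follows via self-reducibility.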
Working Memory in Nonsymbolic Approximate Arithmetic Processing: A Dual-Task Study with Preschoolers
ERIC Educational Resources Information Center
Xenidou-Dervou, Iro; van Lieshout, Ernest C. D. M.; van der Schoot, Menno
2014-01-01
Preschool children have been proven to possess nonsymbolic approximate arithmetic skills before learning how to manipulate symbolic math and thus before any formal math instruction. It has been assumed that nonsymbolic approximate math tasks necessitate the allocation of Working Memory (WM) resources. WM has been consistently shown to be an…
Dynamics of false vacuum bubbles: beyond the thin shell approximation
NASA Astrophysics Data System (ADS)
Hansen, Jakob; Hwang, Dong-il; Yeom, Dong-han
2009-11-01
We numerically study the dynamics of false vacuum bubbles inside an almost flat background; we assume spherical symmetry and a bubble smaller than the background horizon. According to the thin shell approximation and the null energy condition, if the bubble is outside of a Schwarzschild black hole, unless we assume Farhi-Guth-Guven tunneling, expanding and inflating solutions are impossible. In this paper, we extend our method beyond the thin shell approximation: we include the dynamics of fields and assume that the transition layer between a true vacuum and a false vacuum has non-zero thickness. If a shell has sufficiently low energy, as expected from the thin shell approximation, it collapses (Type 1). However, if the shell has sufficiently large energy, it tends to expand. Here, via the field dynamics, field values inside the shell slowly roll down to the true vacuum, and hence the shell does not inflate (Type 2). If we add sufficient exotic matter to regularize the curvature near the shell, inflation may be possible without assuming Farhi-Guth-Guven tunneling. In this case, a wormhole is dynamically generated around the shell (Type 3). By tuning our simulation parameters, we could find transitions between Type 1 and Type 2, as well as between Type 2 and Type 3. Between Type 2 and Type 3, we could find another class of solutions (Type 4). Finally, we discuss the generation of a bubble universe and the violation of unitarity. We conclude that the existence of a certain combination of exotic matter fields violates unitarity.
Migraine and Common Morbidities
For many patients, migraine is ...
NASA Technical Reports Server (NTRS)
Hark, Frank; Britton, Paul; Ring, Robert; Novack, Steven
2015-01-01
Space Launch System (SLS) Agenda: Objective; Key Definitions; Calculating Common Cause; Examples; Defense against Common Cause; Impact of varied Common Cause Failure (CCF) and abortability; Response Surface for various CCF Beta; Takeaways.
Improved non-approximability results
Bellare, M.; Sudan, M.
1994-12-31
We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.
Quantum tunneling beyond semiclassical approximation
NASA Astrophysics Data System (ADS)
Banerjee, Rabin; Ranjan Majhi, Bibhas
2008-06-01
Hawking radiation as tunneling by Hamilton-Jacobi method beyond semiclassical approximation is analysed. We compute all quantum corrections in the single particle action revealing that these are proportional to the usual semiclassical contribution. We show that a simple choice of the proportionality constants reproduces the one loop back reaction effect in the spacetime, found by conformal field theory methods, which modifies the Hawking temperature of the black hole. Using the law of black hole mechanics we give the corrections to the Bekenstein-Hawking area law following from the modified Hawking temperature. Some examples are explicitly worked out.
Fermion tunneling beyond semiclassical approximation
NASA Astrophysics Data System (ADS)
Majhi, Bibhas Ranjan
2009-02-01
Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.
Generalized Gradient Approximation Made Simple
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-10-01
Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
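For reference (standard PBE expressions, assumed here rather than quoted from the abstract), the exchange part of the resulting GGA is controlled by a single enhancement factor over the LSD exchange energy density:

```latex
E_x^{\mathrm{PBE}} = \int n\,\epsilon_x^{\mathrm{unif}}(n)\,F_x(s)\,d^{3}r,
\qquad
F_x(s) = 1 + \kappa - \frac{\kappa}{1 + \mu s^{2}/\kappa},
\quad \kappa = 0.804,\; \mu \approx 0.21951,
```

where s is the reduced density gradient; κ and μ are fixed by exact constraints (a local Lieb-Oxford-type bound and the linear response of the uniform electron gas mentioned in the abstract), not by empirical fitting.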
The structural physical approximation conjecture
NASA Astrophysics Data System (ADS)
Shultz, Fred
2016-01-01
It was conjectured that the structural physical approximation (SPA) of an optimal entanglement witness is separable (or equivalently, that the SPA of an optimal positive map is entanglement breaking). This conjecture was disproved, first for indecomposable maps and more recently for decomposable maps. The arguments in both cases are sketched along with important related results. This review includes background material on topics including entanglement witnesses, optimality, duality of cones, decomposability, and the statement and motivation for the SPA conjecture so that it should be accessible for a broad audience.
Approximate Joint Diagonalization and Geometric Mean of Symmetric Positive Definite Matrices
Congedo, Marco; Afsari, Bijan; Barachant, Alexandre; Moakher, Maher
2015-01-01
We explore the connection between two problems that have arisen independently in signal processing and related fields: the estimation of the geometric mean of a set of symmetric positive definite (SPD) matrices and their approximate joint diagonalization (AJD). Today there is considerable interest in estimating the geometric mean of an SPD matrix set in the manifold of SPD matrices endowed with the Fisher information metric. The resulting mean has several important invariance properties and has proven very useful in diverse engineering applications such as biomedical and image data processing. While for two SPD matrices the mean has an algebraic closed-form solution, for a set of more than two SPD matrices it can only be estimated by iterative algorithms. However, none of the existing iterative algorithms features fast convergence, low computational complexity per iteration, and guaranteed convergence at the same time. For this reason, other definitions of the geometric mean based on symmetric divergence measures, such as the Bhattacharyya divergence, have recently been considered. The resulting means, although possibly useful in practice, do not satisfy all desirable invariance properties. In this paper we consider geometric means of covariance matrices estimated on high-dimensional time-series, assuming that the data is generated according to an instantaneous mixing model, which is very common in signal processing. We show that in these circumstances we can approximate the Fisher information geometric mean by employing an efficient AJD algorithm. Our approximation is in general much closer to the Fisher information geometric mean than its competitors and satisfies many invariance properties. Furthermore, convergence is guaranteed, the computational complexity is low and the convergence rate is quadratic. The accuracy of this new geometric mean approximation is demonstrated by means of simulations. PMID:25919667
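For exactly two SPD matrices the closed-form solution mentioned above can be written out directly. A NumPy sketch (our own illustration, not the authors' AJD-based algorithm):

```python
import numpy as np

def spd_sqrt(M):
    """Matrix square root of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(w)) @ V.T

def geometric_mean(A, B):
    """Closed-form Fisher-metric geometric mean of two SPD matrices:
    A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    As = spd_sqrt(A)
    Ais = np.linalg.inv(As)
    return As @ spd_sqrt(Ais @ B @ Ais) @ As

A = np.array([[2.0, 0.5], [0.5, 1.0]])
B = np.array([[1.0, 0.2], [0.2, 3.0]])
G = geometric_mean(A, B)
# One invariance check: det(A # B)^2 = det(A) * det(B).
print(np.linalg.det(G) ** 2, np.linalg.det(A) * np.linalg.det(B))
```

For more than two matrices no such closed form exists, which is where the iterative algorithms and the AJD approximation discussed in the abstract come in.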
NASA Technical Reports Server (NTRS)
Billingham, John; Tarter, Jill
1989-01-01
The maximum range is calculated at which radar signals from the earth could be detected by a search system similar to the NASA SETI Microwave Observing Project (SETI MOP) assumed to be operating out in the Galaxy. Figures are calculated for the Targeted Search and for the Sky Survey parts of the MOP, both planned to be operating in the 1990s. The probability of detection is calculated for the two most powerful transmitters, the planetary radar at Arecibo (Puerto Rico) and the ballistic missile early warning systems (BMEWSs), assuming that the terrestrial radars are only in the eavesdropping mode. It was found that, for the case of a single transmitter within the maximum range, the highest probability is for the sky survey detecting BMEWSs; this is directly proportional to BMEWS sky coverage and is therefore 0.25.
Stadler, Tanja; Vaughan, Timothy G.; Gavryushkin, Alex; Guindon, Stephane; Kühnert, Denise; Leventhal, Gabriel E.; Drummond, Alexei J.
2015-01-01
One of the central objectives in the field of phylodynamics is the quantification of population dynamic processes using genetic sequence data or in some cases phenotypic data. Phylodynamics has been successfully applied to many different processes, such as the spread of infectious diseases, within-host evolution of a pathogen, macroevolution and even language evolution. Phylodynamic analysis requires a probability distribution on phylogenetic trees spanned by the genetic data. Because such a probability distribution is not available for many common stochastic population dynamic processes, coalescent-based approximations assuming deterministic population size changes are widely employed. Key to many population dynamic models, in particular epidemiological models, is a period of exponential population growth during the initial phase. Here, we show that the coalescent does not well approximate stochastic exponential population growth, which is typically modelled by a birth–death process. We demonstrate that introducing demographic stochasticity into the population size function of the coalescent improves the approximation for values of R0 close to 1, but substantial differences remain for large R0. In addition, the computational advantage of using an approximation over exact models vanishes when introducing such demographic stochasticity. These results highlight that we need to increase efforts to develop phylodynamic tools that correctly account for the stochasticity of population dynamic models for inference. PMID:25876846
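The stochastic birth-death growth that the authors contrast with the deterministic coalescent is straightforward to simulate exactly. A minimal Gillespie sketch (our own illustration; the rates are arbitrary, with R0 = birth/death = 2):

```python
import random
import math

def birth_death_size(birth, death, t_end, n0=1, rng=random):
    """Gillespie simulation of a linear birth-death process, the
    stochastic counterpart of deterministic exponential growth
    n(t) = n0 * exp((birth - death) * t)."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate((birth + death) * n)
        if t > t_end:
            break  # no further event occurs before t_end
        n += 1 if rng.random() < birth / (birth + death) else -1
    return n

random.seed(1)
runs = [birth_death_size(2.0, 1.0, 3.0) for _ in range(2000)]
mean = sum(runs) / len(runs)
det = math.exp((2.0 - 1.0) * 3.0)  # deterministic prediction, ~20.1
# The ensemble mean matches exp(r*t), but individual runs vary widely
# and many go extinct, behaviour a deterministic population size function
# cannot capture.
print(mean, det, runs.count(0))
```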
NASA Technical Reports Server (NTRS)
Billingham, J.; Tarter, J.
1992-01-01
This paper estimates the maximum range at which radar signals from the Earth could be detected by a search system similar to the NASA Search for Extraterrestrial Intelligence Microwave Observing Project (SETI MOP) assumed to be operating out in the galaxy. Figures are calculated for the Targeted Search, and for the Sky Survey parts of the MOP, both operating, as currently planned, in the second half of the decade of the 1990s. Only the most powerful terrestrial transmitters are considered, namely, the planetary radar at Arecibo in Puerto Rico, and the ballistic missile early warning systems (BMEWS). In each case the probabilities of detection over the life of the MOP are also calculated. The calculation assumes that we are only in the eavesdropping mode. Transmissions intended to be detected by SETI systems are likely to be much stronger and would of course be found with higher probability to a greater range. Also, it is assumed that the transmitting civilization is at the same level of technological evolution as ours on Earth. This is very improbable. If we were to detect another technological civilization, it would, on statistical grounds, be much older than we are and might well have much more powerful transmitters. Both factors would make detection by the NASA MOP a much more likely outcome.
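The underlying range estimate is an inversion of the inverse-square law: a transmitter of effective isotropic radiated power (EIRP) is detectable out to the distance at which its flux, collected by the receiver's effective area, drops to the receiver's minimum detectable power. A sketch with purely illustrative numbers (not the actual MOP, Arecibo, or BMEWS parameters):

```python
import math

def max_detection_range_m(eirp_w, eff_area_m2, p_min_w):
    """Range at which received power EIRP * A_e / (4 pi R^2)
    equals the detection threshold p_min_w; solve for R."""
    return math.sqrt(eirp_w * eff_area_m2 / (4.0 * math.pi * p_min_w))

LY = 9.461e15  # metres per light year

# Illustrative inputs only: ~1e13 W EIRP for a planetary radar,
# a 2500 m^2 effective collecting area, and an assumed 1e-27 W
# detection threshold in the search channel.
r = max_detection_range_m(1e13, 2500.0, 1e-27)
print(r / LY, "light years")
```

Note the square-root scaling: quadrupling the EIRP only doubles the maximum range, which is why intentional beacons would extend the range far more than eavesdropping on leakage.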
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme for the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
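The compression idea, transform, keep a small fraction of the largest coefficients, invert, can be sketched with an orthonormal Haar wavelet (our own toy illustration, not the assimilation scheme itself):

```python
import numpy as np

def haar_fwd(x):
    """Multi-level orthonormal Haar transform of a length-2^k signal."""
    x = np.asarray(x, dtype=float).copy()
    n = len(x)
    details = []
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)   # coarse averages
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)   # detail coefficients
        details.append(d)
        x[: n // 2] = a
        n //= 2
    return x[:1].copy(), details[::-1]           # coarsest level first

def haar_inv(approx, details):
    x = approx.copy()
    for d in details:
        up = np.empty(2 * len(x))
        up[0::2] = (x + d) / np.sqrt(2)
        up[1::2] = (x - d) / np.sqrt(2)
        x = up
    return x

# A localized bump plus a smooth wave, standing in for a correlation
# field with both local and global structure; keep the largest 5%.
t = np.linspace(0.0, 1.0, 256)
field = np.exp(-((t - 0.3) ** 2) / 0.002) + 0.1 * np.sin(8 * np.pi * t)
approx, details = haar_fwd(field)
coeffs = np.concatenate([approx] + details)
keep = int(0.05 * len(coeffs))
thresh = np.sort(np.abs(coeffs))[-keep]
coeffs[np.abs(coeffs) < thresh] = 0.0            # hard threshold
parts, i = [], 1
for d in details:
    parts.append(coeffs[i:i + len(d)])
    i += len(d)
recon = haar_inv(coeffs[:1], parts)
```

Because the transform is orthonormal, the reconstruction error equals the energy of the discarded coefficients, which is why a few percent of coefficients can carry most of a smooth, locally structured field.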
Plasma Physics Approximations in Ares
Managan, R. A.
2015-01-08
Lee & More derived analytic forms for the transport properties of a plasma. Many hydro-codes use their formulae for electrical and thermal conductivity. The coefficients are complex functions of Fermi-Dirac integrals, F_{n}( μ/θ ), the chemical potential, μ or ζ = ln(1+e^{ μ/θ} ), and the temperature, θ = kT. Since these formulae are expensive to compute, rational function approximations were fit to them. Approximations are also used to find the chemical potential, either μ or ζ . The fits use ζ as the independent variable instead of μ/θ . New fits are provided for A^{α} (ζ ),A^{β} (ζ ), ζ, f(ζ ) = (1 + e^{-μ/θ})F_{1/2}(μ/θ), F_{1/2}'/F_{1/2}, F_{c}^{α}, and F_{c}^{β}. In each case the relative error of the fit is minimized since the functions can vary by many orders of magnitude. The new fits are designed to exactly preserve the limiting values in the non-degenerate and highly degenerate limits or as ζ→ 0 or ∞. The original fits due to Lee & More and George Zimmerman are presented for comparison.
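As a point of reference for such fits, the Fermi-Dirac integral F_{1/2} can be evaluated directly by quadrature and checked against its non-degenerate and degenerate limits. A small sketch (ours, not the Ares implementation):

```python
import math

def fermi_dirac_half(eta, n=20000, cut=60.0):
    """F_{1/2}(eta) = integral_0^inf sqrt(x) / (1 + exp(x - eta)) dx,
    evaluated by simple trapezoidal quadrature (both endpoints of the
    truncated integrand are ~0). Useful as a brute-force reference when
    validating rational-function fits."""
    h = cut / n
    s = 0.0
    for i in range(1, n):
        x = i * h
        s += math.sqrt(x) / (1.0 + math.exp(x - eta))
    return h * s

# Non-degenerate limit: F_{1/2}(eta) -> exp(eta) * sqrt(pi)/2;
# degenerate limit:     F_{1/2}(eta) -> (2/3) * eta^{3/2}.
print(fermi_dirac_half(-5.0), math.exp(-5.0) * math.sqrt(math.pi) / 2.0)
print(fermi_dirac_half(40.0), (2.0 / 3.0) * 40.0 ** 1.5)
```

Matching these two limits exactly is precisely the design goal the new fits described above are built around.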
Fast Approximate Quadratic Programming for Graph Matching
Vogelstein, Joshua T.; Conroy, John M.; Lyzinski, Vince; Podrazik, Louis J.; Kratzer, Steven G.; Harley, Eric T.; Fishkind, Donniell E.; Vogelstein, R. Jacob; Priebe, Carey E.
2015-01-01
Quadratic assignment problems arise in a wide variety of domains, spanning operations research, graph theory, computer vision, and neuroscience, to name a few. The graph matching problem is a special case of the quadratic assignment problem, and graph matching is increasingly important as graph-valued data is becoming more prominent. With the aim of efficiently and accurately matching the large graphs common in big data, we present our graph matching algorithm, the Fast Approximate Quadratic assignment algorithm. We empirically demonstrate that our algorithm is faster and achieves a lower objective value on over 80% of the QAPLIB benchmark library, compared with the previous state-of-the-art. Applying our algorithm to our motivating example, matching C. elegans connectomes (brain-graphs), we find that it efficiently achieves good performance. PMID:25886624
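For graphs small enough, the objective that FAQ approximates can be solved exactly by enumeration, which makes the problem statement concrete. A sketch (our own illustration of the objective, not the FAQ algorithm itself):

```python
import itertools
import numpy as np

def graph_match_bruteforce(A, B):
    """Exact graph matching for tiny graphs: find the permutation matrix P
    minimising ||A - P B P^T||_F^2 by exhaustive search over all n!
    permutations. FAQ approximates this minimisation via a continuous
    relaxation, since brute force is hopeless for large n."""
    n = len(A)
    best_cost, best_perm = np.inf, None
    for perm in itertools.permutations(range(n)):
        P = np.eye(n)[list(perm)]
        cost = np.linalg.norm(A - P @ B @ P.T) ** 2
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
P0 = np.eye(4)[[2, 0, 1, 3]]
B = P0.T @ A @ P0   # a relabelled copy of A: an exact matching exists
perm, cost = graph_match_bruteforce(A, B)
print(perm, cost)   # cost is 0.0 for some permutation
```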
Common Career Technical Core: Common Standards, Common Vision for CTE
ERIC Educational Resources Information Center
Green, Kimberly
2012-01-01
This article provides an overview of the National Association of State Directors of Career Technical Education Consortium's (NASDCTEc) Common Career Technical Core (CCTC), a state-led initiative that was created to ensure that career and technical education (CTE) programs are consistent and high quality across the United States. Forty-two states,…
Mean-Field Approximation to the Hydrophobic Hydration in the Liquid-Vapor Interface of Water.
Abe, Kiharu; Sumi, Tomonari; Koga, Kenichiro
2016-03-01
A mean-field approximation to the solvation of nonpolar solutes in the liquid-vapor interface of aqueous solutions is proposed. It is first remarked with a numerical illustration that the solvation of a methane-like solute in bulk liquid water is accurately described by the mean-field theory of liquids, the main idea of which is that the probability (Pcav) of finding a cavity in the solvent that can accommodate the solute molecule and the attractive interaction energy (uatt) that the solute would feel if it is inserted in such a cavity are both functions of the solvent density alone. It is then assumed that the basic idea is still valid in the liquid-vapor interface, but Pcav and uatt are separately functions of different coarse-grained local densities, not functions of a common local density. Validity of the assumptions is confirmed for the solvation of the methane-like particle in the interface of model water at temperatures between 253 and 613 K. With the mean-field approximation extended to the inhomogeneous system the local solubility profiles across the interface at various temperatures are calculated from Pcav and uatt obtained at a single temperature. The predicted profiles are in excellent agreement with those obtained by the direct calculation of the excess chemical potential over an interfacial region where the solvent local density varies most rapidly. PMID:26595441
Interplay of approximate planning strategies.
Huys, Quentin J M; Lally, Níall; Faulkner, Paul; Eshel, Neir; Seifritz, Erich; Gershman, Samuel J; Dayan, Peter; Roiser, Jonathan P
2015-03-10
Humans routinely formulate plans in domains so complex that even the most powerful computers are taxed. To do so, they seem to avail themselves of many strategies and heuristics that efficiently simplify, approximate, and hierarchically decompose hard tasks into simpler subtasks. Theoretical and cognitive research has revealed several such strategies; however, little is known about their establishment, interaction, and efficiency. Here, we use model-based behavioral analysis to provide a detailed examination of the performance of human subjects in a moderately deep planning task. We find that subjects exploit the structure of the domain to establish subgoals in a way that achieves a nearly maximal reduction in the cost of computing values of choices, but then combine partial searches with greedy local steps to solve subtasks, and maladaptively prune the decision trees of subtasks in a reflexive manner upon encountering salient losses. Subjects come idiosyncratically to favor particular sequences of actions to achieve subgoals, creating novel complex actions or "options." PMID:25675480
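The reflexive-pruning behaviour described above can be illustrated with a toy decision tree (entirely our own construction, not the authors' task): pruning every branch that begins with a salient loss forfeits a branch whose later payoff more than compensates.

```python
# state -> list of (immediate reward, next state); None is terminal.
TREE = {
    "s": [(-70, "a"), (-20, "b")],
    "a": [(+140, None), (-20, None)],
    "b": [(+20, None), (-20, None)],
}

def best_value(state, prune_below=None):
    """Best achievable total reward from `state` by depth-first search.
    If prune_below is set, any branch whose immediate reward falls below
    it is reflexively discarded, regardless of what follows."""
    if state is None:
        return 0
    values = []
    for reward, nxt in TREE[state]:
        if prune_below is not None and reward < prune_below:
            continue  # salient loss: prune without looking deeper
        values.append(reward + best_value(nxt, prune_below))
    return max(values) if values else float("-inf")

print(best_value("s"))                   # full search: -70 + 140 = 70
print(best_value("s", prune_below=-40))  # pruned: settles for -20 + 20 = 0
```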
Approximating metal-insulator transitions
NASA Astrophysics Data System (ADS)
Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej
2015-12-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate Metal-Insulator Transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
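The metal-insulator transition of the standard Aubry-André model, the reference point against which the mobility-edge evidence above is contrasted, is easy to see numerically via the inverse participation ratio (IPR) of an eigenstate. A sketch (our own illustration):

```python
import numpy as np

def ground_state_ipr(lam, n=233):
    """IPR sum_j |psi_j|^4 of the ground state of the Aubry-Andre model:
    nearest-neighbour hopping plus the quasiperiodic on-site potential
    lam * cos(2*pi*beta*j), with beta the inverse golden ratio.
    IPR ~ 1/n for extended states, O(1) for localized ones; the
    self-dual transition of this model sits at lam = 2."""
    beta = (np.sqrt(5.0) - 1.0) / 2.0
    H = np.diag(lam * np.cos(2.0 * np.pi * beta * np.arange(n)))
    H -= np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    _, V = np.linalg.eigh(H)
    psi = V[:, 0]
    return float(np.sum(psi ** 4))

print(ground_state_ipr(0.5))  # metallic side: IPR ~ 1/n
print(ground_state_ipr(4.0))  # insulating side: IPR = O(1)
```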
Strong shock implosion, approximate solution
NASA Astrophysics Data System (ADS)
Fujimoto, Y.; Mishkin, E. A.; Alejaldre, C.
1983-01-01
The self-similar, center-bound motion of a strong spherical, or cylindrical, shock wave moving through an ideal gas with a constant ratio of specific heats, γ = cp/cv, is considered and a linearized, approximate solution is derived. An X, Y phase plane of the self-similar solution is defined, and the representative curve of the system behind the shock front is replaced by a straight line connecting the mapping of the shock front with that of its tail. The reduced pressure P(ξ), density R(ξ), and velocity U1(ξ) are found in closed and quite accurate form. Comparison with numerically obtained results, for γ = 5/3 and γ = 7/5, is shown.
Function approximation using adaptive and overlapping intervals
Patil, R.B.
1995-05-01
A problem common to many disciplines is to approximate a function given only its values at various points in the input variable space. A method is proposed for approximating a function that maps several input variables to one output variable. The model takes the form of a weighted average of overlapping basis functions defined over intervals. The number of such basis functions and their parameters (widths and centers) are determined automatically from the given training data by a learning algorithm. The proposed algorithm can be seen as placing a nonuniform multidimensional grid in the input domain with overlapping cells; the non-uniformity and overlap of the cells are achieved by a learning algorithm that optimizes a given objective function. This approach is motivated by the fuzzy modeling approach and by learning algorithms used for clustering and classification in pattern recognition. The basics of why and how the approach works are given. A few examples of nonlinear regression and classification are modeled. The relationship between the proposed technique, radial basis neural networks, kernel regression, probabilistic neural networks, and fuzzy modeling is explained. Finally, advantages and disadvantages are discussed.
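With the centers and widths held fixed rather than learned, the weighted-average-of-overlapping-basis-functions model reduces to a normalized-RBF smoother, which is enough to show the mechanism. A sketch (our own simplification; the paper's algorithm additionally learns the number, centers, and widths of the cells):

```python
import numpy as np

def overlapping_basis_predict(x_train, y_train, x_query, width=0.05):
    """Prediction as a weighted average of overlapping Gaussian basis
    functions centred on the training points; the weights are the
    overlap-normalised memberships of the query point in each cell."""
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / width) ** 2)    # membership in each cell
    w /= w.sum(axis=1, keepdims=True)      # normalise the overlap
    return w @ y_train

x = np.linspace(0.0, 1.0, 21)
y = np.sin(2.0 * np.pi * x)
pred = overlapping_basis_predict(x, y, np.array([0.25, 0.5]))
print(pred)  # close to sin(pi/2) = 1 and sin(pi) = 0
```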
Approximate analytic solutions to the NPDD: Short exposure approximations
NASA Astrophysics Data System (ADS)
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short low intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
Oh, Jun Seok
2016-01-01
Vagally mediated atrioventricular (AV) block is a condition in which a paroxysmal AV block occurs with slowing of the sinus rate. Owing to its unpredictability and benign nature, it often goes unrecognized in clinical practice. We present the case of a 49-year-old man who suddenly lost consciousness when he assumed a prone position for hemorrhoidectomy under spinal anesthesia; continuous electrocardiographic recording revealed AV block with ventricular asystole. He recovered completely after returning to a supine position. This case calls attention to a potentially fatal manifestation of vagally mediated AV block leading to syncope. PMID:26885304
Generalized stationary phase approximations for mountain waves
NASA Astrophysics Data System (ADS)
Knight, H.; Broutman, D.; Eckermann, S. D.
2016-04-01
Large altitude asymptotic approximations are derived for vertical displacements due to mountain waves generated by hydrostatic wind flow over arbitrary topography. This leads to new asymptotic analytic expressions for wave-induced vertical displacement for mountains with an elliptical Gaussian shape and with the major axis oriented at any angle relative to the background wind. The motivation is to understand local maxima in vertical displacement amplitude at a given height for elliptical mountains aligned at oblique angles to the wind direction, as identified in Eckermann et al. ["Effects of horizontal geometrical spreading on the parameterization of orographic gravity-wave drag. Part 1: Numerical transform solutions," J. Atmos. Sci. 72, 2330-2347 (2015)]. The standard stationary phase method reproduces one type of local amplitude maximum that migrates downwind with increasing altitude. Another type of local amplitude maximum stays close to the vertical axis over the center of the mountain, and a new generalized stationary phase method is developed to describe this other type of local amplitude maximum and the horizontal variation of wave-induced vertical displacement near the vertical axis of the mountain in the large altitude limit. The new generalized stationary phase method describes the asymptotic behavior of integrals where the asymptotic parameter is raised to two different powers (1/2 and 1) rather than just one power as in the standard stationary phase method. The vertical displacement formulas are initially derived assuming a uniform background wind but are extended to accommodate both vertical shear with a fixed wind direction and vertical variations in the buoyancy frequency.
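The standard single-power stationary phase method that the generalized version extends can be verified numerically on a model integral: the leading-order estimate at a nondegenerate stationary point improves as the large parameter grows. A self-contained check (our own illustration, not the paper's mountain-wave integrals):

```python
import numpy as np

def oscillatory_integral(lam, g, phi, a=-20.0, b=20.0, n=200001):
    """Direct trapezoidal quadrature of I(lam) = int g(x) e^{i lam phi(x)} dx."""
    x = np.linspace(a, b, n)
    y = g(x) * np.exp(1j * lam * phi(x))
    return np.sum((y[:-1] + y[1:]) / 2.0) * (x[1] - x[0])

def stationary_phase(lam, g, x0, phi0, phi2):
    """Leading-order standard stationary-phase estimate at a single
    nondegenerate stationary point x0 with phi''(x0) = phi2 > 0:
    g(x0) sqrt(2 pi / (lam phi2)) exp(i(lam phi(x0) + pi/4))."""
    return g(x0) * np.sqrt(2.0 * np.pi / (lam * phi2)) \
        * np.exp(1j * (lam * phi0 + np.pi / 4.0))

g = lambda x: np.exp(-x ** 2 / 2.0)   # slowly varying envelope
phi = lambda x: x ** 2 / 2.0          # one stationary point, at x0 = 0
errs = []
for lam in (10.0, 100.0):
    exact = oscillatory_integral(lam, g, phi)
    approx = stationary_phase(lam, g, 0.0, 0.0, 1.0)
    errs.append(abs(exact - approx) / abs(exact))
print(errs)  # relative error falls roughly as 1/lam
```

The generalized method of the paper handles integrals where two different powers of the large parameter appear, which this standard single-power formula cannot.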
Function approximation in inhibitory networks.
Tripp, Bryan; Eliasmith, Chris
2016-05-01
In performance-optimized artificial neural networks, such as convolutional networks, each neuron makes excitatory connections with some of its targets and inhibitory connections with others. In contrast, physiological neurons are typically either excitatory or inhibitory, not both. This is a puzzle, because it seems to constrain computation, and because there are several counter-examples that suggest that it may not be a physiological necessity. Parisien et al. (2008) showed that any mixture of excitatory and inhibitory functional connections could be realized by a purely excitatory projection in parallel with a two-synapse projection through an inhibitory population. They showed that this works well with ratios of excitatory and inhibitory neurons that are realistic for the neocortex, suggesting that perhaps the cortex efficiently works around this apparent computational constraint. Extending this work, we show here that mixed excitatory and inhibitory functional connections can also be realized in networks that are dominated by inhibition, such as those of the basal ganglia. Further, we show that the function-approximation capacity of such connections is comparable to that of idealized mixed-weight connections. We also study whether such connections are viable in recurrent networks, and find that such recurrent networks can flexibly exhibit a wide range of dynamics. These results offer a new perspective on computation in the basal ganglia, and also perhaps on inhibitory networks within the cortex. PMID:26963256
Multidimensional stochastic approximation Monte Carlo.
Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
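The flat-histogram mechanism of SAMC, penalise the energy level of the current state with a slowly decaying gain so that all levels end up visited equally, and read off log g from the accumulated penalties, can be shown on a toy model with a known density of states (our own sketch, not the polymer application):

```python
import math
import random

def samc_dice(steps=500000, t0=2000, seed=2):
    """SAMC sketch: a state is a pair of dice, the 'energy' is their
    sum, and the algorithm learns log g(E), the number of microstates
    per sum, via a decaying-gain penalty on the visited level."""
    rng = random.Random(seed)
    theta = {e: 0.0 for e in range(2, 13)}    # running log-g estimates
    state = [1, 1]
    for t in range(1, steps + 1):
        prop = list(state)
        prop[rng.randrange(2)] = rng.randint(1, 6)   # re-roll one die
        e_old, e_new = sum(state), sum(prop)
        # accept with min(1, exp(theta_old - theta_new)): favours
        # energy levels whose estimated g is still underweighted
        if rng.random() < math.exp(min(0.0, theta[e_old] - theta[e_new])):
            state = prop
        theta[sum(state)] += t0 / max(t0, t)         # decaying SAMC gain
    base = theta[2]                                  # normalise: g(2) = 1
    return {e: math.exp(v - base) for e, v in theta.items()}

g = samc_dice()
print(g[7], g[12])  # true multiplicities relative to g(2) = 1 are 6 and 1
```

The multidimensional generalisation in the paper replaces the scalar energy label by a vector of macroscopic variables, but the update rule has the same structure.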
Decision analysis with approximate probabilities
NASA Technical Reports Server (NTRS)
Whalen, Thomas
1992-01-01
This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied. This is due to the fact that some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth. This can lead to sets of rounded probabilities which add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decisionmaking using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second order maximum entropy principle, performed best overall.
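Two of the compared criteria are simple enough to sketch for a three-state problem with probabilities rounded to the nearest tenth (our own illustrative payoffs, not the simulation design of the paper; enumerating renormalised corners of the rounding box only approximates the full linearly constrained probability set):

```python
import itertools

# Rows are actions, columns are the three states of nature. Each true
# probability lies within +/-0.05 of its rounded value.
payoffs = [[10.0, 0.0, 0.0],   # a gamble that pays only in state 1
           [4.5, 4.5, 4.5]]    # a safe action
rounded = [0.5, 0.3, 0.2]

def expected(row, p):
    return sum(pi * x for pi, x in zip(p, row))

def midpoint_choice(payoffs, rounded):
    """Midpoint criterion: treat the rounded probabilities as exact."""
    return max(range(len(payoffs)), key=lambda a: expected(payoffs[a], rounded))

def maximin_choice(payoffs, rounded, slack=0.05):
    """Maximin criterion: for each action take the worst expected value
    over renormalised corners of the rounding box, then pick the action
    with the best worst case."""
    def worst(row):
        vals = []
        for signs in itertools.product((-1, 0, 1), repeat=len(rounded)):
            p = [max(0.0, r + s * slack) for r, s in zip(rounded, signs)]
            z = sum(p)
            if z > 0:
                vals.append(expected(row, [pi / z for pi in p]))
        return min(vals)
    return max(range(len(payoffs)), key=lambda a: worst(payoffs[a]))

print(midpoint_choice(payoffs, rounded))  # 0: the gamble, EV 5.0 > 4.5
print(maximin_choice(payoffs, rounded))   # 1: the gamble's worst case < 4.5
```

The example shows how the two criteria can disagree on the same rounded information, which is what the simulation experiment quantifies across eight information levels.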
NASA Astrophysics Data System (ADS)
Trimarchi, Giancarlo; Zhang, Xiuwen; Zunger, Alex
2015-03-01
The quest for new topological insulators (TIs) has motivated numerous ab initio calculations of the topological metric Z2 of candidate compounds in hypothetical crystal structures, or in assumed pressure or doping conditions. However, TI-ness might destabilize certain crystal structures that would be replaced by other structures, which might not be TIs. Here, we discuss such false-positive predictions recurrent in the ab initio search for new TIs: (i) Various ABX compounds, predicted to be TIs in the assumed ZrBeSi-type structure that turns out to be unstable, become trivial insulators in their stable structures. (ii) Band-inversion-inducing structure perturbations destabilize the system which is instead trivial at equilibrium: examples of this scenario are the cubic AIIIBiO3 perovskites that transform from topological to trivial when they relax to their equilibrium structures. (iii) Doping destabilizes the band-inverted system that relaxes to a trivial atomic configuration (orthorhombic band-inverted BaBiO3 becomes trivial upon electron doping). This shows the need of performing total energy along with Z2 calculations to predict stable TIs. Work at CU, Boulder supported by the U.S. Department of Energy, Office of Science, Basic Energy Science, Materials Sciences and Engineering Division under Grant DE-FG02-13ER46959.
ERIC Educational Resources Information Center
Bruton, Anthony
2005-01-01
Process writing and communicative-task-based instruction both assume productive tasks that prompt self-expression to motivate students and as the principal engine for developing L2 proficiency in the language classroom. Besides this, process writing and communicative-task-based instruction have much else in common, despite some obvious…
A consistent collinear triad approximation for operational wave models
NASA Astrophysics Data System (ADS)
Salmon, J. E.; Smit, P. B.; Janssen, T. T.; Holthuijsen, L. H.
2016-08-01
In shallow water, the spectral evolution associated with energy transfers due to three-wave (or triad) interactions is important for the prediction of nearshore wave propagation and wave-driven dynamics. The numerical evaluation of these nonlinear interactions involves the evaluation of a weighted convolution integral in both frequency and directional space for each frequency-direction component in the wave field. For reasons of efficiency, operational wave models often rely on a so-called collinear approximation that assumes that energy is only exchanged between wave components travelling in the same direction (collinear propagation) to eliminate the directional convolution. In this work, we show that the collinear approximation as presently implemented in operational models is inconsistent. This causes energy transfers to become unbounded in the limit of unidirectional waves (narrow aperture), and results in the underestimation of energy transfers in short-crested wave conditions. We propose a modification to the collinear approximation to remove this inconsistency and to make it physically more realistic. Through comparison with laboratory observations and results from Monte Carlo simulations, we demonstrate that the proposed modified collinear model is consistent, remains bounded, smoothly converges to the unidirectional limit, and is numerically more robust. Our results show that the modifications proposed here result in a consistent collinear approximation, which remains bounded and can provide an efficient approximation to model nonlinear triad effects in operational wave models.
The weighted curvature approximation in scattering from sea surfaces
NASA Astrophysics Data System (ADS)
Guérin, Charles-Antoine; Soriano, Gabriel; Chapron, Bertrand
2010-07-01
A family of unified models in scattering from rough surfaces is based on local corrections of the tangent plane approximation through higher-order derivatives of the surface. We revisit these methods in a common framework when the correction is limited to the curvature, that is essentially the second-order derivative. The resulting expression is formally identical to the weighted curvature approximation, with several admissible kernels, however. For sea surfaces under the Gaussian assumption, we show that the weighted curvature approximation reduces to a universal and simple expression for the off-specular normalized radar cross-section (NRCS), regardless of the chosen kernel. The formula involves merely the sum of the NRCS in the classical Kirchhoff approximation and the NRCS in the small perturbation method, except that the Bragg kernel in the latter has to be replaced by the difference of a Bragg and a Kirchhoff kernel. This result is consistently compared with the resonant curvature approximation. Some numerical comparisons with the method of moments and other classical approximate methods are performed at various bands and sea states. For the copolarized components, the weighted curvature approximation is found numerically very close to the cut-off invariant two-scale model, while bringing substantial improvement to both the Kirchhoff and small-slope approximation. However, the model is unable to predict cross-polarization in the plane of incidence. The simplicity of the formulation opens new perspectives in sea state inversion from remote sensing data.
Common Interventional Radiology Procedures
... of common interventional techniques is below. Common Interventional Radiology Procedures Angiography An X-ray exam of the ... into the vertebra.
How Common Is the Common Core?
ERIC Educational Resources Information Center
Thomas, Amande; Edson, Alden J.
2014-01-01
Since the introduction of the Common Core State Standards for Mathematics (CCSSM) in 2010, stakeholders in adopting states have engaged in a variety of activities to understand CCSSM standards and transition from previous state standards. These efforts include research, professional development, assessment and modification of curriculum resources,…
Pair approximation method for spin-1 Heisenberg system
NASA Astrophysics Data System (ADS)
Mert, Murat; Kılıç, Ahmet; Mert, Gülistan
2016-03-01
The spin-1 Heisenberg system on a simple cubic lattice is considered in the pair approximation method, assuming that the second-nearest-neighbor exchange interaction parameter has a negative value. The system is described in the presence of an external magnetic field. The effects of the negative single-ion anisotropy and the negative second-nearest-neighbor exchange interaction on magnetization, internal energy, heat capacity, entropy and free energy are investigated. There are diverse anomalies at low temperature. In the magnetization and other thermodynamic quantities, first-order phase transitions from the ferromagnetic state to the antiferromagnetic state and from the ferromagnetic state to the paramagnetic state are observed.
An Examination of New Paradigms for Spline Approximations.
Witzgall, Christoph; Gilsinn, David E; McClain, Marjorie A
2006-01-01
Lavery splines are examined in the univariate and bivariate cases. In both instances relaxation-based algorithms for approximate calculation of Lavery splines are proposed. Following previous work by Gilsinn et al. [7] addressing the bivariate case, a rotationally invariant functional is assumed. The version of bivariate splines proposed in this paper also aims at irregularly spaced data and uses Hsieh-Clough-Tocher elements based on the triangulated irregular network (TIN) concept. In this paper, the univariate case, however, is investigated in greater detail so as to further the understanding of the bivariate case. PMID:27274917
ERIC Educational Resources Information Center
Glenn, Charles L.
1987-01-01
Horace Mann's goal of creating a common school that brings our society's children together in mutual respect and common learning need not be frustrated by residential segregation and geographical separation of the haves and have-nots. Massachusetts' new common school vision boasts a Metro Program for minority students, 80 magnet schools, and…
ERIC Educational Resources Information Center
Boyer, Ernest L.
Current curricula in institutions of higher education are criticized in this speech for their lack of a common core of education. Several possibilities for developing such a common core include education centered around our common heritage and the challenges of the present. It is suggested that all students must be introduced to the events,…
Knowledge representation for commonality
NASA Technical Reports Server (NTRS)
Yeager, Dorian P.
1990-01-01
Domain-specific knowledge necessary for commonality analysis falls into two general classes: commonality constraints and costing information. Notations for encoding such knowledge should be powerful and flexible and should appeal to the domain expert. The notations employed by the Commonality Analysis Problem Solver (CAPS) analysis tool are described. Examples are given to illustrate the main concepts.
Gryczynski, Z; Tenenholz, T; Bucci, E
1992-01-01
Using the Förster equations we have estimated the rate of energy transfer from tryptophans to hemes in hemoglobin. Assuming an isotropic distribution of the transition moments of the heme in the plane of the porphyrin, we computed the orientation factors and the consequent transfer rates from the crystallographic coordinates of human oxy- and deoxy-hemoglobin. It appears that the orientation factors do not play a limiting role in regulating the energy transfer and that the rates are controlled almost exclusively by the intrasubunit separations between tryptophans and hemes. In intact hemoglobin tetramers the intrasubunit separations are such as to reduce lifetimes to 5 and 15 ps/ns of tryptophan lifetime. Lifetimes of several hundred picoseconds would be allowed by the intersubunit separations, but intersubunits transfer becomes important only when one heme per tetramer is absent or does not accept transfer. If more than one heme per tetramer is absent lifetimes of more than 1 ns would appear. PMID:1420905
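The distance dependence driving these lifetime estimates can be sketched numerically. Below is a minimal illustration of the Förster transfer rate and efficiency, assuming the orientation factor has already been folded into the Förster radius R0; the function names and example values are hypothetical, not taken from the paper.

```python
def forster_rate(tau_donor, r0, r):
    """Förster transfer rate k_T = (1/tau_D) * (R0/r)**6.

    tau_donor: donor lifetime in the absence of transfer
    r0: Förster radius (separation at 50% efficiency; orientation
        factor assumed already absorbed into R0)
    r:  donor-acceptor separation, in the same units as r0
    """
    return (1.0 / tau_donor) * (r0 / r) ** 6


def transfer_efficiency(r0, r):
    """E = R0^6 / (R0^6 + r^6); by construction E = 0.5 when r == R0."""
    return r0 ** 6 / (r0 ** 6 + r ** 6)


# The steep 1/r^6 dependence: halving the separation boosts the rate 64-fold,
# which is why intrasubunit separations dominate the computed transfer rates.
```
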
NASA Technical Reports Server (NTRS)
Rengarajan, Govind; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
An improved four-node quadrilateral assumed-stress hybrid shell element with drilling degrees of freedom is presented. The formulation is based on the Hellinger-Reissner variational principle and the shape functions are formulated directly for the four-node element. The element has 12 membrane degrees of freedom and 12 bending degrees of freedom. It has nine independent stress parameters to describe the membrane stress resultant field and 13 independent stress parameters to describe the moment and transverse shear stress resultant field. The formulation encompasses linear stress, linear buckling, and linear free vibration problems. The element is validated with standard test cases and is shown to be robust. Numerical results are presented for linear stress, buckling, and free vibration analyses.
NASA Astrophysics Data System (ADS)
Firl, G. J.; Randall, D. A.
2013-12-01
The so-called "assumed probability density function (PDF)" approach to subgrid-scale (SGS) parameterization has been shown to be a promising method for more accurately representing boundary layer cloudiness under a wide range of conditions. A new parameterization has been developed, named the Two-and-a-Half ORder closure (THOR), that combines this approach with a higher-order turbulence closure. THOR predicts the time evolution of the turbulence kinetic energy components, the variance of ice-liquid water potential temperature (θil) and total non-precipitating water mixing ratio (qt) and the covariance between the two, and the vertical fluxes of horizontal momentum, θil, and qt. Ten corresponding third-order moments in addition to the skewnesses of θil and qt are calculated using diagnostic functions assuming negligible time tendencies. The statistical moments are used to define a trivariate double Gaussian PDF among vertical velocity, θil, and qt. The first three statistical moments of each variable are used to estimate the two Gaussian plume means, variances, and weights. Unlike previous similar models, plume variances are not assumed to be equal or zero. Instead, they are parameterized using the idea that the less dominant Gaussian plume (typically representing the updraft-containing portion of a grid cell) has greater variance than the dominant plume (typically representing the "environmental" or slowly subsiding portion of a grid cell). Correlations among the three variables are calculated using the appropriate covariance moments, and both plume correlations are assumed to be equal. The diagnosed PDF in each grid cell is used to calculate SGS condensation, SGS fluxes of cloud water species, SGS buoyancy terms, and to inform other physical parameterizations about SGS variability. SGS condensation is extended from previous similar models to include condensation over both liquid and ice substrates, dependent on the grid cell temperature. Implementations have been
2014-01-01
Background The accuracy of the World Health Organization method of estimating malaria parasite density from thick blood smears by assuming a white blood cell (WBC) count of 8,000/μL has been questioned in several studies. Since epidemiological investigations, anti-malarial efficacy trials and routine laboratory reporting in Papua New Guinea (PNG) have all relied on this approach, its validity was assessed as part of a trial of artemisinin-based combination therapy, which included blood smear microscopy and automated measurement of leucocyte densities on Days 0, 3 and 7. Results 168 children with uncomplicated malaria (median (inter-quartile range) age 44 (39–47) months) were enrolled, 80.3% with Plasmodium falciparum monoinfection, 14.9% with Plasmodium vivax monoinfection, and 4.8% with mixed P. falciparum/P. vivax infection. All responded to allocated therapy and none had a malaria-positive slide on Day 3. Consistent with a median baseline WBC density of 7.3 (6.5-7.8) × 109/L, there was no significant difference in baseline parasite density between the two methods regardless of Plasmodium species. Bland Altman plots showed that, for both species, the mean difference between paired parasite densities calculated from assumed and measured WBC densities was close to zero. At parasite densities <10,000/μL by measured WBC, almost all between-method differences were within the 95% limits of agreement. Above this range, there was increasing scatter but no systematic bias. Conclusions Diagnostic thresholds and parasite clearance assessment in most PNG children with uncomplicated malaria are relatively robust, but accurate estimates of a higher parasitaemia, as a prognostic index, requires formal WBC measurement. PMID:24739250
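The assumed-versus-measured WBC comparison rests on a simple proportional calculation. A minimal sketch, assuming the standard thick-smear convention of counting parasites against a fixed number of WBCs; the function name and example counts are illustrative, not data from the trial.

```python
def parasite_density(parasites_counted, wbc_counted, wbc_per_ul=8000):
    """Estimate parasite density (parasites/uL) from a thick blood smear.

    Parasites are counted against a fixed number of white blood cells and
    scaled by the WBC density. The WHO method assumes 8,000 WBC/uL; pass a
    measured WBC density instead to avoid that assumption.
    """
    return parasites_counted * wbc_per_ul / wbc_counted


# Same smear reading under the assumed vs. a measured WBC density
# (7,300/uL, i.e. 7.3 x 10^9/L, matching the study's median baseline):
assumed = parasite_density(50, 200)            # assumes 8,000 WBC/uL
measured = parasite_density(50, 200, 7300)     # uses the measured count
```

The ratio of the two estimates is simply the ratio of the assumed to the measured WBC density, which is why the bias stays small when the true WBC count is close to 8,000/μL.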
NASA Astrophysics Data System (ADS)
Kirchner, N.; Ahlkrona, J.; Gowan, E. J.; Lötstedt, P.; Lea, J. M.; Noormets, R.; von Sydow, L.; Dowdeswell, J. A.; Benham, T.
2016-03-01
Full Stokes ice sheet models provide the most accurate description of ice sheet flow, and can therefore be used to reduce existing uncertainties in predicting the contribution of ice sheets to future sea level rise on centennial time-scales. The level of accuracy at which millennial time-scale palaeo-ice sheet simulations resolve ice sheet flow lags the standards set by Full Stokes models, especially, when Shallow Ice Approximation (SIA) models are used. Most models used in paleo-ice sheet modeling were developed at a time when computer power was very limited, and rely on several assumptions. At the time there was no means of verifying the assumptions by other than mathematical arguments. However, with the computer power and refined Full Stokes models available today, it is possible to test these assumptions numerically. In this paper, we review (Ahlkrona et al., 2013a) where such tests were performed and inaccuracies in commonly used arguments were found. We also summarize (Ahlkrona et al., 2013b) where the implications of the inaccurate assumptions are analyzed for two paleo-models - the SIA and the SOSIA. We review these works without resorting to mathematical detail, in order to make them accessible to a wider audience with a general interest in palaeo-ice sheet modelling. Specifically, we discuss two implications of relevance for palaeo-ice sheet modelling. First, classical SIA models are less accurate than assumed in their original derivation. Secondly, and contrary to previous recommendations, the SOSIA model is ruled out as a practicable tool for palaeo-ice sheet simulations. We conclude with an outlook concerning the new Ice Sheet Coupled Approximation Level (ISCAL) method presented in Ahlkrona et al. (2016), that has the potential to match the accuracy standards of full Stokes model on palaeo-timescales of tens of thousands of years, and to become an alternative to hybrid models currently used in palaeo-ice sheet modelling. The method is applied to an ice
Comparison of the Radiative Two-Flux and Diffusion Approximations
NASA Technical Reports Server (NTRS)
Spuckler, Charles M.
2006-01-01
Approximate solutions are sometimes used to determine the heat transfer and temperatures in a semitransparent material in which conduction and thermal radiation are acting. A comparison of the Milne-Eddington two-flux approximation and the diffusion approximation for combined conduction and radiation heat transfer in a ceramic material was performed to determine the accuracy of the diffusion solution. A plane gray semitransparent layer without a substrate and a non-gray semitransparent plane layer on an opaque substrate were considered. For the plane gray layer the material is semitransparent for all wavelengths and the scattering and absorption coefficients do not vary with wavelength. For the non-gray plane layer the material is semitransparent with constant absorption and scattering coefficients up to a specified wavelength. At higher wavelengths the non-gray plane layer is assumed to be opaque. The layers are heated on one side and cooled on the other by diffuse radiation and convection. The scattering and absorption coefficients were varied. The error in the diffusion approximation compared to the Milne-Eddington two-flux approximation was obtained as a function of scattering coefficient and absorption coefficient. The percent difference in interface temperatures and heat flux through the layer obtained using the Milne-Eddington two-flux and diffusion approximations is presented as a function of scattering coefficient and absorption coefficient. The largest errors occur for high scattering and low absorption except for the back surface temperature of the plane gray layer where the error is also larger at low scattering and low absorption. It is shown that the accuracy of the diffusion approximation can be improved for some scattering and absorption conditions if a reflectance obtained from a Kubelka-Munk type two-flux theory is used instead of a reflection obtained from the Fresnel equation. The Kubelka-Munk reflectance accounts for surface reflection and
Energy flow: image correspondence approximation for motion analysis
NASA Astrophysics Data System (ADS)
Wang, Liangliang; Li, Ruifeng; Fang, Yajun
2016-04-01
We propose a correspondence approximation approach between temporally adjacent frames for motion analysis. First, an energy map is established to represent image spatial features on multiple scales using Gaussian convolution. On this basis, the energy flow at each layer is estimated using Gauss-Seidel iteration according to the energy invariance constraint. More specifically, at the core of the energy invariance constraint is an "energy conservation law" assuming that the spatial energy distribution of an image does not change significantly with time. Finally, the energy flow field at different layers is reconstructed by considering different smoothness degrees. Due to the multiresolution origin and energy-based implementation, our algorithm is able to quickly address correspondence searching issues in spite of background noise or illumination variation. We apply our correspondence approximation method to motion analysis, and experimental results demonstrate its applicability.
Dynamical observer for a flexible beam via finite element approximations
NASA Technical Reports Server (NTRS)
Manitius, Andre; Xia, Hong-Xing
1994-01-01
The purpose of this view-graph presentation is a computational investigation of the closed-loop output feedback control of a Euler-Bernoulli beam based on finite element approximation. The observer is part of the classical observer plus state feedback control, but it is finite-dimensional. In the theoretical work on the subject it is assumed (and sometimes proved) that increasing the number of finite elements will improve accuracy of the control. In applications, this may be difficult to achieve because of numerical problems. The main difficulty in computing the observer and simulating its work is the presence of high frequency eigenvalues in the finite-element model and poor numerical conditioning of some of the system matrices (e.g. poor observability properties) when the dimension of the approximating system increases. This work dealt with some of these difficulties.
On the convergence of difference approximations to scalar conservation laws
NASA Technical Reports Server (NTRS)
Osher, S.; Tadmor, E.
1985-01-01
A unified treatment of explicit in time, two level, second order resolution, total variation diminishing approximations to scalar conservation laws is presented. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced and results in terms of the latter are obtained. The existence of a cell entropy inequality is discussed and such an equality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first order accurate in general. Convergence for total variation diminishing-second order resolution schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.
On the convergence of difference approximations to scalar conservation laws
NASA Technical Reports Server (NTRS)
Osher, Stanley; Tadmor, Eitan
1988-01-01
A unified treatment is given for time-explicit, two-level, second-order-resolution (SOR), total-variation-diminishing (TVD) approximations to scalar conservation laws. The schemes are assumed only to have conservation form and incremental form. A modified flux and a viscosity coefficient are introduced to obtain results in terms of the latter. The existence of a cell entropy inequality is discussed, and such an equality for all entropies is shown to imply that the scheme is an E scheme on monotone (actually more general) data, hence at most only first-order accurate in general. Convergence for TVD-SOR schemes approximating convex or concave conservation laws is shown by enforcing a single discrete entropy inequality.
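The "conservation form" these schemes are assumed to have can be illustrated with the simplest member of the class. Below is a hedged sketch of one Lax-Friedrichs step for a scalar conservation law u_t + f(u)_x = 0 on a periodic grid; it is a first-order example of conservation form, not one of the second-order TVD schemes analyzed in these papers.

```python
import numpy as np


def lax_friedrichs_step(u, dt, dx, f):
    """Advance u_t + f(u)_x = 0 one time step in conservation form:

        u_j^{n+1} = u_j^n - (dt/dx) * (F_{j+1/2} - F_{j-1/2}),

    using the Lax-Friedrichs numerical flux and periodic boundaries.
    """
    up, um = np.roll(u, -1), np.roll(u, 1)   # u_{j+1}, u_{j-1}
    flux_r = 0.5 * (f(u) + f(up)) - 0.5 * (dx / dt) * (up - u)
    flux_l = 0.5 * (f(um) + f(u)) - 0.5 * (dx / dt) * (u - um)
    return u - (dt / dx) * (flux_r - flux_l)


# Because the update is a telescoping difference of interface fluxes,
# the discrete total sum(u)*dx is conserved exactly on a periodic domain,
# whatever flux function f is supplied (e.g. Burgers: f(u) = u**2 / 2).
```
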
NASA Astrophysics Data System (ADS)
Duchêne, Vincent
2014-08-01
The rigid-lid approximation is a commonly used simplification in the study of density-stratified fluids in oceanography. Roughly speaking, one assumes that the displacements of the surface are negligible compared with interface displacements. In this paper, we offer a rigorous justification of this approximation in the case of two shallow layers of immiscible fluids with constant and quasi-equal mass density. More precisely, we control the difference between the solutions of the Cauchy problem predicted by the shallow-water (Saint-Venant) system in the rigid-lid and free-surface configuration. We show that in the limit of a small density contrast, the flow may be accurately described as the superposition of a baroclinic (or slow) mode, which is well predicted by the rigid-lid approximation, and a barotropic (or fast) mode, whose initial smallness persists for large time. We also describe explicitly the first-order behavior of the deformation of the surface and discuss the case of a nonsmall initial barotropic mode.
HIGH DIMENSIONAL COVARIANCE MATRIX ESTIMATION IN APPROXIMATE FACTOR MODELS
Fan, Jianqing; Liao, Yuan; Mincheva, Martina
2012-01-01
The variance covariance matrix plays a central role in the inferential theories of high dimensional factor models in finance and economics. Popular regularization methods that directly exploit sparsity are not applicable to many financial problems. Classical methods of estimating the covariance matrices are based on strict factor models, assuming independent idiosyncratic components. This assumption, however, is restrictive in practical applications. By assuming a sparse error covariance matrix, we allow the presence of cross-sectional correlation even after taking out common factors, and this enables us to combine the merits of both methods. We estimate the sparse covariance using the adaptive thresholding technique as in Cai and Liu (2011), taking into account the fact that direct observations of the idiosyncratic components are unavailable. The impact of high dimensionality on the covariance matrix estimation based on the factor structure is then studied. PMID:22661790
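The two ingredients of such an estimator, a low-rank common-factor part plus a thresholded sparse residual, can be sketched numerically. This is a simplified illustration using principal components and a single constant threshold rather than the paper's adaptive, entry-wise thresholding; all names and parameter choices are hypothetical.

```python
import numpy as np


def factor_threshold_cov(X, n_factors, tau):
    """Covariance estimate = low-rank factor part + thresholded residual.

    X: (n_obs, p) data matrix; n_factors: assumed number of common factors;
    tau: hard threshold applied off-diagonal to the idiosyncratic part.
    """
    S = np.cov(X, rowvar=False)              # sample covariance (p x p)
    vals, vecs = np.linalg.eigh(S)           # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:n_factors] # leading principal components
    low_rank = (vecs[:, top] * vals[top]) @ vecs[:, top].T
    resid = S - low_rank                     # idiosyncratic covariance
    sparse = np.where(np.abs(resid) >= tau, resid, 0.0)
    np.fill_diagonal(sparse, np.diag(resid)) # never threshold the variances
    return low_rank + sparse
```

Thresholding only the residual, after the common factors are removed, is what permits cross-sectional correlation among the idiosyncratic components while keeping the overall estimate well conditioned.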
Establishing Conventional Communication Systems: Is Common Knowledge Necessary?
ERIC Educational Resources Information Center
Barr, Dale J.
2004-01-01
How do communities establish shared communication systems? The Common Knowledge view assumes that symbolic conventions develop through the accumulation of common knowledge regarding communication practices among the members of a community. In contrast with this view, it is proposed that coordinated communication emerges as a by-product of local…
Producing approximate answers to database queries
NASA Technical Reports Server (NTRS)
Vrbsky, Susan V.; Liu, Jane W. S.
1993-01-01
We have designed and implemented a query processor, called APPROXIMATE, that makes approximate answers available if part of the database is unavailable or if there is not enough time to produce an exact answer. The accuracy of the approximate answers produced improves monotonically with the amount of data retrieved to produce the result. The exact answer is produced if all of the needed data are available and query processing is allowed to continue until completion. The monotone query processing algorithm of APPROXIMATE works within the standard relational algebra framework and can be implemented on a relational database system with little change to the relational architecture. We describe here the approximation semantics of APPROXIMATE that serves as the basis for meaningful approximations of both set-valued and single-valued queries. We show how APPROXIMATE is implemented to make effective use of semantic information, provided by an object-oriented view of the database, and describe the additional overhead required by APPROXIMATE.
Dentate Gyrus Circuitry Features Improve Performance of Sparse Approximation Algorithms
Petrantonakis, Panagiotis C.; Poirazi, Panayiota
2015-01-01
Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2–4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG such as its excitatory and inhibitory connectivity diagram can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG. PMID:25635776
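The baseline algorithm being improved, Iterative Soft Thresholding, is compact enough to sketch. Below is plain IST/ISTA for the lasso-type problem min ||y − Dx||²/2 + λ||x||₁; the DG-inspired lateral-inhibition features described in the paper are not included, and the parameter choices are illustrative.

```python
import numpy as np


def soft_threshold(x, t):
    """Elementwise soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)


def ist(D, y, lam=0.1, n_iter=200):
    """Iterative Soft Thresholding: sparse approximation of y over dictionary D.

    Alternates a gradient step on the least-squares term with shrinkage,
    using step size 1/L where L is the Lipschitz constant of the gradient.
    """
    step = 1.0 / np.linalg.norm(D, 2) ** 2
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * (D.T @ (y - D @ x)), lam * step)
    return x
```

With an identity dictionary the iteration reaches its fixed point in one step, which makes the shrinkage behavior easy to check by hand.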
Poisson process approximation for sequence repeats, and sequencing by hybridization.
Arratia, R; Martin, D; Reinert, G; Waterman, M S
1996-01-01
Sequencing by hybridization is a tool to determine a DNA sequence from the unordered list of all l-tuples contained in this sequence; typical numbers for l are l = 8, 10, 12. For theoretical purposes we assume that the multiset of all l-tuples is known. This multiset determines the DNA sequence uniquely if none of the so-called Ukkonen transformations are possible. These transformations require repeats of (l-1)-tuples in the sequence, with these repeats occurring in certain spatial patterns. We model DNA as an i.i.d. sequence. We first prove Poisson process approximations for the process of indicators of all leftmost long repeats allowing self-overlap and for the process of indicators of all left-most long repeats without self-overlap. Using the Chen-Stein method, we get bounds on the error of these approximations. As a corollary, we approximate the distribution of longest repeats. In the second step we analyze the spatial patterns of the repeats. Finally we combine these two steps to prove an approximation for the probability that a random sequence is uniquely recoverable from its list of l-tuples. For all our results we give some numerical examples including error bounds. PMID:8891959
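The Poisson heuristic behind these approximations can be sketched: with m windows of length l−1 in the sequence, the expected number of matching window pairs is roughly C(m, 2)·p^(l−1), and the probability of no repeat is approximated by exp(−λ). A rough back-of-envelope sketch, which ignores the overlapping-window dependence and the Chen-Stein error bounds developed in the paper; the function names are hypothetical.

```python
from math import comb, exp


def expected_repeat_pairs(n, l, p_match=0.25):
    """Poisson mean: expected number of position pairs whose (l-1)-tuples agree,
    for an i.i.d. sequence of length n. p_match is the per-letter match
    probability (sum of squared letter frequencies; 0.25 for uniform DNA).
    Dependence between overlapping windows is ignored in this sketch."""
    m = n - (l - 1) + 1                  # number of (l-1)-length windows
    return comb(m, 2) * p_match ** (l - 1)


def prob_no_repeat(n, l, p_match=0.25):
    """Poisson approximation: P(no repeated (l-1)-tuple) ~ exp(-lambda)."""
    return exp(-expected_repeat_pairs(n, l, p_match))
```
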
A stepwise similarity approximation of spatial constraints for image retrieval
NASA Astrophysics Data System (ADS)
Zhang, Qing-Long; Yau, Stephen S.
2005-07-01
A real image is assumed to be associated with some content-based meta-data about that image (i.e., information about objects in the image and spatial relationships among them). Recently, Zhang and Yau have addressed the approximate picture matching problem and have presented a stepwise approximation of intractable spatial constraints in an image query. In particular, in contrast with the very few cases handled in earlier related work, Zhang and Yau's algorithmic analysis shows that there are 16 possible cases for the results of the object-matching step of image retrieval; 13 of these 16 cases are valid for the stepwise approximation of spatial constraints, while the remaining 3 cases are shown to be impossible for finding an exact picture match between a query picture and a database picture. In this paper, Zhang and Yau use the stepwise approximation method to work out a similarity measure between a query image and a database image for image retrieval. The proposed similarity measure utilizes the similarity measures previously developed by Gudivada and Raghavan (1995) and El-Kwae and Kabuka (1999) for the scenario of a single occurrence of each object in both query and database images, and extends them to cover all 13 valid cases.
Tatulian, S A; Hinterdorfer, P; Baber, G; Tamm, L K
1995-01-01
Fusion of influenza virus with target membranes is mediated by an acid-induced conformational change of the viral fusion protein hemagglutinin (HA) involving an extensive reorganization of the alpha-helices. A 'spring-loaded' displacement over at least 100 Å provides a mechanism for the insertion of the fusion peptide into the target membrane, but does not explain how the two membranes are brought into fusion contact. Here we examine, by attenuated total reflection Fourier transform infrared spectroscopy, the secondary structure and orientation of HA reconstituted in planar membranes. At neutral pH, the orientation of the HA trimers in planar membranes is approximately perpendicular to the membrane. However, at the pH of fusion, the HA trimers are tilted 55-70 degrees from the membrane normal in the presence or absence of bound target membranes. In the absence of target membranes, the overall secondary structure of HA at the fusion pH is similar to that at neutral pH, but approximately 50-60 additional residues become alpha-helical upon the conformational change in the presence of bound target membranes. These results are discussed in terms of a structural model for the fusion intermediate of influenza HA. PMID:8521808
NASA Astrophysics Data System (ADS)
Mizell, Steve A.; Gutjahr, Allan L.; Gelhar, Lynn W.
1982-08-01
Two-dimensional steady groundwater flow in a confined aquifer with spatially variable transmissivity T is analyzed stochastically using spectral analysis and the theory of intrinsic random functions. Conditions that ensure a stationary (statistically homogeneous) head process are derived, and using two convenient forms for the covariance function of the ln T process, the head covariance function is studied. In addition, the head variogram is obtained for a particular nonstationary case, and the asymptotic head variogram is derived under very general conditions. Results are compared to those obtained by Gelhar (1976) for one- and two-dimensional phreatic flow and Bakr et al. (1978) for one- and three-dimensional confined flow. Multidimensional flow analysis results in a significantly reduced head variance. The head correlation remains high over much greater distances than the ln T correlation. The variogram obtained when stationary heads are assumed is identical to that obtained for nonstationary heads for dimensionless lag distances up to 2½ times the correlation scale of the log transmissivity. The variogram for nonstationary heads continues to grow logarithmically as lag distance increases, independent of the form of the input covariance in the nonstationary case. The conditions for stationarity are contrasted with the corresponding results obtained for the one- and three-dimensional cases of Gutjahr and Gelhar (1981). The head variance calculated from the stationary theory is found to agree with that of previous Monte Carlo simulations.
Miyake, K; Zuckerman, M
1993-09-01
We examined the effects of target persons' physical and vocal attractiveness on judges' responses to five measures: false consensus (the belief that the target shares one's behavior), choice of targets as comparison others, affiliation with targets, assumed similarity (similarity between self-ratings and ratings assigned to targets), and perceived similarity (direct questions about similarity). Higher physical attractiveness and higher vocal attractiveness were both related to higher scores on all variables. The effect of one type of attractiveness was more pronounced for higher levels of the other type of attractiveness. The joint effect of the two types of attractiveness was best described as synergistic, i.e., only targets high on both types of attractiveness elicited higher scores on the dependent variables. The effect of physical attractiveness on most dependent variables was more pronounced for subjects who were themselves physically attractive. The synergistic effect (the advantage of targets high on both types of attractiveness) was more pronounced for judges high in self-monitoring. The contribution of the study to the literature on attractiveness stereotypes is discussed. PMID:8246108
Wildhaber, M.L.; Lamberson, P.J.
2004-01-01
Various mechanisms of habitat choice in fishes based on food and/or temperature have been proposed: optimal foraging for food alone; behavioral thermoregulation for temperature alone; and behavioral energetics and discounted matching for food and temperature combined. Along with development of habitat choice mechanisms, there has been a major push to develop and apply to fish populations individual-based models that incorporate various forms of these mechanisms. However, it is not known how the wide variation in observed and hypothesized mechanisms of fish habitat choice could alter fish population predictions (e.g. growth, size distributions, etc.). We used spatially explicit, individual-based modeling to compare predicted fish populations using different submodels of patch choice behavior under various food and temperature distributions. We compared predicted growth, temperature experience, food consumption, and final spatial distribution using the different models. Our results demonstrated that the habitat choice mechanism assumed in fish population modeling simulations was critical to predictions of fish distribution and growth rates. Hence, resource managers who use modeling results to predict fish population trends should be very aware of and understand the underlying patch choice mechanisms used in their models to assure that those mechanisms correctly represent the fish populations being modeled.
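The sensitivity to the assumed patch-choice mechanism can be sketched in a toy individual-based comparison. Everything below (patch values, the growth rule, the three decision rules) is a hypothetical stand-in, not the authors' model:

```python
# Toy comparison of three patch-choice submodels: optimal foraging on food
# alone, behavioral thermoregulation on temperature alone, and behavioral
# energetics combining the two. Growth = food intake discounted by distance
# from a thermal optimum (illustrative rule).
PATCHES = [  # (food, temperature)
    (1.0, 28.0), (0.8, 22.0), (0.5, 20.0), (0.2, 18.0),
]
T_OPT = 21.0

def growth(food, temp):
    return food * max(0.0, 1.0 - 0.08 * abs(temp - T_OPT))

def choose(rule):
    if rule == "food_only":         # optimal foraging for food alone
        return max(PATCHES, key=lambda p: p[0])
    if rule == "thermoregulation":  # behavioral thermoregulation
        return min(PATCHES, key=lambda p: abs(p[1] - T_OPT))
    return max(PATCHES, key=lambda p: growth(*p))  # behavioral energetics

def simulate(rule, n_fish=100, days=30):
    total = 0.0
    for _ in range(n_fish):
        for _ in range(days):
            total += growth(*choose(rule))
    return total / n_fish  # mean cumulative growth per fish

results = {r: simulate(r) for r in ("food_only", "thermoregulation", "energetics")}
```

Even in this caricature, predicted growth differs substantially across the three submodels with identical food and temperature distributions, which is the paper's central caution.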
NASA Astrophysics Data System (ADS)
Sohn, Dongwoo; Im, Seyoung
2013-06-01
In this paper, novel finite elements that include an arbitrary number of additional nodes on each edge of a quadrilateral element are proposed to achieve compatible connection of neighboring nonmatching meshes in plate and shell analyses. The elements, termed variable-node plate elements, are based on two-dimensional variable-node elements with point interpolation and on the Mindlin-Reissner plate theory. Subsequently the flat shell elements, termed variable-node shell elements, are formulated by further extending the plate elements. To eliminate a transverse shear locking phenomenon, the assumed natural strain method is used for plate and shell analyses. Since the variable-node plate and shell elements allow an arbitrary number of additional nodes and overcome locking problems, they make it possible to connect two nonmatching meshes and to provide accurate solutions in local mesh refinement. In addition, the curvature and strain smoothing methods through smoothed integration are adopted to improve the element performance. Several numerical examples are presented to demonstrate the effectiveness of the elements in terms of the accuracy and efficiency of the analyses.
Examining the exobase approximation: DSMC models of Titan's upper atmosphere
NASA Astrophysics Data System (ADS)
Tucker, Orenthal J.; Waalkes, William; Tenishev, Valeriy M.; Johnson, Robert E.; Bieler, Andre; Combi, Michael R.; Nagy, Andrew F.
2016-07-01
Chamberlain ([1963] Planet. Space Sci., 11, 901-960) described the use of the exobase layer to determine escape from planetary atmospheres, below which it is assumed that molecular collisions maintain thermal equilibrium and above which collisions are deemed negligible. De La Haye et al. ([2007] Icarus, 191, 236-250) used this approximation to extract the energy deposition and non-thermal escape rates for Titan's atmosphere by fitting the Cassini Ion Neutral Mass Spectrometer (INMS) density data. De La Haye et al. assumed the gas distributions were composed of an enhanced population of super-thermal molecules (E >> kT) that could be described by a kappa energy distribution function (EDF), and they fit the data using the Liouville theorem. Here we fitted the data again, but we used the conventional form of the kappa EDF. The extracted kappa EDFs were then used with the Direct Simulation Monte Carlo (DSMC) technique (Bird [1994] Molecular Gas Dynamics and the Direct Simulation of Gas Flows) to evaluate the effect of collisions on the exospheric profiles. The INMS density data can be fit reasonably well with thermal and various non-thermal EDFs. However, the extracted energy deposition and escape rates are shown to depend significantly on the assumed exobase altitude, and the usefulness of such fits without directly modeling the collisions is unclear. Our DSMC results indicate that the kappa EDFs used in the Chamberlain approximation can lead to errors in determining the atmospheric temperature profiles and escape rates. Gas kinetic simulations are needed to accurately model measured exospheric density profiles, and to determine the altitude ranges where the Liouville method might be applicable.
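The super-thermal enhancement that a kappa EDF introduces relative to a Maxwellian can be sketched directly. The functional form below is one common unnormalized convention, and the kT and kappa values are illustrative, not fitted values from the paper:

```python
import math

def kappa_edf(E, kT, kappa):
    """Unnormalized kappa energy distribution; one common convention.
    Tends to the Maxwell-Boltzmann factor exp(-E/kT) as kappa -> infinity."""
    return (1.0 + E / (kappa * kT)) ** (-(kappa + 1.0))

def maxwell(E, kT):
    return math.exp(-E / kT)

# Tail enhancement of a kappa = 3 EDF relative to a Maxwellian (kT = 1):
# the ratio grows with energy, i.e., the kappa EDF has a power-law tail
# that over-populates super-thermal energies (E >> kT).
ratios = [kappa_edf(E, 1.0, 3.0) / maxwell(E, 1.0) for E in (1.0, 5.0, 10.0)]
```

It is exactly this over-populated tail that drives the non-thermal escape estimates, and why the inferred rates are sensitive to the assumed exobase altitude at which the EDF is anchored.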
Gutzwiller approximation in strongly correlated electron systems
NASA Astrophysics Data System (ADS)
Li, Chunhua
The Gutzwiller wave function is an important theoretical technique for treating local electron-electron correlations nonperturbatively in condensed matter and materials physics. It is concerned with calculating variationally the ground state wave function by projecting out multi-occupation configurations that are energetically costly. The projection can be carried out analytically in the Gutzwiller approximation, which offers an approximate way of calculating expectation values in the Gutzwiller projected wave function. This approach has proven to be very successful in strongly correlated systems such as the high-temperature cuprate superconductors, the sodium cobaltates, and the heavy fermion compounds. In recent years, it has become increasingly evident that strongly correlated systems have a strong propensity towards forming inhomogeneous electronic states with spatially periodic superstructural modulations. A good example is the commonly observed stripe and checkerboard states in high-Tc superconductors under a variety of conditions where superconductivity is weakened. There currently exists a real challenge and demand for new theoretical ideas and approaches that treat strongly correlated inhomogeneous electronic states, which is the subject matter of this thesis. This thesis contains four parts. In the first part of the thesis, the Gutzwiller approach is formulated in the grand canonical ensemble where, for the first time, a spatially (and spin) unrestricted Gutzwiller approximation (SUGA) is developed for studying inhomogeneous (both ordered and disordered) quantum electronic states in strongly correlated electron systems. The second part of the thesis applies the SUGA to the t-J model for doped Mott insulators, which led to the discovery of checkerboard-like inhomogeneous electronic states competing with d-wave superconductivity, consistent with experimental observations made on several families of high-Tc superconductors. In the third part of the thesis, new
An approximation technique for jet impingement flow
Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.
2015-03-10
The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself, a powerful method for analysing non-linear differential equations. The results of these approximations will be compared to the Runge-Kutta approximation in order to demonstrate their validity.
Comparison of two Pareto frontier approximations
NASA Astrophysics Data System (ADS)
Berezkin, V. E.; Lotov, A. V.
2014-09-01
A method for comparing two approximations to the multidimensional Pareto frontier in nonconvex nonlinear multicriteria optimization problems, namely, the inclusion functions method is described. A feature of the method is that Pareto frontier approximations are compared by computing and comparing inclusion functions that show which fraction of points of one Pareto frontier approximation is contained in the neighborhood of the Edgeworth-Pareto hull approximation for the other Pareto frontier.
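The core of the inclusion-function idea can be sketched as the fraction of one approximation's points lying within an ε-neighborhood of the other. The toy version below uses a plain point-set neighborhood rather than the Edgeworth-Pareto hull of the actual method, and the two frontier approximations are fabricated for illustration:

```python
import math

def inclusion_fraction(A, B, eps):
    """Fraction of points of approximation A lying within distance eps of
    some point of approximation B (a simplified point-set proxy for the
    neighborhood of B's Edgeworth-Pareto hull)."""
    hit = sum(1 for a in A if any(math.dist(a, b) <= eps for b in B))
    return hit / len(A)

# Two hypothetical 2-criteria frontier approximations, offset by 0.03
A = [(x / 10, 1 - x / 10) for x in range(11)]
B = [(x / 10 + 0.03, 1 - x / 10) for x in range(11)]

f_loose = inclusion_fraction(A, B, eps=0.05)  # all of A is near B
f_tight = inclusion_fraction(A, B, eps=0.01)  # none of A is near B
```

Comparing the two fractions (A-in-B and B-in-A) at several tolerances indicates which approximation dominates, and where the two disagree.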
Molecular collisions 21: Semiclassical approximation to atom-symmetric top rotational excitation
NASA Technical Reports Server (NTRS)
Russell, D.; Curtiss, C. F.
1973-01-01
A distorted wave approximation to the T matrix for atom-symmetric top scattering was developed. The approximation is correct to first order in the part of the interaction potential responsible for transitions in the component of rotational angular momentum along the symmetry axis of the top. A semiclassical expression for this T matrix is derived by assuming large values of orbital and rotational angular momentum quantum numbers.
Fractal Trigonometric Polynomials for Restricted Range Approximation
NASA Astrophysics Data System (ADS)
Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.
2016-05-01
One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.
Weight-Bearing Ankle Dorsiflexion Range of Motion—Can Side-to-Side Symmetry Be Assumed?
Rabin, Alon; Kozol, Zvi; Spitzer, Elad; Finestone, Aharon S.
2015-01-01
Context: In clinical practice, the range of motion (ROM) of the noninvolved side often serves as the reference for comparison with the injured side. Previous investigations of non–weight-bearing (NWB) ankle dorsiflexion (DF) ROM measurements have indicated bilateral symmetry for the most part. Less is known about ankle DF measured under weight-bearing (WB) conditions. Because WB and NWB ankle DF are not strongly correlated, there is a need to determine whether WB ankle DF is also symmetrical in a healthy population. Objective: To determine whether WB ankle DF is bilaterally symmetrical. A secondary goal was to further explore the correlation between WB and NWB ankle DF ROM. Design: Cross-sectional study. Setting: Training facility of the Israeli Defense Forces. Patients or Other Participants: A total of 64 healthy males (age = 19.6 ± 1.0 years, height = 175.0 ± 6.4 cm, and body mass = 71.4 ± 7.7 kg). Main Outcome Measure(s): Dorsiflexion ROM in WB was measured with an inclinometer and DF ROM in NWB was measured with a universal goniometer. All measurements were taken bilaterally by a single examiner. Results: Weight-bearing ankle DF was greater on the nondominant side compared with the dominant side (P < .001). Non–weight-bearing ankle DF was not different between sides (P = .64). The correlation between WB and NWB DF was moderate, with the NWB DF measurement accounting for 30% to 37% of the variance of the WB measurement. Conclusions: Weight-bearing ankle DF ROM should not be assumed to be bilaterally symmetrical. These findings suggest that side-to-side differences in WB DF may need to be interpreted while considering which side is dominant. The difference in bilateral symmetry between the WB and NWB measurements, as well as the only moderate level of correlation between them, suggests that both measurements should be performed routinely. PMID:25329350
KUPPER, Lawrence L.
2012-01-01
A common goal in environmental epidemiologic studies is to undertake logistic regression modeling to associate a continuous measure of exposure with binary disease status, adjusting for covariates. A frequent complication is that exposure may only be measurable indirectly, through a collection of subject-specific variables assumed associated with it. Motivated by a specific study to investigate the association between lung function and exposure to metal working fluids, we focus on a multiplicative-lognormal structural measurement error scenario and approaches to address it when external validation data are available. Conceptually, we emphasize the case in which true untransformed exposure is of interest in modeling disease status, but measurement error is additive on the log scale and thus multiplicative on the raw scale. Methodologically, we favor a pseudo-likelihood (PL) approach that exhibits fewer computational problems than direct full maximum likelihood (ML) yet maintains consistency under the assumed models without necessitating small exposure effects and/or small measurement error assumptions. Such assumptions are required by computationally convenient alternative methods like regression calibration (RC) and ML based on probit approximations. We summarize simulations demonstrating considerable potential for bias in the latter two approaches, while supporting the use of PL across a variety of scenarios. We also provide accessible strategies for obtaining adjusted standard errors to accompany RC and PL estimates. PMID:24027381
A unified approach to the Darwin approximation
Krause, Todd B.; Apte, A.; Morrison, P. J.
2007-10-15
There are two basic approaches to the Darwin approximation. The first involves solving the Maxwell equations in Coulomb gauge and then approximating the vector potential to remove retardation effects. The second approach approximates the Coulomb gauge equations themselves, then solves these exactly for the vector potential. There is no a priori reason that these should result in the same approximation. Here, the equivalence of these two approaches is investigated and a unified framework is provided in which to view the Darwin approximation. Darwin's original treatment is variational in nature, but subsequent applications of his ideas in the context of Vlasov's theory are not. We present here action principles for the Darwin approximation in the Vlasov context, and this serves as a consistency check on the use of the approximation in this setting.
Approximate Formula for the Vertical Asymptote of Projectile Motion in Midair
ERIC Educational Resources Information Center
Chudinov, Peter Sergey
2010-01-01
The classic problem of the motion of a point mass (projectile) thrown at an angle to the horizon is reviewed. The air drag force is taken into account with the drag factor assumed to be constant. An analytical approach is used for the investigation. An approximate formula is obtained for one of the characteristics of the motion--the vertical…
Virta, R.L.
2004-01-01
Part of the 2003 industrial minerals review. The legislation, production, and consumption of common clay and shale are discussed. The average prices of the material and outlook for the market are provided.
The Genomic Data Commons (GDC), a unified data system that promotes sharing of genomic and clinical data between researchers, launched today with a visit from Vice President Joe Biden to the operations center at the University of Chicago.
Barry Commoner Assails Petrochemicals
ERIC Educational Resources Information Center
Chemical and Engineering News, 1973
1973-01-01
Discusses Commoner's ideas on the social value of the petrochemical industry and his suggestions for curtailment or elimination of its productive operation to produce a higher environmental quality for mankind at a relatively low loss in social benefit. (CC)
Virta, R.L.
2011-01-01
The article discusses the latest developments in the global common clay and shale industry, particularly in the U.S. It claims that common clay and shale is mainly used in the manufacture of heavy clay products like brick, flue tile and sewer pipe. The main producing states in the U.S. include North Carolina, New York and Oklahoma. Among the firms that manufacture clay and shale-based products are Mid America Brick & Structural Clay Products LLC and Boral USA.
Virta, R.L.
2006-01-01
At present, 150 companies produce common clay and shale in 41 US states. According to the United States Geological Survey (USGS), domestic production in 2005 reached 24.8 Mt valued at $176 million. In decreasing order by tonnage, the leading producer states include North Carolina, Texas, Alabama, Georgia and Ohio. For the whole year, residential and commercial building construction remained the major market for common clay and shale products such as brick, drain tile, lightweight aggregate, quarry tile and structural tile.
Cophylogeny Reconstruction via an Approximate Bayesian Computation
Baudet, C.; Donati, B.; Sinaimeri, B.; Crescenzi, P.; Gautier, C.; Matias, C.; Sagot, M.-F.
2015-01-01
Despite an increasingly vast literature on cophylogenetic reconstructions for studying host–parasite associations, understanding the common evolutionary history of such systems remains a problem that is far from being solved. Most algorithms for host–parasite reconciliation use an event-based model, where the events include in general (a subset of) cospeciation, duplication, loss, and host switch. All known parsimonious event-based methods then assign a cost to each type of event in order to find a reconstruction of minimum cost. The main problem with this approach is that the cost of the events strongly influences the reconciliation obtained. Some earlier approaches attempt to avoid this problem by finding a Pareto set of solutions and hence by considering event costs under some minimization constraints. To deal with this problem, we developed an algorithm, called Coala, for estimating the frequency of the events based on an approximate Bayesian computation approach. The benefits of this method are 2-fold: (i) it provides more confidence in the set of costs to be used in a reconciliation, and (ii) it allows estimation of the frequency of the events in cases where the data set consists of trees with a large number of taxa. We evaluate our method on simulated and on biological data sets. We show that in both cases, for the same pair of host and parasite trees, different sets of frequencies for the events lead to equally probable solutions. Moreover, often these solutions differ greatly in terms of the number of inferred events. It appears crucial to take this into account before attempting any further biological interpretation of such reconciliations. More generally, we also show that the set of frequencies can vary widely depending on the input host and parasite trees. Indiscriminately applying a standard vector of costs may thus not be a good strategy. PMID:25540454
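The rejection-sampling core of approximate Bayesian computation can be sketched on a toy event-frequency model. The generative model, flat prior, summary statistic (event proportions), and tolerance below are all illustrative stand-ins, not Coala's actual components:

```python
import random

KINDS = ("cospeciation", "duplication", "loss", "switch")

def simulate_events(freqs, n=200, rng=random):
    """Toy generative model: draw n events with the given frequencies."""
    counts = dict.fromkeys(KINDS, 0)
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for k, f in zip(KINDS, freqs):
            acc += f
            if r < acc:
                counts[k] += 1
                break
        else:  # guard against floating-point shortfall in acc
            counts[KINDS[-1]] += 1
    return counts

def abc_rejection(observed, n_draws=5000, tol=0.10, seed=7):
    """ABC: sample candidate frequency vectors from a flat prior, keep those
    whose simulated event proportions lie within tol of the observed ones."""
    rng = random.Random(seed)
    n = sum(observed.values())
    obs_prop = {k: v / n for k, v in observed.items()}
    accepted = []
    for _ in range(n_draws):
        w = [rng.random() for _ in KINDS]
        freqs = [x / sum(w) for x in w]  # approximate flat prior on the simplex
        sim = simulate_events(freqs, n, rng)
        if max(abs(sim[k] / n - obs_prop[k]) for k in KINDS) < tol:
            accepted.append(freqs)
    return accepted

obs = simulate_events([0.5, 0.1, 0.2, 0.2], rng=random.Random(1))
posterior = abc_rejection(obs)
mean_cosp = sum(f[0] for f in posterior) / len(posterior)
```

The accepted sample approximates the posterior over event frequencies; as the abstract notes, quite different frequency vectors can clear the tolerance for the same data, which is the point about equally probable but divergent reconciliations.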
Norms of Descriptive Adjective Responses to Common Nouns.
ERIC Educational Resources Information Center
Robbins, Janet L.
This paper gives the results of a controlled experiment on word association. The purpose was to establish norms of commonality of primary descriptive adjective responses to common nouns. The stimuli consisted of 203 common nouns selected from 10 everyday topics of conversation, approximately 20 from each topic. There were 350 subjects, 50% male,…
A Probabilistic PTAS for Shortest Common Superstring
NASA Astrophysics Data System (ADS)
Plociennik, Kai
We consider approximation algorithms for the shortest common superstring problem (SCS). It is well-known that there is a constant f > 1 such that there is no efficient approximation algorithm for SCS achieving a factor of at most f in the worst case, unless P = NP. We study SCS on random inputs and present an approximation scheme that achieves, for every ɛ > 0, a (1 + ɛ)-approximation in expected polynomial time. This result applies not only if the letters are chosen independently at random, but also to the more realistic mixing model, which allows dependencies among the letters of the random strings. Our result is based on a sharp tail bound on the optimal compression, which improves a previous result by Frieze and Szpankowski.
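For context, the standard greedy merging heuristic for SCS (a different, deterministic algorithm from the probabilistic PTAS presented here; conjectured to be a 2-approximation in the worst case) can be sketched as:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_scs(strings):
    """Greedy SCS: drop strings contained in others, then repeatedly
    merge the pair with the maximum overlap until one string remains."""
    strings = [s for s in strings
               if not any(s != t and s in t for t in strings)]
    while len(strings) > 1:
        best = (-1, 0, 1)
        for i, a in enumerate(strings):
            for j, b in enumerate(strings):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = strings[i] + strings[j][k:]
        strings = [s for m, s in enumerate(strings) if m not in (i, j)]
        strings.append(merged)
    return strings[0]

s = greedy_scs(["CATG", "ATGC", "TGCA"])  # merges down to a 6-letter superstring
```

On random inputs of the kind analyzed in the paper, long spurious overlaps are rare, which is the intuition behind the sharp tail bound on the optimal compression.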
Managing the wildlife tourism commons.
Pirotta, Enrico; Lusseau, David
2015-04-01
The nonlethal effects of wildlife tourism can threaten the conservation status of targeted animal populations. In turn, such resource depletion can compromise the economic viability of the industry. Therefore, wildlife tourism exploits resources that can become common pool and that should be managed accordingly. We used a simulation approach to test whether different management regimes (tax, tax and subsidy, cap, cap and trade) could provide socioecologically sustainable solutions. Such schemes are sensitive to errors in estimated management targets. We determined the sensitivity of each scenario to various realistic uncertainties in management implementation and in our knowledge of the population. Scenarios where time quotas were enforced using a tax and subsidy approach, or were traded between operators, were more likely to be sustainable. Importantly, sustainability could be achieved even when operators were assumed to make simple rational economic decisions. We suggest that a combination of the two regimes might offer a robust solution, especially on a small spatial scale and under the control of a self-organized, operator-level institution. Our simulation platform could be parameterized to mimic local conditions and provide a test bed for experimenting with different governance solutions in specific case studies. PMID:26214918
Approximations for column effect in airplane wing spars
NASA Technical Reports Server (NTRS)
Warner, Edward P; Short, Mac
1927-01-01
The significance attaching to "column effect" in airplane wing spars has been increasingly realized with the passage of time, but exact computations of the corrections to bending moment curves resulting from the existence of end loads are frequently omitted because of the additional labor involved in an analysis by rigorously correct methods. The present report represents an attempt to provide for approximate column effect corrections that can be graphically or otherwise expressed so as to be applied with a minimum of labor. Curves are plotted giving approximate values of the correction factors for single and two bay trusses of varying proportions and with various relationships between axial and lateral loads. It is further shown from an analysis of those curves that rough but useful approximations can be obtained from Perry's formula for corrected bending moment, with the assumed distance between points of inflection arbitrarily modified in accordance with rules given in the report. The discussion of general rules of variation of bending stress with axial load is accompanied by a study of the best distribution of the points of support along a spar for various conditions of loading.
Approximate Analysis of Semiconductor Laser Arrays
NASA Technical Reports Server (NTRS)
Marshall, William K.; Katz, Joseph
1987-01-01
Simplified equation yields useful information on gains and output patterns. Theoretical method based on approximate waveguide equation enables prediction of lateral modes of gain-guided planar array of parallel semiconductor lasers. Equation for entire array solved directly using piecewise approximation of index of refraction by simple functions without customary approximation based on coupled waveguide modes of individual lasers. Improved results yield better understanding of laser-array modes and help in development of well-behaved high-power semiconductor laser arrays.
Constructive approximate interpolation by neural networks
NASA Astrophysics Data System (ADS)
Llanas, B.; Sainz, F. J.
2006-04-01
We present a type of single-hidden layer feedforward neural networks with sigmoidal nondecreasing activation function. We call them ai-nets. They can approximately interpolate, with arbitrary precision, any set of distinct data in one or several dimensions. They can uniformly approximate any continuous function of one variable and can be used for constructing uniform approximants of continuous functions of several variables. All these capabilities are based on a closed expression of the networks.
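The approximate-interpolation idea can be sketched in one dimension with steep sigmoid steps placed between consecutive data abscissae, so each unit carries one jump of the target values. This is an illustrative variant under that construction, not the paper's own closed expression:

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def ai_net(xs, ys, steepness=200.0):
    """One-hidden-layer sigmoidal network that approximately interpolates
    (xs, ys), xs sorted: a steep sigmoid step between consecutive abscissae
    carries each jump y[i] -> y[i+1]. Larger steepness -> higher precision."""
    mids = [(a + b) / 2 for a, b in zip(xs, xs[1:])]   # step locations
    jumps = [b - a for a, b in zip(ys, ys[1:])]        # output-layer weights

    def f(x):
        return ys[0] + sum(j * sigmoid(steepness * (x - m))
                           for j, m in zip(jumps, mids))
    return f

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 0.5, 2.0]
net = ai_net(xs, ys)
errs = [abs(net(x) - y) for x, y in zip(xs, ys)]  # near-zero at the data
```

Increasing the steepness parameter drives the interpolation error at the data points to zero, which is the "arbitrary precision" property; the construction is closed-form, with no iterative training.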
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1990-01-01
This paper presents finite-dimensional approximations for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems, when a quadratic cost integral must be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in the case where the cost integral ranges over a finite time interval, as well as in the case where it ranges over an infinite time interval. The arguments in the last case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense.
Power system commonality study
NASA Astrophysics Data System (ADS)
Littman, Franklin D.
1992-07-01
A limited top level study was completed to determine the commonality of power system/subsystem concepts within potential lunar and Mars surface power system architectures. A list of power system concepts with high commonality was developed which can be used to synthesize power system architectures which minimize development cost. Examples of potential high commonality power system architectures are given in this report along with a mass comparison. Other criteria such as life cycle cost (which includes transportation cost), reliability, safety, risk, and operability should be used in future, more detailed studies to select optimum power system architectures. Nineteen potential power system concepts were identified and evaluated for planetary surface applications including photovoltaic arrays with energy storage, isotope, and nuclear power systems. A top level environmental factors study was completed to assess environmental impacts on the identified power system concepts for both lunar and Mars applications. Potential power system design solutions for commonality between Mars and lunar applications were identified. Isotope, photovoltaic array (PVA), regenerative fuel cell (RFC), stainless steel liquid-metal cooled reactors (less than 1033 K maximum) with dynamic converters, and in-core thermionic reactor systems were found suitable for both lunar and Mars environments. The use of SP-100 thermoelectric (TE) and SP-100 dynamic power systems in a vacuum enclosure may also be possible for Mars applications although several issues need to be investigated further (potential single point failure of enclosure, mass penalty of enclosure and active pumping system, additional installation time and complexity). There are also technical issues involved with development of thermionic reactors (life, serviceability, and adaptability to other power conversion units). Additional studies are required to determine the optimum reactor concept for Mars applications. Various screening
Inverse Common-Reflection-Surface
NASA Astrophysics Data System (ADS)
Perroud, H.; Tygel, M.; Freitas, L.
2010-12-01
The Common-Reflection-Surface (CRS) stack method is a powerful tool to produce high-quality stacked images of multicoverage seismic data. As a result of the CRS stack, not only a stacked section, but also a number of attributes defined at each point of that section, are produced. In this way, one can think of the CRS stack method as a transformation from data space to attribute space. Being a purely kinematic method, the CRS stack lacks amplitude information that can be useful for many purposes. Here we propose to fill this gap by means of a combined use of a zero-offset section (that could be a short-offset or amplitude-corrected stacked section) and common midpoint gather. We present an algorithm for an inverse CRS transformation, namely one that (approximately) transforms the CRS attributes back to data space. First synthetic tests provide satisfying results for the two simple cases of single dipping-plane and single circular reflectors with a homogeneous overburden, and provide estimates of the range of applicability, in both midpoint and offset directions. We further present an application for interpolating missing traces in a near-surface, high-resolution seismic experiment, conducted in the alluvial plain of the river Gave de Pau, near Assat, southern France, showing its ability to build coherent signals where recording was not available. A somewhat unexpected good feature of the algorithm is that it seems capable of reconstructing signals even in muted parts of the section.
Common ecology quantifies human insurgency.
Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F
2009-12-17
Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties, both in whole wars from 1816 to 1980 and in terrorist attacks, have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour. PMID:20016600
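The approximate power-law size distributions mentioned are usually fit with the standard continuous maximum-likelihood estimator rather than a histogram regression. A minimal sketch on synthetic event sizes (the exponent, xmin, and sample size are illustrative, not the paper's data):

```python
import math
import random

def powerlaw_alpha_mle(sizes, xmin=1.0):
    """Continuous power-law MLE (Hill estimator):
    alpha_hat = 1 + n / sum(ln(x / xmin)) over the tail x >= xmin."""
    tail = [x for x in sizes if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic event sizes with alpha = 2.5, drawn by inverse-CDF sampling:
# P(X > x) = x^-(alpha-1)  =>  x = u^(-1 / (alpha - 1)) for uniform u
rng = random.Random(0)
alpha_true = 2.5
sizes = [(1.0 - rng.random()) ** (-1.0 / (alpha_true - 1.0))
         for _ in range(20000)]
alpha_hat = powerlaw_alpha_mle(sizes)
```

With 20,000 samples the estimate recovers the true exponent to within about one percent; casualty-size exponents near 2.5 are the kind of cross-conflict regularity the abstract refers to.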
NASA Technical Reports Server (NTRS)
Wetherholt, Jon; Heimann, Timothy J.; Anderson, Brenda
2011-01-01
High technology industries with high failure costs commonly use redundancy as a means to reduce risk. Redundant systems, whether similar or dissimilar, are susceptible to Common Cause Failures (CCF). CCF is not always considered in the design effort and, therefore, can be a major threat to success. There are several aspects to CCF which must be understood to perform an analysis which will find hidden issues that may negate redundancy. This paper will provide definition, types, a list of possible causes and some examples of CCF. Requirements and designs from NASA projects will be used in the paper as examples.
Berkel, M. van; Tamura, N.; Ida, K.; Hogeweij, G. M. D.; Zwart, H. J.; Inagaki, S.; Baar, M. R. de
2014-11-15
Cylindrical approximations are treated for heat waves traveling towards the plasma edge, assuming a semi-infinite domain.
Validity criterion for the Born approximation convergence in microscopy imaging.
Trattner, Sigal; Feigin, Micha; Greenspan, Hayit; Sochen, Nir
2009-05-01
The need for the reconstruction and quantification of visualized objects from light microscopy images requires an image formation model that adequately describes the interaction of light waves with biological matter. Differential interference contrast (DIC) microscopy, as well as light microscopy, uses the common model of the scalar Helmholtz equation. Its solution is frequently expressed via the Born approximation. A theoretical bound is known that limits the validity of such an approximation to very small objects. We present an analytic criterion for the validity region of the Born approximation. In contrast to the theoretical known bound, the suggested criterion considers the field at the lens, external to the object, that corresponds to microscopic imaging and extends the validity region of the approximation. An analytical proof of convergence is presented to support the derived criterion. The suggested criterion for the Born approximation validity region is described in the context of a DIC microscope, yet it is relevant for any light microscope with similar fundamental apparatus. PMID:19412231
Pratt, Bridget; Lwin, Khin Maung; Zion, Deborah; Nosten, Francois; Loff, Bebe; Cheah, Phaik Yeong
2015-04-01
It has been suggested that community advisory boards (CABs) can play a role in minimising exploitation in international research. To get a better idea of what this requires and whether it might be achievable, the paper first describes core elements that we suggest must be in place for a CAB to reduce the potential for exploitation. The paper then examines a CAB established by the Shoklo Malaria Research Unit under conditions common in resource-poor settings - namely, where individuals join with a very limited understanding of disease and medical research and where an existing organisational structure is not relied upon to serve as the CAB. Using the Tak Province Border Community Ethics Advisory Board (T-CAB) as a case study, we assess the extent to which it might be able to take on a role minimising exploitation were it to decide to do so. We investigate whether, after two years in operation, T-CAB is capable of assessing clinical trials for exploitative features and addressing those found to have them. The findings show that, although T-CAB members have gained knowledge and developed capacities that are foundational for one day taking on a role to reduce exploitation, their ability to critically evaluate studies for the presence of exploitative elements has not yet been strongly demonstrated. In light of this example, we argue that CABs may not be able to perform such a role for a number of years after initial formation, making it an unsuitable responsibility for many short-term CABs. PMID:23725206
Sedley, William; Cunningham, Mark O.
2013-01-01
Cortical gamma oscillations occur alongside perceptual processes, and in proportion to perceptual salience. They have a number of properties that make them ideal candidates to explain perception, including incorporating synchronized discharges of neural assemblies, and their emergence over a fast timescale consistent with that of perception. These observations have led to widespread assumptions that gamma oscillations' role is to cause or facilitate conscious perception (i.e., a "positive" role). While the majority of the human literature on gamma oscillations is consistent with this interpretation, many or most of these studies could equally be interpreted as showing a suppressive or inhibitory (i.e., "negative") role. For example, presenting a stimulus and recording a response of increased gamma oscillations would only suggest a role for gamma oscillations in the representation of that stimulus, and would not specify what that role was; if gamma oscillations were inhibitory, then they would become selectively activated in response to the stimulus they acted to inhibit. In this review, we consider two classes of gamma oscillations: "broadband" and "narrowband," which have very different properties (and likely roles). We first discuss studies on gamma oscillations that are non-discriminatory with respect to the role of gamma oscillations, followed by studies that specifically support a positive or negative role. These include work on perception in healthy individuals, and in the pathological contexts of phantom perception and epilepsy. Reference is made as much as possible to magnetoencephalography (MEG) and electroencephalography (EEG) studies, but we also consider evidence from invasive recordings in humans and other animals. Attempts are made to reconcile findings within a common framework. We conclude with a summary of the pertinent questions that remain unanswered, and suggest how future studies might address these. PMID
Spline approximations for nonlinear hereditary control systems
NASA Technical Reports Server (NTRS)
Daniel, P. L.
1982-01-01
A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.
Quirks of Stirling's Approximation
ERIC Educational Resources Information Center
Macrae, Roderick M.; Allgeier, Benjamin M.
2013-01-01
Stirling's approximation to ln "n"! is typically introduced to physical chemistry students as a step in the derivation of the statistical expression for the entropy. However, naive application of this approximation leads to incorrect conclusions. In this article, the problem is first illustrated using a familiar "toy…
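The pitfall can be made concrete numerically: the two-term form ln n! ≈ n ln n − n usually quoted in entropy derivations carries an error that grows like ln n, while keeping the √(2πn) prefactor makes the error negligible even for modest n. A sketch (function names are illustrative):

```python
import math

def stirling_naive(n):
    """Two-term form usually quoted in entropy derivations."""
    return n * math.log(n) - n

def stirling_full(n):
    """Stirling's formula with the sqrt(2*pi*n) prefactor kept."""
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

for n in (10, 100, 1000):
    exact = math.lgamma(n + 1)  # exact ln(n!)
    print(n, exact - stirling_naive(n), exact - stirling_full(n))
```

For n = 10 the naive form is off by about 2 in ln n!, which is precisely the kind of discrepancy that produces incorrect conclusions in small-system entropy arguments.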
Taylor approximations of multidimensional linear differential systems
NASA Astrophysics Data System (ADS)
Lomadze, Vakhtang
2016-06-01
The Taylor approximations of a multidimensional linear differential system are of importance as they contain complete information about it. It is shown that in order to construct them it is sufficient to truncate the exponential trajectories only. A computation of the Taylor approximations is provided using purely algebraic means, without requiring explicit knowledge of the trajectories.
Approximation for nonresonant beam target fusion reactivities
Mikkelsen, D.R.
1988-11-01
The beam target fusion reactivity for a monoenergetic beam in a Maxwellian target is approximately evaluated for nonresonant reactions. The approximation is accurate for the DD and TT fusion reactions to better than 4% for all beam energies up to 300 keV and all ion temperatures up to 2/3 of the beam energy. 12 refs., 1 fig., 1 tab.
Diagonal Pade approximations for initial value problems
Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.
1987-06-01
Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab.
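For context, the simplest diagonal Padé approximant to the time-evolution operator exp(aΔt) is the (1,1) form (1 + aΔt/2)/(1 − aΔt/2). The sketch below is illustrative only, not the factored-polynomial scheme of the paper; it shows the accuracy gain over the non-diagonal explicit Euler form for the model problem y′ = ay:

```python
import math

def pade11_step(y, a, h):
    """One step of y' = a*y using the diagonal (1,1) Pade
    approximant of exp(a*h): (1 + a*h/2) / (1 - a*h/2)."""
    return y * (1.0 + a * h / 2.0) / (1.0 - a * h / 2.0)

def euler_step(y, a, h):
    """Explicit Euler, i.e. the non-diagonal (1,0) approximant."""
    return y * (1.0 + a * h)

a, h, n = -1.0, 0.1, 10
y_pade = y_euler = 1.0
for _ in range(n):
    y_pade = pade11_step(y_pade, a, h)
    y_euler = euler_step(y_euler, a, h)
exact = math.exp(a * h * n)
print(abs(y_pade - exact), abs(y_euler - exact))
```

The diagonal approximant is second-order accurate (and A-stable), so its error after ten steps is orders of magnitude below the Euler error at the same step size.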
Inversion and approximation of Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, approximation of complicated Laplace transforms by a transform with a series of simple poles along the real axis of the left half-plane is shown. The inversion and approximation process is simple enough to be put on a programmable hand calculator.
An approximation for inverse Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1981-01-01
Programmable calculator runs simple finite-series approximation for Laplace transform inversions. Utilizing family of orthonormal functions, approximation is used for wide range of transforms, including those encountered in feedback control problems. Method works well as long as F(t) decays to zero as t approaches infinity and so is applicable to most physical systems.
Linear radiosity approximation using vertex radiosities
Max, N. (Lawrence Livermore National Lab., CA); Allison, M.
1990-12-01
Using radiosities computed at vertices, the radiosity across a triangle can be approximated by linear interpolation. We develop vertex-to-vertex form factors based on this linear radiosity approximation, and show how they can be computed efficiently using modern hardware-accelerated shading and z-buffer technology. 9 refs., 4 figs.
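The linear interpolation described above is barycentric weighting of the three vertex radiosities. A minimal sketch (the name and signature are illustrative, and this omits the paper's vertex-to-vertex form-factor machinery):

```python
def interpolate_radiosity(b0, b1, b2, u, v):
    """Linearly interpolate vertex radiosities across a triangle.
    (u, v) are barycentric coordinates of the sample point with
    respect to vertices 1 and 2; vertex 0 gets weight 1 - u - v."""
    return (1.0 - u - v) * b0 + u * b1 + v * b2
```

At a vertex the function reproduces that vertex's radiosity, and at the centroid (u = v = 1/3) it returns the average of the three; this is the same per-pixel interpolation that hardware Gouraud shading performs.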
Common Magnets, Unexpected Polarities
ERIC Educational Resources Information Center
Olson, Mark
2013-01-01
In this paper, I discuss a "misconception" in magnetism so simple and pervasive as to be typically unnoticed. That magnets have poles might be considered one of the more straightforward notions in introductory physics. However, the magnets common to students' experiences are likely different from those presented in educational…
Solving Common Mathematical Problems
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Mathematical Solutions Toolset is a collection of five software programs that rapidly solve some common mathematical problems. The programs consist of a set of Microsoft Excel worksheets. The programs provide for entry of input data and display of output data in a user-friendly, menu-driven format, and for automatic execution once the input data has been entered.
ERIC Educational Resources Information Center
Bayer, Marc Dewey
2008-01-01
Since 2004, Buffalo State College's E. H. Butler Library has used the Information Commons (IC) model to assist its 8,500 students with library research and computer applications. Campus Technology Services (CTS) plays a very active role in its IC, with a centrally located Computer Help Desk and a newly created Application Support Desk right in the…
ERIC Educational Resources Information Center
Federal Communications Commission, Washington, DC.
After outlining the Federal Communications Commission's (FCC) responsibility for regulating interstate common carrier communication (non-broadcast communication whose carriers are required by law to furnish service at reasonable charges upon request), this information bulletin reviews the history, technological development, and current…
ERIC Educational Resources Information Center
Federal Communications Commission, Washington, DC.
This bulletin outlines the Federal Communications Commission's (FCC) responsibilities in regulating the interstate and foreign common carrier communication via electrical means. Also summarized are the history, technological development, and current capabilities and prospects of telegraph, wire telephone, radiotelephone, satellite communications,…
ERIC Educational Resources Information Center
Passmore, Kaye
2008-01-01
Educator Ernest Boyer believed that well-educated students should do more than master isolated facts. They should understand the "connectedness of things." He suggested organizing curriculum thematically around eight commonalities shared by people around the world. In the book "The Basic School: A Community for Learning," Boyer recommends that…
2001-05-01
This appendix presents tables of some of the more common conversion factors for units of measure used throughout Current Protocols manuals, as well as prefixes indicating powers of ten for SI units. Another table gives conversions between temperatures on the Celsius (Centigrade) and Fahrenheit scales. PMID:18770653
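The Celsius/Fahrenheit conversion in such a table follows F = (9/5)C + 32 and its inverse. A sketch:

```python
def celsius_to_fahrenheit(c):
    """F = (9/5) * C + 32"""
    return c * 9.0 / 5.0 + 32.0

def fahrenheit_to_celsius(f):
    """C = (F - 32) * 5/9"""
    return (f - 32.0) * 5.0 / 9.0
```

Spot checks: 0 °C is 32 °F, 100 °C is 212 °F, and the two scales cross at −40°.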
Virta, R.L.
2001-01-01
Part of the 2000 annual review of the industrial minerals sector. A general overview of the common clay and shale industry is provided. In 2000, U.S. production increased by 5 percent, while sales or use declined to 23.6 Mt. Despite the slowdown in the economy, no major changes are expected for the market.
Virta, R.L.
2003-01-01
Part of the 2002 industrial minerals review. The production, consumption, and price of shale and common clay in the U.S. during 2002 are discussed. The impact of EPA regulations on brick and structural clay product manufacturers is also outlined.
Leonard, Shonda A; Littlejohn, Timothy G; Baxevanis, Andreas D
2007-01-01
This appendix discusses a few of the file formats frequently encountered in bioinformatics. Specifically, it reviews the rules for generating FASTA files and provides guidance for interpreting NCBI descriptor lines, commonly found in FASTA files. In addition, it reviews the construction of GenBank, Phylip, MSF and Nexus files. PMID:18428774
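The FASTA rules the appendix reviews are simple enough to capture in a few lines: a record starts with a '>' descriptor line, and its sequence runs until the next '>'. A minimal parser sketch (the function name is illustrative; real tools such as Biopython handle many more edge cases):

```python
def parse_fasta(text):
    """Parse FASTA-formatted text into (descriptor, sequence) pairs."""
    records = []
    header, chunks = None, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines between records
        if line.startswith(">"):
            if header is not None:
                records.append((header, "".join(chunks)))
            header, chunks = line[1:], []  # descriptor follows '>'
        else:
            chunks.append(line)  # sequence may span several lines
    if header is not None:
        records.append((header, "".join(chunks)))
    return records
```

For example, parse_fasta(">seq1 desc\nACGT\nACGT\n") yields one record with descriptor "seq1 desc" and sequence "ACGTACGT".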
ERIC Educational Resources Information Center
McShane, Michael Q.
2014-01-01
This article presents a debate over the Common Core State Standards Initiative as it has rocketed to the forefront of education policy discussions around the country. The author contends that there is value in having clear cross state standards that will clarify the new online and blended learning that the growing use of technology has provided…
ERIC Educational Resources Information Center
Grimes, Nikki
2005-01-01
Author and poet Nikki Grimes uses her art to reach across differences such as race and culture and to show the commonality of human experience. She uses the power of her poetry to break down racial barriers, shatter cultural stereotypes, and forge community.
Mathematics: Common Curriculum Goals.
ERIC Educational Resources Information Center
Oregon State Dept. of Education, Salem.
This document defines what are considered to be the essentials in a strong mathematics program for the state of Oregon for grades K-12. The common curriculum goals are organized into nine content strands: (1) number and numeration; (2) appropriate computational skills; (3) problem solving; (4) geometry and visualization skills; (5) measurement;…
Gora, Irv
1986-01-01
Within the pediatric population of their practices, family physicians frequently encounter infants with skin rashes. This article discusses several of the more common rashes of infancy: atopic dermatitis, cradle cap, diaper dermatitis and miliaria. Etiology, clinical picture and possible approaches to treatment are presented. PMID:21267297
Space station commonality analysis
NASA Technical Reports Server (NTRS)
1988-01-01
This study was conducted on the basis of a modification to Contract NAS8-36413, Space Station Commonality Analysis, which was initiated in December 1987 and completed in July 1988. The objective was to investigate the commonality aspects of subsystems and mission support hardware while technology experiments are accommodated on board the Space Station in the mid-to-late 1990s. Two types of mission are considered: (1) advanced solar arrays and their storage; and (2) satellite servicing. The point of departure for definition of the technology development missions was a set of missions described in the Space Station Mission Requirements Data Base (MRDB): TDMX 2151 Solar Array/Energy Storage Technology; TDMX 2561 Satellite Servicing and Refurbishment; TDMX 2562 Satellite Maintenance and Repair; TDMX 2563 Materials Resupply (to a free-flyer materials processing platform); TDMX 2564 Coatings Maintenance Technology; and TDMX 2565 Thermal Interface Technology. Issues to be addressed according to the Statement of Work included modularity of programs, data base analysis interactions, user interfaces, and commonality. The study was to consider state-of-the-art advances through the 1990s and to select an appropriate scale for the technology experiments, considering hardware commonality, user interfaces, and mission support requirements. The study was to develop evolutionary plans for the technology advancement missions.
ERIC Educational Resources Information Center
Principal, 2010
2010-01-01
About three-fourths of the states have already adopted the Common Core State Standards, which were designed to provide more clarity about and consistency in what is expected of student learning across the country. However, given the brief time since the standards' final release in June, questions persist among educators, who will have the…
Parametric study of the Orbiter rollout using an approximate solution
NASA Technical Reports Server (NTRS)
Garland, B. J.
1979-01-01
An approximate solution to the motion of the Orbiter during rollout is used to perform a parametric study of the rollout distance required by the Orbiter. The study considers the maximum expected dispersions in the landing speed and the touchdown point. These dispersions are assumed to be correlated so that a fast landing occurs before the nominal touchdown point. The maximum rollout distance occurs for the maximum landing speed with a 10-knot tailwind and the center of mass at the forward limit of its longitudinal travel. The maximum weight that can be stopped within 15,000 feet on a hot day at Kennedy Space Center is 248,800 pounds. The energy absorbed by the brakes would exceed the limit for reuse of the brakes.
An approximate model for pulsar navigation simulation
NASA Astrophysics Data System (ADS)
Jovanovic, Ilija; Enright, John
2016-02-01
This paper presents an approximate model for the simulation of pulsar aided navigation systems. High fidelity simulations of these systems are computationally intensive and impractical for simulating periods of a day or more. Simulation of yearlong missions is done by abstracting navigation errors as periodic Gaussian noise injections. This paper presents an intermediary approximate model to simulate position errors for periods of several weeks, useful for building more accurate Gaussian error models. This is done by abstracting photon detection and binning, replacing it with a simple deterministic process. The approximate model enables faster computation of error injection models, allowing the error model to be inexpensively updated throughout a simulation. Testing of the approximate model revealed an optimistic performance prediction for non-millisecond pulsars with more accurate predictions for pulsars in the millisecond spectrum. This performance gap was attributed to noise which is not present in the approximate model but can be predicted and added to improve accuracy.
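The error-abstraction idea described above, replacing detailed photon detection and binning with periodic noise injections, can be caricatured in a few lines. This is purely illustrative: the names, the hold-between-updates behaviour, and the parameters are assumptions, not the paper's model.

```python
import random

def inject_position_errors(true_positions, sigma, period, seed=0):
    """Abstract navigation error as periodic Gaussian noise: every
    `period` samples, draw a fresh Gaussian position error with
    standard deviation `sigma` and hold it until the next update,
    mimicking a navigation fix between pulsar observations."""
    rng = random.Random(seed)
    noisy, err = [], 0.0
    for i, p in enumerate(true_positions):
        if i % period == 0:
            err = rng.gauss(0.0, sigma)  # new fix at each update epoch
        noisy.append(p + err)
    return noisy
```

A deterministic, cheap process like this is what lets the error model be refreshed inexpensively throughout a weeks-long simulation.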
Approximating maximum clique with a Hopfield network.
Jagota, A
1995-01-01
In a graph, a clique is a set of vertices such that every pair is connected by an edge. MAX-CLIQUE is the optimization problem of finding the largest clique in a given graph and is NP-hard, even to approximate well. Several real-world and theory problems can be modeled as MAX-CLIQUE. In this paper, we efficiently approximate MAX-CLIQUE in a special case of the Hopfield network whose stable states are maximal cliques. We present several energy-descent optimizing dynamics, both discrete (deterministic and stochastic) and continuous. One of these emulates, as special cases, two well-known greedy algorithms for approximating MAX-CLIQUE. We report on detailed empirical comparisons on random graphs and on harder ones. Mean-field annealing, an efficient approximation to simulated annealing, and a stochastic dynamics are the narrow but clear winners. All dynamics approximate much better than one which emulates a "naive" greedy heuristic. PMID:18263357
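For reference, a typical greedy heuristic of the kind such dynamics emulate orders vertices by degree and keeps each one that remains fully connected to the current clique. This is a sketch only; the paper does not spell out this exact algorithm:

```python
def greedy_clique(adj):
    """Greedy clique heuristic: visit vertices in order of
    decreasing degree and add each vertex that is adjacent to
    every vertex already in the clique. `adj` maps each vertex
    to the set of its neighbours."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    clique = []
    for v in order:
        if all(v in adj[u] for u in clique):
            clique.append(v)
    return clique
```

The result is always a maximal clique (no vertex can be added), though not necessarily a maximum one, which is exactly the gap the energy-descent dynamics try to close.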
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
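The ray-subsampling idea is straightforward to caricature: estimate the full sum-of-squares error from a random subset of rays and rescale. This sketch is illustrative only; the embodiment's actual subset-selection logic is not specified here, and the names are assumptions:

```python
import random

def approximate_error(rays, residual_fn, sample_fraction, seed=0):
    """Estimate a sum-of-squares error over all rays from a random
    subset. `residual_fn(ray)` returns one ray's residual."""
    rng = random.Random(seed)
    k = max(1, int(len(rays) * sample_fraction))
    subset = rng.sample(rays, k)
    # Scale the subset sum back up to approximate the full-set error.
    return (len(rays) / k) * sum(residual_fn(r) ** 2 for r in subset)
```

Each conjugate-gradient line search then minimizes along the current direction using this cheap estimate in place of the exact error.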
The Replica Symmetric Approximation of the Analogical Neural Network
NASA Astrophysics Data System (ADS)
Barra, Adriano; Genovese, Giuseppe; Guerra, Francesco
2010-08-01
In this paper we continue our investigation of the analogical neural network, by introducing and studying its replica symmetric approximation in the absence of external fields. Bridging the neural network to a bipartite spin-glass, we introduce and apply a new interpolation scheme to its free energy, that naturally extends the interpolation via cavity fields or stochastic perturbations from the usual spin glass case to these models. While our methods allow the formulation of a fully broken replica symmetry scheme, in this paper we limit ourselves to the replica symmetric case, in order to give the basic essence of our interpolation method. The order parameters in this case are given by the assumed averages of the overlaps for the original spin variables, and for the new Gaussian variables. As a result, we obtain the free energy of the system as a sum rule, which, at least at the replica symmetric level, can be solved exactly, through a self-consistent mini-max variational principle. The replica symmetric approximation thus obtained turns out to be exactly correct in the ergodic region, where it coincides with the annealed expression for the free energy, and in the low density limit of stored patterns. Moreover, in the spin glass limit it gives the correct expression for the replica symmetric approximation in this case. We also calculate the entropy density in the low temperature region, where we find that it becomes negative, as expected for this kind of approximation. Interestingly, in contrast with the case where the stored patterns are digital, no phase transition is found in the low temperature limit, as a function of the density of stored patterns.
Linear-phase approximation in the triangular facet near-field physical optics computer program
NASA Technical Reports Server (NTRS)
Imbriale, W. A.; Hodges, R. E.
1990-01-01
Analyses of reflector antenna surfaces use a computer program based on a discrete approximation of the radiation integral. The calculation replaces the actual surface with a triangular facet representation; the physical optics current is assumed to be constant over each facet. Described here is a method of calculation using a linear-phase approximation of the surface currents for parabolas, ellipses, and shaped subreflectors; results are compared with a previous program that used a constant-phase approximation over the triangular facets. The results show that the linear-phase approximation is a significant improvement over the constant-phase approximation, and enables computation of 100 to 1,000 lambda reflectors within a reasonable time on a Cray computer.
Common tester platform concept.
Hurst, Michael James
2008-05-01
This report summarizes the results of a case study on the doctrine of a common tester platform, a concept of a standardized platform that can be applicable across the broad spectrum of testing requirements throughout the various stages of a weapons program, as well as across the various weapons programs. The common tester concept strives to define an affordable, next-generation design that will meet testing requirements with the flexibility to grow and expand, supporting the initial development stages of a weapons program through to the final production and surveillance stages. This report discusses a concept combining key leveraging technologies and operational concepts with prototype tester-development experiences and practical lessons gleaned from past weapons programs.
Commonly missed orthopedic problems.
Ballas, M T; Tytko, J; Mannarino, F
1998-01-15
When not diagnosed early and managed appropriately, common musculoskeletal injuries may result in long-term disabling conditions. Anterior cruciate ligament tears are some of the most common knee ligament injuries. Slipped capital femoral epiphysis may present with little or no hip pain, and subtle or absent physical and radiographic findings. Femoral neck stress fractures, if left untreated, may result in avascular necrosis, refractures and pseudoarthrosis. A delay in diagnosis of scaphoid fractures may cause early wrist arthrosis if nonunion results. Ulnar collateral ligament tears are a frequently overlooked injury in skiers. The diagnosis of Achilles tendon rupture is missed as often as 25 percent of the time. Posterior tibial tendon tears may result in fixed bony pes planus if diagnosis is delayed, necessitating hindfoot fusion rather than simple soft tissue repair. Family physicians should be familiar with the initial assessment of these conditions and, when appropriate, refer patients promptly to an orthopedic surgeon. PMID:9456991
NASA Technical Reports Server (NTRS)
Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.
2015-01-01
Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused by: system environments; manufacturing; transportation; storage; maintenance; and assembly, as examples. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) team of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions to account for a range of values for independent and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.
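The beta-factor response surface rests on a standard decomposition: each component's failure probability q is split into an independent part (1 − β)q and a common-cause part βq that takes out both channels at once. A sketch for the one-out-of-two system (illustrative, not the PRA team's actual model):

```python
def one_out_of_two_failure_prob(q, beta):
    """Failure probability of a 1-out-of-2 redundant system under
    the beta-factor model: the system fails if both channels fail
    independently, or if a common cause fails both at once."""
    independent = ((1.0 - beta) * q) ** 2
    common_cause = beta * q
    return independent + common_cause
```

With q = 1e-3 and β = 0.1 the common-cause term (1e-4) dominates the independent term (8.1e-7), which is why CCF tends to drive the reliability of highly redundant designs.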
NASA Technical Reports Server (NTRS)
Hark, Frank; Britton, Paul; Ring, Rob; Novack, Steven D.
2016-01-01
Common Cause Failures (CCFs) are a known and documented phenomenon that defeats system redundancy. CCFs are dependent failures that can be caused by: system environments; manufacturing; transportation; storage; maintenance; and assembly, as examples. Since there are many factors that contribute to CCFs, the effects can be reduced, but they are difficult to eliminate entirely. Furthermore, failure databases sometimes fail to differentiate between independent and CCF (dependent) failures, and data is limited, especially for launch vehicles. The Probabilistic Risk Assessment (PRA) team of NASA's Safety and Mission Assurance Directorate at Marshall Space Flight Center (MSFC) is using generic data from the Nuclear Regulatory Commission's database of common cause failures at nuclear power plants to estimate CCF due to the lack of a more appropriate data source. There remains uncertainty in the actual magnitude of the common cause risk estimates for different systems at this stage of the design. Given the limited data about launch vehicle CCF and that launch vehicles are a highly redundant system by design, it is important to make design decisions to account for a range of values for independent and CCFs. When investigating the design of the one-out-of-two component redundant system for launch vehicles, a response surface was constructed to represent the impact of the independent failure rate versus a common cause beta factor effect on a system's failure probability. This presentation will define a CCF and review estimation calculations. It gives a summary of reduction methodologies and a review of examples of historical CCFs. Finally, it presents the response surface and discusses the results of the different CCFs on the reliability of a one-out-of-two system.
NASA Technical Reports Server (NTRS)
Ellis, R. C.; Fink, R. A.; Moore, E. A.
1987-01-01
The Common Drive Unit (CDU) is a high-reliability rotary actuator with many versatile applications in mechanism designs. The CDU incorporates a set of redundant motor-brake assemblies driving a single output shaft through a differential. Tachometers provide speed information in the AC version. Operation of both motors, as compared to the operation of one motor, will yield the same output torque with twice the output speed.
Foxx-Orenstein, Amy E.; Umar, Sarah B.; Crowell, Michael D.
2014-01-01
Anorectal disorders result in many visits to healthcare specialists. These disorders range from benign conditions such as hemorrhoids to more serious conditions such as malignancy; thus, it is important for the clinician to be familiar with these disorders as well as to know how to conduct an appropriate history and physical examination. This article reviews the most common anorectal disorders, including hemorrhoids, anal fissures, fecal incontinence, proctalgia fugax, excessive perineal descent, and pruritus ani, and provides guidelines on comprehensive evaluation and management. PMID:24987313
Energy Science and Technology Software Center (ESTSC)
2005-01-01
The Common Geometry Module (CGM) is a code library which provides geometry functionality used for mesh generation and other applications. This functionality includes that commonly found in solid modeling engines, like geometry creation, query and modification; CGM also includes capabilities not commonly found in solid modeling engines, like geometry decomposition tools and support for shared material interfaces. CGM is built upon the ACIS solid modeling engine, but also includes geometry capability developed beside and on top of ACIS. CGM can be used as-is to provide geometry functionality for codes needing this capability. However, CGM can also be extended using derived classes in C++, allowing the geometric model to serve as the basis for other applications, for example mesh generation. CGM is supported on Sun Solaris, SGI, HP, IBM, DEC, Linux and Windows NT platforms. CGM also includes support for loading ACIS models on parallel computers, using MPI-based communication. Future plans for CGM are to port it to different solid modeling engines, including Pro/Engineer or SolidWorks. CGM is being released into the public domain under an LGPL license; the ACIS-based engine is available to ACIS licensees on request.
NASA Astrophysics Data System (ADS)
Taddei, Arnaud
After it was decided to design a common user environment for UNIX platforms among HEP laboratories, a joint project between DESY and CERN was started. The project consists of two phases: 1. Provide a common user environment at shell level; 2. Provide a common user environment at graphical level (X11). Phase 1 is in production at DESY and CERN, as well as at PISA and RAL. It was developed around the scripts originally designed at DESY Zeuthen, improved and extended during a two-month project at CERN with a contribution from DESY Hamburg. It consists of a set of files which customize the environment for the six main shells (sh, csh, ksh, bash, tcsh, zsh) on the main platforms (AIX, HP-UX, IRIX, SunOS, Solaris 2, OSF/1, ULTRIX, etc.), and it is divided into several "sociological" levels: HEP, site, machine, cluster, group of users and user, with some levels optional. The second phase is under design and a first proposal has been published. A first version of phase 2 already exists for AIX and Solaris, and it should be available for all other platforms by the time of the conference. This is a major collective work between the several HEP laboratories involved in the HEPiX-scripts and HEPiX-X11 working groups.
Millstone, Noah
2012-12-01
This essay is an expanded set of comments on the social psychology papers written for the special issue on History and Social Psychology. It considers what social psychology, and particularly the theory of social representations, might offer historians working on similar problems, and what historical methods might offer social psychology. The social history of thinking has been a major theme in twentieth and twenty-first century historical writing, represented most recently by the genre of 'cultural history'. Cultural history and the theory of social representations have common ancestors in early twentieth-century social science. Nevertheless, the two lines of research have developed in different ways and are better seen as complementary than similar. The theory of social representations usefully foregrounds issues, like social division and change over time, that cultural history relegates to the background. But for historians, the theory of social representations seems oddly fixated on comparing the thought styles associated with positivist science and 'common sense'. Using historical analysis, this essay tries to dissect the core opposition 'science : common sense' and argues for a more flexible approach to comparing modes of thought. PMID:23135802
APPROXIMATING LIGHT RAYS IN THE SCHWARZSCHILD FIELD
Semerák, O.
2015-02-10
A short formula is suggested that approximates photon trajectories in the Schwarzschild field better than other simple prescriptions from the literature. We compare it with various "low-order competitors", namely, with those following from exact formulas for small M, with one of the results based on pseudo-Newtonian potentials, with a suitably adjusted hyperbola, and with the effective and often employed approximation by Beloborodov. Our main concern is the shape of the photon trajectories at finite radii, yet asymptotic behavior is also discussed, important for lensing. An example is attached indicating that the newly suggested approximation is usable, and very accurate, for practically solving the ray-deflection exercise.
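For orientation, the classic weak-field (small-M) deflection formula that such approximations are benchmarked against can be evaluated directly. This is a minimal sketch, not the paper's new formula; the physical constants are standard values, not taken from the abstract.

```python
import math

# Weak-field light deflection in the Schwarzschild field: for impact
# parameter b much larger than the Schwarzschild radius, the bending
# angle is approximately 4GM/(c^2 b). Constants are standard values.
G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
M_sun = 1.989e30       # kg
R_sun = 6.957e8        # m (grazing impact parameter)

def deflection_angle(M, b):
    """Leading-order bending angle (radians) for impact parameter b."""
    return 4 * G * M / (c**2 * b)

alpha = deflection_angle(M_sun, R_sun)
arcsec = math.degrees(alpha) * 3600
print(f"Grazing solar deflection: {arcsec:.2f} arcsec")  # classic ~1.75 arcsec
```

The paper's point is precisely that such leading-order prescriptions degrade at finite radii, where a better closed form is needed.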
Detecting Gravitational Waves using Pade Approximants
NASA Astrophysics Data System (ADS)
Porter, E. K.; Sathyaprakash, B. S.
1998-12-01
We look at the use of Pade Approximants in defining a metric tensor for the inspiral waveform template manifold. By using this method we investigate the curvature of the template manifold and the number of templates needed to carry out a realistic search for a Gravitational Wave signal. By comparing this method with the normal use of Taylor Approximant waveforms we hope to show that (a) Pade Approximants are a superior method for calculating the inspiral waveform, and (b) the number of search templates needed, and hence computing power, is reduced.
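The advantage of Padé over Taylor approximants can be illustrated on a simple function; exp(x) below is a stand-in example built from the same low-order series, not the inspiral waveform itself.

```python
import math

def taylor_exp(x, n):
    """Partial Taylor sum of exp(x) through degree n."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

def pade22_exp(x):
    """[2/2] Pade approximant of exp(x), built from the same 4th-order series."""
    num = 1 + x/2 + x**2 / 12
    den = 1 - x/2 + x**2 / 12
    return num / den

x = 1.0
err_taylor = abs(taylor_exp(x, 4) - math.exp(x))
err_pade = abs(pade22_exp(x) - math.exp(x))
print(err_pade < err_taylor)  # True: the Pade form converges faster here
```

Both approximants use the same series information through fourth order, yet the rational form is markedly more accurate, which is the motivation for Padé-based templates.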
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.
Approximate knowledge compilation: The first order case
Val, A. del
1996-12-31
Knowledge compilation procedures make a knowledge base more explicit so as to make inference with respect to the compiled knowledge base tractable, or at least more efficient. Most work to date in this area has been restricted to the propositional case, despite the importance of first order theories for expressing knowledge concisely. Focusing on (LUB) approximate compilation, our contribution is twofold: (1) we present a new ground algorithm for approximate compilation which can produce exponential savings with respect to the previously known algorithm; (2) we show that both ground algorithms can be lifted to the first order case, preserving their correctness for approximate compilation.
Adiabatic approximation for nucleus-nucleus scattering
Johnson, R.C.
2005-10-14
Adiabatic approximations to few-body models of nuclear scattering are described with emphasis on reactions with deuterons and halo nuclei (frozen halo approximation) as projectiles. The different ways the approximation should be implemented in a consistent theory of elastic scattering, stripping and break-up are explained and the conditions for the theory's validity are briefly discussed. A formalism which links few-body models and the underlying many-body system is outlined and the connection between the adiabatic and CDCC methods is reviewed.
Approximate Brueckner orbitals in electron propagator calculations
Ortiz, J.V.
1999-12-01
Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with the two-particle-one-hole Tamm-Dancoff approximation [2ph-TDA], third-order algebraic diagrammatic construction [ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.
Information geometry of mean-field approximation.
Tanaka, T
2000-08-01
I present a general theory of mean-field approximation based on information geometry and applicable not only to Boltzmann machines but also to wider classes of statistical models. Using perturbation expansion of the Kullback divergence (or Plefka expansion in statistical physics), a formulation of mean-field approximation of general orders is derived. It includes in a natural way the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics. PMID:10953246
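The "naive" mean-field fixed point mentioned above can be sketched in a few lines: each magnetization is the hyperbolic tangent of its local mean field, iterated to self-consistency. The couplings and fields below are illustrative values, not taken from the paper.

```python
import math

# Naive mean-field self-consistency for a small Ising/Boltzmann system:
#   m_i = tanh(beta * (sum_j J[i][j] * m_j + h[i]))
# iterated to a fixed point. Couplings J and fields h are made up.
J = [[0.0, 0.5, 0.2],
     [0.5, 0.0, 0.3],
     [0.2, 0.3, 0.0]]
h = [0.1, 0.0, -0.1]
beta = 1.0

m = [0.0, 0.0, 0.0]
for _ in range(200):
    m = [math.tanh(beta * (sum(J[i][j] * m[j] for j in range(3)) + h[i]))
         for i in range(3)]

# Self-consistency residual: essentially zero at the fixed point.
residual = max(abs(m[i] - math.tanh(beta * (sum(J[i][j] * m[j] for j in range(3)) + h[i])))
               for i in range(3))
print(m, residual)
```

The TAP and linear-response corrections discussed in the abstract refine exactly this naive fixed-point scheme with higher orders of the Plefka expansion.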
Polynomial approximations of a class of stochastic multiscale elasticity problems
NASA Astrophysics Data System (ADS)
Hoang, Viet Ha; Nguyen, Thanh Chung; Xia, Bingxing
2016-06-01
We consider a class of elasticity equations in ℝ^d whose elastic moduli depend on n separated microscopic scales. The moduli are random and expressed as a linear expansion of a countable sequence of random variables which are independently and identically uniformly distributed in a compact interval. The multiscale Hellinger-Reissner mixed problem that allows for computing the stress directly and the multiscale mixed problem with a penalty term for nearly incompressible isotropic materials are considered. The stochastic problems are studied via deterministic problems that depend on a countable number of real parameters which represent the probabilistic law of the stochastic equations. We study the multiscale homogenized problems that contain all the macroscopic and microscopic information. The solutions of these multiscale homogenized problems are written as generalized polynomial chaos (gpc) expansions. We approximate these solutions by semidiscrete Galerkin approximating problems that project into the spaces of functions with only a finite number N of gpc modes. Assuming summability properties for the coefficients of the elastic moduli's expansion, we deduce bounds and summability properties for the solutions' gpc expansion coefficients. These bounds imply explicit rates of convergence in terms of N when the gpc modes used for the Galerkin approximation are chosen to correspond to the best N terms in the gpc expansion. For the mixed problem with a penalty term for nearly incompressible materials, we show that the rate of convergence for the best N term approximation is independent of the Lamé constants' ratio when it goes to infinity. Correctors for the homogenization problem are deduced. From these we establish correctors for the solutions of the parametric multiscale problems in terms of the semidiscrete Galerkin approximations. For two-scale problems, an explicit homogenization error which is uniform with respect to the parameters is deduced. Together
Fretting about FRET: Failure of the Ideal Dipole Approximation
Muñoz-Losa, Aurora; Curutchet, Carles; Krueger, Brent P.; Hartsell, Lydia R.; Mennucci, Benedetta
2009-01-01
With recent growth in the use of fluorescence-detected resonance energy transfer (FRET), it is being applied to complex systems in modern and diverse ways where it is not always clear that the common approximations required for analysis are applicable. For instance, the ideal dipole approximation (IDA), which is implicit in the Förster equation, is known to break down when molecules get “too close” to each other. Yet, no clear definition exists of what is meant by “too close”. Here we examine several common fluorescent probe molecules to determine boundaries for use of the IDA. We compare the Coulombic coupling determined essentially exactly with a linear response approach with the IDA coupling to find the distance regimes over which the IDA begins to fail. We find that the IDA performs well down to roughly 20 Å separation, provided the molecules sample an isotropic set of relative orientations. However, if molecular motions are restricted, the IDA performs poorly at separations beyond 50 Å. Thus, isotropic probe motions help mask poor performance of the IDA through cancellation of error. Therefore, if fluorescent probe motions are restricted, FRET practitioners should be concerned with not only the well-known κ² approximation, but also possible failure of the IDA. PMID:19527638
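A rough numerical illustration of why the IDA breaks down at short range: compare the exact Coulomb coupling of two finite-length collinear dipoles (an extended-dipole toy model, not the linear-response calculation of the paper; all lengths and units below are illustrative assumptions) against the point-dipole formula with orientation factor κ = −2.

```python
def extended_dipole_coupling(R, l):
    """Coulomb coupling of two collinear extended dipoles (unit charges,
    length l, center separation R); overall prefactor set to 1."""
    return 2.0 / R - 1.0 / (R - l) - 1.0 / (R + l)

def ida_coupling(R, l):
    """Ideal-dipole approximation for the same geometry: kappa = -2, mu = q*l."""
    return -2.0 * l**2 / R**3

def rel_error(R, l):
    exact = extended_dipole_coupling(R, l)
    return abs(ida_coupling(R, l) - exact) / abs(exact)

l = 5.0  # dipole extent, same (arbitrary) length unit as R
print(rel_error(10.0, l), rel_error(50.0, l))  # IDA degrades sharply at short range
```

For this geometry the IDA error is about 25% at R = 2l but only about 1% at R = 10l, mirroring the distance-regime behavior the abstract describes.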
A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Auer, Peter; Burgsteiner, Harald; Maass, Wolfgang
2008-06-01
One may argue that the simplest type of neural networks beyond a single perceptron is an array of several perceptrons in parallel. In spite of their simplicity, such circuits can compute any Boolean function if one views the majority of the binary perceptron outputs as the binary output of the parallel perceptron, and they are universal approximators for arbitrary continuous functions with values in [0,1] if one views the fraction of perceptrons that output 1 as the analog output of the parallel perceptron. Note that in contrast to the familiar model of a "multi-layer perceptron" the parallel perceptron that we consider here has just binary values as outputs of gates on the hidden layer. For a long time one has thought that there exists no competitive learning algorithm for these extremely simple neural networks, which also came to be known as committee machines. It is commonly assumed that one has to replace the hard threshold gates on the hidden layer by sigmoidal gates (or RBF-gates) and that one has to tune the weights on at least two successive layers in order to achieve satisfactory learning results for any class of neural networks that yield universal approximators. We show that this assumption is not true, by exhibiting a simple learning algorithm for parallel perceptrons - the parallel delta rule (p-delta rule). In contrast to backprop for multi-layer perceptrons, the p-delta rule only has to tune a single layer of weights, and it does not require the computation and communication of analog values with high precision. Reduced communication also distinguishes our new learning rule from other learning rules for parallel perceptrons such as MADALINE. Obviously these features make the p-delta rule attractive as a biologically more realistic alternative to backprop in biological neural circuits, but also for implementations in special purpose hardware. We show that the p-delta rule also implements gradient descent with regard to a suitable error measure.
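The voting-and-update idea can be sketched in a few lines. This simplified rule omits the margin-stabilization ("clear margin") term of the full p-delta rule, and the dataset, initialization and learning rate are invented for illustration.

```python
# Simplified sketch of the p-delta idea for a "parallel perceptron":
# several binary perceptrons vote, the majority is the output, and only
# a single layer of weights is tuned (no backprop through layers).

def sgn(v):
    return 1 if v >= 0 else -1

def predict(ws, x):
    votes = sum(sgn(sum(wi * xi for wi, xi in zip(w, x))) for w in ws)
    return 1 if votes >= 0 else -1

# Linearly separable toy data: label = +1 iff x1 + x2 > 1 (first input is a bias).
data = [([1, 0.0, 0.0], -1), ([1, 1.0, 1.0], 1), ([1, 0.2, 0.3], -1),
        ([1, 0.8, 0.9], 1), ([1, 0.3, 0.9], 1), ([1, 0.6, 0.2], -1)]

ws = [[0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.0, 0.0, 0.1]]  # 3 parallel perceptrons
eta = 0.1
for _ in range(3000):
    for x, y in data:
        if predict(ws, x) != y:  # ensemble wrong: nudge only dissenting perceptrons
            for w in ws:
                if sgn(sum(wi * xi for wi, xi in zip(w, x))) != y:
                    for j in range(len(w)):
                        w[j] += eta * y * x[j]

accuracy = sum(predict(ws, x) == y for x, y in data) / len(data)
print(accuracy)
```

Because each perceptron only ever receives classical perceptron-style updates on its own mistakes, the usual mistake bound applies, and on separable data the majority vote eventually classifies the whole training set correctly.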
A Survey of Techniques for Approximate Computing
Mittal, Sparsh
2016-03-18
Approximate computing trades off computation quality with the effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive but even imperative. Here, we present a survey of techniques for approximate computing (AC). We discuss strategies for finding approximable program portions and monitoring output quality, techniques for using AC in different processing units (e.g., CPU, GPU and FPGA), processor components, memory technologies, etc., and programming frameworks for AC. Moreover, we classify these techniques based on several key characteristics to emphasize their similarities and differences. Finally, the aim of this paper is to provide researchers with insights into the working of AC techniques and to inspire more efforts in this area to make AC the mainstream computing approach in future systems.
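One of the simplest AC techniques in this family, loop perforation, can be sketched directly: skip a fraction of loop iterations and accept a small output error in exchange for proportionally less work. The data below are deterministic pseudo-data invented for illustration, not from the survey.

```python
# Loop perforation: compute an aggregate over only every stride-th element.
data = [((i * 37) % 101) / 101 for i in range(10000)]  # deterministic pseudo-data

exact = sum(data) / len(data)

stride = 4                     # keep every 4th iteration -> ~4x fewer operations
sampled = data[::stride]
approx = sum(sampled) / len(sampled)

rel_err = abs(approx - exact) / exact
print(rel_err)  # small relative error for a large reduction in work
```

This is the quality-versus-effort trade-off in miniature: a 4x reduction in work costs well under 1% relative error on this aggregate.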
Adiabatic approximation for the density matrix
NASA Astrophysics Data System (ADS)
Band, Yehuda B.
1992-05-01
An adiabatic approximation for the Liouville density-matrix equation which includes decay terms is developed. The adiabatic approximation employs the eigenvectors of the non-normal Liouville operator. The approximation is valid when there exists a complete set of eigenvectors of the non-normal Liouville operator (i.e., the eigenvectors span the density-matrix space), the time rate of change of the Liouville operator is small, and an auxiliary matrix is nonsingular. Numerical examples are presented involving efficient population transfer in a molecule by stimulated Raman scattering, with the intermediate level of the molecule decaying on a time scale that is fast compared with the pulse durations of the pump and Stokes fields. The adiabatic density-matrix approximation can be simply used to determine the density matrix for atomic or molecular systems interacting with cw electromagnetic fields when spontaneous emission or other decay mechanisms prevail.
An approximation method for electrostatic Vlasov turbulence
NASA Technical Reports Server (NTRS)
Klimas, A. J.
1979-01-01
Electrostatic Vlasov turbulence in a bounded spatial region is considered. An iterative approximation method with a proof of convergence is constructed. The method is non-linear and applicable to strong turbulence.
Linear Approximation SAR Azimuth Processing Study
NASA Technical Reports Server (NTRS)
Lindquist, R. B.; Masnaghetti, R. K.; Belland, E.; Hance, H. V.; Weis, W. G.
1979-01-01
A segmented linear approximation of the quadratic phase function that is used to focus the synthetic antenna of a SAR was studied. Ideal focusing, using a quadratic varying phase focusing function during the time radar target histories are gathered, requires a large number of complex multiplications. These can be largely eliminated by using linear approximation techniques. The result is a reduced processor size and chip count relative to ideally focussed processing and a correspondingly increased feasibility for spaceworthy implementation. A preliminary design and sizing for a spaceworthy linear approximation SAR azimuth processor meeting requirements similar to those of the SEASAT-A SAR was developed. The study resulted in a design with approximately 1500 IC's, 1.2 cubic feet of volume, and 350 watts of power for a single look, 4000 range cell azimuth processor with 25 meters resolution.
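The error behavior of a segmented linear approximation can be illustrated on a toy quadratic. For f(t) = t² the second derivative is 2, so the worst-case chord error on a segment of width h is |f''|h²/8 = h²/4, and doubling the segment count quarters the error. The interval, segment counts, and sampling below are illustrative; the actual SAR phase function and processor sizing are not modeled.

```python
# Piecewise-linear (chord) approximation of the quadratic f(t) = t^2 on [0, 1].
def max_chord_error(n_segments, samples_per_seg=1000):
    h = 1.0 / n_segments
    worst = 0.0
    for k in range(n_segments):
        a, b = k * h, (k + 1) * h
        fa, fb = a * a, b * b
        for i in range(samples_per_seg + 1):
            t = a + (b - a) * i / samples_per_seg
            chord = fa + (fb - fa) * (t - a) / (b - a)  # linear segment value
            worst = max(worst, abs(chord - t * t))
    return worst

e8, e16 = max_chord_error(8), max_chord_error(16)
print(e8, e16)  # h^2/4 per segment: 1/256 and 1/1024
```

The quadratic error decay with segment count is what lets a segmented linear focusing function replace most of the complex multiplications of ideal quadratic focusing.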
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Structural Reliability Analysis and Optimization: Use of Approximations
NASA Technical Reports Server (NTRS)
Grandhi, Ramana V.; Wang, Liping
1999-01-01
This report is intended for the demonstration of function approximation concepts and their applicability in reliability analysis and design. Particularly, approximations in the calculation of the safety index, failure probability and structural optimization (modification of design variables) are developed. With this scope in mind, extensive details on probability theory are avoided. Definitions relevant to the stated objectives have been taken from standard text books. The idea of function approximations is to minimize the repetitive use of computationally intensive calculations by replacing them with simpler closed-form equations, which could be nonlinear. Typically, the approximations provide good accuracy around the points where they are constructed, and they need to be periodically updated to extend their utility. There are approximations in calculating the failure probability of a limit state function. The first one, which is most commonly discussed, is how the limit state is approximated at the design point. Most of the time this could be a first-order Taylor series expansion, also known as the First Order Reliability Method (FORM), or a second-order Taylor series expansion (paraboloid), also known as the Second Order Reliability Method (SORM). From the computational procedure point of view, this step comes after the design point identification; however, the order of approximation for the probability of failure calculation is discussed first, and it is denoted by either FORM or SORM. The other approximation of interest is how the design point, or the most probable failure point (MPP), is identified. For iteratively finding this point, again the limit state is approximated. The accuracy and efficiency of the approximations make the search process quite practical for analysis intensive approaches such as the finite element methods; therefore, the crux of this research is to develop excellent approximations for MPP identification and also different
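For the simplest case, a linear limit state g = R − S with independent normal variables, the FORM safety index and failure probability reduce to closed form, β = (μR − μS)/√(σR² + σS²) and Pf = Φ(−β). The numerical values below are illustrative assumptions, not from the report.

```python
import math

# First Order Reliability Method (FORM) for a linear limit state g = R - S
# with independent normal resistance R and load S.
def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

muR, sigR = 100.0, 10.0   # resistance mean and std (illustrative)
muS, sigS = 60.0, 15.0    # load mean and std (illustrative)

beta = (muR - muS) / math.hypot(sigR, sigS)  # Hasofer-Lind safety index
pf = norm_cdf(-beta)
print(beta, pf)  # beta ~ 2.22, Pf ~ 1.3e-2
```

For nonlinear limit states the report's point is that this exact reduction no longer holds, and the limit state must itself be approximated (linearly for FORM, quadratically for SORM) at the most probable failure point.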
Some Recent Progress for Approximation Algorithms
NASA Astrophysics Data System (ADS)
Kawarabayashi, Ken-ichi
We survey some recent progress on approximation algorithms. Our main focus is the following two problems, which have seen recent breakthroughs: the edge-disjoint paths problem and the graph coloring problem. These breakthroughs involve the following three ingredients that are quite central in approximation algorithms: (1) combinatorial (graph-theoretical) approaches, (2) LP-based approaches and (3) semidefinite programming approaches. We also sketch how they are used to obtain these recent developments.
Polynomial approximation of functions in Sobolev spaces
NASA Technical Reports Server (NTRS)
Dupont, T.; Scott, R.
1980-01-01
Constructive proofs and several generalizations of approximation results of J. H. Bramble and S. R. Hilbert are presented. Using an averaged Taylor series, we represent a function as a polynomial plus a remainder. The remainder can be manipulated in many ways to give different types of bounds. Approximation of functions in fractional order Sobolev spaces is treated as well as the usual integer order spaces and several nonstandard Sobolev-like spaces.
Approximate Solutions Of Equations Of Steady Diffusion
NASA Technical Reports Server (NTRS)
Edmonds, Larry D.
1992-01-01
Rigorous analysis yields reliable criteria for "best-fit" functions. Improved "curve-fitting" method yields approximate solutions to differential equations of steady-state diffusion. Method applies to problems in which rates of diffusion depend linearly or nonlinearly on concentrations of diffusants, approximate solutions analytic or numerical, and boundary conditions of Dirichlet type, of Neumann type, or mixture of both types. Applied to equations for diffusion of charge carriers in semiconductors in which mobilities and lifetimes of charge carriers depend on concentrations.
An improved proximity force approximation for electrostatics
Fosco, Cesar D.; Lombardo, Fernando C.; Mazzitelli, Francisco D.
2012-08-15
A quite straightforward approximation for the electrostatic interaction between two perfectly conducting surfaces suggests itself when the distance between them is much smaller than the characteristic lengths associated with their shapes. Indeed, in the so-called 'proximity force approximation' the electrostatic force is evaluated by first dividing each surface into a set of small flat patches, and then adding up the forces due to opposite pairs of patches, the contributions of which are approximated as due to pairs of parallel planes. This approximation has been widely and successfully applied in different contexts, ranging from nuclear physics to Casimir effect calculations. We present here an improvement on this approximation, based on a derivative expansion for the electrostatic energy contained between the surfaces. The results obtained could be useful for discussing the geometric dependence of the electrostatic force, and also as a convenient benchmark for numerical analyses of the tip-sample electrostatic interaction in atomic force microscopes. Highlights: the proximity force approximation (PFA) has been widely used in different areas; the PFA can be improved using a derivative expansion in the shape of the surfaces; we use the improved PFA to compute electrostatic forces between conductors; the results can be used as an analytic benchmark for numerical calculations in AFM; insight is provided for people who use the PFA to compute nuclear and Casimir forces.
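The plain (zeroth-order) PFA described above can be checked numerically on a simple geometry: a conducting sphere of radius Rs at gap d above a grounded plane, summing parallel-plate pressures over flat annular patches. The sphere-plane setup and all values below are illustrative assumptions, not the paper's derivative expansion; for this geometry the PFA integral has the closed form F = πε₀V²(Rs/d − ln(1 + Rs/d)), which the patch sum should reproduce.

```python
import math

eps0, V = 8.854e-12, 1.0
Rs, d = 1e-6, 1e-8   # gap d << Rs, the regime where PFA is expected to work

def pfa_force_numeric(n=200000):
    """Sum parallel-plate pressures eps0*V^2/(2*h^2) over annular patches."""
    total = 0.0
    dr = Rs / n
    for i in range(n):
        r = (i + 0.5) * dr
        h = d + Rs - math.sqrt(Rs * Rs - r * r)  # local gap under the patch
        total += eps0 * V * V / (2.0 * h * h) * 2.0 * math.pi * r * dr
    return total

analytic = math.pi * eps0 * V * V * (Rs / d - math.log(1.0 + Rs / d))
ratio = pfa_force_numeric() / analytic
print(ratio)  # close to 1
```

The derivative expansion proposed in the paper systematically corrects this zeroth-order patch sum for the curvature of the surfaces.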
Efficient crosswell EM tomography using localized nonlinear approximation
Kim, Hee Joon; Song, Yoonho; Lee, Ki Ha; Wilt, Michael J.
2003-07-21
This paper presents a fast and stable imaging scheme using the localized nonlinear (LN) approximation of integral equation (IE) solutions for inverting electromagnetic data obtained in a crosswell survey. The medium is assumed to be cylindrically symmetric about a source borehole, and to maintain the symmetry a vertical magnetic dipole is used as a source. To find an optimum balance between data fitting and the smoothness constraint, we introduce an automatic selection scheme for the Lagrange multiplier, which is sought at each iteration with a least-misfit criterion. In this selection scheme, the IE algorithm is quite attractive in speed because Green's functions, the most time-consuming part in IE methods, are repeatedly reusable throughout the inversion process. The inversion scheme using the LN approximation has been tested to show its stability and efficiency using both synthetic and field data. The inverted image derived from the field data, collected in a pilot experiment of water-flood monitoring in an oil field, compares successfully with that of a 2.5-dimensional inversion scheme.
Nonadiabatic charged spherical evolution in the postquasistatic approximation
Rosales, L.; Barreto, W.; Peralta, C.; Rodriguez-Mueller, B.
2010-10-15
We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of dissipative and electrically charged distributions in general relativity. The numerical implementation of our approach leads to a solver which is globally second-order convergent. We evolve nonadiabatic distributions assuming an equation of state that accounts for the anisotropy induced by the electric charge. Dissipation is described by streaming-out or diffusion approximations. We match the interior solution, in noncomoving coordinates, with the Vaidya-Reissner-Nordstroem exterior solution. Two models are considered: (i) a Schwarzschild-like shell in the diffusion limit; and (ii) a Schwarzschild-like interior in the free-streaming limit. These toy models tell us something about the nature of the dissipative and electrically charged collapse. Diffusion stabilizes the gravitational collapse producing a spherical shell whose contraction is halted in a short characteristic hydrodynamic time. The streaming-out radiation provides a more efficient mechanism for emission of energy, redistributing the electric charge on the whole sphere, while the distribution collapses indefinitely with a longer hydrodynamic time scale.
Near distance approximation in astrodynamical applications of Lambert's theorem
NASA Astrophysics Data System (ADS)
Rauh, Alexander; Parisi, Jürgen
2014-01-01
The smallness parameter of the approximation method is defined in terms of the non-dimensional initial distance between target and chaser satellite. In the case of a circular target orbit, compact analytical expressions are obtained for the interception travel time up to third order. For eccentric target orbits, an explicit result is worked out to first order, and the tools are prepared for numerical evaluation of higher order contributions. The possible transfer orbits are examined within Lambert's theorem. For an eventual rendezvous it is assumed that the directions of the angular momenta of the two orbits enclose an acute angle. This assumption, together with the property that the travel time should vanish with vanishing initial distance, leads to a condition on the admissible initial positions of the chaser satellite. The condition is worked out explicitly in the general case of an eccentric target orbit and a non-coplanar transfer orbit. The condition is local. However, since during a rendezvous maneuver, the chaser eventually passes through the local space, the condition propagates to non-local initial distances. As to quantitative accuracy, the third order approximation reproduces the elements of Mars, in the historical problem treated by Gauss, to seven decimals accuracy, and in the case of the International Space Station, the method predicts an encounter error of about 12 m for an initial distance of 70 km.
Business Education Innovation: How Common Exams Can Improve University Teaching
ERIC Educational Resources Information Center
Unger, Darian
2010-01-01
Although there is significant research on improving college-level teaching practices, most literature in the field assumes an incentive for improvement. The research presented in this paper addresses the issue of poor incentives for improving university-level teaching. Specifically, it proposes instructor-designed common examinations as an…
A general Kirchhoff approximation for echo simulation in ultrasonic NDT
NASA Astrophysics Data System (ADS)
Dorval, V.; Chatillon, S.; Lu, B.; Darmon, M.; Mahaut, S.
2012-05-01
The Kirchhoff approximation is commonly used for the modeling of echoes in ultrasonic NDE. It consists in locally approximating the illuminated surface by an infinite plane to compute elastic fields. A model based on this approximation is used in the CIVA software, developed at CEA LIST, to compute echoes from cracks and backwalls. In its current version, it is limited to stress-free surfaces. A new model using a more general formalism has been developed. It is based on reciprocity principles and is valid for any host and flaw materials (liquids, isotropic and anisotropic solids). Experimental validations confirm that this new model can be used for a wider range of applications than the previous one. A second part of this communication deals with the improvement of the Kirchhoff approximation with the aim of predicting diffraction echoes. It is based on an approach called refined Kirchhoff, which combines the Kirchhoff and Geometrical Theory of Diffraction (GTD) models. An illustration of this method for the case of a rigid obstacle in a fluid is given.
Sparse approximation problem: how rapid simulated annealing succeeds and fails
NASA Astrophysics Data System (ADS)
Obuchi, Tomoyuki; Kabashima, Yoshiyuki
2016-03-01
Information processing techniques based on sparseness have been actively studied in several disciplines. Among them, a mathematical framework to approximately express a given dataset by a combination of a small number of basis vectors of an overcomplete basis is termed the sparse approximation. In this paper, we apply simulated annealing, a metaheuristic algorithm for general optimization problems, to sparse approximation in the situation where the given data have a planted sparse representation and noise is present. The result in the noiseless case shows that our simulated annealing works well in a reasonable parameter region: the planted solution is found fairly rapidly. This is true even in the case where a common relaxation of the sparse approximation problem, the ℓ1-relaxation, is ineffective. On the other hand, when the dimensionality of the data is close to the number of non-zero components, another metastable state emerges, and our algorithm fails to find the planted solution. This phenomenon is associated with a first-order phase transition. In the case of very strong noise, it is no longer meaningful to search for the planted solution. In this situation, our algorithm determines a solution with close-to-minimum distortion fairly quickly.
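To make the setting above concrete, the following is a minimal sketch of simulated annealing over k-element supports for the sparse approximation problem min ||y − Ax||², with each support scored by its least-squares residual. It is an illustration only, not the authors' algorithm: the single-swap proposal move, the geometric cooling schedule, and the problem sizes are all assumptions.

```python
import numpy as np

def sa_sparse_fit(A, y, k, steps=3000, T0=1.0, Tmin=1e-3, seed=0):
    """Anneal over k-element supports, scoring each by its least-squares residual."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]

    def energy(support):
        c, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        return float(np.sum((y - A[:, support] @ c) ** 2))

    support = list(rng.choice(n, size=k, replace=False))
    E = energy(support)
    best_support, best_E = list(support), E
    for t in range(steps):
        T = T0 * (Tmin / T0) ** (t / steps)   # geometric cooling (an assumption)
        cand = list(support)
        new = int(rng.integers(n))            # swap one active index for an inactive one
        while new in cand:
            new = int(rng.integers(n))
        cand[int(rng.integers(k))] = new
        E_cand = energy(cand)
        if E_cand < best_E:                   # track the best support ever evaluated
            best_support, best_E = list(cand), E_cand
        if E_cand < E or rng.random() < np.exp(-(E_cand - E) / T):
            support, E = cand, E_cand         # Metropolis acceptance
    return sorted(best_support), best_E

# demo: a noiseless planted instance (sizes are illustrative)
rng = np.random.default_rng(1)
A = rng.normal(size=(15, 10))
x_true = np.zeros(10)
x_true[[1, 4, 7]] = [1.5, -2.0, 1.0]
y = A @ x_true
support, E = sa_sparse_fit(A, y, k=3, steps=1000, seed=2)
```

On easy noiseless instances like this one the annealer typically recovers the planted support (residual near zero); the failure regime the paper describes appears when the number of non-zero components approaches the data dimensionality.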
Hybrid approximate message passing for generalized group sparsity
NASA Astrophysics Data System (ADS)
Fletcher, Alyson K.; Rangan, Sundeep
2013-09-01
We consider the problem of estimating a group sparse vector x ∈ R^n under a generalized linear measurement model. Group sparsity of x means the activity of different components of the vector occurs in groups - a feature common in estimation problems in image processing, simultaneous sparse approximation and feature selection with grouped variables. Unfortunately, many current group sparse estimation methods require that the groups are non-overlapping. This work considers problems with what we call generalized group sparsity, where the activity of the different components of x is modeled as a function of a small number of Boolean latent variables. We show that this model can incorporate a large class of overlapping group sparse problems including problems in sparse multivariable polynomial regression and gene expression analysis. To estimate vectors with such group sparse structures, the paper proposes to use a recently-developed hybrid generalized approximate message passing (HyGAMP) method. Approximate message passing (AMP) refers to a class of algorithms based on Gaussian and quadratic approximations of loopy belief propagation for estimation of random vectors under linear measurements. The HyGAMP method extends the AMP framework to incorporate priors on x described by graphical models, of which generalized group sparsity is a special case. We show that the HyGAMP algorithm is computationally efficient, general and offers superior performance in certain synthetic data test cases.
Effective medium approximations for anisotropic composites with arbitrary component orientation
NASA Astrophysics Data System (ADS)
Levy, Ohad; Cherkaev, Elena
2013-10-01
A Maxwell Garnett approximation (MGA) and a symmetric effective medium approximation (SEMA) are derived for anisotropic composites of host-inclusion and symmetric-grains morphologies, respectively, with ellipsoidal grains of arbitrary intrinsic, shape and orientation anisotropies. The effect of anisotropy on the effective dielectric tensor is illustrated in both cases. The MGA shows negative and non-monotonic off-diagonal elements for geometries where the host and inclusions are not mutually aligned. The SEMA leads to an anisotropy-dependent nonlinear behaviour of the conductivity as a function of volume fraction above a percolation threshold of conductor-insulator composites, in contrast to the well-known linear behaviour of the isotropic effective medium model. The percolation threshold obtained for composites of aligned ellipsoids is isotropic and independent of the ellipsoids' aspect ratio. Thus, the common identification of the percolation threshold with the depolarization factors of the grains is unjustified, and a description of anisotropic percolation requires explicit anisotropic geometric characteristics.
Commonly used gastrointestinal drugs.
Aggarwal, Annu; Bhatt, Mohit
2014-01-01
This chapter reviews the spectrum and mechanisms of neurologic adverse effects of commonly used gastrointestinal drugs including antiemetics, promotility drugs, laxatives, antimotility drugs, and drugs for acid-related disorders. The commonly used gastrointestinal drugs as a group are considered safe and are widely used. A range of neurologic complications are reported following use of various gastrointestinal drugs. Acute neurotoxicities, including transient akathisias, oculogyric crisis, delirium, seizures, and strokes, can develop after use of certain gastrointestinal medications, while disabling and pervasive tardive syndromes are described following long-term and often unsupervised use of phenothiazines, metoclopramide, and other drugs. In rare instances, some of the antiemetics can precipitate life-threatening extrapyramidal reactions, neuroleptic malignant syndrome, or serotonin syndrome. In contrast, concerns about the cardiovascular toxicity of drugs such as cisapride and tegaserod have been grave enough to lead to their withdrawal from many world markets. Awareness and recognition of the neurotoxicity of gastrointestinal drugs is essential to help weigh the benefit of their use against possible adverse effects, even if uncommon. Furthermore, as far as possible, drugs such as metoclopramide and others that can lead to tardive dyskinesias should be used for as short a time as possible, with close clinical monitoring and patient education. PMID:24365343
Approximate Thermodynamics State Relations in Partially Ionized Gas Mixtures
Ramshaw, J D
2003-12-30
In practical applications, the thermodynamic state relations of partially ionized gas mixtures are usually approximated in terms of the state relations of the pure partially ionized constituent gases or materials in isolation. Such approximations are ordinarily based on an artificial partitioning or separation of the mixture into its constituent materials, with material k regarded as being confined by itself within a compartment or subvolume with volume fraction α_k and possessing a fraction β_k of the total internal energy of the mixture. In a mixture of N materials, the quantities α_k and β_k constitute an additional 2N − 2 independent variables. The most common procedure for determining these variables, and hence the state relations for the mixture, is to require that the subvolumes all have the same temperature and pressure. This intuitively reasonable procedure is easily shown to reproduce the correct thermal and caloric state equations for a mixture of neutral (non-ionized) ideal gases. Here we wish to point out that (a) this procedure leads to incorrect state equations for a mixture of partially ionized ideal gases, whereas (b) the alternative procedure of requiring that the subvolumes all have the same temperature and free electron density reproduces the correct thermal and caloric state equations for such a mixture. These results readily generalize to the case of partially degenerate and/or relativistic electrons, to a common approximation used to represent pressure ionization effects, and to two-temperature plasmas. This suggests that equating the subvolume electron number densities or chemical potentials instead of pressures is likely to provide a more accurate approximation even in nonideal plasma mixtures.
Common questions about wound care.
Worster, Brooke; Zawora, Michelle Q; Hsieh, Christine
2015-01-15
Lacerations, abrasions, burns, and puncture wounds are common in the outpatient setting. Because wounds can quickly become infected, the most important aspect of treating a minor wound is irrigation and cleaning. There is no evidence that antiseptic irrigation is superior to sterile saline or tap water. Occlusion of the wound is key to preventing contamination. Suturing, if required, can be completed up to 24 hours after the trauma occurs, depending on the wound site. Tissue adhesives are equally effective for low-tension wounds with linear edges that can be evenly approximated. Although patients are often instructed to keep their wounds covered and dry after suturing, they can get wet within the first 24 to 48 hours without increasing the risk of infection. There is no evidence that prophylactic antibiotics improve outcomes for most simple wounds. Tetanus toxoid should be administered as soon as possible to patients who have not received a booster in the past 10 years. Superficial mild wound infections can be treated with topical agents, whereas deeper mild and moderate infections should be treated with oral antibiotics. Most severe infections, and moderate infections in high-risk patients, require initial parenteral antibiotics. Severe burns and wounds that cover large areas of the body or involve the face, joints, bone, tendons, or nerves should generally be referred to wound care specialists. PMID:25591209
Parallel SVD updating using approximate rotations
NASA Astrophysics Data System (ADS)
Goetze, Juergen; Rieder, Peter; Nossek, J. A.
1995-06-01
In this paper a parallel implementation of the SVD-updating algorithm using approximate rotations is presented. In its original form the SVD-updating algorithm had numerical problems if no reorthogonalization steps were applied. Representing the orthogonal matrix V (right singular vectors) using its parameterization in terms of the rotation angles of n(n - 1)/2 plane rotations, these reorthogonalization steps can be avoided during the SVD-updating algorithm. This results in an SVD-updating algorithm where all computations (matrix vector multiplication, QRD-updating, Kogbetliantz's algorithm) are entirely based on the evaluation and application of orthogonal plane rotations. Therefore, in this form the SVD-updating algorithm is amenable to an implementation using CORDIC-based approximate rotations. Using CORDIC-based approximate rotations the n(n - 1)/2 rotations representing V (as well as all other rotations) are only computed to a certain approximation accuracy (in the basis arctan 2^(-i)). All necessary computations required during the SVD-updating algorithm (exclusively rotations) are executed with the same accuracy, i.e., only r << w (w: wordlength) elementary orthonormal μ-rotations are used per plane rotation. Simulations show the efficiency of the implementation using CORDIC-based approximate rotations.
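The building block above, approximating a plane rotation by a short sequence of elementary rotations through the angles arctan(2^-i), can be sketched in a few lines. This is a toy rotation-mode CORDIC with scale compensation, not the SVD-updating implementation itself; the iteration count n=16 is an arbitrary illustration of the accuracy/work trade-off.

```python
import math

def cordic_rotate(x, y, angle, n=16):
    """Rotate (x, y) by `angle` (radians) using n micro-rotations through arctan(2^-i)."""
    K = 1.0
    for i in range(n):                        # aggregate scale factor of the micro-rotations
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle                                 # residual angle still to be rotated
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0         # steer the residual angle toward zero
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return K * x, K * y                       # compensate the micro-rotation gain
```

Each micro-rotation needs only shifts and adds in fixed-point hardware; truncating the sequence after r << w steps gives exactly the "approximate rotation" trade-off the abstract describes.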
CPL: Common Pipeline Library
NASA Astrophysics Data System (ADS)
ESO CPL Development Team
2014-02-01
The Common Pipeline Library (CPL) is a set of ISO-C libraries that provide a comprehensive, efficient and robust software toolkit to create automated astronomical data reduction pipelines. Though initially developed as a standardized way to build VLT instrument pipelines, the CPL may be more generally applied to any similar application. The code also provides a variety of general purpose image- and signal-processing functions, making it an excellent framework for the creation of more generic data handling packages. The CPL handles low-level data types (images, tables, matrices, strings, property lists, etc.) and medium-level data access methods (a simple data abstraction layer for FITS files). It also provides table organization and manipulation, keyword/value handling and management, and support for dynamic loading of recipe modules using programs such as EsoRex (ascl:1504.003).
When Computers Assume Building Operations
ERIC Educational Resources Information Center
Kmetzo, John L.
1972-01-01
Describes what the interaction between the trend to centralized control of building operation and the increased capabilities of computers means to building design and operation today and in the future. (Author/DN)
Separable approximations of two-body interactions
NASA Astrophysics Data System (ADS)
Haidenbauer, J.; Plessas, W.
1983-01-01
We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only the on-shell properties of these potentials are considered, but also a comparison is made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme.
Approximate solutions of the hyperbolic Kepler equation
NASA Astrophysics Data System (ADS)
Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge
2015-12-01
We provide an approximate zero S̃(g,L) for the hyperbolic Kepler equation S − g arcsinh(S) − L = 0 for g ∈ (0,1) and L ∈ [0,∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g,L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n − S| ≤ 0.5^(2^n − 1) |S̃ − S|. The approximate zero S̃(g,L) is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of (0,1) × [0,∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
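The equation and Newton iteration above are easy to reproduce. The sketch below uses a crude starter S0 = L/(1 − g) (an assumption for illustration; the paper's piecewise starter is sharper, especially near g = 1, L = 0), relying on f(S) = S − g arcsinh(S) − L being strictly increasing for g < 1:

```python
import math

def solve_hyperbolic_kepler(g, L, tol=1e-13, max_iter=50):
    """Newton iteration for f(S) = S - g*arcsinh(S) - L = 0 with g in (0,1), L >= 0.
    f'(S) = 1 - g/sqrt(1 + S^2) > 0, so the root is unique."""
    S = L / (1.0 - g)   # crude starter; small-S limit arcsinh(S) ~ S gives S ~ L/(1-g)
    for _ in range(max_iter):
        f = S - g * math.asinh(S) - L
        fp = 1.0 - g / math.sqrt(1.0 + S * S)
        step = f / fp
        S -= step
        if abs(step) < tol:
            break
    return S

S = solve_hyperbolic_kepler(0.5, 1.0)
```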
Ancilla-approximable quantum state transformations
Blass, Andreas; Gurevich, Yuri
2015-04-15
We consider the transformations of quantum states obtainable by a process of the following sort. Combine the given input state with a specially prepared initial state of an auxiliary system. Apply a unitary transformation to the combined system. Measure the state of the auxiliary subsystem. If (and only if) it is in a specified final state, consider the process successful, and take the resulting state of the original (principal) system as the result of the process. We review known information about exact realization of transformations by such a process. Then we present results about approximate realization of finite partial transformations. We not only consider primarily the issue of approximation to within a specified positive ε, but also address the question of arbitrarily close approximation.
Fast wavelet based sparse approximate inverse preconditioner
Wan, W.L.
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems but unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that the sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense that convergence is not independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically exhibit piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We shall show numerically that our approach is effective for this kind of matrix.
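For orientation, here is a minimal sketch of the underlying (non-wavelet) sparse approximate inverse idea in the Frobenius-norm style of Grote and Huckle: each column m_j of M minimizes ||A m_j − e_j||_2 over a prescribed sparsity pattern, so the columns decouple and can be computed in parallel. The tridiagonal test matrix and pattern are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def spai(A, pattern):
    """Right approximate inverse: for each column j, minimize ||A m_j - e_j||_2
    with m_j supported on pattern[j] (a list of allowed row indices)."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):                         # columns are independent least-squares problems
        J = pattern[j]
        e = np.zeros(n)
        e[j] = 1.0
        m, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        M[J, j] = m
    return M

# demo: 1-D Laplacian-like tridiagonal matrix with a tridiagonal pattern (illustrative)
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
pattern = [[i for i in (j - 1, j, j + 1) if 0 <= i < n] for j in range(n)]
M = spai(A, pattern)
```

Because M = I restricted to this pattern is feasible, the minimized residual ||I − AM||_F can never exceed ||I − A||_F; the wavelet approach of the abstract aims to capture the smooth decay of the true inverse that this fixed local pattern misses.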
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913 + 16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Faddeev random-phase approximation for molecules
Degroote, Matthias; Van Neck, Dimitri; Barbieri, Carlo
2011-04-15
The Faddeev random-phase approximation is a Green's function technique that makes use of Faddeev equations to couple the motion of a single electron to the two-particle-one-hole and two-hole-one-particle excitations. This method goes beyond the frequently used third-order algebraic diagrammatic construction method: all diagrams involving the exchange of phonons in the particle-hole and particle-particle channel are retained, but the phonons are now described at the level of the random-phase approximation, which includes ground-state correlations, rather than at the Tamm-Dancoff approximation level, where ground-state correlations are excluded. Previously applied to atoms, this paper presents results for small molecules at equilibrium geometry.
On the Accuracy of the MINC approximation
Lai, C.H.; Pruess, K.; Bodvarsson, G.S.
1986-02-01
The method of "multiple interacting continua" (MINC) is based on the assumption that changes in thermodynamic conditions of rock matrix blocks are primarily controlled by the distance from the nearest fracture. The accuracy of this assumption was evaluated for regularly shaped (cubic and rectangular) rock blocks with uniform initial conditions, which are subjected to a step change in boundary conditions on the surface. Our results show that pressures (or temperatures) predicted from the MINC approximation may deviate from the exact solutions by as much as 10 to 15% at certain points within the blocks. However, when fluid (or heat) flow rates are integrated over the entire block surface, the MINC approximation and the exact solution agree to better than 1%. This indicates that the MINC approximation can accurately represent transient inter-porosity flow in fractured porous media, provided that matrix blocks are indeed subjected to nearly uniform boundary conditions at all times.
System Safety Common Cause Analysis
Energy Science and Technology Software Center (ESTSC)
1992-03-10
The COMCAN fault tree analysis codes are designed to analyze complex systems such as nuclear plants for common causes of failure. A common cause event, or common mode failure, is a secondary cause that could contribute to the failure of more than one component and violates the assumption of independence. Analysis of such events is an integral part of system reliability and safety analysis. A significant common cause event is a secondary cause common to all basic events in one or more minimal cut sets. Minimal cut sets containing events from components sharing a common location or a common link are called common cause candidates. Components share a common location if no barrier insulates any one of them from the secondary cause. A common link is a dependency among components which cannot be removed by a physical barrier (e.g., a common energy source or common maintenance instructions).
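The cut-set screening described above reduces to a small set computation: flag each minimal cut set whose components all share one location (so no barrier separates them from a single secondary cause). A minimal sketch follows; the component names, locations, and cut sets are hypothetical, and a real analysis would also screen for common links.

```python
def common_cause_candidates(cut_sets, location):
    """Return the minimal cut sets whose components all share a single location,
    i.e. no physical barrier insulates them from one secondary cause."""
    candidates = []
    for cs in cut_sets:
        if len({location[c] for c in cs}) == 1:   # all components in one location
            candidates.append(cs)
    return candidates

# hypothetical component/location data for illustration
location = {"pump_A": "room_1", "pump_B": "room_1", "valve_C": "room_2"}
cut_sets = [{"pump_A", "pump_B"}, {"pump_A", "valve_C"}]
candidates = common_cause_candidates(cut_sets, location)
```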
Improving the Dupuit-Forchheimer Approximation for Free Surface Flow in an Unconfined Aquifer
NASA Astrophysics Data System (ADS)
Knight, J. H.
2003-12-01
The classical Dupuit-Forchheimer (DF) approximation for groundwater free surface flow in an unconfined aquifer assumes that the vertical component of the seepage velocity is zero. This assumption is expected to be least accurate when there is non-zero accretion at the free surface. The DF approximation leads to a nonlinear diffusion equation satisfied by the height of the free surface. The general principles of integral methods used by Yves Parlange are to assume some simple approximate shape for some unknown function, and then to choose the parameters of this function to satisfy some known integral relation of the flow system. The DF approximation is improved by assuming that the vertical velocity component is zero at the impermeable horizontal base, and increases linearly to its unknown value at the free surface. The well known Guirinsky potential, which depends only on the free surface height, corresponds to the DF assumptions. Youngs used an integral relation to define a new potential which depends on the free surface height and also on the vertical velocity component, and which for steady flow satisfies a Poisson equation in the horizontal coordinates. We use the assumption of linear variation of vertical velocity to calculate an approximation to the Youngs potential. In some simple flow systems such as the classical dam problem this leads to a simple differential equation for the free surface height, which can be solved numerically. In some cases simple explicit approximations can be found for quantities of interest, such as the maximum free surface height between drainage ditches.
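For reference, the classical DF baseline that the abstract improves on has a well-known closed form for steady flow without accretion between heads h1 and h2 a distance L apart: the Dupuit parabola h(x)² = h1² − (h1² − h2²)x/L and discharge q = K(h1² − h2²)/(2L) per unit width. A minimal sketch (the numerical values are illustrative):

```python
import math

def df_free_surface(x, h1, h2, L):
    """Classical Dupuit-Forchheimer free surface height at distance x (0 <= x <= L)."""
    return math.sqrt(h1 ** 2 - (h1 ** 2 - h2 ** 2) * x / L)

def df_discharge(K, h1, h2, L):
    """Dupuit discharge per unit width for hydraulic conductivity K."""
    return K * (h1 ** 2 - h2 ** 2) / (2.0 * L)

q = df_discharge(K=1e-4, h1=10.0, h2=4.0, L=100.0)
```

The improved method of the abstract corrects this profile by letting the vertical velocity grow linearly from the base to the free surface instead of vanishing everywhere.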
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f is approximated to within O(N^(-M-1+k)), as N approaches infinity, and the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f, and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
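The starting difficulty, the Gibbs phenomenon in the raw partial sums, is easy to reproduce. The sketch below (a baseline demonstration, not the authors' reconstruction method) evaluates the partial Fourier sum of a unit square wave near its jump and measures the overshoot, which approaches (2/π)Si(π) ≈ 1.179 instead of the function value 1:

```python
import numpy as np

def square_wave_partial_sum(x, N):
    """Partial Fourier sum with N odd harmonics of the unit square wave on (-pi, pi):
    4/pi * sum_{k=0}^{N-1} sin((2k+1)x) / (2k+1)."""
    k = np.arange(N)
    return (4.0 / np.pi) * np.sum(np.sin(np.outer(x, 2 * k + 1)) / (2 * k + 1), axis=1)

x = np.linspace(1e-3, 0.5, 4000)          # fine grid just to the right of the jump at x = 0
overshoot = square_wave_partial_sum(x, 200).max()
```

The overshoot does not decay as more terms are added; it only narrows toward the jump, which is why the reconstruction scheme of the abstract moves the singular behavior into dedicated basis functions.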
Approximation by fully complex multilayer perceptrons.
Kim, Taehwan; Adali, Tülay
2003-07-01
We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville's theorem. To avoid the conflict between the boundedness and the analyticity of a nonlinear complex function in the complex domain, a number of ad hoc MLPs that include using two real-valued MLPs, one processing the real part and the other processing the imaginary part, have been traditionally employed. However, since nonanalytic functions do not meet the Cauchy-Riemann conditions, they render themselves into degenerative backpropagation algorithms that compromise the efficiency of nonlinear approximation and learning in the complex vector field. A number of elementary transcendental functions (ETFs) derivable from the entire exponential function e^z that are analytic are defined as fully complex activation functions and are shown to provide a parsimonious structure for processing data in the complex domain and address most of the shortcomings of the traditional approach. The introduction of ETFs, however, raises a new question in the approximation capability of this fully complex MLP. In this letter, three proofs of the approximation capability of the fully complex MLP are provided based on the characteristics of singularity among ETFs. First, the fully complex MLPs with continuous ETFs over a compact set in the complex vector field are shown to be the universal approximator of any continuous complex mappings. The complex universal approximation theorem extends to bounded measurable ETFs possessing a removable singularity. Finally, it is shown that the output of complex MLPs using ETFs with isolated and essential singularities uniformly converges to any nonlinear mapping in the deleted annulus of singularity nearest to the origin. PMID:12816570
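A minimal sketch of a fully complex forward pass using one such ETF, tanh z (analytic away from isolated poles on the imaginary axis), applied elementwise to complex pre-activations. The layer sizes and weight scales below are arbitrary illustrations, and no training is shown:

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_mlp(z, W1, b1, W2, b2):
    """One-hidden-layer fully complex MLP: a single complex nonlinearity (tanh)
    acts on the complex pre-activations, rather than two real networks."""
    h = np.tanh(W1 @ z + b1)        # complex tanh, applied elementwise
    return W2 @ h + b2

# small random complex network (illustrative sizes and scales)
W1 = (rng.normal(size=(8, 2)) + 1j * rng.normal(size=(8, 2))) * 0.3
b1 = (rng.normal(size=8) + 1j * rng.normal(size=8)) * 0.1
W2 = (rng.normal(size=(1, 8)) + 1j * rng.normal(size=(1, 8))) * 0.3
b2 = np.zeros(1, dtype=complex)

out = complex_mlp(np.array([0.5 + 0.2j, -0.1j]), W1, b1, W2, b2)
```

Because tanh is analytic where it is defined, gradients satisfy the Cauchy-Riemann conditions and backpropagation avoids the degeneracy of the split real/imaginary construction.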
[Diagnostics of approximal caries - literature review].
Berczyński, Paweł; Gmerek, Anna; Buczkowska-Radlińska, Jadwiga
2015-01-01
The most important issue in modern cariology is the early diagnostics of carious lesions, because only early detected lesions can be treated with as little intervention as possible. This is extremely difficult on approximal surfaces because of their anatomy, late onset of pain, and very few clinical symptoms. Modern diagnostic methods make dentists' everyday work easier, often detecting lesions unseen during visual examination. This work presents a review of the literature on the subject of modern diagnostic methods that can be used to detect approximal caries. PMID:27344873
Approximate convective heating equations for hypersonic flows
NASA Technical Reports Server (NTRS)
Zoby, E. V.; Moss, J. N.; Sutton, K.
1979-01-01
Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.
Congruence Approximations for Entropy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
Characterizing inflationary perturbations: The uniform approximation
Habib, Salman; Heinen, Andreas; Heitmann, Katrin; Jungman, Gerard; Molina-Paris, Carmen
2004-10-15
The spectrum of primordial fluctuations from inflation can be obtained using a mathematically controlled, and systematically extendable, uniform approximation. Closed-form expressions for power spectra and spectral indices may be found without making explicit slow-roll assumptions. Here we provide details of our previous calculations, extend the results beyond leading-order in the approximation, and derive general error bounds for power spectra and spectral indices. Already at next-to-leading-order, the errors in calculating the power spectrum are less than a percent. This meets the accuracy requirement for interpreting next-generation cosmic microwave background observations.
HALOGEN: Approximate synthetic halo catalog generator
NASA Astrophysics Data System (ADS)
Avila Perez, Santiago; Murray, Steven
2015-05-01
HALOGEN generates approximate synthetic halo catalogs. Written in C, it decomposes the problem of generating cosmological tracer distributions (eg. halos) into four steps: generating an approximate density field, generating the required number of tracers from a CDF over mass, placing the tracers on field particles according to a bias scheme dependent on local density, and assigning velocities to the tracers based on velocities of local particles. It also implements a default set of four models for these steps. HALOGEN uses 2LPTic (ascl:1201.005) and CUTE (ascl:1505.016); the software is flexible and can be adapted to varying cosmologies and simulation specifications.
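Step two of the pipeline above, drawing the required number of tracers from a CDF over mass, can be sketched by inverse-CDF sampling from an assumed power-law mass function dn/dM ∝ M^(-α). The exponent and mass range below are illustrative assumptions, not HALOGEN's defaults:

```python
import numpy as np

def sample_masses(n, alpha=2.0, m_min=1e12, m_max=1e15, seed=0):
    """Inverse-CDF draws from dn/dM proportional to M^-alpha on [m_min, m_max]
    (valid for alpha != 1)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    a = 1.0 - alpha
    # invert the normalized CDF of the truncated power law
    return (m_min ** a + u * (m_max ** a - m_min ** a)) ** (1.0 / a)

m = sample_masses(10000)
```

For a steep mass function most draws land near m_min, as expected for halo catalogs; the remaining HALOGEN steps then place these tracers on density-field particles and assign them velocities.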
ANALOG QUANTUM NEURON FOR FUNCTIONS APPROXIMATION
A. EZHOV; A. KHROMOV; G. BERMAN
2001-05-01
We describe a system able to perform universal stochastic approximations of continuous multivariable functions in both neuron-like and quantum manner. The implementation of this model in the form of a multi-barrier multiple-slit system has been proposed earlier. For the simplified waveguide variant of this model it is proved that the system can approximate any continuous function of many variables. This theorem is also applied to the 2-input quantum neural model analogous to the schemes developed for quantum control.
Small-angle approximation to the transfer of narrow laser beams in anisotropic scattering media
NASA Technical Reports Server (NTRS)
Box, M. A.; Deepak, A.
1981-01-01
The broadening and the detected signal power of a laser beam traversing an anisotropic scattering medium were examined using the small-angle approximation to the radiative transfer equation, in which photons suffering large-angle deflections are neglected. To obtain tractable answers, simple Gaussian and non-Gaussian functions for the scattering phase functions are assumed. Two other approximate approaches employed in the field to further simplify the small-angle approximation solutions are described, and the results obtained by one of them are compared with those obtained using the small-angle approximation. An exact method for obtaining the contribution of each higher-order scattering to the radiance field is examined, but no results are presented.
Common Control System Vulnerability
Trent Nelson
2005-12-01
The Control Systems Security Program and other programs within the Idaho National Laboratory have discovered a vulnerability common to control systems in all sectors that allows an attacker to penetrate most control systems, spoof the operator, and gain full control of targeted system elements. This vulnerability has been identified on several systems that have been evaluated at INL, and in each case a 100% success rate of completing the attack paths that lead to full system compromise was observed. Since these systems are employed in multiple critical infrastructure sectors, this vulnerability is deemed common to control systems in all sectors. Modern control systems architectures can be considered analogous to today's information networks, and as such are usually approached by attackers using a common attack methodology to penetrate deeper and deeper into the network. This approach often is composed of several phases, including gaining access to the control network, reconnaissance, profiling of vulnerabilities, launching attacks, escalating privilege, maintaining access, and obscuring or removing information that indicates that an intruder was on the system. With irrefutable proof that an external attack can lead to a compromise of a computing resource on the organization's business local area network (LAN), access to the control network is usually considered the first phase in the attack plan. Once the attacker gains access to the control network through direct connections and/or the business LAN, the second phase of reconnaissance begins with traffic analysis within the control domain. Thus, the communications between the workstations and the field device controllers can be monitored and evaluated, allowing an attacker to capture, analyze, and evaluate the commands sent among the control equipment. Through manipulation of the communication protocols of control systems (a process generally referred to as ''reverse engineering''), an attacker can then map out the
Monotone Approximations of Minimum and Maximum Functions and Multi-objective Problems
Stipanovic, Dusan M.; Tomlin, Claire J.; Leitmann, George
2012-12-15
In this paper the problem of accomplishing multiple objectives by a number of agents represented as dynamic systems is considered. Each agent is assumed to have a goal which is to accomplish one or more objectives where each objective is mathematically formulated using an appropriate objective function. Sufficient conditions for accomplishing objectives are derived using particular convergent approximations of minimum and maximum functions depending on the formulation of the goals and objectives. These approximations are differentiable functions and they monotonically converge to the corresponding minimum or maximum function. Finally, an illustrative pursuit-evasion game example with two evaders and two pursuers is provided.
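The abstract above does not give the authors' specific approximations, but the idea of a differentiable function that converges monotonically to the maximum can be illustrated with a standard construct, the scaled log-sum-exp bound (used here purely as a stand-in): for p > 0 it upper-bounds max and decreases monotonically toward it as p grows.

```python
import math

def smooth_max(values, p):
    """Differentiable over-approximation of max: (1/p) * log(sum(exp(p*v))).

    For p > 0 this upper-bounds max(values) and decreases monotonically
    toward it as p grows (an analogous construct with -p approximates min).
    """
    m = max(values)  # subtract the max for numerical stability
    return m + math.log(sum(math.exp(p * (v - m)) for v in values)) / p

vals = [0.3, 1.7, -0.5]
approxs = [smooth_max(vals, p) for p in (1.0, 4.0, 16.0, 64.0)]
```

Each value in `approxs` bounds `max(vals) = 1.7` from above, and the sequence decreases toward it as `p` increases.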
Construction of approximate analytical solutions to a new class of non-linear oscillator equation
NASA Technical Reports Server (NTRS)
Mickens, R. E.; Oyedeji, K.
1985-01-01
The principle of harmonic balance is invoked in the development of an approximate analytic model for a class of nonlinear oscillators typified by a mass attached to a stretched wire. By assuming that harmonic balance will hold, solutions are devised for a steady state limit cycle and/or limit point motion. A method of slowly varying amplitudes then allows derivation of approximate solutions by determining the form of the exact solutions and substituting into them the lowest order terms of their respective Fourier expansions. The latter technique is actually a generalization of the method proposed by Kryloff and Bogoliuboff (1943).
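As a sketch of the harmonic-balance principle invoked above (using the Duffing oscillator x'' + x + εx³ = 0 as a hypothetical stand-in for the stretched-wire system, which is not specified in the abstract): substituting x = A cos ωt and balancing the fundamental harmonic gives ω² = 1 + (3/4)εA², which can be checked against direct numerical integration.

```python
import math

def rk4_step(x, v, dt, eps):
    # One classical Runge-Kutta step for x'' + x + eps*x**3 = 0.
    def acc(y):
        return -(y + eps * y ** 3)
    k1x, k1v = v, acc(x)
    k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x)
    k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x)
    k4x, k4v = v + dt * k3v, acc(x + dt * k3x)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

def numerical_frequency(amp, eps, dt=1e-3):
    """Release from rest at x = amp; the velocity first returns to zero
    at t = T/2, so the angular frequency is pi / t_half."""
    x, v, t = amp, 0.0, 0.0
    while True:
        xp, vp = rk4_step(x, v, dt, eps)
        t += dt
        if v < 0.0 <= vp:  # velocity crosses zero from below
            return math.pi / t
        x, v = xp, vp

amp, eps = 1.0, 0.1
w_hb = math.sqrt(1.0 + 0.75 * eps * amp ** 2)  # one-term harmonic balance
w_num = numerical_frequency(amp, eps)
```

For this weakly nonlinear case the one-term harmonic-balance frequency agrees with the integrated frequency to better than one percent.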
Approximations for inclusion of rotor lag dynamics in helicopter flight dynamics models
NASA Technical Reports Server (NTRS)
Mckillip, Robert, Jr.; Curtiss, Howard C., Jr.
1991-01-01
Approximate forms are suggested for augmenting linear rotor/body response models to include rotor lag dynamics. Use of an analytically linearized rotor/body model has shown that the primary effect comes from the additional angular rate contributions of the lag inertial response. Addition of lag dynamics may be made assuming these dynamics are represented by an isolated rotor with no shaft motion. Implications of such an approximation are indicated through comparison with flight test data and sensitivity of stability levels with body rate feedback.
Molecular collisions. 11: Semiclassical approximation to atom-symmetric top rotational excitation
NASA Technical Reports Server (NTRS)
Russell, D.; Curtiss, C. F.
1973-01-01
In a paper of this series a distorted wave approximation to the T matrix for atom-symmetric top scattering was developed which is correct to first order in the part of the interaction potential responsible for transitions in the component of rotational angular momentum along the symmetry axis of the top. A semiclassical expression for this T matrix is derived by assuming large values of orbital and rotational angular momentum quantum numbers.
Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods
NASA Astrophysics Data System (ADS)
Plantagie, Linda; Batenburg, Kees Joost
2015-01-01
We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.
The Common Land Model
NASA Astrophysics Data System (ADS)
Dai, Yongjiu; Zeng, Xubin; Dickinson, Robert E.; Baker, Ian; Bonan, Gordon B.; Bosilovich, Michael G.; Denning, A. Scott; Dirmeyer, Paul A.; Houser, Paul R.; Niu, Guoyue; Oleson, Keith W.; Schlosser, C. Adam; Yang, Zong-Liang
2003-08-01
The Common Land Model (CLM) was developed for community use by a grassroots collaboration of scientists who have an interest in making a general land model available for public use and further development. The major model characteristics include enough unevenly spaced layers to adequately represent soil temperature and soil moisture, and a multilayer parameterization of snow processes; an explicit treatment of the mass of liquid water and ice water and their phase change within the snow and soil system; a runoff parameterization following the TOPMODEL concept; a canopy photosynthesis-conductance model that describes the simultaneous transfer of CO2 and water vapor into and out of vegetation; and a tiled treatment of the subgrid fraction of energy and water balance. CLM has been extensively evaluated in offline mode and coupling runs with the NCAR Community Climate Model (CCM3). The results of two offline runs, presented as examples, are compared with observations and with the simulation of three other land models [the Biosphere-Atmosphere Transfer Scheme (BATS), Bonan's Land Surface Model (LSM), and the 1994 version of the Chinese Academy of Sciences Institute of Atmospheric Physics LSM (IAP94)].
Common approaches for adolescents.
1998-01-01
A South-South program organized by JOICFP provided an excellent opportunity for the exchange of experiences in the field of adolescent reproductive health (RH) between Mexico and the Philippines. Alfonso Lopez Juarez, executive director, Mexican Foundation for Family Planning (MEXFAM), shared MEXFAM's experiences with field personnel and GO-NGO representatives related to JOICFP's RH-oriented project in the Philippines while in the country from November 16 to 21. The program was also effective for identifying common issues and effective approaches to adolescent health issues and communicating with youth on RH and sexual health. The exchange was supported by the Hoken Kaikan Foundation and organized by JOICFP in collaboration with UNFPA-Manila and the Commission on Population (POPCOM). Lopez shared some of the lessons of MEXFAM's decade-long Gente Joven IEC program on adolescent health with GO and NGO representatives at a forum held on November 18. The event was opened by Dr. Carmencita Reodica, secretary, Department of Health (DOH). He then moved to the project sites of Balayan and Malvar municipalities of Batangas Province, where he spoke with field staff and demonstrated MEXFAM's approach in classroom situations with young people. Lopez also observed various adolescent activities such as group work with peer facilitators. "I am pleased that we can share some applicable experiences and learn from each other's projects," commented Lopez. PMID:12348336
COMMON ENVELOPE: ENTHALPY CONSIDERATION
Ivanova, N.; Chaichenets, S.
2011-04-20
In this Letter, we discuss a modification to the criterion for the common envelope (CE) event to result in envelope dispersion. We emphasize that the current energy criterion for the CE phase is not sufficient for an instability of the CE, nor for an ejection. However, in some cases, stellar envelopes undergo stationary mass outflows, which are likely to occur during the slow spiral-in stage of the CE event. We propose the condition for such outflows, in a manner similar to the currently standard α_CE λ-prescription but with the addition of a P/ρ term in the energy balance equation, therefore accounting for the enthalpy of the envelope rather than merely the gas internal energy. This produces a significant correction, which might help to dispense with the unphysically high value of the energy efficiency parameter during the CE phase currently required in binary population synthesis studies to make the production of low-mass X-ray binaries with a black hole companion match the observations.
Mars Surface Systems Common Capabilities and Challenges for Human Missions
NASA Technical Reports Server (NTRS)
Toups, Larry; Hoffman, Stephen J.; Watts, Kevin
2016-01-01
This paper describes the current status of common systems and operations as they are applied to actual locations on Mars that are representative of Exploration Zones (EZ) - NASA's term for candidate locations where humans could land, live and work on the martian surface. Given NASA's current concepts for human missions to Mars, an EZ is a collection of Regions of Interest (ROIs) located within approximately 100 kilometers of a centralized landing site. ROIs are areas that are relevant for scientific investigation and/or development/maturation of capabilities and resources necessary for a sustainable human presence. An EZ also contains a habitation site that will be used by multiple human crews during missions to explore and utilize the ROIs within the EZ. The Evolvable Mars Campaign (EMC), a description of NASA's current approach to these human Mars missions, assumes that a single EZ will be identified within which NASA will establish a substantial and durable surface infrastructure that will be used by multiple human crews. The process of identifying and eventually selecting this single EZ will likely take many years to finalize. Because of this extended EZ selection process, it becomes important to evaluate the current suite of surface systems and operations being considered for the EMC as they are likely to perform at a variety of proposed EZ locations and for the types of operations - both scientific and development - that are proposed for these candidate EZs. It is also important to evaluate proposed EZs for their suitability to be explored or developed given the range of capabilities and constraints for the types of surface systems and operations being considered within the EMC.
Progressive Image Coding by Hierarchical Linear Approximation.
ERIC Educational Resources Information Center
Wu, Xiaolin; Fang, Yonggang
1994-01-01
Proposes a scheme of hierarchical piecewise linear approximation as an adaptive image pyramid. A progressive image coder comes naturally from the proposed image pyramid. The new pyramid is semantically more powerful than regular tessellation but syntactically simpler than free segmentation. This compromise between adaptability and complexity…
Median Approximations for Genomes Modeled as Matrices.
Zanetti, Joao Paulo Pereira; Biller, Priscila; Meidanis, Joao
2016-04-01
The genome median problem is an important problem in phylogenetic reconstruction under rearrangement models. It can be stated as follows: Given three genomes, find a fourth that minimizes the sum of the pairwise rearrangement distances between it and the three input genomes. In this paper, we model genomes as matrices and study the matrix median problem using the rank distance. It is known that, for any metric distance, at least one of the corners is a [Formula: see text]-approximation of the median. Our results allow us to compute up to three additional matrix median candidates, all of them with approximation ratios at least as good as the best corner, when the input matrices come from genomes. We also show a class of instances where our candidates are optimal. From the application point of view, it is usually more interesting to locate medians farther from the corners, and therefore, these new candidates are potentially more useful. In addition to the approximation algorithm, we suggest a heuristic to get a genome from an arbitrary square matrix. This is useful to translate the results of our median approximation algorithm back to genomes, and it has good results in our tests. To assess the relevance of our approach in the biological context, we ran simulated evolution tests and compared our solutions to those of an exact DCJ median solver. The results show that our method is capable of producing very good candidates. PMID:27072561
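A minimal sketch of the best-corner candidate described above, assuming the rank distance d(A, B) = rank(A - B) on genome matrices (the toy 3x3 permutation matrices below are hypothetical inputs, not from the paper):

```python
import numpy as np

def rank_distance(a, b):
    """Rank distance between genome matrices: d(A, B) = rank(A - B)."""
    return np.linalg.matrix_rank(a - b)

def best_corner(genomes):
    """Among the input genomes (the 'corners'), return the one minimizing
    the total rank distance to all inputs. For any metric distance, at
    least one corner approximates the true median within a constant factor."""
    scores = [sum(rank_distance(g, h) for h in genomes) for g in genomes]
    i = int(np.argmin(scores))
    return genomes[i], scores[i]

# Hypothetical toy genomes encoded as 3x3 permutation matrices.
identity = np.eye(3)
swap01 = identity[[1, 0, 2]]   # transposition of the first two elements
cycle = identity[[1, 2, 0]]    # 3-cycle
corner, score = best_corner([identity, swap01, cycle])
```

Here the transposition is the best corner: it sits at rank distance 1 from each of the other two inputs, for a total score of 2.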
Approximate analysis of electromagnetically coupled microstrip dipoles
NASA Astrophysics Data System (ADS)
Kominami, M.; Yakuwa, N.; Kusaka, H.
1990-10-01
A new dynamic analysis model for analyzing electromagnetically coupled (EMC) microstrip dipoles is proposed. The formulation is based on an approximate treatment of the dielectric substrate. Calculations of the equivalent impedance of two different EMC dipole configurations are compared with measured data and full-wave solutions. The agreement is very good.
Approximations For Controls Of Hereditary Systems
NASA Technical Reports Server (NTRS)
Milman, Mark H.
1988-01-01
Convergence properties of controls, trajectories, and feedback kernels analyzed. Report discusses use of factorization techniques to approximate optimal feedback gains in the finite-time, linear-regulator/quadratic-cost-function problem for a system governed by retarded functional-difference equations (RFDEs) with control delays. Presents approach to factorization based on discretization of the state penalty, leading to a simple structure for the feedback control law.
Revisiting Twomey's approximation for peak supersaturation
NASA Astrophysics Data System (ADS)
Shipway, B. J.
2015-04-01
Twomey's seminal 1959 paper provided lower and upper bound approximations to the estimation of peak supersaturation within an updraft and thus provides the first closed expression for the number of nucleated cloud droplets. The form of this approximation is simple, but provides a surprisingly good estimate and has subsequently been employed in more sophisticated treatments of nucleation parametrization. In the current paper, we revisit the lower bound approximation of Twomey and make a small adjustment that can be used to obtain a more accurate calculation of peak supersaturation under all potential aerosol loadings and thermodynamic conditions. In order to make full use of this improved approximation, the underlying integro-differential equation for supersaturation evolution and the condition for calculating peak supersaturation are examined. A simple rearrangement of the algebra allows for an expression to be written down that can then be solved with a single lookup table with only one independent variable for an underlying lognormal aerosol population. While multimodal aerosol with N different dispersion characteristics requires 2N+1 inputs to calculate the activation fraction, only N of these one-dimensional lookup tables are needed. No additional information is required in the lookup table to deal with additional chemical, physical or thermodynamic properties. The resulting implementation provides a relatively simple, yet computationally cheap, physically based parametrization of droplet nucleation for use in climate and Numerical Weather Prediction models.
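The single-variable lookup-table pattern described above can be sketched generically (the monotone relation g(s) = s·e^s below is a hypothetical stand-in for the rearranged supersaturation condition, which the abstract does not spell out): tabulate the monotone function once, then invert it with a 1-D interpolating lookup.

```python
import numpy as np

# Hypothetical monotone relation g(s) = s * exp(s); because it is strictly
# increasing on the grid, its inverse is a single 1-D table lookup.
s_grid = np.linspace(0.0, 5.0, 2001)
x_grid = s_grid * np.exp(s_grid)

def inverse_lookup(x):
    """Solve g(s) = x with one table lookup (linear interpolation)."""
    return np.interp(x, x_grid, s_grid)

s = inverse_lookup(2.0 * np.exp(2.0))  # should recover s close to 2.0
```

The table is built once; each subsequent solve is a cheap interpolation, which is the computational appeal of the parametrization described in the abstract.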
Padé approximations and diophantine geometry
Chudnovsky, D. V.; Chudnovsky, G. V.
1985-01-01
Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves. PMID:16593552
Achievements and Problems in Diophantine Approximation Theory
NASA Astrophysics Data System (ADS)
Sprindzhuk, V. G.
1980-08-01
Contents: Introduction. I. Metrical theory of approximation on manifolds: § 1. The basic problem. § 2. Brief survey of results. § 3. The principal conjecture. II. Metrical theory of transcendental numbers: § 1. Mahler's classification of numbers. § 2. Metrical characterization of numbers with a given type of approximation. § 3. Further problems. III. Approximation of algebraic numbers by rationals: § 1. Simultaneous approximations. § 2. The inclusion of p-adic metrics. § 3. Effective improvements of Liouville's inequality. IV. Estimates of linear forms in logarithms of algebraic numbers: § 1. The basic method. § 2. Survey of results. § 3. Estimates in the p-adic metric. V. Diophantine equations: § 1. Ternary exponential equations. § 2. The Thue and Thue-Mahler equations. § 3. Equations of hyperelliptic type. § 4. Algebraic-exponential equations. VI. The arithmetic structure of polynomials and the class number: § 1. The greatest prime divisor of a polynomial in one variable. § 2. The greatest prime divisor of a polynomial in two variables. § 3. Square-free divisors of polynomials and the class number. § 4. The general problem of the size of the class number. Conclusion. References.
Approximation of virus structure by icosahedral tilings.
Salthouse, D G; Indelicato, G; Cermelli, P; Keef, T; Twarock, R
2015-07-01
Viruses are remarkable examples of order at the nanoscale, exhibiting protein containers that in the vast majority of cases are organized with icosahedral symmetry. Janner used lattice theory to provide blueprints for the organization of material in viruses. An alternative approach is provided here in terms of icosahedral tilings, motivated by the fact that icosahedral symmetry is non-crystallographic in three dimensions. In particular, a numerical procedure is developed to approximate the capsid of icosahedral viruses by icosahedral tiles via projection of high-dimensional tiles based on the cut-and-project scheme for the construction of three-dimensional quasicrystals. The goodness of fit of our approximation is assessed using techniques related to the theory of polygonal approximation of curves. The approach is applied to a number of viral capsids and it is shown that detailed features of the capsid surface can indeed be satisfactorily described by icosahedral tilings. This work complements previous studies in which the geometry of the capsid is described by point sets generated as orbits of extensions of the icosahedral group, as such point sets are by construction related to the vertex sets of icosahedral tilings. The approximations of virus geometry derived here can serve as coarse-grained models of viral capsids as a basis for the study of virus assembly and structural transitions of viral capsids, and also provide a new perspective on the design of protein containers for nanotechnology applications. PMID:26131897
Parameter Choices for Approximation by Harmonic Splines
NASA Astrophysics Data System (ADS)
Gutting, Martin
2016-04-01
The approximation by harmonic trial functions allows the construction of the solution of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties, regional modeling or the improvement of a global model over part of the Earth's surface is possible with splines. Fast multipole methods have been developed for some cases of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm consists of a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in the number of points for a prescribed accuracy of the kernel approximation. The application of the fast multipole method to spline approximation, which also allows the treatment of noisy data, requires the choice of a smoothing parameter. We investigate different methods to (ideally automatically) choose this parameter with and without prior knowledge of the noise level. The performance of these methods is compared for different types of noise in a large simulation study. Applications to gravitational field modeling are presented, as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.
Can Distributional Approximations Give Exact Answers?
ERIC Educational Resources Information Center
Griffiths, Martin
2013-01-01
Some mathematical activities and investigations for the classroom or the lecture theatre can appear rather contrived. This cannot, however, be levelled at the idea given here, since it is based on a perfectly sensible question concerning distributional approximations that was posed by an undergraduate student. Out of this simple question, and…
Large Hierarchies from Approximate R Symmetries
Kappl, Rolf; Ratz, Michael; Schmidt-Hoberg, Kai; Nilles, Hans Peter; Ramos-Sanchez, Saul; Vaudrevange, Patrick K. S.
2009-03-27
We show that hierarchically small vacuum expectation values of the superpotential in supersymmetric theories can be a consequence of an approximate R symmetry. We briefly discuss the role of such small constants in moduli stabilization and understanding the huge hierarchy between the Planck and electroweak scales.
An approximate classical unimolecular reaction rate theory
NASA Astrophysics Data System (ADS)
Zhao, Meishan; Rice, Stuart A.
1992-05-01
We describe a classical theory of unimolecular reaction rate which is derived from the analysis of Davis and Gray by use of simplifying approximations. These approximations concern the calculation of the locations of, and the fluxes of phase points across, the bottlenecks to fragmentation and to intramolecular energy transfer. The bottleneck to fragment separation is represented as a vibration-rotation state dependent separatrix, an approximation similar to, but extending and improving, the separatrix approximations introduced by Gray, Rice, and Davis and by Zhao and Rice. The novel feature in our analysis is the representation of the bottlenecks to intramolecular energy transfer as dividing surfaces in phase space; the locations of these dividing surfaces are determined by the same conditions as locate the remnants of robust tori with frequency ratios related to the golden mean (in a two degree of freedom system these are the cantori). The flux of phase points across each dividing surface is calculated with an analytic representation instead of a stroboscopic mapping. The rate of unimolecular reaction is identified with the net rate at which phase points escape from the region of quasiperiodic bounded motion to the region of free fragment motion by consecutively crossing the dividing surfaces for intramolecular energy exchange and the separatrix. This new theory generates predictions of the rates of predissociation of the van der Waals molecules HeI2, NeI2 and ArI2 which are in very good agreement with available experimental data.
Approximation and compression with sparse orthonormal transforms.
Sezer, Osman Gokhan; Guleryuz, Onur G; Altunbasak, Yucel
2015-08-01
We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show consistent increase in compression and approximation performance compared with conventional methods. PMID:25823033
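The KLT baseline and the n-term approximation metric mentioned above can be sketched as follows (a minimal illustration on synthetic AR(1) signals, not the authors' SOT design): estimate the covariance eigenbasis, keep the n largest-magnitude coefficients per signal, and compare the error against the same budget in the standard basis.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic Gaussian AR(1) "patches": for such signals the KLT (the
# eigenbasis of the covariance) is the optimal orthonormal transform.
n, m, rho = 8, 4000, 0.9
x = np.zeros((m, n))
x[:, 0] = rng.standard_normal(m)
for i in range(1, n):
    x[:, i] = rho * x[:, i - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal(m)

cov = x.T @ x / m
_, klt = np.linalg.eigh(cov)   # columns are the KLT basis vectors
coef = x @ klt                 # KLT coefficients of each signal

def nterm_error(c, k):
    """Mean squared error after keeping only the k largest-|c| terms per row."""
    drop = np.argsort(np.abs(c), axis=1)[:, :-k]   # indices to zero out
    c2 = c.copy()
    np.put_along_axis(c2, drop, 0.0, axis=1)
    return np.mean((c2 - c) ** 2)

err_klt = nterm_error(coef, 2)   # 2-term approximation in the KLT basis
err_id = nterm_error(x, 2)       # same budget in the standard basis
```

Because both bases are orthonormal, the coefficient-domain error equals the signal-domain error, so the comparison is fair; the KLT concentrates the correlated signal's energy and wins by a wide margin.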
Quickly Approximating the Distance Between Two Objects
NASA Technical Reports Server (NTRS)
Hammen, David
2009-01-01
A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.
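The abstract does not detail the method, but a common ingredient of such quick approximations is a conservative bounding-volume distance, a cheap lower bound on the true point-to-object distance, sketched here for an axis-aligned bounding box (an illustrative stand-in, not necessarily the authors' technique):

```python
import math

def aabb_distance(p, lo, hi):
    """Cheap lower bound on the distance from point p to any object fully
    contained in the axis-aligned box [lo, hi]; zero if p is inside."""
    d2 = 0.0
    for pi, l, h in zip(p, lo, hi):
        if pi < l:
            d2 += (l - pi) ** 2
        elif pi > h:
            d2 += (pi - h) ** 2
    return math.sqrt(d2)
```

A motion planner can use this bound to rule out collisions without ever evaluating the exact distance to the complexly shaped object.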
Fostering Formal Commutativity Knowledge with Approximate Arithmetic
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found out that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated if the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
Block Addressing Indices for Approximate Text Retrieval.
ERIC Educational Resources Information Center
Baeza-Yates, Ricardo; Navarro, Gonzalo
2000-01-01
Discusses indexing in large text databases, approximate text searching, and space-time tradeoffs for indexed text searching. Studies the space overhead and retrieval times as functions of the text block size, concludes that an index can be sublinear in space overhead and query time, and applies the analysis to the Web. (Author/LRW)
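The block-addressing idea summarized above can be sketched in a few lines (a toy illustration, not the paper's implementation): the index maps each term to the text blocks containing it, and a query scans sequentially only inside those candidate blocks, which is what keeps the index sublinear in space.

```python
# Split the text into fixed-size blocks and index terms by block number.
text = "the quick brown fox jumps over the lazy dog " * 4
block_size = 60
blocks = [text[i:i + block_size] for i in range(0, len(text), block_size)]

index = {}
for b, chunk in enumerate(blocks):
    for term in set(chunk.split()):   # naive tokenizer; boundary fragments
        index.setdefault(term, set()).add(b)   # are harmless extra keys

def search(term):
    """Look up candidate blocks, then scan each one sequentially for the
    first exact hit (the scan step is where approximate matching would go)."""
    hits = []
    for b in sorted(index.get(term, ())):
        pos = blocks[b].find(term)
        if pos >= 0:
            hits.append(b * block_size + pos)
    return hits

hits = search("fox")
```

Coarser blocks shrink the index but lengthen the sequential scans; the space-time tradeoff the abstract analyzes is exactly this choice of block size.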
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A structural synthesis methodology for the minimum mass design of three-dimensional frame-truss structures under multiple static loading conditions and subject to limits on displacements, rotations, stresses, local buckling, and element cross-sectional dimensions is presented. A variety of approximation concept options are employed to yield near optimum designs after no more than 10 structural analyses. Available options include: (A) formulation of the nonlinear mathematical programming problem in either reciprocal section property (RSP) or cross-sectional dimension (CSD) space; (B) two alternative approximate problem structures in each design space; and (C) three distinct assumptions about element end-force variations. Fixed element, design element linking, and temporary constraint deletion features are also included. The solution of each approximate problem, in either its primal or dual form, is obtained using CONMIN, a feasible directions program. The frame-truss synthesis methodology is implemented in the COMPASS computer program and is used to solve a variety of problems. These problems were chosen so that, in addition to exercising the various approximation concepts options, the results could be compared with previously published work.
An adiabatic approximation for grain alignment theory
NASA Astrophysics Data System (ADS)
Roberge, W. G.
1997-10-01
The alignment of interstellar dust grains is described by the joint distribution function for certain 'internal' and 'external' variables, where the former describe the orientation of the axes of a grain with respect to its angular momentum, J, and the latter describe the orientation of J relative to the interstellar magnetic field. I show how the large disparity between the dynamical time-scales of the internal and external variables - which is typically 2-3 orders of magnitude - can be exploited to simplify calculations of the required distribution greatly. The method is based on an 'adiabatic approximation' which closely resembles the Born-Oppenheimer approximation in quantum mechanics. The adiabatic approximation prescribes an analytic distribution function for the 'fast' dynamical variables and a simplified Fokker-Planck equation for the 'slow' variables which can be solved straightforwardly using various techniques. These solutions are accurate to O(epsilon), where epsilon is the ratio of the fast and slow dynamical time-scales. As a simple illustration of the method, I derive an analytic solution for the joint distribution established when Barnett relaxation acts in concert with gas damping. The statistics of the analytic solution agree with the results of laborious numerical calculations which do not exploit the adiabatic approximation.
Approximation algorithms for planning and control
NASA Technical Reports Server (NTRS)
Boddy, Mark; Dean, Thomas
1989-01-01
A control system operating in a complex environment will encounter a variety of different situations, with varying amounts of time available to respond to critical events. Ideally, such a control system will do the best possible with the time available. In other words, its responses should approximate those that would result from having unlimited time for computation, where the degree of the approximation depends on the amount of time it actually has. There exist approximation algorithms for a wide variety of problems. Unfortunately, the solution to any reasonably complex control problem will require solving several computationally intensive problems. Algorithms for successive approximation are a subclass of the class of anytime algorithms, algorithms that return answers for any amount of computation time, where the answers improve as more time is allotted. An architecture is described for allocating computation time to a set of anytime algorithms, based on expectations regarding the value of the answers they return. The architecture described is quite general, producing optimal schedules for a set of algorithms under widely varying conditions.
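The defining property of an anytime algorithm can be sketched with a minimal hypothetical example (random-sampling maximization, not the deliberation-scheduling architecture of the paper): the algorithm can be interrupted after any number of computation steps and always returns its best answer so far, with quality nondecreasing in the time allotted.

```python
import random

def anytime_max(values, budget, rng):
    """Minimal anytime algorithm: each loop iteration is one unit of
    computation; after any number of units an answer is available, and
    answers never get worse as more time is allotted."""
    best, history = float("-inf"), []
    for _ in range(budget):
        best = max(best, rng.choice(values))   # one unit of computation
        history.append(best)                   # answer available "anytime"
    return history

rng = random.Random(0)
h = anytime_max(list(range(1000)), budget=50, rng=rng)
# the quality of the answer never degrades with more time
assert all(a <= b for a, b in zip(h, h[1:]))
```

A scheduler in the spirit of the abstract would allocate the budget across several such algorithms according to the expected improvement of each one's answer.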
Kravchuk functions for the finite oscillator approximation
NASA Technical Reports Server (NTRS)
Atakishiyev, Natig M.; Wolf, Kurt Bernardo
1995-01-01
Kravchuk orthogonal functions - Kravchuk polynomials multiplied by the square root of the weight function - simplify the inversion algorithm for the analysis of discrete, finite signals in harmonic oscillator components. They can be regarded as the best approximation set. As the number of sampling points increases, the Kravchuk expansion becomes the standard oscillator expansion.
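A minimal sketch of these functions, assuming the symmetric (p = 1/2) binary Kravchuk convention with its coding-theory normalization; the discrete orthonormality that makes the inversion algorithm simple can be checked directly on the N + 1 sampling points:

```python
from math import comb, sqrt

def kravchuk_poly(n, x, N):
    # binary (p = 1/2) Kravchuk polynomial K_n(x; N)
    return sum((-1)**j * comb(x, j) * comb(N - x, n - j) for j in range(n + 1))

def kravchuk_function(n, x, N):
    # polynomial times the square root of the binomial weight, normalized
    w = comb(N, x) / 2**N
    return kravchuk_poly(n, x, N) * sqrt(w) / sqrt(comb(N, n))

N = 12
# discrete orthonormality: sum_x phi_m(x) phi_n(x) = delta_{mn}
for m in range(4):
    for n in range(4):
        s = sum(kravchuk_function(m, x, N) * kravchuk_function(n, x, N)
                for x in range(N + 1))
        assert abs(s - (1.0 if m == n else 0.0)) < 1e-9
```

Because the functions are orthonormal on the finite grid, expansion coefficients of a sampled signal are plain inner products, which is the simplification the abstract refers to.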
Strategic mating with common preferences.
Alpern, Steve; Reyniers, Diane
2005-12-21
We present a two-sided search model in which individuals from two groups (males and females, employers and workers) would like to form a long-term relationship with a highly ranked individual of the other group, but are limited to individuals who they randomly encounter and to those who also accept them. This article extends the research program, begun in Alpern and Reyniers [1999. J. Theor. Biol. 198, 71-88], of providing a game theoretic analysis for the Kalick-Hamilton [1986. J. Personality Soc. Psychol. 51, 673-682] mating model in which a cohort of males and females of various 'fitness' or 'attractiveness' levels are randomly paired in successive periods and mate if they accept each other. Their model compared two acceptance rules chosen to represent homotypic (similarity) preferences and common (or 'type') preferences. Our earlier paper modeled the first kind by assuming that if a level x male mates with a level y female, both get utility -|x-y|, whereas this paper models the second kind by giving the male utility y and the female utility x. Our model can also be seen as a continuous generalization of the discrete fitness-level game of Johnstone [1997. Behav. Ecol. Sociobiol. 40, 51-59]. We establish the existence of equilibrium strategy pairs, give examples of multiple equilibria, and conditions guaranteeing uniqueness. In all equilibria individuals become less choosy over time, with high fitness individuals pairing off with each other first, leaving the rest to pair off later. This route to assortative mating was suggested by Parker [1983. Mate Choice, Cambridge University Press, Cambridge, pp. 141-164]. If the initial fitness distributions have atoms, then mixed strategy equilibria may also occur. If these distributions are unknown, there are equilibria in which only individuals in the same fitness band are mated, as in the steady-state model of MacNamara and Collins [1990. J. Appl. Prob. 28, 815-827] for the job search problem. PMID:16171826
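The qualitative equilibrium behavior (declining choosiness, with high-fitness individuals pairing off first) can be illustrated by a simulation under a strong simplifying assumption: every individual uses the same hypothetical declining acceptance threshold, rather than the paper's equilibrium strategies.

```python
import random

def simulate(n=200, periods=10, seed=1):
    """Illustrative mutual-acceptance dynamic (hypothetical common threshold,
    not the paper's equilibrium): in period t everyone accepts partners of
    fitness >= 1 - (t + 1) / periods; matched pairs leave the cohort."""
    rng = random.Random(seed)
    males = [rng.random() for _ in range(n)]
    females = [rng.random() for _ in range(n)]
    matched = []
    for t in range(periods):
        cut = 1 - (t + 1) / periods
        rng.shuffle(males); rng.shuffle(females)
        keep_m, keep_f = [], []
        for x, y in zip(males, females):
            if x >= cut and y >= cut:          # mutual acceptance
                matched.append((t, x, y))
            else:
                keep_m.append(x); keep_f.append(y)
        males, females = keep_m, keep_f
    return matched

pairs = simulate()
# high-fitness individuals pair off with each other first ...
assert all(min(x, y) >= 0.7 for t, x, y in pairs if t < 3)
# ... leaving lower-fitness individuals to pair off in later periods
assert any(min(x, y) < 0.3 for t, x, y in pairs if t >= 7)
```

Early matches are guaranteed to be between high-fitness individuals by construction of the threshold, reproducing Parker's route to assortative mating in time.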
Benchmarking mean-field approximations to level densities
NASA Astrophysics Data System (ADS)
Alhassid, Y.; Bertsch, G. F.; Gilbreth, C. N.; Nakada, H.
2016-04-01
We assess the accuracy of finite-temperature mean-field theory using as a standard the Hamiltonian and model space of the shell model Monte Carlo calculations. Two examples are considered: the nucleus 162Dy, representing a heavy deformed nucleus, and 148Sm, representing a nearby heavy spherical nucleus with strong pairing correlations. The errors inherent in the finite-temperature Hartree-Fock and Hartree-Fock-Bogoliubov approximations are analyzed by comparing the entropies of the grand canonical and canonical ensembles, as well as the level density at the neutron resonance threshold, with shell model Monte Carlo calculations, which are accurate up to well-controlled statistical errors. The main weak points in the mean-field treatments are found to be: (i) the extraction of number-projected densities from the grand canonical ensembles, and (ii) the symmetry breaking by deformation or by the pairing condensate. In the absence of a pairing condensate, we confirm that the usual saddle-point approximation to extract the number-projected densities is not a significant source of error compared to other errors inherent to the mean-field theory. We also present an alternative formulation of the saddle-point approximation that makes direct use of an approximate particle-number projection and avoids computing the usual three-dimensional Jacobian of the saddle-point integration. We find that the pairing condensate is less amenable to approximate particle-number projection methods because of the explicit violation of particle-number conservation in the pairing condensate. Nevertheless, the Hartree-Fock-Bogoliubov theory is accurate to less than one unit of entropy for 148Sm at the neutron threshold energy, which is above the pairing phase transition. This result provides support for the commonly used "back-shift" approximation, treating pairing as only affecting the excitation energy scale. When the ground state is strongly deformed, the Hartree-Fock entropy is significantly
Counting independent sets using the Bethe approximation
Chertkov, Michael; Chandrasekaran, V; Gamarmik, D; Shah, D; Sin, J
2009-01-01
The authors consider the problem of counting the number of independent sets, or equivalently the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n² ε⁻⁴ log³(n ε⁻¹)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n⁻ᵞ) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function, which is quite surprising since previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly, and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
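The BP/Bethe machinery the abstract builds on can be sketched as follows. This is a standard synchronous BP implementation with the Bethe free-energy formula for the hard-core model at activity 1, not the authors' modified time-varying algorithm; on a tree the Bethe estimate of the number of independent sets is exact, which gives a clean correctness check.

```python
import itertools, math

def bp_count_independent_sets(nodes, edges, iters=100):
    """Belief propagation for the hard-core model (activity 1): the Bethe
    free energy at a BP fixed point estimates log(#independent sets).
    Exact on trees; an approximation on graphs with cycles."""
    nbrs = {v: set() for v in nodes}
    for u, v in edges:
        nbrs[u].add(v); nbrs[v].add(u)
    psi = lambda xi, xj: 0.0 if xi == 1 and xj == 1 else 1.0
    m = {(i, j): [0.5, 0.5] for i in nodes for j in nbrs[i]}
    for _ in range(iters):                     # synchronous message updates
        new = {}
        for (i, j) in m:
            msg = [sum(psi(xi, xj) * math.prod(m[(k, i)][xi] for k in nbrs[i] - {j})
                       for xi in (0, 1)) for xj in (0, 1)]
            z = msg[0] + msg[1]
            new[(i, j)] = [msg[0] / z, msg[1] / z]
        m = new
    logZ = 0.0
    for i in nodes:                            # node terms of the Bethe free energy
        b = [math.prod(m[(k, i)][x] for k in nbrs[i]) for x in (0, 1)]
        z = b[0] + b[1]
        logZ += (len(nbrs[i]) - 1) * sum((p / z) * math.log(p / z) for p in b if p > 0)
    for (u, v) in edges:                       # pairwise (edge) terms
        raw = [math.prod(m[(k, u)][xu] for k in nbrs[u] - {v}) *
               math.prod(m[(k, v)][xv] for k in nbrs[v] - {u})
               for xu in (0, 1) for xv in (0, 1) if psi(xu, xv) > 0]
        z = sum(raw)
        logZ -= sum((p / z) * math.log(p / z) for p in raw if p > 0)
    return logZ

nodes, edges = [0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]   # small tree (a path)
logZ = bp_count_independent_sets(nodes, edges)
exact = sum(1 for a in itertools.product((0, 1), repeat=len(nodes))
            if all(not (a[u] and a[v]) for u, v in edges))
assert abs(math.exp(logZ) - exact) < 1e-6   # Bethe is exact on trees
```

On graphs with cycles the same code returns the Bethe approximation whose error the paper bounds via loop calculus.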
Yang, Z
1994-09-01
Two approximate methods are proposed for maximum likelihood phylogenetic estimation, which allow variable rates of substitution across nucleotide sites. Three data sets with quite different characteristics were analyzed to examine empirically the performance of these methods. The first, called the "discrete gamma model," uses several categories of rates to approximate the gamma distribution, with equal probability for each category. The mean of each category is used to represent all the rates falling in the category. The performance of this method is found to be quite good, and four such categories appear to be sufficient to produce both an optimum, or near-optimum fit by the model to the data, and also an acceptable approximation to the continuous distribution. The second method, called "fixed-rates model", classifies sites into several classes according to their rates predicted assuming the star tree. Sites in different classes are then assumed to be evolving at these fixed rates when other tree topologies are evaluated. Analyses of the data sets suggest that this method can produce reasonable results, but it seems to share some properties of a least-squares pairwise comparison; for example, interior branch lengths in nonbest trees are often found to be zero. The computational requirements of the two methods are comparable to that of Felsenstein's (1981, J Mol Evol 17:368-376) model, which assumes a single rate for all the sites. PMID:7932792
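The construction in the discrete-gamma model (cut the rate distribution into K equal-probability categories and represent each category by its mean) can be sketched under a simplifying assumption: a fixed shape parameter of 2 with mean 1, chosen so the gamma CDF and partial mean have closed forms; a real analysis estimates the shape from the data.

```python
import math

def gamma2_cdf(x):
    # CDF of Gamma(shape=2, rate=2), which has mean 1: 1 - e^{-2x}(1 + 2x)
    return 1.0 - math.exp(-2 * x) * (1 + 2 * x)

def gamma2_partial_mean(x):
    # E[X * 1{X <= x}] for the same distribution: 1 - e^{-2x}(1 + 2x + 2x^2)
    return 1.0 - math.exp(-2 * x) * (1 + 2 * x + 2 * x * x)

def quantile(p, lo=0.0, hi=60.0):
    for _ in range(200):               # bisection on the closed-form CDF
        mid = (lo + hi) / 2
        if gamma2_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def discrete_gamma_rates(K=4):
    """K equal-probability categories, each represented by its conditional
    mean (the discrete-gamma construction, for this fixed shape)."""
    bounds = [0.0] + [quantile(i / K) for i in range(1, K)] + [math.inf]
    rates = []
    for a, b in zip(bounds, bounds[1:]):
        pm_b = 1.0 if b == math.inf else gamma2_partial_mean(b)
        cm_b = 1.0 if b == math.inf else gamma2_cdf(b)
        rates.append((pm_b - gamma2_partial_mean(a)) / (cm_b - gamma2_cdf(a)))
    return rates

rates = discrete_gamma_rates(4)
assert all(r1 < r2 for r1, r2 in zip(rates, rates[1:]))   # increasing rates
assert abs(sum(rates) / 4 - 1.0) < 1e-6                   # overall mean preserved
```

With four categories, each site's likelihood is averaged over these four representative rates with equal weight, which is the approximation the abstract finds adequate.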
Code of Federal Regulations, 2014 CFR
2014-10-01
... 43 Public Lands: Interior 2 2014-10-01 2014-10-01 false If I acquire a lease by an assignment or transfer, what obligations do I agree to assume? 3106.7-6 Section 3106.7-6 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND MANAGEMENT, DEPARTMENT OF THE...
Code of Federal Regulations, 2012 CFR
2012-10-01
... 43 Public Lands: Interior 2 2012-10-01 2012-10-01 false If I acquire a lease by an assignment or transfer, what obligations do I agree to assume? 3106.7-6 Section 3106.7-6 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND MANAGEMENT, DEPARTMENT OF THE...
Code of Federal Regulations, 2011 CFR
2011-10-01
... 43 Public Lands: Interior 2 2011-10-01 2011-10-01 false If I acquire a lease by an assignment or transfer, what obligations do I agree to assume? 3106.7-6 Section 3106.7-6 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND MANAGEMENT, DEPARTMENT OF THE...
Code of Federal Regulations, 2013 CFR
2013-10-01
... 43 Public Lands: Interior 2 2013-10-01 2013-10-01 false If I acquire a lease by an assignment or transfer, what obligations do I agree to assume? 3106.7-6 Section 3106.7-6 Public Lands: Interior Regulations Relating to Public Lands (Continued) BUREAU OF LAND MANAGEMENT, DEPARTMENT OF THE...
ERIC Educational Resources Information Center
van Noije, Lonneke; Wittebrood, Karin
2010-01-01
How effective are policy interventions to fight crime and how valid is the policy theory that underlies them? This is the twofold research question addressed in this article, which presents an evidence-based evaluation of Dutch social safety policy. By bridging the gap between actual effects and assumed effects, this study seeks to make fuller use…
Code of Federal Regulations, 2011 CFR
2011-07-01
... direct responsibility for the costs of preparing and transporting my mobile home? 302-10.206 Section 302... ALLOWANCES TRANSPORTATION AND STORAGE OF PROPERTY 10-ALLOWANCES FOR TRANSPORTATION OF MOBILE HOMES AND BOATS... responsibility for the costs of preparing and transporting my mobile home? Yes, your agency may assume...
Code of Federal Regulations, 2010 CFR
2010-07-01
... direct responsibility for the costs of preparing and transporting my mobile home? 302-10.206 Section 302... ALLOWANCES TRANSPORTATION AND STORAGE OF PROPERTY 10-ALLOWANCES FOR TRANSPORTATION OF MOBILE HOMES AND BOATS... responsibility for the costs of preparing and transporting my mobile home? Yes, your agency may assume...
Icamina, P
1993-04-01
Indigenous knowledge is examined as it is affected by development and scientific exploration. The indigenous culture of shamanism, which originated in northern and southeast Asia, is a "political and religious technique for managing societies through rituals, myths, and world views." There is respect for the natural environment and community life as a social common good. This world view is still practiced by many in Latin America and in Colombia specifically. Colombian shamanism has an environmental accounting system, but the Brazilian government has established its own system of land tenure and political representation which does not adequately represent shamanism. In 1992 a conference was held in the Philippines by the International Institute for Rural Reconstruction and IDRC on sustainable development and indigenous knowledge. The link between the two is necessary. Unfortunately, there are already examples in the Philippines of loss of traditional crop diversity after the introduction of modern farming techniques and new crop varieties. An attempt was made to collect species, but without proper identification. Opposition was expressed to the preservation of wilderness preserves; the desire was to allow indigenous people to maintain their homeland and use their time-tested sustainable resource management strategies. Property rights were also discussed during the conference. Of particular concern was the protection of knowledge rights about biological diversity or pharmaceutical properties of indigenous plant species. The original owners and keepers of the knowledge must retain access and control. The research gaps were identified and found to be expansive. Reference was made to a study of Mexican Indian children who knew 138 plant species while non-Indian children knew only 37. Sometimes there is conflict of interest where foresters prefer timber forests and farmers desire fuelwood supplies and fodder and grazing land, which is provided by shrubland. Information
Damping effects in doped graphene: The relaxation-time approximation
NASA Astrophysics Data System (ADS)
Kupčić, I.
2014-11-01
The dynamical conductivity of interacting multiband electronic systems derived by Kupčić et al. [J. Phys.: Condens. Matter 25, 145602 (2013), 10.1088/0953-8984/25/14/145602] is shown to be consistent with the general form of the Ward identity. Using the semiphenomenological form of this conductivity formula, we have demonstrated that the relaxation-time approximation can be used to describe the damping effects in weakly interacting multiband systems only if local charge conservation in the system and gauge invariance of the response theory are properly treated. Such a gauge-invariant response theory is illustrated on the common tight-binding model for conduction electrons in doped graphene. The model predicts two distinctly resolved maxima in the energy-loss-function spectra. The first one corresponds to the intraband plasmons (usually called the Dirac plasmons). On the other hand, the second maximum (π plasmon structure) is simply a consequence of the Van Hove singularity in the single-electron density of states. The dc resistivity and the real part of the dynamical conductivity are found to be well described by the relaxation-time approximation, but only in the parametric space in which the damping is dominated by the direct scattering processes. The ballistic transport and the damping of Dirac plasmons are thus the problems that require abandoning the relaxation-time approximation.
Approximate Uncertainty Modeling in Risk Analysis with Vine Copulas
Bedford, Tim; Daneshkhah, Alireza
2015-01-01
Many applications of risk analysis require us to jointly model multiple uncertain quantities. Bayesian networks and copulas are two common approaches to modeling joint uncertainties with probability distributions. This article focuses on new methodologies for copulas by developing work of Cooke, Bedford, Kurowicka, and others on vines as a way of constructing higher dimensional distributions that do not suffer from some of the restrictions of alternatives such as the multivariate Gaussian copula. The article provides a fundamental approximation result, demonstrating that we can approximate any density as closely as we like using vines. It further operationalizes this result by showing how minimum information copulas can be used to provide parametric classes of copulas that have such good levels of approximation. We extend previous approaches using vines by considering nonconstant conditional dependencies, which are particularly relevant in financial risk modeling. We discuss how such models may be quantified, in terms of expert judgment or by fitting data, and illustrate the approach by modeling two financial data sets. PMID:26332240
Spacecraft Orbit Determination with The B-spline Approximation Method
NASA Astrophysics Data System (ADS)
Song, Ye-zhi; Huang, Yong; Hu, Xiao-gong; Li, Pei-jia; Cao, Jian-feng
2014-04-01
It is known that dynamical orbit determination is the most common way to obtain the precise orbits of spacecraft. However, it is sometimes hard to build a precise dynamical model of a spacecraft. To solve this problem, a technique for orbit determination with the B-spline approximation method, based on the theory of function approximation, is presented in this article. To verify the effectiveness of this method, simulated orbit determinations for LEO (Low Earth Orbit), MEO (Medium Earth Orbit), and HEO (Highly Eccentric Orbit) satellites are performed, and it is shown that the method has reliable accuracy and stable solutions. The approach can be performed in both the conventional celestial coordinate system and the conventional terrestrial coordinate system. The spacecraft's position and velocity can be calculated directly with the B-spline approximation method; there is no need to integrate the dynamical equations or to calculate the state transition matrix, so the computational burden of orbit determination is reduced substantially relative to the dynamical orbit determination method. The technique not only has theoretical significance, but can also serve as a conventional algorithm in spacecraft orbit determination.
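The function-approximation idea can be sketched in one dimension (a hypothetical toy fit, not a full orbit determination): sample "positions" are fitted by cubic B-spline coefficients via linear least squares, with no dynamical model and no state transition matrix involved.

```python
def bspline(i, k, t, kn):
    """Cox-de Boor recursion for B-spline basis function N_{i,k}(t);
    terms with zero denominator are defined as 0."""
    if k == 0:
        return 1.0 if kn[i] <= t < kn[i + 1] else 0.0
    v = 0.0
    if kn[i + k] > kn[i]:
        v += (t - kn[i]) / (kn[i + k] - kn[i]) * bspline(i, k - 1, t, kn)
    if kn[i + k + 1] > kn[i + 1]:
        v += (kn[i + k + 1] - t) / (kn[i + k + 1] - kn[i + 1]) * bspline(i + 1, k - 1, t, kn)
    return v

def solve(G, b):
    """Gaussian elimination with partial pivoting for the normal equations."""
    n = len(G)
    M = [row[:] + [bi] for row, bi in zip(G, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= f * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

knots = [0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1]      # clamped cubic knots
nb = len(knots) - 4                                     # 7 basis functions
ts = [i / 100 for i in range(100)]
ys = [t ** 3 for t in ts]                               # "observed" positions
B = [[bspline(i, 3, t, knots) for i in range(nb)] for t in ts]
G = [[sum(B[r][i] * B[r][j] for r in range(len(ts))) for j in range(nb)] for i in range(nb)]
rhs = [sum(B[r][i] * ys[r] for r in range(len(ts))) for i in range(nb)]
coef = solve(G, rhs)
fit = [sum(coef[i] * B[r][i] for i in range(nb)) for r in range(len(ts))]
# cubic splines reproduce a cubic trajectory exactly (up to round-off)
assert max(abs(f - y) for f, y in zip(fit, ys)) < 1e-8
```

In the article's setting the same least-squares fit is done per coordinate against tracking data, and velocities follow from differentiating the spline.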
Spacecraft Orbit Determination with B Spline Approximation Method
NASA Astrophysics Data System (ADS)
Song, Y. Z.; Huang, Y.; Hu, X. G.; Li, P. J.; Cao, J. F.
2013-07-01
It is known that dynamical orbit determination is the most common way to obtain the precise orbit of spacecraft. However, it is sometimes hard to describe the precise orbit of a spacecraft. To solve this problem, a technique for orbit determination with the B-spline approximation method, based on the theory of function approximation, is presented in this article. Several simulated orbit determination cases, including LEO (Low Earth Orbit), MEO (Medium Earth Orbit), and HEO (Highly Eccentric Orbit) satellites, are performed, and it is shown that the accuracy of this method is reliable and stable. The approach can be performed in the conventional celestial coordinate system and the conventional terrestrial coordinate system. The spacecraft's position and velocity can be calculated directly with the B-spline approximation method, which means that it is unnecessary to integrate the dynamical equations and variational equations. This reduces the computational burden of orbit determination substantially relative to the dynamical orbit determination method. The technique not only has theoretical significance, but can also serve as a conventional algorithm in spacecraft orbit determination.
Approximating conductive ellipsoid inductive responses using static quadrupole moments
Smith, J. Torquil
2008-10-01
Smith and Morrison (2006) developed an approximation for the inductive response of conducting magnetic (permeable) spheroids (e.g., steel spheroids) based on the inductive response of conducting magnetic spheres of related dimensions. Spheroids are axially symmetric objects with elliptical cross-sections along the axis of symmetry and circular cross sections perpendicular to the axis of symmetry. Spheroids are useful as an approximation to the shapes of unexploded ordnance (UXO) for approximating their responses. Ellipsoids are more general objects with three orthogonal principal axes, with elliptical cross sections along planes normal to the axes. Ellipsoids reduce to spheroids in the limiting case of ellipsoids with cross-sections that are in fact circles along planes normal to one axis. Parametrizing the inductive response of unknown objects in terms of the response of an ellipsoid is useful as it allows fitting responses of objects with no axis of symmetry, in addition to fitting the responses of axially symmetric objects. It is thus more appropriate for fitting the responses of metal scrap to be distinguished electromagnetically from unexploded ordnance. Here the method of Smith and Morrison (2006) is generalized to the case of conductive magnetic ellipsoids, and a simplified form used to parametrize the inductive response of isolated objects. The simplified form is developed for the case of non-uniform source fields, for the first eight terms in an ellipsoidal harmonic decomposition of the source fields, allowing limited corrections for source field geometry beyond the common assumption of uniform source fields.
Palmer, Keith T; Reading, Isabel; Calnan, Michael; Coggon, David
2013-01-01
Objective Statistics from Labour Force Surveys are widely quoted as evidence for the scale of occupational illness in Europe. However, occupational attribution depends on whether participants believe their health problem is caused or aggravated by work, and personal beliefs may be unreliable. We assessed the potential for error for work-associated arm pain. Methods We mailed a questionnaire to working-aged adults, randomly chosen from five British general practices. We asked about: occupational activities; mental health; self-rated health; arm pain; and beliefs about its causation. Those in work (n = 1769) were asked about activities likely to cause arm pain, from which we derived a variable for exposure to any ‘arm-straining’ occupational activity. We estimated the relative risk (RR) from arm-straining activity, using a modified Cox model, and derived the population attributable fraction (PAF). We compared the proportion of arm pain cases reporting their symptom as caused or made worse by work with the calculated PAF, overall and for subsets defined by demographic and other characteristics. Results Arm pain in the past year was more common in the 1,143 subjects who reported exposure to arm-straining occupational activity (RR 1.2, 95% confidence interval 1.1 to 1.5). In the study sample as a whole, 53.9% of 817 cases reported their arm pain as work-associated, whereas the PAF for arm-straining occupational activity was only 13.9%. The ratio of cases reported as work-related to the calculated attributable number was substantially higher below 50 years (5.4) than at older ages (3.0) and higher in those with worse self-rated and mental health. Conclusions Counting people with arm pain which they believe to be work-related can overestimate the number of cases attributable to work substantially. This casts doubt on the validity of a major source of information used by European Governments to evaluate their occupational health strategies. PMID:18056747
Approximate gauge symmetry of composite vector bosons
Suzuki, Mahiko
2010-06-01
It can be shown in a solvable field theory model that the couplings of the composite vector mesons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.
Private Medical Record Linkage with Approximate Matching
Durham, Elizabeth; Xue, Yuan; Kantarcioglu, Murat; Malin, Bradley
2010-01-01
Federal regulations require patient data to be shared for reuse in a de-identified manner. However, disparate providers often share data on overlapping populations, such that a patient’s record may be duplicated or fragmented in the de-identified repository. To perform unbiased statistical analysis in a de-identified setting, it is crucial to integrate records that correspond to the same patient. Private record linkage techniques have been developed, but most methods are based on encryption and preclude the ability to determine similarity, decreasing the accuracy of record linkage. The goal of this research is to integrate a private string comparison method that uses Bloom filters to provide an approximate match, with a medical record linkage algorithm. We evaluate the approach with 100,000 patients’ identifiers and demographics from the Vanderbilt University Medical Center. We demonstrate that the private approximation method achieves sensitivity that is, on average, 3% higher than previous methods. PMID:21346965
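The Bloom-filter encoding behind this style of private approximate matching can be sketched as follows. This is a generic bigram/Bloom/Dice sketch with arbitrary illustrative parameters (filter length m and hash count k), not the authors' tuned implementation: similar names share most bigrams, so their filters share most set bits and score a high Dice coefficient without revealing the underlying strings.

```python
import hashlib

def bigrams(name):
    """Character bigrams of a padded, lowercased name."""
    s = f"_{name.lower()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom(tokens, m=100, k=4):
    """Encode a token set into an m-bit Bloom filter using k salted hashes."""
    bits = [0] * m
    for t in tokens:
        for i in range(k):
            h = int(hashlib.sha256(f"{i}:{t}".encode()).hexdigest(), 16)
            bits[h % m] = 1
    return bits

def dice(a, b):
    """Dice coefficient of two bit vectors: 2|A∩B| / (|A| + |B|)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

f1 = bloom(bigrams("Katherine"))
f2 = bloom(bigrams("Catherine"))
f3 = bloom(bigrams("Smith"))
assert dice(f1, f2) > dice(f1, f3)   # similar names score higher
```

A linkage algorithm then thresholds these similarity scores, which is what allows approximate (rather than exact-hash) matching in the de-identified setting.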
Approximate gauge symmetry of composite vector bosons
NASA Astrophysics Data System (ADS)
Suzuki, Mahiko
2010-08-01
It can be shown in a solvable field theory model that the couplings of the composite vector bosons made of a fermion pair approach the gauge couplings in the limit of strong binding. Although this phenomenon may appear accidental and special to the vector bosons made of a fermion pair, we extend it to the case of bosons being constituents and find that the same phenomenon occurs in a more intriguing way. The functional formalism not only facilitates computation but also provides us with a better insight into the generating mechanism of approximate gauge symmetry, in particular, how the strong binding and global current conservation conspire to generate such an approximate symmetry. Remarks are made on its possible relevance or irrelevance to electroweak and higher symmetries.
Approximate locality for quantum systems on graphs.
Osborne, Tobias J
2008-10-01
In this Letter we make progress on a long-standing open problem of Aaronson and Ambainis [Theory Comput. 1, 47 (2005)]: we show that if U is a sparse unitary operator with a gap Delta in its spectrum, then there exists an approximate logarithm H of U which is also sparse. The sparsity pattern of H gets more dense as 1/Delta increases. This result can be interpreted as a way to convert between local continuous-time and local discrete-time quantum processes. As an example we show that the discrete-time coined quantum walk can be realized stroboscopically from an approximately local continuous-time quantum walk. PMID:18851512
Approximation of pseudospectra on a Hilbert space
NASA Astrophysics Data System (ADS)
Schmidt, Torge; Lindner, Marko
2016-06-01
The study of spectral properties of linear operators on an infinite-dimensional Hilbert space is of great interest. This task is especially difficult when the operator is non-selfadjoint or even non-normal. Standard approaches like spectral approximation by finite sections generally fail in that case. In this talk we present an algorithm which rigorously computes upper and lower bounds for the spectrum and pseudospectrum of such operators using finite-dimensional approximations. One of our main fields of research is an efficient implementation of this algorithm. To this end we will demonstrate and evaluate methods for the computation of the pseudospectrum of finite-dimensional operators based on continuation techniques.
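For a finite-dimensional approximation, membership of a point z in the ε-pseudospectrum reduces to checking whether the smallest singular value of A - zI is at most ε. A minimal sketch for hypothetical 2x2 examples (closed-form singular values, not the paper's rigorous infinite-dimensional algorithm) shows why nonnormality matters:

```python
import math

def smin_2x2(A, z):
    """Smallest singular value of (A - z I) for a 2x2 complex matrix:
    the square root of the smaller eigenvalue of (A - zI)^H (A - zI)."""
    a, b = A[0][0] - z, A[0][1]
    c, d = A[1][0], A[1][1] - z
    b11 = abs(a) ** 2 + abs(c) ** 2            # entries of the Hermitian
    b22 = abs(b) ** 2 + abs(d) ** 2            # matrix M^H M
    b12 = a.conjugate() * b + c.conjugate() * d
    tr, det = b11 + b22, b11 * b22 - abs(b12) ** 2
    lam_min = (tr - math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2
    return math.sqrt(max(lam_min, 0.0))

jordan = [[0, 1], [0, 0]]    # nonnormal Jordan block, spectrum {0}
normal = [[0, 0], [0, 1]]    # normal matrix, spectrum {0, 1}
# for a normal matrix, smin(A - zI) equals the distance to the spectrum ...
assert abs(smin_2x2(normal, 0.3 + 0j) - 0.3) < 1e-12
# ... but the nonnormal Jordan block has a far larger pseudospectrum:
# z = 0.3 already lies inside its 0.1-pseudospectrum
assert smin_2x2(jordan, 0.3 + 0j) < 0.1
```

Scanning smin over a grid of z values yields the usual pseudospectrum plot; the difficulty the abstract addresses is making such finite-dimensional computations give rigorous bounds for the infinite-dimensional operator.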
Weizsacker-Williams approximation in quantum chromodynamics
NASA Astrophysics Data System (ADS)
Kovchegov, Yuri V.
The Weizsacker-Williams approximation for a large nucleus in quantum chromodynamics is developed. The non-Abelian Wieizsacker Williams field for a large ultrarelativistic nucleus is constructed. This field is an exact solution of the classical Yang-Mills equations of motion in light cone gauge. The connection is made to the McLerran- Venugopalan model of a large nucleus, and the color charge density for a nucleus in this model is found. The density of states distribution, as a function of color charge density, is proved to be Gaussian. We construct the Feynman diagrams in the light cone gauge which correspond to the classical Weizsacker Williams field. Analyzing these diagrams we obtain a limitation on using the quasi-classical approximation for nuclear collisions.
Small Clique Detection and Approximate Nash Equilibria
NASA Astrophysics Data System (ADS)
Minder, Lorenz; Vilenchik, Dan
Recently, Hazan and Krauthgamer showed [12] that if, for a fixed small ɛ, an ɛ-best ɛ-approximate Nash equilibrium can be found in polynomial time in two-player games, then it is also possible to find a planted clique in G n, 1/2 of size C logn, where C is a large fixed constant independent of ɛ. In this paper, we extend their result to show that if an ɛ-best ɛ-approximate equilibrium can be efficiently found for arbitrarily small ɛ> 0, then one can detect the presence of a planted clique of size (2 + δ) logn in G n, 1/2 in polynomial time for arbitrarily small δ> 0. Our result is optimal in the sense that graphs in G n, 1/2 have cliques of size (2 - o(1)) logn with high probability.
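For contrast with the hard logarithmic regime discussed above, a hypothetical sketch of the planted clique model itself: when the planted clique is much larger, of order sqrt(n log n) or more, even a plain degree heuristic recovers most of it (the assertion threshold below is deliberately loose).

```python
import random

def planted_clique_graph(n, k, seed=0):
    """G(n, 1/2) with a k-clique planted on vertices 0..k-1 (illustrative)."""
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            e = True if i < k and j < k else rng.random() < 0.5
            adj[i][j] = adj[j][i] = e
    return adj

n, k = 200, 60
adj = planted_clique_graph(n, k)
deg = [sum(row) for row in adj]
top = sorted(range(n), key=lambda v: -deg[v])[:k]   # k highest-degree vertices
overlap = sum(1 for v in top if v < k)
# a clique this large shifts member degrees by ~k/2, several standard
# deviations, so most clique vertices land in the top-k by degree; the
# paper's regime of (2 + delta) log n sized cliques is far harder
assert overlap >= 45
```

The hardness result in the abstract concerns precisely the sizes where this kind of easy statistic carries no signal.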
Flow past a porous approximate spherical shell
NASA Astrophysics Data System (ADS)
Srinivasacharya, D.
2007-07-01
In this paper, the creeping flow of an incompressible viscous liquid past a porous approximate spherical shell is considered. The flow in the free fluid region outside the shell and in the cavity region of the shell is governed by the Navier Stokes equation. The flow within the porous annulus region of the shell is governed by Darcy’s Law. The boundary conditions used at the interface are continuity of the normal velocity, continuity of the pressure and Beavers and Joseph slip condition. An exact solution for the problem is obtained. An expression for the drag on the porous approximate spherical shell is obtained. The drag experienced by the shell is evaluated numerically for several values of the parameters governing the flow.
Approximate Solutions in Planted 3-SAT
NASA Astrophysics Data System (ADS)
Hsu, Benjamin; Laumann, Christopher; Moessner, Roderich; Sondhi, Shivaji
2013-03-01
In many computational settings there are instances where finding a solution requires computing time that grows exponentially in the number of variables. Concrete examples occur in combinatorial optimization problems and cryptography in computer science, or glassy systems in physics. However, while exact solutions are often known to require exponential time, a related and important question is the running time required to find approximate solutions. Treating this as a problem in statistical physics at finite temperature, we examine the computational running time required to find approximate solutions in 3-satisfiability for randomly generated 3-SAT instances which are guaranteed to have a solution. Analytic predictions are corroborated by numerical evidence using stochastic local search algorithms. A first-order transition is found in the running time of these algorithms.
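A planted 3-SAT instance and a stochastic local search on it can be sketched as follows; this uses Schöning-style random walk with restarts as a generic representative of the algorithm class, not necessarily the algorithms studied in the paper, and the instance generator simply rejects clauses violated by the hidden planted assignment.

```python
import random

def planted_3sat(n, m, rng):
    """m random 3-clauses over n variables, each satisfied by a hidden
    planted assignment sigma (so the instance is satisfiable by design)."""
    sigma = [rng.random() < 0.5 for _ in range(n)]
    clauses = []
    while len(clauses) < m:
        vs = rng.sample(range(n), 3)
        lits = [(v, rng.random() < 0.5) for v in vs]
        if any(sigma[v] == s for v, s in lits):   # keep satisfied clauses only
            clauses.append(lits)
    return sigma, clauses

def schoening(n, clauses, rng, restarts=500):
    """Schöning's random walk: restart from a random assignment, then
    repeatedly flip a random variable of a random unsatisfied clause."""
    for _ in range(restarts):
        a = [rng.random() < 0.5 for _ in range(n)]
        for _ in range(3 * n):
            unsat = [c for c in clauses if not any(a[v] == s for v, s in c)]
            if not unsat:
                return a
            v, _ = rng.choice(rng.choice(unsat))
            a[v] = not a[v]
    return None

rng = random.Random(7)
sigma, clauses = planted_3sat(12, 50, rng)
a = schoening(12, clauses, rng)
assert a is not None
assert all(any(a[v] == s for v, s in c) for c in clauses)
```

The quantity the paper studies is how the number of flips needed by such searches scales with instance size and with the required approximation quality.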
Uncertainty relations for approximation and estimation
NASA Astrophysics Data System (ADS)
Lee, Jaeha; Tsutsui, Izumi
2016-05-01
We present a versatile inequality of uncertainty relations which are useful when one approximates an observable and/or estimates a physical parameter based on the measurement of another observable. It is shown that the optimal choice for proxy functions used for the approximation is given by Aharonov's weak value, which also determines the classical Fisher information in parameter estimation, turning our inequality into the genuine Cramér-Rao inequality. Since the standard form of the uncertainty relation arises as a special case of our inequality, and since the parameter estimation is available as well, our inequality can treat both the position-momentum and the time-energy relations in one framework albeit handled differently.
Approximate inverse preconditioners for general sparse matrices
Chow, E.; Saad, Y.
1994-12-31
Preconditioned Krylov subspace methods are often very efficient in solving the sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to obtain the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
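A minimal dense NumPy sketch of the column-oriented idea described above: each column m_j of the approximate inverse M is improved by a few minimal-residual steps on ||e_j − A m_j||. The practical method keeps each column sparse by dropping small entries after every update (the "sparse mode" the abstract emphasizes), which this dense illustration omits; the function name and initial guess are illustrative assumptions.

```python
import numpy as np

def approx_inverse_mr(A, n_iter=5):
    """Columnwise minimal-residual sketch of an approximate inverse M ~ A^{-1}.

    For each column j, iterate m <- m + alpha * r with r = e_j - A m and
    alpha chosen to minimize ||r - alpha * A r||, i.e.
    alpha = (r . Ar) / (Ar . Ar).  Dense arrays are used for clarity only.
    """
    n = A.shape[0]
    M = np.eye(n) / np.linalg.norm(A, 1)  # simple scaled-identity initial guess
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        m = M[:, j].copy()
        for _ in range(n_iter):
            r = e - A @ m          # current column residual
            Ar = A @ r
            denom = Ar @ Ar
            if denom == 0.0:
                break
            alpha = (r @ Ar) / denom  # 1-D residual-norm minimizer
            m = m + alpha * r
        M[:, j] = m
    return M
```

Because the columns are independent, the loop over j parallelizes trivially; in sparse mode each `A @ m` becomes a sparse-matrix by sparse-vector product touching only the nonzero pattern of m.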
Some approximation concepts for structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Farshi, B.
1974-01-01
An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.
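The Taylor-series approximation concept mentioned above replaces an expensive response quantity (a stress or displacement constraint value $g_j$) by its first-order expansion about the current design $\mathbf{x}^0$, so that many optimizer iterations can reuse one structural analysis. Stated generically (approximation-concepts formulations often pose this in reciprocal sizing variables, for which the expansion is especially accurate for trusses):

```latex
g_j(\mathbf{x}) \;\approx\; g_j(\mathbf{x}^0)
\;+\; \sum_i \left.\frac{\partial g_j}{\partial x_i}\right|_{\mathbf{x}^0}
\left( x_i - x_i^0 \right)
```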
Some approximation concepts for structural synthesis.
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Farshi, B.
1973-01-01
An efficient automated minimum weight design procedure is presented which is applicable to sizing structural systems that can be idealized by truss, shear panel, and constant strain triangles. Static stress and displacement constraints under alternative loading conditions are considered. The optimization algorithm is an adaptation of the method of inscribed hyperspheres and high efficiency is achieved by using several approximation concepts including temporary deletion of noncritical constraints, design variable linking, and Taylor series expansions for response variables in terms of design variables. Optimum designs for several planar and space truss example problems are presented. The results reported support the contention that the innovative use of approximation concepts in structural synthesis can produce significant improvements in efficiency.
Second derivatives for approximate spin projection methods
Thompson, Lee M.; Hratchian, Hrant P.
2015-02-07
The use of broken-symmetry electronic structure methods is required in order to obtain correct behavior of electronically strained open-shell systems, such as transition states, biradicals, and transition metals. This approach, however, often suffers from spin contamination, which can lead to significant errors in predicted energies, geometries, and properties. Approximate projection schemes are able to correct for spin contamination and can often yield improved results. To make full use of these methods and to explore the potential energy surface, it is desirable to develop an efficient theory of second energy derivatives. In this paper, we formulate the analytical second derivatives for the Yamaguchi approximate projection scheme, building on recent work that has yielded an efficient implementation of the analytical first derivatives.