Sample records for local equivalence problem

  1. How many invariant polynomials are needed to decide local unitary equivalence of qubit states?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maciążek, Tomasz; Faculty of Physics, University of Warsaw, ul. Hoża 69, 00-681 Warszawa; Oszmaniec, Michał

    2013-09-15

    Given L-qubit states with fixed spectra of the reduced one-qubit density matrices, we find a formula for the minimal number of invariant polynomials needed to solve the local unitary (LU) equivalence problem, that is, the problem of deciding whether two states can be connected by local unitary operations. Interestingly, this number is not the same for every collection of spectra: some spectra require fewer polynomials to solve the LU equivalence problem than others. The result is obtained using geometric methods, i.e., by calculating the dimensions of reduced spaces stemming from the symplectic reduction procedure.

  2. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications involve heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic in a distributed computing environment. To use these local resources together to solve larger geospatial information processing problems that concern an overall situation, we propose, with the support of peer-to-peer computing technologies, a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a construct termed the Equivalent Distributed Program of global geospatial queries, in order to solve geospatial distributed computing problems in heterogeneous GIS environments. First, we present a geospatial query process schema for distributed computing, together with a method for the equivalent transformation of a global geospatial query into distributed local queries at the SQL (Structured Query Language) level, to solve the coordination problem among heterogeneous resources. Second, peer-to-peer technologies are used to maintain a loosely coupled network of autonomous geospatial information resources, achieving decentralized and consistent synchronization among global geospatial resource directories and carrying out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries illustrate the procedure of global geospatial information processing.

  3. Particle swarm optimization and its application in MEG source localization using single time sliced data

    NASA Astrophysics Data System (ADS)

    Lin, Juan; Liu, Chenglian; Guo, Yongning

    2014-10-01

    The estimation of neural source activity from magnetoencephalography (MEG) data is a critical problem for both clinical neurology and research on brain function. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths in the brain when using a single equivalent current dipole (sECD) model and single time-sliced data. The results show that PSO is an effective global optimization method for MEG source localization when given one dipole at different depths.
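
    To make the optimization step concrete, the following is a minimal global-best PSO sketch in Python. The toy quadratic misfit, the search bounds, and all parameter values (inertia w, acceleration constants c1 and c2) are illustrative stand-ins for a real dipole-fitting objective, not the paper's actual setup.

```python
import random

def pso(objective, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer; `objective` maps a position to a cost."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_cost = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            cost = objective(pos[i])
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], cost
                if cost < gbest_cost:
                    gbest, gbest_cost = pos[i][:], cost
    return gbest, gbest_cost

# Toy stand-in for a dipole misfit: squared distance to a "true" source position (metres).
true_pos = (0.02, -0.03, 0.07)
misfit = lambda p: sum((a - b) ** 2 for a, b in zip(p, true_pos))
best, best_cost = pso(misfit, bounds=[(-0.1, 0.1)] * 3)
```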

  4. Localization of the eigenvalues of linear integral equations with applications to linear ordinary differential equations.

    NASA Technical Reports Server (NTRS)

    Sloss, J. M.; Kranzler, S. K.

    1972-01-01

    The equivalence of a considered integral equation form with an infinite system of linear equations is proved, and the localization of the eigenvalues of the infinite system is expressed. Error estimates are derived, and the problems of finding upper bounds and lower bounds for the eigenvalues are solved simultaneously.

  5. Short-time quantum dynamics of sharp boundaries potentials

    NASA Astrophysics Data System (ADS)

    Granot, Er'el; Marchewka, Avi

    2015-02-01

    Despite the high prevalence of singular potentials in general, and rectangular potentials in particular, in applied scattering models, to date little is known about their short-time effects. The reason is that singular potentials cause a mixture of complicated local and non-local effects. The object of this work is to derive a generic method to calculate analytically the short-time impact of any singular potential. It is shown that the scattering of a smooth wavefunction on a singular potential is, in the short-time regime, entirely equivalent to the free propagation of a singular wavefunction, a problem that was fully addressed analytically in Ref. [7]. This equivalence can therefore be utilized to solve analytically the short-time dynamics of any smooth wavefunction in the presence of a singular potential. In particular, with this method the short-time dynamics of any problem where a sharp-boundary potential (e.g., a rectangular barrier) is turned on instantaneously can easily be solved analytically.

  6. Application of local linearization and the transonic equivalence rule to the flow about slender analytic bodies at Mach numbers near 1.0

    NASA Technical Reports Server (NTRS)

    Tyson, R. W.; Muraca, R. J.

    1975-01-01

    The local linearization method for axisymmetric flow is combined with the transonic equivalence rule to calculate pressure distributions on slender bodies at free-stream Mach numbers from 0.8 to 1.2. This is an approximate solution to the transonic flow problem which yields results applicable during the preliminary design stages of a configuration development. The method can be used to determine the aerodynamic loads on parabolic-arc bodies having either circular or elliptical cross sections. It is particularly useful in predicting pressure distributions and normal-force distributions along the body at small angles of attack. The equations discussed may be extended to include wing-body combinations.

  7. Distance Constraint Satisfaction Problems

    NASA Astrophysics Data System (ADS)

    Bodirsky, Manuel; Dalmau, Victor; Martin, Barnaby; Pinsker, Michael

    We study the complexity of constraint satisfaction problems for templates Γ that are first-order definable in (Z; suc), the integers with the successor relation. Assuming a widely believed conjecture from finite domain constraint satisfaction (we require the tractability conjecture by Bulatov, Jeavons and Krokhin in the special case of transitive finite templates), we provide a full classification for the case that Γ is locally finite (i.e., the Gaifman graph of Γ has finite degree). We show that one of the following is true: the structure Γ is homomorphically equivalent to a structure with a certain majority polymorphism (which we call modular median) and CSP(Γ) can be solved in polynomial time; or Γ is homomorphically equivalent to a finite transitive structure; or CSP(Γ) is NP-complete.

  8. Optimal matching for prostate brachytherapy seed localization with dimension reduction.

    PubMed

    Lee, Junghoon; Labat, Christian; Jain, Ameet K; Song, Danny Y; Burdette, Everette C; Fichtinger, Gabor; Prince, Jerry L

    2009-01-01

    In prostate brachytherapy, x-ray fluoroscopy has been used for intra-operative dosimetry to provide qualitative assessment of implant quality. More recent developments have made possible 3D localization of the implanted radioactive seeds. This is usually modeled as an assignment problem and solved by resolving the correspondence of seeds. It is, however, NP-hard, and the problem is even harder in practice due to the significant number of hidden seeds. In this paper, we propose an algorithm that can find an optimal solution from multiple projection images with hidden seeds. It solves an equivalent problem with reduced dimensional complexity, thus allowing us to find an optimal solution in polynomial time. Simulation results show the robustness of the algorithm. It was validated on 5 phantom and 18 patient datasets, successfully localizing the seeds with a detection rate of ≥ 97.6% and a reconstruction error of ≤ 1.2 mm. This is considered to be clinically excellent performance.
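
    The correspondence formulation can be illustrated with a deliberately tiny brute-force solver. The 3x3 cost matrix below is invented (think of the entries as reprojection distances between candidate seed matches across views); the paper's point is precisely that a dimension-reduced reformulation avoids this exponential search.

```python
from itertools import permutations

def optimal_matching(cost):
    """Brute-force the assignment problem: match each seed detected in one view
    (rows) to a distinct seed in another view (columns), minimizing total cost.
    Exponential in n -- fine for illustration only; the paper's dimension-reduced
    reformulation is what makes the clinical-scale problem solvable in polynomial time."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# Invented 3x3 cost matrix, e.g. reprojection distances between candidates.
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]
match, total = optimal_matching(cost)
```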

  9. Localization from near-source quasi-static electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Mosher, J. C.

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramér-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
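
    The MUSIC principle, that steering vectors of true sources are orthogonal to the noise subspace so the pseudospectrum peaks at source locations, can be sketched for the simplest case of a uniform linear array with one noise-free source. Here the noise projector is built directly from the known steering vector rather than from an eigendecomposition of a sample covariance, and the array geometry and angles are hypothetical, not the thesis's quasi-static MEG setup.

```python
import cmath, math

def steering(theta_deg, m_sensors, spacing=0.5):
    """Narrowband steering vector of a uniform linear array (half-wavelength spacing)."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(-2j * math.pi * spacing * m * s) for m in range(m_sensors)]

def music_spectrum(true_theta, m_sensors, grid):
    # Noise-free single-source shortcut: the signal subspace is spanned by the true
    # steering vector a0, so the noise projector is P = I - a0 a0^H / (a0^H a0).
    # With measured data one would instead eigendecompose the sample covariance.
    a0 = steering(true_theta, m_sensors)
    norm0 = sum(abs(x) ** 2 for x in a0)
    spectrum = []
    for theta in grid:
        a = steering(theta, m_sensors)
        inner = sum(x.conjugate() * y for x, y in zip(a0, a))          # a0^H a
        denom = sum(abs(x) ** 2 for x in a) - abs(inner) ** 2 / norm0  # a^H P a
        spectrum.append(1.0 / max(denom, 1e-12))                       # pseudospectrum
    return spectrum

grid = list(range(-60, 61))                                # candidate angles, degrees
spec = music_spectrum(20, m_sensors=8, grid=grid)
est = grid[max(range(len(grid)), key=lambda i: spec[i])]   # peak -> estimated angle
```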

  10. Localization from near-source quasi-static electromagnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, John Compton

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramér-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.

  11. Local unitary equivalence of quantum states and simultaneous orthogonal equivalence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jing, Naihuan, E-mail: jing@ncsu.edu; Yang, Min; Zhao, Hui, E-mail: zhaohui@bjut.edu.cn

    2016-06-15

    The correspondence between local unitary equivalence of bipartite quantum states and simultaneous orthogonal equivalence is thoroughly investigated and strengthened. It is proved that local unitary equivalence can be studied through simultaneous similarity under projective orthogonal transformations, and four parametrization-independent algorithms are proposed to judge when two density matrices on ℂ^{d1} ⊗ ℂ^{d2} are locally unitarily equivalent, in connection with trace identities, Kronecker pencils, Albert determinants and Smith normal forms.
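
    In the simplest single-qubit case (not the bipartite setting of the paper), trace identities already decide unitary equivalence: Tr(ρ) and Tr(ρ²) fix the spectrum of a 2×2 density matrix. A small sketch with invented matrices; rho_rotated is rho conjugated by the Hadamard gate, so it must share the invariants.

```python
def mat_mul(a, b):
    """Plain 2x2 (or nxn) matrix product on nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

def unitary_invariants(rho):
    # Tr(rho) and Tr(rho^2) determine the spectrum of a 2x2 Hermitian matrix,
    # so they decide unitary equivalence for single-qubit density matrices.
    return (round(trace(rho), 10), round(trace(mat_mul(rho, rho)), 10))

rho = [[0.75, 0.0], [0.0, 0.25]]
rho_rotated = [[0.5, 0.25], [0.25, 0.5]]   # H rho H for the Hadamard H: same spectrum
rho_other = [[0.6, 0.0], [0.0, 0.4]]       # different spectrum, hence not LU-equivalent
```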

  12. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…
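
    As a generic illustration of escaping local minima, and not the HICLAS-specific algorithm evaluated in the paper, here is a bare-bones simulated annealing sketch on an invented two-well objective; all parameter values are hypothetical.

```python
import math, random

def simulated_annealing(objective, x0, n_iters=5000, t0=1.0, cooling=0.999, step=1.0, seed=1):
    """Generic 1-D simulated annealing: accept uphill moves with probability
    exp(-delta / T) so the search can climb out of local minima."""
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(n_iters):
        cand = x + rng.gauss(0.0, step)
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling          # geometric cooling schedule
    return best, fbest

# Invented two-well objective: local minimum near x = +1, global minimum near x = -1.04.
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
best, fbest = simulated_annealing(f, x0=1.0)   # start in the *local* basin
```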

  13. Non-local sub-characteristic zones of influence in unsteady interactive boundary-layers

    NASA Technical Reports Server (NTRS)

    Rothmayer, A. P.

    1992-01-01

    The properties of incompressible, unsteady, interactive, boundary layers are examined for a model hypersonic boundary layer and internal flow past humps or, equivalently, external flow past short-scaled humps. Using a linear high frequency analysis, it is shown that the domains of dependence within the viscous sublayer may be a strong function of position within the sublayer and may be strongly influenced by the pressure displacement interaction, or the prescribed displacement condition. Detailed calculations are presented for the hypersonic boundary layer. This effect is found to carry over directly to the fully viscous problem as well as the nonlinear problem. In the fully viscous problem, the non-local character of the domains of dependence manifests itself in the sub-characteristics. Potential implications of the domain of dependence structure on finite difference computations of unsteady boundary layers are briefly discussed.

  14. Equivalent theories redefine Hamiltonian observables to exhibit change in general relativity

    NASA Astrophysics Data System (ADS)

    Pitts, J. Brian

    2017-03-01

    Change and local spatial variation are missing in canonical General Relativity’s observables as usually defined, an aspect of the problem of time. Definitions can be tested using equivalent formulations of a theory, non-gauge and gauge, because they must have equivalent observables and everything is observable in the non-gauge formulation. Taking an observable from the non-gauge formulation and finding the equivalent in the gauge formulation, one requires that the equivalent be an observable, thus constraining definitions. For massive photons, the de Broglie-Proca non-gauge formulation observable A_μ is equivalent to the Stueckelberg-Utiyama gauge formulation quantity A_μ + ∂_μφ, which must therefore be an observable. To achieve that result, observables must have 0 Poisson bracket not with each first-class constraint, but with the Rosenfeld-Anderson-Bergmann-Castellani gauge generator G, a tuned sum of first-class constraints, in accord with the Pons-Salisbury-Sundermeyer definition of observables. The definition for external gauge symmetries can be tested using massive gravity, where one can install gauge freedom by parametrization with clock fields X^A. The non-gauge observable g^{μν} has the gauge equivalent X^A_{,μ} g^{μν} X^B_{,ν}. The Poisson bracket of X^A_{,μ} g^{μν} X^B_{,ν} with G turns out to be not 0 but a Lie derivative. This non-zero Poisson bracket refines and systematizes Kuchař’s proposal to relax the 0 Poisson bracket condition with the Hamiltonian constraint. Thus observables need covariance, not invariance, in relation to external gauge symmetries. The Lagrangian and Hamiltonian for massive gravity are those of General Relativity + Λ + 4 scalars, so the same definition of observables applies to General Relativity. Local fields such as g^{μν} are observables. Thus observables change. Requiring equivalent observables for equivalent theories also recovers Hamiltonian-Lagrangian equivalence.

  15. Final report on CCT-K6: Comparison of local realisations of dew-point temperature scales in the range -50 °C to +20 °C

    NASA Astrophysics Data System (ADS)

    Bell, S.; Stevens, M.; Abe, H.; Benyon, R.; Bosma, R.; Fernicola, V.; Heinonen, M.; Huang, P.; Kitano, H.; Li, Z.; Nielsen, J.; Ochi, N.; Podmurnaya, O. A.; Scace, G.; Smorgon, D.; Vicente, T.; Vinge, A. F.; Wang, L.; Yi, H.

    2015-01-01

    A key comparison in dew-point temperature was carried out among the national standards held by NPL (pilot), NMIJ, INTA, VSL, INRIM, MIKES, NIST, NIM, VNIIFTRI-ESB and NMC. A pair of condensation-principle dew-point hygrometers was circulated and used to compare the local realisations of dew point for participant humidity generators in the range -50 °C to +20 °C. The duration of the comparison was prolonged by numerous problems with the hygrometers, requiring some repairs, and several additional check measurements by the pilot. Despite the problems and the extended timescale, the comparison was effective in providing evidence of equivalence. Agreement with the key comparison reference value was achieved in the majority of cases, and bilateral degrees of equivalence are also reported. The main text of this paper appears in Appendix B of the BIPM key comparison database kcdb.bipm.org/. The final report has been peer-reviewed and approved for publication by the CCT, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).

  16. Limitations to Teaching Children 2 + 2 = 4: Typical Arithmetic Problems Can Hinder Learning of Mathematical Equivalence

    ERIC Educational Resources Information Center

    McNeil, Nicole M.

    2008-01-01

    Do typical arithmetic problems hinder learning of mathematical equivalence? Second and third graders (7-9 years old; N= 80) received lessons on mathematical equivalence either with or without typical arithmetic problems (e.g., 15 + 13 = 28 vs. 28 = 28, respectively). Children then solved math equivalence problems (e.g., 3 + 9 + 5 = 6 + __),…

  17. Lower bound on the time complexity of local adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Chen, Zhenghao; Koh, Pang Wei; Zhao, Yan

    2006-11-01

    The adiabatic theorem of quantum physics has been, in recent times, utilized in the design of local search quantum algorithms, and has been proven to be equivalent to standard quantum computation, that is, the use of unitary operators [D. Aharonov in Proceedings of the 45th Annual Symposium on the Foundations of Computer Science, 2004, Rome, Italy (IEEE Computer Society Press, New York, 2004), pp. 42-51]. Hence, the study of the time complexity of adiabatic evolution algorithms gives insight into the computational power of quantum algorithms. In this paper, we present two different approaches of evaluating the time complexity for local adiabatic evolution using time-independent parameters, thus providing effective tests (not requiring the evaluation of the entire time-dependent gap function) for the time complexity of newly developed algorithms. We further illustrate our tests by displaying results from the numerical simulation of some problems, viz. specially modified instances of the Hamming weight problem.
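
    The runtime criterion at stake can be made concrete with the standard two-level toy model: for a linear interpolation between Hamiltonians, the adiabatic runtime scales like the inverse square of the minimum spectral gap. The Hamiltonian below is a textbook example, not one of the paper's modified Hamming-weight instances.

```python
import math

def gap(s):
    """Spectral gap of the two-level interpolation H(s) = -(1 - s) * X - s * Z,
    whose eigenvalues are +/- sqrt((1 - s)**2 + s**2)."""
    return 2.0 * math.sqrt((1 - s) ** 2 + s ** 2)

# Scan the schedule parameter s in [0, 1]; the gap is smallest at the midpoint,
# and the adiabatic runtime criterion scales like 1 / g_min**2 for this schedule.
grid = [i / 1000 for i in range(1001)]
g_min = min(gap(s) for s in grid)
runtime_scale = 1.0 / g_min ** 2
```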

  18. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

    This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence enables us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art local linear approximation scheme, as well as other alternative solution techniques in the literature, in terms of solution quality. PMID:27141126
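
    For reference, the SCAD penalty of Fan and Li (2001) is a quadratic spline in |t|: linear (lasso-like) near zero, then concave, then constant beyond a·λ. A small sketch using the conventional default a = 3.7 (λ = 1 is purely illustrative):

```python
def scad_penalty(t, lam=1.0, a=3.7):
    """SCAD penalty (Fan and Li, 2001): a quadratic spline in |t| that equals the
    lasso penalty lam*|t| near zero, bends concavely on (lam, a*lam], and is
    constant at lam**2 * (a + 1) / 2 beyond a*lam."""
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2

# Values at the spline knots t = 0, lam, a*lam (continuity holds at each knot).
penalty_at_knots = [scad_penalty(0.0), scad_penalty(1.0), scad_penalty(3.7)]
```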

  19. Subspace-based analysis of the ERT inverse problem

    NASA Astrophysics Data System (ADS)

    Ben Hadj Miled, Mohamed Khames; Miller, Eric L.

    2004-05-01

    In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.

  20. A sparse equivalent source method for near-field acoustic holography.

    PubMed

    Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter

    2017-01-01

    This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
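
    The core of such CS formulations is l1-regularized least squares, which favors sparse amplitude vectors. The toy below is invented (not the paper's acoustic setup): an underdetermined 3×4 system whose sparsest consistent solution is recovered by a plain iterative soft-thresholding (ISTA) loop, used here as a generic stand-in solver.

```python
def ista(A, y, lam=0.01, n_iters=4000):
    """Iterative soft-thresholding (ISTA) for min ||A x - y||^2 + lam * ||x||_1.
    The step size is set conservatively from the Frobenius norm so the iteration is stable."""
    m, n = len(A), len(A[0])
    step = 1.0 / (2.0 * sum(a * a for row in A for a in row))
    x = [0.0] * n
    for _ in range(n_iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]   # residual A x - y
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]    # gradient
        for j in range(n):
            z = x[j] - step * g[j]
            x[j] = max(abs(z) - step * lam, 0.0) * (1.0 if z >= 0 else -1.0)   # soft threshold
    return x

# Underdetermined toy system: four unknown "source amplitudes", three measurements.
# The sparsest consistent solution is x = (0, 0, 0, 2); l1 minimization finds it.
A = [[1.0, 0.0, 0.0, 1.0],
     [0.0, 1.0, 0.0, 1.0],
     [0.0, 0.0, 1.0, 1.0]]
y = [2.0, 2.0, 2.0]
x_hat = ista(A, y)
```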

  1. WWC Review of the Report "Benefits of Practicing 4 = 2 + 2: Nontraditional Problem Formats Facilitate Children's Understanding of Mathematical Equivalence." What Works Clearinghouse Single Study Review

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2014

    2014-01-01

    The 2011 study, "Benefits of Practicing 4 = 2 + 2: Nontraditional Problem Formats Facilitate Children's Understanding of Mathematical Equivalence," examined the effects of addition practice using nontraditional problem formats on students' understanding of mathematical equivalence. In nontraditional problem formats, operations appear on…

  2. Equivalence in Symbolic and Nonsymbolic Contexts: Benefits of Solving Problems with Manipulatives

    ERIC Educational Resources Information Center

    Sherman, Jody; Bisanz, Jeffrey

    2009-01-01

    Children's failure on equivalence problems (e.g., 5 + 4 = 7 + __) is believed to be the result of misunderstanding the equal sign and has been tested using symbolic problems (including "="). For Study 1 (N = 48), we designed a nonsymbolic method for presenting equivalence problems to determine whether Grade 2 children's difficulty is due…

  3. 47 CFR 51.903 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... equivalent of the incumbent local exchange carrier access service provided by a non-incumbent local exchange... or other customer provided by an incumbent local exchange carrier or any functional equivalent of the incumbent local exchange carrier access service provided by a non-incumbent local exchange carrier...

  4. 47 CFR 51.903 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... equivalent of the incumbent local exchange carrier access service provided by a non-incumbent local exchange... or other customer provided by an incumbent local exchange carrier or any functional equivalent of the incumbent local exchange carrier access service provided by a non-incumbent local exchange carrier...

  5. 47 CFR 51.903 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... equivalent of the incumbent local exchange carrier access service provided by a non-incumbent local exchange... or other customer provided by an incumbent local exchange carrier or any functional equivalent of the incumbent local exchange carrier access service provided by a non-incumbent local exchange carrier...

  6. Initial conditions and degrees of freedom of non-local gravity

    NASA Astrophysics Data System (ADS)

    Calcagni, Gianluca; Modesto, Leonardo; Nardelli, Giuseppe

    2018-05-01

    We prove the equivalence between non-local gravity with an arbitrary form factor and a non-local gravitational system with an extra rank-2 symmetric tensor. Thanks to this reformulation, we use the diffusion-equation method to transform the dynamics of renormalizable non-local gravity with exponential operators into a higher-dimensional system local in spacetime coordinates. This method, first illustrated with a scalar field theory and then applied to gravity, allows one to solve the Cauchy problem and count the number of initial conditions and of non-perturbative degrees of freedom, which is finite. In particular, the non-local scalar and gravitational theories with exponential operators are both characterized by four initial conditions in any dimension and, respectively, by one and eight degrees of freedom in four dimensions. The fully covariant equations of motion are written in a form convenient to find analytic non-perturbative solutions.

  7. Locally covariant quantum field theory and the problem of formulating the same physics in all space-times.

    PubMed

    Fewster, Christopher J

    2015-08-06

    The framework of locally covariant quantum field theory is discussed, motivated in part using 'ignorance principles'. It is shown how theories can be represented by suitable functors, so that physical equivalence of theories may be expressed via natural isomorphisms between the corresponding functors. The inhomogeneous scalar field is used to illustrate the ideas. It is argued that there are two reasonable definitions of the local physical content associated with a locally covariant theory; when these coincide, the theory is said to be dynamically local. The status of the dynamical locality condition is reviewed, as are its applications in relation to (i) the foundational question of what it means for a theory to represent the same physics in different space-times and (ii) a no-go result on the existence of natural states. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  8. Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Ishii, Hiroaki

    2009-01-01

    This paper considers robust programming problems based on the mean-variance model, including uncertainty sets and fuzzy factors. Since these problems are not well defined due to the fuzzy factors, it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, a solution method is constructed by introducing the mean-absolute deviation and performing equivalent transformations.
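
    The two risk measures being swapped here, variance and mean-absolute deviation, are simple to state; a minimal sketch on an invented return series (the numbers are illustrative only):

```python
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Population variance: the risk measure in the classical mean-variance model."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def mean_abs_dev(xs):
    """Mean absolute deviation: the piecewise-linear risk proxy whose use turns the
    equivalent problem into a linear rather than quadratic program."""
    m = mean(xs)
    return sum(abs(x - m) for x in xs) / len(xs)

returns = [0.02, -0.01, 0.03, 0.00, 0.01]   # invented return series
risk_mv, risk_mad = variance(returns), mean_abs_dev(returns)
```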

  9. Noticing relevant problem features: activating prior knowledge affects problem solving by guiding encoding

    PubMed Central

    Crooks, Noelle M.; Alibali, Martha W.

    2013-01-01

    This study investigated whether activating elements of prior knowledge can influence how problem solvers encode and solve simple mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __). Past work has shown that such problems are difficult for elementary school students (McNeil and Alibali, 2000). One possible reason is that children's experiences in math classes may encourage them to think about equations in ways that are ultimately detrimental. Specifically, children learn a set of patterns that are potentially problematic (McNeil and Alibali, 2005a): the perceptual pattern that all equations follow an “operations = answer” format, the conceptual pattern that the equal sign means “calculate the total”, and the procedural pattern that the correct way to solve an equation is to perform all of the given operations on all of the given numbers. Upon viewing an equivalence problem, knowledge of these patterns may be reactivated, leading to incorrect problem solving. We hypothesized that these patterns may negatively affect problem solving by influencing what people encode about a problem. To test this hypothesis in children would require strengthening their misconceptions, and this could be detrimental to their mathematical development. Therefore, we tested this hypothesis in undergraduate participants. Participants completed either control tasks or tasks that activated their knowledge of the three patterns, and were then asked to reconstruct and solve a set of equivalence problems. Participants in the knowledge activation condition encoded the problems less well than control participants. They also made more errors in solving the problems, and their errors resembled the errors children make when solving equivalence problems. Moreover, encoding performance mediated the effect of knowledge activation on equivalence problem solving. Thus, one way in which experience may affect equivalence problem solving is by influencing what students encode about the equations. 
PMID:24324454

  10. Neutrino Masses in the Landscape and Global-Local Dualities in Eternal Inflation

    NASA Astrophysics Data System (ADS)

    Mainemer Katz, Dan

    In this dissertation we study two topics in Theoretical Cosmology: one more formal, the other more phenomenological. We work in the context of eternally inflating cosmologies. These arise in any fundamental theory that contains at least one stable or metastable de Sitter vacuum. Each topic is presented in a different chapter: Chapter 1 deals with the measure problem in eternal inflation. Global-local duality is the equivalence of seemingly different regulators in eternal inflation. For example, the light-cone time cutoff (a global measure, which regulates time) makes the same predictions as the causal patch (a local measure that cuts off space). We show that global-local duality is far more general. It rests on a redundancy inherent in any global cutoff: at late times, an attractor regime is reached, characterized by the unlimited exponential self-reproduction of a certain fundamental region of spacetime. An equivalent local cutoff can be obtained by restricting to this fundamental region. We derive local duals to several global cutoffs of interest. The New Scale Factor Cutoff is dual to the Short Fat Geodesic, a geodesic of fixed infinitesimal proper width. Vilenkin's CAH Cutoff is equivalent to the Hubbletube, whose width is proportional to the local Hubble volume. The famous youngness problem of the Proper Time Cutoff can be readily understood by considering its local dual, the Incredible Shrinking Geodesic. The chapter closely follows our paper. Chapter 2 deals with the question of whether neutrino masses could be anthropically explained. The sum of active neutrino masses is well constrained, 58 meV ≤ mν ≲ 0.23 eV, but the origin of this scale is not well understood. Here we investigate the possibility that it arises by environmental selection in a large landscape of vacua. Earlier work had noted the detrimental effects of neutrinos on large scale structure.
However, using Boltzmann codes to compute the smoothed density contrast on Mpc scales, we find that dark matter halos form abundantly for mν ≳ 10 eV. This finding rules out an anthropic origin of mν, unless a different catastrophic boundary can be identified. Here we argue that galaxy formation becomes inefficient for mν ≳ 10 eV. We show that in this regime, structure forms late and is dominated by cluster scales, as in a top-down scenario. This is catastrophic: baryonic gas will cool too slowly to form stars in an abundance comparable to our universe. With this novel cooling boundary, we find that the anthropic prediction for mν agrees at better than 2σ with current observational bounds. A degenerate hierarchy is mildly preferred. The chapter closely follows our paper.

  11. A novel anti-windup framework for cascade control systems: an application to underactuated mechanical systems.

    PubMed

    Mehdi, Niaz; Rehan, Muhammad; Malik, Fahad Mumtaz; Bhatti, Aamer Iqbal; Tufail, Muhammad

    2014-05-01

    This paper describes the anti-windup compensator (AWC) design methodologies for stable and unstable cascade plants with cascade controllers facing actuator saturation. Two novel full-order decoupling AWC architectures, based on equivalence of the overall closed-loop system, are developed to deal with windup effects. The decoupled architectures have been developed, to formulate the AWC synthesis problem, by assuring equivalence of the coupled and the decoupled architectures, instead of using an analogy, for cascade control systems. A comparison of both AWC architectures from an application point of view is provided to consolidate their utilities. Mainly, one of the architectures is better in terms of computational complexity for implementation, while the other is suitable for unstable cascade systems. On the basis of the architectures for cascade systems facing stability and performance degradation problems in the event of actuator saturation, global AWC design methodologies utilizing linear matrix inequalities (LMIs) are developed. These LMIs are synthesized by application of the Lyapunov theory, the global sector condition and the ℒ2 gain reduction of the uncertain decoupled nonlinear component of the decoupled architecture. Further, an LMI-based local AWC design methodology is derived by utilizing a local sector condition by means of a quadratic Lyapunov function to resolve the windup problem for unstable cascade plants under saturation. To demonstrate the effectiveness of the proposed AWC schemes, an underactuated mechanical system, the ball-and-beam system, is considered, and details of the simulation and practical implementation results are described. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Regularity results for the minimum time function with Hörmander vector fields

    NASA Astrophysics Data System (ADS)

    Albano, Paolo; Cannarsa, Piermarco; Scarinci, Teresa

    2018-03-01

    In a bounded domain of Rn with boundary given by a smooth (n - 1)-dimensional manifold, we consider the homogeneous Dirichlet problem for the eikonal equation associated with a family of smooth vector fields {X1 , … ,XN } subject to Hörmander's bracket-generating condition. We investigate the regularity of the viscosity solution T of this problem. Due to the presence of characteristic boundary points, singular trajectories may occur. First, we characterize these trajectories as the closed set of all points at which the solution loses pointwise Lipschitz continuity. Then, we prove that the local Lipschitz continuity of T, the local semiconcavity of T, and the absence of singular trajectories are equivalent properties. Finally, we show that the last condition is satisfied whenever the characteristic set of {X1 , … ,XN } is a symplectic manifold. We apply our results to several examples.

  13. The Strengths and Difficulties Questionnaire (SDQ): Factor Structure and Gender Equivalence in Norwegian Adolescents.

    PubMed

    Bøe, Tormod; Hysing, Mari; Skogen, Jens Christoffer; Breivik, Kyrre

    2016-01-01

    Although frequently used with older adolescents, few studies of the factor structure, internal consistency and gender equivalence of the SDQ exist for this age group, with inconsistent findings. In the present study, confirmatory factor analysis (CFA) was used to evaluate the five-factor structure of the SDQ in a population sample of 10,254 16-18 year-olds from the youth@hordaland study. Measurement invariance across gender was assessed using multigroup CFA. A modestly modified five-factor solution fitted the data acceptably, accounting for one cross loading and some local dependencies. Importantly, partial measurement non-invariance was identified, with differential item functioning in eight items, and higher correlations between emotional and conduct problems for boys compared to girls. Implications for use clinically and in research are discussed.

  14. Theory and application of equivalent transformation relationships between plane wave and spherical wave

    NASA Astrophysics Data System (ADS)

    Wang, Yao; Yang, Zailin; Zhang, Jianwei; Yang, Yong

    2017-10-01

    Based on the governing equations and the equivalent models, we propose equivalent transformation relationships between a plane wave in a one-dimensional medium and a spherical wave in globular geometry with radially inhomogeneous properties. These equivalent relationships can help us to obtain the analytical solutions of elastodynamic problems in an inhomogeneous medium. The physical essence of the presented equivalent transformations is the equivalent relationship between the geometry and the material properties. It indicates that the spherical wave problem in globular geometry can be transformed into the plane wave problem in a bar with variable property fields, and its inverse transformation is valid as well. Four different examples of wave motion problems in inhomogeneous media are solved based on the presented equivalent relationships. We obtain two basic analytical solution forms in Examples I and II, investigate the reflection behavior of an inhomogeneous half-space in Example III, and exhibit a special inhomogeneity in Example IV, which can keep the traveling spherical wave at constant amplitude. This study implies that our approach makes solving the associated problems easier.
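
    A textbook special case illustrates the kind of plane/spherical equivalence described above (this is a standard identity for a homogeneous medium, not the paper's inhomogeneous derivation): the substitution v = ru maps the spherically symmetric wave equation onto the one-dimensional plane wave equation.

```latex
% Spherically symmetric wave equation in a homogeneous medium:
\frac{\partial^2 u}{\partial t^2}
  = c^2\left(\frac{\partial^2 u}{\partial r^2}
  + \frac{2}{r}\,\frac{\partial u}{\partial r}\right).
% Substituting v(r,t) = r\,u(r,t) gives the 1-D plane wave equation
\frac{\partial^2 v}{\partial t^2} = c^2\,\frac{\partial^2 v}{\partial r^2},
% whose d'Alembert solution yields the outgoing/incoming spherical waves
u(r,t) = \frac{1}{r}\bigl[f(r - ct) + g(r + ct)\bigr].
```

    The check is direct: with v = ru, one has v_rr = r u_rr + 2 u_r, so the two equations agree term by term.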

  15. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and is locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.
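
    The dissertation's particular nonlinear system is not reproduced in the abstract, but the Newton iteration it advocates can be sketched generically. Below is a minimal sketch for a hypothetical 2x2 system (the test system and all parameter values are assumptions, not the dissertation's), using a finite-difference Jacobian and Cramer's rule for the linear solve.

```python
def newton2(F, x0, tol=1e-12, max_iter=50, h=1e-7):
    """Newton's method for a 2x2 nonlinear system F(x, y) = (0, 0),
    with a finite-difference Jacobian and Cramer's rule for the update."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        # Finite-difference Jacobian entries.
        a = (F(x + h, y)[0] - f1) / h   # df1/dx
        b = (F(x, y + h)[0] - f1) / h   # df1/dy
        c = (F(x + h, y)[1] - f2) / h   # df2/dx
        d = (F(x, y + h)[1] - f2) / h   # df2/dy
        det = a * d - b * c
        # Solve J * (dx, dy) = -(f1, f2) by Cramer's rule.
        x += (-f1 * d + f2 * b) / det
        y += (-f2 * a + f1 * c) / det
    return x, y

# Hypothetical system: intersect the circle x^2 + y^2 = 4 with the line y = x.
root = newton2(lambda x, y: (x * x + y * y - 4.0, y - x), (1.0, 0.5))
```

    In practice the dissertation exploits the structure of its specific system; the generic sketch only shows why the quadratic convergence of Newton's method makes it attractive here.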

  16. Large-Scale Point-Cloud Visualization through Localized Textured Surface Reconstruction.

    PubMed

    Arikan, Murat; Preiner, Reinhold; Scheiblauer, Claus; Jeschke, Stefan; Wimmer, Michael

    2014-09-01

    In this paper, we introduce a novel scene representation for the visualization of large-scale point clouds accompanied by a set of high-resolution photographs. Many real-world applications deal with very densely sampled point-cloud data, which are augmented with photographs that often reveal lighting variations and inaccuracies in registration. Consequently, the high-quality representation of the captured data, i.e., both point clouds and photographs together, is a challenging and time-consuming task. We propose a two-phase approach, in which the first (preprocessing) phase generates multiple overlapping surface patches and handles the problem of seamless texture generation locally for each patch. The second phase stitches these patches at render-time to produce a high-quality visualization of the data. As a result of the proposed localization of the global texturing problem, our algorithm is more than an order of magnitude faster than equivalent mesh-based texturing techniques. Furthermore, since our preprocessing phase requires only a minor fraction of the whole data set at once, we provide maximum flexibility when dealing with growing data sets.

  17. Target detection and localization in shallow water: an experimental demonstration of the acoustic barrier problem at the laboratory scale.

    PubMed

    Marandet, Christian; Roux, Philippe; Nicolas, Barbara; Mars, Jérôme

    2011-01-01

    This study demonstrates experimentally at the laboratory scale the detection and localization of a wavelength-sized target in a shallow ultrasonic waveguide between two source-receiver arrays at 3 MHz. In the framework of the acoustic barrier problem, at the 1/1000 scale, the waveguide represents a 1.1-km-long, 52-m-deep ocean acoustic channel in the kilohertz frequency range. The two coplanar arrays record in the time-domain the transfer matrix of the waveguide between each pair of source-receiver transducers. Invoking the reciprocity principle, a time-domain double-beamforming algorithm is simultaneously performed on the source and receiver arrays. This array processing projects the multireverberated acoustic echoes into an equivalent set of eigenrays, which are defined by their launch and arrival angles. Comparison is made between the intensity of each eigenray without and with a target for detection in the waveguide. Localization is performed through tomography inversion of the acoustic impedance of the target, using all of the eigenrays extracted from double beamforming. The use of the diffraction-based sensitivity kernel for each eigenray provides both the localization and the signature of the target. Experimental results are shown in the presence of surface waves, and methodological issues are discussed for detection and localization.

  18. Lateral-deflection-controlled friction force microscopy

    NASA Astrophysics Data System (ADS)

    Fukuzawa, Kenji; Hamaoka, Satoshi; Shikida, Mitsuhiro; Itoh, Shintaro; Zhang, Hedong

    2014-08-01

    Lateral-deflection-controlled dual-axis friction force microscopy (FFM) is presented. In this method, an electrostatic force generated with a probe-incorporated micro-actuator compensates for the friction force in real time during probe scanning using feedback control. The equivalently large rigidity provided by this compensation can eliminate the apparent boundary width and lateral snap-in, which are caused by lateral probe deflection. The method can evolve FFM into a technique for quantifying local frictional properties on the micro/nanometer scale by overcoming problems inherent in dual-axis FFM.

  19. 47 CFR 69.105 - Carrier common line for non-price cap local exchange carriers.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Carrier common line for non-price cap local... for non-price cap local exchange carriers. (a) This section is applicable only to local exchange... capability to provide access for an MTS-WATS equivalent service that is substantially equivalent to the...

  20. 47 CFR 69.105 - Carrier common line for non-price cap local exchange carriers.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Carrier common line for non-price cap local... for non-price cap local exchange carriers. (a) This section is applicable only to local exchange... capability to provide access for an MTS-WATS equivalent service that is substantially equivalent to the...

  1. Waves on the Free Surface Described by Linearized Equations of Hydrodynamics with Localized Right-Hand Sides

    NASA Astrophysics Data System (ADS)

    Dobrokhotov, S. Yu.; Nazaikinskii, V. E.

    2018-01-01

    A linearized system of equations of hydrodynamics with time-dependent spatially localized right-hand side placed both on the free surface (and on the bottom of the basin) and also in the layer of the liquid is considered in a layer of variable depth with a given basic plane-parallel flow. A method of constructing asymptotic solutions of this problem is suggested; it consists of two stages: (1) a reduction of the three-dimensional problem to a two-dimensional inhomogeneous pseudodifferential equation on the nonperturbed free surface of the liquid, (2) a representation of the localized right-hand side in the form of a Maslov canonical operator on a special Lagrangian manifold and the subsequent application of a generalization to evolution problems of an approach, which was recently suggested in the paper [A. Yu. Anikin, S. Yu. Dobrokhotov, V. E. Nazaikinskii, and M. Rouleux, Dokl. Ross. Akad. Nauk 475 (6), 624-628 (2017); Engl. transl.: Dokl. Math. 96 (1), 406-410 (2017)], to solving stationary problems with localized right-hand sides and its combination with "nonstandard" characteristics. A method of calculation (generalizing long-standing results of Dobrokhotov and Zhevandrov) of an analog of the Kelvin wedge and the wave fields inside the wedge and in its neighborhood is suggested, based on the observation that the wedge is the projection to the extended configuration space of a Lagrangian manifold formed by the trajectories of the Hamiltonian vector field issuing from the intersection of the set of zeros of the extended Hamiltonian of the problem with the conormal bundle to the graph of the vector function defining the trajectory of motion of an equivalent source on the surface of the liquid.

  2. The Strengths and Difficulties Questionnaire (SDQ): Factor Structure and Gender Equivalence in Norwegian Adolescents

    PubMed Central

    Hysing, Mari; Skogen, Jens Christoffer; Breivik, Kyrre

    2016-01-01

    Although frequently used with older adolescents, few studies of the factor structure, internal consistency and gender equivalence of the SDQ exist for this age group, with inconsistent findings. In the present study, confirmatory factor analysis (CFA) was used to evaluate the five-factor structure of the SDQ in a population sample of 10,254 16–18 year-olds from the youth@hordaland study. Measurement invariance across gender was assessed using multigroup CFA. A modestly modified five-factor solution fitted the data acceptably, accounting for one cross loading and some local dependencies. Importantly, partial measurement non-invariance was identified, with differential item functioning in eight items, and higher correlations between emotional and conduct problems for boys compared to girls. Implications for use clinically and in research are discussed. PMID:27138259

  3. The Principle of Equivalence: Demonstrations of Local Effective Vertical and Horizontal

    ERIC Educational Resources Information Center

    Munera, Hector A.

    2010-01-01

    It has been suggested that Einstein's principle of equivalence (PE) should be introduced at an early stage. This principle leads to the notion of local effective gravity, which in turn defines effective vertical and horizontal directions. Local effective gravity need not coincide with the direction of terrestrial gravity. This paper describes…

  4. Problems of Translation in Cross-Cultural Research

    ERIC Educational Resources Information Center

    Sechrest, Lee; And Others

    1972-01-01

    Various types of translation problems in cross-cultural research include translation of questions or other verbal stimuli; vocabulary equivalence; equivalence in idiom, grammar, and syntax; and back-translation. (Author/SB)

  5. Singular optimal control and the identically non-regular problem in the calculus of variations

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.; Kelley, H. J.; Cliff, E. M.

    1985-01-01

    A small but interesting class of optimal control problems featuring a scalar control appearing linearly is equivalent to the class of identically nonregular problems in the Calculus of Variations. It is shown that a condition due to Mancill (1950) is equivalent to the generalized Legendre-Clebsch condition for this narrow class of problems.

  6. A spectral mimetic least-squares method for the Stokes equations with no-slip boundary condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerritsma, Marc; Bochev, Pavel

    Formulation of locally conservative least-squares finite element methods (LSFEMs) for the Stokes equations with the no-slip boundary condition has been a long-standing problem. Existing LSFEMs that yield exactly divergence free velocities require non-standard boundary conditions (Bochev and Gunzburger, 2009 [3]), while methods that admit the no-slip condition satisfy the incompressibility equation only approximately (Bochev and Gunzburger, 2009 [4, Chapter 7]). Here we address this problem by proving a new non-standard stability bound for the velocity–vorticity–pressure Stokes system augmented with a no-slip boundary condition. This bound gives rise to a norm-equivalent least-squares functional in which the velocity can be approximated by div-conforming finite element spaces, thereby enabling locally conservative approximations of this variable. Here, we also provide a practical realization of the new LSFEM using high-order spectral mimetic finite element spaces (Kreeft et al., 2011) and report several numerical tests, which confirm its mimetic properties.

  7. A spectral mimetic least-squares method for the Stokes equations with no-slip boundary condition

    DOE PAGES

    Gerritsma, Marc; Bochev, Pavel

    2016-03-22

    Formulation of locally conservative least-squares finite element methods (LSFEMs) for the Stokes equations with the no-slip boundary condition has been a long-standing problem. Existing LSFEMs that yield exactly divergence free velocities require non-standard boundary conditions (Bochev and Gunzburger, 2009 [3]), while methods that admit the no-slip condition satisfy the incompressibility equation only approximately (Bochev and Gunzburger, 2009 [4, Chapter 7]). Here we address this problem by proving a new non-standard stability bound for the velocity–vorticity–pressure Stokes system augmented with a no-slip boundary condition. This bound gives rise to a norm-equivalent least-squares functional in which the velocity can be approximated by div-conforming finite element spaces, thereby enabling locally conservative approximations of this variable. Here, we also provide a practical realization of the new LSFEM using high-order spectral mimetic finite element spaces (Kreeft et al., 2011) and report several numerical tests, which confirm its mimetic properties.

  8. Unifying Temporal and Structural Credit Assignment Problems

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian K.; Tumer, Kagan

    2004-01-01

    Single-agent reinforcement learners in time-extended domains and multi-agent systems share a common dilemma known as the credit assignment problem. Multi-agent systems have the structural credit assignment problem of determining the contributions of a particular agent to a common task. In contrast, time-extended single-agent systems have the temporal credit assignment problem of determining the contribution of a particular action to the quality of the full sequence of actions. Traditionally these two problems are considered different and are handled in separate ways. In this article we show how these two forms of the credit assignment problem are equivalent. In this unified framework, a single-agent Markov decision process can be broken down into a single-time-step multi-agent process. Furthermore, we show that Monte-Carlo estimation or Q-learning (depending on whether the values of resulting actions in the episode are known at the time of learning) are equivalent to different agent utility functions in a multi-agent system. This equivalence shows how an often neglected issue in multi-agent systems is equivalent to a well-known deficiency in multi-time-step learning and lays the basis for solving time-extended multi-agent problems, where both credit assignment problems are present.
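
    As a minimal illustration of the two credit-assignment styles the article unifies (the rewards and discount factor below are made up, not taken from the paper): the Monte-Carlo return assigns each step the full discounted future reward, while the Q-learning target bootstraps from the value of the next step; when the bootstrapped value equals the true return, the two assignments coincide.

```python
def mc_returns(rewards, gamma=0.9):
    """Monte-Carlo credit: the full discounted return following each step."""
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return out[::-1]

def q_target(reward, q_next, gamma=0.9):
    """One-step temporal-difference credit: reward plus bootstrapped value."""
    return reward + gamma * q_next

# Hypothetical 3-step episode with rewards 1, 0, 2.
returns = mc_returns([1.0, 0.0, 2.0])
```

    Here q_target(1.0, returns[1]) reproduces returns[0] exactly, which is the sense in which the bootstrapped and full-return credit assignments agree when the next-step value is known.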

  9. On non-autonomous dynamical systems

    NASA Astrophysics Data System (ADS)

    Anzaldo-Meneses, A.

    2015-04-01

    In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time dependent nonlinear Hamiltonians is analyzed. This type of problem allows one to find invariants by a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and leads naturally to an infinite linear set of differential equations, under certain circumstances. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic function identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriate set of coordinates relates the geometric setting to that of the three dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along a superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems. It is the standard method in quantum scattering calculations, as is known for locally periodic systems including a space dependent effective mass.
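
    The stepwise-constant technique mentioned at the end can be sketched for the simplest quadratic case, x'' = -ω(t)² x with piecewise-constant ω: the state (x, x') is propagated across each interval by a 2x2 matrix, and matrices for successive intervals are chained by multiplication. The frequencies and step size below are illustrative assumptions, not values from the paper.

```python
import math

def step_matrix(omega, dt):
    """Propagator over one interval of constant frequency omega for the
    oscillator x'' = -omega**2 * x, acting on the state vector (x, x')."""
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    return [[c, s / omega], [-omega * s, c]]

def matmul(A, B):
    """2x2 matrix product A @ B."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def propagate(omegas, dt):
    """Chain the propagators for stepwise-constant parameters;
    later intervals multiply from the left."""
    M = [[1.0, 0.0], [0.0, 1.0]]
    for w in omegas:
        M = matmul(step_matrix(w, dt), M)
    return M

# Consistency check: two half-steps at constant frequency equal one full step.
M_two = propagate([2.0, 2.0], 0.5)
M_one = step_matrix(2.0, 1.0)
```

    The same transfer-matrix chaining is the standard method for quantum scattering in locally periodic structures, with the space coordinate playing the role of time.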

  10. Radiation exposure from consumer products and miscellaneous sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-01-01

    This review of the literature indicates that there is a variety of consumer products and miscellaneous sources of radiation that result in exposure to the U.S. population. A summary of the number of people exposed to each such source, an estimate of the resulting dose equivalents to the exposed population, and an estimate of the average annual population dose equivalent are tabulated. A review of the data in this table shows that the total average annual contribution to the whole-body dose equivalent of the U.S. population from consumer products is less than 5 mrem; about 70 percent of this arises from the presence of naturally-occurring radionuclides in building materials. Some of the consumer product sources contribute exposure mainly to localized tissues or organs. Such localized estimates include: 0.5 to 1 mrem to the average annual population lung dose equivalent (generalized); 2 rem to the average annual population bronchial epithelial dose equivalent (localized); and 10 to 15 rem to the average annual population basal mucosal dose equivalent (basal mucosa of the gum). Based on these estimates, these sources may be grouped or classified as those that involve many people where the dose equivalent is relatively large, those that involve many people but where the dose equivalent is relatively small, or those where the dose equivalent is relatively large but the number of people involved is small.

  11. Regularized two-step brain activity reconstruction from spatiotemporal EEG data

    NASA Astrophysics Data System (ADS)

    Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry

    2004-10-01

    We aim to use EEG source localization in the framework of a Brain-Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in the whole brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming to detail the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum energy and directional consistency constraints.
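
    The abstract does not specify which sparse approximation algorithm drives the first (coarse) step; one minimal greedy variant is matching pursuit, sketched below on a toy orthonormal dictionary. Both the algorithm choice and the dictionary are assumptions for illustration, not the authors' method.

```python
def matching_pursuit(signal, atoms, n_iter=2):
    """Greedy sparse approximation: repeatedly pick the unit-norm atom most
    correlated with the residual and subtract its contribution.
    Returns a dict {atom index: coefficient}."""
    residual = list(signal)
    coeffs = {}
    for _ in range(n_iter):
        corrs = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(corrs[i]))
        coeffs[k] = coeffs.get(k, 0.0) + corrs[k]
        residual = [r - corrs[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs

# Toy orthonormal dictionary in R^3; the signal mixes exactly two atoms.
atoms = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
coeffs = matching_pursuit((2.0, 0.0, -3.0), atoms, n_iter=2)
```

    In the EEG setting the atoms would correspond to coarse candidate source regions, and the surviving indices are the regions retained for the fine second step.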

  12. Local gauge symmetry on optical lattices?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuzhi; Meurice, Yannick; Tsai, Shan-Wen

    2012-11-01

    The versatile technology of cold atoms confined in optical lattices allows the creation of a vast number of lattice geometries and interactions, providing a promising platform for emulating various lattice models. This opens the possibility of letting nature take care of sign problems and real time evolution in carefully prepared situations. Up to now, experimentalists have succeeded in implementing several types of Hubbard models considered by condensed matter theorists. In this proceeding, we discuss the possibility of extending this effort to lattice gauge theory. We report recent efforts to establish the strong coupling equivalence between the Fermi Hubbard model and SU(2) pure gauge theory in 2+1 dimensions by standard determinantal methods developed by Robert Sugar and collaborators. We discuss the possibility of using dipolar molecules and external fields to build models where the equivalence holds beyond the leading order in the strong coupling expansion.

  13. The principle of equivalence reconsidered: assessing the relevance of the principle of equivalence in prison medicine.

    PubMed

    Jotterand, Fabrice; Wangmo, Tenzin

    2014-01-01

    In this article we critically examine the principle of equivalence of care in prison medicine. First, we provide an overview of how the principle of equivalence is utilized in various national and international guidelines on health care provision to prisoners. Second, we outline some of the problems associated with its applications, and argue that the principle of equivalence should go beyond equivalence to access and include equivalence of outcomes. However, because of the particular context of the prison environment, third, we contend that the concept of "health" in equivalence of health outcomes needs conceptual clarity; otherwise, it fails to provide a threshold for healthy states among inmates. We accomplish this by examining common understandings of the concepts of health and disease. We conclude our article by showing why the conceptualization of diseases as clinical problems provides a helpful approach in the delivery of health care in prison.

  14. On the number of different dynamics in Boolean networks with deterministic update schedules.

    PubMed

    Aracena, J; Demongeot, J; Fanchon, E; Montalva, M

    2013-04-01

    Deterministic Boolean networks are a type of discrete dynamical system widely used in the modeling of genetic networks. The dynamics of such systems is characterized by the local activation functions and the update schedule, i.e., the order in which the nodes are updated. In this paper, we address the problem of knowing the different dynamics of a Boolean network when the update schedule is changed. We begin by proving that the problem of the existence of a pair of update schedules with different dynamics is NP-complete. However, we show that certain structural properties of the interaction digraph are sufficient for guaranteeing distinct dynamics of a network. In [1] the authors define equivalence classes which have the property that all the update schedules of a given class yield the same dynamics. In order to determine the dynamics associated to a network, we develop an algorithm to efficiently enumerate the above equivalence classes by selecting a representative update schedule for each class with a minimum number of blocks. Finally, we run this algorithm on the well-known Arabidopsis thaliana network to determine the full spectrum of its different dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.
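
    A minimal sketch of schedule-dependent dynamics (the two-node network below is a toy example, not the Arabidopsis network from the paper): under a synchronous schedule every node reads the old state, while under a sequential schedule later nodes see values already updated within the same step, so the same network can follow different trajectories.

```python
def parallel_step(state, fns):
    """Synchronous (parallel) schedule: every node reads the old state."""
    return tuple(f(state) for f in fns)

def sequential_step(state, fns, order):
    """Sequential schedule: each node reads the partially updated state."""
    s = list(state)
    for i in order:
        s[i] = fns[i](tuple(s))
    return tuple(s)

# Toy network: node 0 copies node 1, node 1 copies node 0.
fns = [lambda s: s[1], lambda s: s[0]]
par = parallel_step((1, 0), fns)            # both nodes swap simultaneously
seq = sequential_step((1, 0), fns, [0, 1])  # the update propagates within the step
```

    From the state (1, 0) the parallel schedule yields a swap, while the sequential schedule collapses to a fixed point, which is exactly the kind of schedule-induced difference the paper's equivalence classes organize.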

  15. Connected Component Model for Multi-Object Tracking.

    PubMed

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from only two adjacent frames. Since straightforwardly obtaining data associations from multiple frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA into a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is an equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
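
    The decomposition described in this abstract can be sketched with a standard union-find pass (an illustrative toy, not the authors' implementation): observations whose association variables are coupled by the spatial-temporal constraints fall into the same equivalence class, and each resulting connected component is an independent subproblem.

```python
class UnionFind:
    """Disjoint-set structure for equivalence partitioning."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def partition_associations(n_obs, linked_pairs):
    """Split observations into independent assignment subproblems.

    `linked_pairs` are pairs of observations whose association variables
    are coupled by a constraint; the connected components of this
    equivalence relation can then be solved independently.
    """
    uf = UnionFind(n_obs)
    for i, j in linked_pairs:
        uf.union(i, j)
    components = {}
    for i in range(n_obs):
        components.setdefault(uf.find(i), []).append(i)
    return list(components.values())

# Observations 0-1-2 are mutually constrained; 3-4 form a second
# subproblem; observation 5 is unconstrained and stands alone.
print(partition_associations(6, [(0, 1), (1, 2), (3, 4)]))
```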

  16. A Study on a Centralized Under-Voltage Load Shedding Scheme Considering the Load Characteristics

    NASA Astrophysics Data System (ADS)

    Deng, Jiyu; Liu, Junyong

    Under-voltage load shedding is an important measure for maintaining voltage stability. Aiming at the optimal load shedding problem considering the load characteristics, we first point out, using the equivalent Thevenin circuit, that the traditional under-voltage load shedding scheme based on a static load model may make the analysis inaccurate. Then, a dynamic voltage stability margin indicator is derived from local measurements. The derived indicator reflects the voltage change of the key area in an approximately linear way, which greatly reduces the dimensionality of the optimization problem. Finally, a mathematical model of the centralized load shedding scheme is built with this indicator, taking the load characteristics into account, and HSPPSO is introduced to solve the optimization problem. Simulation results on the IEEE 39-bus system show that the proposed scheme displays good adaptability in solving under-voltage load shedding with dynamic load characteristics.

  17. Replicator equations, maximal cliques, and graph isomorphism.

    PubMed

    Pelillo, M

    1999-11-15

    We present a new energy-minimization framework for the graph isomorphism problem that is based on an equivalent maximum clique formulation. The approach is centered around a fundamental result proved by Motzkin and Straus in the mid-1960s, and recently expanded in various ways, which allows us to formulate the maximum clique problem in terms of a standard quadratic program. The attractive feature of this formulation is that a clear one-to-one correspondence exists between the solutions of the quadratic program and those in the original, combinatorial problem. To solve the program we use the so-called replicator equations--a class of straightforward continuous- and discrete-time dynamical systems developed in various branches of theoretical biology. We show how, despite their inherent inability to escape from local solutions, they nevertheless provide experimental results that are competitive with those obtained using more elaborate mean-field annealing heuristics.
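
    A minimal sketch of the Motzkin-Straus approach described above (toy graph and parameters chosen for illustration, not the paper's experimental setup): the discrete-time replicator map climbs the quadratic form x'Ax over the simplex, and the support of the limit point picks out a maximal clique.

```python
# Motzkin-Straus: for a graph G with adjacency matrix A, the maximum of
# x'Ax over the standard simplex equals 1 - 1/omega(G), where omega(G) is
# the clique number. The discrete replicator map
#     x_i <- x_i (Ax)_i / (x'Ax)
# monotonically increases x'Ax (it cannot escape local solutions, as the
# abstract notes, but on this toy graph it finds the maximum clique).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]   # max clique is {0, 1, 2}
n = 5
A = [[0.0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1.0

x = [1.0 / n] * n                  # start at the barycenter of the simplex
for _ in range(2000):
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    xAx = sum(x[i] * Ax[i] for i in range(n))
    x = [x[i] * Ax[i] / xAx for i in range(n)]

clique = [i for i in range(n) if x[i] > 1e-4]
print(clique)                       # support of the fixed point
print(round(1.0 / (1.0 - xAx)))    # estimated clique number omega(G)
```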

  18. 28 CFR 36.605 - Procedure following preliminary determination of equivalency.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... State Laws or Local Building Codes § 36.605 Procedure following preliminary determination of equivalency... of the preliminary determination of equivalency with respect to the particular code, and invite...

  19. Discrete-time entropy formulation of optimal and adaptive control problems

    NASA Technical Reports Server (NTRS)

    Tsai, Yweting A.; Casiello, Francisco A.; Loparo, Kenneth A.

    1992-01-01

    The discrete-time version of the entropy formulation of optimal control of problems developed by G. N. Saridis (1988) is discussed. Given a dynamical system, the uncertainty in the selection of the control is characterized by the probability distribution (density) function which maximizes the total entropy. The equivalence between the optimal control problem and the optimal entropy problem is established, and the total entropy is decomposed into a term associated with the certainty equivalent control law, the entropy of estimation, and the so-called equivocation of the active transmission of information from the controller to the estimator. This provides a useful framework for studying the certainty equivalent and adaptive control laws.

  20. It Pays to Be Organized: Organizing Arithmetic Practice around Equivalent Values Facilitates Understanding of Math Equivalence

    ERIC Educational Resources Information Center

    McNeil, Nicole M.; Chesney, Dana L.; Matthews, Percival G.; Fyfe, Emily R.; Petersen, Lori A.; Dunwiddie, April E.; Wheeler, Mary C.

    2012-01-01

    This experiment tested the hypothesis that organizing arithmetic fact practice by equivalent values facilitates children's understanding of math equivalence. Children (M age = 8 years 6 months, N = 104) were randomly assigned to 1 of 3 practice conditions: (a) equivalent values, in which problems were grouped by equivalent sums (e.g., 3 + 4 = 7, 2…

  1. Translation, cultural adaptation and field-testing of the Thinking Healthy Program for Vietnam

    PubMed Central

    2014-01-01

    Background Depression and anxiety are prevalent among women in low- and lower-middle income countries who are pregnant or have recently given birth. There is promising evidence that culturally-adapted, evidence-informed, perinatal psycho-educational programs implemented in local communities are effective in reducing mental health problems. The Thinking Healthy Program (THP) has proved effective in Pakistan. The aims were to adapt the THP for rural Vietnam; establish the program’s comprehensibility, acceptability and salience for universal use, and investigate whether administration to small groups of women might be of equivalent effectiveness to administration in home visits to individual women. Methods The THP Handbook and Calendar were made available in English by the program developers and translated into Vietnamese. Cultural adaptation and field-testing were undertaken using WHO guidance. Field-testing of the four sessions of THP Module One was undertaken in weekly sessions with a small group in a rural commune and evaluated using baseline, process and endline surveys. Results The adapted Vietnamese version of the Thinking Healthy Program (THP-V) was found to be understandable, meaningful and relevant to pregnant women, and commune health centre and Women’s Union representatives in a rural district. It was delivered effectively by trained local facilitators. Role-play, brainstorming and small-group discussions to find shared solutions to common problems were appraised as helpful learning opportunities. Conclusions The THP-V is safe and comprehensible, acceptable and salient to pregnant women without mental health problems in rural Vietnam. Delivery in facilitated small groups provided valued opportunities for role-play rehearsal and shared problem solving. 
Local observers found the content and approach highly relevant to local needs and endorsed the approach as a mental health promotion strategy with potential for integration into local universal maternal and child health services. These preliminary data indicate that the impact of the THP-V should be tested in its complete form in a large scale trial. PMID:24886165

  2. Constrained model predictive control, state estimation and coordination

    NASA Astrophysics Data System (ADS)

    Yan, Jun

    In this dissertation, we study the interaction between the control performance and the quality of the state estimation in a constrained Model Predictive Control (MPC) framework for systems with stochastic disturbances. This consists of three parts: (i) the development of a constrained MPC formulation that adapts to the quality of the state estimation via constraints; (ii) the application of such a control law in a multi-vehicle formation coordinated control problem in which each vehicle operates subject to a no-collision constraint posed by others' imperfect prediction computed from finite bit-rate, communicated data; (iii) the design of the predictors and the communication resource assignment problem that satisfy the performance requirement from Part (ii). Model Predictive Control (MPC) is of interest because it is one of the few control design methods which preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion with best estimates of the states being used in place of the exact state. However, if the state constraints were handled in the same certainty-equivalence fashion, the resulting control law could drive the real state to violate the constraints frequently. Part (i) focuses on exploring the inclusion of state estimates into the constraints. It does this by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. In Part (ii), we consider applying constrained MPC as a local control law in a coordinated control problem of a group of distributed autonomous systems. Interactions between the systems are captured via constraints. First, we inspect the application of constrained MPC to a completely deterministic case. 
    Formation stability theorems are derived for the subsystems, and conditions on the local constraint set are derived that guarantee local stability or convergence to a target state. If these conditions are met for all subsystems, then this stability is inherited by the overall system. For the case when each subsystem suffers from disturbances in its dynamics, its own measurement noise, and quantization errors on neighbors' information due to the finite-bit-rate channels, the constrained MPC strategy developed in Part (i) is appropriate. In Part (iii), we discuss the local predictor design and bandwidth assignment problem in a coordinated vehicle formation context. The MPC controller used in Part (ii) relates the formation control performance to the information quality in such a way that a large standoff implies conservative performance. We first develop an LMI (Linear Matrix Inequality) formulation for cross-estimator design in a simple two-vehicle scenario with non-standard information: one vehicle does not have access to the other's exact control value applied at each sampling time, but to its known, pre-computed, coupling linear feedback control law. Then a similar LMI problem is formulated for the bandwidth assignment problem that minimizes the total number of bits by adjusting the prediction gain matrices and the number of bits assigned to each variable. (Abstract shortened by UMI.)

  3. Dipole source localization of event-related brain activity indicative of an early visual selective attention deficit in ADHD children.

    PubMed

    Jonkman, L M; Kenemans, J L; Kemner, C; Verbaten, M N; van Engeland, H

    2004-07-01

    This study was aimed at investigating whether attention-deficit hyperactivity disorder (ADHD) children suffer from specific early selective attention deficits in the visual modality with the aid of event-related brain potentials (ERPs). Furthermore, brain source localization was applied to identify brain areas underlying possible deficits in selective visual processing in ADHD children. A two-channel visual color selection task was administered to 18 ADHD and 18 control subjects in the age range of 7-13 years and ERP activity was derived from 30 electrodes. ADHD children exhibited lower perceptual sensitivity scores resulting in poorer target selection. The ERP data suggested an early selective-attention deficit as manifested in smaller frontal positive activity (frontal selection positivity; FSP) in ADHD children around 200 ms whereas later occipital and fronto-central negative activity (OSN and N2b; 200-400 ms latency) appeared to be unaffected. Source localization explained the FSP by posterior-medial equivalent dipoles in control subjects, which may reflect the contribution of numerous surrounding areas. ADHD children have problems with selective visual processing that might be caused by a specific early filtering deficit (absent FSP) occurring around 200 ms. The neural sources underlying these problems have to be further identified. Source localization also suggested abnormalities in the 200-400 ms time range, pertaining to the distribution of attention-modulated activity in lateral frontal areas.

  4. Quantum formalism for classical statistics

    NASA Astrophysics Data System (ADS)

    Wetterich, C.

    2018-06-01

    In static classical statistical systems the problem of information transport from a boundary to the bulk finds a simple description in terms of wave functions or density matrices. While the transfer matrix formalism is a type of Heisenberg picture for this problem, we develop here the associated Schrödinger picture that keeps track of the local probabilistic information. The transport of the probabilistic information between neighboring hypersurfaces obeys a linear evolution equation, and therefore the superposition principle for the possible solutions. Operators are associated to local observables, with rules for the computation of expectation values similar to quantum mechanics. We discuss how non-commutativity naturally arises in this setting. Also other features characteristic of quantum mechanics, such as complex structure, change of basis or symmetry transformations, can be found in classical statistics once formulated in terms of wave functions or density matrices. We construct for every quantum system an equivalent classical statistical system, such that time in quantum mechanics corresponds to the location of hypersurfaces in the classical probabilistic ensemble. For suitable choices of local observables in the classical statistical system one can, in principle, compute all expectation values and correlations of observables in the quantum system from the local probabilistic information of the associated classical statistical system. Realizing a static memory material as a quantum simulator for a given quantum system is not a matter of principle, but rather of practical simplicity.
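
    The transport of boundary information by wave-function-like objects can be illustrated on the simplest classical statistical system, a 1D Ising chain (a toy sketch with notation of my own choosing, not Wetterich's construction): a boundary vector is evolved site by site by the transfer matrix, local observables become operators, and expectation values follow quantum-mechanics-like contraction rules. The result is checked against direct enumeration.

```python
import math
from itertools import product

# 1D Ising chain with open boundaries; the left boundary spin is pinned
# to +1, and we follow how that boundary information propagates to the bulk.
beta, J, N = 0.7, 1.0, 6          # inverse temperature, coupling, sites
T = [[math.exp(beta * J), math.exp(-beta * J)],
     [math.exp(-beta * J), math.exp(beta * J)]]   # transfer matrix

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def spin_expectation(k):
    """<s_k> via boundary 'wave functions' evolved by the transfer matrix."""
    left = [1.0, 0.0]              # boundary information: s_0 pinned to +1
    for _ in range(k):             # transport it to hypersurface (site) k
        left = matvec(T, left)
    right = [1.0, 1.0]             # free boundary on the other end
    for _ in range(N - 1 - k):
        right = matvec(T, right)
    sz = [1.0, -1.0]               # local observable s in {+1, -1} as an operator
    num = sum(left[i] * sz[i] * right[i] for i in range(2))
    den = sum(left[i] * right[i] for i in range(2))
    return num / den

def brute_force(k):
    """Direct sum over all configurations with s_0 = +1."""
    num = den = 0.0
    for rest in product((1, -1), repeat=N - 1):
        s = (1,) + rest
        w = math.exp(beta * J * sum(s[i] * s[i + 1] for i in range(N - 1)))
        num += s[k] * w
        den += w
    return num / den

print(spin_expectation(2), brute_force(2))
```

    The two numbers agree, and the expectation decays with distance from the boundary (for this chain it equals tanh(beta*J)^k), which is exactly the "information transport from a boundary to the bulk" the abstract describes.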

  5. Estimation of local scale dispersion from local breakthrough curves during a tracer test in a heterogeneous aquifer: the Lagrangian approach.

    PubMed

    Vanderborght, Jan; Vereecken, Harry

    2002-01-01

    The local scale dispersion tensor, D_d, is a controlling parameter for the dilution of concentrations in a solute plume that is displaced by groundwater flow in a heterogeneous aquifer. In this paper, we estimate the local scale dispersion from time series, or breakthrough curves (BTCs), of Br concentrations that were measured at several points in a fluvial aquifer during a natural gradient tracer test at Krauthausen. Locally measured BTCs were characterized by equivalent convection-dispersion parameters: the equivalent velocity, v_eq(x), and the expected equivalent dispersivity, [lambda_eq(x)]. A Lagrangian framework was used to approximately predict these equivalent parameters in terms of the spatial covariance of the log_e-transformed conductivity and the local scale dispersion coefficient. The approximate Lagrangian theory illustrates that [lambda_eq(x)] increases with increasing travel distance and is much larger than the local scale dispersivity, lambda_d. A sensitivity analysis indicates that [lambda_eq(x)] is predominantly determined by the transverse component of the local scale dispersion and by the correlation scale of the hydraulic conductivity in the transverse-to-flow direction, whereas it is relatively insensitive to the longitudinal component of the local scale dispersion. By comparing predicted [lambda_eq(x)] for a range of D_d values with [lambda_eq(x)] obtained from locally measured BTCs, the transverse component of D_d, D_dT, was estimated. The estimated transverse local scale dispersivity, lambda_dT = D_dT/U_1 (U_1 = mean advection velocity), is on the order of 10^1-10^2 mm, which is relatively large but realistic for the fluvial gravel sediments at Krauthausen.

  6. Flexible resources for quantum metrology

    NASA Astrophysics Data System (ADS)

    Friis, Nicolai; Orsucci, Davide; Skotiniotis, Michalis; Sekatski, Pavel; Dunjko, Vedran; Briegel, Hans J.; Dür, Wolfgang

    2017-06-01

    Quantum metrology offers a quadratic advantage over classical approaches to parameter estimation problems by utilising entanglement and nonclassicality. However, the hurdle of actually implementing the necessary quantum probe states and measurements, which vary drastically for different metrological scenarios, is usually not taken into account. We show that for a wide range of tasks in metrology, 2D cluster states (a particular family of states useful for measurement-based quantum computation) can serve as flexible resources that allow one to efficiently prepare any required state for sensing, and perform appropriate (entangled) measurements using only single qubit operations. Crucially, the overhead in the number of qubits is less than quadratic, thus preserving the quantum scaling advantage. This is ensured by using a compression to a logarithmically sized space that contains all relevant information for sensing. We specifically demonstrate how our method can be used to obtain optimal scaling for phase and frequency estimation in local estimation problems, as well as for the Bayesian equivalents with Gaussian priors of varying widths. Furthermore, we show that in the paradigmatic case of local phase estimation 1D cluster states are sufficient for optimal state preparation and measurement.

  7. One-dimensional Euclidean matching problem: exact solutions, correlation functions, and universality.

    PubMed

    Caracciolo, Sergio; Sicuro, Gabriele

    2014-10-01

    We discuss the equivalence relation between the Euclidean bipartite matching problem on the line and on the circumference and the Brownian bridge process on the same domains. The equivalence allows us to compute the correlation function and the optimal cost of the original combinatorial problem in the thermodynamic limit; moreover, we also solve the minimax problem on the line and on the circumference. The properties of the average cost and correlation functions are discussed.
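
    A quick numerical check of the fact underlying the one-dimensional case (an illustrative sketch, not the authors' derivation): on the line, with a convex cost such as |r - b|^2, the optimal bipartite matching pairs the sorted "red" points with the sorted "blue" points, so the optimal cost is computable by sorting alone.

```python
import random
from itertools import permutations

def sorted_matching_cost(red, blue, p=2):
    """Optimal cost on the line for convex cost |r - b|^p: match sorted lists."""
    return sum(abs(r - b) ** p for r, b in zip(sorted(red), sorted(blue)))

def brute_force_cost(red, blue, p=2):
    """Exhaustive minimum over all N! assignments, for verification."""
    return min(
        sum(abs(r - b) ** p for r, b in zip(red, perm))
        for perm in permutations(blue)
    )

random.seed(0)
red  = [random.random() for _ in range(6)]
blue = [random.random() for _ in range(6)]
print(abs(sorted_matching_cost(red, blue) - brute_force_cost(red, blue)))
```

    The exhaustive search over all 6! assignments agrees with the sorted matching; it is this ordered structure that connects the optimal transport field to the Brownian bridge in the thermodynamic limit.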

  8. UGV navigation in wireless sensor and actuator network environments

    NASA Astrophysics Data System (ADS)

    Zhang, Guyu; Li, Jianfeng; Duncan, Christian A.; Kanno, Jinko; Selmic, Rastko R.

    2012-06-01

    We consider a navigation problem in a distributed, self-organized and coordinate-free Wireless Sensor and Actuator Network (WSAN). We first present navigation algorithms that are verified using simulation results. Considering more than one destination and multiple mobile Unmanned Ground Vehicles (UGVs), we introduce a distributed solution to the Multi-UGV, Multi-Destination navigation problem. The objective of the solution to this problem is to efficiently allocate UGVs to different destinations and carry out navigation in the network environment in a way that minimizes total travel distance. The main contribution of this paper is to develop a solution that does not attempt to localize either the UGVs or the sensor and actuator nodes. Other than some connectivity assumptions about the communication graph, we assume that no prior information about the WSAN is available. The solution presented here is distributed, and the UGV navigation is based solely on feedback from neighboring sensor and actuator nodes. One special case discussed in the paper, the Single-UGV, Multi-Destination navigation problem, is essentially equivalent to the well-known and difficult Traveling Salesman Problem (TSP). Simulation results are presented that illustrate the navigation distance traveled through the network. We also introduce an experimental testbed for the realization of coordinate-free and localization-free UGV navigation. We use the Cricket platform as the sensor and actuator network and a Pioneer 3-DX robot as the UGV. The experiments illustrate the UGV navigation in a coordinate-free WSAN environment where the UGV successfully arrives at the assigned destinations.

  9. Hydrogel delivery of lysostaphin eliminates orthopedic implant infection by Staphylococcus aureus and supports fracture healing

    PubMed Central

    Johnson, Christopher T.; Wroe, James A.; Agarwal, Rachit; Martin, Karen E.; Guldberg, Robert E.; Donlan, Rodney M.; Westblade, Lars F.; García, Andrés J.

    2018-01-01

    Orthopedic implant infections are a significant clinical problem, with current therapies limited to surgical debridement and systemic antibiotic regimens. Lysostaphin is a bacteriolytic enzyme with high antistaphylococcal activity. We engineered a lysostaphin-delivering injectable PEG hydrogel to treat Staphylococcus aureus infections in bone fractures. The injectable hydrogel formulation adheres to exposed tissue and fracture surfaces, ensuring efficient, local delivery of lysostaphin. Lysostaphin encapsulation within this synthetic hydrogel maintained enzyme stability and activity. Lysostaphin-delivering hydrogels exhibited enhanced antibiofilm activity compared with soluble lysostaphin. Lysostaphin-delivering hydrogels eradicated S. aureus infection and outperformed prophylactic antibiotic and soluble lysostaphin therapy in a murine model of femur fracture. Analysis of the local inflammatory response to infections treated with lysostaphin-delivering hydrogels revealed cytokine secretion profiles indistinguishable from those of uninfected fractures, demonstrating clearance of bacteria and associated inflammation. Importantly, infected fractures treated with lysostaphin-delivering hydrogels fully healed by 5 wk with bone formation and mechanical properties equivalent to those of uninfected fractures, whereas fractures treated without the hydrogel carrier were equivalent to untreated infections. Finally, lysostaphin-delivering hydrogels also eliminated methicillin-resistant S. aureus infections, supporting this therapy as an alternative to antibiotics. These results indicate that lysostaphin-delivering hydrogels effectively eliminate orthopedic S. aureus infections while simultaneously supporting fracture repair. PMID:29760099

  10. Geometry of Conservation Laws for a Class of Parabolic Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Clelland, Jeanne Nielsen

    1996-08-01

    I consider the problem of computing the space of conservation laws for a second-order, parabolic partial differential equation for one function of three independent variables. The PDE is formulated as an exterior differential system I on a 12-manifold M, and its conservation laws are identified with the vector space of closed 3-forms in the infinite prolongation of I modulo the so-called "trivial" conservation laws. I use the tools of exterior differential systems and Cartan's method of equivalence to study the structure of the space of conservation laws. My main result is: Theorem. Any conservation law for a second-order, parabolic PDE for one function of three independent variables can be represented by a closed 3-form in the differential ideal I on the original 12-manifold M. I show that if a nontrivial conservation law exists, then I has a deprolongation to an equivalent system J on a 7-manifold N, and any conservation law for I can be expressed as a closed 3-form on N which lies in J. Furthermore, any such system in the real analytic category is locally equivalent to a system generated by a (parabolic) equation of the form A(u_xx u_yy - u_xy^2) + B_1 u_xx + 2 B_2 u_xy + B_3 u_yy + C = 0, where A, B_i, C are functions of x, y, t, u, u_x, u_y, u_t. I compute the space of conservation laws for several examples, and I begin the process of analyzing the general case using Cartan's method of equivalence. I show that the non-linearizable equation u_t = (1/2) e^(-u) (u_xx + u_yy) has an infinite-dimensional space of conservation laws. This stands in contrast to the two-variable case, for which Bryant and Griffiths showed that any equation whose space of conservation laws has dimension 4 or more is locally equivalent to a linear equation, i.e., is linearizable.

  11. Integration of local motion is normal in amblyopia

    NASA Astrophysics Data System (ADS)

    Hess, Robert F.; Mansouri, Behzad; Dakin, Steven C.; Allen, Harriet A.

    2006-05-01

    We investigate the global integration of local motion direction signals in amblyopia, in a task where performance is equated between normal and amblyopic eyes at the single element level. We use an equivalent noise model to derive the parameters of internal noise and number of samples, both of which we show are normal in amblyopia for this task. This result is in apparent conflict with a previous study in amblyopes showing that global motion processing is defective in global coherence tasks [Vision Res. 43, 729 (2003)]. A similar discrepancy between the normalcy of signal integration [Vision Res. 44, 2955 (2004)] and anomalous global coherence form processing has also been reported [Vision Res. 45, 449 (2005)]. We suggest that these discrepancies for form and motion processing in amblyopia point to a selective problem in separating signal from noise in the typical global coherence task.

  12. Simulation of elastic wave propagation using cellular automata and peridynamics, and comparison with experiments

    DOE PAGES

    Nishawala, Vinesh V.; Ostoja-Starzewski, Martin; Leamy, Michael J.; ...

    2015-09-10

    Peridynamics is a non-local continuum mechanics formulation that can handle spatial discontinuities, as the governing equations are integro-differential equations which do not involve gradients such as strains and deformation rates. This paper employs bond-based peridynamics. Cellular automata is a local computational method which, in its rectangular variant on interior domains, is mathematically equivalent to the central-difference finite difference method. However, cellular automata does not require the derivation of the governing partial differential equations and provides for common boundary conditions based on physical reasoning. Both methodologies are used to solve a half-space subjected to a normal load, known as Lamb's Problem. The results are compared with the theoretical solution from classical elasticity and with experimental results. Furthermore, this paper is used to validate our implementation of these methods.

  13. Gaussian free field in the background of correlated random clusters, formed by metallic nanoparticles

    NASA Astrophysics Data System (ADS)

    Cheraghalizadeh, Jafar; Najafi, Morteza N.; Mohammadzadeh, Hossein

    2018-05-01

    The effect of metallic nano-particles (MNPs) on the electrostatic potential of a disordered 2D dielectric medium is considered. The disorder in the medium is assumed to be white-noise Coulomb impurities with a normal distribution. To realize the correlations between the MNPs we have used the Ising model with an artificial temperature T that controls the number of MNPs as well as their correlations. In the T → 0 limit, one retrieves the Gaussian free field (GFF), and at finite temperature the problem is equivalent to a GFF on iso-potential islands. The problem is argued to be equivalent to a scale-invariant random surface with some critical exponents which vary with T and correspondingly are correlation-dependent. Two types of observables have been considered: local and global quantities. We have observed that the MNPs soften the random potential and reduce its statistical fluctuations. This softening is observed in the local as well as the geometrical quantities. The correlation function of the electrostatic potential and its total variance are observed to be logarithmic, just like the GFF, i.e. the roughness exponent remains zero for all temperatures, whereas the proportionality constants scale with T - T_c. The fractal dimension of iso-potential lines (D_f), the exponents of the distribution functions of the gyration radius (tau_r) and the loop lengths (tau_l), and also the exponent of the loop Green function x_l change in terms of T - T_c in a power-law fashion, with some critical exponents reported in the text. Importantly, we have observed that D_f(T) - D_f(T_c) is proportional to 1/sqrt(xi(T)), in which xi(T) is the spin correlation length in the Ising model.

  14. Effect of cavity configuration on kerosene spark ignition in a scramjet combustor at Ma 4.5 flight condition

    NASA Astrophysics Data System (ADS)

    Bao, Heng; Zhou, Jin; Pan, Yu

    2015-12-01

    Spark ignition experiments with liquid kerosene are conducted in a scramjet model equipped with dual cavities at a Ma 4.5 flight condition with a stagnation temperature of 1032 K. The ignition abilities of two cavities of different lengths are compared and analyzed based on the wall pressure distribution along the combustor and the thrust evolution. The experimental results indicate that the longer cavity (L/D=7) is more suitable than the shorter cavity (L/D=5) for spark ignition. When employing the shorter cavity, three steady combustion states are observed after spark ignition. The concept of a 'local flame' is adopted to explain the expansion problem of weak combustion. The local equivalence ratio in the shear layer is the dominant factor determining how the local flame develops, and the final steady combustion mode of the combustor depends on this development process. When employing the longer cavity, an intense combustion state is established much more easily.

  15. Disaster management and the critical thinking skills of local emergency managers: correlations with age, gender, education, and years in occupation.

    PubMed

    Peerbolte, Stacy L; Collins, Matthew Lloyd

    2013-01-01

    Emergency managers must be able to think critically in order to identify and anticipate situations, solve problems, make judgements and decisions effectively and efficiently, and assume and manage risk. Heretofore, a critical thinking skills assessment of local emergency managers had yet to be conducted that tested for correlations among age, gender, education, and years in occupation. An exploratory descriptive research design, using the Watson-Glaser Critical Thinking Appraisal-Short Form (WGCTA-S), was employed to determine the extent to which a sample of 54 local emergency managers demonstrated the critical thinking skills associated with the ability to assume and manage risk as compared to the critical thinking scores of a group of 4,790 peer-level managers drawn from an archival WGCTA-S database. This exploratory design suggests that the local emergency managers, surveyed in this study, had lower WGCTA-S critical thinking scores than their equivalents in the archival database with the exception of those in the high education and high experience group. © 2013 The Author(s). Journal compilation © Overseas Development Institute, 2013.

  16. 28 CFR 36.604 - Procedure following preliminary determination of equivalency.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... State Laws or Local Building Codes § 36.604 Procedure following preliminary determination of equivalency... of the preliminary determination of equivalency with respect to the particular code, and invite... enforcement of the code, at which interested individuals, including individuals with disabilities, are...

  17. 28 CFR 36.604 - Procedure following preliminary determination of equivalency.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... State Laws or Local Building Codes § 36.604 Procedure following preliminary determination of equivalency... of the preliminary determination of equivalency with respect to the particular code, and invite... enforcement of the code, at which interested individuals, including individuals with disabilities, are...

  18. 28 CFR 36.604 - Procedure following preliminary determination of equivalency.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... State Laws or Local Building Codes § 36.604 Procedure following preliminary determination of equivalency... of the preliminary determination of equivalency with respect to the particular code, and invite... enforcement of the code, at which interested individuals, including individuals with disabilities, are...

  19. 28 CFR 36.604 - Procedure following preliminary determination of equivalency.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... State Laws or Local Building Codes § 36.604 Procedure following preliminary determination of equivalency... of the preliminary determination of equivalency with respect to the particular code, and invite... enforcement of the code, at which interested individuals, including individuals with disabilities, are...

  20. Exponential localization of Wannier functions in insulators.

    PubMed

    Brouder, Christian; Panati, Gianluca; Calandra, Matteo; Mourougane, Christophe; Marzari, Nicola

    2007-01-26

    The exponential localization of Wannier functions in two or three dimensions is proven for all insulators that display time-reversal symmetry, settling a long-standing conjecture. Our proof relies on the equivalence between the existence of analytic quasi-Bloch functions and the nullity of the Chern numbers (or of the Hall current) for the system under consideration. The same equivalence implies that Chern insulators cannot display exponentially localized Wannier functions. An explicit condition for the reality of the Wannier functions is identified.
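
    The obstruction described here, that a nonzero Chern number forbids exponentially localized Wannier functions, can be probed numerically for a lattice model. Below is a minimal sketch (our own illustration, not from the paper) that computes the Chern number of the lowest band of the Qi-Wu-Zhang two-band toy model with the Fukui-Hatsugai-Suzuki link-variable method; the model choice, grid size, and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    # Pauli matrices
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def qwz(kx, ky, m):
        # Qi-Wu-Zhang two-band Chern-insulator toy model (illustrative choice)
        return np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz

    def chern_number(m, n=40):
        # Fukui-Hatsugai-Suzuki method: accumulate Berry flux from link variables
        # of the lowest-band eigenvectors on a discretized Brillouin zone.
        ks = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        u = np.empty((n, n, 2), dtype=complex)
        for i, kx in enumerate(ks):
            for j, ky in enumerate(ks):
                _, vecs = np.linalg.eigh(qwz(kx, ky, m))
                u[i, j] = vecs[:, 0]                 # lowest band
        flux = 0.0
        for i in range(n):
            for j in range(n):
                ip, jp = (i + 1) % n, (j + 1) % n
                plaq = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                        * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
                flux += np.angle(plaq)
        return round(flux / (2.0 * np.pi))
    ```

    For m = 1 the model sits in a Chern phase (|C| = 1, no exponentially localized Wannier functions), while m = 3 gives a trivial band with C = 0.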

  1. Control of Finite-State, Finite Memory Stochastic Systems

    NASA Technical Reports Server (NTRS)

    Sandell, Nils R.

    1974-01-01

    A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite state, finite memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. A FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem are investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control theoretic techniques to information processing problems.
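
    The 1-bit memory hypothesis-testing example can be reproduced by brute force, since a deterministic finite-memory policy here is just one of 16 possible memory-update tables. The sketch below is our own construction (the Bernoulli parameters and horizon are illustrative assumptions, not values from the report): it evaluates each table by a forward recursion over the memory-state distribution and keeps the best achievable error probability.

    ```python
    from itertools import product

    # Binary hypothesis testing with 1-bit memory: observations y_t are i.i.d.
    # Bernoulli(p0) under H0 and Bernoulli(p1) under H1, with equal priors.
    # A deterministic policy is a memory-update table m <- g(m, y); the final
    # memory bit is the decision.
    p0, p1, T = 0.2, 0.8, 5

    def prob_final_one(rule, p):
        """P(m_T = 1) for update table rule[(m, y)], starting from m = 0."""
        q1 = 0.0  # P(m_t = 1)
        for _ in range(T):
            q1 = sum((q1 if m else 1.0 - q1) * (p if y else 1.0 - p)
                     for (m, y), nxt in rule.items() if nxt == 1)
        return q1

    best = 1.0
    states = list(product((0, 1), repeat=2))          # all (m, y) pairs
    for bits in product((0, 1), repeat=4):            # all 16 update tables
        rule = dict(zip(states, bits))
        # decide H1 iff m_T = 1, or the complementary decision if better
        err = 0.5 * prob_final_one(rule, p0) + 0.5 * (1.0 - prob_final_one(rule, p1))
        best = min(best, err, 1.0 - err)
    ```

    With these parameters the best 1-bit rule is "remember the last observation", achieving error probability 0.2.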

  2. Wrinkle-free design of thin membrane structures using stress-based topology optimization

    NASA Astrophysics Data System (ADS)

    Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan

    2017-05-01

    Thin membrane structures experience wrinkling due to local buckling deformation when compressive stresses are induced in some regions. Using the stress criterion for membranes in the wrinkled and taut states, this paper proposes a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and the linearly elastic assumption in the taut state, the optimization problem is defined as maximizing the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.

  3. Two methods for measuring Bell nonlocality via local unitary invariants of two-qubit systems in Hong-Ou-Mandel interferometers

    NASA Astrophysics Data System (ADS)

    Bartkiewicz, Karol; Chimczak, Grzegorz

    2018-01-01

    We describe a direct method to experimentally determine local two-qubit invariants by performing interferometric measurements on multiple copies of a given two-qubit state. We use this framework to analyze two different kinds of two-qubit invariants, those of Makhlin and of Jing et al. These invariants allow us to fully reconstruct any two-qubit state up to local unitaries. We demonstrate that measuring three invariants is sufficient to find, e.g., the optimal Bell inequality violation. These invariants can be measured with local or nonlocal measurements. We show that the nonlocal strategy that follows from Makhlin's invariants is more resource efficient than the local strategy following from the invariants of Jing et al. To measure all of Makhlin's invariants directly, one needs to use both two-qubit singlet and three-qubit W-state projections on multiple copies of the two-qubit state. This problem is equivalent to a coordinate-system handedness measurement. We demonstrate that these three-qubit measurements can be performed by utilizing Hong-Ou-Mandel interference, which gives a significant speedup in comparison to the classical handedness measurement. Finally, we point to potential applications of our results in quantum secret sharing.
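
    The link between local unitary invariants and the optimal Bell violation can be illustrated with the Horodecki criterion, a standard result (consistent with, but not identical to, the invariant sets discussed in the paper) expressing the maximal CHSH value of a two-qubit state through the correlation matrix of its Bloch decomposition. A minimal numpy sketch, using the singlet state as the example:

    ```python
    import numpy as np

    I2 = np.eye(2, dtype=complex)
    paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
              np.array([[0, -1j], [1j, 0]], dtype=complex),
              np.array([[1, 0], [0, -1]], dtype=complex)]

    def bloch_invariants(rho):
        # Bloch decomposition rho = (1/4)(I + a.s x I + I x b.s + sum_ij T_ij s_i x s_j);
        # |a|^2, |b|^2 and the singular values of T are local unitary invariants.
        a = np.array([np.trace(rho @ np.kron(s, I2)).real for s in paulis])
        b = np.array([np.trace(rho @ np.kron(I2, s)).real for s in paulis])
        T = np.array([[np.trace(rho @ np.kron(si, sj)).real for sj in paulis]
                      for si in paulis])
        return a, b, T

    def max_chsh(rho):
        # Horodecki criterion: maximal CHSH value is 2*sqrt(u1 + u2), where u1, u2
        # are the two largest eigenvalues of T^T T, a function of LU invariants only.
        _, _, T = bloch_invariants(rho)
        u = np.sort(np.linalg.eigvalsh(T.T @ T))
        return 2.0 * np.sqrt(u[-1] + u[-2])

    # example: the two-qubit singlet, which violates CHSH maximally
    psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
    rho_singlet = np.outer(psi, psi.conj())
    ```

    For the singlet, T = -I and the local Bloch vectors vanish, giving the Tsirelson value 2√2.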

  4. Connes' embedding problem and Tsirelson's problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Junge, M.; Palazuelos, C.; Navascues, M.

    2011-01-15

    We show that Tsirelson's problem concerning the set of quantum correlations and Connes' embedding problem on finite approximations in von Neumann algebras (known to be equivalent to Kirchberg's QWEP conjecture) are essentially equivalent. Specifically, Tsirelson's problem asks whether the set of bipartite quantum correlations generated between tensor product separated systems is the same as the set of correlations between commuting C*-algebras. Connes' embedding problem asks whether any separable II_1 factor is a subfactor of the ultrapower of the hyperfinite II_1 factor. We show that an affirmative answer to Connes' question implies a positive answer to Tsirelson's. Conversely, a positive answer to a matrix-valued version of Tsirelson's problem implies a positive one to Connes' problem.

  5. Nature of size effects in compact models of field effect transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torkhov, N. A., E-mail: trkf@mail.ru; Scientific-Research Institute of Semiconductor Devices, Tomsk 634050; Tomsk State University of Control Systems and Radioelectronics, Tomsk 634050

    Investigations have shown that in the local approximation (for sizes L < 100 μm), AlGaN/GaN high electron mobility transistor (HEMT) structures satisfy all the properties of chaotic systems and can be described in the language of fractal geometry of fractional dimensions. For such objects, the values of their electrophysical characteristics depend on the linear sizes of the examined regions, which explains the presence of so-called size effects: dependences of the electrophysical and instrumental characteristics on the linear sizes of the active elements of semiconductor devices. In the present work, a relationship has been established for the linear model parameters of the equivalent circuit elements of internal transistors with the fractal geometry of the heteroepitaxial structure, manifested through a dependence of its relative electrophysical characteristics on the linear sizes of the examined surface areas. For the HEMTs, this implies dependences of their relative static (A/mm, mA/V/mm, Ω/mm, etc.) and microwave characteristics (W/mm) on the width d of the drain-source channel and on the number of sections n, which leads to a nonlinear dependence of the retrieved parameter values of equivalent circuit elements of linear internal transistor models on n and d. Thus, it has been demonstrated that the size effects in semiconductors determined by the fractal geometry must be taken into account when investigating the properties of semiconductor objects on levels below the local approximation limit and when designing and manufacturing field effect transistors. In general, the suggested approach allows a complex of problems to be solved in designing, optimizing, and retrieving the parameters of equivalent circuits of linear and nonlinear models not only of field effect transistors but of any semiconductor devices with nonlinear instrumental characteristics.

  6. Evaluating Treatments for Functionally Equivalent Problem Behavior Maintained by Adult Compliance with Mands during Interactive Play

    ERIC Educational Resources Information Center

    Schmidt, Jonathan D.; Bednar, Mary K.; Willse, Lena V.; Goetzel, Amanda L.; Concepcion, Anthony; Pincus, Shari M.; Hardesty, Samantha L.; Bowman, Lynn G.

    2017-01-01

    A primary goal of behavioral interventions is to reduce dangerous or inappropriate behavior and to generalize treatment effects across various settings. However, there is a lack of research evaluating generalization of treatment effects while individuals with functionally equivalent problem behavior interact with each other. For the current study,…

  7. Bullying victimization: A risk factor of health problems among adolescents with hearing impairment.

    PubMed

    Akram, Bushra; Munawar, Asima

    2016-01-01

    To assess bullying victimisation as a predictor of physical and psychological health problems among school-going children with hearing impairment. The correlational cross-sectional study was conducted in the Gujrat district of Pakistan's Punjab province from August 2014 to January 2015, and comprised adolescents with hearing impairment. The subjects were selected through multi-stage stratified proportionate sampling from local schools. Two standardised instruments were administered to assess the relationship between bullying and health problems: the Multidimensional Peer Victimisation Scale for measuring bullying behaviour, and the Health Questionnaire for assessing physical and psychological health problems. Both scales were translated into Urdu using the lexicon-equivalence method of translation. Of the 286 subjects, 183 (64%) were boys. A significant positive relationship was found between each of the four components of bullying and health problems (p<0.05 each). Boys experienced more physical victimisation than girls (p<0.05), but there was no significant difference between girls and boys in social manipulation (p>0.05). Children with hearing impairment experienced bullying just as those without such an impairment do. Bullying needs to be considered a significant public health issue and should be dealt with effectively.

  8. A numerical solution of a singular boundary value problem arising in boundary layer theory.

    PubMed

    Hu, Jiancheng

    2016-01-01

    In this paper, a second-order nonlinear singular boundary value problem is presented that is equivalent to the well-known Falkner-Skan equation, and the one-dimensional third-order boundary value problem on the interval [Formula: see text] is equivalently transformed into a second-order boundary value problem on the finite interval [Formula: see text]. The finite difference method is utilized to solve the singular boundary value problem, requiring significantly less computational effort than other numerical methods. The numerical solutions obtained by the finite difference method agree with those obtained by previous authors.
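
    As a cross-check on such computations, the Falkner-Skan boundary value problem f''' + f f'' + β(1 - f'^2) = 0 with f(0) = f'(0) = 0 and f'(∞) = 1 can also be solved by a standard shooting method. This is our own choice for illustration, not the paper's finite-difference scheme, and the truncation length and bracketing interval below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def far_field_residual(s, beta, eta_max=10.0):
        # Integrate f''' + f*f'' + beta*(1 - f'^2) = 0 with f(0) = f'(0) = 0 and
        # f''(0) = s, then return f'(eta_max) - 1 (which vanishes at the true s).
        def rhs(eta, y):
            f, fp, fpp = y
            return [fp, fpp, -f * fpp - beta * (1.0 - fp ** 2)]
        sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, s], rtol=1e-10, atol=1e-12)
        return sol.y[1, -1] - 1.0

    def wall_shear(beta):
        # Shooting: root-find f''(0); the bracket [0.1, 2.0] is an assumption
        # that holds for moderate beta.
        return brentq(lambda s: far_field_residual(s, beta), 0.1, 2.0)
    ```

    For β = 0 this normalization reduces to f''' + f f'' = 0, whose wall-shear value is f''(0) ≈ 0.4696 (equivalently 0.33206 in the Blasius scaling f''' + ½ f f'' = 0).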

  9. Reduction of the two dimensional stationary Navier-Stokes problem to a sequence of Fredholm integral equations of the second kind

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.

    1981-01-01

    Present approaches to solving the stationary Navier-Stokes equations are of limited value; however, there exists an equivalent representation of the problem with significant potential for solving such problems. This is because the equivalent representation consists of a sequence of Fredholm integral equations of the second kind, and methods for solving this type of problem are very well developed. With the problem in this form, there is also an excellent chance of determining explicit error estimates, since one deals with bounded, rather than unbounded, linear operators.
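
    Solving a Fredholm integral equation of the second kind is indeed routine: the Nyström method replaces the integral with a quadrature rule and solves the resulting linear system. A self-contained sketch (the separable test kernel and its closed-form solution are our own illustrative choice, unrelated to the Navier-Stokes reduction itself):

    ```python
    import numpy as np

    def nystrom_fredholm2(kernel, f, lam, a, b, n=20):
        # Solve u(x) = f(x) + lam * int_a^b K(x,t) u(t) dt at Gauss-Legendre nodes.
        z, w = np.polynomial.legendre.leggauss(n)
        t = 0.5 * (b - a) * z + 0.5 * (b + a)       # map nodes to [a, b]
        w = 0.5 * (b - a) * w                       # rescale weights
        K = kernel(t[:, None], t[None, :])          # K(x_i, t_j)
        A = np.eye(n) - lam * K * w[None, :]        # Nystrom system (I - lam*K*W)u = f
        u = np.linalg.solve(A, f(t))
        return t, u

    # test problem: u(x) = x + int_0^1 x*t*u(t) dt, exact solution u(x) = 1.5*x
    t, u = nystrom_fredholm2(lambda x, s: x * s, lambda x: x, 1.0, 0.0, 1.0)
    ```

    Because the quadrature integrates the low-degree integrand exactly, the discrete solution matches the analytic one to machine precision.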

  10. One-Loop Calculations and Detailed Analysis of the Localized Non-Commutative p^{-2} U(1) Gauge Model

    NASA Astrophysics Data System (ADS)

    Blaschke, Daniel N.; Rofner, Arnold; Sedmik, René I. P.

    2010-05-01

    This paper carries forward a series of articles describing our enterprise to construct a gauge equivalent for the θ-deformed non-commutative p^{-2} model originally introduced by Gurau et al. [Comm. Math. Phys. 287 (2009), 275-290]. It is shown that breaking terms of the form used by Vilar et al. [J. Phys. A: Math. Theor. 43 (2010), 135401, 13 pages] and ourselves [Eur. Phys. J. C: Part. Fields 62 (2009), 433-443] to localize the BRST covariant operator (D^2 θ^2 D^2)^{-1} lead to difficulties concerning renormalization. The reason is that this dimensionless operator is invariant with respect to any symmetry of the model and can be inserted to arbitrary powers. In the present article we discuss explicit one-loop calculations and analyze the mechanism from which the mentioned problems originate.

  11. Higher-order gravity and the classical equivalence principle

    NASA Astrophysics Data System (ADS)

    Accioly, Antonio; Herdy, Wallace

    2017-11-01

    As is well known, the deflection of any particle by a gravitational field within the context of Einstein’s general relativity — which is a geometrical theory — is, of course, nondispersive. Nevertheless, as we shall show in this paper, the mentioned result will change totally if the bending is analyzed — at the tree level — in the framework of higher-order gravity. Indeed, to first order, the deflection angle corresponding to the scattering of different quantum particles by the gravitational field mentioned above is not only spin dependent, it is also dispersive (energy-dependent). Consequently, it violates the classical equivalence principle (universality of free fall, or equality of inertial and gravitational masses) which is a nonlocal principle. However, contrary to popular belief, it is in agreement with the weak equivalence principle which is nothing but a statement about purely local effects. It is worthy of note that the weak equivalence principle encompasses the classical equivalence principle locally. We also show that the claim that there exists an incompatibility between quantum mechanics and the weak equivalence principle, is incorrect.

  12. From near to eternity: Spin-glass planting, tiling puzzles, and constraint-satisfaction problems

    NASA Astrophysics Data System (ADS)

    Hamze, Firas; Jacob, Darryl C.; Ochoa, Andrew J.; Perera, Dilina; Wang, Wenlong; Katzgraber, Helmut G.

    2018-04-01

    We present a methodology for generating Ising Hamiltonians of tunable complexity and with a priori known ground states based on a decomposition of the model graph into edge-disjoint subgraphs. The idea is illustrated with a spin-glass model defined on a cubic lattice, where subproblems, whose couplers are restricted to the two values {-1, +1}, are specified on unit cubes and are parametrized by their local degeneracy. The construction is shown to be equivalent to a type of three-dimensional constraint-satisfaction problem known as the tiling puzzle. By varying the proportions of subproblem types, the Hamiltonian can span a dramatic range of typical computational complexity, from fairly easy to many orders of magnitude more difficult than prototypical bimodal and Gaussian spin glasses in three space dimensions. We corroborate this behavior via experiments with different algorithms and discuss generalizations and extensions to different types of graphs.
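
    The idea of an a priori known ground state can be demonstrated in its simplest form with gauge planting, where the couplers are chosen as J_ij = s_i s_j so that a chosen configuration satisfies every bond. The paper's edge-disjoint subproblem construction with tunable degeneracy is richer; this sketch, including the graph size, is our own minimal illustration:

    ```python
    import numpy as np
    from itertools import combinations, product

    rng = np.random.default_rng(0)
    n = 8                                    # small enough for brute force
    planted = rng.choice([-1, 1], size=n)    # the configuration we plant
    edges = list(combinations(range(n), 2))  # complete graph K_n

    # Gauge planting: J_ij = s_i * s_j makes every bond satisfied by `planted`,
    # so its energy -|E| is a global minimum by construction.
    J = {(i, j): planted[i] * planted[j] for i, j in edges}

    def energy(s):
        return -sum(J[(i, j)] * s[i] * s[j] for i, j in edges)

    e_planted = energy(planted)
    e_min = min(energy(np.array(c)) for c in product((-1, 1), repeat=n))
    ```

    The brute-force sweep over all 2^n configurations confirms that the planted state attains the minimum energy -|E|; mixing frustrated subproblem types, as in the paper, is what makes planted instances hard.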

  13. A Variational Reduction and the Existence of a Fully Localised Solitary Wave for the Three-Dimensional Water-Wave Problem with Weak Surface Tension

    NASA Astrophysics Data System (ADS)

    Buffoni, Boris; Groves, Mark D.; Wahlén, Erik

    2017-12-01

    Fully localised solitary waves are travelling-wave solutions of the three-dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number β greater than 1/3) has recently been given. In this article we present an existence theory for the physically more realistic case 0 < β < 1/3. A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.

  14. A Variational Reduction and the Existence of a Fully Localised Solitary Wave for the Three-Dimensional Water-Wave Problem with Weak Surface Tension

    NASA Astrophysics Data System (ADS)

    Buffoni, Boris; Groves, Mark D.; Wahlén, Erik

    2018-06-01

    Fully localised solitary waves are travelling-wave solutions of the three-dimensional gravity-capillary water wave problem which decay to zero in every horizontal spatial direction. Their existence has been predicted on the basis of numerical simulations and model equations (in which context they are usually referred to as `lumps'), and a mathematically rigorous existence theory for strong surface tension (Bond number β greater than 1/3) has recently been given. In this article we present an existence theory for the physically more realistic case 0 < β < 1/3. A classical variational principle for fully localised solitary waves is reduced to a locally equivalent variational principle featuring a perturbation of the functional associated with the Davey-Stewartson equation. A nontrivial critical point of the reduced functional is found by minimising it over its natural constraint set.

  15. Management of toxic cyanobacteria for drinking water production of Ain Zada Dam.

    PubMed

    Saoudi, Amel; Brient, Luc; Boucetta, Sabrine; Ouzrout, Rachid; Bormans, Myriam; Bensouilah, Mourad

    2017-07-01

    Blooms of toxic cyanobacteria in Algerian reservoirs represent a potential health problem, mainly from drinking water that supplies the local population of Ain Zada (Bordj Bou Arreridj). The objective of this study is to monitor, detect, and identify the existence of cyanobacteria and microcystins during blooming times. Samples were taken in 2013 from eight stations. The results show that three potentially toxic cyanobacterial genera with the species Planktothrix agardhii were dominant. Cyanobacterial biomass, phycocyanin (PC) concentrations, and microcystin (MC) concentrations were high in the surface layer and at 14 m depth; these values were also high in the treated water. On 11 May 2013, MC concentrations were 6.3 μg/L in MC-LR equivalent in the drinking water. This study shows for the first time the presence of cyanotoxins in raw and treated waters, highlighting that regular monitoring of cyanobacteria and cyanotoxins must be undertaken to avoid potential health problems.

  16. The Goertler vortex instability mechanism in three-dimensional boundary layers

    NASA Technical Reports Server (NTRS)

    Hall, P.

    1984-01-01

    The two dimensional boundary layer on a concave wall is centrifugally unstable with respect to vortices aligned with the basic flow for sufficiently high values of the Goertler number. However, in most situations of practical interest the basic flow is three dimensional and previous theoretical investigations do not apply. The linear stability of the flow over an infinitely long swept wall of variable curvature is considered. If there is no pressure gradient in the boundary layer the instability problem can always be related to an equivalent two dimensional calculation. However, in general, this is not the case and even for small values of the crossflow velocity field dramatic differences between the two and three dimensional problems emerge. When the size of the crossflow is further increased, the vortices in the neutral location have their axes locally perpendicular to the vortex lines of the basic flow.

  17. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

    This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, while restricting work on a design variable to those grids on which its changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.
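
    The costate (Lagrange multiplier) machinery referred to here can be shown on a toy problem: for a state equation A(θ)x = b and cost J = ½||x - x_t||², a single adjoint solve Aᵀλ = x - x_t yields the exact gradient dJ/dθ = -λᵀ(∂A/∂θ)x. A minimal sketch (the 2x2 system and target are our own illustrative assumptions, not the paper's flow equations), verified against a finite difference:

    ```python
    import numpy as np

    # State equation A(theta) x = b, cost J = 0.5 * ||x - x_t||^2.
    b = np.array([1.0, 2.0])
    x_t = np.array([0.2, 0.6])                  # target state (illustrative)
    dA = np.array([[1.0, 0.0], [0.0, 0.0]])     # dA/dtheta for this toy A(theta)

    def A(theta):
        return np.array([[4.0 + theta, 1.0], [1.0, 3.0]])

    def cost(theta):
        x = np.linalg.solve(A(theta), b)
        return 0.5 * np.sum((x - x_t) ** 2)

    def adjoint_grad(theta):
        x = np.linalg.solve(A(theta), b)
        lam = np.linalg.solve(A(theta).T, x - x_t)   # costate (Lagrange multiplier)
        return -lam @ (dA @ x)                       # dJ/dtheta via one adjoint solve

    g_adj = adjoint_grad(0.5)
    g_fd = (cost(0.5 + 1e-6) - cost(0.5 - 1e-6)) / 2e-6   # finite-difference check
    ```

    The point of the adjoint formulation is that the gradient with respect to any number of design variables costs one extra linear solve, which is what makes the one-shot approach roughly as cheap as the analysis problem.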

  18. A nurse staffing analysis at the largest hospital in the Gulf region

    NASA Astrophysics Data System (ADS)

    Louly, M.; Gharbi, A.; Azaiez, M. N.; Bouras, A.

    2014-12-01

    The paper considers a staffing problem at a local hospital. The managers consider themselves understaffed and try to overcome the staffing deficit through overtime rather than by hiring additional nurses. However, the large budget allocated for overtime has become a concern and needs assessment, analysis, and justification. The hospital's current estimate suggests that the shortage at the hospital level corresponds to 300 full-time equivalent (FTE) nurses, but this deficit figure is not based on a rigorous scientific approach. This paper presents a staffing model that provides the required scientific evidence on the deficit level, along with accurate information on the overtime components. As a result, the suggested staffing model shows that some nursing units are unnecessarily overstaffed. Moreover, the current study reveals that the real deficit is only 215 FTE, resulting in a potential saving of 28%.

  19. Clustering Qualitative Data Based on Binary Equivalence Relations: Neighborhood Search Heuristics for the Clique Partitioning Problem

    ERIC Educational Resources Information Center

    Brusco, Michael J.; Kohn, Hans-Friedrich

    2009-01-01

    The clique partitioning problem (CPP) requires the establishment of an equivalence relation for the vertices of a graph such that the sum of the edge costs associated with the relation is minimized. The CPP has important applications for the social sciences because it provides a framework for clustering objects measured on a collection of nominal…
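
    A basic neighborhood-search heuristic for the clique partitioning problem relocates one vertex at a time to whichever cluster (or a fresh singleton) most reduces the total within-cluster cost, stopping at a local optimum. This sketch is under our own assumptions: the single-vertex relocation neighborhood is only one of several such heuristics, and the cost matrix below is an invented toy instance.

    ```python
    import numpy as np

    def cpp_local_search(C, max_sweeps=100):
        # C[i, j]: cost incurred if i and j share a cluster (negative = attract).
        n = C.shape[0]
        labels = np.arange(n)                       # start from singletons
        for _ in range(max_sweeps):
            improved = False
            for v in range(n):
                cur = labels[v]
                cost_to = lambda lab: sum(C[v, u] for u in range(n)
                                          if u != v and labels[u] == lab)
                cost_cur = cost_to(cur)
                best_lab, best_delta = cur, 0.0
                # candidate clusters: all existing ones plus a fresh singleton
                for lab in sorted(set(labels) | {labels.max() + 1}):
                    if lab == cur:
                        continue
                    delta = cost_to(lab) - cost_cur
                    if delta < best_delta - 1e-12:
                        best_lab, best_delta = lab, delta
                if best_lab != cur:
                    labels[v] = best_lab
                    improved = True
            if not improved:
                break
        return labels

    # toy instance: two natural cliques {0,1,2} and {3,4,5}
    C = np.full((6, 6), 1.0)
    C[:3, :3] = -1.0
    C[3:, 3:] = -1.0
    np.fill_diagonal(C, 0.0)
    labels = cpp_local_search(C)
    ```

    On this instance the relocation heuristic recovers the two planted cliques; on harder instances it only guarantees a local optimum, which is why the article studies larger neighborhoods.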

  20. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm called the Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES aims to explore model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution, which enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
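
    The notion of an equivalence domain, an extended region of model space fitting the data equally well, can be pictured with a toy nonunique inverse problem. The sketch below is entirely our own illustration (the paper couples CMAES optimization with sampling, whereas plain rejection sampling stands in here): it accepts random models whose misfit falls below a threshold and measures the spread of the resulting ensemble.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def misfit(m):
        # Toy nonunique forward problem: the datum constrains only m1 + m2, so a
        # whole line of models fits equally well (the equivalence domain).
        d_obs = 1.0
        return (m[..., 0] + m[..., 1] - d_obs) ** 2

    # Rejection sampling of the low-misfit region: keep models below threshold.
    samples = rng.uniform(-2.0, 2.0, size=(200_000, 2))
    accepted = samples[misfit(samples) < 1e-3]

    # Per-direction spread of the ensemble shows which model combinations
    # the data leave unconstrained.
    spread = accepted.std(axis=0)
    ```

    The ensemble is tightly constrained along m1 + m2 but spreads widely along m1 - m2, which is exactly the kind of uncertainty structure an ED-sampling method is meant to expose.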

  1. A new method for CT dose estimation by determining patient water equivalent diameter from localizer radiographs: Geometric transformation and calibration methods using readily available phantoms.

    PubMed

    Zhang, Da; Mihai, Georgeta; Barbaras, Larry G; Brook, Olga R; Palmer, Matthew R

    2018-05-10

    Water equivalent diameter (Dw) reflects a patient's attenuation, is a sound descriptor of patient size, and is used to determine the size-specific dose estimate for a CT examination. Calculating Dw from CT localizer radiographs makes it possible to utilize Dw before the actual scans and minimizes truncation errors due to limited reconstructed fields of view. One obstacle preventing the user community from implementing this useful tool is the necessity of calibrating localizer pixel values so as to represent water equivalent attenuation. We report a practical method to ease this calibration process. Dw is calculated from the water equivalent area (Aw), which is deduced from the average localizer pixel value (LPV) of the line(s) in the localizer radiograph that correspond(s) to the axial image. The calibration process establishes the relationship between Aw and LPV. Localizer and axial images were acquired from phantoms of different total attenuation. We developed a program that automates the geometrical association between axial images and localizer lines and manages the measurements of Dw and average pixel values. We tested the calibration method on three CT scanners, a GE CT750HD, a Siemens Definition AS, and a Toshiba Acquilion Prime80, for both posterior-anterior (PA) and lateral (LAT) localizer directions (for all CTs) and with different localizer filters (for the Toshiba CT). The computer program was able to correctly perform the geometrical association between corresponding axial images and localizer lines. Linear relationships between Aw and LPV were observed (with R² all greater than 0.998) under all tested conditions, regardless of the direction and image filters used on the localizer radiographs. When comparing the LAT and PA directions with the same image filter on the same scanner, the slope values were close (maximum difference of 0.02 mm), and the intercept values showed larger deviations (maximum difference of 2.8 mm). Water equivalent diameter estimation on phantoms and patients demonstrated the high accuracy of the calibration: the percentage difference between Dw from axial images and from localizers was below 2%. With five clinical chest examinations and five abdominal-pelvic examinations of varying patient sizes, the maximum percentage difference was approximately 5%. Our study showed that Aw and LPV are highly correlated, providing enough evidence to allow for Dw determination once the experimental calibration process is established. © 2018 American Association of Physicists in Medicine.
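
    The calibration described above amounts to fitting a line between the average localizer pixel value and the water-equivalent area, after which Dw follows from the area of an equivalent water cylinder, Dw = 2·sqrt(Aw/π). A sketch with hypothetical calibration pairs (the numbers below are invented for illustration and are not the paper's measurements):

    ```python
    import numpy as np

    # Hypothetical calibration pairs: average localizer pixel value (LPV) vs
    # water-equivalent area A_w in mm^2 measured from the matching axial slice
    # of phantoms with different total attenuation.
    lpv = np.array([120.0, 260.0, 410.0, 555.0, 700.0])
    aw = np.array([9.0e3, 2.1e4, 3.4e4, 4.6e4, 5.9e4])

    slope, intercept = np.polyfit(lpv, aw, 1)       # linear model A_w ~ a*LPV + b

    def water_equiv_diameter(lpv_line):
        # A water cylinder with area A_w has D_w = 2 * sqrt(A_w / pi).
        a_w = slope * lpv_line + intercept
        return 2.0 * np.sqrt(a_w / np.pi)
    ```

    Once the slope and intercept are established for a given scanner and localizer direction, any localizer line's average pixel value converts directly to a Dw estimate before the axial scan is acquired.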

  2. Antioxidant properties of selected fruit cultivars grown in Sri Lanka.

    PubMed

    Silva, K D R R; Sirasa, M S F

    2018-01-01

    Extracts of twenty locally available Sri Lankan fruits were analysed for 2,2-diphenyl-1-picrylhydrazyl (DPPH) scavenging activity, ferrous reducing antioxidant power (FRAP), total phenolic content (TPC), total flavonoid content (TFC) and vitamin C content. The results showed that gooseberry (Phyllanthus emblica 'local') exhibited the highest DPPH scavenging activity (111.25 mg ascorbic acid equivalent antioxidant capacity (AEAC)/g), FRAP (1022.05 μmol FeSO4/g), TPC (915.7 mg gallic acid equivalents (GAE)/100 g), TFC (873.2 mg catechin equivalents (CE)/100 g) and vitamin C (136.8 mg ascorbic acid equivalents (AAE)/100 g), respectively. Sugar apple (Annona squamosa 'local') and star fruit (Averrhoa carambola 'Honey Sweet') obtained the second and third highest antioxidant activities in terms of rankings of FRAP, DPPH activity, TPC, TFC and vitamin C content. The strong correlation of vitamin C, TPC and TFC with FRAP and DPPH showed their contribution to antioxidant capacity. Among the selected fruits, the underutilized cultivar gooseberry showed the highest overall antioxidant potential. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Black hole thermodynamics from a variational principle: asymptotically conical backgrounds

    DOE PAGES

    An, Ok Song; Cvetič, Mirjam; Papadimitriou, Ioannis

    2016-03-14

    The variational problem of gravity theories is directly related to black hole thermodynamics. For asymptotically locally AdS backgrounds it is known that holographic renormalization results in a variational principle in terms of equivalence classes of boundary data under the local asymptotic symmetries of the theory, which automatically leads to finite conserved charges satisfying the first law of thermodynamics. We show that this connection holds well beyond asymptotically AdS black holes. In particular, we formulate the variational problem for N = 2 STU supergravity in four dimensions with boundary conditions corresponding to those obeyed by the so-called 'subtracted geometries'. We show that such boundary conditions can be imposed covariantly in terms of a set of asymptotic second class constraints, and we derive the appropriate boundary terms that render the variational problem well posed in two different duality frames of the STU model. This allows us to define finite conserved charges associated with any asymptotic Killing vector and to demonstrate that these charges satisfy the Smarr formula and the first law of thermodynamics. Moreover, by uplifting the theory to five dimensions and then reducing on a 2-sphere, we provide a precise map between the thermodynamic observables of the subtracted geometries and those of the BTZ black hole. Finally, surface terms play a crucial role in this identification.

  4. Black hole thermodynamics from a variational principle: asymptotically conical backgrounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Ok Song; Cvetič, Mirjam; Papadimitriou, Ioannis

    The variational problem of gravity theories is directly related to black hole thermodynamics. For asymptotically locally AdS backgrounds it is known that holographic renormalization results in a variational principle in terms of equivalence classes of boundary data under the local asymptotic symmetries of the theory, which automatically leads to finite conserved charges satisfying the first law of thermodynamics. We show that this connection holds well beyond asymptotically AdS black holes. In particular, we formulate the variational problem for N = 2 STU supergravity in four dimensions with boundary conditions corresponding to those obeyed by the so-called 'subtracted geometries'. We show that such boundary conditions can be imposed covariantly in terms of a set of asymptotic second class constraints, and we derive the appropriate boundary terms that render the variational problem well posed in two different duality frames of the STU model. This allows us to define finite conserved charges associated with any asymptotic Killing vector and to demonstrate that these charges satisfy the Smarr formula and the first law of thermodynamics. Moreover, by uplifting the theory to five dimensions and then reducing on a 2-sphere, we provide a precise map between the thermodynamic observables of the subtracted geometries and those of the BTZ black hole. Finally, surface terms play a crucial role in this identification.

  5. Hydrogel delivery of lysostaphin eliminates orthopedic implant infection by Staphylococcus aureus and supports fracture healing.

    PubMed

    Johnson, Christopher T; Wroe, James A; Agarwal, Rachit; Martin, Karen E; Guldberg, Robert E; Donlan, Rodney M; Westblade, Lars F; García, Andrés J

    2018-05-29

    Orthopedic implant infections are a significant clinical problem, with current therapies limited to surgical debridement and systemic antibiotic regimens. Lysostaphin is a bacteriolytic enzyme with high antistaphylococcal activity. We engineered a lysostaphin-delivering injectable PEG hydrogel to treat Staphylococcus aureus infections in bone fractures. The injectable hydrogel formulation adheres to exposed tissue and fracture surfaces, ensuring efficient, local delivery of lysostaphin. Lysostaphin encapsulation within this synthetic hydrogel maintained enzyme stability and activity. Lysostaphin-delivering hydrogels exhibited enhanced antibiofilm activity compared with soluble lysostaphin. Lysostaphin-delivering hydrogels eradicated S. aureus infection and outperformed prophylactic antibiotic and soluble lysostaphin therapy in a murine model of femur fracture. Analysis of the local inflammatory response to infections treated with lysostaphin-delivering hydrogels revealed cytokine secretion profiles indistinguishable from those of uninfected fractures, demonstrating clearance of bacteria and associated inflammation. Importantly, infected fractures treated with lysostaphin-delivering hydrogels fully healed by 5 wk with bone formation and mechanical properties equivalent to those of uninfected fractures, whereas fractures treated without the hydrogel carrier were equivalent to untreated infections. Finally, lysostaphin-delivering hydrogels eliminated methicillin-resistant S. aureus infections, supporting this therapy as an alternative to antibiotics. These results indicate that lysostaphin-delivering hydrogels effectively eliminate orthopedic S. aureus infections while simultaneously supporting fracture repair. Copyright © 2018 the Author(s). Published by PNAS.

  6. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    PubMed

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
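
    The two-sample reduction described above lends itself to a short numerical sketch. The code below is a minimal illustration for the logistic case only, and it is our own reconstruction, not the authors' implementation: the function name, the choice to centre the two groups on the overall logit (so the expected number of events is approximately preserved), and the standard normal-approximation two-proportion formula are all assumptions.

```python
from math import exp, log
from statistics import NormalDist

def logit(p):
    return log(p / (1 - p))

def expit(x):
    return 1 / (1 + exp(-x))

def two_sample_n(beta, sd_x, p_overall, alpha=0.05, power=0.80):
    """Approximate total sample size for a logistic model with one continuous
    covariate, via an equivalent two-sample problem: two equal groups whose
    log-odds differ by beta * (2 * sd_x), centred on the overall logit so the
    expected number of events is roughly unchanged."""
    delta = 2 * beta * sd_x                      # log-odds difference between groups
    c = logit(p_overall)                         # overall log-odds
    p1, p2 = expit(c - delta / 2), expit(c + delta / 2)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)    # two-sided significance level
    z_b = NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    n_per_group = (z_a + z_b) ** 2 * var / (p1 - p2) ** 2
    return 2 * n_per_group                       # total sample size
```

    For a slope of 0.5 per SD of the covariate and an overall response probability of 0.3, this gives a total n on the order of 150 at 80% power; larger slopes need fewer subjects, mirroring the two-sample intuition.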

  7. An approximate method for solution to variable moment of inertia problems

    NASA Technical Reports Server (NTRS)

    Beans, E. W.

    1981-01-01

    An approximation method is presented for reducing a nonlinear differential equation (for the 'weather vaning' motion of a wind turbine) to an equivalent constant-moment-of-inertia problem, using the integrated average of the moment of inertia. The cycle time of the equivalent problem was found to match that of the original system when the rotating speed is more than four times the system's minimum natural frequency.
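
    The averaging step can be sketched numerically. This is an illustrative reconstruction, not the report's code: the inertia law I(θ) = I0 + I1·cos 2θ and all numbers are hypothetical, and the trapezoidal rule stands in for whatever integration the author used.

```python
from math import cos, pi, sqrt

def average_inertia(inertia, n=10_000):
    """Integrated average of an angle-dependent moment of inertia over one
    full cycle, by the trapezoidal rule.  `inertia` maps angle [rad] -> I."""
    h = 2 * pi / n
    s = 0.5 * (inertia(0.0) + inertia(2 * pi))
    for k in range(1, n):
        s += inertia(k * h)
    return s * h / (2 * pi)

# Hypothetical inertia law and numbers, chosen only for illustration
I0, I1, k_spring = 120.0, 30.0, 500.0        # kg*m^2, kg*m^2, N*m/rad
I_avg = average_inertia(lambda th: I0 + I1 * cos(2 * th))
omega_eq = sqrt(k_spring / I_avg)            # natural frequency of the
                                             # equivalent constant-I problem
```

    Because cos 2θ averages to zero over a cycle, I_avg here reduces to I0, and the equivalent constant-inertia system oscillates at sqrt(k/I0).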

  8. Bees Algorithm for Construction of Multiple Test Forms in E-Testing

    ERIC Educational Resources Information Center

    Songmuang, Pokpong; Ueno, Maomi

    2011-01-01

    The purpose of this research is to automatically construct multiple equivalent test forms that have equivalent qualities indicated by test information functions based on item response theory. There has been a trade-off in previous studies between the computational costs and the equivalent qualities of test forms. To alleviate this problem, we…

  9. Equilibria of perceptrons for simple contingency problems.

    PubMed

    Dawson, Michael R W; Dupuis, Brian

    2012-08-01

    The contingency between cues and outcomes is fundamentally important to theories of causal reasoning and to theories of associative learning. Researchers have computed the equilibria of Rescorla-Wagner models for a variety of contingency problems, and have used these equilibria to identify situations in which the Rescorla-Wagner model is consistent, or inconsistent, with normative models of contingency. Mathematical analyses that directly compare artificial neural networks to contingency theory have not been performed, because of the assumed equivalence between the Rescorla-Wagner learning rule and the delta rule training of artificial neural networks. However, recent results indicate that this equivalence is not as straightforward as typically assumed, suggesting a strong need for mathematical accounts of how networks deal with contingency problems. One such analysis is presented here, where it is proven that the structure of the equilibrium for a simple network trained on a basic contingency problem is quite different from the structure of the equilibrium for a Rescorla-Wagner model faced with the same problem. However, these structural differences lead to functionally equivalent behavior. The implications of this result for the relationships between associative learning, contingency theory, and connectionism are discussed.
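
    The flavour of such an analysis can be reproduced in a few lines. The sketch below is our own illustration, not the paper's derivation: it iterates the expected delta-rule/Rescorla-Wagner update for a basic 2x2 contingency problem with an always-present context cue, and at equilibrium the cue weight recovers the contingency ΔP = P(O|cue) - P(O|no cue).

```python
def rw_equilibrium(p_cue, p_o_cue, p_o_nocue, lr=0.05, steps=20_000):
    """Train context and cue weights with the expected (probability-weighted)
    delta-rule update; the context cue is present on every trial."""
    w_ctx = w_cue = 0.0
    trials = [  # (cue present, outcome, probability of this trial type)
        (1, 1, p_cue * p_o_cue),
        (1, 0, p_cue * (1 - p_o_cue)),
        (0, 1, (1 - p_cue) * p_o_nocue),
        (0, 0, (1 - p_cue) * (1 - p_o_nocue)),
    ]
    for _ in range(steps):
        d_ctx = d_cue = 0.0
        for cue, outcome, prob in trials:
            err = outcome - (w_ctx + cue * w_cue)   # prediction error
            d_ctx += prob * lr * err                # context present on all trials
            d_cue += prob * lr * err * cue          # cue updated only when present
        w_ctx += d_ctx
        w_cue += d_cue
    return w_ctx, w_cue
```

    With P(cue) = 0.5, P(O|cue) = 0.8 and P(O|no cue) = 0.2, the cue weight settles at ΔP = 0.6 and the context weight at 0.2, the classic Rescorla-Wagner equilibrium for this design.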

  10. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  11. Channel surface plasmons in a continuous and flat graphene sheet

    NASA Astrophysics Data System (ADS)

    Chaves, A. J.; Peres, N. M. R.; da Costa, D. R.; Farias, G. A.

    2018-05-01

    We derive an integral equation describing surface-plasmon polaritons in graphene deposited on a substrate with a planar surface and a dielectric protrusion in the opposite surface of the dielectric slab. We show that the problem is mathematically equivalent to the solution of a Fredholm equation, which we solve exactly. In addition, we show that the dispersion relation of the channel surface plasmons is determined by the geometric parameters of the protrusion alone. We also show that such a system supports both even and odd modes. We give the electrostatic potential and the intensity plot of the electrostatic field, which clearly show the transverse localized nature of the surface plasmons in a continuous and flat graphene sheet.

  12. Equilibrium polymerization on the equivalent-neighbor lattice

    NASA Technical Reports Server (NTRS)

    Kaufman, Miron

    1989-01-01

    The equilibrium polymerization problem is solved exactly on the equivalent-neighbor lattice, for which the Flory-Huggins (Flory, 1986) entropy of mixing is exact. The discrete version of the n-vector model, in the limit as n approaches 0, is verified to be equivalent to the equal-reactivity polymerization process in the whole parameter space, including the polymerized phase. The polymerization processes for polymers satisfying the Schulz (1939) distribution exhibit nonuniversal critical behavior. A close analogy is found between the polymerization problem of Schulz index r and the Bose-Einstein ideal gas in d = -2r dimensions, with critical polymerization corresponding to Bose-Einstein condensation.

  13. On the Perturbative Equivalence Between the Hamiltonian and Lagrangian Quantizations

    NASA Astrophysics Data System (ADS)

    Batalin, I. A.; Tyutin, I. V.

    The Hamiltonian (BFV) and Lagrangian (BV) quantization schemes are proved to be perturbatively equivalent to each other. It is shown in particular that the quantum master equation being treated perturbatively possesses a local formal solution.

  14. Differences between near-surface equivalent temperature and temperature trends for the Eastern United States. Equivalent temperature as an alternative measure of heat content

    USGS Publications Warehouse

    Davey, C.A.; Pielke, R.A.; Gallo, K.P.

    2006-01-01

    There is currently much attention being given to the observed increase in near-surface air temperatures during the last century. The proper investigation of heating trends, however, requires that we include surface heat content to monitor this aspect of the climate system. Changes in heat content of the Earth's climate are not fully described by temperature alone. Moist enthalpy or, alternatively, equivalent temperature, is more sensitive to surface vegetation properties than is air temperature and therefore more accurately depicts surface heating trends. The microclimates evident at many surface observation sites highlight the influence of land surface characteristics on local surface heating trends. Temperature and equivalent temperature trend differences from 1982-1997 are examined for surface sites in the Eastern U.S. Overall trend differences at the surface indicate equivalent temperature trends are relatively warmer than temperature trends in the Eastern U.S. Seasonally, equivalent temperature trends are relatively warmer than temperature trends in winter and are relatively cooler in the fall. These patterns, however, vary widely from site to site, so local microclimate is very important. © 2006 Elsevier B.V. All rights reserved.
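
    Moist enthalpy per unit mass is H = c_p·T + L_v·q, so the corresponding equivalent temperature is T_E = T + (L_v/c_p)·q. A minimal sketch, assuming constant L_v and c_p with standard round-number values rather than the paper's exact constants:

```python
def equivalent_temperature(t_kelvin, specific_humidity):
    """T_E = T + (L_v / c_p) * q: the air temperature plus the warming that
    would result if all the water vapour condensed.  Constants are standard
    approximate values (assumed here, not taken from the paper)."""
    L_V = 2.5e6    # latent heat of vaporization of water, J/kg
    C_P = 1005.0   # specific heat of dry air at constant pressure, J/(kg K)
    return t_kelvin + (L_V / C_P) * specific_humidity
```

    At 300 K with q = 15 g/kg this gives T_E ≈ 337 K, which illustrates why equivalent-temperature trends can diverge from plain temperature trends wherever surface moisture, and hence vegetation, changes.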

  15. PML solution of longitudinal wave propagation in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Farzanian, M.; Arbabi, Freydoon; Pak, Ronald

    2016-06-01

    This paper describes the development of a model for unbounded heterogeneous domains in which radiation damping is produced by an unphysical wave-absorbing layer. The Perfectly Matched Layer (PML) approach is used along with a displacement-based finite element. The heterogeneous model is validated using the closed-form solution of a benchmark problem: a free rod with two-part modulus subjected to a specified time history. Both elastically supported and unsupported semi-infinite rods with different degrees of inhomogeneity and loading are considered. Numerical results illustrate the effects of inhomogeneity on the response and are compared with those for equivalent homogeneous domains. The effects of characteristic features of the inhomogeneous problem, the presence of local maxima and a cut-off frequency, are determined. A degenerate case of a homogeneous semi-infinite rod on an elastic foundation is produced by letting the foundation stiffness tend to zero, and its response is compared with that of a free rod. The importance of proper selection of the PML parameters for highly accurate and efficient results is demonstrated by example problems.

  16. Stoichiometry of Reducing Equivalents and Splitting of Water in the Citric Acid Cycle.

    ERIC Educational Resources Information Center

    Madeira, Vitor M. C.

    1988-01-01

    Presents a solution to the problem of finding the source of extra reducing equivalents, and accomplishing the stoichiometry of glucose oxidation reactions. Discusses the citric acid cycle and glycolysis. (CW)

  17. Symmetry investigations on the incompressible stationary axisymmetric Euler equations with swirl

    NASA Astrophysics Data System (ADS)

    Frewer, M.; Oberlack, M.; Guenther, S.

    2007-08-01

    We discuss the incompressible stationary axisymmetric Euler equations with swirl, for which we derive via a scalar stream function an equivalent representation, the Bragg-Hawthorne equation [Bragg, S.L., Hawthorne, W.R., 1950. Some exact solutions of the flow through annular cascade actuator discs. J. Aero. Sci. 17, 243]. Despite this obvious equivalence, we will show that under a local Lie point symmetry analysis the Bragg-Hawthorne equation exposes itself as not being fully equivalent to the original Euler equations. This is reflected in the way that it possesses additional symmetries not being admitted by its counterpart. In other words, a symmetry of the Bragg-Hawthorne equation is in general not a symmetry of the Euler equations. Not the differential Euler equations but rather a set of integro-differential equations attains full equivalence to the Bragg-Hawthorne equation. For these intermediate Euler equations, it is interesting to note that local symmetries of the Bragg-Hawthorne equation transform to local as well as to nonlocal symmetries. This behaviour, on the one hand, is in accordance with Zawistowski's result [Zawistowski, Z.J., 2001. Symmetries of integro-differential equations. Rep. Math. Phys. 48, 269; Zawistowski, Z.J., 2004. General criterion of invariance for integro-differential equations. Rep. Math. Phys. 54, 341] that it is possible for integro-differential equations to admit local Lie point symmetries. On the other hand, with this transformation process we collect symmetries which cannot be obtained when carrying out a usual local Lie point symmetry analysis. Finally, the symmetry classification of the Bragg-Hawthorne equation is used to find analytical solutions for the phenomenon of vortex breakdown.

  18. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    PubMed

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements, instead of by the number of imaging parameters, increasing computation speed by up to a factor of four per iteration in most under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form, and it is proven to be equivalent not only analytically but also numerically. Equivalent alternative forms for other minimization methods, like Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of the target in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
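
    The identity at the heart of this trick can be checked numerically. The paper uses the full Sherman-Morrison-Woodbury form to trade a parameter-dimension inverse for a measurement-dimension one; the pure-Python 2x2, rank-1 sketch below only verifies the identity itself and is not the reconstruction code.

```python
def mat_inv2(m):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1]]

def outer(u, v):
    return [[u[0] * v[0], u[0] * v[1]], [u[1] * v[0], u[1] * v[1]]]

def madd(x, y):
    return [[x[i][j] + y[i][j] for j in range(2)] for i in range(2)]

def mscale(s, m):
    return [[s * m[i][j] for j in range(2)] for i in range(2)]

A = [[4.0, 1.0], [1.0, 3.0]]
u, v = [1.0, 2.0], [0.5, -1.0]

# Direct inverse of the rank-1 update A + u v^T
direct = mat_inv2(madd(A, outer(u, v)))

# Sherman-Morrison: (A + u v^T)^-1 = A^-1 - (A^-1 u)(v^T A^-1) / (1 + v^T A^-1 u)
Ainv = mat_inv2(A)
Au = matvec(Ainv, u)                                  # A^-1 u
vA = [v[0] * Ainv[0][0] + v[1] * Ainv[1][0],
      v[0] * Ainv[0][1] + v[1] * Ainv[1][1]]          # v^T A^-1
denom = 1 + v[0] * Au[0] + v[1] * Au[1]
shermor = madd(Ainv, mscale(-1.0 / denom, outer(Au, vA)))
```

    Both routes produce the same inverse; in the imaging setting the payoff comes from `denom` being a small (measurement-sized) matrix while `A` is parameter-sized.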

  19. Sound field reproduction as an equivalent acoustical scattering problem.

    PubMed

    Fazi, Filippo Maria; Nelson, Philip A

    2013-11-01

    Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.

  20. A computational algorithm for spacecraft control and momentum management

    NASA Technical Reports Server (NTRS)

    Dzielski, John; Bergmann, Edward; Paradiso, Joseph

    1990-01-01

    Developments in the area of nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem using feedback linearization to transform to an equivalent problem involving a linear dynamical constraint and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is transformed into an unconstrained nonlinear quadratic program that yields an approximate solution to the original problem. Two examples are presented to illustrate the results.
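
    A toy scalar version conveys the first step of this approach. The plant, gains and step size below are invented for illustration (the actual spacecraft model involves momentum-exchange devices and a multi-dimensional state): cancelling the nonlinearity turns the plant into an equivalent linear system for which a simple controller suffices.

```python
def simulate(x0, k=2.0, dt=1e-3, t_end=5.0):
    """Feedback linearization for the toy scalar plant xdot = -x**3 + u.
    The input u = x**3 + v cancels the nonlinearity exactly, leaving the
    equivalent linear system xdot = v; the linear law v = -k*x then gives
    exponential convergence to the origin (forward-Euler integration)."""
    x = x0
    for _ in range(int(t_end / dt)):
        v = -k * x                # controller designed for the linearized system
        u = x ** 3 + v            # feedback-linearizing input applied to the plant
        x += dt * (-x ** 3 + u)   # true nonlinear dynamics; equals dt * v here
    return x
```

    Starting from x0 = 1 the state decays essentially as exp(-k t), i.e. it behaves exactly like the equivalent linear problem the optimization is then posed over.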

  1. Quantum gravity from noncommutative spacetime

    NASA Astrophysics Data System (ADS)

    Lee, Jungjai; Yang, Hyun Seok

    2014-12-01

    We review a novel and authentic way to quantize gravity. This novel approach is based on the fact that Einstein gravity can be formulated in terms of a symplectic geometry rather than a Riemannian geometry in the context of emergent gravity. An essential step for emergent gravity is to realize the equivalence principle, the most important property in the theory of gravity (general relativity), from U(1) gauge theory on a symplectic or Poisson manifold. Through the realization of the equivalence principle, which is an intrinsic property in symplectic geometry known as the Darboux theorem or the Moser lemma, one can understand how diffeomorphism symmetry arises from noncommutative U(1) gauge theory; thus, gravity can emerge from the noncommutative electromagnetism, which is also an interacting theory. As a consequence, a background-independent quantum gravity in which the prior existence of any spacetime structure is not a priori assumed but is defined by using the fundamental ingredients in quantum gravity theory can be formulated. This scheme for quantum gravity can be used to resolve many notorious problems in theoretical physics, such as the cosmological constant problem, to understand the nature of dark energy, and to explain why gravity is so weak compared to other forces. In particular, it leads to a remarkable picture of what matter is. A matter field, such as leptons and quarks, simply arises as a stable localized geometry, which is a topological object in the defining algebra (noncommutative ★-algebra) of quantum gravity.

  2. Resumming double logarithms in the QCD evolution of color dipoles

    DOE PAGES

    Iancu, E.; Madrigal, J. D.; Mueller, A. H.; ...

    2015-05-01

    The higher-order perturbative corrections, beyond leading logarithmic accuracy, to the BFKL evolution in QCD at high energy are well known to suffer from a severe lack-of-convergence problem, due to radiative corrections enhanced by double collinear logarithms. Via an explicit calculation of Feynman graphs in light cone (time-ordered) perturbation theory, we show that the corrections enhanced by double logarithms (either energy-collinear, or double collinear) are associated with soft gluon emissions which are strictly ordered in lifetime. These corrections can be resummed to all orders by solving an evolution equation which is non-local in rapidity. This equation can be equivalently rewritten in local form, but with modified kernel and initial conditions, which resum double collinear logs to all orders. We extend this resummation to the next-to-leading order BFKL and BK equations. The first numerical studies of the collinearly-improved BK equation demonstrate the essential role of the resummation in both stabilizing and slowing down the evolution.

  3. Darkness without dark matter and energy - generalized unimodular gravity

    NASA Astrophysics Data System (ADS)

    Barvinsky, A. O.; Kamenshchik, A. Yu.

    2017-11-01

    We suggest a Lorentz non-invariant generalization of the unimodular gravity theory, which is classically equivalent to general relativity with a locally inert (devoid of local degrees of freedom) perfect fluid having an equation of state with a constant parameter w. For the range of w near -1 this dark fluid can play the role of dark energy, while for w = 0 this dark dust admits spatial inhomogeneities and can be interpreted as dark matter. We discuss possible implications of this model in the cosmological initial conditions problem. In particular, this is the extension of known microcanonical density matrix predictions for the initial quantum state of the closed cosmology to the case of a spatially open Universe, based on the imitation of the spatial curvature by the dark fluid density. We also briefly discuss quantization of this model, necessarily involving the method of gauge systems with reducible constraints, and the effect of this method on the treatment of the recently suggested mechanism of vacuum energy sequestering.

  4. Large Nc equivalence and baryons

    NASA Astrophysics Data System (ADS)

    Blake, Mike; Cherman, Aleksey

    2012-09-01

    In the large Nc limit, gauge theories with different gauge groups and matter content sometimes turn out to be “large Nc equivalent,” in the sense of having a set of coincident correlation functions. Large Nc equivalence has mainly been explored in the glueball and meson sectors. However, a recent proposal to dodge the fermion sign problem of QCD with a quark number chemical potential using large Nc equivalence motivates investigating the applicability of large Nc equivalence to correlation functions involving baryon operators. Here we present evidence that large Nc equivalence extends to the baryon sector, under the same type of symmetry realization assumptions as in the meson sector, by adapting the classic Witten analysis of large Nc baryons.

  5. Adaptive functioning and behaviour problems in relation to level of education in children and adolescents with intellectual disability.

    PubMed

    de Bildt, A; Sytema, S; Kraijer, D; Sparrow, S; Minderaa, R

    2005-09-01

    The interrelationship between adaptive functioning, behaviour problems and level of special education was studied in 186 children with IQs ranging from 61 to 70. The objective was to increase the insight into the contribution of adaptive functioning and general and autistic behaviour problems to the level of education in children with intellectual disability (ID). Children from two levels of special education in the Netherlands were compared with respect to adaptive functioning [Vineland Adaptive Behavior Scales (VABS)], general behaviour problems [Child Behavior Checklist (CBCL)] and autistic behaviour problems [Autism Behavior Checklist (ABC)]. The effect of behaviour problems on adaptive functioning, and the causal relationships between behaviour problems, adaptive functioning and level of education were investigated. Children in schools for mild learning problems had higher VABS scores, and lower CBCL and ABC scores. The ABC had a significant effect on the total age equivalent of the VABS in schools for severe learning problems, the CBCL in schools for mild learning problems. A direct effect of the ABC and CBCL total scores on the VABS age equivalent was found, together with a direct effect of the VABS age equivalent on level of education and therefore an indirect effect of ABC and CBCL on level of education. In the children with the highest level of mild ID, adaptive functioning seems to be the most important factor that directly influences the level of education that a child attends. Autistic and general behaviour problems directly influence the level of adaptive functioning. Especially, autistic problems seem to have such a restrictive effect on the level of adaptive functioning that children do not reach the level of education that would be expected based on IQ. Clinical implications are discussed.

  6. Pursuing sustainable productivity with millions of smallholder farmers.

    PubMed

    Cui, Zhenling; Zhang, Hongyan; Chen, Xinping; Zhang, Chaochun; Ma, Wenqi; Huang, Chengdong; Zhang, Weifeng; Mi, Guohua; Miao, Yuxin; Li, Xiaolin; Gao, Qiang; Yang, Jianchang; Wang, Zhaohui; Ye, Youliang; Guo, Shiwei; Lu, Jianwei; Huang, Jianliang; Lv, Shihua; Sun, Yixiang; Liu, Yuanying; Peng, Xianlong; Ren, Jun; Li, Shiqing; Deng, Xiping; Shi, Xiaojun; Zhang, Qiang; Yang, Zhiping; Tang, Li; Wei, Changzhou; Jia, Liangliang; Zhang, Jiwang; He, Mingrong; Tong, Yanan; Tang, Qiyuan; Zhong, Xuhua; Liu, Zhaohui; Cao, Ning; Kou, Changlin; Ying, Hao; Yin, Yulong; Jiao, Xiaoqiang; Zhang, Qingsong; Fan, Mingsheng; Jiang, Rongfeng; Zhang, Fusuo; Dou, Zhengxia

    2018-03-15

    Sustainably feeding a growing population is a grand challenge, and one that is particularly difficult in regions that are dominated by smallholder farming. Despite local successes, mobilizing vast smallholder communities with science- and evidence-based management practices to simultaneously address production and pollution problems has been infeasible. Here we report the outcome of concerted efforts in engaging millions of Chinese smallholder farmers to adopt enhanced management practices for greater yield and environmental performance. First, we conducted field trials across China's major agroecological zones to develop locally applicable recommendations using a comprehensive decision-support program. Engaging farmers to adopt those recommendations involved the collaboration of a core network of 1,152 researchers with numerous extension agents and agribusiness personnel. From 2005 to 2015, about 20.9 million farmers in 452 counties adopted enhanced management practices in fields with a total of 37.7 million cumulative hectares over the years. Average yields (maize, rice and wheat) increased by 10.8-11.5%, generating a net grain output of 33 million tonnes (Mt). At the same time, application of nitrogen decreased by 14.7-18.1%, saving 1.2 Mt of nitrogen fertilizers. The increased grain output and decreased nitrogen fertilizer use were equivalent to US$12.2 billion. Estimated reactive nitrogen losses averaged 4.5-4.7 kg nitrogen per Megagram (Mg) with the intervention compared to 6.0-6.4 kg nitrogen per Mg without. Greenhouse gas emissions were 328 kg, 812 kg and 434 kg CO2 equivalent per Mg of maize, rice and wheat produced, respectively, compared to 422 kg, 941 kg and 549 kg CO2 equivalent per Mg without the intervention. On the basis of a large-scale survey (8.6 million farmer participants) and scenario analyses, we further demonstrate the potential impacts of implementing the enhanced management practices on China's food security and sustainability outlook.

  7. Pursuing sustainable productivity with millions of smallholder farmers

    NASA Astrophysics Data System (ADS)

    Cui, Zhenling; Zhang, Hongyan; Chen, Xinping; Zhang, Chaochun; Ma, Wenqi; Huang, Chengdong; Zhang, Weifeng; Mi, Guohua; Miao, Yuxin; Li, Xiaolin; Gao, Qiang; Yang, Jianchang; Wang, Zhaohui; Ye, Youliang; Guo, Shiwei; Lu, Jianwei; Huang, Jianliang; Lv, Shihua; Sun, Yixiang; Liu, Yuanying; Peng, Xianlong; Ren, Jun; Li, Shiqing; Deng, Xiping; Shi, Xiaojun; Zhang, Qiang; Yang, Zhiping; Tang, Li; Wei, Changzhou; Jia, Liangliang; Zhang, Jiwang; He, Mingrong; Tong, Yanan; Tang, Qiyuan; Zhong, Xuhua; Liu, Zhaohui; Cao, Ning; Kou, Changlin; Ying, Hao; Yin, Yulong; Jiao, Xiaoqiang; Zhang, Qingsong; Fan, Mingsheng; Jiang, Rongfeng; Zhang, Fusuo; Dou, Zhengxia

    2018-03-01

    Sustainably feeding a growing population is a grand challenge, and one that is particularly difficult in regions that are dominated by smallholder farming. Despite local successes, mobilizing vast smallholder communities with science- and evidence-based management practices to simultaneously address production and pollution problems has been infeasible. Here we report the outcome of concerted efforts in engaging millions of Chinese smallholder farmers to adopt enhanced management practices for greater yield and environmental performance. First, we conducted field trials across China’s major agroecological zones to develop locally applicable recommendations using a comprehensive decision-support program. Engaging farmers to adopt those recommendations involved the collaboration of a core network of 1,152 researchers with numerous extension agents and agribusiness personnel. From 2005 to 2015, about 20.9 million farmers in 452 counties adopted enhanced management practices in fields with a total of 37.7 million cumulative hectares over the years. Average yields (maize, rice and wheat) increased by 10.8–11.5%, generating a net grain output of 33 million tonnes (Mt). At the same time, application of nitrogen decreased by 14.7–18.1%, saving 1.2 Mt of nitrogen fertilizers. The increased grain output and decreased nitrogen fertilizer use were equivalent to US$12.2 billion. Estimated reactive nitrogen losses averaged 4.5–4.7 kg nitrogen per Megagram (Mg) with the intervention compared to 6.0–6.4 kg nitrogen per Mg without. Greenhouse gas emissions were 328 kg, 812 kg and 434 kg CO2 equivalent per Mg of maize, rice and wheat produced, respectively, compared to 422 kg, 941 kg and 549 kg CO2 equivalent per Mg without the intervention. On the basis of a large-scale survey (8.6 million farmer participants) and scenario analyses, we further demonstrate the potential impacts of implementing the enhanced management practices on China’s food security and sustainability outlook.

  8. Spatial Dynamics Methods for Solitary Waves on a Ferrofluid Jet

    NASA Astrophysics Data System (ADS)

    Groves, M. D.; Nilsson, D. V.

    2018-04-01

    This paper presents existence theories for several families of axisymmetric solitary waves on the surface of an otherwise cylindrical ferrofluid jet surrounding a stationary metal rod. The ferrofluid, which is governed by a general (nonlinear) magnetisation law, is subject to an azimuthal magnetic field generated by an electric current flowing along the rod. The ferrohydrodynamic problem for axisymmetric travelling waves is formulated as an infinite-dimensional Hamiltonian system in which the axial direction is the time-like variable. A centre-manifold reduction technique is employed to reduce the system to a locally equivalent Hamiltonian system with a finite number of degrees of freedom, and homoclinic solutions to the reduced system, which correspond to solitary waves, are detected by dynamical-systems methods.

  9. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique using an efficient global optimization process with a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide on additional samples that can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations alone. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
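
    The expected-improvement criterion used to pick additional samples can be sketched generically; this is the standard EI formula for minimization, not the authors' code, and the surrogate predictions below are illustrative values.

```python
from math import erf, sqrt, exp, pi

def expected_improvement(mu, sigma, f_min):
    """Expected improvement at a point where the surrogate predicts
    mean `mu` and standard deviation `sigma`; `f_min` is the best
    objective value observed so far (minimization)."""
    if sigma <= 0.0:
        return 0.0  # no model uncertainty -> no expected improvement
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal PDF
    return (f_min - mu) * Phi + sigma * phi

# Candidate points are ranked by EI; the maximizer is evaluated next.
candidates = [(0.9, 0.05), (1.2, 0.60), (1.0, 0.20)]  # (mu, sigma) pairs
best = max(range(3), key=lambda i: expected_improvement(*candidates[i], 1.0))
```

    Note how EI balances exploitation (low predicted mean) against exploration (high predicted uncertainty): the second candidate wins despite its worse mean because its uncertainty is large.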

  10. Computationally Efficient Nonlinear Bell Inequalities for Quantum Networks

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing

    2018-04-01

    The correlations in quantum networks have attracted strong interest with new types of locality violations. The standard Bell inequalities cannot characterize the multipartite correlations that are generated by multiple sources. The main problem is that no computationally efficient method is available for constructing useful Bell inequalities for general quantum networks. In this work, we show a significant improvement by presenting new, explicit Bell-type inequalities for general networks, including cyclic networks. These nonlinear inequalities are related to the matching problem of an equivalent unweighted bipartite graph, which admits a polynomial-time algorithm. For the quantum resources consisting of bipartite entangled pure states and generalized Greenberger-Horne-Zeilinger (GHZ) states, we prove the generic nonmultilocality of quantum networks with multiple independent observers using new Bell inequalities. The violations are maximal with respect to the presented Tsirelson's bound for Einstein-Podolsky-Rosen states and GHZ states. Moreover, these violations hold for Werner states or some general noisy states. Our results suggest that the presented Bell inequalities can be used to characterize experimental quantum networks.
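
    The polynomial-time matching step the abstract alludes to can be illustrated with a standard augmenting-path algorithm for maximum matching in an unweighted bipartite graph; this is a generic sketch, and the toy adjacency list below is illustrative rather than a graph derived from an actual quantum network.

```python
def max_bipartite_matching(adj, n_right):
    """Maximum matching via augmenting paths (Kuhn's algorithm).
    `adj[u]` lists the right-side vertices adjacent to left vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be re-matched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# Toy graph: 3 left vertices, 3 right vertices; a perfect matching exists.
adj = [[0, 1], [0], [1, 2]]
size = max_bipartite_matching(adj, 3)
```

    Each augmentation runs in O(E), so the whole matching is found in O(V·E) time, consistent with the polynomial-time claim.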

  11. Preconditioned implicit solvers for the Navier-Stokes equations on distributed-memory machines

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Liou, Meng-Sing; Dyson, Rodger W.

    1994-01-01

    The GMRES method is parallelized, and combined with local preconditioning to construct an implicit parallel solver to obtain steady-state solutions for the Navier-Stokes equations of fluid flow on distributed-memory machines. The new implicit parallel solver is designed to preserve the convergence rate of the equivalent 'serial' solver. A static domain-decomposition is used to partition the computational domain amongst the available processing nodes of the parallel machine. The SPMD (Single-Program Multiple-Data) programming model is combined with message-passing tools to develop the parallel code on a 32-node Intel Hypercube and a 512-node Intel Delta machine. The implicit parallel solver is validated for internal and external flow problems, and its solutions are found to be identical to those obtained on a Cray Y-MP/8. A peak computational speed of 2300 MFlops/sec has been achieved on 512 nodes of the Intel Delta machine, for a problem size of 1024 K equations (256 K grid points).
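
    The combination of GMRES with a local (per-subdomain) preconditioner can be sketched in serial form: a block-Jacobi preconditioner inverts one diagonal block per "processor", which is what a static domain decomposition gives each node to do locally. This is a minimal sketch with an illustrative 1-D convection-diffusion matrix, not the NASA solver.

```python
import numpy as np

def gmres(A, b, M_inv, m=60, tol=1e-10):
    """Minimal unrestarted GMRES for M^{-1} A x = M^{-1} b (left preconditioning)."""
    r0 = M_inv(b)
    beta = np.linalg.norm(r0)
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / beta
    for j in range(m):
        w = M_inv(A @ Q[:, j])
        for i in range(j + 1):                       # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        if (np.linalg.norm(H[:j + 2, :j + 1] @ y - e1) < tol * beta
                or H[j + 1, j] < 1e-14):
            break
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :len(y)] @ y

# 1-D convection-diffusion matrix; a 2-block Jacobi ("local") preconditioner
# inverts one diagonal block per simulated processing node.
n = 40
A = (np.diag(2.2 * np.ones(n)) + np.diag(-1.2 * np.ones(n - 1), -1)
     + np.diag(-1.0 * np.ones(n - 1), 1))
blocks = [slice(0, n // 2), slice(n // 2, n)]
inv_blocks = [np.linalg.inv(A[s, s]) for s in blocks]

def M_inv(v):
    out = np.empty_like(v)
    for s, Binv in zip(blocks, inv_blocks):
        out[s] = Binv @ v[s]                          # purely local solve
    return out

b = np.ones(n)
x = gmres(A, b, M_inv)
```

    Because each block solve touches only local data, the preconditioner needs no inter-node communication; only the matrix-vector product and the inner products do.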

  12. Invariant graphs of a family of non-uniformly expanding skew products over Markov maps

    NASA Astrophysics Data System (ADS)

    Walkden, C. P.; Withers, T.

    2018-06-01

    We consider a family of skew-products of the form F(x, y) = (T(x), g_x(y)), where T is a continuous, expanding, locally eventually onto Markov map and (g_x) is a family of homeomorphisms of the fibre. A function u is said to be an invariant graph if its graph {(x, u(x))} is an invariant set for the skew-product; equivalently, u(T(x))  =  g x (u(x)). A well-studied problem is to consider the existence, regularity and dimension-theoretic properties of such functions, usually under strong contraction or expansion conditions (in terms of Lyapunov exponents or partial hyperbolicity) in the fibre direction. Here we consider such problems in a setting where the Lyapunov exponent in the fibre direction is zero on a set of periodic orbits but expands except on a neighbourhood of these periodic orbits. We prove that u either has the structure of a ‘quasi-graph’ (or ‘bony graph’) or is as smooth as the dynamics, and we give a criterion for when each case occurs.
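
    A concrete instance of the invariance equation u(T(x)) = g_x(u(x)) can be checked numerically. Taking T to be the doubling map and g_x(y) = (y - cos 2πx)/λ (both choices purely illustrative, not the family studied in the paper), the invariant graph is a Weierstrass-type function given by a geometric series:

```python
import math

LAM = 0.3   # fibre parameter, |LAM| < 1, chosen for illustration
N = 25      # truncation order of the series

def T(x):
    """Doubling map on the circle."""
    return (2.0 * x) % 1.0

def g(x, y):
    """Fibre map of the skew-product (x, y) -> (T(x), g(x, y))."""
    return (y - math.cos(2.0 * math.pi * x)) / LAM

def u(x):
    """Truncated Weierstrass-type series: the invariant graph."""
    return sum(LAM ** n * math.cos(2.0 * math.pi * 2 ** n * x)
               for n in range(N))

# Invariance u(T(x)) = g(x, u(x)) holds up to truncation error O(LAM**(N-1)).
x0 = 0.371
err = abs(u(T(x0)) - g(x0, u(x0)))
```

    For |λ| < 1 the series converges uniformly but, as for the classical Weierstrass function, the limit graph is typically continuous and nowhere differentiable, which is the kind of regularity question the abstract describes.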

  13. Surface-admittance equivalence principle for nonradiating and cloaking problems

    NASA Astrophysics Data System (ADS)

    Labate, Giuseppe; Alù, Andrea; Matekovits, Ladislau

    2017-06-01

    In this paper, we address nonradiating and cloaking problems exploiting the surface equivalence principle, by imposing at any arbitrary boundary the control of the admittance discontinuity between the overall object (with or without cloak) and the background. After a rigorous demonstration, we apply this model to a nonradiating problem, relevant for anapole modes and metamolecule modeling, and to a cloaking problem, relevant for non-Foster metasurface design. A straightforward analytical condition is obtained for controlling the scattering of a dielectric object over a surface boundary of interest. Previous quasistatic results are confirmed and a general closed-form solution beyond the subwavelength regime is provided. In addition, this formulation can be extended to other wave phenomena (thermal, acoustic, elastomechanical, etc.) once the proper admittance function is defined.

  14. Markov-chain model of classified atomistic transition states for discrete kinetic Monte Carlo simulations.

    PubMed

    Numazawa, Satoshi; Smith, Roger

    2011-10-01

    Classical harmonic transition state theory is considered and applied in discrete lattice cells with hierarchical transition levels. The scheme is then used to determine transitions that can be applied in a lattice-based kinetic Monte Carlo (KMC) atomistic simulation model. The model results in an effective reduction of KMC simulation steps by utilizing a classification scheme of transition levels for thermally activated atomistic diffusion processes. Thermally activated atomistic movements are considered as local transition events constrained in potential energy wells over certain local time periods. These processes are represented by Markov chains of multidimensional Boolean valued functions in three-dimensional lattice space. The events inhibited by the barriers under a certain level are regarded as thermal fluctuations of the canonical ensemble and accepted freely. Consequently, the fluctuating system evolution process is implemented as a Markov chain of equivalence class objects. It is shown that the process can be characterized by the acceptance of metastable local transitions. The method is applied to a problem of Au and Ag cluster growth on a rippled surface. The simulation predicts the existence of a morphology-dependent transition time limit from a local metastable to stable state for subsequent cluster growth by accretion. Excellent agreement with observed experimental results is obtained.
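
    The rejection-free event selection that underlies lattice KMC schemes of this kind can be sketched generically (the BKL-style step below, with an illustrative rate table, is a textbook sketch and not the paper's classified transition levels):

```python
import random
import math

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo step: pick an event with
    probability proportional to its rate, and draw an exponentially
    distributed waiting time. Returns (event_index, time_increment)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            break
    dt = -math.log(rng.random()) / total   # residence time of the state
    return i, dt

rng = random.Random(0)
rates = [5.0, 1.0, 0.1]     # e.g. hop rates k = nu * exp(-E / kT), illustrative
counts = [0, 0, 0]
t = 0.0
for _ in range(20000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
```

    Classifying low-barrier transitions as freely accepted fluctuations, as the abstract describes, amounts to removing them from the rate table so each KMC step advances the clock by a much larger dt.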

  15. Dark matter and the equivalence principle

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gradwohl, Ben-Ami

    1993-01-01

    A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent test-methods for the equivalence principle are presently discussed as bases for testing of dark matter scenarios involving the long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.

  16. Generalization of Equivalent Crystal Theory to Include Angular Dependence

    NASA Technical Reports Server (NTRS)

    Ferrante, John; Zypman, Fredy R.

    2004-01-01

    In the original Equivalent Crystal Theory, each atomic site in the real crystal is assigned an equivalent lattice constant, in general different from the ground state one. This parameter corresponds to a local compression or expansion of the lattice. The basic method considers these volumetric transformations and, in addition, introduces the possibility that the reference lattice is anisotropically distorted. These distortions however, were introduced ad-hoc. In this work, we generalize the original Equivalent Crystal Theory by systematically introducing site-dependent directional distortions of the lattice, whose corresponding distortions account for the dependence of the energy on anisotropic local density variations. This is done in the spirit of the original framework, but including a gradient term in the density. This approach is introduced to correct a deficiency in the original Equivalent Crystal Theory and other semiempirical methods in quantitatively obtaining the correct ratios of the surface energies of low index planes of cubic metals (100), (110), and (111). We develop here the basic framework, and apply it to the calculation of Fe (110) and Fe (111) surface energy formation. The results, compared with first principles calculations, show an improvement over previous semiempirical approaches.

  17. 34 CFR 489.5 - What definitions apply?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., DEPARTMENT OF EDUCATION FUNCTIONAL LITERACY FOR STATE AND LOCAL PRISONERS PROGRAM General § 489.5 What...— Functional literacy means at least an eighth grade equivalence, or a functional criterion score, on a nationally recognized literacy assessment. Local correctional agency means any agency of local government...

  18. Boundary conditions at the gas sectors of superhydrophobic grooves

    NASA Astrophysics Data System (ADS)

    Dubov, Alexander L.; Nizkaya, Tatiana V.; Asmolov, Evgeny S.; Vinogradova, Olga I.

    2018-01-01

    The hydrodynamics of liquid flowing past gas sectors of unidirectional superhydrophobic surfaces is revisited. Attention is focused on the local slip boundary condition at the liquid-gas interface, which is equivalent to the effect of a gas cavity on liquid flow. The system is characterized by a large viscosity contrast between liquid and gas, μ/μg ≫ 1. We interpret earlier results, namely, the dependence of the local slip length on the flow direction, in terms of a tensorial local slip boundary condition, and relate the eigenvalues of the local slip tensor to the texture parameters, such as the width of the groove δ and the local depth of the groove e(y, α). The latter varies in the direction y, orthogonal to the orientation of stripes, and depends on the bevel angle of the groove's edges, π/2 − α, at the point where three phases meet. Our theory demonstrates that when grooves are sufficiently deep the eigenvalues of the local slip length tensor depend only on μ/μg, δ, and α, but not on the depth. The eigenvalues of the local slip length of shallow grooves depend on μ/μg and e(y, α), although the contribution of the bevel angle is moderate. In order to assess the validity of our theory we propose an approach to solve the two-phase hydrodynamic problem, which significantly facilitates and accelerates calculations compared to conventional numerical schemes. The numerical results show that our simple analytical description obtained for limiting cases of deep and shallow grooves remains valid for various unidirectional textures.
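
    The tensorial slip condition can be illustrated by rotating a diagonal slip-length tensor: for shear flow at angle θ to the stripes, the effective downstream slip interpolates between the eigenvalues b∥ and b⊥ as b∥cos²θ + b⊥sin²θ. The eigenvalues below are illustrative numbers, not values computed from the paper's theory.

```python
import numpy as np

def slip_tensor(b_par, b_perp, theta):
    """Local slip tensor in lab coordinates for stripes rotated by theta."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ np.diag([b_par, b_perp]) @ R.T

def downstream_slip(b_par, b_perp, theta):
    """Effective slip length felt by a shear flow along the x axis."""
    return slip_tensor(b_par, b_perp, theta)[0, 0]

# Illustrative eigenvalues, in units of the groove width.
b_par, b_perp = 0.5, 0.2
```

    The off-diagonal entries of the rotated tensor encode the transverse drift: flow at an oblique angle to the stripes slips partly sideways, which is why the slip length observed in experiments depends on flow direction.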

  19. Shock front distortion and Richtmyer-Meshkov-type growth caused by a small preshock nonuniformity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Velikovich, A. L.; Wouchuk, J. G.; Huete Ruiz de Lira, C.

    The response of a shock front to small preshock nonuniformities of density, pressure, and velocity is studied theoretically and numerically. These preshock nonuniformities emulate imperfections of a laser target, due either to its manufacturing, like joints or feeding tubes, or to preshock perturbation seeding/growth, as well as density fluctuations in foam targets, "thermal layers" near heated surfaces, etc. Similarly to the shock-wave interaction with a small nonuniformity localized at a material interface, which triggers a classical Richtmyer-Meshkov (RM) instability, interaction of a shock wave with periodic or localized preshock perturbations distributed in the volume distorts the shape of the shock front and can cause RM-type instability growth. Explicit asymptotic formulas describing distortion of the shock front and the rate of RM-type growth are presented. These formulas are favorably compared both to the exact solutions of the corresponding initial-boundary-value problem and to numerical simulations. It is demonstrated that a small density modulation localized sufficiently close to a flat target surface produces the same perturbation growth as an "equivalent" ripple on the surface of a uniform target, characterized by the same initial areal mass modulation amplitude.

  20. On structural identifiability analysis of the cascaded linear dynamic systems in isotopically non-stationary 13C labelling experiments.

    PubMed

    Lin, Weilu; Wang, Zejian; Huang, Mingzhi; Zhuang, Yingping; Zhang, Siliang

    2018-06-01

    The isotopically non-stationary 13C labelling experiments, as an emerging experimental technique, can estimate the intracellular fluxes of the cell culture under an isotopic transient period. However, to the best of our knowledge, the issue of the structural identifiability analysis of non-stationary isotope experiments is not well addressed in the literature. In this work, the local structural identifiability analysis for non-stationary cumomer balance equations is conducted based on the Taylor series approach. The numerical rank of the Jacobian matrices of the finite extended time derivatives of the measured fractions with respect to the free parameters is taken as the criterion. It turns out that only one single time point is necessary to achieve the structural identifiability analysis of the cascaded linear dynamic system of non-stationary isotope experiments. The equivalence between the local structural identifiability of the cascaded linear dynamic systems and the local optimum condition of the nonlinear least squares problem is elucidated in the work. Optimal measurements sets can then be determined for the metabolic network. Two simulated metabolic networks are adopted to demonstrate the utility of the proposed method. Copyright © 2018 Elsevier Inc. All rights reserved.
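
    The rank test described can be sketched on a toy two-pool cascade: stack the time derivatives of the measured output with respect to time (the Taylor coefficients at one time point), differentiate them with respect to the free parameters, and check the numerical rank of the resulting Jacobian. The toy model, step size, and tolerance below are assumptions for illustration, not the paper's cumomer system.

```python
import numpy as np

def output_derivatives(p, x0=(1.0, 0.0)):
    """Taylor coefficients (y, y', y'') at t = 0 of the measured pool
    y = x2 for the toy cascade dx1/dt = -p1*x1, dx2/dt = p1*x1 - p2*x2."""
    p1, p2 = p
    x1, x2 = x0
    y0 = x2
    y1 = p1 * x1 - p2 * x2
    y2 = -p1**2 * x1 - p2 * (p1 * x1 - p2 * x2)
    return np.array([y0, y1, y2])

def jacobian(f, p, h=1e-6):
    """Central finite-difference Jacobian of f with respect to p."""
    p = np.asarray(p, dtype=float)
    cols = []
    for k in range(len(p)):
        dp = np.zeros_like(p)
        dp[k] = h
        cols.append((f(p + dp) - f(p - dp)) / (2 * h))
    return np.column_stack(cols)

J = jacobian(output_derivatives, [0.7, 0.3])
# Full column rank (= number of free parameters) means the parameters are
# locally structurally identifiable from this single time point.
rank = np.linalg.matrix_rank(J, tol=1e-8)
```

    The same Jacobian is what a nonlinear least-squares fit differentiates at its optimum, which is the equivalence between identifiability and the local optimality condition that the abstract points out.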

  1. Intrinsic space charge layers and field enhancement in ferroelectric nanojunctions

    DOE PAGES

    Cao, Ye; Ievlev, Anton V.; Morozovska, Anna N.; ...

    2015-07-13

    The conducting characteristics of topological defects, such as charged domain walls, in ferroelectric materials have engendered broad interest and extensive study, both on their scientific merit and for the possibility of novel applications utilizing domain engineering. At the same time, the problem of electron transport in ferroelectrics themselves still remains full of unanswered questions, and becomes still more relevant with the impending revival of interest in ferroelectric semiconductors and new improper ferroelectric materials. We have employed self-consistent phase-field modeling to investigate the physical properties of a local metal-ferroelectric (Pb(Zr0.2Ti0.8)O3) junction in an applied electric field. We revealed an up to 10-fold local field enhancement, realized by large polarization gradient and over-polarization effects, once the inherent non-linear dielectric properties of PZT are considered. The effect is independent of bias polarity and maintains its strength prior to, during and after ferroelectric switching. The local field enhancement can be considered equivalent to an increase of the doping level, which will give rise to a reduction of the switching bias and significantly smaller voltages for charge and electron injection and for electrochemical and photoelectrochemical processes.

  2. What's the Problem? Familiarity, Working Memory, and Transfer in a Problem-Solving Task.

    PubMed

    Kole, James A; Snyder, Hannah R; Brojde, Chandra L; Friend, Angela

    2015-01-01

    The contributions of familiarity and working memory to transfer were examined in the Tower of Hanoi task. Participants completed 3 different versions of the task: a standard 3-disk version, a clothing exchange task that included familiar semantic content, and a tea ceremony task that included unfamiliar semantic content. The constraints on moves were equivalent across tasks, and each could be solved with the same sequence of movements. Working memory demands were manipulated by the provision of a (static or dynamic) visual representation of the problem. Performance was equivalent for the standard Tower of Hanoi and clothing exchange tasks but worse for the tea ceremony task, and it decreased with increasing working memory demands. Furthermore, the standard Tower of Hanoi task and clothing exchange tasks independently, additively, and equivalently transferred to subsequent tasks, whereas the tea ceremony task did not. The results suggest that both familiarity and working memory demands determine overall level of performance, whereas familiarity influences transfer.
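
    The claim that the three cover stories share one optimal move sequence can be illustrated with the standard Tower of Hanoi recursion: relabeling the pegs (as clothing racks or tea vessels) permutes the names but leaves the sequence of moves, and its length 2^n - 1, unchanged.

```python
def hanoi(n, src, dst, aux):
    """Return the optimal move sequence for n disks from src to dst."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, aux, dst)    # park n-1 disks on the spare peg
            + [(src, dst)]                 # move the largest disk
            + hanoi(n - 1, aux, dst, src)) # restack n-1 disks on top of it

moves = hanoi(3, "A", "C", "B")            # 2**3 - 1 = 7 moves

# An isomorphic cover story is just a relabeling of the same sequence.
relabel = {"A": "rack-1", "B": "rack-2", "C": "rack-3"}
clothing_moves = [(relabel[s], relabel[d]) for s, d in moves]
```

    The structural isomorphism is exactly this relabeling; what differed across the study's conditions was the familiarity of the labels and the working-memory support, not the underlying problem.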

  3. Non-ambiguous recovery of Biot poroelastic parameters of cellular panels using ultrasonic waves

    NASA Astrophysics Data System (ADS)

    Ogam, Erick; Fellah, Z. E. A.; Sebaa, Naima; Groby, J.-P.

    2011-03-01

    The inverse problem of the recovery of the poroelastic parameters of open-cell soft plastic foam panels is solved by employing transmitted ultrasonic waves (USW) and the Biot-Johnson-Koplik-Champoux-Allard (BJKCA) model. It is shown by constructing the objective functional given by the total square of the difference between predictions from the BJKCA interaction model and experimental data obtained with transmitted USW that the inverse problem is ill-posed, since the functional exhibits several local minima and maxima. In order to solve this problem, which is beyond the capability of most off-the-shelf iterative nonlinear least squares optimization algorithms (such as the Levenberg-Marquardt or Nelder-Mead simplex methods), simple strategies are developed. The recovered acoustic parameters are compared with those obtained using simpler interaction models and a method employing asymptotic phase velocity of the transmitted USW. The retrieved elastic moduli are validated by solving an inverse vibration spectroscopy problem with data obtained from beam-like specimens cut from the panels using an equivalent solid elastodynamic model as estimator. The phase velocities are reconstructed using computed, measured resonance frequencies and a time-frequency decomposition of transient waves induced in the beam specimen. These confirm that the elastic parameters recovered using vibration are valid over the frequency range of study.
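
    One simple strategy against such local minima, multi-start local search, can be sketched as follows. The double-well objective here is a stand-in for the BJKCA misfit functional, chosen only so the local/global minimum structure is visible; it is not the actual model.

```python
import numpy as np
from scipy.optimize import minimize

def misfit(x):
    """Toy objective with two local minima; the global one is near x = -1."""
    x = np.atleast_1d(x)[0]
    return (x**2 - 1.0)**2 + 0.3 * x

# Multi-start: run a local optimizer from a coarse grid of initial guesses
# and keep the best result, instead of trusting a single descent.
starts = np.linspace(-2.0, 2.0, 9)
results = [minimize(misfit, [x0], method="Nelder-Mead") for x0 in starts]
best = min(results, key=lambda r: r.fun)
```

    A single Nelder-Mead run started near x = +1 would settle into the shallower well at x ≈ 1; the grid of starts guards against exactly that failure mode.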

  4. A coupled-mode model for the hydroelastic analysis of large floating bodies over variable bathymetry regions

    NASA Astrophysics Data System (ADS)

    Belibassakis, K. A.; Athanassoulis, G. A.

    2005-05-01

    The consistent coupled-mode theory (Athanassoulis & Belibassakis, J. Fluid Mech. vol. 389, 1999, p. 275) is extended and applied to the hydroelastic analysis of large floating bodies of shallow draught or ice sheets of small and uniform thickness, lying over variable bathymetry regions. A parallel-contour bathymetry is assumed, characterized by a continuous depth function of the form h(x, y) = h(x), attaining constant, but possibly different, values in the semi-infinite regions x < a and x > b. We consider the scattering problem of harmonic, obliquely incident, surface waves, under the combined effects of variable bathymetry and a floating elastic plate, extending from x = a to x = b and -∞ < y < ∞. Under the assumption of small-amplitude incident waves and small plate deflections, the hydroelastic problem is formulated within the context of linearized water-wave and thin-elastic-plate theory. The problem is reformulated as a transition problem in a bounded domain, for which an equivalent, Luke-type (unconstrained), variational principle is given. In order to consistently treat the wave field beneath the elastic floating plate, down to the sloping bottom boundary, a complete, local, hydroelastic-mode series expansion of the wave field is used, enhanced by an appropriate sloping-bottom mode. The latter enables the consistent satisfaction of the Neumann bottom-boundary condition on a general topography. By introducing this expansion into the variational principle, an equivalent coupled-mode system of horizontal equations in the plate region (a ≤ x ≤ b) is derived. Boundary conditions are also provided by the variational principle, ensuring the complete matching of the wave field at the vertical interfaces (x = a and x = b), and the requirements that the edges of the plate are free of moment and shear force.
Numerical results concerning floating structures lying over flat, shoaling and corrugated seabeds are presented and compared, and the effects of wave direction, bottom slope and bottom corrugations on the hydroelastic response are presented and discussed. The present method can be easily extended to the fully three-dimensional hydroelastic problem, including bodies or structures characterized by variable thickness (draught), flexural rigidity and mass distributions.

  5. 14 CFR 171.263 - Localizer automatic monitor system.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave Landing... provide an automatic monitor system that transmits a warning to designated local and remote control points... centerline equivalent to more than 0.015 DDM at the ISMLS reference datum. (2) For localizers in which the...

  6. 14 CFR 171.263 - Localizer automatic monitor system.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave Landing... provide an automatic monitor system that transmits a warning to designated local and remote control points... centerline equivalent to more than 0.015 DDM at the ISMLS reference datum. (2) For localizers in which the...

  7. 14 CFR 171.263 - Localizer automatic monitor system.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave Landing... provide an automatic monitor system that transmits a warning to designated local and remote control points... centerline equivalent to more than 0.015 DDM at the ISMLS reference datum. (2) For localizers in which the...

  8. 14 CFR 171.263 - Localizer automatic monitor system.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave Landing... provide an automatic monitor system that transmits a warning to designated local and remote control points... centerline equivalent to more than 0.015 DDM at the ISMLS reference datum. (2) For localizers in which the...

  9. 14 CFR 171.263 - Localizer automatic monitor system.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... (CONTINUED) NAVIGATIONAL FACILITIES NON-FEDERAL NAVIGATION FACILITIES Interim Standard Microwave Landing... provide an automatic monitor system that transmits a warning to designated local and remote control points... centerline equivalent to more than 0.015 DDM at the ISMLS reference datum. (2) For localizers in which the...

  10. 24 CFR 115.201 - The two phases of substantial equivalency certification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... AND URBAN DEVELOPMENT FAIR HOUSING CERTIFICATION AND FUNDING OF STATE AND LOCAL FAIR HOUSING... 24 Housing and Urban Development 1 2010-04-01 2010-04-01 false The two phases of substantial equivalency certification. 115.201 Section 115.201 Housing and Urban Development Regulations Relating to...

  11. 24 CFR 115.201 - The two phases of substantial equivalency certification.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... AND URBAN DEVELOPMENT FAIR HOUSING CERTIFICATION AND FUNDING OF STATE AND LOCAL FAIR HOUSING... 24 Housing and Urban Development 1 2014-04-01 2014-04-01 false The two phases of substantial equivalency certification. 115.201 Section 115.201 Housing and Urban Development Regulations Relating to...

  12. 24 CFR 115.201 - The two phases of substantial equivalency certification.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... AND URBAN DEVELOPMENT FAIR HOUSING CERTIFICATION AND FUNDING OF STATE AND LOCAL FAIR HOUSING... 24 Housing and Urban Development 1 2013-04-01 2013-04-01 false The two phases of substantial equivalency certification. 115.201 Section 115.201 Housing and Urban Development Regulations Relating to...

  13. 24 CFR 115.201 - The two phases of substantial equivalency certification.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... AND URBAN DEVELOPMENT FAIR HOUSING CERTIFICATION AND FUNDING OF STATE AND LOCAL FAIR HOUSING... 24 Housing and Urban Development 1 2012-04-01 2012-04-01 false The two phases of substantial equivalency certification. 115.201 Section 115.201 Housing and Urban Development Regulations Relating to...

  14. 24 CFR 115.201 - The two phases of substantial equivalency certification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... AND URBAN DEVELOPMENT FAIR HOUSING CERTIFICATION AND FUNDING OF STATE AND LOCAL FAIR HOUSING... 24 Housing and Urban Development 1 2011-04-01 2011-04-01 false The two phases of substantial equivalency certification. 115.201 Section 115.201 Housing and Urban Development Regulations Relating to...

  15. Schooling and Bilingualization in a Highland Guatemalan Community.

    ERIC Educational Resources Information Center

    Richards, Julia Becker

    To examine the process of language shift (bilingualization) in an area where there is a local dialect equivalent to a "language of solidarity" and a national language equivalent to a "language of power," language interactions in the impoverished village of San Marcos in the highlands of Guatemala were examined. Although Spanish…

  16. The Interference of Stereotype Threat with Women's Generation of Mathematical Problem-Solving Strategies.

    ERIC Educational Resources Information Center

    Quinn, Diane M.; Spencer, Steven J.

    2001-01-01

    Investigated whether stereotype threat would depress college women's math performance. In one test, men outperformed women when solving word problems, though women performed equally when problems were converted into numerical equivalents. In another test, participants solved difficult problems in high or reduced stereotype threat conditions. Women…

  17. Closed-Loop Control and Advisory Mode Evaluation of an Artificial Pancreatic β Cell: Use of Proportional–Integral–Derivative Equivalent Model-Based Controllers

    PubMed Central

    Percival, Matthew W.; Zisser, Howard; Jovanovič, Lois; Doyle, Francis J.

    2008-01-01

    Background Using currently available technology, it is possible to apply modern control theory to produce a closed-loop artificial β cell. Novel use of established control techniques would improve glycemic control, thereby reducing the complications of diabetes. Two popular controller structures, proportional–integral–derivative (PID) and model predictive control (MPC), are compared first in a theoretical sense and then in two applications. Methods The Bergman model is transformed for use in a PID equivalent model-based controller. The internal model control (IMC) structure, which makes explicit use of the model, is compared with the PID controller structure in the transfer function domain. An MPC controller is then developed as an optimization problem with restrictions on its tuning parameters and is shown to be equivalent to an IMC controller. The controllers are tuned for equivalent performance and evaluated in a simulation study as a closed-loop controller and in an advisory mode scenario on retrospective clinical data. Results Theoretical development shows conditions under which PID and MPC controllers produce equivalent output via IMC. The simulation study showed that the single tuning parameter for the equivalent controllers relates directly to the closed-loop speed of response and robustness, an important result considering system uncertainty. The risk metric allowed easy identification of instances of inadequate control. Results of the advisory mode simulation showed that suitable tuning produces consistently appropriate delivery recommendations. Conclusion The conditions under which PID and MPC are equivalent have been derived. The MPC framework is more suitable given the extensions necessary for a fully closed-loop artificial β cell, such as consideration of controller constraints. Formulation of the control problem in risk space is attractive, as it explicitly addresses the asymmetry of the problem; this is done easily with MPC. PMID:19885240
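
    The PID side of the comparison can be sketched with a minimal discrete positional-form controller closed around a first-order plant. The gains and plant constants below are illustrative numbers for a generic loop, not clinically tuned insulin-delivery parameters.

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One update of a positional-form PID controller.
    `state` carries (integral, previous_error) between calls."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Close the loop around a first-order plant dy/dt = (-y + u) / tau,
# integrated with forward Euler.
kp, ki, kd, dt, tau = 2.0, 1.0, 0.1, 0.05, 1.0
setpoint, y, state = 1.0, 0.0, (0.0, 0.0)
for _ in range(400):                        # 20 s of simulated time
    u, state = pid_step(setpoint - y, state, kp, ki, kd, dt)
    y += dt * (-y + u) / tau
```

    The integral term is what removes the steady-state offset here; an MPC or IMC controller achieves the same effect through its internal model, which is the structural equivalence the article develops.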

  18. Flow in horizontally anisotropic multilayered aquifer systems with leaky wells and aquitards

    EPA Science Inventory

    Flow problems in an anisotropic domain can be transformed into ones in an equivalent isotropic domain by coordinate transformations. Once analytical solutions are obtained for the equivalent isotropic domain, they can be back transformed to the original anisotropic domain. The ex...
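    The coordinate-transformation idea can be checked numerically. The stretching below is the standard textbook scaling for a 2-D horizontally anisotropic conductivity (assumed here, not quoted from the EPA record), under which the anisotropic flow equation reduces to Laplace's equation with an equivalent conductivity Ke = sqrt(Kx*Ky).

```python
import math

# Sketch of the anisotropic -> isotropic coordinate transformation: the flow
# equation Kx*h_xx + Ky*h_yy = 0 becomes Laplace's equation in stretched
# coordinates X = x*sqrt(Ke/Kx), Y = y*sqrt(Ke/Ky). Kx, Ky and the head
# field h are illustrative values, not from the record.

Kx, Ky = 4.0, 1.0
Ke = math.sqrt(Kx * Ky)            # equivalent isotropic conductivity

def h(x, y):
    # Head field built from a function that is harmonic in (X, Y).
    X = x * math.sqrt(Ke / Kx)
    Y = y * math.sqrt(Ke / Ky)
    return X * X - Y * Y

# Verify Kx*h_xx + Ky*h_yy ~ 0 by central finite differences.
eps, x0, y0 = 1e-4, 0.7, -1.3
h_xx = (h(x0 + eps, y0) - 2 * h(x0, y0) + h(x0 - eps, y0)) / eps**2
h_yy = (h(x0, y0 + eps) - 2 * h(x0, y0) + h(x0, y0 - eps)) / eps**2
residual = Kx * h_xx + Ky * h_yy   # ~ 0: the anisotropic equation is satisfied
```

    Solving in the stretched (isotropic) domain and back-transforming is exactly the strategy the record describes.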

  19. Dimensional discontinuity in quantum communication complexity at dimension seven

    NASA Astrophysics Data System (ADS)

    Tavakoli, Armin; Pawłowski, Marcin; Żukowski, Marek; Bourennane, Mohamed

    2017-02-01

    Entanglement-assisted classical communication and transmission of a quantum system are the two quantum resources for information processing. Many information tasks can be performed using either quantum resource. However, this equivalence is not always present since entanglement-assisted classical communication is sometimes known to be the better performing resource. Here, we show not only the opposite phenomenon, that there exist tasks for which transmission of a quantum system is a more powerful resource than entanglement-assisted classical communication, but also that such phenomena can have a surprisingly strong dependence on the dimension of Hilbert space. We introduce a family of communication complexity problems parametrized by the dimension of Hilbert space and study the performance of each quantum resource. Under an additional assumption of a linear strategy for the receiving party, we find that for low dimensions the two resources perform equally well, whereas for dimension seven and above the equivalence is suddenly broken and transmission of a quantum system becomes more powerful than entanglement-assisted classical communication. Moreover, we find that transmission of a quantum system may even outperform classical communication assisted by the stronger-than-quantum correlations obtained from the principle of macroscopic locality.

  20. C III] Emission in Star-forming Galaxies Near and Far

    NASA Astrophysics Data System (ADS)

    Rigby, J. R.; Bayliss, M. B.; Gladders, M. D.; Sharon, K.; Wuyts, E.; Dahle, H.; Johnson, T.; Peña-Guerrero, M.

    2015-11-01

    We measure [C iii] 1907, C iii] 1909 Å emission lines in 11 gravitationally lensed star-forming galaxies at z ˜ 1.6-3, finding much lower equivalent widths than previously reported for fainter lensed galaxies. While it is not yet clear what causes some galaxies to be strong C iii] emitters, C iii] emission is not a universal property of distant star-forming galaxies. We also examine C iii] emission in 46 star-forming galaxies in the local universe, using archival spectra from GHRS, FOS, and STIS on HST and IUE. Twenty percent of these local galaxies show strong C iii] emission, with equivalent widths < -5 Å. Three nearby galaxies show C iii] emission equivalent widths as large as the most extreme emitters yet observed in the distant universe; all three are Wolf-Rayet galaxies. At all redshifts, strong C iii] emission may pick out low-metallicity galaxies experiencing intense bursts of star formation. Such local C iii] emitters may shed light on the conditions of star formation in certain extreme high-redshift galaxies.

  1. C III] Emission in Star-Forming Galaxies Near and Far

    NASA Technical Reports Server (NTRS)

    Rigby, J. R.; Bayliss, M. B.; Gladders, M. D.; Sharon, K.; Wuyts, E.; Dahle, H.; Johnson, T.; Pena-Guerrero, M.

    2015-01-01

    We measure C III] λλ1907, 1909 Å emission lines in eleven gravitationally lensed star-forming galaxies at z ≈ 1.6-3, finding much lower equivalent widths than previously reported for fainter lensed galaxies (Stark et al. 2014). While it is not yet clear what causes some galaxies to be strong C III] emitters, C III] emission is not a universal property of distant star-forming galaxies. We also examine C III] emission in 46 star-forming galaxies in the local universe, using archival spectra from GHRS, FOS, and STIS on HST and IUE. Twenty percent of these local galaxies show strong C III] emission, with equivalent widths less than -5 Å. Three nearby galaxies show C III] emission equivalent widths as large as the most extreme emitters yet observed in the distant universe; all three are Wolf-Rayet galaxies. At all redshifts, strong C III] emission may pick out low-metallicity galaxies experiencing intense bursts of star formation. Such local C III] emitters may shed light on the conditions of star formation in certain extreme high-redshift galaxies.

  2. Development of Proportional Reasoning: Where Young Children Go Wrong

    PubMed Central

    Boyer, Ty W.; Levine, Susan C.; Huttenlocher, Janellen

    2008-01-01

    Previous studies have found that children have difficulty solving proportional reasoning problems involving discrete units until 10 to 12 years of age, but can solve parallel problems involving continuous quantities by 6 years of age. The present studies examine where children go wrong in processing proportions that involve discrete quantities. A computerized proportional equivalence choice task was administered to kindergartners through fourth-graders in Study 1, and to first- and third-graders in Study 2. Both studies involved four between-subjects conditions that were formed by pairing continuous and discrete target proportions with continuous and discrete choice alternatives. In Study 1, target and choice alternatives were presented simultaneously and in Study 2 target and choice alternatives were presented sequentially. In both studies, children performed significantly worse when both the target and choice alternatives were represented with discrete quantities than when either or both of the proportions involved continuous quantities. Taken together, these findings indicate that children go astray on proportional reasoning problems involving discrete units only when a numerical match is possible, suggesting that their difficulty is due to an overextension of numerical equivalence concepts to proportional equivalence problems. PMID:18793078

  3. The second Eshelby problem and its solvability

    NASA Astrophysics Data System (ADS)

    Zou, Wen-Nan; Zheng, Quan-Shui

    2012-10-01

    It is still a challenge to clarify the dependence of the overall elastic properties of heterogeneous materials on the microstructures of non-ellipsoidal inhomogeneities (cracks, pores, foreign particles). From the theory of elasticity, the formulation of the perturbance elastic fields, arising from a non-ellipsoidal inhomogeneity embedded in an infinitely extended material under remote constant loading, inevitably involves one or more integral equations. Up to now, due to the mathematical difficulty, almost no explicit analytical solutions have been obtained except for the ellipsoidal inhomogeneity. In this paper, we point out that it is impossible to transform this inhomogeneity problem into a conventional Eshelby problem by the equivalent inclusion method, even if the eigenstrain is chosen to be non-uniform. We also build up an equivalent model, called the second Eshelby problem, to investigate the perturbance stress. It is probably a better template from which to make use of the profound methods and results of conventional Eshelby problems for non-ellipsoidal inclusions.

  4. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy

    PubMed Central

    Cohen, E. A. K.; Ober, R. J.

    2014-01-01

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variable problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distribution for the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data. PMID:24634573
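    The weighted estimation step can be sketched for the simplest case the abstract mentions, where each control point's covariance is a scalar multiple of the identity. This is a simplified illustration with synthetic data: the paper treats an errors-in-variables problem with noise on both point sets, whereas here only the destination points are noisy, so plain weighted least squares applies.

```python
import numpy as np

# Weighted least-squares fit of an affine transform from noisy control
# points with per-point scalar (heteroscedastic) noise. Simplified sketch,
# not the paper's full GLS/errors-in-variables estimator; all values are
# synthetic.

rng = np.random.default_rng(0)
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])
b_true = np.array([3.0, -2.0])

n = 50
src = rng.uniform(-10, 10, size=(n, 2))
sigma2 = rng.uniform(0.01, 0.5, size=n)   # per-point noise variances
dst = src @ A_true.T + b_true + rng.normal(size=(n, 2)) * np.sqrt(sigma2)[:, None]

# Weighted normal equations: weight each control point by 1/sigma_i^2.
W = 1.0 / sigma2
X = np.hstack([src, np.ones((n, 1))])     # design matrix [x y 1]
XtW = X.T * W                             # X^T diag(W)
theta = np.linalg.solve(XtW @ X, XtW @ dst)   # 3x2: rows are A^T and b
A_hat, b_hat = theta[:2].T, theta[2]
```

    The registration error measures (TRE, LRE) in the abstract are then functionals of the distribution of (A_hat, b_hat).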

  5. Iterative framework radiation hybrid mapping

    USDA-ARS?s Scientific Manuscript database

    Building comprehensive radiation hybrid maps for large sets of markers is a computationally expensive process, since the basic mapping problem is equivalent to the traveling salesman problem. The mapping problem is also susceptible to noise, and as a result, it is often beneficial to remove markers ...
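    The traveling-salesman equivalence can be illustrated with the simplest ordering heuristic. The greedy nearest-neighbor pass below is illustrative only (it is not the authors' mapping algorithm), and the marker "distances" are a made-up example.

```python
# Ordering markers on a radiation hybrid map is equivalent to the traveling
# salesman problem: find the marker order minimizing total inter-marker
# distance. A greedy nearest-neighbor pass is the simplest heuristic.

def nearest_neighbor_order(dist, start=0):
    """Greedy marker ordering from a symmetric distance matrix."""
    n = len(dist)
    order, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Five markers whose true positions on a chromosome are 0..4, listed in a
# scrambled order; distances are absolute position differences.
pos = [0, 3, 1, 4, 2]
dist = [[abs(a - b) for b in pos] for a in pos]
order = nearest_neighbor_order(dist)   # recovers the linear marker order
```

    Real radiation hybrid distances are noisy, which is why the record notes that removing unreliable markers (and iterating) is often beneficial.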

  6. On the equivalence between traction- and stress-based approaches for the modeling of localized failure in solids

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Ying; Cervera, Miguel

    2015-09-01

    This work investigates systematically traction- and stress-based approaches for the modeling of strong and regularized discontinuities induced by localized failure in solids. Two complementary methodologies, i.e., discontinuities localized in an elastic solid and strain localization of an inelastic softening solid, are addressed. In the former it is assumed a priori that the discontinuity forms with a continuous stress field and along the known orientation. A traction-based failure criterion is introduced to characterize the discontinuity and the orientation is determined from Mohr's maximization postulate. If the displacement jumps are retained as independent variables, the strong/regularized discontinuity approaches follow, requiring constitutive models for both the bulk and discontinuity. Elimination of the displacement jumps at the material point level results in the embedded/smeared discontinuity approaches in which an overall inelastic constitutive model fulfilling the static constraint suffices. The second methodology is then adopted to check whether the assumed strain localization can occur and identify its consequences on the resulting approaches. The kinematic constraint guaranteeing stress boundedness and continuity upon strain localization is established for general inelastic softening solids. Application to a unified stress-based elastoplastic damage model naturally yields all the ingredients of a localized model for the discontinuity (band), justifying the first methodology. Two dual but not necessarily equivalent approaches, i.e., the traction-based elastoplastic damage model and the stress-based projected discontinuity model, are identified. The former is equivalent to the embedded and smeared discontinuity approaches, whereas in the latter the discontinuity orientation and associated failure criterion are determined consistently from the kinematic constraint rather than given a priori.
The bi-directional connections and equivalence conditions between the traction- and stress-based approaches are classified. Closed-form results under plane stress condition are also given. A generic failure criterion of either elliptic, parabolic or hyperbolic type is analyzed in a unified manner, with the classical von Mises (J2), Drucker-Prager, Mohr-Coulomb and many other frequently employed criteria recovered as its particular cases.

  7. Students’ misconception on equal sign

    NASA Astrophysics Data System (ADS)

    Kusuma, N. F.; Subanti, S.; Usodo, B.

    2018-04-01

    Equivalence is a very general relation in mathematics. The focus of this article is narrowed specifically to the equal sign in the context of equations. The equal sign is a symbol of mathematical equivalence. Studies have found that many students do not have a deep understanding of equivalence. Students often misinterpret the equal sign as an operational symbol rather than a symbol of mathematical equivalence. This misinterpretation is labeled a misconception. It is important to address immediately because it can lead to problems in students’ understanding. The purpose of this research is to describe students’ misconception about the meaning of the equal sign in the context of equal matrices. A descriptive method was used in this study, involving five students of a Senior High School in Boyolali who were taking an Equal Matrices course. The result of this study shows that all of the students had the misconception about the meaning of the equal sign: they interpreted it as an operational symbol rather than a symbol of mathematical equivalence. Students solved the problems in only a single, computational way, leaving them stuck in a monotonous way of thinking and unable to develop their creativity.

  8. Equivalent Colorings with "Maple"

    ERIC Educational Resources Information Center

    Cecil, David R.; Wang, Rongdong

    2005-01-01

    Many counting problems can be modeled as "colorings" and solved by considering symmetries and Polya's cycle index polynomial. This paper presents a "Maple 7" program (http://users.tamuk.edu/kfdrc00/) that, given Polya's cycle index polynomial, determines all possible associated colorings and their partitioning into equivalence classes. These…
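    The partition into equivalence classes that such a program computes can be illustrated by brute force for a tiny case: 2-colorings of a 4-bead necklace under rotation, with the class count cross-checked against Burnside's lemma (a special case of the Polya machinery the record refers to; this sketch does not use the cited Maple program).

```python
from itertools import product

# Partition all 2-colorings of a 4-bead necklace into equivalence classes
# under the rotation group C4, by explicit orbit enumeration.

def rotations(t):
    """All cyclic rotations of a coloring tuple."""
    return {t[i:] + t[:i] for i in range(len(t))}

colorings = set(product((0, 1), repeat=4))   # 2^4 = 16 raw colorings
classes = []
while colorings:
    orbit = rotations(next(iter(colorings)))
    classes.append(orbit)
    colorings -= orbit

# Burnside cross-check: (1/|C4|) * sum over rotations of fixed colorings
# = (2^4 + 2 + 2^2 + 2) / 4 = 6 equivalence classes.
```

    The cycle index polynomial encodes exactly the per-symmetry fixed-point counts summed in the Burnside check, which is why it suffices as the program's input.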

  9. An equivalent domain integral method in the two-dimensional analysis of mixed mode crack problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1990-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented.

  10. Chinese Algebra: Using Historical Problems to Think about Current Curricula

    ERIC Educational Resources Information Center

    Tillema, Erik

    2005-01-01

    The Chinese used the idea of generating equivalent expressions to solve problems. Problems from a historical Chinese text are studied to understand the ways in which these ideas can lead into algebraic calculations and help students learn algebra. The texts unify algebraic problem solving through complex algebraic thought and afford…

  11. 41 CFR 102-80.125 - Who has the responsibility for determining the acceptability of each equivalent level of safety...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... departments or other local authorities for use in developing pre-fire plans. ... Fire Prevention Equivalent Level of Safety Analysis § 102-80.125 Who has the responsibility for... acceptability must include a review of the fire protection engineer's qualifications, the appropriateness of the...

  12. 41 CFR 102-80.125 - Who has the responsibility for determining the acceptability of each equivalent level of safety...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... departments or other local authorities for use in developing pre-fire plans. ... Fire Prevention Equivalent Level of Safety Analysis § 102-80.125 Who has the responsibility for... acceptability must include a review of the fire protection engineer's qualifications, the appropriateness of the...

  13. 41 CFR 102-80.125 - Who has the responsibility for determining the acceptability of each equivalent level of safety...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... departments or other local authorities for use in developing pre-fire plans. ... Fire Prevention Equivalent Level of Safety Analysis § 102-80.125 Who has the responsibility for... acceptability must include a review of the fire protection engineer's qualifications, the appropriateness of the...

  14. 41 CFR 102-80.125 - Who has the responsibility for determining the acceptability of each equivalent level of safety...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... departments or other local authorities for use in developing pre-fire plans. ... Fire Prevention Equivalent Level of Safety Analysis § 102-80.125 Who has the responsibility for... acceptability must include a review of the fire protection engineer's qualifications, the appropriateness of the...

  15. 41 CFR 102-80.125 - Who has the responsibility for determining the acceptability of each equivalent level of safety...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... departments or other local authorities for use in developing pre-fire plans. ... Fire Prevention Equivalent Level of Safety Analysis § 102-80.125 Who has the responsibility for... acceptability must include a review of the fire protection engineer's qualifications, the appropriateness of the...

  16. General RMP Guidance - Chapter 10: Implementation

    EPA Pesticide Factsheets

    The implementing agency is the federal, state, or local agency taking the lead for implementation and enforcement of part 68 (risk management program) or the state or local equivalent. They review RMPs, select some for audits, and conduct inspections.

  17. The characterization of weighted local Hardy spaces on domains and its application.

    PubMed

    Wang, Heng-geng; Yang, Xiao-ming

    2004-09-01

    In this paper, we give four equivalent characterizations of the weighted local Hardy spaces on Lipschitz domains. We also give their application to harmonic functions defined in bounded Lipschitz domains.

  18. The fauna of the so-called Dakota formation of northern central Colorado and its equivalent in southeastern Wyoming

    USGS Publications Warehouse

    Reeside, John B.

    1923-01-01

    This paper describes a small fauna from beds in northern central Colorado that have long been designated the Dakota formation, often with doubt that all the beds so named were really equivalent to the typical Dakota sandstone of eastern Nebraska. The upper part of the equivalent beds in southeastern Wyoming was referred by some writers to the Benton Shale and the lower part to the Cloverly formation. This so-called Dakota formation of northern central Colorado and its equivalent in southeastern Wyoming consist of cherty conglomerate, brown quartzose sandstone, and dark shale. The conglomerate is usually at the base of the series and at many localities is overlain by a single shale unit and that in turn by a sandstone. At other localities, however, there are several alternations of sandstone and shale above the basal conglomeratic layer. The fossils described in this paper, except one specimen, were obtained from the shales of the middle part of the formation. The single specimen, an ammonite, came from the uppermost sandstone.

  19. Concept Learning versus Problem Solving: Is There a Difference?

    ERIC Educational Resources Information Center

    Nurrenbern, Susan C.; Pickering, Miles

    1987-01-01

    Reports on a study into the relationship between a student's ability to solve problems in chemistry and his/her understanding of molecular concepts. Argues that teaching students to solve problems about chemistry is not equivalent to teaching about the nature of matter. (TW)

  20. Topological transitions and freezing in XY models and Coulomb gases with quenched disorder: renormalization via traveling waves

    NASA Astrophysics Data System (ADS)

    Carpentier, David; Le Doussal, Pierre

    2000-11-01

    We study the two-dimensional XY model with quenched random phases and its Coulomb gas formulation. A novel renormalization group (RG) method is developed which allows one to study perturbatively the glassy low-temperature XY phase and the transition at which frozen topological defects (vortices) proliferate. This RG approach is constructed both from the replicated Coulomb gas and, equivalently without the use of replicas, using the probability distribution of the local disorder (random defect core energy). By taking into account the fusion of environments (i.e., charge fusion in the replicated Coulomb gas) this distribution is shown to obey a nonlinear RG equation of Kolmogorov-Petrovsky-Piskunov (KPP) type which admits traveling wave solutions and exhibits a freezing phenomenon analogous to glassy freezing in Derrida's random energy models. The resulting physical picture is that the distribution of local disorder becomes broad below a freezing temperature and that the transition is controlled by rare favorable regions for the defects, the density of which can be used as the new perturbative parameter. The determination of marginal directions at the disorder-induced transition is shown to be related to the well-studied front velocity selection problem in the KPP equation, and the universality of the novel critical behaviour obtained here is related to the known universality of the corrections to the front velocity. Applications to other two-dimensional problems are mentioned at the end.
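    For orientation, here is the scalar Fisher-KPP equation and its selected front velocity in standard textbook form (the paper's RG equation governs a disorder distribution, not this scalar model, so this is background rather than the paper's result):

```latex
% Textbook Fisher-KPP equation, with constants D, r > 0:
\partial_t u \;=\; D\,\partial_x^{2} u \;+\; r\,u(1-u).
% Traveling-wave fronts u(x,t) = U(x - vt) invading the unstable state
% u = 0 are "pulled" fronts and select the minimal velocity
v^{*} \;=\; 2\sqrt{D r}.
```

    It is this velocity-selection mechanism, together with the universal corrections to the asymptotic front velocity, onto which the abstract maps the marginal directions and critical behaviour at the disorder-induced transition.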

  1. Continuous-variable quantum key distribution based on a plug-and-play dual-phase-modulated coherent-states protocol

    NASA Astrophysics Data System (ADS)

    Huang, Duan; Huang, Peng; Wang, Tao; Li, Huasheng; Zhou, Yingming; Zeng, Guihua

    2016-09-01

    We propose and experimentally demonstrate a continuous-variable quantum key distribution (CV-QKD) protocol using dual-phase-modulated coherent states. We show that the modulation scheme of our protocol works equivalently to that of the Gaussian-modulated coherent-states (GMCS) protocol, but shows better experimental feasibility in the plug-and-play configuration. Besides, it waives the necessity of propagating a local oscillator (LO) between legitimate users and generates a real local LO for quantum measurement. Our protocol is proposed independently of the one-way GMCS QKD without sending an LO [Opt. Lett. 40, 3695 (2015), 10.1364/OL.40.003695; Phys. Rev. X 5, 041009 (2015), 10.1103/PhysRevX.5.041009; Phys. Rev. X 5, 041010 (2015), 10.1103/PhysRevX.5.041010]. In those recent works, the system stability suffers from polarization drifts induced by environmental perturbations, and two independent frequency-locked laser sources are necessary to achieve reliable coherent detection. In the proposed protocol, these problems are resolved. We derive the security bounds for our protocol against collective attacks, and we also perform a proof-of-principle experiment to confirm the utility of our proposal in real-life applications. Such an efficient scheme provides a way of removing the security loopholes associated with the transmitted LO, which have been a notoriously hard problem in continuous-variable quantum communication.

  2. Hawking radiation, Unruh radiation, and the equivalence principle.

    PubMed

    Singleton, Douglas; Wilburn, Steve

    2011-08-19

    We compare the response function of an Unruh-DeWitt detector for different space-times and different vacua and show that there is a detailed violation of the equivalence principle. In particular comparing the response of an accelerating detector to a detector at rest in a Schwarzschild space-time we find that both detectors register thermal radiation, but for a given, equivalent acceleration the fixed detector in the Schwarzschild space-time measures a higher temperature. This allows one to locally distinguish the two cases. As one approaches the horizon the two temperatures have the same limit so that the equivalence principle is restored at the horizon. © 2011 American Physical Society
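    The standard textbook expressions (stated here for orientation; they are not quoted from the paper, whose analysis is via detector response functions) make the claims concrete, in natural units G = c = ħ = k_B = 1:

```latex
% Unruh temperature for uniform proper acceleration a:
T_{\mathrm{Unruh}} \;=\; \frac{a}{2\pi}.
% Static detector at radius r outside a Schwarzschild black hole of mass M,
% with f(r) = 1 - 2M/r: local (Tolman) temperature and proper acceleration
a_{\mathrm{static}}(r) \;=\; \frac{M}{r^{2}\sqrt{f(r)}}, \qquad
T_{\mathrm{static}}(r) \;=\; \frac{1}{8\pi M\sqrt{f(r)}}
  \;=\; \frac{a_{\mathrm{static}}(r)}{2\pi}\cdot\frac{r^{2}}{4M^{2}}.
```

    Since r²/4M² > 1 outside the horizon, a static detector with the same local acceleration as a uniformly accelerated (Rindler) detector registers a higher temperature; the factor tends to 1 as r → 2M, consistent with the abstract's statement that the equivalence principle is restored at the horizon.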

  3. An Alternative Lattice Field Theory Formulation Inspired by Lattice Supersymmetry-Summary of the Formulation-

    NASA Astrophysics Data System (ADS)

    D'Adda, Alessandro; Kawamoto, Noboru; Saito, Jun

    2018-03-01

    We propose a lattice field theory formulation which overcomes some fundamental difficulties in realizing exact supersymmetry on the lattice. The Leibniz rule for the difference operator can be recovered by defining a new product on the lattice, the star product, and the chiral fermion species-doubler degrees of freedom can be avoided consistently. This framework is general enough to formulate non-supersymmetric lattice field theory without the chiral fermion problem. This lattice formulation has a nonlocal nature and is essentially equivalent to the corresponding continuum theory. We can show that the locality of the star product is recovered exponentially in the continuum limit. Possible regularization procedures are proposed. The associativity of the product and the lattice translational invariance of the formulation will be discussed.

  4. The Evolution of the Observed Hubble Sequence over the past 6Gyr

    NASA Astrophysics Data System (ADS)

    Delgado-Serrano, R.; Hammer, F.; Yang, Y. B.; Puech, M.; Flores, H.; Rodrigues, M.

    2011-10-01

    In recent years we have confronted serious methodological problems concerning the morphological and kinematic classification of distant galaxies. This has led us to create a new, simple, and effective morphological classification methodology that guarantees a morpho-kinematic correlation, makes reproducibility easier, and restricts classification subjectivity. Given the characteristics of our morphological classification, we have been able to apply the same methodology, using equivalent observations, to representative samples of local and distant galaxies. This has allowed us to derive, for the first time, the distant Hubble sequence (~6 Gyr ago) and to determine the morphological evolution of galaxies over the past 6 Gyr. Our results strongly suggest that more than half of present-day spirals had peculiar morphologies 6 Gyr ago.

  5. On Learning to Talk: Are Principles Derived from the Learning Laboratory Applicable?

    ERIC Educational Resources Information Center

    Palermo, David S.

    While studies in learning and verbal behavior show that learning comes through paired-associate problems, they do not explain the acquisition of language. Three paradigms demonstrate mediation effect in paired-associate learning: response equivalence, stimulus equivalence, and chaining model. By reviewing children's language acquisition patterns…

  6. Equivalent Electromagnetic Constants for Microwave Application to Composite Materials for the Multi-Scale Problem

    PubMed Central

    Fujisaki, Keisuke; Ikeda, Tomoyuki

    2013-01-01

    To connect models at different scales in the multi-scale problem of microwave use, equivalent material constants were determined numerically by three-dimensional electromagnetic field analysis, taking into account eddy current and displacement current. A volume averaged method and a standing wave method were used to introduce the equivalent material constants; water particles and aluminum particles are used as composite materials. Consumed electrical power is used for the evaluation. Water particles have the same equivalent material constants for both methods; the same electrical power is obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles have dissimilar equivalent material constants for both methods; different electric power is obtained for both models. The varying electromagnetic phenomena are derived from the expression of eddy current. For small electrical conductivity such as water, the macro-current which flows in the macro-model and the micro-current which flows in the micro-model express the same electromagnetic phenomena. However, for large electrical conductivity such as aluminum, the macro-current and micro-current express different electromagnetic phenomena. The eddy current which is observed in the micro-model is not expressed by the macro-model. Therefore, the equivalent material constant derived from the volume averaged method and the standing wave method is applicable to water with a small electrical conductivity, although not applicable to aluminum with a large electrical conductivity. PMID:28788395
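    The simplest version of the volume-averaged idea named above is a linear mixing rule. The values below are made up, and the paper itself extracts equivalent constants from full 3-D electromagnetic field simulation rather than from this closed form, so this is only a sketch of the concept:

```python
# Toy volume-averaged equivalent permittivity for a two-phase composite:
# the macro-model replaces the particle/host micro-structure with a single
# homogeneous constant. Values are hypothetical, not from the paper.

f = 0.3                          # volume fraction of the inclusion phase
eps_incl, eps_host = 80.0, 1.0   # e.g. water-like particles in an air-like host

# Linear (volume-weighted) mixing rule for the equivalent constant.
eps_eff = f * eps_incl + (1 - f) * eps_host
```

    The paper's central caveat applies here: such a homogenized constant can only be trusted when the micro-scale fields (e.g. eddy currents in highly conductive particles) are actually representable by the macro-model.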

  7. On the transformations of the dynamical equations

    NASA Astrophysics Data System (ADS)

    Levi-Civita, T.

    2009-08-01

    In this issue we bring to the reader’s attention a translation of Levi-Civita’s work “Sulle trasformazioni delle equazioni dinamiche”. This paper, written by Levi-Civita at the onset of his career, is remarkable in many respects. Both the main result and the method developed in the paper brought the author in line with the greatest mathematicians of his day and seriously influenced the further progress of geometry and the theory of integrable systems. In modern language, the main result of his paper is the deduction of the general geodesic equivalence equation in invariant form and the local classification of geodesically equivalent Riemannian metrics in arbitrary dimension, i.e., metrics having the same geodesics considered as unparameterized curves (this classification problem was formulated by Beltrami in 1865). Levi-Civita’s work had a great impact on the further development of the theory of geodesically equivalent metrics and geodesic mappings, and it still remains one of the most important tools in this area of differential geometry. In this paper the author uses a new method based on the concept of Riemannian connection, which later also came to be referred to as the Levi-Civita connection. This paper is truly a pioneering work in the sense that it demonstrated the real power of covariant differentiation techniques in solving a concrete and highly nontrivial problem from the theory of dynamical systems. The author skillfully operates and weaves together many of the most advanced (for its time) algebraic, geometric and analytic methods. Moreover, an attentive reader can also notice several forerunning ideas of the method of moving frames, which was developed a few decades later by E. Cartan. We hope that the reader will appreciate the style of exposition as well. This work, focused on the essence of the problem and free of manipulation of abstract mathematical terms, is a good example of a classical text of the late 19th century.
Owing to this, the paper is easy to read and understand in spite of some different notation and terminology. The Editorial Board is very grateful to Professor Sergio Benenti for the translation of the original Italian text and valuable comments (see marginal notes at the end of the text, p. 612).

  8. Introductory Course Based on a Single Problem: Learning Nucleic Acid Biochemistry from AIDS Research

    ERIC Educational Resources Information Center

    Grover, Neena

    2004-01-01

    In departure from the standard approach of using several problems to cover specific topics in a class, I use a single problem to cover the contents of the entire semester-equivalent biochemistry classes. I have developed a problem-based service-learning (PBSL) problem on HIV/AIDS to cover nucleic acid concepts that are typically taught in the…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kılıç, Emre, E-mail: emre.kilic@tum.de; Eibert, Thomas F.

    An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, whose computational burden is reduced by employing the multilevel fast multipole method (MLFMM). The reconstructed Cauchy data on the surface allow the use of the Lorentz reciprocity and Poynting theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem; moreover, it is possible to determine whether the material is lossy. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem, which is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data, and promising reconstruction results are obtained.

  10. Constitutive Modeling of Nanotube/Polymer Composites with Various Nanotube Orientations

    NASA Technical Reports Server (NTRS)

    Odegard, Gregory M.; Gates, Thomas S.

    2002-01-01

    In this study, a technique is proposed for developing constitutive models for polymer composite systems reinforced with single-walled carbon nanotubes (SWNT) with various orientations with respect to the bulk material coordinates. A nanotube, the local polymer adjacent to the nanotube, and the nanotube/polymer interface are modeled as an equivalent-continuum fiber by using an equivalent-continuum modeling method. The equivalent-continuum fiber accounts for the local molecular structure and bonding information and serves as a means for incorporating micromechanical analyses for the prediction of bulk mechanical properties of the SWNT/polymer composite. As an example, the proposed approach is used for the constitutive modeling of a SWNT/LaRC-SI (with a PmPV interface) composite system with aligned nanotubes, three-dimensionally randomly oriented nanotubes, and nanotubes oriented with varying degrees of axisymmetry. It is shown that the Young's modulus is highly dependent on the SWNT orientation distribution.

  11. Structural Equivalence of Involvement in Problem Behavior by Adolescents across Racial Groups Using Multiple Group Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Williams, James H.; And Others

    1996-01-01

    Problem behavior theory predicts that adolescent problem behaviors are manifestations of a single behavioral syndrome. This study tested the validity of the theory across racial groups. Results indicate that multiple pathways are necessary to account for the problem behaviors and they support previous research indicating system response bias in…

  12. Equivalent Dynamic Models.

    PubMed

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restricting attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of a new class of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.
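    The standard/structural equivalence discussed in this record can be sketched numerically: a structural VAR(1), B0 y_t = B1 y_{t-1} + e_t, and its standard (reduced) form y_t = Phi y_{t-1} + u_t with Phi = B0^{-1} B1 and u_t = B0^{-1} e_t generate identical trajectories from the same shocks. The matrices below are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Structural VAR(1) vs. its standard (reduced-form) equivalent.
# B0 and B1 are arbitrary illustrative choices, not from the paper.
rng = np.random.default_rng(0)
B0 = np.array([[1.0, 0.4], [0.0, 1.0]])   # contemporaneous (structural) relations
B1 = np.array([[0.5, 0.1], [0.2, 0.3]])   # lagged relations
Phi = np.linalg.solve(B0, B1)             # reduced-form coefficient matrix B0^{-1} B1

T = 50
e = rng.standard_normal((T, 2))           # structural shocks
y_struct = np.zeros((T, 2))
y_std = np.zeros((T, 2))
for t in range(1, T):
    # structural form: solve B0 y_t = B1 y_{t-1} + e_t
    y_struct[t] = np.linalg.solve(B0, B1 @ y_struct[t - 1] + e[t])
    # standard form with the mapped innovations u_t = B0^{-1} e_t
    y_std[t] = Phi @ y_std[t - 1] + np.linalg.solve(B0, e[t])

print(np.allclose(y_struct, y_std))       # the two model types are observationally equivalent
```

The same observed series is thus compatible with either parameterization, which is the sense of "equivalence" used in the abstract.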

  13. The physics of custody

    NASA Astrophysics Data System (ADS)

    Gomberoff, Andrés; Muñoz, Víctor; Romagnoli, Pierre Paul

    2014-02-01

    Divorced individuals face complex situations when they have children with different ex-partners, or even more so when their new partners have children of their own. In such cases, when the kids spend every other weekend with each parent, a practical problem emerges: is there a custody arrangement such that every couple has either all of the kids together or no kids at all? We show that in general there is not, but that the number of couples for which it holds can be maximized. The problem turns out to be equivalent to finding the ground state of a spin glass system, which in turn is equivalent to the weighted max-cut problem in graph theory, and hence is NP-complete.
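    The spin-glass/max-cut mapping described in this abstract can be sketched by brute force on a toy instance: parents are vertices, an edge of weight w joins two ex-partners sharing w children, and a cut edge means those two host their shared kids on opposite weekends. The graph below is an illustrative assumption, not data from the paper.

```python
from itertools import product

# Weighted max-cut by exhaustive search (the problem is NP-complete,
# so brute force only works for tiny illustrative graphs like this one).
edges = {(0, 1): 2, (1, 2): 1, (0, 2): 1, (2, 3): 3}
n = 4

def cut_weight(assign):
    # total weight of edges whose endpoints fall on opposite weekends
    return sum(w for (i, j), w in edges.items() if assign[i] != assign[j])

best = max(product([0, 1], repeat=n), key=cut_weight)
print(best, cut_weight(best))
```

The triangle 0-1-2 shows why a perfect arrangement can fail: an odd cycle can never have all its edges cut, so at least one couple is left unsatisfied.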

  14. SEM (Symmetry Equivalent Molecules): a web-based GUI to generate and visualize the macromolecules

    PubMed Central

    Hussain, A. S. Z.; Kumar, Ch. Kiran; Rajesh, C. K.; Sheik, S. S.; Sekar, K.

    2003-01-01

    SEM, Symmetry Equivalent Molecules, is a web-based graphical user interface to generate and visualize the symmetry equivalent molecules (proteins and nucleic acids). In addition, the program allows the users to save the three-dimensional atomic coordinates of the symmetry equivalent molecules in the local machine. The widely recognized graphics program RasMol has been deployed to visualize the reference (input atomic coordinates) and the symmetry equivalent molecules. This program is written using CGI/Perl scripts and has been interfaced with all the three-dimensional structures (solved using X-ray crystallography) available in the Protein Data Bank. The program, SEM, can be accessed over the World Wide Web interface at http://dicsoft2.physics.iisc.ernet.in/sem/ or http://144.16.71.11/sem/. PMID:12824326

  15. High- and low-dose-rate intraoperative radiotherapy for thoracic malignancies resected with close or positive margins.

    PubMed

    Fleming, Christopher; Rimner, Andreas; Cohen, Gil'ad N; Woo, Kaitlin M; Zhang, Zhigang; Rosenzweig, Kenneth E; Alektiar, Kaled M; Zelefsky, Michael J; Bains, Manjit S; Wu, Abraham J

    2016-01-01

    Local recurrence is a significant problem after surgical resection of thoracic tumors. As intraoperative radiotherapy (IORT) can deliver radiation directly to the threatened margin, we have used this therapy in an attempt to reduce local recurrence, using high-dose-rate (HDR) as well as low-dose-rate (LDR) techniques. We performed a retrospective review of patients undergoing LDR ((125)I) mesh placement or HDR ((192)Ir) afterloading therapy during lung tumor resection between 2001 and 2013 at our institution. Competing risks methods were used to estimate the cumulative incidence of local failure. We also assessed possible predictive factors of local failure. Fifty-nine procedures (41 LDR and 18 HDR) were performed on 58 patients. Median follow-up was 55.1 months. Cumulative incidence of local failure at 1, 2, and 3 years was 28.5%, 34.2%, and 34.2%, respectively. Median overall survival was 39.9 months. There was no significant difference in local failure according to margin status, HDR vs. LDR, use of adjuvant external beam radiotherapy, or metastatic vs. primary tumor. Two patients (3.4%) experienced Grade 3+ toxicities likely related to brachytherapy. Additionally, 7 patients experienced Grade 3+ postsurgical complications unlikely related to brachytherapy. IORT is associated with good local control after resection of thoracic tumors otherwise at very high risk for local recurrence. There is a low incidence of severe toxicity attributable to brachytherapy. HDR-IORT appears to have equivalent outcomes to LDR-IORT. HDR or LDR-IORT can, therefore, be considered in situations where the oncologic completeness of thoracic tumor resection is in doubt. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  16. 40 CFR 455.41 - Special definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... being used contains the appropriate pollution control technologies (or equivalent systems/pesticide... appropriate permitting authority, e.g., the local Control Authority (the POTW) or NPDES permit writer which... written submission to the appropriate permitting authority, e.g., the local Control Authority (the POTW...

  17. 40 CFR 455.41 - Special definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... being used contains the appropriate pollution control technologies (or equivalent systems/pesticide... appropriate permitting authority, e.g., the local Control Authority (the POTW) or NPDES permit writer which... written submission to the appropriate permitting authority, e.g., the local Control Authority (the POTW...

  18. Capillary wave Hamiltonian for the Landau-Ginzburg-Wilson density functional

    NASA Astrophysics Data System (ADS)

    Chacón, Enrique; Tarazona, Pedro

    2016-06-01

    We study the link between the density functional (DF) formalism and the capillary wave theory (CWT) for liquid surfaces, focused on the Landau-Ginzburg-Wilson (LGW) model, or square gradient DF expansion, with a symmetric double parabola free energy, which has been extensively used in theoretical studies of this problem. We show the equivalence between the non-local DF results of Parry and coworkers and the direct evaluation of the mean square fluctuations of the intrinsic surface, as is done in the intrinsic sampling method for computer simulations. The definition of effective wave-vector dependent surface tensions is reviewed and we obtain new proposals for the LGW model. The surface weight proposed by Blokhuis and the surface mode analysis proposed by Stecki provide consistent and optimal effective definitions for the extended CWT Hamiltonian associated to the DF model. A non-local, or coarse-grained, definition of the intrinsic surface provides the missing element to get the mesoscopic surface Hamiltonian from the molecular DF description, as had been proposed a long time ago by Dietrich and coworkers.

  19. Numerical 3+1 General Relativistic Magnetohydrodynamics: A Local Characteristic Approach

    NASA Astrophysics Data System (ADS)

    Antón, Luis; Zanotti, Olindo; Miralles, Juan A.; Martí, José M.; Ibáñez, José M.; Font, José A.; Pons, José A.

    2006-01-01

    We present a general procedure to solve numerically the general relativistic magnetohydrodynamics (GRMHD) equations within the framework of the 3+1 formalism. The work reported here extends our previous investigation in general relativistic hydrodynamics (Banyuls et al. 1997) where magnetic fields were not considered. The GRMHD equations are written in conservative form to exploit their hyperbolic character in the solution procedure. All theoretical ingredients necessary to build up high-resolution shock-capturing schemes based on the solution of local Riemann problems (i.e., Godunov-type schemes) are described. In particular, we use a renormalized set of regular eigenvectors of the flux Jacobians of the relativistic MHD equations. In addition, the paper describes a procedure based on the equivalence principle of general relativity that allows the use of Riemann solvers designed for special relativistic MHD in GRMHD. Our formulation and numerical methodology are assessed by performing various test simulations recently considered by different authors. These include magnetized shock tubes, spherical accretion onto a Schwarzschild black hole, equatorial accretion onto a Kerr black hole, and magnetized thick disks accreting onto a black hole and subject to the magnetorotational instability.

  20. Capillary wave Hamiltonian for the Landau-Ginzburg-Wilson density functional.

    PubMed

    Chacón, Enrique; Tarazona, Pedro

    2016-06-22

    We study the link between the density functional (DF) formalism and the capillary wave theory (CWT) for liquid surfaces, focused on the Landau-Ginzburg-Wilson (LGW) model, or square gradient DF expansion, with a symmetric double parabola free energy, which has been extensively used in theoretical studies of this problem. We show the equivalence between the non-local DF results of Parry and coworkers and the direct evaluation of the mean square fluctuations of the intrinsic surface, as is done in the intrinsic sampling method for computer simulations. The definition of effective wave-vector dependent surface tensions is reviewed and we obtain new proposals for the LGW model. The surface weight proposed by Blokhuis and the surface mode analysis proposed by Stecki provide consistent and optimal effective definitions for the extended CWT Hamiltonian associated to the DF model. A non-local, or coarse-grained, definition of the intrinsic surface provides the missing element to get the mesoscopic surface Hamiltonian from the molecular DF description, as had been proposed a long time ago by Dietrich and coworkers.

  1. Direct localization of poles of a meromorphic function from measurements on an incomplete boundary

    NASA Astrophysics Data System (ADS)

    Nara, Takaaki; Ando, Shigeru

    2010-01-01

    This paper proposes an algebraic method to reconstruct the positions of multiple poles in a meromorphic function field from measurements on an arbitrary simple arc in it. A novel issue is the exactness of the algorithm depending on whether the arc is open or closed, and whether it encloses or does not enclose the poles. We first obtain a differential equation that can equivalently determine the meromorphic function field. From it, we derive linear equations that relate the elementary symmetric polynomials of the pole positions to weighted integrals of the field along the simple arc and end-point terms of the arc when it is an open one. Eliminating the end-point terms based on an appropriate choice of weighting functions and a combination of the linear equations, we obtain a simple system of linear equations for solving the elementary symmetric polynomials. We also show that our algorithm can be applied to a 2D electric impedance tomography problem. The effects of the proximity of the poles, the number of measurements and noise on the localization accuracy are numerically examined.
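    A simplified numerical cousin of the idea in this abstract (not the paper's open-arc algorithm) uses a closed contour: for f(z) = sum_m 1/(z - z_m) with simple unit-residue poles, the residue theorem gives the power sums p_k = (1/2*pi*i) \oint_C z^k f(z) dz, and Newton's identities convert these to the elementary symmetric polynomials, i.e. the coefficients of the monic polynomial whose roots are the poles. The pole positions and contour below are illustrative assumptions.

```python
import numpy as np

poles = np.array([0.5 + 0.2j, -0.3 + 0.6j, 0.1 - 0.4j])  # illustrative poles
M = len(poles)

def f(z):
    # meromorphic field with simple unit-residue poles
    return sum(1.0 / (z - zm) for zm in poles)

# trapezoid rule on the circle |z| = 2, which encloses all poles above
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = 2.0 * np.exp(1j * theta)
dz = 1j * z * (2.0 * np.pi / theta.size)
p = [np.sum(z**k * f(z) * dz) / (2j * np.pi) for k in range(1, M + 1)]  # power sums

# Newton's identities: k e_k = sum_{i=1..k} (-1)^{i-1} e_{k-i} p_i
e = [1.0]
for k in range(1, M + 1):
    e.append(sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1)) / k)

coeffs = [(-1) ** k * e[k] for k in range(M + 1)]  # z^M - e1 z^{M-1} + e2 z^{M-2} - ...
recovered = np.roots(coeffs)
print(sorted(recovered, key=lambda w: w.real))
```

The paper's contribution is precisely to remove the closed-contour requirement by eliminating the end-point terms that appear when the arc is open.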

  2. Arithmetic Practice Can Be Modified to Promote Understanding of Mathematical Equivalence

    ERIC Educational Resources Information Center

    McNeil, Nicole M.; Fyfe, Emily R.; Dunwiddie, April E.

    2015-01-01

    This experiment tested if a modified version of arithmetic practice facilitates understanding of math equivalence. Children within 2nd-grade classrooms (N = 166) were randomly assigned to practice single-digit addition facts using 1 of 2 workbooks. In the control workbook, problems were presented in the traditional "operations = answer"…

  3. After Decentralization: Delimitations and Possibilities within New Fields

    ERIC Educational Resources Information Center

    Wahlstrom, Ninni

    2008-01-01

    The shift from a centralized to a decentralized school system can be seen as a solution to an uncertain problem. Through analysing the displacements in the concept of equivalence within Sweden's decentralized school system, this study illustrates how the meaning of the concept of equivalence shifts over time, from a more collective target…

  4. Writing on the Threshold: Investigating New Media Concerns in Composition Textbooks

    ERIC Educational Resources Information Center

    Etlinger, Sarah A.

    2012-01-01

    This dissertation examines three recent first-year composition textbooks' treatments of new media. These textbooks treat new media as equivalent to print media; I offer "media equivalency" to describe the problem. This concept suggests that one medium is understood by the same methods as another. I argue that the media equivalency…

  5. Estrus response and fertility of Menz and crossbred ewes to single prostaglandin injection protocol.

    PubMed

    Mekuriaw, Zeleke; Assefa, Habtemariam; Tegegne, Azage; Muluneh, Dagne

    2016-01-01

    Natural lambing of sheep in Ethiopia occurs throughout the year in a scattered manner, negatively affecting the survival and growth rates of lambs born during the unfavorable season. Controlling the time of mating with an exogenous source of hormones is therefore considered one of the ways to mitigate problems related to haphazard lambing. To this end, an experiment was conducted to evaluate the efficacy of a prostaglandin-based estrus synchronization protocol in local and crossbred ewes. A total of 160 ewes (80 local and 80 crossbred) that had lambed at least once and were aged 3-5 years were used. Lutalyse® (dinoprost tromethamine sterile solution equivalent to 5 mg dinoprost per ml) and its analog Synchromate® (cloprostenol sodium equivalent to 0.250 mg cloprostenol per ml) were tested at different doses. The treatments were intramuscular injections of (1) 2.50 ml of Lutalyse® (12.5 mg dinoprost tromethamine), (2) 2 ml of Lutalyse® (10.0 mg dinoprost tromethamine), (3) 1 ml of Synchromate® (0.25 mg cloprostenol sodium), and (4) 0.8 ml of Synchromate® (0.20 mg cloprostenol sodium). Forty ewes (20 local and 20 crossbred) were allocated per treatment. Following injection of the respective hormones, rams of known fertility were introduced into the flock for 96 h at a ratio of one ram to 10 ewes. All estrus synchronization protocols except treatment 4 (0.8 ml of Synchromate®) induced estrus (heat) in the majority (55-65%) of local and crossbred ewes within 96 h post-injection. The interval from hormone administration to onset of estrus was also similar for all treatment groups except group 4, which came into heat sooner. The highest lambing rate was recorded in local ewes treated with 2.5 ml of Lutalyse® (84.62%, 11/13), whereas the lowest was obtained in crossbreds treated with 0.8 ml of Synchromate® (33.33%, 3/9).
    In conclusion, even though 2.5 ml and 2 ml of Lutalyse® or 1 ml of Synchromate® induced heat in the majority of local and crossbred ewes, the highest lambing percentage was obtained from ewes treated with 2.5 ml of Lutalyse®. Therefore, the use of 2.5 ml of Lutalyse® is recommended to synchronize estrus in local and crossbred ewes under the Ethiopian smallholder sheep production system, for the benefit of an improved lambing rate.

  6. Equivalence between contextuality and negativity of the Wigner function for qudits

    NASA Astrophysics Data System (ADS)

    Delfosse, Nicolas; Okay, Cihan; Bermejo-Vega, Juan; Browne, Dan E.; Raussendorf, Robert

    2017-12-01

    Understanding what distinguishes quantum mechanics from classical mechanics is crucial for quantum information processing applications. In this work, we consider two notions of non-classicality for quantum systems, negativity of the Wigner function and contextuality for Pauli measurements. We prove that these two notions are equivalent for multi-qudit systems with odd local dimension. For a single qudit, the equivalence breaks down. We show that there exist single qudit states that admit a non-contextual hidden variable model description and whose Wigner functions are negative.
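    The quantity at issue in this record can be computed directly for a single qutrit (d = 3, odd) with the standard phase-point-operator construction: A(0,0) is the parity operator, A(q,p) its translate under the Weyl shift X^q Z^p, and W(q,p) = Tr[rho A(q,p)] / d. This is a minimal sketch of that textbook construction, not code from the paper; the state chosen is an illustrative stabilizer state, whose Wigner function is nonnegative.

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)          # shift: |x> -> |x+1 mod d>
Z = np.diag(omega ** np.arange(d))         # boost: |x> -> omega^x |x>
P = np.eye(d)[:, (-np.arange(d)) % d]      # parity: |x> -> |-x mod d>

def wigner(rho):
    # discrete Wigner function on the d x d phase space
    W = np.empty((d, d))
    for q in range(d):
        for p in range(d):
            T = np.linalg.matrix_power(X, q) @ np.linalg.matrix_power(Z, p)
            A = T @ P @ T.conj().T          # phase-point operator A(q, p)
            W[q, p] = np.real(np.trace(rho @ A)) / d
    return W

rho0 = np.zeros((d, d)); rho0[0, 0] = 1.0   # stabilizer state |0><0|
W0 = wigner(rho0)
print(W0.sum())                             # normalization: sums to Tr(rho) = 1
print(W0.min() >= -1e-12)                   # nonnegative for this stabilizer state
```

Negativity of W for some state would witness contextuality in the multi-qudit, odd-dimensional setting proved equivalent in the paper; the single-qudit case is exactly where that equivalence breaks down.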

  7. Generic pure quantum states as steady states of quasi-local dissipative dynamics

    NASA Astrophysics Data System (ADS)

    Karuvade, Salini; Johnson, Peter D.; Ticozzi, Francesco; Viola, Lorenza

    2018-04-01

    We investigate whether a generic pure state on a multipartite quantum system can be the unique asymptotic steady state of locality-constrained purely dissipative Markovian dynamics. In the tripartite setting, we show that the problem is equivalent to characterizing the solution space of a set of linear equations and establish that the set of pure states obeying the above property has either measure zero or measure one, solely depending on the subsystems’ dimension. A complete analytical characterization is given when the central subsystem is a qubit. In the N-partite case, we provide conditions on the subsystems’ size and the nature of the locality constraint, under which random pure states cannot be quasi-locally stabilized generically. Also, allowing for the possibility to approximately stabilize entangled pure states that cannot be exact steady states in settings where stabilizability is generic, our results offer insights into the extent to which random pure states may arise as unique ground states of frustration-free parent Hamiltonians. We further argue that, to a high probability, pure quantum states sampled from a t-design enjoy the same stabilizability properties of Haar-random ones as long as suitable dimension constraints are obeyed and t is sufficiently large. Lastly, we demonstrate a connection between the tasks of quasi-local state stabilization and unique state reconstruction from local tomographic information, and provide a constructive procedure for determining a generic N-partite pure state based only on knowledge of the support of any two of the reduced density matrices of about half the parties, improving over existing results.

  8. Static and free-vibration analyses of cracks in thin-shell structures based on an isogeometric-meshfree coupling approach

    NASA Astrophysics Data System (ADS)

    Nguyen-Thanh, Nhon; Li, Weidong; Zhou, Kun

    2018-03-01

    This paper develops a coupling approach which integrates the meshfree method and isogeometric analysis (IGA) for static and free-vibration analyses of cracks in thin-shell structures. In this approach, the domain surrounding the cracks is represented by the meshfree method, while the rest of the domain is meshed by IGA. The present approach is capable of preserving the geometric exactness and high continuity of IGA. Local refinement is achieved by adding nodes along the background cells in the meshfree domain. Moreover, the equivalent domain integral technique for three-dimensional problems is derived from the Kirchhoff-Love theory to compute the J-integral for the thin-shell model. The proposed approach is able to address problems involving through-the-thickness cracks without using additional rotational degrees of freedom, which facilitates the enrichment strategy for crack tips. The crack tip enrichment effects and the stress distribution and displacements around the crack tips are investigated. Free vibrations of cracks in thin shells are also analyzed. Numerical examples are presented to demonstrate the accuracy and computational efficiency of the coupling approach.

  9. A flexible motif search technique based on generalized profiles.

    PubMed

    Bucher, P; Karplus, K; Moeri, N; Hofmann, K

    1996-03-01

    A flexible motif search technique is presented which has two major components: (1) a generalized profile syntax serving as a motif definition language; and (2) a motif search method specifically adapted to the problem of finding multiple instances of a motif in the same sequence. The new profile structure, which is the core of the generalized profile syntax, combines the functions of a variety of motif descriptors implemented in other methods, including regular expression-like patterns, weight matrices, previously used profiles, and certain types of hidden Markov models (HMMs). The relationship between generalized profiles and other biomolecular motif descriptors is analyzed in detail, with special attention to HMMs. Generalized profiles are shown to be equivalent to a particular class of HMMs, and conversion procedures in both directions are given. The conversion procedures provide an interpretation for local alignment in the framework of stochastic models, allowing for clear, simple significance tests. A mathematical statement of the motif search problem defines the new method exactly without linking it to a specific algorithmic solution. Part of the definition includes a new definition of disjointness of alignments.
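    The weight-matrix end of the descriptor spectrum surveyed in this record can be illustrated with a toy scan: score every window of a DNA sequence against a log-odds position weight matrix and report all windows above a cutoff, i.e. multiple motif instances in the same sequence. The matrix, sequence, and cutoff are illustrative assumptions, not from the paper.

```python
import math

BACKGROUND = 0.25
COUNTS = [  # per-position base counts from a hypothetical alignment (pseudocounts included)
    {"A": 9, "C": 1, "G": 1, "T": 1},
    {"A": 1, "C": 9, "G": 1, "T": 1},
    {"A": 1, "C": 1, "G": 9, "T": 1},
]
# log-odds weight matrix: log2(frequency / background) per position and base
PWM = [{b: math.log2((c[b] / sum(c.values())) / BACKGROUND) for b in c} for c in COUNTS]

def scan(seq, cutoff=2.0):
    """Return (offset, score) for every window scoring at or above cutoff."""
    w = len(PWM)
    hits = []
    for i in range(len(seq) - w + 1):
        score = sum(PWM[j][seq[i + j]] for j in range(w))
        if score >= cutoff:
            hits.append((i, round(score, 3)))
    return hits

print(scan("TTACGGACGTT"))   # the "ACG" consensus occurs twice in this sequence
```

Generalized profiles extend this picture with position-specific gap penalties, which is what makes them interconvertible with a class of hidden Markov models as the abstract describes.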

  10. Elements of orbit-determination theory - Textbook

    NASA Technical Reports Server (NTRS)

    Solloway, C. B.

    1971-01-01

    Text applies to solution of various optimization problems. Concepts are logically introduced and refinements and complexities for computerized numerical solutions are avoided. Specific topics and essential equivalence of several different approaches to various aspects of the problem are given.

  11. On the Mathematical Modeling of Single and Multiple Scattering of Ultrasonic Guided Waves by Small Scatterers: A Structural Health Monitoring Measurement Model

    NASA Astrophysics Data System (ADS)

    Strom, Brandon William

    In an effort to assist in the paradigm shift from schedule-based maintenance to condition-based maintenance, we derive measurement models to be used within structural health monitoring algorithms. Our models are physics-based and use scattered Lamb waves to detect and quantify pitting corrosion. After covering the basics of Lamb waves and the reciprocity theorem, we develop a technique for the scattered wave solution. The first application is two-dimensional and is employed in two different ways. The first approach integrates a traction distribution and replaces it by an equivalent force. The second approach is higher order and uses the actual traction distribution. We find that the equivalent-force version of the solution technique holds well for small pits at low frequencies. The second application is three-dimensional. The equivalent force caused by the scattered wave of an arbitrary equivalent force is calculated. We obtain functions for the scattered wave displacements as a function of equivalent forces, equivalent forces as a function of the incident wave, and scattered wave amplitudes as a function of incident amplitude. The third application uses self-consistency to derive governing equations for the scattered waves due to multiple corrosion pits. We decouple the implicit set of equations and solve explicitly by using a recursive series solution. Alternatively, we solve via an undetermined-coefficient method, which results in an interaction operator and a solution via matrix inversion. The general solution is given for N pits, including mode conversion. We show that the two approaches are equivalent, and give a solution for three pits. Various approximations are advanced to simplify the problem while retaining the leading-order physics. As a final application, we use the multiple scattering model to investigate resonance of Lamb waves. We begin with a one-dimensional problem and progress to a three-dimensional problem.
A directed graph enables interpretation of the interaction operator, and we show that a series solution converges due to loss of energy in the system. We see that there are four causes of resonance and plot the modulation depth as a function of spacing between the pits.
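    The two solution routes described in this abstract can be sketched abstractly: writing the self-consistent multiple-scattering equations as x = b + A x, the recursive series x = b + A b + A^2 b + ... converges when the interaction loses energy at each order (spectral radius of A below 1), and must then agree with direct inversion of the interaction operator, x = (I - A)^{-1} b. Here A and b are illustrative stand-ins for the pit-coupling operator and the incident-wave forcing, not quantities from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
A = 0.2 * rng.random((4, 4))     # weak coupling: row sums below 1 guarantee convergence
b = rng.random(4)                # incident-wave forcing

x_series = np.zeros(4)
term = b.copy()
for _ in range(200):             # recursive (Neumann/Born-like) series solution
    x_series = x_series + term
    term = A @ term              # each order is one more scattering interaction

x_inv = np.linalg.solve(np.eye(4) - A, b)   # interaction-operator inversion
print(np.allclose(x_series, x_inv))
```

The divergence of such a series when the coupling does not decay is closely related to the resonance behavior the final application investigates.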

  12. The alarming problems of confounding equivalence using logistic regression models in the perspective of causal diagrams.

    PubMed

    Yu, Yuanyuan; Li, Hongkai; Sun, Xiaoru; Su, Ping; Wang, Tingting; Liu, Yi; Yuan, Zhongshang; Liu, Yanxun; Xue, Fuzhong

    2017-12-28

    Confounders can produce spurious associations between exposure and outcome in observational studies. For the majority of epidemiologists, adjusting for confounders with a logistic regression model is the habitual method, though it has problems of accuracy and precision. It is, therefore, important to highlight the problems of logistic regression and to search for an alternative method. Four causal diagram models were defined to summarize confounding equivalence. Both theoretical proofs and simulation studies were performed to verify whether conditioning on different confounding equivalence sets had the same bias-reducing potential and then to select the optimum adjusting strategy; the logistic regression model and the inverse probability weighting based marginal structural model (IPW-based-MSM) were compared. The "do-calculus" was used to calculate the true causal effect of exposure on outcome, and the bias and standard error were used to evaluate the performances of the different strategies. Adjusting for different sets of confounding equivalence, as judged by identical Markov boundaries, produced different bias-reducing potential in the logistic regression model. For the sets satisfying G-admissibility, adjusting for the set including all the confounders reduced the bias to the same level as the set containing the parent nodes of the outcome, while the bias after adjusting for the parent nodes of exposure was not equivalent to them. In addition, all causal effect estimations through logistic regression were biased, although the estimation after adjusting for the parent nodes of exposure was nearest to the true causal effect. However, conditioning on different confounding equivalence sets had the same bias-reducing potential under IPW-based-MSM.
    Compared with logistic regression, the IPW-based-MSM could obtain unbiased causal effect estimation when the adjusted confounders satisfied G-admissibility, and the optimal strategy was to adjust for the parent nodes of the outcome, which obtained the highest precision. All adjustment strategies through logistic regression were biased for causal effect estimation, while IPW-based-MSM could always obtain unbiased estimation when the adjusted set satisfied G-admissibility. Thus, IPW-based-MSM is recommended for adjusting for the confounder set.
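    The core contrast in this record, that inverse probability weighting recovers the marginal causal effect where a crude comparison is confounded, can be sketched with a minimal simulation. One binary confounder C affects both the exposure A and the outcome Y; the true causal effect of A on Y is 1.0. All parameter values are illustrative assumptions, and the propensity is taken as known rather than estimated.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
C = rng.binomial(1, 0.5, n)
pA = np.where(C == 1, 0.8, 0.2)            # confounded exposure assignment
A = rng.binomial(1, pA)
Y = 1.0 * A + 2.0 * C + rng.standard_normal(n)   # true causal effect of A is 1.0

naive = Y[A == 1].mean() - Y[A == 0].mean()        # crude contrast, biased (about 2.2 here)
w = np.where(A == 1, 1.0 / pA, 1.0 / (1.0 - pA))   # inverse probability weights
ipw = np.average(Y, weights=w * (A == 1)) - np.average(Y, weights=w * (A == 0))
print(naive, ipw)                                  # ipw lands near the true effect of 1.0
```

Weighting creates a pseudo-population in which C no longer predicts A, which is why the weighted contrast targets the marginal causal effect directly.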

  13. Serial Network Flow Monitor

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Tate-Brown, Judy M.

    2009-01-01

    Using a commercial software CD and minimal up-mass, SNFM monitors the Payload local area network (LAN) to analyze and troubleshoot LAN data traffic. Validating LAN traffic models may allow for faster and more reliable computer networks to sustain systems and science on future space missions. Research Summary: This experiment studies the function of the computer network onboard the ISS. On-orbit packet statistics are captured and used to validate ground based medium rate data link models and enhance the way that the local area network (LAN) is monitored. This information will allow monitoring and improvement in the data transfer capabilities of on-orbit computer networks. The Serial Network Flow Monitor (SNFM) experiment attempts to characterize the network equivalent of traffic jams on board ISS. The SNFM team is able to specifically target historical problem areas including the SAMS (Space Acceleration Measurement System) communication issues, data transmissions from the ISS to the ground teams, and multiple users on the network at the same time. By looking at how various users interact with each other on the network, conflicts can be identified and work can begin on solutions. SNFM is comprised of a commercial off the shelf software package that monitors packet traffic through the payload Ethernet LANs (local area networks) on board ISS.

  14. Equivalent isotropic scattering formulation for transient short-pulse radiative transfer in anisotropic scattering planar media.

    PubMed

    Guo, Z; Kumar, S

    2000-08-20

    An isotropic scaling formulation is evaluated for transient radiative transfer in a one-dimensional planar slab subject to collimated and/or diffuse irradiation. The Monte Carlo method is used to implement the equivalent scattering and exact simulations of the transient short-pulse radiation transport through forward and backward anisotropic scattering planar media. The scaled equivalent isotropic scattering results are compared with predictions of anisotropic scattering in various problems. It is found that the equivalent isotropic scaling law is not appropriate for backward-scattering media in transient radiative transfer. Even for an optically diffuse medium, the differences in temporal transmittance and reflectance profiles between predictions of backward anisotropic scattering and equivalent isotropic scattering are large. Additionally, for both forward and backward anisotropic scattering media, the transient equivalent isotropic results are strongly affected by the change of photon flight time, owing to the change of flight direction associated with the isotropic scaling technique.
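    The scaling evaluated in this record is the standard similarity transformation: an anisotropically scattering medium with asymmetry factor g is replaced by an equivalent isotropic one with reduced scattering coefficient sigma_s* = (1 - g) sigma_s, absorption unchanged. A minimal sketch with illustrative values:

```python
def equivalent_isotropic(sigma_s, sigma_a, g):
    """Similarity (isotropic scaling) transformation of the optical properties."""
    sigma_s_star = (1.0 - g) * sigma_s           # scaled scattering coefficient
    return sigma_s_star, sigma_a + sigma_s_star  # (scattering, extinction)

# forward scattering (g > 0) weakens the equivalent scattering;
# backward scattering (g < 0) strengthens it, and it is in that regime
# that the abstract finds the transient equivalence to fail
print(equivalent_isotropic(1.0, 0.1, 0.9))
print(equivalent_isotropic(1.0, 0.1, -0.5))
```

The abstract's point is that this steady-state scaling mishandles photon flight times in the transient, short-pulse regime, especially for g < 0.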

  15. Comparing Future Teachers' Beliefs across Countries: Approximate Measurement Invariance with Bayesian Elastic Constraints for Local Item Dependence and Differential Item Functioning

    ERIC Educational Resources Information Center

    Braeken, Johan; Blömeke, Sigrid

    2016-01-01

    Using data from the international Teacher Education and Development Study: Learning to Teach Mathematics (TEDS-M), the measurement equivalence of teachers' beliefs across countries is investigated for the case of "mathematics-as-a fixed-ability". Measurement equivalence is a crucial topic in all international large-scale assessments and…

  16. Lagrangian particles with mixing. I. Simulating scalar transport

    NASA Astrophysics Data System (ADS)

    Klimenko, A. Y.

    2009-06-01

    The physical similarity and mathematical equivalence of continuous diffusion and particle random walk form one of the cornerstones of modern physics and the theory of stochastic processes. The randomly walking particles do not need to possess any properties other than location in physical space. However, particles used in many models dealing with simulating turbulent transport and turbulent combustion do possess a set of scalar properties, and mixing between particle properties is performed to reflect the dissipative nature of the diffusion processes. We show that continuous scalar transport and diffusion can be accurately specified by means of localized mixing between randomly walking Lagrangian particles with scalar properties, and we assess the errors associated with this scheme. Particles with scalar properties and localized mixing represent an alternative formulation for the process that is selected to represent the continuous diffusion. Simulating diffusion by Lagrangian particles with mixing involves three main competing requirements: minimizing stochastic uncertainty, minimizing bias introduced by numerical diffusion, and preserving independence of particles. These requirements are analyzed for two limiting cases: mixing between two particles and mixing between a large number of particles. The problem of possible dependences between particles is the most complicated. This problem is analyzed using a coupled chain of equations that has similarities with the Bogoliubov-Born-Green-Kirkwood-Yvon chain in statistical physics. Dependences between particles can be significant when particles are in close proximity, resulting in a reduced rate of mixing. This work develops further ideas introduced in the previously published letter [Phys. Fluids 19, 031702 (2007)]. Paper I of this work is followed by Paper II [Phys. Fluids 19, 065102 (2009)] where modeling of turbulent reacting flows by Lagrangian particles with localized mixing is specifically considered.

  17. Discrete-Trial Functional Analysis and Functional Communication Training with Three Individuals with Autism and Severe Problem Behavior

    ERIC Educational Resources Information Center

    Schmidt, Jonathan D.; Drasgow, Erik; Halle, James W.; Martin, Christian A.; Bliss, Sacha A.

    2014-01-01

    Discrete-trial functional analysis (DTFA) is an experimental method for determining the variables maintaining problem behavior in the context of natural routines. Functional communication training (FCT) is an effective method for replacing problem behavior, once identified, with a functionally equivalent response. We implemented these procedures…

  18. Localization of Unitary Braid Group Representations

    NASA Astrophysics Data System (ADS)

    Rowell, Eric C.; Wang, Zhenghan

    2012-05-01

    Governed by locality, we explore a connection between unitary braid group representations associated to a unitary R-matrix and to a simple object in a unitary braided fusion category. Unitary R-matrices, namely unitary solutions to the Yang-Baxter equation, afford explicitly local unitary representations of braid groups. Inspired by topological quantum computation, we study whether or not it is possible to reassemble the irreducible summands appearing in the unitary braid group representations from a unitary braided fusion category with possibly different positive multiplicities to get representations that are uniformly equivalent to the ones from a unitary R-matrix. Such an equivalence will be called a localization of the unitary braid group representations. We show that the q = e^(i pi/6) specialization of the unitary Jones representation of the braid groups can be localized by a unitary 9 x 9 R-matrix. Actually this Jones representation is the first one in a family of theories (SO(N), 2) for an odd prime N > 1, which are conjectured to be localizable. We formulate several general conjectures and discuss possible connections to physics and computer science.

  19. An equivalent domain integral for analysis of two-dimensional mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies subjected to mixed mode loading is presented. The total and product integrals consist of the sum of an area or domain integral and line integrals on the crack faces. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all the problems analyzed.

  20. Exploring the Structure of Equivalence Items in an Assessment of Elementary Grades

    ERIC Educational Resources Information Center

    Singh, Rashmi; Kosko, Karl W.

    2017-01-01

    This study focuses on the structure of equivalence problems to probe the evolution from an operational to a relational view in students' understanding of the equals sign. We propose a modified construct map that incorporates the intermediate levels in this transition, which were previously ignored. Our findings suggest that the structure of number…

  1. Measurement Equivalence across Racial/Ethnic Groups of the Mood and Feelings Questionnaire for Childhood Depression

    ERIC Educational Resources Information Center

    Banh, My K.; Crane, Paul K.; Rhew, Isaac; Gudmundsen, Gretchen; Stoep, Ann Vander; Lyon, Aaron; McCauley, Elizabeth

    2012-01-01

    As research continues to document differences in the prevalence of mental health problems such as depression across racial/ethnic groups, the issue of measurement equivalence becomes increasingly important to address. The Mood and Feelings Questionnaire (MFQ) is a widely used screening tool for child and adolescent depression. This study applied a…

  2. 28 CFR 36.601 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes § 36.601... means a State law or local building code or similar ordinance, or part thereof, that establishes... designee. Certification of equivalency means a final certification that a code meets or exceeds the minimum...

  3. 28 CFR 36.601 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes § 36.601... means a State law or local building code or similar ordinance, or part thereof, that establishes... designee. Certification of equivalency means a final certification that a code meets or exceeds the minimum...

  4. 28 CFR 36.601 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes § 36.601... means a State law or local building code or similar ordinance, or part thereof, that establishes... designee. Certification of equivalency means a final certification that a code meets or exceeds the minimum...

  5. The social essentials of learning: an experimental investigation of collaborative problem solving and knowledge construction in mathematics classrooms in Australia and China

    NASA Astrophysics Data System (ADS)

    Chan, Man Ching Esther; Clarke, David; Cao, Yiming

    2018-03-01

    Interactive problem solving and learning are priorities in contemporary education, but these complex processes have proved difficult to research. This project addresses the question "How do we optimise social interaction for the promotion of learning in a mathematics classroom?" Employing the logic of multi-theoretic research design, this project uses the newly built Science of Learning Research Classroom (ARC-SR120300015) at The University of Melbourne and equivalent facilities in China to investigate classroom learning and social interactions, focusing on collaborative small group problem solving as a way to make the social aspects of learning visible. In Australia and China, intact classes of local year 7 students with their usual teacher will be brought into the research classroom facilities with built-in video cameras and audio recording equipment to participate in purposefully designed activities in mathematics. The students will undertake a sequence of tasks in the social units of individual, pair, small group (typically four students) and whole class. The conditions for student collaborative problem solving and learning will be manipulated so that student and teacher contributions to that learning process can be distinguished. Parallel and comparative analyses will identify culture-specific interactive patterns and provide the basis for hypotheses about the learning characteristics underlying collaborative problem solving performance documented in the research classrooms in each country. The ultimate goals of the project are to generate, develop and test more sophisticated hypotheses for the optimisation of social interaction in the mathematics classroom in the interest of improving learning and, particularly, student collaborative problem solving.

  6. Nanoparticles in wound healing; from hope to promise, from promise to routine.

    PubMed

    Naderi, Naghmeh; Karponis, Dimitrios; Mosahebi, Afshin; Seifalian, Alexander M

    2018-01-01

    Chronic non-healing wounds represent a growing problem due to their high morbidity and cost. Despite recent advances in wound healing, several systemic and local factors can disrupt the weighed physiologic healing process. This paper critically reviews and discusses the role of nanotechnology in promoting the wound healing process. Nanotechnology-based materials have physicochemical, optical and biological properties distinct from those of their bulk equivalents. These nanoparticles can be incorporated into scaffolds to create nanocomposite smart materials, which promote wound healing through their antimicrobial, as well as selective anti- and pro-inflammatory, and pro-angiogenic properties. Owing to their high surface area, nanoparticles have also been used for drug delivery and as gene delivery vectors. In addition, nanoparticles affect wound healing by influencing collagen deposition and realignment, and provide approaches for skin regeneration and wound healing.

  7. A Geometrical-Statistical Approach to Outlier Removal for TDOA Measurements

    NASA Astrophysics Data System (ADS)

    Compagnoni, Marco; Pini, Alessia; Canclini, Antonio; Bestagini, Paolo; Antonacci, Fabio; Tubaro, Stefano; Sarti, Augusto

    2017-08-01

    The curse of outlier measurements in estimation problems is a well-known issue in a variety of fields. Therefore, outlier removal procedures, which enable the identification of spurious measurements within a set, have been developed for many different scenarios and applications. In this paper, we propose a statistically motivated outlier removal algorithm for time differences of arrival (TDOAs), or equivalently range differences (RDs), acquired at sensor arrays. The method exploits the TDOA-space formalism and requires knowledge only of the relative sensor positions. As the proposed method is completely independent of the application for which the measurements are used, it can be reliably used to identify outliers within a set of TDOA/RD measurements in different fields (e.g., acoustic source localization, sensor synchronization, radar, remote sensing, etc.). The proposed outlier removal algorithm is validated by means of synthetic simulations and real experiments.
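    The paper's algorithm is specific to the geometry of the TDOA space, but the general idea of flagging spurious measurements in a set can be illustrated with a generic robust filter. The sketch below substitutes a median-absolute-deviation (MAD) test for the paper's geometric-statistical method, purely for illustration:

```python
from statistics import median

def mad_outliers(values, threshold=3.5):
    """Flag outliers via the median absolute deviation (MAD).
    A generic robust filter for illustration only; the paper's
    method instead exploits the geometry of the TDOA space."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return [False] * len(values)
    # 0.6745 rescales the MAD to the standard deviation of a
    # normal distribution (modified z-score convention).
    return [abs(0.6745 * (v - med) / mad) > threshold for v in values]
```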

  8. Weak Compactness and Control Measures in the Space of Unbounded Measures

    PubMed Central

    Brooks, James K.; Dinculeanu, Nicolae

    1972-01-01

    We present a synthesis theorem for a family of locally equivalent measures defined on a ring of sets. This theorem is then used to exhibit a control measure for weakly compact sets of unbounded measures. In addition, the existence of a local control measure for locally strongly bounded vector measures is proved by means of the synthesis theorem. PMID:16591980

  9. The Normals to a Parabola and the Real Roots of a Cubic

    ERIC Educational Resources Information Center

    Bains, Majinder S.; Thoo, J. B.

    2007-01-01

    The geometric problem of finding the number of normals to the parabola y = x[squared] through a given point is equivalent to the algebraic problem of finding the number of distinct real roots of a cubic equation. Apollonius solved the former problem, and Cardano gave a solution to the latter. The two problems are bridged by Neil's (semi-cubical)…

  10. The Kadison–Singer Problem in mathematics and engineering

    PubMed Central

    Casazza, Peter G.; Tremain, Janet Crandell

    2006-01-01

    We will see that the famous intractable 1959 Kadison–Singer Problem in C*-algebras is equivalent to fundamental open problems in a dozen different areas of research in mathematics and engineering. This work gives all these areas common ground on which to interact, as well as explaining why each area has volumes of literature on its respective problems without a satisfactory resolution. PMID:16461465

  11. You'll See What You Mean: Students Encode Equations Based on Their Knowledge of Arithmetic

    ERIC Educational Resources Information Center

    McNeil, Nicole M.; Alibali, Martha W.

    2004-01-01

    This study investigated the roles of problem structure and strategy use in problem encoding. Fourth-grade students solved and explained a set of typical addition problems (e.g., 5 + 4 + 9 + 5 = ?) and mathematical equivalence problems (e.g., 4 + 3 + 6 = 4 + ? or 6 + 4 + 5 = ? + 5). Next, they completed an encoding task in which they reconstructed…

  12. Inducing mental set constrains procedural flexibility and conceptual understanding in mathematics.

    PubMed

    DeCaro, Marci S

    2016-10-01

    An important goal in mathematics is to flexibly use and apply multiple, efficient procedures to solve problems and to understand why these procedures work. One factor that may limit individuals' ability to notice and flexibly apply strategies is the mental set induced by the problem context. Undergraduate (N = 41, Experiment 1) and fifth- and sixth-grade students (N = 87, Experiment 2) solved mathematical equivalence problems in one of two set-inducing conditions. Participants in the complex-first condition solved problems without a repeated addend on both sides of the equal sign (e.g., 7 + 5 + 9 = 3 + _), which required multistep strategies. Then these students solved problems with a repeated addend (e.g., 7 + 5 + 9 = 7 + _), for which a shortcut strategy could be readily used (i.e., adding 5 + 9). Participants in the shortcut-first condition solved the same problem set but began with the shortcut problems. Consistent with laboratory studies of mental set, participants in the complex-first condition were less likely to use the more efficient shortcut strategy when possible. In addition, these participants were less likely to demonstrate procedural flexibility and conceptual understanding on a subsequent assessment of mathematical equivalence knowledge. These findings suggest that certain problem-solving contexts can help or hinder both flexibility in strategy use and deeper conceptual thinking about the problems.
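    The shortcut strategy described above amounts to cancelling the repeated addend and summing the remainder. A minimal sketch of both strategies on problems of the form a + b + c = d + _ (function names are illustrative, not from the study):

```python
def solve_equivalence(left, right_known):
    """Multistep strategy: solve sum(left) = sum(right_known) + x
    for the blank x by computing the full sums."""
    return sum(left) - sum(right_known)

def shortcut_applies(left, right_known):
    """The shortcut applies when an addend is repeated on both
    sides of the equal sign, e.g. 7 + 5 + 9 = 7 + _."""
    return any(a in left for a in right_known)

def solve_with_shortcut(left, right_known):
    """Shortcut strategy: cancel the repeated addend(s) and sum
    what remains (7 + 5 + 9 = 7 + _  ->  5 + 9 = 14)."""
    rest = list(left)
    for a in right_known:
        if a in rest:
            rest.remove(a)   # cancel the repeated addend
    return sum(rest)
```

    Both strategies give the same answer when the shortcut applies; the study's point is that a complex-first problem context makes solvers less likely to notice that it does.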

  13. Development of parallel algorithms for electrical power management in space applications

    NASA Technical Reports Server (NTRS)

    Berry, Frederick C.

    1989-01-01

    The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems produce results for voltage and power, which are then passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine whether any correction to the local problems is needed. The coordinator problem is solved by an iterative method much like the local problems, again the Newton-Raphson method. Each iteration at the coordination level therefore produces new values for the local problems, which must be solved again along with the coordinator problem until convergence conditions are met.
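    The Newton-Raphson iteration at the heart of both the local and coordinator problems can be sketched on a toy two-equation system. This is illustrative only; a real load flow solver would iterate on the power mismatch equations with a bus admittance matrix:

```python
def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson for a 2-equation system f(x, y) = 0:
    iterate (x, y) <- (x, y) - J^-1 f, with the 2x2 Jacobian
    inverted in closed form."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        a, b, c, d = jac(x, y)          # J = [[a, b], [c, d]]
        det = a * d - b * c
        # Solve J * (dx, dy) = (-f1, -f2) by Cramer's rule.
        dx = (-f1 * d + f2 * b) / det
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y

# Toy system (hypothetical, not a power network):
# x^2 + y^2 = 4 and x = y, with solution x = y = sqrt(2).
def f(x, y):
    return x * x + y * y - 4.0, x - y

def jac(x, y):
    return 2.0 * x, 2.0 * y, 1.0, -1.0
```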

  14. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization, such as production smoothing, facility location, goal programming, and L/sub 1/ estimation, are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L/sub 1/ estimation are solved using SLP and, as equivalent linear programs, using a linear programming simplex code. Comparisons of CPU times and numbers of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and, as equivalent standard linear programs, using a simple upper bounded linear programming code SUBLP.

  15. Cost-effectiveness Analysis with Influence Diagrams.

    PubMed

    Arias, M; Díez, F J

    2015-01-01

    Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. The objective was to develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay, separated by cost-effectiveness thresholds, and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
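    The decision rule underlying the willingness-to-pay intervals can be illustrated with the standard net-monetary-benefit calculation. The interventions and numbers below are hypothetical, and the sketch is the textbook CEA rule, not the paper's ID-evaluation algorithm:

```python
def optimal_intervention(interventions, wtp):
    """Pick the intervention maximizing net monetary benefit
    NMB = wtp * effectiveness - cost, the standard CEA decision
    rule at a given willingness to pay (wtp) per unit of effect."""
    return max(interventions, key=lambda iv: wtp * iv[2] - iv[1])

# Hypothetical (name, cost, effectiveness) data, illustrative only.
options = [("no treatment",     0.0, 1.0),
           ("drug A",       20000.0, 1.5),
           ("drug B",       70000.0, 1.8)]
```

    Sweeping wtp from low to high reproduces the interval structure the paper describes: the optimal choice switches at the cost-effectiveness thresholds.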

  16. Matrix product state representation of quasielectron wave functions

    NASA Astrophysics Data System (ADS)

    Kjäll, J.; Ardonne, E.; Dwivedi, V.; Hermanns, M.; Hansson, T. H.

    2018-05-01

    Matrix product state techniques provide a very efficient way to numerically evaluate certain classes of quantum Hall wave functions that can be written as correlators in two-dimensional conformal field theories. Important examples are the Laughlin and Moore-Read ground states and their quasihole excitations. In this paper, we extend the matrix product state techniques to evaluate quasielectron wave functions, a more complex task because the corresponding conformal field theory operator is not local. We use our method to obtain density profiles for states with multiple quasielectrons and quasiholes, and to calculate the (mutual) statistical phases of the excitations with high precision. The wave functions we study are subject to a known difficulty: the position of a quasielectron depends on the presence of other quasiparticles, even when their separation is large compared to the magnetic length. Quasielectron wave functions constructed using the composite fermion picture, which are topologically equivalent to the quasielectrons we study, have the same problem. This flaw is serious in that it gives wrong results for the statistical phases obtained by braiding distant quasiparticles. We analyze this problem in detail and show that it originates from an incomplete screening of the topological charges, which invalidates the plasma analogy. We demonstrate that this can be remedied in the case when the separation between the quasiparticles is large, which allows us to obtain the correct statistical phases. Finally, we propose that a modification of the Laughlin state, that allows for local quasielectron operators, should have good topological properties for arbitrary configurations of excitations.

  17. Experimental Observation of Two-Dimensional Anderson Localization with the Atomic Kicked Rotor.

    PubMed

    Manai, Isam; Clément, Jean-François; Chicireanu, Radu; Hainaut, Clément; Garreau, Jean Claude; Szriftgiser, Pascal; Delande, Dominique

    2015-12-11

    Dimension 2 is expected to be the lower critical dimension for Anderson localization in a time-reversal-invariant disordered quantum system. Using an atomic quasiperiodic kicked rotor, equivalent to a two-dimensional Anderson-like model, we experimentally study Anderson localization in dimension 2 and we observe localized wave function dynamics. We also show that the localization length depends exponentially on the disorder strength and anisotropy and is in quantitative agreement with the predictions of the self-consistent theory for the 2D Anderson localization.

  18. Item Analysis and Differential Item Functioning of a Brief Conduct Problem Screen

    ERIC Educational Resources Information Center

    Wu, Johnny; King, Kevin M.; Witkiewitz, Katie; Racz, Sarah Jensen; McMahon, Robert J.

    2012-01-01

    Research has shown that boys display higher levels of childhood conduct problems than girls, and Black children display higher levels than White children, but few studies have tested for scalar equivalence of conduct problems across gender and race. The authors conducted a 2-parameter item response theory (IRT) model to examine item…

  19. Effect of Instructional Strategy on Critical Thinking and Content Knowledge: Using Problem-Based Learning in the Secondary Classroom

    ERIC Educational Resources Information Center

    Burris, Scott; Garton, Bryan L.

    2007-01-01

    The purpose of the study was to determine the effect of problem-based learning (PBL) on critical thinking ability and content knowledge among selected secondary agriculture students in Missouri. The study employed a quasi-experimental, non-equivalent comparison group design. The treatment consisted of two instructional strategies: problem-based…

  20. A Generalization of the Euler-Fermat Theorem

    ERIC Educational Resources Information Center

    Harger, Robert T.; Harvey, Melinda E.

    2003-01-01

    This note considers the problem of determining, for fixed k and m, all values of r, 0 < r < φ(m), such that k^(φ(m)+1) ≡ k^r (mod m). More generally, if k, m and c are given, necessary and sufficient conditions are given for k^c ≡ k^…
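    A brute-force check of the congruence in question is straightforward. The sketch below (function names are illustrative) searches all r with 0 < r < φ(m); note that for m prime and gcd(k, m) = 1, Fermat's little theorem guarantees r = 1 is a solution:

```python
from math import gcd

def phi(m):
    """Euler's totient: count of 1 <= a <= m with gcd(a, m) == 1."""
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def exponents_matching(k, m):
    """All r with 0 < r < phi(m) such that
    k**(phi(m) + 1) is congruent to k**r (mod m)."""
    target = pow(k, phi(m) + 1, m)   # modular exponentiation
    return [r for r in range(1, phi(m)) if pow(k, r, m) == target]
```

    For example, with k = 2 and m = 12 (not coprime), 2^5 ≡ 8 ≡ 2^3 (mod 12), so the search returns r = 3 rather than r = 1.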

  1. The complexity of proving chaoticity and the Church-Turing thesis

    NASA Astrophysics Data System (ADS)

    Calude, Cristian S.; Calude, Elena; Svozil, Karl

    2010-09-01

    Proving the chaoticity of some dynamical systems is equivalent to solving the hardest problems in mathematics. Conversely, classical physical systems may "compute the hard or even the incomputable" by measuring observables which correspond to computationally hard or even incomputable problems.

  2. 42 CFR 422.252 - Terminology.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... § 422.252 Terminology. Annual MA capitation rate means a county payment rate for an MA local area... to refer to the annual MA capitation rate. MA local area means a payment area consisting of county or equivalent area specified by CMS. MA monthly basic beneficiary premium means the premium amount an MA plan...

  3. Final Report: Resolving and Discriminating Overlapping Anomalies from Multiple Objects in Cluttered Environments

    DTIC Science & Technology

    2015-12-15

    Distinguishing an object of interest from innocuous items is the main problem that the UXO community is currently facing. This inverse problem demands fast and accurate representation of

  4. N-person differential games. Part 1: Duality-finite element methods

    NASA Technical Reports Server (NTRS)

    Chen, G.; Zheng, Q.

    1983-01-01

    The duality approach, which is motivated by computational needs and proceeds by introducing N + 1 Lagrange multipliers, is addressed. For N-person linear quadratic games, the primal min-max problem is shown to be equivalent to the dual min-max problem.

  5. Reliable Radiation Hybrid Maps: An Efficient Scalable Clustering-based Approach

    USDA-ARS?s Scientific Manuscript database

    The process of mapping markers from radiation hybrid mapping (RHM) experiments is equivalent to the traveling salesman problem and, thereby, has combinatorial complexity. As an additional problem, experiments typically result in some unreliable markers that reduce the overall quality of the map. We ...
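    The TSP formulation of marker ordering can be illustrated with the simplest greedy heuristic. This nearest-neighbor sketch stands in for, and is much cruder than, the scalable clustering-based approach the abstract describes:

```python
def nearest_neighbor_order(dist, start=0):
    """Greedy nearest-neighbor tour over a symmetric distance
    matrix: repeatedly visit the closest unvisited marker.
    A baseline TSP heuristic for illustration only."""
    n = len(dist)
    order = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: dist[last][j])
        order.append(nxt)
        unvisited.remove(nxt)
    return order
```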

  6. A volumetric conformal mapping approach for clustering white matter fibers in the brain

    PubMed Central

    Gupta, Vikash; Prasad, Gautam; Thompson, Paul

    2017-01-01

    The human brain may be considered a genus-0 shape, topologically equivalent to a sphere. Various methods have been used in the past to transform the brain surface to that of a sphere using the harmonic energy minimization methods developed for cortical surface matching. However, very few methods have studied volumetric parameterization of the brain using a spherical embedding. Volumetric parameterization is typically used for complicated geometric problems like shape matching, morphing and isogeometric analysis. Using conformal mapping techniques, we can establish a bijective mapping between the brain and the topologically equivalent sphere. Our hypothesis is that shape analysis problems are simplified when the shape is defined in an intrinsic coordinate system. Our goal is to establish such a coordinate system for the brain. The efficacy of the method is demonstrated with a white matter clustering problem. Initial results show promise for future investigation of these parameterization techniques and their application to other problems in computational anatomy, such as registration and segmentation. PMID:29177252

  7. An equivalent domain integral method for three-dimensional mixed-mode fracture problems

    NASA Technical Reports Server (NTRS)

    Shivakumar, K. N.; Raju, I. S.

    1991-01-01

    A general formulation of the equivalent domain integral (EDI) method for mixed mode fracture problems in cracked solids is presented. The method is discussed in the context of a 3-D finite element analysis. The J integral consists of two parts: the volume integral of the crack front potential over a torus enclosing the crack front and the crack surface integral due to the crack front potential plus the crack face loading. In mixed mode crack problems the total J integral is split into J sub I, J sub II, and J sub III representing the severity of the crack front in three modes of deformations. The direct and decomposition methods are used to separate the modes. These two methods were applied to several mixed mode fracture problems, were analyzed, and results were found to agree well with those available in the literature. The method lends itself to be used as a post-processing subroutine in a general purpose finite element program.

  8. An equivalent domain integral method for three-dimensional mixed-mode fracture problems

    NASA Technical Reports Server (NTRS)

    Shivakumar, K. N.; Raju, I. S.

    1992-01-01

    A general formulation of the equivalent domain integral (EDI) method for mixed mode fracture problems in cracked solids is presented. The method is discussed in the context of a 3-D finite element analysis. The J integral consists of two parts: the volume integral of the crack front potential over a torus enclosing the crack front and the crack surface integral due to the crack front potential plus the crack face loading. In mixed mode crack problems the total J integral is split into J sub I, J sub II, and J sub III representing the severity of the crack front in three modes of deformations. The direct and decomposition methods are used to separate the modes. These two methods were applied to several mixed mode fracture problems, were analyzed, and results were found to agree well with those available in the literature. The method lends itself to be used as a post-processing subroutine in a general purpose finite element program.

  9. The Same or Not the Same: Equivalence as an Issue in Educational Research

    NASA Astrophysics Data System (ADS)

    Lewis, Scott E.; Lewis, Jennifer E.

    2005-09-01

    In educational research, particularly in the sciences, a common research design calls for the establishment of a control and experimental group to determine the effectiveness of an intervention. As part of this design, it is often desirable to illustrate that the two groups were equivalent at the start of the intervention, based on measures such as standardized cognitive tests or student grades in prior courses. In this article we use SAT and ACT scores to illustrate a more robust way of testing equivalence. The method incorporates two one-sided t tests evaluating two null hypotheses, providing a stronger claim for equivalence than the standard method, which often does not address the possible problem of low statistical power. The two null hypotheses are based on the construction of an equivalence interval particular to the data, so the article also provides a rationale for and illustration of a procedure for constructing equivalence intervals. Our consideration of equivalence using this method also underscores the need to include sample sizes, standard deviations, and group means in published quantitative studies.
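    The two one-sided tests (TOST) procedure the article advocates can be sketched as follows. For simplicity this sketch uses a normal approximation from the Python standard library in place of the t distribution (adequate for large samples; a real analysis would use t critical values), and the equivalence margin delta must be chosen by the researcher:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def tost_equivalence(x, y, delta, alpha=0.05):
    """Two one-sided tests (TOST) for equivalence of two group means.
    H01: mean(x) - mean(y) <= -delta;  H02: mean(x) - mean(y) >= +delta.
    Equivalence is claimed only if BOTH one-sided nulls are rejected.
    Uses a normal approximation instead of the t distribution."""
    nx, ny = len(x), len(y)
    diff = mean(x) - mean(y)
    se = sqrt(stdev(x) ** 2 / nx + stdev(y) ** 2 / ny)
    z_lower = (diff + delta) / se   # test against the -delta bound
    z_upper = (diff - delta) / se   # test against the +delta bound
    p_lower = 1 - NormalDist().cdf(z_lower)
    p_upper = NormalDist().cdf(z_upper)
    return max(p_lower, p_upper) < alpha
```

    Unlike a standard t test, failing to reject does not establish equivalence; only rejecting both one-sided nulls does, which is the stronger claim the article argues for.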

  10. Effect of thermal expansion on the stability of two-reactant flames

    NASA Technical Reports Server (NTRS)

    Jackson, T. L.

    1986-01-01

    The full problem of flame stability for the two-reactant model, which takes into account thermal expansion effects for all disturbance wave lengths, is examined. It is found that the stability problem for the class of two-reactant flames is equivalent to the stability problem for the class of one-reactant flames with an appropriate interpretation of Lewis numbers.

  11. The Timing of Feedback on Mathematics Problem Solving in a Classroom Setting

    ERIC Educational Resources Information Center

    Fyfe, Emily R.; Rittle-Johnson, Bethany

    2015-01-01

    Feedback is a ubiquitous learning tool that is theorized to help learners detect and correct their errors. The goal of this study was to examine the effects of feedback in a classroom context for children solving math equivalence problems (problems with operations on both sides of the equal sign). The authors worked with children in 7 second-grade…

  12. Studying the flow dynamics of a karst aquifer system with an equivalent porous medium model.

    PubMed

    Abusaada, Muath; Sauter, Martin

    2013-01-01

    The modeling of groundwater flow in karst aquifers is challenging due to the extreme heterogeneity of the hydraulic parameters and the duality in discharge behavior, that is, the rapid response of highly conductive karst conduits and the delayed drainage of the low-permeability fractured matrix after recharge events. There are a number of modeling approaches for simulating karst groundwater dynamics, applicable to different aquifer and problem types, ranging from single continuum models to double continuum models to discrete and hybrid models. This study presents the application of an equivalent porous medium approach (EPM, single continuum model) to construct a steady-state numerical flow model for an important karst aquifer, the Western Mountain Aquifer Basin (WMAB), shared by Israel and the West Bank, using MODFLOW2000. The WMAB is well suited for this purpose because it is a well-constrained catchment with well-defined recharge and discharge components, allowing control on the modeling approach, a very rare opportunity in karst aquifer modeling. The model demonstrates the applicability of equivalent porous medium models for the simulation of karst systems, despite their large contrast in hydraulic conductivities. As long as the simulated saturated volume is large enough to average out the local influence of karst conduits, and as long as transport velocities are not an issue, EPM models simulate the observed head distribution very well. The model serves as a reference for developing a long-term dynamic model of the WMAB, from the pre-development period (i.e., the 1940s) up to date.

  13. Equivalent source modeling of the core magnetic field using magsat data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Estes, R. H.

    1983-01-01

    Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal-area distributions at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The equivalent dipole representation of the main field is found to be comparable in accuracy to the standard spherical harmonic approach. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds approximately to a seventeenth degree and order expansion (323 parameters). Fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.
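As a quick check of the parameter counts quoted above: an internal spherical harmonic field model truncated at degree and order n has n(n + 2) Gauss coefficients (each degree l contributes 2l + 1 terms), which reproduces both figures in the record.

```python
# Gauss-coefficient count for an internal spherical harmonic expansion to
# degree and order n: sum over l = 1..n of (2l + 1) = n*(n + 2).
def sh_params(n):
    return n * (n + 2)
```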

  14. REVERSAL LEARNING SET AND FUNCTIONAL EQUIVALENCE IN CHILDREN WITH AND WITHOUT AUTISM

    PubMed Central

    Lionello-DeNolf, Karen M.; McIlvane, William J.; Canovas, Daniela S.; de Souza, Deisy G.; Barros, Romariz S.

    2009-01-01

    To evaluate whether children with and without autism could exhibit (a) functional equivalence in the course of yoked repeated-reversal training and (b) reversal learning set, 6 children, in each of two experiments, were exposed to simple discrimination contingencies with three sets of stimuli. The discriminative functions of the set members were yoked and repeatedly reversed. In Experiment 1, all the children (of preschool age) showed gains in the efficiency of reversal learning across reversal problems and behavior that suggested formation of functional equivalence. In Experiment 2, 3 nonverbal children with autism exhibited strong evidence of reversal learning set and 2 showed evidence of functional equivalence. The data suggest a possible relationship between efficiency of reversal learning and functional equivalence test outcomes. Procedural variables may prove important in assessing the potential of young or nonverbal children to classify stimuli on the basis of shared discriminative functions. PMID:20186287

  15. Language Measurement Equivalence of the Ethnic Identity Scale With Mexican American Early Adolescents

    PubMed Central

    White, Rebecca M. B.; Umaña-Taylor, Adriana J.; Knight, George P.; Zeiders, Katharine H.

    2011-01-01

    The current study considers methodological challenges in developmental research with linguistically diverse samples of young adolescents. By empirically examining the cross-language measurement equivalence of a measure assessing three components of ethnic identity development (i.e., exploration, resolution, and affirmation) among Mexican American adolescents, the study both assesses the cross-language measurement equivalence of a common measure of ethnic identity and provides an appropriate conceptual and analytical model for researchers needing to evaluate measurement scales translated into multiple languages. Participants are 678 Mexican-origin early adolescents and their mothers. Measures of exploration and resolution achieve the highest levels of equivalence across language versions. The measure of affirmation achieves high levels of equivalence. Results highlight potential ways to correct for any problems of nonequivalence across language versions of the affirmation measure. Suggestions are made for how researchers working with linguistically diverse samples can use the highlighted techniques to evaluate their own translated measures. PMID:22116736

  16. Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2001-01-01

    This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
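The force-error-minimization variant of equivalent linearization mentioned above can be illustrated on a single-DOF Duffing spring under white-noise excitation; the Duffing form and all parameter values below are illustrative assumptions, not details from the report.

```python
# Hypothetical single-DOF sketch of equivalent linearization (force-error
# minimization) for a Duffing spring f(x) = k*x + eps*k*x**3 driven by
# white noise of spectral density S0 through damping c. For a Gaussian
# response, minimizing the mean-square force error gives
# k_eq = k*(1 + 3*eps*sigma2), while the linearized system's stationary
# variance is sigma2 = pi*S0/(c*k_eq); iterate to a fixed point.
import math

def equivalent_stiffness(k, eps, c, S0, tol=1e-10, max_iter=200):
    k_eq = k
    for _ in range(max_iter):
        sigma2 = math.pi * S0 / (c * k_eq)      # response variance
        k_new = k * (1.0 + 3.0 * eps * sigma2)  # updated equivalent stiffness
        if abs(k_new - k_eq) < tol:
            return k_new, sigma2
        k_eq = k_new
    return k_eq, sigma2
```

The hardening spring (eps > 0) raises the equivalent stiffness above k and so lowers the response variance relative to the bare linear system, which is the qualitative behavior ELSTEP's minimization procedures capture for full structural models.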

  17. Computing the full spectrum of large sparse palindromic quadratic eigenvalue problems arising from surface Green's function calculations

    NASA Astrophysics Data System (ADS)

    Huang, Tsung-Ming; Lin, Wen-Wei; Tian, Heng; Chen, Guan-Hua

    2018-03-01

    The full spectrum of a large sparse ⊤-palindromic quadratic eigenvalue problem (⊤-PQEP) is considered, arguably for the first time, in this article. Such a problem is posed by the calculation of surface Green's functions (SGFs) of mesoscopic transistors with a very large non-periodic cross-section. For this problem, general purpose eigensolvers are not efficient, nor is it advisable to resort to the decimation method, etc., to obtain the Wiener-Hopf factorization. After reviewing some rigorous understanding of SGF calculation from the perspective of the ⊤-PQEP and the nonlinear matrix equation, we present our new approach to this problem. In a nutshell, the unit disk where the spectrum of interest lies is broken down adaptively into pieces small enough that each can be locally tackled by the generalized ⊤-skew-Hamiltonian implicitly restarted shift-and-invert Arnoldi (G⊤SHIRA) algorithm with suitable shifts and other parameters, and the eigenvalues missed by this divide-and-conquer strategy can be recovered thanks to the accurate estimation provided by our newly developed scheme. Notably, a novel non-equivalence deflation is proposed to avoid, as much as possible, duplication of nearby known eigenvalues when a new shift for G⊤SHIRA is determined. We demonstrate the new approach by calculating the SGF of a realistic nanowire whose unit cell is described by a matrix of size 4000 × 4000 at the density functional tight binding level, corresponding to an 8 × 8 nm² cross-section. We believe that quantum transport simulation of realistic nano-devices in the mesoscopic regime will greatly benefit from this work.
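A defining property of ⊤-palindromic QEPs P(λ) = λ²A⊤ + λQ + A (with Q = Q⊤) is that their eigenvalues occur in reciprocal pairs (λ, 1/λ). The scalar toy case below is offered only as a check of that symmetry, not as a sketch of the paper's G⊤SHIRA machinery.

```python
# Scalar (1x1) T-palindromic QEP: a*lam**2 + q*lam + a = 0. By Vieta's
# formulas the product of the two roots is a/a = 1, so they are reciprocal,
# mirroring the (lam, 1/lam) pairing of the matrix case.
import cmath

def scalar_palindromic_eigs(a, q):
    disc = cmath.sqrt(q * q - 4 * a * a)
    return (-q + disc) / (2 * a), (-q - disc) / (2 * a)
```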

  18. Ordinal optimization and its application to complex deterministic problems

    NASA Astrophysics Data System (ADS)

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective on a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structure and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization, which utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in turbine blade manufacturing: the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
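The ordinal comparison idea can be demonstrated with a toy alignment experiment: rank designs by a cheap noisy estimate, keep a softened "good enough" subset, and count how many of the truly best designs it captures. The design values, noise level, subset sizes, and seed below are all illustrative assumptions.

```python
# Toy ordinal-optimization experiment (minimization): select the observed-best
# s designs under additive Gaussian noise and measure the overlap (alignment)
# with the true top-g designs. All parameters are illustrative.
import random

def alignment(true_values, noise, s, g, seed=0):
    rng = random.Random(seed)
    noisy = [v + rng.gauss(0.0, noise) for v in true_values]
    order_noisy = sorted(range(len(true_values)), key=lambda i: noisy[i])
    order_true = sorted(range(len(true_values)), key=lambda i: true_values[i])
    selected, top = set(order_noisy[:s]), set(order_true[:g])
    return len(selected & top)
```

Even with noise comparable to the spacing between designs, the selected subset typically retains several genuinely good designs, which is why ordering (rather than precise value estimation) is robust to crude, cheap models.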

  19. Analytical study of the effects of clouds on the light produced by lightning

    NASA Technical Reports Server (NTRS)

    Phanord, Dieudonne D.

    1990-01-01

    Researchers consider the scattering of visible and infrared light due to lightning by cubic, cylindrical and spherical clouds. The researchers extend to cloud physics the work by Twersky for single and multiple scattering of electromagnetic waves. They solve the interior problem separately to obtain the bulk parameters for the scatterer equivalent to the ensemble of spherical droplets. With the interior solution or the equivalent medium approach, the multiple scattering problem is reduced to that of a single scatterer in isolation. Hence, the computing methods of Wiscombe or Bohren specialized to Mie scattering with the possibility for absorption were used to generate numerical results in short computer time.

  20. Effects of polybrominated biphenyl on milk production, reproduction, and health problems in Holstein cows.

    PubMed Central

    Wastell, M E; Moody, D L; Plog, J F

    1978-01-01

    PBB found at relatively low levels in animals on a cross-section of Michigan farms during the time PBB was inadvertently added to dairy feeds had no effect upon these animals' milk production, body weight, weight gain, breeding and reproduction performance, incidence of commonly experienced health problems, calving rate, or the health of their calves. No significant differences in these vital areas could be seen between Michigan animals exposed to PBB and equivalent Wisconsin animals that had not been exposed to PBB when both groups were subjected to equivalent management practices. No pattern of gross or histopathological lesions was seen upon necropsy in test animals relative to control animals. PMID:210008

  1. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
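For intuition on the crisp (non-fuzzy) quadratic form of the problem: with two assets, the minimum-variance weights have a classical closed form. This is textbook background offered as a sketch, not the paper's fuzzy LP method.

```python
# Two-asset minimum-variance portfolio: minimize
#   w**2*var1 + (1-w)**2*var2 + 2*w*(1-w)*cov
# over the weight w on asset 1 (budget constraint w1 + w2 = 1). Setting the
# derivative to zero gives the classical closed form below.
def min_variance_weight(var1, var2, cov):
    return (var2 - cov) / (var1 + var2 - 2.0 * cov)
```

With uncorrelated assets the weight tilts toward the lower-variance asset, and the resulting portfolio variance is below that of either asset alone, which is the diversification effect the quadratic program formalizes.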

  2. Predicting the Velocity Dispersions of the Dwarf Satellite Galaxies of Andromeda

    NASA Astrophysics Data System (ADS)

    McGaugh, Stacy S.

    2016-05-01

    Dwarf Spheroidal galaxies in the Local Group are the faintest and most diffuse stellar systems known. They exhibit large mass discrepancies, making them popular laboratories for studying the missing mass problem. The PANDAS survey of M31 revealed dozens of new examples of such dwarfs. As these systems were discovered, it was possible to use the observed photometric properties to predict their stellar velocity dispersions with the modified gravity theory MOND. These predictions, made in advance of the observations, have since been largely confirmed. A unique feature of MOND is that a structurally identical dwarf will behave differently when it is or is not subject to the external field of a massive host like Andromeda. The role of this "external field effect" is critical in correctly predicting the velocity dispersions of dwarfs that deviate from empirical scaling relations. With continued improvement in the observational data, these systems could provide a test of the strong equivalence principle.

  3. Variational calculation of macrostate transition rates

    NASA Astrophysics Data System (ADS)

    Ulitsky, Alex; Shalloway, David

    1998-08-01

    We develop the macrostate variational method (MVM) for computing reaction rates of diffusive conformational transitions in multidimensional systems by a variational coarse-grained "macrostate" decomposition of the Smoluchowski equation. MVM uses multidimensional Gaussian packets to identify and focus computational effort on the "transition region," a localized, self-consistently determined region in conformational space positioned roughly between the macrostates. It also determines the "transition direction," which optimally specifies the projected potential of mean force for mean first-passage time calculations. MVM is complementary to variational transition state theory in that it can efficiently solve multidimensional problems but does not accommodate memory-friction effects. It has been tested on model 1- and 2-dimensional potentials and on the 12-dimensional conformational transition between the isoforms of a six-atom microcluster having only van der Waals interactions. Comparison with Brownian dynamics calculations shows that MVM obtains equivalent results at a fraction of the computational cost.

  4. TOWARD A NETWORK OF FAINT DA WHITE DWARFS AS HIGH-PRECISION SPECTROPHOTOMETRIC STANDARDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, G.; Matheson, T.; Saha, A.

    We present the initial results from a program aimed at establishing a network of hot DA white dwarfs to serve as spectrophotometric standards for present and future wide-field surveys. These stars span the equatorial zone and are faint enough to be conveniently observed throughout the year with large-aperture telescopes. The spectra of these white dwarfs are analyzed in order to generate a non-local-thermodynamic-equilibrium model atmosphere normalized to Hubble Space Telescope colors, including adjustments for wavelength-dependent interstellar extinction. Once established, this standard star network will serve ground-based observatories in both hemispheres as well as space-based instrumentation from the UV to the near IR. We demonstrate the effectiveness of this concept and show how two different approaches to the problem using somewhat different assumptions produce equivalent results. We discuss the lessons learned and the resulting corrective actions applied to our program.

  5. Image deblurring based on nonlocal regularization with a non-convex sparsity constraint

    NASA Astrophysics Data System (ADS)

    Zhu, Simiao; Su, Zhenming; Li, Lian; Yang, Yi

    2018-04-01

    In recent years, nonlocal regularization methods for image restoration (IR) have drawn more and more attention due to the promising results obtained when compared to traditional local regularization methods. Despite the success of this technique, most existing methods exploit a convex regularizing functional in order to obtain computational efficiency, which is equivalent to imposing a convex prior on the nonlocal difference operator output. However, our experiments illustrate that the empirical distribution of the output of the nonlocal difference operator, especially in the seminal work of Kheradmand et al., should be characterized by an extremely heavy-tailed distribution rather than a convex one. Therefore, in this paper, we propose a nonlocal regularization-based method with a non-convex sparsity constraint for image deblurring. Finally, an effective algorithm is developed to solve the corresponding non-convex optimization problem. The experimental results demonstrate the effectiveness of the proposed method.

  6. 76 FR 77939 - Proposed Provision of Navigation Services for the Next Generation Air Transportation System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-15

    ... navigation for en route through non-precision instrument approaches. GPS is an internationally accepted... Localizer Performance with Vertical guidance (LPV). These approaches are equivalent to Category I ILS, but... approach procedures with LPV or localizer performance (LP) non-precision lines of minima to all qualified...

  7. Double and multiple contacts of similar elastic materials

    NASA Astrophysics Data System (ADS)

    Sundaram, Narayan K.

    Ongoing fretting fatigue research has focused on developing robust contact mechanics solutions for complicated load histories involving normal, shear, moment, and bulk loads. For certain indenter profiles and applied loads, the contact patch separates into two disconnected regions. Existing Singular Integral Equation (SIE) techniques do not address these situations. A fast numerical tool is developed to solve such problems for similar elastic materials for a wide range of profiles and load paths, including applied moments and remote bulk-stress effects. This tool is then used to investigate two problems in double contacts: first, to determine the shear configuration space for a biquadratic punch in the generalized Cattaneo-Mindlin problem; second, to obtain quantitative estimates of the interaction between neighboring cylindrical contacts for both the applied normal load and partial slip problems, up to the limits of validity of the halfspace assumption. In double contact problems without symmetry, obtaining a unique solution requires the satisfaction of a condition relating the contact ends, rigid-body rotation, and profile function. This condition has the interpretation that a rigid rod connecting the inner contact ends of an equivalent frictionless double contact of a rigid indenter and halfspace may only undergo rigid-body motions. It is also found that the ends of stick zones, local slips, and remote applied strains in double contact problems are related by an equation expressing tangential surface-displacement continuity. This equation is essential to solve partial-slip problems without contact equivalents. Even when neighboring cylindrical contacts may be treated as non-interacting for the purpose of determining the pressure tractions, this is not generally true if a shear load is applied. The mutual influence of neighboring contacts in partial slip problems is largest at small shear load fractions.
    For both the pressure and partial slip problems, the interactions are stronger with increasing strength of loading and contact proximity. A new contact algorithm is developed and the SIE method extended to tackle contact problems with an arbitrary number of contact patches, with no approximations made about contact interactions. In multiple contact problems, determining the correct contact configuration is significantly more complicated than in double contacts, necessitating a new approach. Both the normal contact and partial slip problems are solved. The tool is then used to study contacts of regular rough cylinders and of a flat punch with rounded edges and superimposed sinusoidal roughness, and is also applied to analyze the contact of an experimental rough surface with a halfspace. The partial slip results for multiple contacts are generally consistent with Cattaneo-Mindlin continuum-scale results, in that the outermost contacts tend to be in full sliding. Lastly, the influence of plasticity on frictionless multiple contact problems is studied using FEM for two common steel and aluminum alloys. The key findings are that plasticity decreases the peak pressure and increases both real and apparent contact areas, thus 'blunting' the sharp pressures caused by the contact asperities in pure elasticity. Further, contact plasticity effects and the load for onset of first yield are found to be strongly dependent on roughness amplitude, with larger plasticity effects and lower yield-onset loads at higher roughness amplitudes.

  8. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been included approximately by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction; this is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p,d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p,d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states, the Perey-corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking a direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.
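The correction being tested has the standard Perey-Buck form F(r) = [1 − μβ²U_LE(r)/(2ħ²)]^(−1/2), which damps the wave function where the local-equivalent potential is attractive. The sketch below applies it to a Woods-Saxon well; the well depth, geometry, non-locality range β, and reduced mass are illustrative assumptions, not the paper's fitted values.

```python
# Hedged sketch of the Perey correction factor applied to an illustrative
# Woods-Saxon local-equivalent potential U_LE(r). All parameter values are
# hypothetical (roughly nucleon-nucleus scale), not taken from the article.
import math

HBARC = 197.327   # hbar*c in MeV fm
MU = 938.0        # reduced mass ~ nucleon mass, in MeV (illustrative)

def woods_saxon(r, v0=-50.0, radius=4.0, a=0.65):
    """Illustrative local-equivalent potential U_LE(r), in MeV."""
    return v0 / (1.0 + math.exp((r - radius) / a))

def perey_factor(r, beta=0.85):
    """F(r) = [1 - mu*beta**2*U_LE(r)/(2*hbar**2)]**(-1/2), beta in fm."""
    c = MU * beta * beta / (2.0 * HBARC * HBARC)
    return (1.0 - c * woods_saxon(r)) ** -0.5
```

The factor is below unity in the nuclear interior (suppressing the local wave function there) and tends to one outside the range of the potential, consistent with the interior agreement and surface discrepancies reported above.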

  9. The Effects of Modafinil and Over-the-Counter Stimulants on Two- and Three-Dimensional Visual Localization

    DTIC Science & Technology

    2017-12-19

    information is accumulated (drift rate). Note that decision time is not equivalent to reaction time because reaction time includes non-decision time...countermeasures are not used. The magnitude of the performance loss is nearly equivalent to that measured using the psychomotor vigilance test, which is...model non-decision time parameter for Modafinil and No Modafinil groups as a function of measurement time for the 3D task

  10. Analysis and design of numerical schemes for gas dynamics 1: Artificial diffusion, upwind biasing, limiters and their effect on accuracy and multigrid convergence

    NASA Technical Reports Server (NTRS)

    Jameson, Antony

    1994-01-01

    The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrary high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
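The LED principle stated above can be seen concretely in a minimal 1-D sketch. The code below is a generic minmod-limited upwind scheme for linear advection, an assumed illustration of the principle rather than the paper's SLIP or CUSP formulations: limited slopes vanish at extrema, so maxima cannot increase and minima cannot decrease.

```python
# Minimal 1-D LED/TVD illustration: first-order-in-time upwind advection
# (speed > 0, periodic grid) with a minmod-limited linear reconstruction.
def minmod(a, b):
    """Zero at extrema (opposite-sign arguments), else the smaller slope."""
    if a * b <= 0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect_step(u, cfl):
    """One limited upwind step, 0 < cfl <= 1."""
    n = len(u)
    slopes = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # face value at i+1/2 from the limited reconstruction in cell i
    flux = [u[i] + 0.5 * (1.0 - cfl) * slopes[i] for i in range(n)]
    return [u[i] - cfl * (flux[i] - flux[i - 1]) for i in range(n)]
```

Propagating a square wave with this scheme sharpens the check: the global maximum never rises above its initial value and the global minimum never falls below it, which is exactly the LED property.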

  11. Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks.

    PubMed

    Liang, Junli; Yu, Guoyang; Chen, Badong; Zhao, Minghua

    2016-11-01

    This paper develops a novel decentralized dimensionality reduction algorithm for the distributed tensor data across sensor networks. The main contributions of this paper are as follows. First, conventional centralized methods, which utilize entire data to simultaneously determine all the vectors of the projection matrix along each tensor mode, are not suitable for the network environment. Here, we relax the simultaneous processing manner into the one-vector-by-one-vector (OVBOV) manner, i.e., determining the projection vectors (PVs) related to each tensor mode one by one. Second, we prove that in the OVBOV manner each PV can be determined without modifying any tensor data, which simplifies corresponding computations. Third, we cast the decentralized PV determination problem as a set of subproblems with consensus constraints, so that it can be solved in the network environment only by local computations and information communications among neighboring nodes. Fourth, we introduce the null space and transform the PV determination problem with complex orthogonality constraints into an equivalent hidden convex one without any orthogonality constraint, which can be solved by the Lagrange multiplier method. Finally, experimental results are given to show that the proposed algorithm is an effective dimensionality reduction scheme for the distributed tensor data across the sensor networks.

  12. Distributed Optimal Consensus Control for Multiagent Systems With Input Delay.

    PubMed

    Zhang, Huaipin; Yue, Dong; Zhao, Wei; Hu, Songlin; Dou, Chunxia

    2018-06-01

    This paper addresses the problem of distributed optimal consensus control for a continuous-time heterogeneous linear multiagent system subject to time-varying input delays. First, by discretization and model transformation, the continuous-time input-delayed system is converted into a discrete-time delay-free system. Two delicate performance index functions are defined for these two systems. It is shown that the performance index functions are equivalent and that the optimal consensus control problem of the input-delayed system can be cast into that of the delay-free system. Second, by virtue of the Hamilton-Jacobi-Bellman (HJB) equations, an optimal control policy for each agent is designed based on the delay-free system, and a novel value iteration algorithm is proposed to learn the solutions to the HJB equations online. The proposed adaptive dynamic programming algorithm is implemented on the basis of a critic-action neural network (NN) structure. Third, it is proved that the local consensus errors of the two systems and the weight estimation errors of the critic-action NNs are uniformly ultimately bounded while the approximated control policies converge to their target values. Finally, two simulation examples are presented to illustrate the effectiveness of the developed method.

  13. Fully decentralized estimation and control for a modular wheeled mobile robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mutambara, A.G.O.; Durrant-Whyte, H.F.

    2000-06-01

    In this paper, the problem of fully decentralized data fusion and control for a modular wheeled mobile robot (WMR) is addressed. This is a vehicle system with nonlinear kinematics, distributed multiple sensors, and nonlinear sensor models. The problem is solved by applying fully decentralized estimation and control algorithms based on the extended information filter. This is achieved by deriving a modular, decentralized kinematic model, using plane motion kinematics to obtain the forward and inverse kinematics for a generalized simple wheeled vehicle. This model is then used in the decentralized estimation and control algorithms. WMR estimation and control is thus obtained locally using reduced-order models with reduced communication requirements. When communication of information between nodes is carried out after every measurement (full-rate communication), the estimates and control signals obtained at each node are equivalent to those obtained by a corresponding centralized system. Transputer architecture is used as the basis for hardware and software design, as it supports the extensive communication and concurrency requirements that characterize modular and decentralized systems. The advantages of a modular WMR vehicle include scalability, application flexibility, low prototyping costs, and high reliability.
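The key property behind such fully decentralized architectures is that, in the information (inverse-covariance) form of the filter, measurement updates are additive, so summing per-node contributions reproduces the centralized estimate exactly. A minimal scalar sketch with hypothetical sensor values (not the WMR kinematic models of the paper):

```python
# Scalar information-filter fusion: each measurement is (z, h, r) with
# z = h*x + noise of variance r. A node's information contributions are
# Y = sum(h**2/r) and y = sum(h*z/r); the fused estimate is x_hat = y/Y.
def local_sums(measurements):
    Y = sum(h * h / r for z, h, r in measurements)
    y = sum(h * z / r for z, h, r in measurements)
    return Y, y

def fuse_nodes(*nodes):
    """Summing per-node information terms equals the centralized estimate."""
    Y = sum(local_sums(m)[0] for m in nodes)
    y = sum(local_sums(m)[1] for m in nodes)
    return y / Y
```

Because the update is a plain sum, each node needs only to exchange its (Y, y) pair rather than raw sensor data, which is what makes the decentralized estimates algebraically equivalent to the centralized ones.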

  14. Scientific and Engineering Studies: Spectral Estimation

    DTIC Science & Technology

    1989-08-11

    PROBLEM SOLUTION: Four different constrained problems will be addressed in this section: constrained window duration L; constrained equivalent… (the remainder of this indexed excerpt consists of OCR-garbled equation fragments referencing (B-11), (B-18), and (B-19))

  15. Cognitive Processes Embedded in Self-Explanations of Solving Technical Problems: Implications for Training

    ERIC Educational Resources Information Center

    Maughan, George R.

    2007-01-01

    This qualitative research examines the cognitive processes embedded in self-explanations of automobile and motorcycle service technicians performing troubleshooting tasks and solving technical problems. In-depth interviews were conducted with twelve service technicians who have obtained the designation of "master technician" or equivalent within…

  16. Concurrent Reinforcement Schedules for Problem Behavior and Appropriate Behavior: Experimental Applications of the Matching Law

    ERIC Educational Resources Information Center

    Borrero, Carrie S. W.; Vollmer, Timothy R.; Borrero, John C.; Bourret, Jason C.; Sloman, Kimberly N.; Samaha, Andrew L.; Dallery, Jesse

    2010-01-01

    This study evaluated how children who exhibited functionally equivalent problem and appropriate behavior allocate responding to experimentally arranged reinforcer rates. Relative reinforcer rates were arranged on concurrent variable-interval schedules and effects on relative response rates were interpreted using the generalized matching equation.…
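    The generalized matching equation referred to above relates log response ratios to log reinforcer ratios, log(B1/B2) = a·log(r1/r2) + log b. A minimal least-squares fit of sensitivity a and bias b, with invented, perfectly matching data:

    ```python
    import math

    # Generalized matching: log(B1/B2) = a * log(r1/r2) + log(b).
    # Fit slope a (sensitivity) and bias b by ordinary least squares in log space.
    def fit_matching(reinforcer_ratios, response_ratios):
        xs = [math.log10(r) for r in reinforcer_ratios]
        ys = [math.log10(B) for B in response_ratios]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
        log_b = my - a * mx
        return a, 10 ** log_b

    # Perfect matching (a = 1, b = 1) data, for illustration only:
    r = [0.25, 0.5, 1.0, 2.0, 4.0]
    B = [0.25, 0.5, 1.0, 2.0, 4.0]
    a, b = fit_matching(r, B)
    print(a, b)  # prints: 1.0 1.0
    ```

    With real session data, a < 1 (undermatching) and b ≠ 1 (bias toward one alternative) are the typical findings.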

  17. 25 CFR 47.10 - How is the local educational financial plan developed?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... notes any problem with the plan, he or she must: (i) Notify the local board and local supervisor of the problem within two weeks of receiving the plan; (ii) Make arrangements to assist the local school supervisor and board to correct the problem; and (iii) Refer the problem to the Director of the Office of...

  18. Sphere of equivalence--a novel target volume concept for intraoperative radiotherapy using low-energy X rays.

    PubMed

    Herskind, Carsten; Griebel, Jürgen; Kraus-Tiefenbacher, Uta; Wenz, Frederik

    2008-12-01

    Accelerated partial breast radiotherapy with low-energy photons from a miniature X-ray machine is undergoing a randomized clinical trial (Targeted Intra-operative Radiation Therapy [TARGIT]) in a selected subgroup of patients treated with breast-conserving surgery. The steep radial dose gradient implies reduced tumor cell control with increasing depth in the tumor bed. The purpose was to compare the expected risk of local recurrence in this nonuniform radiation field with that after conventional external beam radiotherapy. The relative biologic effectiveness of low-energy photons was modeled using the linear-quadratic formalism, including repair of sublethal lesions during protracted irradiation. Doses of 50-kV X-rays (Intrabeam) were converted to equivalent fractionated doses, EQD2, as a function of depth in the tumor bed. The probability of local control was estimated using a logistic dose-response relationship fitted to clinical data from fractionated radiotherapy. The model calculations show that, for a cohort of patients, the increase in local control in the high-dose region near the applicator partly compensates for the reduction of local control at greater distances. Thus a "sphere of equivalence" exists within which the risk of recurrence is equal to that after external fractionated radiotherapy. The spatial distribution of recurrences inside this sphere will be different from that after conventional radiotherapy. A novel target volume concept is presented here. The incidence of recurrences arising in the tumor bed around the excised tumor will test the validity of this concept and the efficacy of the treatment. Recurrences elsewhere will have implications for the rationale of TARGIT.
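    The EQD2 conversion used in such linear-quadratic modeling is EQD2 = D·(d + α/β)/(2 Gy + α/β), where d is the dose per fraction. A minimal sketch (the α/β value and doses are illustrative, and the paper's protracted-irradiation repair correction is omitted):

    ```python
    # Linear-quadratic EQD2: the dose in 2-Gy fractions that is isoeffective
    # with a total dose D delivered in fractions of size d.
    def eqd2(total_dose_gy, dose_per_fraction_gy, alpha_beta_gy=10.0):
        d = dose_per_fraction_gy
        return total_dose_gy * (d + alpha_beta_gy) / (2.0 + alpha_beta_gy)

    # A single 20 Gy intraoperative fraction (illustrative surface dose):
    print(eqd2(20.0, 20.0))  # 20*(20+10)/(2+10) = 50 Gy
    # A schedule already given in 2 Gy fractions maps to itself:
    print(eqd2(50.0, 2.0))   # 50 Gy
    ```

    Because d falls steeply with depth for 50-kV X-rays, EQD2 falls even faster than physical dose, which is what drives the depth dependence of local control in the model.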

  19. Breaking of Ensemble Equivalence in Networks

    NASA Astrophysics Data System (ADS)

    Squartini, Tiziano; de Mol, Joey; den Hollander, Frank; Garlaschelli, Diego

    2015-12-01

    It is generally believed that, in the thermodynamic limit, the microcanonical description as a function of energy coincides with the canonical description as a function of temperature. However, various examples of systems for which the microcanonical and canonical ensembles are not equivalent have been identified. A complete theory of this intriguing phenomenon is still missing. Here we show that ensemble nonequivalence can manifest itself also in random graphs with topological constraints. We find that, while graphs with a given number of links are ensemble equivalent, graphs with a given degree sequence are not. This result holds irrespective of whether the energy is nonadditive (as in unipartite graphs) or additive (as in bipartite graphs). In contrast with previous expectations, our results show that (1) physically, nonequivalence can be induced by an extensive number of local constraints, and not necessarily by long-range interactions or nonadditivity, (2) mathematically, nonequivalence is determined by a different large-deviation behavior of microcanonical and canonical probabilities for a single microstate, and not necessarily for almost all microstates. The latter criterion, which is entirely local, is not restricted to networks and holds in general.

  20. A Tree Locality-Sensitive Hash for Secure Software Testing

    DTIC Science & Technology

    2017-09-14

    errors, or to look for vulnerabilities that could allow a nefarious actor to use our software against us. Ultimately, all testing is designed to find…and an equivalent number of feasible paths discovered by Klee. 1.5 Summary This document describes the Tree Locality-Sensitive Hash (TLSH), a locality-sensitive…performing two groups of tests that verify the accuracy and usefulness of TLSH. Chapter 5 summarizes the contents of the dissertation and lists avenues

  1. The special theory of Brownian relativity: equivalence principle for dynamic and static random paths and uncertainty relation for diffusion.

    PubMed

    Mezzasalma, Stefano A

    2007-03-15

    The theoretical basis of a recent theory of Brownian relativity for polymer solutions is deepened and reexamined. After the problem of relative diffusion in polymer solutions is addressed, its two postulates are formulated in all generality. The former builds a statistical equivalence between (uncorrelated) timelike and shapelike reference frames, that is, among dynamical trajectories of liquid molecules and static configurations of polymer chains. The latter defines the "diffusive horizon" as the invariant quantity to work with in the special version of the theory. Particularly, the concept of universality in polymer physics corresponds in Brownian relativity to that of covariance in the Einstein formulation. Here, a "universal" law consists of a privileged observation, performed from the laboratory rest frame and agreeing with any diffusive reference system. From the joint lack of covariance and simultaneity implied by the Brownian Lorentz-Poincaré transforms, a relative uncertainty arises, in a certain analogy with quantum mechanics. It is driven by the difference between local diffusion coefficients in the liquid solution. The same transformation class can be used to infer Fick's second law of diffusion, playing here the role of a gauge invariance preserving covariance of the spacetime increments. An overall, noteworthy conclusion emerging from this view concerns the statistics of (i) static macromolecular configurations and (ii) the motion of liquid molecules, which would be much more related than expected.

  2. Long-term outcome of cochlear implant in patients with chronic otitis media: one-stage surgery is equivalent to two-stage surgery.

    PubMed

    Jang, Jeong Hun; Park, Min-Hyun; Song, Jae-Jin; Lee, Jun Ho; Oh, Seung Ha; Kim, Chong-Sun; Chang, Sun O

    2015-01-01

    This study compared long-term speech performance after cochlear implantation (CI) between surgical strategies in patients with chronic otitis media (COM). Thirty patients with available open-set sentence scores measured more than 2 yr postoperatively were included: 17 who received one-stage surgery (One-stage group) and 13 who underwent two-stage surgery (Two-stage group). Preoperative inflammatory status, intraoperative procedures, and postoperative outcomes were compared. Among the 17 patients in the One-stage group, 12 underwent CI accompanied by eradication of inflammation, CI without eradicating inflammation was performed on 3 patients, and 2 underwent CI via the transcanal approach. The 13 patients in the Two-stage group received complete eradication of inflammation as the first-stage surgery, and CI was performed as the second-stage surgery after a mean interval of 8.2 months. Additional control of inflammation was performed in 2 patients at the second-stage surgery, for a cavity problem and cholesteatoma, respectively. There were 2 cases of electrode exposure as a postoperative complication in the Two-stage group; new electrode arrays were inserted and covered by local flaps. The open-set sentence scores of the Two-stage group were not significantly higher than those of the One-stage group at 1, 2, 3, and 5 yr postoperatively. Postoperative long-term speech performance is equivalent when either of the two surgical strategies is used to treat appropriately selected candidates.

  3. Influence of terrestrial radionuclides on environmental gamma exposure in a uranium deposit in Paraíba, Brazil.

    PubMed

    Araújo Dos Santos Júnior, José; Dos Santos Amaral, Romilton; Simões Cezar Menezes, Rômulo; Reinaldo Estevez Álvarez, Juan; Marques do Nascimento Santos, Josineide; Herrero Fernández, Zahily; Dias Bezerra, Jairo; Antônio da Silva, Alberto; Francys Rodrigues Damascena, Kennedy; de Almeida Maciel Neto, José

    2017-07-01

    One of the main natural uranium deposits in Brazil is located in the municipality of Espinharas, in the State of Paraíba. This area may present high levels of natural radioactivity due to the presence of these radionuclides. Since this is a populated area, a radioecological dosimetry assessment is needed to investigate the possible risks to the population. Based on this problem, the objective of this study was to estimate the outdoor environmental effective dose in inhabited areas influenced by the uranium deposit, using the specific activities of equivalent uranium, equivalent thorium, and 40K together with conversion factors. The environmental assessment was carried out using gamma spectroscopy at sixty-two points within the municipality, with a high-resolution gamma spectrometer with an HPGe semiconductor detector and a Be window. The results obtained ranged from 0.01 to 19.11 mSv y⁻¹, with an average of 2.64 mSv y⁻¹. These levels are, on average, 23 times higher than UNSCEAR reference levels and up to 273 times the reference value of the earth's crust for primordial radionuclides. Therefore, given the high radioactivity levels found, we conclude that further investigation is needed to evaluate radioactivity levels in indoor environments, which will reflect more closely the risks to the local population.
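    The dose estimation step described above can be sketched as follows; the conversion coefficients are the commonly quoted UNSCEAR values (0.462, 0.604, and 0.0417 nGy h⁻¹ per Bq kg⁻¹), quoted here as an assumption, and the input activities are invented, so the paper's exact factors and data may differ:

    ```python
    # Outdoor annual effective dose from soil specific activities.
    # Coefficients below are the UNSCEAR dose-rate conversion factors
    # (nGy/h per Bq/kg) -- an assumption; the study's factors may differ.
    def annual_effective_dose_msv(c_u_eq, c_th_eq, c_k40):
        # absorbed dose rate in air 1 m above ground, nGy/h
        d_ngy_h = 0.462 * c_u_eq + 0.604 * c_th_eq + 0.0417 * c_k40
        # 8760 h/y, 0.2 outdoor occupancy factor, 0.7 Sv/Gy, then nSv -> mSv
        return d_ngy_h * 8760 * 0.2 * 0.7 * 1e-6

    # Illustrative activities (Bq/kg), NOT measured values from the survey:
    print(annual_effective_dose_msv(35.0, 30.0, 400.0))
    ```

    For these ordinary crustal activities the result is of order 0.06 mSv y⁻¹, which makes the averages reported for the Espinharas deposit stand out by comparison.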

  4. Chemiluminescence-based multivariate sensing of local equivalence ratios in premixed atmospheric methane-air flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.

    Chemiluminescence emissions from OH*, CH*, C2*, and CO2* formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity-ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross-validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity-ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity-ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity-ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (equivalence ratios > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
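    The advantage of a full-spectrum calibration over a two-band intensity ratio can be sketched with synthetic spectra. Ordinary least squares over all channels stands in for PLS-R here (PLS-R additionally handles collinear channels), and every coefficient below is invented:

    ```python
    # Three synthetic "chemiluminescence" channels driven by two latent
    # variables: the equivalence ratio phi and a varying broadband background b.
    # The OH*/CH* ratio is confounded by b; a multivariate linear fit over all
    # channels recovers phi exactly.
    def spectrum(phi, b):
        oh  = 1.0 * phi + 1.0 * b
        ch  = 2.0 * phi + 1.0 * b
        co2 = 0.5 * phi + 2.0 * b + 1.0
        return [oh, ch, co2]

    train = [(0.7, 0.5), (0.9, 0.8), (1.1, 0.4), (1.3, 0.9), (1.5, 0.6)]
    X = [spectrum(p, bk) for p, bk in train]
    y = [p for p, _ in train]

    def solve(A, rhs):
        # Gauss-Jordan elimination with partial pivoting (3x3 normal equations)
        n = len(A)
        M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
        for c in range(n):
            piv = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[piv] = M[piv], M[c]
            for r in range(n):
                if r != c:
                    f = M[r][c] / M[c][c]
                    M[r] = [u - f * v for u, v in zip(M[r], M[c])]
        return [M[i][n] / M[i][i] for i in range(n)]

    k = 3
    AtA = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Aty = [sum(r[i] * yy for r, yy in zip(X, y)) for i in range(k)]
    beta = solve(AtA, Aty)

    def predict(phi, b):
        return sum(w * v for w, v in zip(beta, spectrum(phi, b)))

    # The ratio shifts with background even at fixed phi; the fit does not:
    print(spectrum(1.2, 0.2)[0] / spectrum(1.2, 0.2)[1],
          spectrum(1.2, 0.9)[0] / spectrum(1.2, 0.9)[1])
    print(predict(1.2, 0.2), predict(1.2, 0.9))  # both recover phi = 1.2
    ```

    This mirrors the abstract's point: using all channels lets the regression cancel the background implicitly, with no explicit nonlinear CO2 subtraction.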

  5. On the mechanics of stress analysis of fiber-reinforced composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, V.G.

    A general mathematical formulation is developed for the three-dimensional inclusion and inhomogeneity problems, which are practically important in many engineering applications such as fiber pullout of reinforced composites, load transfer behavior in the stiffened structural components, and material defects and impurities existing in engineering materials. First, the displacement field (Green's function) for an elastic solid subjected to various distributions of ring loading is derived in closed form using the Papkovich-Neuber displacement potentials and the Hankel transforms. The Green's functions are used to derive the displacement and stress fields due to a finite cylindrical inclusion of prescribed dilatational eigenstrain such as thermal expansion caused by an internal heat source. Unlike an elliptical inclusion, the interior stress field in the cylindrical inclusion is not uniform. Next, the three-dimensional inhomogeneity problem of a cylindrical fiber embedded in an infinite matrix of different material properties is considered to study load transfer of a finite fiber to an elastic medium. By using the equivalent inclusion method, the fiber is modeled as an inclusion with distributed eigenstrains of unknown strength, and the inhomogeneity problem can be treated as an equivalent inclusion problem. The eigenstrains are determined to simulate the disturbance due to the existing fiber. The equivalency of elastic field between inhomogeneity and inclusion problems leads to a set of integral equations. To solve the integral equations, the inclusion domain is discretized into a finite number of sub-inclusions with uniform eigenstrains, and the integral equations are reduced to a set of algebraic equations. The distributions of eigenstrains, interior stress field and axial force along the fiber are presented for various fiber lengths and the ratio of material properties of the fiber relative to the matrix.

  6. Entanglement classes of symmetric Werner states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyons, David W.; Walck, Scott N.

    2011-10-15

    The symmetric Werner states for n qubits, important in the study of quantum nonlocality and useful for applications in quantum information, have a surprisingly simple and elegant structure in terms of tensor products of Pauli matrices. Further, each of these states forms a unique local unitary equivalence class, that is, no two of these states are interconvertible by local unitary operations.

  7. Design of bent waveguide semiconductor lasers using nonlinear equivalent chirp

    NASA Astrophysics Data System (ADS)

    Li, Lianyan; Shi, Yuechun; Zhang, Yunshan; Chen, Xiangfei

    2018-01-01

    The reconstruction equivalent chirp (REC) technique is widely used in the design and fabrication of semiconductor laser arrays and tunable lasers with low cost and high wavelength accuracy. A bent waveguide is a promising way to suppress the zeroth-order resonance, an intrinsic problem in the REC technique. However, it may introduce basic grating chirp and deteriorate the single-longitudinal-mode (SLM) property of the laser. A nonlinear equivalent chirp pattern is proposed in this paper to compensate for the grating chirp and improve the SLM property. It will benefit the realization of low-cost distributed feedback (DFB) semiconductor laser arrays with accurate lasing wavelengths.

  8. Entanglement and nonclassical properties of hypergraph states

    NASA Astrophysics Data System (ADS)

    Gühne, Otfried; Cuquet, Martí; Steinhoff, Frank E. S.; Moroder, Tobias; Rossi, Matteo; Bruß, Dagmar; Kraus, Barbara; Macchiavello, Chiara

    2014-08-01

    Hypergraph states are multiqubit states that form a subset of the locally maximally entangleable states and a generalization of the well-established notion of graph states. Mathematically, they can conveniently be described by a hypergraph that indicates a possible generation procedure of these states; alternatively, they can also be phrased in terms of a nonlocal stabilizer formalism. In this paper, we explore the entanglement properties and nonclassical features of hypergraph states. First, we identify the equivalence classes under local unitary transformations for up to four qubits, as well as important classes of five- and six-qubit states, and determine various entanglement properties of these classes. Second, we present general conditions under which the local unitary equivalence of hypergraph states can simply be decided by considering a finite set of transformations with a clear graph-theoretical interpretation. Finally, we consider the question of whether hypergraph states and their correlations can be used to reveal contradictions with classical hidden-variable theories. We demonstrate that various noncontextuality inequalities and Bell inequalities can be derived for hypergraph states.

  9. TOPICAL REVIEW: Nonlinear aspects of the renormalization group flows of Dyson's hierarchical model

    NASA Astrophysics Data System (ADS)

    Meurice, Y.

    2007-06-01

    We review recent results concerning the renormalization group (RG) transformation of Dyson's hierarchical model (HM). This model can be seen as an approximation of a scalar field theory on a lattice. We introduce the HM and show that its large group of symmetry drastically simplifies the block-spinning procedure. Several equivalent forms of the recursion formula are presented with unified notation. Rigorous and numerical results concerning the recursion formula are summarized. It is pointed out that the recursion formula of the HM is inequivalent to both Wilson's approximate recursion formula and Polchinski's equation in the local potential approximation (despite the very small difference with the exponents of the latter). We draw a comparison between the RG of the HM and functional RG equations in the local potential approximation. The construction of the linear and nonlinear scaling variables is discussed in an operational way. We describe the calculation of non-universal critical amplitudes in terms of the scaling variables of two fixed points. This question appears as a problem of interpolation between these fixed points. Universal amplitude ratios are calculated. We discuss the large-N limit and the complex singularities of the critical potential calculable in this limit. The interpolation between the HM and more conventional lattice models is presented as a symmetry breaking problem. We briefly introduce models with an approximate supersymmetry. One important goal of this review is to present a configuration space counterpart, suitable for lattice formulations, of functional RG equations formulated in momentum space (often called exact RG equations and abbreviated ERGE).

  10. A fast invariant imbedding method for multiple scattering calculations and an application to equivalent widths of CO2 lines on Venus

    NASA Technical Reports Server (NTRS)

    Sato, M.; Kawabata, K.; Hansen, J. E.

    1977-01-01

    The invariant imbedding method considered is based on an equation which describes the change in the reflected radiation when an optically thin layer is added to the top of the atmosphere. The equation is used to treat the problem of reflection from a planetary atmosphere as an initial value problem. A fast method is discussed for the solution of the invariant imbedding equation. The speed and accuracy of the new method are illustrated by comparing it with the doubling program published by Hansen and Travis (1974). Computations are performed of the equivalent widths of carbon dioxide absorption lines in solar radiation reflected by Venus for several models of the planetary atmosphere.

  11. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes the subspace methods, including multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the large sum of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have their special properties. For instance, they are nonstationary, naturally broadband and analog. All of these make the separation and localization for the audio signals more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. 
Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
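    For the pure-delay mixtures considered above, the simplest localization primitive is a two-microphone cross-correlation delay estimate; this is a far simpler stand-in for the MUSIC/ESPRIT subspace machinery, with an invented signal and geometry:

    ```python
    import math

    # Two-microphone time-difference-of-arrival (TDOA) by cross-correlation,
    # for a pure-delay mixture model: mic2 hears mic1's signal delayed.
    FS = 8000          # sample rate (Hz), invented
    C = 343.0          # speed of sound (m/s)
    D = 0.5            # microphone spacing (m), invented

    # Synthetic decaying tone and a 5-sample true inter-microphone delay.
    sig = [math.sin(2 * math.pi * 440 * t / FS) * math.exp(-t / 400)
           for t in range(512)]
    true_delay = 5
    mic1 = sig + [0.0] * true_delay
    mic2 = [0.0] * true_delay + sig

    def xcorr_delay(x, y, max_lag):
        # lag maximizing sum_n x[n] * y[n + lag]
        best, best_lag = float("-inf"), 0
        for lag in range(-max_lag, max_lag + 1):
            s = sum(x[n] * y[n + lag] for n in range(len(x))
                    if 0 <= n + lag < len(y))
            if s > best:
                best, best_lag = s, lag
        return best_lag

    lag = xcorr_delay(mic1, mic2, 20)
    tdoa = lag / FS
    # far-field DOA from TDOA: sin(theta) = c * tdoa / d
    theta = math.degrees(math.asin(max(-1.0, min(1.0, C * tdoa / D))))
    print(lag, theta)  # estimated sample delay and bearing (degrees)
    ```

    Subspace methods generalize this idea to many microphones and multiple simultaneous sources by working on the spatial covariance matrix per frequency bin, which is what gives them their noise robustness.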

  12. Effectiveness of mathematics education in secondary schools to meet the local universities missions in producing quality engineering and science undergraduates

    NASA Astrophysics Data System (ADS)

    Bakar Hasan, Abu; Fatah Abdul, Abdul; Selamat, Zalilah

    2018-01-01

    Critical claims by certain quarters that our local undergraduates are not performing well in mathematics, statistics, and numerical methods call for serious thought and action. Yearly examination results at the Sijil Pelajaran Malaysia (SPM, equivalent to O-Level) and Sijil Tinggi Persekolahan Malaysia (STPM, equivalent to A-Level) levels have been consistently good, fluctuating only within a very tight range. A good foundation in mathematics and additional mathematics will tremendously benefit these students when they enter university education, especially in engineering and science courses. This paper uses SPM results as the primary data and questionnaires as secondary data, and applies the fishbone diagram technique for analysis. The outcome shows a clear correlation between the causes and the effect.

  13. Partition-based discrete-time quantum walks

    NASA Astrophysics Data System (ADS)

    Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo

    2018-04-01

    We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the 2-tessellable staggered model are unitarily equivalent. Selecting one specific model among those families is then a matter of taste, not generality.
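    The standard discrete-time coined walk whose square drives the two-step model can be simulated directly. A minimal Hadamard-coin walk on the line (the step count is invented for illustration):

    ```python
    import math

    # One step of the coined walk: apply the Hadamard coin to the coin
    # register, then shift position left/right conditioned on the coin state.
    H = 1.0 / math.sqrt(2.0)

    def step(amp):
        # amp maps (position x, coin c) -> amplitude, c in {0 (left), 1 (right)}
        new = {}
        for (x, c), a in amp.items():
            # Hadamard coin: |0> -> (|0>+|1>)/sqrt2, |1> -> (|0>-|1>)/sqrt2
            for c2, phase in ((0, H), (1, H if c == 0 else -H)):
                shift = -1 if c2 == 0 else 1
                key = (x + shift, c2)
                new[key] = new.get(key, 0.0) + a * phase
        return new

    # Start at the origin with coin state |0>, walk for 10 steps.
    amp = {(0, 0): 1.0}
    for _ in range(10):
        amp = step(amp)

    # Position distribution: trace out the coin register.
    prob = {}
    for (x, c), a in amp.items():
        prob[x] = prob.get(x, 0.0) + abs(a) ** 2
    print(sum(prob.values()))  # total probability is conserved (unitarity)
    ```

    The two-step coined model discussed above is simply two applications of `step` per time unit; the paper's point is that this squared evolution is unitarily equivalent to the Szegedy and 2-tessellable staggered constructions.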

  14. Experiment on building Sundanese lexical database based on WordNet

    NASA Astrophysics Data System (ADS)

    Dewi Budiwati, Sari; Nurani Setiawan, Novihana

    2018-03-01

    Sundanese is the second most widely spoken local language in Indonesia. Currently, Sundanese is rarely used, since Indonesian serves as the national language in everyday conversation. We built a Sundanese lexical database based on WordNet and the Indonesian WordNet as a way to preserve the language as part of the local culture. WordNet was chosen because the Sundanese language has three levels of word delivery, called the language code of conduct. Web users participated in this research to specify Sundanese semantic relations, and an expert linguist validated the relations. The merge methodology was implemented in this experiment. Some words have equivalents in WordNet, while others do not, since some words do not exist in other cultures.

  15. A numerical analysis of phase-change problems including natural convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Y.; Faghri, A.

    1990-08-01

    Fixed grid solutions for phase-change problems remove the need to satisfy conditions at the phase-change front and can be easily extended to multidimensional problems. The two most important and widely used methods are enthalpy methods and temperature-based equivalent heat capacity methods. Both methods in this group have advantages and disadvantages. Enthalpy methods (Shamsundar and Sparrow, 1975; Voller and Prakash, 1987; Cao et al., 1989) are flexible and can handle phase-change problems occurring both at a single temperature and over a temperature range. The drawback of this method is that although the predicted temperature distributions and melting fronts are reasonable, the predicted time history of the temperature at a typical grid point may have some oscillations. The temperature-based fixed grid methods (Morgan, 1981; Hsiao and Chung, 1984) have no such time history problems and are more convenient with conjugate problems involving an adjacent wall, but have to deal with the severe nonlinearity of the governing equations when the phase-change temperature range is small. In this paper, a new temperature-based fixed-grid formulation is proposed, and the reason that the original equivalent heat capacity model is subject to such restrictions on the time step, mesh size, and the phase-change temperature range will also be discussed.

  16. An efficient method for generalized linear multiplicative programming problem with multiplicative constraints.

    PubMed

    Zhao, Yingfeng; Liu, Sanyang

    2016-01-01

    We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation equivalent to a linear program is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation programming problems. Global convergence is proved, and results on some sample examples and a small randomized experiment show that the proposed algorithm is feasible and efficient.
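    The branch-and-bound idea can be sketched on the simplest multiplicative program, minimizing a product of two affine terms over a box, with crude interval bounds in place of the authors' two-phase linear relaxation (all problem data invented):

    ```python
    import heapq

    # Minimize f(x, y) = (x + y + 1) * (x - y + 3) over the box [0,2] x [0,2].
    # Lower bound on a sub-box: interval product of the two affine terms.
    # Upper bound (incumbent): evaluate f at box centres.

    def affine_range(c0, cx, cy, box):
        # exact range of c0 + cx*x + cy*y over the box
        (xl, xu), (yl, yu) = box
        lo = c0 + (cx * xl if cx >= 0 else cx * xu) + (cy * yl if cy >= 0 else cy * yu)
        hi = c0 + (cx * xu if cx >= 0 else cx * xl) + (cy * yu if cy >= 0 else cy * yl)
        return lo, hi

    def f(x, y):
        return (x + y + 1.0) * (x - y + 3.0)

    def lower_bound(box):
        a_lo, a_hi = affine_range(1.0, 1.0, 1.0, box)
        b_lo, b_hi = affine_range(3.0, 1.0, -1.0, box)
        return min(a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi)

    def branch_and_bound(tol=1e-6):
        box0 = ((0.0, 2.0), (0.0, 2.0))
        best = f(1.0, 1.0)                    # incumbent from the box centre
        heap = [(lower_bound(box0), box0)]
        while heap:
            lb, box = heapq.heappop(heap)
            if lb > best - tol:
                continue                      # pruned: cannot improve incumbent
            (xl, xu), (yl, yu) = box
            cx, cy = (xl + xu) / 2, (yl + yu) / 2
            best = min(best, f(cx, cy))
            # bisect the longer edge
            if xu - xl >= yu - yl:
                kids = [((xl, cx), (yl, yu)), ((cx, xu), (yl, yu))]
            else:
                kids = [((xl, xu), (yl, cy)), ((xl, xu), (cy, yu))]
            for kid in kids:
                klb = lower_bound(kid)
                if klb < best - tol:
                    heapq.heappush(heap, (klb, kid))
        return best

    print(branch_and_bound())  # ~3.0, the global minimum at (0,0) and (0,2)
    ```

    The paper's contribution is a much tighter bounding step (a two-phase linear relaxation) within this same prune-and-split skeleton, which is what makes the method practical for many multiplicative terms and constraints.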

  17. Effects of Graphic Organiser on Students' Achievement in Algebraic Word Problems

    ERIC Educational Resources Information Center

    Owolabi, Josiah; Adaramati, Tobiloba Faith

    2015-01-01

    This study investigated the effects of graphic organiser and gender on students' academic achievement in algebraic word problem. Three research questions and three null hypotheses were used in guiding this study. Quasi experimental research was employed and Non-equivalent pre and post test design was used. The study involved the Senior Secondary…

  18. Using Probabilistic Information in Solving Resource Allocation Problems for a Decentralized Firm

    DTIC Science & Technology

    1978-09-01

    deterministic equivalent form of HIQ's problem (5) by an approach similar to the one used in stochastic programming with simple recourse. See Ziemba [38] or, in…1964). 38. Ziemba, W.T., "Stochastic Programs with Simple Recourse," Technical Report 72-15, Stanford University, Department of Operations Research

  19. Examining Temporal Associations between Perceived Maternal Psychological Control and Early Adolescent Internalizing Problems

    ERIC Educational Resources Information Center

    Loukas, Alexandra

    2009-01-01

    The present study examined a) the associations between adolescent-reported maternal psychological control and self-reported internalizing problems one year later, while simultaneously examining the opposite direction of effects and b) the equivalence of these associations across gender. Participants were 479 10-to-14-year old adolescents (55%…

  20. The Validity of Computer Audits of Simulated Cases Records.

    ERIC Educational Resources Information Center

    Rippey, Robert M.; And Others

    This paper describes the implementation of a computer-based approach to scoring open-ended problem lists constructed to evaluate student and practitioner clinical judgment from real or simulated records. Based on 62 previously administered and scored problem lists, the program was written in BASIC for a Heathkit H11A computer (equivalent to DEC…

  1. Eötvös, Baron Lóránd [Roland] von (1848-1919)

    NASA Astrophysics Data System (ADS)

    Murdin, P.

    2000-11-01

    Hungarian physicist, born in Pest (now part of Budapest), became professor of experimental physics there. Worked on a wide range of physical problems including gravitation, and invented the Eötvös balance, a torsion balance. With it, he tested (in what became known as the Eötvös experiment) the equivalence principle that gravitational mass and inertial mass are equivalent; he found that they were i...

  2. Spacetime completeness of non-singular black holes in conformal gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bambi, Cosimo; Rachwał, Lesław; Modesto, Leonardo, E-mail: bambi@fudan.edu.cn, E-mail: lmodesto@sustc.edu.cn, E-mail: grzerach@gmail.com

    We explicitly prove that the Weyl conformal symmetry solves the black hole singularity problem, otherwise unavoidable in a generally covariant local or non-local gravitational theory. Moreover, we yield explicit examples of local and non-local theories enjoying Weyl and diffeomorphism symmetry (in short, co-covariant theories). Following the seminal paper by Narlikar and Kembhavi, we provide an explicit construction of singularity-free spherically symmetric and axisymmetric exact solutions for black hole spacetimes conformally equivalent to the Schwarzschild or the Kerr spacetime. We first check the absence of divergences in the Kretschmann invariant for the rescaled metrics. Afterwards, we show that the new types of black holes are geodesically complete and linked by a Newman-Janis transformation, just as in standard general relativity (based on the Einstein-Hilbert action). Furthermore, we argue that no massive or massless particles can reach the former Schwarzschild singularity or touch the former Kerr ring singularity in a finite amount of their proper time or of their affine parameter. Finally, we discuss the Raychaudhuri equation in a co-covariant theory and show that the expansion parameter for congruences of both types of geodesics (for massless and massive particles) never reaches minus infinity. Actually, the null geodesics become parallel at r = 0 in the Schwarzschild spacetime (the origin) and the focusing of geodesics is avoided. The arguments of regularity of curvature invariants, geodesic completeness, and finiteness of the geodesics' expansion parameter ensure that we are dealing with singularity-free and geodesically complete black hole spacetimes.

  3. Ultrasound with neurostimulation compared with ultrasound guidance alone for lumbar plexus block: A randomised single blinded equivalence trial.

    PubMed

    Arnuntasupakul, Vanlapa; Chalachewa, Theerawat; Leurcharusmee, Prangmalee; Tiyaprasertkul, Worakamol; Finlayson, Roderick J; Tran, De Q

    2018-03-01

    Ultrasound-guided lumbar plexus blocks usually require confirmatory neurostimulation. A simpler alternative is to inject local anaesthetic inside the posteromedial quadrant of the psoas muscle under ultrasound guidance. We hypothesised that both techniques would result in similar total anaesthesia time, defined as the sum of performance and onset time. A randomised, observer-blinded, equivalence trial. Ramathibodi Hospital and Maharaj Nakorn Chiang Mai Hospital (Thailand) from 12 May 2016 to 10 January 2017. A total of 110 patients undergoing total hip or knee arthroplasty, who required lumbar plexus block for postoperative analgesia. In the combined ultrasonography-neurostimulation group, quadriceps-evoked motor response was sought at a current between 0.2 and 0.8 mA prior to local anaesthetic injection (30 ml of lidocaine 1% and levobupivacaine 0.25% with epinephrine 5 μg ml⁻¹ and 5 mg of dexamethasone). In the ultrasound guidance alone group, local anaesthetic was simply injected inside the posteromedial quadrant of the psoas muscle. We measured the total anaesthesia time, the success rate (at 30 min), the number of needle passes, block-related pain, cumulative opioid consumption (at 24 h) and adverse events (vascular puncture, paraesthesia, local anaesthetic spread to the epidural space). The equivalence margin was 7.4 min. Compared with ultrasound guidance alone, combined ultrasonography-neurostimulation resulted in decreased mean (±SD) total anaesthesia time [15.3 (±6.5) vs. 20.1 (±9.0) min; mean difference, -4.8; 95% confidence interval, -8.1 to -1.9; P = 0.005] and mean (±SD) onset time [10.2 (±5.6) vs. 15.5 (±9.0) min; P = 0.004]. No inter-group differences were observed in terms of success rate, performance time, number of needle passes, block-related pain, opioid consumption or adverse events. Although the ultrasonography-neurostimulation technique results in a shorter total anaesthesia time than ultrasound guidance alone, the difference falls within our accepted equivalence margin (±7.4 min). Registered at www.clinicaltrials.in.th (Study ID: TCTR20160427003).

  4. 45 CFR 2522.910 - What basic qualifications must an AmeriCorps member have to serve as a tutor?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... (b) Is not considered to be an employee of the Local Education Agency or school, as determined by State law (1) High School diploma or its equivalent, or a higher degree; and (2) Successful completion... qualifications: (a) Is considered to be an employee of the Local Education Agency or school, as determined by...

  5. 45 CFR 2522.910 - What basic qualifications must an AmeriCorps member have to serve as a tutor?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... (b) Is not considered to be an employee of the Local Education Agency or school, as determined by State law (1) High School diploma or its equivalent, or a higher degree; and (2) Successful completion... qualifications: (a) Is considered to be an employee of the Local Education Agency or school, as determined by...

  6. 45 CFR 2522.910 - What basic qualifications must an AmeriCorps member have to serve as a tutor?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (b) Is not considered to be an employee of the Local Education Agency or school, as determined by State law (1) High School diploma or its equivalent, or a higher degree; and (2) Successful completion... qualifications: (a) Is considered to be an employee of the Local Education Agency or school, as determined by...

  7. 45 CFR 2522.910 - What basic qualifications must an AmeriCorps member have to serve as a tutor?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... (b) Is not considered to be an employee of the Local Education Agency or school, as determined by State law (1) High School diploma or its equivalent, or a higher degree; and (2) Successful completion... qualifications: (a) Is considered to be an employee of the Local Education Agency or school, as determined by...

  8. Effect of Damping and Yielding on the Seismic Response of 3D Steel Buildings with PMRF

    PubMed Central

    Haldar, Achintya; Rodelo-López, Ramon Eduardo; Bojórquez, Eden

    2014-01-01

    The effect of viscous damping and yielding on the reduction of the seismic responses of steel buildings modeled as three-dimensional (3D) complex multidegree of freedom (MDOF) systems is studied. The reduction produced by damping may be larger or smaller than that produced by yielding. This reduction can vary significantly from one structural idealization to another and is smaller for global than for local response parameters, and it depends on the particular local response parameter considered. The uncertainty in the estimation is significantly larger for local response parameters and decreases as damping increases. The results show the limitations of the commonly used static equivalent lateral force procedure, in which local and global response parameters are reduced in the same proportion. It is concluded that estimating the effect of damping and yielding on the seismic response of steel buildings by using simplified models may be a very crude approximation. Moreover, the effect of yielding should be explicitly calculated by using complex 3D MDOF models instead of estimating it in terms of equivalent viscous damping. The findings of this paper are for the particular models used in the study. Much more research is needed to reach more general conclusions. PMID:25097892

  9. Effect of damping and yielding on the seismic response of 3D steel buildings with PMRF.

    PubMed

    Reyes-Salazar, Alfredo; Haldar, Achintya; Rodelo-López, Ramon Eduardo; Bojórquez, Eden

    2014-01-01

    The effect of viscous damping and yielding on the reduction of the seismic responses of steel buildings modeled as three-dimensional (3D) complex multidegree of freedom (MDOF) systems is studied. The reduction produced by damping may be larger or smaller than that produced by yielding. This reduction can vary significantly from one structural idealization to another and is smaller for global than for local response parameters, and it depends on the particular local response parameter considered. The uncertainty in the estimation is significantly larger for local response parameters and decreases as damping increases. The results show the limitations of the commonly used static equivalent lateral force procedure, in which local and global response parameters are reduced in the same proportion. It is concluded that estimating the effect of damping and yielding on the seismic response of steel buildings by using simplified models may be a very crude approximation. Moreover, the effect of yielding should be explicitly calculated by using complex 3D MDOF models instead of estimating it in terms of equivalent viscous damping. The findings of this paper are for the particular models used in the study. Much more research is needed to reach more general conclusions.

  10. Axisymmetric charge-conservative electromagnetic particle simulation algorithm on unstructured grids: Application to microwave vacuum electronic devices

    NASA Astrophysics Data System (ADS)

    Na, Dong-Yeop; Omelchenko, Yuri A.; Moon, Haksu; Borges, Ben-Hur V.; Teixeira, Fernando L.

    2017-10-01

    We present a charge-conservative electromagnetic particle-in-cell (EM-PIC) algorithm optimized for the analysis of vacuum electronic devices (VEDs) with cylindrical symmetry (axisymmetry). We exploit the axisymmetry present in the device geometry, fields, and sources to reduce the dimensionality of the problem from 3D to 2D. Further, we employ 'transformation optics' principles to map the original problem in polar coordinates with metric tensor diag(1, ρ², 1) to an equivalent problem on a Cartesian metric tensor diag(1, 1, 1), with an effective (artificial) inhomogeneous medium introduced. The resulting problem in the meridian (ρz) plane is discretized using an unstructured 2D mesh considering TEϕ-polarized fields. Electromagnetic field and source (node-based charges and edge-based currents) variables are expressed as differential forms of various degrees, and discretized using Whitney forms. Using leapfrog time integration, we obtain a mixed E-B finite-element time-domain scheme for the fully discrete Maxwell's equations. We achieve a local and explicit time update for the field equations by employing the sparse approximate inverse (SPAI) algorithm. Interpolating field values to particles' positions for solving the Newton-Lorentz equations of motion is also done via Whitney forms. Particles are advanced using the Boris algorithm with relativistic correction. A recently introduced charge-conserving scatter scheme tailored for 2D unstructured grids is used in the scatter step. The algorithm is validated considering cylindrical cavity and space-charge-limited cylindrical diode problems. We use the algorithm to investigate the physical performance of VEDs designed to harness particle bunching effects arising from the coherent (resonance) Cerenkov electron beam interactions within micro-machined slow wave structures.
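
    The Boris pusher named in the abstract is a standard, well-documented scheme. A minimal non-relativistic sketch in Python (for illustration only; the paper's implementation is a relativistic FETD/PIC code, not reproduced here) shows its half-kick / rotation / half-kick structure:

```python
import numpy as np

def boris_push(v, E, B, q_m, dt):
    """One Boris step: half electric kick, exact-norm magnetic rotation,
    then the second half electric kick. q_m is the charge-to-mass ratio.
    Non-relativistic form, kept simple for clarity."""
    v_minus = v + 0.5 * q_m * dt * E
    t = 0.5 * q_m * dt * B                   # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * q_m * dt * E

# In a pure magnetic field the rotation step conserves speed exactly.
v0 = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
v1 = boris_push(v0, np.zeros(3), B, q_m=1.0, dt=0.1)
```

    The rotation about t is norm-preserving by construction, which is why the Boris scheme conserves kinetic energy in a static magnetic field; this property is what the test below checks.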

  11. Testing of the European Union exposure-response relationships and annoyance equivalents model for annoyance due to transportation noises: The need of revised exposure-response relationships and annoyance equivalents model.

    PubMed

    Gille, Laure-Anne; Marquis-Favre, Catherine; Morel, Julien

    2016-09-01

    An in situ survey was performed in 8 French cities in 2012 to study the annoyance due to combined transportation noises. As the European Commission recommends using the exposure-response relationships suggested by Miedema and Oudshoorn [Environmental Health Perspectives, 2001] to predict annoyance due to a single transportation noise, these exposure-response relationships were tested against the annoyance due to each transportation noise measured during the French survey. These relationships enabled a good prediction only of the percentages of people highly annoyed by road traffic noise. For the percentages of people annoyed and a little annoyed by road traffic noise, the quality of prediction was weak. For aircraft and railway noises, the prediction of annoyance was not satisfactory either. As a consequence, the annoyance equivalents model of Miedema [The Journal of the Acoustical Society of America, 2004], based on these exposure-response relationships, did not enable a good prediction of annoyance due to combined transportation noises. Local exposure-response relationships were derived, following the whole computation suggested by Miedema and Oudshoorn [Environmental Health Perspectives, 2001]. They led to a better calculation of annoyance due to each transportation noise in the French cities. A new version of the annoyance equivalents model was proposed using these new exposure-response relationships. This model enabled a better prediction of the total annoyance due to the combined transportation noises. These results therefore encourage improving annoyance prediction for noise sources in isolation with local or revised exposure-response relationships, which will also help improve annoyance modelling for combined noises. With this aim in mind, a methodology is proposed to consider noise sensitivity in exposure-response relationships and in the annoyance equivalents model. The results showed that taking this variable into account improved neither the exposure-response relationships nor the annoyance equivalents model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. A Riemannian geometric mapping technique for identifying incompressible equivalents to subsonic potential flows

    NASA Astrophysics Data System (ADS)

    German, Brian Joseph

    This research develops a technique for the solution of incompressible equivalents to planar steady subsonic potential flows. Riemannian geometric formalism is used to develop a gauge transformation of the length measure followed by a curvilinear coordinate transformation to map the given subsonic flow into a canonical Laplacian flow with the same boundary conditions. The effect of the transformation is to distort both the immersed profile shape and the domain interior nonuniformly as a function of local flow properties. The method represents the full nonlinear generalization of the classical methods of Prandtl-Glauert and Karman-Tsien. Unlike the classical methods which are "corrections," this method gives exact results in the sense that the inverse mapping produces the subsonic full potential solution over the original airfoil, up to numerical accuracy. The motivation for this research was provided by an observed analogy between linear potential flow and the special theory of relativity that emerges from the invariance of the d'Alembert wave equation under Lorentz transformations. This analogy is well known in an operational sense, being leveraged widely in linear unsteady aerodynamics and acoustics, stemming largely from the work of Kussner. Whereas elements of the special theory can be invoked for compressibility effects that are linear and global in nature, the question posed in this work was whether other mathematical techniques from the realm of relativity theory could be used to similar advantage for effects that are nonlinear and local. This line of thought led to a transformation leveraging Riemannian geometric methods common to the general theory of relativity. A gauge transformation is used to geometrize compressibility through the metric tensor of the underlying space to produce an equivalent incompressible flow that lives not on a plane but on a curved surface. 
In this sense, forces owing to compressibility can be ascribed to the geometry of space in much the same way that general relativity ascribes gravitational forces to the curvature of space-time. Although the analogy with general relativity is fruitful, it is important not to overstate the similarities between compressibility and the physics of gravity, as the interest for this thesis is primarily in the mathematical framework and not physical phenomenology or epistemology. The thesis presents the philosophy and theory for the transformation method followed by a numerical method for practical solutions of equivalent incompressible flows over arbitrary closed profiles. The numerical method employs an iterative approach involving the solution of the equivalent incompressible flow with a panel method, the calculation of the metric tensor for the gauge transformation, and the solution of the curvilinear coordinate mapping to the canonical flow with a finite difference approach for the elliptic boundary value problem. This method is demonstrated for non-circulatory flow over a circular cylinder and both symmetric and lifting flows over a NACA 0012 profile. Results are validated with accepted subcritical full potential test cases available in the literature. For chord-preserving mapping boundary conditions, the results indicate that the equivalent incompressible profiles thicken with Mach number and develop a leading edge droop with increased angle of attack. Two promising areas of potential applicability of the method have been identified. The first is in airfoil inverse design methods leveraging incompressible flow knowledge including heuristics and empirical data for the potential field effects on viscous phenomena such as boundary layer transition and separation. The second is in aerodynamic testing using distorted similarity-scaled models.
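
    The two classical "correction" methods that the mapping generalizes can be stated in a few lines. The sketch below gives the standard Prandtl-Glauert and Karman-Tsien pressure-coefficient rules (textbook formulas for context, not part of the thesis's Riemannian method):

```python
import math

def prandtl_glauert(cp0, mach):
    """Linear compressibility correction: Cp = Cp0 / sqrt(1 - M^2)."""
    return cp0 / math.sqrt(1.0 - mach ** 2)

def karman_tsien(cp0, mach):
    """Nonlinear refinement:
    Cp = Cp0 / (beta + (M^2 / (1 + beta)) * Cp0 / 2), beta = sqrt(1 - M^2)."""
    beta = math.sqrt(1.0 - mach ** 2)
    return cp0 / (beta + (mach ** 2 / (1.0 + beta)) * cp0 / 2.0)

cp0 = -0.5            # incompressible pressure coefficient at a surface point
cp_pg = prandtl_glauert(cp0, 0.5)
cp_kt = karman_tsien(cp0, 0.5)
```

    Both rules amplify the incompressible suction peak as Mach number grows; Karman-Tsien predicts a slightly stronger amplification than Prandtl-Glauert on suction surfaces, while the thesis's mapping replaces such global corrections with an exact, locally varying transformation.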

  13. Cloud computing and validation of expandable in silico livers.

    PubMed

    Ropella, Glen E P; Hunt, C Anthony

    2010-12-03

    In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling to more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster, we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstrating results equivalency from two different wet-labs. The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters.
The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware.

  14. Fast algorithms for transforming back and forth between a signed permutation and its equivalent simple permutation.

    PubMed

    Gog, Simon; Bader, Martin

    2008-10-01

    The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.
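
    The reversal operation underlying all of these algorithms is easy to state concretely. The sketch below (an illustration of the operation itself, not the paper's linear-time transformation) sorts a small signed permutation to the identity by three reversals:

```python
def reversal(perm, i, j):
    """Apply the reversal rho(i, j) to a signed permutation:
    the segment perm[i..j] is reversed and every sign in it is flipped."""
    return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]

# Sorting the signed permutation (+3, -1, +2) to the identity (+1, +2, +3):
p = [3, -1, 2]
p = reversal(p, 0, 1)   # [1, -3, 2]
p = reversal(p, 1, 2)   # [1, -2, 3]
p = reversal(p, 1, 1)   # [1, 2, 3]
```

    Finding a shortest such sequence is the hard part; the Hannenhalli-Pevzner theory cited in the abstract does exactly that, with simple permutations as an intermediate representation.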

  15. Evaluation of the equivalence ratio of the reacting mixture using intensity ratio of chemiluminescence in laminar partially premixed CH{sub 4}-air flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Yong Ki; Jeon, Chung Hwan; Chang, Young June

    An experimental study was performed to investigate the effects of partial premixing, varying the equivalence ratio from 0.79 to 9.52, on OH*, CH* and C{sub 2}* in laminar partially premixed flames. The signals from the electronically excited states of OH*, CH* and C{sub 2}* were detected through interference filters using a photomultiplier tube (PMT), and were processed into the intensity ratios (C{sub 2}*/CH*, C{sub 2}*/OH* and CH*/OH*) to determine a correlation with the local equivalence ratio. Furthermore, the consistency between the results of the tomographic reconstruction (Abel inversion) of CCD (charge-coupled device) camera images and the local radical intensity measured with the PMT was investigated. The results demonstrated that (1) the flames at Φ ≤ 1.36 exhibited the classical double flame structure, at Φ ≥ 4.76 the flames exhibited a non-premixed-like flame structure, and the intermediate flames at 1.36 < Φ < 4.76

  16. Recent Development of Multigrid Algorithms for Mixed and Nonconforming Methods for Second Order Elliptic Problems

    NASA Technical Reports Server (NTRS)

    Chen, Zhangxin; Ewing, Richard E.

    1996-01-01

    Multigrid algorithms for nonconforming and mixed finite element methods for second order elliptic problems on triangular and rectangular finite elements are considered. The construction of several coarse-to-fine intergrid transfer operators for nonconforming multigrid algorithms is discussed. The equivalence between the nonconforming and mixed finite element methods with and without projection of the coefficient of the differential problems into finite element spaces is described.
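
    As a concrete example of a coarse-to-fine intergrid transfer operator of the kind discussed, the sketch below builds the standard 1D linear-interpolation prolongation matrix and its full-weighting restriction (textbook conforming-grid operators, shown for orientation; they are not the specific nonconforming-element transfers constructed in the paper):

```python
import numpy as np

def prolongation(n_coarse):
    """Linear-interpolation prolongation from n_coarse interior coarse-grid
    points to the 2*n_coarse + 1 interior points of the refined 1D grid
    (homogeneous Dirichlet boundary values assumed)."""
    n_fine = 2 * n_coarse + 1
    P = np.zeros((n_fine, n_coarse))
    for j in range(n_coarse):
        P[2 * j, j] += 0.5       # fine point left of coarse node j
        P[2 * j + 1, j] = 1.0    # fine point coinciding with coarse node j
        P[2 * j + 2, j] += 0.5   # fine point right of coarse node j
    return P

P = prolongation(3)    # 3 coarse interior points -> 7 fine interior points
R = 0.5 * P.T          # full-weighting restriction: R = (1/2) P^T in 1D
```

    Taking the restriction as a scaled transpose of the prolongation is the usual variational choice, since it makes the coarse-grid operator a Galerkin projection of the fine-grid one.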

  17. Solving fully fuzzy transportation problem using pentagonal fuzzy numbers

    NASA Astrophysics Data System (ADS)

    Maheswari, P. Uma; Ganesan, K.

    2018-04-01

    In this paper, we propose a simple approach for the solution of the fuzzy transportation problem in a fuzzy environment, in which the transportation costs, supplies at sources and demands at destinations are represented by pentagonal fuzzy numbers. The fuzzy transportation problem is solved without converting it to its equivalent crisp form, using a robust ranking technique and a new fuzzy arithmetic on pentagonal fuzzy numbers. To illustrate the proposed approach, a numerical example is provided.
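
    As a sketch of the ranking idea, the snippet below scores pentagonal fuzzy numbers with a simple average-based defuzzification and compares two fuzzy costs. Both the scoring function and the component-wise addition are illustrative stand-ins chosen here; the paper's robust ranking technique and fuzzy arithmetic are not reproduced:

```python
def rank_pentagonal(a):
    """Crisp score for a pentagonal fuzzy number A = (a1, a2, a3, a4, a5).
    A plain average of the five defining points is used as an
    illustrative defuzzification, not the paper's exact ranking."""
    return sum(a) / 5.0

def add_pentagonal(a, b):
    """Component-wise addition of two pentagonal fuzzy numbers
    (a common convention, assumed here for illustration)."""
    return tuple(x + y for x, y in zip(a, b))

# Comparing two fuzzy transportation costs by their crisp scores:
c1 = (1, 2, 3, 4, 5)
c2 = (2, 3, 4, 5, 6)
cheaper = c1 if rank_pentagonal(c1) <= rank_pentagonal(c2) else c2
```

    A ranking of this kind is what lets a transportation algorithm compare fuzzy costs directly, which is how the paper avoids converting the whole problem to a crisp equivalent.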

  18. Control topologies for deep space formation flying spacecraft

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Smith, R. S.

    2002-01-01

    This paper gives a characterization of the equivalent topologies and uses this approach to show that there exists a control topology which achieves a global tracking objective using only local controllers.

  19. Tripartite-to-Bipartite Entanglement Transformation by Stochastic Local Operations and Classical Communication and the Structure of Matrix Spaces

    NASA Astrophysics Data System (ADS)

    Li, Yinan; Qiao, Youming; Wang, Xin; Duan, Runyao

    2018-03-01

    We study the problem of transforming a tripartite pure state to a bipartite one using stochastic local operations and classical communication (SLOCC). It is known that the tripartite-to-bipartite SLOCC convertibility is characterized by the maximal Schmidt rank of the given tripartite state, i.e. the largest Schmidt rank over those bipartite states lying in the support of the reduced density operator. In this paper, we further study this problem and exhibit novel results in both multi-copy and asymptotic settings, utilizing powerful results from the structure of matrix spaces. In the multi-copy regime, we observe that the maximal Schmidt rank is strictly super-multiplicative, i.e. the maximal Schmidt rank of the tensor product of two tripartite pure states can be strictly larger than the product of their maximal Schmidt ranks. We then provide a full characterization of those tripartite states whose maximal Schmidt rank is strictly super-multiplicative when taking tensor product with itself. Notice that such tripartite states admit strict advantages in tripartite-to-bipartite SLOCC transformation when multiple copies are provided. In the asymptotic setting, we focus on determining the tripartite-to-bipartite SLOCC entanglement transformation rate. Computing this rate turns out to be equivalent to computing the asymptotic maximal Schmidt rank of the tripartite state, defined as the regularization of its maximal Schmidt rank. Despite the difficulty caused by the super-multiplicative property, we provide explicit formulas for evaluating the asymptotic maximal Schmidt ranks of two important families of tripartite pure states by resorting to certain results of the structure of matrix spaces, including the study of matrix semi-invariants. 
These formulas turn out to be powerful enough to give a sufficient and necessary condition to determine whether a given tripartite pure state can be transformed to the bipartite maximally entangled state under SLOCC, in the asymptotic setting. Applying the recent progress on the non-commutative rank problem, we can verify this condition in deterministic polynomial time.
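
    The bipartite primitive behind the maximal Schmidt rank is straightforward to compute: arrange the state's coefficients as a matrix and count nonzero singular values. A minimal sketch (the elementary definition only, not the paper's asymptotic or matrix-space machinery):

```python
import numpy as np

def schmidt_rank(M, tol=1e-10):
    """Schmidt rank of a bipartite pure state |psi> = sum_ij M[i,j] |i>|j>:
    the number of singular values of the coefficient matrix above tol."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s > tol))

# A product state has Schmidt rank 1; the Bell state (|00>+|11>)/sqrt(2)
# is maximally entangled on two qubits and has Schmidt rank 2.
product = np.array([[1.0, 0.0], [0.0, 0.0]])
bell = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2.0)
```

    The tripartite quantity studied in the paper maximizes this rank over the bipartite states in the support of the reduced density operator, which is what makes it much harder to evaluate.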

  20. European Heritage Landscapes. An Account of the Conference on Planning and Management in European Naturparke/Parcs Naturels/National Parks (U.K.) and Equivalent Category "C" Reserves (Losehill Hall, Castleton, England, September 26-30, 1977).

    ERIC Educational Resources Information Center

    Smith, Roland

    Presented are the proceedings of the Conference on Planning and Management in European National Parks and equivalent Category "C" reserves held at the Peak National Park Study Center, Castleton, England, in 1977. Fifty-two representatives from 16 countries focused on practical solutions to management and planning problems in national parks. (BT)

  1. XFEM with equivalent eigenstrain for matrix-inclusion interfaces

    NASA Astrophysics Data System (ADS)

    Benvenuti, Elena

    2014-05-01

    Several engineering applications rely on particulate composite materials, and numerical modelling of the matrix-inclusion interface is therefore a crucial part of the design process. The focus of this work is on an original use of the equivalent eigenstrain concept in the development of a simplified eXtended Finite Element Method. Key points are: the replacement of the matrix-inclusion interface by a coating layer with small but finite thickness, and its simulation as an inclusion with an equivalent eigenstrain. For vanishing thickness, the model is consistent with a spring-like interface model. The problem of a spherical inclusion within a cylinder is solved. The results show that the proposed approach is effective and accurate.

  2. Teaching generatively: Learning about disorders and disabilities.

    PubMed

    Alter, Margaret M; Borrero, John C

    2015-01-01

    Stimulus equivalence procedures have been used to teach course material in higher education in the laboratory and in the classroom. The current study was a systematic replication of Walker, Rehfeldt, and Ninness (2010), who used a stimulus equivalence procedure to train information pertaining to 12 disorders. Specifically, we conducted (a) a written posttest immediately after each training unit and (b) booster training sessions for poor performers. Results showed immediate improvement from pretest to posttest scores after training, but problems with maintenance were noted in the final examination. Implications of poor maintenance are discussed in the context of the current study and stimulus equivalence research in higher education generally. © Society for the Experimental Analysis of Behavior.

  3. Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.

    1981-01-01

    To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat-earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1-deg-averaged surface free-air gravity anomalies or POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
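
    The equivalent-source step is, at its core, an ordinary linear least-squares fit of point-source strengths to the observed anomalies. The sketch below uses a flat-space 1/r kernel as a simple stand-in for the spherical-earth Green's functions of the paper:

```python
import numpy as np

def fit_point_sources(obs, src, anomalies):
    """Equivalent-source inversion: solve G m ~= d in the least-squares
    sense, with design matrix G_ij = 1/r_ij (an illustrative 1/r kernel,
    standing in for the paper's spherical-earth Green's functions)."""
    G = 1.0 / np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=2)
    m, *_ = np.linalg.lstsq(G, anomalies, rcond=None)
    return m, G

# Synthetic check: noise-free anomalies generated from known strengths
# should be recovered exactly (up to numerical precision).
rng = np.random.default_rng(0)
src = rng.uniform(-1.0, 1.0, (4, 3)); src[:, 2] = -2.0   # buried sources
obs = rng.uniform(-1.0, 1.0, (20, 3)); obs[:, 2] = 0.0   # surface stations
true_m = np.array([1.0, -0.5, 2.0, 0.3])
G = 1.0 / np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=2)
m_hat, _ = fit_point_sources(obs, src, G @ true_m)
```

    Once the strengths m are fitted, the linear transformations mentioned in the abstract (derivatives, continuations, pole reductions) amount to re-evaluating the same source distribution with a different kernel.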

  4. Effect of Polya Problem-Solving Model on Senior Secondary School Students' Performance in Current Electricity

    ERIC Educational Resources Information Center

    Olaniyan, Ademola Olatide; Omosewo, Esther O.; Nwankwo, Levi I.

    2015-01-01

    This study was designed to investigate the Effect of Polya Problem-Solving Model on Senior School Students' Performance in Current Electricity. It was a quasi experimental study of non- randomized, non equivalent pre-test post-test control group design. Three research questions were answered and corresponding three research hypotheses were tested…

  5. Effect of Problem-Based Learning on Senior Secondary School Students' Achievements in Further Mathematics

    ERIC Educational Resources Information Center

    Fatade, Alfred Olufemi; Mogari, David; Arigbabu, Abayomi Adelaja

    2013-01-01

    The study investigated the effect of Problem-based learning (PBL) on senior secondary school students' achievements in Further Mathematics (FM) in Nigeria within the blueprint of pretest-post-test non-equivalent control group quasi-experimental design. Intact classes were used and in all, 96 students participated in the study (42 in the…

  6. Geodesic active fields--a geometric framework for image registration.

    PubMed

    Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe

    2011-05-01

    In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embedding the deformation field in a weighted minimal surface problem. The deformation field is then driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This approach shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is likewise coupled with the regularization term (the length functional) via multiplication. In fact, our geometric model is the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles, Kimmel, and Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions: the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. Compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework makes several important contributions. First, our general formulation works on any parametrizable, smooth, and differentiable surface, including nonflat and multiscale images; in the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically accounted for. Second, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Third, the multiplicative coupling between the registration term, i.e., the local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field, one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.
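
    As a schematic illustration (simplified notation, not the paper's exact functional), the contrast between classic additive regularization and the multiplicative coupling used by geodesic active fields can be written as:

```latex
% Classic additive regularization:
E_{\mathrm{add}}[u] \;=\; D\bigl(I_1,\, I_2\circ(\mathrm{id}+u)\bigr) \;+\; \lambda\, R(u)

% Multiplicative coupling (geodesic active fields, schematic):
E_{\mathrm{GAF}}[u] \;=\; \int_{\Omega} f\bigl(I_1,\, I_2\circ(\mathrm{id}+u)\bigr)\,\sqrt{\det g(u)}\;\mathrm{d}x
```

    Here $\sqrt{\det g(u)}\,\mathrm{d}x$ is the area element of the embedded deformation field (the Polyakov/weighted minimal surface term) and $f$ is the image-distance weight; setting $f \equiv 1$ recovers a pure minimal-surface regularizer, so the discrepancy itself tunes the regularization strength locally.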

  7. 21 CFR 880.5150 - Nonpowered flotation therapy mattress.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... materials that have the functionally equivalent effect of supporting a patient and avoiding excess pressure on local body areas. The device is intended to treat or prevent decubitus ulcers (bed sores). (b...

  8. 21 CFR 880.5150 - Nonpowered flotation therapy mattress.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... materials that have the functionally equivalent effect of supporting a patient and avoiding excess pressure on local body areas. The device is intended to treat or prevent decubitus ulcers (bed sores). (b...

  9. 21 CFR 880.5150 - Nonpowered flotation therapy mattress.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... materials that have the functionally equivalent effect of supporting a patient and avoiding excess pressure on local body areas. The device is intended to treat or prevent decubitus ulcers (bed sores). (b...

  10. 21 CFR 880.5150 - Nonpowered flotation therapy mattress.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... materials that have the functionally equivalent effect of supporting a patient and avoiding excess pressure on local body areas. The device is intended to treat or prevent decubitus ulcers (bed sores). (b...

  11. 21 CFR 880.5150 - Nonpowered flotation therapy mattress.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... materials that have the functionally equivalent effect of supporting a patient and avoiding excess pressure on local body areas. The device is intended to treat or prevent decubitus ulcers (bed sores). (b...

  12. Graph theory approach to the eigenvalue problem of large space structures

    NASA Technical Reports Server (NTRS)

    Reddy, A. S. S. R.; Bainum, P. M.

    1981-01-01

    Graph theory is used to obtain numerical solutions to eigenvalue problems of large space structures (LSS) characterized by a state vector of large dimensions. The LSS are considered as large, flexible systems requiring both orientation and surface shape control. A graphic interpretation of the determinant of a matrix is employed to reduce a higher dimensional matrix into combinations of smaller dimensional sub-matrices. The reduction is implemented by forming a Boolean equivalent of the original matrix, which is used to obtain smaller dimensional equivalents of the original numerical matrix. Computation time is reduced, and more accurate solutions are possible. An example is provided in the form of a free-free square plate. Linearized system equations and numerical values of a stiffness matrix are presented, featuring a state vector with 16 components.
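
    The reduction idea can be sketched as follows (a hedged illustration, not the authors' determinant-expansion algorithm): the Boolean (zero/nonzero) pattern of the matrix is used to detect decoupled groups of coordinates, and eigenvalues are then computed block by block.

```python
import numpy as np

def block_eigenvalues(A, tol=1e-12):
    """Split the eigenvalue problem of A into independent sub-problems.

    The Boolean (zero/nonzero) pattern of A identifies groups of mutually
    coupled coordinates; eigenvalues are computed per diagonal block."""
    n = A.shape[0]
    B = np.abs(A) > tol               # Boolean equivalent of the matrix
    unseen = set(range(n))
    eigs = []
    while unseen:
        stack = [unseen.pop()]        # BFS/DFS over the coupling graph
        comp = set(stack)
        while stack:
            i = stack.pop()
            for j in range(n):
                if (B[i, j] or B[j, i]) and j in unseen:
                    unseen.remove(j)
                    comp.add(j)
                    stack.append(j)
        idx = sorted(comp)
        eigs.extend(np.linalg.eigvals(A[np.ix_(idx, idx)]).tolist())
    return sorted(eigs, key=lambda z: (z.real, z.imag))

# Block-diagonal test matrix: one coupled 2x2 block plus two scalars
A = np.array([[2., 1., 0., 0.],
              [1., 2., 0., 0.],
              [0., 0., 5., 0.],
              [0., 0., 0., 3.]])
vals = block_eigenvalues(A)
```

    For truly block-decoupled systems this gives the same spectrum as the full eigensolve at a fraction of the cost, which is the spirit of the sub-matrix reduction described above.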

  13. Metric Properties of Relativistic Rotating Frames with Axial Symmetry

    NASA Astrophysics Data System (ADS)

    Torres, S. A.; Arenas, J. R.

    2017-07-01

    This abstract summarizes our poster contribution to the conference. We study the properties of an axially symmetric stationary gravitational field by considering the spacetime properties of a uniformly rotating frame and Einstein's Equivalence Principle (EEP). To do this, the weak-field and slow-rotation limit of the Kerr metric is determined by making a first-order perturbation to the metric of a rotating frame. We also show a local connection between the effects of centrifugal and Coriolis forces and the effects of an axially symmetric stationary weak gravitational field, by calculating the geodesic equations of a free particle. It is observed that these geodesics, by applying the EEP, are locally equivalent to the geodesic equations of a free particle in a rotating frame. Furthermore, some additional properties, such as the Lense-Thirring effect and the Sagnac effect, among others, are studied.

  14. Compactly supported linearised observables in single-field inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fröb, Markus B.; Higuchi, Atsushi; Hack, Thomas-Paul, E-mail: mbf503@york.ac.uk, E-mail: thomas-paul.hack@itp.uni-leipzig.de, E-mail: atsushi.higuchi@york.ac.uk

    We investigate the gauge-invariant observables constructed by smearing the graviton and inflaton fields by compactly supported tensors at linear order in general single-field inflation. These observables correspond to gauge-invariant quantities that can be measured locally. In particular, we show that these observables are equivalent to (smeared) local gauge-invariant observables such as the linearised Weyl tensor, which have better infrared properties than the graviton and inflaton fields. Special cases include the equivalence between the compactly supported gauge-invariant graviton observable and the smeared linearised Weyl tensor in Minkowski and de Sitter spaces. Our results indicate that the infrared divergences in the tensor and scalar perturbations in single-field inflation have the same status as in de Sitter space and are both a gauge artefact, in a certain technical sense, at tree level.

  15. Equivalent circuit and characteristic simulation of a brushless electrically excited synchronous wind power generator

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Zhang, Fengge; Guan, Tao; Yu, Siyang

    2017-09-01

    A brushless electrically excited synchronous generator (BEESG) with a hybrid rotor is a novel electrically excited synchronous generator. The BEESG proposed in this paper is composed of a conventional stator with two different sets of windings with different pole numbers, and a hybrid rotor with powerful coupling capacity. The pole number of the rotor is different from those of the stator windings. Thus, an analysis method different from that applied to conventional generators must be applied to the BEESG. In view of this problem, the equivalent circuit and electromagnetic torque expression of the BEESG are derived on the basis of the electromagnetic relations of the proposed generator. The generator is simulated and tested experimentally using the established equivalent circuit model. The experimental and simulation data are then analyzed and compared. The results show the validity of the equivalent circuit model.

  16. Self-calibration of robot-sensor system

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu

    1990-01-01

    The process of finding the coordinate transformation between a robot and an external sensor system has been addressed. This calibration is equivalent to solving a nonlinear optimization problem for the parameters that characterize the transformation. A two-step procedure is proposed for solving the problem. The first step involves finding a nominal solution that is a good approximation of the final solution. A variational problem is then generated to replace the original problem in the next step. With the assumption that the variational parameters are small compared to unity, the resulting problem can be solved more readily and with relatively little computational effort.
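
    A minimal sketch of the two-step idea, assuming a 2D rigid robot-sensor transformation and exact point correspondences (the names and the Gauss-Newton-style linearised update are illustrative, not the paper's formulation): starting from a nominal solution, each iteration solves a linear least-squares problem for small corrections.

```python
import numpy as np

def refine_calibration(theta0, t0, P_sensor, P_robot, iters=10):
    """From a nominal (theta0, t0), repeatedly solve a linearised
    (variational) least-squares problem for small corrections,
    assuming the corrections are small compared to unity."""
    theta, t = theta0, np.asarray(t0, float)
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        dR = np.array([[-s, -c], [c, -s]])            # dR/dtheta
        r = (P_robot - (P_sensor @ R.T + t)).ravel()  # residuals
        J = np.zeros((r.size, 3))                     # Jacobian wrt (dtheta, dtx, dty)
        J[:, 0] = (P_sensor @ dR.T).ravel()
        J[0::2, 1] = 1.0
        J[1::2, 2] = 1.0
        delta, *_ = np.linalg.lstsq(J, r, rcond=None)
        theta += delta[0]
        t += delta[1:]
    return theta, t

# Synthetic check: recover a known transform from a nearby nominal guess
rng = np.random.default_rng(0)
P_sensor = rng.normal(size=(20, 2))
true_theta, true_t = 0.3, np.array([1.0, -2.0])
c, s = np.cos(true_theta), np.sin(true_theta)
P_robot = P_sensor @ np.array([[c, -s], [s, c]]).T + true_t
theta_hat, t_hat = refine_calibration(0.25, [0.9, -1.9], P_sensor, P_robot)
```
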

  17. A Comparative Study of Behavior Problems among Left-Behind Children, Migrant Children and Local Children.

    PubMed

    Hu, Hongwei; Gao, Jiamin; Jiang, Haochen; Jiang, Haixia; Guo, Shaoyun; Chen, Kun; Jin, Kaili; Qi, Yingying

    2018-04-01

    This study aims to estimate the prevalence of behavioral problems among left-behind children, migrant children and local children in China, and to compare the risks of behavioral problems among the three types of children. Data on 4479 children aged 6-16 used in this study were from a survey conducted in China in 2017. The school-age version of the Children Behavior Checklist was used to measure children's behavioral problems. Descriptive analysis, correlation analysis, and logistic regressions were conducted. The prevalence of behavioral problems was 18.80% and 13.59% for left-behind children and migrant children, respectively, both of which were higher than that of local children. Logistic regression analysis showed that after adjustments for individual and environmental variables, the likelihood of total, internalizing and externalizing behavior problems for left-behind children and migrant children were higher than those for local children; left-behind children had a higher likelihood of internalizing problems than externalizing problems, while migrant children had a higher prevalence of externalizing problems. Left-behind children had a higher prevalence of each specific syndrome than migrant and local children. Both individual and environmental factors were associated with child behavioral problems, and family migration may contribute to the increased risks. Left-behind and migrant children were more vulnerable than local children to behavioral problems.

  18. A Comparative Study of Behavior Problems among Left-Behind Children, Migrant Children and Local Children

    PubMed Central

    Hu, Hongwei; Gao, Jiamin; Jiang, Haochen; Jiang, Haixia; Guo, Shaoyun; Chen, Kun; Jin, Kaili; Qi, Yingying

    2018-01-01

    This study aims to estimate the prevalence of behavioral problems among left-behind children, migrant children and local children in China, and to compare the risks of behavioral problems among the three types of children. Data on 4479 children aged 6–16 used in this study were from a survey conducted in China in 2017. The school-age version of the Children Behavior Checklist was used to measure children’s behavioral problems. Descriptive analysis, correlation analysis, and logistic regressions were conducted. The prevalence of behavioral problems was 18.80% and 13.59% for left-behind children and migrant children, respectively, both of which were higher than that of local children. Logistic regression analysis showed that after adjustments for individual and environmental variables, the likelihood of total, internalizing and externalizing behavior problems for left-behind children and migrant children were higher than those for local children; left-behind children had a higher likelihood of internalizing problems than externalizing problems, while migrant children had a higher prevalence of externalizing problems. Left-behind children had a higher prevalence of each specific syndrome than migrant and local children. Both individual and environmental factors were associated with child behavioral problems, and family migration may contribute to the increased risks. Left-behind and migrant children were more vulnerable than local children to behavioral problems. PMID:29614783

  19. Perceptual support promotes strategy generation: Evidence from equation solving.

    PubMed

    Alibali, Martha W; Crooks, Noelle M; McNeil, Nicole M

    2017-08-30

    Over time, children shift from using less optimal strategies for solving mathematics problems to using better ones. But why do children generate new strategies? We argue that they do so when they begin to encode problems more accurately; therefore, we hypothesized that perceptual support for correct encoding would foster strategy generation. Fourth-grade students solved mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __) in a pre-test. They were then randomly assigned to one of three perceptual support conditions or to a Control condition. Participants in all conditions completed three mathematical equivalence problems with feedback about correctness. Participants in the experimental conditions received perceptual support (i.e., highlighting in red ink) for accurately encoding the equal sign, the right side of the equation, or the numbers that could be added to obtain the correct solution. Following this intervention, participants completed a problem-solving post-test. Among participants who solved the problems incorrectly at pre-test, those who received perceptual support for correctly encoding the equal sign were more likely to generate new, correct strategies for solving the problems than were those who received feedback only. Thus, perceptual support for accurate encoding of a key problem feature promoted generation of new, correct strategies. Statement of Contribution. What is already known on this subject? With age and experience, children shift to using more effective strategies for solving math problems. Problem encoding also improves with age and experience. What does the present study add? Support for encoding the equal sign led children to generate correct strategies for solving equations. Improvements in problem encoding are one source of new strategies. © 2017 The British Psychological Society.
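
    The contrast between a typical incorrect strategy and a correct one on problems of this form can be shown with a toy sketch (the strategy names and encoding are ours, not the study's materials):

```python
def add_all(problem):
    """A common incorrect strategy: ignore the equal sign, sum every number."""
    left, right = problem
    return sum(left) + sum(right)

def equalize(problem):
    """A correct strategy: choose the answer that makes both sides equal."""
    left, right = problem
    return sum(left) - sum(right)

# 3 + 4 + 5 = 3 + __  ->  left addends (3, 4, 5), known right addend (3,)
problem = ((3, 4, 5), (3,))
wrong = add_all(problem)       # typical pre-test answer
right_ans = equalize(problem)  # makes the equation balance
```

    Encoding the equal sign correctly is exactly what separates the two functions: `add_all` treats "=" as "write the total here", while `equalize` treats it as a relation between the two sides.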

  20. Forward-looking Assimilation of MODIS-derived Snow Covered Area into a Land Surface Model

    NASA Technical Reports Server (NTRS)

    Zaitchik, Benjamin F.; Rodell, Matthew

    2008-01-01

    Snow cover over land has a significant impact on the surface radiation budget, turbulent energy fluxes to the atmosphere, and local hydrological fluxes. For this reason, inaccuracies in the representation of snow covered area (SCA) within a land surface model (LSM) can lead to substantial errors in both offline and coupled simulations. Data assimilation algorithms have the potential to address this problem. However, the assimilation of SCA observations is complicated by an information deficit in the observation (SCA indicates only the presence or absence of snow, not snow volume) and by the fact that assimilated SCA observations can introduce inconsistencies with atmospheric forcing data, leading to non-physical artifacts in the local water balance. In this paper we present a novel assimilation algorithm that introduces MODIS SCA observations to the Noah LSM in global, uncoupled simulations. The algorithm utilizes observations from up to 72 hours ahead of the model simulation in order to correct against emerging errors in the simulation of snow cover while preserving the local hydrologic balance. This is accomplished by using future snow observations to adjust air temperature and, when necessary, precipitation within the LSM. In global, offline integrations, this new assimilation algorithm provided improved simulation of SCA and snow water equivalent relative to open loop integrations and integrations that used an earlier SCA assimilation algorithm. These improvements, in turn, influenced the simulation of surface water and energy fluxes both during the snow season and, in some regions, on into the following spring.
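
    A heavily simplified, hypothetical sketch of the forward-looking idea, using a toy degree-day snow model (all parameters and the nudging rule are illustrative; the actual Noah/MODIS algorithm is far more elaborate). The key point it illustrates: when an upcoming SCA observation disagrees with the simulated snow state, the forcing temperature is nudged instead of the snow store being overwritten, so the local water balance stays closed.

```python
def assimilate_sca(temps, sca_obs, model_swe0=0.0,
                   melt_rate=1.0, accum=1.0, lookahead=3, nudge=2.0):
    """Toy forward-looking SCA assimilation (hypothetical scheme).

    temps: air-temperature forcing per step; sca_obs: booleans, snow
    observed or not. Future observations (up to `lookahead` steps ahead)
    steer the forcing; snow itself evolves only through the toy
    degree-day model, preserving the water balance."""
    swe = model_swe0
    history = []
    for k, T in enumerate(temps):
        future = sca_obs[k:k + lookahead]      # look-ahead window
        if any(future) and swe <= 0 and T > 0:
            T -= nudge                         # cool: let snow appear/persist
        elif not any(future) and swe > 0 and T <= 0:
            T += nudge                         # warm: let spurious snow melt
        if T <= 0:
            swe += accum                       # fixed toy accumulation
        else:
            swe = max(0.0, swe - melt_rate * T)  # degree-day melt
        history.append(swe)
    return history

# Upcoming observations show snow, model has none: forcing is cooled
h1 = assimilate_sca([1.0, 1.0, 1.0, 1.0], [True, True, True, True])
# Observations show no snow, model carries spurious snow: forcing is warmed
h2 = assimilate_sca([-1.0, -1.0, -1.0], [False, False, False], model_swe0=2.0)
```
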

  1. 45 CFR 2522.910 - What basic qualifications must an AmeriCorps member have to serve as a tutor?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (b) Is not considered to be an employee of the Local Education Agency or school, as determined by State law (1) High School diploma or its equivalent, or a higher degree; and(ii) Proficiency test, as... qualifications: (a) Is considered to be an employee of the Local Education Agency or school, as determined by...

  2. A new neural network model for solving random interval linear programming problems.

    PubMed

    Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza

    2017-05-01

    This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second-order cone programming problem. A neural network model is then constructed for solving the obtained convex second-order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
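
    The paper's route is an equivalent second-order cone program solved by neural dynamics; as a much simpler illustration of interval-coefficient reasoning (not the paper's transformation), a worst-case feasibility check over an interval constraint matrix can be sketched as follows, with hypothetical names:

```python
import numpy as np

def robust_feasible(x, A_lo, A_hi, b):
    """Check worst-case feasibility of A x <= b when each entry of A is
    only known to lie in the interval [A_lo, A_hi]."""
    x = np.asarray(x, float)
    A_lo, A_hi = np.asarray(A_lo, float), np.asarray(A_hi, float)
    # Worst case of (a @ x) over the interval box: each term is maximised
    # independently, taking a_hi where x >= 0 and a_lo where x < 0.
    worst = np.where(x >= 0, A_hi, A_lo) @ x
    return bool(np.all(worst <= b))

x = [1.0, 2.0]
A_lo = [[0.5, 0.0]]
A_hi = [[1.0, 0.5]]
ok = robust_feasible(x, A_lo, A_hi, [2.5])   # worst case 1*1 + 0.5*2 = 2.0
bad = robust_feasible(x, A_lo, A_hi, [1.5])
```
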

  3. Spiking cortical model based non-local means method for despeckling multiframe optical coherence tomography data

    NASA Astrophysics Data System (ADS)

    Gu, Yameng; Zhang, Xuming

    2017-05-01

    Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
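
    A stripped-down sketch of the multi-frame non-local means step, using plain intensity patches in place of the SCM-derived NMI features (parameters, window sizes, and names are illustrative): each pixel of the considered frame is restored as a weighted average over a search window in all three frames, with weights from Euclidean patch distances.

```python
import numpy as np

def multiframe_nlm(frames, k, patch=1, search=2, h=1.0):
    """Simplified multi-frame non-local means for frame k.

    Weights come from Euclidean distances between raw intensity patches
    (the paper instead compares rotation/scale-invariant NMI features
    extracted from SCM pulse outputs)."""
    F = [np.pad(f, patch, mode='reflect') for f in frames]
    H, W = frames[k].shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ref = F[k][i:i + 2*patch + 1, j:j + 2*patch + 1]
            num = den = 0.0
            for f in F:                            # all frames contribute
                for di in range(-search, search + 1):
                    for dj in range(-search, search + 1):
                        ii, jj = i + di, j + dj
                        if 0 <= ii < H and 0 <= jj < W:
                            cand = f[ii:ii + 2*patch + 1, jj:jj + 2*patch + 1]
                            w = np.exp(-np.sum((ref - cand) ** 2) / h**2)
                            num += w * f[ii + patch, jj + patch]
                            den += w
            out[i, j] = num / den
    return out

# Toy speckled frames: multiplicative gamma noise on a constant image
rng = np.random.default_rng(1)
clean = np.ones((8, 8))
frames = [clean * rng.gamma(10.0, 1 / 10.0, size=clean.shape) for _ in range(3)]
denoised = multiframe_nlm(frames, k=1)
```
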

  4. Einstein's equivalence principle in quantum mechanics revisited

    NASA Astrophysics Data System (ADS)

    Nauenberg, Michael

    2016-11-01

    The gravitational equivalence principle in quantum mechanics is of considerable importance, but it is generally not included in physics textbooks. In this note, we present a precise quantum formulation of this principle and comment on its verification in a neutron diffraction experiment. The solution of the time dependent Schrödinger equation for this problem also gives the wave function for the motion of a charged particle in a homogeneous electric field, which is also usually ignored in textbooks on quantum mechanics.
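
    The formulation referred to here can be sketched with the standard textbook transformation to the freely falling frame (phase conventions vary between texts):

```latex
i\hbar\,\partial_t \psi
  \;=\; \Bigl(-\tfrac{\hbar^2}{2m}\,\partial_x^2 + m g x\Bigr)\psi,
\qquad
\psi(x,t)
  \;=\; \varphi\!\Bigl(x + \tfrac{1}{2} g t^2,\, t\Bigr)\,
        \exp\!\Bigl[-\tfrac{i}{\hbar}\Bigl(m g t\, x + \tfrac{m g^2 t^3}{6}\Bigr)\Bigr],
```

    where $\varphi$ solves the free-particle Schrödinger equation: up to a phase, uniform gravity is removed by passing to the accelerated frame, which is the quantum content of the equivalence principle. Replacing $mg$ by $qE$ gives the motion of a charge $q$ in a homogeneous electric field mentioned above.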

  5. 41 CFR 102-74.185 - What heating and cooling policy must Federal agencies follow in Federal facilities?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... overall energy efficient and economical manner; (b) Maintain temperatures to maximize customer satisfaction by conforming to local commercial equivalent temperature levels and operating practices; (c) Set...

  6. "Convenience Food."

    ERIC Educational Resources Information Center

    Lemieux, Colette

    1980-01-01

    Defines the meaning of the American expression "convenience food," quoting definitions given by dictionaries and specialized publications. Discusses the problem of finding the exact equivalent of this expression in French, and recommends some acceptable translations. (MES)

  7. Generalized graph states based on Hadamard matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Shawn X.; Yu, Nengkun; Department of Mathematics and Statistics, University of Guelph, Guelph, Ontario N1G 2W1

    2015-07-15

    Graph states are widely used in quantum information theory, including entanglement theory, quantum error correction, and one-way quantum computing. Graph states have a nice structure related to a certain graph, which is given by either a stabilizer group or an encoding circuit, both of which can be directly obtained from the graph. To generalize graph states, whose stabilizer groups are abelian subgroups of the Pauli group, one approach is to study non-abelian stabilizers. In this work, we propose to generalize graph states based on the encoding circuit, which is completely determined by the graph and a Hadamard matrix. We study the entanglement structures of these generalized graph states and show that they are all maximally mixed locally. We also explore the relationship between the equivalence of Hadamard matrices and local equivalence of the corresponding generalized graph states. This leads to a natural generalization of the Pauli (X, Z) pairs, which characterizes the local symmetries of these generalized graph states. Our approach is also naturally generalized to construct graph quantum codes which are beyond stabilizer codes.
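
    For the standard qubit case (Hadamard matrix H), the encoding circuit reduces to preparing |+> on every vertex and applying CZ on every edge. A small statevector sketch (ours, not the paper's construction, which allows arbitrary complex Hadamard matrices and qudits) also checks the "maximally mixed locally" property on a connected graph:

```python
import numpy as np
from itertools import product

def graph_state(adj):
    """Standard qubit graph state: H^{otimes n}|0...0> followed by a CZ
    gate on every edge of the adjacency matrix `adj` (diagonal action)."""
    n = adj.shape[0]
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)
    for k, bits in enumerate(product([0, 1], repeat=n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if adj[i, j] and bits[i] and bits[j]:
                    sign = -sign          # CZ: phase -1 when both qubits are 1
        psi[k] *= sign
    return psi

def reduced_density(psi, n, keep):
    """Single-qubit reduced density matrix of qubit `keep`."""
    t = psi.reshape([2] * n)
    t = np.moveaxis(t, keep, 0).reshape(2, -1)
    return t @ t.conj().T

adj = np.array([[0, 1], [1, 0]])          # single edge: two-qubit graph state
psi = graph_state(adj)
rho = reduced_density(psi, 2, 0)          # should be maximally mixed, I/2
```
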

  8. Reconnaissance for uranium in black shale, Northern Rocky Mountains and Great Plains, 1953

    USGS Publications Warehouse

    Mapel, W.J.

    1954-01-01

    Reconnaissance examinations for uranium in 22 formations containing black shale were conducted in parts of Montana, North Dakota, Utah, Idaho, and Oregon during 1953. About 150 samples from 80 outcrop localities and 5 oil and gas wells were submitted for uranium determinations. Most of the black shale deposits examined contain less than 0.003 percent uranium; however, thin beds of black shale at the base of the Mississippian system contain 0.005 percent uranium at 2 outcrop localities in southwestern Montana and as much as 0.007 percent uranium in a well in northeastern Montana. An eight-foot bed of phosphatic black shale at the base of the Brazer limestone of Late Mississippian age in Rich County, Utah, contains as much as 0.009 percent uranium. Commercial gamma ray logs of oil and gas wells drilled in Montana and adjacent parts of the Dakotas indicate that locally the Heath shale of Late Mississippian age contains as much as 0.01 percent equivalent uranium, and black shales of Late Cretaceous age contain as much as 0.008 percent equivalent uranium.

  9. Modelling and approaching pragmatic interoperability of distributed geoscience data

    NASA Astrophysics Data System (ADS)

    Ma, Xiaogang

    2010-05-01

    Interoperability of geodata, which is essential for sharing information and discovering insights within a cyberinfrastructure, is receiving increasing attention. A key requirement of interoperability in the context of geodata sharing is that data provided by local sources can be accessed, decoded, understood and appropriately used by external users. Various researchers have discussed that there are four levels of data interoperability issues: system, syntax, schematics and semantics, which respectively relate to the platform, encoding, structure and meaning of geodata. Ontology-driven approaches addressing the schematic and semantic interoperability issues of geodata have been studied extensively in the last decade. Ontologies come in different types (e.g., top-level ontologies, domain ontologies and application ontologies) and display forms (e.g., glossaries, thesauri, conceptual schemas and logical theories). Many geodata providers maintain their own local application ontologies in order to drive standardization in local databases. However, semantic heterogeneities often exist between these local ontologies, even when they are derived from equivalent disciplines. In contrast, common ontologies are being studied in different geoscience disciplines (e.g., NADM, SWEET, etc.) as a standardization procedure to coordinate diverse local ontologies. Semantic mediation, e.g. mapping between local ontologies, or mapping local ontologies to common ontologies, has been studied as an effective way of achieving semantic interoperability between local ontologies and thus reconciling semantic heterogeneities in multi-source geodata. Nevertheless, confusion still exists in the research field of semantic interoperability. One problem is caused by eliminating elements of local pragmatic contexts in semantic mediation.
    Compared with the context-independent nature of a common domain ontology, local application ontologies are closely related to elements (e.g., people, time, location, intention, procedure, consequence, etc.) of local pragmatic contexts and are thus context-dependent. Eliminating these elements will inevitably lead to information loss in semantic mediation between local ontologies. Correspondingly, the understanding and effect of exchanged data in a new context may differ from that in its original context. Another problem is the dilemma of how to find a balance between flexibility and standardization of local ontologies, because ontologies are not fixed but continuously evolving. It is commonly realized that we cannot use a unified ontology to replace all local ontologies, because the latter are context-dependent and need flexibility. However, without the coordination of standards, freely developed local ontologies and databases will require an enormous amount of mediation work between them. Finding a balance between standardization and flexibility for evolving ontologies, in a practical sense, requires negotiations (i.e. conversations, agreements and collaborations) between different local pragmatic contexts. The purpose of this work is to set up a computer-friendly model representing local pragmatic contexts (i.e. geodata sources), and to propose a practical semantic negotiation procedure for approaching pragmatic interoperability between local pragmatic contexts. Information agents, objective facts and subjective dimensions are reviewed as elements of a conceptual model for representing pragmatic contexts. The author uses them to derive a practical semantic negotiation procedure for approaching pragmatic interoperability of distributed geodata.
The proposed conceptual model and semantic negotiation procedure were encoded with Description Logic, and then applied to analyze and manipulate semantic negotiations between different local ontologies within the National Mineral Resources Assessment (NMRA) project of China, which involves multi-source and multi-subject geodata sharing.

  10. Providing Access to Developmental Reading Courses at the Community College: An Evaluation of Three Presentation Modes

    ERIC Educational Resources Information Center

    Phillips, Susan K.

    2010-01-01

    Rural community colleges often face the problem of having to cancel classes due to low enrollment. To eliminate this problem one western community college developed several presentation modes for College Reading I (CR1) to combine low-enrollment classes. This study was a program evaluation on non-equivalent groups to determine which presentation…

  11. Effects of a Target-Task Problem-Solving Model on Senior Secondary School Students' Performance in Physics

    ERIC Educational Resources Information Center

    Olaniyan, A. O.; Omosewo, E. O.

    2015-01-01

    The study investigated the Effects of a Target-Task Problem-Solving Model on Senior Secondary School Students' Performance in Physics. The research design was a quasi-experimental, non-randomized, non-equivalent pretest, post-test using a control group. The study was conducted in two schools purposively selected and involved a total of 120 Senior…

  12. Associations between Mental Health Problems and Challenging Behavior in Adults with Intellectual Disabilities: A Test of the Behavioral Equivalents Hypothesis

    ERIC Educational Resources Information Center

    Painter, Jon; Hastings, Richard; Ingham, Barry; Trevithick, Liam; Roy, Ashok

    2018-01-01

    Introduction: Current research findings in the field of intellectual disabilities (ID) regarding the relationship between mental health problems and challenging behavior are inconclusive and/or contradictory. The aim of this study was to further investigate the putative association between these two highly prevalent phenomena in people with ID,…

  13. Geometry of Quantum Computation with Qudits

    PubMed Central

    Luo, Ming-Xing; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-01-01

    The circuit complexity of quantum qubit system evolution as a primitive problem in quantum computation has been discussed widely. We investigate this problem for qudit systems. Using Riemannian geometry, the optimal quantum circuits are equivalent to geodesic evolutions in a specially curved parametrization of SU(d^n). The quantum circuit complexity is explicitly dependent on a controllable approximation error bound. PMID:24509710

  14. Hybrid annealing: Coupling a quantum simulator to a classical computer

    NASA Astrophysics Data System (ADS)

    Graß, Tobias; Lewenstein, Maciej

    2017-05-01

    Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Annealing strategies, either classical or quantum, explore the configuration space by evolving the system under the influence of thermal or quantum fluctuations. The thermal annealing dynamics can rapidly freeze the system into a low-energy configuration, and it can be simulated well on a classical computer, but it easily gets stuck in local minima. Quantum annealing, on the other hand, can be guaranteed to find the true ground state and can be implemented in modern quantum simulators; however, quantum adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here, we propose a strategy which combines ideas from simulated annealing and quantum annealing. In such a hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configuration and enforces a lowering of the energy. We have simulated this algorithm for small instances of the random energy model, showing that it potentially outperforms both simulated thermal annealing and adiabatic quantum annealing. It becomes most efficient for problems involving many quasidegenerate ground states.
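
    The hybrid loop can be caricatured in a few lines, with random multi-spin flips standing in for the quantum simulator's measured configurations (a toy sketch; the actual proposal uses genuine quantum fluctuations and larger random-energy-model instances). The classical device's only job is to evaluate each measured configuration and keep it only if the energy decreased.

```python
import random

def hybrid_anneal(energy, n_bits, steps=500, flip_frac=0.3, seed=0):
    """Hybrid annealing sketch: a stochastic proposal step (standing in
    for the quantum simulator's projective measurement) is filtered by a
    classical device that accepts only energy-lowering configurations."""
    rng = random.Random(seed)
    config = [rng.randint(0, 1) for _ in range(n_bits)]
    e = energy(config)
    history = [e]
    for _ in range(steps):
        # stand-in for the configuration measured after quantum fluctuations
        proposal = [b ^ (rng.random() < flip_frac) for b in config]
        e_new = energy(proposal)
        if e_new < e:                      # classical post-selection
            config, e = proposal, e_new
            history.append(e)
    return config, e, history

# Random-energy-model-like toy instance: an independent random energy is
# drawn (lazily) for every bitstring.
table, tab_rng = {}, random.Random(42)
def energy(cfg):
    key = tuple(cfg)
    if key not in table:
        table[key] = tab_rng.uniform(-1.0, 1.0)
    return table[key]

best, e_best, history = hybrid_anneal(energy, n_bits=8)
```

    By construction the accepted-energy trace is monotonically decreasing; what the quantum proposal buys in the real scheme is the ability to tunnel between distant configurations that single-spin thermal moves cannot reach.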

  15. A Convex Formulation for Learning a Shared Predictive Structure from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Tang, Lei; Liu, Jun; Ye, Jieping

    2013-01-01

    In this paper, we consider the problem of learning from multiple related tasks for improved generalization performance by extracting their shared structures. The alternating structure optimization (ASO) algorithm, which couples all tasks using a shared feature representation, has been successfully applied in various multitask learning problems. However, ASO is nonconvex and the alternating algorithm only finds a local solution. We first present an improved ASO formulation (iASO) for multitask learning based on a new regularizer. We then convert iASO, a nonconvex formulation, into a relaxed convex one (rASO). Interestingly, our theoretical analysis reveals that rASO finds a globally optimal solution to its nonconvex counterpart iASO under certain conditions. rASO can be equivalently reformulated as a semidefinite program (SDP), which is, however, not scalable to large datasets. We propose to employ the block coordinate descent (BCD) method and the accelerated projected gradient (APG) algorithm separately to find the globally optimal solution to rASO; we also develop efficient algorithms for solving the key subproblems involved in BCD and APG. The experiments on the Yahoo webpages datasets and the Drosophila gene expression pattern images datasets demonstrate the effectiveness and efficiency of the proposed algorithms and confirm our theoretical analysis. PMID:23520249
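
    A loose sketch of the nonconvex ASO-style alternation that rASO relaxes (the penalty form, names, and update are simplified stand-ins, not the paper's iASO/rASO formulation): per-task regressions are biased toward a shared low-dimensional subspace, which is then refit by SVD of the stacked task weights.

```python
import numpy as np

def aso_alternate(X_tasks, y_tasks, h=1, lam=0.1, iters=20):
    """Alternating-structure sketch: alternate between (a) per-task ridge
    regressions with a reduced penalty inside the shared subspace Theta,
    and (b) an SVD update of Theta from the stacked task weights."""
    d = X_tasks[0].shape[1]
    m = len(X_tasks)
    Theta = np.eye(d)[:h]                    # h x d, rows orthonormal
    W = np.zeros((d, m))
    for _ in range(iters):
        P = Theta.T @ Theta                  # projector onto shared subspace
        for t in range(m):
            X, y = X_tasks[t], y_tasks[t]
            # ridge with weaker penalty along the shared directions
            A = X.T @ X + lam * (np.eye(d) - 0.5 * P)
            W[:, t] = np.linalg.solve(A, X.T @ y)
        U, _, _ = np.linalg.svd(W, full_matrices=False)
        Theta = U[:, :h].T                   # refit shared subspace
    return W, Theta

# Two synthetic tasks sharing a dominant weight direction
rng = np.random.default_rng(3)
X1, X2 = rng.normal(size=(30, 3)), rng.normal(size=(30, 3))
y1 = X1 @ np.array([1.0, 0.0, 0.0])
y2 = X2 @ np.array([1.0, 0.2, 0.0])
W, Theta = aso_alternate([X1, X2], [y1, y2])
```

    This alternation only finds a local solution, which is precisely the limitation the convex relaxation (rASO) in the abstract is designed to remove.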

  16. Solving transient acoustic boundary value problems with equivalent sources using a lumped parameter approach.

    PubMed

    Fahnline, John B

    2016-12-01

    An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.
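
    A deliberately oversimplified sketch of the amplitude-matching step, ignoring retardation (quasi-static coupling) and using hypothetical monopole influence coefficients; the actual method marches on in time with retarded fields and simple, dipole, or tripole sources. The structure it illustrates: at each time step, a linear system is solved so the discrete sources reproduce the prescribed elemental normal velocities.

```python
import numpy as np

def march_source_amplitudes(centers, normals, v_surface, coupling):
    """At each time step, solve for source amplitudes that reproduce the
    prescribed elemental normal-velocity vector (quasi-static sketch)."""
    M = coupling(centers, normals)                 # influence matrix
    return [np.linalg.solve(M, v_k) for v_k in v_surface]

def monopole_coupling(centers, normals, eps=1e-3):
    """Hypothetical quasi-static monopole influence: normal velocity at
    element i induced by a unit source at element j (eps regularises the
    self term so the matrix stays invertible)."""
    n = len(centers)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            r = centers[i] - centers[j]
            d = np.linalg.norm(r) + eps
            M[i, j] = (normals[i] @ r + eps) / (4 * np.pi * d**3)
    return M

centers = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
normals = np.array([[0., 0., 1.], [0., 0., 1.], [0., 0., 1.]])
v_steps = [np.array([1., 0., 0.]), np.array([0., 1., 0.])]  # per time step
amps = march_source_amplitudes(centers, normals, v_steps, monopole_coupling)
```
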

  17. Bottom-up modeling of damage in heterogeneous quasi-brittle solids

    NASA Astrophysics Data System (ADS)

    Rinaldi, Antonio

    2013-03-01

    The theoretical modeling of multisite cracking in quasi-brittle materials is a complex damage problem, hard to model with traditional methods of fracture mechanics due to its multiscale nature and to the strain localization induced by microcrack interaction. Macroscale "effective" elastic models can be conveniently applied if a suitable Helmholtz free energy function is identified for a given material scenario. Del Piero and Truskinovsky (Continuum Mech Thermodyn 21:141-171, 2009), among other authors, investigated macroscale continuum solutions capable of matching—in a top-down view—the phenomenology of the damage process for quasi-brittle materials regardless of the microstructure. On the contrary, this paper features a physically based solution method that starts from the direct consideration of the microscale properties and, in a bottom-up view, recovers a continuum elastic description. This procedure is illustrated for a simple one-dimensional problem of this type, a bar stretched by an axial displacement, where the bar is modeled as a 2D random lattice of decohesive spring elements of finite strength. The (microscale) data from simulations are used to identify the "exact" (macro-) damage parameter and to build up the (macro-) Helmholtz function for the equivalent elastic model, bridging to the macroscale approach of Del Piero and Truskinovsky. The elastic approach, coupled with microstructural knowledge, becomes a more powerful tool to reproduce a broad class of macroscopic material responses by changing the convexity-concavity of the Helmholtz energy. The analysis points out that mean-field statistics are appropriate prior to damage localization, but max-field statistics are better suited in the softening regime up to failure, where microstrain fluctuations need to be incorporated in the continuum model. This observation is of consequence for revising mean-field damage models from the literature and for calibrating Nth-gradient continuum models.

  18. Affordance Equivalences in Robotics: A Formalism

    PubMed Central

    Andries, Mihai; Chavez-Garcia, Ricardo Omar; Chatila, Raja; Giusti, Alessandro; Gambardella, Luca Maria

    2018-01-01

    Automatic knowledge grounding is still an open problem in cognitive robotics. Recent research in developmental robotics suggests that a robot's interaction with its environment is a valuable source of knowledge about the effects of the robot's actions. A useful concept for this process is that of an affordance, defined as a relationship between an actor, an action performed by this actor, an object on which the action is performed, and the resulting effect. This paper proposes a formalism for defining and identifying affordance equivalence. By comparing the elements of two affordances, we can identify equivalences between affordances and thus acquire grounded knowledge for the robot. This is useful when changes occur in the set of actions or objects available to the robot, allowing it to find alternative paths to reach goals. In the experimental validation phase we verify whether the recorded interaction data are coherent with the identified affordance equivalences. This is done by querying a Bayesian network that serves as a container for the collected interaction data and verifying that affordances considered equivalent yield the same effect with high probability. PMID:29937724

  19. Measurement theory in local quantum physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okamura, Kazuya, E-mail: okamura@math.cm.is.nagoya-u.ac.jp; Ozawa, Masanao, E-mail: ozawa@is.nagoya-u.ac.jp

    In this paper, we aim to establish foundations of measurement theory in local quantum physics. For this purpose, we discuss a representation theory of completely positive (CP) instruments on arbitrary von Neumann algebras. We introduce a condition called the normal extension property (NEP) and establish a one-to-one correspondence between CP instruments with the NEP and statistical equivalence classes of measuring processes. We show that every CP instrument on an atomic von Neumann algebra has the NEP, extending the well-known result for type I factors. Moreover, we show that every CP instrument on an injective von Neumann algebra is approximated by CP instruments with the NEP. The concept of posterior states is also discussed to show that the NEP is equivalent to the existence of a strongly measurable family of posterior states for every normal state. Two examples of CP instruments without the NEP are obtained from this result. It is thus concluded that in local quantum physics not every CP instrument represents a measuring process, but in most physically relevant cases every CP instrument can be realized by a measuring process within arbitrary error limits, as every approximately finite dimensional von Neumann algebra on a separable Hilbert space is injective. To conclude the paper, the concept of local measurement in algebraic quantum field theory is examined in our framework. In the setting of the Doplicher-Haag-Roberts and Doplicher-Roberts theory describing local excitations, we show that an instrument on a local algebra can be extended to a local instrument on the global algebra if and only if it is a CP instrument with the NEP, provided that the split property holds for the net of local algebras.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wintermeyer, Niklas; Winters, Andrew R., E-mail: awinters@math.uni-koeln.de; Gassner, Gregor J.

    We design an arbitrarily high-order accurate nodal discontinuous Galerkin spectral element approximation for the non-linear two-dimensional shallow water equations with non-constant, possibly discontinuous, bathymetry on unstructured, possibly curved, quadrilateral meshes. The scheme is derived from an equivalent flux differencing formulation of the split form of the equations. We prove that this discretization exactly preserves the local mass and momentum. Furthermore, combined with a special numerical interface flux function, the method exactly preserves the mathematical entropy, which is the total energy for the shallow water equations. By adding a specific form of interface dissipation to the baseline entropy conserving scheme, we create a provably entropy stable scheme. That is, the numerical scheme discretely satisfies the second law of thermodynamics. Finally, with a particular discretization of the bathymetry source term we prove that the numerical approximation is well-balanced. We provide numerical examples that verify the theoretical findings and furthermore provide an application of the scheme to a partial break of a curved dam test problem.

  1. The costs of housing developments on sites with elevated landslide risk in the UK

    NASA Astrophysics Data System (ADS)

    Barclay, K.; Heath, A.

    2015-09-01

    New housing targets are being set for local planning authorities, resulting in more areas being zoned for development. There is currently no requirement for a landslide assessment prior to this zoning, and sites at elevated risk of landslides are being put forward for development without consideration of the additional costs and other impacts of building on these higher-risk sites. This study aimed to reveal the increased financial, economic, social and environmental costs associated with these decisions. Case studies were focused on the city of Bath, an area of increasing population and "one of the most intensely landslipped areas in Britain". The case studies found the financial costs associated with building in a landslide risk area to be significantly higher than for equivalent construction in areas of greater geological stability. Furthermore, it was found that uncertainty in cost when developing in unstable areas exacerbates this problem, as the final cost cannot be accurately predicted before construction.

  2. Identifying Ranges of Stellar Ages and Metallicities for Blue Supergiants in the Starburst Galaxy IC 10

    NASA Astrophysics Data System (ADS)

    Bordwell, Baylee; Ho, N.; Geha, M. C.; West, M.

    2014-01-01

    Dwarf galaxies transition from active star formation to relative quiescence after entering a dense environment such as a galaxy cluster. However, the mechanism behind this change is not fully understood. The problem is complicated by its heavy dependence on the initial conditions of the galaxy in question. To investigate the conditions of a galaxy prior to transition, we chose one of the best and nearest examples of a dwarf with active star formation, the Local Group member IC 10. We have obtained DEIMOS spectra of blue supergiants in this galaxy and determined the range of metallicities and ages for these stars using the equivalent width of the calcium triplet feature and isochrone fitting to photometry. By looking at the distribution of these metallicities in space and time we are able to gain insight into IC 10's recent evolutionary history and to get a clearer picture of the physical state of a dwarf galaxy prior to transition.
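The equivalent-width measurement mentioned above can be sketched on a toy spectrum. The Gaussian line profile, wavelengths, and line depth below are illustrative stand-ins, not IC 10 data:

```python
import numpy as np

# Equivalent width of an absorption feature: EW = integral of
# (1 - F/F_continuum) over wavelength, here on a synthetic spectrum
# around one line of the Ca II triplet (8542 Angstrom).
wav = np.linspace(8480.0, 8560.0, 400)              # Angstrom
cont = np.ones_like(wav)                            # flat unit continuum
flux = cont - 0.5 * np.exp(-(wav - 8542.0) ** 2 / (2 * 2.0 ** 2))

depth = 1.0 - flux / cont                           # fractional absorption
ew = np.sum(depth[:-1] * np.diff(wav))              # EW in Angstrom
```

For this Gaussian line (depth 0.5, sigma 2 Angstrom) the analytic EW is 0.5 * sigma * sqrt(2*pi), so the numerical value should land near 2.5 Angstrom.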

  3. Final report, PT IP-535-C: Test of smaller VSR`s in DR reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaughn, A.D.

    1963-04-17

    Because of rod-sticking problems at DR Reactor, a knuckle rod of B Reactor design was installed in vertical safety channel number 28. The substitute VSR, which has a smaller diameter than the original DR rod, was tested for its operational feasibility, including both drop time and reactivity effect. The reactivity effect of the rod was estimated by comparing the reactivity transient caused by insertion of the specific B-type rod after scramming into the pile with similar transients caused by a normal vertical safety rod in an adjacent channel. This document lists the indicated relative control strength of the rod as an empirical basis for future safety calculations. Results indicate that the B-type knuckle rod in DR Reactor is about 80% as strong as a normal DR vertical safety rod if used in an equivalent flux distribution and location; this makes it reasonable to assume that the local control strength of the B-type knuckle rod is 98 μb.

  4. Study on Locally Confined Deposition of Si Nanocrystals in High-Aspect-Ratio Si Nano-Pillar Arrays for Nano-Electronic and Nano-Photonic Applications II

    DTIC Science & Technology

    2010-12-03

    ...photoluminescence characteristics of equivalent-size controlled silicon quantum dots by employing a nano-porous aluminum oxide membrane as the template for growing... synthesis of Si quantum dots (Si-QDs) embedded in low-temperature (500 °C) annealed Si-rich SiOx nano-rods deposited in nano-porous anodic aluminum oxide... characteristics of the equivalent-size controlled Si-QDs by employing the nano-porous AAO membrane as the template for growing Si-rich SiOx nano-rods

  5. Frequencies and Flutter Speed Estimation for Damaged Aircraft Wing Using Scaled Equivalent Plate Analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2010-01-01

    Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in the initial or conceptual design stages of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using the geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor, combined with the geometric scale factor, is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency and geometric scale factors. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage. Both problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.
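The average frequency scale factor described above can be sketched in a few lines; the mode frequencies here are hypothetical, not values from the paper:

```python
# Average frequency scale factor: the mean ratio of aircraft-wing mode
# frequencies to the matching full-scale equivalent-plate frequencies,
# then used to map plate frequencies back to wing frequencies.
wing_freqs = [12.1, 31.5, 44.9]    # Hz, hypothetical aircraft-wing modes
plate_freqs = [11.5, 30.0, 43.0]   # Hz, matching equivalent-plate modes

s_freq = sum(w / p for w, p in zip(wing_freqs, plate_freqs)) / len(wing_freqs)
predicted = [s_freq * p for p in plate_freqs]   # estimated wing frequencies
```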

  6. Messen, Kalibrieren, Eichen in der Radiologie: Prinzipien und Praxis

    NASA Astrophysics Data System (ADS)

    Wagner, Siegfried R.

    Measurements, Calibration, Verification in Radiology: Principles and Practice. After an introductory explanation of the different measuring conditions in radiotherapy and in radiation protection, the metrological problems are discussed, taking the dose equivalent as an exemplary category of quantity. Effective dose equivalent and ambient dose equivalent are introduced as special quantities. It is demonstrated how correct measurements can be secured by a consistent system of instrument pattern requirements, by calibration, and by verification. The importance of measurement uncertainties and error limits is explained, and their influence on the interpretation of measurement results is treated.

  7. Analysis for nickel (3 and 4) in positive plates from nickel-cadmium cells

    NASA Technical Reports Server (NTRS)

    Lewis, Harlan L.

    1994-01-01

    The NASA-Goddard procedure for destructive physical analysis (DPA) of nickel-cadmium cells contains a method for analysis of residual charged nickel as NiOOH in the positive plates at complete cell discharge, also known as nickel precharge. In the method, the Ni(III) is treated with an excess of an Fe(II) reducing agent and then back titrated with permanganate. The Ni(III) content is the difference between Fe(II) equivalents and permanganate equivalents. Problems have arisen in analysis at NAVSURFWARCENDIV, Crane because for many types of cells, particularly AA-size and some 'space-qualified' cells, zero or negative Ni(III) contents are recorded for which the manufacturer claims 3-5 percent precharge. Our approach to this problem was to reexamine the procedure for the source of error, and correct it or develop an alternative method.
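The difference calculation at the heart of the back-titration method can be sketched with illustrative volumes and normalities (the numbers are hypothetical, not from the NASA-Goddard procedure):

```python
# Ni(III) content by back titration: milliequivalents of Ni(III) equal the
# total Fe(II) reductant added minus the KMnO4 consumed in the back titration.
# A zero or negative result is the anomaly discussed in the abstract.
fe2_vol_ml, fe2_normality = 25.0, 0.10       # excess Fe(II) reductant added
kmno4_vol_ml, kmno4_normality = 22.0, 0.10   # permanganate back titrant used

meq_fe2 = fe2_vol_ml * fe2_normality
meq_kmno4 = kmno4_vol_ml * kmno4_normality
meq_ni3 = meq_fe2 - meq_kmno4                # residual charged nickel (precharge)
```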

  8. Synthetic resistivity calculations for the canonical depth-to-bedrock problem: A critical examination of the thin interbed problem and electrical equivalence theories

    NASA Astrophysics Data System (ADS)

    Weiss, C. J.; Knight, R.

    2009-05-01

    One of the key factors in the sensible inference of subsurface geologic properties from both field and laboratory experiments is the ability to quantify the linkages between inherently fine-scale structures, such as bedding planes and fracture sets, and their macroscopic expression through geophysical interrogation. Central to this idea is the concept of a "minimal sampling volume" over which a given geophysical method responds to an effective medium property whose value is dictated by the geometry and distribution of sub-volume heterogeneities as well as the experiment design. In this contribution we explore the concept of effective resistivity volumes for the canonical depth-to-bedrock problem subject to industry-standard DC resistivity survey designs. Four models representing a sedimentary overburden and a flat bedrock interface were analyzed through numerical experiments with six different resistivity arrays. In each of the four models, the sedimentary overburden consists of thinly interbedded resistive and conductive laminations, with equivalent volume-averaged resistivity but differing lamination thickness, geometry, and layering sequence. The numerical experiments show striking differences in the apparent resistivity pseudo-sections which belie the volume-averaged equivalence of the models. These models constitute the synthetic data set offered for inversion in this Back to Basics Resistivity Modeling session and offer the promise of furthering our understanding of how the sampling volume, as affected by survey design, can be constrained by joint-array inversion of resistivity data.
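A simple illustration of why volume-averaged equivalence underdetermines the electrical response: for the same stack of laminations, current crossing the bedding sees the arithmetic mean of resistivity, while current flowing along the bedding sees the harmonic mean. The resistivity values and fractions below are illustrative:

```python
# Effective resistivity of a 50/50 stack of resistive/conductive laminations.
rho = [1000.0, 10.0]   # ohm-m, resistive and conductive laminations
f = [0.5, 0.5]         # volume fractions

# current perpendicular to bedding: layers in series -> arithmetic mean
rho_across = sum(fi * ri for fi, ri in zip(f, rho))
# current parallel to bedding: layers in parallel -> harmonic mean
rho_along = 1.0 / sum(fi / ri for fi, ri in zip(f, rho))
```

The two directions differ by more than a factor of 25 here even though the volume-averaged resistivity is identical, which is the anisotropy the synthetic models exploit.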

  9. An implementation problem for boson fields and quantum Girsanov transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Un Cig, E-mail: uncigji@chungbuk.ac.kr; Obata, Nobuaki, E-mail: obata@math.is.tohoku.ac.jp

    2016-08-15

    We study an implementation problem for quadratic functions of annihilation and creation operators on a boson field in terms of quantum white noise calculus. The implementation problem is shown to be equivalent to a linear differential equation for white noise operators containing quantum white noise derivatives. The solution is explicitly obtained and turns out to form a class of white noise operators including generalized Fourier–Gauss and Fourier–Mehler transforms, Bogoliubov transform, and a quantum extension of the Girsanov transform.

  10. On a numerical solving of random generated hexamatrix games

    NASA Astrophysics Data System (ADS)

    Orlov, Andrei; Strekalovskiy, Alexander

    2016-10-01

    In this paper, we develop a global search method for finding a Nash equilibrium in a hexamatrix game (a polymatrix game of three players). The method is based, on the one hand, on a theorem establishing the equivalence between the problem of finding a Nash equilibrium in the game and a special mathematical optimization problem and, on the other hand, on the use of Global Search Theory for solving the latter problem. The efficiency of this approach is demonstrated by the results of computational testing.

  11. Singularities of the quad curl problem

    NASA Astrophysics Data System (ADS)

    Nicaise, Serge

    2018-04-01

    We consider the quad curl problem in smooth and non-smooth domains of the space. We first give an augmented variational formulation equivalent to the one from [25] if the datum is divergence-free. We describe the singularities of the variational space, which correspond to those of the Maxwell system with perfectly conducting boundary conditions. The edge and corner singularities of the solution of the corresponding boundary value problem with smooth data are also characterized. We finally obtain some regularity results for the variational solution.

  12. Mimicking Celestial Mechanics in Metamaterials

    DTIC Science & Technology

    2009-09-01

    permittivities and permeabilities and could be related to light dynamics in curved space through the invariance of Maxwell's equations under coordinate... transformations brings the equivalence between curved spacetime and local optical response through spatially dependent permeability and permittivity tensors... with local permeability and permittivity tensors given as μ_ij = ε_ij = δ_ij h_1 h_2 h_3 / (h_i √g_00), where h_i = √g_ii are the Lamé coefficients of the transformation

  13. Patterns of rural household energy use: a study in the White Nile province - the Sudan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdu, A.S.E.

    1985-01-01

    The study investigates rural household domestic energy consumption patterns in a semiarid area of the Sudan. It describes the socioeconomic and environmental context of energy use, provides an estimation of local woody biomass production and evaluates ecological impacts of increased energy demand on the local resource base. It is based on findings derived from field surveys, a systematic questionnaire and participant observations. Findings indicate that households procure traditional fuels by self-collection and purchases. Household members spent on average 20% of their working time gathering fuels. Generally, per caput and total annual expenditure and consumption of domestic fuels are determined by household size, physical availability, storage, prices, income, conservation, substitution and competition among fuel resource uses. Households spend on average 16% of their annual income on traditional fuels. Aggregation of fuels on a heat-equivalent basis and calculation of their contribution shows that on average firewood provides 63%, charcoal 20.7%, dung 10.4%, crop residues 3.4% and kerosene/diesel 2.5% of the total demand for domestic purposes. Estimated total household woodfuel demand exceeds the woody biomass available from the local forests. This demand is presently satisfied by a net depletion of the local forests and purchases from other areas. Degradation of the resource base is further exacerbated by the development of irrigation along the White Nile River, increasing livestock numbers (overgrazing) and forest clearance for rainfed cultivation. The most promising, relevant and appropriate strategies to alleviate rural household domestic energy problems include: conservation of the existing forest, augmentation through village woodlots and community forestry programmes, and improvements in end-use (stoves) and conversion (wood to charcoal) technologies.

  14. New localization mechanism and Hodge duality for q -form field

    NASA Astrophysics Data System (ADS)

    Fu, Chun-E.; Liu, Yu-Xiao; Guo, Heng; Zhang, Sheng-Li

    2016-03-01

    In this paper, we investigate the problem of localization and the Hodge duality for a q-form field on a p-brane with codimension one. By a general Kaluza-Klein (KK) decomposition without gauge fixing, we obtain two Schrödinger-like equations for two types of KK modes of the bulk q-form field, which determine the localization and mass spectra of these KK modes. It is found that there are two types of zero modes (the 0-level modes): a q-form zero mode and a (q−1)-form one, which cannot be localized on the brane at the same time. For the n-level KK modes, there are two interacting KK modes: a massive q-form KK mode and a massless (q−1)-form one. By analyzing gauge invariance of the effective action and choosing a gauge condition, the n-level massive q-form KK mode decouples from the n-level massless (q−1)-form one. It is also found that the Hodge duality in the bulk naturally becomes two dualities on the brane. The first is the Hodge duality between a q-form zero mode and a (p−q−1)-form one, or between a (q−1)-form zero mode and a (p−q)-form one. The second duality is between two groups of KK modes: one is an n-level massive q-form KK mode with mass m_n together with an n-level massless (q−1)-form mode; the other is an n-level (p−q)-form mode with the same mass m_n together with an n-level massless (p−q−1)-form mode. Because of these dualities, the effective field theories on the brane for the KK modes of the two dual bulk form fields are physically equivalent.

  15. Simulations of Flame Acceleration and DDT in Mixture Composition Gradients

    NASA Astrophysics Data System (ADS)

    Zheng, Weilin; Kaplan, Carolyn; Houim, Ryan; Oran, Elaine

    2017-11-01

    Unsteady, multidimensional, fully compressible numerical simulations of methane-air in an obstructed channel with spatial gradients in equivalence ratio have been carried out to determine the effects of the gradients on flame acceleration and transition to detonation (DDT). Results for gradients perpendicular to the propagation direction are considered here. A calibrated, optimized chemical-diffusive model that reproduces correct flame and detonation properties for methane-air over a range of equivalence ratios was derived from a combination of a genetic algorithm with a Nelder-Mead optimization scheme. Inhomogeneous mixtures of methane-air resulted in slower flame acceleration and a longer distance to DDT. Detonations were more likely to decouple into a flame and a shock under sharper concentration gradients. Detailed analyses of temperature and equivalence ratio showed that vertical gradients can greatly affect the formation of the hot spots that initiate detonation by changing the strength of the leading shock wave and the local equivalence ratio near the base of obstacles. This work is supported by the Alpha Foundation (Grant No. AFC215-20).
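The equivalence ratio referred to above is the fuel/oxidizer ratio normalized by its stoichiometric value. A minimal mole-based sketch for methane-air, assuming the overall reaction CH4 + 2 O2 -> CO2 + 2 H2O:

```python
# Equivalence ratio phi = (fuel/oxidizer) / (fuel/oxidizer)_stoichiometric.
# For methane the stoichiometric fuel/O2 mole ratio is 1/2.
def equivalence_ratio(n_fuel, n_o2, stoich_fuel_per_o2=0.5):
    return (n_fuel / n_o2) / stoich_fuel_per_o2

phi_lean = equivalence_ratio(0.8, 2.0)   # phi < 1: lean mixture
phi_rich = equivalence_ratio(1.2, 2.0)   # phi > 1: rich mixture
```

A spatial gradient in equivalence ratio, as in the simulations above, simply means phi varies with position across the channel.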

  16. Semantic relatedness for evaluation of course equivalencies

    NASA Astrophysics Data System (ADS)

    Yang, Beibei

    Semantic relatedness, or its inverse, semantic distance, measures the degree of closeness between two pieces of text determined by their meaning. Related work typically measures semantics based on a sparse knowledge base such as WordNet or Cyc that requires intensive manual efforts to build and maintain. Other work is based on a corpus such as the Brown corpus, or more recently, Wikipedia. This dissertation proposes two approaches to applying semantic relatedness to the problem of suggesting transfer course equivalencies. Two course descriptions are given as input to feed the proposed algorithms, which output a value that can be used to help determine if the courses are equivalent. The first proposed approach uses traditional knowledge sources such as WordNet and corpora for courses from multiple fields of study. The second approach uses Wikipedia, the openly-editable encyclopedia, and it focuses on courses from a technical field such as Computer Science. This work shows that it is promising to adapt semantic relatedness to the education field for matching equivalencies between transfer courses. A semantic relatedness measure using traditional knowledge sources such as WordNet performs relatively well on non-technical courses. However, due to the "knowledge acquisition bottleneck," such a resource is not ideal for technical courses, which use an extensive and growing set of technical terms. To address the problem, this work proposes a Wikipedia-based approach which is later shown to be more correlated to human judgment compared to previous work.
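As a point of contrast with the knowledge-based measures discussed above, a bag-of-words cosine similarity between two course descriptions is the simplest baseline. This is a toy sketch, not the dissertation's WordNet- or Wikipedia-based method:

```python
import math
from collections import Counter

def cosine_sim(a, b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

s = cosine_sim("introduction to data structures and algorithms",
               "data structures and algorithms with applications")
```

Such lexical overlap fails exactly where semantic relatedness helps: two equivalent courses described with different technical vocabulary score near zero, which motivates the corpus- and Wikipedia-based measures above.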

  17. Climate co-benefits of energy recovery from landfill gas in developing Asian cities: a case study in Bangkok.

    PubMed

    Menikpura, S N M; Sang-Arun, Janya; Bengtsson, Magnus

    2013-10-01

    Landfilling is the most common and cost-effective waste disposal method, and it is widely applied throughout the world. In developing countries in Asia there is currently a trend towards constructing sanitary landfills with gas recovery systems, not only as a solution to the waste problem and the associated local environmental pollution, but also to generate revenues through carbon markets and from the sale of electricity. This article presents a quantitative assessment of the climate co-benefits of landfill gas (LFG) to energy projects, based on the case of the Bangkok Metropolitan Administration, Thailand. Life cycle assessment was used for estimating net greenhouse gas (GHG) emissions, considering the whole lifespan of the landfill. The assessment found that the total GHG mitigation of the Bangkok project would be 471,763 tonnes (t) of carbon dioxide (CO2) equivalents (eq) over its 10-year LFG recovery period. This amount is equivalent to only 12% of the methane (CH4) generated over the whole lifespan of the landfill. An alternative scenario was devised to analyse possible improvement options for GHG mitigation through LFG-to-energy recovery projects. This scenario assumes that LFG recovery would commence in the second year of landfill operation and that gas extraction continues throughout the 20-year peak production period. In this scenario, the GHG mitigation potential amounted to 1,639,450 t CO2-eq during the 20-year project period, which is equivalent to 43% of the CH4 generated throughout the life cycle. The results indicate that with careful planning, there is a high potential for improving the efficiency of existing LFG recovery projects, which would enhance climate co-benefits as well as economic benefits. However, the study also shows that even improved gas recovery systems have fairly low recovery rates and, in consequence, that emissions of GHG from such landfill sites are still considerable.
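A back-of-envelope consistency check of the quoted figures (a sketch, not part of the study): dividing each scenario's mitigation by its quoted share of lifetime CH4 should imply roughly the same lifetime total.

```python
# Implied lifetime CH4-derived emissions (t CO2-eq) from each scenario.
total_from_base = 471_763 / 0.12        # 10-year recovery captures 12%
total_from_improved = 1_639_450 / 0.43  # 20-year recovery captures 43%
```

Both ratios land near 3.8-3.9 million t CO2-eq, so the two quoted percentages are mutually consistent to within rounding.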

  18. Tuning the tetrahedrality of the hydrogen-bonded network of water: Comparison of the effects of pressure and added salts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, Saurav, E-mail: saurav7188@gmail.com, E-mail: cyz118212@chemistry.iitd.ac.in; Chakravarty, Charusita

    Experiments and simulations demonstrate some intriguing equivalences in the effect of pressure and electrolytes on the hydrogen-bonded network of water. Here, we examine the extent and nature of equivalence effects between pressure and salt concentration using relationships between structure, entropy, and transport properties based on two key ideas: first, the approximation of the excess entropy of the fluid by the contribution due to the atom-atom pair correlation functions and second, Rosenfeld-type excess entropy scaling relations for transport properties. We perform molecular dynamics simulations of LiCl-H2O and bulk SPC/E water spanning the concentration range 0.025-0.300 mole fraction of LiCl at 1 atm and the pressure range from 0 to 7 GPa, respectively. The temperature range considered was from 225 to 350 K for both systems. To establish that the time-temperature-transformation behaviour of electrolyte solutions and water is equivalent, we use the additional observation based on our simulations that the pair entropy behaves as a near-linear function of pressure in bulk water and of composition in LiCl-H2O. This allows for the alignment of pair entropy isotherms and for a simple mapping of pressure onto composition. Rosenfeld scaling implies that the pair entropy is semiquantitatively related to the transport properties. At a given temperature, equivalent state points in bulk H2O and LiCl-H2O (at 1 atm) are defined as those for which the pair entropy, diffusivity, and viscosity are nearly identical. The microscopic basis for this equivalence lies in the ability of both pressure and ions to convert the liquid phase into a pair-dominated fluid, as demonstrated by the O-O-O angular distribution within the first coordination shell of a water molecule. There are, however, sharp differences in local order and in the mechanisms for the breakdown of tetrahedral order by pressure and electrolytes. Increasing pressure increases orientational disorder within the first neighbour shell, while addition of ions shifts local orientational order from tetrahedral to close-packed as water molecules are incorporated into ionic hydration shells. The variations in local order within the first hydration shell may underlie ion-specific effects, such as the Hofmeister series.
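The pair-correlation (two-body) estimate of the excess entropy used in Rosenfeld-type scaling can be sketched as follows; the g(r) here is a toy profile with a single first-shell peak, not simulation data:

```python
import numpy as np

def pair_entropy(r, g, rho):
    """Two-body excess entropy per particle, in units of k_B:
    s2 = -2*pi*rho * integral of [g ln g - g + 1] r^2 dr."""
    integrand = np.where(g > 0, g * np.log(g) - g + 1.0, 1.0) * r ** 2
    # trapezoidal rule, written out to stay version-independent
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
    return -2.0 * np.pi * rho * integral

r = np.linspace(0.01, 10.0, 2000)
g = 1.0 + np.exp(-(r - 1.0) ** 2 / 0.05) * np.exp(-r / 2.0)  # toy peak at r = 1
s2 = pair_entropy(r, g, rho=0.8)
```

Since x ln x − x + 1 is non-negative, any structure in g(r) makes s2 negative; more structured (lower-s2) states map to lower diffusivities under Rosenfeld scaling, which is the basis of the pressure-composition mapping above.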

  19. Tuning the tetrahedrality of the hydrogen-bonded network of water: Comparison of the effects of pressure and added salts

    NASA Astrophysics Data System (ADS)

    Prasad, Saurav; Chakravarty, Charusita

    2016-06-01

    Experiments and simulations demonstrate some intriguing equivalences in the effect of pressure and electrolytes on the hydrogen-bonded network of water. Here, we examine the extent and nature of equivalence effects between pressure and salt concentration using relationships between structure, entropy, and transport properties based on two key ideas: first, the approximation of the excess entropy of the fluid by the contribution due to the atom-atom pair correlation functions and second, Rosenfeld-type excess entropy scaling relations for transport properties. We perform molecular dynamics simulations of LiCl-H2O and bulk SPC/E water spanning the concentration range 0.025-0.300 mole fraction of LiCl at 1 atm and pressure range from 0 to 7 GPa, respectively. The temperature range considered was from 225 to 350 K for both the systems. To establish that the time-temperature-transformation behaviour of electrolyte solutions and water is equivalent, we use the additional observation based on our simulations that the pair entropy behaves as a near-linear function of pressure in bulk water and of composition in LiCl-H2O. This allows for the alignment of pair entropy isotherms and allows for a simple mapping of pressure onto composition. Rosenfeld-scaling implies that pair entropy is semiquantitatively related to the transport properties. At a given temperature, equivalent state points in bulk H2O and LiCl-H2O (at 1 atm) are defined as those for which the pair entropy, diffusivity, and viscosity are nearly identical. The microscopic basis for this equivalence lies in the ability of both pressure and ions to convert the liquid phase into a pair-dominated fluid, as demonstrated by the O-O-O angular distribution within the first coordination shell of a water molecule. There are, however, sharp differences in local order and mechanisms for the breakdown of tetrahedral order by pressure and electrolytes. 
Increasing pressure increases orientational disorder within the first neighbour shell while addition of ions shifts local orientational order from tetrahedral to close-packed as water molecules get incorporated in ionic hydration shells. The variations in local order within the first hydration shell may underlie ion-specific effects, such as the Hofmeister series.
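The two key ideas the abstract names can be sketched in a few lines: the two-body (pair) excess entropy computed from a radial distribution function g(r), and a Rosenfeld-type exponential scaling of a reduced transport coefficient with that entropy. This is a minimal illustration with a toy g(r); the scaling constants `a` and `b` are illustrative placeholders, not values from the paper.

```python
import numpy as np

def pair_entropy(r, g, rho):
    """Two-body excess entropy per particle, in units of kB:
    s2 = -2*pi*rho * integral of (g ln g - g + 1) r^2 dr."""
    g = np.asarray(g, dtype=float)
    bracket = g * np.log(g) - g + 1.0   # valid for g > 0; >= 0, zero at g = 1
    integrand = bracket * r**2
    # trapezoidal rule
    return -2.0 * np.pi * rho * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))

def rosenfeld_diffusivity(s2, a=0.6, b=0.8):
    """Rosenfeld-type scaling of a reduced diffusivity, D* = a * exp(b * s2);
    a and b are illustrative fit constants, not values from the paper."""
    return a * np.exp(b * s2)

# Toy g(r): one coordination peak decaying to the ideal-gas value 1.
r = np.linspace(0.01, 10.0, 2000)
g = 1.0 + 1.5 * np.exp(-((r - 1.0) ** 2) / 0.02)
s2 = pair_entropy(r, g, rho=0.8)   # negative: structure lowers the entropy
```

More structure (a higher peak) makes s2 more negative and, under the scaling, lowers the diffusivity, which is the sense in which both pressure and salt map onto a single structural coordinate.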

  20. Optimal control of LQR for discrete time-varying systems with input delays

    NASA Astrophysics Data System (ADS)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-varying systems with single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Then, with the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input that minimises the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out, and its results show that our two approaches are both feasible and very effective.
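For reference, the delay-free problem that the paper's duality argument reduces to is the standard finite-horizon discrete LQR, solved by a backward Riccati recursion. The sketch below is the textbook delay-free recursion only (the delay-handling transformation is the paper's contribution and is not reproduced here); the system matrices are illustrative.

```python
import numpy as np

def dlqr_finite(A, B, Q, R, QN, N):
    """Finite-horizon discrete-time LQR via backward Riccati recursion.
    Returns the time-varying gain sequence K[k] with u_k = -K[k] @ x_k."""
    P = QN.copy()
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B
        Kk = np.linalg.solve(S, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ Kk)
        gains.append(Kk)
    return gains[::-1]          # gains in forward-time order

# Illustrative double-integrator plant, dt = 0.1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2); R = np.array([[1.0]]); QN = np.eye(2)
K = dlqr_finite(A, B, Q, R, QN, N=50)

# Closed-loop rollout from x0 = (1, 0): the regulator drives x toward 0.
x = np.array([[1.0], [0.0]])
for k in range(50):
    x = A @ x - B @ (K[k] @ x)
```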

  1. Existence and non-uniqueness of similarity solutions of a boundary-layer problem

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Lakin, W. D.

    1986-01-01

    A Blasius boundary value problem with inhomogeneous lower boundary conditions f(0) = 0 and f'(0) = -lambda with lambda strictly positive was considered. The Crocco variable formulation of this problem has a key term which changes sign in the interval of interest. It is shown that solutions of the boundary value problem do not exist for values of lambda larger than a positive critical value lambda_c. The existence of solutions is proven for 0 < lambda < lambda_c by considering an equivalent initial value problem. It is found, however, that for 0 < lambda < lambda_c, solutions of the boundary value problem are nonunique. Physically, this nonuniqueness is related to multiple values of the skin friction.
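The equivalent initial value problem mentioned above is the usual shooting formulation: guess the skin friction f''(0), integrate, and adjust until f'(inf) = 1. A minimal sketch for the classical case lambda = 0 (the paper's interest is lambda > 0, where a second branch appears; the bisection below deliberately picks out only one branch):

```python
import numpy as np

def blasius_shoot(lmbda, fpp0, eta_max=10.0, n=1000):
    """Integrate f''' + f*f'' = 0 with f(0)=0, f'(0)=-lmbda, f''(0)=fpp0
    by classical RK4; returns f'(eta_max), which should tend to 1."""
    h = eta_max / n
    y = np.array([0.0, -lmbda, fpp0])                  # (f, f', f'')
    rhs = lambda y: np.array([y[1], y[2], -y[0] * y[2]])
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + 0.5 * h * k1)
        k3 = rhs(y + 0.5 * h * k2); k4 = rhs(y + h * k3)
        y = y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[1]

def skin_friction(lmbda, lo=0.0, hi=3.0):
    """Bisect on the shooting parameter f''(0) so that f'(inf) = 1."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if blasius_shoot(lmbda, mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

fpp0 = skin_friction(0.0)   # classical case lambda = 0: f''(0) ~ 0.4696
```

For lambda > 0, tracking how many f''(0) values satisfy the far-field condition is exactly how the multiple skin-friction values behind the nonuniqueness show up numerically.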

  2. Existence and non-uniqueness of similarity solutions of a boundary layer problem

    NASA Technical Reports Server (NTRS)

    Hussaini, M. Y.; Lakin, W. D.

    1984-01-01

    A Blasius boundary value problem with inhomogeneous lower boundary conditions f(0) = 0 and f'(0) = -lambda with lambda strictly positive was considered. The Crocco variable formulation of this problem has a key term which changes sign in the interval of interest. It is shown that solutions of the boundary value problem do not exist for values of lambda larger than a positive critical value lambda_c. The existence of solutions is proven for 0 < lambda < lambda_c by considering an equivalent initial value problem. It is found, however, that for 0 < lambda < lambda_c, solutions of the boundary value problem are nonunique. Physically, this nonuniqueness is related to multiple values of the skin friction.

  3. On the local fractional derivative of everywhere non-differentiable continuous functions on intervals

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-shi

    2017-01-01

    We first prove that for a continuous function f(x) defined on an open interval, the Kolwankar-Gangal (or equivalently Chen-Yan-Zhang) local fractional derivative f^(α)(x) is not continuous, and then prove that it is impossible for the KG derivative f^(α)(x) to exist everywhere on the interval while satisfying f^(α)(x) ≠ 0 at the same time. In addition, we give a criterion for the nonexistence of the local fractional derivative of everywhere non-differentiable continuous functions. Furthermore, we construct two simple nowhere differentiable continuous functions on (0, 1) and prove that they have no local fractional derivative anywhere.
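For orientation, the Chen-Yan-Zhang limit form of the local fractional derivative that the abstract identifies with the Kolwankar-Gangal one is commonly written (for the right-sided case, under the usual assumptions) as

```latex
f^{(\alpha)}(x_0) \;=\; \Gamma(1+\alpha)\,
\lim_{x \to x_0^{+}} \frac{f(x) - f(x_0)}{(x - x_0)^{\alpha}},
\qquad 0 < \alpha < 1 ,
```

and the paper's results concern when this limit can exist everywhere on an interval.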

  4. The Effect of Problem-Based Learning on Undergraduate Students' Learning about Solutions and Their Physical Properties and Scientific Processing Skills

    ERIC Educational Resources Information Center

    Tosun, Cemal; Taskesenligil, Yavuz

    2013-01-01

    The aim of this study was to investigate the effect of Problem-Based Learning (PBL) on undergraduate students' learning about solutions and their physical properties, and on their scientific processing skills. The quasi experimental study was carried out through non-equivalent control and comparison groups pre-post test design. The data were…

  5. Assessment of the Effects of Problem Solving Instructional Strategies on Students' Achievement and Retention in Chemistry with Respect to Location in Rivers State

    ERIC Educational Resources Information Center

    Nbina, Jacobson Barineka; Obomanu, B. Joseph

    2011-01-01

    We report a study focused on how problem-solving instructional strategies would affect students' achievement and retention in Chemistry with particular reference to River State. A pre-test, post-test, non-equivalent control group design was adopted. Two research questions and two hypotheses were respectively answered and tested. Purposive and…

  6. Self-growing neural network architecture using crisp and fuzzy entropy

    NASA Technical Reports Server (NTRS)

    Cios, Krzysztof J.

    1992-01-01

    The paper briefly describes the self-growing neural network algorithm, CID2, which makes decision trees equivalent to hidden layers of a neural network. The algorithm generates a feedforward architecture using crisp and fuzzy entropy measures. The results of a real-life recognition problem of distinguishing defects in a glass ribbon and of a benchmark problem of differentiating two spirals are shown and discussed.

  7. Combinational Optimal Stopping Problems

    DTIC Science & Technology

    2016-04-01

    Vinel, A. and P. Krokhmal (2015). Certainty equivalent measures of risk, Annals of Operations Research, DOI:10.1007/s10479-015-1801-0. [3] Chernikov...Operations Research, 50(3):415-423, 2002. [16] I. Ljubi, P. Mutzel, and B. Zey. Stochastic survivable network design problems. Electronic Notes in Discrete

  8. Advances in Quantum Trajectory Approaches to Dynamics

    NASA Astrophysics Data System (ADS)

    Askar, Attila

    2001-03-01

    The quantum fluid dynamics (QFD) formulation is based on the separation of the amplitude and phase of the complex wave function in Schrodinger's equation. The approach leads to conservation laws for an equivalent "gas continuum". The Lagrangian [1] representation corresponds to following the particles of the fluid continuum, i.e., calculating "quantum trajectories". The Eulerian [2] representation, on the other hand, amounts to observing the dynamics of the gas continuum at the points of a fixed coordinate frame. The combination of several factors leads to a most encouraging computational efficiency. QFD enables the numerical analysis to deal with near-monotonic amplitude and phase functions. The Lagrangian description concentrates the computational effort in regions of highest probability as an optimal adaptive grid. The Eulerian representation allows the study of multi-coordinate problems as a set of one-dimensional problems within an alternating direction methodology. An explicit time integrator limits the increase in computational effort with the number of discrete points to linear. Discretization of the space via local finite elements [1,2] and global radial functions [3] will be discussed. Applications include wave packets in four-dimensional quadratic potentials and two-coordinate photo-dissociation problems for NOCl and NO2. [1] "Quantum fluid dynamics (QFD) in the Lagrangian representation with applications to photo-dissociation problems", F. Sales, A. Askar and H. A. Rabitz, J. Chem. Phys. 11, 2423 (1999) [2] "Multidimensional wave-packet dynamics within the fluid dynamical formulation of the Schrodinger equation", B. Dey, A. Askar and H. A. Rabitz, J. Chem. Phys. 109, 8770 (1998) [3] "Solution of the quantum fluid dynamics equations with radial basis function interpolation", Xu-Guang Hu, Tak-San Ho, H. A. Rabitz and A. Askar, Phys. Rev. E. 61, 5967 (2000)
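The amplitude-phase separation described above is the standard Madelung transform: writing the wave function in polar form turns the Schrodinger equation into a continuity equation and a quantum Hamilton-Jacobi equation (textbook form, not specific to these papers):

```latex
\psi = R\, e^{iS/\hbar}, \qquad \rho = R^{2}, \qquad \mathbf{v} = \frac{\nabla S}{m},
```
```latex
\frac{\partial \rho}{\partial t} + \nabla\!\cdot\!\left(\rho\,\mathbf{v}\right) = 0, \qquad
\frac{\partial S}{\partial t} + \frac{|\nabla S|^{2}}{2m} + V
- \frac{\hbar^{2}}{2m}\,\frac{\nabla^{2} R}{R} = 0 .
```

The last term is the quantum potential; following characteristics of the velocity field v gives the "quantum trajectories" of the Lagrangian picture, while solving on a fixed grid gives the Eulerian one.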

  9. Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT

    PubMed Central

    Nguyen, Thu L. N.; Shin, Yoan

    2016-01-01

    Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT) that offers automatically discoverable services, while the localization accuracy is a key issue to evaluate the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, which is an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank dimensional Euclidean distance completion problem with known nodes. The task is to find the sensor locations through recovery of missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxation optimization problem using a modification of Newton's method, where the cost function depends on the squared distance matrix. The solution obtained by our scheme has lower complexity and can perform better when used as an initial guess for an iterative local search in other, higher-precision localization schemes. Simulation results show the effectiveness of our approach. PMID:27213378
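The final step of any Euclidean distance matrix (EDM) approach, once the matrix is completed, is recovering coordinates from it. A minimal sketch of that step via classical multidimensional scaling, assuming a complete, noise-free squared-distance matrix (the paper's actual contribution, recovering the missing entries with a Newton-type method, is not reproduced here):

```python
import numpy as np

def coords_from_edm(D2, dim=2):
    """Recover point coordinates (up to rigid motion) from a complete
    squared Euclidean distance matrix: double-center D2 into the Gram
    matrix of centered points, then take the top-`dim` eigenpairs."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ D2 @ J                      # Gram matrix
    w, V = np.linalg.eigh(G)                   # ascending eigenvalues
    idx = np.argsort(w)[::-1][:dim]            # keep the `dim` largest
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Round-trip check on random 2-D "sensor" positions.
rng = np.random.default_rng(0)
X = rng.random((6, 2))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Y = coords_from_edm(D2)
D2rec = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
```

The recovered coordinates reproduce all pairwise distances exactly; anchoring to known nodes then fixes the remaining rotation, reflection, and translation.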

  10. Transformation of body force localized near the surface of a half-space into equivalent surface stresses.

    PubMed

    Rouge, Clémence; Lhémery, Alain; Ségur, Damien

    2013-10-01

    An electromagnetic acoustic transducer (EMAT) or a laser used to generate elastic waves in a component is often described as a source of body force confined in a layer close to the surface. On the other hand, models for elastic wave radiation more efficiently handle sources described as distributions of surface stresses. Equivalent surface stresses can be obtained by integrating the body force with respect to depth. They are assumed to generate the same field as the one that would be generated by the body force. Such an integration scheme can be applied to the Lorentz force for a conventional EMAT configuration. When applied to the magnetostrictive force generated by an EMAT in a ferromagnetic material, the same scheme fails, predicting a null stress. Transforming a body force into equivalent surface stresses therefore requires taking into account higher order terms of the force moments, the zeroth order being the simple force integration over the depth. In this paper, such a transformation is derived up to the second order, assuming that body forces are localized at depths shorter than the ultrasonic wavelength. Two formulations are obtained, each having some advantages depending on the application sought. They apply regardless of the nature of the force considered.
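Schematically, the force moments in question are depth-weighted integrals of the body force density f_i(z) over the near-surface layer (only the moment definition is shown; the precise combination entering the equivalent stresses is what the paper derives):

```latex
M^{(n)}_{i} \;=\; \int_{0}^{\infty} z^{\,n}\, f_{i}(z)\,\mathrm{d}z, \qquad n = 0, 1, 2 .
```

The zeroth moment is the simple depth integration mentioned above; it vanishes for the magnetostrictive force, which is why the first- and second-order moments are needed there.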

  11. On the Local Equivalence Between the Canonical and the Microcanonical Ensembles for Quantum Spin Systems

    NASA Astrophysics Data System (ADS)

    Tasaki, Hal

    2018-06-01

    We study a quantum spin system on the d-dimensional hypercubic lattice Λ with N = L^d sites with periodic boundary conditions. We take an arbitrary translation invariant short-ranged Hamiltonian. For this system, we consider both the canonical ensemble with inverse temperature β_0 and the microcanonical ensemble with the corresponding energy U_N(β_0). For an arbitrary self-adjoint operator Â whose support is contained in a hypercubic block B inside Λ, we prove that the expectation values of Â with respect to these two ensembles are close to each other for large N provided that β_0 is sufficiently small and the number of sites in B is o(N^{1/2}). This establishes the equivalence of ensembles on the level of local states in a large but finite system. The result is essentially that of Brandao and Cramer (here restricted to the case of the canonical and the microcanonical ensembles), but we prove improved estimates in an elementary manner. We also review and prove standard results on the thermodynamic limits of thermodynamic functions and the equivalence of ensembles in terms of thermodynamic functions. The present paper assumes only elementary knowledge on quantum statistical mechanics and quantum spin systems.
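Stated symbolically, the local equivalence asserted in the abstract is that, for small enough β_0 and blocks satisfying the stated growth condition,

```latex
\left|\;\langle \hat{A} \rangle^{\mathrm{mc}}_{\,U_N(\beta_0)}
\;-\; \langle \hat{A} \rangle^{\mathrm{can}}_{\,\beta_0} \;\right|
\;\longrightarrow\; 0 \quad (N \to \infty),
\qquad |B| = o\!\left(N^{1/2}\right),
```

i.e. the two ensembles become indistinguishable for any observable supported on the block B.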

  12. Gravitational Redshift in a Local Freely Falling Frame: A Proposed New Null Test of the Equivalence Principle

    NASA Technical Reports Server (NTRS)

    Krisher, Timothy P.

    1996-01-01

    We consider the gravitational redshift effect measured by an observer in a local freely falling frame (LFFF) in the gravitational field of a massive body. For purely metric theories of gravity, the metric in a LFFF is expected to differ from that of flat spacetime by only "tidal" terms of order (GM/c^2 R)(r'/R)^2, where R is the distance of the observer from the massive body, and r' is the coordinate separation relative to the origin of the LFFF. A simple derivation shows that a violation of the equivalence principle for certain types of "clocks" could lead to a larger apparent redshift effect of order (1 - alpha)(GM/c^2 R)(r'/R), where alpha parametrizes the violation (alpha = 1 for purely metric theories, such as general relativity). Therefore, redshift experiments in a LFFF with separated clocks can provide a new null test of the equivalence principle. With presently available technology, it is possible to reach an accuracy of 0.01% in the gravitational field of the Sun using an atomic clock orbiting the Earth. A 1% test in the gravitational field of the galaxy would be possible if an atomic frequency standard were flown on a space mission to the outer solar system.

  13. Hamilton-Jacobi theory in multisymplectic classical field theories

    NASA Astrophysics Data System (ADS)

    de León, Manuel; Prieto-Martínez, Pedro Daniel; Román-Roy, Narciso; Vilariño, Silvia

    2017-09-01

    The geometric framework for the Hamilton-Jacobi theory developed in the studies of Cariñena et al. [Int. J. Geom. Methods Mod. Phys. 3(7), 1417-1458 (2006)], Cariñena et al. [Int. J. Geom. Methods Mod. Phys. 13(2), 1650017 (2015)], and de León et al. [Variations, Geometry and Physics (Nova Science Publishers, New York, 2009)] is extended for multisymplectic first-order classical field theories. The Hamilton-Jacobi problem is stated for the Lagrangian and the Hamiltonian formalisms of these theories as a particular case of a more general problem, and the classical Hamilton-Jacobi equation for field theories is recovered from this geometrical setting. Particular and complete solutions to these problems are defined and characterized in several equivalent ways in both formalisms, and the equivalence between them is proved. The use of distributions in jet bundles that represent the solutions to the field equations is the fundamental tool in this formulation. Some examples are analyzed and, in particular, the Hamilton-Jacobi equation for non-autonomous mechanical systems is obtained as a special case of our results.

  14. Radiative transfer calculated from a Markov chain formalism

    NASA Technical Reports Server (NTRS)

    Esposito, L. W.; House, L. L.

    1978-01-01

    The theory of Markov chains is used to formulate the radiative transport problem in a general way by modeling the successive interactions of a photon as a stochastic process. Under the minimal requirement that the stochastic process is a Markov chain, the determination of the diffuse reflection or transmission from a scattering atmosphere is equivalent to the solution of a system of linear equations. This treatment is mathematically equivalent to, and thus has many of the advantages of, Monte Carlo methods, but can be considerably more rapid than Monte Carlo algorithms for numerical calculations in particular applications. We have verified the speed and accuracy of this formalism for the standard problem of finding the intensity of scattered light from a homogeneous plane-parallel atmosphere with an arbitrary phase function for scattering. Accurate results over a wide range of parameters were obtained with computation times comparable to those of a standard 'doubling' routine. The generality of this formalism thus allows fast, direct solutions to problems that were previously soluble only by Monte Carlo methods. Some comparisons are made with respect to integral equation methods.
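The linear-system structure described above can be illustrated with an absorbing Markov chain: transient states are scattering events in atmospheric layers, absorbing outcomes are escape through the top (diffuse reflection), loss at the bottom, or true absorption. The fundamental matrix N = (I - Q)^{-1} then gives the reflection probability directly. This toy 1-D model is an illustration of the formalism, not the paper's plane-parallel computation; the layer count, albedo, and up/down split are invented parameters.

```python
import numpy as np

def diffuse_reflection(n_layers=20, omega=0.9, p_up=0.5):
    """Toy 1-D multiple-scattering model as an absorbing Markov chain.
    State i = 'photon last scattered in layer i'. At each event the photon
    survives with single-scattering albedo `omega` and steps one layer up
    or down; stepping up out of layer 0 counts as diffuse reflection."""
    Q = np.zeros((n_layers, n_layers))   # transient-to-transient transitions
    R_top = np.zeros(n_layers)           # per-state probability of escaping upward
    for i in range(n_layers):
        if i == 0:
            R_top[i] = omega * p_up                  # escapes through the top
        else:
            Q[i, i - 1] = omega * p_up
        if i < n_layers - 1:
            Q[i, i + 1] = omega * (1.0 - p_up)
        # downward from the bottom layer is lost to the surface here
    N = np.linalg.solve(np.eye(n_layers) - Q, np.eye(n_layers))  # fundamental matrix
    start = np.zeros(n_layers); start[0] = 1.0       # first scattering in top layer
    return float(start @ N @ R_top)

refl = diffuse_reflection()
```

One linear solve replaces the averaging over random photon histories that a Monte Carlo computation would require, which is the efficiency argument made in the abstract.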

  15. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks.

    PubMed

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-05-21

    The energy reading has been an efficient and attractive measure for collaborative acoustic source localization in practical applications due to its cost savings in both energy and computational capability. The maximum likelihood problems obtained by fusing received acoustic energy readings transmitted from local sensors are derived. Aiming to efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and semidefinite relaxation, respectively, are utilized to derive the second-order cone programming, semidefinite programming or a mixture of them for both cases of sensor self-localization and source localization. Furthermore, by taking the colored energy reading noise into account, several minimax optimization problems are formulated, which are also relaxed via the direct norm relaxation and semidefinite relaxation, respectively, into convex optimization problems. Performance comparison with the existing acoustic energy-based source localization methods is given, and the results show the validity of our proposed methods.

  16. Efficient Convex Optimization for Energy-Based Acoustic Sensor Self-Localization and Source Localization in Sensor Networks

    PubMed Central

    Yan, Yongsheng; Wang, Haiyan; Shen, Xiaohong; Leng, Bing; Li, Shuangquan

    2018-01-01

    The energy reading has been an efficient and attractive measure for collaborative acoustic source localization in practical applications due to its cost savings in both energy and computational capability. The maximum likelihood problems obtained by fusing received acoustic energy readings transmitted from local sensors are derived. Aiming to efficiently solve the nonconvex objective of the optimization problem, we present an approximate estimator of the original problem. Then, a direct norm relaxation and semidefinite relaxation, respectively, are utilized to derive the second-order cone programming, semidefinite programming or a mixture of them for both cases of sensor self-localization and source localization. Furthermore, by taking the colored energy reading noise into account, several minimax optimization problems are formulated, which are also relaxed via the direct norm relaxation and semidefinite relaxation, respectively, into convex optimization problems. Performance comparison with the existing acoustic energy-based source localization methods is given, and the results show the validity of our proposed methods. PMID:29883410

  17. Implementation of equivalent domain integral method in the two-dimensional analysis of mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies is presented. The details of the method and its implementation are presented for isoparametric elements. The total and product integrals consist of the sum of an area (or domain) integral and line integrals on the crack faces. The line integrals vanish only when the crack faces are traction free and the loading is either pure mode I or pure mode II or a combination of both with only the square-root singular term in the stress field. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all problems analyzed. The EDI method, when applied to a problem of an interface crack between two different materials, showed that the mode I and mode II components are domain dependent while the total integral is not. This behavior is caused by the presence of the oscillatory part of the singularity in bimaterial crack problems. The EDI method thus shows behavior similar to the virtual crack closure method for bimaterial problems.

  18. Object-Image Correspondence for Algebraic Curves under Projections

    NASA Astrophysics Data System (ADS)

    Burdis, Joseph M.; Kogan, Irina A.; Hong, Hoon

    2013-03-01

    We present a novel algorithm for deciding whether a given planar curve is an image of a given spatial curve, obtained by a central or a parallel projection with unknown parameters. The motivation comes from the problem of establishing a correspondence between an object and an image, taken by a camera with unknown position and parameters. A straightforward approach to this problem consists of setting up a system of conditions on the projection parameters and then checking whether or not this system has a solution. The computational advantage of the algorithm presented here, in comparison to algorithms based on the straightforward approach, lies in a significant reduction of the number of real parameters that need to be eliminated in order to establish existence or non-existence of a projection that maps a given spatial curve to a given planar curve. Our algorithm is based on projection criteria that reduce the projection problem to a certain modification of the equivalence problem of planar curves under affine and projective transformations. To solve the latter problem we make an algebraic adaptation of the signature construction that has been used to solve equivalence problems for smooth curves. We introduce a notion of a classifying set of rational differential invariants and produce explicit formulas for such invariants for the actions of the projective and the affine groups on the plane.

  19. Problems in abundance determination from UV spectra of hot supergiants

    NASA Astrophysics Data System (ADS)

    Deković, M. Sarta; Kotnik-Karuza, D.; Jurkić, T.; Dominis Prester, D.

    2010-03-01

    We present measurements of equivalent widths of the UV, presumably photospheric lines C III 1247 Å, N III 1748 Å, N III 1752 Å, N IV 1718 Å and He II 1640 Å in high-resolution IUE spectra of 24 galactic OB supergiants. Equivalent widths measured from the observed spectra have been compared with their counterparts in the Tlusty NLTE synthetic spectra. We discuss the ability of a static plane-parallel model to reproduce observed UV spectra of hot massive stars, and possible reasons why the observations differ so much from the model.

  20. Local Limit Phenomena, Flow Compression, and Fuel Cracking Effects in High-Speed Turbulent Flames

    DTIC Science & Technology

    2015-06-01

    e.g. local extinction and re-ignition, interactions between flow compression and fast-reaction induced dilatation (reaction compression), and to...time as a function of initial temperature in constant-pressure auto-ignition, and (b) the S-curves of perfectly stirred reactors (PSRs), for n...mechanism. The reduction covered auto-ignition and perfectly stirred reactors for an equivalence ratio range of 0.5~1.5, initial temperature higher than

  1. Genetic Local Search for Optimum Multiuser Detection Problem in DS-CDMA Systems

    NASA Astrophysics Data System (ADS)

    Wang, Shaowei; Ji, Xiaoyong

    Optimum multiuser detection (OMD) in direct-sequence code-division multiple access (DS-CDMA) systems is an NP-complete problem. In this paper, we present a genetic local search algorithm, which consists of an evolution strategy framework and a local improvement procedure. The evolution strategy searches the space of feasible, locally optimal solutions only. A fast iterated local search algorithm, which exploits the specific structure of the OMD problem, produces local optima with great efficiency. Computer simulations show that the bit error rate (BER) performance of the GLS outperforms that of other multiuser detectors in all cases discussed. The computation time is of polynomial complexity in the number of users.
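The local improvement step in such a genetic local search can be sketched as greedy bit flipping on the standard OMD-style quadratic likelihood metric f(b) = 2 bᵀy - bᵀHb over bit vectors b in {-1,+1}^K, where y is the matched-filter output and H the signature correlation matrix. This is an illustrative sketch of the generic technique, not the paper's specific iterated-local-search variant; the matrices below are randomly generated stand-ins.

```python
import numpy as np

def omd_objective(b, y, H):
    """Illustrative OMD likelihood metric: f(b) = 2 b^T y - b^T H b."""
    return 2.0 * b @ y - b @ H @ b

def local_search_mud(b, y, H):
    """Greedy bit-flip local search over b in {-1,+1}^K. The exact gain of
    flipping bit k is delta_k = -4 b_k y_k + 4 b_k (H b)_k - 4 H_kk, so
    flips are accepted only while they strictly increase the objective."""
    b = b.copy()
    improved = True
    while improved:
        improved = False
        for k in range(len(b)):
            delta = -4.0 * b[k] * y[k] + 4.0 * b[k] * (H[k] @ b) - 4.0 * H[k, k]
            if delta > 1e-12:
                b[k] = -b[k]
                improved = True
    return b

rng = np.random.default_rng(1)
K = 8
S = rng.standard_normal((16, K))
H = S.T @ S                            # symmetric, correlation-like matrix
y = H @ rng.choice([-1.0, 1.0], K)     # noiseless matched-filter output
b0 = rng.choice([-1.0, 1.0], K)        # random starting bit vector
b_hat = local_search_mud(b0, y, H)
```

In the full GLS, the evolution strategy would recombine and mutate such locally optimal bit vectors and re-apply the local search to each offspring.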

  2. Measurement equivalence: a glossary for comparative population health research.

    PubMed

    Morris, Katherine Ann

    2018-03-06

    Comparative population health studies are becoming more common and are advancing solutions to crucial public health problems, but decades-old measurement equivalence issues remain without a common vocabulary to identify and address the biases that contribute to non-equivalence. This glossary defines sources of measurement non-equivalence. While drawing examples from both within-country and between-country studies, this glossary also defines methods of harmonisation and elucidates the unique opportunities in addition to the unique challenges of particular harmonisation methods. Its primary objective is to enable population health researchers to more clearly articulate their measurement assumptions and the implications of their findings for policy. It is also intended to provide scholars and policymakers across multiple areas of inquiry with tools to evaluate comparative research and thus contribute to urgent debates on how to ameliorate growing health disparities within and between countries.

  3. Einstein's First Steps Toward General Relativity: Gedanken Experiments and Axiomatics

    NASA Astrophysics Data System (ADS)

    Miller, A. I.

    1999-03-01

    Albert Einstein's 1907 Jahrbuch paper is an extraordinary document because it contains his first steps toward generalizing the 1905 relativity theory to include gravitation. Ignoring the apparent experimental disconfirmation of the 1905 relativity theory and his unsuccessful attempts to generalize the mass-energy equivalence, Einstein boldly raises the mass-energy equivalence to an axiom, invokes equality between gravitational and inertial masses, and then postulates the equivalence between a uniform gravitational field and an oppositely directed constant acceleration, the equivalence principle. How did this come about? What is at issue is scientific creativity. This necessitates broadening historical analysis to include aspects of cognitive science such as the role of visual imagery in Einstein's thinking, and the relation between conscious and unconscious modes of thought in problem solving. This method reveals the catalysts that sparked a Gedanken experiment that occurred to Einstein while working on the Jahrbuch paper. A mental model is presented to further explore Einstein's profound scientific discovery.

  4. A statistical estimation of Snow Water Equivalent coupling ground data and MODIS images

    NASA Astrophysics Data System (ADS)

    Bavera, D.; Bocchiola, D.; de Michele, C.

    2007-12-01

    The Snow Water Equivalent (SWE) is an important component of the hydrologic balance of mountain basins and snow-fed areas in general. The total cumulated snow water equivalent at the end of the accumulation season represents the water availability at melt. Here, a statistical methodology to estimate the Snow Water Equivalent at April 1st is developed, coupling ground data (snow depth and snow density measurements) and MODIS images. The methodology is applied to the Mallero river basin (about 320 km²), located in the Central Alps, northern Italy, where 11 snow gauges and numerous scattered snow density measurements are available. The application covers 7 years, from 2001 to 2007. The analysis has identified some problems in the MODIS information due to cloud cover and misclassification caused by orographic shadows. The study is performed in the framework of the AWARE (A tool for monitoring and forecasting Available WAter REsource in mountain environment) EU project, a STREP project in the VI F.P., GMES Initiative.
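The point measurement underlying any such estimate is the basic SWE relation combining snow depth and bulk density; the statistical methodology of the paper interpolates this quantity over the basin using the MODIS snow-cover extent. A minimal sketch of the point relation only:

```python
def swe_mm(snow_depth_m, snow_density_kg_m3, water_density_kg_m3=1000.0):
    """Snow water equivalent expressed as millimetres of water:
    SWE [mm] = depth [m] * (rho_snow / rho_water) * 1000."""
    return snow_depth_m * snow_density_kg_m3 / water_density_kg_m3 * 1000.0

swe = swe_mm(1.2, 350.0)   # 1.2 m of snow at 350 kg/m^3 -> 420 mm of water
```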

  5. Absorption line studies of reflection from horizontally inhomogeneous layers. [in cloudy planetary atmospheres

    NASA Technical Reports Server (NTRS)

    Appleby, J. F.; Van Blerkom, D. J.

    1975-01-01

    The article details an inhomogeneous reflecting layer (IRFL) model designed to survey absorption line behavior from a Squires-like cloud cover (which is characterized by convection cell structure). Computational problems and procedures are discussed in detail. The results show trends usually opposite to those predicted by a simple reflecting layer model. Per cent equivalent width variations for the tower model are usually somewhat greater for weak than for relatively strong absorption lines, with differences of a factor of about two or three. IRFL equivalent width variations do not differ drastically as a function of geometry when the total volume of absorbing gas is held constant. The IRFL results are in many instances consistent with observed equivalent width variations of Jupiter, Saturn, and Venus.

  6. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    NASA Astrophysics Data System (ADS)

    Xavier, Marcelo A.; Trimboli, M. Scott

    2015-07-01

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent-circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast-charging profile that respects input, output, and state constraints. Results show that MPC is well suited to the dynamics of the battery control problem and further suggest that significant performance improvements might be achieved by extending the result to electrochemical models.
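
    The 2nd-order equivalent circuit underlying such a controller can be sketched as a discrete-time state-space model: an ohmic drop (the direct feed-through term) plus two RC relaxation branches. All parameter values below are illustrative, not the paper's cell parameters:

```python
import math

# Sketch of a 2nd-order equivalent-circuit battery model in discrete-time
# state-space form; every parameter value here is illustrative.
R0 = 0.010             # ohmic resistance (direct feed-through term)
R1, C1 = 0.015, 2000   # first RC branch (ohms, farads)
R2, C2 = 0.020, 20000  # second RC branch
dt = 1.0               # sample period, s
a1 = math.exp(-dt / (R1 * C1))
a2 = math.exp(-dt / (R2 * C2))

def step(v1, v2, i_amp, ocv=3.7):
    """One discrete-time update; returns new RC states and terminal voltage."""
    v1n = a1 * v1 + R1 * (1 - a1) * i_amp
    v2n = a2 * v2 + R2 * (1 - a2) * i_amp
    v_term = ocv - R0 * i_amp - v1n - v2n   # discharge current positive
    return v1n, v2n, v_term

v1 = v2 = 0.0
for _ in range(60):          # constant 2 A discharge for 60 s
    v1, v2, v = step(v1, v2, 2.0)
print(round(v, 4))
```

    An MPC charger would embed exactly this kind of model in its prediction horizon and optimize the current sequence subject to voltage and state constraints.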

  7. Measures against increased environmental radiation dose by the TEPCO Fukushima Dai-ichi NPP accident in some local governments in the Tokyo metropolitan area: focusing on examples of both Kashiwa and Nagareyama cities in Chiba prefecture.

    PubMed

    Iimoto, T; Fujii, H; Oda, S; Nakamura, T; Hayashi, R; Kuroda, R; Furusawa, M; Umekage, T; Ohkubo, Y

    2012-11-01

    The accident at the Fukushima Dai-ichi nuclear power plant of the Tokyo Electric Power Company (TEPCO) after the Great East Japan Earthquake (11 March 2011) elevated the background level of environmental radiation in eastern Japan. Around the Tokyo metropolitan area, especially around Kashiwa and Nagareyama cities, the ambient dose equivalent rate increased significantly after the accident. Responding to strong requests from citizens, the local governments started to monitor the ambient dose equivalent rate precisely and officially about 3 months after the accident had occurred. The two cities, in cooperation with each other, also organised a local forum supported by three radiation specialists. In this article, the activities of the local governments are introduced, with a main focus on radiation monitoring and measurements. Topics include standardisation of environmental radiation measurements for ambient dose rate, dose mapping activity, investigation of foodstuffs and drinking water, lending survey meters to citizens, etc. Based on the data and facts gained mainly through radiation monitoring, risk management and related activities have been organised. 'Small consultation meetings in kindergartens', 'health consultation service for citizens', 'education meetings on radiation protection for teachers, medical staff, local government staff, and leaders of active volunteer parties', and 'decontamination activity' are key activities of the present risk management and restoration efforts around the Tokyo metropolitan area.

  8. Solvable Hydrodynamics of Quantum Integrable Systems

    NASA Astrophysics Data System (ADS)

    Bulchandani, Vir B.; Vasseur, Romain; Karrasch, Christoph; Moore, Joel E.

    2017-12-01

    The conventional theory of hydrodynamics describes the evolution in time of chaotic many-particle systems from local to global equilibrium. In a quantum integrable system, local equilibrium is characterized by a local generalized Gibbs ensemble or equivalently a local distribution of pseudomomenta. We study time evolution from local equilibria in such models by solving a certain kinetic equation, the "Bethe-Boltzmann" equation satisfied by the local pseudomomentum density. Explicit comparison with density matrix renormalization group time evolution of a thermal expansion in the XXZ model shows that hydrodynamical predictions from smooth initial conditions can be remarkably accurate, even for small system sizes. Solutions are also obtained in the Lieb-Liniger model for free expansion into vacuum and collisions between clouds of particles, which model experiments on ultracold one-dimensional Bose gases.

  9. Revised Modelling of the Addition of Synchronous Chemotherapy to Radiotherapy in Squamous Cell Carcinoma of the Head and Neck-A Low α/β?

    PubMed

    Best, James; Fong, Charles; Benghiat, Helen; Mehanna, Hisham; Glaholm, John; Hartley, Andrew

    2018-06-13

    Background: The effect of synchronous chemotherapy in squamous cell carcinoma of the head and neck (SCCHN) has been modelled as additional Biologically Effective Dose (BED) or as a prolonged tumour cell turnover time during accelerated repopulation. Such models may not accurately predict the local control seen when hypofractionated accelerated radiotherapy is used with synchronous chemotherapy. Methods: For the purposes of this study, three isoeffect relationships were assumed: Firstly, from the RTOG 0129 trial, synchronous cisplatin chemotherapy with 70 Gy in 35 fractions over 46 days results in equivalent local control to synchronous cisplatin chemotherapy with 36 Gy in 18# followed by 36 Gy in 24# (2# per day) over a total of 39 days. Secondly, in line with primary local control outcomes from the PET-Neck study, synchronous cisplatin chemotherapy with 70 Gy in 35# over 46 days results in equivalent local control to synchronous cisplatin chemotherapy delivered with 65 Gy in 30# over 39 days. Thirdly, from meta-analysis data, 70 Gy in 35# over 46 days with synchronous cisplatin results in equivalent local control to 84 Gy in 70# over 46 days delivered without synchronous chemotherapy. Using the linear quadratic equation, the above isoeffect relationships were expressed algebraically to determine values of α, α/β, and k for SCCHN when treated with synchronous cisplatin, using standard parameters for the radiotherapy-alone schedule (α = 0.3 Gy⁻¹, α/β = 10 Gy, and k = 0.42 Gy₁₀ day⁻¹). Results: The values derived for α/β, α, and k were 2 Gy, 0.20 and 0.21 Gy⁻¹, and 0.65 and 0.71 Gy₂ day⁻¹, respectively. Conclusions: Within the limitations of the assumptions made, this model suggests that accelerated repopulation may remain a significant factor when synchronous chemotherapy is delivered with radiotherapy in SCCHN. 
The finding of a low α/β for SCCHN treated with cisplatin suggests a greater tumour susceptibility to increasing dose per fraction and underlines the importance of the completion of randomized trials examining the role of hypofractionated acceleration in SCCHN.
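
    The repopulation-corrected BED used in such models can be written as BED = nd(1 + d/(α/β)) − k(T − Tk). A worked sketch for the 70 Gy in 35# over 46 days schedule, taking α/β = 10 Gy and k = 0.42 Gy₁₀/day from the abstract, and assuming an illustrative kick-off time Tk = 21 days (Tk is not stated in the abstract):

```python
# Worked example of biologically effective dose (BED) with a repopulation
# correction: BED = n*d*(1 + d/(a/b)) - k*(T - Tk).
# alpha/beta = 10 Gy and k = 0.42 Gy10/day follow the abstract; the
# kick-off time Tk = 21 days is an illustrative assumption.
def bed(n, d, T, ab=10.0, k=0.42, Tk=21.0):
    """n fractions of d Gy over T days, minus repopulation after day Tk."""
    return n * d * (1 + d / ab) - k * max(0.0, T - Tk)

print(round(bed(35, 2.0, 46), 1))  # -> 73.5 (Gy10)
```

    Shortening the overall time T, or lowering α/β as the study suggests for cisplatin-treated SCCHN, both change this figure markedly, which is the motivation for hypofractionated acceleration.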

  10. Cloud computing and validation of expandable in silico livers

    PubMed Central

    2010-01-01

    Background In Silico Livers (ISLs) are works in progress. They are used to challenge multilevel, multi-attribute, mechanistic hypotheses about the hepatic disposition of xenobiotics coupled with hepatic responses. To enhance ISL-to-liver mappings, we added discrete-time metabolism, biliary elimination, and bolus dosing features to a previously validated ISL and initiated re-validation experiments that required scaling to use more simulated lobules than previously, more than could be achieved using the local cluster technology. Rather than dramatically increasing the size of our local cluster, we undertook the re-validation experiments using the Amazon EC2 cloud platform. Doing so required demonstrating the efficacy of scaling a simulation to use more cluster nodes and assessing the scientific equivalence of local cluster validation experiments with those executed using the cloud platform. Results The local cluster technology was duplicated in the Amazon EC2 cloud platform. Synthetic modeling protocols were followed to identify a successful parameterization. Experiment sample sizes (number of simulated lobules) on both platforms were 49, 70, 84, and 152 (cloud only). Experimental indistinguishability was demonstrated for ISL outflow profiles of diltiazem using both platforms for experiments consisting of 84 or more samples. The process was analogous to demonstrating results equivalency from two different wet-labs. Conclusions The results provide additional evidence that disposition simulations using ISLs can cover the behavior space of liver experiments in distinct experimental contexts (there is in silico-to-wet-lab phenotype similarity). The scientific value of experimenting with multiscale biomedical models has been limited to research groups with access to computer clusters. 
The availability of cloud technology coupled with the evidence of scientific equivalency has lowered the barrier and will greatly facilitate model sharing as well as provide straightforward tools for scaling simulations to encompass greater detail with no extra investment in hardware. PMID:21129207

  11. Establishing an NP-staffed minor emergency area.

    PubMed

    Buchanan, L; Powers, R D

    1997-04-01

    Patients with problems of high acuity need fully trained emergency physicians and nurses. Some patients with nonurgent problems can be cared for within the emergency department (ED) in a lower-cost setting designed and staffed specifically for this purpose. Staffing a fast track or minor emergency area (MEA) with nurse practitioners (NPs) is one way to satisfy the ED's care needs. A one-site analysis of the effectiveness of NPs indicates that patients are satisfied with their care, that the NPs' interpersonal skills are better than those of physicians, that technical skills are equivalent, that patient outcomes are equivalent or superior, and that NPs improve access to care. A nurse practitioner-staffed minor emergency area provides high-quality care for approximately 21% of this site's adult emergency department population. Patients are triaged based on set criteria, allowing for short treatment times. The physical layout, triage criteria, and the NPs' scope of practice in the level 1 trauma center's ED are detailed.

  12. The determination of the elastodynamic fields of an ellipsoidal inhomogeneity

    NASA Technical Reports Server (NTRS)

    Fu, L. S.; Mura, T.

    1983-01-01

    The determination of the elastodynamic fields of an ellipsoidal inhomogeneity is studied in detail via the eigenstrain approach. A complete formulation and a treatment of both types of eigenstrains for equivalence between the inhomogeneity problem and the inclusion problem are given. This approach is shown to be mathematically identical to other approaches such as the direct volume integral formulation. Expanding the eigenstrains and applied strains in the polynomial form in the position vector and satisfying the equivalence conditions at every point, the governing simultaneous algebraic equations for the unknown coefficients in the eigenstrain expansion are derived. The elastodynamic field outside an ellipsoidal inhomogeneity in a linear elastic isotropic medium is given as an example. The angular and frequency dependence of the induced displacement field, as well as the differential and total cross sections are formally given in series expansion form for the case of uniformly distributed eigenstrains.

  13. Locality-preserving logical operators in topological stabilizer codes

    NASA Astrophysics Data System (ADS)

    Webster, Paul; Bartlett, Stephen D.

    2018-01-01

    Locality-preserving logical operators in topological codes are naturally fault tolerant, since they preserve the correctability of local errors. Using a correspondence between such operators and gapped domain walls, we describe a procedure for finding all locality-preserving logical operators admitted by a large and important class of topological stabilizer codes. In particular, we focus on those equivalent to a stack of a finite number of surface codes of any spatial dimension, where our procedure fully specifies the group of locality-preserving logical operators. We also present examples of how our procedure applies to codes with different boundary conditions, including color codes and toric codes, as well as more general codes such as Abelian quantum double models and codes with fermionic excitations in more than two dimensions.

  14. Battling Arrow's Paradox to Discover Robust Water Management Alternatives

    NASA Astrophysics Data System (ADS)

    Kasprzyk, J. R.; Reed, P. M.; Hadka, D.

    2013-12-01

    This study explores whether or not Arrow's Impossibility Theorem, a result in social choice theory, affects the formulation of water resources systems planning problems. The theorem concerns the construction of an aggregation function for voters choosing among three or more alternatives for society. The Impossibility Theorem is also called Arrow's Paradox because any aggregation rule satisfying its seemingly reasonable fairness criteria turns out to let a single individual's preference dictate the group decision. In the context of water resources planning, our study is motivated by recent theoretical work that has generalized the insights from Arrow's Paradox to the design of complex engineered systems. In this framing of the paradox, states of society are equivalent to water planning or design alternatives, and the voters are equivalent to multiple planning objectives (e.g. minimizing cost or maximizing performance). Seen from this point of view, multi-objective water planning problems are functionally equivalent to the social choice problem described above. Traditional solutions to such multi-objective problems aggregate multiple performance measures into a single mathematical objective. The Theorem implies that a subset of performance concerns will inadvertently dictate the overall design evaluations in unpredictable ways under such an aggregation. We suggest that instead of aggregation, an explicit many-objective approach to water planning can help overcome the challenges posed by Arrow's Paradox. Many-objective planning explicitly disaggregates measures of performance while supporting the discovery of the planning tradeoffs, employing multiobjective evolutionary algorithms (MOEAs) to find solutions. Using MOEA-based search to address Arrow's Paradox requires that the MOEAs perform robustly with increasing problem complexity, such as additional objectives and/or decisions. 
This study uses comprehensive diagnostic evaluation of MOEA search performance across multiple problem formulations (both aggregated and many-objective) to show whether or not aggregating performance measures biases decision making. In this study, we explore this hypothesis using an urban water portfolio management case study in the Lower Rio Grande Valley. The diagnostic analysis shows that modern self-adaptive MOEA search is efficient, effective, and reliable for the more complex many-objective LRGV planning formulations. Results indicate that although many classical water systems planning frameworks seek to account for multiple objectives, the common practice of reducing the problem into one or more highly aggregated performance measures can severely and negatively bias planning decisions.
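
    The contrast between aggregation and explicit many-objective evaluation can be sketched with a toy minimization problem; the three candidate plans and their two objective values below are hypothetical:

```python
# Sketch contrasting weighted-sum aggregation with explicit Pareto
# filtering for a two-objective minimization; the plans are hypothetical.
plans = {"A": (1.0, 9.0), "B": (4.0, 4.0), "C": (9.0, 1.0)}

def dominates(u, v):
    """u dominates v: no worse in every objective, better in at least one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

# Explicit many-objective view: keep every nondominated plan.
pareto = [k for k, u in plans.items()
          if not any(dominates(v, u) for m, v in plans.items() if m != k)]

# Aggregated view: a single weighted-sum objective picks one "winner".
w = (0.5, 0.5)
best = min(plans, key=lambda k: sum(wi * fi for wi, fi in zip(w, plans[k])))
print(sorted(pareto), best)  # -> ['A', 'B', 'C'] B
```

    All three plans are Pareto-nondominated, yet the equal-weight aggregation singles out one of them, illustrating how an aggregation rule can quietly dictate the decision.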

  15. Physics-Aware Informative Coverage Planning for Autonomous Vehicles

    DTIC Science & Technology

    2014-06-01

    environment and find the optimal path connecting fixed nodes, which is equivalent to solving the Traveling Salesman Problem (TSP). While TSP is an NP...intended for application to USV harbor patrolling, it is applicable to many different domains. The problem of traveling over an area and gathering...environment. I. INTRODUCTION There are many applications that need persistent monitor- ing of a given area, requiring repeated travel over the area to

  16. La Linguistica Aplicada a la Relacion Paradigmatica entre los Verbos "Ser" y "Estar" (Linguistics Applied to the Paradigmatic Relationship between the Verbs "Ser" and "Estar")

    ERIC Educational Resources Information Center

    Marchetti, Magda Ruggeri

    1977-01-01

    Speakers of Italian often have problems mastering Spanish because they erroneously believe its great similarity to Italian makes it easy to learn. One of the fundamental problems is the inability to choose the correct verb, "ser" or "estar," both equivalents of the Italian "essere." (Text is in Spanish.) (CFM)

  17. Gravitational Lagrangians, Mach's Principle, and the Equivalence Principle in an Expanding Universe

    NASA Astrophysics Data System (ADS)

    Essén, Hanno

    2014-08-01

    Gravitational Lagrangians as derived by Fock for the Einstein-Infeld-Hoffmann approach, and by Kennedy assuming only a fourth-rank tensor interaction, contain long-range interactions. Here we investigate how these affect the local dynamics when integrated over an expanding universe out to the Hubble radius. Taking the cosmic expansion velocity into account in a heuristic manner, it is found that these long-range interactions imply Mach's principle, provided the universe has the critical density, and that mass is renormalized. Suitable higher-order additions to the Lagrangians make the formalism consistent with the equivalence principle.

  18. Entanglement-assisted transformation is asymptotically equivalent to multiple-copy transformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan Runyao; Feng Yuan; Ying Mingsheng

    2005-08-15

    We show that two ways of manipulating quantum entanglement, namely entanglement-assisted local transformation [D. Jonathan and M. B. Plenio, Phys. Rev. Lett. 83, 3566 (1999)] and multiple-copy transformation [S. Bandyopadhyay, V. Roychowdhury, and U. Sen, Phys. Rev. A 65, 052315 (2002)], are equivalent in the sense that they can asymptotically simulate each other's ability to implement a desired transformation from a given source state to another given target state with the same optimal success probability. As a consequence, this yields a feasible method to evaluate the optimal conversion probability of an entanglement-assisted transformation.

  19. Calculative techniques for transonic flows about certain classes of wing-body combinations, phase 2

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Spreiter, J. R.

    1972-01-01

    Theoretical analysis and associated computer programs were developed for predicting the properties of transonic flows about certain classes of wing-body combinations. The procedures used are based on the transonic equivalence rule and employ either an arbitrarily specified solution or the local linearization method for determining the nonlifting transonic flow about the equivalent body. The class of wing planform shapes includes wings having sweptback trailing edges and finite tip chord. Theoretical results are presented for surface and flow-field pressure distributions for both nonlifting and lifting situations at Mach number one.

  20. Euclidean sections of protein conformation space and their implications in dimensionality reduction

    PubMed Central

    Duan, Mojie; Li, Minghai; Han, Li; Huo, Shuanghong

    2014-01-01

    Dimensionality reduction is widely used in searching for the intrinsic reaction coordinates of protein conformational changes. We find that dimensionality-reduction methods using the pairwise root-mean-square deviation as the local distance metric face a challenge. We use Isomap as an example to illustrate the problem. We believe that there is an implied assumption in dimensionality-reduction approaches that aim to preserve the geometric relations between objects: both the original space and the reduced space must have the same kind of geometry, such as Euclidean geometry vs. Euclidean geometry or spherical geometry vs. spherical geometry. When the protein free energy landscape is mapped onto a 2D plane or 3D space, the reduced space is Euclidean, and thus the original space should also be Euclidean. For a protein with N atoms, its conformation space is a subset of the 3N-dimensional Euclidean space R3N. We formally define the protein conformation space as the quotient space of R3N by the equivalence relation of rigid motions. Whether the quotient space is Euclidean or not depends on how it is parameterized. When the pairwise root-mean-square deviation is employed as the local distance metric, implicit representations are used for the protein conformation space, leading to no direct correspondence to a Euclidean set. We have demonstrated that an explicit Euclidean-based representation of the protein conformation space and the local distance metric associated with it improve the quality of dimensionality reduction in the tetra-peptide and β-hairpin systems. PMID:24913095

  1. Mass change distribution inverted from space-borne gravimetric data using a Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Zhou, X.; Sun, X.; Wu, Y.; Sun, W.

    2017-12-01

    Mass estimates play a key role in using time-variable satellite gravimetric data to quantify terrestrial water storage change. GRACE (Gravity Recovery and Climate Experiment) observes only the low-degree gravity field changes, which can be used to estimate the total surface density or equivalent water height (EWH) variation, with a limited spatial resolution of 300 km. There are several methods to estimate the mass variation in an arbitrary region, such as averaging kernels, forward modelling, and mass concentration (mascon). The mascon method can isolate the local mass from the gravity change at a large scale by solving the observation equation (objective function), which represents the relationship between the unknown masses and the measurements. To avoid unreasonable local masses inverted from the smoothed gravity change map, regularization has to be used in the inversion. We herein give a Markov chain Monte Carlo (MCMC) method to objectively determine the regularization parameter for the non-negative mass inversion problem. We first apply this approach to the mass inversion from synthetic data. Results show that MCMC can effectively reproduce the local mass variation when GRACE measurement error is taken into consideration. We then use MCMC to estimate the groundwater change rate of the North China Plain from the GRACE gravity change rate from 2003 to 2014, under the assumption of continuous groundwater loss in this region. Inversion results show that the groundwater loss rate in the North China Plain is 7.6 ± 0.2 Gt/yr over the past 12 years, which is consistent with previous studies.
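
    A minimal sketch of the kind of regularized, non-negative inversion involved: minimize ||Am − d||² + λ||m||² subject to m ≥ 0 by projected gradient descent. The 2×2 system and λ are hypothetical, and the paper's contribution is choosing λ objectively via MCMC rather than fixing it by hand as done here:

```python
# Non-negative Tikhonov-regularized inversion by projected gradient
# descent; the toy observation matrix, data, and lam are hypothetical.
A = [[1.0, 0.5],
     [0.5, 1.0]]        # smoothing/observation operator
d = [1.5, 1.0]          # "observed" gravity changes
lam = 0.01              # regularization parameter (MCMC-selected in the paper)
m = [0.0, 0.0]
step = 0.1

def residual(A, m, d):
    return [sum(A[i][j] * m[j] for j in range(2)) - d[i] for i in range(2)]

for _ in range(2000):
    r = residual(A, m, d)
    grad = [2 * sum(A[i][j] * r[i] for i in range(2)) + 2 * lam * m[j]
            for j in range(2)]
    m = [max(0.0, m[j] - step * grad[j]) for j in range(2)]  # project onto m >= 0

print([round(x, 3) for x in m])  # -> [1.31, 0.349]
```

    Larger λ shrinks the recovered masses toward zero; too small a λ lets noise leak into unphysical local masses, which is why an objective choice of λ matters.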

  2. AC and DC electrical behavior of MWCNT/epoxy nanocomposite near percolation threshold: Equivalent circuits and percolation limits

    NASA Astrophysics Data System (ADS)

    Alizadeh Sahraei, Abolfazl; Ayati, Moosa; Baniassadi, Majid; Rodrigue, Denis; Baghani, Mostafa; Abdi, Yaser

    2018-03-01

    This study attempts to comprehensively investigate the effects of multi-walled carbon nanotubes (MWCNTs) on the AC and DC electrical conductivity of epoxy nanocomposites. The samples (0.2, 0.3, and 0.5 wt.% MWCNT) were produced using a combination of ultrasonication and shear mixing methods. DC measurements were performed by continuous measurement of the current-voltage response and the results were analyzed via a numerical percolation approach, while for the AC behavior, the frequency response was studied by analyzing the phase difference and impedance in the 10 Hz to 0.2 MHz frequency range. The results showed that the dielectric parameters, including relative permittivity, impedance phase, and magnitude, present completely different behaviors over the frequency range and MWCNT weight fractions studied. To better understand the nanocomposites' electrical behavior, equivalent electric circuits were also built for both DC and AC modes. The DC equivalent networks were developed based on the current-voltage curves, while the AC equivalent circuits were obtained by solving an optimization problem based on the impedance magnitude and phase at different frequencies. The obtained equivalent electrical circuits were found to be highly useful tools for understanding the physical mechanisms involved in MWCNT-filled polymer nanocomposites.
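
    Evaluating a candidate AC equivalent circuit reduces to computing its complex impedance over the measured frequency range and comparing magnitude and phase with the data. A minimal sketch for a series resistance feeding a parallel R-C branch, with illustrative component values (not fitted to the paper's measurements):

```python
import cmath
import math

# Impedance of a candidate equivalent circuit: series R0 + parallel R-C.
# Component values are illustrative, not fitted to any measured spectrum.
R0, Rp, Cp = 50.0, 1.0e4, 1.0e-9   # ohms, ohms, farads

def impedance(f):
    """Complex impedance at frequency f (Hz)."""
    w = 2 * math.pi * f
    z_rc = Rp / (1 + 1j * w * Rp * Cp)   # parallel R-C branch
    return R0 + z_rc

# Sweep spanning the paper's 10 Hz to 0.2 MHz measurement range
for f in (10.0, 1.0e3, 2.0e5):
    z = impedance(f)
    print(f, round(abs(z), 1), round(math.degrees(cmath.phase(z)), 1))
```

    A fitting routine would then adjust R0, Rp, and Cp to minimize the misfit between this model and the measured magnitude and phase, which is the optimization problem the abstract refers to.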

  3. Space Radiation Organ Doses for Astronauts on Past and Future Missions

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.

    2007-01-01

    We review methods and data used for determining astronaut organ dose equivalents on past space missions, including Apollo, Skylab, Space Shuttle, NASA-Mir, and the International Space Station (ISS). Expectations for future lunar missions are also described. Physical measurements of space radiation include the absorbed dose, dose equivalent, and linear energy transfer (LET) spectra, or a related quantity, the lineal energy (y) spectrum, which is measured by a tissue-equivalent proportional counter (TEPC). These data are used in conjunction with space radiation transport models to project organ-specific doses used in cancer and other risk projection models. Biodosimetry data from Mir, STS, and ISS missions provide an alternative estimate of organ dose equivalents based on chromosome aberrations. The physical environments inside spacecraft are currently well understood, with errors in organ dose projections estimated as less than plus or minus 15%; however, understanding the biological risks from space radiation remains a difficult problem because of the many radiation types, including protons, heavy ions, and secondary neutrons, for which there are no human data to estimate risks. The accuracy of the projections of organ dose equivalents described here must be supplemented with research on the health risks of space exposure to properly assess crew safety for exploration missions.

  4. Design and simulation of optoelectronic complementary dual neural elements for realizing a family of normalized vector 'equivalence-nonequivalence' operations

    NASA Astrophysics Data System (ADS)

    Krasilenko, Vladimir G.; Nikolsky, Aleksandr I.; Lazarev, Alexander A.; Magas, Taras E.

    2010-04-01

    The advantages of equivalence models (EMs) of neural networks (NNs) are shown in this paper. EMs are based on vector-matrix procedures with the basic operations of continuous neuro-logic: the normalized vector operations "equivalence", "nonequivalence", "autoequivalence", and "autononequivalence". The capacity of NNs based on EMs and their modifications, including auto- and heteroassociative memories for 2D images, exceeds the number of neurons several times over. Such neuroparadigms are very promising for processing, recognizing, and storing large and strongly correlated images. A family of "normalized equivalence-nonequivalence" neuro-fuzzy logic operations is elaborated on the basis of the generalized operations fuzzy negation, t-norm, and s-norm. A biologically motivated concept and time-pulse encoding principles of continuous-logic photocurrent reflections and sample-storage devices with pulse-width photoconverters have allowed us to design generalized structures for realizing the family of normalized linear vector "equivalence"-"nonequivalence" operations. Simulation results show that the processing time in such circuits does not exceed a few microseconds. The circuits are simple, have low supply voltage (1-3 V), low power consumption (milliwatts), and low input signal levels (microwatts), allow integrated construction, and address the problems of interconnection and cascading.

  5. Impact on Bacterial Resistance of Therapeutically Nonequivalent Generics: The Case of Piperacillin-Tazobactam

    PubMed Central

    Rodriguez, Carlos A.; Agudelo, Maria; Aguilar, Yudy A.; Zuluaga, Andres F.

    2016-01-01

    Previous studies have demonstrated that pharmaceutical equivalence and pharmacokinetic equivalence of generic antibiotics are necessary but not sufficient conditions to guarantee therapeutic equivalence (better called pharmacodynamic equivalence). In addition, there is scientific evidence suggesting a direct link between the pharmacodynamic nonequivalence of generic vancomycin and the promotion of resistance in Staphylococcus aureus. To find out whether even subtle deviations from the expected pharmacodynamic behavior with respect to the innovator could favor resistance, we studied a generic product of piperacillin-tazobactam characterized by pharmaceutical and pharmacokinetic equivalence but a faulty fit of Hill's Emax sigmoid model that could be interpreted as pharmacodynamic nonequivalence. We determined the impact in vivo of this generic product on the resistance of a mixed Escherichia coli population composed of ∼99% susceptible cells (ATCC 35218 strain) and a ∼1% isogenic resistant subpopulation that overproduces TEM-1 β-lactamase. After only 24 hours of treatment in the neutropenic murine thigh infection model, the generic amplified the resistant subpopulation up to 20 times compared with the innovator, following an inverted-U dose-response relationship. These findings highlight the critical role of therapeutic nonequivalence of generic antibiotics as a key factor contributing to the global problem of bacterial resistance. PMID:27191163

  6. EEG minimum-norm estimation compared with MEG dipole fitting in the localization of somatosensory sources at S1.

    PubMed

    Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J

    2004-03-01

    Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m), the deflections chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from the MNE and the locations of the ECDs were on average 12-13 mm for both deflections and both nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and the ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. The MNE can be used to verify parametric source modelling results. 
Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking activity changes between brain areas as a function of time.
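
    The Tikhonov-regularized L2-MNE has the closed form j = Lᵀ(LLᵀ + λI)⁻¹v for leadfield L, sensor data v, and regularization parameter λ. A toy sketch with a hypothetical 2-sensor, 3-source leadfield (all numbers illustrative):

```python
# Tikhonov-regularized L2 minimum-norm estimate on a toy problem:
# j = L^T (L L^T + lam*I)^{-1} v. Leadfield, data, and lam are hypothetical.
L = [[1.0, 0.5, 0.2],
     [0.2, 0.5, 1.0]]   # 2 sensors x 3 candidate sources
v = [1.0, 0.4]          # measured sensor values
lam = 0.1               # Tikhonov regularization parameter

# Gram matrix G = L L^T + lam*I (2x2), inverted in closed form
G = [[sum(L[i][k] * L[j][k] for k in range(3)) + (lam if i == j else 0.0)
      for j in range(2)] for i in range(2)]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[ G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det,  G[0][0] / det]]
w = [sum(Ginv[i][j] * v[j] for j in range(2)) for i in range(2)]
j_est = [sum(L[i][k] * w[i] for i in range(2)) for k in range(3)]
print([round(x, 3) for x in j_est])  # -> [0.736, 0.343, 0.087]
```

    The estimate spreads current over all sources but peaks at the source best coupled to the strongest sensor, which is why the study compares the location of the maximum current density against the ECD fit.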

  7. The Stokesian hydrodynamics of flexing, stretching filaments

    NASA Astrophysics Data System (ADS)

    Shelley, Michael J.; Ueda, Tetsuji

    2000-11-01

    A central element of many fundamental problems in physics and biology lies in the interaction of a viscous fluid with slender, elastic filaments. Examples arise in the dynamics of biological fibers, the motility of microscopic organisms, and in phase transitions of liquid crystals. When considering the dynamics on the scale of a single filament, the surrounding fluid can often be assumed to be inertialess and hence governed by the Stokes’ equations. A typical simplification then is to assume a local relation, along the filament, between the force per unit length exerted by the filament upon the fluid and the velocity of the filament. While this assumption can be justified through slender-body theory as the leading-order effect, this approximation is only logarithmically separated (in aspect ratio) from the next-order contribution capturing the first effects of non-local interactions mediated by the surrounding fluid; non-local interactions become increasingly important as a filament comes within proximity to itself, or another filament. Motivated by a pattern forming system in isotropic to smectic-A phase transitions, we consider the non-local Stokesian dynamics of a growing elastica immersed in a fluid. The non-local interactions of the filament with itself are included using a modification of the slender-body theory of Keller and Rubinow. This modification is asymptotically equivalent, and removes an instability of their formulation at small, unphysical length-scales. Within this system, the filament lives on a marginal stability boundary, driven by a continual process of growth and buckling. Repeated bucklings result in filament flex, which, coupled to the non-local interactions and mediated by elastic response, leads to the development of space-filling patterns. We develop numerical methods to solve this system accurately and efficiently, even in the presence of temporal stiffness and the close self-approach of the filament. 
While we have ignored many of the thermodynamic aspects of this system, our simulations show good qualitative agreement with experimental observations. Our results also suggest that non-locality, induced by the surrounding fluid, will be important to understanding the dynamics of related filament systems.

  8. On a comparison of two schemes in sequential data assimilation

    NASA Astrophysics Data System (ADS)

    Grishina, Anastasiia A.; Penenko, Alexey V.

    2017-11-01

    This paper is focused on variational data assimilation as an approach to mathematical modeling. Realization of the approach requires solving a sequence of connected inverse problems with different sets of observational data. Two variational data assimilation schemes, "implicit" and "explicit", are considered in the article. Their equivalence is shown, and numerical results are given on the basis of the non-linear Robertson system. To avoid the "inverse problem crime", different schemes were used to produce the synthetic measurements and to solve the data assimilation problem.
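
    As a toy illustration of avoiding the "inverse problem crime", the sketch below (my construction, not the paper's implicit/explicit schemes) generates synthetic measurements of the Robertson system with one integrator and runs the model with a different one; the step size, horizon, and choice of schemes are all illustrative assumptions.

```python
import numpy as np

def robertson_rhs(y):
    """Right-hand side of the (stiff) Robertson kinetics system."""
    y1, y2, y3 = y
    return np.array([
        -0.04 * y1 + 1.0e4 * y2 * y3,
         0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
         3.0e7 * y2 ** 2,
    ])

def integrate(y0, t_end, dt, scheme):
    """Integrate with either explicit Euler or classical RK4."""
    y = np.array(y0, dtype=float)
    for _ in range(int(round(t_end / dt))):
        if scheme == "euler":
            y = y + dt * robertson_rhs(y)
        else:  # classical fourth-order Runge-Kutta
            k1 = robertson_rhs(y)
            k2 = robertson_rhs(y + 0.5 * dt * k1)
            k3 = robertson_rhs(y + 0.5 * dt * k2)
            k4 = robertson_rhs(y + dt * k3)
            y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

y0 = [1.0, 0.0, 0.0]
# Synthetic "measurement": generated with RK4 ...
y_obs = integrate(y0, 0.3, 1e-4, "rk4")
# ... while the assimilation model uses a different (Euler) scheme,
# so the data are not reproduced exactly by the model's own solver.
y_model = integrate(y0, 0.3, 1e-4, "euler")
print(y_obs, y_model)
```

    Both trajectories conserve total mass and stay close, yet they are not identical, which is exactly the property one wants when manufacturing observations for a data assimilation test.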

  9. Distributed consensus for discrete-time heterogeneous multi-agent systems

    NASA Astrophysics Data System (ADS)

    Zhao, Huanyu; Fei, Shumin

    2018-06-01

    This paper studies the consensus problem for a class of discrete-time heterogeneous multi-agent systems. Two kinds of consensus algorithms will be considered. The heterogeneous multi-agent systems considered are converted into equivalent error systems by a model transformation. Then we analyse the consensus problem of the original systems by analysing the stability problem of the error systems. Some sufficient conditions for consensus of heterogeneous multi-agent systems are obtained by applying algebraic graph theory and matrix theory. Simulation examples are presented to show the usefulness of the results.
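
    The paper's algorithms target heterogeneous agents; as a minimal generic illustration of the error-system viewpoint, the sketch below (homogeneous first-order agents on a path graph with Metropolis weights, both my simplifying choices) runs a discrete-time consensus iteration and watches the disagreement, the error-system state, decay to zero.

```python
import numpy as np

# Undirected path graph on 5 agents; Metropolis weights give a symmetric,
# doubly stochastic update matrix W, so x_{k+1} = W x_k reaches average consensus.
n = 5
edges = [(i, i + 1) for i in range(n - 1)]
deg = np.zeros(n, int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
W = np.zeros((n, n))
for i, j in edges:
    w = 1.0 / (1 + max(deg[i], deg[j]))
    W[i, j] = W[j, i] = w
np.fill_diagonal(W, 1.0 - W.sum(axis=1))

x = np.array([3.0, -1.0, 4.0, 0.0, 2.0])
avg = x.mean()
for _ in range(200):
    x = W @ x            # consensus iteration
error = x - avg          # "error system" state: converges to zero
print(x, np.max(np.abs(error)))
```

    Stability of the error system (all non-unit eigenvalues of W strictly inside the unit circle, guaranteed here by graph connectivity) is precisely what yields consensus of the original system.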

  10. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

    It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project for the firm's activities. The solution of a particular problem of this type is presented.

  11. Linear decomposition approach for a class of nonconvex programming problems.

    PubMed

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. Based on solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it differs significantly from existing methods, giving an interesting approach to solving the problem with a reduced running time.

  12. Identification of Synchronous Machine Stability Parameters: An On-Line Time-Domain Approach.

    NASA Astrophysics Data System (ADS)

    Le, Loc Xuan

    1987-09-01

    A time-domain modeling approach is described which enables the stability-study parameters of the synchronous machine to be determined directly from input-output data measured at the terminals of the machine operating under normal conditions. The transient responses due to system perturbations are used to identify the parameters of the equivalent circuit models. The described models are verified by comparing their responses with the machine responses generated from the transient stability models of a small three-generator multi-bus power system and of a single-machine infinite-bus power network. The least-squares method is used for the solution of the model parameters. As a precaution against ill-conditioned problems, the singular value decomposition (SVD) is employed for its inherent numerical stability. In order to identify the equivalent-circuit parameters uniquely, the solution of a linear optimization problem with non-linear constraints is required. Here, the SVD appears to offer a simple solution to this otherwise difficult problem. Furthermore, the SVD yields solutions with small bias and, therefore, physically meaningful parameters even in the presence of noise in the data. The question concerning the need for a more advanced model of the synchronous machine which describes subtransient and even sub-subtransient behavior is dealt with sensibly by the concept of condition number. The concept provides a quantitative measure for determining whether such an advanced model is indeed necessary. Finally, the recursive SVD algorithm is described for real-time parameter identification and tracking of slowly time-variant parameters. The algorithm is applied to identify the dynamic equivalent power system model.
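
    The SVD-based least-squares step can be sketched in a few lines; the regressor matrix, noise level, and parameter vector below are hypothetical stand-ins, not the thesis's machine models, and the condition number plays exactly the diagnostic role the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regressor matrix and noisy "terminal measurements".
A = rng.normal(size=(200, 4))
theta_true = np.array([1.5, -0.7, 0.3, 2.0])
b = A @ theta_true + 1e-3 * rng.normal(size=200)

# Least squares via SVD: theta = V diag(1/s) U^T b.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
theta = Vt.T @ ((U.T @ b) / s)

# Condition number from the singular values: a large value would flag an
# over-parameterized model whose extra parameters the data cannot support.
cond = s[0] / s[-1]
print(theta, cond)
```

    For an ill-conditioned problem one would truncate the smallest singular values before inverting, which is the numerically stable behavior the abstract attributes to the SVD approach.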

  13. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the χ² per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius.
Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
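
    A stripped-down version of this inverse problem can be sketched as follows: an infinite homogeneous medium instead of a bounded torso, a "belt" of discrete dipoles as the distributed source, and a grid search along the symmetry axis with a linear least-squares solve for the moment at each trial location (all geometry, counts, and the search strategy are my illustrative choices, not the study's algorithm).

```python
import numpy as np

def dipole_potential(p, r0, electrodes):
    """Potential of a point dipole p at r0 in an infinite homogeneous
    medium (unit conductivity), evaluated at each electrode."""
    d = electrodes - r0                     # (n, 3)
    return (d @ p) / (4 * np.pi * np.linalg.norm(d, axis=1) ** 3)

# Electrodes on a sphere of radius 1 (coarse spherical grid).
th = np.linspace(0.3, np.pi - 0.3, 6)
ph = np.linspace(0, 2 * np.pi, 8, endpoint=False)
T, P = np.meshgrid(th, ph)
electrodes = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P),
                       np.cos(T)], axis=-1).reshape(-1, 3)

# Distributed source: a small ring ("belt") of parallel dipoles at z = 0.2.
ring = [np.array([0.1 * np.cos(a), 0.1 * np.sin(a), 0.2]) for a in
        np.linspace(0, 2 * np.pi, 12, endpoint=False)]
p_elem = np.array([0.0, 0.0, 1.0]) / len(ring)
v_obs = sum(dipole_potential(p_elem, r0, electrodes) for r0 in ring)

# Fit a single equivalent dipole: grid-search the location on the symmetry
# axis; for each trial location the moment enters linearly, so solve it by
# least squares and keep the location with the smallest chi-squared.
best = (np.inf, None, None)
for z in np.linspace(-0.4, 0.4, 17):
    r0 = np.array([0.0, 0.0, z])
    d = electrodes - r0
    G = d / (4 * np.pi * np.linalg.norm(d, axis=1, keepdims=True) ** 3)
    p, res, *_ = np.linalg.lstsq(G, v_obs, rcond=None)
    chi2 = np.sum((G @ p - v_obs) ** 2)
    if chi2 < best[0]:
        best = (chi2, z, p)
chi2, z_best, p_best = best
print(z_best, p_best)
```

    The fitted location lands near, but in general not exactly at, the geometric centre of the belt, which is the systematic effect the study quantifies.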

  14. Knowledge-based control for robot self-localization

    NASA Technical Reports Server (NTRS)

    Bennett, Bonnie Kathleen Holte

    1993-01-01

    Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.

  15. 40 CFR 35.918 - Individual systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... STATE AND LOCAL ASSISTANCE Grants for Construction of Treatment Works-Clean Water Act § 35.918... commercial establishment with waste water flow equal to or smaller than one user equivalent (generally 300 gallons per day dry weather flows) is included. (3) Small commercial establishments. Private...

  16. 40 CFR 455.41 - Special definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... system being used contains the appropriate pollution control technologies (or equivalent systems... the appropriate permitting authority, e.g., the local Control Authority (the POTW) or NPDES permit... Control Authority (the POTW) or NPDES permit writer, which states that the P2 Alternative is being...

  17. 40 CFR 455.41 - Special definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... system being used contains the appropriate pollution control technologies (or equivalent systems... the appropriate permitting authority, e.g., the local Control Authority (the POTW) or NPDES permit... Control Authority (the POTW) or NPDES permit writer, which states that the P2 Alternative is being...

  18. 40 CFR 455.41 - Special definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... system being used contains the appropriate pollution control technologies (or equivalent systems... the appropriate permitting authority, e.g., the local Control Authority (the POTW) or NPDES permit... Control Authority (the POTW) or NPDES permit writer, which states that the P2 Alternative is being...

  19. Orbits of Two-Body Problem From the Lenz Vector

    ERIC Educational Resources Information Center

    Caplan, S.; And Others

    1978-01-01

    Obtains the orbits with reference to the center of mass of two bodies under mutual inverse-square-law interaction by use of the eccentricity vector, which is equivalent to the Lenz vector within a numerical factor. (Author/SL)
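
    The construction can be checked numerically: for a Kepler orbit with GM = 1 (my unit choice, not from the paper), the eccentricity vector computed directly from position and velocity matches the conic eccentricity obtained from energy and angular momentum.

```python
import numpy as np

# Relative motion of a two-body system, with units chosen so the
# gravitational parameter of the reduced problem is GM = 1.
GM = 1.0
r = np.array([1.0, 0.0, 0.0])   # relative position
v = np.array([0.0, 1.2, 0.0])   # relative velocity

L = np.cross(r, v)                                     # specific angular momentum
e_vec = np.cross(v, L) / GM - r / np.linalg.norm(r)    # Lenz/eccentricity vector
E = 0.5 * v @ v - GM / np.linalg.norm(r)               # specific orbital energy

# |e| agrees with the standard conic-section eccentricity formula.
e_from_energy = np.sqrt(1 + 2 * E * (L @ L) / GM**2)
print(np.linalg.norm(e_vec), e_from_energy)
```

    The vector points along the major axis toward perihelion, so its direction fixes the orbit's orientation and its magnitude fixes the shape, which is why the orbits follow from it directly.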

  20. Combined high vacuum/high frequency fatigue tester

    NASA Technical Reports Server (NTRS)

    Honeycutt, C. R.; Martin, T. F.

    1971-01-01

    Apparatus permits application of significantly greater number of cycles or equivalent number of cycles in shorter time than conventional fatigue test machines. Environment eliminates problems associated with high temperature oxidation and with sensitivity of refractory alloy behavior to atmospheric contamination.

  1. Inconsistency of topologically massive hypergravity

    NASA Technical Reports Server (NTRS)

    Aragone, C.; Deser, S.

    1985-01-01

    The coupled topologically massive spin-5/2 gravity system in D = 3 dimensions whose kinematics represents dynamical propagating gauge invariant massive spin-5/2 and spin-2 excitations, is shown to be inconsistent, or equivalently, not locally hypersymmetric. In contrast to D = 4, the local constraints on the system arising from failure of the fermionic Bianchi identities do not involve the 'highest spin' components of the field, but rather the auxiliary spinor required to construct a consistent massive model.

  2. An experimental evaluation of the effect of homogenization quality as a preconditioning on oil-water two-phase volume fraction measurement accuracy using gamma-ray attenuation technique

    NASA Astrophysics Data System (ADS)

    Sharifzadeh, M.; Hashemabadi, S. H.; Afarideh, H.; Khalafi, H.

    2018-02-01

    The problem of how to accurately measure multiphase flow in the oil/gas industry has remained an important issue since the early 1980s. In particular, oil-water two-phase flow rate measurement has been regarded as an important problem. Gamma-ray attenuation is one of the most commonly used methods for phase fraction measurement, and it is strongly dependent on flow regime variations. The peripheral strategy applied for removing the regime dependency problem is to use a homogenization system as a preconditioning tool, as this research work demonstrates. Here, at first, TPFHL, a two-phase flow homogenizer loop, is introduced and verified by a quantitative assessment. Following this procedure, SEMPF, a static-equivalent multiphase flow system with an additional capability for preparing a uniform mixture, is explained. The proposed idea in this system was verified by Monte Carlo simulations. Finally, different water-gas oil two-phase volume fractions were fed to the homogenizer loop and injected into the static-equivalent system. A comparison between the performance of these two systems using the gamma-ray attenuation technique showed not only an extra ability to prepare a homogenized mixture but also a remarkably increased measurement accuracy for the static-equivalent system.
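
    The phase-fraction inversion underlying such measurements rests on Beer-Lambert attenuation through a homogenized mixture; a minimal sketch follows, with illustrative attenuation coefficients and geometry (not the authors' calibration data).

```python
import math

# Two-phase (oil-water) volume fraction from gamma-ray attenuation,
# assuming a homogenized mixture obeying the Beer-Lambert law:
#   I = I0 * exp(-(alpha_w * mu_w + (1 - alpha_w) * mu_o) * d)
mu_w, mu_o = 0.086, 0.072   # linear attenuation coefficients (1/cm), illustrative
d = 10.0                    # path length through the pipe (cm)
I0 = 1.0e6                  # source intensity (counts)

def transmitted(alpha_w):
    mu_mix = alpha_w * mu_w + (1 - alpha_w) * mu_o
    return I0 * math.exp(-mu_mix * d)

def water_fraction(I):
    """Invert the attenuation measurement for the water volume fraction."""
    mu_mix = math.log(I0 / I) / d
    return (mu_mix - mu_o) / (mu_w - mu_o)

I_meas = transmitted(0.35)
print(water_fraction(I_meas))
```

    The linear mixing rule is exact only for a uniform mixture along the beam path, which is precisely why homogenization as a preconditioning step removes the flow-regime dependence.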

  3. Loop series for discrete statistical models on graphs

    NASA Astrophysics Data System (ADS)

    Chertkov, Michael; Chernyak, Vladimir Y.

    2006-06-01

    In this paper we present the derivation details, logic, and motivation for the loop calculus introduced in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Generating functions for each of the three interrelated discrete statistical models are expressed in terms of a finite series. The first term in the series corresponds to the Bethe-Peierls belief-propagation (BP) contribution; the other terms are labelled by loops on the factor graph. All loop contributions are simple rational functions of spin correlation functions calculated within the BP approach. We discuss two alternative derivations of the loop series. One approach implements a set of local auxiliary integrations over continuous fields, with the BP contribution corresponding to an integrand saddle-point value. The integrals are replaced by sums in the complementary approach, briefly explained in Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R)). Local gauge symmetry transformations that clarify an important invariant feature of the BP solution are revealed in both approaches. The individual terms change under the gauge transformation while the partition function remains invariant. The requirement for all individual terms to be nonzero only for closed loops in the factor graph (as opposed to paths with loose ends) is equivalent to fixing the first term in the series to be exactly equal to the BP contribution. Further applications of the loop calculus to problems in statistical physics, computer and information sciences are discussed.
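
    On a loop-free (tree) factor graph the series has no loop corrections and the BP term is already exact; the sketch below checks this on a three-spin chain (model, coupling, and field are my toy choices, not from the paper).

```python
import itertools
import math

# Pairwise model on a 3-node chain (a tree, so BP is exact and the loop
# series has no correction terms):
#   p(s) ~ exp(J * (s1 s2 + s2 s3) + h * s2),  s_i = +/-1.
J, h = 0.6, 0.3
def weight(s):
    return math.exp(J * (s[0] * s[1] + s[1] * s[2]) + h * s[1])

# Exact marginal of the middle spin by brute-force enumeration.
Z = sum(weight(s) for s in itertools.product([-1, 1], repeat=3))
p_mid_exact = sum(weight(s) for s in itertools.product([-1, 1], repeat=3)
                  if s[1] == 1) / Z

# Belief propagation on the chain: one message from each leaf to the middle.
def message(J):
    """Message m(s2) = sum over s1 of exp(J s1 s2), for s2 = +1 and -1."""
    return {s2: sum(math.exp(J * s1 * s2) for s1 in (-1, 1)) for s2 in (-1, 1)}

m_left, m_right = message(J), message(J)
belief = {s2: math.exp(h * s2) * m_left[s2] * m_right[s2] for s2 in (-1, 1)}
p_mid_bp = belief[1] / (belief[1] + belief[-1])
print(p_mid_exact, p_mid_bp)
```

    On graphs with cycles the two quantities would differ, and the loop series expresses exactly that difference in terms of BP correlation functions.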

  4. Structural design of composite rotor blades with consideration of manufacturability, durability, and manufacturing uncertainties

    NASA Astrophysics Data System (ADS)

    Li, Leihong

    A modular structural design methodology for composite blades is developed. This design method can be used to design composite rotor blades with sophisticated geometric cross-sections. The method hierarchically decomposes the highly-coupled interdisciplinary rotor analysis into global and local levels. At the global level, aeroelastic response analysis and rotor trim are conducted based on multi-body dynamic models. At the local level, variational asymptotic beam sectional analysis methods are used for the equivalent one-dimensional beam properties. Compared with traditional design methodology, the proposed method is more efficient and accurate. The proposed method is then used to study three different design problems that have not been investigated before. The first is to add manufacturing constraints into design optimization. The introduction of manufacturing constraints complicates the optimization process. However, the design with manufacturing constraints benefits the manufacturing process and reduces the risk of violating major performance constraints. Next, a new design procedure for structural design against fatigue failure is proposed. This procedure combines fatigue analysis with the optimization process. The durability or fatigue analysis employs a strength-based model. The design is subject to stiffness, frequency, and durability constraints. Finally, the impacts of manufacturing uncertainty on rotor blade aeroelastic behavior are investigated, and a probabilistic design method is proposed to control the impacts of uncertainty on blade structural performance. The uncertainty factors include dimensions, shapes, material properties, and service loads.

  5. Estimating agricultural yield gap in Africa using MODIS NDVI dataset

    NASA Astrophysics Data System (ADS)

    Luan, Y.; Zhu, W.; Luo, X.; Liu, J.; Cui, X.

    2013-12-01

    Global agriculture has undergone a period of rapid intensification characterized as the 'Green Revolution', except in Africa, which is the region most affected by unreliable food access and undernourishment. Increasing crop production will be one of the greatest challenges there, and the most effective way to mitigate food insecurity, as Africa's agricultural yield is at a much lower level compared to the global average. In this study we characterize cropland vegetation phenology in Africa based on MODIS NDVI time series between 2000 and 2012. Cumulated NDVI is a proxy for net primary productivity and is used as an indicator for evaluating the potential yield gap in Africa. This is achieved by translating the gap between the optimum attainable productivity level in each class of cropping system and the actual productivity level, using the relationship between cumulated NDVI and cereal-equivalent production. The results show most cropland areas in Africa have a decreasing trend in cumulated NDVI, distributed in the Nile Delta, Eastern Africa and the central semi-arid to arid savanna area, while significant positive cumulated NDVI trends are mainly found between Senegal and Benin. Using cumulated NDVI and statistics of cereal-equivalent production, we find a remarkable potential yield gap at the Horn of East Africa (especially in Somalia) and in Northern Africa (Morocco, Algeria and Tunisia). Meanwhile, countries located in the savanna area near the Sahel desert and in South Africa also show significant potential, though they already have a relatively high level of productivity. Our results can help provide policy recommendations for local governments or NGOs to tackle food security problems by identifying zones with high potential for yield improvement.
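
    The NDVI-to-yield-gap translation can be caricatured in a few lines; the data, the linear regression coefficients, and the class grouping below are entirely hypothetical placeholders for the paper's cumulated-NDVI / cereal-equivalent relationship.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: seasonal NDVI time series for regions grouped into
# cropping-system classes; cumulated NDVI acts as a productivity proxy.
n_regions, n_steps = 12, 23            # e.g. 23 16-day MODIS composites
ndvi = np.clip(rng.normal(0.45, 0.1, size=(n_regions, n_steps)), 0.0, 1.0)
system = rng.integers(0, 3, size=n_regions)   # cropping-system class labels

cum_ndvi = ndvi.sum(axis=1)
# Assumed linear link between cumulated NDVI and cereal-equivalent yield.
a, b = 0.4, -1.0                       # illustrative regression coefficients
yield_ce = a * cum_ndvi + b            # t/ha, cereal-equivalent

# Yield gap: distance to the best attainable level within the same
# cropping-system class.
gap = np.empty(n_regions)
for k in np.unique(system):
    members = system == k
    gap[members] = yield_ce[members].max() - yield_ce[members]
print(gap.round(2))
```

    Regions with a large gap relative to the best performer in their own cropping system are the "high potential for yield improvement" zones the study aims to flag.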

  6. Efficient association study design via power-optimized tag SNP selection

    PubMed Central

    HAN, BUHM; KANG, HYUN MIN; SEO, MYEONG SEONG; ZAITLEN, NOAH; ESKIN, ELEAZAR

    2008-01-01

    Discovering statistical correlation between causal genetic variation and clinical traits through association studies is an important method for identifying the genetic basis of human diseases. Since fully resequencing a cohort is prohibitively costly, genetic association studies take advantage of local correlation structure (or linkage disequilibrium) between single nucleotide polymorphisms (SNPs) by selecting a subset of SNPs to be genotyped (tag SNPs). While many current association studies are performed using commercially available high-throughput genotyping products that define a set of tag SNPs, choosing tag SNPs remains an important problem for both custom follow-up studies as well as designing the high-throughput genotyping products themselves. The most widely used tag SNP selection method optimizes over the correlation between SNPs (r2). However, tag SNPs chosen based on an r2 criterion do not necessarily maximize the statistical power of an association study. We propose a study design framework that chooses SNPs to maximize power and efficiently measures the power through empirical simulation. Empirical results based on the HapMap data show that our method gains considerable power over a widely used r2-based method, or equivalently reduces the number of tag SNPs required to attain the desired power of a study. Our power-optimized 100k whole genome tag set provides equivalent power to the Affymetrix 500k chip for the CEU population. For the design of custom follow-up studies, our method provides up to twice the power increase using the same number of tag SNPs as r2-based methods. Our method is publicly available via web server at http://design.cs.ucla.edu. PMID:18702637
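
    For contrast with the paper's power-optimized design, the widely used r²-threshold greedy tagging it benchmarks against can be sketched as follows (the simulated genotypes, block structure, and threshold are all illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical genotype matrix: 80 individuals x 15 SNPs coded 0/1/2,
# with neighbouring SNPs correlated to mimic linkage disequilibrium.
base = rng.integers(0, 3, size=(80, 5))
geno = np.repeat(base, 3, axis=1)               # blocks of 3 correlated SNPs
noise = rng.integers(0, 3, size=geno.shape)
mask = rng.random(geno.shape) < 0.2
geno = np.where(mask, noise, geno)

r2 = np.corrcoef(geno, rowvar=False) ** 2
r2 = np.nan_to_num(r2, nan=0.0)                 # guard monomorphic SNPs
np.fill_diagonal(r2, 1.0)                       # every SNP tags itself

# Greedy r^2 tagging: repeatedly pick the SNP covering the most
# still-untagged SNPs at r^2 >= threshold.
threshold = 0.8
untagged = set(range(geno.shape[1]))
tags = []
while untagged:
    best = max(untagged, key=lambda s: sum(r2[s, t] >= threshold for t in untagged))
    tags.append(best)
    untagged -= {t for t in untagged if r2[best, t] >= threshold}
print(tags)
```

    This guarantees every SNP is covered at the chosen r² threshold, but, as the abstract points out, covering r² well is not the same as maximizing the study's statistical power.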

  7. On-Line Algorithms and Reverse Mathematics

    NASA Astrophysics Data System (ADS)

    Harris, Seth

    In this thesis, we classify the reverse-mathematical strength of sequential problems. If we are given a problem P of the form ∀X(α(X) → ∃Z β(X, Z)), then the corresponding sequential problem, SeqP, asserts the existence of infinitely many solutions to P: ∀X(∀n α(X_n) → ∃Z ∀n β(X_n, Z_n)). P is typically provable in RCA0 if all objects involved are finite. SeqP, however, is only guaranteed to be provable in ACA0. In this thesis we exactly characterize which sequential problems are equivalent to RCA0, WKL0, or ACA0. We say that a problem P is solvable by an on-line algorithm if P can be solved according to a two-player game, played by Alice and Bob, in which Bob has a winning strategy. Bob wins the game if Alice's sequence of plays 〈a_0, ..., a_k〉 and Bob's sequence of responses 〈b_0, ..., b_k〉 constitute a solution to P. Formally, an on-line algorithm A is a function that inputs an admissible sequence of plays 〈a_0, b_0, ..., a_j〉 and outputs a new play b_j for Bob. (This differs from the typical definition of "algorithm", though quite often a concrete set of instructions can be easily deduced from A.) We show that SeqP is provable in RCA0 precisely when P is solvable by an on-line algorithm. Schmerl proved this result specifically for the graph coloring problem; we generalize Schmerl's result to any problem that is on-line solvable. To prove our separation, we introduce a principle called Predict_k(r) that is equivalent to WKL0 for standard k, r. We show that WKL0 is sufficient to prove SeqP precisely when P has a solvable closed kernel. This means that a solution exists, and each initial segment of this solution is a solution to the corresponding initial segment of the problem. (Certain bounding conditions are necessary as well.) If no such solution exists, then SeqP is equivalent to ACA0 over RCA0 + IΣ^0_2; RCA0 alone suffices if only sequences of standard length are considered.
We use different techniques from Schmerl to prove this separation, and in the process we improve some of Schmerl's results on Grundy colorings. In Chapter 4 we analyze a variety of applications, classifying their sequential forms by reverse-mathematical strength. This builds upon similar work by Dorais and Hirst and Mummert. We consider combinatorial applications such as matching problems and Dilworth's theorems, and we also consider classic algorithms such as the task scheduling and paging problems. Tables summarizing our findings can be found at the end of Chapter 4.
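
    The graph-colouring case Schmerl studied gives the canonical example of an on-line algorithm; the sketch below plays Bob with the classic First-Fit strategy (my minimal rendering of the game, not the thesis's formal machinery).

```python
# First-Fit online graph colouring: vertices arrive one at a time together
# with their edges back to previously seen vertices (Alice's plays); the
# algorithm answers with the least colour unused by those neighbours
# (Bob's plays), never revising earlier answers.

def first_fit(adversary_stream):
    colours = []                     # colours[i] = colour of vertex i
    edges = []                       # edges seen so far, kept for checking
    for i, back_edges in enumerate(adversary_stream):
        used = {colours[j] for j in back_edges}
        c = 0
        while c in used:             # least colour not used by a neighbour
            c += 1
        colours.append(c)
        edges.extend((j, i) for j in back_edges)
    return colours, edges

# A 5-cycle presented online: 0-1-2-3-4-0.
stream = [[], [0], [1], [2], [3, 0]]
colours, edges = first_fit(stream)
print(colours)
```

    The odd cycle forces a third colour even though the graph is 2-colourable offline, a small instance of the gap between on-line and offline solvability that the reverse-mathematical classification measures.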

  8. Hybridization of decomposition and local search for multiobjective optimization.

    PubMed

    Ke, Liangjun; Zhang, Qingfu; Battiti, Roberto

    2014-10-01

    Combining ideas from evolutionary algorithms, decomposition approaches, and Pareto local search, this paper suggests a simple yet efficient memetic algorithm for combinatorial multiobjective optimization problems: memetic algorithm based on decomposition (MOMAD). It decomposes a combinatorial multiobjective problem into a number of single objective optimization problems using an aggregation method. MOMAD evolves three populations: 1) population P(L) for recording the current solution to each subproblem; 2) population P(P) for storing starting solutions for Pareto local search; and 3) an external population P(E) for maintaining all the nondominated solutions found so far during the search. A problem-specific single objective heuristic can be applied to these subproblems to initialize the three populations. At each generation, a Pareto local search method is first applied to search a neighborhood of each solution in P(P) to update P(L) and P(E). Then a single objective local search is applied to each perturbed solution in P(L) for improving P(L) and P(E), and reinitializing P(P). The procedure is repeated until a stopping condition is met. MOMAD provides a generic hybrid multiobjective algorithmic framework in which problem-specific knowledge, well-developed single objective local search heuristics and Pareto local search methods can be hybridized. It is a population based iterative method and thus an anytime algorithm. Extensive experiments have been conducted in this paper to study MOMAD and compare it with some other state-of-the-art algorithms on the multiobjective traveling salesman problem and the multiobjective knapsack problem. The experimental results show that our proposed algorithm outperforms or performs similarly to the best-so-far heuristics on these two problems.
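
    A much-simplified caricature of the MOMAD ingredients (decomposition into weighted-sum subproblems, a one-flip local search, and an external nondominated archive) can be sketched on a toy bi-objective knapsack; all problem data below are made up, and the real algorithm's three populations and perturbation scheme are omitted.

```python
import random

random.seed(1)

# Tiny bi-objective knapsack: two profit vectors, one weight vector.
n, cap = 10, 15
w  = [random.randint(1, 6) for _ in range(n)]
p1 = [random.randint(1, 9) for _ in range(n)]
p2 = [random.randint(1, 9) for _ in range(n)]

def feasible(x): return sum(wi for wi, xi in zip(w, x) if xi) <= cap
def objs(x):     return (sum(a for a, xi in zip(p1, x) if xi),
                         sum(b for b, xi in zip(p2, x) if xi))
def dominates(a, b): return a != b and a[0] >= b[0] and a[1] >= b[1]

def greedy(lmbda):
    """Greedy solution of one weighted-sum scalar subproblem."""
    order = sorted(range(n),
                   key=lambda i: -(lmbda * p1[i] + (1 - lmbda) * p2[i]) / w[i])
    x = [0] * n
    for i in order:
        x[i] = 1
        if not feasible(x):
            x[i] = 0
    return tuple(x)

def update(archive, x):
    """Keep the external archive mutually nondominated."""
    fx = objs(x)
    if any(dominates(objs(a), fx) or objs(a) == fx for a in archive):
        return
    archive[:] = [a for a in archive if not dominates(fx, objs(a))] + [x]

# Decomposition: one scalar subproblem per weight, then 1-flip local search.
archive = []
for lmbda in [k / 10 for k in range(11)]:
    x = greedy(lmbda)
    update(archive, x)
    for i in range(n):                    # one-flip neighbourhood
        y = list(x); y[i] = 1 - y[i]
        if feasible(y):
            update(archive, tuple(y))
print([objs(a) for a in archive])
```

    The archive plays the role of P(E): it only ever contains feasible, mutually nondominated solutions, and it can be inspected at any time, which is what makes this style of method an anytime algorithm.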

  9. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work - estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs.
The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
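
    As one concrete instance of the sampling techniques surveyed, here is a minimal parallel-tempering sketch on a double-well density; the temperature ladder, proposal width, and sweep counts are arbitrary illustrative choices.

```python
import math
import random

random.seed(0)

def neg_log_pdf(x):
    """Double-well target: two modes near x = +/-1, barrier at x = 0."""
    return (x * x - 1.0) ** 2

# Parallel tempering: Metropolis chains at several temperatures, plus
# occasional state swaps between neighbouring temperatures so the cold
# chain inherits barrier crossings made at high temperature.
temps = [1.0, 3.0, 9.0]
state = [1.0 for _ in temps]
cold_samples = []
for step in range(20000):
    for k, T in enumerate(temps):            # Metropolis move in each chain
        prop = state[k] + random.gauss(0.0, 0.5)
        d = (neg_log_pdf(prop) - neg_log_pdf(state[k])) / T
        if d <= 0 or random.random() < math.exp(-d):
            state[k] = prop
    if step % 10 == 0:                       # attempt a replica swap
        k = random.randrange(len(temps) - 1)
        d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (
            neg_log_pdf(state[k]) - neg_log_pdf(state[k + 1]))
        if d >= 0 or random.random() < math.exp(d):
            state[k], state[k + 1] = state[k + 1], state[k]
    cold_samples.append(state[0])
print(min(cold_samples), max(cold_samples))
```

    A single cold chain can stall in one well; with the swap moves the cold samples visit both modes, which is the "statistically correct simulated annealing" behavior the abstract describes.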

  10. Variationally consistent discretization schemes and numerical algorithms for contact problems

    NASA Astrophysics Data System (ADS)

    Wohlmuth, Barbara

    We consider variationally consistent discretization schemes for mechanical contact problems. Most of the results can also be applied to other variational inequalities, such as those for phase transition problems in porous media, for plasticity or for option pricing applications from finance. The starting point is to weakly incorporate the constraint into the setting and to reformulate the inequality in the displacement in terms of a saddle-point problem. Here, the Lagrange multiplier represents the surface forces, and the constraints are restricted to the boundary of the simulation domain. Having a uniform inf-sup bound, one can then establish optimal low-order a priori convergence rates for the discretization error in the primal and dual variables. In addition to the abstract framework of linear saddle-point theory, complementarity terms have to be taken into account. The resulting inequality system is solved by rewriting it equivalently by means of the non-linear complementarity function as a system of equations. Although it is not differentiable in the classical sense, semi-smooth Newton methods, yielding super-linear convergence rates, can be applied and easily implemented in terms of a primal-dual active set strategy. Quite often the solution of contact problems has a low regularity, and the efficiency of the approach can be improved by using adaptive refinement techniques. Different standard types, such as residual- and equilibrated-based a posteriori error estimators, can be designed based on the interpretation of the dual variable as Neumann boundary condition. For the fully dynamic setting it is of interest to apply energy-preserving time-integration schemes. However, the differential algebraic character of the system can result in high oscillations if standard methods are applied. A possible remedy is to modify the fully discretized system by a local redistribution of the mass. 
Numerical results in two and three dimensions illustrate the wide range of possible applications and show the performance of the space discretization scheme, non-linear solver, adaptive refinement process and time integration.
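
    The primal-dual active set strategy mentioned above can be sketched for a one-dimensional obstacle problem (my discretization and data; the paper treats far more general contact problems, and this is the standard textbook form of the iteration rather than the authors' implementation).

```python
import numpy as np

# 1D obstacle problem: -u'' = f on (0,1), u(0) = u(1) = 0, u >= psi,
# complementarity K u = f + lam, lam >= 0, lam (u - psi) = 0,
# solved by a primal-dual active set (semi-smooth Newton) iteration.
n = 50
h = 1.0 / (n + 1)
K = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
f = np.full(n, -1.0)          # downward load
psi = np.full(n, -0.05)       # flat obstacle below

u = np.zeros(n)
lam = np.zeros(n)             # multiplier = contact force
c = 1.0
active = np.zeros(n, dtype=bool)
for it in range(50):
    new_active = lam + c * (psi - u) > 0
    if it > 0 and np.array_equal(new_active, active):
        break                 # active set stabilized: KKT system satisfied
    active = new_active
    # Solve: u = psi on the active set, (K u)_i = f_i elsewhere.
    M = K.copy()
    rhs = f.copy()
    M[active] = 0.0
    M[active, active] = 1.0
    rhs[active] = psi[active]
    u = np.linalg.solve(M, rhs)
    lam = np.where(active, K @ u - f, 0.0)

print(active.sum(), u.min())
```

    Each pass is one semi-smooth Newton step on the non-linear complementarity reformulation, which is why the iteration typically stabilizes after only a handful of active-set updates.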

  11. Optimal shielding thickness for galactic cosmic ray environments

    NASA Astrophysics Data System (ADS)

    Slaba, Tony C.; Bahadori, Amir A.; Reddell, Brandon D.; Singleterry, Robert C.; Clowdsley, Martha S.; Blattnig, Steve R.

    2017-02-01

    Models have been extensively used in the past to evaluate and develop material optimization and shield design strategies for astronauts exposed to galactic cosmic rays (GCR) on long duration missions. A persistent conclusion from many of these studies was that passive shielding strategies are inefficient at reducing astronaut exposure levels and the mass required to significantly reduce the exposure is infeasible, given launch and associated cost constraints. An important assumption of this paradigm is that adding shielding mass does not substantially increase astronaut exposure levels. Recent studies with HZETRN have suggested, however, that dose equivalent values actually increase beyond ∼20 g/cm2 of aluminum shielding, primarily as a result of neutron build-up in the shielding geometry. In this work, various Monte Carlo (MC) codes and 3DHZETRN are evaluated in slab geometry to verify the existence of a local minimum in the dose equivalent versus aluminum thickness curve near 20 g/cm2. The same codes are also evaluated in polyethylene shielding, where no local minimum is observed, to provide a comparison between the two materials. Results are presented so that the physical interactions driving build-up in dose equivalent values can be easily observed and explained. Variation of transport model results for light ions (Z ≤ 2) and neutron-induced target fragments, which contribute significantly to dose equivalent for thick shielding, is also highlighted and indicates that significant uncertainties are still present in the models for some particles. The 3DHZETRN code is then further evaluated over a range of related slab geometries to draw closer connection to more realistic scenarios. Future work will examine these related geometries in more detail.

  12. Optimal shielding thickness for galactic cosmic ray environments.

    PubMed

    Slaba, Tony C; Bahadori, Amir A; Reddell, Brandon D; Singleterry, Robert C; Clowdsley, Martha S; Blattnig, Steve R

    2017-02-01

    Models have been extensively used in the past to evaluate and develop material optimization and shield design strategies for astronauts exposed to galactic cosmic rays (GCR) on long duration missions. A persistent conclusion from many of these studies was that passive shielding strategies are inefficient at reducing astronaut exposure levels and the mass required to significantly reduce the exposure is infeasible, given launch and associated cost constraints. An important assumption of this paradigm is that adding shielding mass does not substantially increase astronaut exposure levels. Recent studies with HZETRN have suggested, however, that dose equivalent values actually increase beyond ∼20 g/cm2 of aluminum shielding, primarily as a result of neutron build-up in the shielding geometry. In this work, various Monte Carlo (MC) codes and 3DHZETRN are evaluated in slab geometry to verify the existence of a local minimum in the dose equivalent versus aluminum thickness curve near 20 g/cm2. The same codes are also evaluated in polyethylene shielding, where no local minimum is observed, to provide a comparison between the two materials. Results are presented so that the physical interactions driving build-up in dose equivalent values can be easily observed and explained. Variation of transport model results for light ions (Z ≤ 2) and neutron-induced target fragments, which contribute significantly to dose equivalent for thick shielding, is also highlighted and indicates that significant uncertainties are still present in the models for some particles. The 3DHZETRN code is then further evaluated over a range of related slab geometries to draw closer connection to more realistic scenarios. Future work will examine these related geometries in more detail. Published by Elsevier Ltd.

  13. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xavier, MA; Trimboli, MS

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest significant performance improvements might be achieved by extending the result to electrochemical models. (C) 2015 Elsevier B.V. All rights reserved.
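    As an illustration of the kind of model the abstract describes, the sketch below simulates a 2nd-order (two RC branch) Thevenin equivalent circuit in discrete time. All parameter values are invented for illustration, not taken from the paper; the direct feed-through of the ohmic resistance R0 shows up as an instantaneous terminal-voltage step when the current changes.

```python
import numpy as np

# Hypothetical cell parameters (illustrative only, not from the paper).
R0 = 0.01                 # ohmic resistance (direct feed-through term), ohms
R1, C1 = 0.015, 2000.0    # first RC pair
R2, C2 = 0.02, 20000.0    # second RC pair
OCV = 3.7                 # open-circuit voltage, volts (held constant here)
dt = 1.0                  # sample time, s

# Exact zero-order-hold discretization of each RC branch.
a1, a2 = np.exp(-dt / (R1 * C1)), np.exp(-dt / (R2 * C2))
A = np.diag([a1, a2])
B = np.array([R1 * (1 - a1), R2 * (1 - a2)])

def simulate(i_profile):
    """Terminal voltage for a discharge-current profile (A, positive = discharge)."""
    x = np.zeros(2)          # RC branch voltages (the state)
    v = []
    for i in i_profile:
        # R0*i is the direct feed-through: it affects the output instantly.
        v.append(OCV - x.sum() - R0 * i)
        x = A @ x + B * i    # state update for the two RC branches
    return np.array(v)

i_profile = np.r_[np.zeros(10), 2.0 * np.ones(50)]  # 2 A discharge pulse
v = simulate(i_profile)
```

The instantaneous drop between the last rest sample and the first pulse sample equals R0 times the current step, which is exactly the feature the modified MPC formulation has to accommodate.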

  14. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.
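    The claimed equivalence can be checked numerically on a toy landscape: fitting the MAXENT (softmax-over-cells) likelihood and a Poisson regression to the same gridded counts recovers the same slope, the two models differing only in the intercept, which in the Poisson fit calibrates the total intensity to the number of presences. The covariate and counts below are made up for illustration, and scipy is used for the optimization.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Toy landscape: 200 grid cells with one environmental covariate z.
rng = np.random.default_rng(1)
z = np.linspace(-2.0, 2.0, 200)
y = rng.poisson(np.exp(-1.0 + 0.7 * z))   # presence counts per cell
N = y.sum()                               # total number of presence points

# MAXENT negative log-likelihood: cells compete through a softmax over the
# grid, so there is a slope but no free intercept.
def maxent_nll(beta):
    return -(y @ (beta * z) - N * logsumexp(beta * z))

# Poisson point-process (regression) negative log-likelihood with intercept.
def poisson_nll(theta):
    alpha, beta = theta
    eta = alpha + beta * z
    return np.exp(eta).sum() - y @ eta

b_maxent = minimize(maxent_nll, x0=np.array([0.0]), method="BFGS").x[0]
a_pois, b_pois = minimize(poisson_nll, x0=np.array([0.0, 0.0]), method="BFGS").x
```

At the optimum the two slopes coincide, and the Poisson score equation for the intercept forces the fitted total intensity to equal N.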

  15. Comparison of alternative designs for reducing complex neurons to equivalent cables.

    PubMed

    Burke, R E

    2000-01-01

    Reduction of the morphological complexity of actual neurons into accurate, computationally efficient surrogate models is an important problem in computational neuroscience. The present work explores the use of two morphoelectrotonic transformations, somatofugal voltage attenuation (AT cables) and signal propagation delay (DL cables), as bases for construction of electrotonically equivalent cable models of neurons. In theory, the AT and DL cables should provide more accurate lumping of membrane regions that have the same transmembrane potential than the familiar equivalent cables that are based only on somatofugal electrotonic distance (LM cables). In practice, AT and DL cables indeed provided more accurate simulations of the somatic transient responses produced by fully branched neuron models than LM cables. This was the case in the presence of a somatic shunt as well as when membrane resistivity was uniform.

  16. Stress distribution in and equivalent width of flanges of wide, thin-wall steel beams

    NASA Technical Reports Server (NTRS)

    Winter, George

    1940-01-01

    The use of different forms of wide-flange, thin-wall steel beams is becoming increasingly widespread. Part of the information necessary for a national design of such members is the knowledge of the stress distribution in and the equivalent width of the flanges of such beams. This problem is analyzed in this paper on the basis of the theory of plane stress. As a result, tables and curves are given from which the equivalent width of any given beam can be read directly for use in practical design. An investigation is given of the limitations of this analysis due to the fact that extremely wide and thin flanges tend to curve out of their plane toward the neutral axis. A summary of test data confirms very satisfactorily the analytical results.

  17. An evaluation of methods for estimating the number of local optima in combinatorial optimization problems.

    PubMed

    Hernando, Leticia; Mendiburu, Alexander; Lozano, Jose A

    2013-01-01

    The solution of many combinatorial optimization problems is carried out by metaheuristics, which generally make use of local search algorithms. These algorithms use some kind of neighborhood structure over the search space. The performance of the algorithms strongly depends on the properties that the neighborhood imposes on the search space. One of these properties is the number of local optima. Given an instance of a combinatorial optimization problem and a neighborhood, the estimation of the number of local optima can help not only to measure the complexity of the instance, but also to choose the most convenient neighborhood to solve it. In this paper we review and evaluate several methods to estimate the number of local optima in combinatorial optimization problems. The methods reviewed not only come from the combinatorial optimization literature, but also from the statistical literature. A thorough evaluation in synthetic as well as real problems is given. We conclude by providing recommendations of methods for several scenarios.
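    One simple estimator of the kind drawn from the statistical (species-richness) literature is the first-order jackknife applied to the distinct optima found by repeated restarts of a local search. The random landscape and neighborhood below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
f = rng.standard_normal(2 ** n)   # random fitness over all length-10 bitstrings

def neighbors(s):
    """Hamming-1 neighborhood: flip each of the n bits in turn."""
    return [s ^ (1 << i) for i in range(n)]

def hill_climb(s):
    """Best-improvement local search; returns a local optimum."""
    while True:
        best = max(neighbors(s), key=lambda t: f[t])
        if f[best] <= f[s]:
            return s
        s = best

restarts = 300
found = [hill_climb(rng.integers(2 ** n)) for _ in range(restarts)]
optima, counts = np.unique(found, return_counts=True)
s_obs = len(optima)                    # distinct local optima observed
f1 = int((counts == 1).sum())          # optima seen exactly once
# First-order jackknife estimate of the total number of local optima.
jackknife = s_obs + f1 * (restarts - 1) / restarts
```

Optima found only once suggest many more remain unseen, which is exactly the information the jackknife correction exploits.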

  18. Localization of transient gravitational wave sources: beyond triangulation

    NASA Astrophysics Data System (ADS)

    Fairhurst, Stephen

    2018-05-01

    Rapid, accurate localization of gravitational wave transient events has proved critical to successful electromagnetic followup. In previous papers we have shown that localization estimates can be obtained through triangulation based on timing information at the detector sites. In practice, detailed parameter estimation routines use additional information and provide better localization than is possible based on timing information alone. In this paper, we extend the timing based localization approximation to incorporate consistency of observed signals with two gravitational wave polarizations, and an astrophysically motivated distribution of sources. Both of these provide significant improvements to source localization, allowing many sources to be restricted to a single sky region, with an area 40% smaller than predicted by timing information alone. Furthermore, we show that the vast majority of sources will be reconstructed to be circularly polarized or, equivalently, indistinguishable from face-on.
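    The timing-only baseline that the paper improves upon can be sketched as follows: the arrival-time difference between two sites constrains only the angle between the source direction and the detector baseline, i.e. a ring on the sky. The detector coordinates below are rough, made-up values, not the true site vectors.

```python
import numpy as np

c = 299792458.0  # speed of light, m/s

# Illustrative Earth-frame detector positions in metres (not real site data).
r1 = np.array([-2.16e6, -3.83e6, 4.60e6])
r2 = np.array([-7.43e4, -5.50e6, 3.21e6])

baseline = r1 - r2
d = np.linalg.norm(baseline)

def arrival_delay(n_hat):
    """Arrival-time difference of a plane wave from unit direction n_hat."""
    return baseline @ n_hat / c

# Pick a source direction and compute its cone angle with the baseline.
n_hat = np.array([0.3, -0.5, 0.81])
n_hat /= np.linalg.norm(n_hat)
dt = arrival_delay(n_hat)

# Timing alone gives cos(theta) = c*dt/d: a ring of constant angle theta.
theta_rec = np.arccos(np.clip(c * dt / d, -1.0, 1.0))
theta_true = np.arccos(baseline @ n_hat / d)
```

The recovered cone angle matches the true one, but any direction on that cone is consistent with the delay; breaking the degeneracy requires the extra information (polarization consistency, source distribution) discussed in the abstract.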

  19. Existence and Stability of Compressible Current-Vortex Sheets in Three-Dimensional Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Chen, Gui-Qiang; Wang, Ya-Guang

    2008-03-01

    Compressible vortex sheets are fundamental waves, along with shocks and rarefaction waves, in entropy solutions to multidimensional hyperbolic systems of conservation laws. Understanding the behavior of compressible vortex sheets is an important step towards our full understanding of fluid motions and the behavior of entropy solutions. For the Euler equations in two-dimensional gas dynamics, the classical linearized stability analysis on compressible vortex sheets predicts stability when the Mach number M > sqrt{2} and instability when M < sqrt{2}; and Artola and Majda’s analysis reveals that the nonlinear instability may occur if planar vortex sheets are perturbed by highly oscillatory waves even when M > sqrt{2}. For the Euler equations in three dimensions, every compressible vortex sheet is violently unstable and this instability is the analogue of the Kelvin-Helmholtz instability for incompressible fluids. The purpose of this paper is to understand whether compressible vortex sheets in three dimensions, which are unstable in the regime of pure gas dynamics, become stable under the magnetic effect in three-dimensional magnetohydrodynamics (MHD). One of the main features is that the stability problem is equivalent to a free-boundary problem whose free boundary is a characteristic surface, which is more delicate than noncharacteristic free-boundary problems. Another feature is that the linearized problem for current-vortex sheets in MHD does not meet the uniform Kreiss-Lopatinskii condition. These features cause additional analytical difficulties and especially prevent a direct use of the standard Picard iteration to the nonlinear problem. In this paper, we develop a nonlinear approach to deal with these difficulties in three-dimensional MHD.
We first carefully formulate the linearized problem for the current-vortex sheets to show rigorously that the magnetic effect makes the problem weakly stable and establish energy estimates, especially high-order energy estimates, in terms of the nonhomogeneous terms and variable coefficients. Then we exploit these results to develop a suitable iteration scheme of the Nash-Moser-Hörmander type to deal with the loss of the order of derivative at the nonlinear level and establish its convergence, which leads to the existence and stability of compressible current-vortex sheets, locally in time, in three-dimensional MHD.

  20. Simulation of RF-fields in a fusion device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Witte, Dieter; Bogaert, Ignace; De Zutter, Daniel

    2009-11-26

    In this paper the problem of scattering off a fusion plasma is approached from the point of view of integral equations. Using the volume equivalence principle an integral equation is derived which describes the electromagnetic fields in a plasma. The equation is discretized with MoM using conforming basis functions. This reduces the problem to solving a dense matrix equation. This can be done iteratively. Each iteration can be sped up using FFTs.
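    A minimal sketch of the FFT speed-up mentioned in the abstract (not the authors' MoM code): a discretized convolution operator has Toeplitz structure, so each matrix-vector product inside an iterative solver can be done in O(n log n) by embedding the Toeplitz matrix in a circulant, whose action is diagonal in the Fourier basis.

```python
import numpy as np

n = 256
# First column of a symmetric, diagonally dominant (hence SPD) Toeplitz matrix.
t = np.r_[2.0, 1.0 / (1.0 + np.arange(1, n)) ** 2]

def toeplitz_matvec(t, x):
    """Multiply the symmetric Toeplitz matrix defined by column t with x via FFT."""
    n = len(x)
    col = np.r_[t, 0.0, t[-1:0:-1]]   # first column of the 2n x 2n circulant
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.r_[x, np.zeros(n)]))
    return y.real[:n]                 # top block of the circulant product

def cg(matvec, b, tol=1e-10, maxit=500):
    """Plain conjugate gradients using only the fast matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.ones(n)
x = cg(lambda v: toeplitz_matvec(t, v), b)
```

Each CG iteration then costs two FFTs instead of a dense O(n^2) product, which is the source of the per-iteration speed-up the abstract refers to.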

  1. Emergence of Fundamental Limits in Spatially Distributed Dynamical Networks and Their Tradeoffs

    DTIC Science & Technology

    2017-05-01

    It is shown that the resulting non-convex optimization problem can be equivalently reformulated into a rank-constrained problem. We then... robustness in distributed control and dynamical systems. Our research results are highly relevant for analysis and synthesis of engineered and natural...

  2. Linguistic validation of translation of the self-assessment goal achievement (saga) questionnaire from English

    PubMed Central

    2012-01-01

    Background A linguistic validation of the Self-Assessment Goal Achievement (SAGA) questionnaire was conducted for 12 European languages, documenting that each translation adequately captures the concepts of the original English-language version of the questionnaire and is readily understood by subjects in the target population. Methods Native-speaking residents of the target countries who reported urinary problems/lower urinary tract problems were asked to review a translation of the SAGA questionnaire, which was harmonized among 12 languages: Danish, Dutch, English (UK), Finnish, French, German, Greek, Icelandic, Italian, Norwegian, Spanish, and Swedish. During a cognitive debriefing interview, participants were asked to identify any words that were difficult to understand and explain in their own words the meaning of each sentence in the questionnaire. The qualitative analysis was conducted by local linguistic validation teams (original translators, back translator, project manager, interviewer, and survey research expert). Results Translations of the SAGA questionnaire from English to 12 European languages were well understood by the participants with an overall comprehension rate across language of 98.9%. In addition, the translations retained the original meaning of the SAGA items and instructions. Comprehension difficulties were identified, and after review by the translation team, minor changes were made to 7 of the 12 translations to improve clarity and comprehension. Conclusions Conceptual, semantic, and cultural equivalence of each translation of the SAGA questionnaire was achieved thus confirming linguistic validation. PMID:22525050

  3. Linguistic validation of translation of the Self-Assessment Goal Achievement (SAGA) questionnaire from English.

    PubMed

    Piault, Elisabeth; Doshi, Sameepa; Brandt, Barbara A; Angün, Çolpan; Evans, Christopher J; Bergqvist, Agneta; Trocio, Jeffrey

    2012-04-23

    A linguistic validation of the Self-Assessment Goal Achievement (SAGA) questionnaire was conducted for 12 European languages, documenting that each translation adequately captures the concepts of the original English-language version of the questionnaire and is readily understood by subjects in the target population. Native-speaking residents of the target countries who reported urinary problems/lower urinary tract problems were asked to review a translation of the SAGA questionnaire, which was harmonized among 12 languages: Danish, Dutch, English (UK), Finnish, French, German, Greek, Icelandic, Italian, Norwegian, Spanish, and Swedish. During a cognitive debriefing interview, participants were asked to identify any words that were difficult to understand and explain in their own words the meaning of each sentence in the questionnaire. The qualitative analysis was conducted by local linguistic validation teams (original translators, back translator, project manager, interviewer, and survey research expert). Translations of the SAGA questionnaire from English to 12 European languages were well understood by the participants with an overall comprehension rate across language of 98.9%. In addition, the translations retained the original meaning of the SAGA items and instructions. Comprehension difficulties were identified, and after review by the translation team, minor changes were made to 7 of the 12 translations to improve clarity and comprehension. Conceptual, semantic, and cultural equivalence of each translation of the SAGA questionnaire was achieved thus confirming linguistic validation.

  4. 12 CFR 1282.16 - Special counting requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... activity of the Enterprise is substantially equivalent to a mortgage purchase and either creates a new market or adds liquidity to an existing market, provided however that such mortgage purchase actually... housing goals: (1) Equity investments in housing development projects; (2) Purchases of State and local...

  5. Integrating Sensor Monitoring Technology into the Current Air Pollution Regulatory Support Paradigm: Practical Considerations

    EPA Science Inventory

    The US Environmental Protection Agency (EPA) along with state, local, and tribal governments operate Federal Reference Method (FRM) and Federal Equivalent Method (FEM) instruments to assess compliance with US air pollution standards designed to protect human and ecosystem health....

  6. The electronic characterization of biphenylene—Experimental and theoretical insights from core and valence level spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lüder, Johann; Sanyal, Biplab; Eriksson, Olle

    In this paper, we provide detailed insights into the electronic structure of the gas phase biphenylene molecule through core and valence spectroscopy. By comparing results of X-ray Photoelectron Spectroscopy (XPS) measurements with ΔSCF core-hole calculations in the framework of Density Functional Theory (DFT), we could decompose the characteristic contributions to the total spectra and assign them to non-equivalent carbon atoms. As a difference with similar molecules like biphenyl and naphthalene, an influence of the localized orbitals on the relative XPS shifts was found. The valence spectrum probed by photoelectron spectroscopy at a photon energy of 50 eV in conjunction with hybrid DFT calculations revealed the effects of the localization on the electronic states. Using the transition potential approach to simulate the X-ray absorption spectroscopy measurements, similar contributions from the non-equivalent carbon atoms were determined from the total spectrum, for which the slightly shifted individual components can explain the observed asymmetric features.

  7. Twenty-seven Years of Cerebral Pyruvate Recycling.

    PubMed

    Cerdán, Sebastián

    2017-06-01

    Cerebral pyruvate recycling is a metabolic pathway deriving carbon skeletons and reducing equivalents from mitochondrial oxaloacetate and malate, to the synthesis of mitochondrial and cytosolic pyruvate, lactate and alanine. The pathway allows both, to provide the tricarboxylic acid cycle with pyruvate molecules produced from alternative substrates to glucose and, to generate reducing equivalents necessary for the operation of NADPH requiring processes. At the cellular level, pyruvate recycling involves the activity of malic enzyme, or the combined activities of phosphoenolpyruvate carboxykinase and pyruvate kinase, as well as of those transporters of the inner mitochondrial membrane exchanging the corresponding intermediates. Its cellular localization between the neuronal or astrocytic compartments of the in vivo brain has been controversial, with evidences favoring either a primarily neuronal or glial localizations, more recently accepted to occur in both environments. This review provides a brief history on the detection and characterization of the pathway, its relations with the early developments of cerebral high resolution 13 C NMR, and its potential neuroprotective functions under hypoglycemic conditions or ischemic redox stress.

  8. Algebraic cycles and local anomalies in F-theory

    NASA Astrophysics Data System (ADS)

    Bies, Martin; Mayrhofer, Christoph; Weigand, Timo

    2017-11-01

    We introduce a set of identities in the cohomology ring of elliptic fibrations which are equivalent to the cancellation of gauge and mixed gauge-gravitational anomalies in F-theory compactifications to four and six dimensions. The identities consist in (co)homological relations between complex codimension-two cycles. The same set of relations, once evaluated on elliptic Calabi-Yau three-folds and four-folds, is shown to universally govern the structure of anomalies and their Green-Schwarz cancellation in six- and four-dimensional F-theory vacua, respectively. We furthermore conjecture that these relations hold not only within the cohomology ring, but even at the level of the Chow ring, i.e. as relations among codimension-two cycles modulo rational equivalence. We verify this conjecture in non-trivial examples with Abelian and non-Abelian gauge groups factors. Apart from governing the structure of local anomalies, the identities in the Chow ring relate different types of gauge backgrounds on elliptically fibred Calabi-Yau four-folds.

  9. The potential of a modified physiologically equivalent temperature (mPET) based on local thermal comfort perception in hot and humid regions

    NASA Astrophysics Data System (ADS)

    Lin, Tzu-Ping; Yang, Shing-Ru; Chen, Yung-Chang; Matzarakis, Andreas

    2018-02-01

    Physiologically equivalent temperature (PET) is a thermal index that is widely used in the field of human biometeorology and urban bioclimate. However, it has several limitations, including its poor ability to predict thermo-physiological parameters and its weak response to both clothing insulation and humid conditions. A modified PET (mPET) was therefore developed to address these shortcomings. To determine whether the application of mPET in hot-humid regions is more appropriate than the PET, an analysis of a thermal comfort survey database, containing 2071 questionnaires collected from participants in hot-humid Taiwan, was conducted. The results indicate that the thermal comfort range is similar (26-30 °C) when the mPET and PET are applied as thermal indices to the database. The sensitivity test for vapor pressure and clothing insulation also shows that the mPET responds well to the behavior and perceptions of local people in a subtropical climate.

  10. Selection in a subdivided population with local extinction and recolonization.

    PubMed Central

    Cherry, Joshua L

    2003-01-01

    In a subdivided population, local extinction and subsequent recolonization affect the fate of alleles. Of particular interest is the interaction of this force with natural selection. The effect of selection can be weakened by this additional source of stochastic change in allele frequency. The behavior of a selected allele in such a population is shown to be equivalent to that of an allele with a different selection coefficient in an unstructured population with a different size. This equivalence allows use of established results for panmictic populations to predict such quantities as fixation probabilities and mean times to fixation. The magnitude of the quantity N(e)s(e), which determines fixation probability, is decreased by extinction and recolonization. Thus deleterious alleles are more likely to fix, and advantageous alleles less likely to do so, in the presence of extinction and recolonization. Computer simulations confirm that the theoretical predictions of both fixation probabilities and mean times to fixation are good approximations. PMID:12807797
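    The quantities the abstract refers to can be illustrated with the standard diffusion-approximation fixation probability (Kimura's formula), with the paper's N(e)s(e) playing the role of the composite parameter. The sketch below assumes a haploid Wright-Fisher 2Ns scaling; other scaling conventions differ by constant factors.

```python
import numpy as np

def fixation_prob(p, Ne, s):
    """Diffusion approximation for the fixation probability of an allele at
    initial frequency p, effective size Ne, and selection coefficient s
    (haploid Wright-Fisher scaling assumed)."""
    if abs(Ne * s) < 1e-12:
        return p                      # neutral limit: u(p) -> p
    return np.expm1(-2.0 * Ne * s * p) / np.expm1(-2.0 * Ne * s)

p = 0.001                                     # one new mutant in N = 1000
u_del_big   = fixation_prob(p, 1000, -0.01)   # deleterious, large |Ne*s|
u_del_small = fixation_prob(p, 100, -0.01)    # same s, drift-dominated
u_adv_big   = fixation_prob(p, 1000, 0.01)    # advantageous, large |Ne*s|
u_adv_small = fixation_prob(p, 100, 0.01)     # same s, drift-dominated
```

Shrinking the effective magnitude of Ne*s, as extinction and recolonization do in the paper's model, pushes both cases toward the neutral value p: deleterious alleles become more likely to fix and advantageous alleles less likely.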

  11. Magnetoencephalography recording and analysis.

    PubMed

    Velmurugan, Jayabal; Sinha, Sanjib; Satishchandra, Parthasarathy

    2014-03-01

    Magnetoencephalography (MEG) non-invasively measures the magnetic field generated due to the excitatory postsynaptic electrical activity of the apical dendritic pyramidal cells. Such a tiny magnetic field is measured with the help of the biomagnetometer sensors coupled with the Super Conducting Quantum Interference Device (SQUID) inside the magnetically shielded room (MSR). The subjects are usually screened for the presence of ferromagnetic materials, and then the head position indicator coils, electroencephalography (EEG) electrodes (if measured simultaneously), and fiducials are digitized using a 3D digitizer, which aids in movement correction and also in transferring the MEG data from the head coordinates to the device and voxel coordinates, thereby enabling more accurate co-registration and localization. MEG data pre-processing involves filtering the data for environmental and subject interferences, artefact identification, and rejection. Magnetic resonance Imaging (MRI) is processed for correction and identifying fiducials. After choosing and computing for the appropriate head models (spherical or realistic; boundary/finite element model), the interictal/ictal epileptiform discharges are selected and modeled by an appropriate source modeling technique (clinically and commonly used - single equivalent current dipole - ECD model). The equivalent current dipole (ECD) source localization of the modeled interictal epileptiform discharge (IED) is considered physiologically valid or acceptable based on waveform morphology, isofield pattern, and dipole parameters (localization, dipole moment, confidence volume, goodness of fit). Thus, MEG source localization can aid clinicians in sublobar localization, lateralization, and grid placement, by evoking the irritative/seizure onset zone. It also accurately localizes the eloquent cortex-like visual, language areas. 
MEG also aids in diagnosing and delineating multiple novel findings in other neuropsychiatric disorders, including Alzheimer's disease, Parkinsonism, traumatic brain injury, autistic disorders, and so on.

  12. Neuroscience in its context. Neuroscience and psychology in the work of Wilhelm Wundt.

    PubMed

    Ziche, P

    1999-01-01

    Wilhelm Wundt (1832-1920), the first to establish an Institute devoted exclusively to psychological research in Germany, started his career as a (neuro)physiologist. He gradually turned into a psychologist in the 1860's and 1870's, at a time when neuroscience had to deal with the problem of giving an adequate physiological interpretation of the data accumulated by neuroanatomy. Neither the functional interpretation of brain morphology, nor the options provided by the reflex model seemed acceptable to Wundt. In his Physiological Psychology, first published in 1874, Wundt adds another aspect to this discussion by showing that psychology may help, and indeed is required, to clarify some of the most controversial problems in brain research. He thus became a key figure in neuroscience's struggle to locate itself within the various research traditions. The following theses will be argued for: 1. Wundt's turn to psychology resulted from his view that the methodological basis of physiological brain research of the time was unsatisfactory. 2. Psychology, in its attempt to solve these problems, implied a new conception of an interaction between experimental and theoretical brain research. 3. Wundt tried to demonstrate the necessity of psychological considerations for experimental brain research. These points are discussed with reference to Wundt's treatment of the localization of functions in the brain. According to Wundt, psychology can show, by analyzing the complex structure of intellect and will, that mental phenomena can be realized in the brain only in the form of complex interactions of the elements of the brain. The results of the psychological considerations imply that a strict localization cannot be correct; but they are also turned against the conception of a complete functional equivalence of the various parts of the cortex.
For Wundt, a reconstruction of brain processes cannot start with neurones, but only with patterns of a functional organization of brain activity. Wundt accordingly proposes a functional interpretation on the level of the physiology of nervous tissue as well as for the over-all organization of the brain.

  13. Local Responses to Global Problems: A Key to Meeting Basic Human Needs. Worldwatch Paper 17.

    ERIC Educational Resources Information Center

    Stokes, Bruce

    The booklet maintains that the key to meeting basic human needs is the participation of individuals and communities in local problem solving. Some of the most important achievements in providing food, upgrading housing, improving human health, and tapping new energy sources come through local self-help projects. Proponents of local efforts at…

  14. Variation of fundamental constants on sub- and super-Hubble scales: From the equivalence principle to the multiverse

    NASA Astrophysics Data System (ADS)

    Uzan, Jean-Philippe

    2013-02-01

    Fundamental constants play a central role in many modern developments in gravitation and cosmology. Most extensions of general relativity lead to the conclusion that dimensionless constants are actually dynamical fields. Any detection of their variation on sub-Hubble scales would signal a violation of the Einstein equivalence principle and hence lead to gravity beyond general relativity. On super-Hubble scales, or perhaps we should say on super-universe scales, such variations are invoked as a solution to the fine-tuning problem, in connection with an anthropic approach.

  15. Equivalences between nonuniform exponential dichotomy and admissibility

    NASA Astrophysics Data System (ADS)

    Zhou, Linfeng; Lu, Kening; Zhang, Weinian

    2017-01-01

    Relationship between exponential dichotomies and admissibility of function classes is a significant problem for hyperbolic dynamical systems. It was proved that a nonuniform exponential dichotomy implies several admissible pairs of function classes and conversely some admissible pairs were found to imply a nonuniform exponential dichotomy. In this paper we find an appropriate admissible pair of classes of Lyapunov bounded functions which is equivalent to the existence of nonuniform exponential dichotomy on half-lines R± separately, on both half-lines R± simultaneously, and on the whole line R. Additionally, the maximal admissibility is proved in the case on both half-lines R± simultaneously.

  16. Localized states in an unbounded neural field equation with smooth firing rate function: a multi-parameter analysis.

    PubMed

    Faye, Grégory; Rankin, James; Chossat, Pascal

    2013-05-01

    The existence of spatially localized solutions in neural networks is an important topic in neuroscience as these solutions are considered to characterize working (short-term) memory. We work with an unbounded neural network represented by the neural field equation with smooth firing rate function and a wizard hat spatial connectivity. Noting that stationary solutions of our neural field equation are equivalent to homoclinic orbits in a related fourth order ordinary differential equation, we apply normal form theory for a reversible Hopf bifurcation to prove the existence of localized solutions; further, we present results concerning their stability. Numerical continuation is used to compute branches of localized solution that exhibit snaking-type behaviour. We describe in terms of three parameters the exact regions for which localized solutions persist.

  17. Entanglement entropy flow and the Ward identity.

    PubMed

    Rosenhaus, Vladimir; Smolkin, Michael

    2014-12-31

    We derive differential equations for the flow of entanglement entropy as a function of the metric and the couplings of the theory. The variation of the universal part of entanglement entropy under a local Weyl transformation is related to the variation under a local change in the couplings. We show that this relation is, in fact, equivalent to the trace Ward identity. As a concrete application of our formalism, we express the entanglement entropy for massive free fields as a two-point function of the energy-momentum tensor.

  18. The growth rate of vertex-transitive planar graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babai, L.

    1997-06-01

    A graph is vertex-transitive if all of its vertices are equivalent under automorphisms. Confirming a conjecture of Jon Kleinberg and Eva Tardos, we prove the following trichotomy theorem concerning locally finite vertex-transitive planar graphs: the rate of growth of a graph with these properties is either linear, quadratic, or exponential. The same result holds more generally for locally finite, almost vertex-transitive planar graphs (those whose automorphism group has a finite number of orbits). The proof uses elements of hyperbolic plane geometry.

  19. Approaches to linear local gauge-invariant observables in inflationary cosmologies

    NASA Astrophysics Data System (ADS)

    Fröb, Markus B.; Hack, Thomas-Paul; Khavkine, Igor

    2018-06-01

    We review and relate two recent complementary constructions of linear local gauge-invariant observables for cosmological perturbations in generic spatially flat single-field inflationary cosmologies. After briefly discussing their physical significance, we give explicit, covariant and mutually invertible transformations between the two sets of observables, thus resolving any doubts about their equivalence. In this way, we get a geometric interpretation and show the completeness of both sets of observables, while previously each of these properties was available only for one of them.

  20. Solving ordinary differential equations by electrical analogy: a multidisciplinary teaching tool

    NASA Astrophysics Data System (ADS)

    Sanchez Perez, J. F.; Conesa, M.; Alhama, I.

    2016-11-01

    Ordinary differential equations are the mathematical formulation for a great variety of problems in science and engineering, and frequently two different problems are equivalent from a mathematical point of view when they are formulated by the same equations. Students acquire the knowledge of how to solve these equations (at least some types of them) by following protocols and strict algorithms of mathematical calculation, without thinking about the meaning of the equation. The aim of this work is for students to learn to design network models or circuits: with only basic knowledge of circuits, they can establish the formal equivalence between electric circuits and differential equations, connecting knowledge from two disciplines and promoting an interdisciplinary approach to complex problems. In this way even first-year engineering students learn to use a multidisciplinary tool for solving these kinds of equations, whatever their order, degree or type of non-linearity. This methodology has been implemented in numerous final degree projects in engineering and science, e.g., chemical engineering, building engineering, industrial engineering, mechanical engineering and architecture. Applications are presented to illustrate the subject of this manuscript.
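    The mechanical-electrical analogy that this teaching approach rests on can be sketched in a few lines. The coefficient mapping is standard, but the integrator and all numerical values below are illustrative assumptions of mine, not material from the paper.

```python
# Sketch of the analogy: the mechanical oscillator
#   m x'' + c x' + k x = F(t)
# maps term-by-term onto a series RLC circuit driven by V(t):
#   L q'' + R q' + (1/C) q = V(t)
# with L <-> m, R <-> c, 1/C <-> k, V <-> F. Solving either system solves both.

def simulate_rlc(L, R, C, V, q0=0.0, i0=0.0, dt=1e-3, t_end=1.0):
    """Integrate L q'' + R q' + q/C = V(t) with a simple RK4 scheme."""
    def deriv(t, q, i):
        return i, (V(t) - R * i - q / C) / L

    q, i, t = q0, i0, 0.0
    qs = []
    while t < t_end:
        k1q, k1i = deriv(t, q, i)
        k2q, k2i = deriv(t + dt / 2, q + dt / 2 * k1q, i + dt / 2 * k1i)
        k3q, k3i = deriv(t + dt / 2, q + dt / 2 * k2q, i + dt / 2 * k2i)
        k4q, k4i = deriv(t + dt, q + dt * k3q, i + dt * k3i)
        q += dt / 6 * (k1q + 2 * k2q + 2 * k3q + k4q)
        i += dt / 6 * (k1i + 2 * k2i + 2 * k3i + k4i)
        t += dt
        qs.append(q)
    return qs

# A damped mass-spring system (m=1, c=0.5, k=4) released from x=1 is, by the
# analogy, the circuit L=1, R=0.5, C=0.25 with initial charge q=1 and no drive.
trace = simulate_rlc(L=1.0, R=0.5, C=0.25, V=lambda t: 0.0, q0=1.0, t_end=10.0)
```

    The same routine serves both disciplines: a student who can wire up (L, R, C, V) has, by the mapping above, also solved the mechanical problem.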

  1. Spin systems and Political Districting Problem

    NASA Astrophysics Data System (ADS)

    Chou, Chung-I.; Li, Sai-Ping

    2007-03-01

    The aim of the Political Districting Problem is to partition a territory into electoral districts subject to constraints such as contiguity, population equality, etc. In this paper, we apply statistical physics methods to the Political Districting Problem. We show how to transform the political problem into a spin system and how to write down a q-state Potts-model-like energy function in which the political constraints appear as interactions between sites or as external fields acting on the system. Districting into q voter districts is then equivalent to finding the ground state of this q-state Potts model. Searching for the ground state becomes an optimization problem, to which algorithms such as simulated annealing and genetic algorithms can be applied.
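    As a rough sketch of this mapping (the toy grid territory, the couplings J and B, and the cooling schedule below are my own illustrative choices, not the authors'), districting can be written as minimizing a Potts-like energy with simulated annealing:

```python
# Each grid cell carries a spin s in {0..q-1}, its district label. The energy
# pays J for every unlike neighbour pair (discouraging fragmented districts,
# a proxy for contiguity) plus a quadratic "external field" penalty for
# unequal district populations.
import math
import random

def energy(spins, grid_w, grid_h, q, J=1.0, B=0.05):
    e = 0.0
    for y in range(grid_h):
        for x in range(grid_w):
            s = spins[y * grid_w + x]
            if x + 1 < grid_w and spins[y * grid_w + x + 1] != s:
                e += J
            if y + 1 < grid_h and spins[(y + 1) * grid_w + x] != s:
                e += J
    target = grid_w * grid_h / q      # ideal (equal) district population
    for d in range(q):
        e += B * (spins.count(d) - target) ** 2
    return e

def anneal(grid_w=6, grid_h=6, q=3, steps=20000, t0=2.0, seed=0):
    """Metropolis single-spin flips under a linearly decreasing temperature."""
    rng = random.Random(seed)
    spins = [rng.randrange(q) for _ in range(grid_w * grid_h)]
    e = energy(spins, grid_w, grid_h, q)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-3
        i, new = rng.randrange(len(spins)), rng.randrange(q)
        old = spins[i]
        spins[i] = new
        e_new = energy(spins, grid_w, grid_h, q)
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temp):
            e = e_new                 # accept the move
        else:
            spins[i] = old            # reject and restore
    return spins, e

districts, final_e = anneal()
```

    A ground state of this energy is a districting with compact, equally populated districts; in practice one would recompute only the local energy change per flip rather than the full sum.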

  2. Combining virtual observatory and equivalent source dipole approaches to describe the geomagnetic field with Swarm measurements

    NASA Astrophysics Data System (ADS)

    Saturnino, Diana; Langlais, Benoit; Amit, Hagay; Civet, François; Mandea, Mioara; Beucler, Éric

    2018-03-01

    A detailed description of the main geomagnetic field and of its temporal variations (i.e., the secular variation, or SV) is crucial to understanding the geodynamo. Although the SV is known with high accuracy at ground magnetic observatory locations, the globally uneven distribution of the observatories hampers the determination of a detailed global pattern of the SV. Over the past two decades, satellites have provided global surveys of the geomagnetic field, which have been used to derive global spherical harmonic (SH) models through strict data selection schemes that minimise external field contributions. However, discrepancies remain between ground measurements and the field predictions of these models; indeed, the global models do not reproduce the small spatial scales of the field's temporal variations. To overcome this problem we propose to extract time series of the field and its temporal variation directly from satellite measurements, as is done at observatory locations. We follow a Virtual Observatory (VO) approach and define a global mesh of VOs at satellite altitude. For each VO and each given time interval we apply an Equivalent Source Dipole (ESD) technique to reduce all measurements to a single location. Synthetic data are first used to validate the new VO-ESD approach. We then apply our scheme to data from the first two years of the Swarm mission. For the first time, a 2.5° resolution global mesh of VO time series is built. The VO-ESD derived time series are locally compared to ground observations as well as to satellite-based model predictions. Our approach is able to describe detailed temporal variations of the field at local scales. The VO-ESD time series are then used to derive global spherical harmonic models. For a simple SH parametrization the model describes the secular trend of the magnetic field well, both at satellite altitude and at the surface. As more data become available, longer VO-ESD time series can be derived and used to study sharp temporal variation features such as geomagnetic jerks.
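    The core of an equivalent-source-dipole reduction can be illustrated compactly: a point dipole's field is linear in its moment, so the moment can be fit to scattered measurements by least squares. The toy geometry, unit constants, and solver below are my own sketch, not the authors' implementation.

```python
# Fit a single dipole moment m to field measurements at several points, thereby
# "reducing" scattered observations to one equivalent source.
import math

def dipole_field(m, r):
    """B of a point dipole with moment m at offset r (unit prefactor for clarity)."""
    rn = math.sqrt(sum(c * c for c in r))
    rhat = [c / rn for c in r]
    mdr = sum(mi * ri for mi, ri in zip(m, rhat))
    return [(3 * mdr * rh - mi) / rn ** 3 for rh, mi in zip(rhat, m)]

def design_block(r):
    """3x3 block of the linear map m -> B(r): columns are unit-moment fields."""
    basis = ([1, 0, 0], [0, 1, 0], [0, 0, 1])
    cols = [dipole_field(e, r) for e in basis]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def solve3(a, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(m[k][i]))
        m[i], m[p] = m[p], m[i]
        for k in range(3):
            if k != i:
                f = m[k][i] / m[i][i]
                m[k] = [x - f * y for x, y in zip(m[k], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_dipole(points, fields):
    """Least squares via the normal equations A^T A m = A^T b."""
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for r, b in zip(points, fields):
        for row, bi in zip(design_block(r), b):
            for i in range(3):
                atb[i] += row[i] * bi
                for j in range(3):
                    ata[i][j] += row[i] * row[j]
    return solve3(ata, atb)

# Synthetic check: noiseless measurements of a known dipole are reduced back to it.
m_true = [0.3, -1.1, 2.0]
pts = [[1, 0.2, 0.1], [0.3, 1, -0.2], [-0.5, 0.4, 1], [0.7, -0.6, 0.9]]
m_fit = fit_dipole(pts, [dipole_field(m_true, r) for r in pts])
```

    In the paper's setting, one such fit per virtual observatory and time interval turns a cloud of satellite measurements into a time series at a single altitude point.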

  3. Structure preserving parallel algorithms for solving the Bethe–Salpeter eigenvalue problem

    DOE PAGES

    Shao, Meiyue; da Jornada, Felipe H.; Yang, Chao; ...

    2015-10-02

    The Bethe–Salpeter eigenvalue problem is a dense structured eigenvalue problem arising from the discretized Bethe–Salpeter equation in the context of computing exciton energies and states. A computational challenge is that at least half of the eigenvalues and the associated eigenvectors are desired in practice. In this paper, we establish the equivalence between Bethe–Salpeter eigenvalue problems and real Hamiltonian eigenvalue problems. Based on theoretical analysis, structure-preserving algorithms for a class of Bethe–Salpeter eigenvalue problems are proposed. We also show that for this class of problems all eigenvalues obtained from the Tamm–Dancoff approximation are overestimated. In order to solve large-scale problems of practical interest, we discuss parallel implementations of our algorithms targeting distributed memory systems. Finally, several numerical examples are presented to demonstrate the efficiency and accuracy of our algorithms.
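    The structure and the overestimation result can be seen already in a 2x2 real toy case (my own construction, not the paper's code): for H = [[A, B], [-B, -A]] with A, B real symmetric, the positive eigenvalues are the square roots of the eigenvalues of (A+B)(A-B), while the Tamm-Dancoff approximation (TDA) keeps only A.

```python
# Compare exact positive BSE eigenvalues with their TDA counterparts for a
# small, definite 2x2 example; per the paper's result, TDA overestimates.
import math

def sym2_eigs(m):
    """Eigenvalues of a 2x2 symmetric matrix [[a, b], [b, c]], ascending."""
    a, b, c = m[0][0], m[0][1], m[1][1]
    mean, disc = (a + c) / 2, math.sqrt(((a - c) / 2) ** 2 + b * b)
    return [mean - disc, mean + disc]

def mat2_eigs(m):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(tr * tr / 4 - det)  # real for the product built below
    return [tr / 2 - disc, tr / 2 + disc]

def matmul2(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[3.0, 0.5], [0.5, 2.0]]   # symmetric, positive definite
B = [[0.4, 0.1], [0.1, 0.3]]   # symmetric, "small" coupling
ApB = [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]
AmB = [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

exact = [math.sqrt(mu) for mu in mat2_eigs(matmul2(ApB, AmB))]
tda = sym2_eigs(A)   # TDA: drop the coupling block B entirely
```

    For this example the TDA values sit slightly above the exact ones, the elementwise overestimation the paper proves for this class of problems.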

  4. 76 FR 60451 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-29

    ...; Accommodation and Food Services; and Other Services (except Public Administration). This scope is equivalent to... government (Federal and local), business, and the general public. The governments of the Island Areas and the... that serve as the factual basis for economic policy-making, planning, and program administration...

  5. 33 CFR 127.1207 - Warning alarms.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 33 Navigation and Navigable Waters 2 2014-07-01 2014-07-01 false Warning alarms. 127.1207 Section... Waterfront Facilities Handling Liquefied Hazardous Gas Equipment § 127.1207 Warning alarms. (a) Each marine... the local COTP additional or alternative warning devices that provide an equivalent level of safety...

  6. 33 CFR 127.1207 - Warning alarms.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 33 Navigation and Navigable Waters 2 2012-07-01 2012-07-01 false Warning alarms. 127.1207 Section... Waterfront Facilities Handling Liquefied Hazardous Gas Equipment § 127.1207 Warning alarms. (a) Each marine... the local COTP additional or alternative warning devices that provide an equivalent level of safety...

  7. 33 CFR 127.1207 - Warning alarms.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 33 Navigation and Navigable Waters 2 2013-07-01 2013-07-01 false Warning alarms. 127.1207 Section... Waterfront Facilities Handling Liquefied Hazardous Gas Equipment § 127.1207 Warning alarms. (a) Each marine... the local COTP additional or alternative warning devices that provide an equivalent level of safety...

  8. 33 CFR 127.1207 - Warning alarms.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 33 Navigation and Navigable Waters 2 2010-07-01 2010-07-01 false Warning alarms. 127.1207 Section... Waterfront Facilities Handling Liquefied Hazardous Gas Equipment § 127.1207 Warning alarms. (a) Each marine... the local COTP additional or alternative warning devices that provide an equivalent level of safety...

  9. 33 CFR 127.1207 - Warning alarms.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 33 Navigation and Navigable Waters 2 2011-07-01 2011-07-01 false Warning alarms. 127.1207 Section... Waterfront Facilities Handling Liquefied Hazardous Gas Equipment § 127.1207 Warning alarms. (a) Each marine... the local COTP additional or alternative warning devices that provide an equivalent level of safety...

  10. 28 CFR 36.607 - Effect of certification.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... PUBLIC ACCOMMODATIONS AND IN COMMERCIAL FACILITIES Certification of State Laws or Local Building Codes... the question of whether equipment in a building built according to the code satisfies the Act's... equivalency only with respect to those features or elements that are both covered by the certified code and...

  11. 20 CFR 628.803 - Eligibility.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...-school youth. Definition. In-school youth means a youth who has not yet attained a high school diploma... has attained a high school diploma or an equivalency, is habitually truant, as defined by State law... program includes an alternative high school, an alternative course of study approved by the local...

  12. Effect of reducing rotor blade inlet diameter on the performance of an 11.66-centimeter radial-inflow turbine

    NASA Technical Reports Server (NTRS)

    Kofskey, M. G.; Haas, J. E.

    1973-01-01

    The effect of increased rotor blade loading on turbine performance was investigated by reducing rotor blade inlet diameter. The reduction was made in four stages. Each modification was tested with the same stator using cold air as the working fluid. Results are presented in terms of equivalent mass flow and efficiency at equivalent design rotative speed and over a range of pressure ratios. Internal flow characteristics are shown in terms of stator exit static pressure and the radial variation of local loss and rotor-exit flow angle with radius ratio. Included are velocity diagrams calculated from the experimental results.

  13. Finding of No Significant Impact (FONSI) For Demolition of Buildings 113, 130, 140, 141, 256, 257, and the Boresight Tower at New Boston Air Force Station, New Hampshire

    DTIC Science & Technology

    2010-09-01

    [Abbreviation-list fragment:] day-night weighted equivalent sound level; Leq, equivalent steady sound level; m, meter(s); m2, square meter(s); m3, cubic meter(s); mi, mile(s); mi2 ... widespread and prolonged ice storms have occurred. Based on the data for the 9,130 km2 (3,530 mi2) area that includes the NBAFS, fewer than two ... tornadoes occur per year. The localized area affected by a tornado averages only 0.29 km2 (0.11 mi2; Ramsdell and Andrews 1986) (ANL 2000).

  14. Four-point probe measurements using current probes with voltage feedback to measure electric potentials

    NASA Astrophysics Data System (ADS)

    Lüpke, Felix; Cuma, David; Korte, Stefan; Cherepanov, Vasily; Voigtländer, Bert

    2018-02-01

    We present a four-point probe resistance measurement technique which uses four equivalent current-measuring units, resulting in minimal hardware requirements and correspondingly minimal sources of noise. Local sample potentials are measured by a software feedback loop which adjusts the corresponding tip voltage such that no current flows into the sample. The resulting tip voltage is then equivalent to the sample potential at the tip position. We implement this measurement method in a multi-tip scanning tunneling microscope setup such that potentials can also be measured in tunneling contact, allowing in principle truly non-invasive four-probe measurements. The resulting measurement capabilities are demonstrated.
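    The voltage-feedback idea can be sketched with a toy model (the linear contact model, gain, and tolerance below are my own assumptions, not the authors' implementation): step the tip voltage against the measured current until the current is nulled, at which point the tip voltage reads the local potential without drawing current from the sample.

```python
# Software feedback loop that nulls the tip-sample current, so the tip voltage
# converges to the local sample potential (non-invasive potentiometry).

def measure_current(v_tip, v_local, r_contact=1e6):
    """Stand-in for the instrument: Ohmic tip-sample current in amperes."""
    return (v_tip - v_local) / r_contact

def null_current(v_local, gain=0.5e6, tol=1e-12, max_iter=1000):
    """Integral-style feedback: step V_tip against the measured current."""
    v_tip = 0.0
    for _ in range(max_iter):
        i = measure_current(v_tip, v_local)
        if abs(i) < tol:
            break
        v_tip -= gain * i   # push the voltage toward zero current
    return v_tip

# With (almost) zero current flowing, V_tip reports the local potential.
v = null_current(v_local=0.0123)
```

    In the real setup the "measurement" is the hardware current reading of each probe, and the loop runs in the control software; here the feedback error halves each iteration because gain/r_contact = 0.5.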

  15. Numerical analysis of a main crack interactions with micro-defects/inhomogeneities using two-scale generalized/extended finite element method

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felício B.

    2017-12-01

    The generalized or extended finite element method (G/XFEM) models a crack by enriching partition-of-unity functions with discontinuous functions that represent well the physical behavior of the problem. However, such enrichment functions are not available for all problem types. In these cases, one can use numerically built (global-local) enrichment functions to obtain a better approximation. This paper investigates the effects of micro-defects/inhomogeneities on the behavior of a main crack by modeling the micro-defects/inhomogeneities in the local problem using a two-scale G/XFEM. The global-local enrichment functions are influenced by the micro-defects/inhomogeneities from the local problem and thus change the approximate solution of the global problem containing the main crack. The approach is presented in detail through three linear elastic fracture mechanics problems: two plane stress problems and a Reissner-Mindlin plate problem. The numerical results obtained with the two-scale G/XFEM are compared with reference solutions: analytical solutions, numerical solutions using the standard G/XFEM method and ABAQUS, and results from the literature.

  16. Examining Equivalence of Concepts and Measures in Diverse Samples

    PubMed Central

    Choi, Yoonsun; Abbott, Robert D.; Catalano, Richard F.; Bliesner, Siri L.

    2012-01-01

    While there is growing awareness of the need to examine the etiology of problem behaviors across cultural, racial, socioeconomic, and gender groups, much research tends to assume that constructs are equivalent and that measures developed within one group assess those constructs equally well across groups. The meaning of constructs, however, may differ across groups; even when the meaning is similar, measures developed for a given construct in one particular group may not assess the same construct, or may not assess it in the same manner, in other groups. The aims of this paper were to demonstrate a process for testing several forms of equivalence, including conceptual, functional, item, and scalar equivalence, using different methods. Data were from the Cross-Cultural Families Project, a study examining factors that promote the healthy development and adjustment of children in immigrant Cambodian and Vietnamese families. The process described in this paper can be implemented in other prevention studies interested in diverse groups. Demonstrating the equivalence of constructs and measures prior to group comparisons is necessary in order to lend support to our interpretation of issues such as ethnic group differences and similarities. PMID:16845592

  17. Supernovae in Binary Systems: An Application of Classical Mechanics.

    ERIC Educational Resources Information Center

    Mitalas, R.

    1980-01-01

    Presents the supernova explosion in a binary system as an application of classical mechanics. This presentation is intended to illustrate the power of the equivalent one-body problem and provide undergraduate students with a variety of insights into elementary classical mechanics. (HM)

  18. Workplace Math I: Easing into Math.

    ERIC Educational Resources Information Center

    Wilson, Nancy; Goschen, Claire

    This basic skills learning module includes instruction in performing basic computations, using general numerical concepts such as whole numbers, fractions, decimals, averages, ratios, proportions, percentages, and equivalents in practical situations. The problems are relevant to all aspects of the printing and manufacturing industry, with emphasis…

  19. From local to global measurements of nonclassical nonlinear elastic effects in geomaterials

    DOE PAGES

    Lott, Martin; Remillieux, Marcel C.; Le Bas, Pierre-Yves; ...

    2016-09-07

    Here, the equivalence between local and global measures of nonclassical nonlinear elasticity is established in a slender resonant bar. Nonlinear effects are first measured globally using nonlinear resonance ultrasound spectroscopy (NRUS), which monitors the relative shift of the resonance frequency as a function of the maximum dynamic strain in the sample. Subsequently, nonlinear effects are measured locally at various positions along the sample using dynamic acousto-elasticity testing (DAET). Finally, after correcting the DAET data analytically for three-dimensional strain effects and integrating these corrected data numerically along the length of the sample, the NRUS global measures are retrieved almost exactly.

  20. Quantum currents and pair correlation of electrons in a chain of localized dots

    NASA Astrophysics Data System (ADS)

    Morawetz, Klaus

    2017-03-01

    The quantum transport of electrons in a wire of localized dots, with hopping, interaction and dissipation, is calculated, and a representation by an equivalent RCL circuit is found. The exact solution for the electric-field-induced currents allows us to discuss the role of virtual currents in the decay of initial correlations and in Bloch oscillations. The dynamical response function in the random phase approximation (RPA) is calculated analytically, with the help of which the static structure function and pair correlation function are determined. The pair correlation function contains a form factor from the Brillouin zone and a structure factor caused by the localized dots in the wire.
