Sample records for field parameter constraints

  1. Precision constraints on the top-quark effective field theory at future lepton colliders

    NASA Astrophysics Data System (ADS)

    Durieux, G.

    We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the $e^+e^- \to b W^+ \bar{b} W^-$ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on limits, which are therefore basis independent. This is contrasted with the measurements of cross sections and forward-backward asymmetries. An overall measure of constraint strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.
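
    A minimal numerical sketch of the idea behind such a global measure, not the authors' code: assuming the global determinant parameter scales like a power of the determinant of the parameter covariance matrix built from optimal observables, one can compare run configurations by the value below. The derivative array and the normalization are illustrative stand-ins.

    ```python
    import numpy as np

    # Sketch: with statistically optimal observables, the information matrix is
    # I_ij ~ sum over events of (d log S / dg_i)(d log S / dg_j); the global determinant
    # parameter is taken here as det(C)^(1/(2n)) with C = I^{-1} (an assumed normalization).
    rng = np.random.default_rng(0)
    n_events, n_ops = 100_000, 3
    # hypothetical linearized log-signal derivatives per event (stand-ins, not real amplitudes)
    dlogS = rng.normal(size=(n_events, n_ops)) * np.array([1.0, 0.3, 0.1])
    info = dlogS.T @ dlogS                    # information summed over events
    cov = np.linalg.inv(info)                 # parameter covariance in the linear limit
    gdp = np.linalg.det(cov) ** (1.0 / (2 * n_ops))
    print(f"global determinant parameter (illustrative): {gdp:.3e}")
    ```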

  2. MICROSCOPE Mission: First Constraints on the Violation of the Weak Equivalence Principle by a Light Scalar Dilaton

    NASA Astrophysics Data System (ADS)

    Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe

    2018-04-01

    The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12} eV (i.e., range larger than a few 10^5 m), we improve existing constraints by one order of magnitude to |α| < 10^{-11} if the scalar field couples to the baryon number and to |α| < 10^{-12} if the scalar field couples to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: We find that, for masses smaller than 10^{-12} eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.
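
    The quoted mass-to-range conversion follows from identifying the interaction range with the reduced Compton wavelength ħ/(mc); a quick numerical check of that statement (an assumption about the convention used, not the authors' code):

    ```python
    # Order-of-magnitude check: range of a Yukawa interaction mediated by a scalar of mass m,
    # taken as the reduced Compton wavelength hbar*c / (m c^2).
    hbar_c_eV_m = 197.326_980_4e-9    # hbar*c in eV*m
    m_eV = 1e-12                      # scalar mass bound quoted in the abstract, in eV
    range_m = hbar_c_eV_m / m_eV
    print(f"range ~ {range_m:.1e} m")  # ~2e5 m, i.e. "a few 10^5 m" as stated
    ```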

  3. MICROSCOPE Mission: First Constraints on the Violation of the Weak Equivalence Principle by a Light Scalar Dilaton.

    PubMed

    Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe

    2018-04-06

    The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12}  eV (i.e., range larger than a few 10^{5}  m), we improve existing constraints by one order of magnitude to |α|<10^{-11} if the scalar field couples to the baryon number and to |α|<10^{-12} if the scalar field couples to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: We find that, for masses smaller than 10^{-12}  eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.

  4. γ parameter and Solar System constraint in chameleon-Brans-Dicke theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saaidi, Kh.; Mohammadi, A.; Sheikhahmadi, H.

    2011-05-15

    The post-Newtonian parameter is considered in the chameleon-Brans-Dicke model. In the first step, the general form of this parameter and also the effective gravitational constant is obtained. An arbitrary function for f(Φ), which indicates the coupling between matter and scalar field, is introduced to investigate the validity of the solar system constraint. It is shown that the chameleon-Brans-Dicke model can satisfy the solar system constraint and gives an ω parameter of order 10^4, which is comparable to the constraint indicated in [19].

  5. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a means of regularization. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude and motion discontinuities, and produces accurate piecewise-smooth motion fields.
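
    A compact sketch of such an objective (not the paper's implementation): a robust data term per patch plus a robust smoothness term that couples the parameter vectors of neighbouring patches. The Geman-McClure function and the toy residuals are assumptions made for illustration.

    ```python
    import numpy as np

    def geman_mcclure(x, sigma=1.0):
        # one common robust function; a stand-in for the robust penalty used in the paper
        return x**2 / (x**2 + sigma**2)

    def energy(params, data_residuals, neighbour_pairs, lam=1.0):
        # params: list of per-patch motion-parameter vectors (e.g. affine coefficients)
        # data_residuals: callables returning brightness-constancy residuals per patch
        # neighbour_pairs: (i, j) index pairs of adjacent patches sharing a regularization constraint
        data_term = sum(np.sum(geman_mcclure(r(p))) for r, p in zip(data_residuals, params))
        smooth_term = sum(np.sum(geman_mcclure(params[i] - params[j])) for i, j in neighbour_pairs)
        return data_term + lam * smooth_term

    # toy usage: two constant-motion patches whose parameters are encouraged to agree
    residuals = [lambda p, t=t: p - t for t in (np.array([1.0, 0.0]), np.array([1.1, 0.05]))]
    print(energy([np.array([1.0, 0.0]), np.array([1.05, 0.02])], residuals, [(0, 1)]))
    ```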

  6. Tune-stabilized, non-scaling, fixed-field, alternating gradient accelerator

    DOEpatents

    Johnstone, Carol J. [Warrenville, IL]

    2011-02-01

    An FFAG is a particle accelerator having turning magnets with a linear field gradient for confinement and a large edge angle to compensate for acceleration. FODO cells contain focus magnets and defocus magnets that are specified by a number of parameters. A set of seven equations, called the FFAG equations, relates the parameters to one another. A set of constraints, called the FFAG constraints, constrains the FFAG equations. Selecting a few parameters, such as injection momentum, extraction momentum, and drift distance, reduces the number of unknown parameters to seven. Seven equations with seven unknowns can be solved to yield the values for all the parameters and thereby fully specify an FFAG.
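
    The patent's actual seven FFAG equations are not reproduced in this record, so the sketch below only mirrors the structure described: fix injection momentum, extraction momentum and drift length, then solve seven residual equations in seven unknowns with a root finder. The placeholder linear system and all numbers are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    P_INJ, P_EXT, DRIFT = 0.25, 1.0, 0.5   # hypothetical fixed design choices (arbitrary units)

    def ffag_equations(x):
        # x: the seven remaining unknown lattice parameters (gradients, lengths, edge angles, ...).
        # Replace these residuals with the actual FFAG equations; here a solvable stand-in system.
        A = np.eye(7) + 0.1 * np.ones((7, 7))
        b = np.array([P_INJ, P_EXT, DRIFT, 1.0, 2.0, 3.0, 4.0])
        return A @ x - b

    solution = fsolve(ffag_equations, x0=np.ones(7))
    print(solution)   # seven values that fully specify the stand-in lattice
    ```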

  7. Constraints on the dark matter neutralinos from the radio emissions of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Kiew, Ching-Yee; Hwang, Chorng-Yuan; Zainal Abibin, Zamri

    2017-05-01

    By assuming the dark matter to be composed of neutralinos, we used the detected upper limits on diffuse radio emission in a sample of galaxy clusters to put constraints on the properties of neutralinos. We showed the upper-limit constraints in the <σv>-m_χ space for neutralino annihilation through the b\bar{b} and μ+μ- channels. The best constraints come from the galaxy clusters A2199 and A1367. We showed the uncertainties due to the density profile and the cluster magnetic field; the largest uncertainty comes from the dark matter spatial distribution. We also investigated the constraints on the minimal supergravity (mSUGRA) and minimal supersymmetric standard model (MSSM) parameter spaces by scanning the parameters using the darksusy package. Using current radio observations, we managed to exclude 40 combinations of mSUGRA parameters; 573 combinations of MSSM parameters can likewise be excluded.

  8. Feasibility of employing model-based optimization of pulse amplitude and electrode distance for effective tumor electropermeabilization.

    PubMed

    Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan

    2007-05-01

    In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and thus achieve effective ECT. In this paper, we present a model-based optimization approach toward the determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model at selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computer tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was pulse amplitude; the distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse generator constraints on voltage and current. During optimization the two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on the maximum electric field intensity in the optimization procedure.
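
    A toy version of such a constrained optimization (not the paper's model): a crude analytic surrogate stands in for the finite-element field computation, the electrode distance is treated as continuous for simplicity, and all thresholds, limits and coefficients are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    E_THRESHOLD = 400.0          # assumed permeabilization threshold, V/cm (illustrative)
    U_MAX, I_MAX = 3000.0, 50.0  # assumed generator limits on voltage (V) and current (A)

    def field_at_tumour_points(u, d):
        # surrogate for the FEM field: roughly u/d between the electrodes,
        # decaying towards the tumour margin (20 sampled points inside the tumour)
        margin_factor = np.linspace(1.0, 0.6, 20)
        return (u / d) * margin_factor

    def objective(x):
        u, d = x
        shortfall = np.clip(E_THRESHOLD - field_at_tumour_points(u, d), 0.0, None)
        return np.sum(shortfall ** 2)   # zero when every sampled point is above threshold

    def current(u, d):
        return 0.02 * u / d             # crude ohmic surrogate for the delivered current

    res = minimize(objective, x0=[1000.0, 1.0],
                   bounds=[(100.0, U_MAX), (0.5, 2.0)],
                   constraints=[{"type": "ineq", "fun": lambda x: I_MAX - current(*x)}])
    print(res.x)   # pulse amplitude (V) and electrode distance (cm) giving best coverage
    ```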

  9. Variations in the fine-structure constant constraining gravity theories

    NASA Astrophysics Data System (ADS)

    Bezerra, V. B.; Cunha, M. S.; Muniz, C. R.; Tahim, M. O.; Vieira, H. S.

    2016-08-01

    In this paper, we investigate how the fine-structure constant, α, locally varies in the presence of a static and spherically symmetric gravitational source. The procedure consists in calculating the solution and the energy eigenvalues of a massive scalar field around that source, considering the weak-field regime. From this result, we obtain expressions for a spatially variable fine-structure constant by considering suitable modifications in the involved parameters admitting some scenarios of semi-classical and quantum gravities. Constraints on free parameters of the approached theories are calculated from astrophysical observations of the emission spectra of a white dwarf. Such constraints are finally compared with those obtained in the literature.

  10. Experimental constraints on metric and non-metric theories of gravity

    NASA Technical Reports Server (NTRS)

    Will, Clifford M.

    1989-01-01

    Experimental constraints on metric and non-metric theories of gravitation are reviewed. Tests of the Einstein Equivalence Principle indicate that only metric theories of gravity are likely to be viable. Solar system experiments constrain the parameters of the weak field, post-Newtonian limit to be close to the values predicted by general relativity. Future space experiments will provide further constraints on post-Newtonian gravity.

  11. Updated observational constraints on quintessence dark energy models

    NASA Astrophysics Data System (ADS)

    Durrive, Jean-Baptiste; Ooba, Junpei; Ichiki, Kiyotomo; Sugiyama, Naoshi

    2018-02-01

    The recent GW170817 measurement favors the simplest dark energy models, such as a single scalar field. Quintessence models can be classified in two classes, freezing and thawing, depending on whether the equation of state decreases towards -1 or departs from it. In this paper, we put observational constraints on the parameters governing the equations of state of tracking freezing, scaling freezing, and thawing models using updated data, from the Planck 2015 release, joint light-curve analysis, and baryonic acoustic oscillations. Because of the current tensions on the value of the Hubble parameter H0, unlike previous authors, we let this parameter vary, which modifies significantly the results. Finally, we also derive constraints on neutrino masses in each of these scenarios.

  12. Constraints on the production of primordial magnetic seeds in pre-big bang cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasperini, M., E-mail: gasperini@ba.infn.it

    We study the amplification of the electromagnetic fluctuations, and the production of 'seeds' for the cosmic magnetic fields, in a class of string cosmology models whose scalar and tensor perturbations reproduce current observations and satisfy known phenomenological constraints. We find that the condition of efficient seeds production can be satisfied and compatible with all constraints only in a restricted region of parameter space, but we show that such a region has significant intersections with the portions of parameter space where the produced background of relic gravitational waves is strong enough to be detectable by aLIGO/Virgo and/or by eLISA.

  13. Constraints on the production of primordial magnetic seeds in pre-big bang cosmology

    NASA Astrophysics Data System (ADS)

    Gasperini, M.

    2017-06-01

    We study the amplification of the electromagnetic fluctuations, and the production of "seeds" for the cosmic magnetic fields, in a class of string cosmology models whose scalar and tensor perturbations reproduce current observations and satisfy known phenomenological constraints. We find that the condition of efficient seeds production can be satisfied and compatible with all constraints only in a restricted region of parameter space, but we show that such a region has significant intersections with the portions of parameter space where the produced background of relic gravitational waves is strong enough to be detectable by aLIGO/Virgo and/or by eLISA.

  14. A numerical study of crack tip constraint in ductile single crystals

    NASA Astrophysics Data System (ADS)

    Patil, Swapnil D.; Narasimhan, R.; Mishra, R. K.

    In this work, the effect of crack tip constraint on near-tip stress and deformation fields in a ductile FCC single crystal is studied under mode I, plane strain conditions. To this end, modified boundary layer simulations within a crystal plasticity framework are performed, neglecting elastic anisotropy. The first and second terms of the isotropic elastic crack tip field, which are governed by the stress intensity factor K and the T-stress, are prescribed as remote boundary conditions, and solutions pertaining to different levels of T-stress are generated. It is found that the near-tip deformation field, especially the development of kink or slip shear bands, is sensitive to the constraint level. The stress distribution and the size and shape of the plastic zone near the crack tip are also strongly influenced by the level of T-stress, with progressive loss of crack tip constraint occurring as T-stress becomes more negative. A family of near-tip fields is obtained which are characterized by two terms (such as K and T or J and a constraint parameter Q) as in isotropic plastic solids.
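
    The "first and second terms of the isotropic elastic crack tip field" prescribed as the remote boundary condition are the standard K singular term plus the T-stress of the mode-I Williams expansion; a small helper evaluating them (illustrative values, not the paper's code):

    ```python
    import numpy as np

    def kt_boundary_field(K, T, r, theta):
        """Mode-I plane-strain crack-tip stresses from the two-term (K, T) expansion:
        sigma_ij = K/sqrt(2*pi*r) * f_ij(theta) + T * delta_1i * delta_1j."""
        c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
        s3, c3 = np.sin(3.0 * theta / 2.0), np.cos(3.0 * theta / 2.0)
        amp = K / np.sqrt(2.0 * np.pi * r)
        sxx = amp * c * (1.0 - s * s3) + T    # T-stress acts parallel to the crack plane
        syy = amp * c * (1.0 + s * s3)
        sxy = amp * c * s * c3
        return sxx, syy, sxy

    # example: remote tractions for a negative T-stress (loss of constraint), illustrative units
    print(kt_boundary_field(K=30.0, T=-200.0, r=5e-3, theta=np.pi / 3))
    ```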

  15. Effects of ordinary and superconducting cosmic strings on primordial nucleosynthesis

    NASA Technical Reports Server (NTRS)

    Hodges, Hardy M.; Turner, Michael S.

    1988-01-01

    A precise calculation is done of the primordial nucleosynthesis constraint on the energy per length of ordinary and superconducting cosmic strings. A general formula is provided for the constraint on the string tension for ordinary strings. Using the current values for the various parameters that describe the evolution of loops, the constraint for ordinary strings is Gμ < 2.2 × 10^{-5}. Our constraint is weaker than previously quoted limits by a factor of approximately 5. For superconducting loops, with currents generated by primordial magnetic fields, the constraint can be less or more stringent than this limit, depending on the strength of the magnetic field. It is also found in this case that there is a negligible amount of entropy production if the electromagnetic radiation from strings thermalizes with the radiation background.

  16. Tropospheric wet refractivity tomography using multiplicative algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Xiaoying, Wang; Ziqiang, Dai; Enhong, Zhang; Fuyang, K. E.; Yunchang, Cao; Lianchun, Song

    2014-01-01

    Algebraic reconstruction techniques (ART) have been successfully used to reconstruct the total electron content (TEC) of the ionosphere and have in recent years been tentatively used in tropospheric wet refractivity and water vapor tomography based on ground-based GNSS technology. Previous research on ART in tropospheric water vapor tomography focused on the convergence and relaxation parameters of the various algebraic reconstruction techniques and rarely discussed the impact of Gaussian constraints and the initial field on the iteration results. The existing accuracy evaluation parameters, calculated from slant wet delay, can only evaluate the resulting precision of the voxels penetrated by slant paths and cannot evaluate that of the voxels not penetrated by any slant path. The paper proposes two new statistical parameters, Bias and RMS, calculated from the wet refractivity of all voxels, to remedy the deficiencies of the existing evaluation parameters, and then discusses the effect of the Gaussian constraints and the initial field on the convergence and tomography results when the multiplicative algebraic reconstruction technique (MART) is used to reconstruct the 4D tropospheric wet refractivity field with a simulation method.
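
    A minimal sketch of one common MART variant and of the two whole-field statistics described above (the relaxation rule, normalization and variable names are assumptions, not the paper's code):

    ```python
    import numpy as np

    def mart(A, b, x0, relaxation=0.2, n_sweeps=50):
        # A[i, j]: length of slant path i inside voxel j; b[i]: observed slant wet delay;
        # x: wet refractivity per voxel, kept positive by the multiplicative update.
        # The choice of x0 plays the role of the initial (a priori) field.
        x = x0.astype(float).copy()
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                proj = A[i] @ x
                if proj <= 0:
                    continue
                x *= (b[i] / proj) ** (relaxation * A[i] / max(A[i].max(), 1e-12))
        return x

    def bias_and_rms(x_est, x_true):
        # whole-field statistics over all voxels, including those crossed by no slant path
        d = x_est - x_true
        return d.mean(), np.sqrt(np.mean(d ** 2))

    # toy usage: 2 voxels, 3 rays
    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    x_true = np.array([10.0, 20.0])
    x_hat = mart(A, A @ x_true, x0=np.full(2, 15.0))
    print(bias_and_rms(x_hat, x_true))
    ```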

  17. Equivalence between the Lovelock-Cartan action and a constrained gauge theory

    NASA Astrophysics Data System (ADS)

    Junqueira, O. C.; Pereira, A. D.; Sadovski, G.; Santos, T. R. S.; Sobreiro, R. F.; Tomaz, A. A.

    2017-04-01

    We show that the four-dimensional Lovelock-Cartan action can be derived from a massless gauge theory for the SO(1, 3) group with an additional BRST trivial part. The model is originally composed of a topological sector and a BRST exact piece and has no explicit dependence on the metric, the vierbein or a mass parameter. The vierbein is introduced together with a mass parameter through some BRST trivial constraints. The effect of the constraints is to identify the vierbein with some of the additional fields, transforming the original action into the Lovelock-Cartan one. In this scenario, the mass parameter is identified with Newton's constant, while the gauge field is identified with the spin connection. The symmetries of the model are also explored. Moreover, the extension of the model to a quantum version is qualitatively discussed.

  18. Primordial black holes from inflaton and spectator field perturbations in a matter-dominated era

    NASA Astrophysics Data System (ADS)

    Carr, Bernard; Tenkanen, Tommi; Vaskonen, Ville

    2017-09-01

    We study production of primordial black holes (PBHs) during an early matter-dominated phase. As a source of perturbations, we consider either an inflaton field with a running spectral index or a spectator field that has a blue spectrum and thus provides a significant contribution to PBH production at small scales. First, we identify the region of the parameter space where a significant fraction of the observed dark matter can be produced, taking into account all current PBH constraints. Then, we present constraints on the amplitude and spectral index of the spectator field as a function of the reheating temperature. We also derive constraints on the running of the inflaton spectral index, d n /d ln k ≲0.001 , which are comparable to those from the Planck satellite for a scenario where the spectator field is absent.

  19. Bayesian prestack seismic inversion with a self-adaptive Huber-Markov random-field edge protection scheme

    NASA Astrophysics Data System (ADS)

    Tian, Yu-Kun; Zhou, Hui; Chen, Han-Ming; Zou, Ya-Ming; Guan, Shou-Jun

    2013-12-01

    Seismic inversion is a highly ill-posed problem, due to many factors such as the limited seismic frequency bandwidth and inappropriate forward modeling. To obtain a unique solution, smoothing constraints such as the Tikhonov regularization are usually applied. The Tikhonov method can maintain a globally smooth solution, but blurs structure edges. In this paper we use the Huber-Markov random-field edge protection method in the procedure of inverting three parameters: P-velocity, S-velocity and density. The method avoids blurring the structure edges and resists noise. For each parameter to be inverted, the Huber-Markov random field constructs a neighborhood system, which further acts as the vertical and lateral constraints. We use a quadratic Huber edge penalty function within a layer to suppress noise and a linear one across edges to avoid a fuzzy result. The effectiveness of our method is demonstrated by inverting synthetic data without and with noise. The relationship between the adopted constraints and the inversion results is analyzed as well.
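
    The quadratic-inside, linear-beyond behaviour described above is the defining property of the Huber function; a small sketch of such an edge-protecting penalty over a vertical and lateral neighborhood (the fixed threshold here replaces the paper's self-adaptive scheme and is an assumption):

    ```python
    import numpy as np

    def huber(t, delta):
        # quadratic for |t| <= delta (within a layer, suppressing noise),
        # linear for |t| > delta (across an edge, avoiding over-smoothing)
        a = np.abs(t)
        return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

    def hmrf_penalty(model, delta=0.05):
        # model: 2-D array of one parameter (e.g. P-velocity) sampled in time/depth and trace
        dz = np.diff(model, axis=0)   # vertical differences (neighbours within a trace)
        dx = np.diff(model, axis=1)   # lateral differences (neighbours across traces)
        return huber(dz, delta).sum() + huber(dx, delta).sum()

    print(hmrf_penalty(np.random.default_rng(0).normal(size=(50, 30))))
    ```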

  20. Can high fields save the tokamak? The challenge of steady-state operation for low cost compact reactors

    NASA Astrophysics Data System (ADS)

    Freidberg, Jeffrey; Dogra, Akshunna; Redman, William; Cerfon, Antoine

    2016-10-01

    The development of high field, high temperature superconductors is thought to be a game changer for the development of fusion power based on the tokamak concept. We test the validity of this assertion for pilot plant scale reactors (Q 10) for two different but related missions: pulsed operation and steady-state operation. Specifically, we derive a set of analytic criteria that determines the basic design parameters of a given fusion reactor mission. As expected there are far more constraints than degrees of freedom in any given design application. However, by defining the mission of the reactor under consideration, we have been able to determine the subset of constraints that drive the design, and calculate the values for the key parameters characterizing the tokamak. Our conclusions are as follows: 1) for pulsed reactors, high field leads to more compact designs and thus cheaper reactors - high B is the way to go; 2) steady-state reactors with H-mode like transport are large, even with high fields. The steady-state constraint is hard to satisfy in compact designs - high B helps but is not enough; 3) I-mode like transport, when combined with high fields, yields relatively compact steady-state reactors - why is there not more research on this favorable transport regime?

  1. Cosmological constraints on Brans-Dicke theory.

    PubMed

    Avilez, A; Skordis, C

    2014-07-04

    We report strong cosmological constraints on the Brans-Dicke (BD) theory of gravity using cosmic microwave background data from Planck. We consider two types of models. First, the initial condition of the scalar field is fixed to give the same effective gravitational strength Geff today as the one measured on Earth, GN. In this case, the BD parameter ω is constrained to ω>692 at the 99% confidence level, an order of magnitude improvement over previous constraints. In the second type, the initial condition for the scalar is a free parameter leading to a somewhat stronger constraint of ω>890, while Geff is constrained to 0.981

  2. Constraints on Covariant Horava-Lifshitz Gravity from frame-dragging experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radicella, Ninfa; Lambiase, Gaetano; Parisi, Luca

    The effects of Horava-Lifshitz corrections to the gravito-magnetic field are analyzed. Solutions in the weak field, slow motion limit, referring to the motion of a satellite around the Earth, are considered. The post-Newtonian paradigm is used to evaluate constraints on the Horava-Lifshitz parameter space from current satellite and terrestrial experimental data. In particular, we focus on GRAVITY PROBE B, LAGEOS and the more recent LARES mission, as well as a forthcoming terrestrial project, GINGER.

  3. Constraints on Covariant Horava-Lifshitz Gravity from frame-dragging experiment

    NASA Astrophysics Data System (ADS)

    Radicella, Ninfa; Lambiase, Gaetano; Parisi, Luca; Vilasi, Gaetano

    2014-12-01

    The effects of Horava-Lifshitz corrections to the gravito-magnetic field are analyzed. Solutions in the weak field, slow motion limit, referring to the motion of a satellite around the Earth, are considered. The post-Newtonian paradigm is used to evaluate constraints on the Horava-Lifshitz parameter space from current satellite and terrestrial experimental data. In particular, we focus on GRAVITY PROBE B, LAGEOS and the more recent LARES mission, as well as a forthcoming terrestrial project, GINGER.

  4. Constraining chameleon field theories using the GammeV afterglow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, A.; Steffen, J. H.; Weltman, A.

    2010-01-01

    The GammeV experiment has constrained the couplings of chameleon scalar fields to matter and photons. Here, we present a detailed calculation of the chameleon afterglow rate underlying these constraints. The dependence of GammeV constraints on various assumptions in the calculation is studied. We discuss the GammeV-CHameleon Afterglow SEarch, a second-generation GammeV experiment, which will improve upon GammeV in several major ways. Using our calculation of the chameleon afterglow rate, we forecast model-independent constraints achievable by GammeV-CHameleon Afterglow SEarch. We then apply these constraints to a variety of chameleon models, including quartic chameleons and chameleon dark energy models. The new experiment will be able to probe a large region of parameter space that is beyond the reach of current tests, such as fifth force searches, constraints on the dimming of distant astrophysical objects, and bounds on the variation of the fine structure constant.

  5. Constraining chameleon field theories using the GammeV afterglow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, A. (Chicago U., EFI; KICP, Chicago); Steffen, J. H.

    2009-11-01

    The GammeV experiment has constrained the couplings of chameleon scalar fields to matter and photons. Here we present a detailed calculation of the chameleon afterglow rate underlying these constraints. The dependence of GammeV constraints on various assumptions in the calculation is studied. We discuss GammeV-CHASE, a second-generation GammeV experiment, which will improve upon GammeV in several major ways. Using our calculation of the chameleon afterglow rate, we forecast model-independent constraints achievable by GammeV-CHASE. We then apply these constraints to a variety of chameleon models, including quartic chameleons and chameleon dark energy models. The new experiment will be able to probe a large region of parameter space that is beyond the reach of current tests, such as fifth force searches, constraints on the dimming of distant astrophysical objects, and bounds on the variation of the fine structure constant.

  6. Evaluation of parameters of Black Hole, stellar cluster and dark matter distribution from bright star orbits in the Galactic Center

    NASA Astrophysics Data System (ADS)

    Zakharov, Alexander

    It is well known that one can evaluate black hole (BH) parameters (including spin) by analyzing trajectories of stars around the BH. A bulk distribution of matter (dark matter (DM) + stellar cluster) inside stellar orbits modifies the trajectories of stars; in particular, it generally produces an apoastron shift in the direction opposite to the GR one. Even now one can put constraints on the DM distribution and BH parameters, and the constraints will be more stringent in the future. Therefore, an analysis of bright star trajectories provides a relativistic test in the weak gravitational field approximation, but in the future one could test the strong gravitational field near the BH at the Galactic Center with the same technique, thanks to rapid progress in observational facilities. References: A. Zakharov et al., Phys. Rev. D 76, 062001 (2007); A. F. Zakharov et al., Space Sci. Rev. 148, 301-313 (2009).

  7. Self-constrained inversion of potential fields

    NASA Astrophysics Data System (ADS)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
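
    A sketch of how such self-derived information can enter the objective as weights (the functional forms, exponents and the way the estimated structural index and source edges are used are assumptions for illustration, not the authors' implementation):

    ```python
    import numpy as np

    def depth_weight(z, z0=1.0, beta=3.0):
        # Li & Oldenburg-style depth weighting; beta would be tied to the structural
        # index estimated from the field data (one way to use the self-constraints)
        return (z + z0) ** (-beta / 2.0)

    def spatial_weight(x, estimated_edges, outside=0.1):
        # down-weight model cells outside the horizontal source extent estimated from the data
        lo, hi = estimated_edges
        return np.where((x > lo) & (x < hi), 1.0, outside)

    # the weights multiply the model-norm term of a standard regularized objective:
    #   phi(m) = ||d - G m||^2 + lam * ||W_depth(z) * W_space(x) * m||^2
    z = np.linspace(0.0, 50.0, 6)
    x = np.linspace(-100.0, 100.0, 5)
    print(depth_weight(z), spatial_weight(x, (-30.0, 30.0)))
    ```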

  8. Searching for quantum optimal controls under severe constraints

    DOE PAGES

    Riviello, Gregory; Tibbetts, Katharine Moore; Brif, Constantin; ...

    2015-04-06

    The success of quantum optimal control for both experimental and theoretical objectives is connected to the topology of the corresponding control landscapes, which are free from local traps if three conditions are met: (1) the quantum system is controllable, (2) the Jacobian of the map from the control field to the evolution operator is of full rank, and (3) there are no constraints on the control field. This paper investigates how the violation of assumption (3) affects gradient searches for globally optimal control fields. The satisfaction of assumptions (1) and (2) ensures that the control landscape lacks fundamental traps, but certain control constraints can still prevent successful optimization of the objective. Using optimal control simulations, we show that the most severe field constraints are those that limit essential control resources, such as the number of control variables, the control duration, and the field strength. Proper management of these resources is an issue of great practical importance for optimization in the laboratory. For each resource, we show that constraints exceeding quantifiable limits can introduce artificial traps to the control landscape and prevent gradient searches from reaching a globally optimal solution. These results demonstrate that careful choice of relevant control parameters helps to eliminate artificial traps and facilitate successful optimization.

  9. Langlands Parameters of Quivers in the Sato Grassmannian

    NASA Astrophysics Data System (ADS)

    Luu, Martin T.; Penciak, Matej

    2018-01-01

    Motivated by quantum field theoretic partition functions that can be expressed as products of tau functions of the KP hierarchy we attach several types of local geometric Langlands parameters to quivers in the Sato Grassmannian. We study related questions of Virasoro constraints, of moduli spaces of relevant quivers, and of classical limits of the Langlands parameters.

  10. A New Limit on Planck Scale Lorentz Violation from Gamma-ray Burst Polarization

    NASA Technical Reports Server (NTRS)

    Stecker, Floyd W.

    2011-01-01

    Constraints on possible Lorentz invariance violation (LIV) to first order in E/M_Planck for photons in the framework of effective field theory (EFT) are discussed, taking cosmological factors into account. Then, using the reported detection of polarized soft gamma-ray emission from the gamma-ray burst GRB041219a that is indicative of an absence of vacuum birefringence, together with a very recent improved method for estimating the redshift of the burst, we derive constraints on the dimension-5 Lorentz violating modification to the Lagrangian of an effective local QFT for QED. Our new constraints are more than five orders of magnitude better than recent constraints from observations of the Crab Nebula. We obtain an upper limit on the Lorentz violating dimension-5 EFT parameter |ζ| of 2.4 × 10^{-15}, corresponding to a constraint on the dimension-5 standard model extension parameter κ^{(5)}_{(V)00} < 4.2 × 10^{-34} GeV^{-1}.

  11. Tunnelling in Dante's Inferno

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furuuchi, Kazuyuki; Sperling, Marcus, E-mail: kazuyuki.furuuchi@manipal.edu, E-mail: marcus.sperling@univie.ac.at

    2017-05-01

    We study quantum tunnelling in Dante's Inferno model of large field inflation. Such a tunnelling process, which will terminate inflation, becomes problematic if the tunnelling rate is rapid compared to the Hubble time scale at the time of inflation. Consequently, we constrain the parameter space of Dante's Inferno model by demanding a suppressed tunnelling rate during inflation. The constraints are derived and explicit numerical bounds are provided for representative examples. Our considerations are at the level of an effective field theory; hence, the presented constraints have to hold regardless of any UV completion.

  12. Two-dimensional probabilistic inversion of plane-wave electromagnetic data: methodology, model constraints and joint inversion with electrical resistivity data

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, Marina; Linde, Niklas; Kalscheuer, Thomas; Vrugt, Jasper A.

    2014-03-01

    Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
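
    A minimal pixel-based Metropolis sketch with an l_p model-structure constraint in the log-posterior, in the spirit of the formulation above (the norms, the proposal and the placeholder forward model are assumptions; the actual plane-wave EM solver and the hierarchical sampling of error and regularization-weight parameters are not reproduced):

    ```python
    import numpy as np

    def log_posterior(m, d_obs, forward, sigma, lam, p_data=2, p_model=1):
        # misfit with an l_p data norm plus an l_p roughness constraint on the model
        # (p_model = 1 favours blocky structure, p_model = 2 favours smooth structure)
        r = (forward(m) - d_obs) / sigma
        return -np.sum(np.abs(r) ** p_data) - lam * np.sum(np.abs(np.diff(m)) ** p_model)

    def metropolis(m0, d_obs, forward, sigma, lam, steps=20_000, step_size=0.05, seed=0):
        rng = np.random.default_rng(seed)
        m, lp = m0.copy(), log_posterior(m0, d_obs, forward, sigma, lam)
        chain = []
        for _ in range(steps):
            cand = m + rng.normal(scale=step_size, size=m.size)   # pixel-based Gaussian proposal
            lp_c = log_posterior(cand, d_obs, forward, sigma, lam)
            if np.log(rng.uniform()) < lp_c - lp:
                m, lp = cand, lp_c
            chain.append(m.copy())
        return np.array(chain)

    # toy usage with an identity "forward model" standing in for the EM solver
    truth = np.r_[np.zeros(10), np.ones(10)]
    data = truth + 0.1 * np.random.default_rng(1).normal(size=truth.size)
    samples = metropolis(np.zeros(20), data, forward=lambda m: m, sigma=0.1, lam=1.0, steps=2000)
    print(samples.mean(axis=0).round(2))
    ```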

  13. Curvature perturbation spectra from waterfall transition, black hole constraints and non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Bugaev, Edgar; Klimai, Peter

    2011-11-01

    We carried out numerical calculations of the contribution of the waterfall field to the primordial curvature perturbation (on uniform density hypersurfaces) ζ, which is produced during the waterfall transition in the hybrid inflation scenario. The calculation is performed for a broad interval of values of the model parameters. We show that there is a strong growth of the amplitude of the curvature perturbation spectrum in the limit when the bare mass-squared of the waterfall field becomes comparable with the square of the Hubble parameter. We show that in this limit the primordial black hole constraints on the curvature perturbations must be taken into account. It is shown that, in the same limit, the peak values of the curvature perturbation spectra are far beyond the horizon, and the spectra are strongly non-Gaussian.

  14. Cosmology of chameleons with power-law couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mota, David F.; Winther, Hans A.

    2011-05-20

    In chameleon field theories, a scalar field can couple to matter with gravitational strength and still evade local gravity constraints due to a combination of self-interactions and the couplings to matter. Originally, these theories were proposed with a constant coupling to matter; however, the chameleon mechanism also extends to the case where the coupling becomes field dependent. We study the cosmology of chameleon models with power-law couplings and power-law potentials. It is found that these generalized chameleons, when viable, have a background expansion very close to ΛCDM, but can in some special cases enhance the growth of the linear perturbations at low redshifts. For the models we consider, it is found that this region of the parameter space is ruled out by local gravity constraints. Imposing a coupling to dark matter only, the local constraints are avoided, and it is possible to have observable signatures on the linear matter perturbations.

  15. Probing Primordial Non-Gaussianity with Weak-lensing Minkowski Functionals

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Yoshida, Naoki; Hamana, Takashi; Nishimichi, Takahiro

    2012-11-01

    We study the cosmological information contained in the Minkowski functionals (MFs) of weak gravitational lensing convergence maps. We show that the MFs provide strong constraints on the local-type primordial non-Gaussianity parameter f_NL. We run a set of cosmological N-body simulations and perform ray-tracing simulations of weak lensing to generate 100 independent convergence maps of a 25 deg^2 field of view for f_NL = -100, 0 and 100. We perform a Fisher analysis to study the degeneracy among other cosmological parameters such as the dark energy equation of state parameter w and the fluctuation amplitude σ_8. We use fully nonlinear covariance matrices evaluated from 1000 ray-tracing simulations. For upcoming wide-field observations such as those from the Subaru Hyper Suprime-Cam survey with a proposed survey area of 1500 deg^2, the primordial non-Gaussianity can be constrained with a level of f_NL ~ 80 and w ~ 0.036 by weak-lensing MFs. If simply scaled by the effective survey area, a 20,000 deg^2 lensing survey using the Large Synoptic Survey Telescope will yield constraints of f_NL ~ 25 and w ~ 0.013. We show that these constraints can be further improved by a tomographic method using source galaxies in multiple redshift bins.
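
    For reference, one standard grid estimator of the three 2-D Minkowski functionals of a convergence map is sketched below (conventions and normalizations differ between papers; this is not the authors' pipeline):

    ```python
    import numpy as np

    def minkowski_functionals(kappa, thresholds, dnu):
        # V0: excursion-set area, V1: boundary length, V2: Euler characteristic,
        # estimated with a binned Dirac delta of width dnu at each threshold nu.
        ky, kx = np.gradient(kappa)          # derivatives along axis 0 (y) and axis 1 (x)
        kxy, kxx = np.gradient(kx)
        kyy, _ = np.gradient(ky)
        grad = np.hypot(kx, ky)
        v0, v1, v2 = [], [], []
        for nu in thresholds:
            delta = (np.abs(kappa - nu) < dnu / 2) / dnu
            v0.append(np.mean(kappa >= nu))
            v1.append(np.mean(delta * grad) / 4.0)
            curv = (2 * kx * ky * kxy - kx**2 * kyy - ky**2 * kxx) / (grad**2 + 1e-12)
            v2.append(np.mean(delta * curv) / (2.0 * np.pi))
        return np.array(v0), np.array(v1), np.array(v2)

    # toy usage on a Gaussian random map
    kappa = np.random.default_rng(1).normal(size=(256, 256))
    print(minkowski_functionals(kappa, thresholds=[-1.0, 0.0, 1.0], dnu=0.2))
    ```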

  16. Cosmological effects of scalar-photon couplings: dark energy and varying-α Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avgoustidis, A.; Martins, C.J.A.P.; Monteiro, A.M.R.V.L.

    2014-06-01

    We study cosmological models involving scalar fields coupled to radiation and discuss their effect on the redshift evolution of the cosmic microwave background temperature, focusing on links with varying fundamental constants and dynamical dark energy. We quantify how allowing for the coupling of scalar fields to photons, and its important effect on luminosity distances, weakens current and future constraints on cosmological parameters. In particular, for evolving dark energy models, joint constraints on the dark energy equation of state combining BAO radial distance and SN luminosity distance determinations will be strongly dominated by BAO. Thus, to fully exploit future SN data one must also independently constrain photon number non-conservation arising from the possible coupling of SN photons to the dark energy scalar field. We discuss how observational determinations of the background temperature at different redshifts can, in combination with distance measures data, set tight constraints on interactions between scalar fields and photons, thus breaking this degeneracy. We also discuss prospects for future improvements, particularly in the context of Euclid and the E-ELT, and show that Euclid can, even on its own, provide useful dark energy constraints while allowing for photon number non-conservation.

  17. Experimental determination of J-Q in the two-parameter characterization of fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, S.; Chiang, F.P.

    1995-11-01

    It is well recognized that using a single parameter to characterize crack tip deformation is no longer adequate if constraint is present. Several two-parameter characterization schemes have been proposed: the J-T approach, the J-Q approach of Shih et al., and the J-Q approach of Sharma and Aravas. The authors propose a scheme to measure the J and Q of the J-Q theory of Sharma and Aravas. They find that with the addition of the Q term the experimentally measured U-field displacement component agrees well with the theoretical prediction. The agreement improves as the crack tip constraint increases. The results for an SEN and a CN specimen are presented.

  18. Blind Deconvolution of Astronomical Images with a Constraint on Bandwidth Determined by the Parameters of the Optical System

    NASA Astrophysics Data System (ADS)

    Luo, Lin; Fan, Min; Shen, Mang-zuo

    2008-01-01

    Atmospheric turbulence severely restricts the spatial resolution of astronomical images obtained by a large ground-based telescope. In order to reduce this effect effectively, we propose a blind deconvolution method with a bandwidth constraint determined by the parameters of the telescope's optical system, based on the principle of maximum likelihood estimation, in which the convolution error function is minimized using the conjugate gradient algorithm. A relation between the parameters of the telescope optical system and the image's frequency-domain bandwidth is established, and the speed of convergence of the algorithm is improved by using the positivity constraint on the variables and the limited-bandwidth constraint on the point spread function. To keep the effective Fourier frequencies from exceeding the cut-off frequency, each image element (e.g., a pixel in CCD imaging) in the sampling focal plane must be smaller than one fourth of the diameter of the diffraction spot. In the algorithm, no object-centered constraint is used, so the proposed method is suitable for the image restoration of a whole field of objects. The effectiveness of the proposed method is demonstrated by computer simulation and by the restoration of an actually observed image of α Piscium.
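
    The two constraints named above are commonly enforced as projections between gradient steps; a small sketch of what that could look like (the cut-off convention and variable names are assumptions, not the paper's code):

    ```python
    import numpy as np

    def project_positivity(x):
        # keep the object and PSF estimates non-negative between conjugate-gradient steps
        return np.clip(x, 0.0, None)

    def project_bandlimit(psf, cutoff_cycles_per_pixel):
        # zero all Fourier components of the PSF estimate beyond the optical cut-off
        # frequency; with pixels below a quarter of the diffraction-spot diameter the
        # cut-off lies safely below the Nyquist frequency of 0.5 cycles/pixel.
        P = np.fft.fft2(psf)
        fy = np.fft.fftfreq(psf.shape[0])[:, None]
        fx = np.fft.fftfreq(psf.shape[1])[None, :]
        mask = np.hypot(fx, fy) <= cutoff_cycles_per_pixel
        return np.real(np.fft.ifft2(P * mask))

    psf = np.random.default_rng(0).random((64, 64))
    psf = project_positivity(project_bandlimit(psf, cutoff_cycles_per_pixel=0.2))
    ```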

  19. The four fixed points of scale invariant single field cosmological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, BingKan, E-mail: bxue@princeton.edu

    2012-10-01

    We introduce a new set of flow parameters to describe the time dependence of the equation of state and the speed of sound in single field cosmological models. A scale invariant power spectrum is produced if these flow parameters satisfy specific dynamical equations. We analyze the flow of these parameters and find four types of fixed points that encompass all known single field models. Moreover, near each fixed point we uncover new models where the scale invariance of the power spectrum relies on having simultaneously time varying speed of sound and equation of state. We describe several distinctive new models and discuss constraints from strong coupling and superluminality.

  20. The Hoffmeister asteroid family

    NASA Astrophysics Data System (ADS)

    Carruba, V.; Novaković, B.; Aljbaae, S.

    2017-03-01

    The Hoffmeister family is a C-type group located in the central main belt. Dynamically, it is important because of its interaction with the ν1C nodal secular resonance with Ceres, which significantly increases the dispersion in inclination of family members at lower semimajor axis. As a result, the distribution of inclination values of the Hoffmeister family at semimajor axes lower than its centre is significantly leptokurtic, and this can be used to set constraints on the terminal ejection velocity field of the family at the time it was produced. By analysing the time behaviour of the kurtosis of the v_W component of the ejection velocity field [γ_2(v_W)], as obtained from Gauss' equations, for fictitious Hoffmeister families with different values of the ejection velocity field, we were able to exclude ages older than 335 Myr for the Hoffmeister family. Constraints from the currently observed inclination distribution of the Hoffmeister family suggest that its terminal ejection velocity parameter V_EJ should be lower than 25 m s^{-1}. Results of a Yarko-YORP Monte Carlo method for family dating, combined with other constraints from inclinations and γ_2(v_W), indicate that the Hoffmeister family should be 220^{+60}_{-40} Myr old, with an ejection parameter V_EJ = 20 ± 5 m s^{-1}.

  1. Plasma constraints on the cosmological abundance of magnetic monopoles and the origin of cosmic magnetic fields

    NASA Astrophysics Data System (ADS)

    Medvedev, Mikhail V.; Loeb, Abraham

    2017-06-01

    Existing theoretical and observational constraints on the abundance of magnetic monopoles are limited. Here we demonstrate that an ensemble of monopoles forms a plasma whose properties are well determined and whose collective effects place new tight constraints on the cosmological abundance of monopoles. In particular, the existence of micro-Gauss magnetic fields in galaxy clusters and radio relics implies that the scales of these structures are below the Debye screening length, thus setting an upper limit on the cosmological density parameter of monopoles, Ω_M ≲ 3 × 10^{-4}, which precludes them from being the dark matter. Future detection of Gpc-scale coherent magnetic fields could improve this limit by a few orders of magnitude. In addition, we predict the existence of magnetic Langmuir waves and turbulence which may appear on the sky as "zebra patterns" of an alternating magnetic field with k·B ≠ 0. We also show that magnetic monopole Langmuir turbulence excited near the accretion shock of galaxy clusters may be an efficient mechanism for generating the observed intracluster magnetic fields.

  2. Hybrid Inflation: Multi-field Dynamics and Cosmological Constraints

    NASA Astrophysics Data System (ADS)

    Clesse, Sébastien

    2011-09-01

    The dynamics of hybrid models is usually approximated by the evolution of a scalar field slowly rolling along a nearly flat valley. Inflation ends with a waterfall phase, due to a tachyonic instability. This final phase is usually assumed to be nearly instantaneous. In this thesis, we go beyond these approximations and analyze the exact 2-field dynamics of hybrid models. Several effects are brought to light: 1) possible slow-roll violations along the valley imply the non-existence of inflation at small field values; provided the fields are super-Planckian, the scalar spectrum of the original model is red, in agreement with observations. 2) The initial field values are not fine-tuned along the valley but also occupy a considerable part of the field space exterior to it. They form a structure with fractal boundaries. Using Bayesian methods, their distribution in the whole parameter space is studied, and natural bounds on the potential parameters are derived. 3) For the original model, inflation is found to continue for more than 60 e-folds along waterfall trajectories in some part of the parameter space. The scalar power spectrum of adiabatic perturbations is modified and is generically red, possibly in agreement with CMB observations. Topological defects are conveniently stretched outside the observable Universe. 4) The analysis of the initial conditions is extended to the case of a closed Universe, in which the initial singularity is replaced by a classical bounce. In the third part of the thesis, we study how the present CMB constraints on the cosmological parameters could be improved with observations of the 21 cm cosmic background by future giant radio telescopes. Forecasts are determined for a characteristic Fast Fourier Transform Telescope, using both Fisher matrix and MCMC methods.

  3. Rate-gyro-integral constraint for ambiguity resolution in GNSS attitude determination applications.

    PubMed

    Zhu, Jiancheng; Li, Tao; Wang, Jinling; Hu, Xiaoping; Wu, Meiping

    2013-06-21

    In the field of Global Navigation Satellite System (GNSS) attitude determination, constraints usually play a critical role in resolving the unknown ambiguities quickly and correctly. Many constraints, such as the baseline length, the geometry of multiple baselines and the horizontal attitude angles, have been used extensively to improve the performance of ambiguity resolution. In GNSS/Inertial Navigation System (INS) integrated attitude determination systems using a low-grade Inertial Measurement Unit (IMU), the initial heading parameters of the vehicle are usually worked out by the GNSS subsystem instead of by the IMU sensors independently. However, when a rotation occurs, the angle through which the vehicle has turned within a short time span can be measured accurately by the IMU. This measurement is treated as a constraint, namely the rate-gyro-integral constraint, which can aid GNSS ambiguity resolution. We use this constraint to filter the candidates in the ambiguity search stage. The ambiguity search space shrinks significantly with this constraint imposed during the rotation, which helps to speed up the initialization of attitude parameters under dynamic circumstances. This paper studies only the application of this new constraint to land vehicles. The impact of measurement errors on the effect of this new constraint is assessed for different grades of IMU and the current average precision level of GNSS receivers. Simulations and experiments in urban areas have demonstrated the validity and efficacy of the new constraint in aiding GNSS attitude determination.
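
    A sketch of how such a candidate filter could look (all names and the tolerance are illustrative assumptions, not the paper's algorithm or API):

    ```python
    import numpy as np

    def filter_by_gyro_integral(candidates, heading_from_ambiguity, heading_t0,
                                gyro_delta_heading_deg, tol_deg=0.5):
        # keep only those integer-ambiguity candidates whose implied heading change over
        # the rotation matches the angle integrated from the rate gyros within a tolerance
        kept = []
        for n in candidates:
            dpsi = (heading_from_ambiguity(n) - heading_t0 + 180.0) % 360.0 - 180.0
            if abs(dpsi - gyro_delta_heading_deg) < tol_deg:
                kept.append(n)
        return kept

    # toy usage with a fake heading model: candidate 2 survives, candidate 7 is rejected
    print(filter_by_gyro_integral([2, 7], lambda n: 10.0 * n, heading_t0=0.0,
                                  gyro_delta_heading_deg=20.2))
    ```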

  4. Very low scale Coleman-Weinberg inflation with nonminimal coupling

    NASA Astrophysics Data System (ADS)

    Kaneta, Kunio; Seto, Osamu; Takahashi, Ryo

    2018-03-01

    We study viable small-field Coleman-Weinberg (CW) inflation models with the help of nonminimal coupling to gravity. The simplest small-field CW inflation model (with a low-scale potential minimum) is incompatible with the cosmological constraint on the scalar spectral index. However, there are possibilities to make the model realistic. First, we revisit the CW inflation model supplemented with a linear potential term. We next consider the CW inflation model with a logarithmic nonminimal coupling and illustrate that the model can open a new viable parameter space that includes the model with a linear potential term. We also show parameter spaces where the Hubble scale during the inflation can be as small as 10^{-4} GeV, 1 GeV, 10^4 GeV, and 10^8 GeV for the number of e-folds of 40, 45, 50, and 55, respectively, with other cosmological constraints being satisfied.

  5. Revisiting CMB constraints on warm inflation

    NASA Astrophysics Data System (ADS)

    Arya, Richa; Dasgupta, Arnab; Goswami, Gaurav; Prasad, Jayanti; Rangarajan, Raghavan

    2018-02-01

    We revisit the constraints that Planck 2015 temperature, polarization and lensing data impose on the parameters of warm inflation. To this end, we study warm inflation driven by a single scalar field with a quartic self-interaction potential in the weak dissipative regime. We analyse the effect of the parameters of warm inflation, namely the inflaton self-coupling λ and the inflaton dissipation parameter Q_P, on the CMB angular power spectrum. We constrain λ and Q_P for 50 and 60 e-foldings with the full Planck 2015 data (TT, TE, EE + lowP and lensing) by performing a Markov chain Monte Carlo analysis using the publicly available code CosmoMC, and obtain the joint as well as marginalized distributions of those parameters. We present our results in the form of means and 68% confidence limits on the parameters and also highlight the degeneracy between λ and Q_P in our analysis. From this analysis we show how the warm inflation parameters can be well constrained using the Planck 2015 data.

  6. Characteristic parameters of superconductor-coolant interaction including high Tc current density limits

    NASA Technical Reports Server (NTRS)

    Frederking, T. H. K.

    1989-01-01

    In the area of basic mechanisms of helium heat transfer and related influence on superconducting magnet stability, thermal boundary conditions are important constraints. Characteristic lengths are considered along with other parameters of the superconducting composite-coolant system. Based on helium temperature range developments, limiting critical current densities are assessed at low fields for high transition temperature superconductors.

  7. The reconstruction of tachyon inflationary potentials

    NASA Astrophysics Data System (ADS)

    Fei, Qin; Gong, Yungui; Lin, Jiong; Yi, Zhu

    2017-08-01

    We derive a lower bound on the field excursion for the tachyon inflation, which is determined by the amplitude of the scalar perturbation and the number of e-folds before the end of inflation. Using the relation between observables like n_s and r and the slow-roll parameters, we reconstruct three classes of tachyon potentials. The model parameters are determined from the observations before the potentials are reconstructed, and the observations prefer the concave potential. We also discuss the constraints from the reheating phase preceding the radiation domination for the three classes of models by assuming that the equation of state parameter w_re during reheating is a constant. Depending on the model parameters and the value of w_re, the constraints on N_re and T_re are different. As n_s increases, the allowed reheating epoch becomes longer for w_re = -1/3, 0 and 1/6, while it becomes shorter for w_re = 2/3.

  8. Cosmology with photometric weak lensing surveys: Constraints with redshift tomography of convergence peaks and moments

    NASA Astrophysics Data System (ADS)

    Petri, Andrea; May, Morgan; Haiman, Zoltán

    2016-09-01

    Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w. When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ω_m, w, σ_8) for an LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. We find that redshift tomography with the power spectrum reduces the area of the 1σ confidence interval in (Ω_m, w) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ω_m, w) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. We find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.
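
    Convergence peak counts, one of the non-Gaussian statistics used above, are simple to estimate from a smoothed map; a minimal sketch (binning, smoothing and noise handling are assumptions, not the paper's pipeline):

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter

    def peak_counts(kappa, sigma_noise, snr_bins):
        # local maxima of a (smoothed) convergence map, histogrammed in signal-to-noise
        is_peak = kappa == maximum_filter(kappa, size=3)   # local maxima on a 3x3 stencil
        snr = kappa[is_peak] / sigma_noise
        counts, _ = np.histogram(snr, bins=snr_bins)
        return counts

    kappa = np.random.default_rng(2).normal(size=(512, 512))
    print(peak_counts(kappa, sigma_noise=1.0, snr_bins=np.arange(0.0, 5.5, 0.5)))
    ```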

  9. Hubble Frontier Fields: systematic errors in strong lensing models of galaxy clusters - implications for cosmography

    NASA Astrophysics Data System (ADS)

    Acebron, Ana; Jullo, Eric; Limousin, Marceau; Tilquin, André; Giocoli, Carlo; Jauzac, Mathilde; Mahler, Guillaume; Richard, Johan

    2017-09-01

Strong gravitational lensing by galaxy clusters is a fundamental tool to study dark matter and constrain the geometry of the Universe. Recently, the Hubble Space Telescope Frontier Fields programme has allowed a significant improvement of mass and magnification measurements, but lensing models still have a residual root mean square between 0.2 arcsec and a few arcseconds, which is not yet completely understood. Systematic errors have to be better understood and treated in order to use strong lensing clusters as reliable cosmological probes. We have analysed two simulated Hubble-Frontier-Fields-like clusters from the Hubble Frontier Fields Comparison Challenge, Ares and Hera. We use several estimators (relative bias on magnification, density profiles, ellipticity and orientation) to quantify the goodness of our reconstructions by comparing our multiple models, optimized with the parametric software lenstool, with the input models. We have quantified the impact of systematic errors arising, first, from the choice of different density profiles and configurations and, secondly, from the availability of constraints (spectroscopic or photometric redshifts, redshift ranges of the background sources) in the parametric modelling of strong lensing galaxy clusters and therefore on the retrieval of cosmological parameters. We find that substructures in the outskirts have a significant impact on the position of the multiple images, yielding tighter cosmological contours. The need for wide-field imaging around massive clusters is thus reinforced. We show that competitive cosmological constraints can also be obtained with complex multimodal clusters and that photometric redshifts improve the constraints on cosmological parameters when considering a narrow range of (spectroscopic) redshifts for the sources.

  10. Weakly dynamic dark energy via metric-scalar couplings with torsion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sur, Sourav; Bhatia, Arshdeep Singh, E-mail: sourav.sur@gmail.com, E-mail: arshdeepsb@gmail.com

We study the dynamical aspects of dark energy in the context of a non-minimally coupled scalar field with curvature and torsion. Whereas the scalar field acts as the source of the trace mode of torsion, a suitable constraint on the torsion pseudo-trace provides a mass term for the scalar field in the effective action. In the equivalent scalar-tensor framework, we find explicit cosmological solutions representing dark energy in both Einstein and Jordan frames. We demand the dynamical evolution of the dark energy to be weak enough, so that the present-day values of the cosmological parameters could be estimated keeping them within the confidence limits set for the standard LCDM model from recent observations. For such estimates, we examine the variations of the effective matter density and the dark energy equation of state parameters over different redshift ranges. In spite of being weakly dynamic, the dark energy component differs significantly from the cosmological constant, both in characteristics and features; e.g., it interacts with the cosmological (dust) fluid in the Einstein frame, and crosses the phantom barrier in the Jordan frame. We also obtain the upper bounds on the torsion mode parameters and the lower bound on the effective Brans-Dicke parameter. The latter turns out to be fairly large, and in agreement with the local gravity constraints, which therefore come in support of our analysis.

  11. Weakly dynamic dark energy via metric-scalar couplings with torsion

    NASA Astrophysics Data System (ADS)

    Sur, Sourav; Singh Bhatia, Arshdeep

    2017-07-01

We study the dynamical aspects of dark energy in the context of a non-minimally coupled scalar field with curvature and torsion. Whereas the scalar field acts as the source of the trace mode of torsion, a suitable constraint on the torsion pseudo-trace provides a mass term for the scalar field in the effective action. In the equivalent scalar-tensor framework, we find explicit cosmological solutions representing dark energy in both Einstein and Jordan frames. We demand the dynamical evolution of the dark energy to be weak enough, so that the present-day values of the cosmological parameters could be estimated keeping them within the confidence limits set for the standard LCDM model from recent observations. For such estimates, we examine the variations of the effective matter density and the dark energy equation of state parameters over different redshift ranges. In spite of being weakly dynamic, the dark energy component differs significantly from the cosmological constant, both in characteristics and features; e.g., it interacts with the cosmological (dust) fluid in the Einstein frame, and crosses the phantom barrier in the Jordan frame. We also obtain the upper bounds on the torsion mode parameters and the lower bound on the effective Brans-Dicke parameter. The latter turns out to be fairly large, and in agreement with the local gravity constraints, which therefore come in support of our analysis.

  12. Effective theories of universal theories

    DOE PAGES

    Wells, James D.; Zhang, Zhengkang

    2016-01-20

It is well-known but sometimes overlooked that constraints on the oblique parameters (most notably S and T parameters) are generally speaking only applicable to a special class of new physics scenarios known as universal theories. The oblique parameters should not be associated with Wilson coefficients in a particular operator basis in the effective field theory (EFT) framework, unless restrictions have been imposed on the EFT so that it describes universal theories. Here, we work out these restrictions, and present a detailed EFT analysis of universal theories. We find that at the dimension-6 level, universal theories are completely characterized by 16 parameters. They are conveniently chosen to be: 5 oblique parameters that agree with the commonly-adopted ones, 4 anomalous triple-gauge couplings, 3 rescaling factors for the h^3, hff, hVV vertices, 3 parameters for hVV vertices absent in the Standard Model, and 1 four-fermion coupling of order y_f^2. Furthermore, all these parameters are defined in an unambiguous and basis-independent way, allowing for consistent constraints on the universal theories parameter space from precision electroweak and Higgs data.

  13. Effective theories of universal theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, James D.; Zhang, Zhengkang

It is well-known but sometimes overlooked that constraints on the oblique parameters (most notably S and T parameters) are generally speaking only applicable to a special class of new physics scenarios known as universal theories. The oblique parameters should not be associated with Wilson coefficients in a particular operator basis in the effective field theory (EFT) framework, unless restrictions have been imposed on the EFT so that it describes universal theories. Here, we work out these restrictions, and present a detailed EFT analysis of universal theories. We find that at the dimension-6 level, universal theories are completely characterized by 16 parameters. They are conveniently chosen to be: 5 oblique parameters that agree with the commonly-adopted ones, 4 anomalous triple-gauge couplings, 3 rescaling factors for the h^3, hff, hVV vertices, 3 parameters for hVV vertices absent in the Standard Model, and 1 four-fermion coupling of order y_f^2. Furthermore, all these parameters are defined in an unambiguous and basis-independent way, allowing for consistent constraints on the universal theories parameter space from precision electroweak and Higgs data.

  14. Constraining Nonperturbative Strong-Field Effects in Scalar-Tensor Gravity by Combining Pulsar Timing and Laser-Interferometer Gravitational-Wave Detectors

    NASA Astrophysics Data System (ADS)

    Shao, Lijing; Sennett, Noah; Buonanno, Alessandra; Kramer, Michael; Wex, Norbert

    2017-10-01

    Pulsar timing and laser-interferometer gravitational-wave (GW) detectors are superb laboratories to study gravity theories in the strong-field regime. Here, we combine these tools to test the mono-scalar-tensor theory of Damour and Esposito-Farèse (DEF), which predicts nonperturbative scalarization phenomena for neutron stars (NSs). First, applying Markov-chain Monte Carlo techniques, we use the absence of dipolar radiation in the pulsar-timing observations of five binary systems composed of a NS and a white dwarf, and eleven equations of state (EOSs) for NSs, to derive the most stringent constraints on the two free parameters of the DEF scalar-tensor theory. Since the binary-pulsar bounds depend on the NS mass and the EOS, we find that current pulsar-timing observations leave scalarization windows, i.e., regions of parameter space where scalarization can still be prominent. Then, we investigate if these scalarization windows could be closed and if pulsar-timing constraints could be improved by laser-interferometer GW detectors, when spontaneous (or dynamical) scalarization sets in during the early (or late) stages of a binary NS (BNS) evolution. For the early inspiral of a BNS carrying constant scalar charge, we employ a Fisher-matrix analysis to show that Advanced LIGO can improve pulsar-timing constraints for some EOSs, and next-generation detectors, such as the Cosmic Explorer and Einstein Telescope, will be able to improve those bounds for all eleven EOSs. Using the late inspiral of a BNS, we estimate that for some of the EOSs under consideration, the onset of dynamical scalarization can happen early enough to improve the constraints on the DEF parameters obtained by combining the five binary pulsars. Thus, in the near future, the complementarity of pulsar timing and direct observations of GWs on the ground will be extremely valuable in probing gravity theories in the strong-field regime.

  15. Thermal inflation with a thermal waterfall scalar field coupled to a light spectator scalar field

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Konstantinos; Lyth, David H.; Rumsey, Arron

    2017-05-01

    A new model of thermal inflation is introduced, in which the mass of the thermal waterfall field is dependent on a light spectator scalar field. Using the δ N formalism, the "end of inflation" scenario is investigated in order to ascertain whether this model is able to produce the dominant contribution to the primordial curvature perturbation. A multitude of constraints are considered so as to explore the parameter space, with particular emphasis on key observational signatures. For natural values of the parameters, the model is found to yield a sharp prediction for the scalar spectral index and its running, well within the current observational bounds.

  16. A compendium of chameleon constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrage, Clare; Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk

    2016-11-01

The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth forces. The chameleon is a popular dark energy candidate and also arises in f(R) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches, making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  17. BDDC algorithms with deluxe scaling and adaptive selection of primal constraints for Raviart-Thomas vector fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Oh, Duk-Soon; Widlund, Olof B.; Zampini, Stefano

Here, a BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a new type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced. Assuming that the subdomains are all built from elements of a coarse triangulation of the given domain, and that in each subdomain the material parameters are consistent, one obtains a bound for the preconditioned linear system's condition number which is independent of the values and jumps of these parameters across the subdomains' interface. Numerical experiments, using the PETSc library, are also presented which support the theory and show the algorithms' effectiveness even for problems not covered by the theory. Also included are experiments with Brezzi-Douglas-Marini finite-element approximations.

  18. BDDC algorithms with deluxe scaling and adaptive selection of primal constraints for Raviart-Thomas vector fields

    DOE PAGES

Oh, Duk-Soon; Widlund, Olof B.; Zampini, Stefano; ...

    2017-06-21

Here, a BDDC domain decomposition preconditioner is defined by a coarse component, expressed in terms of primal constraints, a weighted average across the interface between the subdomains, and local components given in terms of solvers of local subdomain problems. BDDC methods for vector field problems discretized with Raviart-Thomas finite elements are introduced. The methods are based on a new type of weighted average and an adaptive selection of primal constraints developed to deal with coefficients with high contrast even inside individual subdomains. For problems with very many subdomains, a third level of the preconditioner is introduced. Assuming that the subdomains are all built from elements of a coarse triangulation of the given domain, and that in each subdomain the material parameters are consistent, one obtains a bound for the preconditioned linear system's condition number which is independent of the values and jumps of these parameters across the subdomains' interface. Numerical experiments, using the PETSc library, are also presented which support the theory and show the algorithms' effectiveness even for problems not covered by the theory. Also included are experiments with Brezzi-Douglas-Marini finite-element approximations.

  19. Plasma Constraints on the Cosmological Abundance of Magnetic Monopoles and the Origin of Cosmic Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Medvedev, Mikhail; Loeb, Abraham

    2017-10-01

Existing theoretical and observational constraints on the abundance of magnetic monopoles are limited. Here we demonstrate that an ensemble of monopoles forms a plasma whose properties are well determined and whose collective effects place new tight constraints on the cosmological abundance of monopoles. In particular, the existence of micro-Gauss magnetic fields in galaxy clusters and radio relics implies that the scales of these structures are below the Debye screening length, thus setting an upper limit on the cosmological density parameter of monopoles, Ω_M ≤ 3 × 10^-4, which precludes them from being the dark matter. Future detection of Gpc-scale coherent magnetic fields could improve this limit by a few orders of magnitude. In addition, we predict the existence of magnetic Langmuir waves and turbulence which may appear on the sky as "zebra patterns" of an alternating magnetic field with k·B ≠ 0. We also show that magnetic monopole Langmuir turbulence excited near the accretion shock of galaxy clusters may be an efficient mechanism for generating the observed intracluster magnetic fields. The authors acknowledge DOE partial support via Grant DE-SC0016368.
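
    A schematic version of the screening argument, written with the usual plasma Debye length for a monopole gas of number density n_M, magnetic charge g and temperature T; the normalization and the identification of the relevant cluster scale are simplified here, and the quoted numerical limit comes from the paper's detailed treatment.

    ```latex
    \lambda_{D} \simeq \left(\frac{k_{B} T}{4\pi n_{M} g^{2}}\right)^{1/2},
    \qquad
    \lambda_{D} \gtrsim \ell_{\rm cluster}
    \;\Rightarrow\;
    n_{M} \lesssim \frac{k_{B} T}{4\pi g^{2} \ell_{\rm cluster}^{2}}
    \;\Rightarrow\;
    \Omega_{M} = \frac{m_{M} n_{M}}{\rho_{\rm crit}} \lesssim 3\times 10^{-4}.
    ```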

  20. Plasma constraints on the cosmological abundance of magnetic monopoles and the origin of cosmic magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medvedev, Mikhail V.; Loeb, Abraham, E-mail: mmedvedev@cfa.harvard.edu, E-mail: aloeb@cfa.harvard.edu

Existing theoretical and observational constraints on the abundance of magnetic monopoles are limited. Here we demonstrate that an ensemble of monopoles forms a plasma whose properties are well determined and whose collective effects place new tight constraints on the cosmological abundance of monopoles. In particular, the existence of micro-Gauss magnetic fields in galaxy clusters and radio relics implies that the scales of these structures are below the Debye screening length, thus setting an upper limit on the cosmological density parameter of monopoles, Ω_M ≲ 3 × 10^-4, which precludes them from being the dark matter. Future detection of Gpc-scale coherent magnetic fields could improve this limit by a few orders of magnitude. In addition, we predict the existence of magnetic Langmuir waves and turbulence which may appear on the sky as "zebra patterns" of an alternating magnetic field with k·B ≠ 0. We also show that magnetic monopole Langmuir turbulence excited near the accretion shock of galaxy clusters may be an efficient mechanism for generating the observed intracluster magnetic fields.

  1. A method for establishing constraints on galactic magnetic field models using ultra high energy cosmic rays and results from the data of the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Sutherland, Michael Stephen

    2010-12-01

The Galactic magnetic field is poorly understood. Essentially the only reliable measurements of its properties are the local orientation and field strength. Its behavior at galactic scales is unknown. Historically, magnetic field measurements have been performed using radio astronomy techniques which are sensitive to certain regions of the Galaxy and rely upon models of the distribution of gas and dust within the disk. However, the deflection of trajectories of ultra-high energy cosmic rays arriving from extragalactic sources depends only on the properties of the magnetic field. In this work, a method is developed for determining acceptable global models of the Galactic magnetic field by backtracking cosmic rays through the field model. This method constrains the parameter space of magnetic field models by comparing a test statistic between backtracked cosmic rays and isotropic expectations for assumed cosmic ray source and composition hypotheses. Constraints on Galactic magnetic field models are established using data from the southern site of the Pierre Auger Observatory under various source distribution and cosmic ray composition hypotheses. Field models possessing structure similar to the stellar spiral arms are found to be inconsistent with hypotheses of an iron cosmic ray composition and sources selected from catalogs tracing the local matter distribution in the universe. These field models are consistent with hypothesis combinations of proton composition and sources tracing the local matter distribution. In particular, strong constraints are found on the parameter space of bisymmetric magnetic field models scanned under hypotheses of proton composition and sources selected from the 2MRS-VS, Swift 39-month, and VCV catalogs. Assuming that the Galactic magnetic field is well-described by a bisymmetric model under these hypotheses, the magnetic field strength near the Sun is less than 3-4 μG and the magnetic pitch angle is less than -8°. These results comprise the first measurements of the Galactic magnetic field using ultra-high energy cosmic rays and supplement existing radio astronomical measurements of the Galactic magnetic field.
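
    The backtracking step amounts to integrating the relativistic Lorentz-force equation for a particle of reversed charge launched from Earth along the arrival direction. The sketch below is a minimal illustration under simplified assumptions (a toy spiral field, a fixed-step integrator, illustrative particle parameters); it is not the analysis code used with the Auger data.

    ```python
    import numpy as np

    KPC = 3.086e19          # m
    C_LIGHT = 2.998e8       # m/s
    E_CHARGE = 1.602e-19    # C

    def toy_spiral_field(pos):
        """Very rough spiral-arm-like field in the Galactic plane (Tesla). Illustrative only."""
        x, y, _ = pos
        r = np.hypot(x, y) + 0.1 * KPC
        b0 = 2e-10                            # roughly 2 microgauss
        pitch = np.radians(-8.0)
        phi = np.arctan2(y, x)
        bdir = np.array([-np.sin(phi + pitch), np.cos(phi + pitch), 0.0])
        return b0 * (8.5 * KPC / r) * bdir

    def backtrack(energy_eV, charge, pos, direction, step=0.01 * KPC, n_steps=2000):
        """Propagate a reversed-charge particle from Earth along the arrival direction."""
        vel = C_LIGHT * direction / np.linalg.norm(direction)
        q = -charge * E_CHARGE                    # reversed charge = backtracking
        p = energy_eV * E_CHARGE / C_LIGHT        # ultra-relativistic momentum (kg m/s)
        for _ in range(n_steps):
            B = toy_spiral_field(pos)
            # relativistic direction change: d(vel)/dt = (q c / p) vel x B
            dvdt = q * np.cross(vel, B) * C_LIGHT / p
            dt = step / C_LIGHT
            vel = vel + dvdt * dt
            vel = C_LIGHT * vel / np.linalg.norm(vel)   # keep |v| = c
            pos = pos + vel * dt
        return pos, vel

    earth = np.array([-8.5 * KPC, 0.0, 0.0])
    arrival_dir = np.array([0.0, 1.0, 0.0])       # direction on the sky the CR arrived from
    exit_pos, exit_vel = backtrack(6e19, charge=1, pos=earth, direction=arrival_dir)
    print("escape direction:", exit_vel / C_LIGHT)
    ```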

  2. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    NASA Astrophysics Data System (ADS)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of such a power law is crucial. It has been suggested to determine it from the field decay due to a single source block; alternatively, it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its non-linearity and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that the regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
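
    In practice the depth weighting enters the regularized objective as a diagonal penalty on the model cells. The 1-D toy below shows where the weighting exponent beta and the regularization parameter lambda appear; the kernel, units and numbers are placeholders, not the authors' inversion.

    ```python
    import numpy as np

    # Toy 1-D inversion illustrating the role of the depth-weighting exponent beta:
    #   min_m ||d - G m||^2 + lam * ||W m||^2 ,   W = diag((z + z0)^(beta/2)),
    # so that the regularization does not push all sources toward the surface.
    # Kernel, units and numbers are illustrative placeholders only.

    n_data, n_cells = 40, 60
    x = np.linspace(-2.0, 2.0, n_data)        # station positions (km)
    z = np.linspace(0.05, 3.0, n_cells)       # cell depths (km)

    # A generic kernel that decays with depth and horizontal offset (not a real
    # gravity/magnetic Green's function).
    G = 1.0 / (x[:, None] ** 2 + z[None, :] ** 2)

    m_true = np.zeros(n_cells)
    m_true[20:25] = 1.0                        # buried "body" near z ~ 1-1.3 km
    rng = np.random.default_rng(0)
    d = G @ m_true + 0.01 * rng.normal(size=n_data)

    def invert(beta, lam, z0=0.05):
        w2 = (z + z0) ** beta                  # squared depth weights
        A = G.T @ G + lam * np.diag(w2)        # normal equations of the damped problem
        return np.linalg.solve(A, G.T @ d)

    for beta in (0.0, 2.0, 3.0):
        m_est = invert(beta, lam=0.1)
        print(f"beta={beta}: recovered peak at z = {z[np.argmax(np.abs(m_est))]:.2f} km")
    ```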

  3. Constraints on geomagnetic secular variation modeling from electromagnetism and fluid dynamics of the Earth's core

    NASA Technical Reports Server (NTRS)

    Benton, E. R.

    1986-01-01

A spherical harmonic representation of the geomagnetic field and its secular variation for epoch 1980, designated GSFC(9/84), is derived and evaluated. At three epochs (1977.5, 1980.0, 1982.5) this model incorporates conservation of magnetic flux through five selected patches of area on the core/mantle boundary bounded by the zero contours of the vertical magnetic field. These fifteen nonlinear constraints are included as data in an iterative least squares parameter estimation procedure that starts with the recently derived unconstrained field model GSFC(12/83). Convergence is approached within three iterations. The constrained model is evaluated by comparing its predictive capability outside the time span of its data, in terms of residuals at magnetic observatories, with that of the unconstrained model.

  4. Constraining screened fifth forces with the electron magnetic moment

    NASA Astrophysics Data System (ADS)

    Brax, Philippe; Davis, Anne-Christine; Elder, Benjamin; Wong, Leong Khim

    2018-04-01

Chameleon and symmetron theories serve as archetypal models for how light scalar fields can couple to matter with gravitational strength or greater, yet evade the stringent constraints from classical tests of gravity on Earth and in the Solar System. They do so by employing screening mechanisms that dynamically alter the scalar's properties based on the local environment. Nevertheless, these do not hide the scalar completely, as screening leads to a distinct phenomenology that can be well constrained by looking for specific signatures. In this work, we investigate how a precision measurement of the electron magnetic moment places meaningful constraints on both chameleons and symmetrons. Two effects are identified: First, virtual chameleons and symmetrons run in loops to generate quantum corrections to the intrinsic value of the magnetic moment—a common process widely considered in the literature for many scenarios beyond the Standard Model. A second effect, however, is unique to scalar fields that exhibit screening. A scalar bubblelike profile forms inside the experimental vacuum chamber and exerts a fifth force on the electron, leading to a systematic shift in the experimental measurement. In quantifying this latter effect, we present a novel approach that combines analytic arguments and a small number of numerical simulations to solve for the bubblelike profile quickly for a large range of model parameters. Taken together, both effects yield interesting constraints in complementary regions of parameter space. While the constraints we obtain for the chameleon are largely uncompetitive with those in the existing literature, this still represents the tightest constraint achievable yet from an experiment not originally designed to search for fifth forces. We break more ground with the symmetron, for which our results exclude a large and previously unexplored region of parameter space. Central to this achievement are the quantum correction terms, which are able to constrain symmetrons with masses in the range μ ∈ [10^-3.88, 10^8] eV, whereas other experiments have hitherto only been sensitive to 1 or 2 orders of magnitude at a time.

  5. Derivation of Hamilton's equations of motion for mechanical systems with constraints on the basis of Pontriagin's maximum principle

    NASA Astrophysics Data System (ADS)

    Kovalev, A. M.

    The problem of the motion of a mechanical system with constraints conforming to Hamilton's principle is stated as an optimum control problem, with equations of motion obtained on the basis of Pontriagin's principle. A Hamiltonian function in Rodrigues-Hamilton parameters for a gyrostat in a potential force field is obtained as an example. Equations describing the motion of a skate on a sloping surface and the motion of a disk on a horizontal plane are examined.

  6. On the effect of the degeneracy among dark energy parameters

    NASA Astrophysics Data System (ADS)

    Gong, Yungui; Gao, Qing

    2014-01-01

The dynamics of scalar fields as dark energy is well approximated by some general relations between the equation of state parameter w and the fractional energy density Ω_φ. Based on the approximation, for slowly rolling scalar fields, we derive analytical expressions for w(a) which reduce to the popular Chevallier-Polarski-Linder parametrization with an explicit degeneracy relation between w_0 and w_a. The models approximate the dynamics of scalar fields well and help eliminate the degeneracies among w_0, w_a, and Ω_φ. With the explicit degeneracy relations, we test their effects on the constraints of the cosmological parameters. We find that: (1) the analytical relations between w and Ω_φ for the two models are consistent with observational data; (2) the degeneracies have little effect on w_0; (3) the error of w_a was reduced by about 30% with the degeneracy relations.
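
    For reference, the Chevallier-Polarski-Linder parametrization mentioned here writes the dark energy equation of state in terms of the scale factor a (equivalently the redshift z) in the standard form:

    ```latex
    w(a) = w_{0} + w_{a}\,(1-a) = w_{0} + w_{a}\,\frac{z}{1+z}.
    ```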

  7. Statistical field theory with constraints: Application to critical Casimir forces in the canonical ensemble.

    PubMed

    Gross, Markus; Gambassi, Andrea; Dietrich, S

    2017-08-01

    The effect of imposing a constraint on a fluctuating scalar order parameter field in a system of finite volume is studied within statistical field theory. The canonical ensemble, corresponding to a fixed total integrated order parameter (e.g., the total number of particles), is obtained as a special case of the theory. A perturbative expansion is developed which allows one to systematically determine the constraint-induced finite-volume corrections to the free energy and to correlation functions. In particular, we focus on the Landau-Ginzburg model in a film geometry (i.e., in a rectangular parallelepiped with a small aspect ratio) with periodic, Dirichlet, or Neumann boundary conditions in the transverse direction and periodic boundary conditions in the remaining, lateral directions. Within the expansion in terms of ε=4-d, where d is the spatial dimension of the bulk, the finite-size contribution to the free energy of the confined system and the associated critical Casimir force are calculated to leading order in ε and are compared to the corresponding expressions for an unconstrained (grand canonical) system. The constraint restricts the fluctuations within the system and it accordingly modifies the residual finite-size free energy. The resulting critical Casimir force is shown to depend on whether it is defined by assuming a fixed transverse area or a fixed total volume. In the former case, the constraint is typically found to significantly enhance the attractive character of the force as compared to the grand canonical case. In contrast to the grand canonical Casimir force, which, for supercritical temperatures, vanishes in the limit of thick films, in the canonical case with fixed transverse area the critical Casimir force attains for thick films a negative value for all boundary conditions studied here. Typically, the dependence of the critical Casimir force both on the temperaturelike and on the fieldlike scaling variables is different in the two ensembles.

  8. Statistical field theory with constraints: Application to critical Casimir forces in the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Gross, Markus; Gambassi, Andrea; Dietrich, S.

    2017-08-01

    The effect of imposing a constraint on a fluctuating scalar order parameter field in a system of finite volume is studied within statistical field theory. The canonical ensemble, corresponding to a fixed total integrated order parameter (e.g., the total number of particles), is obtained as a special case of the theory. A perturbative expansion is developed which allows one to systematically determine the constraint-induced finite-volume corrections to the free energy and to correlation functions. In particular, we focus on the Landau-Ginzburg model in a film geometry (i.e., in a rectangular parallelepiped with a small aspect ratio) with periodic, Dirichlet, or Neumann boundary conditions in the transverse direction and periodic boundary conditions in the remaining, lateral directions. Within the expansion in terms of ɛ =4 -d , where d is the spatial dimension of the bulk, the finite-size contribution to the free energy of the confined system and the associated critical Casimir force are calculated to leading order in ɛ and are compared to the corresponding expressions for an unconstrained (grand canonical) system. The constraint restricts the fluctuations within the system and it accordingly modifies the residual finite-size free energy. The resulting critical Casimir force is shown to depend on whether it is defined by assuming a fixed transverse area or a fixed total volume. In the former case, the constraint is typically found to significantly enhance the attractive character of the force as compared to the grand canonical case. In contrast to the grand canonical Casimir force, which, for supercritical temperatures, vanishes in the limit of thick films, in the canonical case with fixed transverse area the critical Casimir force attains for thick films a negative value for all boundary conditions studied here. Typically, the dependence of the critical Casimir force both on the temperaturelike and on the fieldlike scaling variables is different in the two ensembles.

  9. Cosmology with photometric weak lensing surveys: Constraints with redshift tomography of convergence peaks and moments

    DOE PAGES

    Petri, Andrea; May, Morgan; Haiman, Zoltán

    2016-09-30

Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w. When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ω_m, w, σ_8) for an LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. Here we find that redshift tomography with the power spectrum reduces the area of the 1σ confidence interval in (Ω_m, w) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ω_m, w) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. In conclusion, we find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.

  10. Development of a Fatigue Crack Growth Coupon for Highly Plastic Stress Conditions

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Aggarwal, Pravin K.; Swanson, Gregory R.

    2003-01-01

The analytical approach used to develop a novel fatigue crack growth coupon for highly plastic stress field conditions is presented in this paper. The flight hardware investigated is a large separation bolt that has a deep notch, which produces a large plastic zone at the notch root when highly loaded. Four test specimen configurations are analyzed in an attempt to match the elastic-plastic stress field and crack constraint conditions present in the separation bolt. Elastic-plastic finite element analysis is used to compare the stress fields and critical fracture parameters. Of the four test specimens analyzed, the modified double-edge notch tension - 3 (MDENT-3) most closely approximates the stress field, J values, and crack constraint conditions found in the flight hardware. The MDENT-3 is also the least sensitive to load misalignment and/or load redistribution during crack growth.

  11. The N2HDM under theoretical and experimental scrutiny

    NASA Astrophysics Data System (ADS)

    Mühlleitner, Margarete; Sampaio, Marco O. P.; Santos, Rui; Wittbrodt, Jonas

    2017-03-01

The N2HDM is based on the CP-conserving 2HDM extended by a real scalar singlet field. Its enlarged parameter space and its fewer symmetry conditions as compared to supersymmetric models allow for an interesting phenomenology compatible with current experimental constraints, while adding to the 2HDM sector the possibility of Higgs-to-Higgs decays with three different Higgs bosons. In this paper the N2HDM is subjected to detailed scrutiny. Regarding the theoretical constraints, we implement tests of tree-level perturbativity and vacuum stability. Moreover, we present, for the first time, a thorough analysis of the global minimum of the N2HDM. The model and the theoretical constraints have been implemented in ScannerS, and we provide N2HDECAY, a code based on HDECAY, for the computation of the N2HDM branching ratios and total widths including the state-of-the-art higher order QCD corrections and off-shell decays. We then perform an extensive parameter scan in the N2HDM parameter space, with all theoretical and experimental constraints applied, and analyse its allowed regions. We find that large singlet admixtures are still compatible with the Higgs data and investigate which observables will best restrict the singlet nature in the next runs of the LHC. Similarly to the 2HDM, the N2HDM exhibits a wrong-sign parameter regime, which will be constrained by future Higgs precision measurements.

  12. Microwave background anisotropies in quasiopen inflation

    NASA Astrophysics Data System (ADS)

    García-Bellido, Juan; Garriga, Jaume; Montes, Xavier

    1999-10-01

Quasiopenness seems to be generic to multifield models of single-bubble open inflation. Instead of producing infinite open universes, these models actually produce an ensemble of very large but finite inflating islands. In this paper we study the possible constraints from CMB anisotropies on existing models of open inflation. The effect of supercurvature anisotropies combined with the quasiopenness of the inflating regions makes some models incompatible with observations, and severely reduces the parameter space of others. Supernatural open inflation and the uncoupled two-field model seem to be ruled out due to these constraints for values of Ω_0 ≲ 0.98. Others, such as the open hybrid inflation model with suitable parameters for the slow roll potential, can be made compatible with observations.

  13. Orbital Motions and the Conservation-Law/Preferred-Frame α_3 Parameter

    NASA Astrophysics Data System (ADS)

    Iorio, Lorenzo

    2014-09-01

We analytically calculate some orbital effects induced by the Lorentz-invariance/momentum-conservation parameterized post-Newtonian (PPN) parameter α_3 in a gravitationally bound binary system made of a primary orbited by a test particle. We neither restrict ourselves to any particular orbital configuration nor to specific orientations of the primary's spin axis ψ. We use our results to put preliminary upper bounds on α_3 in the weak-field regime by using the latest data from Solar System planetary dynamics. By linearly combining the supplementary perihelion precessions Δϖ of the Earth, Mars and Saturn, determined by astronomers with the Ephemerides of Planets and the Moon (EPM) 2011 ephemerides for the general relativistic values of the PPN parameters β = γ = 1, we infer |α_3| ≲ 6 × 10^-10. Our result is about three orders of magnitude better than the previous weak-field constraints existing in the literature and of the same order of magnitude as the constraint expected from the future BepiColombo mission to Mercury. It is, by construction, independent of the other preferred-frame PPN parameters α_1, α_2, both preliminarily constrained down to a ≈ 10^-6 level. Future analyses should be performed by explicitly including α_3 and a selection of other PPN parameters in the models fitted by the astronomers to the observations and estimating them in dedicated covariance analyses.
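
    Schematically, the linear-combination step works as follows: each supplementary precession is linear in α_3 and in the other parameters p_j to be decoupled, and the weights λ_i are chosen to cancel the latter. The actual sensitivity coefficients c_i and d_ij come from the ephemerides; this only shows the structure of the argument.

    ```latex
    \Delta\dot\varpi_{i} = c_{i}\,\alpha_{3} + \sum_{j} d_{ij}\,p_{j},
    \qquad
    \sum_{i} \lambda_{i}\, d_{ij} = 0 \ \ \forall j
    \;\Rightarrow\;
    \sum_{i} \lambda_{i}\,\Delta\dot\varpi_{i} = \Big(\sum_{i}\lambda_{i} c_{i}\Big)\,\alpha_{3}.
    ```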

  14. Effects of long-term fluid injection on induced seismicity parameters and maximum magnitude in northwestern part of The Geysers geothermal field

    NASA Astrophysics Data System (ADS)

    Kwiatek, Grzegorz; Martínez-Garzón, Patricia; Dresen, Georg; Bohnhoff, Marco; Sone, Hiroki; Hartline, Craig

    2015-10-01

    The long-term temporal and spatial changes in statistical, source, and stress characteristics of one cluster of induced seismicity recorded at The Geysers geothermal field (U.S.) are analyzed in relation to the field operations, fluid migration, and constraints on the maximum likely magnitude. Two injection wells, Prati-9 and Prati-29, located in the northwestern part of the field and their associated seismicity composed of 1776 events recorded throughout a 7 year period were analyzed. The seismicity catalog was relocated, and the source characteristics including focal mechanisms and static source parameters were refined using first-motion polarity, spectral fitting, and mesh spectral ratio analysis techniques. The source characteristics together with statistical parameters (b value) and cluster dynamics were used to investigate and understand the details of fluid migration scheme in the vicinity of injection wells. The observed temporal, spatial, and source characteristics were clearly attributed to fluid injection and fluid migration toward greater depths, involving increasing pore pressure in the reservoir. The seasonal changes of injection rates were found to directly impact the shape and spatial extent of the seismic cloud. A tendency of larger seismic events to occur closer to injection wells and a correlation between the spatial extent of the seismic cloud and source sizes of the largest events was observed suggesting geometrical constraints on the maximum likely magnitude and its correlation to the average injection rate and volume of fluids present in the reservoir.

  15. Constraining axion-like-particles with hard X-ray emission from magnetars

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-François; Sinha, Kuver

    2018-06-01

    Axion-like particles (ALPs) produced in the core of a magnetar will convert to photons in the magnetosphere, leading to possible signatures in the hard X-ray band. We perform a detailed calculation of the ALP-to-photon conversion probability in the magnetosphere, recasting the coupled differential equations that describe ALP-photon propagation into a form that is efficient for large scale numerical scans. We show the dependence of the conversion probability on the ALP energy, mass, ALP-photon coupling, magnetar radius, surface magnetic field, and the angle between the magnetic field and direction of propagation. Along the way, we develop an analytic formalism to perform similar calculations in more general n-state oscillation systems. Assuming ALP emission rates from the core that are just subdominant to neutrino emission, we calculate the resulting constraints on the ALP mass versus ALP-photon coupling space, taking SGR 1806-20 as an example. In particular, we take benchmark values for the magnetar radius and core temperature, and constrain the ALP parameter space by the requirement that the luminosity from ALP-to-photon conversion should not exceed the total observed luminosity from the magnetar. The resulting constraints are competitive with constraints from helioscope experiments in the relevant part of ALP parameter space.
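
    For orientation, in the simplest limit of a homogeneous transverse field B_⊥ acting over a length L, the two-state ALP-to-photon conversion probability reduces to the textbook expression below; the magnetar calculation in the paper solves the full coupled equations in a strongly varying field, so this is only the limiting case.

    ```latex
    P_{a\to\gamma} = \left(\frac{g_{a\gamma} B_{\perp} L}{2}\right)^{2}
    \frac{\sin^{2}(qL/2)}{(qL/2)^{2}},
    \qquad
    q \simeq \frac{m_{a}^{2}-\omega_{\rm pl}^{2}}{2\omega}.
    ```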

  16. The reconstruction of tachyon inflationary potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fei, Qin; Gong, Yungui; Lin, Jiong

We derive a lower bound on the field excursion for the tachyon inflation, which is determined by the amplitude of the scalar perturbation and the number of e-folds before the end of inflation. Using the relation between the observables like n_s and r with the slow-roll parameters, we reconstruct three classes of tachyon potentials. The model parameters are determined from the observations before the potentials are reconstructed, and the observations prefer the concave potential. We also discuss the constraints from the reheating phase preceding the radiation domination for the three classes of models by assuming the equation of state parameter w_re during reheating is a constant. Depending on the model parameters and the value of w_re, the constraints on N_re and T_re are different. As n_s increases, the allowed reheating epoch becomes longer for w_re = -1/3, 0 and 1/6 while the allowed reheating epoch becomes shorter for w_re = 2/3.

  17. Program manual for ASTOP, an Arbitrary space trajectory optimization program

    NASA Technical Reports Server (NTRS)

    Horsewood, J. L.

    1974-01-01

The ASTOP program (an Arbitrary Space Trajectory Optimization Program), designed to generate optimum low-thrust trajectories in an N-body field while satisfying selected hardware and operational constraints, is presented. The trajectory is divided into a number of segments or arcs over which the control is held constant. This constant control over each arc is optimized using a parameter optimization scheme based on gradient techniques. A modified Encke formulation of the equations of motion is employed. The program provides a wide range of constraint, end-condition, and performance-index options. The basic approach is conducive to future expansion of features, such as the incorporation of new constraints and the addition of new end conditions.
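
    The parameterization described here (piecewise-constant control per arc, optimized as ordinary parameters subject to end conditions) can be illustrated with a toy problem. The dynamics, arc count and target below are placeholders, and the optimizer is a generic SLSQP call rather than ASTOP's own gradient scheme.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Sketch of the "piecewise-constant control as parameters" idea: a 2-D double
    # integrator must reach a target state after N arcs, each with a constant
    # acceleration vector, while minimizing total control effort. A toy stand-in
    # for the low-thrust N-body problem, not ASTOP itself.

    N_ARCS, DT = 8, 1.0
    x0 = np.zeros(4)                         # state [x, y, vx, vy]
    x_target = np.array([10.0, 5.0, 0.0, 0.0])

    def propagate(controls):
        """Integrate the double integrator through N constant-control arcs."""
        x = x0.copy()
        for a in controls.reshape(N_ARCS, 2):
            x[:2] += x[2:] * DT + 0.5 * a * DT ** 2   # exact for constant acceleration
            x[2:] += a * DT
        return x

    def effort(controls):                    # performance index: control "fuel"
        return np.sum(controls ** 2)

    def terminal_error(controls):            # equality end conditions: hit the target
        return propagate(controls) - x_target

    res = minimize(effort, np.zeros(2 * N_ARCS), method="SLSQP",
                   constraints=[{"type": "eq", "fun": terminal_error}])
    print("converged:", res.success, " effort:", round(res.fun, 3))
    ```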

  18. Two-spoke placement optimization under explicit specific absorption rate and power constraints in parallel transmission at ultra-high field.

    PubMed

    Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas

    2015-06-01

The spokes method combined with parallel transmission is a promising technique to mitigate the B1(+) inhomogeneity at ultra-high field in 2D imaging. To date, however, the spokes placement optimization combined with the magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these constraints explicitly and in the small tip angle regime. The problem is first considerably simplified by making the observation that only the vector between the 2 spokes is relevant in the magnitude least squares cost function, thereby reducing the size of the parameter space and allowing a more exhaustive search. The algorithm starts from a set of initial k-space candidates and performs in parallel, for all of them, optimizations of the RF subpulse weights and the k-space locations simultaneously, under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability of converging towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2(∗)-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s.
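
    A stripped-down version of the constrained magnitude-least-squares step (a single spoke, random stand-ins for the B1+ maps, and a single total-power constraint in place of the full SAR model) might look like the sketch below; it shows the problem structure only, not the authors' active-set implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Minimal magnitude-least-squares sketch: find complex RF weights x for n_ch
    # transmit channels such that |A x| is as flat as possible over the voxels,
    # subject to a total-power constraint. A is a random stand-in for measured
    # B1+ sensitivities, not real scanner data.

    rng = np.random.default_rng(1)
    n_vox, n_ch = 200, 8
    A = rng.normal(size=(n_vox, n_ch)) + 1j * rng.normal(size=(n_vox, n_ch))
    target = np.ones(n_vox)          # desired |B1| pattern (arbitrary units)
    p_max = 2.0                      # total RF power budget (arbitrary units)

    def split(xr):                   # real parameter vector -> complex weights
        return xr[:n_ch] + 1j * xr[n_ch:]

    def cost(xr):
        x = split(xr)
        return np.sum((np.abs(A @ x) - target) ** 2)

    def power_margin(xr):            # >= 0 when the power constraint is satisfied
        x = split(xr)
        return p_max - np.real(np.vdot(x, x))

    x0 = np.concatenate([np.ones(n_ch) * 0.1, np.zeros(n_ch)])
    res = minimize(cost, x0, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": power_margin}])
    x_opt = split(res.x)
    nrmse = np.sqrt(cost(res.x) / n_vox) / np.mean(target)
    print(f"NRMSE = {nrmse:.3f}, power = {np.real(np.vdot(x_opt, x_opt)):.3f}")
    ```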

  19. Constraints on parity violating conformal field theories in d = 3

    NASA Astrophysics Data System (ADS)

    Chowdhury, Subham Dutta; David, Justin R.; Prakash, Shiroman

    2017-11-01

We derive constraints on three-point functions involving the stress tensor, T, and a conserved U(1) current, j, in 2+1 dimensional conformal field theories that violate parity, using conformal collider bounds introduced by Hofman and Maldacena. Conformal invariance allows parity-odd tensor structures for the ⟨TTT⟩ and ⟨jjT⟩ correlation functions which are unique to three space-time dimensions. Let the parameters which determine the ⟨TTT⟩ correlation function be t_4 and α_T, where α_T is the parity-violating contribution. Similarly let the parameters which determine the ⟨jjT⟩ correlation function be a_2 and α_J, where α_J is the parity-violating contribution. We show that the parameters (t_4, α_T) and (a_2, α_J) are bounded to lie inside a disc at the origin of the t_4-α_T plane and the a_2-α_J plane respectively. We then show that large N Chern-Simons theories coupled to a fundamental fermion/boson lie on the circle which bounds these discs. The 't Hooft coupling determines the location of these theories on the boundary circles.

  20. Cosmological evolution and Solar System consistency of massive scalar-tensor gravity

    NASA Astrophysics Data System (ADS)

    de Pirey Saint Alby, Thibaut Arnoulx; Yunes, Nicolás

    2017-09-01

    The scalar-tensor theory of Damour and Esposito-Farèse recently gained some renewed interest because of its ability to suppress modifications to general relativity in the weak field, while introducing large corrections in the strong field of compact objects through a process called scalarization. A large sector of this theory that allows for scalarization, however, has been shown to be in conflict with Solar System observations when accounting for the cosmological evolution of the scalar field. We here study an extension of this theory by endowing the scalar field with a mass to determine whether this allows the theory to pass Solar System constraints upon cosmological evolution for a larger sector of coupling parameter space. We show that the cosmological scalar field goes first through a quiescent phase, similar to the behavior of a massless field, but then it enters an oscillatory phase, with an amplitude (and frequency) that decays (and grows) exponentially. We further show that after the field enters the oscillatory phase, its effective energy density and pressure are approximately those of dust, as expected from previous cosmological studies. Due to these oscillations, we show that the scalar field cannot be treated as static today on astrophysical scales, and so we use time-dependent perturbation theory to compute the scalar-field-induced modifications to Solar System observables. We find that these modifications are suppressed when the mass of the scalar field and the coupling parameter of the theory are in a wide range, allowing the theory to pass Solar System constraints, while in principle possibly still allowing for scalarization.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mori, Taro; Kohri, Kazunori; White, Jonathan, E-mail: moritaro@post.kek.jp, E-mail: kohri@post.kek.jp, E-mail: jwhite@post.kek.jp

We consider inflation in a system containing a Ricci scalar squared term and a canonical scalar field with a quadratic mass term. In the Einstein frame this model takes the form of a two-field inflation model with a curved field space, and under the slow-roll approximation contains four free parameters corresponding to the masses of the two fields and their initial positions. We investigate how the inflationary dynamics and predictions for the primordial curvature perturbation depend on these four parameters. Our analysis is based on the δN formalism, which allows us to determine predictions for the non-Gaussianity of the curvature perturbation as well as for quantities relating to its power spectrum. Depending on the choice of parameters, we find predictions that range from those of R^2 inflation to those of quadratic chaotic inflation, with the non-Gaussianity of the curvature perturbation always remaining small. Using our results we are able to put constraints on the masses of the two fields.
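
    The δN formalism referred to here expands the curvature perturbation in the e-fold number N as a function of the initial field values; to the order needed for the non-Gaussianity estimate it reads, in standard notation (with field-space indices raised by the field-space metric in the curved case):

    ```latex
    \zeta = \delta N = N_{,a}\,\delta\phi^{a} + \tfrac{1}{2}\, N_{,ab}\,\delta\phi^{a}\delta\phi^{b} + \cdots,
    \qquad
    \frac{6}{5}\, f_{\rm NL} = \frac{N_{,a} N_{,b} N^{,ab}}{\left(N_{,c} N^{,c}\right)^{2}} .
    ```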

  2. A derivation of the Cramer-Rao lower bound of euclidean parameters under equality constraints via score function

    NASA Astrophysics Data System (ADS)

    Susyanto, Nanang

    2017-12-01

    We propose a simple derivation of the Cramer-Rao Lower Bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint of the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter on the appropriate inner product spaces.
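
    The bound obtained this way can be stated compactly. Writing I(θ) for the Fisher information, g(θ) = 0 for the equality constraint with full-row-rank Jacobian G(θ), and U for a matrix whose columns span the null space of G, the constrained CRLB takes the standard projected form below; the paper's contribution is the derivation route via the score function of the restricted parameter, not the formula itself.

    ```latex
    \operatorname{Cov}\!\left(\hat\theta\right) \;\succeq\; U \left( U^{\mathsf T} I(\theta)\, U \right)^{-1} U^{\mathsf T},
    \qquad G(\theta)\, U = 0 .
    ```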

  3. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, performing the Fisher matrix calculation on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
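
    A minimal illustration of the Gaussianization step, assuming posterior samples are available from an initial likelihood evaluation: each (positive) parameter is Box-Cox transformed with scipy.stats.boxcox, and the Gaussian description is then formed in the transformed space. The toy samples below stand in for, e.g., matter density and fluctuation amplitude draws; this is not the authors' forecast code.

    ```python
    import numpy as np
    from scipy import stats

    # Toy 2-parameter "posterior" with a curved degeneracy, standing in for samples
    # from an initial likelihood evaluation.
    rng = np.random.default_rng(2)
    u = rng.normal(0.0, 1.0, size=20000)
    omega = 0.3 + 0.02 * u + 0.01 * rng.normal(size=u.size)
    sigma8 = 0.8 * (0.3 / omega) ** 0.5 + 0.01 * rng.normal(size=u.size)
    samples = np.column_stack([omega, sigma8])

    # Box-Cox transform each parameter (requires positive values; shift if needed),
    # storing the fitted lambda so the transform can be inverted later.
    transformed = np.empty_like(samples)
    lambdas = []
    for i in range(samples.shape[1]):
        transformed[:, i], lam = stats.boxcox(samples[:, i])
        lambdas.append(lam)

    # Gaussian (Fisher-like) description in the transformed, approximately
    # multivariate-normal space: mean vector and covariance matrix.
    mean_t = transformed.mean(axis=0)
    cov_t = np.cov(transformed, rowvar=False)
    print("Box-Cox lambdas:", np.round(lambdas, 3))
    print("correlation in transformed space:",
          np.round(cov_t[0, 1] / np.sqrt(cov_t[0, 0] * cov_t[1, 1]), 3))
    ```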

  4. Isocurvature constraints and anharmonic effects on QCD axion dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Takeshi; Kurematsu, Ryosuke; Takahashi, Fuminobu, E-mail: takeshi@cita.utoronto.ca, E-mail: rkurematsu@tuhep.phys.tohoku.ac.jp, E-mail: fumi@tuhep.phys.tohoku.ac.jp

    2013-09-01

We revisit the isocurvature density perturbations induced by quantum fluctuations of the axion field by extending a recently developed analytic method and approximations to a time-dependent scalar potential, which enables us to follow the evolution of the axion until it starts to oscillate. We find that, as the initial misalignment angle approaches the hilltop of the potential, the isocurvature perturbations become significantly enhanced, while the non-Gaussianity parameter increases slowly but surely. As a result, the isocurvature constraint on the inflation scale is tightened as H_inf ∼

  5. Electromagnetic fields of slowly rotating magnetized compact stars in conformal gravity

    NASA Astrophysics Data System (ADS)

    Turimov, Bobur; Ahmedov, Bobomurat; Abdujabbarov, Ahmadjon; Bambi, Cosimo

    2018-06-01

In this paper we investigate the exterior vacuum electromagnetic fields of slowly rotating magnetized compact stars in conformal gravity. Assuming a dipolar magnetic field configuration, we obtain an analytical solution of the Maxwell equations for the magnetic and electric fields outside a slowly rotating magnetized star in conformal gravity. Furthermore, we study the dipolar electromagnetic radiation and energy losses from a rotating magnetized star in conformal gravity. In order to constrain the L parameter of conformal gravity, the theoretical results for the magnetic field of a magnetized star in conformal gravity are combined with precise observational data on radio pulsar period slowdown, and it is found that the parameter of conformal gravity is bounded by L ≲ 9.5 × 10^5 cm (L/M ≲ 5).
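
    The period-slowdown argument rests on the familiar magneto-dipole braking estimate; in flat spacetime and for canonical neutron-star parameters (R = 10 km, I = 10^45 g cm^2, orthogonal rotator) it reads, with μ the magnetic dipole moment, Ω = 2π/P and χ the magnetic inclination. The conformal-gravity solution modifies this relation, which is what allows the bound on L.

    ```latex
    \dot E_{\rm rot} = -\frac{2\,\mu^{2}\Omega^{4}\sin^{2}\chi}{3c^{3}},
    \qquad
    B_{s} \simeq 3.2\times10^{19}\,\sqrt{P\dot P}\ \ {\rm G}.
    ```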

  6. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    NASA Astrophysics Data System (ADS)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (commonly referred to as the "kappa" value). This problem is especially significant for small earthquakes, whose corner frequencies lie within ranges similar to typical kappa values, so direct spectrum fitting often leads to systematic biases that depend on corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following the major Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km surrounding the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum; they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimations.
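
    The source/site trade-off described here is visible directly in the standard far-field spectral model, where the Brune corner frequency f_c and the site attenuation parameter kappa both control the high-frequency fall-off. The sketch below fits that model to a synthetic spectrum with and without a source constraint; all values are illustrative, and the fixed corner frequency stands in for whatever a stacking or spectral-ratio analysis would supply.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Brune omega-square source spectrum attenuated by a site kappa term:
    #   A(f) = Omega0 / (1 + (f/fc)^2) * exp(-pi * kappa * f)
    # The trade-off between fc and kappa is what stacking/EGF methods try to break.

    def spectrum(f, omega0, fc, kappa):
        return omega0 / (1.0 + (f / fc) ** 2) * np.exp(-np.pi * kappa * f)

    f = np.logspace(-0.5, 1.7, 120)                 # roughly 0.3 - 50 Hz
    true = dict(omega0=1.0, fc=8.0, kappa=0.04)     # illustrative small event + site
    rng = np.random.default_rng(3)
    obs = spectrum(f, **true) * np.exp(0.1 * rng.normal(size=f.size))  # lognormal noise

    # Unconstrained fit: fc and kappa trade off against each other.
    popt, pcov = curve_fit(spectrum, f, obs, p0=[1.0, 5.0, 0.02])
    print("free fit:  fc = %.2f Hz, kappa = %.3f s" % (popt[1], popt[2]))

    # "Source-constrained" fit: fix fc and solve only for omega0 and kappa.
    fc_fixed = 8.0
    fit2 = lambda f, omega0, kappa: spectrum(f, omega0, fc_fixed, kappa)
    popt2, _ = curve_fit(fit2, f, obs, p0=[1.0, 0.02])
    print("fc fixed:  kappa = %.3f s" % popt2[1])
    ```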

  7. Constraining the location of gamma-ray flares in luminous blazars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nalewajko, Krzysztof; Begelman, Mitchell C.; Sikora, Marek, E-mail: knalew@jila.colorado.edu

    2014-07-10

    Locating the gamma-ray emission sites in blazar jets is a long-standing and highly controversial issue. We jointly investigate several constraints on the distance scale r and Lorentz factor Γ of the gamma-ray emitting regions in luminous blazars (primarily flat spectrum radio quasars). Working in the framework of one-zone external radiation Comptonization models, we perform a parameter space study for several representative cases of actual gamma-ray flares in their multiwavelength context. We find a particularly useful combination of three constraints: from an upper limit on the collimation parameter Γθ ≲ 1, from an upper limit on the synchrotron self-Compton (SSC) luminosity L_SSC ≲ L_X, and from an upper limit on the efficient cooling photon energy E_cool,obs ≲ 100 MeV. These three constraints are particularly strong for sources with low accretion disk luminosity L_d. The commonly used intrinsic pair-production opacity constraint on Γ is usually much weaker than the SSC constraint. The SSC and cooling constraints provide a robust lower limit on the collimation parameter Γθ ≳ 0.1-0.7. Typical values of r corresponding to moderate values of Γ ∼ 20 are in the range 0.1-1 pc, and are determined primarily by the observed variability timescale t_var,obs. Alternative scenarios motivated by the observed gamma-ray/millimeter connection, in which gamma-ray flares of t_var,obs ∼ a few days are located at r ∼ 10 pc, are in conflict with both the SSC and cooling constraints. Moreover, we use a simple light travel time argument to point out that the gamma-ray/millimeter connection does not provide a significant constraint on the location of gamma-ray flares. We argue that spine-sheath models of the jet structure do not offer a plausible alternative to external radiation fields at large distances; however, an extended broad-line region is an idea worth exploring. We propose that the most definite additional constraint could be provided by determination of the synchrotron self-absorption frequency for correlated synchrotron and gamma-ray flares.

  8. BRST symmetry for a torus knot

    NASA Astrophysics Data System (ADS)

    Pandey, Vipul Kumar; Prasad Mandal, Bhabani

    2017-08-01

    We develop BRST symmetry for the first time for a particle on the surface of a torus knot by analyzing the constraints of the system. The theory contains second-class constraints and has been extended by introducing the Wess-Zumino term to convert it into a theory with first-class constraints. BFV analysis of the extended theory is performed to construct BRST/anti-BRST symmetries for the particle on a torus knot. The nilpotent BRST/anti-BRST charges which generate such symmetries are constructed explicitly. The states annihilated by these nilpotent charges constitute the physical Hilbert space. We indicate how various effective theories on the surface of the torus knot are related through the generalized version of the BRST transformation with finite-field-dependent parameters.

  9. Hard X-Ray Constraints on Small-Scale Coronal Heating Events

    NASA Astrophysics Data System (ADS)

    Marsh, Andrew; Smith, David M.; Glesener, Lindsay; Klimchuk, James A.; Bradshaw, Stephen; Hannah, Iain; Vievering, Juliana; Ishikawa, Shin-Nosuke; Krucker, Sam; Christe, Steven

    2017-08-01

    A large body of evidence suggests that the solar corona is heated impulsively. Small-scale heating events known as nanoflares may be ubiquitous in quiet and active regions of the Sun. Hard X-ray (HXR) observations at energies >3 keV with unprecedented sensitivity have recently been enabled through the use of focusing optics. We analyze active region spectra from the FOXSI-2 sounding rocket and the NuSTAR satellite to constrain the physical properties of nanoflares simulated with the EBTEL field-line-averaged hydrodynamics code. We model a wide range of X-ray spectra by varying the nanoflare heating amplitude, duration, delay time, and filling factor. Additional limits on the nanoflare parameter space are determined from energy arguments and EUV/SXR data.

  10. Statistical approach to Higgs boson couplings in the standard model effective field theory

    NASA Astrophysics Data System (ADS)

    Murphy, Christopher W.

    2018-01-01

    We perform a parameter fit in the standard model effective field theory (SMEFT) with an emphasis on using regularized linear regression to tackle the issue of the large number of parameters in the SMEFT. In regularized linear regression, a positive definite function of the parameters of interest is added to the usual cost function. A cross-validation is performed to determine the optimal value of the regularization parameter, but it selects the standard model (SM) as the best model to explain the measurements. Nevertheless, as a proof of principle of this technique we apply it to fitting Higgs boson signal strengths in the SMEFT, including the latest Run-2 results. Results are presented in terms of the eigensystem of the covariance matrix of the least squares estimators, as it has a degree of model independence to it. We find several results in this initial work: the SMEFT predicts the total width of the Higgs boson to be consistent with the SM prediction; the ATLAS and CMS experiments at the LHC are currently sensitive to non-resonant double Higgs boson production. Constraints are derived on the viable parameter space for electroweak baryogenesis in the SMEFT, reinforcing the notion that a first-order phase transition requires fairly low-scale beyond-the-SM physics. Finally, we study which future experimental measurements would give the most improvement on the global constraints on the Higgs sector of the SMEFT.
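
    A minimal sketch of the regularized-regression idea described above, using a ridge (L2) penalty and k-fold cross-validation to pick the regularization strength; the design matrix, "signal strength" data, and parameter count are toy placeholders, not the SMEFT fit of the paper.

    ```python
    import numpy as np

    def ridge_fit(X, y, lam):
        """Closed-form ridge estimate: minimizes ||y - X c||^2 + lam * ||c||^2."""
        p = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

    def cv_score(X, y, lam, k=5, seed=0):
        """Mean squared prediction error from k-fold cross-validation."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        folds = np.array_split(idx, k)
        errs = []
        for f in folds:
            train = np.setdiff1d(idx, f)
            c = ridge_fit(X[train], y[train], lam)
            errs.append(np.mean((y[f] - X[f] @ c) ** 2))
        return np.mean(errs)

    # Toy linear model: measurements depending linearly on a few Wilson-like coefficients.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 6))          # 40 measurements, 6 parameters (placeholders)
    c_true = np.array([0.0, 0.3, 0.0, 0.0, -0.2, 0.0])
    y = X @ c_true + 0.1 * rng.normal(size=40)

    lambdas = np.logspace(-3, 3, 13)
    scores = [cv_score(X, y, lam) for lam in lambdas]
    best = lambdas[int(np.argmin(scores))]
    print("selected lambda:", best)
    print("ridge coefficients:", np.round(ridge_fit(X, y, best), 3))
    ```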

  11. A power-law coupled three-form dark energy model

    NASA Astrophysics Data System (ADS)

    Yao, Yan-Hong; Yan, Yang-Jie; Meng, Xin-He

    2018-02-01

    We consider a field theory model of coupled dark energy which treats dark energy as a three-form field and dark matter as a spinor field. By assuming the effective mass of dark matter to be a power-law function of the three-form field and neglecting the potential term of dark energy, we obtain three solutions of the autonomous system of evolution equations, including a de Sitter attractor, a tracking solution and an approximate solution. To gauge the strength of the coupling, we confront the model with the latest Type Ia Supernova, Baryon Acoustic Oscillation and Cosmic Microwave Background radiation observations, with the conclusion that the combination of these three data sets, marginalized over the present dark matter density parameter Ω_{m0} and the present three-form field κX_0, gives stringent constraints on the coupling constant, -0.017 < λ < 0.047 (2σ confidence level), by which we delimit the model's applicable parameter range.

  12. Moyal deformations of Clifford gauge theories of gravity

    NASA Astrophysics Data System (ADS)

    Castro, Carlos

    2016-12-01

    A Moyal deformation of a Clifford Cl(3, 1) Gauge Theory of (Conformal) Gravity is performed for canonical noncommutativity (constant Θμν parameters). In the very special case when one imposes certain constraints on the fields, there are no first-order contributions in the Θμν parameters to the Moyal deformations of Clifford gauge theories of gravity. However, when one does not impose constraints on the fields, there are first-order contributions in Θμν to the Moyal deformations, at variance with previous results obtained by other authors based on different gauge groups. Although the generators of U(2, 2), SO(4, 2) and SO(2, 3) can be expressed in terms of the Clifford algebra generators, this does not imply that these algebras are isomorphic to the Clifford algebra, so one should not expect identical results to those obtained by other authors. In particular, there are Moyal deformations of the Einstein-Hilbert gravitational action with a cosmological constant to first order in Θμν. Finally, we provide a mechanism which furnishes a plausible cancellation of the huge vacuum energy density.

  13. Effective field theory of cosmic acceleration: Constraining dark energy with CMB data

    NASA Astrophysics Data System (ADS)

    Raveri, Marco; Hu, Bin; Frusciante, Noemi; Silvestri, Alessandra

    2014-08-01

    We introduce EFTCAMB/EFTCosmoMC as publicly available patches to the commonly used CAMB/CosmoMC codes. We briefly describe the structure of the codes, their applicability and main features. To illustrate the use of these patches, we obtain constraints on parametrized pure effective field theory and designer f(R) models, on both ΛCDM and wCDM background expansion histories, using data from Planck temperature and lensing potential spectra, WMAP low-ℓ polarization spectra (WP), and baryon acoustic oscillations (BAO). Upon inspecting the theoretical stability of the models on the given background, we find nontrivial parameter spaces that we translate into viability priors. We use different combinations of data sets to show their individual effects on cosmological and model parameters. Our data analysis results show that, depending on the adopted data sets, in the wCDM background case these viability priors could dominate the marginalized posterior distributions. Interestingly, with Planck+WP+BAO+lensing data, in f(R) gravity models we get very strong constraints on the constant dark energy equation of state, w_0 ∈ (-1, -0.9997) (95% C.L.).

  14. The variation of the fine-structure constant from disformal couplings

    NASA Astrophysics Data System (ADS)

    van de Bruck, Carsten; Mifsud, Jurgen; Nunes, Nelson J.

    2015-12-01

    We study a theory in which the electromagnetic field is disformally coupled to a scalar field, in addition to a usual non-minimal electromagnetic coupling. We show that disformal couplings modify the expression for the fine-structure constant, α. As a result, the theory we consider can explain the reported non-zero variation in the evolution of α by considering disformal couplings alone. We also find that if matter and photons are coupled in the same way to the scalar field, disformal couplings by themselves do not lead to a variation of the fine-structure constant. A number of scenarios are discussed that are consistent with the current astrophysical, geochemical, laboratory and cosmic microwave background radiation constraints on the cosmological evolution of α. The models presented are also consistent with the current type Ia supernovae constraints on the effective dark energy equation of state. We find that the Oklo bound in particular puts strong constraints on the model parameters. From our numerical results, we find that the introduction of a non-minimal electromagnetic coupling enhances the cosmological variation in α. Better-constrained data are expected to be reported by ALMA and by the forthcoming generation of high-resolution ultra-stable spectrographs such as PEPSI, ESPRESSO, and ELT-HIRES. Furthermore, an expected increase in the sensitivity of molecular and nuclear clocks will put a more stringent constraint on the theory.

  15. The variation of the fine-structure constant from disformal couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Bruck, Carsten van; Mifsud, Jurgen; Nunes, Nelson J., E-mail: c.vandebruck@sheffield.ac.uk, E-mail: jmifsud1@sheffield.ac.uk, E-mail: njnunes@fc.ul.pt

    2015-12-01

    We study a theory in which the electromagnetic field is disformally coupled to a scalar field, in addition to a usual non-minimal electromagnetic coupling. We show that disformal couplings modify the expression for the fine-structure constant, α. As a result, the theory we consider can explain the reported non-zero variation in the evolution of α by considering disformal couplings alone. We also find that if matter and photons are coupled in the same way to the scalar field, disformal couplings by themselves do not lead to a variation of the fine-structure constant. A number of scenarios are discussed that are consistent with the current astrophysical, geochemical, laboratory and cosmic microwave background radiation constraints on the cosmological evolution of α. The models presented are also consistent with the current type Ia supernovae constraints on the effective dark energy equation of state. We find that the Oklo bound in particular puts strong constraints on the model parameters. From our numerical results, we find that the introduction of a non-minimal electromagnetic coupling enhances the cosmological variation in α. Better-constrained data are expected to be reported by ALMA and by the forthcoming generation of high-resolution ultra-stable spectrographs such as PEPSI, ESPRESSO, and ELT-HIRES. Furthermore, an expected increase in the sensitivity of molecular and nuclear clocks will put a more stringent constraint on the theory.

  16. Constraining alternative theories of gravity using GW150914 and GW151226

    NASA Astrophysics Data System (ADS)

    De Laurentis, Mariafelicia; Porth, Oliver; Bovard, Luke; Ahmedov, Bobomurat; Abdujabbarov, Ahmadjon

    2016-12-01

    The recently reported gravitational wave events GW150914 and GW151226, caused by the mergers of binary black holes [Abbott et al., Phys. Rev. Lett. 116, 221101 (2016); Phys. Rev. Lett. 116, 241103 (2016); Phys. Rev. X 6, 041015], provide a formidable way to set constraints on alternative metric theories of gravity in the strong field regime. In this paper, we develop an approach where an arbitrary theory of gravity can be parametrized by an effective coupling G_eff and an effective gravitational potential Φ(r). The standard Newtonian limit of general relativity is recovered as soon as G_eff → G_N and Φ(r) → Φ_N. The upper bound on the graviton mass and the gravitational interaction length, reported by the LIGO-VIRGO Collaboration, can be directly recast in terms of the parameters of the theory, which allows an analysis where the gravitational wave frequency modulation sets constraints on the range of possible alternative models of gravity. Numerical results based on published parameters for the binary black hole mergers are also reported. The comparison of the observed phases of GW150914 and GW151226 with the modulated phase in alternative theories of gravity does not give reasonable constraints, due to the large uncertainties in the estimated parameters for the coalescing black holes. In addition to these general considerations, we obtain limits on the frequency dependence of the α parameter in scalar-tensor theories of gravity.

  17. Constraints on Einstein-aether theory after GW170817

    NASA Astrophysics Data System (ADS)

    Oost, Jacob; Mukohyama, Shinji; Wang, Anzhong

    2018-06-01

    In this paper, we carry out a systematic analysis of the theoretical and observational constraints on the dimensionless coupling constants c_i (i = 1, 2, 3, 4) of the Einstein-aether theory, taking into account the events GW170817 and GRB 170817A. The combination of these events restricts the deviation of the speed c_T of the spin-2 graviton to the range -3 × 10^{-15}

  18. Observational constraints on tachyonic chameleon dark energy model

    NASA Astrophysics Data System (ADS)

    Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.

    2018-03-01

    It has recently been shown that the tachyonic chameleon model of dark energy, in which a tachyon scalar field is non-minimally coupled to matter, admits a stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SN Ia) and Baryon Acoustic Oscillations to place constraints on the model parameters. In our analysis we consider both exponential and non-exponential forms for the non-minimal coupling function and the tachyonic potential, and show that the scenario is compatible with observations.

  19. Exact solutions and phenomenological constraints from massive scalars in a gravity's rainbow spacetime

    NASA Astrophysics Data System (ADS)

    Bezerra, V. B.; Christiansen, H. R.; Cunha, M. S.; Muniz, C. R.

    2017-07-01

    We obtain the exact (confluent Heun) solutions to the massive scalar field in a gravity's rainbow Schwarzschild metric. With these solutions at hand, we study the Hawking radiation resulting from the tunneling rate through the event horizon. We show that the emission spectrum obeys nonextensive statistics and is halted when a certain mass remnant is reached. Next, we infer constraints on the rainbow parameters from recent LHC particle physics experiments and Hubble STIS astrophysics measurements. Finally, we study the low frequency limit in order to find the modified energy spectrum around the source.

  20. Constraints on single-field inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirtskhalava, David; Santoni, Luca; Trincherini, Enrico

    2016-06-28

    Many alternatives to canonical slow-roll inflation have been proposed over the years, one of the main motivations being to have a model capable of generating observable values of non-Gaussianity. In this work, we (re-)explore the physical implications of a great majority of such models within a single effective field theory framework (including novel models with large non-Gaussianity discussed for the first time below). The constraints we apply, both theoretical and experimental, are found to be rather robust, determined to a great extent by just three parameters: the coefficients of the quadratic EFT operators (δN)^2 and δNδE, and the slow-roll parameter ε. This allows us to significantly limit the majority of single-field alternatives to canonical slow-roll inflation. While the existing data still leave some room for most of the considered models, the situation would change dramatically if the current upper limit on the tensor-to-scalar ratio decreased to r < 10^{-2}. Apart from inflationary models driven by plateau-like potentials, the single-field model that would have a chance of surviving this bound is the recently proposed slow-roll inflation with weakly broken galileon symmetry. In contrast to canonical slow-roll inflation, the latter model can support r < 10^{-2} even if driven by a convex potential, as well as generate observable values for the amplitude of non-Gaussianity.

  1. Scale-dependent CMB power asymmetry from primordial speed of sound and a generalized δ N formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dong-Gang; Cai, Yi-Fu; Zhao, Wen

    2016-02-01

    We explore a plausible mechanism in which the hemispherical power asymmetry in the CMB is produced by spatial variation of the primordial sound speed parameter. We suggest that in a generalized approach to the δN formalism the local e-folding number may depend on other primordial parameters besides the initial values of the inflaton. Here the δN formalism is extended by considering the effects of a spatially varying sound speed parameter caused by a super-Hubble perturbation of a light field. Using this generalized δN formalism, we systematically calculate the asymmetric primordial spectrum in the model of multi-speed inflation by taking into account the constraints from primordial non-Gaussianities. We further discuss specific model constraints, and the corresponding asymmetry amplitudes are found to be scale-dependent, which can accommodate current observations of the power asymmetry at different length scales.

  2. Basic research and data analysis for the National Geodetic Satellite program and for the Earth Surveys program

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Current research is reported on precise and accurate descriptions of the earth's surface and gravitational field and on time variations of geophysical parameters. A new computer program was written in connection with the adjustment of the BC-4 worldwide geometric satellite triangulation net. The possibility that an increment in accuracy could be transferred from a super-control net to the basic geodetic net (first-order triangulation) was investigated. Coordinates of the NA9 solution were computed and transformed to the NAD datum, based on GEOS 1 observations. Normal equations from observational data of several different systems were combined with constraint equations, and a single solution was obtained for the combined systems. Transformation parameters with constraints were determined, and the impact of computers on surveying and mapping is discussed.
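
    As an illustration of combining normal equations from several observation systems with constraint equations, here is a small sketch using Lagrange multipliers on the stacked normal equations; the matrices and the constraint are toy placeholders, not the BC-4/NA9 data.

    ```python
    import numpy as np

    def combine_normal_equations(systems, C, d):
        """Solve a least-squares problem built from several observation systems,
        subject to linear constraints C x = d.

        systems : list of (A, y) design matrices and observation vectors
        Returns the constrained parameter estimate x.
        """
        n = systems[0][0].shape[1]
        N = np.zeros((n, n))          # accumulated normal matrix
        u = np.zeros(n)               # accumulated right-hand side
        for A, y in systems:
            N += A.T @ A
            u += A.T @ y
        # Bordered (KKT) system for the constrained minimum:
        # [N  C^T] [x]   [u]
        # [C   0 ] [k] = [d]
        m = C.shape[0]
        K = np.block([[N, C.T], [C, np.zeros((m, m))]])
        rhs = np.concatenate([u, d])
        return np.linalg.solve(K, rhs)[:n]

    # Toy example: two observation systems of the same 3 parameters,
    # with the constraint x0 + x1 + x2 = 1.
    rng = np.random.default_rng(0)
    x_true = np.array([0.2, 0.3, 0.5])
    A1, A2 = rng.normal(size=(8, 3)), rng.normal(size=(6, 3))
    y1 = A1 @ x_true + 0.01 * rng.normal(size=8)
    y2 = A2 @ x_true + 0.01 * rng.normal(size=6)
    C, d = np.ones((1, 3)), np.array([1.0])
    print(combine_normal_equations([(A1, y1), (A2, y2)], C, d))
    ```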

  3. The Effects of Earth's Outer Core's Viscosity on Geodynamo Models

    NASA Astrophysics Data System (ADS)

    Dong, C.; Jiao, L.; Zhang, H.

    2017-12-01

    The geodynamo process is governed by its mathematical equations and input parameters. To study the effects of these parameters on the geodynamo system, the MoSST model has been used to simulate geodynamo outputs for different values of the outer core viscosity ν. Spanning ν over nearly three orders of magnitude with the other parameters fixed, we studied the variation of each physical field and its typical length scale. We find that varying ν affects the velocity field strongly. The magnetic field decreases almost monotonically with increasing ν, although the variation is no larger than 30%. The temperature perturbation increases monotonically with ν, but by a very small amount (6%). The averaged velocity field u of the liquid core increases with ν following a simple fitted scaling relation, u ∝ ν^0.49. The increase of u with ν arises essentially because increasing ν breaks the Taylor-Proudman constraint and lowers the critical Rayleigh number, so u increases under the same thermal driving force. The force balance is analyzed, and the balance mode shifts as ν varies. Compared with earlier scaling-law studies, this work supports the conclusion that, within a certain parameter range, the magnetic field strength does not vary much with viscosity, but it contradicts the assumption that the velocity field is independent of the outer core viscosity.
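
    A minimal sketch of extracting a scaling exponent such as u ∝ ν^0.49 from a set of simulation outputs via a log-log linear fit; the (ν, u) values below are synthetic placeholders, not MoSST results.

    ```python
    import numpy as np

    # Synthetic (viscosity, mean velocity) pairs roughly following u ~ nu^0.5 with scatter.
    rng = np.random.default_rng(2)
    nu = np.logspace(-6, -3, 12)                      # placeholder viscosity values
    u = 3.0 * nu**0.5 * np.exp(0.05 * rng.normal(size=nu.size))

    # Fit log10(u) = alpha * log10(nu) + const; the slope alpha is the scaling exponent.
    alpha, const = np.polyfit(np.log10(nu), np.log10(u), 1)
    print(f"fitted exponent alpha = {alpha:.2f}")     # ~0.5 for this synthetic data
    ```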

  4. 2D magnetotelluric inversion using reflection seismic images as constraints and application in the COSC project

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.

    2017-04-01

    We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
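
    To make the construction above concrete, here is a rough sketch of how envelope-based weights for the horizontal and vertical smoothness constraints might be formed from a migrated seismic image; the image array, the scaling of the weights, and the stabilization parameter beta are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def envelope_weights(image, beta=1.0, eps=1e-6):
        """Build horizontal/vertical smoothness weights from a seismic image.

        image : 2D array (depth x distance) of reflection amplitudes
        beta  : global stabilization parameter (larger -> weaker down-weighting)
        Returns (w_h, w_v): weights in (0, 1]; smaller where the envelope gradient
        is larger, so smoothing is relaxed across strong reflectors.
        """
        # Instantaneous-amplitude (envelope) attribute, trace by trace along depth.
        env = np.abs(hilbert(image, axis=0))
        env = env / (env.max() + eps)
        # Directional gradients of the envelope.
        g_v, g_h = np.gradient(env)
        w_h = beta / (beta + np.abs(g_h) / (np.abs(g_h).max() + eps))
        w_v = beta / (beta + np.abs(g_v) / (np.abs(g_v).max() + eps))
        return w_h, w_v

    # Toy usage with a random "image"; real use would load the migrated section.
    img = np.random.default_rng(3).normal(size=(200, 300))
    w_h, w_v = envelope_weights(img, beta=0.5)
    print(w_h.shape, w_v.shape, w_h.min(), w_h.max())
    ```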

  5. Constraints on the Computation of Rigid Motion Parameters from Retinal Displacements.

    DTIC Science & Technology

    1985-10-01

    The motion field computed from two temporally proximal frames is, in general, ambiguous; structure can be recovered from two frames when the moving surface satisfies certain conditions. The derivation uses a Taylor-series expansion of the surface function Z(X + δX, Y + δY).

  6. Test of Parameterized Post-Newtonian Gravity with Galaxy-scale Strong Lensing Systems

    NASA Astrophysics Data System (ADS)

    Cao, Shuo; Li, Xiaolei; Biesiada, Marek; Xu, Tengpeng; Cai, Yongzhi; Zhu, Zong-Hong

    2017-01-01

    Based on a mass-selected sample of galaxy-scale strong gravitational lenses from the SLACS, BELLS, LSD, and SL2S surveys and using a well-motivated fiducial set of lens-galaxy parameters, we tested the weak-field metric on kiloparsec scales and found a constraint on the post-Newtonian parameter γ = 0.995^{+0.037}_{-0.047} under the assumption of a flat ΛCDM universe with parameters taken from Planck observations. General relativity (GR) predicts exactly γ = 1. Uncertainties concerning the total mass density profile, the anisotropy of the velocity dispersion, and the shape of the light profile combine to give systematic uncertainties of ~25%. By applying a cosmological model-independent method to simulated future LSST data, we found a significant degeneracy between the PPN γ parameter and the spatial curvature of the universe. Setting a prior on the cosmic curvature parameter -0.007 < Ω_k < 0.006, we obtained the constraint γ = 1.000^{+0.0023}_{-0.0025} on the PPN parameter. We conclude that strong lensing systems with measured stellar velocity dispersions may serve as another important probe of the validity of GR, if the mass-dynamical structure of the lensing galaxies is accurately constrained in future lens surveys.
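
    A rough numerical sketch of the kind of estimator used in such tests, under the common simplifying assumption of a singular isothermal sphere lens, for which the Einstein radius is the GR value rescaled by the PPN light-deflection factor (1 + γ)/2; the function name and the input numbers below are invented for illustration, not taken from the paper.

    ```python
    import numpy as np

    C_KM_S = 2.99792458e5  # speed of light in km/s

    def ppn_gamma_sis(theta_E_arcsec, sigma_km_s, D_ls_over_Ds):
        """Infer the PPN parameter gamma for a singular isothermal sphere lens.

        Assumes theta_E = (1 + gamma)/2 * 4*pi * (sigma/c)^2 * D_ls/D_s,
        i.e. the GR Einstein radius rescaled by the PPN deflection factor.
        """
        theta_E_rad = np.deg2rad(theta_E_arcsec / 3600.0)
        theta_gr = 4.0 * np.pi * (sigma_km_s / C_KM_S) ** 2 * D_ls_over_Ds
        return 2.0 * theta_E_rad / theta_gr - 1.0

    # Invented example values: a ~1" Einstein ring, sigma ~ 250 km/s, D_ls/D_s ~ 0.55.
    print(ppn_gamma_sis(1.0, 250.0, 0.55))   # close to 1 for these numbers
    ```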

  7. Parameter estimation in 3D affine and similarity transformation: implementation of variance component estimation

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A. R.

    2018-01-01

    Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axis rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although in some geodetic applications simplified transformation models are used based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied when the transformation parameters are large, and no approximate values of the parameters are required; direct linearization of the rotation and scale parameters is thus not needed. The WTLS formulation is employed to take into account errors in both the start and target systems in the estimation of the transformation parameters. Two well-known 3D transformation methods, namely affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated within the standard least-squares theory with constraints, the covariance matrix of the transformation parameters can be provided directly. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
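
    For orientation, here is a compact sketch of estimating a 7-parameter similarity (Helmert) transformation between two point sets with an SVD-based ordinary least-squares solution, which also handles large rotations without linearization; this is a generic Procrustes-style estimator, not the weighted total least-squares method of the paper, and the point coordinates are synthetic.

    ```python
    import numpy as np

    def similarity_transform(src, dst):
        """Estimate scale s, rotation R, translation t so that dst ≈ s * R @ src + t.

        src, dst : (N, 3) arrays of matched coordinates in the start/target systems.
        Ordinary least squares with equal weights (Procrustes/Horn-type solution).
        """
        mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
        A, B = src - mu_s, dst - mu_d
        U, S, Vt = np.linalg.svd(A.T @ B)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
        R = (U @ D @ Vt).T
        s = np.trace(np.diag(S) @ D) / np.sum(A ** 2)
        t = mu_d - s * R @ mu_s
        return s, R, t

    # Synthetic check: recover a known transformation from noisy points.
    rng = np.random.default_rng(4)
    src = rng.uniform(-100, 100, size=(20, 3))
    angle = np.deg2rad(40.0)                            # a deliberately large rotation
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    dst = 1.3 * src @ R_true.T + np.array([5.0, -2.0, 10.0]) + 0.01 * rng.normal(size=src.shape)
    s, R, t = similarity_transform(src, dst)
    print(round(s, 4), np.round(t, 2))
    ```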

  8. Using atom interferometry to detect dark energy

    NASA Astrophysics Data System (ADS)

    Burrage, Clare; Copeland, Edmund J.

    2016-04-01

    We review the tantalising prospect that the first evidence for the dark energy driving the observed acceleration of the universe on gigaparsec scales may be found through metre-scale laboratory-based atom interferometry experiments. To do so, we first introduce the idea that scalar fields could be responsible for dark energy and show that, in order to be compatible with fifth-force constraints, these fields must have a screening mechanism which hides their effects from us within the solar system. Particular emphasis is placed on one such screening mechanism, known as the chameleon effect, where the field's mass becomes dependent on the environment. The behaviour of the field in the presence of a spherical source is determined, and we then show how, in the kind of high vacuum associated with atom interferometry experiments and with an atom as the test particle, the associated interference pattern can be used to place constraints on the acceleration due to the fifth force of the chameleon field. This approach has already been used to rule out large regions of the chameleon parameter space and may one day be able to detect the force due to the dark energy field in the laboratory.

  9. Molecular dynamics simulations on PGLa using NMR orientational constraints.

    PubMed

    Sternberg, Ulrich; Witter, Raiker

    2015-11-01

    NMR data obtained by solid-state NMR from anisotropic samples are used as orientational constraints in molecular dynamics simulations to determine the structure and dynamics of the PGLa peptide within a membrane environment. For the simulation, the recently developed molecular dynamics with orientational constraints (MDOC) technique is used. This method introduces orientation-dependent pseudo-forces into the COSMOS-NMR force field. Acting during a molecular dynamics simulation, these forces drive molecular rotations, re-orientations and folding in such a way that the motional time-averages of the tensorial NMR properties are consistent with the experimentally measured NMR parameters. This MDOC strategy does not depend on the initial choice of atomic coordinates, and is in principle suitable for any flexible and mobile kind of molecule; it is of course possible to account for flexible parts of peptides or their side-chains. MDOC has been applied to the antimicrobial peptide PGLa and a related dimer model. With these simulations it was possible to reproduce most NMR parameters within the experimental error bounds. The alignment, conformation and order parameters of the membrane-bound molecule and its dimer were derived directly from the NMR data with MDOC. Furthermore, this new approach yielded for the first time the distribution of segmental orientations with respect to the membrane and the order parameter tensors of the dimer systems. It was demonstrated that the deuterium splittings measured at a peptide-to-lipid ratio of 1/50 are consistent with a membrane-spanning orientation of the peptide.

  10. Approximate Bayesian computation for forward modeling in cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akeret, Joël; Refregier, Alexandre; Amara, Adam

    Bayesian inference is often used in cosmology and astrophysics to derive constraints on model parameters from observations. This approach relies on the ability to compute the likelihood of the data given a choice of model parameters. In many practical situations, however, the likelihood function may be unavailable or intractable due to non-Gaussian errors, non-linear measurement processes, or complex data formats such as catalogs and maps. In these cases, mock data sets can often be simulated through forward modeling. We discuss how Approximate Bayesian Computation (ABC) can be used in these cases to derive an approximation to the posterior constraints using simulated data sets. This technique relies on sampling of the parameter set, a distance metric to quantify the difference between the observation and the simulations, and summary statistics to compress the information in the data. We first review the principles of ABC and discuss its implementation using a Population Monte Carlo (PMC) algorithm and the Mahalanobis distance metric. We test the performance of the implementation using a Gaussian toy model. We then apply the ABC technique to the practical case of the calibration of image simulations for wide-field cosmological surveys. We find that the ABC analysis is able to provide reliable parameter constraints for this problem and is therefore a promising technique for other applications in cosmology and astrophysics. Our implementation of the ABC PMC method is made available via a public code release.
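
    To illustrate the basic ABC idea described above, here is a minimal rejection-ABC sketch on a Gaussian toy model (estimating a mean from its sample average); it uses plain rejection with a fixed tolerance rather than the paper's Population Monte Carlo scheme, and all settings are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # "Observed" data from a Gaussian toy model with unknown mean mu (sigma fixed to 1).
    mu_true, n_data = 1.7, 200
    data = rng.normal(mu_true, 1.0, size=n_data)
    obs_summary = data.mean()                 # summary statistic compressing the data

    def simulate(mu):
        """Forward model: draw a mock data set and return its summary statistic."""
        return rng.normal(mu, 1.0, size=n_data).mean()

    # Rejection ABC: keep prior draws whose simulated summary lies within a tolerance.
    n_draws, tol = 50_000, 0.05
    prior_draws = rng.uniform(-5.0, 5.0, size=n_draws)        # flat prior on mu
    accepted = np.array([mu for mu in prior_draws
                         if abs(simulate(mu) - obs_summary) < tol])

    print(f"accepted {accepted.size} of {n_draws} draws")
    print(f"approximate posterior mean = {accepted.mean():.3f} (true mu = {mu_true})")
    ```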

  11. CMB constraints on β-exponential inflationary models

    NASA Astrophysics Data System (ADS)

    Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.

    2018-03-01

    We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra dimension is interpreted as the inflaton. We discuss the observational viability of this class of models in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone weakly prefer the minimal standard model (ΛCDM) over β-exponential inflation. However, when current local measurements of the Hubble parameter, H_0, are considered, the β-inflation model is moderately preferred over the ΛCDM cosmology, making this class of inflationary models interesting in the context of the current H_0 tension.

  12. Constraints for the thawing and freezing potentials

    NASA Astrophysics Data System (ADS)

    Hara, Tetsuya; Suzuki, Anna; Saka, Shogo; Tanigawa, Takuma

    2018-01-01

    We study the accelerating present universe in terms of the time evolution of the equation of state w(z) (with redshift z) arising from thawing and freezing scalar potentials in the quintessence model. The values of dw/da and d^2w/da^2 at scale factor a = 1 are associated with two parameters of each potential. For five types of scalar potentials, the scalar field Q and w are numerically calculated as functions of time t and/or z under the fixed boundary condition w(z=0) = -1 + Δ. The observational constraint w_obs (Planck Collaboration, arXiv:1502.01590) is imposed to test whether the numerical w(z) lies within w_obs. Some solutions show thawing features in the freezing potentials. Mutually exclusive allowed regions in the dw/da vs. d^2w/da^2 diagram are obtained, in order to identify the likely scalar potential, and even the potential parameters, in future observational tests.

  13. Hubble induced mass after inflation in spectator field models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujita, Tomohiro; Harigaya, Keisuke, E-mail: tomofuji@stanford.edu, E-mail: keisukeh@icrr.u-tokyo.ac.jp

    2016-12-01

    Spectator field models such as the curvaton scenario and modulated reheating are attractive scenarios for the generation of the cosmic curvature perturbation, as the constraints on inflation models are relaxed. In this paper, we discuss the effect of Hubble-induced masses on the dynamics of spectator fields after inflation. We pay particular attention to the Hubble-induced mass from the kinetic energy of an oscillating inflaton, which is generically unsuppressed but often overlooked. In the curvaton scenario, the Hubble-induced mass relaxes the constraints on the properties of the inflaton and the curvaton, such as the reheating temperature and the inflation scale. We comment on the implications of our discussion for baryogenesis in the curvaton scenario. In modulated reheating, the predictions of models, e.g. the non-Gaussianity, can be considerably altered. Furthermore, we propose a new model of modulated reheating utilizing the Hubble-induced mass which realizes a wide range of the local non-Gaussianity parameter.

  14. On the Collisionless Asymmetric Magnetic Reconnection Rate

    NASA Astrophysics Data System (ADS)

    Liu, Yi-Hsin; Hesse, M.; Cassak, P. A.; Shay, M. A.; Wang, S.; Chen, L.-J.

    2018-04-01

    A prediction of the steady state reconnection electric field in asymmetric reconnection is obtained by maximizing the reconnection rate as a function of the opening angle made by the upstream magnetic field on the weak magnetic field (magnetosheath) side. The prediction is within a factor of 2 of the widely examined asymmetric reconnection model (Cassak & Shay, 2007, https://doi.org/10.1063/1.2795630) in the collisionless limit, and they scale the same over a wide parameter regime. The previous model had the effective aspect ratio of the diffusion region as a free parameter, which simulations and observations suggest is on the order of 0.1, but the present model has no free parameters. In conjunction with the symmetric case (Liu et al., 2017, https://doi.org/10.1103/PhysRevLett.118.085101), this work further suggests that this nearly universal number 0.1, essentially the normalized fast-reconnection rate, is a geometrical factor arising from maximizing the reconnection rate within magnetohydrodynamic-scale constraints.

  15. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    An optimizing computer program determined the turboprop aircraft with the lowest direct operating cost (DOC) for various sets of cruise speed and field length constraints. External variables included wing area, wing aspect ratio and engine sea-level static horsepower; tail sizes, climb speed and cruise altitude were varied within the function evaluation program. Direct operating cost was minimized for a typical 150 n.mi. mission. Generally, DOC increased with increasing speed and decreasing field length, but not by a large amount. Ride roughness, however, increased considerably as speed became higher and field length became shorter.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tronconi, Alessandro, E-mail: Alessandro.Tronconi@bo.infn.it

    We study the constraints imposed by the requirement of Asymptotic Safety on a class of inflationary models with an inflaton field non-minimally coupled to the Ricci scalar. The critical surface in the space of theories is determined by the improved renormalization group flow, which takes into account quantum corrections beyond the one-loop approximation. The combination of constraints deriving from Planck observations and those from theory puts severe bounds on the values of the parameters of the model and predicts a quite large tensor-to-scalar ratio. We finally comment on the dependence of the results on the definition of the infrared energy scale which parametrises the running on the critical surface.

  17. Classical and quantum stability in putative landscapes

    DOE PAGES

    Dine, Michael

    2017-01-18

    Landscape analyses often assume the existence of large numbers of fields, N, with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N, eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. Finally, we consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.

  18. Classical and quantum stability in putative landscapes

    NASA Astrophysics Data System (ADS)

    Dine, Michael

    2017-01-01

    Landscape analyses often assume the existence of large numbers of fields, N , with all of the many couplings among these fields (subject to constraints such as local supersymmetry) selected independently and randomly from simple (say Gaussian) distributions. We point out that unitarity and perturbativity place significant constraints on behavior of couplings with N , eliminating otherwise puzzling results. In would-be flux compactifications of string theory, we point out that in order that there be large numbers of light fields, the compactification radii must scale as a positive power of N ; scaling of couplings with N may also be necessary for perturbativity. We show that in some simple string theory settings with large numbers of fields, for fixed R and string coupling, one can bound certain sums of squares of couplings by order one numbers. This may argue for strong correlations, possibly calling into question the assumption of uncorrelated distributions. We consider implications of these considerations for classical and quantum stability of states without supersymmetry, with low energy supersymmetry arising from tuning of parameters, and with dynamical breaking of supersymmetry.

  19. Influence of Constraint in Parameter Space on Quantum Games

    NASA Astrophysics Data System (ADS)

    Zhao, Hai-Jun; Fang, Xi-Ming

    2004-04-01

    We study the influence of a constraint in the parameter space on quantum games. Decomposing an SU(2) operator into a product of three rotation operators and controlling one kind of them, we impose a constraint on the parameter space of the players' operators. We find that the constraint can provide a tuner to make the bilateral payoffs equal, so that the mismatch of the players' actions at multiple equilibria can be avoided. We also find that the game exhibits an intriguing structure as a function of the parameter of the controlled operators, which is useful for constructing game models.
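
    As a concrete picture of the decomposition mentioned above, here is a small sketch that writes an SU(2) strategy operator as a product of three rotations U = R_z(α) R_y(θ) R_z(β) and "controls" one of them by fixing its angle; the Z-Y-Z choice and the fixed value are illustrative assumptions, not the specific parametrization of the paper.

    ```python
    import numpy as np

    def r_z(a):
        """Rotation about z: diag(e^{-i a/2}, e^{+i a/2})."""
        return np.array([[np.exp(-1j * a / 2), 0], [0, np.exp(1j * a / 2)]])

    def r_y(t):
        """Rotation about y."""
        return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

    def strategy(alpha, theta, beta):
        """General SU(2) strategy as a Z-Y-Z product of rotation operators."""
        return r_z(alpha) @ r_y(theta) @ r_z(beta)

    def constrained_strategy(alpha, theta, beta_fixed=0.0):
        """Constrained parameter space: the last rotation angle is held fixed."""
        return strategy(alpha, theta, beta_fixed)

    U = strategy(0.3, 1.1, -0.7)
    print(np.allclose(U.conj().T @ U, np.eye(2)))   # unitarity check: True
    print(np.round(constrained_strategy(0.3, 1.1), 3))
    ```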

  20. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models.

    PubMed

    Cotten, Cameron; Reed, Jennifer L

    2013-01-30

    Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
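
    As a sketch of how kinetically derived flux limits can enter a constraint-based model, here is a tiny flux balance analysis set up as a linear program with scipy, where extra bounds on one reaction stand in for the kinetic limits; the three-reaction toy network and all numbers are invented for illustration, not the E. coli model of the study.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass).
    # Rows are metabolites A and B; columns are reactions R1..R3.
    S = np.array([[ 1, -1,  0],
                  [ 0,  1, -1]], dtype=float)

    def max_biomass(bounds):
        """Maximize flux through R3 subject to steady state S v = 0 and flux bounds."""
        res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2),
                      bounds=bounds, method="highs")
        return res.x, -res.fun

    # Plain FBA: only an uptake limit on R1.
    plain = [(0, 10), (0, 1000), (0, 1000)]
    # FBA with a kinetically derived cap on R2 (e.g. from enzyme/metabolite levels).
    kinetic = [(0, 10), (0, 6), (0, 1000)]

    for name, b in [("plain", plain), ("with kinetic limit", kinetic)]:
        v, growth = max_biomass(b)
        print(f"{name}: fluxes = {np.round(v, 2)}, biomass flux = {growth:.2f}")
    ```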

  1. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models

    PubMed Central

    2013-01-01

    Background Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. Results In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. Conclusions This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets. PMID:23360254

  2. Dark energy models through nonextensive Tsallis' statistics

    NASA Astrophysics Data System (ADS)

    Barboza, Edésio M.; Nunes, Rafael da C.; Abreu, Everton M. C.; Ananias Neto, Jorge

    2015-10-01

    The accelerated expansion of the Universe is one of the greatest challenges of modern physics. One candidate to explain this phenomenon is a new field called dark energy. In this work we have used the Tsallis nonextensive statistical formulation of the Friedmann equation to explore the Barboza-Alcaniz and Chevallier-Polarski-Linder parametric dark energy models and the Wang-Meng and Dalal vacuum decay models. After that, we have discussed the observational tests and the constraints concerning the Tsallis nonextensive parameter. Finally, we have described the dark energy physics through the role of the q-parameter.

  3. Weyl current, scale-invariant inflation, and Planck scale generation

    DOE PAGES

    Ferreira, Pedro G.; Hill, Christopher T.; Ross, Graham G.

    2017-02-08

    Scalar fields φ_i can be coupled nonminimally to curvature and satisfy the general criteria: (i) the theory has no mass input parameters, including M_P = 0; (ii) the φ_i have arbitrary values and gradients, but undergo a general expansion and relaxation to constant values that satisfy a nontrivial constraint, K(φ_i) = constant; (iii) this constraint breaks scale symmetry spontaneously, and the Planck mass is dynamically generated; (iv) there can be adequate inflation associated with slow roll in a scale-invariant potential subject to the constraint; (v) the final vacuum can have a small to vanishing cosmological constant; (vi) large hierarchies in vacuum expectation values can naturally form; (vii) there is a harmless dilaton which naturally eludes the usual constraints on massless scalars. Finally, these models are governed by a global Weyl scale symmetry and its conserved current, K^μ. At the quantum level the Weyl scale symmetry can be maintained by an invariant specification of renormalized quantities.

  4. Planck satellite constraints on pseudo-Nambu-Goldstone boson quintessence

    NASA Astrophysics Data System (ADS)

    Smer-Barreto, Vanessa; Liddle, Andrew R.

    2017-01-01

    The pseudo-Nambu-Goldstone boson (PNGB) potential, defined through the amplitude M^4 and width f of its characteristic form V(φ) = M^4[1 + cos(φ/f)], is one of the best-suited models for the study of thawing quintessence. We analyse its present observational constraints by direct numerical solution of the scalar field equation of motion. Observational bounds are obtained using Supernovae data, cosmic microwave background temperature, polarization and lensing data from Planck, direct Hubble constant constraints, and baryon acoustic oscillations data. We find the parameter ranges for which PNGB quintessence gives a viable theory for dark energy. This exact approach is contrasted with the use of an approximate equation-of-state parametrization for thawing theories. We also discuss other possible parametrization choices, as well as commenting on the accuracy of the constraints imposed by Planck alone. Overall our analysis highlights a significant prior dependence of the outcome coming from the choice of modelling methodology, which current data are not sufficient to override.
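
    A bare-bones sketch of the kind of direct numerical solution referred to above: integrating the scalar field equation φ'' + 3Hφ' + dV/dφ = 0 together with the flat-universe Friedmann equation for the PNGB potential, in reduced Planck units; the parameter values, initial conditions, and the neglect of radiation are illustrative assumptions, not the paper's setup.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Units: 8*pi*G = 1 and H0 = 1, so the critical density today is 3.
    f = 1.0                      # decay constant (illustrative)
    phi_i = 0.5                  # initial field value, frozen deep in the matter era
    rho_m0 = 0.9                 # matter density today (Omega_m ~ 0.3)
    M4 = 2.1 / (1.0 + np.cos(phi_i / f))   # rough tuning so V(phi_i) ~ 0.7 * 3

    V  = lambda p: M4 * (1.0 + np.cos(p / f))
    dV = lambda p: -M4 / f * np.sin(p / f)

    def rhs(N, y):
        """Evolution in e-folds N = ln a of (phi, dphi/dt)."""
        phi, phidot = y
        rho_m = rho_m0 * np.exp(-3.0 * N)
        H = np.sqrt((rho_m + 0.5 * phidot**2 + V(phi)) / 3.0)
        return [phidot / H, -3.0 * phidot - dV(phi) / H]

    sol = solve_ivp(rhs, [np.log(1e-3), 0.0], [phi_i, 0.0], rtol=1e-8, atol=1e-10)
    phi0, phidot0 = sol.y[:, -1]
    rho_phi = 0.5 * phidot0**2 + V(phi0)
    w_phi = (0.5 * phidot0**2 - V(phi0)) / rho_phi
    print(f"today: w_phi = {w_phi:.3f}, Omega_phi = {rho_phi / (rho_phi + rho_m0):.3f}")
    ```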

  5. Planck satellite constraints on pseudo-Nambu-Goldstone boson quintessence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smer-Barreto, Vanessa; Liddle, Andrew R., E-mail: vsm@roe.ac.uk, E-mail: arl@roe.ac.uk

    2017-01-01

    The pseudo-Nambu-Goldstone boson (PNGB) potential, defined through the amplitude M^4 and width f of its characteristic form V(φ) = M^4[1 + cos(φ/f)], is one of the best-suited models for the study of thawing quintessence. We analyse its present observational constraints by direct numerical solution of the scalar field equation of motion. Observational bounds are obtained using Supernovae data, cosmic microwave background temperature, polarization and lensing data from Planck, direct Hubble constant constraints, and baryon acoustic oscillations data. We find the parameter ranges for which PNGB quintessence gives a viable theory for dark energy. This exact approach is contrasted with the use of an approximate equation-of-state parametrization for thawing theories. We also discuss other possible parametrization choices, as well as commenting on the accuracy of the constraints imposed by Planck alone. Overall our analysis highlights a significant prior dependence of the outcome coming from the choice of modelling methodology, which current data are not sufficient to override.

  6. 3D galaxy clustering with future wide-field surveys: Advantages of a spherical Fourier-Bessel analysis

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2015-06-01

    Context. Upcoming spectroscopic galaxy surveys are extremely promising to help in addressing the major challenges of cosmology, in particular in understanding the nature of the dark universe. The strength of these surveys, naturally described in spherical geometry, comes from their unprecedented depth and width, but an optimal extraction of their three-dimensional information is of utmost importance to best constrain the properties of the dark universe. Aims: Although there is theoretical motivation and novel tools to explore these surveys using the 3D spherical Fourier-Bessel (SFB) power spectrum of galaxy number counts C_ℓ(k,k'), most survey optimisations and forecasts are based on the tomographic spherical harmonics power spectrum C_ℓ^(ij). The goal of this paper is to perform a new investigation of the information that can be extracted from these two analyses in the context of planned stage IV wide-field galaxy surveys. Methods: We compared tomographic and 3D SFB techniques by comparing the forecast cosmological parameter constraints obtained from a Fisher analysis. The comparison was made possible by careful and coherent treatment of non-linear scales in the two analyses, which makes this study the first to compare 3D SFB and tomographic constraints on an equal footing. Nuisance parameters related to a scale- and redshift-dependent galaxy bias were also included in the computation of the 3D SFB and tomographic power spectra for the first time. Results: Tomographic and 3D SFB methods can recover similar constraints in the absence of systematics. This requires choosing an optimal number of redshift bins for the tomographic analysis, which we computed to be N = 26 for z_med ≃ 0.4, N = 30 for z_med ≃ 1.0, and N = 42 for z_med ≃ 1.7. When marginalising over nuisance parameters related to the galaxy bias, the forecast 3D SFB constraints are less affected by this source of systematics than the tomographic constraints. In addition, the rate of increase of the figure of merit as a function of median redshift is higher for the 3D SFB method than for the 2D tomographic method. Conclusions: Constraints from the 3D SFB analysis are less sensitive to unavoidable systematics stemming from a redshift- and scale-dependent galaxy bias. Even for surveys that are optimised with tomography in mind, a 3D SFB analysis is more powerful. In addition, for survey optimisation, the figure of merit for the 3D SFB method increases more rapidly with redshift, especially at higher redshifts, suggesting that the 3D SFB method should be preferred for designing and analysing future wide-field spectroscopic surveys. CosmicPy, the Python package developed for this paper, is freely available at https://cosmicpy.github.io.

  7. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, L.uis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty, where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l_∞ formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
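
    For a sense of the quantities involved, here is a plain Monte Carlo estimate of the probability of violating a soft inequality constraint under a probabilistic uncertainty model; this is only the brute-force baseline, not the paper's bound-plus-conditional-sampling hybrid, and the constraint function and distribution below are made up.

    ```python
    import numpy as np

    def constraint(p):
        """Toy design requirement g(p) <= 0; rows of p are parameter realizations."""
        return p[:, 0] ** 2 + 0.5 * p[:, 1] - 1.0

    # Probabilistic uncertainty model: componentwise bounded, here uniform on a box.
    rng = np.random.default_rng(7)
    n = 200_000
    samples = rng.uniform(low=[-1.0, -1.0], high=[1.0, 1.0], size=(n, 2))

    g = constraint(samples)
    p_fail = np.mean(g > 0)                       # probability of constraint violation
    stderr = np.sqrt(p_fail * (1 - p_fail) / n)   # Monte Carlo standard error
    print(f"P(violation) = {p_fail:.4f} +/- {stderr:.4f}")
    ```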

  8. Oscillating scalar fields in extended quintessence

    NASA Astrophysics Data System (ADS)

    Li, Dan; Pi, Shi; Scherrer, Robert J.

    2018-01-01

    We study a rapidly oscillating scalar field with potential V(ϕ) = k|ϕ|^n nonminimally coupled to the Ricci scalar R via a term of the form (1 − 8πG_0 ξϕ^2)R in the action. In the weak coupling limit, we calculate the effect of the nonminimal coupling on the time-averaged equation of state parameter γ = (p + ρ)/ρ. The change in ⟨γ⟩ is always negative for n ≥ 2 and always positive for n < 0.71 (which includes the case where the oscillating scalar field could serve as dark energy), while it can be either positive or negative for intermediate values of n. Constraints on the time variation of G force this change to be infinitesimally small at the present time whenever the scalar field dominates the expansion, but constraints in the early universe are not as stringent. The rapid oscillation induced in G also produces an additional contribution to the Friedmann equation that behaves like an effective energy density with a stiff equation of state, but we show that, under reasonable assumptions, this effective energy density is always smaller than the density of the scalar field itself.

  9. Observational constraints on monomial warm inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Visinelli, Luca, E-mail: Luca.Visinelli@studio.unibo.it

    Warm inflation is, as of today, one of the best-motivated mechanisms for explaining an early inflationary period. In this paper, we derive and analyze the current bounds on warm inflation with a monomial potential U ∝ φ^p, using the constraints from the PLANCK mission. In particular, we discuss the parameter space of the tensor-to-scalar ratio r and the potential coupling λ of monomial warm inflation in terms of the number of e-folds. We find that the predicted tensor-to-scalar ratio r ∼ 10^{-8} is much smaller than the current observational constraint r ≲ 0.12, despite a relatively large value of the field excursion Δφ ∼ 0.1 M_Pl. Warm inflation thus eludes the Lyth bound set on the tensor-to-scalar ratio by the field excursion.

  10. Influence of toroidal magnetic field in multiaccreting tori

    NASA Astrophysics Data System (ADS)

    Pugliese, D.; Montani, G.

    2018-06-01

    We analysed the effects of a toroidal magnetic field on the formation of several magnetized accretion tori, dubbed ringed accretion discs (RADs), orbiting around one central Kerr supermassive black hole (SMBH) in active galactic nuclei (AGNs), where both corotating and counter-rotating discs are considered. Constraints on tori formation, the emergence of RAD instabilities, accretion onto the central attractor, and the emergence of tori collisions are investigated. The results of this analysis show that the central BH spin-mass ratio, the magnetic field, and the rotation of the fluid and of the tori with respect to the central BH are crucial elements in determining the features of the accretion tori, ultimately providing evidence of a strict correlation between SMBH spin, fluid rotation, and magnetic fields in RAD formation and evolution. More specifically, we proved that the magnetic field and disc rotation are in fact strongly constrained, as tori formation and evolution in RADs depend on the toroidal magnetic field parameters. Finally, this analysis identifies specific classes of tori, for restricted ranges of the magnetic field parameter, that can be observed around specific SMBHs identified by their dimensionless spin.

  11. KiDS-450 + 2dFLenS: Cosmological parameter constraints from weak gravitational lensing tomography and overlapping redshift-space galaxy clustering

    NASA Astrophysics Data System (ADS)

    Joudaki, Shahab; Blake, Chris; Johnson, Andrew; Amon, Alexandra; Asgari, Marika; Choi, Ami; Erben, Thomas; Glazebrook, Karl; Harnois-Déraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Mead, Alexander; Miller, Lance; Parkinson, David; Poole, Gregory B.; Schneider, Peter; Viola, Massimo; Wolf, Christian

    2018-03-01

    We perform a combined analysis of cosmic shear tomography, galaxy-galaxy lensing tomography, and redshift-space multipole power spectra (monopole and quadrupole) using 450 deg² of imaging data from the Kilo Degree Survey (KiDS-450) overlapping with two spectroscopic surveys: the 2-degree Field Lensing Survey (2dFLenS) and the Baryon Oscillation Spectroscopic Survey (BOSS). We restrict the galaxy-galaxy lensing and multipole power spectrum measurements to the regions overlapping with KiDS, and self-consistently compute the full covariance between the different observables using a large suite of N-body simulations. We methodically analyse different combinations of the observables, finding that the galaxy-galaxy lensing measurements are particularly useful in improving the constraint on the intrinsic alignment amplitude, while the multipole power spectra are useful in tightening the constraints along the lensing degeneracy direction. The fully combined constraint on S_8 ≡ σ_8 √(Ω_m/0.3) = 0.742 ± 0.035, which is an improvement by 20 per cent compared to KiDS alone, corresponds to a 2.6σ discordance with Planck, and is not significantly affected by fitting to a more conservative set of scales. Given the tightening of the parameter space, we are unable to resolve the discordance with an extended cosmology that is simultaneously favoured in a model selection sense, including the sum of neutrino masses, curvature, evolving dark energy, and modified gravity. The complementarity of our observables allows for constraints on modified gravity degrees of freedom that are not simultaneously bounded with either probe alone, and up to a factor of three improvement in the S_8 constraint in the extended cosmology compared to KiDS alone.

  12. Massive spin-2 scattering and asymptotic superluminality

    NASA Astrophysics Data System (ADS)

    Hinterbichler, Kurt; Joyce, Austin; Rosen, Rachel A.

    2018-03-01

    We place model-independent constraints on theories of massive spin-2 particles by considering the positivity of the phase shift in eikonal scattering. The phase shift is an asymptotic S-matrix observable, related to the time delay/advance experienced by a particle during scattering. Demanding the absence of a time advance leads to constraints on the cubic vertices present in the theory. We find that, in theories with massive spin-2 particles, requiring no time advance means that either: (i) the cubic vertices must appear as a particular linear combination of the Einstein-Hilbert cubic vertex and an h_μν^3 potential term, or (ii) new degrees of freedom or strong coupling must enter at a scale parametrically equal to the mass of the massive spin-2 field. These conclusions have implications for a variety of situations. Applied to theories of large-N QCD, this indicates that any spectrum with an isolated massive spin-2 at the bottom must have these particular cubic self-couplings. Applied to de Rham-Gabadadze-Tolley massive gravity, the constraint is in accord with results obtained from a shockwave calculation: of the two free dimensionless parameters in the theory, there is a one-parameter line consistent with a subluminal phase shift.

  13. Evaluation of an artificial intelligence guided inverse planning system: clinical case study.

    PubMed

    Yan, Hui; Yin, Fang-Fang; Willett, Christopher

    2007-04-01

    An artificial intelligence (AI) guided method for parameter adjustment of inverse planning was implemented on a commercial inverse treatment planning system. For evaluation purposes, four typical clinical cases were tested, and the results from plans achieved by the automated and manual methods were compared. The procedure of parameter adjustment mainly consists of three major loops. Each loop is in charge of modifying parameters of one category, which is carried out by a specially customized fuzzy inference system. Multiple physician-prescribed constraints for a selected volume were adopted to account for the tradeoff between the prescription dose to the PTV and dose-volume constraints for critical organs. The search for an optimal parameter combination began with the first constraint and proceeded to the next until a plan with an acceptable dose was achieved. The initial setup of the plan parameters was the same for each case and was adjusted independently by both the manual and automated methods. After the parameters of one category were updated, the intensity maps of all fields were re-optimized and the plan dose was subsequently re-calculated. Once the final plan was reached, the dose statistics were calculated from both plans and compared. For the planning target volume (PTV), the dose to 95% of the volume is up to 10% higher in plans using the automated method than in those using the manual method. For critical organs, an average decrease of the plan dose was achieved. However, the automated method cannot improve the plan dose for some critical organs due to limitations of the inference rules currently employed. For normal tissue, there was no significant difference between plan doses achieved by either the automated or the manual method. With the application of the AI-guided method, the basic parameter adjustment task can be accomplished automatically, and a plan dose comparable to that achieved by the manual method was obtained. Future improvements to incorporate case-specific inference rules are essential to fully automate the inverse planning process.
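    The published abstract does not include the fuzzy inference rules themselves, so the following toy Python sketch only illustrates the outer adjustment loop it describes: after each change of a single parameter (here a hypothetical organ-at-risk penalty weight), the beamlet intensities are re-optimized and the dose statistic is re-checked against the constraint. The influence matrices, dose levels, and doubling update are placeholders, not the paper's method.

      import numpy as np
      from scipy.optimize import nnls

      # Hypothetical influence matrices mapping beamlet weights to voxel doses.
      rng = np.random.default_rng(1)
      A_ptv = rng.uniform(0.5, 1.0, size=(40, 10))   # PTV voxels x beamlets
      A_oar = rng.uniform(0.0, 0.6, size=(30, 10))   # OAR voxels x beamlets
      d_ptv, oar_limit = 60.0, 20.0                  # prescription / OAR mean-dose limit

      def reoptimize(w_oar):
          # Weighted least squares with nonnegative beamlet weights.
          A = np.vstack([A_ptv, np.sqrt(w_oar) * A_oar])
          b = np.concatenate([np.full(40, d_ptv), np.zeros(30)])
          x, _ = nnls(A, b)
          return x

      w_oar = 0.1
      for it in range(20):                 # outer loop: adjust one parameter category
          x = reoptimize(w_oar)
          oar_mean = (A_oar @ x).mean()
          if oar_mean <= oar_limit:        # constraint met, stop adjusting
              break
          w_oar *= 2.0                     # crude stand-in for a fuzzy inference step

      print(f"final OAR weight {w_oar:.2f}, mean OAR dose {oar_mean:.1f}")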

  14. Testing for Lorentz violation: constraints on standard-model-extension parameters via lunar laser ranging.

    PubMed

    Battat, James B R; Chandler, John F; Stubbs, Christopher W

    2007-12-14

    We present constraints on violations of Lorentz invariance based on archival lunar laser-ranging (LLR) data. LLR measures the Earth-Moon separation by timing the round-trip travel of light between the two bodies and is currently accurate to the equivalent of a few centimeters (parts in 10^{11} of the total distance). By analyzing this LLR data under the standard-model extension (SME) framework, we derived six observational constraints on dimensionless SME parameters that describe potential Lorentz violation. We found no evidence for Lorentz violation at the 10^{-6} to 10^{-11} level in these parameters. This work constitutes the first LLR constraints on SME parameters.

  15. Constraints on the ω- and σ-meson coupling constants with dibaryons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faessler, A.; Buchmann, A.J.; Krivoruchenko, M.I.

    The effect of narrow dibaryon resonances on basic nuclear matter properties and on the structure of neutron stars is investigated in mean-field theory and in relativistic Hartree approximation. The existence of massive neutron stars imposes constraints on the coupling constants of the ω and σ mesons with dibaryons. In the allowed region of the parameter space of the coupling constants, a Bose condensate of the light dibaryon candidates d_1(1920) and d′(2060) is stable against compression. This proves the stability of the ground state of heterophase nuclear matter with a Bose condensate of light dibaryons. © 1997 The American Physical Society.

  16. Geomagnetic main field modeling using magnetohydrodynamic constraints

    NASA Technical Reports Server (NTRS)

    Estes, R. H.

    1985-01-01

    The influence of physical constraints that may be approximately satisfied by the Earth's liquid core on models of the geomagnetic main field and its secular variation is investigated. A previous report describes the methodology used to incorporate nonlinear equations of constraint into the main field model. The application of that methodology to the GSFC 12/83 field model, to test the frozen-flux hypothesis and the usefulness of incorporating magnetohydrodynamic constraints for obtaining improved geomagnetic field models, is described.

  17. Fiber-reinforced materials: finite elements for the treatment of the inextensibility constraint

    NASA Astrophysics Data System (ADS)

    Auricchio, Ferdinando; Scalet, Giulia; Wriggers, Peter

    2017-12-01

    The present paper proposes a numerical framework for the analysis of problems involving fiber-reinforced anisotropic materials. Specifically, isotropic linear elastic solids, reinforced by a single family of inextensible fibers, are considered. The kinematic constraint equation of inextensibility in the fiber direction leads to the presence of an undetermined fiber stress in the constitutive equations. To avoid locking phenomena in the numerical solution due to the presence of the constraint, mixed finite elements based on the Lagrange multiplier, perturbed Lagrangian, and penalty methods are proposed. Several boundary-value problems under plane strain conditions are solved and the numerical results are compared to analytical solutions, whenever their derivation is possible. The simulations performed allow us to assess the performance of the proposed finite elements and to discuss several features of the developed formulations concerning the effective approximation of the displacement and fiber stress fields, mesh convergence, and sensitivity to penalty parameters.

  18. Cosmological Parameter Estimation Using the Genus Amplitude—Application to Mock Galaxy Catalogs

    NASA Astrophysics Data System (ADS)

    Appleby, Stephen; Park, Changbom; Hong, Sungwook E.; Kim, Juhan

    2018-01-01

    We study the topology of the matter density field in two-dimensional slices and consider how we can use the amplitude A of the genus for cosmological parameter estimation. Using the latest Horizon Run 4 simulation data, we calculate the genus of the smoothed density field constructed from light cone mock galaxy catalogs. Information can be extracted from the amplitude of the genus by considering both its redshift evolution and magnitude. The constancy of the genus amplitude with redshift can be used as a standard population, from which we derive constraints on the equation of state of dark energy w_de—by measuring A at z ∼ 0.1 and z ∼ 1, we can place a constraint of order Δw_de ∼ O(15%) on w_de. By comparing A to its Gaussian expectation value, we can potentially derive an additional stringent constraint on the matter density, ΔΩ_mat ∼ 0.01. We discuss the primary sources of contamination associated with the two measurements—redshift space distortion (RSD) and shot noise. With accurate knowledge of galaxy bias, we can successfully remove the effect of RSD, and the combined effect of shot noise and nonlinear gravitational evolution is suppressed by smoothing over suitably large scales R_G ≥ 15 Mpc/h. Without knowledge of the bias, we discuss how joint measurements of the two- and three-dimensional genus can be used to constrain the growth factor β = f/b. The method can be applied optimally to redshift slices of a galaxy distribution generated using the drop-off technique.
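    As a rough illustration of the kind of measurement described here, the numpy-only sketch below smooths a toy density field, thresholds it at a range of levels ν, computes the Euler characteristic of each excursion set from the pixel cell complex (χ = V − E + F), and fits the amplitude of the Gaussian-form curve A ν exp(−ν²/2). The field is smoothed white noise, not Horizon Run 4 data, and sign/normalisation conventions for the 2D genus vary between papers, so treat the amplitude purely as illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def euler_characteristic(mask):
          # chi = V - E + F for a 2D pixel set, each pixel a closed unit square.
          m = np.pad(mask, 1, constant_values=False)
          F = m.sum()
          E = (m[1:, :] | m[:-1, :]).sum() + (m[:, 1:] | m[:, :-1]).sum()
          V = (m[1:, 1:] | m[1:, :-1] | m[:-1, 1:] | m[:-1, :-1]).sum()
          return int(V) - int(E) + int(F)

      # Toy density slice: smoothed white noise standing in for a mock galaxy slice.
      rng = np.random.default_rng(2)
      field = gaussian_filter(rng.normal(size=(512, 512)), sigma=8)
      field = (field - field.mean()) / field.std()

      # Genus-like curve: Euler characteristic of excursion sets vs threshold nu.
      nus = np.linspace(-3, 3, 25)
      chi = np.array([euler_characteristic(field >= nu) for nu in nus])

      # Least-squares fit of the amplitude A in chi(nu) ~ A * nu * exp(-nu^2 / 2).
      basis = nus * np.exp(-nus**2 / 2)
      A = (basis @ chi) / (basis @ basis)
      print("fitted genus-curve amplitude A =", A)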

  19. Ambulatory instrumentation suitable for long-term monitoring of cattle health.

    PubMed

    Schoenig, S A; Hildreth, T S; Nagl, L; Erickson, H; Spire, M; Andresen, D; Warren, S

    2004-01-01

    The benefits of real-time health diagnoses of cattle are potentially tremendous. Early detection of transmissible disease, whether from natural or terrorist events, could help to avoid huge financial losses in the agriculture industry while also improving meat quality. This work discusses physiological and behavioral parameters relevant to cattle state-of-health assessment. These parameters, along with a potentially harsh monitoring environment, drive a set of design considerations that must be addressed when building systems to acquire long-term, real-time measurements in the field. A prototype system is presented that supports the measurement of suitable physiologic parameters and begins to address the design constraints for continuous state-of-health determination in free-roaming cattle.

  20. Linear spin-2 fields in most general backgrounds

    NASA Astrophysics Data System (ADS)

    Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael

    2016-04-01

    We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.

  1. Astrophysical tests of modified gravity: Constraints from distance indicators in the nearby universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Bhuvnesh; Vikram, Vinu; Sakstein, Jeremy

    2013-12-10

    We use distance measurements in the nearby universe to carry out new tests of gravity, surpassing other astrophysical tests by over two orders of magnitude for chameleon theories. The three nearby distance indicators—cepheids, tip of the red giant branch (TRGB) stars, and water masers—operate in gravitational fields of widely different strengths. This enables tests of scalar-tensor gravity theories because they are screened from enhanced forces to different extents. Inferred distances from cepheids and TRGB stars are altered (in opposite directions) over a range of chameleon gravity theory parameters well below the sensitivity of cosmological probes. Using published data, we have compared cepheid and TRGB distances in a sample of unscreened dwarf galaxies within 10 Mpc. We use a comparable set of screened galaxies as a control sample. We find no evidence for the order unity force enhancements expected in these theories. Using a two-parameter description of the models (the coupling strength and background field value), we obtain constraints on both the chameleon and symmetron screening scenarios. In particular we show that f(R) models with background field values f_R0 above 5 × 10^{-7} are ruled out at the 95% confidence level. We also compare TRGB and maser distances to the galaxy NGC 4258 as a second test for larger field values. While there are several approximations and caveats in our study, our analysis demonstrates the power of gravity tests in the local universe. We discuss the prospects for additional improved tests with future observations.

  2. Systems and methods for maintaining multiple objects within a camera field-of-view

    DOEpatents

    Gans, Nicholas R.; Dixon, Warren

    2016-03-15

    In one embodiment, a system and method for maintaining objects within a camera field of view include identifying constraints to be enforced, each constraint relating to an attribute of the viewed objects, identifying a priority rank for the constraints such that more important constraints have a higher priority than less important constraints, and determining the set of solutions that satisfy the constraints relative to the order of their priority rank such that solutions that satisfy lower ranking constraints are only considered viable if they also satisfy any higher ranking constraints, each solution providing an indication as to how to control the camera to maintain the objects within the camera field of view.

  3. Weak-field limit of Kaluza-Klein models with spherically symmetric static scalar field: observational constraints

    NASA Astrophysics Data System (ADS)

    Zhuk, Alexander; Chopovsky, Alexey; Fakhr, Seyed Hossein; Shulga, Valerii; Wei, Han

    2017-11-01

    In a multidimensional Kaluza-Klein model with Ricci-flat internal space, we study the gravitational field in the weak-field limit. This field is created by two coupled sources. The first is a point-like massive body which has a dust-like equation of state in the external space and an arbitrary parameter Ω of the equation of state in the internal space. The second source is a static spherically symmetric massive scalar field centered at the origin where the point-like massive body is. The perturbed metric coefficients obtained are used to calculate the parameterized post-Newtonian (PPN) parameter γ. We determine under which conditions γ can be very close to unity in accordance with the relativistic gravitational tests in the solar system. This can take place for both massive and massless scalar fields. For example, to have γ ≈ 1 in the solar system, the mass of the scalar field should be μ ≳ 5.05 × 10^{-49} g ∼ 2.83 × 10^{-16} eV. In all cases, we arrive at the same conclusion that, to be in agreement with the relativistic gravitational tests, the gravitating mass should have tension: Ω = -1/2.

  4. Emulating Simulations of Cosmic Dawn for 21 cm Power Spectrum Constraints on Cosmology, Reionization, and X-Ray Heating

    NASA Astrophysics Data System (ADS)

    Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley

    2017-10-01

    Current and upcoming radio interferometric experiments are aiming to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds fast enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on σ_8. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
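    The emulate-then-sample idea can be illustrated in a few lines. The sketch below is not the authors' released code: it trains a scikit-learn Gaussian-process regressor on a handful of evaluations of a toy "expensive" simulator, then uses the cheap emulator inside a plain Metropolis-Hastings loop. The simulator, prior, noise level, and proposal scale are all made up for the example.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def simulator(theta):
          # Toy stand-in for an expensive 21 cm simulation (scalar summary statistic).
          return np.sin(3 * theta[0]) + theta[1] ** 2

      rng = np.random.default_rng(3)
      X_train = rng.uniform(-1, 1, size=(40, 2))
      y_train = np.array([simulator(t) for t in X_train])

      emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
      emulator.fit(X_train, y_train)

      theta_true = np.array([0.3, -0.4])
      y_obs, sigma = simulator(theta_true) + 0.05 * rng.normal(), 0.05

      def log_post(theta):
          if np.any(np.abs(theta) > 1):          # flat prior on [-1, 1]^2
              return -np.inf
          y_pred = emulator.predict(theta.reshape(1, -1))[0]
          return -0.5 * ((y_pred - y_obs) / sigma) ** 2

      # Metropolis-Hastings using the emulator in place of the simulator.
      theta, lp, chain = np.zeros(2), log_post(np.zeros(2)), []
      for _ in range(5000):
          prop = theta + 0.1 * rng.normal(size=2)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          chain.append(theta.copy())
      print("posterior mean:", np.mean(chain, axis=0))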

  5. Hybrid Stars in the Light of GW170817

    NASA Astrophysics Data System (ADS)

    Nandi, Rana; Char, Prasanta

    2018-04-01

    We have studied the effect of the tidal deformability constraint given by the binary neutron star merger event GW170817 on the equations of state (EOS) of hybrid stars. The EOS are constructed by matching the hadronic EOS, described by the relativistic mean-field model with parameter sets NL3, TM1, and NL3ωρ, to the quark matter EOS described by the modified MIT bag model, via a Gibbs construction. It is found that the tidal deformability constraint, along with the lower bound on the maximum mass (M_max = 2.01 ± 0.04 M_⊙), significantly limits the bag model parameter space (B_eff^{1/4}, a_4). We also obtain upper limits on the radii of 1.4 M_⊙ and 1.6 M_⊙ stars of R_1.4 ≤ 13.2–13.5 km and R_1.6 ≤ 13.2–13.4 km, respectively, for the different hadronic EOS considered here.

  6. Comprehensive analysis of the simplest curvaton model

    NASA Astrophysics Data System (ADS)

    Byrnes, Christian T.; Cortês, Marina; Liddle, Andrew R.

    2014-07-01

    We carry out a comprehensive analysis of the simplest curvaton model, which is based on two noninteracting massive fields. Our analysis encompasses cases where the inflaton and curvaton both contribute to observable perturbations, and where the curvaton itself drives a second period of inflation. We consider both power spectrum and non-Gaussianity observables, and focus on presenting constraints in model parameter space. The fully curvaton-dominated regime is in some tension with observational data, while an admixture of inflaton-generated perturbations improves the fit. The inflating curvaton regime mimics the predictions of Nflation. Some parts of parameter space permitted by power spectrum data are excluded by non-Gaussianity constraints. The recent BICEP2 results [P. A. R. Ade et al. (BICEP2 Collaboration), Phys. Rev. Lett. 112, 241101 (2014)], if confirmed as of predominantly primordial origin, require that the inflaton perturbations provide a significant fraction of the total perturbation, ruling out the usual curvaton scenario in which the inflaton perturbations are negligible, though not the admixture regime where both inflaton and curvaton contribute to the spectrum.

  7. Multi-band implications of external-IC flares

    NASA Astrophysics Data System (ADS)

    Richter, Stephan; Spanier, Felix

    2015-02-01

    Very fast variability on scales of minutes is regularly observed in blazars. The assumption that these flares emerge from the dominant emission zone of the very high energy (VHE) radiation within the jet challenges current acceleration and radiation models. In this work we use a spatially resolved and time-dependent synchrotron self-Compton (SSC) model that includes the full time dependence of Fermi-I acceleration. We use the (apparent) orphan γ-ray flare of Mrk 501 during MJD 54952 and test various flare scenarios against the observed data. We find that a rapidly variable external radiation field reproduces the high-energy lightcurve best. However, the effect of the strong inverse Compton (IC) cooling on other bands and the X-ray observations constrain the parameters to rather extreme ranges. Other scenarios, in turn, would require even more extreme parameters or stronger physical constraints on the rise and decay of the source of the variability, which might be in contradiction with constraints derived from the size of the black hole's ergosphere.

  9. Extended RF shimming: Sequence-level parallel transmission optimization applied to steady-state free precession MRI of the heart.

    PubMed

    Beqiri, Arian; Price, Anthony N; Padormo, Francesco; Hajnal, Joseph V; Malik, Shaihan J

    2017-06-01

    Cardiac magnetic resonance imaging (MRI) at high field presents challenges because of the high specific absorption rate and significant transmit field (B_1^+) inhomogeneities. Parallel transmission MRI offers the ability to correct for both issues at the level of individual radiofrequency (RF) pulses, but must operate within strict hardware and safety constraints. The constraints are themselves affected by sequence parameters, such as the RF pulse duration and TR, meaning that an overall optimal operating point exists for a given sequence. This work seeks to obtain optimal performance by performing a 'sequence-level' optimization in which pulse sequence parameters are included as part of an RF shimming calculation. The method is applied to balanced steady-state free precession cardiac MRI with the objective of minimizing TR, hence reducing the imaging duration. Results are demonstrated using an eight-channel parallel transmit system operating at 3 T, with an in vivo study carried out on seven male subjects of varying body mass index (BMI). Compared with single-channel operation, a mean-squared-error shimming approach leads to reduced imaging durations of 32 ± 3% with simultaneous improvement in flip angle homogeneity of 32 ± 8% within the myocardium. © 2017 The Authors. NMR in Biomedicine published by John Wiley & Sons Ltd.

  10. A Multi-Parameter Approach for Calculating Crack Instability

    NASA Technical Reports Server (NTRS)

    Zanganeh, M.; Forman, R. G.

    2014-01-01

    An accurate fracture control analysis of spacecraft pressure systems, boosters, rocket hardware, and other critical low-cycle fatigue cases where the fracture toughness highly impacts cycles to failure requires accurate knowledge of the material fracture toughness. However, the applicability of fracture toughness values measured using standard specimens, and the transferability of those values to crack instability analysis of realistically complex structures, is questionable. The commonly used single-parameter Linear Elastic Fracture Mechanics (LEFM) approach, which relies on the key assumption that the fracture toughness is a material property, would result in inaccurate crack instability predictions. In past years, extensive studies have been conducted to improve the single-parameter (K-controlled) LEFM by introducing parameters accounting for geometry or in-plane constraint effects. Despite the importance of thickness (out-of-plane constraint) effects in fracture control problems, the literature is mainly limited to some empirical equations for scaling the fracture toughness data, and only a few theoretically based developments can be found. In aerospace hardware, where the structure might have only one life cycle and weight reduction is crucial, reducing the design margin of safety by decreasing the uncertainty involved in fracture toughness evaluations would result in lighter hardware. In such conditions LEFM would not suffice and an elastic-plastic analysis would be vital. Multi-parameter developments quantifying the elastic-plastic crack tip field, combined with statistical methods, have been shown to have the potential to be used as a powerful tool for tackling such problems. However, these approaches have not been comprehensively scrutinized using experimental tests. Therefore, in this paper a multi-parameter elastic-plastic approach has been used to study the crack instability problem and the transferability issue by considering the effects of geometrical constraints as well as the thickness. The feasibility of the approach has been examined using a wide range of specimen geometries and thicknesses manufactured from 7075-T7351 aluminum alloy.

  11. Constraints on CDM cosmology from galaxy power spectrum, CMB and SNIa evolution

    NASA Astrophysics Data System (ADS)

    Ferramacho, L. D.; Blanchard, A.; Zolnierowski, Y.

    2009-05-01

    Aims: We examine the constraints that can be obtained on standard cold dark matter models from the most currently used data set: CMB anisotropies, type Ia supernovae and the SDSS luminous red galaxies. We also examine how these constraints are widened when the equation of state parameter w and the curvature parameter Ω_k are left as free parameters. Finally, we investigate the impact on these constraints of a possible form of evolution in SNIa intrinsic luminosity. Methods: We obtained our results from MCMC analysis using the full likelihood of each data set. Results: For the ΛCDM model, our “vanilla” model, cosmological parameters are tightly constrained and consistent with current estimates from various methods. When the dark energy parameter w is free we find that the constraints remain mostly unchanged, i.e. changes are smaller than the 1σ uncertainties. Similarly, relaxing the assumption of a flat universe leads to nearly identical constraints on the dark energy density parameter of the universe Ω_Λ, the baryon density of the universe Ω_b, the optical depth τ, and the index of the power spectrum of primordial fluctuations n_S, with most 1σ uncertainties better than 5%. More significant changes appear on other parameters: while preferred values are almost unchanged, uncertainties for the physical dark matter density Ω_c h^2, the Hubble constant H_0, and σ_8 are typically twice as large. The constraint on the age of the Universe, which is very accurate for the vanilla model, is the most degraded. We found that different methodological approaches on large scale structure estimates lead to appreciable differences in preferred values and uncertainty widths. We found that possible evolution in SNIa intrinsic luminosity does not alter these constraints by much, except for w, for which the uncertainty is twice as large. At the same time, this possible evolution is severely constrained. Conclusions: We conclude that systematic uncertainties for some estimated quantities are similar to or larger than statistical ones.

  12. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

    An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. The results presented show that this method yields better solutions (in terms of resolution) to the particular problem than those of a standard analog program, and demonstrate the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
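    The creeping-random-search idea is simple enough to sketch directly. The toy objective below is a made-up stand-in for beam resolution as a function of two magnet settings, and the bounds play the role of the parameter constraints that keep every trial feasible; only the accept-if-better, small-random-step logic reflects the method described above.

      import numpy as np

      def creeping_random_search(objective, x0, lower, upper, step=0.05, iters=2000, seed=0):
          # Sequential random perturbation: accept a perturbed parameter vector
          # only if it stays inside the bounds and improves the objective.
          rng = np.random.default_rng(seed)
          x, best = np.array(x0, float), objective(x0)
          for _ in range(iters):
              trial = x + step * (upper - lower) * rng.normal(size=x.size)
              if np.any(trial < lower) or np.any(trial > upper):
                  continue                  # parameter constraints: reject infeasible trials
              val = objective(trial)
              if val < best:                # keep only improving moves
                  x, best = trial, val
          return x, best

      # Toy stand-in for beam resolution as a function of two magnet settings.
      objective = lambda x: (x[0] - 0.7) ** 2 + 2.0 * (x[1] + 0.3) ** 2 + 0.1 * np.sin(8 * x[0])
      lower, upper = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
      x_opt, f_opt = creeping_random_search(objective, [0.0, 0.0], lower, upper)
      print("best settings:", x_opt, "objective:", f_opt)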

  13. Joint cosmic microwave background and weak lensing analysis: constraints on cosmological parameters.

    PubMed

    Contaldi, Carlo R; Hoekstra, Henk; Lewis, Antony

    2003-06-06

    We use cosmic microwave background (CMB) observations together with the red-sequence cluster survey weak lensing results to derive constraints on a range of cosmological parameters. This particular choice of observations is motivated by their robust physical interpretation and complementarity. Our combined analysis, including a weak nucleosynthesis constraint, yields accurate determinations of a number of parameters including the amplitude of fluctuations σ_8 = 0.89 ± 0.05 and matter density Ω_m = 0.30 ± 0.03. We also find a value for the Hubble parameter of H_0 = 70 ± 3 km s^{-1} Mpc^{-1}, in good agreement with the Hubble Space Telescope key-project result. We conclude that the combination of CMB and weak lensing data provides some of the most powerful constraints available in cosmology today.

  14. A model with isospin doublet U(1)D gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-05-01

    We propose a model with an extra isospin doublet U(1)_D gauge symmetry, in which we introduce several extra fermions with odd parity under a discrete Z_2 symmetry in order to cancel the gauge anomalies. A remarkable feature is that we assign a nonzero U(1)_D charge to the Standard Model Higgs, and this yields the most stringent constraint on the vacuum expectation value of the scalar field breaking the U(1)_D symmetry, more severe than the LEP bound. We then explore the relic density of a Majorana dark matter candidate without conflicting with constraints from lepton flavor violating processes. A global analysis is carried out to search for parameters which can accommodate the observed data.

  15. Functional renormalization group for the U(1)-T56 tensorial group field theory with closure constraint

    NASA Astrophysics Data System (ADS)

    Lahoche, Vincent; Ousmane Samary, Dine

    2017-02-01

    This paper focuses on the functional renormalization group applied to the T56 tensor model on the Abelian group U(1) with closure constraint. First, we derive the flow equations for the couplings and mass parameters in a suitable truncation around the marginal interactions with respect to the perturbative power counting. Second, we study the behavior around the Gaussian fixed point and show that the theory is not asymptotically free. Finally, we discuss the UV completion of the theory. We show the existence of several nontrivial fixed points, study the behavior of the renormalization group flow around them, and point out evidence in favor of an asymptotically safe theory.

  16. Optimization Design of Minimum Total Resistance Hull Form Based on CFD Method

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Zhang, Sheng-long; Zhang, Hui

    2018-06-01

    In order to reduce the resistance and improve the hydrodynamic performance of a ship, two hull form design methods are proposed based on potential flow theory and viscous flow theory. The flow fields are meshed using body-fitted, structured grids. The parameters of the hull modification function are the design variables. A three-dimensional modeling method is used to alter the geometry. A Non-Linear Programming (NLP) method is utilized to optimize a David Taylor Model Basin (DTMB) model 5415 ship subject to constraints, including a displacement constraint. The optimization results show an effective reduction of the resistance. The two hull form design methods developed in this study can provide technical support and a theoretical basis for designing green ships.
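    The constrained NLP step can be sketched generically. In the toy Python example below the resistance response surface and the displacement-change function are invented quadratic/linear stand-ins for CFD evaluations, and the two design variables are hypothetical hull-modification parameters; only the pattern (minimize resistance subject to an unchanged displacement and bounded modifications) mirrors the formulation described above.

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical design variables: x = (bow fullness change, stern fullness change).
      def total_resistance(x):
          # Toy stand-in for a CFD-evaluated resistance response surface.
          return 1.0 + 0.8 * x[0] ** 2 + 0.5 * x[1] ** 2 - 0.3 * x[0] * x[1] - 0.2 * x[0]

      def displacement_change(x):
          # Toy stand-in for the change in displaced volume; must remain zero.
          return 2.0 * x[0] + 1.5 * x[1]

      res = minimize(
          total_resistance,
          x0=[0.0, 0.0],
          method="SLSQP",
          bounds=[(-0.5, 0.5), (-0.5, 0.5)],
          constraints=[{"type": "eq", "fun": displacement_change}],
      )
      print("optimal modification parameters:", res.x, "resistance:", res.fun)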

  17. Time limited field of regard search

    NASA Astrophysics Data System (ADS)

    Flug, Eric; Maurer, Tana; Nguyen, Oanh-Tho

    2005-05-01

    Recent work by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has led to the Time-Limited Search (TLS) model, which has given new formulations for the field of view (FOV) search times. The next step in the evaluation of the overall search model (ACQUIRE) is to apply these parameters to the field of regard (FOR) model. Human perception experiments were conducted using synthetic imagery developed at NVESD. The experiments were competitive player-on-player search tests with the intention of imposing realistic time constraints on the observers. FOR detection probabilities, search times, and false alarm data are analyzed and compared to predictions using both the TLS model and ACQUIRE.

  18. Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints.

    PubMed

    López-Nicolás, Gonzalo; Gans, Nicholas R; Bhattacharya, Sourabh; Sagüés, Carlos; Guerrero, Josechu J; Hutchinson, Seth

    2010-08-01

    In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. The selection of the corresponding control law requires the homography decomposition before starting the navigation. We provide a controllability and stability analysis for our system and give experimental results.

  20. Skew-flavored dark matter

    DOE PAGES

    Agrawal, Prateek; Chacko, Zackaria; Fortes, Elaine C. F. S.; ...

    2016-05-10

    We explore a novel flavor structure in the interactions of dark matter with the Standard Model. We consider theories in which both the dark matter candidate, and the particles that mediate its interactions with the Standard Model fields, carry flavor quantum numbers. The interactions are skewed in flavor space, so that a dark matter particle does not directly couple to the Standard Model matter fields of the same flavor, but only to the other two flavors. This framework respects minimal flavor violation and is, therefore, naturally consistent with flavor constraints. We study the phenomenology of a benchmark model in which dark matter couples to right-handed charged leptons. In large regions of parameter space, the dark matter can emerge as a thermal relic, while remaining consistent with the constraints from direct and indirect detection. The collider signatures of this scenario include events with multiple leptons and missing energy. These events exhibit a characteristic flavor pattern that may allow this class of models to be distinguished from other theories of dark matter.

  1. Current and Future Constraints on Primordial Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Sutton, Dylan R.; Feng, Chang; Reichardt, Christian L.

    2017-09-01

    We present new limits on the amplitude of potential primordial magnetic fields (PMFs) using temperature and polarization measurements of the cosmic microwave background (CMB) from Planck, the BICEP2/Keck Array, Polarbear, and SPTpol. We reduce twofold the 95% confidence upper limit on the CMB anisotropy power due to a nearly scale-invariant PMF, with an allowed B-mode power at ℓ = 1500 of D_{ℓ=1500}^{BB} < 0.071 μK^2 for Planck versus D_{ℓ=1500}^{BB} < 0.034 μK^2 for the combined data set. We also forecast the expected limits from soon-to-deploy CMB experiments (like SPT-3G, Adv. ACTpol, or the Simons Array) and the proposed CMB-S4 experiment. Future CMB experiments should dramatically reduce the current uncertainties by one order of magnitude for the near-term experiments and two orders of magnitude for the CMB-S4 experiment. The constraints from CMB-S4 have the potential to rule out much of the parameter space for PMFs.

  2. On classical mechanical systems with non-linear constraints

    NASA Astrophysics Data System (ADS)

    Terra, Gláucio; Kobayashi, Marcelo H.

    2004-03-01

    In the present work, we analyze classical mechanical systems with non-linear constraints in the velocities. We prove that the d'Alembert-Chetaev trajectories of a constrained mechanical system satisfy both Gauss' principle of least constraint and Hölder's principle. In the case of free mechanics, they also satisfy Hertz's principle of least curvature if the constraint manifold is a cone. We show that the Gibbs-Maggi-Appell (GMA) vector field (i.e. the second-order vector field which defines the d'Alembert-Chetaev trajectories) conserves energy for any potential energy if, and only if, the constraint is homogeneous (i.e. if the Liouville vector field is tangent to the constraint manifold). We introduce the Jacobi-Carathéodory metric tensor and prove Jacobi-Carathéodory's theorem assuming that the constraint manifold is a cone. Finally, we present a version of Liouville's theorem on the conservation of volume for the flow of the GMA vector field.

  3. Observational constraints on holographic tachyonic dark energy in interaction with dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Micheletti, Sandro M. R., E-mail: smrm@fma.if.usp.br

    2010-05-01

    We discuss an interacting tachyonic dark energy model in the context of the holographic principle. The potential of the holographic tachyon field in interaction with dark matter is constructed. The model results are compared with the CMB shift parameter, baryon acoustic oscillations, lookback time, and the Constitution supernovae sample. The coupling constant of the model is compatible with zero, but dark energy is not given by a cosmological constant.

  4. Evaluation of GIS Technology in Assessing and Modeling Land Management Practices

    NASA Technical Reports Server (NTRS)

    Archer, F.; Coleman, T. L.; Manu, A.; Tadesse, W.; Liu, G.

    1997-01-01

    There is an increasing concern of land owners to protect and maintain healthy and sustainable agroecosystems through the implementation of best management practices (BMP). The objectives of this study were: (1) to develop and evaluate the use of Geographic Information System (GIS) technology for enhancing field-scale management practices; (2) to evaluate the use of 2-dimensional displays of the landscape; and (3) to define spatial classes of variables from interpretation of geostatistical parameters. Soil samples were collected to a depth of 2 m at 15 cm increments. Existing data from topographic, land use, and soil survey maps of the Winfred Thomas Agricultural Research Station were converted to digital format. Additional soils data, which included texture, pH, and organic matter, were also generated. The digitized parameters were used to create a multilayered field-scale GIS. Two-dimensional (2-D) displays of the parameters were generated using the ARC/INFO software. The spatial distributions of the parameters evaluated in both fields were similar, which could be attributed to the similarity in vegetation and surface elevation. The ratio of the nugget to total semivariance, expressed as a percentage, was used to assess the degree of spatial variability. The results indicated that most of the parameters were moderately spatially dependent. Biophysical constraint maps were generated from the database layers and used in multiple combinations to visualize the results of the BMP. Understanding the spatial relationships of physical and chemical parameters that exist within a field should enable land managers to more effectively implement BMP to ensure a safe and sustainable environment.
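    The nugget-to-total-semivariance ratio mentioned above can be computed from scattered samples with a short script. The sketch below uses invented soil-pH samples, takes the shortest-lag semivariance as a crude nugget estimate (a fitted variogram model would normally be used), and applies a commonly quoted rule of thumb (<25% strong, 25-75% moderate, >75% weak spatial dependence), which is an assumption here rather than something stated in the abstract.

      import numpy as np

      def empirical_semivariogram(coords, values, bins):
          # Average 0.5 * (z_i - z_j)^2 for point pairs grouped by separation distance.
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          g = 0.5 * (values[:, None] - values[None, :]) ** 2
          iu = np.triu_indices_from(d, k=1)
          d, g = d[iu], g[iu]
          idx = np.digitize(d, bins)
          lags = np.array([d[idx == i].mean() for i in range(1, len(bins))])
          gamma = np.array([g[idx == i].mean() for i in range(1, len(bins))])
          return lags, gamma

      # Hypothetical soil-pH samples on a field (coordinates in metres).
      rng = np.random.default_rng(4)
      coords = rng.uniform(0, 300, size=(200, 2))
      values = 6.0 + 0.004 * coords[:, 0] + 0.2 * rng.normal(size=200)

      lags, gamma = empirical_semivariogram(coords, values, bins=np.linspace(0, 150, 11))
      nugget, sill = gamma[0], gamma[-1]           # crude nugget and sill estimates
      print(f"nugget-to-sill ratio: {100 * nugget / sill:.0f}% "
            "(<25% strong, 25-75% moderate, >75% weak spatial dependence)")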

  5. TH-CD-209-01: A Greedy Reassignment Algorithm for the PBS Minimum Monitor Unit Constraint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Y; Kooy, H; Craft, D

    2016-06-15

    Purpose: To investigate a Greedy Reassignment algorithm in order to mitigate the effects of low-weight spots in proton pencil beam scanning (PBS) treatment plans. Methods: To convert a plan from the treatment planning system (TPS) into a deliverable plan, post-processing methods can be used to adjust the spot maps to meet the minimum MU constraint. Existing methods include deleting low-weight spots (Cut method), or rounding spots with weight above/below half the limit up/down to the limit/zero (Round method). An alternative method, called Greedy Reassignment, was developed in this work in which the lowest-weight spot in the field was removed and its weight reassigned equally among its nearest neighbors. The process was repeated with the next lowest-weight spot until all spots in the field were above the MU constraint. The algorithm performance was evaluated using plans collected from 190 patients (496 fields) treated at our facility. The evaluation criterion was the γ-index pass rate comparing the pre-processed and post-processed dose distributions. A planning metric was further developed to predict the impact of post-processing on treatment plans for various treatment planning, machine, and dose tolerance parameters. Results: For fields with a gamma pass rate of 90±1%, the metric has a standard deviation equal to 18% of the centroid value. This showed that the metric and γ-index pass rate are correlated for the Greedy Reassignment algorithm. Using a 3rd order polynomial fit to the data, the Greedy Reassignment method had a 1.8 times better metric at a 90% pass rate compared to the other post-processing methods. Conclusion: We showed that the Greedy Reassignment method yields deliverable plans that are closest to the optimized-without-MU-constraint plan from the TPS. The metric developed in this work could help design the minimum MU threshold with the goal of keeping the γ-index pass rate above an acceptable value.
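    The reassignment step itself is only a few lines. The sketch below follows the description above (remove the lowest-weight spot, share its weight equally among its nearest neighbors, repeat until every spot meets the limit); the spot positions, the MU limit, and the choice of four neighbors are hypothetical, since the abstract does not specify how many neighbors receive the weight.

      import numpy as np

      def greedy_reassignment(positions, weights, mu_min, n_neighbors=4):
          # Remove the lowest-weight spot below mu_min and reassign its weight
          # equally among its nearest remaining neighbors; repeat until all
          # remaining spots satisfy the minimum-MU constraint.
          pos = np.asarray(positions, float)
          w = np.asarray(weights, float)
          while w.size > 1 and w.min() < mu_min:
              i = int(np.argmin(w))
              d = np.linalg.norm(pos - pos[i], axis=1)
              d[i] = np.inf                              # exclude the spot itself
              k = min(n_neighbors, w.size - 1)
              neighbors = np.argsort(d)[:k]
              w[neighbors] += w[i] / k                   # share the weight equally
              pos, w = np.delete(pos, i, axis=0), np.delete(w, i)
          return pos, w

      # Toy spot map for one field (hypothetical positions in mm, weights in MU).
      rng = np.random.default_rng(5)
      positions = rng.uniform(0, 50, size=(30, 2))
      weights = rng.uniform(0.5, 5.0, size=30)
      pos_out, w_out = greedy_reassignment(positions, weights, mu_min=2.0)
      print(f"{w_out.size} spots kept, minimum weight {w_out.min():.2f} MU")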

  6. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decision making. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters (and the predictions that depend on them) arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.
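    For readers who want the linear-theory core of this kind of analysis, a compact sketch is given below. It evaluates the standard Bayesian-linear (Schur complement) expression for pre- and post-calibration predictive variance, which is the formula underlying PEST-style linear uncertainty utilities; the Jacobian, covariance matrices, and prediction sensitivities here are random placeholders rather than output from any real model.

      import numpy as np

      def predictive_uncertainty(J, C_p, C_eps, y):
          # J     : Jacobian of observations with respect to parameters
          # C_p   : prior parameter covariance (knowledge constraints)
          # C_eps : observation noise covariance (calibration constraints)
          # y     : sensitivity of the prediction to the parameters
          prior = y @ C_p @ y
          S = J @ C_p @ J.T + C_eps
          reduction = y @ C_p @ J.T @ np.linalg.solve(S, J @ C_p @ y)
          return prior, prior - reduction

      # Placeholder system with 8 parameters and 20 observations.
      rng = np.random.default_rng(6)
      J = rng.normal(size=(20, 8))
      C_p = np.diag(rng.uniform(0.5, 2.0, size=8))
      C_eps = 0.1 * np.eye(20)
      y = rng.normal(size=8)

      prior_var, post_var = predictive_uncertainty(J, C_p, C_eps, y)
      print(f"prior predictive variance {prior_var:.3f}, post-calibration {post_var:.3f}")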

  7. Constraints on a generalized deceleration parameter from cosmic chronometers

    NASA Astrophysics Data System (ADS)

    Mamon, Abdulla Al

    2018-04-01

    In this paper, we have proposed a generalized parametrization for the deceleration parameter q in order to study the evolutionary history of the universe. We have shown that the proposed model can reproduce three well-known q-parametrized models for some specific values of the model parameter α. We have used the latest compilation of Hubble parameter measurements obtained from the cosmic chronometer (CC) method (in combination with the local value of the Hubble constant H_0) and the Type Ia supernova (SNIa) data to place constraints on the parameters of the model for different values of α. We have found that the resulting constraints on the deceleration parameter and the dark energy equation of state support the ΛCDM model within the 1σ confidence level at the present epoch.
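    The abstract does not reproduce the specific parametrization, so the sketch below uses a common illustrative form, q(z) = q0 + q1 z/(1+z), together with the exact kinematic relation H(z) = H0 exp(∫0^z [1+q(u)]/(1+u) du), and fits it to synthetic placeholder H(z) points standing in for a cosmic-chronometer compilation. Only the fitting pattern is meant to match the analysis described above; the parametrization, data values, and error bars are assumptions.

      import numpy as np
      from scipy.integrate import quad
      from scipy.optimize import minimize

      def q(z, q0, q1):
          # Illustrative two-parameter form (not necessarily the paper's).
          return q0 + q1 * z / (1.0 + z)

      def H_model(z, H0, q0, q1):
          # Exact kinematic relation: H(z) = H0 * exp( int_0^z [1 + q(u)] / (1 + u) du )
          integral, _ = quad(lambda u: (1.0 + q(u, q0, q1)) / (1.0 + u), 0.0, z)
          return H0 * np.exp(integral)

      # Synthetic placeholder data (km/s/Mpc), not a real CC compilation.
      z_data = np.array([0.1, 0.3, 0.5, 0.9, 1.3, 1.75])
      H_data = np.array([69.0, 78.0, 88.0, 105.0, 128.0, 155.0])
      H_err = np.full_like(H_data, 8.0)

      def chi2(params):
          H0, q0, q1 = params
          model = np.array([H_model(z, H0, q0, q1) for z in z_data])
          return np.sum(((model - H_data) / H_err) ** 2)

      fit = minimize(chi2, x0=[70.0, -0.5, 0.5], method="Nelder-Mead")
      H0_fit, q0_fit, _ = fit.x
      print(f"H0 = {H0_fit:.1f}, q(z=0) = {q0_fit:.2f}  (fit to synthetic data)")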

  8. Exploring the hyperchargeless Higgs triplet model up to the Planck scale

    NASA Astrophysics Data System (ADS)

    Khan, Najimuddin

    2018-04-01

    We examine an extension of the SM Higgs sector by a Higgs triplet, taking into consideration the discovery at the LHC of a Higgs-like particle with mass around 125 GeV. We evaluate the bounds on the scalar potential through the unitarity of the scattering matrix. Considering the cases with and without Z_2-symmetry of the extra triplet, we derive constraints on the parameter space. We identify the region of the parameter space that corresponds to the stability and metastability of the electroweak vacuum. We also show that at large field values the scalar potential of this model is suitable to explain inflation.

  9. Relativistic protons in the Coma galaxy cluster: first gamma-ray constraints ever on turbulent reacceleration

    NASA Astrophysics Data System (ADS)

    Brunetti, G.; Zimmer, S.; Zandanel, F.

    2017-12-01

    The Fermi-LAT (Large Area Telescope) collaboration recently published deep upper limits to the gamma-ray emission of the Coma cluster, a cluster hosting the prototype of giant radio haloes. In this paper, we extend previous studies and use a formalism that combines particle reacceleration by turbulence and the generation of secondary particles in the intracluster medium to constrain relativistic protons and their role for the origin of the radio halo. We conclude that a pure hadronic origin of the halo is clearly disfavoured as it would require excessively large magnetic fields. However, secondary particles can still generate the observed radio emission if they are reaccelerated. For the first time the deep gamma-ray limits allow us to derive meaningful constraints if the halo is generated during phases of reacceleration of relativistic protons and their secondaries by cluster-scale turbulence. In this paper, we explore a relevant range of parameter space of reacceleration models of secondaries. Within this parameter space, a fraction of model configurations is already ruled out by current gamma-ray limits, including the cases that assume weak magnetic fields in the cluster core, B ≤ 2-3 μG. Interestingly, we also find that the flux predicted by a large fraction of model configurations assuming magnetic fields consistent with Faraday rotation measures (RMs) is not far from the limits. This suggests that a detection of gamma-rays from the cluster might be possible in the near future, provided that the electrons generating the radio halo are secondaries reaccelerated and the magnetic field in the cluster is consistent with that inferred from RM.

  10. A differentiable reformulation for E-optimal design of experiments in nonlinear dynamic biosystems.

    PubMed

    Telen, Dries; Van Riet, Nick; Logist, Flip; Van Impe, Jan

    2015-06-01

    Informative experiments are highly valuable for estimating parameters in nonlinear dynamic bioprocesses. Techniques for optimal experiment design ensure the systematic design of such informative experiments. The E-criterion, which can be used as an objective function in optimal experiment design, requires the maximization of the smallest eigenvalue of the Fisher information matrix. However, one problem with the minimal eigenvalue function is that it can be nondifferentiable. In addition, no closed form expression exists for the computation of eigenvalues of a matrix larger than a 4 by 4 one. As eigenvalues are normally computed with iterative methods, state-of-the-art optimal control solvers are not able to exploit automatic differentiation to compute the derivatives with respect to the decision variables. In the current paper a reformulation strategy from the field of convex optimization is suggested to circumvent these difficulties. This reformulation requires the inclusion of a matrix inequality constraint involving positive semidefiniteness. In this paper, this positive semidefiniteness constraint is imposed via Sylvester's criterion. As a result, the maximization of the minimum eigenvalue function can be formulated in standard optimal control solvers through the addition of nonlinear constraints. The presented methodology is successfully illustrated with a case study from the field of predictive microbiology.
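
    A minimal sketch of the reformulation idea follows: instead of maximizing the (possibly nondifferentiable) smallest eigenvalue directly, introduce an auxiliary variable t, maximize t, and require F - t*I to be positive definite by constraining its leading principal minors (Sylvester's criterion) to be nonnegative. The toy Fisher information matrix (two sampling times for an exponential-decay model) and the solver settings are illustrative assumptions, not the paper's case study.

    ```python
    import numpy as np
    from scipy.optimize import NonlinearConstraint, minimize

    def sensitivities(t, a=1.0, b=1.0):
        """d y / d (a, b) for the toy model y = a * exp(-b * t), at nominal a, b."""
        return np.array([np.exp(-b * t), -a * t * np.exp(-b * t)])

    def fim(times):
        """Fisher information matrix for unit-variance measurements at the given times."""
        return sum(np.outer(s, s) for s in (sensitivities(t) for t in times))

    def leading_minors(z):
        """Leading principal minors of F - t*I; Sylvester's criterion asks for these to be positive."""
        times, t = z[:2], z[2]
        M = fim(times) - t * np.eye(2)
        return np.array([M[0, 0], np.linalg.det(M)])

    # E-optimal design: choose two sampling times in [0, 5] that maximize the smallest
    # eigenvalue of the FIM, reformulated as: maximize t subject to F(times) - t*I >= 0.
    res = minimize(lambda z: -z[2], x0=[0.5, 2.0, 0.01],
                   bounds=[(0.0, 5.0), (0.0, 5.0), (0.0, None)],
                   constraints=[NonlinearConstraint(leading_minors, 0.0, np.inf)],
                   method="trust-constr")
    times_opt, t_opt = res.x[:2], res.x[2]
    print(times_opt, t_opt, np.linalg.eigvalsh(fim(times_opt)).min())
    ```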

  11. Recent Advances in Stellarator Optimization

    NASA Astrophysics Data System (ADS)

    Gates, David; Brown, T.; Breslau, J.; Landreman, M.; Lazerson, S. A.; Mynick, H.; Neilson, G. H.; Pomphrey, N.

    2016-10-01

    Computational optimization has revolutionized the field of stellarator design. To date, optimizations have focused primarily on neoclassical confinement and ideal MHD stability, although limited optimization of other parameters has also been performed. One criticism that has been levelled at this method of design is the complexity of the resultant field coils. Recently, a new coil optimization code, COILOPT++, was written and included in the STELLOPT suite of codes. The advantage of this method is that it allows the addition of real-space constraints on the locations of the coils. As an initial exercise, a constraint that the windings be vertical was placed on the large-major-radius half of the non-planar coils. Further constraints were also imposed that guaranteed that sector blanket modules could be removed from between the coils, enabling a sector maintenance scheme. Results of this exercise will be presented. We have also explored possibilities for generating an experimental database that could check whether the reduction in turbulent transport that is predicted by GENE as a function of local shear would be consistent with experiments. To this end, a series of equilibria that can be made in the now latent QUASAR experiment have been identified. This work was supported by U.S. DoE Contract #DE-AC02-09CH11466.

  12. CONSTRAINTS ON THE INTERGALACTIC MAGNETIC FIELD WITH GAMMA-RAY OBSERVATIONS OF BLAZARS

    DOE PAGES

    Finke, Justin D.; Reyes, Luis C.; Georganopoulos, Markos; ...

    2015-11-12

    Distant BL Lacertae objects emit γ rays which interact with the extragalactic background light (EBL), creating electron-positron pairs and reducing the flux measured by ground-based imaging atmospheric Cherenkov telescopes (IACTs) at very-high energies (VHE). These pairs can Compton-scatter the cosmic microwave background, creating a γ-ray signature at slightly lower energies observable by the Fermi Large Area Telescope (LAT). This signal is strongly dependent on the intergalactic magnetic field (IGMF) strength (B) and its coherence length (L_B). We use IACT spectra taken from the literature for 5 VHE-detected BL Lac objects, and combine them with LAT spectra for these sources to constrain these IGMF parameters. Low B values can be ruled out by the constraint that the cascade flux cannot exceed that observed by the LAT. High values of B can be ruled out from the constraint that the EBL-deabsorbed IACT spectrum cannot be greater than the LAT spectrum extrapolated into the VHE band, unless the cascade spectrum contributes a sizable fraction of the LAT flux. We rule out low B values (B ≲ 10^-19 G for L_B ≥ 1 Mpc) at > 5σ in all trials with different EBL models and data selection, except when

  13. Using optimal control methods with constraints to generate singlet states in NMR

    NASA Astrophysics Data System (ADS)

    Rodin, Bogdan A.; Kiryutin, Alexey S.; Yurkovskaya, Alexandra V.; Ivanov, Konstantin L.; Yamamoto, Satoru; Sato, Kazunobu; Takui, Takeji

    2018-06-01

    A method is proposed for optimizing the performance of the APSOC (Adiabatic-Passage Spin Order Conversion) technique, which can be exploited in NMR experiments with singlet spin states. In this technique magnetization-to-singlet conversion (and singlet-to-magnetization conversion) is performed by using adiabatically ramped RF-fields. Optimization utilizes the GRAPE (Gradient Ascent Pulse Engineering) approach, in which, for a fixed search area, the envelope of the RF-field is assumed to be monotonic. Such an approach allows one to achieve much better performance for APSOC; consequently, the efficiency of magnetization-to-singlet conversion is greatly improved as compared to simple model RF-ramps, e.g., linear ramps. We also demonstrate that the optimization method is reasonably robust to possible inaccuracies in determining NMR parameters of the spin system under study and also in setting the RF-field parameters. The present approach can be exploited in other NMR and EPR applications using adiabatic switching of spin Hamiltonians.

  14. UCMS - A new signal parameter measurement system using digital signal processing techniques. [User Constraint Measurement System

    NASA Technical Reports Server (NTRS)

    Choi, H. J.; Su, Y. T.

    1986-01-01

    The User Constraint Measurement System (UCMS) is a hardware/software package developed by NASA Goddard to measure the signal parameter constraints of the user transponder in the TDRSS environment by means of an all-digital signal sampling technique. An account is presently given of the features of UCMS design and of its performance capabilities and applications; attention is given to such important aspects of the system as RF interface parameter definitions, hardware minimization, the emphasis on offline software signal processing, and end-to-end link performance. Applications to the measurement of other signal parameters are also discussed.

  15. Future Cosmological Constraints From Fast Radio Bursts

    NASA Astrophysics Data System (ADS)

    Walters, Anthony; Weltman, Amanda; Gaensler, B. M.; Ma, Yin-Zhe; Witzemann, Amadeus

    2018-03-01

    We consider the possible observation of fast radio bursts (FRBs) with planned future radio telescopes, and investigate how well the dispersions and redshifts of these signals might constrain cosmological parameters. We construct mock catalogs of FRB dispersion measure (DM) data and employ Markov Chain Monte Carlo analysis, with which we forecast and compare with existing constraints in the flat ΛCDM model, as well as some popular extensions that include dark energy equation of state and curvature parameters. We find that the scatter in DM observations caused by inhomogeneities in the intergalactic medium (IGM) poses a big challenge to the utility of FRBs as a cosmic probe. Only in the most optimistic case, with a high number of events and low IGM variance, do FRBs aid in improving current constraints. In particular, when FRBs are combined with CMB+BAO+SNe+H_0 data, we find the biggest improvement comes in the Ω_b h^2 constraint. Also, we find that the dark energy equation of state is poorly constrained, while the constraint on the curvature parameter, Ω_k, shows some improvement when combined with current constraints. When FRBs are combined with future baryon acoustic oscillation (BAO) data from 21 cm Intensity Mapping, we find little improvement over the constraints from BAOs alone. However, the inclusion of FRBs introduces an additional parameter constraint, Ω_b h^2, which turns out to be comparable to existing constraints. This suggests that FRBs provide valuable information about the cosmological baryon density in the intermediate redshift universe, independent of high-redshift CMB data.
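
    For context, the sketch below evaluates the mean IGM dispersion-measure-redshift relation that underlies such forecasts, DM_IGM(z) ∝ ∫ (1+z')/E(z') dz', in a flat ΛCDM background. The prefactor (diffuse baryon fraction f_IGM, ionization factor χ ≈ 7/8) and the fiducial parameter values are common illustrative choices, not the paper's exact pipeline, which also models the scatter about this mean.

    ```python
    import numpy as np
    from scipy.constants import G, c, m_p, parsec
    from scipy.integrate import quad

    def mean_dm_igm(z, H0=67.7, Omega_b=0.049, Omega_m=0.31, f_igm=0.83, chi=7.0 / 8.0):
        """Mean IGM dispersion measure out to redshift z (pc cm^-3), flat LambdaCDM."""
        H0_si = H0 * 1e3 / (1e6 * parsec)                    # km/s/Mpc -> 1/s
        E = lambda zp: np.sqrt(Omega_m * (1.0 + zp) ** 3 + 1.0 - Omega_m)
        integral, _ = quad(lambda zp: (1.0 + zp) / E(zp), 0.0, z)
        prefactor = 3.0 * c * Omega_b * H0_si * f_igm * chi / (8.0 * np.pi * G * m_p)
        dm_si = prefactor * integral                         # electrons per m^2
        return dm_si / (parsec * 1e6)                        # 1 pc cm^-3 = 3.086e22 m^-2

    for z in (0.5, 1.0, 2.0):
        print(z, round(mean_dm_igm(z)))                      # roughly ~900 pc cm^-3 at z = 1
    ```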

  16. Solar system and equivalence principle constraints on f(R) gravity by the chameleon approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capozziello, Salvatore; Tsujikawa, Shinji

    2008-05-15

    We study constraints on f(R) dark energy models from solar system experiments combined with experiments on the violation of the equivalence principle. When the mass of an equivalent scalar field degree of freedom is heavy in a region with high density, a spherically symmetric body has a thin shell so that an effective coupling of the fifth force is suppressed through a chameleon mechanism. We place experimental bounds on the cosmologically viable models recently proposed in the literature that have an asymptotic form f(R) = R - λR_c[1 - (R_c/R)^{2n}] in the regime R >> R_c. From the solar system constraints on the post-Newtonian parameter γ, we derive the bound n > 0.5, whereas the constraints from the violations of the weak and strong equivalence principles give the bound n > 0.9. This allows a possibility to find the deviation from the Λ-cold dark matter (ΛCDM) cosmological model. For the model f(R) = R - λR_c(R/R_c)^p with 0

  17. The Relationship Between Constraint and Ductile Fracture Initiation as Defined by Micromechanical Analyses

    NASA Technical Reports Server (NTRS)

    Panontin, Tina L.; Sheppard, Sheri D.

    1994-01-01

    The use of small laboratory specimens to predict the integrity of large, complex structures relies on the validity of single parameter fracture mechanics. Unfortunately, the constraint loss associated with large scale yielding, whether in a laboratory specimen because of its small size or in a structure because it contains shallow flaws loaded in tension, can cause the breakdown of classical fracture mechanics and the loss of transferability of critical, global fracture parameters. Although the issue of constraint loss can be eliminated by testing actual structural configurations, such an approach can be prohibitively costly. Hence, a methodology that can correct global fracture parameters for constraint effects is desirable. This research uses micromechanical analyses to define the relationship between global, ductile fracture initiation parameters and constraint in two specimen geometries (SECT and SECB with varying a/w ratios) and one structural geometry (circumferentially cracked pipe). Two local fracture criteria corresponding to ductile fracture micromechanisms are evaluated: a constraint-modified, critical strain criterion for void coalescence proposed by Hancock and Cowling and a critical void ratio criterion for void growth based on the Rice and Tracey model. Crack initiation is assumed to occur when the critical value in each case is reached over some critical length. The primary material of interest is A516-70, a high-hardening pressure vessel steel sensitive to constraint; however, a low-hardening structural steel that is less sensitive to constraint is also being studied. Critical values of local fracture parameters are obtained by numerical analysis and experimental testing of circumferentially notched tensile specimens of varying constraint (e.g., notch radius). These parameters are then used in conjunction with large strain, large deformation, two- and three-dimensional finite element analyses of the geometries listed above to predict crack initiation loads and to calculate the associated (critical) global fracture parameters. The loads are verified experimentally, and microscopy is used to measure pre-crack length, crack tip opening displacement (CTOD), and the amount of stable crack growth. Results for A516-70 steel indicate that the constraint-modified, critical strain criterion with a critical length approximately equal to the grain size (0.0025 inch) provides accurate predictions of crack initiation. The critical void growth criterion is shown to considerably underpredict crack initiation loads with the same critical length. The relationship between the critical value of the J-integral for ductile crack initiation and crack depth for SECT and SECB specimens has been determined using the constraint-modified, critical strain criterion, demonstrating that this micromechanical model can be used to correct in-plane constraint effects due to crack depth and bending vs. tension loading. Finally, the relationship developed for the SECT specimens is used to predict the behavior of circumferentially cracked pipe specimens.

  18. Design and analysis of a high power moderate band radiator using a switched oscillator

    NASA Astrophysics Data System (ADS)

    Armanious, Miena Magdi Hakeem

    Quarter-wave switched oscillators (SWOs) are an important technology for the generation of high-power, moderate bandwidth (mesoband) waveforms. The use of SWOs in high power microwave sources has been discussed for the past 10 years [1-6], but a detailed discussion of the design of this type of oscillator for particular waveforms has been lacking. In this dissertation I develop a design methodology for a realization of SWOs, also known as MATRIX oscillators in the scientific community. A key element in the design of SWOs is the self-breakdown switch, which is created by a large electric field. In order for the switch to close as expected from the design, it is essential to manage the electrostatic field distribution inside the oscillator during the charging time. This enforces geometric constraints on the shape of the conductors inside MATRIX. At the same time, the electrodynamic operation of MATRIX is dependent on the geometry of the structure. In order to generate a geometry that satisfies both the electrostatic and electrodynamic constraints, a new approach is developed to generate this geometry using the 2-D static solution of the Laplace equation, subject to a particular set of boundary conditions. These boundary conditions are manipulated to generate equipotential lines with specific dimensions that satisfy the electrodynamic constraints. Meanwhile, these equipotential lines naturally support an electrostatic field distribution that meets the requirements for the switch operation. To study the electrodynamic aspects of MATRIX, three different (but interrelated) numerical models are built. Depending on the assumptions made in each model, different information about the electrodynamic properties of the designed SWO is obtained. In addition, the agreement and consistency between the different models validate and give confidence in the calculated results. Another important aspect of the design process is understanding the relationship between the geometric parameters of MATRIX and the output waveforms. Using the numerical models, the relationship between the dimensions of MATRIX and its calculated resonant parameters is studied. For a given set of geometric constraints, this provides more flexibility to the output specifications. Finally, I present a comprehensive design methodology that generates the geometry of a MATRIX system from the desired specification and then calculates the radiated waveform.

  19. Field-sensitivity To Rheological Parameters

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2017-11-01

    We ask this question: where in a flow is a quantity of interest Q quantitatively sensitive to the model parameters θ describing the rheology of the fluid? This field sensitivity is computed via the numerical solution of the adjoint flow equations, as developed to expose the target sensitivity δQ/δθ(x) via the constraint of satisfying the flow equations. Our primary example is a sphere settling in Carbopol, for which we have experimental data. For this Carreau-model configuration, we simultaneously calculate how much a local change in the fluid intrinsic time-scale λ, limit-viscosities η_0 and η_∞, and exponent n would affect the drag D. Such field sensitivities can show where different fluid physics in the model (time scales, elastic versus viscous components, etc.) are important for the target observable and generally guide model refinement based on predictive goals. In this case, the computational cost of solving the local sensitivity problem is negligible relative to the flow. The Carreau-fluid/sphere example is illustrative; the utility of field sensitivity is in the design and analysis of less intuitive flows, for which we provide some additional examples.
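
    The adjoint trick is easiest to see in a discrete analogue. In the sketch below (illustrative, not the authors' flow solver), the "flow equations" are a linear system A(θ)u = b, the observable is Q = g^T u, and a single adjoint solve A^T λ = g yields the sensitivity of Q to every local parameter at once via dQ/dθ_i = -λ^T (∂A/∂θ_i) u.

    ```python
    import numpy as np

    # Discrete analogue of adjoint-based field sensitivity: one adjoint solve gives the
    # sensitivity of a single observable Q to every local parameter, at roughly the cost
    # of one extra "flow" solve.
    rng = np.random.default_rng(0)
    n = 50
    theta = np.ones(n)                                  # field of local parameters (e.g., one per cell)
    A = np.diag(2.0 + theta) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    b = rng.normal(size=n)
    g = np.zeros(n); g[n // 2] = 1.0                    # Q = state value at the mid-cell

    u = np.linalg.solve(A, b)                           # "forward" solve
    lam = np.linalg.solve(A.T, g)                       # single adjoint solve

    # Here dA/dtheta_i = e_i e_i^T, so the whole sensitivity field reduces to:
    dQ_dtheta = -lam * u

    # Check one component against a finite difference
    i, eps = 10, 1e-6
    theta_p = theta.copy(); theta_p[i] += eps
    A_p = np.diag(2.0 + theta_p) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
    fd = (g @ np.linalg.solve(A_p, b) - g @ u) / eps
    print(dQ_dtheta[i], fd)                             # the two values should agree closely
    ```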

  20. Experiment definition and integration study for the accommodation of magnetic spectrometer payload on Spacelab/shuttle missions

    NASA Technical Reports Server (NTRS)

    Buffington, A.

    1978-01-01

    A super-cooled magnetic spectrometer for a cosmic-ray experiment is considered for application in the high energy astronomical observatory which may be used on a space shuttle spacelab mission. New cryostat parameters are reported which are appropriate to shuttle mission weight and mission duration constraints. Since a super-conducting magnetic spectrometer has a magnetic fringe field, methods for shielding sensitive electronic and mechanical components on nearby experiments are described.

  1. Constraints on the merging of the transition lines at the tricritical point in a wing-structure phase diagram

    DOE PAGES

    Taufour, Valentin; Kaluarachchi, Udhara S.; Kogan, Vladimir G.

    2016-08-19

    Here, we consider the phase diagram of a ferromagnetic system driven to a quantum phase transition with a tuning parameter p. Before being suppressed, the transition becomes of the first order at a tricritical point, from which wings emerge under application of the magnetic field H in the T-p-H phase diagram. We show that the edges of the wings merge with tangent slopes at the tricritical point.

  2. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD and a distance metric based on the galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
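
    The core loop of ABC is compact enough to sketch. The example below uses plain rejection ABC on a toy generative model (a Gaussian standing in for the HOD plus mock-catalogue pipeline, with the sample mean and standard deviation standing in for the clustering summaries); the paper's implementation adds population Monte Carlo importance sampling on top of this basic scheme. All names, priors and tolerances here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def forward_model(theta, n=500):
        """Toy generative model standing in for the HOD + mock-catalogue pipeline."""
        mu, sigma = theta
        return rng.normal(mu, sigma, size=n)

    def summaries(x):
        """Stand-in for the number density / correlation function / multiplicity summaries."""
        return np.array([x.mean(), x.std()])

    def distance(s_model, s_data):
        return np.linalg.norm(s_model - s_data)

    theta_true = (1.0, 2.0)                      # "true" parameters used to build the mock data
    s_data = summaries(forward_model(theta_true))

    accepted, epsilon = [], 0.25                 # plain rejection ABC
    while len(accepted) < 100:
        theta = (rng.uniform(-5.0, 5.0), rng.uniform(0.1, 5.0))   # draw from the prior
        if distance(summaries(forward_model(theta)), s_data) < epsilon:
            accepted.append(theta)

    accepted = np.array(accepted)
    print(accepted.mean(axis=0), accepted.std(axis=0))            # should bracket theta_true
    ```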

  3. Neutron star mergers as a probe of modifications of general relativity with finite-range scalar forces

    NASA Astrophysics Data System (ADS)

    Sagunski, Laura; Zhang, Jun; Johnson, Matthew C.; Lehner, Luis; Sakellariadou, Mairi; Liebling, Steven L.; Palenzuela, Carlos; Neilsen, David

    2018-03-01

    Observations of gravitational radiation from compact binary systems provide an unprecedented opportunity to test general relativity in the strong field dynamical regime. In this paper, we investigate how future observations of gravitational radiation from binary neutron star mergers might provide constraints on finite-range forces from a universally coupled massive scalar field. Such scalar degrees of freedom (d.o.f.) are a characteristic feature of many extensions of general relativity. For concreteness, we work in the context of metric f (R ) gravity, which is equivalent to general relativity and a universally coupled scalar field with a nonlinear potential whose form is fixed by the choice of f (R ). In theories where neutron stars (or other compact objects) obtain a significant scalar charge, the resulting attractive finite-range scalar force has implications for both the inspiral and merger phases of binary systems. We first present an analysis of the inspiral dynamics in Newtonian limit, and forecast the constraints on the mass of the scalar and charge of the compact objects for the Advanced LIGO gravitational wave observatory. We then perform a comparative study of binary neutron star mergers in general relativity with those of a one-parameter model of f (R ) gravity using fully relativistic hydrodynamical simulations. These simulations elucidate the effects of the scalar on the merger and postmerger dynamics. We comment on the utility of the full waveform (inspiral, merger, postmerger) to probe different regions of parameter space for both the particular model of f (R ) gravity studied here and for finite-range scalar forces more generally.

  4. Self-consistent Bulge/Disk/Halo Galaxy Dynamical Modeling Using Integral Field Kinematics

    NASA Astrophysics Data System (ADS)

    Taranu, D. S.; Obreschkow, D.; Dubinski, J. J.; Fogarty, L. M. R.; van de Sande, J.; Catinella, B.; Cortese, L.; Moffett, A.; Robotham, A. S. G.; Allen, J. T.; Bland-Hawthorn, J.; Bryant, J. J.; Colless, M.; Croom, S. M.; D'Eugenio, F.; Davies, R. L.; Drinkwater, M. J.; Driver, S. P.; Goodwin, M.; Konstantopoulos, I. S.; Lawrence, J. S.; López-Sánchez, Á. R.; Lorente, N. P. F.; Medling, A. M.; Mould, J. R.; Owers, M. S.; Power, C.; Richards, S. N.; Tonini, C.

    2017-11-01

    We introduce a method for modeling disk galaxies designed to take full advantage of data from integral field spectroscopy (IFS). The method fits equilibrium models to simultaneously reproduce the surface brightness, rotation, and velocity dispersion profiles of a galaxy. The models are fully self-consistent 6D distribution functions for a galaxy with a Sérsic profile stellar bulge, exponential disk, and parametric dark-matter halo, generated by an updated version of GalactICS. By creating realistic flux-weighted maps of the kinematic moments (flux, mean velocity, and dispersion), we simultaneously fit photometric and spectroscopic data using both maximum-likelihood and Bayesian (MCMC) techniques. We apply the method to a GAMA spiral galaxy (G79635) with kinematics from the SAMI Galaxy Survey and deep g- and r-band photometry from the VST-KiDS survey, comparing parameter constraints with those from traditional 2D bulge-disk decomposition. Our method returns broadly consistent results for shared parameters while constraining the mass-to-light ratios of stellar components and reproducing the H I-inferred circular velocity well beyond the limits of the SAMI data. Although the method is tailored for fitting integral field kinematic data, it can use other dynamical constraints like central fiber dispersions and H I circular velocities, and is well-suited for modeling galaxies with a combination of deep imaging and H I and/or optical spectra (resolved or otherwise). Our implementation (MagRite) is computationally efficient and can generate well-resolved models and kinematic maps in under a minute on modern processors.

  5. An approach to the parametric design of ion thrusters

    NASA Technical Reports Server (NTRS)

    Wilbur, Paul J.; Beattie, John R.; Hyman, Jay, Jr.

    1988-01-01

    A methodology that can be used to determine which of several physical constraints can limit ion thruster power and thrust, under various design and operating conditions, is presented. The methodology is exercised to demonstrate typical limitations imposed by grid system span-to-gap ratio, intragrid electric field, discharge chamber power per unit beam area, screen grid lifetime, and accelerator grid lifetime constraints. Limitations on power and thrust for a thruster defined by typical discharge chamber and grid system parameters when it is operated at maximum thrust-to-power are discussed. It is pointed out that other operational objectives such as optimization of payload fraction or mission duration can be substituted for the thrust-to-power objective and that the methodology can be used as a tool for mission analysis.

  6. Hamiltonian BFV-BRST theory of closed quantum cosmological models

    NASA Astrophysics Data System (ADS)

    Kamenshchik, A. Yu.; Lyakhovich, S. L.

    1997-02-01

    We introduce and study a new discrete basis of gravity constraints by making use of harmonic expansion for closed cosmological models. The full set of constraints is split into area-preserving spatial diffeomorphisms, forming closed subalgebra, and Virasoro-like generators. Operational Hamiltonian BFV-BRST quantization is performed in the framework of perturbative expansion in the dimensionless parameter, which is a positive power of the ratio of Planckian volume to the volume of the Universe. For the (N + 1)-dimensional generalization of stationary closed Bianchi-I cosmology the nilpotency condition for the BRST operator is examined in the first quantum approximation. It turns out that a certain relationship between the dimensionality of the space and the spectrum of matter fields emerges from the requirement of quantum consistency of the model.

  7. Hamiltonian BFV-BRST theory of closed quantum cosmological models

    NASA Astrophysics Data System (ADS)

    Kamenshchik, A. Yu.; Lyakhovich, S. L.

    1997-08-01

    We introduce and study a new discrete basis of gravity constraints by making use of the harmonic expansion for closed cosmological models. The full set of constraints is split into area-preserving spatial diffeomorphisms, forming a closed subalgebra, and Virasoro-like generators. The operatorial Hamiltonian BFV-BRST quantization is performed in the framework of a perturbative expansion in the dimensionless parameter which is a positive power of the ratio of the Planck volume to the volume of the Universe. For the (N + 1) - dimensional generalization of a stationary closed Bianchi-I cosmology the nilpotency condition for the BRST operator is examined in the first quantum approximation. It turns out that a relationship between the dimensionality of the space and the spectrum of matter fields emerges from the requirement of quantum consistency of the model.

  8. On the linear programming bound for linear Lee codes.

    PubMed

    Astola, Helena; Tabus, Ioan

    2016-01-01

    Based on an invariance-type property of the Lee-compositions of a linear Lee code, additional equality constraints can be introduced to the linear programming problem of linear Lee codes. In this paper, we formulate this property in terms of an action of the multiplicative group of the field [Formula: see text] on the set of Lee-compositions. We show some useful properties of certain sums of Lee-numbers, which are the eigenvalues of the Lee association scheme, appearing in the linear programming problem of linear Lee codes. Using the additional equality constraints, we formulate the linear programming problem of linear Lee codes in a very compact form, leading to a fast execution, which allows the bounds to be computed efficiently for large parameter values of the linear codes.

  9. Systematic study of 16O-induced fusion with the improved quantum molecular dynamics model

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Zhao, Kai; Li, Zhuxia

    2014-11-01

    The heavy-ion fusion reactions with 16O bombarding 62Ni, 65Cu, 74Ge, 148Nd, 180Hf, 186W, 208Pb, and 238U are systematically investigated with the improved quantum molecular dynamics model. The fusion cross sections at energies near and above the Coulomb barriers can be reasonably well reproduced by using this semiclassical microscopic transport model with the parameter sets SkP* and IQ3a. The dynamical nucleus-nucleus potentials and the influence of the Fermi constraint on the fusion process are also studied simultaneously. In addition to the mean field, the Fermi constraint also plays a key role in the reliable description of the fusion process and in improving the stability of fragments in heavy-ion collisions.

  10. Comparing UV/EUV line parameters and magnetic field in a quiescent prominence with tornadoes

    NASA Astrophysics Data System (ADS)

    Levens, P. J.; Labrosse, N.; Schmieder, B.; López Ariste, A.; Fletcher, L.

    2017-10-01

    Context. Understanding the relationship between plasma and the magnetic field is important for describing and explaining the observed dynamics of solar prominences. Aims: We determine if a close relationship can be found between plasma and magnetic field parameters, measured at high resolution in a well-observed prominence. Methods: A prominence observed on 15 July 2014 by the Interface Region Imaging Spectrograph (IRIS), Hinode, the Solar Dynamics Observatory (SDO), and the Télescope Héliographique pour l'Étude du Magnétisme et des Instabilités Solaires (THEMIS) is selected. We perform a robust co-alignment of data sets using a 2D cross-correlation technique. Magnetic field parameters are derived from spectropolarimetric measurements of the He I D3 line from THEMIS. Line ratios and line-of-sight velocities from the Mg II h and k lines observed by IRIS are compared with magnetic field strength, inclination, and azimuth. Electron densities are calculated using Fe xii line ratios from the Hinode Extreme-ultraviolet Imaging Spectrometer, which are compared to THEMIS and IRIS data. Results: We find Mg II k/h ratios of around 1.4 everywhere, similar to values found previously in prominences. Also, the magnetic field is strongest (~30 G) and predominantly horizontal in the tornado-like legs of the prominence. The k3 Doppler shift is found to be between ±10 km s^-1 everywhere. Electron densities at a temperature of 1.5 × 10^6 K are found to be around 10^9 cm^-3. No significant correlations are found between the magnetic field parameters and any of the other plasma parameters inferred from spectroscopy, which may be explained by the large differences in the temperatures of the lines used in this study. Conclusions: This is the first time that a detailed statistical study of plasma and magnetic field parameters has been performed at high spatial resolution in a prominence. Our results provide important constraints on future models of the plasma and magnetic field in these structures.

  11. α-Attractor and reheating in a model with noncanonical scalar fields

    NASA Astrophysics Data System (ADS)

    Rashidi, Narges; Nozari, Kourosh

    We consider two noncanonical scalar fields [tachyon and Dirac-Born-Infeld (DBI)] with an E-model type of potential. We study cosmological inflation in these models to find possible α-attractors. We show that, similar to the canonical scalar field case, in both the tachyon and DBI models there is a value of the scalar spectral index in the small-α limit which is just a function of the e-folds number. However, the value of n_s in the DBI model is somewhat different from the other ones. We also compare the results with Planck 2015 TT, TE, EE+lowP data. The reheating phase after inflation is studied in these models, which gives some more constraints on the model parameters.

  12. Vacuum stability and naturalness in type-II seesaw

    DOE PAGES

    Haba, Naoyuki; Ishida, Hiroyuki; Okada, Nobuchika; ...

    2016-06-16

    Here, we study the vacuum stability and perturbativity conditions in the minimal type-II seesaw model. These conditions give characteristic constraints to the model parameters. In the model, there is a SU(2)_L triplet scalar field, which could cause a large Higgs mass correction. From the naturalness point of view, heavy Higgs masses should be lower than 350 GeV, which may be testable by the LHC Run-II results. Due to the effects of the triplet scalar field, the branching ratios of the Higgs decay (h → γγ, Zγ) deviate from the standard model, and a large parameter region is excluded by the recent ATLAS and CMS combined analysis of h → γγ. Our result of the signal strength for h → γγ is R_γγ ≲ 1.1, but its deviation is too small to observe at the LHC experiment.

  13. Constraints on mirror models of dark matter from observable neutron-mirror neutron oscillation

    NASA Astrophysics Data System (ADS)

    Mohapatra, Rabindra N.; Nussinov, Shmuel

    2018-01-01

    The process of neutron-mirror neutron oscillation, motivated by symmetric mirror dark matter models, is governed by two parameters: the n-n′ mixing parameter δ and the n-n′ mass splitting Δ. For neutron-mirror neutron oscillation to be observable, the splitting between their masses Δ must be small, and current experiments lead to δ ≤ 2 × 10^-27 GeV and Δ ≤ 10^-24 GeV. We show that in mirror universe models where this process is observable, this small mass splitting constrains the way that one must implement asymmetric inflation to satisfy the limits of Big Bang Nucleosynthesis on the number of effective light degrees of freedom. In particular we find that if asymmetric inflation is implemented by inflaton decay to color or electroweak charged particles, the oscillation is unobservable. Also if one uses SM singlet fields for this purpose, they must be weakly coupled to the SM fields.

  14. Multiloop atom interferometer measurements of chameleon dark energy in microgravity

    NASA Astrophysics Data System (ADS)

    Chiow, Sheng-wey; Yu, Nan

    2018-02-01

    Chameleon field is one of the promising candidates of dark energy scalar fields. As in all viable candidate field theories, a screening mechanism is implemented to be consistent with all existing tests of general relativity. The screening effect in the chameleon theory manifests its influence limited only to the thin outer layer of a bulk object, thus producing extra forces orders of magnitude weaker than that of the gravitational force of the bulk. For pointlike particles such as atoms, the depth of screening is larger than the size of the particle, such that the screening mechanism is ineffective and the chameleon force is fully expressed on the atomic test particles. Extra force measurements using atom interferometry are thus much more sensitive than bulk mass based measurements, and indeed have placed the most stringent constraints on the parameters characterizing chameleon field. In this paper, we present a conceptual measurement approach for chameleon force detection using atom interferometry in microgravity, in which multiloop atom interferometers exploit specially designed periodic modulation of chameleon fields. We show that major systematics of the dark energy force measurements, i.e., effects of gravitational forces and their gradients, can be suppressed below all hypothetical chameleon signals in the parameter space of interest.

  15. Data assimilation method based on the constraints of confidence region

    NASA Astrophysics Data System (ADS)

    Li, Yong; Li, Siming; Sheng, Yao; Wang, Luheng

    2018-03-01

    The ensemble Kalman filter (EnKF) is a distinguished data assimilation method that is widely used and studied in various fields including meteorology and oceanography. However, due to the limited sample size or an imprecise dynamics model, the forecast error variance is often underestimated, which further leads to the phenomenon of filter divergence. Additionally, the assimilation results of the initial stage are poor if the initial condition settings differ greatly from the true initial state. To address these problems, the variance inflation procedure is usually adopted. In this paper, we propose a new method based on the constraints of a confidence region constructed from the observations, called EnCR, to estimate the inflation parameter of the forecast error variance in the EnKF method. In the new method, the state estimate is more robust to both inaccurate forecast models and initial condition settings. The new method is compared with other adaptive data assimilation methods in the Lorenz-63 and Lorenz-96 models under various model parameter settings. The simulation results show that the new method performs better than the competing methods.
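
    To make the role of the inflation parameter concrete, here is a minimal stochastic EnKF analysis step with multiplicative covariance inflation. The paper's contribution (EnCR) is a data-driven way of choosing that inflation factor from a confidence region built from the observations; the sketch below simply plugs in a fixed illustrative factor, and all dimensions and values are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def enkf_analysis(ensemble, y_obs, H, R, inflation=1.0):
        """One stochastic EnKF analysis step with multiplicative covariance inflation.

        ensemble : (n_ens, n_state) forecast ensemble
        y_obs    : (n_obs,) observation vector
        H        : (n_obs, n_state) linear observation operator
        R        : (n_obs, n_obs) observation-error covariance
        inflation: multiplicative factor applied to the forecast anomalies (>= 1)
        """
        n_ens = ensemble.shape[0]
        mean = ensemble.mean(axis=0)
        anomalies = inflation * (ensemble - mean)         # inflate the spread about the mean
        inflated = mean + anomalies
        P = anomalies.T @ anomalies / (n_ens - 1)         # (inflated) forecast covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
        y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, size=n_ens)
        return inflated + (y_pert - inflated @ H.T) @ K.T

    # Tiny example: 3-variable state, observe the first two components
    n_state, n_obs, n_ens = 3, 2, 20
    truth = np.array([1.0, -1.0, 0.5])
    H = np.eye(n_obs, n_state)
    R = 0.1 * np.eye(n_obs)
    forecast = truth + rng.normal(0, 0.5, size=(n_ens, n_state))
    y = H @ truth + rng.multivariate_normal(np.zeros(n_obs), R)
    analysis = enkf_analysis(forecast, y, H, R, inflation=1.1)
    print(forecast.mean(axis=0), analysis.mean(axis=0))   # analysis mean moves toward the truth
    ```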

  16. Clustering fossils in solid inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhshik, Mohammad, E-mail: m.akhshik@ipm.ir

    In solid inflation the single field non-Gaussianity consistency condition is violated. As a result, the long tensor perturbation induces observable clustering fossils in the form of quadrupole anisotropy in the large scale structure power spectrum. In this work we revisit the bispectrum analysis for the scalar-scalar-scalar and tensor-scalar-scalar bispectrum for the general parameter space of solid. We consider the parameter space of the model in which the level of non-Gaussianity generated is consistent with the Planck constraints. Specializing to this allowed range of model parameters we calculate the quadrupole anisotropy induced from the long tensor perturbations on the power spectrum of the scalar perturbations. We argue that the imprints of clustering fossil from primordial gravitational waves on large scale structures can be detected from the future galaxy surveys.

  17. Approximate probabilistic cellular automata for the dynamics of single-species populations under discrete logisticlike growth with and without weak Allee effects.

    PubMed

    Mendonça, J Ricardo G; Gevorgyan, Yeva

    2017-05-01

    We investigate one-dimensional elementary probabilistic cellular automata (PCA) whose dynamics in first-order mean-field approximation yields discrete logisticlike growth models for a single-species unstructured population with nonoverlapping generations. Beginning with a general six-parameter model, we find constraints on the transition probabilities of the PCA that guarantee that the ensuing approximations make sense in terms of population dynamics and classify the valid combinations thereof. Several possible models display a negative cubic term that can be interpreted as a weak Allee factor. We also investigate the conditions under which a one-parameter PCA derived from the more general six-parameter model can generate valid population growth dynamics. Numerical simulations illustrate the behavior of some of the PCA found.
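
    A toy version of the setup may help fix ideas: the sketch below runs a one-dimensional PCA on a ring whose update probability depends on the occupancy of a three-cell neighbourhood, and compares the simulated density with the corresponding first-order (single-site) mean-field map, which is a cubic, logistic-like growth law. The specific rule and probabilities are illustrative, not the paper's parametrization.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Illustrative PCA rule: the new state of cell i is 1 with probability p[k], where k is
    # the number of occupied cells in the neighbourhood {i-1, i, i+1}.
    p = np.array([0.0, 0.45, 0.75, 0.60])

    def pca_step(state):
        k = state + np.roll(state, 1) + np.roll(state, -1)
        return (rng.random(state.size) < p[k]).astype(int)

    def mean_field_map(rho):
        """First-order (single-site) mean-field approximation: a cubic, logistic-like map."""
        k = np.arange(4)
        weights = np.array([1, 3, 3, 1]) * rho ** k * (1 - rho) ** (3 - k)
        return float(weights @ p)

    state = (rng.random(10_000) < 0.05).astype(int)   # sparse initial population
    rho_mf = 0.05
    for _ in range(30):
        state = pca_step(state)
        rho_mf = mean_field_map(rho_mf)
    print(state.mean(), rho_mf)                        # simulated density vs. mean-field prediction
    ```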

  18. Control of thermal therapies with moving power deposition field.

    PubMed

    Arora, Dhiraj; Minor, Mark A; Skliar, Mikhail; Roemer, Robert B

    2006-03-07

    A thermal therapy feedback control approach to control thermal dose using a moving power deposition field is developed and evaluated using simulations. A normal tissue safety objective is incorporated in the controller design by imposing constraints on temperature elevations at selected normal tissue locations. The proposed control technique consists of two stages. The first stage uses a model-based sliding mode controller that dynamically generates an 'ideal' power deposition profile which is generally unrealizable with available heating modalities. Subsequently, in order to approximately realize this spatially distributed idealized power deposition, a constrained quadratic optimizer is implemented to compute intensities and dwell times for a set of pre-selected power deposition fields created by a scanned focused transducer. The dwell times for various power deposition profiles are dynamically generated online as opposed to the commonly employed a priori-decided heating strategies. Dynamic intensity and trajectory generation safeguards the treatment outcome against modelling uncertainties and unknown disturbances. The controller is designed to enforce simultaneous activation of multiple normal tissue temperature constraints by rapidly switching between various power deposition profiles. The hypothesis behind the controller design is that the simultaneous activation of multiple constraints substantially reduces treatment time without compromising normal tissue safety. The controller performance and robustness with respect to parameter uncertainties is evaluated using simulations. The results demonstrate that the proposed controller can successfully deliver the desired thermal dose to the target while maintaining the temperatures at the user-specified normal tissue locations at or below the maximum allowable values. Although demonstrated for the case of a scanned focused ultrasound transducer, the developed approach can be extended to other heating modalities with moving deposition fields, such as external and interstitial ultrasound phased arrays, multiple radiofrequency needle applicators and microwave antennae.
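
    The second stage described above is, at its core, a constrained quadratic fit. The sketch below (with random stand-in data, a linear surrogate for the normal-tissue temperature response, and hypothetical variable names) finds nonnegative intensity/dwell-time weights for a small library of precomputed power deposition fields so that their combination approximates an "ideal" deposition while keeping predicted normal-tissue temperature rises below a limit.

    ```python
    import numpy as np
    from scipy.optimize import LinearConstraint, minimize

    rng = np.random.default_rng(7)

    # Columns of Q are precomputed power deposition fields (one per focal-spot dwell position);
    # q_ideal is the "ideal" deposition from the first-stage controller; A maps the weights to
    # predicted temperature rises at selected normal-tissue points. All data are stand-ins.
    n_vox, n_fields, n_points = 200, 12, 4
    Q = np.abs(rng.normal(size=(n_vox, n_fields)))
    q_ideal = np.abs(rng.normal(size=n_vox))
    A = np.abs(rng.normal(size=(n_points, n_fields)))
    T_limit = 2.0

    def objective(w):
        return 0.5 * np.sum((Q @ w - q_ideal) ** 2)

    def gradient(w):
        return Q.T @ (Q @ w - q_ideal)

    res = minimize(objective, x0=np.full(n_fields, 0.1), jac=gradient,
                   bounds=[(0.0, None)] * n_fields,
                   constraints=[LinearConstraint(A, -np.inf, T_limit)],
                   method="trust-constr")
    print(res.x)          # nonnegative weights for each deposition field
    print(A @ res.x)      # predicted normal-tissue temperature rises (all at or below T_limit)
    ```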

  19. Bound on largest r ≲ 0.1 from sub-Planckian excursions of inflaton

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Arindam; Mazumdar, Anupam, E-mail: arindam@hri.res.in, E-mail: a.mazumdar@lancaster.ac.uk

    2015-01-01

    In this paper we will discuss the range of large tensor to scalar ratio, r, obtainable from a sub-Planckian excursion of a single, slow roll driven inflaton field. In order to obtain a large r for such a scenario one has to depart from a monotonic evolution of the slow roll parameters in such a way that one still satisfies all the current constraints of Planck, such as the scalar amplitude, the tilt in the scalar power spectrum, running and running of the tilt close to the pivot scale. Since the slow roll parameters evolve non-monotonically, we will also consider the evolution of the power spectrum on the smallest scales, i.e. at P_s(k ∼ 10^16 Mpc^-1) ≲ 10^-2, to make sure that the amplitude does not become too large. All these constraints tend to keep the tensor to scalar ratio, r ≲ 0.1. We scan three different kinds of potential for supersymmetric flat directions and obtain the benchmark points which satisfy all the constraints. We also show that it is possible to go beyond r ≳ 0.1 provided we relax the upper bound on the power spectrum on the smallest scales.

  20. Cosmology with galaxy cluster phase spaces

    NASA Astrophysics Data System (ADS)

    Stark, Alejo; Miller, Christopher J.; Huterer, Dragan

    2017-07-01

    We present a novel approach to constrain accelerating cosmologies with galaxy cluster phase spaces. With the Fisher matrix formalism we forecast constraints on the cosmological parameters that describe the cosmological expansion history. We find that our probe has the potential of providing constraints comparable to, or even stronger than, those from other cosmological probes. More specifically, with 1000 (100) clusters uniformly distributed in the redshift range 0 ≤ z ≤ 0.8, after applying a conservative 80% mass scatter prior on each cluster and marginalizing over all other parameters, we forecast 1σ constraints on the dark energy equation of state w and matter density parameter Ω_M of σ_w = 0.138 (0.431) and σ_ΩM = 0.007 (0.025) in a flat universe. Assuming 40% mass scatter and adding a prior on the Hubble constant we can achieve a constraint on the Chevallier-Polarski-Linder parametrization of the dark energy equation of state parameters w_0 and w_a with 100 clusters in the same redshift range: σ_w0 = 0.191 and σ_wa = 2.712. Dropping the assumption of flatness and assuming w = -1 we also attain competitive constraints on the matter and dark energy density parameters: σ_ΩM = 0.101 and σ_ΩΛ = 0.197 for 100 clusters uniformly distributed in the range 0 ≤ z ≤ 0.8 after applying a prior on the Hubble constant. We also discuss various observational strategies for tightening constraints in both the near and far future.
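
    The Fisher-matrix step itself is simple to sketch: marginalized 1σ forecasts are the square roots of the diagonal of the inverse Fisher matrix, and a Gaussian prior on a parameter is added as 1/σ_prior^2 on the corresponding diagonal entry. The 2×2 matrix below is illustrative (chosen only so that its marginalized errors land near the (w, Ω_M) values quoted above); it is not taken from the paper.

    ```python
    import numpy as np

    def forecast_errors(F, prior_sigmas=None, marginalize=True):
        """1-sigma forecasts from a Fisher matrix F (parameter order fixed by the caller).

        Gaussian priors enter as 1/sigma^2 added to the diagonal; marginalized errors are
        sqrt(diag(F^-1)), while errors with the other parameters fixed are 1/sqrt(diag(F)).
        """
        F = np.array(F, dtype=float)
        if prior_sigmas is not None:
            for i, s in enumerate(prior_sigmas):
                if s is not None:
                    F[i, i] += 1.0 / s ** 2
        if marginalize:
            return np.sqrt(np.diag(np.linalg.inv(F)))
        return 1.0 / np.sqrt(np.diag(F))

    # Illustrative 2x2 Fisher matrix for (w, Omega_M)
    F = np.array([[60.0, -450.0],
                  [-450.0, 26000.0]])
    print(forecast_errors(F))                              # marginalized sigma_w, sigma_Omega_M
    print(forecast_errors(F, prior_sigmas=[None, 0.01]))   # effect of an external prior on Omega_M
    ```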

  1. New Boundary Constraints for Elliptic Systems used in Grid Generation Problems

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper discusses new boundary constraints for elliptic partial differential equations as used in grid generation problems in generalized curvilinear coordinate systems. These constraints, based on the principle of local conservation of thermal energy in the vicinity of the boundaries, are derived using the Green's Theorem. They uniquely determine the so-called decay parameters in the source terms of these elliptic systems. These constraints are designed for boundary clustered grids where large gradients in physical quantities need to be resolved adequately. It is observed that the present formulation also works satisfactorily for mild clustering. Therefore, a closure for the decay parameter specification for elliptic grid generation problems has been provided, resulting in a fully automated elliptic grid generation technique. Thus, there is no need for a parametric study of these decay parameters since the new constraints fix them uniquely. It is also shown that for Neumann-type boundary conditions, these boundary constraints uniquely determine the solution to the internal elliptic problem, thus eliminating the non-uniqueness of the solution of an internal Neumann boundary value grid generation problem.

  2. A technique for automatically extracting useful field of view and central field of view images.

    PubMed

    Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar

    2016-01-01

    It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the prespecified counts in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
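
    A Python sketch of the basic idea follows (the authors worked in MATLAB): threshold the flood image to isolate the detector area, crop that region as the useful field of view (UFOV), take the central 75% of its linear dimensions as the central field of view (CFOV), which is the usual NEMA convention, and compute integral uniformity. The threshold fraction and the simulated flood image are illustrative assumptions, and the NEMA nine-point smoothing step is omitted.

    ```python
    import numpy as np

    def extract_fov_images(flood, threshold_fraction=0.1, cfov_fraction=0.75):
        """Crop the UFOV and CFOV from a flood image.

        Pixels above threshold_fraction * (mean of nonzero pixels) are taken as the detector
        area; the bounding box of that area is the UFOV, and the CFOV is the central
        cfov_fraction of the UFOV's linear dimensions.
        """
        nonzero = flood[flood > 0]
        mask = flood > threshold_fraction * nonzero.mean()
        rows, cols = np.where(mask)
        ufov = flood[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
        dr = int(round(ufov.shape[0] * (1 - cfov_fraction) / 2))
        dc = int(round(ufov.shape[1] * (1 - cfov_fraction) / 2))
        cfov = ufov[dr:ufov.shape[0] - dr, dc:ufov.shape[1] - dc]
        return ufov, cfov

    def integral_uniformity(image):
        """Integral uniformity (%) = 100 * (max - min) / (max + min), no smoothing applied."""
        return 100.0 * (image.max() - image.min()) / (image.max() + image.min())

    # Simulated flood: a uniform detector region with Poisson counting noise inside a larger matrix
    rng = np.random.default_rng(5)
    flood = np.zeros((256, 256))
    flood[30:220, 40:230] = rng.poisson(400, size=(190, 190))
    ufov, cfov = extract_fov_images(flood)
    print(ufov.shape, cfov.shape, integral_uniformity(ufov), integral_uniformity(cfov))
    ```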

  3. A motion-constraint logic for moving-base simulators based on variable filter parameters

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.

    1974-01-01

    A motion-constraint logic for moving-base simulators has been developed that is a modification to the linear second-order filters generally employed in conventional constraints. In the modified constraint logic, the filter parameters are not constant but vary with the instantaneous motion-base position to increase the constraint as the system approaches the positional limits. With the modified constraint logic, accelerations larger than originally expected are limited while conventional linear filters would result in automatic shutdown of the motion base. In addition, the modified washout logic has frequency-response characteristics that are an improvement over conventional linear filters with braking for low-frequency pilot inputs. During simulated landing approaches of an externally blown flap short take-off and landing (STOL) transport using decoupled longitudinal controls, the pilots were unable to detect much difference between the modified constraint logic and the logic based on linear filters with braking.

  4. Impact of relativistic effects on cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Lorenz, Christiane S.; Alonso, David; Ferreira, Pedro G.

    2018-01-01

    Future surveys will access large volumes of space and hence very long wavelength fluctuations of the matter density and gravitational field. It has been argued that the set of secondary effects that affect the galaxy distribution, relativistic in nature, will bring new, complementary cosmological constraints. We study this claim in detail by focusing on a subset of wide-area future surveys: Stage-4 cosmic microwave background experiments and photometric redshift surveys. In particular, we look at the magnification lensing contribution to galaxy clustering and general-relativistic corrections to all observables. We quantify the amount of information encoded in these effects in terms of the tightening of the final cosmological constraints as well as the potential bias in inferred parameters associated with neglecting them. We do so for a wide range of cosmological parameters, covering neutrino masses, standard dark-energy parametrizations and scalar-tensor gravity theories. Our results show that, while the effect of lensing magnification to number counts does not contain a significant amount of information when galaxy clustering is combined with cosmic shear measurements, this contribution does play a significant role in biasing estimates on a host of parameter families if unaccounted for. Since the amplitude of the magnification term is controlled by the slope of the source number counts with apparent magnitude, s(z), we also estimate the accuracy to which this quantity must be known to avoid systematic parameter biases, finding that future surveys will need to determine s(z) to the ~5%-10% level. On the contrary, large-scale general-relativistic corrections are irrelevant both in terms of information content and parameter bias for most cosmological parameters but significant for the level of primordial non-Gaussianity.

  5. Causality constraints in conformal field theory

    DOE PAGES

    Hartman, Thomas; Jain, Sachin; Kundu, Sandipan

    2016-05-17

    Causality places nontrivial constraints on QFT in Lorentzian signature, for example fixing the signs of certain terms in the low energy Lagrangian. In d dimensional conformal field theory, we show how such constraints are encoded in crossing symmetry of Euclidean correlators, and derive analogous constraints directly from the conformal bootstrap (analytically). The bootstrap setup is a Lorentzian four-point function corresponding to propagation through a shockwave. Crossing symmetry fixes the signs of certain log terms that appear in the conformal block expansion, which constrains the interactions of low-lying operators. As an application, we use the bootstrap to rederive the well-known sign constraint on the (∂Φ)^4 coupling in effective field theory, from a dual CFT. We also find constraints on theories with higher spin conserved currents. Our analysis is restricted to scalar correlators, but we argue that similar methods should also impose nontrivial constraints on the interactions of spinning operators.

  6. Constraints on Cosmological Parameters from the Angular Power Spectrum of a Combined 2500 deg$^2$ SPT-SZ and Planck Gravitational Lensing Map

    DOE PAGES

    Simard, G.; et al.

    2018-06-20

    We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg^2 of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant (ΛCDM), and to models with single-parameter extensions to ΛCDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find σ_8 Ω_m^0.25 = 0.598 ± 0.024 from the lensing data alone with relatively weak priors placed on the other ΛCDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the ΛCDM model. We find Ω_k = -0.012^{+0.021}_{-0.023} or M_

  7. Constraints on Cosmological Parameters from the Angular Power Spectrum of a Combined 2500 deg$^2$ SPT-SZ and Planck Gravitational Lensing Map

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simard, G.; et al.

    We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg^2 of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant (ΛCDM), and to models with single-parameter extensions to ΛCDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find σ_8 Ω_m^0.25 = 0.598 ± 0.024 from the lensing data alone with relatively weak priors placed on the other ΛCDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the ΛCDM model. We find Ω_k = -0.012^{+0.021}_{-0.023} or M_

  8. Wind-driving protostellar accretion discs - I. Formulation and parameter constraints

    NASA Astrophysics Data System (ADS)

    Königl, Arieh; Salmeron, Raquel; Wardle, Mark

    2010-01-01

    We study a model of weakly ionized, protostellar accretion discs that are threaded by a large-scale, ordered magnetic field and power a centrifugally driven wind. We consider the limiting case where the wind is the main repository of the excess disc angular momentum and generalize the radially localized disc model of Wardle & Königl, which focused on the ambipolar diffusion regime, to other field diffusivity regimes, notably Hall and Ohm. We present a general formulation of the problem for nearly Keplerian, vertically isothermal discs using both the conductivity-tensor and the multifluid approaches and simplify it to a normalized system of ordinary differential equations in the vertical space coordinate. We determine the relevant parameters of the problem and investigate, using the vertical-hydrostatic-equilibrium approximation and other simplifications, the parameter constraints on physically viable solutions for discs in which the neutral particles are dynamically well coupled to the field already at the mid-plane. When the charged particles constitute a two-component ion-electron plasma, one can identify four distinct sub-regimes in the parameter domain where the Hall diffusivity dominates and three sub-regimes in the Ohm-dominated domain. Two of the Hall sub-regimes can be characterized as being ambipolar diffusion-like and two as being Ohm-like: the properties of one member of the first pair of sub-regimes are identical to those of the ambipolar diffusion regime, whereas one member of the second pair has the same characteristics as one of the Ohm sub-regimes. All the Hall sub-regimes have Brb/|Bφb| (ratio of radial-to-azimuthal magnetic field amplitudes at the disc surface) >1, whereas in two Ohm sub-regimes this ratio is <1. When the two-component plasma consists, instead, of positively and negatively charged grains of equal mass, the entire Hall domain and one of the Ohm sub-regimes with Brb/|Bφb| < 1 disappear. All viable solutions require the mid-plane neutral-ion momentum exchange time to be shorter than the local orbital time. We also infer that vertical magnetic squeezing always dominates over gravitational tidal compression in this model. In a follow-up paper we will present exact solutions that test the results of this analysis in the Hall regime.

  9. CMB constraints on the inflaton couplings and reheating temperature in α-attractor inflation

    NASA Astrophysics Data System (ADS)

    Drewes, Marco; Kang, Jin U.; Mun, Ui Ri

    2017-11-01

    We study reheating in α-attractor models of inflation in which the inflaton couples to other scalars or fermions. We show that the parameter space contains viable regions in which the inflaton couplings to radiation can be determined from the properties of CMB temperature fluctuations, in particular the spectral index. This may be the only way to measure these fundamental microphysical parameters, which shaped the universe by setting the initial temperature of the hot big bang and contain important information about the embedding of a given model of inflation into a more fundamental theory of physics. The method can be applied to other models of single field inflation.

  10. Magnetic design constraints of helical solenoids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopes, M. L.; Krave, S. T.; Tompkins, J. C.

    2015-01-30

    Helical solenoids have been proposed as an option for a Helical Cooling Channel for muons in a proposed Muon Collider. Helical solenoids can provide the required three main field components: solenoidal, helical dipole, and a helical gradient. In general terms, the last two are a function of many geometric parameters: coil aperture, coil radial and longitudinal dimensions, helix period and orbit radius. In this paper, we present design studies of a Helical Solenoid, addressing the geometric tunability limits and auxiliary correction system.
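    As a rough illustration of how these field components depend on the coil geometry, the sketch below evaluates the field of a solenoid wound on a helical axis by discretising the conductor path and summing Biot-Savart contributions. The geometry, current, and evaluation point are invented placeholders, not the design values of the paper.

```python
import numpy as np

# Minimal Biot-Savart sketch for a coil wound on a helical axis.
# Geometry and current values below are illustrative, not a real design.
MU0 = 4e-7 * np.pi          # vacuum permeability [T m / A]

def helical_coil(radius=0.3, orbit_radius=0.1, period=1.0, turns=20, npts=20000):
    """Return points along a coil of radius `radius` whose centre follows a
    helical orbit of radius `orbit_radius` and pitch `period` (all in metres)."""
    t = np.linspace(0.0, turns * 2 * np.pi, npts)
    z = period * t / (2 * np.pi)
    xc = orbit_radius * np.cos(2 * np.pi * z / period)   # coil centre precesses
    yc = orbit_radius * np.sin(2 * np.pi * z / period)   # around the channel axis
    x = xc + radius * np.cos(t)
    y = yc + radius * np.sin(t)
    return np.column_stack([x, y, z])

def biot_savart(path, point, current=1.0e3):
    """Field [T] at `point` from a polyline `path` carrying `current` [A]."""
    dl = np.diff(path, axis=0)                 # segment vectors
    mid = 0.5 * (path[1:] + path[:-1])         # segment midpoints
    r = point - mid
    rnorm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 * current / (4 * np.pi) * np.cross(dl, r) / rnorm**3
    return dB.sum(axis=0)

coil = helical_coil()
print("B at (0, 0, 5 m) [T]:", biot_savart(coil, np.array([0.0, 0.0, 5.0])))
```

    In practice the solenoidal, helical dipole, and helical gradient components would be extracted by sampling such a field along the reference orbit.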

  11. Generation of a monodispersed aerosol

    NASA Technical Reports Server (NTRS)

    Schenck, H.; Mikasa, M.; Devicariis, R.

    1974-01-01

    The identity and laboratory test methods for the generation of a monodispersed aerosol are reported, subject to the following constraints and parameters: (1) size distribution; (2) specific gravity; (3) scattering properties; (4) costs; (5) production. The procedure called for the collection of information from the literature, commercially available products, and experts working in the field. The following topics were investigated: (1) aerosols; (2) air pollution -- analysis; (3) atomizers; (4) dispersion; (5) particles -- optics, size analysis; (6) smoke -- generators, density measurements; (7) sprays; (8) wind tunnels -- visualization.

  12. 6 DOF synchronized control for spacecraft formation flying with input constraint and parameter uncertainties.

    PubMed

    Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang

    2011-10-01

    This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraints and parameter uncertainties. More specifically, backstepping-based robust control is first developed for the full 6 DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. This controller is then redesigned to deal with the input constraint problem by incorporating a command filter, so that the generated control remains implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Illustrative simulations of spacecraft formation flying, compared with conventional methods, verify that the proposed approach enables the spacecraft to track the desired attitude and position trajectories in a synchronized fashion even in the presence of uncertainties, external disturbances, and control saturation constraints. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
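    A minimal sketch of the command-filter idea referred to above, assuming a generic second-order filter driven by a saturated command; the gains, limits, and test signal are made up for illustration and are not the paper's design.

```python
import numpy as np

# Sketch of a magnitude-limited second-order command filter: the filter state
# tracks a saturated version of the raw command, so its output (and its rate)
# is always implementable by a constrained actuator.  All numbers are invented.
def command_filter(u_raw, dt=0.01, wn=20.0, zeta=0.9, u_max=1.0):
    x1, x2 = 0.0, 0.0                       # filtered command and its rate
    out = []
    for u in u_raw:
        u_sat = np.clip(u, -u_max, u_max)   # magnitude (saturation) limit
        x1_dot = x2
        x2_dot = wn**2 * (u_sat - x1) - 2.0 * zeta * wn * x2
        x1 += dt * x1_dot
        x2 += dt * x2_dot
        out.append(x1)
    return np.array(out)

t = np.arange(0.0, 2.0, 0.01)
raw = 3.0 * np.sin(2 * np.pi * t)           # demands far above the limit
filtered = command_filter(raw)
print("max |filtered command| =", np.abs(filtered).max())  # stays near u_max
```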

  13. Constraining neutron guide optimizations with phase-space considerations

    NASA Astrophysics Data System (ADS)

    Bertelsen, Mads; Lefmann, Kim

    2016-09-01

    We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space will restrict the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter is needed to describe the expected focusing abilities of the guide to be optimized, going from perfectly focusing to no correlation between position and velocity. The second parameter controls neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints, which demonstrate higher signal-to-noise ratios than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, which shows that the simulated optimal neutron intake is close to the analytical prediction, when assuming that the guide is dominated by multiple scattering events.

  14. Computationally optimized ECoG stimulation with local safety constraints.

    PubMed

    Guler, Seyhmus; Dannhauer, Moritz; Roig-Solvas, Biel; Gkogkidis, Alexis; Macleod, Rob; Ball, Tonio; Ojemann, Jeffrey G; Brooks, Dana H

    2018-06-01

    Direct stimulation of the cortical surface is used clinically for cortical mapping and modulation of local activity. Future applications of cortical modulation and brain-computer interfaces may also use cortical stimulation methods. One common method to deliver current is through electrocorticography (ECoG) stimulation in which a dense array of electrodes are placed subdurally or epidurally to stimulate the cortex. However, proximity to cortical tissue limits the amount of current that can be delivered safely. It may be desirable to deliver higher current to a specific local region of interest (ROI) while limiting current to other local areas more stringently than is guaranteed by global safety limits. Two commonly used global safety constraints bound the total injected current and individual electrode currents. However, these two sets of constraints may not be sufficient to prevent high current density locally (hot-spots). In this work, we propose an efficient approach that prevents current density hot-spots in the entire brain while optimizing ECoG stimulus patterns for targeted stimulation. Specifically, we maximize the current along a particular desired directional field in the ROI while respecting three safety constraints: one on the total injected current, one on individual electrode currents, and the third on the local current density magnitude in the brain. This third set of constraints creates a computational barrier due to the huge number of constraints needed to bound the current density at every point in the entire brain. We overcome this barrier by adopting an efficient two-step approach. In the first step, the proposed method identifies the safe brain region, which cannot contain any hot-spots solely based on the global bounds on total injected current and individual electrode currents. In the second step, the proposed algorithm iteratively adjusts the stimulus pattern to arrive at a solution that exhibits no hot-spots in the remaining brain. We report on simulations on a realistic finite element (FE) head model with five anatomical ROIs and two desired directional fields. We also report on the effect of ROI depth and desired directional field on the focality of the stimulation. Finally, we provide an analysis of optimization runtime as a function of different safety and modeling parameters. Our results suggest that optimized stimulus patterns tend to differ from those used in clinical practice. Copyright © 2018 Elsevier Inc. All rights reserved.
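    Leaving aside the local current-density (hot-spot) constraints that are the paper's main contribution, the first two safety constraints already define a small linear program: maximise the ROI current along a desired direction subject to per-electrode and total-current bounds. The sketch below sets that up with a random placeholder transfer matrix rather than a finite element head model; all limits are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy version of safety-constrained stimulation optimisation: maximise the
# current density along a desired direction in the ROI, subject to bounds on
# the total injected current and on each electrode.  The transfer matrix A is
# a random stand-in for the lead fields a head model would provide.
rng = np.random.default_rng(0)
n_elec = 16
A = rng.normal(size=(3, n_elec))        # electrode currents -> ROI current density
d = np.array([0.0, 0.0, 1.0])           # desired directional field in the ROI
i_elec_max = 1.0                        # per-electrode limit (mA)
i_total_max = 4.0                       # limit on total injected current (mA)

c_obj = A.T @ d                         # ROI directional current = c_obj . I

# variables x = [I (n), t (n)] with auxiliary t_k >= |I_k|
c = np.concatenate([-c_obj, np.zeros(n_elec)])           # maximise -> minimise negative
A_ub = np.block([[np.eye(n_elec), -np.eye(n_elec)],      #  I_k - t_k <= 0
                 [-np.eye(n_elec), -np.eye(n_elec)],     # -I_k - t_k <= 0
                 [np.zeros((1, n_elec)), np.ones((1, n_elec))]])  # sum t <= total
b_ub = np.concatenate([np.zeros(2 * n_elec), [i_total_max]])
A_eq = np.concatenate([np.ones((1, n_elec)), np.zeros((1, n_elec))], axis=1)
b_eq = [0.0]                            # injected currents must sum to zero
bounds = [(-i_elec_max, i_elec_max)] * n_elec + [(0, None)] * n_elec

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("optimal electrode pattern (mA):", np.round(res.x[:n_elec], 3))
```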

  15. Multiparameter elastic full waveform inversion with facies-based constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-06-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize FWI beyond improved acoustic imaging, like in reservoir delineation, faces inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.

  16. Model with two periods of inflation

    NASA Astrophysics Data System (ADS)

    Schettler, Simon; Schaffner-Bielich, Jürgen

    2016-01-01

    A scenario with two subsequent periods of inflationary expansion in the very early Universe is examined. The model is based on a potential motivated by symmetries being found in field theory at high energy. For various parameter sets of the potential, the spectra of scalar and tensor perturbations that are expected to originate from this scenario are calculated. Also the beginning of the reheating epoch connecting the second inflation with thermal equilibrium is studied. Perturbations with wavelengths leaving the horizon around the transition between the two inflations are special: It is demonstrated that the power spectrum at such scales deviates significantly from expectations based on measurements of the cosmic microwave background. This supports the conclusion that parameters for which this part of the spectrum leaves observable traces in the cosmic microwave background must be excluded. Parameters entailing a very efficient second inflation correspond to standard small-field inflation and can meet observational constraints. Particular attention is paid to the case where the second inflation leads solely to a shift of the observable spectrum from the first inflation. A viable scenario requires this shift to be small.

  17. Applications of holographic spacetime

    NASA Astrophysics Data System (ADS)

    Torres, Terrence J.

    Here we present an overview of the theory of holographic spacetime (HST), originally devised and primarily developed by Tom Banks and Willy Fischler, as well as its various applications and predictions for cosmology and particle phenomenology. First we cover the basic theory and motivation for holographic spacetime and move on to present the latest developments therein as of the time of this writing. Then we indicate the origin of the quantum degrees of freedom in the theory and then present a correspondence with low energy effective field theory. Further, we proceed to show the general origins of inflation and the cosmic microwave background (CMB) within the theory of HST as well as predict the functional forms of two and three point correlation functions for scalar and tensor curvature fluctuations in the early universe. Next, we constrain the theory parameters by insisting on agreement with observational bounds on the scalar spectral index of CMB fluctuations from the Planck experiment as well as theoretical bounds on the number of e-folds of inflation. Finally, we argue that HST predicts specific gauge structures for the low-energy effective field theory at the present era and proceed to construct a viable supersymmetric model extension. Constraints on model parameters and couplings are then calculated by numerically minimizing the theory's scalar potential and comparing the resultant model mass spectra to current observational limits from the LHC SUSY searches. In the end we find that the low-energy theory, while presenting a little hierarchy problem, is fully compatible with current observational limits. Additionally, the high-energy underlying theory is generically compatible with observational constraints stemming from inflation, and predictions on favored model parameters are given.

  18. Constraints on the Intergalactic Magnetic Field with Gamma-Ray Observations of Blazars

    NASA Astrophysics Data System (ADS)

    Finke, Justin D.; Reyes, Luis C.; Georganopoulos, Markos; Reynolds, Kaeleigh; Ajello, Marco; Fegan, Stephen J.; McCann, Kevin

    2015-11-01

    Distant BL Lacertae objects emit γ-rays that interact with the extragalactic background light (EBL), creating electron-positron pairs, and reducing the flux measured by ground-based imaging atmospheric Cherenkov telescopes (IACTs) at very-high energies (VHE). These pairs can Compton-scatter the cosmic microwave background, creating a γ-ray signature at slightly lower energies that is observable by the Fermi Large Area Telescope (LAT). This signal is strongly dependent on the intergalactic magnetic field (IGMF) strength (B) and its coherence length (L_B). We use IACT spectra taken from the literature for 5 VHE-detected BL Lac objects and combine them with LAT spectra for these sources to constrain these IGMF parameters. Low B values can be ruled out by the constraint that the cascade flux cannot exceed that observed by the LAT. High values of B can be ruled out from the constraint that the EBL-deabsorbed IACT spectrum cannot be greater than the LAT spectrum extrapolated into the VHE band, unless the cascade spectrum contributes a sizable fraction of the LAT flux. We rule out low B values (B ≲ 10^{-19} G for L_B ≥ 1 Mpc) at >5σ in all trials with different EBL models and data selection, except when using >1 GeV spectra and the lowest EBL models. We were not able to constrain high values of B.

  19. BBN for the LHC: Constraints on lifetimes of the Higgs portal scalars

    NASA Astrophysics Data System (ADS)

    Fradette, Anthony; Pospelov, Maxim

    2017-10-01

    LHC experiments can provide a remarkable sensitivity to exotic metastable massive particles, decaying with significant displacement from the interaction point. The best sensitivity is achieved with models where the production and decay occur due to different coupling constants, and the lifetime of exotic particles determines the probability of decay within a detector. The lifetimes of such particles can be independently limited from standard cosmology, in particular, the big bang nucleosynthesis (BBN). In this paper, we analyze the constraints on the simplest scalar model coupled through the Higgs portal, where the production occurs via h → SS, and the decay is induced by the small mixing angle of the Higgs field h and scalar S. We find that throughout most of the parameter space, 2 m_μ …

  20. Constraining the String Gauge Field by Galaxy Rotation Curves and Perihelion Precession of Planets

    NASA Astrophysics Data System (ADS)

    Cheung, Yeuk-Kwan E.; Xu, Feng

    2013-09-01

    We discuss a cosmological model in which the string gauge field coupled universally to matter gives rise to an extra centripetal force and will have observable signatures on cosmological and astronomical observations. Several tests are performed using data including galaxy rotation curves of 22 spiral galaxies of varied luminosities and sizes and perihelion precessions of planets in the solar system. The rotation curves of the same group of galaxies are independently fit using a dark matter model with the generalized Navarro-Frenk-White (NFW) profile and the string model. A remarkable fit of galaxy rotation curves is achieved using the one-parameter string model as compared to the three-parameter dark matter model with the NFW profile. The average χ2 value of the NFW fit is 9% better than that of the string model at a price of two more free parameters. Furthermore, from the string model, we can give a dynamical explanation for the phenomenological Tully-Fisher relation. We are able to derive a relation between field strength, galaxy size, and luminosity, which can be verified with data from the 22 galaxies. To further test the hypothesis of the universal existence of the string gauge field, we apply our string model to the solar system. Constraint on the magnitude of the string field in the solar system is deduced from the current ranges for any anomalous perihelion precession of planets allowed by the latest observations. The field distribution resembles a dipole field originating from the Sun. The string field strength deduced from the solar system observations is of a similar magnitude as the field strength needed to sustain the rotational speed of the Sun inside the Milky Way. This hypothesis can be tested further by future observations with higher precision.
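    For comparison purposes, the dark-matter side of such a fit is a standard chi-squared fit of an NFW rotation curve. The sketch below keeps only the two halo parameters (the paper's NFW fit uses three free parameters) and uses invented data points, purely to illustrate the fitting step.

```python
import numpy as np
from scipy.optimize import curve_fit

# Chi^2 fit of a two-parameter NFW rotation curve to synthetic data.
# Units and data points are invented for illustration only.
G = 4.30091e-6  # kpc (km/s)^2 / Msun

def v_nfw(r, rho0, rs):
    """NFW circular velocity [km/s] at radius r [kpc]."""
    x = r / rs
    m_enc = 4.0 * np.pi * rho0 * rs**3 * (np.log(1.0 + x) - x / (1.0 + x))
    return np.sqrt(G * m_enc / r)

# hypothetical rotation-curve data: radius [kpc], speed [km/s], error [km/s]
r_obs = np.array([2.0, 4.0, 8.0, 12.0, 16.0, 20.0, 25.0])
v_obs = np.array([95.0, 130.0, 155.0, 162.0, 165.0, 166.0, 164.0])
v_err = np.full_like(v_obs, 8.0)

popt, pcov = curve_fit(v_nfw, r_obs, v_obs, sigma=v_err, p0=(1e7, 10.0))
chi2 = np.sum(((v_obs - v_nfw(r_obs, *popt)) / v_err) ** 2)
print("rho0 = %.3g Msun/kpc^3, rs = %.2f kpc, chi2 = %.2f" % (*popt, chi2))
```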

  1. CONSTRAINTS ON SCALAR AND TENSOR PERTURBATIONS IN PHENOMENOLOGICAL AND TWO-FIELD INFLATION MODELS: BAYESIAN EVIDENCES FOR PRIMORDIAL ISOCURVATURE AND TENSOR MODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vaeliviita, Jussi; Savelainen, Matti; Talvitie, Marianne

    2012-07-10

    We constrain cosmological models where the primordial perturbations have an adiabatic and a (possibly correlated) cold dark matter (CDM) or baryon isocurvature component. We use both a phenomenological approach, where the power spectra of primordial perturbations are parameterized with amplitudes and spectral indices, and a slow-roll two-field inflation approach where slow-roll parameters are used as primary parameters, determining the spectral indices and the tensor-to-scalar ratio. In the phenomenological case, with CMB data, the upper limit to the CDM isocurvature fraction is α < 6.4% at k = 0.002 Mpc^{-1} and 15.4% at k = 0.01 Mpc^{-1}. The non-adiabatic contribution to the CMB temperature variance is -0.030 < α_T < 0.049 at the 95% confidence level. Including the supernova (SN) (or large-scale structure) data, these limits become α < 7.0%, 13.7%, and -0.048 < α_T < 0.042 (or α < 10.2%, 16.0%, and -0.071 < α_T < 0.024). The CMB constraint on the tensor-to-scalar ratio, r < 0.26 at k = 0.01 Mpc^{-1}, is not affected by the non-adiabatic modes. In the slow-roll two-field inflation approach, the spectral indices are constrained close to 1. This leads to tighter limits on the isocurvature fraction; with the CMB data α < 2.6% at k = 0.01 Mpc^{-1}, but the constraint on α_T is not much affected, -0.058 < α_T < 0.045. Including SN (or LSS) data, these limits become α < 3.2% and -0.056 < α_T < 0.030 (or α < 3.4% and -0.063 < α_T < -0.008). In addition to the generally correlated models, we also study special cases where the adiabatic and isocurvature modes are uncorrelated or fully (anti)correlated. We calculate Bayesian evidences (model probabilities) in 21 different non-adiabatic cases and compare them to the corresponding adiabatic models, and find that in all cases the data support the pure adiabatic model.

  2. First X-ray Statistical Tests for Clumpy Torii Models: Constraints from RXTE monitoring of Seyfert AGN

    NASA Astrophysics Data System (ADS)

    Markowitz, A.

    2015-09-01

    We summarize two papers providing the first X-ray-derived statistical constraints for both clumpy-torus model parameters and cloud ensemble properties. In Markowitz, Krumpe, & Nikutta (2014), we explored multi-timescale variability in line-of-sight X-ray absorbing gas as a function of optical classification. We examined 55 Seyferts monitored with the Rossi X-ray Timing Explorer, and found in 8 objects a total of 12 eclipses, with durations between hours and years. Most clouds are commensurate with the outer portions of the BLR, or the inner regions of infrared-emitting dusty tori. The detection of eclipses in type Is disfavors sharp-edged tori. We provide probabilities to observe a source undergoing an absorption event for both type Is and IIs, yielding constraints in [N_0, sigma, i] parameter space. In Nikutta et al., in prep., we infer that the small cloud angular sizes, as seen from the SMBH, imply the presence of >10^7 clouds in BLR+torus to explain observed covering factors. Cloud size is roughly proportional to distance from the SMBH, hinting at the formation processes (e.g. disk fragmentation). All observed clouds are sub-critical with respect to tidal disruption; self-gravity alone cannot contain them. External forces (e.g. magnetic fields, ambient pressure) are needed to contain them, or otherwise the clouds must be short-lived. Finally, we infer that the radial cloud density distribution behaves as 1/r^{0.7}, compatible with VLTI observations. Our results span both dusty and non-dusty clumpy media, and probe model parameter space complementary to that for short-term eclipses observed with XMM-Newton, Suzaku, and Chandra.

  3. The Probabilistic Admissible Region with Additional Constraints

    NASA Astrophysics Data System (ADS)

    Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.

    The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. Noting that the concepts presented are general and can be applied to any measurement scenario, the idea will be illustrated using a short-arc, angles-only observation scenario.
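    The particle-cloud-to-GMM step described above can be sketched with an off-the-shelf EM implementation; the range/range-rate cloud below is a made-up stand-in for an actual admissible region, and the component count is chosen by hand rather than by the criteria a real pipeline would use.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a Gaussian Mixture Model to a synthetic particle cloud in
# range / range-rate space, as a stand-in for a PAR particle cloud.
rng = np.random.default_rng(1)
range_km = np.concatenate([rng.normal(36000.0, 500.0, 4000),
                           rng.normal(20000.0, 800.0, 2000)])
range_rate = np.concatenate([rng.normal(0.0, 0.05, 4000),
                             rng.normal(1.2, 0.10, 2000)])
cloud = np.column_stack([range_km, range_rate])

# EM fit; in practice the number of components would be selected by an
# information criterion or by the scheme cited in the abstract.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(cloud)
print("weights:", np.round(gmm.weights_, 3))
print("means:\n", np.round(gmm.means_, 2))
```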

  4. Adaptive neural output-feedback control for nonstrict-feedback time-delay fractional-order systems with output constraints and actuator nonlinearities.

    PubMed

    Zouari, Farouk; Ibeas, Asier; Boulkroune, Abdesselem; Cao, Jinde; Mehdi Arefi, Mohammad

    2018-06-01

    This study addresses the issue of the adaptive output tracking control for a category of uncertain nonstrict-feedback delayed incommensurate fractional-order systems in the presence of nonaffine structures, unmeasured pseudo-states, unknown control directions, unknown actuator nonlinearities and output constraints. Firstly, the mean value theorem and the Gaussian error function are introduced to eliminate the difficulties that arise from the nonaffine structures and the unknown actuator nonlinearities, respectively. Secondly, the immeasurable tracking error variables are suitably estimated by constructing a fractional-order linear observer. Thirdly, the neural network, the Razumikhin Lemma, the variable separation approach, and the smooth Nussbaum-type function are used to deal with the uncertain nonlinear dynamics, the unknown time-varying delays, the nonstrict feedback and the unknown control directions, respectively. Fourthly, asymmetric barrier Lyapunov functions are employed to overcome the violation of the output constraints and to tune online the parameters of the adaptive neural controller. Through rigorous analysis, it is proved that the boundedness of all variables in the closed-loop system and the semi global asymptotic tracking are ensured without transgression of the constraints. The principal contributions of this study can be summarized as follows: (1) based on Caputo's definitions and new lemmas, methods concerning the controllability, observability and stability analysis of integer-order systems are extended to fractional-order ones, (2) the output tracking objective for a relatively large class of uncertain systems is achieved with a simple controller and less tuning parameters. Finally, computer-simulation studies from the robotic field are given to demonstrate the effectiveness of the proposed controller. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Cosmological constraints from galaxy clustering in the presence of massive neutrinos

    NASA Astrophysics Data System (ADS)

    Zennaro, M.; Bel, J.; Dossett, J.; Carbone, C.; Guzzo, L.

    2018-06-01

    The clustering ratio is defined as the ratio between the correlation function and the variance of the smoothed overdensity field. In Λ cold dark matter (ΛCDM) cosmologies without massive neutrinos, it has already been proven to be independent of bias and redshift space distortions on a range of linear scales. It therefore can provide us with a direct comparison of predictions (for matter in real space) against measurements (from galaxies in redshift space). In this paper we first extend the applicability of such properties to cosmologies that account for massive neutrinos, by performing tests against simulated data. We then investigate the constraining power of the clustering ratio on cosmological parameters such as the total neutrino mass and the equation of state of dark energy. We analyse the joint posterior distribution of the parameters that satisfy both measurements of the galaxy clustering ratio in the SDSS-DR12, and the angular power spectra of cosmic microwave background temperature and polarization anisotropies measured by the Planck satellite. We find the clustering ratio to be very sensitive to the CDM density parameter, but less sensitive to the total neutrino mass. We also forecast the constraining power the clustering ratio will achieve, predicting the amplitude of its errors with a Euclid-like galaxy survey. First we compute parameter forecasts using the Planck covariance matrix alone, then we add information from the clustering ratio. We find a significant improvement on the constraint of all considered parameters, and in particular an improvement of 40 per cent for the CDM density and 14 per cent for the total neutrino mass.
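    A minimal numerical sketch of the clustering-ratio estimator as defined above (correlation of the smoothed overdensity divided by its variance), applied to a periodic cube; the input field is white noise and the smoothing radius and lag are arbitrary, so this only exercises the estimator, not the cosmology.

```python
import numpy as np

# Clustering ratio eta_R(r) = xi_R(r) / sigma_R^2 for a periodic density cube:
# smooth the overdensity with a Gaussian of radius R in Fourier space, then
# take the ratio of its correlation at lag r to its variance.
def clustering_ratio(delta, box_size, R, r):
    n = delta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    delta_k = np.fft.fftn(delta) * np.exp(-0.5 * k2 * R**2)   # Gaussian smoothing
    smoothed = np.fft.ifftn(delta_k).real
    var = smoothed.var()
    shift = int(round(r / (box_size / n)))                    # lag in grid cells
    # correlation estimated along one axis only, for brevity
    xi = np.mean(smoothed * np.roll(smoothed, shift, axis=0)) - smoothed.mean() ** 2
    return xi / var

rng = np.random.default_rng(2)
field = rng.normal(size=(64, 64, 64))       # white-noise test field
print("eta_R(r) =", clustering_ratio(field, box_size=500.0, R=16.0, r=33.0))
```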

  6. Designing an optimum pulsed magnetic field by a resistance/self-inductance/capacitance discharge system and alignment of carbon nanotubes embedded in polypyrrole matrix

    NASA Astrophysics Data System (ADS)

    Kazemikia, Kaveh; Bonabi, Fahimeh; Asadpoorchallo, Ali; Shokrzadeh, Majid

    2015-02-01

    In this work, an optimized pulsed magnetic field production apparatus is designed based on an RLC (Resistance/Self-inductance/Capacitance) discharge circuit. An algorithm for designing an optimum magnetic coil is presented. The coil is designed to work at room temperature. With a minor physical reinforcement, the magnetic flux density can reach up to 12 T with a 2 ms pulse duration. In our design process, the magnitude and the length of the magnetic pulse are the desired parameters. The magnetic field magnitude in the RLC circuit is maximized on the basis of the optimal design of the coil. The variables used in the optimization process are the wire diameter and the number of coil layers. The coil design ensures the critically damped response of the RLC circuit. The electrical, mechanical, and thermal constraints are applied to the design process. A locus of probable magnetic flux density values versus wire diameter and coil layers is provided to locate the optimum coil parameters. Another locus of magnetic flux density values versus capacitance and initial voltage of the RLC circuit is extracted to locate the optimum circuit parameters. Finally, the application of high magnetic fields to a carbon nanotube-PolyPyrrole (CNT-PPy) nanocomposite is presented. The scanning probe microscopy technique is used to observe the orientation of CNTs after exposure to a magnetic field. The results show alignment of the CNTs after a 10.3 T, 1.5 ms magnetic pulse.
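    The core of such a design is the critically damped RLC discharge, for which the pulse shape and peak current have simple closed forms. The sketch below evaluates them for illustrative component values (not the paper's design numbers) together with an assumed coil constant in tesla per ampere.

```python
import numpy as np

# Critically damped RLC discharge: R = 2*sqrt(L/C) gives
# i(t) = (V0/L) * t * exp(-t/sqrt(L*C)), whose peak sets the attainable flux
# density via the coil constant (tesla per ampere).  All numbers are illustrative.
def pulse_current(t, V0, L, C):
    alpha = 1.0 / np.sqrt(L * C)           # critical damping rate
    return (V0 / L) * t * np.exp(-alpha * t)

V0, L, C = 3000.0, 200e-6, 4e-3            # volts, henry, farad (made-up values)
coil_constant = 2.5e-3                     # tesla per ampere (geometry dependent)

t = np.linspace(0.0, 5e-3, 5000)
i = pulse_current(t, V0, L, C)
t_peak = np.sqrt(L * C)                    # analytic peak time of t*exp(-t/tau)
i_peak = pulse_current(t_peak, V0, L, C)
mask = i > 0.5 * i_peak
fwhm_ms = 1e3 * (t[mask][-1] - t[mask][0])

print("R for critical damping: %.3f ohm" % (2 * np.sqrt(L / C)))
print("peak current %.0f A at %.2f ms -> B ~ %.1f T, pulse FWHM ~ %.1f ms"
      % (i_peak, 1e3 * t_peak, coil_constant * i_peak, fwhm_ms))
```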

  7. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
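    A minimal sketch of a creeping-random-search loop of this kind, with an adaptive step size standing in for the "modified version" mentioned above; the two-parameter exponential-decay identification problem is a made-up stand-in for the report's test cases.

```python
import numpy as np

# Creeping random search: perturb the current best parameter vector with small
# random steps and keep any step that lowers the cost.  The step adaptation
# (grow on success, shrink on failure) is one simple way to speed convergence.
def creeping_random_search(cost, x0, step=0.5, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    x_best = np.asarray(x0, dtype=float)
    f_best = cost(x_best)
    for _ in range(iters):
        trial = x_best + step * rng.normal(size=x_best.shape)  # "creep" nearby
        f_trial = cost(trial)
        if f_trial < f_best:
            x_best, f_best = trial, f_trial
            step *= 1.2        # grow the step after a success
        else:
            step *= 0.95       # shrink it after a failure
    return x_best, f_best

# two-parameter identification toy problem: recover (a, b) in y = a*exp(-b*t)
t = np.linspace(0.0, 5.0, 200)
data = 2.0 * np.exp(-1.3 * t)
cost = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - data) ** 2)

x_hat, f_hat = creeping_random_search(cost, x0=[1.0, 0.5])
print("estimated (a, b):", np.round(x_hat, 3), " cost:", f_hat)
```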

  8. The coherence length of the peculiar velocity field in the universe and the large-scale galaxy correlation data

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1992-01-01

    This study presents a method for obtaining the true rms peculiar flow in the universe on scales up to 100-120/h Mpc using APM data as an input assuming only that peculiar motions are caused by peculiar gravity. The comparison to the local (Great Attractor) flow is expected to give clear information on the density parameter, Omega, and the local bias parameter, b. The observed peculiar flows in the Great Attractor region are found to be in better agreement with the open (Omega = 0.1) universe in which light traces mass (b = 1) than with a flat (Omega = 1) universe unless the bias parameter is unrealistically large (b is not less than 4). Constraints on Omega from a comparison of the APM and PV samples are discussed.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goolsby-Cole, Cody; Sorbo, Lorenzo, E-mail: cgoolsby@physics.umass.edu, E-mail: sorbo@physics.umass.edu

    We discuss the possibility of a feature in the spectrum of inflationary gravitational waves sourced by a scalar field χ whose vacuum fluctuations are amplified by a rapidly time-dependent mass. Unlike previous work, which has focused on the case in which the mass of the field χ vanishes only for an instant before becoming massive again, we study a system where the scalar field becomes and remains massless through the end of inflation. After applying appropriate constraints to our parameters, we find, for future CMB experiments, a small contribution to the tensor-to-scalar ratio which can be at most of the order r ∼ 10^{−5}. At smaller scales probed by gravitational interferometers, on the other hand, the energy density in the gravitational waves produced this way might be above the projected sensitivity of LISA, Ω_{GW} h^2 ∼ 10^{−13}, in a narrow region of parameter space. If there is more than one χ species, then these amplitudes are enhanced by a factor equal to the number of those species.

  10. Bootstrapping N=2 chiral correlators

    NASA Astrophysics Data System (ADS)

    Lemos, Madalena; Liendo, Pedro

    2016-01-01

    We apply the numerical bootstrap program to chiral operators in four-dimensional N=2 SCFTs. In the first part of this work we study four-point functions in which all fields have the same conformal dimension. We give special emphasis to bootstrapping a specific theory: the simplest Argyres-Douglas fixed point with no flavor symmetry. In the second part we generalize our setup and consider correlators of fields with unequal dimension. This is an example of a mixed correlator and allows us to probe new regions in the parameter space of N=2 SCFTs. In particular, our results put constraints on relations in the Coulomb branch chiral ring and on the curvature of the Zamolodchikov metric.

  11. Two-dimensional tracking of ncd motility by back focal plane interferometry.

    PubMed Central

    Allersma, M W; Gittes, F; deCastro, M J; Stewart, R J; Schmidt, C F

    1998-01-01

    A technique for detecting the displacement of micron-sized optically trapped probes using far-field interference is introduced, theoretically explained, and used to study the motility of the ncd motor protein. Bead motions in the focal plane relative to the optical trap were detected by measuring laser intensity shifts in the back-focal plane of the microscope condenser by projection on a quadrant diode. This detection method is two-dimensional, largely independent of the position of the trap in the field of view and has approximately 10 μs time resolution. The high resolution makes it possible to apply spectral analysis to measure dynamic parameters such as local viscosity and attachment compliance. A simple quantitative theory for back-focal-plane detection was derived that shows that the laser intensity shifts are caused primarily by a far-field interference effect. The theory predicts the detector response to bead displacement, without adjustable parameters, with good accuracy. To demonstrate the potential of the method, the ATP-dependent motility of ncd, a kinesin-related motor protein, was observed with an in vitro bead assay. A fusion protein consisting of truncated ncd (amino acids 195-685) fused with glutathione-S-transferase was adsorbed to silica beads, and the axial and lateral motions of the beads along the microtubule surface were observed with high spatial and temporal resolution. The average axial velocity of the ncd-coated beads was 230 +/- 30 nm/s (average +/- SD). Spectral analysis of bead motion showed the increase in viscous drag near the surface; we also found that any elastic constraints of the moving motors are much smaller than the constraints due to binding in the presence of the nonhydrolyzable nucleotide adenylylimidodiphosphate. PMID:9533719
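    The spectral analysis mentioned above rests on the fact that a bead held by a trap (or a motor linkage) has a Lorentzian position power spectrum whose corner frequency ties stiffness to drag, and hence to local viscosity. The sketch below fits that Lorentzian to synthetic data; all physical values are illustrative, not measurements from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Lorentzian power spectrum of a harmonically bound, overdamped bead:
# S(f) = D / (pi^2 (fc^2 + f^2)) with D = kB*T/gamma and fc = kappa/(2*pi*gamma),
# so a fitted corner frequency fc yields stiffness (or drag/viscosity).
kB_T = 4.11e-21                      # J at room temperature
gamma_true = 6.0e-9                  # kg/s, drag of a ~1 um bead (illustrative)
kappa_true = 2.0e-5                  # N/m, effective stiffness (illustrative)

def lorentzian(f, D, fc):
    return D / (np.pi**2 * (fc**2 + f**2))

f = np.logspace(0, 4, 400)                           # 1 Hz .. 10 kHz
fc_true = kappa_true / (2 * np.pi * gamma_true)
D_true = kB_T / gamma_true * 1e18                    # diffusion coeff. in nm^2/s
rng = np.random.default_rng(3)
spectrum = lorentzian(f, D_true, fc_true) * rng.gamma(20.0, 1 / 20.0, size=f.size)

(D_fit, fc_fit), _ = curve_fit(lorentzian, f, spectrum, p0=(1e6, 300.0))
print("corner frequency: %.0f Hz -> stiffness %.2e N/m (given gamma)"
      % (fc_fit, 2 * np.pi * gamma_true * fc_fit))
```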

  12. Development of a Composite Tailoring Procedure for Airplane Wings

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi

    2000-01-01

    The quest for finding optimum solutions to engineering problems has existed for a long time. In modern times, the development of optimization as a branch of applied mathematics is regarded to have originated in the works of Newton, Bernoulli and Euler. Venkayya has presented a historical perspective on optimization in [1]. The term 'optimization' is defined by Ashley [2] as a procedure "...which attempts to choose the variables in a design process so as formally to achieve the best value of some performance index while not violating any of the associated conditions or constraints". Ashley presented an extensive review of practical applications of optimization in the aeronautical field till about 1980 [2]. It was noted that there existed an enormous amount of published literature in the field of optimization, but its practical applications in industry were very limited. Over the past 15 years, though, optimization has been widely applied to address practical problems in aerospace design [3-5]. The design of high performance aerospace systems is a complex task. It involves the integration of several disciplines such as aerodynamics, structural analysis, dynamics, and aeroelasticity. The problem involves multiple objectives and constraints pertaining to the design criteria associated with each of these disciplines. Many important trade-offs exist between the parameters involved which are used to define the different disciplines. Therefore, the development of multidisciplinary design optimization (MDO) techniques, in which different disciplines and design parameters are coupled into a closed loop numerical procedure, seems appropriate to address such a complex problem. The importance of MDO in successful design of aerospace systems has been long recognized. Recent developments in this field have been surveyed by Sobieszczanski-Sobieski and Haftka [6].

  13. Observational status of Tachyon Natural Inflation and reheating

    NASA Astrophysics Data System (ADS)

    Rashidi, Narges; Nozari, Kourosh; Grøn, Øyvind

    2018-05-01

    We study the observational viability of Natural Inflation with a tachyon field as inflaton. By obtaining the main perturbation parameters in this model, we perform a numerical analysis of the model's parameter space in confrontation with the 68% and 95% CL regions of the Planck2015 data. By adopting a warped background geometry, we find some new constraints on the width of the potential in terms of its height and the warp factor. We show that Tachyon Natural Inflation in the large-width limit recovers the tachyon model with a φ^2 potential, which is consistent with Planck2015 observational data. Then we focus on the reheating era after inflation by treating the number of e-folds, the temperature and the effective equation-of-state parameter in this era. Since the value of the effective equation-of-state parameter during the reheating era is likely to lie in the range 0 ≤ ω_eff ≤ 1/3, we obtain some new constraints on the tensor-to-scalar ratio, r, as well as the e-folds number and reheating temperature in this Tachyon Natural Inflation model. In particular, we show that a prediction of this model is r ≤ (8/3) δn_s, where δn_s is the scalar spectral tilt, δn_s = 1 − n_s. In this regard, given that from the Planck2015 data we have δn_s = 0.032 (corresponding to n_s = 0.968), we get r ≤ 0.085.
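    As a quick arithmetic check of the quoted bound (our computation, not part of the record):

    $$r \;\le\; \tfrac{8}{3}\,\delta n_s \;=\; \tfrac{8}{3}\,(1-n_s) \;=\; \tfrac{8}{3}\times 0.032 \;\simeq\; 0.085.$$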

  14. Chemical reaction for Carreau-Yasuda nanofluid flow past a nonlinear stretching sheet considering Joule heating

    NASA Astrophysics Data System (ADS)

    Khan, Mair; Shahid, Amna; Malik, M. Y.; Salahuddin, T.

    2018-03-01

    The current analysis scrutinizes the consequences of a chemical reaction on magneto-hydrodynamic Carreau-Yasuda nanofluid flow induced by a non-linear stretching surface, considering zero normal flux, slip and convective boundary conditions. The Joule heating effect is also considered. An appropriate similarity approach is used to convert the governing system of PDEs for the Carreau-Yasuda nanofluid into nonlinear ODEs. A well-known numerical scheme, namely the shooting method, is utilized to solve the system numerically. The effects of the physical parameters, namely the Weissenberg number We, thermal slip parameter δ, thermophoresis parameter NT, non-linear stretching parameter n, magnetic field parameter M, velocity slip parameter k, Lewis number Le, Brownian motion parameter NB, Prandtl number Pr, Eckert number Ec and chemical reaction parameter γ, upon the temperature, velocity and concentration profiles are visualized through graphs and tables. The numerical behaviour of the mass and heat transfer rates and the friction factor is also presented in tabular as well as graphical form. The skin friction coefficient reduces when the Weissenberg number We is incremented. The rate of heat transfer is enhanced for large values of the Brownian motion parameter NB. Increasing the Lewis number Le causes the rate of mass transfer to decline.
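    The shooting method referred to above can be illustrated on the classical Blasius boundary-layer problem, which is solved in exactly the same way as the Carreau-Yasuda system: guess the missing wall value, integrate outward, and adjust until the far-field condition holds. The sketch below shows this generic procedure, not the paper's actual equations.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Shooting method for the Blasius similarity ODE:
#   f''' + (1/2) f f'' = 0,  f(0) = f'(0) = 0,  f'(inf) = 1.
# Guess the wall shear f''(0), integrate, and root-find on the far-field error.
def blasius_rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def far_field_error(fpp0, eta_max=10.0):
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, fpp0], rtol=1e-8)
    return sol.y[1, -1] - 1.0          # want f'(eta_max) -> 1

fpp_wall = brentq(far_field_error, 0.1, 1.0)   # shoot on the wall shear f''(0)
print("f''(0) = %.5f (classical value ~0.33206)" % fpp_wall)
```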

  15. THE LITTLEST HIGGS MODEL AND ONE-LOOP ELECTROWEAK PRECISION CONSTRAINTS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHEN, M.C.; DAWSON,S.

    2004-06-16

    We present in this talk the one-loop electroweak precision constraints in the Littlest Higgs model, including the logarithmically enhanced contributions from both fermion and scalar loops. We find the one-loop contributions are comparable to the tree level corrections in some regions of parameter space. A low cutoff scale is allowed for a non-zero triplet VEV. Constraints on various other parameters in the model are also discussed. The role of triplet scalars in constructing a consistent renormalization scheme is emphasized.

  16. Joint measurement of lensing-galaxy correlations using SPT and DES SV data

    DOE PAGES

    Baxter, E. J.

    2016-07-04

    We measure the correlation of galaxy lensing and cosmic microwave background lensing with a set of galaxies expected to trace the matter density field. The measurements are performed using pre-survey Dark Energy Survey (DES) Science Verification optical imaging data and millimeter-wave data from the 2500 square degree South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey. The two lensing-galaxy correlations are jointly fit to extract constraints on cosmological parameters, constraints on the redshift distribution of the lens galaxies, and constraints on the absolute shear calibration of DES galaxy lensing measurements. We show that an attractive feature of these fits is that they are fairly insensitive to the clustering bias of the galaxies used as matter tracers. The measurement presented in this work confirms that DES and SPT data are consistent with each other and with the currently favored $$\\Lambda$$CDM cosmological model. It also demonstrates that the joint lensing-galaxy correlation measurement considered here contains a wealth of information that can be extracted using current and future surveys.

  17. Joint measurement of lensing-galaxy correlations using SPT and DES SV data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, E. J.

    We measure the correlation of galaxy lensing and cosmic microwave background lensing with a set of galaxies expected to trace the matter density field. The measurements are performed using pre-survey Dark Energy Survey (DES) Science Verification optical imaging data and millimeter-wave data from the 2500 square degree South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey. The two lensing-galaxy correlations are jointly fit to extract constraints on cosmological parameters, constraints on the redshift distribution of the lens galaxies, and constraints on the absolute shear calibration of DES galaxy lensing measurements. We show that an attractive feature of these fits is that they are fairly insensitive to the clustering bias of the galaxies used as matter tracers. The measurement presented in this work confirms that DES and SPT data are consistent with each other and with the currently favored $$\\Lambda$$CDM cosmological model. It also demonstrates that the joint lensing-galaxy correlation measurement considered here contains a wealth of information that can be extracted using current and future surveys.

  18. Joint measurement of lensing–galaxy correlations using SPT and DES SV data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, E.; Clampitt, J.; Giannantonio, T.

    We measure the correlation of galaxy lensing and cosmic microwave background lensing with a set of galaxies expected to trace the matter density field. The measurements are performed using pre-survey Dark Energy Survey (DES) Science Verification optical imaging data and millimetre-wave data from the 2500 sq. deg. South Pole Telescope Sunyaev–Zel'dovich (SPT-SZ) survey. The two lensing–galaxy correlations are jointly fit to extract constraints on cosmological parameters, constraints on the redshift distribution of the lens galaxies, and constraints on the absolute shear calibration of DES galaxy-lensing measurements. We show that an attractive feature of these fits is that they are fairly insensitive to the clustering bias of the galaxies used as matter tracers. The measurement presented in this work confirms that DES and SPT data are consistent with each other and with the currently favoured Λ cold dark matter cosmological model. It also demonstrates that the joint lensing–galaxy correlation measurement considered here contains a wealth of information that can be extracted using current and future surveys.

  19. Indirect detection constraints on s- and t-channel simplified models of dark matter

    NASA Astrophysics Data System (ADS)

    Carpenter, Linda M.; Colburn, Russell; Goodman, Jessica; Linden, Tim

    2016-09-01

    Recent Fermi-LAT observations of dwarf spheroidal galaxies in the Milky Way have placed strong limits on the gamma-ray flux from dark matter annihilation. In order to produce the strongest limit on the dark matter annihilation cross section, the observations of each dwarf galaxy have typically been "stacked" in a joint-likelihood analysis, utilizing optical observations to constrain the dark matter density profile in each dwarf. These limits have typically been computed only for single annihilation final states, such as b b̄ or τ⁺τ⁻. In this paper, we generalize this approach by producing an independent joint-likelihood analysis to set constraints on models where the dark matter particle annihilates to multiple final-state fermions. We interpret these results in the context of the most popular simplified models, including those with s- and t-channel dark matter annihilation through scalar and vector mediators. We present our results as constraints on the minimum dark matter mass and the mediator sector parameters. Additionally, we compare our simplified model results to those of effective field theory contact interactions in the high-mass limit.

  20. Surveillance of a 2D Plane Area with 3D Deployed Cameras

    PubMed Central

    Fu, Yi-Ge; Zhou, Jie; Deng, Lei

    2014-01-01

    As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, and a cost that is as low as possible) has become an important problem. The discrete camera deployment problem is NP-hard and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under some more realistic assumptions: (1) deploy the cameras in 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to obtain maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and minimum resolution constraints. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regulation item in the cost function. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353
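    A compact sketch of a binary PSO of this kind: each bit switches a candidate camera on or off, and the fitness combines coverage with a camera-count penalty playing the role of the regulation item mentioned above. The coverage matrix is random here rather than derived from FOV and resolution geometry, so this only illustrates the optimisation loop.

```python
import numpy as np

# Binary PSO with the standard sigmoid position-update rule.
# covers[c, p] is True when candidate camera c sees ground point p (random here).
rng = np.random.default_rng(4)
n_cams, n_pts, n_particles, iters = 30, 200, 25, 150
covers = rng.random((n_cams, n_pts)) < 0.12

def fitness(bits):
    covered = covers[bits.astype(bool)].any(axis=0).mean() if bits.any() else 0.0
    return covered - 0.01 * bits.sum()            # coverage ratio minus camera cost

x = (rng.random((n_particles, n_cams)) < 0.5).astype(float)   # bit positions
v = np.zeros_like(x)                                          # velocities
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(float)  # sigmoid rule
    f = np.array([fitness(p) for p in x])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

print("cameras used:", int(gbest.sum()),
      " coverage: %.2f" % covers[gbest.astype(bool)].any(axis=0).mean())
```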

  1. Cosmological Constraints from Fourier Phase Statistics

    NASA Astrophysics Data System (ADS)

    Ali, Kamran; Obreschkow, Danail; Howlett, Cullan; Bonvin, Camille; Llinares, Claudio; Oliveira Franco, Felipe; Power, Chris

    2018-06-01

    Most statistical inference from cosmic large-scale structure relies on two-point statistics, i.e. on the galaxy-galaxy correlation function (2PCF) or the power spectrum. These statistics capture the full information encoded in the Fourier amplitudes of the galaxy density field but do not describe the Fourier phases of the field. Here, we quantify the information contained in the line correlation function (LCF), a three-point Fourier phase correlation function. Using cosmological simulations, we estimate the Fisher information (at redshift z = 0) of the 2PCF, LCF and their combination, regarding the cosmological parameters of the standard ΛCDM model, as well as a Warm Dark Matter (WDM) model and the f(R) and Symmetron modified gravity models. The galaxy bias is accounted for at the level of a linear bias. The relative information of the 2PCF and the LCF depends on the survey volume, sampling density (shot noise) and the bias uncertainty. For a volume of 1 h^{-3} Gpc^3, sampled with points of mean density \bar{n} = 2 × 10^{-3} h^3 Mpc^{-3} and a bias uncertainty of 13%, the LCF improves the parameter constraints by about 20% in the ΛCDM cosmology and potentially even more in alternative models. Finally, since a linear bias only affects the Fourier amplitudes (2PCF), but not the phases (LCF), the combination of the 2PCF and the LCF can be used to break the degeneracy between the linear bias and σ8, present in 2-point statistics.

  2. General gauge mediation at the weak scale

    DOE PAGES

    Knapen, Simon; Redigolo, Diego; Shih, David

    2016-03-09

    We completely characterize General Gauge Mediation (GGM) at the weak scale by solving all IR constraints over the full parameter space. This is made possible through a combination of numerical and analytical methods, based on a set of algebraic relations among the IR soft masses derived from the GGM boundary conditions in the UV. We show how tensions between just a few constraints determine the boundaries of the parameter space: electroweak symmetry breaking (EWSB), the Higgs mass, slepton tachyons, and left-handed stop/sbottom tachyons. While these constraints allow the left-handed squarks to be arbitrarily light, they place strong lower bounds on all of the right-handed squarks. Meanwhile, light EW superpartners are generic throughout much of the parameter space. This is especially the case at lower messenger scales, where a positive threshold correction to m_h coming from light Higgsinos and winos is essential in order to satisfy the Higgs mass constraint.

  3. Towards weakly constrained double field theory

    NASA Astrophysics Data System (ADS)

    Lee, Kanghoon

    2016-08-01

    We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using strong constraint in double field theory. We show that X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained fields are represented as a sum of strongly constrained fields. Using inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transform and gauge invariant action without using strong constraint. We then discuss the relation of our result to the closed string field theory. Our construction suggests that there exists an effective field theory description for massless sector of closed string field theory on a torus in an associative truncation.

  4. Optimal impulsive time-fixed orbital rendezvous and interception with path constraints

    NASA Technical Reports Server (NTRS)

    Taur, D.-R.; Prussing, J. E.; Coverstone-Carroll, V.

    1990-01-01

    Minimum-fuel, impulsive, time-fixed solutions are obtained for the problem of orbital rendezvous and interception with interior path constraints. Transfers between coplanar circular orbits in an inverse-square gravitational field are considered, subject to a circular path constraint representing a minimum or maximum permissible orbital radius. Primer vector theory is extended to incorporate path constraints. The optimal number of impulses, their times and positions, and the presence of initial or final coasting arcs are determined. The existence of constraint boundary arcs and boundary points is investigated as well as the optimality of a class of singular arc solutions. To illustrate the complexities introduced by path constraints, an analysis is made of optimal rendezvous in field-free space subject to a minimum radius constraint.

  5. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  6. Constraints on cosmological parameters in power-law cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rani, Sarita; Singh, J.K.; Altaibayeva, A.

    In this paper, we examine observational constraints on power-law cosmology, which depends essentially on two parameters: H{sub 0} (the Hubble constant) and q (the deceleration parameter). We investigate the constraints on these parameters using the latest 28 points of H(z) data and the 580 points of the Union2.1 compilation, and compare the results with those of ΛCDM. We also forecast constraints using a simulated data set for the future JDEM supernovae survey. Our study gives better insight into power-law cosmology than the earlier analysis by Kumar [arXiv:1109.6924], indicating that it agrees well with the Union2.1 compilation data but not with the H(z) data. However, the constraints on the average values of H{sub 0} and q obtained using the simulated data set for the future JDEM supernovae survey are found to be inconsistent with the values obtained from the H(z) and Union2.1 compilation data. We also perform a statefinder analysis and find that the power-law cosmological models approach the standard ΛCDM model as q → −1. Finally, we observe that although power-law cosmology explains several prominent features of the evolution of the Universe, it fails in the details.
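
    To make the two-parameter fit concrete, here is a minimal sketch (assuming only the power-law relation H(z) = H0(1+z)^(1+q)) of a chi-square fit to H(z) measurements; the data arrays and starting values are illustrative placeholders, not the 28-point compilation used in the paper.

    ```python
    # Minimal sketch: chi-square fit of power-law cosmology, H(z) = H0*(1+z)**(1+q),
    # to H(z) measurements. The data points below are hypothetical placeholders.
    import numpy as np
    from scipy.optimize import minimize

    z_obs = np.array([0.1, 0.4, 0.9, 1.5, 2.3])          # hypothetical redshifts
    H_obs = np.array([72.0, 85.0, 110.0, 145.0, 190.0])  # hypothetical H(z) [km/s/Mpc]
    H_err = np.array([5.0, 6.0, 8.0, 12.0, 20.0])

    def H_powerlaw(z, H0, q):
        # In power-law cosmology a(t) ~ t^beta with beta = 1/(1+q),
        # so the expansion rate is H(z) = H0 * (1+z)**(1+q).
        return H0 * (1.0 + z) ** (1.0 + q)

    def chi2(params):
        H0, q = params
        return np.sum(((H_obs - H_powerlaw(z_obs, H0, q)) / H_err) ** 2)

    best = minimize(chi2, x0=[70.0, 0.0], method="Nelder-Mead")
    H0_fit, q_fit = best.x
    print(f"best-fit H0 = {H0_fit:.1f} km/s/Mpc, q = {q_fit:.2f}, chi2 = {best.fun:.2f}")
    ```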

  7. OPEN CLUSTERS AS PROBES OF THE GALACTIC MAGNETIC FIELD. I. CLUSTER PROPERTIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoq, Sadia; Clemens, D. P., E-mail: shoq@bu.edu, E-mail: clemens@bu.edu

    2015-10-15

    Stars in open clusters are powerful probes of the intervening Galactic magnetic field via background starlight polarimetry because they provide constraints on the magnetic field distances. We use 2MASS photometric data for a sample of 31 clusters in the outer Galaxy for which near-IR polarimetric data were obtained to determine the cluster distances, ages, and reddenings via fitting theoretical isochrones to cluster color–magnitude diagrams. The fitting approach uses an objective χ{sup 2} minimization technique to derive the cluster properties and their uncertainties. We found the ages, distances, and reddenings for 24 of the clusters, and the distances and reddenings for 6 additional clusters that were either sparse or faint in the near-IR. The derived ranges of log(age), distance, and E(B−V) were 7.25–9.63, ∼670–6160 pc, and 0.02–1.46 mag, respectively. The distance uncertainties ranged from ∼8% to 20%. The derived parameters were compared to previous studies, and most cluster parameters agree within our uncertainties. To test the accuracy of the fitting technique, synthetic clusters with 50, 100, or 200 cluster members and a wide range of ages were fit. These tests recovered the input parameters within their uncertainties for more than 90% of the individual synthetic cluster parameters. These results indicate that the fitting technique likely provides reliable estimates of cluster properties. The distances derived will be used in an upcoming study of the Galactic magnetic field in the outer Galaxy.
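
    The chi-square isochrone-fitting step can be sketched as a grid search over distance modulus and reddening; the isochrone, photometry, and near-IR extinction coefficients below are hypothetical placeholders standing in for the objective fitting machinery used in the study.

    ```python
    # Sketch: chi-square fit of a shifted isochrone to a (J-Ks, Ks) color-magnitude
    # diagram over a grid of distance modulus and E(B-V). All inputs are placeholders;
    # the extinction coefficients are illustrative values only.
    import numpy as np

    def cmd_chi2(iso_color, iso_mag, obs_color, obs_mag, obs_err,
                 dist_mod, ebv, R_color=0.5, R_mag=0.35):
        # Shift the isochrone by distance modulus and reddening, then, for each
        # observed star, use the nearest isochrone point to accumulate chi^2.
        model_color = iso_color + R_color * ebv
        model_mag = iso_mag + dist_mod + R_mag * ebv
        chi2 = 0.0
        for c, m, e in zip(obs_color, obs_mag, obs_err):
            d2 = (model_color - c) ** 2 + (model_mag - m) ** 2
            chi2 += d2.min() / e ** 2
        return chi2

    def fit_cluster(iso_color, iso_mag, obs_color, obs_mag, obs_err):
        dms = np.arange(8.0, 15.0, 0.05)     # distance-modulus grid
        ebvs = np.arange(0.0, 1.5, 0.02)     # reddening grid
        grid = np.array([[cmd_chi2(iso_color, iso_mag, obs_color, obs_mag,
                                   obs_err, dm, ebv) for ebv in ebvs] for dm in dms])
        i, j = np.unravel_index(grid.argmin(), grid.shape)
        return dms[i], ebvs[j], grid[i, j]   # best distance modulus, E(B-V), chi^2
    ```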

  8. Hybrid Natural Inflation

    NASA Astrophysics Data System (ADS)

    Ross, Graham G.; Germán, Gabriel; Vázquez, J. Alberto

    2016-05-01

    We construct two simple effective field theory versions of Hybrid Natural Inflation (HNI) that illustrate the range of its phenomenological implications. The resulting inflationary-sector potential, V = Δ⁴(1 + a cos(ϕ/f)), arises naturally, with the inflaton field a pseudo-Nambu-Goldstone boson. The end of inflation is triggered by a waterfall field, and the conditions for this to happen are determined. Also of interest is the fact that the slow-roll parameter ɛ (and hence the tensor amplitude) is a non-monotonic function of the field, with a maximum where observables take universal values; this maximum determines the largest possible tensor-to-scalar ratio r. In one of the models the inflationary scale can be as low as the electroweak scale. We explore the associated HNI phenomenology in detail, taking account of the constraints from black hole production, and perform a detailed fit to the Planck 2015 temperature and polarisation data.
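
    The non-monotonic behaviour of the slow-roll parameter for this potential is easy to verify numerically; the sketch below works in reduced Planck units with illustrative values of a and f, not fitted values from the paper.

    ```python
    # Minimal sketch: slow-roll parameter epsilon(phi) and r = 16*epsilon for the
    # HNI potential V = Delta^4 * (1 + a*cos(phi/f)), with M_p = 1. The values of
    # a and f are illustrative only.
    import numpy as np

    a, f = 0.9, 10.0
    phi = np.linspace(1e-3, np.pi * f - 1e-3, 2000)

    V = 1.0 + a * np.cos(phi / f)          # Delta^4 cancels in epsilon
    dV = -(a / f) * np.sin(phi / f)

    eps = 0.5 * (dV / V) ** 2              # epsilon = (1/2) (V'/V)^2
    r = 16.0 * eps                         # slow-roll tensor-to-scalar ratio

    i_max = np.argmax(eps)
    print(f"epsilon peaks at phi/f = {phi[i_max] / f:.3f}, giving r_max = {r[i_max]:.3e}")
    ```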

  9. Fourier transform inequalities for phylogenetic trees.

    PubMed

    Matsen, Frederick A

    2009-01-01

    Phylogenetic invariants are not the only constraints on site-pattern frequency vectors for phylogenetic trees. A mutation matrix, by its definition, is the exponential of a matrix with non-negative off-diagonal entries; this positivity requirement implies non-trivial constraints on the site-pattern frequency vectors. We call these additional constraints "edge-parameter inequalities". In this paper, we first motivate the edge-parameter inequalities by considering a pathological site-pattern frequency vector corresponding to a quartet tree with a negative internal edge. This site-pattern frequency vector nevertheless satisfies all of the constraints described up to now in the literature. We next describe two complete sets of edge-parameter inequalities for the group-based models; these constraints are square-free monomial inequalities in the Fourier transformed coordinates. These inequalities, along with the phylogenetic invariants, form a complete description of the set of site-pattern frequency vectors corresponding to bona fide trees. Said in mathematical language, this paper explicitly presents two finite lists of inequalities in Fourier coordinates of the form "monomial ≤ 1", each list characterizing the phylogenetically relevant semialgebraic subsets of the phylogenetic varieties.

  10. Gravitational Lensing 2.0

    NASA Astrophysics Data System (ADS)

    Wittman, David M.; Benson, Bryant

    2018-06-01

    Weak lensing analyses use the image---the intensity field---of a distant galaxy to infer gravitational effects on that line of sight. What if we analyze the velocity field instead? We show that lensing imprints much more information onto a highly ordered velocity field, such as that of a rotating disk galaxy, than onto an intensity field. This is because shuffling intensity pixels yields a post-lensed image quite similar to an unlensed galaxy with a different orientation, a problem known as "shape noise." We show that velocity field analysis can eliminate shape noise and yield much more precise lensing constraints. Furthermore, convergence as well as shear can be constrained using the same target, and there is no need to assume the weak lensing limit of small convergence. We present Fisher matrix forecasts of the precision achievable with this method. Velocity field observations are expensive, so we derive guidelines for choosing suitable targets by exploring how precision varies with source parameters such as inclination angle and redshift. Finally, we present simulations that support our Fisher matrix forecasts.

  11. Bottom quark anti-quark production and mixing in proton anti-proton collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Zhaoou

    2003-03-01

    Studies of bottom quark-antiquark production in proton-antiproton collisions play an important role in testing perturbative QCD. Measuring the mixing parameter of B mesons imposes constraints on the quark mixing (CKM) matrix and enhances the understanding of the Standard Model. Multi-GeV p$$\bar{p}$$ colliders produce a significant number of b$$\bar{b}$$ pairs and thus enable studies in both of these fields. This thesis presents results of the b$$\bar{b}$$ production cross section from p$$\bar{p}$$ collisions at √s = 1.8 TeV and the time-integrated average B$$\bar{B}$$ mixing parameter ($$\bar{χ}$$) using high-mass dimuon data collected by CDF during its Run IB.

  12. Shock Formation and Energy Dissipation of Slow Magnetosonic Waves in Coronal Plumes

    NASA Technical Reports Server (NTRS)

    Cuntz, M.; Suess, S. T.

    2003-01-01

    We study the shock formation and energy dissipation of slow magnetosonic waves in coronal plumes. The wave parameters and the spreading function of the plumes, as well as the base magnetic field strength, are given by empirical constraints, mostly from SOHO/UVCS. Our models show that shock formation occurs at low coronal heights, i.e., within 1.3 Rsun, depending on the model parameters. In addition, following analytical estimates, we show that the scale height of energy dissipation by the shocks ranges between 0.15 and 0.45 Rsun. This implies that shock heating by slow magnetosonic waves is relevant at most heights, even though this type of wave is apparently not the sole operating energy supply mechanism.

  13. A proposed experimental search for chameleons using asymmetric parallel plates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrage, Clare; Copeland, Edmund J.; Stevenson, James A., E-mail: Clare.Burrage@nottingham.ac.uk, E-mail: ed.copeland@nottingham.ac.uk, E-mail: james.stevenson@nottingham.ac.uk

    2016-08-01

    Light scalar fields coupled to matter are a common consequence of theories of dark energy and attempts to solve the cosmological constant problem. The chameleon screening mechanism is commonly invoked in order to suppress the fifth forces mediated by these scalars, sufficiently to avoid current experimental constraints, without fine tuning. The force is suppressed dynamically by allowing the mass of the scalar to vary with the local density. Recently it has been shown that near future cold atoms experiments using atom-interferometry have the ability to access a large proportion of the chameleon parameter space. In this work we demonstrate how experiments utilising asymmetric parallel plates can push deeper into the remaining parameter space available to the chameleon.

  14. Delayed response and biosonar perception explain movement coordination in trawling bats.

    PubMed

    Giuggioli, Luca; McKetterick, Thomas J; Holderied, Marc

    2015-03-01

    Animal coordinated movement interactions are commonly explained by assuming unspecified social forces of attraction, repulsion and alignment with parameters drawn from observed movement data. Here we propose and test a biologically realistic and quantifiable biosonar movement interaction mechanism for echolocating bats based on spatial perceptual bias, i.e. actual sound field, a reaction delay, and observed motor constraints in speed and acceleration. We found that foraging pairs of bats flying over a water surface swapped leader-follower roles and performed chases or coordinated manoeuvres by copying the heading a nearby individual has had up to 500 ms earlier. Our proposed mechanism based on the interplay between sensory-motor constraints and delayed alignment was able to recreate the observed spatial actor-reactor patterns. Remarkably, when we varied model parameters (response delay, hearing threshold and echolocation directionality) beyond those observed in nature, the spatio-temporal interaction patterns created by the model only recreated the observed interactions, i.e. chases, and best matched the observed spatial patterns for just those response delays, hearing thresholds and echolocation directionalities found to be used by bats. This supports the validity of our sensory ecology approach of movement coordination, where interacting bats localise each other by active echolocation rather than eavesdropping.
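
    The delayed-alignment mechanism can be illustrated with a toy simulation in which a follower copies the heading its partner had 500 ms earlier, subject to a turn-rate cap standing in for the observed motor constraints; all numbers below are illustrative, and this is not the fitted model of the study.

    ```python
    # Toy sketch of delayed heading alignment with a motor (turn-rate) constraint.
    import numpy as np

    dt, steps, delay = 0.02, 1000, 25        # 25 steps * 20 ms = 500 ms delay
    max_turn = 4.0                           # rad/s turn-rate cap (illustrative)

    leader = 0.5 * np.sin(0.6 * np.arange(steps) * dt)   # wandering leader heading
    follower = np.zeros(steps)

    for t in range(1, steps):
        target = leader[max(t - delay, 0)]               # copy the delayed heading
        turn = np.clip(target - follower[t - 1], -max_turn * dt, max_turn * dt)
        follower[t] = follower[t - 1] + turn

    # the follower's heading should best match the leader's at a lag near `delay`
    lags = [np.corrcoef(leader[:steps - s], follower[s:])[0, 1] for s in range(1, 100)]
    best_lag = (np.argmax(lags) + 1) * dt * 1000
    print(f"follower tracks the leader's heading with a lag of ~{best_lag:.0f} ms")
    ```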

  15. Conceptual design study of the moderate size superconducting spherical tokamak power plant

    NASA Astrophysics Data System (ADS)

    Gi, Keii; Ono, Yasushi; Nakamura, Makoto; Someya, Youji; Utoh, Hiroyasu; Tobita, Kenji; Ono, Masayuki

    2015-06-01

    A new conceptual design for a superconducting spherical tokamak (ST) power plant was proposed as an attractive choice for tokamak fusion reactors. We reassessed the feasibility of the ST as a power plant using the conservative reactor-engineering constraints often applied to conventional tokamak reactor design. An extensive parameter scan covering the full range of feasible superconducting ST reactors was completed, and five constraints, including plasma magnetohydrodynamic (MHD) and confinement parameters already achieved in ST experiments, were established for choosing the optimum operation point. Based on a comparison with estimated future energy costs of electricity (COEs) in Japan, cost-effective ST reactors can be designed if their COEs are smaller than 120 mills kW-1 h-1 (2013). We selected the optimized design point, A = 2.0 and Rp = 5.4 m, after considering the maintenance scheme and TF ripple. A self-consistent free-boundary MHD equilibrium and poloidal field coil configuration of the ST reactor were designed by modifying the neutral beam injection system and plasma profiles. The MHD stability of the equilibrium was analysed, and a ramp-up scenario was considered to ensure the viability of the new ST design. The optimized moderate-size ST power plant conceptual design achieves realistic plasma and fusion engineering parameters while keeping its economic competitiveness against existing energy sources in Japan.

  16. Constraints on the Energy Content of the Universe from a Combination of Galaxy Cluster Observables

    NASA Technical Reports Server (NTRS)

    Molnar, Sandor M.; Haiman, Zoltan; Birkinshaw, Mark; Mushotzky, Richard F.

    2003-01-01

    We demonstrate that constraints on cosmological parameters from the distribution of clusters as a function of redshift (dN/dz) are complementary to accurate angular diameter distance (D(sub A)) measurements to clusters, and that their combination significantly tightens constraints on the energy density content of the Universe. The number counts can be obtained from X-ray and/or SZ (Sunyaev-Zel'dovich effect) surveys, and the angular diameter distances can be determined from deep observations of the intra-cluster gas using its thermal bremsstrahlung X-ray emission and the SZ effect. We combine constraints from simulated cluster number counts expected from a 12 deg(sup 2) SZ cluster survey with constraints from simulated angular diameter distance measurements based on the X-ray/SZ method, assuming a statistical accuracy of 10% in the angular diameter distance determination of 100 clusters with redshifts less than 1.5. We find that Omega(sub m) can be determined within about 25%, Omega(sub lambda) within 20%, and w within 16%. We show that combined dN/dz + D(sub A) constraints can be used to constrain the different energy densities in the Universe even in the presence of a few percent redshift-dependent systematic error in D(sub A). We also address the question of how best to select clusters of galaxies for accurate diameter distance determinations. We show that the joint dN/dz + D(sub A) constraints on cosmological parameters for a fixed target accuracy in the energy density parameters are optimized by selecting clusters with redshift upper cut-offs in the approximate range of 0.55 to 1.
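
    The complementarity argument can be made concrete by adding Fisher matrices for the two probes and comparing marginalized errors; the matrices below are hypothetical stand-ins for the dN/dz and D_A information, not the simulated survey results.

    ```python
    # Sketch: combining two (hypothetical) Fisher matrices for (Omega_m, Omega_L, w)
    # and comparing the marginalized 1-sigma errors before and after combination.
    import numpy as np

    F_counts = np.array([[400., -150., -60.],
                         [-150., 120.,  30.],
                         [ -60.,  30.,  90.]])      # stand-in for dN/dz information
    F_dist   = np.array([[250.,  180.,  40.],
                         [180.,  300.,  20.],
                         [ 40.,   20., 110.]])      # stand-in for D_A information

    def marginalized_errors(F):
        return np.sqrt(np.diag(np.linalg.inv(F)))

    for name, F in [("dN/dz only", F_counts), ("D_A only", F_dist),
                    ("combined", F_counts + F_dist)]:
        print(name, "sigma(Omega_m, Omega_L, w) =",
              np.round(marginalized_errors(F), 3))
    ```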

  17. Image-optimized Coronal Magnetic Field Models

    NASA Astrophysics Data System (ADS)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.

    2017-08-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and to the effect of errors in the localization of constraints on the outcome of the optimization. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  18. Image-Optimized Coronal Magnetic Field Models

    NASA Technical Reports Server (NTRS)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.

    2017-01-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work we presented early tests of the method which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and the effect of errors in the localization of constraints on the outcome of the optimization. We find that substantial improvement in the model field can be achieved with this type of constraint, even when magnetic features in the images are located outside of the image plane.

  19. Dark energy fingerprints in the nonminimal Wu-Yang wormhole structure

    NASA Astrophysics Data System (ADS)

    Balakin, Alexander B.; Zayats, Alexei E.

    2014-08-01

    We discuss new exact solutions to nonminimally extended Einstein-Yang-Mills equations describing spherically symmetric static wormholes supported by the gauge field of the Wu-Yang type in a dark energy environment. We focus on the analysis of three types of exact solutions to the gravitational field equations. Solutions of the first type relate to the model, in which the dark energy is anisotropic; i.e., the radial and tangential pressures do not coincide. Solutions of the second type correspond to the isotropic pressure tensor; in particular, we discuss the exact solution, for which the dark energy is characterized by the equation of state for a string gas. Solutions of the third type describe the dark energy model with constant pressure and energy density. For the solutions of the third type, we consider in detail the problem of horizons and find constraints for the parameters of nonminimal coupling and for the constitutive parameters of the dark energy equation of state, which guarantee that the nonminimal wormholes are traversable.

  20. The bias of the log power spectrum for discrete surveys

    NASA Astrophysics Data System (ADS)

    Repp, Andrew; Szapudi, István

    2018-03-01

    A primary goal of galaxy surveys is to tighten constraints on cosmological parameters, and the power spectrum P(k) is the standard means of doing so. However, at translinear scales P(k) is blind to much of these surveys' information - information which the log density power spectrum recovers. For discrete fields (such as the galaxy density), A* denotes the statistic analogous to the log density: A* is a `sufficient statistic' in that its power spectrum (and mean) capture virtually all of a discrete survey's information. However, the power spectrum of A* is biased with respect to the corresponding log spectrum for continuous fields, and to use P_{A^*}(k) to constrain the values of cosmological parameters, we require some means of predicting this bias. Here, we present a prescription for doing so; for Euclid-like surveys (with cubical cells 16h-1 Mpc across) our bias prescription's error is less than 3 per cent. This prediction will facilitate optimal utilization of the information in future galaxy surveys.

  1. Structure simulation with calculated NMR parameters - integrating COSMOS into the CCPN framework.

    PubMed

    Schneider, Olaf; Fogh, Rasmus H; Sternberg, Ulrich; Klenin, Konstantin; Kondov, Ivan

    2012-01-01

    The Collaborative Computing Project for NMR (CCPN) has built a software framework consisting of the CCPN data model (with APIs) for NMR-related data, the CcpNmr Analysis program, and additional tools such as CcpNmr FormatConverter. The open architecture allows the integration of external software to extend the abilities of the CCPN framework with additional calculation methods. Recently, we carried out the first steps towards integrating our software, Computer Simulation of Molecular Structures (COSMOS), into the CCPN framework. The COSMOS-NMR force field unites quantum chemical routines for the calculation of molecular properties with a molecular mechanics force field yielding relative molecular energies. COSMOS-NMR allows NMR parameters to be introduced as constraints into molecular mechanics calculations. The resulting infrastructure will be made available to the NMR community. As a first application, we tested the evaluation of calculated protein structures using COSMOS-derived 13C Cα and Cβ chemical shifts. In this paper we give an overview of the methodology and a roadmap for future developments and applications.

  2. The optimization of wireless power transmission: design and realization.

    PubMed

    Jia, Zhiwei; Yan, Guozheng; Liu, Hua; Wang, Zhiwu; Jiang, Pingping; Shi, Yu

    2012-09-01

    A wireless power transmission system is regarded as a practical way of solving power-shortage problems in multifunctional active capsule endoscopes. The uniformity of magnetic flux density, frequency stability, and orientation stability are used to evaluate power transmission stability, taking size and safety constraints into consideration. Magnetic field safety and temperature rise are also considered. Test benches are designed to measure the relevant parameters. Finally, a mathematical programming model incorporating these constraints is proposed to improve transmission efficiency. To verify the feasibility of the proposed method, various systems for a wireless active capsule endoscope are designed and evaluated. The optimal power transmission system can continuously supply at least 500 mW of power with a transmission efficiency of 4.08%. The example validates the feasibility of the proposed method, and the introduction of novel designs enables further improvement. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Anthropic versus cosmological solutions to the coincidence problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barreira, A.; Avelino, P. P.; Departamento de Fisica da Faculdade de Ciencias da Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto

    2011-05-15

    In this paper, we investigate possible solutions to the coincidence problem in flat phantom dark-energy models with a constant dark-energy equation of state and quintessence models with a linear scalar field potential. These models are representative of a broader class of cosmological scenarios in which the universe has a finite lifetime. We show that, in the absence of anthropic constraints, including a prior probability for the models inversely proportional to the total lifetime of the universe excludes models very close to the {Lambda} cold dark matter model. This relates a cosmological solution to the coincidence problem with a dynamical dark-energy component having an equation-of-state parameter not too close to -1 at the present time. We further show that anthropic constraints, if they are sufficiently stringent, may solve the coincidence problem without the need for dynamical dark energy.

  4. The detection of planetary systems from Space Station - A star observation strategy

    NASA Technical Reports Server (NTRS)

    Mascy, Alfred C.; Nishioka, Ken; Jorgensen, Helen; Swenson, Byron L.

    1987-01-01

    A 10-20-yr star-observation program for the Space Station Astrometric Telescope Facility (ATF) is proposed and evaluated by means of computer simulations. The primary aim of the program is to detect stars with planetary systems by precise determination of their motion relative to reference stars. The designs proposed for the ATF are described and illustrated; the basic parameters of the 127 stars selected for the program are listed in a table; spacecraft and science constraints, telescope slewing rates, and the possibility of limiting the program sample to stars near the Galactic equator are discussed; and the effects of these constraints are investigated by simulating 1 yr of ATF operation. Viewing all sky regions, the ATF would have 81-percent active viewing time, observing each star about 200 times (56 h) per yr; only small decrements in this performance would result from limiting the viewing field.

  5. Cosmological constraints from the convergence 1-point probability distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus

    2017-06-29

    Here, we examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm–σ8 plane from the convergence PDF with 188 arcmin² pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2–3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.

  6. Cosmological constraints from the convergence 1-point probability distribution

    NASA Astrophysics Data System (ADS)

    Patton, Kenneth; Blazek, Jonathan; Honscheid, Klaus; Huff, Eric; Melchior, Peter; Ross, Ashley J.; Suchyta, Eric

    2017-11-01

    We examine the cosmological information available from the 1-point probability density function (PDF) of the weak-lensing convergence field, utilizing fast L-PICOLA simulations and a Fisher analysis. We find competitive constraints in the Ωm-σ8 plane from the convergence PDF with 188 arcmin2 pixels compared to the cosmic shear power spectrum with an equivalent number of modes (ℓ < 886). The convergence PDF also partially breaks the degeneracy cosmic shear exhibits in that parameter space. A joint analysis of the convergence PDF and shear 2-point function also reduces the impact of shape measurement systematics, to which the PDF is less susceptible, and improves the total figure of merit by a factor of 2-3, depending on the level of systematics. Finally, we present a correction factor necessary for calculating the unbiased Fisher information from finite differences using a limited number of cosmological simulations.
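
    A minimal sketch of the Fisher-forecast ingredient used here, with derivatives of the binned convergence PDF taken by centered finite differences between simulations run at shifted parameter values; the inputs are toy placeholders, and the correction factor derived in the paper is not reproduced.

    ```python
    # Sketch: Fisher matrix from finite-difference derivatives of a binned statistic.
    import numpy as np

    def fisher_from_sims(pdf_plus, pdf_minus, dtheta, cov):
        """pdf_plus, pdf_minus: (n_params, n_bins) mean PDFs from simulations with
        each parameter shifted by +/- dtheta; cov: (n_bins, n_bins) bin covariance."""
        derivs = (pdf_plus - pdf_minus) / (2.0 * np.asarray(dtheta)[:, None])
        icov = np.linalg.inv(cov)
        return derivs @ icov @ derivs.T   # F_ij = sum_ab dP_a/dth_i C^-1_ab dP_b/dth_j

    # toy usage: 2 parameters, 5 PDF bins
    rng = np.random.default_rng(0)
    F = fisher_from_sims(rng.random((2, 5)), rng.random((2, 5)),
                         dtheta=[0.01, 0.02], cov=np.eye(5) * 1e-4)
    print("marginalized errors:", np.sqrt(np.diag(np.linalg.inv(F))))
    ```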

  7. Performance optimization of an MHD generator with physical constraints

    NASA Technical Reports Server (NTRS)

    Pian, C. C. P.; Seikel, G. R.; Smith, J. M.

    1979-01-01

    A technique is described that optimizes the power output of a Faraday MHD generator operating under a prescribed set of electrical and magnetic constraints. The method does not rely on complicated numerical optimization techniques. Instead, the magnetic field and the electrical loading are adjusted at each streamwise location such that the resultant generator design operates at the most limiting of the cited stress levels. The simplicity of the procedure makes it ideal for optimizing generator designs for system analysis studies of power plants. The resulting locally optimum channel designs are, however, not necessarily the global optimum designs. The results of generator performance calculations are presented for a plant of approximately 2000 MWe. The difference between the maximum-power generator design and the optimal design that maximizes net MHD power is described. The sensitivity of the generator performance to the various operational parameters is also presented.

  8. Generation of density perturbations by inflation in scalar-tensor gravity theories

    NASA Astrophysics Data System (ADS)

    Seshadri, T. R.

    1992-02-01

    Density perturbations arising from quantum fluctuations in a Brans-Dicke field in the context of extended inflation have been studied. We use a model in which the Brans-Dicke parameter varies with time. We find that the density perturbations are large in magnitude and have a scale-invariant spectrum. The origin of these perturbations is discussed, and it is shown that they place further constraints on the model.

  9. General Methodology for Designing Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Condon, Gerald; Ocampo, Cesar; Mathur, Ravishankar; Morcos, Fady; Senent, Juan; Williams, Jacob; Davis, Elizabeth C.

    2012-01-01

    A methodology for designing spacecraft trajectories in any gravitational environment within the solar system has been developed. The methodology facilitates modeling and optimization for problems ranging from that of a single spacecraft orbiting a single celestial body to that of a mission involving multiple spacecraft and multiple propulsion systems operating in gravitational fields of multiple celestial bodies. The methodology consolidates almost all spacecraft trajectory design and optimization problems into a single conceptual framework requiring solution of either a system of nonlinear equations or a parameter-optimization problem with equality and/or inequality constraints.

  10. QCD unitarity constraints on Reggeon Field Theory

    NASA Astrophysics Data System (ADS)

    Kovner, Alex; Levin, Eugene; Lublinsky, Michael

    2016-08-01

    We point out that the s-channel unitarity of QCD imposes meaningful constraints on the possible form of the QCD Reggeon Field Theory. We show that neither the BFKL nor the JIMWLK nor Braun's Hamiltonian satisfies these constraints. In a toy case with zero transverse dimensions, we construct a model that satisfies the analogous constraint and show that at infinite energy it indeed tends to a "black disk limit", as opposed to the model with only a triple-Pomeron vertex that is routinely used as a toy model in the literature.

  11. Power spectrum and non-Gaussianities in anisotropic inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dey, Anindya; Kovetz, Ely D.; Paban, Sonia, E-mail: anindya@physics.utexas.edu, E-mail: elykovetz@gmail.com, E-mail: paban@physics.utexas.edu

    2014-06-01

    We study the planar regime of curvature perturbations for single field inflationary models in an axially symmetric Bianchi I background. In a theory with standard scalar field action, the power spectrum for such modes has a pole as the planarity parameter goes to zero. We show that constraints from back reaction lead to a strong lower bound on the planarity parameter for high-momentum planar modes and use this bound to calculate the signal-to-noise ratio of the anisotropic power spectrum in the CMB, which in turn places an upper bound on the Hubble scale during inflation allowed in our model. We find that non-Gaussianities for these planar modes are enhanced for the flattened triangle and the squeezed triangle configurations, but show that the estimated values of the f{sub NL} parameters remain well below the experimental bounds from the CMB for generic planar modes (other, more promising signatures are also discussed). For a standard action, f{sub NL} from the squeezed configuration turns out to be larger compared to that from the flattened triangle configuration in the planar regime. However, in a theory with higher derivative operators, non-Gaussianities from the flattened triangle can become larger than the squeezed configuration in a certain limit of the planarity parameter.

  12. Estimating kinetic mechanisms with prior knowledge I: Linear parameter constraints.

    PubMed

    Salari, Autoosa; Navarro, Marco A; Milescu, Mirela; Milescu, Lorin S

    2018-02-05

    To understand how ion channels and other proteins function at the molecular and cellular levels, one must decrypt their kinetic mechanisms. Sophisticated algorithms have been developed that can be used to extract kinetic parameters from a variety of experimental data types. However, formulating models that not only explain new data, but are also consistent with existing knowledge, remains a challenge. Here, we present a two-part study describing a mathematical and computational formalism that can be used to enforce prior knowledge into the model using constraints. In this first part, we focus on constraints that enforce explicit linear relationships involving rate constants or other model parameters. We develop a simple, linear algebra-based transformation that can be applied to enforce many types of model properties and assumptions, such as microscopic reversibility, allosteric gating, and equality and inequality parameter relationships. This transformation converts the set of linearly interdependent model parameters into a reduced set of independent parameters, which can be passed to an automated search engine for model optimization. In the companion article, we introduce a complementary method that can be used to enforce arbitrary parameter relationships and any constraints that quantify the behavior of the model under certain conditions. The procedures described in this study can, in principle, be coupled to any of the existing methods for solving molecular kinetics for ion channels or other proteins. These concepts can be used not only to enforce existing knowledge but also to formulate and test new hypotheses. © 2018 Salari et al.
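
    A minimal sketch of the general idea behind such a linear-algebra-based reduction (not the authors' code): linear equality constraints A·k = b on the parameters are eliminated by writing k = k0 + N·z, where N spans the null space of A, so an automated search engine only ever sees the free vector z. The specific constraint used below, forcing two rate parameters to be equal, is an illustrative example.

    ```python
    # Sketch: reduce linearly constrained parameters to an independent free set.
    import numpy as np
    from scipy.linalg import lstsq, null_space

    A = np.array([[1.0, -1.0, 0.0]])   # example constraint: k[0] == k[1] (3 params)
    b = np.array([0.0])

    k_particular, *_ = lstsq(A, b)     # any particular solution of A k = b
    N = null_space(A)                  # columns span the free directions

    def constrained_params(z):
        """Map free parameters z (length N.shape[1]) to a full parameter vector
        that satisfies A @ k = b exactly."""
        return k_particular + N @ z

    z = np.array([0.3, -1.2])          # what the optimizer would actually search over
    k = constrained_params(z)
    print("k =", k, " constraint residual:", A @ k - b)
    ```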

  13. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  14. Test Design and Speededness

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2011-01-01

    A critical component of test speededness is the distribution of the test taker's total time on the test. A simple set of constraints on the item parameters in the lognormal model for response times is derived that can be used to control the distribution when assembling a new test form. As the constraints are linear in the item parameters, they can…
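
    To see why linear conditions on item parameters can control the total-time distribution, the sketch below computes the mean and variance of total test time under a commonly used lognormal response-time parameterization, ln T_i ~ Normal(beta_i − tau, 1/alpha_i²); this particular parameterization and all numbers are assumptions for illustration, not taken from the article.

    ```python
    # Sketch: total-time moments of a test form under a lognormal response-time model.
    import numpy as np

    def total_time_moments(alpha, beta, tau=0.0):
        # lognormal item times: ln T_i ~ N(beta_i - tau, 1/alpha_i^2)
        mean_i = np.exp(beta - tau + 0.5 / alpha**2)
        var_i = (np.exp(1.0 / alpha**2) - 1.0) * mean_i**2
        return mean_i.sum(), var_i.sum()       # items assumed independent given tau

    rng = np.random.default_rng(1)
    alpha = rng.uniform(1.5, 2.5, 30)          # illustrative 30-item form
    beta = rng.uniform(3.5, 4.5, 30)           # time intensities (log-seconds)
    mean_T, var_T = total_time_moments(alpha, beta)
    print(f"expected total time {mean_T/60:.1f} min, sd {np.sqrt(var_T)/60:.1f} min")
    ```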

  15. LHC constraints on color octet scalars

    NASA Astrophysics Data System (ADS)

    Hayreter, Alper; Valencia, German

    2017-08-01

    We extract constraints on the parameter space of the Manohar and Wise model by comparing the cross sections for dijet, top-pair, dijet-pair, t t̄ t t̄, and b b̄ b b̄ production at the LHC with the strongest available experimental limits from ATLAS or CMS at 8 or 13 TeV. Overall we find mass limits around 1 TeV in the most sensitive regions of parameter space, and lower elsewhere. This is at odds with generic limits for color octet scalars often quoted in the literature, where much larger production cross sections are assumed. The constraints that can be placed on coupling constants are typically weaker than those from existing theoretical considerations, with the exception of the parameter η_D.

  16. Parameter and prediction uncertainty in an optimized terrestrial carbon cycle model: Effects of constraining variables and data record length

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.

    2011-03-01

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  17. Cosmological constraints from Galaxy Clusters in 2500 square-degree SPT-SZ survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haan, T. de; Benson, B. A.; Bleem, L. E.

    We present cosmological parameter constraints obtained from galaxy clusters identified by their Sunyaev-Zel'dovich effect signature in the 2500 square-degree South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey. We consider the 377 cluster candidates identified at z > 0.25 with a detection significance greater than five, corresponding to the 95% purity threshold for the survey. We compute constraints on cosmological models using the measured cluster abundance as a function of mass and redshift. We include additional constraints from multi-wavelength observations, including Chandra X-ray data for 82 clusters and a weak-lensing-based prior on the normalization of the mass-observable scaling relations. Assuming a spatially flat Lambda CDM cosmology, we combine the cluster data with a prior on H{sub 0} and find sigma(8) = 0.784 ± 0.039 and Omega(m) = 0.289 ± 0.042, with the parameter combination sigma(8)(Omega(m)/0.27){sup 0.3} = 0.797 ± 0.031. These results are in good agreement with constraints from the cosmic microwave background (CMB) from SPT, WMAP, and Planck, as well as with constraints from other cluster data sets. We also consider several extensions of Lambda CDM, including models in which the equation of state of dark energy w, the species-summed neutrino mass, and/or the effective number of relativistic species (N{sub eff}) are free parameters. When combined with constraints from the Planck CMB, H{sub 0}, baryon acoustic oscillations, and SNe, adding the SPT cluster data improves the w constraint by 14%, to w = -1.023 ± 0.042.

  18. Magnetic Doppler imaging of 53 Camelopardalis in all four Stokes parameters

    NASA Astrophysics Data System (ADS)

    Kochukhov, O.; Bagnulo, S.; Wade, G. A.; Sangalli, L.; Piskunov, N.; Landstreet, J. D.; Petit, P.; Sigut, T. A. A.

    2004-02-01

    We present the first investigation of the structure of the stellar surface magnetic field using line profiles in all four Stokes parameters. We extract the information about the magnetic field geometry and abundance distributions of the chemically peculiar star 53 Cam by modelling time-series of high-resolution spectropolarimetric observations with the help of a new magnetic Doppler imaging code. This combination of the unique four Stokes parameter data and state-of-the-art magnetic imaging technique makes it possible to infer the stellar magnetic field topology directly from the rotational variability of the Stokes spectra. In the magnetic imaging of 53 Cam we discard the traditional multipolar assumptions about the structure of magnetic fields in Ap stars and explore the stellar magnetic topology without introducing any global a priori constraints on the field structure. The complex magnetic model of 53 Cam derived with our magnetic Doppler imaging method achieves a good fit to the observed intensity, circular and linear polarization profiles of strong magnetically sensitive Fe II spectral lines. Such an agreement between observations and model predictions was not possible with any earlier multipolar magnetic models, based on modelling Stokes I spectra and fitting surface averaged magnetic observables (e.g., longitudinal field, magnetic field modulus, etc.). Furthermore, we demonstrate that even the direct inversion of the four Stokes parameters of 53 Cam assuming a low-order multipolar magnetic geometry is incapable of achieving an adequate fit to our spectropolarimetric observations. Thus, as a main result of our investigation, we discover that the magnetic field topology of 53 Cam is considerably more complex than any low-order multipolar expansion, raising a general question about the validity of the multipolar assumption in the studies of magnetic field structures of Ap stars. In addition to the analysis of the magnetic field of 53 Cam, we reconstruct surface abundance distributions of Si, Ca, Ti, Fe and Nd. These abundance maps confirm results of the previous studies of 53 Cam, in particular dramatic antiphase variation of Ca and Ti abundances. Based on observations obtained with the Bernard Lyot telescope of the Pic du Midi Observatory and Isaac Newton Telescope of the La Palma Observatory.

  19. Estimating Crustal Properties Directly from Satellite Tracking Data by Using a Topography-based Constraint

    NASA Astrophysics Data System (ADS)

    Goossens, S. J.; Sabaka, T. J.; Genova, A.; Mazarico, E. M.; Nicholas, J. B.; Neumann, G. A.; Lemoine, F. G.

    2017-12-01

    The crust of a terrestrial planet is formed by differentiation processes in its early history, followed by magmatic evolution of the planetary surface. It is further modified through impact processes. Knowledge of the crustal structure can thus place constraints on the planet's formation and evolution. In particular, the average bulk density of the crust is a fundamental parameter in geophysical studies, such as the determination of crustal thickness, studies of the mechanisms of topography support, and the planet's thermo-chemical evolution. Yet even with in-situ samples available, the crustal density is difficult to determine unambiguously, as exemplified by the results for the Gravity Recovery and Interior Laboratory (GRAIL) mission, which found an average crustal density for the Moon that was lower than generally assumed. The GRAIL results were possible owing to the combination of its high-resolution gravity and high-resolution topography obtained by the Lunar Orbiter Laser Altimeter (LOLA) onboard the Lunar Reconnaissance Orbiter (LRO), and high correlations between the two datasets. The crustal density can be determined by its contribution to the gravity field of a planet, but at long wavelengths flexure effects can dominate. On the other hand, short-wavelength gravity anomalies are difficult to measure, and either not determined well enough (other than at the Moon), or their power is suppressed by the standard `Kaula' regularization constraint applied during inversion of the gravity field from satellite tracking data. We introduce a new constraint that has infinite variance in one direction, called x_a. For constraint damping factors that go to infinity, it can be shown that the solution x becomes equal to a scale factor times x_a. This scale factor is completely determined by the data, and we call our constraint rank-minus-1 (RM1). If we choose x_a to be topography-induced gravity, then we can estimate the average bulk crustal density directly from the data (assuming uncompensated topography). We validate our constraint with pre-GRAIL lunar data, showing that we obtain the same bulk density from data of much lower resolution than GRAIL's. We will present the results of our new methodology applied to the case of Mars. We will discuss the results, namely an average crustal density lower than generally assumed.
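
    A toy numerical sketch of the rank-minus-1 behaviour described above (not the actual gravity-inversion code): damping is applied only orthogonally to a chosen direction x_a, and as the damping factor grows the solution tends to a scale factor times x_a, with the scale factor fixed entirely by the data.

    ```python
    # Toy sketch: damped least squares with no damping along the direction x_a.
    import numpy as np

    rng = np.random.default_rng(2)
    G = rng.normal(size=(50, 4))                       # toy design matrix
    x_a = np.array([1.0, 0.5, -0.3, 2.0])              # constraint direction (illustrative)
    d = G @ (0.8 * x_a) + 0.05 * rng.normal(size=50)   # synthetic data, true scale 0.8

    P = np.eye(4) - np.outer(x_a, x_a) / (x_a @ x_a)   # projector orthogonal to x_a

    def damped_solution(lam):
        # minimize ||G x - d||^2 + lam^2 * ||P x||^2   (x_a direction left free)
        return np.linalg.solve(G.T @ G + lam**2 * P, G.T @ d)

    s_limit = (x_a @ G.T @ d) / (x_a @ G.T @ G @ x_a)  # data-determined scale factor
    for lam in (1.0, 1e2, 1e4):
        x = damped_solution(lam)
        print(f"lambda = {lam:>7.0f}: x[0]/x_a[0] = {x[0] / x_a[0]:.4f}")
    print(f"limiting scale factor s = {s_limit:.4f} (true value 0.8)")
    ```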

  20. Post-Newtonian parameter γ in generalized non-local gravity

    NASA Astrophysics Data System (ADS)

    Zhang, Xue; Wu, YaBo; Yang, WeiQiang; Zhang, ChengYuan; Chen, BoHai; Zhang, Nan

    2017-10-01

    We investigate the post-Newtonian parameter γ and derive its formalism in generalized non-local (GNL) gravity, the modified theory of general relativity (GR) obtained by adding a term m^{2n-2} R □^{-n} R to the Einstein-Hilbert action. Concretely, by parametrizing the generalized non-local action, in which gravity is described by a series of dynamical scalar fields ϕ_i in addition to the metric tensor g_{μν}, the post-Newtonian limit is computed, and the effective gravitational constant as well as the post-Newtonian parameters are obtained directly from the generalized non-local gravity. Moreover, by discussing the values of the parametrized post-Newtonian parameter γ, we can compare our expressions and results with those in Hohmann and Järv et al. (2016), as well as with current observational constraints on the value of γ in Will (2006). Hence, we draw restrictions on the nonminimal coupling terms F̅ around their background values.

  1. Hawking-Moss instanton in nonlinear massive gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Ying-li; Saito, Ryo; Sasaki, Misao, E-mail: yingli@yukawa.kyoto-u.ac.jp, E-mail: rsaito@yukawa.kyoto-u.ac.jp, E-mail: misao@yukawa.kyoto-u.ac.jp

    2013-02-01

    As a first step toward understanding the landscape of vacua in a theory of non-linear massive gravity, we consider a landscape of a single scalar field and study tunneling between a pair of adjacent vacua. We study the Hawking-Moss (HM) instanton that sits at a local maximum of the potential and evaluate the dependence of the tunneling rate on the parameters of the theory. It is found that, for the same physical HM Hubble parameter H{sub HM}, the corresponding tunneling rate can be either enhanced or suppressed compared to that in General Relativity (GR), depending on the values of the parameters α{sub 3} and α{sub 4} in the action (2.2). Furthermore, we find a constraint on the ratio of the physical Hubble parameter to the fiducial one, which constrains the form of the potential. This result is in sharp contrast to GR, where there is no bound on the minimum value of the potential.

  2. Field theory of hyperfluid

    NASA Astrophysics Data System (ADS)

    Ariki, Taketo

    2018-02-01

    A hyperfluid model is constructed on the basis of an action entirely free from external constraints, regarding the hyperfluid as a self-consistent classical field. Intrinsic hypermomentum is no longer a supplemental variable given by external constraints but arises purely from the diffeomorphism covariance of the dynamical field. The field-theoretic approach allows a natural classification of hyperfluids on the basis of their symmetry group and corresponding homogeneous space; scalar, spinor, vector, and tensor fluids are introduced as simple examples. Apart from phenomenological constraints, the theory predicts the hypermomentum exchange of the fluid via field-theoretic interactions of various classes: fluid–fluid interactions, minimal and non-minimal SU(n)-gauge couplings, and coupling with metric-affine gravity are all successfully formulated within the classical regime.

  3. Deformations of vector-scalar models

    NASA Astrophysics Data System (ADS)

    Barnich, Glenn; Boulanger, Nicolas; Henneaux, Marc; Julia, Bernard; Lekeu, Victor; Ranjbar, Arash

    2018-02-01

    Abelian vector fields non-minimally coupled to uncharged scalar fields arise in many contexts. We investigate here through algebraic methods their consistent deformations ("gaugings"), i.e., the deformations that preserve the number (but not necessarily the form or the algebra) of the gauge symmetries. Infinitesimal consistent deformations are given by the BRST cohomology classes at ghost number zero. We parametrize explicitly these classes in terms of various types of global symmetries and corresponding Noether currents through the characteristic cohomology related to antifields and equations of motion. The analysis applies to all ghost numbers and not just ghost number zero. We also provide a systematic discussion of the linear and quadratic constraints on these parameters that follow from higher-order consistency. Our work is relevant to the gaugings of extended supergravities.

  4. Three-dimensional self-adaptive grid method for complex flows

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Deiwert, George S.

    1988-01-01

    A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
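
    A one-dimensional sketch of the underlying redistribution idea, with minimum and maximum spacing playing the role of the grid control parameters; this is a simplified equidistribution stand-in, not the paper's three-dimensional variational spring-analogy scheme.

    ```python
    # Sketch: 1-D grid adaptation by equidistributing an arc-length-type weight,
    # with floor/ceiling bounds on the spacing as grid control parameters.
    import numpy as np

    def adapt_grid_1d(x, u, dx_min=1e-3, dx_max=0.2):
        w = np.sqrt(1.0 + np.gradient(u, x) ** 2)                 # solution-based weight
        cdf = np.concatenate(([0.0],
                              np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
        cdf /= cdf[-1]
        x_new = np.interp(np.linspace(0.0, 1.0, len(x)), cdf, x)  # equidistribute weight
        dx = np.clip(np.diff(x_new), dx_min, dx_max)              # enforce spacing bounds
        return x_new[0] + np.concatenate(([0.0], np.cumsum(dx)))

    x = np.linspace(0.0, 1.0, 81)
    u = np.tanh(40.0 * (x - 0.5))            # steep internal layer to resolve
    x_adapted = adapt_grid_1d(x, u)
    print("min/max spacing:", np.diff(x_adapted).min(), np.diff(x_adapted).max())
    ```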

  5. A lattice approach to spinorial quantum gravity

    NASA Technical Reports Server (NTRS)

    Renteln, Paul; Smolin, Lee

    1989-01-01

    A new lattice regularization of quantum general relativity based on Ashtekar's reformulation of Hamiltonian general relativity is presented. In this form, quantum states of the gravitational field are represented within the physical Hilbert space of a Kogut-Susskind lattice gauge theory. The gauge field of the theory is a complexified SU(2) connection which is the gravitational connection for left-handed spinor fields. The physical states of the gravitational field are those which are annihilated by additional constraints which correspond to the four constraints of general relativity. Lattice versions of these constraints are constructed. Those corresponding to the three-dimensional diffeomorphism generators move states associated with Wilson loops around on the lattice. The lattice Hamiltonian constraint has a simple form, and a correspondingly simple interpretation: it is an operator which cuts and joins Wilson loops at points of intersection.

  6. Efficient and stable exponential time differencing Runge-Kutta methods for phase field elastic bending energy models

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqiang; Ju, Lili; Du, Qiang

    2016-07-01

    The Willmore flow formulated by phase field dynamics based on the elastic bending energy model has been widely used to describe the shape transformation of biological lipid vesicles. In this paper, we develop and investigate some efficient and stable numerical methods for simulating the unconstrained phase field Willmore dynamics and the phase field Willmore dynamics with fixed volume and surface area constraints. The proposed methods can be high-order accurate and are completely explicit in nature, by combining exponential time differencing Runge-Kutta approximations for time integration with spectral discretizations for spatial operators on regular meshes. We also incorporate novel linear operator splitting techniques into the numerical schemes to improve the discrete energy stability. In order to avoid extra numerical instability brought by use of large penalty parameters in solving the constrained phase field Willmore dynamics problem, a modified augmented Lagrange multiplier approach is proposed and adopted. Various numerical experiments are performed to demonstrate accuracy and stability of the proposed methods.
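
    The flavour of the exponential time differencing approach can be conveyed with a first-order ETD step for a 1-D Allen-Cahn equation on a periodic domain with a Fourier spectral discretization; this is a minimal stand-in, under illustrative parameter values, for the higher-order ETD Runge-Kutta schemes with operator splitting developed in the paper.

    ```python
    # Sketch: first-order exponential time differencing (ETD1) with a Fourier
    # spectral discretization for u_t = eps^2 u_xx + u - u^3 (1-D Allen-Cahn).
    import numpy as np

    N, L, eps, h, nsteps = 256, 2 * np.pi, 0.05, 0.01, 2000
    x = np.linspace(0.0, L, N, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

    Lop = -eps**2 * k**2 + 1.0                 # symbol of the linear part
    E = np.exp(h * Lop)                        # exact linear propagator over one step
    phi1 = np.where(np.abs(Lop) > 1e-12, (E - 1.0) / Lop, h)   # (e^{hL}-1)/L, safe at L=0

    u = 0.1 * np.cos(3 * x) + 0.05 * np.random.default_rng(3).normal(size=N)
    for _ in range(nsteps):
        Nu = -u**3                             # nonlinear part, treated explicitly
        u = np.real(np.fft.ifft(E * np.fft.fft(u) + phi1 * np.fft.fft(Nu)))

    print("field range after relaxation:", u.min(), u.max())   # tends toward +/- 1
    ```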

  7. Variation of thermal parameters in two different color morphs of a diurnal poison toad, Melanophryniscus rubriventris (Anura: Bufonidae).

    PubMed

    Sanabria, Eduardo A; Vaira, Marcos; Quiroga, Lorena B; Akmentins, Mauricio S; Pereyra, Laura C

    2014-04-01

    We study the variation in thermal parameters in two contrasting populations of Yungas Redbelly Toads (Melanophryniscus rubriventris) with different discrete color phenotypes, comparing field body temperatures, critical thermal maxima and heating rates. We found significant differences in the field body temperatures of the different morphs. Temperatures were higher in toads with a greater extent of dorsal melanization. No variation was registered in operative temperatures between the study locations at the moment of capture and processing. The critical thermal maximum of toads was positively related to the extent of dorsal melanization. Furthermore, we found significant differences in heating rates between morphs, with individuals with a greater extent of dorsal melanization showing higher heating rates than toads with less dorsal melanization. The observed relationship between color pattern and thermal parameters may influence the activity patterns and body size of individuals. Body temperature is a modulator of physiological and behavioral functions in amphibians, influencing daily and seasonal activity, locomotor performance, digestion rate and growth rate. It is possible that some growth constraints arise from the relationship between color pattern and metabolism, allowing different morphs to attain similar sizes at different locations instead of forming body-size clines. Copyright © 2014. Published by Elsevier Ltd.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alves, Daniele S.M.; Fox, Patrick J.; Weiner, Neal J.

    In models where an additional SU(2)-doublet that does not have couplings to fermions participates in electroweak symmetry breaking, the properties of the Higgs boson are changed. At tree level, in the neighborhood of the SM-like range of parameter space, it is natural to have the coupling to vectors, cV, approximately constant, while the coupling to fermions, cf, is suppressed. This leads to enhanced VBF signals of gamma gamma while keeping other signals of Higgses approximately constant (such as WW* and ZZ*), and suppressing Higgs to tau tau. Sizable tree-level effects are often accompanied by light charged Higgs states, which lead to important constraints from b to s gamma and top to b H+, but also often to similarly sizable contributions to the inclusive h to gamma gamma signal from radiative effects. In the simplest model, this is described by a Type I 2HDM, and in supersymmetry is naturally realized with 'sister Higgs' fields. In such a scenario, additional light charged states can contribute further with fewer constraints from heavy flavor decays. With supersymmetry, Grand Unification motivates the inclusion of colored partner fields. These G-quarks may provide additional evidence for such a model.

  9. Unsupervised change detection of multispectral images based on spatial constraint chi-squared transform and Markov random field model

    NASA Astrophysics Data System (ADS)

    Shi, Aiye; Wang, Chao; Shen, Shaohong; Huang, Fengchen; Ma, Zhenli

    2016-10-01

    The chi-squared transform (CST) is a statistical method that describes the degree of difference between vectors. CST-based methods operate directly on information stored in the difference image and are simple, effective methods for detecting changes in remotely sensed images that have been registered and aligned. However, the technique does not take spatial information into consideration, which leads to much noise in the change detection result. An improved unsupervised change detection method is proposed based on a spatial-constraint CST (SCCST) in combination with a Markov random field (MRF) model. First, the mean and variance matrix of the difference image of the bitemporal images are estimated by an iterative trimming method. In each iteration, spatial information is injected to reduce scattered changed points (also known as "salt and pepper" noise). To determine the key parameter of the SCCST method, the confidence level, a pseudotraining dataset is constructed to estimate its optimal value. Then, the result of SCCST, used as an initial solution of change detection, is further improved by the MRF model. Experiments on simulated and real multitemporal, multispectral images indicate that the proposed method performs well on comprehensive indices compared with other methods.
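
    A minimal sketch of the chi-squared thresholding with iterative trimming described above (the spatial-constraint injection and the MRF refinement are omitted): under the no-change hypothesis, the per-pixel Mahalanobis statistic on the difference image follows a chi-squared distribution with degrees of freedom equal to the number of bands, so the chosen confidence level sets the change threshold.

    ```python
    # Sketch: chi-squared-transform change map with iterative trimming of the
    # mean/covariance estimates (spatial constraint and MRF stages not included).
    import numpy as np
    from scipy.stats import chi2

    def cst_change_map(img1, img2, confidence=0.99, n_iter=5):
        """img1, img2: co-registered arrays of shape (rows, cols, bands)."""
        diff = (img2 - img1).reshape(-1, img1.shape[-1]).astype(float)
        thresh = chi2.ppf(confidence, df=diff.shape[1])
        keep = np.ones(len(diff), dtype=bool)           # start from all pixels
        for _ in range(n_iter):
            mu = diff[keep].mean(axis=0)                # re-estimate from "no change"
            icov = np.linalg.inv(np.cov(diff[keep], rowvar=False))
            centered = diff - mu
            stat = np.einsum("ij,jk,ik->i", centered, icov, centered)
            keep = stat <= thresh                       # trim likely-changed pixels
        return (stat > thresh).reshape(img1.shape[:2])  # boolean change map
    ```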

  10. Sliding mode control based impact angle control guidance considering the seeker's field-of-view constraint.

    PubMed

    Wang, Xingliang; Zhang, Youan; Wu, Huali

    2016-03-01

    The problem of impact angle control guidance for a field-of-view constrained missile against non-maneuvering or maneuvering targets is solved by using sliding mode control theory. The existing impact angle control guidance laws with field-of-view constraint are only applicable against stationary targets, and most of them suffer from abrupt jumps in the guidance command due to the application of additional guidance mode switching logic. In this paper, the field-of-view constraint is handled without using any additional switching logic. In particular, a novel time-varying sliding surface is first designed to achieve zero miss distance and zero impact angle error without violating the field-of-view constraint during the sliding mode phase. Then a control integral barrier Lyapunov function is used to design the reaching law so that the sliding mode can be reached within finite time and the field-of-view constraint is not violated during the reaching phase as well. A nonlinear extended state observer is constructed to estimate the disturbance caused by unknown target maneuvers, and the undesirable chattering is alleviated effectively by using the estimation as a compensation term in the guidance law. The performance of the proposed guidance law is illustrated with simulations. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  11. How CMB and large-scale structure constrain chameleon interacting dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y., E-mail: boriero@physik.uni-bielefeld.de, E-mail: subinoy@iiap.res.in, E-mail: yvonne.y.wong@unsw.edu.au

    2015-07-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H{sub 0} tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H{sub 0} value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  12. Propagation of error from parameter constraints in quantitative MRI: Example application of multiple spin echo T2 mapping.

    PubMed

    Lankford, Christopher L; Does, Mark D

    2018-02-01

    Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. For example, the case of refocusing pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂2 for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
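
    A generic first-order propagation relation of the kind the abstract refers to (a sketch, not the paper's exact expression) can be written in LaTeX as follows, with T̂2 the parameter of interest and θ̂ the constrained nuisance estimate:

    % First-order error propagation: the MSE of the constrained fit combines its own
    % variance with the propagated variance and squared bias of the nuisance estimate.
    \mathrm{MSE}\big(\hat{T}_2\big) \approx
      \sigma^2_{\hat{T}_2 \mid \theta}
      + \left(\frac{\partial T_2}{\partial \theta}\right)^{2}
        \left[\operatorname{Var}\big(\hat{\theta}\big) + \operatorname{Bias}^{2}\big(\hat{\theta}\big)\right]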

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, Wilfred M; King, Anthony Wayne; Dragoni, Danilo

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  14. Possible signature of the magnetic fields related to quasi-periodic oscillations observed in microquasars

    NASA Astrophysics Data System (ADS)

    Kološ, Martin; Tursunov, Arman; Stuchlík, Zdeněk

    2017-12-01

    The study of quasi-periodic oscillations (QPOs) of X-ray flux observed in stellar-mass black hole binaries can provide a powerful tool for testing the phenomena occurring in the strong gravity regime. Magnetized versions of the standard geodesic models of QPOs can explain the observationally fixed data from the three microquasars. We perform a successful fitting of the HF QPOs observed for three microquasars, GRS 1915+105, XTE 1550-564 and GRO 1655-40, containing black holes, for magnetized versions of both epicyclic resonance and relativistic precession models and discuss the corresponding constraints of parameters of the model, which are the mass and spin of the black hole and the parameter related to the external magnetic field. The estimated magnetic field intensity strongly depends on the type of objects giving the observed HF QPOs. It can be as small as 10^{-5} G if electron oscillatory motion is relevant, but it can be many orders of magnitude higher for protons or ions (0.02-1 G), or even higher for charged dust or such exotic objects as lightning balls, etc. On the other hand, if we know by any means the magnetic field intensity, our model implies a strong limit on the character of the oscillating matter, namely its specific charge.

  15. Some problems of control of dynamical conditions of technological vibrating machines

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N. K.; Lapshin, V. L.; Eliseev, A. V.

    2017-10-01

    The possibility of controlling the dynamical condition of shakers designed for vibration treatment of parts interacting with granular media is discussed. The aim of this article is to develop the methodological basis for creating mathematical models of shake tables and to develop principles for forming vibrational fields, estimating their parameters, and controlling the structure of vibration fields. Approaches to building mathematical models that take into account unilateral constraints and the relationships between elements and the vibrating surface are developed. Methods for constructing mathematical models of linear mechanical oscillation systems are used, considering small oscillations about the position of static equilibrium. An original method of correcting vibration fields by introducing additional ties between the oscillating system and the structure is proposed. The additional ties are implemented in the form of a mass-inertial device that changes the inertial parameters of the working body of the vibration table by moving mass-inertial elements. A concept for monitoring the dynamic state of the vibration table based on original measuring devices is proposed, and possible changes in dynamic properties are estimated. The article is of interest for specialists in the field of vibration technology machines and equipment.

  16. Rapid mapping of compound eye visual sampling parameters with FACETS, a highly automated wide-field goniometer.

    PubMed

    Douglass, John K; Wehling, Martin F

    2016-12-01

    A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods. Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution.

  17. Cosmology constraints from shear peak statistics in Dark Energy Survey Science Verification data

    NASA Astrophysics Data System (ADS)

    Kacprzak, T.; Kirk, D.; Friedrich, O.; Amara, A.; Refregier, A.; Marian, L.; Dietrich, J. P.; Suchyta, E.; Aleksić, J.; Bacon, D.; Becker, M. R.; Bonnett, C.; Bridle, S. L.; Chang, C.; Eifler, T. F.; Hartley, W. G.; Huff, E. M.; Krause, E.; MacCrann, N.; Melchior, P.; Nicola, A.; Samuroff, S.; Sheldon, E.; Troxel, M. A.; Weller, J.; Zuntz, J.; Abbott, T. M. C.; Abdalla, F. B.; Armstrong, R.; Benoit-Lévy, A.; Bernstein, G. M.; Bernstein, R. A.; Bertin, E.; Brooks, D.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Castander, F. J.; Crocce, M.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Evrard, A. E.; Neto, A. Fausti; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Jain, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Miller, C. J.; Miquel, R.; Mohr, J. J.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Rykoff, E. S.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.; Zhang, Y.; DES Collaboration

    2016-12-01

    Shear peak statistics has gained a lot of attention recently as a practical alternative to the two-point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg2 field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the signal-to-noise range 0 < S/N < 4; peaks with S/N > 4 would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two-point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. We discuss prospects for future peak statistics analysis with upcoming DES data.
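
    As a minimal sketch of the peak-counting step (not the DES pipeline), the following finds local maxima of a signal-to-noise aperture-mass map and histograms them in S/N bins; the noise-only map and the bin edges are placeholders.

    import numpy as np
    from scipy.ndimage import maximum_filter

    def peak_counts(snr_map, bin_edges):
        """Count local maxima of an S/N map in the given S/N bins."""
        # A pixel is a peak if it equals the maximum of its 3x3 neighbourhood.
        is_peak = snr_map == maximum_filter(snr_map, size=3)
        counts, _ = np.histogram(snr_map[is_peak], bins=bin_edges)
        return counts

    rng = np.random.default_rng(1)
    snr_map = rng.normal(0.0, 1.0, (512, 512))      # placeholder noise-only map
    bins = np.arange(0.0, 4.5, 0.5)                 # bins spanning 0 < S/N < 4
    print(peak_counts(snr_map, bins))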

  18. Hořava Gravity in the Effective Field Theory formalism: From cosmology to observational constraints

    NASA Astrophysics Data System (ADS)

    Frusciante, Noemi; Raveri, Marco; Vernieri, Daniele; Hu, Bin; Silvestri, Alessandra

    2016-09-01

    We consider Hořava gravity within the framework of the effective field theory (EFT) of dark energy and modified gravity. We work out a complete mapping of the theory into the EFT language for an action including all the operators which are relevant for linear perturbations with up to sixth order spatial derivatives. We then employ an updated version of the EFTCAMB/EFTCosmoMC package to study the cosmology of the low-energy limit of Hořava gravity and place constraints on its parameters using several cosmological data sets. In particular we use cosmic microwave background (CMB) temperature-temperature and lensing power spectra by Planck 2013, WMAP low-ℓ polarization spectra, the WiggleZ galaxy power spectrum, local Hubble measurements, Supernovae data from SNLS, SDSS and HST, and the baryon acoustic oscillation measurements from BOSS, SDSS and 6dFGS. We get improved upper bounds, with respect to those from Big Bang Nucleosynthesis, on the deviation of the cosmological gravitational constant from the local Newtonian one. At the level of the background phenomenology, we find a relevant rescaling of the Hubble rate at all epochs, which has a strong impact on the cosmological observables; at the level of perturbations, we discuss in detail all the relevant effects on the observables and find that in general the quasi-static approximation is not safe to describe the evolution of perturbations. Overall we find that the effects of the modifications induced by the low-energy Hořava gravity action are quite dramatic and current data place tight bounds on the theory parameters.

  19. Model-independent cosmological constraints from growth and expansion

    NASA Astrophysics Data System (ADS)

    L'Huillier, Benjamin; Shafieloo, Arman; Kim, Hyungjin

    2018-05-01

    Reconstructing the expansion history of the Universe from Type Ia supernovae data, we fit the growth rate measurements and put model-independent constraints on some key cosmological parameters, namely, Ωm, γ, and σ8. The constraints are consistent with those from the concordance model within the framework of general relativity, but the current quality of the data is not sufficient to rule out modified gravity models. Adding the condition that dark energy density should be positive at all redshifts, independently of its equation of state, further constrains the parameters and interestingly supports the concordance model.
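
    The growth-index parametrization that underlies constraints on Ωm, γ, and σ8 can be illustrated numerically (a sketch with assumed fiducial values and a flat ΛCDM background, not the authors' model-independent reconstruction): f(z) = Ωm(z)^γ and fσ8(z) = f(z) σ8 D(z)/D(z=0), with D obtained by integrating dlnD/dlna = f.

    import numpy as np

    def f_sigma8(z_grid, omega_m0=0.3, gamma=0.55, sigma8=0.8):
        """Growth-rate observable f*sigma8(z) in the growth-index parametrization."""
        a = 1.0 / (1.0 + z_grid[::-1])              # scale factor, increasing order
        omega_m_a = omega_m0 * a**-3 / (omega_m0 * a**-3 + 1.0 - omega_m0)
        f = omega_m_a**gamma                        # growth rate f = dlnD/dlna
        lna = np.log(a)
        # Trapezoidal integration of dlnD/dlna = f, normalized so that D(a=1) = 1.
        lnD = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(lna))))
        D = np.exp(lnD - lnD[-1])
        return (f * sigma8 * D)[::-1]               # back to increasing-redshift order

    z = np.linspace(0.0, 1.0, 11)
    print(np.round(f_sigma8(z), 3))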

  20. Image-optimized Coronal Magnetic Field Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  1. Parameter design considerations for an oscillator IR-FEL

    NASA Astrophysics Data System (ADS)

    Jia, Qi-Ka

    2017-01-01

    An infrared oscillator FEL user facility will be built at the National Synchrotron Radiation Laboratory in Hefei, China. In this paper, the parameter design of the oscillator FEL is discussed, and some original relevant approaches and expressions are presented. Analytic formulae are used to estimate the optical field gain and saturation power for the preliminary design. By considering both physical and technical constraints, the relation of the deflection parameter K to the undulator period is analyzed. This helps us to determine the ranges of the magnetic pole gap, the electron energy and the radiation wavelength. The relations and design of the optical resonator parameters are analyzed. Using dimensionless quantities, the interdependences between the radii of curvature of the resonator mirror and the various parameters of the optical resonator are clearly demonstrated. The effect of the parallel-plate waveguide is analyzed for the far-infrared oscillator FEL. The condition under which a waveguide is necessary and the modified filling factor in the waveguide case are given. Supported by National Natural Science Foundation of China (21327901, 11375199)
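
    The coupling between undulator period, deflection parameter K, and electron energy mentioned above follows the standard planar-undulator resonance relation; the short sketch below evaluates it for assumed example values (not the facility's actual design numbers).

    def undulator_resonance(E_MeV, lambda_u_cm, B0_T):
        """On-axis resonant wavelength of a planar undulator.

        K      = 0.934 * B0[T] * lambda_u[cm]
        lambda = lambda_u / (2 gamma^2) * (1 + K^2 / 2)
        """
        gamma = E_MeV / 0.511                       # Lorentz factor
        K = 0.934 * B0_T * lambda_u_cm
        wavelength_m = (lambda_u_cm * 1e-2) / (2.0 * gamma**2) * (1.0 + K**2 / 2.0)
        return K, wavelength_m

    # Assumed example: 30 MeV beam, 46 mm period, 0.4 T peak field.
    K, lam = undulator_resonance(E_MeV=30.0, lambda_u_cm=4.6, B0_T=0.4)
    print(f"K = {K:.2f}, radiation wavelength = {lam * 1e6:.1f} um")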

  2. Cosmic Ray Propagation through the Magnetic Fields of the Galaxy with Extended Halo

    NASA Technical Reports Server (NTRS)

    Zhang, Ming

    2005-01-01

    In this project we perform theoretical studies of 3-dimensional cosmic ray propagation in magnetic field configurations of the Galaxy with an extended halo. We employ our newly developed Markov stochastic process methods to solve the diffusive cosmic ray transport equation. We seek to understand observations of cosmic ray spectra and composition under the constraints of the observations of diffuse gamma ray and radio emission from the Galaxy. The model parameters are directly related to properties of our Galaxy, such as the size of the Galactic halo, particle transport in Galactic magnetic fields, distribution of interstellar gas, primary cosmic ray source distribution and their confinement in the Galaxy. The core of this investigation is the development of software for cosmic ray propagation models with the Markov stochastic process approach. Values of important model parameters for the halo diffusion model are examined in comparison with observations of cosmic ray spectra, composition and the diffuse gamma-ray background. This report summarizes our achievements in the grant period at the Florida Institute of Technology. Work at the co-investigator's institution, the University of New Hampshire, under a companion grant, will be covered in detail by a separate report.

  3. Unified Dark Matter scalar field models with fast transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertacca, Daniele; Bruni, Marco; Piattella, Oliver F.

    2011-02-01

    We investigate the general properties of Unified Dark Matter (UDM) scalar field models with Lagrangians with a non-canonical kinetic term, looking specifically for models that can produce a fast transition between an early Einstein-de Sitter CDM-like era and a later Dark Energy like phase, similarly to the barotropic fluid UDM models in JCAP01(2010)014. However, while the background evolution can be very similar in the two cases, the perturbations are naturally adiabatic in fluid models, while in the scalar field case they are necessarily non-adiabatic. The new approach to building UDM Lagrangians proposed here allows us to escape the common problem of the fine-tuning of the parameters which plagues many UDM models. We analyse the properties of perturbations in our model, focusing on the evolution of the effective speed of sound and that of the Jeans length. With this insight, we can set theoretical constraints on the parameters of the model, predicting sufficient conditions for the model to be viable. An interesting feature of our models is that what can be interpreted as w{sub DE} can be < −1 without violating the null energy conditions.

  4. Asymmetric dark matter and baryogenesis from pseudoscalar inflation

    NASA Astrophysics Data System (ADS)

    Cado, Yann; Sabancilar, Eray

    2017-04-01

    We show that both the baryon asymmetry of the Universe and the dark matter abundance can be explained within a single framework that makes use of maximally helical hypermagnetic fields produced during pseudoscalar inflation and the chiral anomaly in the Standard Model. We consider a minimal asymmetric dark matter model free from anomalies and constraints. We find that the observed baryon and the dark matter abundances are achieved for a wide range of inflationary parameters, and the dark matter mass lies in the range 7-15 GeV. The novelty of our mechanism stems from the fact that the same source of CP violation occurring during inflation explains both baryonic and dark matter in the Universe with two inflationary parameters, hence addressing all the initial condition problems in an economical way.

  5. Asymmetric dark matter and baryogenesis from pseudoscalar inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cado, Yann; Sabancilar, Eray, E-mail: yann.cado@epfl.ch, E-mail: eray.sabancilar@epfl.ch

    2017-04-01

    We show that both the baryon asymmetry of the Universe and the dark matter abundance can be explained within a single framework that makes use of maximally helical hypermagnetic fields produced during pseudoscalar inflation and the chiral anomaly in the Standard Model. We consider a minimal asymmetric dark matter model free from anomalies and constraints. We find that the observed baryon and the dark matter abundances are achieved for a wide range of inflationary parameters, and the dark matter mass lies in the range 7–15 GeV. The novelty of our mechanism stems from the fact that the same source of CP violation occurring during inflation explains both baryonic and dark matter in the Universe with two inflationary parameters, hence addressing all the initial condition problems in an economical way.

  6. Fluid pressure arrival time tomography: Estimation and assessment in the presence of inequality constraints, with an application to a producing gas field at Krechba, Algeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rucci, A.; Vasco, D.W.; Novali, F.

    2010-04-01

    Deformation in the overburden proves useful in deducing spatial and temporal changes in the volume of a producing reservoir. Based upon these changes we estimate diffusive travel times associated with the transient flow due to production, and then, as the solution of a linear inverse problem, the effective permeability of the reservoir. An advantage of an approach based upon travel times, as opposed to one based upon the amplitude of surface deformation, is that it is much less sensitive to the exact geomechanical properties of the reservoir and overburden. Inequalities constrain the inversion, under the assumption that the fluid production only results in pore volume decreases within the reservoir. We apply the formulation to satellite-based estimates of deformation in the material overlying a thin gas production zone at the Krechba field in Algeria. The peak displacement after three years of gas production is approximately 0.5 cm, overlying the eastern margin of the anticlinal structure defining the gas field. Using data from 15 irregularly-spaced images of range change, we calculate the diffusive travel times associated with the startup of a gas production well. The inequality constraints are incorporated into the estimates of model parameter resolution and covariance, improving the resolution by roughly 30 to 40%.
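
    A minimal sketch of a linear inversion with a one-sided (sign) constraint of the kind described above, using non-negative least squares as a stand-in for the restriction to pore-volume decreases; the forward matrix and data are synthetic placeholders, not the Krechba data.

    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(2)

    # Synthetic linear problem d = G m with a sign-constrained model m >= 0.
    n_data, n_model = 40, 15
    G = rng.uniform(0.0, 1.0, (n_data, n_model))
    m_true = np.zeros(n_model)
    m_true[3:7] = [0.5, 1.0, 0.8, 0.3]              # compact positive anomaly
    d = G @ m_true + rng.normal(0.0, 0.02, n_data)  # noisy "travel-time" data

    # Inequality-constrained solution versus plain least squares.
    m_con, misfit = nnls(G, d)
    m_lsq, *_ = np.linalg.lstsq(G, d, rcond=None)
    print("constrained misfit:", round(misfit, 4))
    print("most negative unconstrained coefficient:", round(float(m_lsq.min()), 4))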

  7. Thermalized axion inflation

    NASA Astrophysics Data System (ADS)

    Ferreira, Ricardo Z.; Notari, Alessio

    2017-09-01

    We analyze the dynamics of inflationary models with a coupling of the inflaton φ to gauge fields of the form φ F F̃/f, as in the case of axions. It is known that this leads to an instability, with exponential amplification of gauge fields, controlled by the parameter ξ = φ̇/(2fH), which can strongly affect the generation of cosmological perturbations and even the background. We show that scattering rates involving gauge fields can become larger than the expansion rate H, due to the very large occupation numbers, and create a thermal bath of particles of temperature T during inflation. In the thermal regime, energy is transferred to smaller scales, radically modifying the predictions of this scenario. We thus argue that previous constraints on ξ are alleviated. If the gauge fields have Standard Model interactions, which naturally provides reheating, they thermalize already at ξ ≳ 2.9, before perturbativity constraints and also before backreaction takes place. In absence of SM interactions (i.e. for a dark photon), we find that gauge fields and inflaton perturbations thermalize if ξ ≳ 3.4; however, observations require ξ ≳ 6, which is above the perturbativity and backreaction bounds and so a dedicated study is required. After thermalization, though, the system should evolve non-trivially due to the competition between the instability and the gauge field thermal mass. If the thermal mass and the instabilities equilibrate, we expect an equilibrium temperature of T_eq ≃ ξH/ḡ, where ḡ is the effective gauge coupling. Finally, we estimate the spectrum of perturbations if φ is thermal and find that the tensor to scalar ratio is suppressed by H/(2T), if tensors do not thermalize.

  8. Thermalized axion inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira, Ricardo Z.; Notari, Alessio, E-mail: rferreira@icc.ub.edu, E-mail: notari@ub.edu

    2017-09-01

    We analyze the dynamics of inflationary models with a coupling of the inflaton φ to gauge fields of the form φ F F̃/f, as in the case of axions. It is known that this leads to an instability, with exponential amplification of gauge fields, controlled by the parameter ξ = φ̇/(2fH), which can strongly affect the generation of cosmological perturbations and even the background. We show that scattering rates involving gauge fields can become larger than the expansion rate H, due to the very large occupation numbers, and create a thermal bath of particles of temperature T during inflation. In the thermal regime, energy is transferred to smaller scales, radically modifying the predictions of this scenario. We thus argue that previous constraints on ξ are alleviated. If the gauge fields have Standard Model interactions, which naturally provides reheating, they thermalize already at ξ ≳ 2.9, before perturbativity constraints and also before backreaction takes place. In absence of SM interactions (i.e. for a dark photon), we find that gauge fields and inflaton perturbations thermalize if ξ ≳ 3.4; however, observations require ξ ≳ 6, which is above the perturbativity and backreaction bounds and so a dedicated study is required. After thermalization, though, the system should evolve non-trivially due to the competition between the instability and the gauge field thermal mass. If the thermal mass and the instabilities equilibrate, we expect an equilibrium temperature of T_eq ≅ ξH/ḡ, where ḡ is the effective gauge coupling. Finally, we estimate the spectrum of perturbations if φ is thermal and find that the tensor to scalar ratio is suppressed by H/(2T), if tensors do not thermalize.
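
    A trivial numerical illustration of the two quantities defined above, ξ = φ̇/(2fH) and the equilibrium-temperature estimate T_eq ≃ ξH/ḡ; all input values are arbitrary and chosen only to reproduce a ξ near the thermalization threshold quoted in the abstract.

    def xi_parameter(phi_dot, f, H):
        """Instability parameter xi = phi_dot / (2 f H)."""
        return phi_dot / (2.0 * f * H)

    def equilibrium_temperature(xi, H, g_bar):
        """Rough equilibrium temperature T_eq ~ xi * H / g_bar."""
        return xi * H / g_bar

    # Arbitrary illustrative numbers, in units where H = 1.
    H, f, phi_dot, g_bar = 1.0, 0.05, 0.34, 0.5
    xi = xi_parameter(phi_dot, f, H)
    print("xi   =", round(xi, 2))                   # 3.4
    print("T_eq =", round(equilibrium_temperature(xi, H, g_bar), 2))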

  9. Probing Inflation Using Galaxy Clustering On Ultra-Large Scales

    NASA Astrophysics Data System (ADS)

    Dalal, Roohi; de Putter, Roland; Dore, Olivier

    2018-01-01

    A detailed understanding of curvature perturbations in the universe is necessary to constrain theories of inflation. In particular, measurements of the local non-gaussianity parameter, f_NL^loc, enable us to distinguish between two broad classes of inflationary theories, single-field and multi-field inflation. While most single-field theories predict f_NL^loc ≈ -(5/12)(n_s - 1), in multi-field theories, f_NL^loc is not constrained to this value and is allowed to be observably large. Achieving σ(f_NL^loc) = 1 would give us discovery potential for detecting multi-field inflation, while finding f_NL^loc = 0 would rule out a good fraction of interesting multi-field models. We study the use of galaxy clustering on ultra-large scales to achieve this level of constraint on f_NL^loc. Upcoming surveys such as Euclid and LSST will give us galaxy catalogs from which we can construct the galaxy power spectrum and hence infer a value of f_NL^loc. We consider two possible methods of determining the galaxy power spectrum from a catalog of galaxy positions: the traditional Feldman-Kaiser-Peacock (FKP) power spectrum estimator, and an optimal quadratic estimator (OQE). We implemented and tested each method using mock galaxy catalogs, and compared the resulting constraints on f_NL^loc. We find that the FKP estimator can measure f_NL^loc in an unbiased way, but there remains room for improvement in its precision. We also find that the OQE is not computationally fast, but remains a promising option due to its ability to isolate the power spectrum at large scales. We plan to extend this research to study alternative methods, such as pixel-based likelihood functions. We also plan to study the impact of general relativistic effects at these scales on our ability to measure f_NL^loc.

  10. Figures of merit for present and future dark energy probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mortonson, Michael J.; Huterer, Dragan; Hu, Wayne

    2010-09-15

    We compare current and forecasted constraints on dynamical dark energy models from Type Ia supernovae and the cosmic microwave background using figures of merit based on the volume of the allowed dark energy parameter space. For a two-parameter dark energy equation of state that varies linearly with the scale factor, and assuming a flat universe, the area of the error ellipse can be reduced by a factor of ~10 relative to current constraints by future space-based supernova data and CMB measurements from the Planck satellite. If the dark energy equation of state is described by a more general basis of principal components, the expected improvement in volume-based figures of merit is much greater. While the forecasted precision for any single parameter is only a factor of 2-5 smaller than current uncertainties, the constraints on dark energy models bounded by -1 ≤ w ≤ 1 improve for approximately 6 independent dark energy parameters, resulting in a reduction of the total allowed volume of principal component parameter space by a factor of ~100. Typical quintessence models can be adequately described by just 2-3 of these parameters even given the precision of future data, leading to a more modest but still significant improvement. In addition to advances in supernova and CMB data, percent-level measurement of absolute distance and/or the expansion rate is required to ensure that dark energy constraints remain robust to variations in spatial curvature.
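
    A volume-based figure of merit of the kind discussed above is commonly taken as the inverse square root of the determinant of the parameter covariance (equivalently, the square root of the Fisher-matrix determinant); the sketch below compares two hypothetical two-parameter experiments with made-up covariances.

    import numpy as np

    def volume_fom(cov):
        """Figure of merit ~ 1 / sqrt(det C), i.e. inverse of the allowed parameter volume."""
        return 1.0 / np.sqrt(np.linalg.det(np.asarray(cov)))

    # Hypothetical (w0, wa) covariances for a current and a future probe.
    cov_current = [[0.040, -0.030], [-0.030, 0.250]]
    cov_future = [[0.004, -0.003], [-0.003, 0.025]]

    improvement = volume_fom(cov_future) / volume_fom(cov_current)
    print("figure-of-merit improvement factor:", round(improvement, 1))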

  11. Cosmological constraints from galaxy clusters in the 2500 square-degree SPT-SZ survey

    DOE PAGES

    Haan, T. de; Benson, B. A.; Bleem, L. E.; ...

    2016-11-18

    Here, we present cosmological parameter constraints obtained from galaxy clusters identified by their Sunyaev–Zel’dovich effect signature in the 2500 square-degree South Pole Telescope Sunyaev Zel’dovich (SPT-SZ) survey. We consider the 377 cluster candidates identified at $$z\gt 0.25$$ with a detection significance greater than five, corresponding to the 95% purity threshold for the survey. We compute constraints on cosmological models using the measured cluster abundance as a function of mass and redshift. We include additional constraints from multi-wavelength observations, including Chandra X-ray data for 82 clusters and a weak lensing-based prior on the normalization of the mass-observable scaling relations. Assuming a spatially flat ΛCDM cosmology, we combine the cluster data with a prior on H_0 and find $${\sigma }_{8}=0.784\pm 0.039$$ and $${{\rm{\Omega }}}_{m}=0.289\pm 0.042$$, with the parameter combination $${\sigma }_{8}{({{\rm{\Omega }}}_{m}/0.27)}^{0.3}=0.797\pm 0.031$$. These results are in good agreement with constraints from the cosmic microwave background (CMB) from SPT, WMAP, and Planck, as well as with constraints from other cluster data sets. We also consider several extensions to ΛCDM, including models in which the equation of state of dark energy w, the species-summed neutrino mass, and/or the effective number of relativistic species ($${N}_{\mathrm{eff}}$$) are free parameters. When combined with constraints from the Planck CMB, H_0, baryon acoustic oscillation, and SNe data, adding the SPT cluster data improves the w constraint by 14%, to $$w=-1.023\pm 0.042$$.

  12. Cosmological constraints from galaxy clusters in the 2500 square-degree SPT-SZ survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haan, T. de; Benson, B. A.; Bleem, L. E.

    Here, we present cosmological parameter constraints obtained from galaxy clusters identified by their Sunyaev–Zel’dovich effect signature in the 2500 square-degree South Pole Telescope Sunyaev Zel’dovich (SPT-SZ) survey. We consider the 377 cluster candidates identified at $$z\gt 0.25$$ with a detection significance greater than five, corresponding to the 95% purity threshold for the survey. We compute constraints on cosmological models using the measured cluster abundance as a function of mass and redshift. We include additional constraints from multi-wavelength observations, including Chandra X-ray data for 82 clusters and a weak lensing-based prior on the normalization of the mass-observable scaling relations. Assuming a spatially flat ΛCDM cosmology, we combine the cluster data with a prior on H_0 and find $${\sigma }_{8}=0.784\pm 0.039$$ and $${{\rm{\Omega }}}_{m}=0.289\pm 0.042$$, with the parameter combination $${\sigma }_{8}{({{\rm{\Omega }}}_{m}/0.27)}^{0.3}=0.797\pm 0.031$$. These results are in good agreement with constraints from the cosmic microwave background (CMB) from SPT, WMAP, and Planck, as well as with constraints from other cluster data sets. We also consider several extensions to ΛCDM, including models in which the equation of state of dark energy w, the species-summed neutrino mass, and/or the effective number of relativistic species ($${N}_{\mathrm{eff}}$$) are free parameters. When combined with constraints from the Planck CMB, H_0, baryon acoustic oscillation, and SNe data, adding the SPT cluster data improves the w constraint by 14%, to $$w=-1.023\pm 0.042$$.

  13. Throughput and latency programmable optical transceiver by using DSP and FEC control.

    PubMed

    Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki

    2017-05-15

    We propose and experimentally demonstrate a proof-of-concept of a programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) for satisfying throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The multiple degrees of freedom of the parameters often lead to difficulty in finding the optimum combination among the parameters due to an explosion of the number of combinations. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using constraints on the parameters combined with a precise analytical model. For precise BER prediction with the specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation formats, symbol rates, and power difference between sub-channels. Next, we formulate simple constraints on the parameters and combine the constraints with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate the end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.
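
    The pruning of parameter combinations by simple constraints can be sketched as below; the format menu, the toy latency and OSNR rules, and the thresholds are invented placeholders, not the paper's analytical BER model.

    from itertools import product

    formats = {"QPSK": 2, "16QAM": 4, "64QAM": 6}          # bits per symbol
    symbol_rates = [16e9, 32e9, 64e9]                      # baud
    fec_overheads = {"light": 0.07, "strong": 0.25}        # FEC overhead fraction

    def feasible_sets(target_bps, max_latency_us, osnr_db):
        """Enumerate (format, rate, FEC) sets meeting crude throughput/latency/OSNR rules."""
        feasible = []
        for (fmt, bps), rate, (fec, oh) in product(formats.items(), symbol_rates, fec_overheads.items()):
            net_bps = bps * rate * (1.0 - oh)              # net bit rate after FEC overhead
            latency_us = 1.0 + (50.0 if fec == "strong" else 5.0)          # toy decoding latency
            required_osnr = 3.0 * bps - (4.0 if fec == "strong" else 0.0)  # toy OSNR requirement
            if net_bps >= target_bps and latency_us <= max_latency_us and osnr_db >= required_osnr:
                feasible.append((fmt, rate, fec, net_bps))
        return feasible

    for fmt, rate, fec, net in feasible_sets(100e9, 10.0, 14.0):
        print(f"{fmt:6s} {rate / 1e9:4.0f} GBd  FEC={fec:6s}  net={net / 1e9:6.1f} Gb/s")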

  14. Finding Mass Constraints Through Third Neutrino Mass Eigenstate Decay

    NASA Astrophysics Data System (ADS)

    Gangolli, Nakul; de Gouvêa, André; Kelly, Kevin

    2018-01-01

    In this paper we aim to constrain the decay parameter for the third neutrino mass utilizing already accepted constraints on the other mixing parameters from the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix. The main purpose of this project is to determine the parameters that will allow the Jiangmen Underground Neutrino Observatory (JUNO) to observe a decay parameter with some statistical significance. Another goal is to determine the parameters that JUNO could detect in the case that the third neutrino mass is lighter than the first two neutrino species. We also replicate the results that were found in the JUNO Conceptual Design Report (CDR). By utilizing chi-squared analysis, constraints have been put on the mixing angles, mass squared differences, and the third neutrino decay parameter. These statistical tests take into account background noise and normalization corrections, and thus the finalized bounds are a good approximation for the true bounds that JUNO can detect. If the decay parameter is not included in our models, the 99% confidence interval lies within the bounds 0 s to 2.80×10^-12 s. However, if we account for a decay parameter of 3×10^-5 eV^2, then the 99% confidence interval lies within 8.73×10^-12 s to 8.73×10^-11 s.
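
    A generic chi-squared with nuisance "pull" terms of the kind referred to above (a sketch with a synthetic spectrum and made-up uncertainties, not the JUNO analysis):

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    n_bins = 20
    prediction = 100.0 * np.exp(-np.linspace(0.0, 2.0, n_bins))   # toy expected spectrum
    background = 5.0 * np.ones(n_bins)                            # toy background
    observed = rng.poisson(prediction + background)

    def chi2(params, sigma_norm=0.02, sigma_bkg=0.10):
        """Chi-squared with normalization and background nuisance pulls."""
        alpha, beta = params                                      # normalization and background scale
        expected = (1.0 + alpha) * prediction + (1.0 + beta) * background
        stat = np.sum((observed - expected) ** 2 / expected)
        pulls = (alpha / sigma_norm) ** 2 + (beta / sigma_bkg) ** 2
        return stat + pulls

    best = minimize(chi2, x0=[0.0, 0.0], method="Nelder-Mead")
    print("best-fit nuisances:", np.round(best.x, 3), " chi2_min:", round(best.fun, 1))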

  15. Primordial perturbations from dilaton-induced gauge fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Kiwoon; Choi, Ki-Young; Kim, Hyungjin

    2015-10-01

    We study the primordial scalar and tensor perturbations in an inflation scenario involving a spectator dilaton field. In our setup, the rolling spectator dilaton causes a tachyonic instability of gauge fields, leading to a copious production of gauge fields in the superhorizon regime, which generates additional scalar and tensor perturbations through gravitational interactions. Our prime concern is the possibility to enhance the tensor-to-scalar ratio r relative to the standard result, while satisfying the observational constraints. To this end, we allow the dilaton field to be stabilized before the end of inflation, but after the CMB scales exit the horizon. We show that for the inflaton slow roll parameter ε ≳ 10^{-3}, the tensor-to-scalar ratio in our setup can be enhanced only by a factor of O(1) compared to the standard result. On the other hand, for smaller ε corresponding to a lower inflation energy scale, a much bigger enhancement can be achieved, so that our setup can give rise to an observably large r ≳ 10^{-2} even when ε ≪ 10^{-3}. The tensor perturbation sourced by the spectator dilaton can have a strong scale dependence, and is generically red-tilted. We also discuss a specific model to realize our scenario, and identify the parameter region giving an observably large r for relatively low inflation energy scales.

  16. CONSTRAINING THE STRING GAUGE FIELD BY GALAXY ROTATION CURVES AND PERIHELION PRECESSION OF PLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Yeuk-Kwan E.; Xu Feng, E-mail: cheung@nju.edu.cn

    2013-09-01

    We discuss a cosmological model in which the string gauge field coupled universally to matter gives rise to an extra centripetal force and will have observable signatures on cosmological and astronomical observations. Several tests are performed using data including galaxy rotation curves of 22 spiral galaxies of varied luminosities and sizes and perihelion precessions of planets in the solar system. The rotation curves of the same group of galaxies are independently fit using a dark matter model with the generalized Navarro-Frenk-White (NFW) profile and the string model. A remarkable fit of galaxy rotation curves is achieved using the one-parameter string model as compared to the three-parameter dark matter model with the NFW profile. The average χ^2 value of the NFW fit is 9% better than that of the string model at a price of two more free parameters. Furthermore, from the string model, we can give a dynamical explanation for the phenomenological Tully-Fisher relation. We are able to derive a relation between field strength, galaxy size, and luminosity, which can be verified with data from the 22 galaxies. To further test the hypothesis of the universal existence of the string gauge field, we apply our string model to the solar system. Constraint on the magnitude of the string field in the solar system is deduced from the current ranges for any anomalous perihelion precession of planets allowed by the latest observations. The field distribution resembles a dipole field originating from the Sun. The string field strength deduced from the solar system observations is of a similar magnitude as the field strength needed to sustain the rotational speed of the Sun inside the Milky Way. This hypothesis can be tested further by future observations with higher precision.

  17. Strict Constraint Feasibility in Analysis and Design of Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
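
    The idea of checking a hard constraint over a hyper-rectangular uncertainty set can be illustrated with a small worst-case search (here a bounded numerical optimization with multiple starts, rather than the analytically verifiable procedure of the paper); the constraint function and the parameter box are invented for illustration.

    import numpy as np
    from scipy.optimize import minimize

    def g(p):
        """Hypothetical hard constraint g(p) <= 0 in two uncertain parameters."""
        return p[0] ** 2 + 0.5 * p[0] * p[1] - 1.0

    p_lo, p_hi = np.array([-0.8, -1.0]), np.array([0.8, 1.0])
    bounds = list(zip(p_lo, p_hi))

    # Worst case = maximum of g over the box, found by minimizing -g from several starts.
    starts = [p_lo, p_hi, 0.5 * (p_lo + p_hi), np.array([0.8, -1.0]), np.array([-0.8, 1.0])]
    worst = max(
        -minimize(lambda p: -g(p), x0=s, bounds=bounds, method="L-BFGS-B").fun
        for s in starts
    )
    print("worst-case g over the box:", round(worst, 3))
    print("hard constraint satisfied for all parameter realizations:", worst <= 0.0)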

  18. Modeling Real-Time Human-Automation Collaborative Scheduling of Unmanned Vehicles

    DTIC Science & Technology

    2013-06-01

    that they can only take into account those quantifiable variables, parameters, objectives, and constraints identified in the design stages that were deemed to be critical. Previous...increased training and operating costs (Haddal & Gertler, 2010) and challenges in meeting the ever-increasing demand for more UV operations (U.S. Air

  19. Theoretical and observational constraints on Tachyon Inflation

    NASA Astrophysics Data System (ADS)

    Barbosa-Cendejas, Nandinii; De-Santiago, Josue; German, Gabriel; Hidalgo, Juan Carlos; Rigel Mora-Luna, Refugio

    2018-03-01

    We constrain several models in Tachyonic Inflation derived from the large-N formalism by considering theoretical aspects as well as the latest observational data. On the theoretical side, we assess the field range of our models by means of the excursion of the equivalent canonical field. On the observational side, we employ BK14+PLANCK+BAO data to perform a parameter estimation analysis as well as a Bayesian model selection to distinguish the most favoured models among all four classes here presented. We observe that the original potential V ∝ sech(T) is strongly disfavoured by observations with respect to a reference model with flat priors on inflationary observables. This realisation of Tachyon inflation also presents a large field range which may demand further quantum corrections. We also provide examples of potentials derived from the polynomial and the perturbative classes which are both statistically favoured and theoretically acceptable.

  20. Speededness and Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Xiong, Xinhui

    2013-01-01

    Two simple constraints on the item parameters in a response--time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…

  1. Three-dimensional elastic-plastic finite-element analyses of constraint variations in cracked bodies

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.; Bigelow, C. A.; Shivakumar, K. N.

    1993-01-01

    Three-dimensional elastic-plastic (small-strain) finite-element analyses were used to study the stresses, deformations, and constraint variations around a straight-through crack in finite-thickness plates for an elastic-perfectly plastic material under monotonic and cyclic loading. Middle-crack tension specimens were analyzed for thicknesses ranging from 1.25 to 20 mm with various crack lengths. Three local constraint parameters, related to the normal, tangential, and hydrostatic stresses, showed similar variations along the crack front for a given thickness and applied stress level. Numerical analyses indicated that cyclic stress history and crack growth reduced the local constraint parameters in the interior of a plate, especially at high applied stress levels. A global constraint factor alpha(sub g) was defined to simulate three-dimensional effects in two-dimensional crack analyses. The global constraint factor was calculated as an average through-the-thickness value over the crack-front plastic region. Values of alpha(sub g) were found to be nearly independent of crack length and were related to the stress-intensity factor for a given thickness.

  2. Texture analysis as a predictor of radiation-induced xerostomia in head and neck patients undergoing IMRT.

    PubMed

    Nardone, Valerio; Tini, Paolo; Nioche, Christophe; Mazzei, Maria Antonietta; Carfagno, Tommaso; Battaglia, Giuseppe; Pastina, Pierpaolo; Grassi, Roberta; Sebaste, Lucio; Pirtoli, Luigi

    2018-06-01

    Image texture analysis (TA) is a heterogeneity quantifying approach that cannot be appreciated by the naked eye, and early evidence suggests that TA has great potential in the field of oncology. The aim of this study is to evaluate parotid gland texture analysis (TA) combined with formal dosimetry as a factor for predicting severe late xerostomia in patients undergoing radiation therapy for head and neck cancers. We performed a retrospective analysis of patients treated at our Radiation Oncology Unit between January 2010 and December 2015, and selected the patients whose normal dose constraints for the parotid gland (mean dose < 26 Gy for the bilateral gland) could not be satisfied due to the presence of positive nodes close to the parotid glands. The parotid gland that showed the higher V30 was contoured on CT simulation and analysed with LifeX Software©. TA parameters included features of grey-level co-occurrence matrix (GLCM), neighbourhood grey-level dependence matrix (NGLDM), grey-level run length matrix (GLRLM), grey-level zone length matrix (GLZLM), sphericity, and indices from the grey-level histogram. We performed a univariate and multivariate analysis between all the texture parameters, the volume of the gland, the normal dose parameters (V30 and Mean Dose), and the development of severe chronic xerostomia. Seventy-eight patients were included and 25 (31%) developed chronic xerostomia. The TA parameters correlated with severe chronic xerostomia included V30 (OR 5.63), Dmean (OR 5.71), Kurtosis (OR 0.78), GLCM Correlation (OR 1.34), and RLNU (OR 2.12). The multivariate logistic regression showed a significant correlation between V30 (0.001), GLCM correlation (p: 0.026), RLNU (p: 0.011), and chronic xerostomia (p < 0.001, R2:0.664). Xerostomia represents an important cause of morbidity for head and neck cancer survivors after radiation therapy, and in certain cases normal dose constraints cannot be satisfied. Our results seem promising as texture analysis could enhance the normal dose constraints for the prediction of xerostomia.

  3. Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme

    NASA Astrophysics Data System (ADS)

    Hsin, Cheng-Ho; Inigo, Rafael M.

    1990-03-01

    The detection and estimation of motion are generally involved in computing a velocity field of time-varying images. A completely new modified spatio-temporal gradient scheme to determine motion is proposed. This is derived by using gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations. The constraints are that the second directional derivatives of image intensity at an edge point in the smoothed image will be constant at times t and t+L. This scheme basically has two stages: spatio-temporal filtering, and velocity estimation. Initially, image sequences are processed by a set of oriented spatio-temporal filters which are designed using a Gaussian derivative model. The velocity is then estimated for these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage of the modified spatio-temporal gradient scheme over the traditional ones is that an infinite number of motion constraint equations are derived instead of only one. Therefore, it solves the aperture problem without requiring any additional assumptions and is simply a local process. The second advantage is that because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided. Therefore the error in gradient measurement is reduced significantly. The third advantage is that during the processing of the motion detection and estimation algorithm, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by parameters of the oriented spatio-temporal filters. Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a wide range of velocities can be detected. The scheme has been tested for both synthetic and real images. The results of simulations are very satisfactory.

  4. Model-independent constraints on modified gravity from current data and from the Euclid and SKA future surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taddei, Laura; Martinelli, Matteo; Amendola, Luca, E-mail: taddei@thphys.uni-heidelberg.de, E-mail: martinelli@lorentz.leidenuniv.nl, E-mail: amendola@thphys.uni-heidelberg.de

    2016-12-01

    The aim of this paper is to constrain modified gravity with redshift space distortion observations and supernovae measurements. Compared with a standard ΛCDM analysis, we include three additional free parameters, namely the initial conditions of the matter perturbations, the overall perturbation normalization, and a scale-dependent modified gravity parameter modifying the Poisson equation, in an attempt to perform a more model-independent analysis. First, we constrain the Poisson parameter Y (also called G {sub eff}) by using currently available f σ{sub 8} data and the recent SN catalog JLA. We find that the inclusion of the additional free parameters makes the constraints significantly weaker than when fixing them to the standard cosmological value. Second, we forecast future constraints on Y by using the predicted growth-rate data for the Euclid and SKA missions. Here again we point out the weakening of the constraints when the additional parameters are included. Finally, we adopt the specific Horndeski form as the modified gravity Poisson parameter, and use scale-dependent forecasts to build an exclusion plot for the Yukawa potential akin to the ones realized in laboratory experiments, both for the Euclid and the SKA surveys.

  5. WAMA: a method of optimizing reticle/die placement to increase litho cell productivity

    NASA Astrophysics Data System (ADS)

    Dor, Amos; Schwarz, Yoram

    2005-05-01

    This paper focuses on reticle/field placement methodology issues, the disadvantages of typical methods used in the industry, and the innovative way that the WAMA software solution achieves optimized placement. Typical wafer placement methodologies used in the semiconductor industry consider a very limited number of parameters, such as placing the maximum number of dies on the wafer circle and manually modifying die placement to minimize edge yield degradation. This paper describes how WAMA software takes into account process characteristics, manufacturing constraints and business objectives to optimize placement for maximum stepper productivity and maximum good die (yield) on the wafer.
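
    The counting problem at the core of such placement optimization can be sketched as follows: for a given die size and grid offset, count the dies whose corners all fall inside the usable wafer circle, then scan offsets for the best count. All numbers are illustrative and this is not the WAMA algorithm itself, which also weighs process and yield considerations.

    import numpy as np

    def gross_die_count(wafer_d=300.0, die_w=26.0, die_h=33.0, edge=3.0, dx=0.0, dy=0.0):
        """Count dies fully inside the usable wafer circle for a given grid offset (mm)."""
        r = wafer_d / 2.0 - edge
        n = int(np.ceil(wafer_d / min(die_w, die_h))) + 2
        count = 0
        for i in range(-n, n):
            for j in range(-n, n):
                x0, y0 = i * die_w + dx, j * die_h + dy          # lower-left corner
                corners = [(x0, y0), (x0 + die_w, y0), (x0, y0 + die_h), (x0 + die_w, y0 + die_h)]
                if all(x * x + y * y <= r * r for x, y in corners):
                    count += 1
        return count

    # Scan candidate grid offsets over one die pitch in each direction.
    dx_grid = np.linspace(0.0, 26.0, 14, endpoint=False)
    dy_grid = np.linspace(0.0, 33.0, 12, endpoint=False)
    best = max((gross_die_count(dx=dx, dy=dy), dx, dy) for dx in dx_grid for dy in dy_grid)
    print("best gross die count:", best[0], "at offset", (round(best[1], 1), round(best[2], 1)))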

  6. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  7. European Train Control System: A Case Study in Formal Verification

    NASA Astrophysics Data System (ADS)

    Platzer, André; Quesel, Jan-David

    Complex physical systems have several degrees of freedom. They only work correctly when their control parameters obey corresponding constraints. Based on the informal specification of the European Train Control System (ETCS), we design a controller for its cooperation protocol. For its free parameters, we successively identify constraints that are required to ensure collision freedom. We formally prove the parameter constraints to be sharp by characterizing them equivalently in terms of reachability properties of the hybrid system dynamics. Using our deductive verification tool KeYmaera, we formally verify controllability, safety, liveness, and reactivity properties of the ETCS protocol that entail collision freedom. We prove that the ETCS protocol remains correct even in the presence of perturbation by disturbances in the dynamics. We verify that safety is preserved when a PI controlled speed supervision is used.
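
    A minimal sketch of the kind of free-parameter constraint involved (an illustrative simplification, not the verified KeYmaera model; names and numbers are placeholders): a train at speed v with guaranteed braking deceleration b should keep its movement authority only if it can still slow to the permitted target speed before the end of that authority.

    ```python
    def authority_is_safe(v, v_target, b, distance_to_end, margin=0.0):
        """Simplified ETCS-style controllability check (illustrative only).

        v               current speed [m/s]
        v_target        speed permitted at end of movement authority [m/s]
        b               guaranteed braking deceleration [m/s^2], b > 0
        distance_to_end remaining movement authority [m]
        margin          extra distance covering reaction delays [m]
        """
        braking_distance = max(v**2 - v_target**2, 0.0) / (2.0 * b)
        return braking_distance + margin <= distance_to_end

    # Example: 80 m/s train, full stop required, b = 0.7 m/s^2
    print(authority_is_safe(80.0, 0.0, 0.7, 5000.0))  # True (about 4571 m needed)
    ```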

  8. Multi-objective control of nonlinear boiler-turbine dynamics with actuator magnitude and rate constraints.

    PubMed

    Chen, Pang-Chia

    2013-01-01

    This paper investigates multi-objective controller design approaches for nonlinear boiler-turbine dynamics subject to actuator magnitude and rate constraints. System nonlinearity is handled by a suitable linear parameter varying system representation with drum pressure as the system varying parameter. Variation of the drum pressure is represented by suitable norm-bounded uncertainty and affine dependence on system matrices. Based on linear matrix inequality algorithms, the magnitude and rate constraints on the actuator and the deviations of fluid density and water level are formulated while the tracking abilities on the drum pressure and power output are optimized. Variation ranges of drum pressure and magnitude tracking commands are used as controller design parameters, determined according to the boiler-turbine's operation range. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
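
    The actuator limits handled by the controller can be pictured with a simple discrete-time magnitude-and-rate limiter (an illustrative sketch of what the constraints mean for the command signal, not the LMI synthesis itself; all limit values are placeholders):

    ```python
    import numpy as np

    def limit_command(u_cmd, u_prev, u_min, u_max, rate_max, dt):
        """Apply magnitude and rate saturation to an actuator command."""
        # rate constraint: the valve can only travel so far in one sample period
        du = np.clip(u_cmd - u_prev, -rate_max * dt, rate_max * dt)
        # magnitude constraint: physical limits of the actuator position
        return float(np.clip(u_prev + du, u_min, u_max))

    u = 0.3
    for u_cmd in [0.9, 0.9, 0.9, 0.2]:
        u = limit_command(u_cmd, u, u_min=0.0, u_max=1.0, rate_max=0.05, dt=1.0)
        print(round(u, 3))   # 0.35, 0.4, 0.45, 0.4
    ```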

  9. Interpreting the 750 GeV diphoton excess by the singlet extension of the Manohar-Wise model

    NASA Astrophysics Data System (ADS)

    Cao, Junjie; Han, Chengcheng; Shang, Liangliang; Su, Wei; Yang, Jin Min; Zhang, Yang

    2016-04-01

    The evidence of a new scalar particle X from the 750 GeV diphoton excess, and the absence of any other signal of new physics at the LHC so far, suggest the existence of new colored scalars, which may be moderately light and thus can induce sizable Xgg and Xγγ couplings without resorting to very strong interactions. Motivated by this speculation, we extend the Manohar-Wise model by adding one gauge singlet scalar field. The resulting theory then predicts one singlet-dominated scalar ϕ as well as three kinds of color-octet scalars, which can mediate through loops the ϕgg and ϕγγ interactions. After fitting the model to the diphoton data at the LHC, we find that in reasonable parameter regions the excess can be explained at the 1σ level by the process gg → ϕ → γγ, and the best points predict the central value of the excess rate with $\chi^2_{\rm min} = 2.32$, which corresponds to a p-value of 0.68. We also consider the constraints from various LHC Run I signals, and we conclude that, although these constraints are powerful in excluding the parameter space of the model, the best points are still experimentally allowed.

  10. Cross-correlating 2D and 3D galaxy surveys

    DOE PAGES

    Passaglia, Samuel; Manzotti, Alessandro; Dodelson, Scott

    2017-06-08

    Galaxy surveys probe both structure formation and the expansion rate, making them promising avenues for understanding the dark universe. Photometric surveys accurately map the 2D distribution of galaxy positions and shapes in a given redshift range, while spectroscopic surveys provide sparser 3D maps of the galaxy distribution. We present a way to analyse overlapping 2D and 3D maps jointly and without loss of information. We represent 3D maps using spherical Fourier-Bessel (sFB) modes, which preserve radial coverage while accounting for the spherical sky geometry, and we decompose 2D maps in a spherical harmonic basis. In these bases, a simple expression exists for the cross-correlation of the two fields. One very powerful application is the ability to simultaneously constrain the redshift distribution of the photometric sample, the sample biases, and cosmological parameters. We use our framework to show that combined analysis of DESI and LSST can improve cosmological constraints by factors of $${\sim}1.2$$ to $${\sim}1.8$$ on the region where they overlap relative to identically sized disjoint regions. We also show that in the overlap of DES and SDSS-III in Stripe 82, cross-correlating improves photo-$z$ parameter constraints by factors of $${\sim}2$$ to $${\sim}12$$ over internal photo-$z$ reconstructions.
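
    Schematically (standard conventions, not necessarily the paper's normalization), the 3D field is expanded in sFB modes and the 2D field in spherical harmonics,

    $$ \delta_{\ell m}(k) = \sqrt{\tfrac{2}{\pi}}\int d^3r\,\delta(\mathbf{r})\,j_\ell(kr)\,Y^{*}_{\ell m}(\hat{\mathbf{r}})\,, \qquad a_{\ell m} = \int d^2\hat{\mathbf{n}}\,\sigma(\hat{\mathbf{n}})\,Y^{*}_{\ell m}(\hat{\mathbf{n}})\,, $$

    and the joint analysis uses the auto-spectra of each set of coefficients together with the cross-spectrum $\langle a_{\ell m}\,\delta^{*}_{\ell m}(k)\rangle$, which for a statistically isotropic sky is diagonal in $\ell$ and independent of $m$.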

  11. Robust Motion Vision For A Vehicle Moving On A Plane

    NASA Astrophysics Data System (ADS)

    Moni, Shankar; Weldon, E. J.

    1987-05-01

    A vehicle equipped with a computer vision system moves on a plane. We show that subject to certain constraints, the system can determine the motion of the vehicle (one rotational and two translational degrees of freedom) and the depth of the scene in front of the vehicle. The constraints include limits on the speed of the vehicle, presence of texture on the plane and absence of pitch and roll in the vehicular motion. It is possible to decouple the problems of finding the vehicle's motion and the depth of the scene in front of the vehicle by using two rigidly connected cameras. One views a field with known depth (i.e. the ground plane) and estimates the motion parameters, and the other determines the depth map knowing the motion parameters. The motion is constrained to be planar to increase robustness. We use a least squares method of fitting the vehicle motion to observed brightness gradients. With this method, no correspondence between image points needs to be established and information from the entire image is used in calculating motion. The algorithm performs very reliably on real image sequences and these results have been included. The results compare favourably to the performance of the algorithm of Negahdaripour and Horn [2], where six degrees of freedom are assumed.
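
    The correspondence-free least-squares step can be sketched as follows (my own simplified image-motion Jacobian and variable names, not the paper's exact ground-plane model): with brightness constancy, every pixel contributes one linear equation in the three planar-motion parameters, so a single least-squares solve over the whole image recovers the motion.

    ```python
    import numpy as np

    def planar_motion_lsq(Ix, Iy, It, x, y, depth):
        """Least-squares planar motion (tx, tz, yaw_rate) from image gradients.

        Ix, Iy, It : spatial and temporal brightness derivatives (flattened)
        x, y       : normalized pixel coordinates (flattened)
        depth      : known depth of the viewed ground plane at each pixel
        Brightness constancy gives Ix*u + Iy*v + It = 0 per pixel, with the
        image motion (u, v) linear in the motion parameters (placeholder model).
        """
        J_u = np.stack([-1.0 / depth, x / depth, -(1.0 + x**2)], axis=1)
        J_v = np.stack([np.zeros_like(depth), y / depth, -x * y], axis=1)
        A = Ix[:, None] * J_u + Iy[:, None] * J_v   # one equation per pixel
        b = -It
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        return params
    ```

    Because the system is assembled from every pixel with a usable gradient, no point correspondences are needed, which is the key property claimed above.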

  12. Delayed Response and Biosonar Perception Explain Movement Coordination in Trawling Bats

    PubMed Central

    Giuggioli, Luca; McKetterick, Thomas J.; Holderied, Marc

    2015-01-01

    Animal coordinated movement interactions are commonly explained by assuming unspecified social forces of attraction, repulsion and alignment with parameters drawn from observed movement data. Here we propose and test a biologically realistic and quantifiable biosonar movement interaction mechanism for echolocating bats based on spatial perceptual bias, i.e. actual sound field, a reaction delay, and observed motor constraints in speed and acceleration. We found that foraging pairs of bats flying over a water surface swapped leader-follower roles and performed chases or coordinated manoeuvres by copying the heading a nearby individual has had up to 500 ms earlier. Our proposed mechanism based on the interplay between sensory-motor constraints and delayed alignment was able to recreate the observed spatial actor-reactor patterns. Remarkably, when we varied model parameters (response delay, hearing threshold and echolocation directionality) beyond those observed in nature, the spatio-temporal interaction patterns created by the model only recreated the observed interactions, i.e. chases, and best matched the observed spatial patterns for just those response delays, hearing thresholds and echolocation directionalities found to be used by bats. This supports the validity of our sensory ecology approach of movement coordination, where interacting bats localise each other by active echolocation rather than eavesdropping. PMID:25811627
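
    A toy version of the proposed interaction rule (assumed parameter values and a simplified steering law of my own; the full model also includes the biosonar sound-field bias and measured hearing thresholds): the follower steers toward the heading the leader had one response delay earlier, subject to caps on speed and acceleration.

    ```python
    import numpy as np

    def follower_step(pos, vel, leader_headings, t, dt,
                      delay_steps=25, v_max=6.0, a_max=20.0):
        """Advance the follower one step by copying a delayed leader heading.

        leader_headings : the leader's past headings [rad], one entry per step
        delay_steps     : response delay in samples (500 ms at 50 Hz -> 25)
        v_max, a_max    : motor constraints on speed [m/s] and accel. [m/s^2]
        """
        target = leader_headings[max(t - delay_steps, 0)]
        desired_vel = v_max * np.array([np.cos(target), np.sin(target)])
        accel = (desired_vel - vel) / dt
        a_norm = np.linalg.norm(accel)
        if a_norm > a_max:                 # acceleration cap
            accel *= a_max / a_norm
        vel = vel + accel * dt
        speed = np.linalg.norm(vel)
        if speed > v_max:                  # speed cap
            vel *= v_max / speed
        return pos + vel * dt, vel
    ```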

  14. Lightning Charge Retrievals: Dimensional Reduction, LDAR Constraints, and a First Comparison w/ LIS Satellite Data

    NASA Technical Reports Server (NTRS)

    Koshak, William; Krider, E. Philip; Murray, Natalie; Boccippio, Dennis

    2007-01-01

    A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight to just four. The four unknowns are found by performing a numerical minimization of a chi-squared goodness-of-fit function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source". In this way, all 8 parameters are found, yet a numerical search of only 4 parameters is required. The inversion method is applied to the understanding of lightning charge retrievals. The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) instrument. Because lightning effectively deposits charge within thundercloud charge centers and because LDAR traces the geometrical development of the lightning channel with high precision, the LDAR data provides an ideal constraint for finding the best model charge solutions. In particular, LDAR data can be used to help determine both the horizontal and vertical positions of the model charges, thereby eliminating dipole ambiguities. The results of the LDAR-constrained charge retrieval method have been compared to the locations of optical pulses/flash locations detected by the Lightning Imaging Sensor (LIS).

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simard, G.; et al.

    We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg$^2$ of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant ($$\Lambda$$CDM), and to models with single-parameter extensions to $$\Lambda$$CDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find $$\sigma_8 \Omega_{\rm m}^{0.25}=0.598 \pm 0.024$$ from the lensing data alone with relatively weak priors placed on the other $$\Lambda$$CDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the $$\Lambda$$CDM model. We find $$\Omega_k = -0.012^{+0.021}_{-0.023}$$ or $$M_{\

  16. Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters

    NASA Astrophysics Data System (ADS)

    Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei

    2018-05-01

    In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) when input saturations and uncertain parameters with known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure consisting of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of a smooth saturation function and a smooth projection operator in the control design. Third, to ensure the stability of the overall QAV system, we develop a stability analysis technique for the cascaded system in the presence of both input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures asymptotic convergence of the tracking errors to the origin. Simulation results are provided to illustrate the effectiveness of the proposed method.

  17. Cosmological constraints with weak-lensing peak counts and second-order statistics in a large-field survey

    NASA Astrophysics Data System (ADS)

    Peel, Austin; Lin, Chieh-An; Lanusse, François; Leonard, Adrienne; Starck, Jean-Luc; Kilbinger, Martin

    2017-03-01

    Peak statistics in weak-lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complementary probe to two-point and higher-order statistics to constrain our cosmological models. Next-generation galaxy surveys, with their advanced optics and large areas, will measure the cosmic weak-lensing signal with unprecedented precision. To prepare for these anticipated data sets, we assess the constraining power of peak counts in a simulated Euclid-like survey on the cosmological parameters Ωm, σ8, and w_0^de. In particular, we study how Camelus, a fast stochastic model for predicting peaks, can be applied to such large surveys. The algorithm avoids the need for time-costly N-body simulations, and its stochastic approach provides full PDF information of observables. Considering peaks with a signal-to-noise ratio ≥ 1, we measure the abundance histogram in a mock shear catalogue of approximately 5000 deg^2 using a multiscale mass-map filtering technique. We constrain the parameters of the mock survey using Camelus combined with approximate Bayesian computation, a robust likelihood-free inference algorithm. Peak statistics yield a tight but significantly biased constraint in the σ8-Ωm plane, as measured by the width ΔΣ8 of the 1σ contour. We find Σ8 = σ8(Ωm/0.27)^α = 0.77^{+0.06}_{-0.05} with α = 0.75 for a flat ΛCDM model. The strong bias indicates the need to better understand and control the model systematics before applying it to a real survey of this size or larger. We perform a calibration of the model and compare results to those from the two-point correlation functions ξ± measured on the same field. We calibrate the ξ± result as well, since its contours are also biased, although not as severely as for peaks. In this case, we find for peaks Σ8 = 0.76^{+0.02}_{-0.03} with α = 0.65, while for the combined ξ+ and ξ- statistics the values are Σ8 = 0.76^{+0.02}_{-0.01} and α = 0.70. We conclude that the constraining power can therefore be comparable between the two weak-lensing observables in large-field surveys. Furthermore, the tilt in the σ8-Ωm degeneracy direction for peaks with respect to that of ξ± suggests that a combined analysis would yield tighter constraints than either measure alone. As expected, w_0^de cannot be well constrained without a tomographic analysis, but its degeneracy directions with the other two varied parameters are still clear for both peaks and ξ±.
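
    The likelihood-free inference step can be pictured with a plain rejection-ABC loop (a deliberately minimal sketch; the simulator, summary statistic, prior and tolerance are stand-ins, and population Monte Carlo variants of ABC are usually preferred in practice):

    ```python
    import numpy as np

    def abc_rejection(observed_summary, simulate, prior_sample, distance,
                      n_draws=10000, tolerance=0.1):
        """Keep parameter draws whose simulated summary lies close to the data.

        simulate(theta) -> summary statistic (e.g. a peak-count histogram)
        prior_sample()  -> one draw of (Omega_m, sigma_8, w) from the prior
        distance(a, b)  -> scalar mismatch between two summaries
        """
        accepted = []
        for _ in range(n_draws):
            theta = prior_sample()
            if distance(simulate(theta), observed_summary) < tolerance:
                accepted.append(theta)
        return np.array(accepted)   # samples from the approximate posterior
    ```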

  18. Theoretical prediction of Grüneisen parameter for SiO{sub 2}.TiO{sub 2} bulk metallic glasses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Chandra K.; Pandey, Brijesh K., E-mail: bkpmmmec11@gmail.com; Pandey, Anjani K.

    2016-05-23

    The Grüneisen parameter (γ) is important for setting the limits within which the thermoelastic properties of bulk metallic glasses (BMGs) can be predicted. It can be defined in terms of either microscopic or macroscopic parameters of the material: the former definition is based on the vibrational frequencies of the atoms, while the latter is closely related to the material's thermodynamic properties. Different formulations and equations of state have been used by earlier researchers in this field to predict the Grüneisen parameter for BMGs, but for SiO$_2$·TiO$_2$ very little information is available so far. In the present work we test the validity of two different isothermal equations of state, the Poirier-Tarantola EOS and the usual Tait EOS, for predicting the Grüneisen parameter of SiO$_2$·TiO$_2$ as a function of compression. Applying the thermodynamic limitations associated with the material constraints and analyzing the results, we conclude that the Poirier-Tarantola EOS gives better numerical values of the Grüneisen parameter (γ) for the SiO$_2$·TiO$_2$ BMG.
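
    For context, the two kinds of definition referred to above are (standard relations, my notation) the microscopic mode Grüneisen parameter and the macroscopic thermodynamic one,

    $$ \gamma_i = -\frac{\partial \ln \omega_i}{\partial \ln V}\,, \qquad \gamma_{\rm th} = \frac{\alpha\,K_T\,V_m}{C_V}\,, $$

    where $\omega_i$ are the vibrational mode frequencies, $\alpha$ the volume thermal expansivity, $K_T$ the isothermal bulk modulus, $V_m$ the molar volume and $C_V$ the heat capacity at constant volume.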

  19. Broadband spectral fitting of blazars using XSPEC

    NASA Astrophysics Data System (ADS)

    Sahayanathan, Sunder; Sinha, Atreyee; Misra, Ranjeev

    2018-03-01

    The broadband spectral energy distribution (SED) of blazars is generally interpreted as radiation arising from synchrotron and inverse Compton mechanisms. Traditionally, the underlying source parameters responsible for these emission processes, like particle energy density, magnetic field, etc., are obtained through simple visual reproduction of the observed fluxes. However, this procedure is incapable of providing confidence ranges for the estimated parameters. In this work, we propose an efficient algorithm to perform a statistical fit of the observed broadband spectrum of blazars using different emission models. Moreover, we use the observable quantities as the fit parameters, rather than the direct source parameters which govern the resultant SED. This significantly improves the convergence time and eliminates the uncertainty regarding initial guess parameters. This approach also has an added advantage of identifying the degenerate parameters, which can be removed by including more observable information and/or additional constraints. A computer code developed based on this algorithm is implemented as a user-defined routine in the standard X-ray spectral fitting package, XSPEC. Further, we demonstrate the efficacy of the algorithm by fitting the well sampled SED of blazar 3C 279 during its gamma ray flare in 2014.

  20. Improving the Efficiency and Effectiveness of Community Detection via Prior-Induced Equivalent Super-Network.

    PubMed

    Yang, Liang; Jin, Di; He, Dongxiao; Fu, Huazhu; Cao, Xiaochun; Fogelman-Soulie, Francoise

    2017-03-29

    Due to the importance of community structure in understanding networks, and a surge of interest in community detectability, how to improve community identification performance with pairwise prior information has become a hot topic. However, most existing semi-supervised community detection algorithms focus only on improving accuracy and ignore the impact of priors on speeding up detection. Moreover, they typically require tuning additional parameters and cannot guarantee that the pairwise constraints are satisfied. To address these drawbacks, we propose a general, high-speed, effective and parameter-free semi-supervised community detection framework. By constructing indivisible super-nodes according to the connected subgraphs of the must-link constraints and by forming weighted super-edges based on network topology and cannot-link constraints, our new framework transforms the original network into an equivalent but much smaller Super-Network. The Super-Network perfectly ensures the must-link constraints and effectively encodes the cannot-link constraints. Furthermore, the time complexity of the super-network construction process is linear in the original network size, which makes it efficient. Meanwhile, since the constructed super-network is much smaller than the original one, any existing community detection algorithm runs much faster when using our framework. Finally, the overall process does not introduce any additional parameters, making it more practical.
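
    The super-network construction can be sketched with a union-find pass over the must-link pairs followed by re-weighting of the edges between the resulting groups (a minimal illustration assuming an edge-list input; the cannot-link weighting rule below is a placeholder, not the paper's exact formula):

    ```python
    def build_super_network(n_nodes, edges, must_links, cannot_links, penalty=10.0):
        """Collapse must-linked nodes into super-nodes and weight super-edges."""
        parent = list(range(n_nodes))

        def find(a):                      # union-find with path compression
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        for a, b in must_links:           # must-links: merge into one super-node
            parent[find(a)] = find(b)

        super_edges = {}
        for a, b, w in edges:             # aggregate topology between groups
            ra, rb = find(a), find(b)
            if ra != rb:
                key = (min(ra, rb), max(ra, rb))
                super_edges[key] = super_edges.get(key, 0.0) + w
        for a, b in cannot_links:         # cannot-links: penalize (placeholder)
            ra, rb = find(a), find(b)
            if ra != rb:
                key = (min(ra, rb), max(ra, rb))
                super_edges[key] = super_edges.get(key, 0.0) - penalty
        return super_edges                # input for any community detector
    ```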

  1. A 750 GeV portal: LHC phenomenology and dark matter candidates

    DOE PAGES

    D’Eramo, Francesco; de Vries, Jordy; Panci, Paolo

    2016-05-16

    We study the effective field theory obtained by extending the Standard Model field content with two singlets: a 750 GeV (pseudo-)scalar and a stable fermion. Accounting for collider productions initiated by both gluon and photon fusion, we investigate where the theory is consistent with both the LHC diphoton excess and bounds from Run 1. We analyze dark matter phenomenology in such regions, including relic density constraints as well as collider, direct, and indirect bounds. Scalar portal dark matter models are very close to limits from direct detection and mono-jet searches if gluon fusion dominates, and not constrained at all otherwise. In conclusion, pseudo-scalar models are challenged by photon line limits and mono-jet searches in most of the parameter space.

  3. Optical rectification using geometrical field enhancement in gold nano-arrays

    NASA Astrophysics Data System (ADS)

    Piltan, S.; Sievenpiper, D.

    2017-11-01

    Conversion of photons to electrical energy has a wide variety of applications including imaging, solar energy harvesting, and IR detection. A rectenna device consists of an antenna in addition to a rectifying element to absorb the incident radiation within a certain frequency range. We designed, fabricated, and measured an optical rectifier taking advantage of asymmetrical field enhancement for forward and reverse currents due to geometrical constraints. The gold nano-structures as well as the geometrical parameters offer enhanced light-matter interaction at 382 THz. Using the Taylor expansion of the time-dependent current as a function of the external bias and oscillating optical excitation, we obtained responsivities close to quantum limit of operation. This geometrical approach can offer an efficient, broadband, and scalable solution for energy conversion and detection in the future.
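
    The role of the Taylor expansion mentioned above can be made explicit (standard rectenna reasoning in my notation, not the authors' derivation): for a junction current $I(V)$ biased at $V_b$ and driven by an optical voltage $v_{\rm opt}\cos\omega t$,

    $$ I\big(V_b + v_{\rm opt}\cos\omega t\big) \simeq I(V_b) + I'(V_b)\,v_{\rm opt}\cos\omega t + \tfrac{1}{2}I''(V_b)\,v_{\rm opt}^2\cos^2\omega t\,, $$

    so the time-averaged rectified contribution is $\tfrac{1}{4}I''(V_b)\,v_{\rm opt}^2$; the geometric field enhancement makes the effective current-voltage response asymmetric between forward and reverse directions and hence the curvature $I''(V_b)$ large.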

  4. Gravitational wave signals and cosmological consequences of gravitational reheating

    NASA Astrophysics Data System (ADS)

    Artymowski, Michał; Czerwińska, Olga; Lalak, Zygmunt; Lewicki, Marek

    2018-04-01

    Reheating after inflation can proceed even if the inflaton couples to Standard Model (SM) particles only gravitationally. However, particle production during the transition between de-Sitter expansion and a decelerating Universe is rather inefficient and the necessity to recover the visible Universe leads to a non-standard cosmological evolution initially dominated by remnants of the inflaton field. We remain agnostic to the specific dynamics of the inflaton field and discuss a generic scenario in which its remnants behave as a perfect fluid with a general barotropic parameter w. Using CMB and BBN constraints we derive the allowed range of inflationary scales. We also show that this scenario results in a characteristic primordial Gravitational Wave (GW) spectrum which gives hope for observation in upcoming runs of LIGO as well as in other planned experiments.

  5. Geometric low-energy effective action in a doubled spacetime

    NASA Astrophysics Data System (ADS)

    Ma, Chen-Te; Pezzella, Franco

    2018-05-01

    The ten-dimensional supergravity theory is a geometric low-energy effective theory and the equations of motion for its fields can be obtained from string theory by computing β functions. With d compact dimensions, an O (d , d ; Z) geometric structure can be added to it giving the supergravity theory with T-duality manifest. In this paper, this is constructed through the use of a suitable star product whose role is the one to implement the weak constraint on the fields and the gauge parameters in order to have a closed gauge symmetry algebra. The consistency of the action here proposed is based on the orthogonality of the momenta associated with fields in their triple star products in the cubic terms defined for d ≥ 1. This orthogonality holds also for an arbitrary number of star products of fields for d = 1. Finally, we extend our analysis to the double sigma model, non-commutative geometry and open string theory.

  6. Rate-independent dissipation in phase-field modelling of displacive transformations

    NASA Astrophysics Data System (ADS)

    Tůma, K.; Stupkiewicz, S.; Petryk, H.

    2018-05-01

    In this paper, rate-independent dissipation is introduced into the phase-field framework for modelling of displacive transformations, such as martensitic phase transformation and twinning. The finite-strain phase-field model developed recently by the present authors is here extended beyond the limitations of purely viscous dissipation. The variational formulation, in which the evolution problem is formulated as a constrained minimization problem for a global rate-potential, is enhanced by including a mixed-type dissipation potential that combines viscous and rate-independent contributions. Effective computational treatment of the resulting incremental problem of non-smooth optimization is developed by employing the augmented Lagrangian method. It is demonstrated that a single Lagrange multiplier field suffices to handle the dissipation potential vertex and simultaneously to enforce physical constraints on the order parameter. In this way, the initially non-smooth problem of evolution is converted into a smooth stationarity problem. The model is implemented in a finite-element code and applied to solve two- and three-dimensional boundary value problems representative for shape memory alloys.
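
    In schematic form (my notation, a single scalar order parameter $\eta$), a mixed dissipation potential of the kind described combines a rate-independent term, non-smooth at $\dot\eta = 0$, with a viscous term,

    $$ D(\dot\eta) = \kappa\,|\dot\eta| + \tfrac{1}{2}\,\mu\,\dot\eta^{2}\,, $$

    and it is the vertex of the $|\dot\eta|$ term that the augmented Lagrangian treatment and the single Lagrange multiplier field are introduced to handle.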

  7. Dynamics of relaxed inflation

    NASA Astrophysics Data System (ADS)

    Tangarife, Walter; Tobioka, Kohsaku; Ubaldi, Lorenzo; Volansky, Tomer

    2018-02-01

    The cosmological relaxation of the electroweak scale has been proposed as a mechanism to address the hierarchy problem of the Standard Model. A field, the relaxion, rolls down its potential and, in doing so, scans the squared mass parameter of the Higgs, relaxing it to a parametrically small value. In this work, we promote the relaxion to an inflaton. We couple it to Abelian gauge bosons, thereby introducing the necessary dissipation mechanism which slows down the field in the last stages. We describe a novel reheating mechanism, which relies on the gauge-boson production leading to strong electro-magnetic fields, and proceeds via the vacuum production of electron-positron pairs through the Schwinger effect. We refer to this mechanism as Schwinger reheating. We discuss the cosmological dynamics of the model and the phenomenological constraints from CMB and other experiments. We find that a cutoff close to the Planck scale may be achieved. In its minimal form, the model does not generate sufficient curvature perturbations and additional ingredients, such as a curvaton field, are needed.

  8. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    PubMed

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraint improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
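
    For readers unfamiliar with the structure, a small simulation of the deterministic core of a dual change score model (my parameter values; measurement error, latent variances and the full SEM machinery are omitted) makes the roles of the constant-change slope and the autoproportion coefficient concrete:

    ```python
    import numpy as np

    def dual_change_trajectory(y0, slope, beta, n_waves):
        """Latent trajectory of a dual change score model.

        Each latent change is delta_t = slope + beta * y_{t-1}, and
        y_t = y_{t-1} + delta_t.
        slope : constant-change component (slope factor mean)
        beta  : autoproportion coefficient
        """
        y = np.empty(n_waves)
        y[0] = y0
        for t in range(1, n_waves):
            y[t] = y[t - 1] + slope + beta * y[t - 1]
        return y

    # Holding slope and beta invariant across waves is exactly the constraint
    # whose misspecification the simulation study investigates.
    print(dual_change_trajectory(y0=10.0, slope=2.0, beta=-0.1, n_waves=6))
    ```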

  9. Performance of convolutional codes on fading channels typical of planetary entry missions

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.; Reale, T. J.

    1974-01-01

    The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes the bit error probability performance was investigated as a function of $E_b/N_0$, parameterized by the fading channel parameters. For longer constraint length codes the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effects of simple block interleaving in combating the memory of the channel are explored, using either an analytic approach or digital computer simulation.

  10. Constraints on supersymmetric dark matter for heavy scalar superpartners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Peisi; Roglans, Roger A.; Spiegel, Daniel D.

    2017-05-01

    We study the constraints on neutralino dark matter in minimal low energy supersymmetry models and the case of heavy lepton and quark scalar superpartners. For values of the Higgsino and gaugino mass parameters of the order of the weak scale, direct detection experiments are already putting strong bounds on models in which the dominant interactions between the dark matter candidates and nuclei are governed by Higgs boson exchange processes, particularly for positive values of the Higgsino mass parameter mu. For negative values of mu, there can be destructive interference between the amplitudes associated with the exchange of the standard CP-even Higgs boson and the exchange of the nonstandard one. This leads to specific regions of parameter space which are consistent with the current experimental constraints and a thermal origin of the observed relic density. In this article, we study the current experimental constraints on these scenarios, as well as the future experimental probes, using a combination of direct and indirect dark matter detection and heavy Higgs and electroweakino searches at hadron colliders.

  11. Slowly-rotating neutron stars in massive bigravity

    NASA Astrophysics Data System (ADS)

    Sullivan, A.; Yunes, N.

    2018-02-01

    We study slowly-rotating neutron stars in ghost-free massive bigravity. This theory modifies general relativity by introducing a second, auxiliary but dynamical tensor field that couples to matter through the physical metric tensor through non-linear interactions. We expand the field equations to linear order in slow rotation and numerically construct solutions in the interior and exterior of the star with a set of realistic equations of state. We calculate the physical mass function with respect to observer radius and find that, unlike in general relativity, this function does not remain constant outside the star; rather, it asymptotes to a constant a distance away from the surface, whose magnitude is controlled by the ratio of gravitational constants. The Vainshtein-like radius at which the physical and auxiliary mass functions asymptote to a constant is controlled by the graviton mass scaling parameter, and outside this radius, bigravity modifications are suppressed. We also calculate the frame-dragging metric function and find that bigravity modifications are typically small in the entire range of coupling parameters explored. We finally calculate both the mass-radius and the moment of inertia-mass relations for a wide range of coupling parameters and find that both the graviton mass scaling parameter and the ratio of the gravitational constants introduce large modifications to both. These results could be used to place future constraints on bigravity with electromagnetic and gravitational-wave observations of isolated and binary neutron stars.

  12. Constraints on Non-Newtonian Gravity From the Experiment on Neutron Quantum States in the Earth's Gravitational Field.

    PubMed

    Nesvizhevsky, V V; Protasov, K V

    2005-01-01

    An upper limit to non-Newtonian attractive forces is obtained from the measurement of quantum states of neutrons in the Earth's gravitational field. This limit improves the existing constraints in the nanometer range.
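
    Constraints of this type are conventionally quoted for a Yukawa-type addition to the Newtonian potential (standard parametrization, not specific to this experiment),

    $$ V(r) = -\frac{G\,m_1 m_2}{r}\left(1 + \alpha\,e^{-r/\lambda}\right)\,, $$

    with the measurement excluding regions of the $(\alpha,\lambda)$ plane, here for $\lambda$ in the nanometer range.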

  13. Generalized expectation-maximization segmentation of brain MR images

    NASA Astrophysics Data System (ADS)

    Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.

    2006-03-01

    Manual segmentation of medical images is unpractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation- Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used by tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
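
    The EM core of such a tissue classifier can be sketched with the familiar Gaussian-mixture updates (a minimal single-channel illustration; bias-field correction, the multichannel extension and the MRF prior on the labels, which are what the GEM variant above adds, are omitted):

    ```python
    import numpy as np

    def em_gmm(intensities, n_classes, n_iter=50, seed=0):
        """Plain EM for a 1D Gaussian mixture over voxel intensities."""
        rng = np.random.default_rng(seed)
        x = np.asarray(intensities, dtype=float)
        mu = rng.choice(x, n_classes)
        var = np.full(n_classes, x.var())
        pi = np.full(n_classes, 1.0 / n_classes)
        for _ in range(n_iter):
            # E-step: posterior class responsibilities for every voxel
            log_g = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
            resp = pi * np.exp(log_g)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: update mixture weights, means and variances
            nk = resp.sum(axis=0)
            pi = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return pi, mu, var, resp   # resp is the soft segmentation
    ```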

  14. Thermodynamically consistent model calibration in chemical kinetics

    PubMed Central

    2011-01-01

    Background: The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function.

    Results: We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database.

    Conclusions: TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
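
    The kind of constraint being enforced can be stated compactly (standard chemical-kinetics relations, not the paper's specific notation): for each reversible reaction the rate constants are tied to the standard free-energy change, and around any closed reaction cycle the equilibrium constants must multiply to one (the Wegscheider condition),

    $$ \frac{k_{+}}{k_{-}} = \exp\!\left(-\frac{\Delta G^{\circ}}{RT}\right)\,, \qquad \prod_{j\,\in\,\mathrm{cycle}} \frac{k_{+,j}}{k_{-,j}} = 1\,. $$

    Calibrating rate constants without imposing such relations is what produces the thermodynamically infeasible models that the method is designed to avoid.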

  15. An implicit adaptation algorithm for a linear model reference control system

    NASA Technical Reports Server (NTRS)

    Mabius, L.; Kaufman, H.

    1975-01-01

    This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.

  16. Inflation from a nonlinear magnetic monopole field nonminimally coupled to curvature

    NASA Astrophysics Data System (ADS)

    Otalora, Giovanni; Övgün, Ali; Saavedra, Joel; Videla, Nelson

    2018-06-01

    In the context of nonminimally coupled f(R) gravity theories, we study early inflation driven by a nonlinear monopole magnetic field which is nonminimally coupled to curvature. In order to isolate the effects of the nonminimal coupling between matter and curvature we assume the pure gravitational sector to have the Einstein-Hilbert form. Thus, we study the most simple model with a nonminimal coupling function which is linear in the Ricci scalar. From an effective fluid description, we show the existence of an early exponential expansion regime of the Universe, followed by a transition to a radiation-dominated era. In particular, by applying the most recent results of the Planck collaboration we set the limits on the parameter of the nonminimal coupling, and the quotient of the nonminimal coupling and the nonlinear monopole magnetic scales. We found that these parameters must take large values in order to satisfy the observational constraints. Furthermore, by obtaining the relation for the graviton mass, we show the consistency of our results with the recent gravitational wave data GW170817 of LIGO and Virgo.

  17. Cosmic structures and gravitational waves in ghost-free scalar-tensor theories of gravity

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Karmakar, Purnendu; Matarrese, Sabino; Scomparin, Mattia

    2018-05-01

    We study cosmic structures in the quadratic Degenerate Higher Order Scalar Tensor (qDHOST) model, which has been proposed as the most general scalar-tensor theory (up to quadratic dependence on the covariant derivatives of the scalar field), which is not plagued by the presence of ghost instabilities. We then study a static, spherically symmetric object embedded in de Sitter space-time for the qDHOST model. This model exhibits breaking of the Vainshtein mechanism inside the cosmic structure and Schwarzschild-de Sitter space-time outside, where General Relativity (GR) can be recovered within the Vainshtein radius. We constrained the parameters of the qDHOST model by requiring the validity of the Vainshtein screening mechanism inside the cosmic structures and the consistency with the recently established bounds on gravitational wave speed from GW170817/GRB170817A event. We find that these two constraints rule out the same set of parameters, corresponding to the Lagrangians that are quadratic in second-order derivatives of the scalar field, for the shift symmetric qDHOST.

  18. Inferring the photometric and size evolution of galaxies from image simulations. I. Method

    NASA Astrophysics Data System (ADS)

    Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien

    2017-09-01

    Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolutions for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline infers efficiently the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases.

  19. Robust and Accurate Image-Based Georeferencing Exploiting Relative Orientation Constraints

    NASA Astrophysics Data System (ADS)

    Cavegn, S.; Blaser, S.; Nebiker, S.; Haala, N.

    2018-05-01

    Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment where the measurement distances are a multiple compared to indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2-3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.

  20. Combining cluster number counts and galaxy clustering

    NASA Astrophysics Data System (ADS)

    Lacasa, Fabien; Rosenfeld, Rogerio

    2016-08-01

    The abundance of clusters and the clustering of galaxies are two of the important cosmological probes for current and future large scale surveys of galaxies, such as the Dark Energy Survey. In order to combine them one has to account for the fact that they are not independent quantities, since they probe the same density field. It is important to develop a good understanding of their correlation in order to extract parameter constraints. We present a detailed modelling of the joint covariance matrix between cluster number counts and the galaxy angular power spectrum. We employ the framework of the halo model complemented by a Halo Occupation Distribution model (HOD). We demonstrate the importance of accounting for non-Gaussianity to produce accurate covariance predictions. Indeed, we show that the non-Gaussian covariance becomes dominant at small scales, low redshifts or high cluster masses. We discuss in particular the case of the super-sample covariance (SSC), including the effects of galaxy shot-noise, halo second order bias and non-local bias. We demonstrate that the SSC obeys mathematical inequalities and positivity. Using the joint covariance matrix and a Fisher matrix methodology, we examine the prospects of combining these two probes to constrain cosmological and HOD parameters. We find that the combination indeed results in noticeably better constraints, with improvements of order 20% on cosmological parameters compared to the best single probe, and even greater improvement on HOD parameters, with reduction of error bars by a factor 1.4-4.8. This happens in particular because the cross-covariance introduces a synergy between the probes on small scales. We conclude that accounting for non-Gaussian effects is required for the joint analysis of these observables in galaxy surveys.
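
    The forecast machinery behind such a combination can be sketched in a few lines (a generic Gaussian Fisher-matrix sketch; the stacked data vector of cluster counts and angular power spectrum band powers, and the joint covariance with its cross block, are assumed inputs rather than computed here):

    ```python
    import numpy as np

    def fisher_matrix(dmu_dtheta, cov):
        """Gaussian Fisher matrix F_ij = (dmu/dtheta_i)^T C^{-1} (dmu/dtheta_j).

        dmu_dtheta : (n_data, n_params) derivatives of the stacked data vector
                     (cluster counts + galaxy C_ell) w.r.t. the parameters
        cov        : (n_data, n_data) joint covariance, including the
                     counts x clustering cross-covariance block
        """
        cinv = np.linalg.inv(cov)
        return dmu_dtheta.T @ cinv @ dmu_dtheta

    def marginalized_errors(fisher):
        """1-sigma marginalized errors: sqrt of the diagonal of F^{-1}."""
        return np.sqrt(np.diag(np.linalg.inv(fisher)))

    # Setting the cross block of `cov` to zero and comparing the resulting
    # errors isolates how much of the gain comes from the probe synergy.
    ```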

  1. Constraining cosmologies with fundamental constants - I. Quintessence and K-essence

    NASA Astrophysics Data System (ADS)

    Thompson, Rodger I.; Martins, C. J. A. P.; Vielzeuf, P. E.

    2013-01-01

    Many cosmological models invoke rolling scalar fields to account for the observed acceleration of the expansion of the Universe. These theories generally include a potential V(φ) which is a function of the scalar field φ. Although V(φ) can be represented by a very diverse set of functions, recent work has shown that under some conditions, such as the slow-roll conditions, the equation of state parameter w is either independent of the form of V(φ) or part of a family of solutions with only a few parameters. In realistic models of this type the scalar field couples to other sectors of the model, leading to possibly observable changes in the fundamental constants such as the fine structure constant α and the proton to electron mass ratio μ. Although the current situation on a possible variance of α is complicated, there are firm limitations on the variance of μ in the early universe. This paper explores the limits this puts on the validity of various cosmologies that invoke rolling scalar fields. We find that the limit on the variation of μ puts significant constraints on the product of a cosmological parameter, w + 1, and a new physics parameter, ζ_μ^2, where ζ_μ is the coupling constant between μ and the rolling scalar field. Even when the cosmologies are restricted to very slow roll conditions, either the value of ζ_μ must be at the lower end of, or less than, its expected values, or the value of w + 1 must be restricted to values vanishingly close to 0. This implies that either the rolling scalar field is very weakly coupled to the electromagnetic field (small ζ_μ), very weakly coupled to gravity ((w + 1) ≈ 0), or both. These results stress that adherence to the measured invariance in μ is a very significant test of the validity of any proposed cosmology and any new physics it requires. The limits on the variation of μ also produce a significant tension with the reported changes in the value of α.

  2. Radiofrequency pulse design in parallel transmission under strict temperature constraints.

    PubMed

    Boulant, Nicolas; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre

    2014-09-01

    To gain radiofrequency (RF) pulse performance by directly addressing the temperature constraints, as opposed to the specific absorption rate (SAR) constraints, in parallel transmission at ultra-high field. The magnitude least-squares RF pulse design problem under hard SAR constraints was solved repeatedly by using the virtual observation points and an active-set algorithm. The SAR constraints were updated at each iteration based on the result of a thermal simulation. The numerical study was performed for an SAR-demanding and simplified time of flight sequence using B1 and ΔB0 maps obtained in vivo on a human brain at 7T. The proposed adjustment of the SAR constraints combined with an active-set algorithm provided higher flexibility in RF pulse design within a reasonable time. The modifications of those constraints acted directly upon the thermal response as desired. Although further confidence in the thermal models is needed, this study shows that RF pulse design under strict temperature constraints is within reach, allowing better RF pulse performance and faster acquisitions at ultra-high fields at the cost of higher sequence complexity. Copyright © 2013 Wiley Periodicals, Inc.

  3. Physical retrieval of precipitation water contents from Special Sensor Microwave/Imager (SSM/I) data. Part 2: Retrieval method and applications (report version)

    NASA Technical Reports Server (NTRS)

    Olson, William S.

    1990-01-01

    A physical retrieval method for estimating precipitating water distributions and other geophysical parameters based upon measurements from the DMSP-F8 SSM/I is developed. Three unique features of the retrieval method are (1) sensor antenna patterns are explicitly included to accommodate varying channel resolution; (2) precipitation-brightness temperature relationships are quantified using the cloud ensemble/radiative parameterization; and (3) spatial constraints are imposed for certain background parameters, such as humidity, which vary more slowly in the horizontal than the cloud and precipitation water contents. The general framework of the method will facilitate the incorporation of measurements from the SSM/T and SSM/T-2 sounders and from geostationary infrared sensors, as well as information from conventional sources (e.g., radiosondes) or numerical forecast model fields.

  4. Thermodynamics of hairy black holes in Lovelock gravity

    NASA Astrophysics Data System (ADS)

    Hennigar, Robie A.; Tjoa, Erickson; Mann, Robert B.

    2017-02-01

    We perform a thorough study of the thermodynamic properties of a class of Lovelock black holes with conformal scalar hair arising from coupling of a real scalar field to the dimensionally extended Euler densities. We study the linearized equations of motion of the theory and describe constraints under which the theory is free from ghosts/tachyons. We then consider, within the context of black hole chemistry, the thermodynamics of the hairy black holes in the Gauss-Bonnet and cubic Lovelock theories. We clarify the connection between isolated critical points and thermodynamic singularities, finding a one parameter family of these critical points which occur for well-defined thermodynamic parameters. We also report on a number of novel results, including `virtual triple points' and the first example of a `λ-line' — a line of second order phase transitions — in black hole thermodynamics.

  5. Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
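
    One common way to impose inequality constraints of this kind, in the spirit of (though not necessarily identical to) the hard-constraint method developed in the paper, is to run a standard Kalman update and then project the estimate onto the constraint set whenever a constraint D x <= d is violated; a minimal sketch:

    ```python
    import numpy as np

    def project_estimate(x, P, D, d):
        """Project the state estimate x onto the active constraints D x <= d.

        Uses the weighted least-squares projection with weight P^{-1}, so the
        correction respects the filter's own uncertainty. Only the violated
        (active) constraints are enforced.
        """
        active = D @ x > d
        if not np.any(active):
            return x
        Da, da = D[active], d[active]
        S = Da @ P @ Da.T
        return x - P @ Da.T @ np.linalg.solve(S, Da @ x - da)

    # After each measurement update of an ordinary Kalman filter:
    #   x_post = project_estimate(x_post, P_post, D, d)
    ```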

  6. Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1993-01-01

    The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control move at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications of the new technique which demonstrate the quality and expanded capabilities of the input designs produced by the new technique. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.
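
    The following toy sketch conveys the flavour of the design problem, not the dynamic-programming algorithm itself: it enumerates short square-wave input sequences for a hypothetical first-order model, discards sequences that violate an output amplitude constraint, and keeps the one maximizing the determinant of the Fisher information for the model parameters. All model values and limits are assumptions for illustration.

    ```python
    import numpy as np
    from itertools import product

    # Toy illustration (not the dynamic-programming algorithm of the thesis):
    # search over square-wave input sequences for a first-order model
    #   x[k+1] = a*x[k] + b*u[k],  y[k] = x[k] + noise,
    # and pick the sequence that maximizes det of the Fisher information for
    # theta = (a, b), subject to input and output amplitude constraints.

    a_true, b_true = 0.9, 0.5
    n_seg, seg_len = 8, 5                # 8 square-wave segments, 5 samples each
    u_amp, y_max, sigma = 1.0, 4.0, 0.1  # input amplitude, output limit, noise std

    def simulate(theta, u):
        a, b = theta
        x, ys = 0.0, []
        for uk in u:
            x = a * x + b * uk
            ys.append(x)
        return np.array(ys)

    def fisher_det(u):
        # Output sensitivities dy/dtheta by central finite differences.
        eps = 1e-5
        S = []
        for i in range(2):
            dp = np.array([a_true, b_true]); dm = dp.copy()
            dp[i] += eps; dm[i] -= eps
            S.append((simulate(dp, u) - simulate(dm, u)) / (2 * eps))
        S = np.array(S).T                       # N x 2 sensitivity matrix
        return np.linalg.det(S.T @ S / sigma**2)

    best_det, best_u = -np.inf, None
    for signs in product([-1.0, 1.0], repeat=n_seg):
        u = np.repeat(u_amp * np.array(signs), seg_len)
        if np.max(np.abs(simulate((a_true, b_true), u))) > y_max:
            continue                            # output amplitude constraint
        d = fisher_det(u)
        if d > best_det:
            best_det, best_u = d, u
    print("best det(F):", best_det)
    ```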

  7. Constraint damping for the Z4c formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Weyhausen, Andreas; Bernuzzi, Sebastiano; Hilditch, David

    2012-01-01

    One possibility for avoiding constraint violation in numerical relativity simulations adopting free-evolution schemes is to modify the continuum evolution equations so that constraint violations are damped away. Gundlach et al. demonstrated that such a scheme damps low-amplitude, high-frequency constraint-violating modes exponentially for the Z4 formulation of general relativity. Here we analyze the effect of the damping scheme in numerical applications on a conformal decomposition of Z4. After reproducing the theoretically predicted damping rates of constraint violations in the linear regime, we explore numerical solutions not covered by the theoretical analysis. In particular we examine the effect of the damping scheme on low-frequency and on high-amplitude perturbations of flat spacetime, as well as on the long-term dynamics of puncture and compact star initial data in the context of spherical symmetry. We find that the damping scheme is effective provided that the constraint violation is resolved on the numerical grid. On grid noise the combination of artificial dissipation and damping helps to suppress constraint violations. We find that care must be taken in choosing the damping parameter in simulations of puncture black holes. Otherwise the damping scheme can cause undesirable growth of the constraints, and even qualitatively incorrect evolutions. In the numerical evolution of a compact static star we find that the choice of the damping parameter is even more delicate: for a large range of values it results in unphysical behavior, although a suitable choice may lead to a small decrease of constraint violation.

  8. Required Accuracy of Structural Constraints in the Inversion of Electrical Resistivity Data for Improved Water Content Estimation

    NASA Astrophysics Data System (ADS)

    Heinze, T.; Budler, J.; Weigand, M.; Kemna, A.

    2017-12-01

    Water content distribution in the ground is essential for hazard analysis during monitoring of landslide prone hills. Geophysical methods like electrical resistivity tomography (ERT) can be utilized to determine the spatial distribution of water content using established soil physical relationships between bulk electrical resistivity and water content. However, often more dominant electrical contrasts due to lithological structures overwhelm these hydraulic signatures and blur the results in the inversion process. Additionally, the inversion of ERT data requires further constraints. In the standard Occam inversion method, a smoothness constraint is used, assuming that soil properties change softly in space. While this applies in many scenarios, sharp lithological layers with strongly divergent hydrological parameters, as often found in landslide prone hillslopes, are typically badly resolved by standard ERT. We use a structurally constrained ERT inversion approach for improving water content estimation in landslide prone hills by including a priori information about lithological layers. The smoothness constraint is reduced along layer boundaries identified using seismic data. This approach significantly improves water content estimates, because in landslide prone hills a layer of rather high hydraulic conductivity is often followed by a hydraulic barrier like clay-rich soil, causing higher pore pressures. One saturated layer and one almost drained layer also typically result in a sharp contrast in electrical resistivity, assuming that the surface conductivity of the soil does not vary to a similar degree. Using synthetic data, we study the influence of uncertainties in the a priori information on the inverted resistivity and estimated water content distribution. We find a similar behavior over a broad range of models and depths. Based on our simulation results, we provide best-practice recommendations for field applications and suggest important tests to obtain reliable, reproducible and trustworthy results. We finally apply our findings to field data, compare conventional and improved analysis results, and discuss limitations of the structurally constrained inversion approach.
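
    A minimal sketch of the structural-constraint idea follows: in a 1D Tikhonov inversion, the first-difference smoothness weight is reduced at a cell interface where independent (e.g., seismic) information indicates a layer boundary, allowing the recovered model to jump there. The forward operator, noise level, and weights are synthetic placeholders, not an actual ERT sensitivity kernel.

    ```python
    import numpy as np

    # Sketch of the structural-constraint idea for a 1D Tikhonov inversion:
    # the smoothness (first-difference) weight is reduced across a known layer
    # boundary so the model can jump there. Forward operator, data, and all
    # numbers below are illustrative placeholders, not a real ERT kernel.

    n_cells = 40
    boundary_index = 20          # cell interface where seismics indicate a layer
    w_smooth, w_boundary = 1.0, 0.05

    # First-difference regularization matrix with a relaxed weight at the boundary.
    W = np.zeros((n_cells - 1, n_cells))
    for i in range(n_cells - 1):
        w = w_boundary if i == boundary_index else w_smooth
        W[i, i], W[i, i + 1] = -w, w

    # Toy linear forward problem d = G m + noise (G stands in for a sensitivity matrix).
    rng = np.random.default_rng(0)
    G = rng.normal(size=(60, n_cells))
    m_true = np.where(np.arange(n_cells) < boundary_index, 2.0, 4.0)  # sharp layer contrast
    d = G @ m_true + 0.05 * rng.normal(size=60)

    # Regularized least squares: minimize ||G m - d||^2 + lam * ||W m||^2.
    lam = 10.0
    m_est = np.linalg.solve(G.T @ G + lam * W.T @ W, G.T @ d)
    print(np.round(m_est[boundary_index - 2: boundary_index + 3], 2))
    ```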

  9. Effects of anisotropies in turbulent magnetic diffusion in mean-field solar dynamo models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pipin, V. V.; Kosovichev, A. G.

    2014-04-10

    We study how anisotropies of turbulent diffusion affect the evolution of large-scale magnetic fields and the dynamo process on the Sun. The effect of anisotropy is calculated in a mean-field magnetohydrodynamics framework assuming that triple correlations provide relaxation to the turbulent electromotive force (the so-called 'minimal τ-approximation'). We examine two types of mean-field dynamo models: the well-known benchmark flux-transport model and a distributed-dynamo model with a subsurface rotational shear layer. For both models, we investigate effects of the double- and triple-cell meridional circulation, recently suggested by helioseismology and numerical simulations. To characterize the anisotropy effects, we introduce a parameter of anisotropy as the ratio of the radial and horizontal intensities of turbulent mixing. It is found that the anisotropy affects the distribution of magnetic fields inside the convection zone. The concentration of the magnetic flux near the bottom and top boundaries of the convection zone is greater when the anisotropy is stronger. It is shown that the critical dynamo number and the dynamo period approach constant values for large values of the anisotropy parameter. The anisotropy reduces the overlap of toroidal magnetic fields generated in subsequent dynamo cycles in the time-latitude 'butterfly' diagram. If we assume that sunspots are formed in the vicinity of the subsurface shear layer, then the distributed dynamo model with the anisotropic diffusivity satisfies the observational constraints from helioseismology and is consistent with the value of effective turbulent diffusion estimated from the dynamics of surface magnetic fields.

  10. The ΩDE-ΩM Plane in Dark Energy Cosmology

    NASA Astrophysics Data System (ADS)

    Qiang, Yuan; Zhang, Tong-Jie

    The dark energy cosmology with a constant equation of state, w = const., is considered in this paper. The ΩDE-ΩM plane is used to study the present state and expansion history of the universe. Through mathematical analysis, we derive theoretical constraints on the cosmological parameters. Together with observations such as the transition redshift from deceleration to acceleration, more precise constraints on the cosmological parameters can be obtained.
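
    For a constant equation of state, the deceleration-to-acceleration transition redshift follows directly from setting the deceleration parameter to zero; the short sketch below evaluates q(z) and z_t for a flat model (curvature is dropped here for brevity, whereas the paper works in the general ΩDE-ΩM plane). Parameter values are illustrative.

    ```python
    import numpy as np

    # Deceleration parameter q(z) for constant-w dark energy (flat case for brevity)
    # and the transition redshift where q(z_t) = 0. Parameter values are illustrative.

    def q_of_z(z, omega_m, omega_de, w):
        rho_m = omega_m * (1 + z) ** 3
        rho_de = omega_de * (1 + z) ** (3 * (1 + w))
        return 0.5 * (rho_m + (1 + 3 * w) * rho_de) / (rho_m + rho_de)

    def z_transition(omega_m, omega_de, w):
        # Setting q = 0 gives (1 + z_t)^(-3w) = -(1 + 3w) * omega_de / omega_m.
        return (-(1 + 3 * w) * omega_de / omega_m) ** (-1.0 / (3 * w)) - 1.0

    om, ode, w = 0.3, 0.7, -1.0
    print("q(0) =", round(q_of_z(0.0, om, ode, w), 3))    # -0.55 for these values
    print("z_t  =", round(z_transition(om, ode, w), 3))   # ~0.67 for these values
    ```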

  11. Patchy screening of the cosmic microwave background by inhomogeneous reionization

    NASA Astrophysics Data System (ADS)

    Gluscevic, Vera; Kamionkowski, Marc; Hanson, Duncan

    2013-02-01

    We derive a constraint on patchy screening of the cosmic microwave background from inhomogeneous reionization using off-diagonal TB and TT correlations in WMAP-7 temperature/polarization data. We interpret this as a constraint on the rms optical-depth fluctuation Δτ as a function of a coherence multipole LC. We relate these parameters to a comoving coherence scale, of bubble size RC, in a phenomenological model where reionization is instantaneous but occurs on a crinkly surface, and also to the bubble size in a model of “Swiss cheese” reionization where bubbles of fixed size are spread over some range of redshifts. The current WMAP data are still too weak, by several orders of magnitude, to constrain reasonable models, but forthcoming Planck and future EPIC data should begin to approach interesting regimes of parameter space. We also present constraints on the parameter space imposed by the recent results from the EDGES experiment.

  12. Improvement of Gaofen-3 Absolute Positioning Accuracy Based on Cross-Calibration

    PubMed Central

    Deng, Mingjun; Li, Jiansong

    2017-01-01

    The Chinese Gaofen-3 (GF-3) mission was launched in August 2016, equipped with a full polarimetric synthetic aperture radar (SAR) sensor in the C-band, with a resolution of up to 1 m. The absolute positioning accuracy of GF-3 is of great importance, and in-orbit geometric calibration is a key technology for improving absolute positioning accuracy. Conventional geometric calibration is used to accurately calibrate the geometric calibration parameters of the image (internal delay and azimuth shifts) using high-precision ground control data, which are highly dependent on the control data of the calibration field, but it remains costly and labor-intensive to monitor changes in GF-3’s geometric calibration parameters. Based on the positioning consistency constraint of the conjugate points, this study presents a geometric cross-calibration method for the rapid and accurate calibration of GF-3. The proposed method can accurately calibrate geometric calibration parameters without using corner reflectors and high-precision digital elevation models, thus improving absolute positioning accuracy of the GF-3 image. GF-3 images from multiple regions were collected to verify the absolute positioning accuracy after cross-calibration. The results show that this method can achieve a calibration accuracy as high as that achieved by the conventional field calibration method. PMID:29240675

  13. A general framework to test gravity using galaxy clusters - I. Modelling the dynamical mass of haloes in f(R) gravity

    NASA Astrophysics Data System (ADS)

    Mitchell, Myles A.; He, Jian-hua; Arnold, Christian; Li, Baojiu

    2018-06-01

    We propose a new framework for testing gravity using cluster observations, which aims to provide an unbiased constraint on modified gravity models from Sunyaev-Zel'dovich (SZ) and X-ray cluster counts and the cluster gas fraction, among other possible observables. Focusing on a popular f(R) model of gravity, we propose a novel procedure to recalibrate mass scaling relations from Λ cold dark matter (ΛCDM) to f(R) gravity for SZ and X-ray cluster observables. We find that the complicated modified gravity effects can be simply modelled as a dependence on a combination of the background scalar field and redshift, fR(z)/(1 + z), regardless of the f(R) model parameter. By employing a large suite of N-body simulations, we demonstrate that a theoretically derived tanh fitting formula is in excellent agreement with the dynamical mass enhancement of dark matter haloes for a large range of background field parameters and redshifts. Our framework is sufficiently flexible to allow for tests of other models and inclusion of further observables, and the one-parameter description of the dynamical mass enhancement can have important implications on the theoretical modelling of observables and on practical tests of gravity.
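
    The abstract quotes a tanh fitting formula for the dynamical mass enhancement but not its calibrated coefficients, so the sketch below only illustrates the generic form: a tanh step interpolating between the screened limit M_dyn/M_true = 1 and the unscreened limit 4/3 as a function of halo mass, with hypothetical parameters p1 and p2 standing in for the fit that, in the paper, depends on the combination f_R(z)/(1 + z).

    ```python
    import numpy as np

    # Illustrative form only: a tanh step interpolating the dynamical-mass ratio
    # between the screened (1) and unscreened (4/3) limits in f(R) gravity, as a
    # function of halo mass for a given value of f_R(z)/(1 + z). The coefficients
    # p1, p2 below are hypothetical placeholders, not the paper's calibrated fit.

    def mass_enhancement(log10_M, p1=2.0, p2=13.5):
        """Ratio M_dyn / M_true as a smooth step in log halo mass."""
        return 7.0 / 6.0 - (1.0 / 6.0) * np.tanh(p1 * (log10_M - p2))

    for logM in (12.0, 13.5, 15.0):
        print(logM, round(mass_enhancement(logM), 3))
    # Low-mass (unscreened) haloes approach 4/3; high-mass (screened) haloes approach 1.
    ```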

  14. Parameter study and optimization for piezoelectric energy harvester for TPMS considering speed variation

    NASA Astrophysics Data System (ADS)

    Toghi Eshghi, Amin; Lee, Soobum; Lee, Hanmin; Kim, Young-Cheol

    2016-04-01

    In this paper, we perform a design parameter study and design optimization for a piezoelectric energy harvester considering vehicle speed variation. Initially, an FEM model using ANSYS is developed to appraise the performance of a piezoelectric harvester in a rotating tire. The energy harvester proposed here uses the vertical deformation at the contact patch area due to the car weight and centrifugal acceleration. The harvester is composed of a beam clamped at both ends, with a piezoelectric material attached on top. The piezoelectric material operates in the 31 mode of transduction, in which the direction of the applied mechanical stress is perpendicular to that of the electric field. To optimize the harvester performance, we vary the geometrical parameters of the harvester to obtain the maximum power. One of the main challenges in the design process is obtaining the required power while considering the constraints on harvester weight and volume. These two concerns are addressed in this paper. Since the final goal of this study is the development of an energy harvester with a wireless sensor system installed in a real car, real-time data for varying vehicle velocity are taken into account in the power measurements. This study concludes that the proposed design is applicable to wireless tire sensor systems.

  15. Phase-field model of vapor-liquid-solid nanowire growth

    NASA Astrophysics Data System (ADS)

    Wang, Nan; Upmanyu, Moneesh; Karma, Alain

    2018-03-01

    We present a multiphase-field model to describe quantitatively nanowire growth by the vapor-liquid-solid (VLS) process. The free-energy functional of this model depends on three nonconserved order parameters that distinguish the vapor, liquid, and solid phases and describe the energetic properties of various interfaces, including arbitrary forms of anisotropic γ plots for the solid-vapor and solid-liquid interfaces. The evolution equations for those order parameters describe basic kinetic processes including the rapid (quasi-instantaneous) equilibration of the liquid catalyst to a droplet shape with constant mean curvature, the slow incorporation of growth atoms at the droplet surface, and crystallization within the droplet. The standard constraint that the sum of the phase fields equals unity and the conservation of the number of catalyst atoms, which relates the catalyst volume to the concentration of growth atoms inside the droplet, are handled via separate Lagrange multipliers. An analysis of the model is presented that rigorously maps the phase-field equations to a desired set of sharp-interface equations for the evolution of the phase boundaries under the constraint of force balance at three-phase junctions (triple points) given by the Young-Herring relation that includes a torque term related to the anisotropy of the solid-liquid and solid-vapor interface excess free energies. Numerical examples of growth in two dimensions are presented for the simplest case of vanishing crystalline anisotropy and the more realistic case of a solid-liquid γ plot with cusped minima corresponding to two sets of (10) and (11) facets. The simulations reproduce many of the salient features of nanowire growth observed experimentally, including growth normal to the substrate with tapering of the side walls, transitions between different growth orientations, and crawling growth along the substrate. They also reproduce different observed relationships between the nanowire growth velocity and radius depending on the growth condition. For the basic normal growth mode, the steady-state solid-liquid interface tip shape consists of a main facet intersected by two truncated side facets ending at triple points. The ratio of truncated to main facet lengths is in quantitative agreement with the prediction of sharp-interface theory that is developed here for faceted nanowire growth in two dimensions.
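
    One ingredient mentioned above, the constraint that the three phase fields sum to unity, can be illustrated with a very small sketch: a common Lagrange multiplier subtracts the mean of the unconstrained driving forces, so relaxational updates preserve the sum. The double-well bulk energy used here is a toy placeholder, not the paper's VLS free-energy functional.

    ```python
    import numpy as np

    # Minimal sketch of enforcing sum(phi_i) = 1 during relaxational dynamics of
    # three non-conserved order parameters (vapor, liquid, solid). The double-well
    # bulk energy below is a toy placeholder, not the paper's VLS functional.
    # With a common Lagrange multiplier, the constrained update subtracts the mean
    # of the unconstrained driving forces, so the sum of the fields is preserved.

    def df_dphi(phi):
        # Derivative of a simple multi-well bulk energy sum_i phi_i^2 (1 - phi_i)^2.
        return 2 * phi * (1 - phi) * (1 - 2 * phi)

    def step(phi, dt=0.05):
        force = -df_dphi(phi)
        lam = force.mean()              # Lagrange multiplier enforcing sum(phi) = 1
        return phi + dt * (force - lam)

    phi = np.array([0.2, 0.3, 0.5])     # starts on the constraint surface
    for _ in range(200):
        phi = step(phi)
    print(phi, "sum =", phi.sum())      # sum stays 1 to machine precision
    ```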

  16. Constraints on modified gravity from Planck 2015: when the health of your theory makes the difference

    NASA Astrophysics Data System (ADS)

    Salvatelli, Valentina; Piazza, Federico; Marinoni, Christian

    2016-09-01

    We use the effective field theory of dark energy (EFT of DE) formalism to constrain dark energy models belonging to the Horndeski class with the recent Planck 2015 CMB data. The space of theories is spanned by a certain number of parameters determining the linear cosmological perturbations, while the expansion history is set to that of a standard ΛCDM model. We always demand that the theories be free of fatal instabilities. Additionally, we consider two optional conditions, namely that scalar and tensor perturbations propagate with subluminal speed. Such criteria severely restrict the allowed parameter space and are thus very effective in shaping the posteriors. As a result, we confirm that no theory performs better than ΛCDM when CMB data alone are analysed. Indeed, the healthy dark energy models considered here are not able to reproduce those phenomenological behaviours of the effective Newton constant and gravitational slip parameters that, according to previous studies, best fit the data.

  17. Constraints on modified gravity from Planck 2015: when the health of your theory makes the difference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvatelli, Valentina; Piazza, Federico; Marinoni, Christian, E-mail: Valentina.Salvatelli@cpt.univ-mrs.fr, E-mail: Federico.Piazza@cpt.univ-mrs.fr, E-mail: Christian.Marinoni@cpt.univ-mrs.fr

    We use the effective field theory of dark energy (EFT of DE) formalism to constrain dark energy models belonging to the Horndeski class with the recent Planck 2015 CMB data. The space of theories is spanned by a certain number of parameters determining the linear cosmological perturbations, while the expansion history is set to that of a standard ΛCDM model. We always demand that the theories be free of fatal instabilities. Additionally, we consider two optional conditions, namely that scalar and tensor perturbations propagate with subluminal speed. Such criteria severely restrict the allowed parameter space and are thus very effective in shaping the posteriors. As a result, we confirm that no theory performs better than ΛCDM when CMB data alone are analysed. Indeed, the healthy dark energy models considered here are not able to reproduce those phenomenological behaviours of the effective Newton constant and gravitational slip parameters that, according to previous studies, best fit the data.

  18. A first class constraint generates not a gauge transformation, but a bad physical change: The case of electromagnetism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitts, J. Brian, E-mail: jbp25@cam.ac.uk

    In Dirac–Bergmann constrained dynamics, a first-class constraint typically does not alone generate a gauge transformation. By direct calculation it is found that each first-class constraint in Maxwell's theory generates a change in the electric field E⃗ by an arbitrary gradient, spoiling Gauss's law. The secondary first-class constraint p^i_{,i} = 0 still holds, but being a function of derivatives of momenta (mere auxiliary fields), it is not directly about the observable electric field (a function of derivatives of A_μ), which couples to charge. Only a special combination of the two first-class constraints, the Anderson-Bergmann-Castellani gauge generator G, leaves E⃗ unchanged. Likewise only that combination leaves the canonical action invariant, an argument independent of observables. If one uses a first-class constraint to generate instead a canonical transformation, one partly strips the canonical coordinates of physical meaning as electromagnetic potentials, vindicating the Anderson-Bergmann Lagrangian orientation of interesting canonical transformations. The need to keep gauge-invariant the relation q̇ − δH/δp = −E_i − p^i = 0 supports using the gauge generator and primary Hamiltonian rather than the separate first-class constraints and the extended Hamiltonian. Partly paralleling Pons's criticism, it is shown that Dirac's proof that a first-class primary constraint generates a gauge transformation, by comparing evolutions from identical initial data, cancels out and hence fails to detect the alterations made to the initial state. It also neglects the arbitrary coordinates multiplying the secondary constraints inside the canonical Hamiltonian. Thus the gauge-generating property has been ascribed to the primaries alone, not the primary-secondary team G. Hence the Dirac conjecture about secondary first-class constraints as generating gauge transformations rests upon a false presupposition about primary first-class constraints. Clarity about Hamiltonian electromagnetism will be useful for an analogous treatment of GR. Highlights: • A first-class constraint changes the electric field E, spoiling Gauss's law. • A first-class constraint does not leave the action invariant or preserve q̇ − δH/δp. • The gauge generator preserves E, q̇ − δH/δp, and the canonical action. • The error in proofs that first-class primaries generate gauge transformations is shown. • Dirac's conjecture about secondary first-class constraints is blocked.

  19. Cosmology constraints from shear peak statistics in Dark Energy Survey Science Verification data

    DOE PAGES

    Kacprzak, T.; Kirk, D.; Friedrich, O.; ...

    2016-08-19

    Shear peak statistics has gained a lot of attention recently as a practical alternative to the two point statistics for constraining cosmological parameters. We perform a shear peak statistics analysis of the Dark Energy Survey (DES) Science Verification (SV) data, using weak gravitational lensing measurements from a 139 deg^2 field. We measure the abundance of peaks identified in aperture mass maps, as a function of their signal-to-noise ratio, in the signal-to-noise range 0 < S/N < 4. To predict the peak counts as a function of cosmological parameters we use a suite of N-body simulations spanning 158 models with varying Ω_m and σ_8, fixing w = −1, Ω_b = 0.04, h = 0.7 and n_s = 1, to which we have applied the DES SV mask and redshift distribution. In our fiducial analysis we measure σ_8(Ω_m/0.3)^0.6 = 0.77 ± 0.07, after marginalising over the shear multiplicative bias and the error on the mean redshift of the galaxy sample. We introduce models of intrinsic alignments, blending, and source contamination by cluster members. These models indicate that peaks with S/N > 4 would require significant corrections, which is why we do not include them in our analysis. We compare our results to the cosmological constraints from the two point analysis on the SV field and find them to be in good agreement in both the central value and its uncertainty. As a result, we discuss prospects for future peak statistics analysis with upcoming DES data.
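
    The counting step itself can be illustrated with a toy sketch: local maxima of a smoothed, noisy convergence map are identified and histogrammed by signal-to-noise. This only shows the bookkeeping; it is not the DES aperture-mass pipeline, and the map, smoothing scale, and noise level are synthetic.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    # Toy illustration of the peak-counting step only: identify local maxima of a
    # smoothed noisy convergence map and histogram them by signal-to-noise. This
    # is not the DES aperture-mass pipeline; the map and noise level are synthetic.

    rng = np.random.default_rng(1)
    kappa = gaussian_filter(rng.normal(size=(256, 256)), sigma=2.0)   # synthetic signal
    noise = 0.3 * rng.normal(size=kappa.shape)
    smoothed = gaussian_filter(kappa + noise, sigma=3.0)

    sigma_noise = gaussian_filter(noise, sigma=3.0).std()             # noise rms after smoothing
    snr = smoothed / sigma_noise

    # A pixel is a peak if it equals the maximum in its 3x3 neighbourhood.
    peaks = (snr == maximum_filter(snr, size=3)) & (snr > 0)
    counts, edges = np.histogram(snr[peaks], bins=np.arange(0.0, 4.5, 0.5))
    for lo, hi, n in zip(edges[:-1], edges[1:], counts):
        print(f"{lo:.1f} < S/N < {hi:.1f}: {n} peaks")
    ```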

  20. Observational constraints on successful model of quintessential Inflation

    NASA Astrophysics Data System (ADS)

    Geng, Chao-Qiang; Lee, Chung-Chi; Sami, M.; Saridakis, Emmanuel N.; Starobinsky, Alexei A.

    2017-06-01

    We study quintessential inflation using a generalized exponential potential V(φ) ∝ exp(−λ φ^n/M_Pl^n) with n > 1. The model admits slow-roll inflation at early times and leads to close-to-scaling behaviour in the post-inflationary era with an exit to dark energy at late times. We present detailed investigations of the inflationary stage in the light of the Planck 2015 results, study the post-inflationary dynamics, and analytically confirm the existence of an approximately scaling solution. Additionally, assuming that standard massive neutrinos are non-minimally coupled makes the field φ dominant once again at late times, giving rise to the present accelerated expansion of the Universe. We derive observational constraints on the field and time-dependent neutrino masses. In particular, for n = 6 (8), the parameter λ is constrained to be log λ > −7.29 (−11.7); the model produces a spectral index of the power spectrum of primordial scalar (matter density) perturbations of n_s = 0.959 ± 0.001 (0.961 ± 0.001) and a tiny tensor-to-scalar ratio, r < 1.72 × 10^−2 (2.32 × 10^−2), respectively. Consequently, the upper bound on possible values of the sum of neutrino masses, Σ m_ν ≲ 2.5 eV, is significantly enhanced compared to that in the standard ΛCDM model.

  1. Optimization of HTS superconducting magnetic energy storage magnet volume

    NASA Astrophysics Data System (ADS)

    Korpela, Aki; Lehtonen, Jorma; Mikkonen, Risto

    2003-08-01

    Nonlinear optimization problems in the field of electromagnetics have been successfully solved by means of sequential quadratic programming (SQP) and the finite element method (FEM). For example, the combination of SQP and FEM has been proven to be an efficient tool in the optimization of low temperature superconductor (LTS) superconducting magnetic energy storage (SMES) magnets. The procedure can also be applied to the optimization of HTS magnets. However, due to the strongly anisotropic material and the slanted electric field-current density characteristic of high temperature superconductors (HTS), the optimization is quite different from that of the LTS. In this paper the volumes of solenoidal conduction-cooled Bi-2223/Ag SMES magnets have been optimized at the operation temperature of 20 K. In addition to the electromagnetic constraints, the stress caused by the tape bending has also been taken into account. Several optimization runs with different initial geometries were performed in order to find the best possible solution for a certain energy requirement. The optimization constraints describe the steady-state operation, thus the presented coil geometries are designed for slow ramping rates. Different energy requirements were investigated in order to find the energy dependence of the design parameters of optimized solenoidal HTS coils. According to the results, these dependences can be described with polynomial expressions.
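
    The optimization setup can be sketched crudely as follows: minimize the winding volume of a solenoid subject to a stored-energy requirement using SQP (SLSQP). The long-solenoid energy approximation, current density, and bounds are rough placeholders; the actual study evaluates the fields with FEM and includes critical-current and bending-strain constraints.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Crude sketch of the optimization setup only: minimize winding volume of a
    # solenoid subject to a stored-energy requirement, using SQP (SLSQP). The
    # long-solenoid field formula and all numbers are rough placeholders; the
    # paper uses FEM fields plus critical-current and bending-strain constraints.

    mu0 = 4e-7 * np.pi
    J = 5e7            # assumed engineering current density, A/m^2
    E_req = 5e3        # required stored energy, J

    def energy(x):
        a, h, t = x                      # mean radius, height, winding thickness (m)
        NI = J * t * h                   # total ampere-turns
        B = mu0 * NI / h                 # long-solenoid central field
        return 0.5 * B**2 / mu0 * np.pi * a**2 * h   # ~ field energy stored in the bore

    def volume(x):
        a, h, t = x
        return 2 * np.pi * a * t * h     # conductor (winding) volume

    res = minimize(volume, x0=[0.2, 0.3, 0.01],
                   bounds=[(0.05, 1.0), (0.05, 1.0), (0.002, 0.05)],
                   constraints=[{"type": "ineq", "fun": lambda x: energy(x) - E_req}],
                   method="SLSQP")
    print(res.x, "volume:", volume(res.x), "energy:", energy(res.x))
    ```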

  2. BOOK REVIEW: Modern Canonical Quantum General Relativity

    NASA Astrophysics Data System (ADS)

    Kiefer, Claus

    2008-06-01

    The open problem of constructing a consistent and experimentally tested quantum theory of the gravitational field has its place at the heart of fundamental physics. The main approaches can be roughly divided into two classes: either one seeks a unified quantum framework of all interactions or one starts with a direct quantization of general relativity. In the first class, string theory (M-theory) is the only known example. In the second class, one can make an additional methodological distinction: while covariant approaches such as path-integral quantization use the four-dimensional metric as an essential ingredient of their formalism, canonical approaches start with a foliation of spacetime into spacelike hypersurfaces in order to arrive at a Hamiltonian formulation. The present book is devoted to one of the canonical approaches—loop quantum gravity. It is named modern canonical quantum general relativity by the author because it uses connections and holonomies as central variables, which are analogous to the variables used in Yang Mills theories. In fact, the canonically conjugate variables are a holonomy of a connection and the flux of a non-Abelian electric field. This has to be contrasted with the older geometrodynamical approach in which the metric of three-dimensional space and the second fundamental form are the fundamental entities, an approach which is still actively being pursued. It is the author's ambition to present loop quantum gravity in a way in which every step is formulated in a mathematically rigorous form. In his own words: 'loop quantum gravity is an attempt to construct a mathematically rigorous, background-independent, non-perturbative quantum field theory of Lorentzian general relativity and all known matter in four spacetime dimensions, not more and not less'. The formal Leitmotiv of loop quantum gravity is background independence. Non-gravitational theories are usually quantized on a given non-dynamical background. In contrast, due to the geometrical nature of gravity, no such background exists in quantum gravity. Instead, the notion of a background is supposed to emerge a posteriori as an approximate notion from quantum states of geometry. As a consequence, the standard ultraviolet divergences of quantum field theory do not show up because there is no limit of Δx → 0 to be taken in a given spacetime. On the other hand, it is open whether the theory is free of any type of divergences and anomalies. A central feature of any canonical approach, independent of the choice of variables, is the existence of constraints. In geometrodynamics, these are the Hamiltonian and diffeomorphism constraints. They also hold in loop quantum gravity, but are supplemented there by the Gauss constraint, which emerges due to the use of triads in the formalism. These constraints capture all the physics of the quantum theory because no spacetime is present anymore (analogous to the absence of trajectories in quantum mechanics), so no additional equations of motion are needed. This book presents a careful and comprehensive discussion of these constraints. In particular, the constraint algebra is calculated in a transparent and explicit way. The author makes the important assumption that a Hilbert-space structure is still needed on the fundamental level of quantum gravity. In ordinary quantum theory, such a structure is needed for the probability interpretation, in particular for the conservation of probability with respect to external time. 
It is thus interesting to see how far this concept can be extrapolated into the timeless realm of quantum gravity. On the kinematical level, that is, before the constraints are imposed, an essentially unique Hilbert space can be constructed in terms of spin-network states. Potentially problematic features are the implementation of the diffeomorphism and Hamiltonian constraints. The Hilbert space Hdiff defined on the diffeomorphism subspace can throw states out of the kinematical Hilbert space and is thus not contained in it. Moreover, the Hamiltonian constraint does not seem to preserve Hdiff, so its implementation remains open. To avoid some of these problems, the author proposes his 'master constraint programme' in which the infinitely many local Hamiltonian constraints are combined into one master constraint. This is a subject of his current research. With regard to this situation, it is not surprising that the main results in loop quantum gravity are found on the kinematical level. Especially important features are the discrete spectra of geometric operators such as the area operator. This quantifies the earlier heuristic ideas about a discreteness at the Planck scale. The hope is that these results survive the consistent implementation of all constraints. The status of loop quantum gravity is concisely and competently summarized in this volume, whose author is himself one of the pioneers of this approach. What is the relation of this book to the other monograph on loop quantum gravity, written by Carlo Rovelli and published in 2004 under the title Quantum Gravity with the same publisher? In the words of the present author: 'the two books are complementary in the sense that they can be regarded almost as volume I ('introduction and conceptual framework') and volume II ('mathematical framework and applications') of a general presentation of quantum general relativity in general and loop quantum gravity in particular'. In fact, the present volume gives a complete and self-contained presentation of the required mathematics, especially on the approximately 200 pages of chapters 18-33. As for the physical applications, the main topic is the microscopic derivation of the black-hole entropy. This is presented in a clear and detailed form. Employing the concept of an isolated horizon (a local generalization of an event horizon), the counting of surface states gives an entropy proportional to the horizon area. It also contains the Barbero-Immirzi parameter β, which is a free parameter of the theory. Demanding, on the other hand, that the entropy be equal to the Bekenstein-Hawking entropy would fix this parameter. Other applications such as loop quantum cosmology are only briefly touched upon. Since loop quantum gravity is a very active field of research, the author warns that the present book can at best be seen as a snapshot. Part of the overall picture may thus in the future be subject to modifications. For example, recent work by the author using a concept of dust time is not yet covered here. Nevertheless, I expect that this volume will continue to serve as a valuable introduction and reference book. It is essential reading for everyone working on loop quantum gravity.

  3. Optimization of structures to satisfy a flutter velocity constraint by use of quadratic equation fitting. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Motiwalla, S. K.

    1973-01-01

    Using the first and second derivatives of flutter velocity with respect to the design parameters, the velocity hypersurface is approximated as quadratic. This greatly simplifies the numerical procedure developed for determining the values of the design parameters such that a specified flutter velocity constraint is satisfied and the total structural mass is near a relative minimum. A search procedure is presented utilizing two gradient search methods and a gradient projection method. The procedure is applied to the design of a box beam, using a finite-element representation. The results indicate that the procedure developed yields substantial design improvement satisfying the specified constraint and converges to near a local optimum.
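
    The surrogate idea can be illustrated with a toy example: sample a (here synthetic) flutter-velocity function over two design parameters, fit a quadratic model by least squares, and then minimize a simple mass proxy subject to the quadratic flutter constraint. None of the numbers correspond to the box-beam model of the thesis.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy sketch of the surrogate idea: build a quadratic model of flutter velocity
    # in two design parameters from sampled values, then minimize structural mass
    # subject to the surrogate flutter-velocity constraint. The "true" flutter
    # function, mass model, and limits below are synthetic placeholders.

    def flutter_true(x):                       # stand-in for an aeroelastic analysis
        t1, t2 = x
        return 200.0 + 80.0 * t1 + 50.0 * t2 - 15.0 * t1 * t2 - 10.0 * t1**2

    def features(x):
        t1, t2 = x
        return np.array([1.0, t1, t2, t1**2, t1 * t2, t2**2])

    # Sample the design space and fit the quadratic surrogate by least squares.
    samples = np.array([[a, b] for a in np.linspace(0.5, 2.0, 5)
                                for b in np.linspace(0.5, 2.0, 5)])
    A = np.array([features(s) for s in samples])
    coef, *_ = np.linalg.lstsq(A, np.array([flutter_true(s) for s in samples]), rcond=None)

    flutter_quad = lambda x: features(x) @ coef
    mass = lambda x: 3.0 * x[0] + 2.0 * x[1]          # toy mass proportional to sizing
    V_req = 320.0

    res = minimize(mass, x0=[1.5, 1.5], bounds=[(0.5, 2.0), (0.5, 2.0)],
                   constraints=[{"type": "ineq", "fun": lambda x: flutter_quad(x) - V_req}])
    print(res.x, "mass:", mass(res.x), "V_flutter:", flutter_quad(res.x))
    ```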

  4. Likelihood analysis of the pMSSM11 in light of LHC 13-TeV data

    NASA Astrophysics Data System (ADS)

    Bagnaschi, E.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Citron, M.; Costa, J. C.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Lucio, M.; Martínez Santos, D.; Olive, K. A.; Richards, A.; Spanos, V. C.; Suárez Fernández, I.; Weiglein, G.

    2018-03-01

    We use MasterCode to perform a frequentist analysis of the constraints on a phenomenological MSSM model with 11 parameters, the pMSSM11, including constraints from ~36/fb of LHC data at 13 TeV and PICO, XENON1T and PandaX-II searches for dark matter scattering, as well as previous accelerator and astrophysical measurements, presenting fits both with and without the (g-2)_μ constraint. The pMSSM11 is specified by the following parameters: 3 gaugino masses M_{1,2,3}, a common mass for the first- and second-generation squarks m_q̃ and a distinct third-generation squark mass m_q̃_3, a common mass for the first- and second-generation sleptons m_ℓ̃ and a distinct third-generation slepton mass m_τ̃, a common trilinear mixing parameter A, the Higgs mixing parameter μ, the pseudoscalar Higgs mass M_A, and tan β. In the fit including (g-2)_μ, a Bino-like χ̃_1^0 is preferred, whereas a Higgsino-like χ̃_1^0 is mildly favoured when the (g-2)_μ constraint is dropped. We identify the mechanisms that operate in different regions of the pMSSM11 parameter space to bring the relic density of the lightest neutralino, χ̃_1^0, into the range indicated by cosmological data. In the fit including (g-2)_μ, coannihilations with χ̃_2^0 and the Wino-like χ̃_1^± or with nearly-degenerate first- and second-generation sleptons are active, whereas coannihilations with the χ̃_2^0 and the Higgsino-like χ̃_1^± or with first- and second-generation squarks may be important when the (g-2)_μ constraint is dropped. In the two cases, we present χ^2 functions in two-dimensional mass planes as well as their one-dimensional profile projections and best-fit spectra. Prospects remain for discovering strongly-interacting sparticles at the LHC, in both the scenarios with and without the (g-2)_μ constraint, as well as for discovering electroweakly-interacting sparticles at a future linear e^+e^- collider such as the ILC or CLIC.

  5. Robust point matching via vector field consensus.

    PubMed

    Jiayi Ma; Ji Zhao; Jinwen Tian; Yuille, Alan L; Zhuowen Tu

    2014-04-01

    In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden/latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm, which, by also estimating the variance of the prior model (initialized to a large value), is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90%). We also show that, in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint), we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.
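
    A compact sketch of the EM iteration described above is given below: the vector field is interpolated with a Gaussian kernel, the E-step updates posterior inlier probabilities, and the M-step solves a Tikhonov-regularized linear system for the field coefficients and updates the noise variance and inlier fraction. This is a simplified illustration rather than the authors' released implementation, and the kernel width, regularization weight, and outlier model are typical guesses.

    ```python
    import numpy as np

    # Simplified sketch of the EM iteration: interpolate a vector field f(x) = K @ C
    # with a Gaussian kernel, alternately estimating inlier probabilities (E-step)
    # and the field coefficients, noise variance and inlier fraction (M-step).
    # This is a compact illustration, not the released vector field consensus code;
    # all parameter values are typical guesses.

    def gaussian_kernel(X, beta=0.1):
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-beta * d2)

    def vfc(X, Y, lam=3.0, gamma=0.9, outlier_vol=10.0, n_iter=50):
        N, D = X.shape
        V = Y - X                               # putative displacement vectors
        K = gaussian_kernel(X)
        C = np.zeros((N, D))
        sigma2 = (V ** 2).sum() / (N * D)
        for _ in range(n_iter):
            # E-step: posterior probability that each correspondence is an inlier.
            r2 = ((V - K @ C) ** 2).sum(1)
            p_in = gamma * np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2) ** (D / 2)
            p_out = (1 - gamma) / outlier_vol
            P = p_in / (p_in + p_out)
            # M-step: Tikhonov-regularized solve for C, then update sigma2, gamma.
            C = np.linalg.solve(np.diag(P) @ K + lam * sigma2 * np.eye(N), P[:, None] * V)
            sigma2 = (P * ((V - K @ C) ** 2).sum(1)).sum() / (D * P.sum())
            gamma = P.mean()
        return P, K @ C                          # inlier probabilities, fitted field

    # Toy usage: a smooth warp plus 40% random outliers.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(200, 2))
    Y = X + 0.05 * np.sin(2 * np.pi * X)         # smooth displacement field
    out = rng.random(200) < 0.4
    Y[out] = rng.uniform(size=(out.sum(), 2))    # replace with random false matches
    P, field = vfc(X, Y)
    print("mean inlier prob (true inliers):", P[~out].mean(), "(outliers):", P[out].mean())
    ```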

  6. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix and therefore the residue matrices are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such types of systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to either be undamped or proportional damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish modal model that satisfies such motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered as a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  7. Inflationary magnetogenesis with added helicity: constraints from non-Gaussianities

    NASA Astrophysics Data System (ADS)

    Caprini, Chiara; Chiara Guzzetti, Maria; Sorbo, Lorenzo

    2018-06-01

    In previous work (Caprini and Sorbo 2014 J. Cosmol. Astropart. Phys. JCAP10(2014)056), two of us have proposed a model of inflationary magnetogenesis based on a rolling auxiliary field able both to account for the magnetic fields inferred by the (non) observation of gamma-rays from blazars, and to start the galactic dynamo, without incurring in any strong coupling or strong backreaction regime. Here we evaluate the correction to the scalar spectrum and bispectrum with respect to single-field slow-roll inflation generated in that scenario. The strongest constraints on the model originate from the non-observation of a scalar bispectrum. Nevertheless, even when those constraints are taken into consideration, the scenario can successfully account for the observed magnetic fields as long as the energy scale of inflation is smaller than GeV, under some conditions on the slow roll of the auxiliary scalar field.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, P

    The finite element method was used to analyze the three-point bend experimental data of A533B-1 pressure vessel steel obtained by Sherry, Lidbury, and Beardsmore [1] from -160 to -45 °C within the ductile-brittle transition regime. As many researchers have shown, the failure stress (σ_f) of the material could be approximated as a constant. The characteristic length, or the critical distance (r_c) from the crack tip, at which σ_f is reached, is shown to be temperature dependent based on the crack tip stress field calculated by the finite element method. With the J-A_2 two-parameter constraint theory in fracture mechanics, the fracture toughness (J_C or K_JC) can be expressed as a function of the constraint level (A_2) and the critical distance r_c. This relationship is used to predict the fracture toughness of A533B-1 in the ductile-brittle transition regime with a constant σ_f and a set of temperature-dependent r_c. It can be shown that the prediction agrees well with the test data for a wide range of constraint levels, from shallow cracks (a/W = 0.075) to deep cracks (a/W = 0.5), where a is the crack length and W is the specimen width.

  9. Optical fringe-reflection deflectometry with bundle adjustment

    NASA Astrophysics Data System (ADS)

    Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng

    2018-06-01

    Liquid crystal display (LCD) screens are located outside of a camera's field of view in fringe-reflection deflectometry. Therefore, fringes that are displayed on LCD screens are obtained through specular reflection by a fixed camera. Thus, the pose calibration between the camera and LCD screen is one of the main challenges in fringe-reflection deflectometry. A markerless planar mirror is used to reflect the LCD screen more than three times, and the fringes are mapped into the fixed camera. The geometrical calibration can be accomplished by estimating the pose between the camera and the virtual image of fringes. Considering the relation between their pose, the incidence and reflection rays can be unified in the camera frame, and a forward triangulation intersection can be operated in the camera frame to measure three-dimensional (3D) coordinates of the specular surface. In the final optimization, constraint-bundle adjustment is operated to refine simultaneously the camera intrinsic parameters, including distortion coefficients, estimated geometrical pose between the LCD screen and camera, and 3D coordinates of the specular surface, with the help of the absolute phase collinear constraint. Simulation and experiment results demonstrate that the pose calibration with planar mirror reflection is simple and feasible, and the constraint-bundle adjustment can enhance the 3D coordinate measurement accuracy in fringe-reflection deflectometry.
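
    The forward triangulation intersection mentioned above amounts to finding the point closest, in a least-squares sense, to a set of rays expressed in the camera frame; a minimal sketch follows. The ray origins and directions are illustrative, not measured incidence and reflection rays from a deflectometry setup.

    ```python
    import numpy as np

    # Sketch of the forward triangulation intersection step: the least-squares
    # point closest to a set of rays (origins o_i, unit directions d_i) expressed
    # in a common camera frame. The rays below are illustrative, not measured
    # incidence/reflection rays from a deflectometry setup.

    def triangulate(origins, directions):
        """Least-squares 3D intersection point of several rays."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = d / np.linalg.norm(d)
            M = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
            A += M
            b += M @ o
        return np.linalg.solve(A, b)

    # Two rays that (nearly) meet at the surface point (0.1, 0.2, 1.0).
    target = np.array([0.1, 0.2, 1.0])
    o1, o2 = np.zeros(3), np.array([0.5, 0.0, 0.0])
    p = triangulate([o1, o2], [target - o1, target - o2 + 1e-3])
    print(p)
    ```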

  10. Constraining Galactic cosmic-ray parameters with Z ≤ 2 nuclei

    NASA Astrophysics Data System (ADS)

    Coste, B.; Derome, L.; Maurin, D.; Putze, A.

    2012-03-01

    Context. The secondary-to-primary B/C ratio is widely used for studying Galactic cosmic-ray propagation processes. The 2H/4He and 3He/4He ratios probe a different Z/A regime, which provides a test for the "universality" of propagation. Aims: We revisit the constraints on diffusion-model parameters set by the quartet (1H, 2H, 3He, 4He), using the most recent data as well as updated formulae for the inelastic and production cross-sections. Methods: Our analysis relies on the USINE propagation package and a Markov Chain Monte Carlo technique to estimate the probability density functions of the parameters. Simulated data were also used to validate analysis strategies. Results: The fragmentation of CNO cosmic rays (resp. NeMgSiFe) on the interstellar medium during their propagation contributes to 20% (resp. 20%) of the 2H and 15% (resp. 10%) of the 3He flux at high energy. The C to Fe elements are also responsible for up to 10% of the 4He flux measured at 1 GeV/n. The analysis of 3He/4He (and to a lesser extent 2H/4He) data shows that the transport parameters are consistent with those from the B/C analysis: the diffusion model with δ ~ 0.7 (diffusion slope), Vc ~ 20 km s-1 (galactic wind), Va ~ 40 km s-1 (reacceleration) is favoured, but the combination δ ~ 0.2, Vc ~ 0, and Va ~ 80 km s-1 is a close second. The confidence intervals on the parameters show that the constraints set by the quartet data can compete with those derived from the B/C data. These constraints are tighter when adding the 3He (or 2H) flux measurements, and the tightest when the He flux is added as well. For the latter, the analysis of simulated and real data shows an increased sensitivity to biases. Using the secondary-to-primary ratio along with a loose prior on the source parameters is recommended to obtain the most robust constraints on the transport parameters. Conclusions: Light nuclei should be systematically considered in the analysis of transport parameters. They provide independent constraints that can compete with those obtained from the B/C analysis.

  11. About some types of constraints in problems of routing

    NASA Astrophysics Data System (ADS)

    Petunin, A. A.; Polishuk, E. G.; Chentsov, A. G.; Chentsov, P. A.; Ukolov, S. S.

    2016-12-01

    Many routing problems arising in different applications can be interpreted as discrete optimization problems with additional constraints. The latter include the generalized travelling salesman problem (GTSP), to which the task of tool routing for CNC thermal cutting machines is sometimes reduced. Technological requirements related to the thermal field distribution during the cutting process are of great importance when developing algorithms for solving this task. These requirements give rise to some specific constraints for the GTSP. This paper provides a mathematical formulation for the problem of calculating thermal fields during metal sheet thermal cutting. A corresponding algorithm and its software implementation are considered. A mathematical model that allows such constraints to be taken into account in other routing problems is also discussed.

  12. Primordial Black Holes from Supersymmetry in the Early Universe.

    PubMed

    Cotner, Eric; Kusenko, Alexander

    2017-07-21

    Supersymmetric extensions of the standard model generically predict that in the early Universe a scalar condensate can form and fragment into Q balls before decaying. If the Q balls dominate the energy density for some period of time, the relatively large fluctuations in their number density can lead to formation of primordial black holes (PBH). Other scalar fields, unrelated to supersymmetry, can play a similar role. For a general charged scalar field, this robust mechanism can generate black holes over the entire mass range allowed by observational constraints, with a sufficient abundance to account for all dark matter in some parameter ranges. In the case of supersymmetry the mass range is limited from above by 10^{23}  g. We also comment on the role that topological defects can play for PBH formation in a similar fashion.

  13. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
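
    The gist of the full-covariance treatment can be sketched with a toy linear problem: ordinary least squares (implicitly assuming white noise) is contrasted with generalized least squares in which the data and design matrix are whitened by the Cholesky factor of a full, colored-noise covariance. The AR(1)-style covariance and design matrix are synthetic stand-ins, not the GRACE range-rate estimation system.

    ```python
    import numpy as np

    # Toy contrast between ordinary least squares (diagonal, white-noise weighting)
    # and generalized least squares with a full observation covariance handled by
    # Cholesky whitening. The design matrix, AR(1)-style covariance, and noise are
    # synthetic stand-ins, not the GRACE range-rate estimation system.

    rng = np.random.default_rng(2)
    n, p = 300, 5
    G = rng.normal(size=(n, p))
    x_true = rng.normal(size=p)

    # Colored (long-period) noise: AR(1)-like covariance with correlation rho.
    rho, sig = 0.95, 1.0
    C = sig**2 * rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    noise = np.linalg.cholesky(C) @ rng.normal(size=n)
    d = G @ x_true + noise

    # Ordinary LS (implicitly assumes white noise).
    x_ols = np.linalg.lstsq(G, d, rcond=None)[0]

    # Generalized LS: whiten with L^-1 where C = L L^T, then solve.
    L = np.linalg.cholesky(C)
    Gw = np.linalg.solve(L, G)
    dw = np.linalg.solve(L, d)
    x_gls = np.linalg.lstsq(Gw, dw, rcond=None)[0]

    # Formal covariance of the GLS estimate: the quantity whose realism improves
    # when the observation errors are modeled fully.
    cov_gls = np.linalg.inv(Gw.T @ Gw)
    print("OLS error:", np.linalg.norm(x_ols - x_true))
    print("GLS error:", np.linalg.norm(x_gls - x_true))
    ```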

  14. Chameleon field dynamics during inflation

    NASA Astrophysics Data System (ADS)

    Saba, Nasim; Farhoudi, Mehrdad

    By studying the chameleon model during inflation, we investigate whether it can be a successful inflationary model, employing the typical potential commonly used in the literature. In the context of the slow-roll approximations, we obtain the e-folding number for the model to verify its ability to resolve the problems of standard big bang cosmology. Meanwhile, we apply constraints on the form of the chosen potential and also on the equation of state parameter coupled to the scalar field. The results of the present analysis show, however, that there is little prospect of chameleonic inflation. Hence, we suggest that if, through some mechanism, the chameleon model can be reduced to the standard inflationary model, then it may cover the whole history of the universe from inflation up to late times.

  15. Route constraints model based on polychromatic sets

    NASA Astrophysics Data System (ADS)

    Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu

    2018-03-01

    With the development of unmanned aerial vehicle (UAV) technology, its fields of application are constantly expanding. Mission planning for UAVs is especially important, and the planning result directly determines whether the UAV can accomplish its task. In order to make UAV mission planning results more realistic, it is necessary to consider not only the physical properties of the aircraft, but also the constraints among the various equipment on the UAV. However, these constraints are complex, and the equipment exhibits strong diversity and variability, which makes the constraints difficult to describe. To address this problem, this paper draws on polychromatic sets theory, used in the advanced manufacturing field to describe complex systems, and presents a mission constraint model for UAVs based on polychromatic sets.

  16. Resurrecting the Power-law, Intermediate, and Logamediate Inflations in the DBI Scenario with Constant Sound Speed

    NASA Astrophysics Data System (ADS)

    Amani, Roonak; Rezazadeh, Kazem; Abdolmaleki, Asrin; Karami, Kayoomars

    2018-02-01

    We investigate the power-law, intermediate, and logamediate inflationary models in the framework of DBI non-canonical scalar field with constant sound speed. In the DBI setting, we first represent the power spectrum of both scalar density and tensor gravitational perturbations. Then, we derive different inflationary observables including the scalar spectral index n_s, the running of the scalar spectral index dn_s/d ln k, and the tensor-to-scalar ratio r. We show that the 95% CL constraint of the Planck 2015 T + E data on the non-Gaussianity parameter f_NL^DBI leads to the sound speed bound c_s ≥ 0.087 in the DBI inflation. Moreover, our results imply that, although the predictions of the power-law, intermediate, and logamediate inflations in the standard canonical framework (c_s = 1) are not consistent with the Planck 2015 data, in the DBI scenario with constant sound speed c_s < 1, the result of the r-n_s diagram for these models can lie inside the 68% CL region favored by Planck 2015 TT,TE,EE+lowP data. We also specify the parameter space of the power-law, intermediate, and logamediate inflations for which our models are compatible with the 68% or 95% CL regions of the Planck 2015 TT,TE,EE+lowP data. Using the allowed ranges of the parameter space of the intermediate and logamediate inflationary models, we estimate the running of the scalar spectral index and find that it is compatible with the 95% CL constraint from the Planck 2015 TT,TE,EE+lowP data.

  17. Report on the B-Fields at NIF Workshop Held at LLNL October 12-13, 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, K. B.; Moody, J. D.

    2015-12-13

    A national ICF laboratory workshop on requirements for a magnetized target capability on NIF was held by NIF at LLNL on October 12 and 13, attended by experts from LLNL, SNL, LLE, LANL, GA, and NRL. Advocates for indirect drive (LLNL), magnetic (Z) drive (SNL), polar direct drive (LLE), and basic science needing applied B (many institutions) presented and discussed requirements for the magnetized target capabilities they would like to see. 30 T capability was most frequently requested. A phased operation increasing the field in steps experimentally can be envisioned. The NIF management will take the inputs from the scientific community represented at the workshop and recommend pulse-powered magnet parameters for NIF that best meet the collective user requests. In parallel, LLNL will continue investigating magnets for future generations that might be powered by compact laser-B-field generators (Moody, Fujioka, Santos, Woolsey, Pollock). The NIF facility engineers will start to analyze compatibility of the recommended pulsed magnet parameters (size, field, rise time, materials) with NIF chamber constraints, diagnostic access, and final optics protection against debris in FY16. The objective of this assessment will be to develop a schedule for achieving an initial B-field capability. Based on an initial assessment, room temperature magnetized gas capsules will be fielded on NIF first. Magnetized cryo-ice-layered targets will take longer (more compatibility issues). Magnetized wetted foam DT targets (Olson) may have somewhat fewer compatibility issues, making them a more likely choice for the first cryo-ice-layered target fielded with applied Bz.

  18. A method to design blended rolled edges for compact range reflectors

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Burnside, Walter D.

    1989-01-01

    A method to design blended rolled edges for arbitrary rim shape compact range reflectors is presented. The reflectors may be center-fed or offset-fed. The method leads to rolled edges with minimal surface discontinuities. It is shown that the reflectors designed using the prescribed method can be defined analytically using simple expressions. A procedure to obtain optimum rolled edge parameters is also presented. The procedure leads to blended rolled edges that minimize the diffracted fields emanating from the junction between the paraboloid and the rolled edge surface while satisfying certain constraints regarding the reflector size and the minimum operating frequency of the system.

  19. A method to design blended rolled edges for compact range reflectors

    NASA Technical Reports Server (NTRS)

    Gupta, Inder J.; Ericksen, Kurt P.; Burnside, Walter D.

    1990-01-01

    A method to design blended rolled edges for arbitrary rim shape compact range reflectors is presented. The reflectors may be center-fed or offset-fed. The method leads to rolled edges with minimal surface discontinuities. It is shown that the reflectors designed using the prescribed method can be defined analytically using simple expressions. A procedure to obtain optimum rolled edges parameters is also presented. The procedure leads to blended rolled edges that minimize the diffracted fields emanating from the junction between the paraboloid and the rolled edge surface while satisfying certain constraints regarding the reflector size and the minimum operating frequency of the system.

  20. Hydrogen Burning in Low Mass Stars Constrains Scalar-Tensor Theories of Gravity.

    PubMed

    Sakstein, Jeremy

    2015-11-13

    The most general scalar-tensor theories of gravity predict a weakening of the gravitational force inside astrophysical bodies. There is a minimum mass for hydrogen burning in stars that is set by the interplay of plasma physics and the theory of gravity. We calculate this for alternative theories of gravity and find that it is always significantly larger than the general relativity prediction. The observation of several low mass red dwarf stars therefore rules out a large class of scalar-tensor gravity theories and places strong constraints on the cosmological parameters appearing in the effective field theory of dark energy.

  1. Cosmic shear results from the deep lens survey. II. Full cosmological parameter constraints from tomography

    DOE PAGES

    Jee, M. James; Tyson, J. Anthony; Hilbert, Stefan; ...

    2016-06-15

    Here, we present a tomographic cosmic shear study from the Deep Lens Survey (DLS), which, providing a limiting magnitude r_lim ~ 27 (5σ), is designed as a precursor Large Synoptic Survey Telescope (LSST) survey with an emphasis on depth. Using five tomographic redshift bins, we study their auto- and cross-correlations to constrain cosmological parameters. We use a luminosity-dependent nonlinear model to account for the astrophysical systematics originating from intrinsic alignments of galaxy shapes. We find that the cosmological leverage of the DLS is among the highest among existing >10 deg^2 cosmic shear surveys. Combining the DLS tomography with the 9 yr results of the Wilkinson Microwave Anisotropy Probe (WMAP9) gives Ω_m = 0.293^{+0.012}_{-0.014}, σ_8 = 0.833^{+0.011}_{-0.018}, H_0 = 68.6^{+1.4}_{-1.2} km s^-1 Mpc^-1, and Ω_b = 0.0475 ± 0.0012 for ΛCDM, reducing the uncertainties of the WMAP9-only constraints by ~50%. When we do not assume flatness for ΛCDM, we obtain the curvature constraint Ω_k = -0.010^{+0.013}_{-0.015} from the DLS+WMAP9 combination, which, however, is not well constrained when WMAP9 is used alone. The dark energy equation-of-state parameter w is tightly constrained when baryonic acoustic oscillation (BAO) data are added, yielding w = -1.02^{+0.10}_{-0.09} with the DLS+WMAP9+BAO joint probe. The addition of supernova constraints further tightens the parameter to w = -1.03 ± 0.03. Our joint constraints are fully consistent with the final Planck results and also with the predictions of a ΛCDM universe.

  2. Constraints for transonic black hole accretion

    NASA Technical Reports Server (NTRS)

    Abramowicz, Marek A.; Kato, Shoji

    1989-01-01

    Regularity conditions and global topological constraints leave some forbidden regions in the parameter space of transonic accretion of isothermal, rotating matter onto black holes. Unstable flows occupy regions touching the boundaries of the forbidden regions. The astrophysical consequences of these results are discussed.

  3. Multiobjective constraints for climate model parameter choices: Pragmatic Pareto fronts in CESM1

    NASA Astrophysics Data System (ADS)

    Langenbrunner, B.; Neelin, J. D.

    2017-09-01

    Global climate models (GCMs) are examples of high-dimensional input-output systems, where model output is a function of many variables, and an update in model physics commonly improves performance in one objective function (i.e., measure of model performance) at the expense of degrading another. Here concepts from multiobjective optimization in the engineering literature are used to investigate parameter sensitivity and optimization in the face of such trade-offs. A metamodeling technique called cut high-dimensional model representation (cut-HDMR) is leveraged in the context of multiobjective optimization to improve GCM simulation of the tropical Pacific climate, focusing on seasonal precipitation, column water vapor, and skin temperature. An evolutionary algorithm is used to solve for Pareto fronts, which are surfaces in objective function space along which trade-offs in GCM performance occur. This approach allows the modeler to visualize trade-offs quickly and identify the physics at play. In some cases, Pareto fronts are small, implying that trade-offs are minimal, optimal parameter value choices are more straightforward, and the GCM is well-functioning. In all cases considered here, the control run was found not to be Pareto-optimal (i.e., not on the front), highlighting an opportunity for model improvement through objectively informed parameter selection. Taylor diagrams illustrate that these improvements occur primarily in field magnitude, not spatial correlation, and they show that specific parameter updates can improve fields fundamental to tropical moist processes—namely precipitation and skin temperature—without significantly impacting others. These results provide an example of how basic elements of multiobjective optimization can facilitate pragmatic GCM tuning processes.
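
    A minimal sketch of the Pareto-front idea described above (not the cut-HDMR or evolutionary machinery used in the study): given candidate parameter settings scored on two objective functions to be minimized, it keeps only the non-dominated ones.

```python
def pareto_front(points):
    """Return the non-dominated points for a 2-objective minimization problem.

    points: list of (obj1, obj2) tuples; a point is dominated if another point
    is no worse in both objectives and strictly better in at least one.
    """
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Toy example: each tuple could be (precipitation RMSE, skin-temperature RMSE).
scores = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (1.5, 2.8)]
print(pareto_front(scores))  # [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (1.5, 2.8)]
```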

  4. Retrieving Storm Electric Fields From Aircraft Field Mill Data. Part 2; Applications

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2005-01-01

    The Lagrange multiplier theory and "pitch down method" developed in Part I of this study are applied to complete the calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the method performs well in computer simulations. For mill measurement errors of 1 V/m and a 5 V/m error in the mean fair weather field function, the 3-D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair weather field was also tested using computer simulations. For mill measurement errors of 1 V/m, the method retrieves the 3-D storm field to within an error of about 8% if the fair weather field estimate is typically within 1 V/m of the true fair weather field. Using this side constraint and data from fair weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. The resulting calibration matrix was then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably with the results obtained from earlier calibration analyses that were based on iterative techniques.
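
    The Lagrange-multiplier calibration referenced above is specific to the Citation mill geometry; as a generic illustration of the underlying technique only, the sketch below solves an equality-constrained least-squares problem (minimize ||Ax - b||^2 subject to Cx = d) through the usual KKT linear system. All matrices here are hypothetical stand-ins, not the aircraft calibration matrices.

```python
import numpy as np

def constrained_least_squares(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d via Lagrange multipliers.

    Solves the KKT system
        [ 2 A^T A   C^T ] [x]   [2 A^T b]
        [   C        0  ] [l] = [   d   ]
    """
    n = A.shape[1]
    m = C.shape[0]
    kkt = np.block([[2.0 * A.T @ A, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]  # the constrained estimate (multipliers are sol[n:])

# Hypothetical example: 6 "mill outputs" depending linearly on a 3-component field,
# with one side constraint on the average of the components.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.01 * rng.normal(size=6)
C = np.array([[1.0, 1.0, 1.0]]) / 3.0   # constrain the mean of the components
d = np.array([x_true.mean()])
print(constrained_least_squares(A, b, C, d))
```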

  5. Solar System constraints on massless scalar-tensor gravity with positive coupling constant upon cosmological evolution of the scalar field

    NASA Astrophysics Data System (ADS)

    Anderson, David; Yunes, Nicolás

    2017-09-01

    Scalar-tensor theories of gravity modify general relativity by introducing a scalar field that couples nonminimally to the metric tensor, while satisfying the weak-equivalence principle. These theories are interesting because they have the potential to simultaneously suppress modifications to Einstein's theory on Solar System scales, while introducing large deviations in the strong field of neutron stars. Scalar-tensor theories can be classified through the choice of conformal factor, a scalar that regulates the coupling between matter and the metric in the Einstein frame. The class defined by a Gaussian conformal factor with a negative exponent has been studied the most because it leads to spontaneous scalarization (i.e. the sudden activation of the scalar field in neutron stars), which consequently leads to large deviations from general relativity in the strong field. This class, however, has recently been shown to be in conflict with Solar System observations when accounting for the cosmological evolution of the scalar field. We here study whether this remains the case when the exponent of the conformal factor is positive, as well as in another class of theories defined by a hyperbolic conformal factor. We find that in both of these scalar-tensor theories, Solar System tests are passed only in a very small subset of coupling parameter space, for a large set of initial conditions compatible with big bang nucleosynthesis. However, while we find that it is possible for neutron stars to scalarize, one must carefully select the coupling parameter to do so, and even then, the scalar charge is typically 2 orders of magnitude smaller than in the negative-exponent case. Our study suggests that future work on scalar-tensor gravity, for example in the context of tests of general relativity with gravitational waves from neutron star binaries, should be carried out within the positive coupling parameter class.

  6. Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.

    PubMed

    Capozziello, S; Lambiase, G; Saridakis, E N

    2017-01-01

    We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.

  7. Constraints on brane-world inflation from the CMB power spectrum: revisited

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, Mayukh R.; Mathews, Grant J.

    2018-03-01

    We analyze the Randall-Sundrum brane-world inflation scenario in the context of the latest CMB constraints from Planck. We summarize constraints on the most popular classes of models and explore some more realistic inflaton effective potentials. The constraints on standard inflationary parameters change in the brane-world scenario. We confirm that in general the brane-world scenario increases the tensor-to-scalar ratio, thus making this paradigm less consistent with the Planck constraints. Indeed, when BICEP2/Keck constraints are included, all monomial potentials in the brane-world scenario become disfavored compared to the standard scenario. However, for natural inflation the brane-world scenario could fit the constraints better due to larger allowed values of e-foldings N before the end of inflation in the brane-world.

  8. Cosmological parameter fittings with the BICEP2 data

    NASA Astrophysics Data System (ADS)

    Wu, FengQuan; Li, YiChao; Lu, YouJun; Chen, XueLei

    2014-08-01

    Combining the latest Planck, Wilkinson Microwave Anisotropy Probe (WMAP), and baryon acoustic oscillation (BAO) data, we exploit the recent cosmic microwave background (CMB) B-mode power spectra data released by the BICEP2 collaboration to constrain the cosmological parameters of the LCDM model, especially the primordial power spectrum parameters of the scalar and the tensor modes, n_s, α_s, r, n_t. We obtain constraints on the parameters for a lensed LCDM model using the Markov Chain Monte Carlo (MCMC) technique; the marginalized 68% bounds are r = 0.1043^{+0.0307}_{-0.0914}, n_s = 0.9617 ± 0.0061, α_s = -0.0175^{+0.0105}_{-0.0097}, n_t = 0.5198^{+0.4515}_{-0.4579}. We find that a blue tilt for n_t is favored slightly, but it is still well consistent with a flat or even red tilt. Our r value is slightly smaller than the one obtained by the BICEP group, in that we permit n_t as a free parameter without imposing the single-field slow-roll inflation consistency relation. When we impose this relation, then r = 0.2130^{+0.0446}_{-0.0609}. For most other parameters, the best fit values and measurement errors are not altered significantly by the introduction of the BICEP2 data.
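
    A minimal Metropolis-Hastings sketch of the MCMC parameter fitting mentioned above, applied to a toy Gaussian likelihood rather than the actual CMB/BAO likelihoods; the parameter names, data values, and step size are placeholders.

```python
import numpy as np

def metropolis(log_like, theta0, step, n_steps, rng):
    """Minimal random-walk Metropolis sampler (flat priors assumed)."""
    chain = [np.asarray(theta0, dtype=float)]
    ll = log_like(chain[-1])
    for _ in range(n_steps):
        prop = chain[-1] + step * rng.normal(size=len(theta0))
        ll_prop = log_like(prop)
        if np.log(rng.uniform()) < ll_prop - ll:   # accept/reject step
            chain.append(prop)
            ll = ll_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Toy "data": pretend we constrain two parameters (e.g. r and n_s) with
# independent Gaussian measurements. Values are illustrative only.
rng = np.random.default_rng(1)
data_mean = np.array([0.1, 0.96])
data_sigma = np.array([0.03, 0.006])
log_like = lambda th: -0.5 * np.sum(((th - data_mean) / data_sigma) ** 2)

chain = metropolis(log_like, theta0=[0.2, 0.95], step=0.01, n_steps=20000, rng=rng)
burned = chain[5000:]
print(burned.mean(axis=0), burned.std(axis=0))   # marginalized means and spreads
```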

  9. A loop-gap resonator for chirality-sensitive nuclear magneto-electric resonance (NMER)

    NASA Astrophysics Data System (ADS)

    Garbacz, Piotr; Fischer, Peer; Krämer, Steffen

    2016-09-01

    Direct detection of molecular chirality is practically impossible by methods of standard nuclear magnetic resonance (NMR), which are based on interactions involving magnetic-dipole and magnetic-field operators. However, theoretical studies provide a possible direct probe of chirality by exploiting an enantiomer-selective additional coupling involving magnetic-dipole, magnetic-field, and electric-field operators. This offers a way for direct experimental detection of chirality by nuclear magneto-electric resonance (NMER). This method uses both resonant magnetic and electric radiofrequency (RF) fields. The weakness of the chiral interaction, though, requires a large electric RF field and a small transverse RF magnetic field over the sample volume, which is a non-trivial constraint. In this study, we present a detailed study of the NMER concept and a possible experimental realization based on a loop-gap resonator. For this original device, the basic principle and numerical studies as well as fabrication and measurements of the frequency dependence of the scattering parameter are reported. By simulating the NMER spin dynamics for our device and taking the 19F NMER signal of enantiomer-pure 1,1,1-trifluoropropan-2-ol, we predict a chirality-induced NMER signal that accounts for 1%-5% of the standard achiral NMR signal.

  10. High scale flavor alignment in two-Higgs doublet models and its phenomenology

    DOE PAGES

    Gori, Stefania; Haber, Howard E.; Santos, Edward

    2017-06-21

    The most general two-Higgs doublet model (2HDM) includes potentially large sources of flavor changing neutral currents (FCNCs) that must be suppressed in order to achieve a phenomenologically viable model. The flavor alignment ansatz postulates that all Yukawa coupling matrices are diagonal when expressed in the basis of mass-eigenstate fermion fields, in which case tree-level Higgs-mediated FCNCs are eliminated. In this work, we explore models with the flavor alignment condition imposed at a very high energy scale, which results in the generation of Higgs-mediated FCNCs via renormalization group running from the high energy scale to the electroweak scale. Using the current experimental bounds on flavor changing observables, constraints are derived on the aligned 2HDM parameter space. In the favored parameter region, we analyze the implications for Higgs boson phenomenology.

  11. Particle Flow Calorimetry for the ILC

    NASA Astrophysics Data System (ADS)

    Magill, Stephen

    2006-04-01

    The Particle Flow approach to detector design is seen as the best way to achieve dijet mass resolutions suitable for the precision measurements anticipated at a future e^+e^- Linear Collider (LC). Particle Flow Algorithms (PFAs) affect not only the way data is analyzed, but are necessary and crucial elements used even in initial stages of detector design. In particular, the Calorimeter design parameters are almost entirely dependent on the optimized performance of the PFA. Use of PFAs imposes constraints on the granularity and segmentation of the readout cells, the choices of absorber and active media, and overall detector parameters such as the strength of the B-field, magnet bore, hermeticity, etc. PFAs must be flexible and modular in order to evaluate many detector models in simulation. The influence of PFA development on calorimetry is presented here with particular emphasis on results from the use of PFAs on several LC detector models.

  12. Imaging shear strength along subduction faults

    USGS Publications Warehouse

    Bletery, Quentin; Thomas, Amanda M.; Rempel, Alan W.; Hardebeck, Jeanne L.

    2017-01-01

    Subduction faults accumulate stress during long periods of time and release this stress suddenly, during earthquakes, when it reaches a threshold. This threshold, the shear strength, controls the occurrence and magnitude of earthquakes. We consider a 3-D model to derive an analytical expression for how the shear strength depends on the fault geometry, the convergence obliquity, frictional properties, and the stress field orientation. We then use estimates of these different parameters in Japan to infer the distribution of shear strength along a subduction fault. We show that the 2011 Mw9.0 Tohoku earthquake ruptured a fault portion characterized by unusually small variations in static shear strength. This observation is consistent with the hypothesis that large earthquakes preferentially rupture regions with relatively homogeneous shear strength. With increasing constraints on the different parameters at play, our approach could, in the future, help identify favorable locations for large earthquakes.

  13. Distributed parameter statics of magnetic catheters.

    PubMed

    Tunay, Ilker

    2011-01-01

    We discuss how to use special Cosserat rod theory for deriving distributed-parameter static equilibrium equations of magnetic catheters. These medical devices are used for minimally-invasive diagnostic and therapeutic procedures and can be operated remotely or controlled by automated algorithms. The magnetic material can be lumped in rigid segments or distributed in flexible segments. The position vector of the cross-section centroid and quaternion representation of an orthonormal triad are selected as DOF. The strain energy for transversely isotropic, hyperelastic rods is augmented with the mechanical potential energy of the magnetic field and a penalty term to enforce the quaternion unity constraint. Numerical solution is found by 1D finite elements. Material properties of polymer tubes in extension, bending and twist are determined by mechanical and magnetic experiments. Software experiments with commercial FEM software indicate that the computational effort with the proposed method is at least one order of magnitude less than standard 3D FEM.
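
    As a small illustration of the penalty approach mentioned above (not the actual finite-element formulation of the paper), the sketch below adds a quadratic penalty that drives each nodal quaternion toward unit norm; the weight value is a made-up example.

```python
import numpy as np

def quaternion_unity_penalty(quats, weight=1.0e3):
    """Penalty energy enforcing |q| = 1 at each node: weight * (|q|^2 - 1)^2.

    quats: array of shape (n_nodes, 4). A large weight approximately enforces
    the unit-quaternion constraint when the total energy is minimized.
    """
    norm2 = np.sum(quats ** 2, axis=1)
    return weight * np.sum((norm2 - 1.0) ** 2)

def total_energy(quats, elastic_energy, weight=1.0e3):
    """Augmented energy: elastic (and magnetic) strain energy plus the penalty."""
    return elastic_energy(quats) + quaternion_unity_penalty(quats, weight)

# Example: two nodes, one slightly off the unit sphere.
q = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.9, 0.1, 0.0, 0.0]])
print(quaternion_unity_penalty(q))   # nonzero because the second node violates |q| = 1
```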

  14. Compressed Sensing in On-Grid MIMO Radar.

    PubMed

    Minner, Michael F

    2015-01-01

    The accurate detection of targets is a significant problem in multiple-input multiple-output (MIMO) radar. Recent advances of Compressive Sensing offer a means of efficiently accomplishing this task. The sparsity constraints needed to apply the techniques of Compressive Sensing to problems in radar systems have led to discretizations of the target scene in various domains, such as azimuth, time delay, and Doppler. Building upon recent work, we investigate the feasibility of on-grid Compressive Sensing-based MIMO radar via a threefold azimuth-delay-Doppler discretization for target detection and parameter estimation. We utilize a colocated random sensor array and transmit distinct linear chirps to a small scene with few, slowly moving targets. Relying upon standard far-field and narrowband assumptions, we analyze the efficacy of various recovery algorithms in determining the parameters of the scene through numerical simulations, with particular focus on the ℓ1-squared Nonnegative Regularization method.
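
    The record above uses ℓ1-based recovery for an on-grid scene; as a generic, hedged illustration of that class of algorithms (not the specific ℓ1-squared nonnegative regularization method it studies), the sketch below implements plain ISTA for min ||Ax - y||^2 + λ||x||_1 on a random sensing matrix.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=500):
    """Iterative soft-thresholding for sparse recovery: min ||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # spectral norm squared, sets the step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / (2 * L), 0.0)  # soft threshold
    return x

# Toy scene: 3 "targets" on a 100-cell azimuth-delay-Doppler grid, 30 measurements.
rng = np.random.default_rng(2)
A = rng.normal(size=(30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -0.7, 0.5]
y = A @ x_true + 0.01 * rng.normal(size=30)
x_hat = ista(A, y, lam=0.05)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # indices of the recovered targets
```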

  15. Building a Smart Portal for Astronomy

    NASA Astrophysics Data System (ADS)

    Derriere, S.; Boch, T.

    2011-07-01

    The development of a portal for accessing astronomical resources is not an easy task. The ever-increasing complexity of the data products can result in very complex user interfaces, requiring a lot of effort and learning from the user in order to perform searches. This is often a design choice, where the user must explicitly set many constraints, while the portal search logic remains simple. We investigated a different approach, where the query interface is kept as simple as possible (ideally, a simple text field, like for Google search), and the search logic is made much more complex to interpret the query in a relevant manner. We will present the implications of this approach in terms of interpretation and categorization of the query parameters (related to astronomical vocabularies), translation (mapping) of these concepts into the portal components metadata, identification of query schemes and use cases matching the input parameters, and delivery of query results to the user.

  16. Inverting the parameters of an earthquake-ruptured fault with a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Ting-To; Fernàndez, Josè; Rundle, John B.

    1998-03-01

    Natural selection is the spirit of the genetic algorithm (GA): good genes are kept in the current generation, thereby producing better offspring during evolution. The crossover function ensures the heritage of good genes from parent to offspring. Meanwhile, the process of mutation creates a special gene, the character of which does not exist in the parent generation. A program based on genetic algorithms, written in C, is constructed to invert the parameters of an earthquake-ruptured fault. The verification and application of this code are shown to demonstrate its capabilities. It is determined that this code is able to find the global extreme and can be used to solve more practical problems with constraints gathered from other sources. It is shown that the GA is superior to other inversion schemes in many aspects. This easy-to-handle yet powerful algorithm should have many suitable applications in the field of geosciences.
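
    A minimal genetic-algorithm sketch in the spirit of the description above (selection, crossover, mutation), applied to a toy two-parameter misfit rather than a real fault-dislocation model; the population size, rates, and bounds are arbitrary choices.

```python
import numpy as np

def genetic_minimize(misfit, bounds, pop_size=40, n_gen=100, mut_rate=0.2, rng=None):
    """Very small real-coded GA: tournament selection, uniform crossover, Gaussian mutation."""
    rng = rng or np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(n_gen):
        fit = np.array([misfit(ind) for ind in pop])
        new_pop = [pop[np.argmin(fit)]]                      # elitism: keep the best gene
        while len(new_pop) < pop_size:
            i, j = rng.integers(pop_size, size=2)
            p1 = pop[i] if fit[i] < fit[j] else pop[j]       # tournament selection
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if fit[i] < fit[j] else pop[j]
            mask = rng.uniform(size=len(bounds)) < 0.5       # uniform crossover
            child = np.where(mask, p1, p2)
            if rng.uniform() < mut_rate:                     # mutation creates a "special gene"
                child = child + 0.05 * (hi - lo) * rng.normal(size=len(bounds))
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fit = np.array([misfit(ind) for ind in pop])
    return pop[np.argmin(fit)]

# Toy inversion: recover two "fault parameters" (e.g. depth, slip) from a known minimum.
target = np.array([5.0, 1.5])
misfit = lambda m: np.sum((m - target) ** 2)
print(genetic_minimize(misfit, bounds=[(0.0, 10.0), (0.0, 3.0)]))
```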

  17. Equivalence principle in chameleon models

    NASA Astrophysics Data System (ADS)

    Kraiselburd, Lucila; Landau, Susana J.; Salgado, Marcelo; Sudarsky, Daniel; Vucetich, Héctor

    2018-05-01

    Most theories that predict time and/or space variation of fundamental constants also predict violations of the weak equivalence principle (WEP). In 2004 Khoury and Weltman [1] proposed the so-called chameleon field, arguing that it could help avoid experimental bounds on the WEP while having a nontrivial cosmological impact. In this paper we revisit the extent to which these expectations continue to hold as we enter the regime of high precision tests. The basis of the study is the development of a new method for computing the force between two massive bodies induced by the chameleon field which takes into account the influence on the field of both the large and the test bodies. We confirm that in the thin shell regime the force does depend nontrivially on the test body's composition, even when the chameleon coupling constants β_i = β are universal. We also propose a simple criterion based on energy minimization, which we use to determine which of the approximations used in computing the scalar field in a two-body problem is better in each specific regime. As an application of our analysis we then compare the resulting differential acceleration of two test bodies with the corresponding bounds obtained from Eötvös-type experiments. We consider two setups: (1) an Earth-based experiment where the test bodies are made of Be and Al; (2) the Lunar Laser Ranging experiment. We find that for some choices of the free parameters of the chameleon model the predictions of the Eötvös parameter are larger than some of the previous estimates. As a consequence, we put new constraints on these free parameters. Our conclusions strongly suggest that the properties of immunity from experimental tests of the WEP, usually attributed to the chameleon and related models, should be carefully reconsidered. An important result of our analysis is that our approach leads to new constraints on the parameter space of the chameleon models.

  18. Constrained ripple optimization of Tokamak bundle divertors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hively, L.M.; Rome, J.A.; Lynch, V.E.

    1983-02-01

    Magnetic field ripple from a tokamak bundle divertor is localized to a small toroidal sector and must be treated differently from the usual (distributed) toroidal field (TF) coil ripple. Generally, in a tokamak with an unoptimized divertor design, all of the banana-trapped fast ions are quickly lost due to banana drift diffusion or to trapping between the 1/R variation in |B| and local field maxima due to the divertor. A computer code has been written to optimize automatically on-axis ripple subject to these constraints, while varying up to nine design parameters. Optimum configurations have low on-axis ripple (<0.2%) so that, now, most banana-trapped fast ions are confined. Only those ions with banana tips near the outside region (|θ| ≤ 45°) are lost. However, because finite-sized TF coils have not been used in this study, the flux bundle is not expanded.

  19. Reconstructions of the dark-energy equation of state and the inflationary potential

    NASA Astrophysics Data System (ADS)

    Barrow, John D.; Paliathanasis, Andronikos

    2018-07-01

    We use a mathematical approach based on constraint systems in order to reconstruct the equation of state and the inflationary potential for the inflaton field from the observed spectral index of the density perturbations n_s and the tensor-to-scalar ratio r. From the astronomical data, we can observe that the measured values of these two indices lie on a two-dimensional surface. We express these indices in terms of the Hubble slow-roll parameters and we assume that n_s - 1 = h(r). For the function h(r), we consider three cases, where h(r) is constant, linear and quadratic, respectively. From this, we derive second-order equations whose solutions provide us with the explicit forms of the expansion scale factor, the scalar-field potential, and the effective equation of state for the scalar field. Finally, we show that there exist mappings which transform one cosmological solution to another, allowing new solutions to be generated from existing ones.
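
    For orientation, the first-order Hubble slow-roll relations that underlie the ansatz n_s - 1 = h(r) can be written as follows (a standard sketch of the approximations, not the full derivation in the paper):

```latex
% First-order Hubble slow-roll expressions (standard approximations)
\begin{align}
  n_s - 1 &\simeq 2\eta_H - 4\epsilon_H, &
  r &\simeq 16\,\epsilon_H,
\end{align}
% so the ansatz $n_s - 1 = h(r)$ becomes a relation between the slow-roll
% parameters; e.g. for the constant case $h(r) = h_0$:
\begin{equation}
  2\eta_H - 4\epsilon_H \simeq h_0 .
\end{equation}
```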

  20. Computing Intrinsic Images.

    DTIC Science & Technology

    1986-08-01

    ... large subset of real images), and so most of the algorithms fail when applied to real images. (2) Usually the constraints from the geometry and the physics of the problem are not enough to guarantee uniqueness of the computed parameters. In this case, strong ...

  1. Modelling of a Solar Thermal Power Plant for Benchmarking Blackbox Optimization Solvers

    NASA Astrophysics Data System (ADS)

    Lemyre Garneau, Mathieu

    A new family of problems is provided to serve as a benchmark for blackbox optimization solvers. The problems are single- or bi-objective and vary in complexity in terms of the number of variables used (from 5 to 29), the type of variables (integer, real, category), the number of constraints (from 5 to 17) and their types (binary or continuous). In order to provide problems exhibiting dynamics that reflect real engineering challenges, they are extracted from an original numerical model of a concentrated solar power (CSP) plant with molten salt thermal storage. The model simulates the performance of the power plant by using a high-level model of each of its main components, namely a heliostat field, a central cavity receiver, a molten salt heat storage, a steam generator and an idealized power block. The heliostat field layout is determined through a simple automatic strategy that finds the best individual positions on the field by considering their respective cosine efficiency, atmospheric scattering and spillage losses as a function of the design parameters. A Monte Carlo integral method is used to evaluate the heliostat field's optical performance throughout the day so that shadowing effects between heliostats are considered, and the results of this evaluation provide the inputs to simulate the levels and temperatures of the thermal storage. The molten salt storage inventory is used to transfer thermal energy to the power block, which simulates a simple Rankine cycle with a single steam turbine. Auxiliary models are used to provide additional optimization constraints on the investment cost, parasitic losses or component failures. The results of preliminary optimizations performed with the NOMAD software using default settings are provided to show the validity of the problems.
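
    As a small, self-contained illustration of one ingredient of the heliostat-field model described above, the sketch below computes the cosine efficiency of a heliostat whose mirror normal bisects the sun direction and the heliostat-to-receiver direction; the vectors used are arbitrary examples, and scattering and spillage losses are not modeled.

```python
import numpy as np

def cosine_efficiency(sun_dir, heliostat_pos, receiver_pos):
    """Cosine efficiency of a flat heliostat aimed so that its normal bisects
    the sun direction and the heliostat-to-receiver direction."""
    s = np.array(sun_dir, dtype=float)
    s /= np.linalg.norm(s)
    t = np.array(receiver_pos, dtype=float) - np.array(heliostat_pos, dtype=float)
    t /= np.linalg.norm(t)
    n = s + t                      # bisector of the two unit vectors
    n /= np.linalg.norm(n)
    return float(np.dot(n, s))     # cos(incidence angle) on the mirror

# Example: sun 45 degrees above the horizon to the south, tower 100 m north, 50 m tall.
sun = np.array([0.0, -1.0, 1.0])           # arbitrary illustrative direction
print(cosine_efficiency(sun, heliostat_pos=[0, 0, 0], receiver_pos=[0, 100, 50]))
```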

  2. An effective parameter optimization with radiation balance constraints in the CAM5

    NASA Astrophysics Data System (ADS)

    Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.

    2017-12-01

    Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods mostly perform unconstrained optimization, so the simulation results obtained with the optimized parameters may not satisfy conditions that the model must maintain. In this study, the radiation balance constraint is taken as an example and is incorporated into the automatic parameter optimization procedure. The Lagrange multiplier method is used to solve this constrained optimization problem. In our experiments, we use the CAM5 atmosphere model in a 5 yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take a synthesized metric based on global means of radiation, precipitation, relative humidity, and temperature as the goal of the optimization, and simultaneously treat the conditions that FLUT and FSNTOA should satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are required to be approximately equal to 240 W m-2 in CAM5. Experimental results show that the synthesized metric is 13.6% better than that of the control run. At the same time, both FLUT and FSNTOA are close to the constrained values. The FLUT condition is well satisfied and is clearly better than the annual average FLUT obtained with the default parameters. The FSNTOA has a slight deviation from the observed value, but the relative error is less than 7.7‰.
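
    A hedged sketch of the constrained-tuning idea described above, using a cheap surrogate in place of actual CAM5 runs: the "model" below maps two tuning parameters to a skill metric and a top-of-atmosphere flux, and SciPy's SLSQP enforces the radiation-balance constraint. All functions and numbers are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy surrogate for a GCM: maps two tuning parameters to (skill metric, FLUT).
# In reality each evaluation would be a full model run; here it is a cheap stand-in.
def skill_metric(p):
    return (p[0] - 1.0) ** 2 + 2.0 * (p[1] + 0.5) ** 2      # smaller is better

def flut(p):
    return 238.0 + 3.0 * p[0] - 2.0 * p[1]                  # hypothetical flux response

target_flux = 240.0   # W m^-2, the radiation-balance condition

result = minimize(
    skill_metric,
    x0=[0.0, 0.0],
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda p: flut(p) - target_flux}],
    bounds=[(-2.0, 2.0), (-2.0, 2.0)],
)
print(result.x, skill_metric(result.x), flut(result.x))     # tuned parameters, skill, flux
```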

  3. Cosmological parameter constraints with the Deep Lens Survey using galaxy-shear correlations and galaxy clustering properties

    NASA Astrophysics Data System (ADS)

    Yoon, Mijin; Jee, Myungkook James; Tyson, Tony

    2018-01-01

    The Deep Lens Survey (DLS), a precursor to the Large Synoptic Survey Telescope (LSST), is a 20 sq. deg survey carried out with NOAO’s Blanco and Mayall telescopes. The strength of the survey lies in its depth reaching down to ~27th mag in BVRz bands. This enables a broad redshift baseline study and allows us to investigate cosmological evolution of the large-scale structure. In this poster, we present the first cosmological analysis from the DLS using galaxy-shear correlations and galaxy clustering signals. Our DLS shear calibration accuracy has been validated through the most recent public weak-lensing data challenge. Photometric redshift systematic errors are tested by performing lens-source flip tests. Instead of real-space correlations, we reconstruct band-limited power spectra for cosmological parameter constraints. Our analysis puts a tight constraint on the matter density and the power spectrum normalization parameters. Our results are highly consistent with our previous cosmic shear analysis and also with the Planck CMB results.

  4. Constraints on short-term mantle rheology from the J2 observation and the dispersion of the 18.6 y tidal Love number

    NASA Technical Reports Server (NTRS)

    Sabadini, R.; Yuen, D. A.; Widmer, R.

    1985-01-01

    Information derived from data recently acquired from the LAGEOS satellite is used to place some constraints on the rheological parameters of short-term mantle rheology. The validity of Lambeck and Nakiboglu's (1983) rheological model is assessed by formally developing an expression for the transformed shear modulus using a truncated retardation spectrum. This analytical formula is used to show that the parameters of the above mentioned model are not consistent at all with the amount of anelastic dispersion expected in the Chandler wobble and with the attenuation of seismic normal modes. The feasibility of a standard linear solid (SLS) rheology operating over intermediate timescales between 1 and 100 yr is investigated to determine whether the tidal dispersion at 18.6 yr can be explained by this model. An attempt is made to place some constraints on the parameters of the SLS model and the nature of short-term mantle rheology for timescales of less than 100 yr is discussed.

  5. Constraining Secluded Dark Matter models with the public data from the 79-string IceCube search for dark matter in the Sun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ardid, M.; Felis, I.; Martínez-Mora, J.A.

    The 79-string IceCube search for dark matter in the Sun public data is used to test Secluded Dark Matter models. No significant excess over background is observed and constraints on the parameters of the models are derived. Moreover, the search is also used to constrain the dark photon model in the region of the parameter space with dark photon masses between 0.22 and ~1 GeV and a kinetic mixing parameter ε ~ 10^-9, which remains unconstrained. These are the first constraints of dark photons from neutrino telescopes. It is expected that neutrino telescopes will be efficient tools to test dark photons by means of different searches in the Sun, Earth and Galactic Center, which could complement constraints from direct detection, accelerators, astrophysics and indirect detection with other messengers, such as gamma rays or antiparticles.

  6. Analysis of magnetic fields using variational principles and CELAS2 elements

    NASA Technical Reports Server (NTRS)

    Frye, J. W.; Kasper, R. G.

    1977-01-01

    Prospective techniques for analyzing magnetic fields using NASTRAN are reviewed. A variational principle utilizing a vector potential function is presented which has as its Euler equations the required field equations and boundary conditions for static magnetic fields including current sources. The need to augment this variational principle with a constraint condition is discussed. Some results using the Lagrange multiplier method to apply the constraint and CELAS2 elements to simulate the matrices are given. Practical considerations of using large numbers of CELAS2 elements are discussed.

  7. Curvature constraints from large scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dio, Enea Di; Montanari, Francesco; Raccanelli, Alvise

    We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter Ω_K with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle- and redshift-dependent power spectra, which are especially well suited for model independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
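
    A minimal Fisher-matrix sketch of the kind of forecast described above, using a toy observable and diagonal Gaussian errors instead of the relativistic number-count spectra; the model, fiducial values, and uncertainties are placeholders.

```python
import numpy as np

def fisher_matrix(model, theta_fid, sigma, step=1e-4):
    """Fisher matrix F_ij = sum_k dm_k/dtheta_i * dm_k/dtheta_j / sigma_k^2
    for a model m(theta) with independent Gaussian errors sigma."""
    theta_fid = np.asarray(theta_fid, dtype=float)
    n = len(theta_fid)
    derivs = []
    for i in range(n):
        dp = np.zeros(n)
        dp[i] = step
        derivs.append((model(theta_fid + dp) - model(theta_fid - dp)) / (2 * step))
    D = np.array(derivs)                       # shape (n_params, n_data)
    return (D / sigma**2) @ D.T

# Toy "power spectrum": an amplitude A and a curvature-like tilt over 20 bins.
x = np.linspace(0.1, 1.0, 20)
model = lambda th: th[0] * x ** (1.0 + th[1])
sigma = 0.05 * np.ones_like(x)

F = fisher_matrix(model, theta_fid=[1.0, 0.0], sigma=sigma)
cov = np.linalg.inv(F)
print(np.sqrt(np.diag(cov)))                   # marginalized 1-sigma forecasts
```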

  8. Geometrically constrained kinematic global navigation satellite systems positioning: Implementation and performance

    NASA Astrophysics Data System (ADS)

    Asgari, Jamal; Mohammadloo, Tannaz H.; Amiri-Simkooei, Ali Reza

    2015-09-01

    GNSS kinematic techniques are capable of providing precise coordinates in extremely short observation time-spans. These methods usually determine the coordinates of an unknown station with respect to a reference one. To enhance the precision, accuracy, reliability and integrity of the estimated unknown parameters, GNSS kinematic equations are to be augmented by possible constraints. Such constraints could be derived from the geometric relation of the receiver positions in motion. This contribution presents the formulation of constrained kinematic global navigation satellite systems positioning. Constraints effectively restrict the definition domain of the unknown parameters from the three-dimensional space to a subspace defined by the equation of motion. To test the concept of the constrained kinematic positioning method, the equation of a circle is employed as a constraint. A device capable of moving on a circle was made and the observations from 11 positions on the circle were analyzed. Relative positioning was conducted by considering the center of the circle as the reference station. The equation of the receiver's motion was rewritten in the ECEF coordinate system. Special attention is drawn to how a constraint is applied to kinematic positioning. Implementing the constraint in the positioning process provides much more precise results compared to the unconstrained case. This has been verified based on the results obtained from the covariance matrix of the estimated parameters as well as on empirical results using kinematic positioning samples. The theoretical standard deviations of the horizontal components are reduced by a factor ranging from 1.24 to 2.64. The improvement in the empirical standard deviation of the horizontal components ranges from 1.08 to 2.2.
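
    As a schematic of the circle-constrained estimation idea above (not the actual GNSS carrier-phase processing), the sketch below parametrizes the receiver position by a single angle on a known circle and fits it to a noisy position fix, which automatically enforces the constraint; the circle geometry and noise levels are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Known circle (from the experiment geometry): center and radius are assumed inputs.
center = np.array([10.0, 20.0])
radius = 2.0

def on_circle(angle):
    """Position constrained to the circle, parametrized by a single angle."""
    return center + radius * np.array([np.cos(angle), np.sin(angle)])

def residuals(angle, observed_xy):
    return on_circle(angle[0]) - observed_xy

# Noisy unconstrained fix for one epoch (placeholder numbers).
true_angle = 0.8
observed = on_circle(true_angle) + np.array([0.05, -0.03])

fit = least_squares(residuals, x0=[0.0], args=(observed,))
print(fit.x[0], on_circle(fit.x[0]))   # estimated angle and constrained position
```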

  9. Planck 2015 results. XXIV. Cosmology from Sunyaev-Zeldovich cluster counts

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Battye, R.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Comis, B.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Weller, J.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. Improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.

  10. Planck 2015 results: XXIV. Cosmology from Sunyaev-Zeldovich cluster counts

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...

    2016-09-20

    In this work, we present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. In conclusion, improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.

  11. Planck 2015 results: XXIV. Cosmology from Sunyaev-Zeldovich cluster counts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.

    In this work, we present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. In conclusion, improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.

  12. Holographic constraints on Bjorken hydrodynamics at finite coupling

    NASA Astrophysics Data System (ADS)

    DiNunno, Brandon S.; Grozdanov, Sašo; Pedraza, Juan F.; Young, Steve

    2017-10-01

    In large-N_c conformal field theories with classical holographic duals, inverse coupling constant corrections are obtained by considering higher-derivative terms in the corresponding gravity theory. In this work, we use type IIB supergravity and bottom-up Gauss-Bonnet gravity to study the dynamics of boost-invariant Bjorken hydrodynamics at finite coupling. We analyze the time-dependent decay properties of non-local observables (scalar two-point functions and Wilson loops) probing the different models of Bjorken flow and show that they can be expressed generically in terms of a few field theory parameters. In addition, our computations provide an analytically quantifiable probe of the coupling-dependent validity of hydrodynamics at early times in a simple model of heavy-ion collisions, which is an observable closely analogous to the hydrodynamization time of a quark-gluon plasma. We find that to third order in the hydrodynamic expansion, the convergence of hydrodynamics is improved and that generically, as expected from field theory considerations and recent holographic results, the applicability of hydrodynamics is delayed as the field theory coupling decreases.

  13. On the nature of a supposed water model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heckmann, Lotta, E-mail: lotta@fkp.tu-darmstadt.de; Drossel, Barbara

    2014-08-15

    A cell model that has been proposed by Stanley and Franzese in 2002 for modeling water is based on Potts variables that represent the possible orientations of bonds between water molecules. We show that in the liquid phase, where all cells are occupied by a molecule, the Hamiltonian of the cell model can be rewritten as a Hamiltonian of a conventional Potts model, albeit with two types of coupling constants. We argue that such a model, while having a first-order phase transition, cannot display the critical end point that is postulated for the phase transition between a high- and low-density liquid. A closer look at the mean-field calculations that claim to find such an end point in the cell model reveals that the mean-field theory is constructed such that the symmetry constraints on the order parameter are violated. This is equivalent to introducing an external field. The introduction of such a field can be given a physical justification due to the fact that water does not have the type of long-range order occurring in the Potts model.

  14. Maximal near-field radiative heat transfer between two plates

    NASA Astrophysics Data System (ADS)

    Nefzaoui, Elyes; Ezzahri, Younès; Drévillon, Jérémie; Joulain, Karl

    2013-09-01

    Near-field radiative transfer is a promising way to significantly and simultaneously enhance both thermo-photovoltaic (TPV) device power densities and efficiencies. A parametric study of the performance of Drude and Lorentz models in maximizing near-field radiative heat transfer between two semi-infinite planes separated by nanometric distances at room temperature is presented in this paper. Optimal parameters of these models that provide optical properties maximizing the radiative heat flux are reported and compared to real materials usually considered in similar studies, silicon carbide and heavily doped silicon in this case. Results are obtained by exact and approximate (in the extreme near-field regime and the electrostatic limit hypothesis) calculations. The two methods are compared in terms of accuracy and CPU resource consumption. Their differences are explained according to a mesoscopic description of near-field radiative heat transfer. Finally, the frequently assumed hypothesis that radiative heat transfer is maximal when the two semi-infinite planes are made of identical materials is numerically confirmed, and its practical implications are then discussed. The presented results highlight relevant paths to follow in order to choose or design materials that maximize nano-TPV device performance.
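
    To make the parametric models concrete, the sketch below evaluates textbook Drude and single-oscillator Lorentz dielectric functions; the parameter values are illustrative placeholders, not the optimal values reported in the study.

```python
import numpy as np

def drude_eps(omega, eps_inf, omega_p, gamma):
    """Drude dielectric function: eps_inf - omega_p^2 / (omega^2 + i*gamma*omega)."""
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

def lorentz_eps(omega, eps_inf, omega_p, omega_0, gamma):
    """Single-oscillator Lorentz model: eps_inf + omega_p^2 / (omega_0^2 - omega^2 - i*gamma*omega)."""
    return eps_inf + omega_p**2 / (omega_0**2 - omega**2 - 1j * gamma * omega)

# Illustrative parameters (rad/s), roughly in the thermal infrared range.
omega = np.linspace(1e13, 3e14, 5)
print(drude_eps(omega, eps_inf=1.0, omega_p=2e14, gamma=1e13))
print(lorentz_eps(omega, eps_inf=6.7, omega_p=2e14, omega_0=1.5e14, gamma=5e12))
```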

  15. Observational constraints on varying neutrino-mass cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Chao-Qiang; Lee, Chung-Chi; Myrzakulov, R.

    We consider generic models of quintessence and we investigate the influence of massive neutrino matter with field-dependent masses on the matter power spectrum. In the case of minimally coupled neutrino matter, we examine the effect in tracker models with inverse power-law and double exponential potentials. We present detailed investigations for the scaling field with a steep exponential potential, non-minimally coupled to massive neutrino matter, and we derive constraints on field-dependent neutrino masses from the observational data.

  16. Construction of optimal 3-node plate bending triangles by templates

    NASA Astrophysics Data System (ADS)

    Felippa, C. A.; Militello, C.

    A finite element template is a parametrized algebraic form that reduces to specific finite elements by setting numerical values of the free parameters. The present study concerns Kirchhoff Plate-Bending Triangles (KPT) with 3 nodes and 9 degrees of freedom. A 37-parameter template is constructed using the Assumed Natural Deviatoric Strain (ANDES) formulation. Specializations of this template include well-known elements such as DKT and HCT. The question addressed here is: can these parameters be selected to produce high-performance elements? The study is carried out by staged application of constraints on the free parameters. The first stage produces element families satisfying invariance and aspect-ratio insensitivity conditions. Application of energy balance constraints produces specific elements. The performance of such elements in benchmark tests is presently under study.

  17. A Brownian dynamics study on ferrofluid colloidal dispersions using an iterative constraint method to satisfy Maxwell’s equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubina, Sean Hyun, E-mail: sdubin2@uic.edu; Wedgewood, Lewis Edward, E-mail: wedge@uic.edu

    2016-07-15

    Ferrofluids are often favored for their ability to be remotely positioned via external magnetic fields. The behavior of particles in ferromagnetic clusters under uniformly applied magnetic fields has been computationally simulated using the Brownian dynamics, Stokesian dynamics, and Monte Carlo methods. However, few methods have been established that effectively handle the basic principles of magnetic materials, namely, Maxwell's equations. An iterative constraint method was developed to satisfy Maxwell's equations when a uniform magnetic field is imposed on ferrofluids in a heterogeneous Brownian dynamics simulation that examines the impact of ferromagnetic clusters in a mesoscale particle collection. This was accomplished by allowing a particulate system in a simple shear flow to advance by a time step under a uniformly applied magnetic field, then adjusting the ferroparticles via an iterative constraint method applied over sub-volume length scales until Maxwell's equations were satisfied. The resultant ferrofluid model with constraints demonstrates that the magnetoviscosity contribution is not as substantial when compared to homogeneous simulations that assume the material's magnetism is a direct response to the external magnetic field. This was detected across varying intensities of particle-particle interaction, Brownian motion, and shear flow. Ferroparticle aggregation was still extensively present but less so than typically observed.

  18. Positive energy conditions in 4D conformal field theory

    DOE PAGES

    Farnsworth, Kara; Luty, Markus A.; Prilepina, Valentina

    2016-10-03

    Here, we argue that all consistent 4D quantum field theories obey a spacetime-averaged weak energy inequality <T^00> ≥ -C/L^4, where L is the size of the smearing region, and C is a positive constant that depends on the theory. If this condition is violated, the theory has states that are indistinguishable from states of negative total energy by any local measurement, and we expect instabilities or other inconsistencies. We apply this condition to 4D conformal field theories, and find that it places constraints on the OPE coefficients of the theory. The constraints we find are weaker than the “conformal collider” constraints of Hofman and Maldacena. In 3D CFTs, the only constraint we find is equivalent to the positivity of the 2-point function of the energy-momentum tensor, which follows from unitarity. Our calculations are performed using momentum-space Wightman functions, which are remarkably simple functions of momenta, and may be of interest in their own right.

  19. Constrained optimization for position calibration of an NMR field camera.

    PubMed

    Chang, Paul; Nassirpour, Sahar; Eschelbach, Martin; Scheffler, Klaus; Henning, Anke

    2018-07-01

    Knowledge of the positions of field probes in an NMR field camera is necessary for monitoring the B0 field. The typical method of estimating these positions is by switching the gradients with known strengths and calculating the positions using the phases of the FIDs. We investigated improving the accuracy of estimating the probe positions and analyzed the effect of inaccurate estimations on field monitoring. The field probe positions were estimated by 1) assuming ideal gradient fields, 2) using measured gradient fields (including nonlinearities), and 3) using measured gradient fields with relative position constraints. The fields measured with the NMR field camera were compared to fields acquired using a dual-echo gradient recalled echo B0 mapping sequence. Comparisons were done for shim fields from second- to fourth-order shim terms. The position estimation was the most accurate when relative position constraints were used in conjunction with measured (nonlinear) gradient fields. The effect of more accurate position estimates was seen when compared to fields measured using a B0 mapping sequence (up to 10%-15% more accurate for some shim fields). The models acquired from the field camera are sensitive to noise due to the low number of spatial sample points. Position estimation of field probes in an NMR camera can be improved using relative position constraints and nonlinear gradient fields. Magn Reson Med 80:380-390, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
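
    A toy version of the third strategy (positions from gradient-encoded phases plus relative position constraints) can be assembled with a standard constrained solver. Everything below is hypothetical: ideal linear gradients, an arbitrary phase-scaling constant, and a made-up probe-holder geometry; it sketches only the structure of the estimation, not the paper's calibration.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      k = 1.0                                       # assumed phase-per-(gradient.position) scaling
      G = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0], [1.0, 1.0, 0.0]])          # switched gradient directions
      r_true = np.array([[0.05, 0.00, 0.00], [0.00, 0.05, 0.00],
                         [0.00, 0.00, 0.05], [0.03, 0.03, 0.03]])
      phases = k * r_true @ G.T + 1e-4 * rng.standard_normal((4, 4))   # noisy "FID phases"

      pairs = [(0, 1), (0, 2), (1, 3)]              # known rigid-holder probe separations
      dists = [np.linalg.norm(r_true[i] - r_true[j]) for i, j in pairs]

      def misfit(x):
          r = x.reshape(-1, 3)
          return np.sum((k * r @ G.T - phases) ** 2)

      constraints = [{"type": "eq",
                      "fun": lambda x, i=i, j=j, d=d:
                          np.linalg.norm(x.reshape(-1, 3)[i] - x.reshape(-1, 3)[j]) - d}
                     for (i, j), d in zip(pairs, dists)]

      x0 = 0.01 * rng.standard_normal(12)           # rough initial guess
      res = minimize(misfit, x0, method="SLSQP", constraints=constraints)
      r_est = res.x.reshape(-1, 3)                  # constrained position estimates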

  20. Bayesian Methods for Effective Field Theories

    NASA Astrophysics Data System (ADS)

    Wesolowski, Sarah

    Microscopic predictions of the properties of atomic nuclei have reached a high level of precision in the past decade. This progress mandates improved uncertainty quantification (UQ) for a robust comparison of experiment with theory. With the uncertainty from many-body methods under control, calculations are now sensitive to the input inter-nucleon interactions. These interactions include parameters that must be fit to experiment, inducing both uncertainty from the fit and from missing physics in the operator structure of the Hamiltonian. Furthermore, the implementation of the inter-nucleon interactions is not unique, which presents the additional problem of assessing results using different interactions. Effective field theories (EFTs) take advantage of a separation of high- and low-energy scales in the problem to form a power-counting scheme that allows the organization of terms in the Hamiltonian based on their expected contribution to observable predictions. This scheme gives a natural framework for quantification of uncertainty due to missing physics. The free parameters of the EFT, called the low-energy constants (LECs), must be fit to data, but in a properly constructed EFT these constants will be natural-sized, i.e., of order unity. The constraints provided by the EFT, namely the size of the systematic uncertainty from truncation of the theory and the natural size of the LECs, are assumed information even before a calculation is performed or a fit is done. Bayesian statistical methods provide a framework for treating uncertainties that naturally incorporates prior information as well as putting stochastic and systematic uncertainties on an equal footing. For EFT UQ Bayesian methods allow the relevant EFT properties to be incorporated quantitatively as prior probability distribution functions (pdfs). Following the logic of probability theory, observable quantities and underlying physical parameters such as the EFT breakdown scale may be expressed as pdfs that incorporate the prior pdfs. Problems of model selection, such as distinguishing between competing EFT implementations, are also natural in a Bayesian framework. In this thesis we focus on two complementary topics for EFT UQ using Bayesian methods--quantifying EFT truncation uncertainty and parameter estimation for LECs. Using the order-by-order calculations and underlying EFT constraints as prior information, we show how to estimate EFT truncation uncertainties. We then apply the result to calculating truncation uncertainties on predictions of nucleon-nucleon scattering in chiral effective field theory. We apply model-checking diagnostics to our calculations to ensure that the statistical model of truncation uncertainty produces consistent results. A framework for EFT parameter estimation based on EFT convergence properties and naturalness is developed which includes a series of diagnostics to ensure the extraction of the maximum amount of available information from data to estimate LECs with minimal bias. We develop this framework using model EFTs and apply it to the problem of extrapolating lattice quantum chromodynamics results for the nucleon mass. We then apply aspects of the parameter estimation framework to perform case studies in chiral EFT parameter estimation, investigating a possible operator redundancy at fourth order in the chiral expansion and the appropriate inclusion of truncation uncertainty in estimating LECs.
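
    To give a concrete flavour of the truncation-uncertainty idea, the sketch below turns hypothetical order-by-order results into dimensionless expansion coefficients, estimates their characteristic size, and propagates it into a first-omitted-term error estimate. The numbers, the assumed expansion parameter and the simple root-mean-square estimate are placeholders and are not the statistical model developed in the thesis.

      import numpy as np

      y_orders = np.array([1.000, 1.180, 1.130, 1.145])   # hypothetical predictions at successive EFT orders
      Q = 0.3                                             # assumed expansion parameter
      y_ref = y_orders[0]

      # dimensionless coefficients of y = y_ref * sum_n c_n Q^n, from order-to-order shifts
      c = np.diff(y_orders) / (y_ref * Q ** np.arange(1, len(y_orders)))

      cbar = np.sqrt(np.mean(c ** 2))                     # characteristic "natural" coefficient size
      k = len(y_orders) - 1
      trunc_err = y_ref * cbar * Q ** (k + 1) / (1.0 - Q) # geometric estimate of the omitted orders
      print(f"cbar ~ {cbar:.2f}, truncation uncertainty ~ {trunc_err:.3f}")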

  1. Constraints on Stress Components at the Internal Singular Point of an Elastic Compound Structure

    NASA Astrophysics Data System (ADS)

    Pestrenin, V. M.; Pestrenina, I. V.

    2017-03-01

    The classical analytical and numerical methods for investigating the stress-strain state (SSS) in the vicinity of a singular point consider the point as a mathematical one (having no linear dimensions). The reliability of the solution obtained by such methods is valid only outside a small vicinity of the singular point, because the macroscopic equations become incorrect and microscopic ones have to be used to describe the SSS in this vicinity. Also, it is impossible to set constraints or to formulate solutions in stress-strain terms for a mathematical point. These problems do not arise if the singular point is identified with the representative volume of material of the structure studied. In the authors' opinion, this approach is consistent with the postulates of continuum mechanics. In this case, the formulation of constraints at a singular point and their investigation becomes an independent problem of mechanics for bodies with singularities. This method was used to explore constraints at an internal singular point (representative volume) of a compound wedge and a compound rib. It is shown that, in addition to the constraints given in the classical approach, there are also constraints depending on the macroscopic parameters of the constituent materials. These constraints turn the problems of deformable bodies with an internal singular point into nonclassical ones. Combinations of material parameters determine the number of additional constraints and the critical stress state at the singular point. Results of this research can be used in the mechanics of composite materials and fracture mechanics and in studying stress concentrations in composite structural elements.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakstein, Jeremy; Wilcox, Harry; Bacon, David

    The Beyond Horndeski class of alternative gravity theories allows for self-accelerating de Sitter cosmologies with no need for a cosmological constant. This makes them viable alternatives to ΛCDM, and so testing their small-scale predictions against General Relativity is of paramount importance. These theories generically predict deviations in both the Newtonian force law and the gravitational lensing of light inside extended objects. Therefore, by simultaneously fitting the X-ray and lensing profiles of galaxy clusters, new constraints can be obtained. In this work, we apply this methodology to the stacked profiles of 58 high-redshift (0.1 < z < 1.2) clusters using X-ray surface brightness profiles from the XMM Cluster Survey and weak lensing profiles from CFHTLenS. By performing a multi-parameter Markov chain Monte Carlo analysis, we are able to place new constraints on the parameters governing deviations from Newton's law, Υ_1 = −0.11^{+0.93}_{−0.67}, and light bending, Υ_2 = −0.22^{+1.22}_{−1.19}. Both constraints are consistent with General Relativity, for which Υ_1 = Υ_2 = 0. We present here the first observational constraints on Υ_2, as well as the first extragalactic measurement of both parameters.

  3. Constraints on interquark interaction parameters with GW170817 in a binary strange star scenario

    NASA Astrophysics Data System (ADS)

    Zhou, En-Ping; Zhou, Xia; Li, Ang

    2018-04-01

    The LIGO/VIRGO detection of the gravitational waves from a binary merger system, GW170817, has put a clean and strong constraint on the tidal deformability of the merging objects. From this constraint, deep insights can be obtained into the compact star equation of state, which has been one of the most puzzling problems for nuclear physicists and astrophysicists. Employing one of the most widely used quark star EOS models, we characterize the star properties by the strange quark mass (ms), an effective bag constant (Beff), the perturbative QCD correction (a4), as well as the gap parameter (Δ) when considering quark pairing, and investigate the dependence of the tidal deformability on them. We find that the tidal deformability is dominated by Beff and insensitive to ms and a4. We discuss the correlation between the tidal deformability and the maximum mass (MTOV) of a static quark star, which opens the possibility of ruling out the existence of quark stars with future gravitational wave observations and mass measurements. The current tidal deformability measurement implies MTOV ≤ 2.18 M⊙ (2.32 M⊙ when pairing is considered) for quark stars. Combining this with two-solar-mass pulsar observations, we also place constraints on the poorly known gap parameter Δ for color-flavor-locked quark matter.

  4. Model independent constraints on transition redshift

    NASA Astrophysics Data System (ADS)

    Jesus, J. F.; Holanda, R. F. L.; Pereira, S. H.

    2018-05-01

    This paper aims to put constraints on the transition redshift zt, which determines the onset of cosmic acceleration, in cosmological-model-independent frameworks. In order to perform our analyses, we consider a flat universe and assume a parametrization for the comoving distance DC(z) up to third degree in z, a second-degree parametrization for the Hubble parameter H(z), and a linear parametrization for the deceleration parameter q(z). For each case, we show that type Ia supernovae and H(z) data complement each other on the parameter space, and tighter constraints for the transition redshift are obtained. By combining the type Ia supernovae observations and Hubble parameter measurements, it is possible to constrain the values of zt, for each approach, as 0.806 ± 0.094, 0.870 ± 0.063 and 0.973 ± 0.058 at 1σ c.l., respectively. Such approaches therefore provide cosmological-model-independent estimates for this parameter.
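
    For reference, the way the transition redshift follows from these parametrizations can be written compactly; the relations below are standard for a flat FLRW background and are quoted here as context, not verbatim from the paper.

      % deceleration parameter in terms of the Hubble parameter, q = -\ddot{a}a/\dot{a}^2:
      q(z) = (1+z)\,\frac{H'(z)}{H(z)} - 1 , \qquad q(z_t) = 0 ,
      % so that, for the linear parametrization q(z) = q_0 + q_1 z, the transition redshift is
      z_t = -\,\frac{q_0}{q_1} .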

  5. AOF LTAO mode: reconstruction strategy and first test results

    NASA Astrophysics Data System (ADS)

    Oberti, Sylvain; Kolb, Johann; Le Louarn, Miska; La Penna, Paolo; Madec, Pierre-Yves; Neichel, Benoit; Sauvage, Jean-François; Fusco, Thierry; Donaldson, Robert; Soenke, Christian; Suárez Valles, Marcos; Arsenault, Robin

    2016-07-01

    GALACSI is the Adaptive Optics (AO) system serving the instrument MUSE in the framework of the Adaptive Optics Facility (AOF) project. Its Narrow Field Mode (NFM) is a Laser Tomography AO (LTAO) mode delivering high resolution in the visible across a small Field of View (FoV) of 7.5" diameter around the optical axis. From a reconstruction standpoint, GALACSI NFM intends to optimize the correction on axis by estimating the turbulence in volume via a tomographic process, then projecting the turbulence profile onto one single Deformable Mirror (DM) located in the pupil, close to the ground. In this paper, the laser tomographic reconstruction process is described. Several methods (virtual DM, virtual layer projection) are studied, under the constraint of a single matrix vector multiplication. The pseudo-synthetic interaction matrix model and the LTAO reconstructor design are analysed. Moreover, the reconstruction parameter space is explored, in particular the regularization terms. Furthermore, we present here the strategy to define the modal control basis and split the reconstruction between the Low Order (LO) loop and the High Order (HO) loop. Finally, closed loop performance obtained with a 3D turbulence generator will be analysed with respect to the most relevant system parameters to be tuned.

  6. Reassessing The Fundamentals New Constraints on the Evolution, Ages and Masses of Neutron Stars

    NASA Astrophysics Data System (ADS)

    Kızıltan, Bülent

    2011-09-01

    The ages and masses of neutron stars (NSs) are two fundamental threads that make pulsars accessible to other sub-disciplines of astronomy and physics. A realistic and accurate determination of these two derived parameters plays an important role in understanding advanced stages of stellar evolution and the physics that governs the relevant processes. Here I summarize new constraints on the ages and masses of NSs from an evolutionary perspective. I show that the observed P-Ṗ demographics are more diverse than what is theoretically predicted for the standard evolutionary channel. In particular, standard recycling followed by dipole spin-down fails to reproduce the population of millisecond pulsars with higher magnetic fields (B > 4 × 10^8 G) at rates deduced from observations. A proper inclusion of constraints arising from binary evolution and mass accretion offers a more realistic insight into the age distribution. By analytically implementing these constraints, I propose a "modified" spin-down age (τ̃) for millisecond pulsars that gives estimates closer to the true age. Finally, I independently analyze the peak, skewness and cutoff values of the underlying mass distribution from a comprehensive list of radio pulsars for which secure mass measurements are available. The inferred mass distribution shows clear peaks at 1.35 M⊙ and 1.50 M⊙ for NSs in double neutron star (DNS) and neutron star-white dwarf (NS-WD) systems, respectively. I find a mass cutoff at 2 M⊙ for NSs with WD companions, which establishes a firm lower bound for the maximum mass of NSs.

  7. An Improved 3D Joint Inversion Method of Potential Field Data Using Cross-Gradient Constraint and LSQR Method

    NASA Astrophysics Data System (ADS)

    Joulidehsar, Farshad; Moradzadeh, Ali; Doulati Ardejani, Faramarz

    2018-06-01

    The joint interpretation of two sets of geophysical data related to the same source is an appropriate method for decreasing the non-uniqueness of the resulting models during the inversion process. Among the available methods, an approach based on the cross-gradient constraint, which combines the two datasets, is efficient. This method, however, is time-consuming for 3D inversion and cannot provide an exact assessment of the location and extent of the anomaly of interest. In this paper, the first aim is to speed up the required calculation by replacing singular value decomposition with the least-squares QR (LSQR) method to solve the large-scale kernel matrix of the 3D inversion more rapidly. Furthermore, to improve the accuracy of the resulting models, a combination of a depth-weighting matrix and a compactness constraint, with automatic selection of the covariance of the initial parameters, is used in the proposed inversion algorithm. This algorithm was developed in the Matlab environment and first implemented on synthetic data. The 3D joint inversion of synthetic gravity and magnetic data shows a noticeable improvement in the results and increases the efficiency of the algorithm for large-scale problems. Additionally, a real gravity and magnetic dataset from the Jalalabad mine, in the southeast of Iran, was tested. The results obtained by the improved joint 3D inversion with the cross-gradient and compactness constraints showed a mineralised zone in the depth interval of about 110-300 m, which is in good agreement with the available drilling data. This is a further confirmation of the accuracy and progress of the improved inversion algorithm.
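
    The LSQR-for-SVD substitution can be illustrated in isolation on a small damped, depth-weighted linear inverse problem. The kernel, depths, weighting exponent and regularization below are placeholders, and the cross-gradient coupling between the gravity and magnetic models is omitted, so this is only a sketch of the linear-solver step.

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(1)
      n_data, n_cells = 200, 1000
      G = rng.standard_normal((n_data, n_cells))           # placeholder sensitivity kernel
      m_true = np.zeros(n_cells)
      m_true[400:420] = 1.0                                # compact "anomaly"
      d = G @ m_true + 0.01 * rng.standard_normal(n_data)

      z = np.linspace(1.0, 50.0, n_cells)                  # nominal cell depths
      W = diags((z + 1.0) ** -1.5)                         # depth weighting (placeholder exponent)

      mu = 1.0                                             # regularization trade-off
      A = np.vstack([G, mu * W.toarray()])                 # stacked data-misfit + weighted model-norm system
      b = np.concatenate([d, np.zeros(n_cells)])
      m_est = lsqr(A, b, atol=1e-8, btol=1e-8)[0]          # LSQR solves this without an explicit SVD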

  8. Positive signs in massive gravity

    NASA Astrophysics Data System (ADS)

    Cheung, Clifford; Remmen, Grant N.

    2016-04-01

    We derive new constraints on massive gravity from unitarity and analyticity of scattering amplitudes. Our results apply to a general effective theory defined by Einstein gravity plus the leading soft diffeomorphism-breaking corrections. We calculate scattering amplitudes for all combinations of tensor, vector, and scalar polarizations. The high-energy behavior of these amplitudes prescribes a specific choice of couplings that ameliorates the ultraviolet cutoff, in agreement with existing literature. We then derive consistency conditions from analytic dispersion relations, which dictate positivity of certain combinations of parameters appearing in the forward scattering amplitudes. These constraints exclude all but a small island in the parameter space of ghost-free massive gravity. While the theory of the "Galileon" scalar mode alone is known to be inconsistent with positivity constraints, this is remedied in the full massive gravity theory.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Clifford; Remmen, Grant N.

    Here, we derive new constraints on massive gravity from unitarity and analyticity of scattering amplitudes. Our results apply to a general effective theory defined by Einstein gravity plus the leading soft diffeomorphism-breaking corrections. We calculate scattering amplitudes for all combinations of tensor, vector, and scalar polarizations. Furthermore, the high-energy behavior of these amplitudes prescribes a specific choice of couplings that ameliorates the ultraviolet cutoff, in agreement with existing literature. We then derive consistency conditions from analytic dispersion relations, which dictate positivity of certain combinations of parameters appearing in the forward scattering amplitudes. These constraints exclude all but a small island in the parameter space of ghost-free massive gravity. And while the theory of the "Galileon" scalar mode alone is known to be inconsistent with positivity constraints, this is remedied in the full massive gravity theory.

  10. Constraints on the pre-impact orbits of Solar system giant impactors

    NASA Astrophysics Data System (ADS)

    Jackson, Alan P.; Gabriel, Travis S. J.; Asphaug, Erik I.

    2018-03-01

    We provide a fast method for computing constraints on impactor pre-impact orbits, applying this to the late giant impacts in the Solar system. These constraints can be used to make quick, broad comparisons of different collision scenarios, identifying some immediately as low-probability events, and narrowing the parameter space in which to target follow-up studies with expensive N-body simulations. We benchmark our parameter space predictions, finding good agreement with existing N-body studies for the Moon. We suggest that high-velocity impact scenarios in the inner Solar system, including all currently proposed single impact scenarios for the formation of Mercury, should be disfavoured. This leaves a multiple hit-and-run scenario as the most probable currently proposed for the formation of Mercury.

  11. Neighboring extremals of dynamic optimization problems with path equality constraints

    NASA Technical Reports Server (NTRS)

    Lee, A. Y.

    1988-01-01

    Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.

  12. Size effects in martensitic microstructures: Finite-strain phase field model versus sharp-interface approach

    NASA Astrophysics Data System (ADS)

    Tůma, K.; Stupkiewicz, S.; Petryk, H.

    2016-10-01

    A finite-strain phase field model for martensitic phase transformation and twinning in shape memory alloys is developed and confronted with the corresponding sharp-interface approach extended to interfacial energy effects. The model is set in the energy framework so that the kinetic equations and conditions of mechanical equilibrium are fully defined by specifying the free energy and dissipation potentials. The free energy density involves the bulk and interfacial energy contributions, the latter describing the energy of diffuse interfaces in a manner typical for phase-field approaches. To ensure volume preservation during martensite reorientation at finite deformation within a diffuse interface, it is proposed to apply linear mixing of the logarithmic transformation strains. The physically different nature of phase interfaces and twin boundaries in the martensitic phase is reflected by introducing two order parameters in a hierarchical manner, one as the reference volume fraction of austenite, and thus of the whole martensite, and the second as the volume fraction of one variant of martensite in the martensitic phase only. The microstructure evolution problem is given a variational formulation in terms of incremental fields of displacement and order parameters, with unilateral constraints on volume fractions explicitly enforced by applying the augmented Lagrangian method. As an application, size-dependent microstructures with diffuse interfaces are calculated for the cubic-to-orthorhombic transformation in a CuAlNi shape memory alloy and compared with the sharp-interface microstructures with interfacial energy effects.

  13. Systems with outer constraints. Gupta-Bleuler electromagnetism as an algebraic field theory

    NASA Astrophysics Data System (ADS)

    Grundling, Hendrik

    1988-03-01

    Since there are some important systems which have constraints not contained in their field algebras, we develop here in a C*-context the algebraic structures of these. The constraints are defined as a group G acting as outer automorphisms on the field algebra ℱ, α: G ↦ Aut ℱ, α(G) ⊄ Inn ℱ, and we find that the selection of G-invariant states on ℱ is the same as the selection of states ω on M(G ×_α ℱ) by ω(U_g) = 1 ∀ g ∈ G, where U_g ∈ M(G ×_α ℱ)/ℱ are the canonical elements implementing α_g. These states are taken as the physical states, and this specifies the resulting algebraic structure of the physics in M(G ×_α ℱ), and in particular the maximal constraint-free physical algebra ℛ. A nontriviality condition is given for ℛ to exist, and we extend the notion of a crossed product to deal with a situation where G is not locally compact. This is necessary to deal with the field-theoretical aspect of the constraints. Next the C*-algebra of the CCR is employed to define the abstract algebraic structure of Gupta-Bleuler electromagnetism in the present framework. The indefinite inner product representation structure is obtained, and this puts Gupta-Bleuler electromagnetism on a rigorous footing. Finally, as a bonus, we find that the algebraic structures just set up provide a blueprint for constructive quadratic algebraic field theory.

  14. Retrieving Storm Electric Fields from Aircraft Field Mill Data: Part II: Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Mach, D. M.; Christian, H. J.; Stewart, M. F.; Bateman, M. G.

    2006-01-01

    The Lagrange multiplier theory developed in Part I of this study is applied to complete a relative calibration of a Citation aircraft that is instrumented with six field mill sensors. When side constraints related to average fields are used, the Lagrange multiplier method performs well in computer simulations. For mill measurement errors of 1 V m^-1 and a 5 V m^-1 error in the mean fair-weather field function, the 3D storm electric field is retrieved to within an error of about 12%. A side constraint that involves estimating the detailed structure of the fair-weather field was also tested using computer simulations. For mill measurement errors of 1 V m^-1, the method retrieves the 3D storm field to within an error of about 8% if the fair-weather field estimate is typically within 1 V m^-1 of the true fair-weather field. Using this type of side constraint and data from fair-weather field maneuvers taken on 29 June 2001, the Citation aircraft was calibrated. Absolute calibration was completed using the pitch down method developed in Part I, and conventional analyses. The resulting calibration matrices were then used to retrieve storm electric fields during a Citation flight on 2 June 2001. The storm field results are encouraging and agree favorably in many respects with results derived from earlier (iterative) techniques of calibration.
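
    The algebraic core of this kind of side-constrained calibration is an equality-constrained least-squares problem handled with Lagrange multipliers. The snippet below shows only that generic machinery, with made-up matrices; it is not the Part I formulation or the Citation calibration itself.

      import numpy as np

      def constrained_lstsq(A, b, C, d):
          # minimize ||A x - b||^2 subject to C x = d, via the KKT (Lagrange multiplier) system
          n, m = A.shape[1], C.shape[0]
          K = np.block([[A.T @ A, C.T],
                        [C, np.zeros((m, m))]])
          rhs = np.concatenate([A.T @ b, d])
          sol = np.linalg.solve(K, rhs)
          return sol[:n], sol[n:]                  # estimate and Lagrange multipliers

      rng = np.random.default_rng(3)
      A = rng.standard_normal((30, 6))             # hypothetical response matrix
      x_true = rng.standard_normal(6)
      b = A @ x_true + 0.01 * rng.standard_normal(30)
      C = np.ones((1, 6))                          # example side constraint: fix the sum of the unknowns
      d = np.array([x_true.sum()])
      x_est, lam = constrained_lstsq(A, b, C, d)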

  15. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
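
    A quick way to see the role of the three operational constraints is to simulate the closed loop and check them along the trajectory, as in the hedged sketch below (an invented unstable process and candidate PI gains, explicit Euler integration); the paper's contribution is the analytical design that guarantees feasibility without such trial simulation.

      import numpy as np

      a, b = 0.5, 1.0                      # hypothetical open-loop-unstable process: dy/dt = a*y + b*u
      Kp, Ki = 3.0, 1.5                    # candidate PI gains to be checked
      y_max, u_max, du_max = 1.5, 4.0, 50.0
      dt, t_final = 1e-3, 10.0

      y, integral, u_prev = 0.0, 0.0, None
      feasible = True
      for _ in range(int(t_final / dt)):
          e = 1.0 - y                      # unit setpoint step
          integral += e * dt
          u = Kp * e + Ki * integral
          feasible &= abs(y) <= y_max and abs(u) <= u_max
          if u_prev is not None:           # rate-of-change constraint (skipping the initial jump)
              feasible &= abs(u - u_prev) / dt <= du_max
          y += dt * (a * y + b * u)        # explicit Euler step of the process
          u_prev = u
      print("operational constraints satisfied:", feasible)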

  16. TESTING GRAVITY WITH QUASI-PERIODIC OSCILLATIONS FROM ACCRETING BLACK HOLES: THE CASE OF THE EINSTEIN–DILATON–GAUSS–BONNET THEORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maselli, Andrea; Gualtieri, Leonardo; Ferrari, Valeria

    Quasi-periodic oscillations (QPOs) observed in the X-ray flux emitted by accreting black holes are associated with phenomena occurring near the horizon. Future very large area X-ray instruments will be able to measure QPO frequencies with very high precision, thus probing this strong-field region. Using the relativistic precession model, we show the way in which QPO frequencies could be used to test general relativity (GR) against those alternative theories of gravity which predict deviations from the classical theory in the strong-field and high-curvature regimes. We consider one of the best-motivated high-curvature corrections to GR, namely, the Einstein–Dilaton–Gauss–Bonnet theory, and show that a detection of QPOs with the expected sensitivity of the proposed ESA M-class mission LOFT would set the most stringent constraints on the parameter space of this theory.

  17. Axions, Inflation and String Theory

    NASA Astrophysics Data System (ADS)

    Mack, Katherine J.; Steinhardt, P. J.

    2009-01-01

    The QCD axion is the leading contender to rid the standard model of the strong-CP problem. If the Peccei-Quinn symmetry breaking occurs before inflation, which is likely in string theory models, axions manifest themselves cosmologically as a form of cold dark matter with a density determined by the axion's initial conditions and by the energy scale of inflation. Constraints on the dark matter density and on the amplitude of CMB isocurvature perturbations currently demand an exponential degree of fine-tuning of both axion and inflationary parameters beyond what is required for particle physics. String theory models generally produce large numbers of axion-like fields; the prospect that any of these fields exist at scales close to that of the QCD axion makes the problem drastically worse. I will discuss the challenge of accommodating string-theoretic axions in standard inflationary cosmology and show that the fine-tuning problems cannot be fully addressed by anthropic principle arguments.

  18. Magnetic field and flavor effects on the gamma-ray burst neutrino flux

    NASA Astrophysics Data System (ADS)

    Baerwald, Philipp; Hümmer, Svenja; Winter, Walter

    2011-03-01

    We reanalyze the prompt muon neutrino flux from gamma-ray bursts (GRBs) in terms of the particle physics involved, as in the example of the often-used reference Waxman-Bahcall GRB flux. We first reproduce this reference flux explicitly treating synchrotron energy losses of the secondary pions. Then we include additional neutrino production modes, the neutrinos from muon decays, the magnetic field effects on all secondary species, and flavor mixing with the current parameter uncertainties. We demonstrate that the combination of these effects modifies the shape of the original Waxman-Bahcall GRB flux significantly and changes the normalization by a factor of 3 to 4. As a consequence, the gamma-ray burst search strategy of neutrino telescopes may be based on the wrong flux shape, and the constraints derived for the GRB neutrino flux, such as the baryonic loading, may in fact be much stronger than anticipated.

  19. Static axisymmetric equilibria in general relativistic magnetohydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nunez, Manuel

    2008-01-15

    While the definition of static equilibria is not clear in a general relativistic context, in many cases of astrophysical interest a natural 3+1 split exists which allows us to define physically meaningful spatial and temporal coordinates. We study the possibility of axisymmetric magnetohydrodynamic equilibria in this setting. The presence of a nontrivial shift velocity provides a constraint not present in the Newtonian case, while the momentum equation may be set in a Grad-Shafranov-like form with the presence of additional terms involving the space-time metric coefficients. It is found that whenever the magnetic field or the shift velocity possesses a poloidal component, the existence of even local static equilibria demands that the metric parameters satisfy such strong conditions that these equilibria are extremely unlikely. Only very particular cases such as purely toroidal fields and shifts yield existence of equilibria, provided we are able to choose arbitrarily the plasma pressure and density.

  20. An automated approach to magnetic divertor configuration design

    NASA Astrophysics Data System (ADS)

    Blommaert, M.; Dekeyser, W.; Baelmans, M.; Gauger, N. R.; Reiter, D.

    2015-01-01

    Automated methods based on optimization can greatly assist computational engineering design in many areas. In this paper an optimization approach to the magnetic design of a nuclear fusion reactor divertor is proposed and applied to a tokamak edge magnetic configuration in a first feasibility study. The approach is based on reduced models for magnetic field and plasma edge, which are integrated with a grid generator into one sensitivity code. The design objective chosen here for demonstrative purposes is to spread the divertor target heat load as much as possible over the entire target area. Constraints on the separatrix position are introduced to eliminate physically irrelevant magnetic field configurations during the optimization cycle. A gradient projection method is used to ensure stable cost function evaluations during optimization. The concept is applied to a configuration with typical Joint European Torus (JET) parameters and it automatically provides plausible configurations with reduced heat load.
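
    The gradient-projection ingredient mentioned above can be sketched generically: project the cost gradient onto the null space of the linearized constraints so that each update stays feasible. The toy quadratic objective, constraint and step size below are invented and unrelated to the actual divertor cost function and separatrix constraints.

      import numpy as np

      def gradient_projection(grad_f, x0, A, step=0.1, n_iter=500):
          # x0 must already satisfy the linear constraints A x = b; each step is projected
          # onto null(A) with P = I - A^T (A A^T)^{-1} A, so the iterates remain feasible.
          P = np.eye(len(x0)) - A.T @ np.linalg.solve(A @ A.T, A)
          x = np.array(x0, dtype=float)
          for _ in range(n_iter):
              x = x - step * (P @ grad_f(x))
          return x

      # toy usage: minimize ||x - c||^2 subject to x1 + x2 + x3 = 1
      c = np.array([0.2, 0.7, 0.4])
      A = np.ones((1, 3))
      x0 = np.array([1.0, 0.0, 0.0])                 # feasible starting point
      x_opt = gradient_projection(lambda x: 2.0 * (x - c), x0, A)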

  1. Anomalous dimensions of spinning operators from conformal symmetry

    NASA Astrophysics Data System (ADS)

    Gliozzi, Ferdinando

    2018-01-01

    We compute, to the first non-trivial order in the ɛ-expansion of a perturbed scalar field theory, the anomalous dimensions of an infinite class of primary operators with arbitrary spin ℓ = 0, 1, . . . , including as a particular case the weakly broken higher-spin currents, using only constraints from conformal symmetry. Following the bootstrap philosophy, no reference is made to any Lagrangian, equations of motion or coupling constants. Even the space dimensions d are left free. The interaction is implicitly turned on through the local operators by letting them acquire anomalous dimensions. When matching certain four-point and five-point functions with the corresponding quantities of the free field theory in the ɛ → 0 limit, no free parameter remains. It turns out that only the expected discrete d values are permitted and the ensuing anomalous dimensions reproduce known results for the weakly broken higher-spin currents and provide new results for the other spinning operators.

  2. Space Shuttle and Space Station Radio Frequency (RF) Exposure Analysis

    NASA Technical Reports Server (NTRS)

    Hwu, Shian U.; Loh, Yin-Chung; Sham, Catherine C.; Kroll, Quin D.

    2005-01-01

    This paper outlines the modeling techniques and important parameters to define a rigorous but practical procedure that can verify the compliance of RF exposure to the NASA standards for astronauts and electronic equipment. The electromagnetic modeling techniques are applied to analyze RF exposure in Space Shuttle and Space Station environments with reasonable computing time and resources. The modeling techniques are capable of taking into account the field interactions with Space Shuttle and Space Station structures. The obtained results illustrate the multipath effects due to the presence of the space vehicle structures. It is necessary to include the field interactions with the space vehicle in the analysis for an accurate assessment of the RF exposure. Based on the obtained results, the RF keep-out zones are identified for appropriate operational scenarios, flight rules and necessary RF transmitter constraints to ensure a safe operating environment and mission success.

  3. Mid-Type M Dwarf Planet Occurrence Rates

    NASA Astrophysics Data System (ADS)

    Hardegree-Ullman, Kevin; Cushing, Michael; Muirhead, Philip Steven

    2018-01-01

    Planet occurrence rates increase toward later spectral types; therefore, M dwarf systems are our most promising targets in the search for exoplanets. Stars in the original Kepler field were primarily characterized from photometry alone, resulting in large uncertainties (~30%) for properties of late-type stars like M dwarfs. Planet occurrence rate calculations require precise measurements of stellar radii, which can be constrained to ~10% using temperatures and metallicities derived from spectra. These measurements need to be performed on a statistically significant population of stars, including systems with and without planets. Using WIYN, the Discovery Channel Telescope, and IRTF, we have gathered spectra of about half of the ~550 probable mid-type M dwarfs in the Kepler field. Our observations have led to better constraints on stellar parameters and new planet occurrence rates for mid-type M dwarfs. We gratefully acknowledge support from the NASA-NSF Exoplanet Observational Research partnership, the National Optical Astronomy Observatory, and the NASA Exoplanet Science Institute.

  4. Cosmology for quadratic gravity in generalized Weyl geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiménez, Jose Beltrán; Heisenberg, Lavinia; Koivisto, Tomi S.

    A class of vector-tensor theories arises naturally in the framework of quadratic gravity in spacetimes with linear vector distortion. Requiring the absence of ghosts for the vector field imposes an interesting condition on the allowed connections with vector distortion: the resulting one-parameter family of connections generalises the usual Weyl geometry with polar torsion. The cosmology of this class of theories is studied, focusing on isotropic solutions wherein the vector field is dominated by the temporal component. De Sitter attractors are found and inhomogeneous perturbations around such backgrounds are analysed. In particular, further constraints on the models are imposed by excluding pathologies in the scalar, vector and tensor fluctuations. Various exact background solutions are presented, describing a constant and an evolving dark energy, a bounce and a self-tuning de Sitter phase. However, the latter two scenarios are not viable under a closer scrutiny.

  5. 750 GeV diphoton resonance, 125 GeV Higgs and muon g - 2 anomaly in deflected anomaly mediation SUSY breaking scenarios

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Wu, Lei; Yang, Jin Min; Zhang, Mengchao

    2016-08-01

    We propose to interpret the 750 GeV diphoton excess in deflected anomaly mediation supersymmetry breaking scenarios, which can naturally predict couplings between a singlet field and vector-like messengers. The CP-even scalar component (S) of the singlet field can serve as the 750 GeV resonance. The messenger scale, which is of order the gravitino scale, can be as light as Fϕ ∼ O (10) TeV when the messenger species NF and the deflection parameter d are moderately large. Such messengers can induce the large loop decay process S → γγ. Our results show that such a scenario can successfully accommodate the 125 GeV Higgs boson, the 750 GeV diphoton excess and the muon g - 2 without conflicting with the LHC constraints. We also comment on the possible explanations in the gauge mediation supersymmetry breaking scenario.

  6. Effective field theory analysis on μ problem in low-scale gauge mediation

    NASA Astrophysics Data System (ADS)

    Zheng, Sibo

    2012-02-01

    Supersymmetric models based on the scenario of gauge mediation often suffer from the well-known μ problem. In this paper, we reconsider this problem in low-scale gauge mediation in terms of an effective field theory analysis. In this paradigm, all high-energy input soft masses can be expressed via loop expansions. If the corrections coming from messenger thresholds are small, as we assume in this letter, then all RG evaluations can be taken in the linear approximation for low-scale supersymmetry breaking. Due to these observations, the parameter space can be systematically classified and studied after constraints coming from electroweak symmetry breaking are imposed. We find that some old proposals in the literature are reproduced, and two new classes are uncovered. We refer to a microscopic model in which the specific relations among coefficients in one of the new classes are well motivated. We also discuss some preliminary phenomenology.

  7. Cosmologically allowed regions for the axion decay constant Fa

    NASA Astrophysics Data System (ADS)

    Kawasaki, Masahiro; Sonomoto, Eisuke; Yanagida, Tsutomu T.

    2018-07-01

    If the Peccei-Quinn symmetry is already broken during inflation, the decay constant Fa of the axion can lie in a wide region, from 10^11 GeV to 10^18 GeV, for the axion to be the dominant dark matter. In this case, however, the axion causes a serious cosmological problem, the isocurvature perturbation problem, which severely constrains the Hubble parameter during inflation. The constraint is relaxed when the Peccei-Quinn scalar field takes a large value ∼ Mp (the Planck scale) during inflation. In this letter, we point out that the allowed region of the decay constant Fa is reduced to a rather narrow region for a given tensor-to-scalar ratio r when the Peccei-Quinn scalar field takes ∼ Mp during inflation. For example, if the ratio r is determined to be r ≳ 10^-3 in future measurements, we can predict Fa ≃ (0.1-1.4) × 10^12 GeV for domain wall number NDW = 6.

  8. Lithium wall conditioning by high frequency pellet injection in RFX-mod

    NASA Astrophysics Data System (ADS)

    Innocente, P.; Mansfield, D. K.; Roquemore, A. L.; Agostini, M.; Barison, S.; Canton, A.; Carraro, L.; Cavazzana, R.; De Masi, G.; Fassina, A.; Fiameni, S.; Grando, L.; Rais, B.; Rossetto, F.; Scarin, P.

    2015-08-01

    In the RFX-mod reversed field pinch experiment, lithium wall conditioning has been tested with multiple aims: to improve density control, to reduce impurities, and to increase the energy and particle confinement times. Single large lithium pellet injection, a lithium capillary-pore system, and lithium evaporation have been used for lithiumization. The last two methods, which presently provide the best results in tokamak devices, have limited applicability in the RFX-mod device due to the magnetic field characteristics and geometrical constraints. On the other hand, the first technique did not allow large amounts of lithium to be injected. To improve the deposition, injection of small lithium multi-pellets has recently been tested in RFX-mod. In this paper we compare lithium multi-pellet injection to the other techniques. Multi-pellets gave a more uniform Li deposition than the evaporator, but provided similar effects on plasma parameters, showing that further optimization is required.

  9. Magnetohydrodynamic Models of Molecular Tornadoes

    NASA Astrophysics Data System (ADS)

    Au, Kelvin; Fiege, Jason D.

    2017-07-01

    Recent observations near the Galactic Center (GC) have found several molecular filaments displaying striking helically wound morphology that are collectively known as molecular tornadoes. We investigate the equilibrium structure of these molecular tornadoes by formulating a magnetohydrodynamic model of a rotating, helically magnetized filament. A special analytical solution is derived where centrifugal forces balance exactly with toroidal magnetic stress. From the physics of torsional Alfvén waves we derive a constraint that links the toroidal flux-to-mass ratio and the pitch angle of the helical field to the rotation laws, which we find to be an important component in describing the molecular tornado structure. The models are compared to the Ostriker solution for isothermal, nonmagnetic, nonrotating filaments. We find that neither the analytic model nor the Alfvén wave model suffer from the unphysical density inversions noted by other authors. A Monte Carlo exploration of our parameter space is constrained by observational measurements of the Pigtail Molecular Cloud, the Double Helix Nebula, and the GC Molecular Tornado. Observable properties such as the velocity dispersion, filament radius, linear mass, and surface pressure can be used to derive three dimensionless constraints for our dimensionless models of these three objects. A virial analysis of these constrained models is studied for these three molecular tornadoes. We find that self-gravity is relatively unimportant, whereas magnetic fields and external pressure play a dominant role in the confinement and equilibrium radial structure of these objects.

  10. Observational constraints on successful model of quintessential Inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Chao-Qiang; Lee, Chung-Chi; Sami, M.

    We study quintessential inflation using a generalized exponential potential V(φ) ∝ exp(−λ φ^n / M_Pl^n), n > 1; the model admits slow-roll inflation at early times and leads to close-to-scaling behaviour in the post-inflationary era, with an exit to dark energy at late times. We present detailed investigations of the inflationary stage in the light of the Planck 2015 results, study the post-inflationary dynamics, and analytically confirm the existence of an approximately scaling solution. Additionally, assuming that standard massive neutrinos are non-minimally coupled makes the field φ dominant once again at late times, giving rise to the present accelerated expansion of the Universe. We derive observational constraints on the field and time-dependent neutrino masses. In particular, for n = 6 (8), the parameter λ is constrained to be log λ > −7.29 (−11.7); the model produces a spectral index of the power spectrum of primordial scalar (matter density) perturbations of n_s = 0.959 ± 0.001 (0.961 ± 0.001) and a tiny tensor-to-scalar ratio, r < 1.72 × 10^−2 (2.32 × 10^−2), respectively. Consequently, the upper bound on possible values of the sum of neutrino masses, Σ m_ν ≲ 2.5 eV, is significantly enhanced compared to that in the standard ΛCDM model.

  11. Scrutinizing the alignment limit in two-Higgs-doublet models. II. mH=125 GeV

    NASA Astrophysics Data System (ADS)

    Bernon, Jérémy; Gunion, John F.; Haber, Howard E.; Jiang, Yun; Kraml, Sabine

    2016-02-01

    In the alignment limit of a multidoublet Higgs sector, one of the Higgs mass eigenstates aligns in field space with the direction of the scalar field vacuum expectation values, and its couplings approach those of the Standard Model (SM) Higgs boson. We consider CP-conserving two-Higgs-doublet models (2HDMs) of type I and type II near the alignment limit in which the heavier of the two CP-even Higgs bosons, H, is the SM-like state observed with a mass of 125 GeV, and the couplings of H to gauge bosons approach those of the SM. We review the theoretical structure and analyze the phenomenological implications of this particular realization of the alignment limit, where decoupling of the extra states cannot occur given that the lighter CP-even state h must, by definition, have a mass below 125 GeV. For the numerical analysis, we perform scans of the 2HDM parameter space employing the software packages 2HDMC and Lilith, taking into account all relevant pre-LHC constraints, constraints from the measurements of the 125 GeV Higgs signal at the LHC, as well as the most recent limits coming from searches for other Higgs-like states. Implications for Run 2 at the LHC, including expectations for observing the other scalar states, are also discussed.

  12. Constraints on the magnetic fields in galaxies implied by the infrared-to-radio correlation

    NASA Technical Reports Server (NTRS)

    Helou, George; Bicay, M. D.

    1990-01-01

    A physical model is proposed for understanding the tight correlation between far-IR and nonthermal radio luminosities in star-forming galaxies. The approach suggests that the only constraint implied by the correlation is a universal relation whereby magnetic field strength scales with gas density to a power beta between 1/3 and 2/3, inclusive.

  13. Testing chameleon gravity with the Coma cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terukina, Ayumu; Yamamoto, Kazuhiro; Lombriser, Lucas

    2014-04-01

    We propose a novel method to test the gravitational interactions in the outskirts of galaxy clusters. When gravity is modified, this is typically accompanied by the introduction of an additional scalar degree of freedom, which mediates an attractive fifth force. The presence of an extra gravitational coupling, however, is tightly constrained by local measurements. In chameleon modifications of gravity, local tests can be evaded by employing a screening mechanism that suppresses the fifth force in dense environments. While the chameleon field may be screened in the interior of the cluster, its outer region can still be affected by the extra force, introducing a deviation between the hydrostatic and lensing mass of the cluster. Thus, the chameleon modification can be tested by combining the gas and lensing measurements of the cluster. We demonstrate the operability of our method with the Coma cluster, for which both a lensing measurement and gas observations from the X-ray surface brightness, the X-ray temperature, and the Sunyaev-Zel'dovich effect are available. Using the joint observational data set, we perform a Markov chain Monte Carlo analysis of the parameter space describing the different profiles in both the Newtonian and chameleon scenarios. We report competitive constraints on the chameleon field amplitude and its coupling strength to matter. In the case of f(R) gravity, corresponding to a specific choice of the coupling, we find an upper bound on the background field amplitude of |f_R0| < 6 × 10^−5, which is currently the tightest constraint on cosmological scales.

  14. Vakonomic Constraints in Higher-Order Classical Field Theory

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.

    2010-07-01

    We propose a differential-geometric setting for the dynamics of a higher-order field theory, based on the Skinner and Rusk formalism for mechanics. This approach incorporates aspects of both the Lagrangian and the Hamiltonian descriptions, since the field equations are formulated using the Lagrangian on a higher-order jet bundle and the canonical multisymplectic form on its affine dual. The result is that we obtain a unique and global intrinsic description of the dynamics. The case of vakonomic constraints is also studied within this formalism.

  15. Observational constraints on Hubble parameter in viscous generalized Chaplygin gas

    NASA Astrophysics Data System (ADS)

    Thakur, P.

    2018-04-01

    A cosmological model with a viscous generalized Chaplygin gas (in short, VGCG) is considered here to determine observational constraints on its equation-of-state (EoS) parameters from background data. These data consist of H(z)-z (OHD) data, the baryon acoustic oscillation peak parameter, the CMB shift parameter, and SN Ia data (Union 2.1). Best-fit values of the EoS parameters, including the present Hubble parameter (H0), and their acceptable ranges at different confidence limits are determined. In this model the permitted ranges for the present Hubble parameter and the transition redshift (zt) at the 1σ confidence limit are H0 = 70.24^{+0.34}_{-0.36} and zt = 0.76^{+0.07}_{-0.07}, respectively. These EoS parameters are then compared with those of other models. The present age of the Universe (t0) has also been determined. The Akaike and Bayesian information criteria have been adopted for model selection and comparison with other models. It is noted that the VGCG model satisfactorily accommodates the present accelerating phase of the Universe.
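
    Under a Gaussian likelihood, the two information criteria used here reduce to simple penalized chi-square expressions, evaluated below for purely hypothetical fit results (the chi-square values, parameter counts and data count are placeholders).

      import numpy as np

      def aic_bic(chi2_min, n_params, n_data):
          # Gaussian-likelihood forms, up to model-independent constants:
          # AIC = chi2_min + 2 k,  BIC = chi2_min + k ln N
          return chi2_min + 2 * n_params, chi2_min + n_params * np.log(n_data)

      # hypothetical comparison of a 3-parameter and a 4-parameter model on 600 data points
      print(aic_bic(chi2_min=562.3, n_params=3, n_data=600))
      print(aic_bic(chi2_min=561.1, n_params=4, n_data=600))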

  16. Order Reduction, Projectability and Constraints of Second-Order Field Theories and Higher-Order Mechanics

    NASA Astrophysics Data System (ADS)

    Gaset, Jordi; Román-Roy, Narciso

    2016-12-01

    The projectability of Poincaré-Cartan forms in a third-order jet bundle J3π onto a lower-order jet bundle is a consequence of the degenerate character of the corresponding Lagrangian. This fact is analyzed using the constraint algorithm for the associated Euler-Lagrange equations in J3π. The results are applied to study the Hilbert Lagrangian for the Einstein equations (in vacuum) from a multisymplectic point of view. Thus we show how these equations are a consequence of the application of the constraint algorithm to the geometric field equations, while the other constraints are related to the fact that this second-order theory is equivalent to a first-order theory. Furthermore, the case of higher-order mechanics is also studied as a particular situation.

  17. Transoptr — A second order beam transport design code with optimization and constraints

    NASA Astrophysics Data System (ADS)

    Heighway, E. A.; Hutcheon, R. M.

    1981-08-01

    This code was written initially to design an achromatic and isochronous reflecting magnet and has been extended to compete in capability (for constrained problems) with TRANSPORT. Its advantage is its flexibility in that the user writes a routine to describe his transport system. The routine allows the definition of general variables from which the system parameters can be derived. Further, the user can write any constraints he requires as algebraic equations relating the parameters. All variables may be used in either a first or second order optimization.

  18. Estimating free-body modal parameters from tests of a constrained structure

    NASA Technical Reports Server (NTRS)

    Cooley, Victor M.

    1993-01-01

    Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.

  19. Reduced Order Podolsky Model

    NASA Astrophysics Data System (ADS)

    Thibes, Ronaldo

    2017-02-01

    We perform the canonical and path integral quantizations of a lower-order derivative model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivative order permeating the equations of motion, Dirac brackets and effective action.

  20. The effect of geometric and electric constraints on the performance of polymer-stabilized cholesteric liquid crystals with a double-handed circularly polarized light reflection band

    NASA Astrophysics Data System (ADS)

    Relaix, Sabrina; Mitov, Michel

    2008-08-01

    Polymer-stabilized cholesteric liquid crystals (PSCLCs) with a double-handed circularly polarized reflection band are fabricated. The geometric and electric constraints appear to be relevant parameters in obtaining a single-layer CLC structure with a clear-cut double-handed circularly polarized reflection band, since light scattering phenomena can alter the reflection properties when the PSCLC is cooled from the elaboration temperature to the operating one. A compromise needs to be found between the populations of LC molecules that are bound to the polymer network by strong surface effects and those that are not. Besides, a monodomain texture is preserved if the PSCLC is subjected to an electric field at the same time as the thermal treatment intrinsic to the elaboration process. As a consequence, the light scattering is reduced and both kinds of circularly polarized reflected light beams are evidenced. Related potential applications are smart reflective windows for solar light management or reflective polarizer-free displays with higher brightness.

  1. Higgs EFT for 2HDM and beyond.

    PubMed

    Bélusca-Maïto, Hermès; Falkowski, Adam; Fontes, Duarte; Romão, Jorge C; Silva, João P

    2017-01-01

    We discuss the validity of the Standard Model Effective Field Theory (SM EFT) as the low-energy effective theory for the two-Higgs-doublet Model (2HDM). Using the up-to-date Higgs signal strength measurements at the LHC, one can obtain a likelihood function for the Wilson coefficients of dimension-6 operators in the EFT Lagrangian. Given the matching between the 2HDM and the EFT, the constraints on the Wilson coefficients can be translated into constraints on the parameters of the 2HDM Lagrangian. We discuss under which conditions such a procedure correctly reproduces the true limits on the 2HDM. Finally, we employ the SM EFT to identify the pattern of the Higgs boson couplings that are needed to improve the fit to the current Higgs data. To this end, one needs, simultaneously, to increase the top Yukawa coupling, decrease the bottom Yukawa coupling, and induce a new contact interaction of the Higgs boson with gluons. We comment on how these modifications can be realized in the 2HDM extended by new colored particles.

  2. Optimal apodization design for medical ultrasound using constrained least squares part I: theory.

    PubMed

    Guenther, Drake A; Walker, William F

    2007-02-01

    Aperture weighting functions are critical design parameters in the development of ultrasound systems because beam characteristics affect the contrast and point resolution of the final output image. In previous work by our group, we developed a metric that quantifies a broadband imaging system's contrast resolution performance. We now use this metric to formulate a novel general ultrasound beamformer design method. In our algorithm, we use constrained least squares (CLS) techniques and a linear algebra formulation to describe the system point spread function (PSF) as a function of the aperture weightings. In one approach, we minimize the energy of the PSF outside a certain boundary and impose a linear constraint on the aperture weights. In a second approach, we minimize the energy of the PSF outside a certain boundary while imposing a quadratic constraint on the energy of the PSF inside the boundary. We present detailed analysis for an arbitrary ultrasound imaging system and discuss several possible applications of the CLS techniques, such as designing aperture weightings to maximize contrast resolution and improve the system depth of field.
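
    As a rough illustration of the second (quadratically constrained) formulation, the sketch below reduces the problem to a generalized eigenvalue problem. The matrices R_out and R_in (the PSF energy outside and inside the boundary expressed as quadratic forms in the aperture weights) are assumed names for illustration; the actual beamformer design uses the full broadband PSF model of the paper.

        import numpy as np
        from scipy.linalg import eigh

        def cls_apodization(R_out, R_in):
            """Sketch: minimize w^T R_out w (PSF energy outside the boundary)
            subject to w^T R_in w = 1 (fixed PSF energy inside the boundary).
            The minimizer is the generalized eigenvector of (R_out, R_in)
            with the smallest eigenvalue."""
            vals, vecs = eigh(R_out, R_in)      # eigenvalues in ascending order
            w = vecs[:, 0]                      # optimal aperture weights
            w /= np.sqrt(w @ R_in @ w)          # enforce the quadratic constraint
            return w

        # Toy example with random symmetric positive-definite matrices
        rng = np.random.default_rng(0)
        n = 16
        A = rng.standard_normal((n, n)); R_out = A @ A.T + n * np.eye(n)
        B = rng.standard_normal((n, n)); R_in = B @ B.T + n * np.eye(n)
        w = cls_apodization(R_out, R_in)
        print(w @ R_in @ w)                     # ~1.0, constraint satisfied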

  3. Swarm formation control utilizing elliptical surfaces and limiting functions.

    PubMed

    Barnes, Laura E; Fields, Mary Anne; Valavanis, Kimon P

    2009-12-01

    In this paper, we present a strategy for organizing swarms of unmanned vehicles into a formation by utilizing artificial potential fields that were generated from normal and sigmoid functions. These functions construct the surface on which swarm members travel, controlling the overall swarm geometry and the individual member spacing. Nonlinear limiting functions are defined to provide tighter swarm control by modifying and adjusting a set of control variables that force the swarm to behave according to set constraints, formation, and member spacing. The artificial potential functions and limiting functions are combined to control swarm formation, orientation, and swarm movement as a whole. Parameters are chosen based on desired formation and user-defined constraints. This approach is computationally efficient and scales well to different swarm sizes, to heterogeneous systems, and to both centralized and decentralized swarm models. Simulation results are presented for swarms of 10 and 40 robots following circle, ellipse, and wedge formations. Experimental results are included to demonstrate the applicability of the approach on a swarm of four custom-built unmanned ground vehicles (UGVs).
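
    A minimal sketch of the kind of potential-field control law described above, assuming Gaussian repulsion from neighbors, a tanh-shaped attraction toward the formation goal, and a speed-limiting function. The specific function forms, gains, and names are illustrative, not the authors' controller.

        import numpy as np

        def limited_velocity(agent, neighbors, goal, v_max=1.0,
                             k_goal=1.0, k_sep=2.0, sigma=0.5):
            """Sketch of a potential-field velocity command for one swarm member:
            sigmoid-shaped attraction toward the formation goal plus Gaussian
            repulsion from nearby neighbors, clipped by a limiting function."""
            # Attraction: tanh of distance so the pull saturates far from the goal
            to_goal = goal - agent
            d = np.linalg.norm(to_goal) + 1e-9
            v = k_goal * np.tanh(d) * (to_goal / d)

            # Repulsion: gradient of Gaussian bumps centered on neighbors
            for nb in neighbors:
                diff = agent - nb
                r = np.linalg.norm(diff) + 1e-9
                v += k_sep * np.exp(-r**2 / (2 * sigma**2)) * (diff / r)

            # Nonlinear limiting function: bound the commanded speed
            speed = np.linalg.norm(v)
            return v if speed <= v_max else v * (v_max / speed)

        print(limited_velocity(np.array([0.0, 0.0]),
                               [np.array([0.2, 0.1])],
                               goal=np.array([3.0, 1.0])))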

  4. Vainshtein mechanism after GW170817

    NASA Astrophysics Data System (ADS)

    Crisostomi, Marco; Koyama, Kazuya

    2018-01-01

    The almost simultaneous detection of gravitational waves and a short gamma-ray burst from a neutron star merger has put a tight constraint on the difference between the speed of gravity and light. In the four-dimensional scalar-tensor theory with second-order equations of motion, the Horndeski theory, this translates into a significant reduction of the viable parameter space of the theory. Recently, extensions of Horndeski theory, which are free from Ostrogradsky ghosts despite the presence of higher-order derivatives in the equations of motion, have been identified and classified exploiting the degeneracy criterion. In these new theories, the fifth force mediated by the scalar field must be suppressed in order to evade the stringent Solar System constraints. We study the Vainshtein mechanism in the most general degenerate higher-order scalar-tensor theory in which light and gravity propagate at the same speed. We find that the Vainshtein mechanism generally works outside a matter source but it is broken inside matter, similarly to beyond Horndeski theories. This leaves interesting possibilities to test these theories that are compatible with gravitational wave observations using astrophysical objects.

  5. Higgs gravitational interaction, weak boson scattering, and Higgs inflation in Jordan and Einstein frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Jing; Xianyu, Zhong-Zhi; He, Hong-Jian, E-mail: jingren2004@gmail.com, E-mail: xianyuzhongzhi@gmail.com, E-mail: hjhe@tsinghua.edu.cn

    2014-06-01

    We study the gravitational interaction of the Higgs boson through the unique dimension-4 operator ξH†HR, with H the Higgs doublet and R the Ricci scalar curvature. We analyze the effect of this dimensionless nonminimal coupling ξ on weak gauge boson scattering in both the Jordan and Einstein frames. We explicitly establish the longitudinal-Goldstone equivalence theorem with nonzero ξ coupling in both frames, and analyze the unitarity constraints. We study the ξ-induced weak boson scattering cross sections at O(1-30) TeV scales, and propose to probe the Higgs-gravity coupling via weak boson scattering experiments at the LHC (14 TeV) and next-generation pp colliders (50-100 TeV). We further extend our study to Higgs inflation, and quantitatively derive the perturbative unitarity bounds via a coupled-channel analysis under a large field background at the inflation scale. We analyze the unitarity constraints on the parameter space in both conventional Higgs inflation and the improved models, in light of the recent BICEP2 data.
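
    For orientation, a schematic Jordan-frame action containing the nonminimal coupling term discussed above; signs and conventions vary between references, so this is a sketch rather than the paper's exact Lagrangian.

        % Schematic Jordan-frame action with the nonminimal coupling xi H^dagger H R
        % (M_P is the reduced Planck mass; conventions are illustrative).
        \begin{equation}
          S = \int d^4x\,\sqrt{-g}\left[ \frac{M_P^2}{2}R + \xi\,H^\dagger H\,R
              - g^{\mu\nu}(D_\mu H)^\dagger (D_\nu H) - V(H) + \cdots \right]
        \end{equation}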

  6. EEHG Performance and Scaling Laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penn, Gregory

    This note will calculate the idealized performance of echo-enabled harmonic generation (EEHG), explore the parameter settings, and look at constraints determined by incoherent synchrotron radiation (ISR) and intrabeam scattering (IBS). Another important effect, time-of-flight variations related to transverse emittance, is included here but without detailed explanation because it has been described previously. The importance of ISR and IBS is that they lead to random energy shifts that produce temporal shifts after the various beam manipulations required by the EEHG scheme. These effects give competing constraints on the beamline. For chicane magnets which are too compact for a given R56, the magnetic fields will be sufficiently strong that ISR will blur out the complex phase space structure of the echo scheme to the point where the bunching is strongly suppressed. The effect of IBS is more omnipresent, and requires an overall compact beamline. It is particularly challenging for the second pulse in a two-color attosecond beamline, due to the long delay between the first energy modulation and the modulator for the second pulse.

  7. Identification of linkages between potential Environmental and Social Impacts of Surface Mining and Ecosystem Services in Thar Coal field, Pakistan

    NASA Astrophysics Data System (ADS)

    Hina, A.

    2017-12-01

    Although Thar coal is recognized as one of the most abundant fossil fuel resources that could help address Pakistan's energy crisis, a major challenge remains in tackling the associated environmental and socio-ecological changes and their linkage to the provision of ecosystem services in the region. The study highlights the importance of undertaking ecosystem service assessment in all strategic environmental and social assessments of Thar coal field projects. A three-step approach has been formulated to link project impacts to the provision of important ecosystem services: 1) identification of impact indicators and parameters by analyzing the environmental and social impacts of surface mining in the Thar coal field through field investigation, literature review and stakeholder consultations; 2) ranking of parameters and criteria alternatives using a Multi-Criteria Decision Analysis (MCDA) tool, the AHP method; 3) using the ranked parameters as a proxy to prioritize important ecosystem services of the region. The ecosystem services prioritized because of both the high significance of project impacts and the high dependence of the project on them are as follows. Water is a key ecosystem service to be addressed and valued, owing to the area's high dependence on it for livestock, human wellbeing, agriculture and other purposes. Crop production related to agricultural services, in association with supply services such as soil quality, fertility, nutrient recycling and water retention, needs to be valued. Cultural services affected by land use change and by resettlement and rehabilitation are also recommended to be addressed. The results of the analysis outline a framework for identifying these linkages as key constraints to fostering green growth and development in Pakistan. Implementing such assessments in practice requires policy instruments and strategies that support human well-being and social inclusion while minimizing environmental degradation and the loss of ecosystem services. Keywords: Ecosystem service assessment; Environmental and Social Impact Assessment; coal mining; Thar Coal Field; Sustainable development
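
    Since the ranking step relies on the AHP, a short sketch of the standard AHP weight computation (principal eigenvector of a reciprocal pairwise-comparison matrix plus a consistency check) may be helpful; the comparison values below are purely illustrative, not the study's data.

        import numpy as np

        def ahp_weights(pairwise):
            """Sketch of the AHP ranking step: derive priority weights from a
            reciprocal pairwise-comparison matrix via its principal eigenvector,
            and report the consistency ratio (CR)."""
            A = np.asarray(pairwise, dtype=float)
            n = A.shape[0]
            vals, vecs = np.linalg.eig(A)
            k = np.argmax(vals.real)
            w = np.abs(vecs[:, k].real)
            w /= w.sum()                                  # priority weights
            lam_max = vals[k].real
            ci = (lam_max - n) / (n - 1)                  # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}.get(n, 1.32)  # random index
            return w, ci / ri                             # weights, consistency ratio

        # Toy comparison of three impact parameters (values are illustrative only)
        A = [[1, 3, 5],
             [1/3, 1, 2],
             [1/5, 1/2, 1]]
        w, cr = ahp_weights(A)
        print(w, cr)   # CR < 0.1 is conventionally considered acceptable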

  8. A Bayesian inversion for slip distribution of 1 Apr 2007 Mw8.1 Solomon Islands Earthquake

    NASA Astrophysics Data System (ADS)

    Chen, T.; Luo, H.

    2013-12-01

    On 1 April 2007 the megathrust Mw 8.1 Solomon Islands earthquake occurred in the southwest Pacific along the New Britain subduction zone. A total of 102 vertical displacement measurements over the southeastern end of the rupture zone, obtained from two field surveys after this event, provide a unique constraint for slip distribution inversion. In conventional inversion methods (such as bounded variable least squares), the smoothing parameter that determines the relative weight placed on fitting the data versus smoothing the slip distribution is often subjectively selected at the bend of the trade-off curve. Here a fully probabilistic inversion method [Fukuda, 2008] is applied to estimate the distributed slip and the smoothing parameter objectively. The joint posterior probability density function of the distributed slip and the smoothing parameter is formulated under a Bayesian framework and sampled with a Markov chain Monte Carlo method. We estimate the spatial distribution of dip slip associated with the 1 April 2007 Solomon Islands earthquake with this method. Early results show a shallower dip angle than previous studies and highly variable dip slip both along strike and down dip.
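
    As a rough sketch of the hierarchical Bayesian sampling described above (jointly estimating the slip vector and the smoothing hyperparameter), the code below implements a Gibbs sampler for a linearized problem. The Green's function matrix G, smoothing operator L, noise level sigma, and Gamma hyperprior are illustrative assumptions, not the study's actual setup, and the positivity constraint on slip is omitted.

        import numpy as np

        def gibbs_slip_inversion(G, d, L, sigma=0.01, n_iter=2000, seed=0):
            """Sketch of a fully probabilistic slip inversion: jointly sample the
            slip vector s and the smoothing hyperparameter alpha from
              p(s, alpha | d) ~ N(d | G s, sigma^2 I) * N(s | 0, (alpha L^T L)^-1) * p(alpha)
            using Gibbs updates (Gaussian for s, Gamma for alpha)."""
            rng = np.random.default_rng(seed)
            m = G.shape[1]
            LtL, GtG, Gtd = L.T @ L, G.T @ G, G.T @ d
            a0, b0 = 1e-3, 1e-3                     # weak Gamma prior on alpha
            alpha, samples = 1.0, []
            for _ in range(n_iter):
                # s | alpha, d  ~  N(mu, Sigma)
                Sigma = np.linalg.inv(GtG / sigma**2 + alpha * LtL)
                Sigma = 0.5 * (Sigma + Sigma.T)     # symmetrize for numerical safety
                mu = Sigma @ (Gtd / sigma**2)
                s = rng.multivariate_normal(mu, Sigma)
                # alpha | s  ~  Gamma(a0 + m/2, rate = b0 + ||L s||^2 / 2)
                alpha = rng.gamma(a0 + 0.5 * m, 1.0 / (b0 + 0.5 * s @ LtL @ s))
                samples.append((s, alpha))
            return samples

        # Toy usage: 20 observations, 10 slip patches, first-difference smoothing
        rng0 = np.random.default_rng(1)
        G = rng0.standard_normal((20, 10)); s_true = np.linspace(0, 1, 10)
        d = G @ s_true + 0.01 * rng0.standard_normal(20)
        L = np.eye(10) - np.eye(10, k=1)
        samples = gibbs_slip_inversion(G, d, L)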

  9. MAGNETOROTATIONAL TURBULENCE TRANSPORTS ANGULAR MOMENTUM IN STRATIFIED DISKS WITH LOW MAGNETIC PRANDTL NUMBER BUT MAGNETIC REYNOLDS NUMBER ABOVE A CRITICAL VALUE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oishi, Jeffrey S.; Mac Low, Mordecai-Mark, E-mail: jsoishi@stanford.edu, E-mail: mordecai@amnh.org

    2011-10-10

    The magnetorotational instability (MRI) may dominate outward transport of angular momentum in accretion disks, allowing material to fall onto the central object. Previous work has established that the MRI can drive a mean-field dynamo, possibly leading to a self-sustaining accretion system. Recently, however, simulations of the scaling of the angular momentum transport parameter α_SS with the magnetic Prandtl number Pm have cast doubt on the ability of the MRI to transport astrophysically relevant amounts of angular momentum in real disk systems. Here, we use simulations including explicit physical viscosity and resistivity to show that when vertical stratification is included, mean-field dynamo action operates, driving the system to a configuration in which the magnetic field is not fully helical. This relaxes the constraints on the generated field provided by magnetic helicity conservation, allowing the generation of a mean field on timescales independent of the resistivity. Our models demonstrate the existence of a critical magnetic Reynolds number Rm_crit, below which transport becomes strongly Pm-dependent and chaotic, but above which the transport is steady and Pm-independent. Prior simulations showing Pm dependence had Rm < Rm_crit. We conjecture that this steady regime is possible because the mean-field dynamo is not helicity-limited and thus does not depend on the details of the helicity ejection process. Scaling to realistic astrophysical parameters suggests that disks around both protostars and stellar mass black holes have Rm >> Rm_crit. Thus, we suggest that the strong Pm dependence seen in recent simulations does not occur in real systems.

  10. Magnetorotational Turbulence Transports Angular Momentum in Stratified Disks with Low Magnetic Prandtl Number but Magnetic Reynolds Number above a Critical Value

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oishi, Jeffrey S. (KIPAC, Menlo Park); Mac Low, Mordecai-Mark

    2012-02-14

    The magnetorotational instability (MRI) may dominate outward transport of angular momentum in accretion disks, allowing material to fall onto the central object. Previous work has established that the MRI can drive a mean-field dynamo, possibly leading to a self-sustaining accretion system. Recently, however, simulations of the scaling of the angular momentum transport parameter α_SS with the magnetic Prandtl number Pm have cast doubt on the ability of the MRI to transport astrophysically relevant amounts of angular momentum in real disk systems. Here, we use simulations including explicit physical viscosity and resistivity to show that when vertical stratification is included, mean-field dynamo action operates, driving the system to a configuration in which the magnetic field is not fully helical. This relaxes the constraints on the generated field provided by magnetic helicity conservation, allowing the generation of a mean field on timescales independent of the resistivity. Our models demonstrate the existence of a critical magnetic Reynolds number Rm_crit, below which transport becomes strongly Pm-dependent and chaotic, but above which the transport is steady and Pm-independent. Prior simulations showing Pm dependence had Rm < Rm_crit. We conjecture that this steady regime is possible because the mean-field dynamo is not helicity-limited and thus does not depend on the details of the helicity ejection process. Scaling to realistic astrophysical parameters suggests that disks around both protostars and stellar mass black holes have Rm >> Rm_crit. Thus, we suggest that the strong Pm dependence seen in recent simulations does not occur in real systems.

  11. Primordial black hole production in Critical Higgs Inflation

    NASA Astrophysics Data System (ADS)

    Ezquiaga, Jose María; García-Bellido, Juan; Ruiz Morales, Ester

    2018-01-01

    Primordial Black Holes (PBH) arise naturally from high peaks in the curvature power spectrum of near-inflection-point single-field inflation, and could constitute today the dominant component of the dark matter in the universe. In this letter we explore the possibility that a broad spectrum of PBH is formed in models of Critical Higgs Inflation (CHI), where the near-inflection point is related to the critical value of the RGE running of both the Higgs self-coupling λ (μ) and its non-minimal coupling to gravity ξ (μ). We show that, for a wide range of model parameters, a half-domed-shaped peak in the matter spectrum arises at sufficiently small scales that it passes all the constraints from large scale structure observations. The predicted cosmic microwave background spectrum at large scales is in agreement with Planck 2015 data, and has a relatively large tensor-to-scalar ratio that may soon be detected by B-mode polarization experiments. Moreover, the wide peak in the power spectrum gives an approximately lognormal PBH distribution in the range of masses 0.01-100 M⊙, which could explain the LIGO merger events, while passing all present PBH observational constraints. The stochastic background of gravitational waves coming from the unresolved black-hole-binary mergers could also be detected by LISA or PTA. Furthermore, the parameters of the CHI model are consistent, within 2σ, with the measured Higgs parameters at the LHC and their running. Future measurements of the PBH mass spectrum could allow us to obtain complementary information about the Higgs couplings at energies well above the EW scale, and thus constrain new physics beyond the Standard Model.

  12. Constraints on Non-flat Cosmologies with Massive Neutrinos after Planck 2015

    NASA Astrophysics Data System (ADS)

    Chen, Yun; Ratra, Bharat; Biesiada, Marek; Li, Song; Zhu, Zong-Hong

    2016-10-01

    We investigate two dark energy cosmological models (i.e., the ΛCDM and ϕCDM models) with massive neutrinos assuming two different neutrino mass hierarchies in both the spatially flat and non-flat scenarios, where in the ϕCDM model the scalar field possesses an inverse power-law potential, V(ϕ) ∝ ϕ^{-α} (α > 0). Cosmic microwave background data from Planck 2015, baryon acoustic oscillation data from 6dFGS, SDSS-MGS, BOSS-LOWZ and BOSS CMASS-DR11, the joint light-curve analysis compilation of SNe Ia apparent magnitude observations, and the Hubble Space Telescope H_0 prior are jointly employed to constrain the model parameters. We first determine constraints assuming three species of degenerate massive neutrinos. In the spatially flat (non-flat) ΛCDM model, the sum of neutrino masses is bounded as Σm_ν < 0.165 (0.299) eV at 95% confidence level (CL). Correspondingly, in the flat (non-flat) ϕCDM model, we find Σm_ν < 0.164 (0.301) eV at 95% CL. The inclusion of spatial curvature as a free parameter results in a significant broadening of the confidence regions for Σm_ν and other parameters. In the scenario where the total neutrino mass is dominated by the heaviest neutrino mass eigenstate, we obtain conclusions similar to those obtained in the degenerate neutrino mass scenario. In addition, the results show that the bounds on Σm_ν based on the two different neutrino mass hierarchies differ insignificantly in the spatially flat case for both the ΛCDM and ϕCDM models; however, the corresponding differences are larger in the non-flat case.
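
    For reference, a minimal statement of the ϕCDM scalar-field dynamics assumed above, in a flat FRW background; the normalization constant κ is introduced here only for illustration.

        % Inverse power-law potential for the phiCDM scalar field and the
        % corresponding Klein-Gordon equation in a flat FRW background
        % (kappa is an illustrative normalization constant).
        \begin{align}
          V(\phi) &= \kappa\,\phi^{-\alpha}, \qquad \alpha > 0, \\
          \ddot{\phi} + 3H\dot{\phi} + \frac{dV}{d\phi} &= 0
          \quad\Longrightarrow\quad
          \ddot{\phi} + 3H\dot{\phi} - \alpha\kappa\,\phi^{-\alpha-1} = 0 .
        \end{align}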

  13. Disentangling dark energy and cosmic tests of gravity from weak lensing systematics

    NASA Astrophysics Data System (ADS)

    Laszlo, Istvan; Bean, Rachel; Kirk, Donnacha; Bridle, Sarah

    2012-06-01

    We consider the impact of key astrophysical and measurement systematics on constraints on dark energy and modifications to gravity on cosmic scales. We focus on upcoming photometric ‘stage III’ and ‘stage IV’ large-scale structure surveys such as the Dark Energy Survey (DES), the Subaru Measurement of Images and Redshifts survey, the Euclid survey, the Large Synoptic Survey Telescope (LSST) and Wide Field Infra-Red Space Telescope (WFIRST). We illustrate the different redshift dependencies of gravity modifications compared to intrinsic alignments, the main astrophysical systematic. The way in which systematic uncertainties, such as galaxy bias and intrinsic alignments, are modelled can change dark energy equation-of-state parameter and modified gravity figures of merit by a factor of 4. The inclusion of cross-correlations of cosmic shear and galaxy position measurements helps reduce the loss of constraining power from the lensing shear surveys. When forecasts for Planck cosmic microwave background and stage IV surveys are combined, constraints on the dark energy equation-of-state parameter and modified gravity model are recovered, relative to those from shear data with no systematic uncertainties, provided fewer than 36 free parameters in total are used to describe the galaxy bias and intrinsic alignment models as a function of scale and redshift. While some uncertainty in the intrinsic alignment (IA) model can be tolerated, it is going to be important to be able to parametrize IAs well in order to realize the full potential of upcoming surveys. To facilitate future investigations, we also provide a fitting function for the matter power spectrum arising from the phenomenological modified gravity model we consider.

  14. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints

    PubMed Central

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-01-01

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to provide precise error correction across a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation data. Second, grid constraints and spatial consensus checks are applied to all predictive results and current measurements to remove outliers. Navigation data that satisfy a stationary stochastic process are then fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we perform extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method significantly smooths small jumps in bias and considerably reduces accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art approaches on the same dataset, and the new data fusion method has been deployed in our driverless car. PMID:26927108
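
    A minimal sketch of the outlier-rejection idea described above, assuming a single scalar coordinate, a hand-rolled AR(2) predictor standing in for the ARMA model bank, and an equal-weight fusion step; all names, thresholds, and weights are illustrative, not the paper's implementation.

        import numpy as np

        def ar2_predict(history):
            """Fit an AR(2) model x_t = c + a1*x_{t-1} + a2*x_{t-2} by least
            squares and return the one-step-ahead prediction."""
            x = np.asarray(history, dtype=float)
            X = np.column_stack([np.ones(len(x) - 2), x[1:-1], x[:-2]])
            c, a1, a2 = np.linalg.lstsq(X, x[2:], rcond=None)[0]
            return c + a1 * x[-1] + a2 * x[-2]

        def fuse_position(history, gps_meas, grid_size=0.5):
            """Keep the GPS fix only if it agrees with the model prediction to
            within the grid size; otherwise fall back to dead reckoning
            (the prediction), treating the fix as an outlier."""
            pred = ar2_predict(history)
            if abs(gps_meas - pred) > grid_size:      # grid/consensus check fails
                return pred                           # reject GPS fix as outlier
            return 0.5 * pred + 0.5 * gps_meas        # simple average fusion

        track = [0.0, 0.11, 0.21, 0.30, 0.41, 0.50]
        print(fuse_position(track, gps_meas=0.62))    # accepted and fused
        print(fuse_position(track, gps_meas=3.00))    # rejected as an outlier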

  15. An Accurate GPS-IMU/DR Data Fusion Method for Driverless Car Based on a Set of Predictive Models and Grid Constraints.

    PubMed

    Wang, Shiyao; Deng, Zhidong; Yin, Gang

    2016-02-24

    A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to provide precise error correction across a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations with different structural parameters to build maximum likelihood models of raw navigation data. Second, grid constraints and spatial consensus checks are applied to all predictive results and current measurements to remove outliers. Navigation data that satisfy a stationary stochastic process are then fused to achieve accurate localization results. Third, the standard deviation of the multimodal data fusion can be pre-specified by the grid size. Finally, we perform extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method significantly smooths small jumps in bias and considerably reduces accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art approaches on the same dataset, and the new data fusion method has been deployed in our driverless car.

  16. Fundamental Activity Constraints Lead to Specific Interpretations of the Connectome.

    PubMed

    Schuecker, Jannis; Schmidt, Maximilian; van Albada, Sacha J; Diesmann, Markus; Helias, Moritz

    2017-02-01

    The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function.
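
    A toy sketch of the mean-field screening step described above: iterate a rate equation to its fixed point and test the two activity constraints (no quiescent populations, local stability). The sigmoidal gain function, the two-population coupling matrix, and the thresholds are illustrative stand-ins for the leaky integrate-and-fire transfer function and multi-area connectivity used in the study.

        import numpy as np

        def phi(u):
            """Illustrative sigmoidal gain function (rates in 1/s)."""
            return 50.0 / (1.0 + np.exp(-u))

        def mean_field_check(W, nu_ext, n_iter=500):
            """Iterate the mean-field rate equation nu = phi(W nu + nu_ext) to a
            fixed point, then test: (a) no quiescent populations and
            (b) local (linear) stability of the fixed point."""
            n = W.shape[0]
            nu = np.ones(n)
            for _ in range(n_iter):
                nu = phi(W @ nu + nu_ext)
            # Jacobian of the map at the fixed point: J = diag(phi'(u)) W
            u = W @ nu + nu_ext
            dphi = 50.0 * np.exp(-u) / (1.0 + np.exp(-u))**2
            J = dphi[:, None] * W
            stable = np.all(np.abs(np.linalg.eigvals(J)) < 1.0)
            return nu, bool(np.all(nu > 0.05)), bool(stable)

        W = np.array([[0.02, -0.05], [0.03, -0.06]])   # toy 2-population coupling
        print(mean_field_check(W, nu_ext=np.array([0.5, 0.5])))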

  17. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems with Application to Porous Medium Flow

    NASA Astrophysics Data System (ADS)

    Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.

    2015-12-01

    We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
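
    The randomized trace estimation mentioned above can be illustrated with a standard Hutchinson estimator, which needs only matrix-vector products with the operator (here, applications of the approximate posterior covariance); the dense toy matrix below is only for checking the estimate, not part of the OED pipeline.

        import numpy as np

        def hutchinson_trace(apply_A, n, n_samples=100, seed=0):
            """Randomized trace estimator: E[z^T A z] = tr(A) for Rademacher
            probe vectors z, so averaging z^T (A z) over a few probes estimates
            tr(A) using only matrix-vector products."""
            rng = np.random.default_rng(seed)
            est = 0.0
            for _ in range(n_samples):
                z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
                est += z @ apply_A(z)
            return est / n_samples

        # Toy check against the exact trace of a dense SPD matrix
        rng = np.random.default_rng(1)
        M = rng.standard_normal((200, 200)); A = M @ M.T
        print(hutchinson_trace(lambda v: A @ v, 200), np.trace(A))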

  18. Non-analyticity of holographic Rényi entropy in Lovelock gravity

    NASA Astrophysics Data System (ADS)

    Puletti, V. Giangreco M.; Pourhasan, Razieh

    2017-08-01

    We compute holographic Rényi entropies for spherical entangling surfaces on the boundary while considering third order Lovelock gravity with negative cosmological constant in the bulk. Our study shows that third order Lovelock black holes with hyperbolic event horizon are unstable, and at low temperatures those with smaller mass are favoured, giving rise to first order phase transitions in the bulk. We determine regions in the Lovelock parameter space in arbitrary dimensions, where bulk phase transitions happen and where boundary causality constraints are met. We show that each of these points corresponds to a dual boundary conformal field theory whose Rényi entropy exhibits a kink at a certain critical index n.
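
    For reference, the standard definition of the Rényi entropy of order n for the reduced density matrix ρ_A of the entangling region, whose n → 1 limit recovers the entanglement (von Neumann) entropy.

        % Renyi entropy of order n for the reduced density matrix rho_A;
        % the n -> 1 limit gives the entanglement entropy.
        \begin{equation}
          S_n = \frac{1}{1-n}\,\log \operatorname{Tr}\rho_A^{\,n},
          \qquad
          S_1 = \lim_{n\to 1} S_n = -\operatorname{Tr}\!\left(\rho_A \log\rho_A\right)
        \end{equation}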

  19. Phase-only asymmetric optical cryptosystem based on random modulus decomposition

    NASA Astrophysics Data System (ADS)

    Xu, Hongfeng; Xu, Wenhui; Wang, Shuaihua; Wu, Shaofan

    2018-06-01

    We propose a phase-only asymmetric optical cryptosystem based on random modulus decomposition (RMD). The cryptosystem is presented for effectively improving the capacity to resist various attacks, including the attack of iterative algorithms. On the one hand, RMD and phase encoding are combined to remove the constraints that can be used in the attacking process. On the other hand, the security keys (geometrical parameters) introduced by Fresnel transform can increase the key variety and enlarge the key space simultaneously. Numerical simulation results demonstrate the strong feasibility, security and robustness of the proposed cryptosystem. This cryptosystem will open up many new opportunities in the application fields of optical encryption and authentication.

  20. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

    Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
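
    A schematic sketch of the hierarchical screening strategy described above, assuming a toy model and made-up constraint classes purely for illustration (the study uses DHSVM simulations, regional signatures, streamflow and snow observations, and groundwater pattern expectations).

        import numpy as np

        def hierarchical_filter(param_sets, simulate, constraints):
            """Run the model for each parameter set and keep only sets that pass
            every constraint class in order (regional signatures, then local
            observations, then expert-knowledge pattern checks)."""
            behavioral = list(param_sets)
            for name, passes in constraints:      # apply constraint classes in order
                behavioral = [p for p in behavioral if passes(simulate(p))]
                print(f"after '{name}': {len(behavioral)} behavioral sets remain")
            return behavioral

        # Toy example: 10 000 random parameter sets, a dummy model, three classes
        rng = np.random.default_rng(0)
        params = rng.uniform(0, 1, size=(10_000, 3))
        simulate = lambda p: {"runoff_ratio": p[0], "nse": p[1], "wt_depth": p[2]}
        constraints = [
            ("regional signature",  lambda s: 0.2 < s["runoff_ratio"] < 0.6),
            ("hydrograph fit",      lambda s: s["nse"] > 0.7),
            ("groundwater pattern", lambda s: s["wt_depth"] < 0.5),
        ]
        behavioral = hierarchical_filter(params, simulate, constraints)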
