Sample records for constrained parameter space

  1. Astrophysical Model Selection in Gravitational Wave Astronomy

    NASA Technical Reports Server (NTRS)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
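    The hierarchical idea described in this abstract can be sketched with a toy model (all numbers and the Gaussian population shape are hypothetical, not the paper's actual chirp-mass model): each source's true parameter is drawn from a population distribution, and marginalizing the per-source parameters analytically leaves a likelihood for the population parameters alone.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy hierarchical inference (illustrative only): the population model is a
# Gaussian chirp-mass distribution N(mu, tau^2); each resolved binary is
# measured with a known per-source noise sigma_i. Marginalizing the true
# masses analytically gives the marginal model x_i ~ N(mu, tau^2 + sigma_i^2).
mu_true, tau_true = 0.25, 0.04          # population mean and spread (illustrative)
n_src = 2000
m_true = rng.normal(mu_true, tau_true, n_src)
sigma = rng.uniform(0.01, 0.05, n_src)  # per-source measurement errors
x = m_true + rng.normal(0.0, sigma)     # noisy per-source estimates

# Joint log-posterior (flat prior) for the population parameters on a grid
mus = np.linspace(0.20, 0.30, 101)
taus = np.linspace(0.01, 0.08, 141)
logpost = np.zeros((len(mus), len(taus)))
for i, mu in enumerate(mus):
    for j, tau in enumerate(taus):
        var = tau**2 + sigma**2
        logpost[i, j] = -0.5 * np.sum((x - mu) ** 2 / var + np.log(var))

ibest, jbest = np.unravel_index(np.argmax(logpost), logpost.shape)
mu_hat, tau_hat = mus[ibest], taus[jbest]   # recover values near the injected truth
```

    With a few thousand resolved sources the population parameters are pinned down to the percent level, which is the qualitative behaviour the abstract reports for the galactic-binary population.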

  2. Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.

    PubMed

    Mulder, Joris

    2014-02-01

    Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
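    The encompassing-prior construction behind such Bayes factors can be sketched numerically (a generic illustration with hypothetical data, not the paper's proposed corrections): the Bayes factor of an inequality-constrained hypothesis against the unconstrained model is the ratio of posterior to prior probability that the constraint holds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative test of H1: mu1 < mu2 against its complement, for normal data
# with known unit variance and a vague N(0, 10^2) prior on each mean.
def inequality_bayes_factors(x1, x2, prior_sd=10.0, n_draws=100_000):
    def posterior_draws(x):
        # Conjugate normal-normal update (data variance fixed at 1)
        prec = 1.0 / prior_sd**2 + len(x)
        return rng.normal(x.sum() / prec, np.sqrt(1.0 / prec), n_draws)

    mu1, mu2 = posterior_draws(x1), posterior_draws(x2)
    post_prob = np.mean(mu1 < mu2)         # P(constraint | data)
    prior_prob = 0.5                       # P(constraint) under the symmetric prior
    bf_1u = post_prob / prior_prob         # H1 vs. unconstrained model
    bf_1c = post_prob / (1.0 - post_prob)  # H1 vs. its complement
    return bf_1u, bf_1c

x1 = np.linspace(-1.0, 1.0, 50)            # sample mean 0.0
x2 = x1 + 0.5                              # sample mean 0.5
bf_1u, bf_1c = inequality_bayes_factors(x1, x2)
```

    Note that bf_1u can never exceed 1/prior_prob = 2 no matter how strong the evidence, which is why the comparison against the complement (bf_1c) is usually reported; the bounded-evidence behaviour discussed in the abstract is of a similar flavour.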

  3. Estimating free-body modal parameters from tests of a constrained structure

    NASA Technical Reports Server (NTRS)

    Cooley, Victor M.

    1993-01-01

    Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.

  4. Trajectory Design Strategies for the NGST L2 Libration Point Mission

    NASA Technical Reports Server (NTRS)

    Folta, David; Cooley, Steven; Howell, Kathleen; Bauer, Frank H.

    2001-01-01

    The Origins' Next Generation Space Telescope (NGST) trajectory design is addressed in light of improved methods for attaining constrained orbit parameters and their control at the exterior collinear libration point, L2. The use of a dynamical systems approach, state-space equations for initial libration orbit control, and optimization to achieve constrained orbit parameters are emphasized. The NGST trajectory design encompasses a direct transfer and orbit maintenance under a constant acceleration. A dynamical systems approach can be used to provide a biased orbit and stationkeeping maintenance method that incorporates the constraint of a single axis correction scheme.

  5. Recovering a Probabilistic Knowledge Structure by Constraining Its Parameter Space

    ERIC Educational Resources Information Center

    Stefanutti, Luca; Robusto, Egidio

    2009-01-01

    In the Basic Local Independence Model (BLIM) of Doignon and Falmagne ("Knowledge Spaces," Springer, Berlin, 1999), the probabilistic relationship between the latent knowledge states and the observable response patterns is established by the introduction of a pair of parameters for each of the problems: a lucky guess probability and a careless…

  6. Parameter estimation uncertainty: Comparing apples and apples?

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2012-12-01

    Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
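    The solution-space/null-space decomposition at the heart of NSMC can be sketched for a toy linear model (hypothetical sizes and matrices; real NSMC implementations work with the Jacobian of a calibrated nonlinear model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model: obs = J @ params, with more parameters (8) than
# informative directions (5), so J has a non-trivial null space.
n_obs, n_par = 5, 8
J = rng.normal(size=(n_obs, n_par))          # sensitivity (Jacobian) matrix
p_cal = rng.normal(size=n_par)               # a single "calibrated" parameter set
obs = J @ p_cal

# The SVD splits parameter space: right singular vectors with non-zero
# singular values span the solution space; the remainder span the null space.
U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10 * s[0]))
V_null = Vt[rank:].T                         # null-space basis, shape (n_par, n_par - rank)

# NSMC: add random null-space perturbations to the calibrated parameters;
# every ensemble member reproduces the calibration data to numerical precision.
ensemble = [p_cal + V_null @ rng.normal(size=n_par - rank) for _ in range(100)]
misfits = [np.linalg.norm(J @ p - obs) for p in ensemble]
```

    The ensemble spreads widely in parameter space while the misfit stays at machine precision, which is exactly why the resulting predictive distribution can remain anchored to the single calibrated field, the bias the M-NSMC variant is designed to overcome.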

  7. Redshift Space Distortion on the Small Scale Clustering of Structure

    NASA Astrophysics Data System (ADS)

    Park, Hyunbae; Sabiu, Cristiano; Li, Xiao-dong; Park, Changbom; Kim, Juhan

    2018-01-01

    The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a redshift-dependent scaling in the galaxy distribution. The shape of the two-point correlation function of galaxies exhibits a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. In our previous works, we made use of this geometrical distortion to constrain the values of cosmological parameters governing the expansion history of the universe. The current work continues this line of research as a strategy to constrain cosmological parameters using redshift-invariant physical quantities. We now aim to understand the redshift evolution of the full shape of the small-scale, anisotropic galaxy clustering and to give a firmer theoretical footing to our previous works.

  8. Constraining parameters of white-dwarf binaries using gravitational-wave and electromagnetic observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Sweta; Nelemans, Gijs, E-mail: s.shah@astro.ru.nl

    The space-based gravitational wave (GW) detector, evolved Laser Interferometer Space Antenna (eLISA), is expected to observe millions of compact Galactic binaries that populate our Milky Way. GW measurements obtained from the eLISA detector are in many cases complementary to possible electromagnetic (EM) data. In our previous papers, we have shown that EM data can significantly enhance our knowledge of the astrophysically relevant GW parameters of Galactic binaries, such as the amplitude and inclination. This is possible due to the presence of strong correlations between GW parameters that are measurable by both EM and GW observations, for example, the inclination and sky position. In this paper, we quantify the constraints on the physical parameters of the white-dwarf binaries, i.e., the individual masses, chirp mass, and the distance to the source, that can be obtained by combining the full set of EM measurements, such as the inclination, radial velocities, distances, and/or individual masses, with the GW measurements. We find the following 2σ fractional uncertainties in the parameters of interest. The EM observations of distance constrain the chirp mass to ∼15%-25%, whereas EM data of a single-lined spectroscopic binary constrain the secondary mass and the distance to within factors of two to ∼40%. The single-line spectroscopic data complemented with distance constrain the secondary mass to ∼25%-30%. Finally, EM data on a double-lined spectroscopic binary constrain the distance to ∼30%. All of these constraints depend on the inclination and the signal strength of the binary systems. We also find that the EM information on distance and/or the radial velocity is the most useful in improving the estimate of the secondary mass, inclination, and/or distance.

  9. Constraining sterile neutrinos with AMANDA and IceCube atmospheric neutrino data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esmaili, Arman; Peres, O.L.G.; Halzen, Francis, E-mail: aesmaili@ifi.unicamp.br, E-mail: halzen@icecube.wisc.edu, E-mail: orlando@ifi.unicamp.br

    2012-11-01

    We demonstrate that atmospheric neutrino data accumulated with the AMANDA and the partially deployed IceCube experiments constrain the allowed parameter space for a hypothesized fourth sterile neutrino beyond the reach of a combined analysis of all other experiments, for Δm²₄₁ ≲ 1 eV². Although the IceCube data dominate the statistics of the analysis, the advantage of combining the AMANDA and IceCube data is a partial remedy for as-yet-unknown instrumental systematic uncertainties. We also illustrate the sensitivity of the completed IceCube detector, which is now taking data, to the parameter space of the 3+1 model.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petiteau, Antoine; Babak, Stanislav; Sesana, Alberto

    Gravitational wave (GW) signals from coalescing massive black hole (MBH) binaries could be used as standard sirens to measure cosmological parameters. The future space-based GW observatory Laser Interferometer Space Antenna (LISA) will detect up to a hundred such events, providing very accurate measurements of their luminosity distances. To constrain the cosmological parameters, we also need to measure the redshift of the galaxy (or cluster of galaxies) hosting the merger. This requires the identification of a distinctive electromagnetic event associated with the binary coalescence. However, putative electromagnetic signatures may be too weak to be observed. Instead, we study here the possibility of constraining the cosmological parameters by enforcing statistical consistency between all the possible hosts detected within the measurement error box of a few dozen low-redshift (z < 3) events. We construct MBH populations using merger tree realizations of the dark matter hierarchy in a ΛCDM universe, and we use data from the Millennium simulation to model the galaxy distribution in the LISA error box. We show that, assuming that all the other cosmological parameters are known, the parameter w describing the dark energy equation of state can be constrained to a 4%-8% level (2σ error), competitive with current uncertainties obtained by type Ia supernovae measurements, providing an independent test of our cosmological model.

  11. Electric dipole moments in natural supersymmetry

    NASA Astrophysics Data System (ADS)

    Nakai, Yuichiro; Reece, Matthew

    2017-08-01

    We discuss electric dipole moments (EDMs) in the framework of CP-violating natural supersymmetry (SUSY). Recent experimental results have significantly tightened constraints on the EDMs of electrons and of mercury, and substantial further progress is expected in the near future. We assess how these results constrain the parameter space of natural SUSY. In addition to our discussion of SUSY, we provide a set of general formulas for two-loop fermion EDMs, which can be applied to a wide range of models of new physics. In the SUSY context, the two-loop effects of stops and charginos respectively constrain the phases of A t μ and M 2 μ to be small in the natural part of parameter space. If the Higgs mass is lifted to 125 GeV by a new tree-level superpotential interaction and soft term with CP-violating phases, significant EDMs can arise from the two-loop effects of W bosons and tops. We compare the bounds arising from EDMs to those from other probes of new physics including colliders, b → sγ, and dark matter searches. Importantly, improvements in reach not only constrain higher masses, but require the phases to be significantly smaller in the natural parameter space at low mass. The required smallness of phases sharpens the CP problem of natural SUSY model building.

  12. Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.

    PubMed

    Cabrera, M E; Casas, J A; Delgado, A

    2012-01-13

    The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles, and therefore an upper bound on the Higgs mass translates into an upper bound on the masses of the superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, big regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10¹¹ GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.

  13. Maximizing the information learned from finite data selects a simple model

    NASA Astrophysics Data System (ADS)

    Mattingly, Henry H.; Transtrum, Mark K.; Abbott, Michael C.; Machta, Benjamin B.

    2018-02-01

    We use the language of uninformative Bayesian prior choice to study the selection of appropriately simple effective models. We advocate for the prior which maximizes the mutual information between parameters and predictions, learning as much as possible from limited data. When many parameters are poorly constrained by the available data, we find that this prior puts weight only on boundaries of the parameter space. Thus, it selects a lower-dimensional effective theory in a principled way, ignoring irrelevant parameter directions. In the limit where there are sufficient data to tightly constrain any number of parameters, this reduces to the Jeffreys prior. However, we argue that this limit is pathological when applied to the hyperribbon parameter manifolds generic in science, because it leads to dramatic dependence on effects invisible to experiment.

  14. Direct reconstruction of dark energy.

    PubMed

    Clarkson, Chris; Zunckel, Caroline

    2010-05-28

    An important issue in cosmology is reconstructing the effective dark energy equation of state directly from observations. With so few physically motivated models, future dark energy studies cannot be based solely on constraining a dark energy parameter space. We present a new nonparametric method which can accurately reconstruct a wide variety of dark energy behavior with no prior assumptions about it. It is simple, quick and relatively accurate, and involves no expensive explorations of parameter space. The technique uses principal component analysis and a combination of information criteria to identify real features in the data, and tailors the fitting functions to pick up trends and smooth over noise. We find that we can constrain a large variety of w(z) models to within 10%-20% at redshifts z ≲ 1 using just SNAP-quality data.
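    A minimal sketch of the principal-component part of such an approach (toy numbers and a toy response matrix, not the authors' pipeline): eigenvectors of a Fisher-like matrix rank the modes of w(z) by how well the data constrain them, and truncating the expansion smooths over noise without assuming a parametric form.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: w(z) in 20 redshift bins; the "data" respond to integrals of w
# (as distances do), so the Fisher matrix couples bins and its eigenvectors
# are smooth, ordered from best- to worst-constrained.
nbin = 20
z = np.linspace(0.05, 1.0, nbin)
w_true = -1.0 + 0.3 * z                        # mildly evolving toy model
w_obs = w_true + rng.normal(0.0, 0.1, nbin)    # noisy binned estimate

B = np.tril(np.ones((nbin, nbin)))             # cumulative (integral-like) response
F = B.T @ B                                    # toy Fisher matrix (unit data noise)
evals, evecs = np.linalg.eigh(F)
modes = evecs[:, np.argsort(evals)[::-1][:5]]  # keep the 5 best-constrained modes

w_rec = modes @ (modes.T @ w_obs)              # truncated, smoothed reconstruction
```

    Keeping only the well-measured modes discards the noisy, poorly constrained directions; the information-criterion step in the paper decides how many modes the data actually support.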

  15. Disentangling Redshift-Space Distortions and Nonlinear Bias using the 2D Power Spectrum

    DOE PAGES

    Jennings, Elise; Wechsler, Risa H.

    2015-08-07

    We present the nonlinear 2D galaxy power spectrum, P(k, µ), in redshift space, measured from the Dark Sky simulations, using galaxy catalogs constructed with both halo occupation distribution and subhalo abundance matching methods, chosen to represent an intermediate redshift sample of luminous red galaxies. We find that the information content in individual µ (cosine of the angle to the line of sight) bins is substantially richer than in multipole moments, and show that this can be used to isolate the impact of nonlinear growth and redshift space distortion (RSD) effects. Using the µ < 0.2 simulation data, which we show is not impacted by RSD effects, we can successfully measure the nonlinear bias to an accuracy of ~5% at k < 0.6 h Mpc⁻¹. This use of individual µ bins to extract the nonlinear bias successfully removes a large parameter degeneracy when constraining the linear growth rate of structure. We carry out a joint parameter estimation, using the low µ simulation data to constrain the nonlinear bias, and µ > 0.2 to constrain the growth rate, and show that f can be constrained to ~26(22)% to k_max < 0.4(0.6) h Mpc⁻¹ from clustering alone using a simple dispersion model, for a range of galaxy models. Our analysis of individual µ bins also reveals interesting physical effects which arise simply from different methods of populating halos with galaxies. We also find a prominent turnaround scale, at which RSD damping effects are greater than the nonlinear growth, which differs not only for each µ bin but also for each galaxy model. These features may provide unique signatures which could be used to shed light on the galaxy–dark matter connection. Furthermore, the idea of separating nonlinear growth and RSD effects making use of the full information in the 2D galaxy power spectrum yields significant improvements in constraining cosmological parameters and may be a promising probe of galaxy formation models.

  16. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

    In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bio-remediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on remedial efficiency predictions. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bio-remediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant implementation of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which may otherwise lead to a poor quantification of predictive uncertainty.
Application of the proposed approach to manage bioremediation of groundwater in a real site shows that it is effective to provide support in management of the in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology of utilizing model predictive uncertainty methods in environmental management.

  17. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
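    The flat-histogram random walk underlying any Wang-Landau variant can be sketched on a small example (an 8-spin Ising ring with illustrative control parameters; the paper's method runs many such walks, each constrained to a subspace of fixed order-parameter values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Wang-Landau estimate of the density of states g(E) for an 8-spin Ising ring.
N = 8
spins = rng.choice([-1, 1], size=N)
E = -int(np.sum(spins * np.roll(spins, 1)))  # current energy

levels = list(range(-N, N + 1, 4))           # allowed energies: -8, -4, 0, 4, 8
ln_g = {e: 0.0 for e in levels}              # running estimate of ln g(E)
hist = {e: 0 for e in levels}
ln_f = 1.0                                   # modification factor, halved when flat

while ln_f > 1e-3:
    for _ in range(1000 * N):
        i = rng.integers(N)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        # Accept with probability min(1, g(E)/g(E+dE)): drives a flat histogram in E
        if np.log(rng.random()) < ln_g[E] - ln_g[E + dE]:
            spins[i] *= -1
            E += dE
        ln_g[E] += ln_f
        hist[E] += 1
    counts = np.array(list(hist.values()))
    if counts.min() > 0.8 * counts.mean():   # histogram "flat enough"
        hist = {e: 0 for e in levels}
        ln_f /= 2.0

# Normalize so the total number of states is 2^N = 256
v = np.array([ln_g[e] for e in levels])
g = np.exp(v - v.max())
g *= 2**N / g.sum()
dos = dict(zip(levels, g))                   # exact values: 2, 56, 140, 56, 2
```

    Once g(E) (or, in the paper, the multivariable density of states) is in hand, thermodynamic averages at any temperature or field follow by reweighting, with no further simulation.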

  18. Cosmology with photometric weak lensing surveys: Constraints with redshift tomography of convergence peaks and moments

    NASA Astrophysics Data System (ADS)

    Petri, Andrea; May, Morgan; Haiman, Zoltán

    2016-09-01

    Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w . When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ωm,w ,σ8) for a LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. We find that redshift tomography with the power spectrum reduces the area of the 1 σ confidence interval in (Ωm,w ) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ωm,w ) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. We find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.

  19. A RSSI-based parameter tracking strategy for constrained position localization

    NASA Astrophysics Data System (ADS)

    Du, Jinze; Diouris, Jean-François; Wang, Yide

    2017-12-01

    In this paper, a received signal strength indicator (RSSI)-based parameter tracking strategy for constrained position localization is proposed. To estimate the channel model parameters, the least mean squares (LMS) method is combined with the trilateration method. In the context of applications where the positions are constrained on a grid, a novel tracking strategy is proposed to determine the real position and obtain the actual parameters in the monitored region. Based on practical data acquired from a real localization system, an experimental channel model is constructed to provide RSSI values and verify the proposed tracking strategy. Quantitative criteria are given to guarantee the efficiency of the proposed tracking strategy by providing a trade-off between the grid resolution and parameter variation. The simulation results show good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with the existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more calculation time. In addition, a tracking test is performed to validate the effectiveness of the proposed tracking strategy.
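    The two inversion steps involved (RSSI to range via the channel model, ranges to position via trilateration) can be sketched as follows; the log-distance path-loss model and all numbers are illustrative, and the grid snap stands in for the on-grid constraint:

```python
import numpy as np

# Log-distance path-loss model: RSSI(d) = A - 10*n*log10(d), where A is the
# RSSI at 1 m and n the path-loss exponent (both hypothetical values here).
A_true, n_true = -40.0, 2.5
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])

d = np.linalg.norm(anchors - target, axis=1)
rssi = A_true - 10 * n_true * np.log10(d)     # noiseless for clarity

# Step 1: invert the channel model to get range estimates.
def ranges_from_rssi(rssi, A, n):
    return 10 ** ((A - rssi) / (10 * n))

# Step 2: linearized trilateration (subtract the first anchor's equation).
def trilaterate(anchors, r):
    x0, r0 = anchors[0], r[0]
    Amat = 2 * (anchors[1:] - x0)
    b = r0**2 - r[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2)
    return np.linalg.lstsq(Amat, b, rcond=None)[0]

pos = trilaterate(anchors, ranges_from_rssi(rssi, A_true, n_true))

# On-grid constraint: snap the estimate to the nearest grid node.
grid = 1.0
pos_grid = np.round(pos / grid) * grid
```

    In the paper's tracking loop, the channel parameters A and n are themselves updated by LMS as new RSSI samples arrive, and the snapped grid position feeds back into that update.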

  20. The detection of impact regions of asteroids' orbits by means of constrained minimization of confidence coefficient. (Russian Title: Выявление областей столкновительных орбит астероидов с помощью условной минимизации доверительного коэффициента)

    NASA Astrophysics Data System (ADS)

    Baturin, A. P.

    2014-12-01

    The detection of regions of NEO impact orbits is considered. The regions are detected in the space of initial motion parameters by means of constrained minimization of the so-called "confidence coefficient". This coefficient determines the position of an orbit inside the confidence ellipsoid obtained from a least-squares orbit fitting. The constraint used is the asteroid-Earth distance at the considered encounter. By randomly varying the initial approximations for the minimization and the parameter constraining the asteroid-Earth distance, it is demonstrated that impact regions usually take the form of long tubes in the space of initial motion parameters. This is shown for the asteroids 2009 FD, 2011 TO and 2012 PB20 at their expected closest encounters with the Earth.

  1. Marginal space learning for efficient detection of 2D/3D anatomical structures in medical images.

    PubMed

    Zheng, Yefeng; Georgescu, Bogdan; Comaniciu, Dorin

    2009-01-01

    Recently, marginal space learning (MSL) was proposed as a generic approach for automatic detection of 3D anatomical structures in many medical imaging modalities [1]. To accurately localize a 3D object, we need to estimate nine pose parameters (three for position, three for orientation, and three for anisotropic scaling). Instead of exhaustively searching the original nine-dimensional pose parameter space, only low-dimensional marginal spaces are searched in MSL to improve the detection speed. In this paper, we apply MSL to 2D object detection and perform a thorough comparison between MSL and the alternative full space learning (FSL) approach. Experiments on left ventricle detection in 2D MRI images show that MSL outperforms FSL in both speed and accuracy. In addition, we propose two novel techniques, constrained MSL and nonrigid MSL, to further improve the efficiency and accuracy. In many real applications, a strong correlation may exist among pose parameters in the same marginal spaces. For example, a large object may have large scaling values along all directions. Constrained MSL exploits this correlation for further speed-up. The original MSL only estimates the rigid transformation of an object in the image, and therefore cannot accurately localize a nonrigid object under a large deformation. The proposed nonrigid MSL directly estimates the nonrigid deformation parameters to improve the localization accuracy. The comparison experiments on liver detection in 226 abdominal CT volumes demonstrate the effectiveness of the proposed methods. Our system takes less than a second to accurately detect the liver in a volume.

  2. A pitfall of piecewise-polytropic equation of state inference

    NASA Astrophysics Data System (ADS)

    Raaijmakers, Geert; Riley, Thomas E.; Watts, Anna L.

    2018-05-01

    The only messenger radiation in the Universe which one can use to statistically probe the Equation of State (EOS) of cold dense matter is that originating from the near-field vicinities of compact stars. Constraining gravitational masses and equatorial radii of rotating compact stars is a major goal for current and future telescope missions, with a primary purpose of constraining the EOS. From a Bayesian perspective it is necessary to carefully discuss prior definition; in this context a complicating issue is that in practice there exist pathologies in the general relativistic mapping between spaces of local (interior source matter) and global (exterior spacetime) parameters. In a companion paper, these issues were raised on a theoretical basis. In this study we reproduce a probability transformation procedure from the literature in order to map a joint posterior distribution of Schwarzschild gravitational masses and radii into a joint posterior distribution of EOS parameters. We demonstrate computationally that EOS parameter inferences are sensitive to the choice to define a prior on a joint space of these masses and radii, instead of on a joint space of interior source matter parameters. We focus on the piecewise-polytropic EOS model, which is currently standard in the field of astrophysical dense matter study. We discuss the implications of this issue for the field.

  3. Exploring theory space with Monte Carlo reweighting

    DOE PAGES

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.

  4. Optimal synchronization in space

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-02-01

    In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^−α, with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.
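
    For concreteness, a link-length distribution with a power-law tail P(l) ∝ l^−α can be generated by inverse-transform sampling. This only illustrates the distribution the optimized networks are reported to exhibit, not the numerical optimization itself, and all parameter values are arbitrary:

    ```python
    import numpy as np

    def sample_power_law(alpha, l_min, l_max, size, rng):
        """Inverse-transform sampling from P(l) ~ l**(-alpha) on [l_min, l_max].
        Requires alpha != 1 so the CDF has the closed form used below."""
        u = rng.uniform(size=size)
        a = 1.0 - alpha
        return (l_min**a + u * (l_max**a - l_min**a)) ** (1.0 / a)

    rng = np.random.default_rng(1)
    lengths = sample_power_law(alpha=2.5, l_min=1.0, l_max=100.0, size=50_000, rng=rng)

    # A larger exponent alpha concentrates links at short range, mimicking
    # a network that is more tightly constrained in space.
    print(lengths.min(), lengths.max(), np.median(lengths))
    ```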

  5. Precision constraints on the top-quark effective field theory at future lepton colliders

    NASA Astrophysics Data System (ADS)

    Durieux, G.

    We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the e⁺e⁻ → bW⁺b̄W⁻ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on limits, which are therefore basis independent. This is contrasted with the measurements of cross sections and forward-backward asymmetries. An overall measure of constraint strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.

  6. Tests of gravity with future space-based experiments

    NASA Astrophysics Data System (ADS)

    Sakstein, Jeremy

    2018-03-01

    Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.

  7. Model independent constraints on transition redshift

    NASA Astrophysics Data System (ADS)

    Jesus, J. F.; Holanda, R. F. L.; Pereira, S. H.

    2018-05-01

    This paper aims to put constraints on the transition redshift zt, which determines the onset of cosmic acceleration, in cosmological-model independent frameworks. In order to perform our analyses, we consider a flat universe and assume a parametrization for the comoving distance DC(z) up to third order in z, a second-order parametrization for the Hubble parameter H(z) and a linear parametrization for the deceleration parameter q(z). For each case, we show that type Ia supernovae and H(z) data complement each other on the parameter space and tighter constraints on the transition redshift are obtained. By combining the type Ia supernovae observations and Hubble parameter measurements it is possible to constrain the values of zt, for each approach, as 0.806 ± 0.094, 0.870 ± 0.063 and 0.973 ± 0.058 at 1σ c.l., respectively. Such approaches therefore provide cosmological-model independent estimates for this parameter.
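
    As a concrete illustration of the third approach: if the deceleration parameter is parametrized linearly as q(z) = q0 + q1·z (the coefficient names here are our own, and the numbers below are illustrative, not the paper's fit), the transition redshift follows directly from the sign change of q:

    ```python
    def transition_redshift(q0, q1):
        """For a linear deceleration parameter q(z) = q0 + q1*z, the transition
        redshift z_t is where q changes sign: q(z_t) = 0  =>  z_t = -q0 / q1.
        Requires q0 < 0 (accelerating today) and q1 > 0 (decelerating in the past)."""
        if q1 <= 0 or q0 >= 0:
            raise ValueError("no acceleration-to-deceleration transition: need q0 < 0 and q1 > 0")
        return -q0 / q1

    # Illustrative values only:
    print(transition_redshift(q0=-0.55, q1=0.60))  # ≈ 0.917
    ```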

  8. Cosmology with photometric weak lensing surveys: Constraints with redshift tomography of convergence peaks and moments

    DOE PAGES

    Petri, Andrea; May, Morgan; Haiman, Zoltán

    2016-09-30

    Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w. When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ω_m, w, σ_8) for a LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. Here we find that redshift tomography with the power spectrum reduces the area of the 1σ confidence interval in (Ω_m, w) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ω_m, w) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. In conclusion, we find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.

  9. Cosmological constraints on extended Galileon models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Antonio De; Tsujikawa, Shinji, E-mail: antoniod@nu.ac.th, E-mail: shinji@rs.kagu.tus.ac.jp

    2012-03-01

    The extended Galileon models possess tracker solutions with de Sitter attractors along which the dark energy equation of state is constant during the matter-dominated epoch, i.e. w_DE = −1 − s, where s is a positive constant. Even with this phantom equation of state there are viable parameter spaces in which ghosts and Laplacian instabilities are absent. Using the observational data of type Ia supernovae, the cosmic microwave background (CMB), and baryon acoustic oscillations, we place constraints on the tracker solutions at the background level and find that the parameter s is constrained to be s = 0.034^{+0.327}_{−0.034} (95% CL) in the flat Universe. In order to break the degeneracy between the models we also study the evolution of cosmological density perturbations relevant to the large-scale structure (LSS) and the Integrated Sachs-Wolfe (ISW) effect in the CMB. We show that, depending on the model parameters, the LSS and the ISW effect are either positively or negatively correlated. It is then possible to constrain viable parameter spaces further from the observational data of the ISW-LSS cross-correlation as well as from the matter power spectrum.

  10. Design of bearings for rotor systems based on stability

    NASA Technical Reports Server (NTRS)

    Dhar, D.; Barrett, L. E.; Knospe, C. R.

    1992-01-01

    Design of rotor systems incorporating stable behavior is of great importance to manufacturers of high speed centrifugal machinery since destabilizing mechanisms (from bearings, seals, aerodynamic cross coupling, noncolocation effects from magnetic bearings, etc.) increase with machine efficiency and power density. A new method of designing bearing parameters (stiffness and damping coefficients or coefficients of the controller transfer function) is proposed, based on a numerical search in the parameter space. The feedback control law is based on a decentralized low order controller structure, and the various design requirements are specified as constraints in the specification and parameter spaces. An algorithm is proposed for solving the problem as a sequence of constrained 'minimax' problems, moving more and more eigenvalues into an acceptable region in the complex plane. The algorithm uses the method of feasible directions to solve the nonlinear constrained minimization problem at each stage. This methodology emphasizes the designer's interaction with the algorithm to generate acceptable designs by relaxing various constraints and changing initial guesses interactively. A design oriented user interface is proposed to facilitate the interaction.

  11. Thunder-induced ground motions: 2. Site characterization

    NASA Astrophysics Data System (ADS)

    Lin, Ting-L.; Langston, Charles A.

    2009-04-01

    Thunder-induced ground motion, near-surface refraction, and Rayleigh wave dispersion measurements were used to constrain near-surface velocity structure at an unconsolidated sediment site. We employed near-surface seismic refraction measurements to first define ranges for site structure parameters. Air-coupled and hammer-generated Rayleigh wave dispersion curves were used to further constrain the site structure by a grid search technique. The acoustic-to-seismic coupling is modeled as an incident plane P wave in a fluid half-space impinging into a solid layered half-space. We found that the infrasound-induced ground motions constrained substrate velocities and the average thickness and velocities of the near-surface layer. The addition of higher-frequency near-surface Rayleigh waves produced tighter constraints on the near-surface velocities. This suggests that natural or controlled airborne pressure sources can be used to investigate the near-surface site structures for earthquake shaking hazard studies.
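
    A grid search of the kind described can be sketched as follows. The forward model here is a toy stand-in for a real Rayleigh-wave dispersion calculation (it only mimics phase velocity moving from substrate speed at low frequency toward layer speed at high frequency), and every parameter value is invented for illustration:

    ```python
    import numpy as np

    def mock_dispersion(freq, h, v1, v2=600.0):
        """Toy stand-in for a dispersion forward model: phase velocity trends
        from substrate speed v2 (low frequency) toward layer speed v1 (high
        frequency) for a layer of thickness h. NOT a real dispersion solver."""
        return v1 + (v2 - v1) * np.exp(-freq * h / v2)

    freq = np.linspace(5.0, 50.0, 20)
    observed = mock_dispersion(freq, h=10.0, v1=200.0)  # synthetic "data"

    # Grid search: least-squares misfit over candidate (thickness, velocity) pairs.
    best = min(
        ((h, v1) for h in np.arange(2.0, 20.5, 0.5)
                 for v1 in np.arange(100.0, 400.0, 10.0)),
        key=lambda p: np.sum((mock_dispersion(freq, *p) - observed) ** 2),
    )
    print(best[0], best[1])  # recovers the synthetic truth on this noise-free example
    ```

    With real data the misfit surface is not zero at the minimum, and the acceptable region of the grid (rather than the single best point) is what constrains the site structure.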

  12. APPLICATION OF A BIP CONSTRAINED OPTIMIZATION MODEL COMBINED WITH NASA's ATLAS MODEL TO OPTIMIZE THE SOCIETAL BENEFITS OF THE USA's INTERNATIONAL SPACE EXPLORATION AND UTILIZATION INITIATIVE OF 1/14/04

    NASA Technical Reports Server (NTRS)

    Morgenthaler, George W.; Glover, Fred W.; Woodcock, Gordon R.; Laguna, Manuel

    2005-01-01

    The 1/14/04 USA Space Exploration/Utilization Initiative invites all Space-faring Nations, all Space User Groups in Science, Space Entrepreneuring, Advocates of Robotic and Human Space Exploration, Space Tourism and Colonization Promoters, etc., to join an International Space Partnership. With more Space-faring Nations and Space User Groups each year, such a Partnership would require Multi-Year (35 yr.-45 yr.) Space Mission Planning. With each Nation and Space User Group demanding priority for its missions, one needs a methodology for objectively selecting the best mission sequences to be added annually to this 45 yr. Moving Space Mission Plan. How can this be done? Planners have suggested building a Reusable, Sustainable, Space Transportation Infrastructure (RSSTI) to increase Mission synergism, reduce cost, and increase scientific and societal returns from this Space Initiative. Morgenthaler and Woodcock presented a Paper at the 55th IAC, Vancouver B.C., Canada, entitled Constrained Optimization Models for Optimizing Multi-Year Space Programs. This Paper showed that a Binary Integer Programming (BIP) Constrained Optimization Model combined with the NASA ATLAS Cost and Space System Operational Parameter Estimating Model has the theoretical capability to solve such problems. IAA Commission III, Space Technology and Space System Development, in its ACADEMY DAY meeting at Vancouver, requested that the Authors and NASA experts find several Space Exploration Architectures (SEAs), apply the combined BIP/ATLAS Models, and report the results at the 56th Fukuoka IAC. While the mathematical Model is in Ref. [2], this Paper presents the Application saga of that effort.

  13. Terrestrial Sagnac delay constraining modified gravity models

    NASA Astrophysics Data System (ADS)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disks around constant Ricci curvature Kerr-f(R0) stellar sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from a source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams reunite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of the residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cembranos, Jose A. R.; Diaz-Cruz, J. Lorenzo; Prado, Lilian

    Dark Matter direct detection experiments are able to exclude interesting parameter space regions of particle models which predict an important amount of thermal relics. We use recent data to constrain the branon model and to compute the region that is favored by CDMS measurements. Within this work, we also update present collider constraints with new studies coming from the LHC. Despite the present low luminosity, it is remarkable that for heavy branons, CMS and ATLAS measurements are already more constraining than previous analyses performed with Tevatron and LEP data.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennings, Elise; Wechsler, Risa H.

    We present the nonlinear 2D galaxy power spectrum, P(k, µ), in redshift space, measured from the Dark Sky simulations, using galaxy catalogs constructed with both halo occupation distribution and subhalo abundance matching methods, chosen to represent an intermediate redshift sample of luminous red galaxies. We find that the information content in individual µ (cosine of the angle to the line of sight) bins is substantially richer than in multipole moments, and show that this can be used to isolate the impact of nonlinear growth and redshift space distortion (RSD) effects. Using the µ < 0.2 simulation data, which we show is not impacted by RSD effects, we can successfully measure the nonlinear bias to an accuracy of ~5% at k < 0.6 h Mpc⁻¹. This use of individual µ bins to extract the nonlinear bias successfully removes a large parameter degeneracy when constraining the linear growth rate of structure. We carry out a joint parameter estimation, using the low µ simulation data to constrain the nonlinear bias, and µ > 0.2 to constrain the growth rate, and show that f can be constrained to ~26(22)% to k_max < 0.4(0.6) h Mpc⁻¹ from clustering alone using a simple dispersion model, for a range of galaxy models. Our analysis of individual µ bins also reveals interesting physical effects which arise simply from different methods of populating halos with galaxies. We also find a prominent turnaround scale, at which RSD damping effects are greater than the nonlinear growth, which differs not only for each µ bin but also for each galaxy model. These features may provide unique signatures which could be used to shed light on the galaxy–dark matter connection. Furthermore, the idea of separating nonlinear growth and RSD effects making use of the full information in the 2D galaxy power spectrum yields significant improvements in constraining cosmological parameters and may be a promising probe of galaxy formation models.
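
    The µ-binning underlying such a P(k, µ) analysis can be sketched on a toy field. Here µ is the cosine of the angle between each Fourier mode and an assumed z-axis line of sight; a white-noise cube stands in for an actual simulation catalog, so the binned power should come out roughly flat:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 32
    delta = rng.standard_normal((n, n, n))  # toy density field (white noise)
    power = np.abs(np.fft.fftn(delta)) ** 2 / delta.size  # raw mode power

    # mu = |k_z| / |k|, taking the z axis as the line of sight.
    k = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    kmag[0, 0, 0] = np.inf                  # drop the k = 0 mode
    mu = np.abs(kz) / kmag

    # Average mode power in five |mu| bins, as in a P(k, mu) wedge analysis.
    bins = np.linspace(0.0, 1.0, 6)
    which = np.digitize(mu.ravel(), bins) - 1
    p_mu = np.array([power.ravel()[which == i].mean() for i in range(5)])
    print(p_mu)  # roughly flat across mu for white noise
    ```

    In the analysis described above, a real P(k, µ) would additionally be binned in k, and the low-µ wedges are the ones least affected by RSD along the line of sight.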

  16. Fine-structure constant constraints on dark energy. II. Extending the parameter space

    NASA Astrophysics Data System (ADS)

    Martins, C. J. A. P.; Pinho, A. M. M.; Carreira, P.; Gusart, A.; López, J.; Rocha, C. I. S. A.

    2016-01-01

    Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are a powerful probe of new physics. Recently these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, were used to constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. One caveat of these analyses was that they were based on fiducial models where the dark energy equation of state was described by a single parameter (effectively its present-day value, w0). Here we relax this assumption and study broader dark energy model classes, including the Chevallier-Polarski-Linder and early dark energy parametrizations. Even in these extended cases we find that the current data constrain the coupling ζ at the 10⁻⁶ level and w0 to a few percent (marginalizing over other parameters), thus confirming the robustness of earlier analyses. On the other hand, the additional parameters are typically not well constrained. We also highlight the implications of our results for constraints on violations of the weak equivalence principle and improvements to be expected from forthcoming measurements with high-resolution ultrastable spectrographs.

  17. A method and data for video monitor sizing. [human CRT viewing requirements

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M., III; Shields, N. L., Jr.; Malone, T. B.; Guerin, E. G.

    1976-01-01

    The paper outlines an approach consisting of using analytical methods and empirical data to determine monitor size constraints based on the human operator's CRT viewing requirements in a context where panel space and volume considerations for the Space Shuttle aft cabin constrain the size of the monitor to be used. Two cases are examined: remote scene imaging and alphanumeric character display. The central parameter used to constrain monitor size is the ratio M/L, where M is the monitor dimension and L the viewing distance. The study is restricted largely to 525-line video systems having an SNR of 32 dB and bandwidth of 4.5 MHz. Degradation in these parameters would require changes in the empirically determined visual angle constants presented. The data and methods described are considered to apply to cases where operators are required to view, via TV, target objects that are well differentiated from the background and where the background is relatively sparse. It is also necessary to identify the critical target dimensions and cues.

  18. Lepton flavor violating B meson decays via a scalar leptoquark

    NASA Astrophysics Data System (ADS)

    Sahoo, Suchismita; Mohanta, Rukmani

    2016-06-01

    We study the effect of scalar leptoquarks in the lepton flavor violating B meson decays induced by the flavor-changing transitions b → q l_i⁺l_j⁻ with q = s, d. In the standard model, these transitions are extremely rare as they are either two-loop suppressed or proceed via box diagrams with tiny neutrino masses in the loop. However, in the leptoquark model, they can occur at tree level and are expected to have significantly large branching ratios. The leptoquark parameter space is constrained using the experimental limits on the branching ratios of B_q → l⁺l⁻ processes. Using such constrained parameter space, we predict the branching ratios of LFV semileptonic B meson decays, such as B⁺ → K⁺(π⁺)l_i⁺l_j⁻, B⁺ → (K*⁺, ρ⁺)l_i⁺l_j⁻, and B_s → φ l_i⁺l_j⁻, which are found to be within the experimental reach of LHCb and the upcoming Belle II experiments. We also investigate the rare leptonic K_{L,S} → μ⁺μ⁻(e⁺e⁻) and K_L → μ∓e± decays in the leptoquark model.

  19. Cosmological constraints from galaxy clustering in the presence of massive neutrinos

    NASA Astrophysics Data System (ADS)

    Zennaro, M.; Bel, J.; Dossett, J.; Carbone, C.; Guzzo, L.

    2018-06-01

    The clustering ratio is defined as the ratio between the correlation function and the variance of the smoothed overdensity field. In Λ cold dark matter (ΛCDM) cosmologies without massive neutrinos, it has already been proven to be independent of bias and redshift space distortions on a range of linear scales. It therefore can provide us with a direct comparison of predictions (for matter in real space) against measurements (from galaxies in redshift space). In this paper we first extend the applicability of such properties to cosmologies that account for massive neutrinos, by performing tests against simulated data. We then investigate the constraining power of the clustering ratio on cosmological parameters such as the total neutrino mass and the equation of state of dark energy. We analyse the joint posterior distribution of the parameters that satisfy both measurements of the galaxy clustering ratio in the SDSS-DR12, and the angular power spectra of cosmic microwave background temperature and polarization anisotropies measured by the Planck satellite. We find the clustering ratio to be very sensitive to the CDM density parameter, but less sensitive to the total neutrino mass. We also forecast the constraining power the clustering ratio will achieve, predicting the amplitude of its errors with a Euclid-like galaxy survey. First we compute parameter forecasts using the Planck covariance matrix alone, then we add information from the clustering ratio. We find a significant improvement on the constraint of all considered parameters, and in particular an improvement of 40 per cent for the CDM density and 14 per cent for the total neutrino mass.

  20. Constraining new physics models with isotope shift spectroscopy

    NASA Astrophysics Data System (ADS)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B − L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported ⁸Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.

  1. Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.

    PubMed

    Capozziello, S; Lambiase, G; Saridakis, E N

    2017-01-01

    We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.

  2. Test of the Chevallier-Polarski-Linder parametrization for rapid dark energy equation of state transitions

    NASA Astrophysics Data System (ADS)

    Linden, Sebastian; Virey, Jean-Marc

    2008-07-01

    We test the robustness and flexibility of the Chevallier-Polarski-Linder (CPL) parametrization of the dark energy equation of state, w(z) = w0 + wa z/(1+z), in recovering a four-parameter steplike fiducial model. We constrain the parameter space region of the underlying fiducial model where the CPL parametrization offers a reliable reconstruction. It turns out that non-negligible biases leak into the results for recent (z < 2.5) rapid transitions, but that CPL yields a good reconstruction in all other cases. The presented analysis is performed with supernova Ia data as forecasted for a space mission like SNAP/JDEM, combined with future expectations for the cosmic microwave background shift parameter R and the baryon acoustic oscillation parameter A.
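
    A minimal sketch of the two ingredients being compared: the CPL form, and a steplike equation of state. The smooth-step form and parameter names below are illustrative stand-ins, not the paper's exact four-parameter model:

    ```python
    import math

    def w_cpl(z, w0, wa):
        """Chevallier-Polarski-Linder equation of state: w(z) = w0 + wa * z / (1 + z)."""
        return w0 + wa * z / (1.0 + z)

    def w_step(z, w_early, w_late, z_t, dz=0.1):
        """A steplike model (illustrative form): smooth transition from w_late
        (z << z_t) to w_early (z >> z_t) over a width dz in redshift."""
        return w_late + (w_early - w_late) / (1.0 + math.exp(-(z - z_t) / dz))

    # CPL interpolates monotonically between w(0) = w0 and w(z -> inf) = w0 + wa,
    # so it cannot track a step that is rapid compared with the 1/(1+z) scale --
    # the source of the reconstruction bias for recent rapid transitions.
    print(w_cpl(0.0, -1.0, 0.8), w_cpl(1.0, -1.0, 0.8))
    print(w_step(0.0, -0.5, -1.0, z_t=0.5), w_step(3.0, -0.5, -1.0, z_t=0.5))
    ```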

  3. Constraining axion dark matter with Big Bang Nucleosynthesis

    DOE PAGES

    Blum, Kfir; D'Agnolo, Raffaele Tito; Lisanti, Mariangela; ...

    2014-08-04

    We show that Big Bang Nucleosynthesis (BBN) significantly constrains axion-like dark matter. The axion acts like an oscillating QCD θ angle that redshifts in the early Universe, increasing the neutron-proton mass difference at neutron freeze-out. An axion-like particle that couples too strongly to QCD results in the underproduction of ⁴He during BBN and is thus excluded. The BBN bound overlaps with much of the parameter space that would be covered by proposed searches for a time-varying neutron EDM. The QCD axion does not couple strongly enough to affect BBN.

  5. Redshift-space distortions with the halo occupation distribution - II. Analytic model

    NASA Astrophysics Data System (ADS)

    Tinker, Jeremy L.

    2007-01-01

    We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. 
We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xiao-Dong; Park, Changbom; Forero-Romero, J. E.

    We propose a method based on the redshift dependence of the Alcock-Paczynski (AP) test to measure the expansion history of the universe. It uses the isotropy of the galaxy density gradient field to constrain cosmological parameters. If the density parameter Ω_m or the dark energy equation of state w are incorrectly chosen, the gradient field appears to be anisotropic, with the degree of anisotropy varying with redshift. We use this effect to constrain the cosmological parameters governing the expansion history of the universe. Although redshift-space distortions (RSD) induced by galaxy peculiar velocities also produce anisotropies in the gradient field, these effects are close to uniform in magnitude over a large range of redshift. This makes the redshift variation of the gradient field anisotropy relatively insensitive to the RSD. By testing the method on mock surveys drawn from the Horizon Run 3 cosmological N-body simulations, we demonstrate that the cosmological parameters can be estimated without bias. Our method is complementary to the baryon acoustic oscillation or topology methods as it depends on D_A H, the product of the angular diameter distance and the Hubble parameter.

  7. Rheological constraints on ridge formation on Icy Satellites

    NASA Astrophysics Data System (ADS)

    Rudolph, M. L.; Manga, M.

    2010-12-01

    The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) the choice of rheological parameters and (2) the maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable D, Ḋ = B⟨σ⟩^r (1−D)^(−k) − αD p/μ, and in the equation relating damage accumulation to volumetric changes, Jρ₀ = δ(1−D). Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants B, α, and δ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa’s ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.
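
    A rough numerical sketch of how a damage law of this kind can be integrated. The functional form below is an assumed reading of the (garbled) evolution equation, a stress-driven growth term minus a pressure-driven healing term, and every coefficient value is illustrative rather than calibrated:

    ```python
    import numpy as np

    def integrate_damage(sigma, p, B=1e-3, r=2.0, k=1.0, alpha=5e-4, mu=1.0,
                         dt=1.0, steps=500):
        """Forward-Euler integration of an assumed damage evolution law
            dD/dt = B * sigma**r * (1 - D)**(-k) - alpha * D * p / mu,
        i.e. a stress-driven growth term minus a pressure-driven healing term.
        All coefficients are illustrative, not calibrated to ice."""
        D = 0.0
        history = [D]
        for _ in range(steps):
            dD = B * sigma**r * (1.0 - D) ** (-k) - alpha * D * p / mu
            D = min(max(D + dt * dD, 0.0), 0.999)  # keep D in a physical range
            history.append(D)
        return np.array(history)

    D = integrate_damage(sigma=0.5, p=1.0)
    print(D[-1])  # damage accumulates monotonically under constant stress
    ```

    A parameter-space sweep of the kind the abstract describes would simply repeat this integration over grids of B, α, and δ and ask which combinations yield ridge-scale deformation.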

  8. Measuring tides and binary parameters from gravitational wave data and eclipsing timings of detached white dwarf binaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Sweta; Nelemans, Gijs, E-mail: s.shah@astro.ru.nl

    The discovery of the most compact detached white dwarf (WD) binary, SDSS J065133.33+284423.3, has been discussed in terms of probing the tidal effects in WDs. This system is also a verification source for the space-based gravitational wave (GW) detector eLISA, the evolved Laser Interferometer Space Antenna, which will observe short-period compact Galactic binaries with P_orb ≲ 5 hr. We address the prospects of performing tidal studies using eLISA binaries by showing the fractional uncertainties in the orbital frequency decay rate, ḟ, and its rate of change, f̈, expected from both the GW and electromagnetic (EM) data for some of the high-f binaries. We find that ḟ and f̈ can be measured using GW data only for the most massive WD binaries observed at high frequencies. From timing the eclipses for ∼10 yr, we find that ḟ can be known to ∼0.1% for J0651. We find that from GW data alone, measuring the effects of tides in binaries is (almost) impossible. We also investigate the improvement in the knowledge of the binary parameters by combining the GW amplitude and inclination with EM data with and without ḟ. In our previous work, we found that EM data on distance constrained the 2σ uncertainty in chirp mass to 15%-25%, whereas adding ḟ reduces it to 0.11%. EM data on ḟ also constrain the 2σ uncertainty in distance to 35%-19%. EM data on the primary mass constrain the secondary mass m_2 to factors of two to ∼40%, whereas adding ḟ reduces this to 25%. Finally, using single-line spectroscopic data constrains 2σ uncertainties in both m_2 and d to factors of two to ∼40%. Adding EM data on ḟ reduces these 2σ uncertainties to ≤25% and 6%-19%, respectively. Thus we find that EM measurements of ḟ and radial velocity are valuable in constraining eLISA binary parameters.

  9. Constraints on cosmic superstrings from Kaluza-Klein emission.

    PubMed

    Dufaux, Jean-François

    2012-07-06

    Cosmic superstrings interact generically with a tower of light and/or strongly coupled Kaluza-Klein (KK) modes associated with the geometry of the internal space. We study the production of KK particles by cosmic superstring loops, and show that it is constrained by big bang nucleosynthesis. We study the resulting constraints in the parameter space of the underlying string theory model and highlight their complementarity with the regions that can be probed by current and upcoming gravitational wave experiments.

  10. Constraining Dark Matter Interactions with Pseudoscalar and Scalar Mediators Using Collider Searches for Multijets plus Missing Transverse Energy.

    PubMed

    Buchmueller, Oliver; Malik, Sarah A; McCabe, Christopher; Penning, Bjoern

    2015-10-30

    The monojet search, looking for events involving missing transverse energy (E_{T}) plus one or two jets, is the most prominent collider dark matter search. We show that multijet searches, which look for E_{T} plus two or more jets, are significantly more sensitive than the monojet search for pseudoscalar- and scalar-mediated interactions. We demonstrate this in the context of a simplified model with a pseudoscalar interaction that explains the excess in GeV energy gamma rays observed by the Fermi Large Area Telescope. We show that multijet searches already constrain a pseudoscalar interpretation of the excess in much of the parameter space where the mass of the mediator M_{A} is more than twice the dark matter mass m_{DM}. With the forthcoming run of the Large Hadron Collider at higher energies, the remaining regions of the parameter space where M_{A}>2m_{DM} will be fully explored. Furthermore, we highlight the importance of complementing the monojet final state with multijet final states to maximize the sensitivity of the search for the production of dark matter at colliders.

  11. The Allowed Parameter Space of a Long-lived Neutron Star as the Merger Remnant of GW170817

    NASA Astrophysics Data System (ADS)

    Ai, Shunke; Gao, He; Dai, Zi-Gao; Wu, Xue-Feng; Li, Ang; Zhang, Bing; Li, Mu-Zi

    2018-06-01

    Due to the limited sensitivity of the current gravitational wave (GW) detectors, the central remnant of the binary neutron star (NS) merger associated with GW170817 remains an open question. In view of the relatively large total mass, it is generally proposed that the merger of GW170817 would lead to a short-lived hypermassive NS or directly produce a black hole (BH). There is no clear evidence to support or rule out a long-lived NS as the merger remnant. Here, we utilize the GW and electromagnetic (EM) signals to comprehensively investigate the parameter space that allows a long-lived NS to survive as the merger remnant of GW170817. We find that for some stiff equations of state, the merger of GW170817 could, in principle, lead to a massive NS, which has a millisecond spin period. The post-merger GW signal could hardly constrain the ellipticity of the NS. If the ellipticity reaches 10^−3, in order to be compatible with the multi-band EM observations, the dipole magnetic field of the NS (B_p) is constrained to the magnetar level of ∼10^14 G. If the ellipticity is smaller than 10^−4, B_p is constrained to the level of ∼10^9–10^11 G. These conclusions weakly depend on the adoption of the NS equation of state.

  12. An Inequality Constrained Least-Squares Approach as an Alternative Estimation Procedure for Atmospheric Parameters from VLBI Observations

    NASA Astrophysics Data System (ADS)

    Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel

    2016-12-01

    On their way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques become more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content in the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and does not, of course, reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative, allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
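The non-negativity idea can be sketched with a toy projected-gradient solver, a simple stand-in for the full convex-optimization machinery an ICLS implementation would use (the design matrix and "delay" values below are synthetic):

```python
import numpy as np

def icls_nonneg(A, y, iters=5000, lr=None):
    """min ||A x - y||^2 subject to x >= 0, via projected gradient
    descent -- a simple stand-in for a full convex-optimization solver."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size 1/L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - lr * grad, 0.0)     # project onto x >= 0
    return x

# toy "zenith wet delay" estimation with one truly zero parameter:
# unconstrained LSQ may estimate it negative, the ICLS solution cannot.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 3))
x_true = np.array([0.1, 0.0, 0.25])            # non-negative delays
y = A @ x_true + 0.01 * rng.normal(size=30)
x_hat = icls_nonneg(A, y)
```

In practice one would use a dedicated bounded least-squares routine; the projection step here is the essential difference from OLS.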

  13. Charting the parameter space of the global 21-cm signal

    NASA Astrophysics Data System (ADS)

    Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan; Lotem, Matan

    2017-12-01

    The early star-forming Universe is still poorly constrained, with the properties of high-redshift stars, the first heating sources and reionization highly uncertain. This leaves observers planning 21-cm experiments with little theoretical guidance. In this work, we explore the possible range of high-redshift parameters including the star formation efficiency and the minimal mass of star-forming haloes; the efficiency, spectral energy distribution and redshift evolution of the first X-ray sources; and the history of reionization. These parameters are only weakly constrained by available observations, mainly the optical depth to the cosmic microwave background. We use realistic semi-numerical simulations to produce the global 21-cm signal over the redshift range z = 6-40 for each of 193 different combinations of the astrophysical parameters spanning the allowed range. We show that the expected signal fills a large parameter space, but with a fixed general shape for the global 21-cm curve. Even with our wide selection of models, we still find clear correlations between the key features of the global 21-cm signal and underlying astrophysical properties of the high-redshift Universe, namely the Ly α intensity, the X-ray heating rate and the production rate of ionizing photons. These correlations can be used to directly link future measurements of the global 21-cm signal to astrophysical quantities in a mostly model-independent way. We identify additional correlations that can be used as consistency checks.

  14. Constraining axion-like-particles with hard X-ray emission from magnetars

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-François; Sinha, Kuver

    2018-06-01

    Axion-like particles (ALPs) produced in the core of a magnetar will convert to photons in the magnetosphere, leading to possible signatures in the hard X-ray band. We perform a detailed calculation of the ALP-to-photon conversion probability in the magnetosphere, recasting the coupled differential equations that describe ALP-photon propagation into a form that is efficient for large scale numerical scans. We show the dependence of the conversion probability on the ALP energy, mass, ALP-photon coupling, magnetar radius, surface magnetic field, and the angle between the magnetic field and direction of propagation. Along the way, we develop an analytic formalism to perform similar calculations in more general n-state oscillation systems. Assuming ALP emission rates from the core that are just subdominant to neutrino emission, we calculate the resulting constraints on the ALP mass versus ALP-photon coupling space, taking SGR 1806-20 as an example. In particular, we take benchmark values for the magnetar radius and core temperature, and constrain the ALP parameter space by the requirement that the luminosity from ALP-to-photon conversion should not exceed the total observed luminosity from the magnetar. The resulting constraints are competitive with constraints from helioscope experiments in the relevant part of ALP parameter space.
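The two-state limit of such ALP-photon propagation can be sketched by eigen-decomposition of the mixing Hamiltonian and cross-checked against the standard two-level oscillation formula. The Δ values below are illustrative numbers, not magnetar-scale quantities:

```python
import numpy as np

def conversion_probability(delta_a, delta_g, delta_ag, z):
    # evolve psi = (a, gamma) under i d(psi)/dz = H psi with
    # H = [[delta_a, delta_ag], [delta_ag, delta_g]] (uniform medium)
    H = np.array([[delta_a, delta_ag], [delta_ag, delta_g]], dtype=float)
    w, U = np.linalg.eigh(H)                 # H = U diag(w) U^T
    psi0 = np.array([1.0, 0.0])              # start as a pure ALP
    psi = U @ (np.exp(-1j * w * z) * (U.T @ psi0))
    return float(abs(psi[1]) ** 2)           # photon appearance probability

# cross-check against the analytic two-level result
da, dg, dag, z = 1.0, 2.0, 0.3, 5.0
p_num = conversion_probability(da, dg, dag, z)
osc = np.sqrt((dg - da) ** 2 + 4 * dag ** 2)       # eigenvalue splitting
p_ana = (2 * dag / osc) ** 2 * np.sin(osc * z / 2) ** 2
```

The eigen-decomposition route generalizes directly to the n-state systems mentioned in the abstract, where no closed-form mixing angle exists.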

  15. Fermionic dark matter with pseudo-scalar Yukawa interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghorbani, Karim, E-mail: k-ghorbani@araku.ac.ir

    2015-01-01

    We consider a renormalizable extension of the standard model whose fermionic dark matter (DM) candidate interacts with a real singlet pseudo-scalar via a pseudo-scalar Yukawa term, while we assume that the full Lagrangian is CP-conserving at the classical level. When the pseudo-scalar boson develops a non-zero vacuum expectation value, spontaneous CP violation occurs, and this provides a CP-violating interaction of the dark sector with the SM particles through mixing between the Higgs-like boson and the SM-like Higgs boson. This scenario suggests a minimal number of free parameters. Focusing mainly on the indirect detection observables, we calculate the dark matter annihilation cross section and then compute the DM relic density in the range up to m_DM = 300 GeV. We then find viable regions in the parameter space constrained by the observed DM relic abundance as well as the invisible Higgs decay width in the light of the 125 GeV Higgs discovery at the LHC. We find that within the constrained region of the parameter space, there exists a model with dark matter mass m_DM ∼ 38 GeV annihilating predominantly into b quarks, which can explain the Fermi-LAT galactic gamma-ray excess.

  16. Model-independent constraints on modified gravity from current data and from the Euclid and SKA future surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taddei, Laura; Martinelli, Matteo; Amendola, Luca, E-mail: taddei@thphys.uni-heidelberg.de, E-mail: martinelli@lorentz.leidenuniv.nl, E-mail: amendola@thphys.uni-heidelberg.de

    2016-12-01

    The aim of this paper is to constrain modified gravity with redshift space distortion observations and supernovae measurements. Compared with a standard ΛCDM analysis, we include three additional free parameters, namely the initial conditions of the matter perturbations, the overall perturbation normalization, and a scale-dependent modified gravity parameter modifying the Poisson equation, in an attempt to perform a more model-independent analysis. First, we constrain the Poisson parameter Y (also called G_eff) by using currently available fσ_8 data and the recent SN catalog JLA. We find that the inclusion of the additional free parameters makes the constraints significantly weaker than when fixing them to the standard cosmological value. Second, we forecast future constraints on Y by using the predicted growth-rate data for the Euclid and SKA missions. Here again we point out the weakening of the constraints when the additional parameters are included. Finally, we adopt as the modified gravity Poisson parameter the specific Horndeski form, and use scale-dependent forecasts to build an exclusion plot for the Yukawa potential akin to the ones realized in laboratory experiments, both for the Euclid and the SKA surveys.

  17. Matter coupling in partially constrained vielbein formulation of massive gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Antonio De; Mukohyama, Shinji; Gümrükçüoğlu, A. Emir

    2016-01-01

    We consider a linear effective vielbein matter coupling that does not introduce the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of the ghost at all scales. Next, we investigate the cosmological application of this coupling in this new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display important differences in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields a slightly different allowed parameter space.

  18. Matter coupling in partially constrained vielbein formulation of massive gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Antonio De; Gümrükçüoğlu, A. Emir; Heisenberg, Lavinia

    2016-01-04

    We consider a linear effective vielbein matter coupling that does not introduce the Boulware-Deser ghost in ghost-free massive gravity. This is achieved in the partially constrained vielbein formulation. We first introduce the formalism and prove the absence of the ghost at all scales. Next, we investigate the cosmological application of this coupling in this new formulation. We show that even if the background evolution accords with the metric formulation, the perturbations display important differences in the partially constrained vielbein formulation. We study the cosmological perturbations of the two branches of solutions separately. The tensor perturbations coincide with those in the metric formulation. Concerning the vector and scalar perturbations, the requirement of absence of ghost and gradient instabilities yields a slightly different allowed parameter space.

  19. Constraining the evolution of the Hubble Parameter using cosmic chronometers

    NASA Astrophysics Data System (ADS)

    Dickinson, Hugh

    2017-08-01

    Substantial investment is being made in space- and ground-based missions with the goal of revealing the nature of the observed cosmic acceleration. This is one of the most important unsolved problems in cosmology today. We propose here to constrain the evolution of the Hubble parameter, H(z), between 1.3 < z < 2, using the cosmic chronometer method, based on differential age measurements for passively evolving galaxies. Existing WFC3-IR G102 and G141 grism data obtained by the WISP, 3D-HST+AGHAST, FIGS, and CLEAR surveys will yield a sample of 140 suitable standard clocks, expanding existing samples by a factor of five. These additional data will enable us to improve existing constraints on the evolution of H at high redshift, and in so doing to better understand the fundamental nature of dark energy.
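The cosmic chronometer relation H(z) = −[1/(1+z)] dz/dt can be illustrated with the age difference of two galaxy populations at nearby redshifts. The redshifts and ages below are synthetic round numbers, not survey measurements:

```python
def H_chronometer(z1, z2, t1_gyr, t2_gyr):
    # H(z) = -1/(1+z) * dz/dt, with dz/dt approximated from the age
    # difference of two passively evolving populations at nearby redshifts
    zbar = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / (t2_gyr - t1_gyr)    # per Gyr (older population = lower z)
    H_per_gyr = -dz_dt / (1.0 + zbar)
    return H_per_gyr * 977.8                 # 1 Gyr^-1 ≈ 977.8 km/s/Mpc

# synthetic ages: 3.316 Gyr at z = 1.30 and 3.137 Gyr at z = 1.35
Hz = H_chronometer(1.30, 1.35, 3.316, 3.137)   # km/s/Mpc
```

The method's appeal is that it needs only the differential age dt, not absolute galaxy ages, so many systematic age offsets cancel.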

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Edward K.; Cornish, Neil J.

    Massive black hole binaries are key targets for the space-based gravitational wave Laser Interferometer Space Antenna (LISA). Several studies have investigated how LISA observations could be used to constrain the parameters of these systems. Until recently, most of these studies have ignored the higher harmonic corrections to the waveforms. Here we analyze the effects of the higher harmonics in more detail by performing extensive Monte Carlo simulations. We pay particular attention to how the higher harmonics impact parameter correlations, and show that the additional harmonics help mitigate the impact of having two laser links fail, by allowing for an instantaneous measurement of the gravitational wave polarization with a single interferometer channel. By looking at parameter correlations we are able to explain why certain mass ratios provide dramatic improvements in certain parameter estimations, and illustrate how the improved polarization measurement improves the prospects for single interferometer operation.

  1. Matrix Transfer Function Design for Flexible Structures: An Application

    NASA Technical Reports Server (NTRS)

    Brennan, T. J.; Compito, A. V.; Doran, A. L.; Gustafson, C. L.; Wong, C. L.

    1985-01-01

    The application of matrix transfer function design techniques to the problem of disturbance rejection on a flexible space structure is demonstrated. The design approach is based on parameterizing a class of stabilizing compensators for the plant and formulating the design specifications as a constrained minimization problem in terms of these parameters. The solution yields a matrix transfer function representation of the compensator. A state space realization of the compensator is constructed to investigate performance and stability on the nominal and perturbed models. The application is made to the ACOSSA (Active Control of Space Structures) optical structure.

  2. Constraining Secluded Dark Matter models with the public data from the 79-string IceCube search for dark matter in the Sun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ardid, M.; Felis, I.; Martínez-Mora, J.A.

    Public data from the 79-string IceCube search for dark matter in the Sun are used to test Secluded Dark Matter models. No significant excess over background is observed, and constraints on the parameters of the models are derived. Moreover, the search is also used to constrain the dark photon model in the region of the parameter space with dark photon masses between 0.22 and ∼1 GeV and a kinetic mixing parameter ε ∼ 10^−9, which remains unconstrained. These are the first constraints on dark photons from neutrino telescopes. It is expected that neutrino telescopes will be efficient tools to test dark photons by means of different searches in the Sun, Earth, and Galactic Center, which could complement constraints from direct detection, accelerators, astrophysics, and indirect detection with other messengers, such as gamma rays or antiparticles.

  3. Finding viable models in SUSY parameter spaces with signal specific discovery potential

    NASA Astrophysics Data System (ADS)

    Burgess, Thomas; Lindroos, Jan Øye; Lipniacka, Anna; Sandaker, Heidi

    2013-08-01

    Recent results from ATLAS giving a Higgs mass of 125.5 GeV further constrain already highly constrained supersymmetric models such as the pMSSM or CMSSM/mSUGRA. As a consequence, finding potentially discoverable and non-excluded regions of model parameter space is becoming increasingly difficult. Several groups have invested considerable effort in studying the consequences of Higgs mass bounds, upper limits on rare B-meson decays, and limits on relic dark matter density for constrained models, aiming at predicting superpartner masses and establishing the likelihood of SUSY models compared to that of the Standard Model vis-à-vis experimental data. In this paper a framework for efficient search for discoverable, non-excluded regions of different SUSY spaces giving a specific experimental signature of interest is presented. The method employs an improved Markov Chain Monte Carlo (MCMC) scheme exploiting an iteratively updated likelihood function to guide the search for viable models. Existing experimental and theoretical bounds as well as the LHC discovery potential are taken into account. This includes recent bounds on relic dark matter density, the Higgs sector, and rare B-meson decays. A clustering algorithm is applied to classify selected models according to expected phenomenology, enabling automated choice of experimental benchmarks and regions to be used for optimizing searches. The aim is to provide experimentalists with a viable tool to help target experimental signatures to search for, once a class of models of interest is established. As an example, a search for viable CMSSM models with τ-lepton signatures observable with the 2012 LHC data set is presented. In the search 105209 unique models were probed. From these, ten reference benchmark points covering different ranges of phenomenological observables at the LHC were selected.

  4. Cosmological Constraints from the Redshift Dependence of the Volume Effect Using the Galaxy 2-point Correlation Function across the Line of Sight

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Dong; Park, Changbom; Sabiu, Cristiano G.; Park, Hyunbae; Cheng, Cheng; Kim, Juhan; Hong, Sungwook E.

    2017-08-01

    We develop a methodology to use the redshift dependence of the galaxy 2-point correlation function (2pCF) across the line of sight, ξ(r⊥), as a probe of cosmological parameters. The positions of galaxies in comoving Cartesian space vary under different cosmological parameter choices, inducing a redshift-dependent scaling in the galaxy distribution. This geometrical distortion can be observed as a redshift-dependent rescaling in the measured ξ(r⊥). We test this methodology using a sample of 1.75 billion mock galaxies at redshifts 0, 0.5, 1, 1.5, and 2, drawn from the Horizon Run 4 N-body simulation. The shape of ξ(r⊥) can exhibit a significant redshift evolution when the galaxy sample is analyzed under a cosmology differing from the true, simulated one. Other contributions, including the gravitational growth of structure, galaxy bias, and the redshift space distortions, do not produce large redshift evolution in the shape. We show that one can make use of this geometrical distortion to constrain the values of cosmological parameters governing the expansion history of the universe. This method could be applicable to future large-scale structure surveys, especially photometric surveys such as DES and LSST, to derive tight cosmological constraints. This work is a continuation of our previous works as a strategy to constrain cosmological parameters using redshift-invariant physical quantities.

  5. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, Geoffrey C.; ,

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning, and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to the analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
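The generic form of such a regularized inversion, min_m ||Gm − d||² + λ||m − m_prior||², has a closed-form solution via the normal equations. The sketch below is a plain Tikhonov-toward-prior example, not the chapter's specific smoothing-spline coupling, and all matrices are synthetic:

```python
import numpy as np

def regularized_inversion(G, d, m_prior, lam):
    # minimize ||G m - d||^2 + lam * ||m - m_prior||^2 (normal equations)
    n = G.shape[1]
    lhs = G.T @ G + lam * np.eye(n)
    rhs = G.T @ d + lam * m_prior
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(1)
G = rng.normal(size=(40, 10))            # synthetic sensitivity matrix
m_true = np.linspace(1.0, 2.0, 10)       # synthetic "log-K" field
d = G @ m_true + 0.05 * rng.normal(size=40)
m_prior = np.full(10, 1.5)               # background model (e.g. from radar)
m_small = regularized_inversion(G, d, m_prior, lam=1e-3)  # data-dominated
m_big = regularized_inversion(G, d, m_prior, lam=1e6)     # prior-dominated
```

Sweeping λ between these extremes traces out exactly the data-fit versus model-norm tradeoff curve the chapter describes.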

  6. Stochastic background from cosmic (super)strings: Popcorn-like and (Gaussian) continuous regimes

    NASA Astrophysics Data System (ADS)

    Regimbau, Tania; Giampanis, Stefanos; Siemens, Xavier; Mandic, Vuk

    2012-03-01

    In the era of the next generation of gravitational wave experiments a stochastic background from cusps of cosmic (super)strings is expected to be probed and, if not detected, to be significantly constrained. A popcorn-like background can be, for part of the parameter space, as pronounced as the (Gaussian) continuous contribution from unresolved sources that overlap in frequency and time. We study both contributions from unresolved cosmic string cusps over a range of frequencies relevant to ground based interferometers, such as the LIGO/Virgo second generation and Einstein Telescope third generation detectors, the space antenna LISA, and pulsar timing arrays. We compute the sensitivity (at the 2σ level) in the parameter space for the LIGO/Virgo second generation detector, the Einstein Telescope detector, LISA, and pulsar timing arrays. We conclude that the popcorn regime is complementary to the continuous background. Its detection could therefore enhance confidence in a stochastic background detection and possibly help determine fundamental string parameters such as the string tension and the reconnection probability.

  7. Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space

    DTIC Science & Technology

    2015-05-01

    ARL-TR-7294 • MAY 2015. US Army Research Laboratory. Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space, by Berend Christopher...

  8. Constraining the phantom braneworld model from cosmic structure sizes

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sourav; Kousvos, Stefanos R.

    2017-11-01

    We consider the phantom braneworld model in the context of the maximum turnaround radius, R_TA,max, of a stable, spherical cosmic structure with a given mass. The maximum turnaround radius is the point where the attraction due to the central inhomogeneity is balanced by the repulsion of the ambient dark energy, beyond which a structure cannot hold any mass, thereby giving the maximum upper bound on the size of a stable structure. In this work we derive an analytical expression for R_TA,max for this model using cosmological scalar perturbation theory. Using this, we numerically constrain the parameter space, including a bulk cosmological constant and the Weyl fluid, from the mass versus observed size data for some nearby, nonvirial cosmic structures. We use different values of the matter density parameter Ω_m, both larger and smaller than that of Λ cold dark matter, as the input in our analysis. We show, in particular, that (a) with a vanishing bulk cosmological constant the predicted upper bound is always greater than what is actually observed; a similar conclusion holds if the bulk cosmological constant is negative; (b) if it is positive, the predicted maximum size can fall considerably below what is actually observed, and owing to the involved nature of the field equations, this leads to interesting constraints not only on the bulk cosmological constant itself but on the whole parameter space of the theory.
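For orientation, the ΛCDM baseline against which such braneworld predictions are compared is the standard result R_TA,max = (3GM/Λc²)^(1/3), which can be evaluated directly (H₀ = 70 km/s/Mpc and Ω_Λ = 0.7 below are illustrative round numbers):

```python
# constants in SI; H0 = 70 km/s/Mpc and Omega_Lambda = 0.7 are round numbers
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m/s
Msun = 1.989e30          # kg
Mpc = 3.086e22           # m
H0 = 70e3 / Mpc          # s^-1
Lam = 3 * 0.7 * H0 ** 2 / c ** 2   # cosmological constant, m^-2

def r_ta_max_mpc(mass_msun):
    # R_TA,max = (3 G M / (Lambda c^2))^(1/3), converted to Mpc
    return (3 * G * mass_msun * Msun / (Lam * c ** 2)) ** (1.0 / 3.0) / Mpc

R = r_ta_max_mpc(1e15)   # a ~1e15 Msun cluster
```

A 10^15 M_sun cluster comes out at roughly 11 Mpc, which sets the scale of the observed-size comparison made in the paper.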

  9. Emulating Simulations of Cosmic Dawn for 21 cm Power Spectrum Constraints on Cosmology, Reionization, and X-Ray Heating

    NASA Astrophysics Data System (ADS)

    Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley

    2017-10-01

    Current and upcoming radio interferometric experiments are aiming to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds fast enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on σ_8. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
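The emulate-then-sample workflow can be sketched in a one-parameter toy: fit a cheap surrogate to sparse runs of an "expensive" model, then run Metropolis MCMC against the surrogate only. The model, polynomial emulator, and noise level are all illustrative and unrelated to the paper's actual simulation code:

```python
import numpy as np

rng = np.random.default_rng(2)

def expensive_model(theta):
    # stand-in for a costly 21 cm simulation producing one summary statistic
    return np.sin(3 * theta) + theta ** 2

# 1) train a cheap emulator (here just a polynomial fit) on sparse runs
train_x = np.linspace(-1.0, 1.0, 25)
train_y = expensive_model(train_x)
coeffs = np.polyfit(train_x, train_y, deg=7)
emulator = lambda t: np.polyval(coeffs, t)

# 2) run Metropolis MCMC against the emulator instead of the simulation
data = expensive_model(0.4)     # noiseless mock observation at theta = 0.4
sigma = 0.05                    # assumed measurement uncertainty

def log_post(t):
    if not -1.0 <= t <= 1.0:    # flat prior on [-1, 1]
        return -np.inf
    return -0.5 * ((emulator(t) - data) / sigma) ** 2

chain, t = [], 0.0
for _ in range(20000):
    prop = t + 0.1 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(t):
        t = prop
    chain.append(t)
posterior = np.array(chain[5000:])   # discard burn-in
```

The emulator is evaluated tens of thousands of times here; with the real simulation in the loop, the same chain would be computationally infeasible, which is the paper's central point.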

  10. Constraining neutron guide optimizations with phase-space considerations

    NASA Astrophysics Data System (ADS)

    Bertelsen, Mads; Lefmann, Kim

    2016-09-01

    We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter is needed to describe the expected focusing abilities of the guide to be optimized, going from perfectly focusing to no correlation between position and velocity. The second parameter controls the neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints which demonstrate a higher signal-to-noise ratio than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, showing that the simulated optimal neutron intake is close to the analytical prediction when assuming that the guide is dominated by multiple scattering events.

  11. Design of an ultrasonic micro-array for near field sensing during retinal microsurgery.

    PubMed

    Clarke, Clyde; Etienne-Cummings, Ralph

    2006-01-01

    A method for obtaining the optimal and specific sensor parameters for a tool-tip-mountable ultrasonic transducer micro-array is presented. The ultrasonic transducer array sensor parameters, such as frequency of operation, element size, inter-element spacing, number of elements and transducer geometry, are obtained using a quadratic programming method to maximize directivity while being constrained to a total array size of 4 mm² and the required resolution for retinal imaging. The technique is used to design a uniformly spaced N×N transducer array that is capable of resolving structures in the retina as small as 2 μm from a distance of 100 μm. The resultant 37×37 array of 16 μm transducers with 26 μm spacing will be realized as a Capacitive Micromachined Ultrasonic Transducer (CMUT) array and used for imaging and robotic guidance during retinal microsurgery.

  12. Neutrino-two-Higgs-doublet model with the inverse seesaw mechanisms

    NASA Astrophysics Data System (ADS)

    Tang, Yi-Lei; Zhu, Shou-hua

    2017-09-01

    In this paper, we combine the ν-two-Higgs-doublet model with the inverse seesaw mechanism. In this model, the Yukawa couplings involving the sterile neutrinos and the exotic Higgs bosons can be of order 1 in the case of a large tan β. We calculate the corrections to the Z-resonance parameters R_{l_i}, A_{l_i}, and N_ν, together with the l_1 → l_2 γ branching ratios and the muon anomalous g-2. Compared with the current bounds and the plans for future colliders, we find that the corrections to the electroweak parameters can be constrained or discovered in much of the parameter space.

  13. Patchy screening of the cosmic microwave background by inhomogeneous reionization

    NASA Astrophysics Data System (ADS)

    Gluscevic, Vera; Kamionkowski, Marc; Hanson, Duncan

    2013-02-01

    We derive a constraint on patchy screening of the cosmic microwave background from inhomogeneous reionization using off-diagonal TB and TT correlations in WMAP-7 temperature/polarization data. We interpret this as a constraint on the rms optical-depth fluctuation Δτ as a function of a coherence multipole L_C. We relate these parameters to a comoving coherence scale (bubble size) R_C in a phenomenological model where reionization is instantaneous but occurs on a crinkly surface, and also to the bubble size in a model of “Swiss cheese” reionization where bubbles of fixed size are spread over some range of redshifts. The current WMAP data are still too weak, by several orders of magnitude, to constrain reasonable models, but forthcoming Planck and future EPIC data should begin to approach interesting regimes of parameter space. We also present constraints on the parameter space imposed by the recent results from the EDGES experiment.

  14. Multiple angles on the sterile neutrino - a combined view of cosmological and oscillation limits

    NASA Astrophysics Data System (ADS)

    Guzowski, Pawel

    2017-09-01

    The possible existence of sterile neutrinos is an important unresolved question for both particle physics and cosmology. Data sensitive to sterile neutrinos come from both particle physics experiments and astrophysical measurements of the Cosmic Microwave Background. In this study, we address the question of whether these two contrasting data sets provide complementary information about sterile neutrinos. We focus on the muon disappearance oscillation channel, taking data from the MINOS, IceCube and Planck experiments and converting the limits into particle physics and cosmological parameter spaces to illustrate the different regions of parameter space where the data sets have the best sensitivity. For the first time, we combine the data sets into a single analysis to illustrate how the limits on the parameters of the sterile-neutrino model are strengthened. We investigate how data from a future accelerator neutrino experiment (SBN) will be able to further constrain this picture.

  15. Shape parameters explain data from spatial transformations: comment on Pearce et al. (2004) and Tommasi & Polli (2004).

    PubMed

    Cheng, Ken; Gallistel, C R

    2005-04-01

    In 2 recent studies on rats (J. M. Pearce, M. A. Good, P. M. Jones, & A. McGregor, see record 2004-12429-006) and chicks (L. Tommasi & C. Polli, see record 2004-15642-007), the animals were trained to search in 1 corner of a rectilinear space. When tested in transformed spaces of different shapes, the animals still showed systematic choices. Both articles rejected the global matching of shape in favor of local matching processes. The present authors show that although matching by shape congruence is unlikely, matching by the shape parameter of the 1st principal axis can explain all the data. Other shape parameters, such as symmetry axes, may do even better. Animals are likely to use some global matching to constrain and guide the use of local cues; such use keeps local matching processes from exploding in complexity.

  16. Experimental constraints on metric and non-metric theories of gravity

    NASA Technical Reports Server (NTRS)

    Will, Clifford M.

    1989-01-01

    Experimental constraints on metric and non-metric theories of gravitation are reviewed. Tests of the Einstein Equivalence Principle indicate that only metric theories of gravity are likely to be viable. Solar system experiments constrain the parameters of the weak field, post-Newtonian limit to be close to the values predicted by general relativity. Future space experiments will provide further constraints on post-Newtonian gravity.

  17. Observational Role of Dark Matter in f(R) Models for Structure Formation

    NASA Astrophysics Data System (ADS)

    Verma, Murli Manohar; Yadav, Bal Krishna

    The fixed points for the dynamical system in the phase space have been calculated with dark matter in the f(R) gravity models. The stability conditions of these fixed points are obtained in the ongoing accelerated phase of the universe, and the values of the Hubble parameter and Ricci scalar are obtained for various evolutionary stages of the universe. We present a range of modifications of the general relativistic action consistent with the ΛCDM model. We elaborate upon the fact that upcoming cosmological observations would further constrain the bounds on the possible forms of f(R) with greater precision, which could in turn constrain the search for dark matter in colliders.

  18. Statistical mechanics of budget-constrained auctions

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-07-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.

  19. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability of adopting configurations with worse objective-function values), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably the basic principles that a starting configuration is randomly selected from within the parameter space, that the algorithm tests other configurations with the goal of finding the globally optimal solution, and that the region from which new configurations can be selected shrinks as the search continues. The key difference between the algorithms is that in the SA algorithm a single path, or trajectory, is taken in parameter space from the starting point to the globally optimal solution, while in the RBSA algorithm many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space, to improve search efficiency by allowing fast fine-tuning of the continuous variables within the trust region at that configuration point.
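    The conventional SA loop described in this record can be sketched in a few lines. This is a generic illustration only: the toy objective, cooling schedule, and shrinking-region rule below are chosen for the example and are not taken from the NASA report.

```python
import math
import random

def simulated_annealing(objective, lower, upper, n_iter=20000, t0=1.0, seed=0):
    """Minimize `objective` over a box [lower, upper]^d using a geometric
    cooling schedule and a search region that shrinks as iterations proceed."""
    rng = random.Random(seed)
    dim = len(lower)
    # Random starting configuration inside the box.
    x = [rng.uniform(lower[i], upper[i]) for i in range(dim)]
    fx = objective(x)
    best, fbest = list(x), fx
    for k in range(n_iter):
        t = t0 * 0.999 ** k                  # annealing temperature
        scale = 1.0 - k / n_iter             # shrinking search region
        cand = [min(upper[i], max(lower[i],
                    x[i] + rng.gauss(0.0, scale * (upper[i] - lower[i]))))
                for i in range(dim)]
        fc = objective(cand)
        # Accept improvements always; accept worse moves with a
        # Boltzmann probability controlled by the temperature.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# Toy multimodal objective: a quadratic bowl with a cosine ripple.
# The global minimum is 0 at (1, -2); the ripple adds local minima.
f = lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2 + 0.3 * (1 - math.cos(5 * (v[0] - 1)))
sol, val = simulated_annealing(f, [-5, -5], [5, 5])
```

The RBSA variant described above would launch several such loops from different random starts and keep a per-parameter trust region; the single-trajectory version here is the baseline it improves on.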

  20. Exploration of DGVM Parameter Solution Space Using Simulated Annealing: Implications for Forecast Uncertainties

    NASA Astrophysics Data System (ADS)

    Wells, J. R.; Kim, J. B.

    2011-12-01

    Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not do a full exploration of the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from the published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, "trial-and-error" methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, by using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for a spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration.
We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, ET and runoff, and thereby to identify a highly important source of DGVM uncertainty.

  1. Fermi-LAT upper limits on gamma-ray emission from colliding wind binaries

    DOE PAGES

    Werner, Michael; Reimer, O.; Reimer, A.; ...

    2013-07-09

    Here, colliding wind binaries (CWBs) are thought to give rise to a plethora of physical processes including the acceleration and interaction of relativistic particles. Observation of synchrotron radiation in the radio band confirms that there is a relativistic electron population in CWBs. Accordingly, CWBs have been suspected sources of high-energy γ-ray emission since the COS-B era. Theoretical models exist that characterize the underlying physical processes leading to particle acceleration and quantitatively predict the non-thermal emission observable at Earth. We strive to find evidence of γ-ray emission from a sample of seven CWB systems: WR 11, WR 70, WR 125, WR 137, WR 140, WR 146, and WR 147. Theoretical modelling identified these systems as the most favourable candidates for emitting γ-rays. We make a comparison with existing γ-ray flux predictions and investigate possible constraints. We used 24 months of data from the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope to perform a dedicated likelihood analysis of CWBs in the LAT energy range. As a result, we find no evidence of γ-ray emission from any of the studied CWB systems and determine corresponding flux upper limits. For some CWBs the interplay of orbital and stellar parameters renders the Fermi-LAT data not sensitive enough to constrain the parameter space of the emission models. In the cases of WR 140 and WR 147, the Fermi-LAT upper limits appear to rule out some model predictions entirely and constrain theoretical models over a significant parameter space. A comparison of our findings to the CWB η Car is made.

  2. The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: Cosmological implications of the Fourier space wedges of the final sample

    NASA Astrophysics Data System (ADS)

    Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Scoccimarro, Román; Crocce, Martín; Dalla Vecchia, Claudio; Montesano, Francesco; Gil-Marín, Héctor; Ross, Ashley J.; Beutler, Florian; Rodríguez-Torres, Sergio; Chuang, Chia-Hsun; Prada, Francisco; Kitaura, Francisco-Shu; Cuesta, Antonio J.; Eisenstein, Daniel J.; Percival, Will J.; Vargas-Magaña, Mariana; Tinker, Jeremy L.; Tojeiro, Rita; Brownstein, Joel R.; Maraston, Claudia; Nichol, Robert C.; Olmstead, Matthew D.; Samushia, Lado; Seo, Hee-Jong; Streblyanska, Alina; Zhao, Gong-bo

    2017-05-01

    We extract cosmological information from the anisotropic power-spectrum measurements from the recently completed Baryon Oscillation Spectroscopic Survey (BOSS), extending the concept of clustering wedges to Fourier space. Making use of new fast-Fourier-transform-based estimators, we measure the power-spectrum clustering wedges of the BOSS sample by filtering out the information of Legendre multipoles ℓ > 4. Our modelling of these measurements is based on novel approaches to describe non-linear evolution, bias and redshift-space distortions, which we test using synthetic catalogues based on large-volume N-body simulations. We are able to include smaller scales than in previous analyses, resulting in tighter cosmological constraints. Using three overlapping redshift bins, we measure the angular-diameter distance, the Hubble parameter and the cosmic growth rate, and explore the cosmological implications of our full-shape clustering measurements in combination with cosmic microwave background and Type Ia supernova data. Assuming a Λ cold dark matter (ΛCDM) cosmology, we constrain the matter density to Ω_M = 0.311_{-0.010}^{+0.009} and the Hubble parameter to H_0 = 67.6_{-0.6}^{+0.7} km s^{-1} Mpc^{-1}, at a confidence level of 68 per cent. We also allow for non-standard dark energy models and modifications of the growth rate, finding good agreement with the ΛCDM paradigm. For example, we constrain the equation-of-state parameter to w = -1.019_{-0.039}^{+0.048}. This paper is part of a set that analyses the final galaxy-clustering data set from BOSS. The measurements and likelihoods presented here are combined with others in Alam et al. to produce the final cosmological constraints from BOSS.

  3. Sensitivity of Space Station alpha joint robust controller to structural modal parameter variations

    NASA Technical Reports Server (NTRS)

    Kumar, Renjith R.; Cooper, Paul A.; Lim, Tae W.

    1991-01-01

    The photovoltaic array sun tracking control system of Space Station Freedom is described. A synthesis procedure for determining optimized values of the design variables of the control system is developed using a constrained optimization technique. The synthesis is performed to provide a given level of stability margin, to achieve the most responsive tracking performance, and to meet other design requirements. Performance of the baseline design, which is synthesized using predicted structural characteristics, is discussed, and the sensitivity of the stability margin is examined for variations of the frequencies, mode shapes and damping ratios of dominant structural modes. The design provides enough robustness to tolerate a sizeable error in the predicted modal parameters. A study was made of the sensitivity of performance indicators as the modal parameters of the dominant modes vary. The design variables are resynthesized for varying modal parameters in order to achieve the most responsive tracking performance while satisfying the design requirements. This procedure of reoptimizing design parameters would be useful for improving control system performance if accurate model data are provided.

  4. Can we go From Tomographically Determined Seismic Velocities to Composition? Amplitude Resolution Issues in Local Earthquake Tomography

    NASA Astrophysics Data System (ADS)

    Wagner, L.

    2007-12-01

    There have been a number of recent papers (e.g., Lee (2003), James et al. (2004), Hacker and Abers (2004), Schutt and Lesher (2006)) which calculate predicted velocities for xenolith compositions at mantle pressures and temperatures. It is tempting, therefore, to attempt to go the other way: to use tomographically determined absolute velocities to constrain mantle composition. However, in order to do this, it is vital that one be able to accurately constrain not only the polarity of the determined velocity deviations (i.e., fast vs. slow) but also their amplitudes (how much faster or slower relative to the starting model), if absolute velocities are to be so closely analyzed. While much attention has been given to issues concerning spatial resolution in seismic tomography (i.e., what areas are fast and what areas are slow), little attention has been directed at the issue of amplitude resolution (how fast, how slow). Velocity-deviation amplitudes in seismic tomography are heavily influenced by the amount of regularization used and the number of iterations performed. Determining these two parameters is a difficult and little-discussed problem. I explore the effect of these two parameters on the amplitudes obtained from the tomographic inversion of the Chile Argentina Geophysical Experiment (CHARGE) dataset, and attempt to determine a reasonable solution space for the low Vp, high Vs, low Vp/Vs anomaly found above the flat slab in central Chile. I then compare this solution space to the range of experimentally determined velocities for peridotite end-members to evaluate our ability to constrain composition using tomographically determined seismic velocities.
I find that in general, it will be difficult to constrain the compositions of normal mantle peridotites using tomographically determined velocities, but that in the unusual case of the anomaly above the flat slab, the observed velocity structure still has an anomalously high S wave velocity and low Vp/Vs ratio that is most consistent with enstatite, but inconsistent with the predicted velocities of known mantle xenoliths.

  5. Dibaryons in neutron stars

    NASA Technical Reports Server (NTRS)

    Olinto, Angela V.; Haensel, Pawel; Frieman, Joshua A.

    1991-01-01

    The effects of H-dibaryons on the structure of neutron stars are studied. It was found that H particles could be present in neutron stars for a wide range of dibaryon masses. The appearance of dibaryons softens the equation of state, lowers the maximum neutron star mass, and affects the transport properties of dense matter. The parameter space for dibaryons is constrained by requiring that a 1.44 solar mass neutron star be gravitationally stable.

  6. Optimal Policy of Cross-Layer Design for Channel Access and Transmission Rate Adaptation in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    He, Hao; Wang, Jun; Zhu, Jiang; Li, Shaoqian

    2010-12-01

    In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels, for both centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint while taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by a standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action-set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for the convenience of implementation, we also consider the pure policy design and analyze its characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case that the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different cases. Numerical results validate the theoretical analysis.
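    The CMDP-via-LP approach mentioned in this record can be illustrated with the standard occupation-measure formulation on a toy problem. All numbers below are invented for the example (a two-state, two-action "channel" with a power budget), and `scipy.optimize.linprog` stands in for whatever LP solver the authors used.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical CMDP: 2 states, 2 actions.
# P[s, a] = transition distribution over next states,
# r[s, a] = throughput reward, cost[s, a] = transmission power.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.7, 0.3], [0.1, 0.9]]])
r = np.array([[1.0, 4.0], [2.0, 6.0]])
cost = np.array([[0.0, 2.0], [1.0, 3.0]])
budget = 1.5                               # average power constraint

# LP over stationary occupation measures rho(s, a),
# flattened in the order (0,0), (0,1), (1,0), (1,1).
c_obj = -r.flatten()                       # linprog minimizes, so negate reward
A_eq = [np.ones(4),                        # rho is a probability distribution
        [1 - P[0, 0, 0], 1 - P[0, 1, 0],   # stationarity of state 0:
         -P[1, 0, 0], -P[1, 1, 0]]]        # sum_a rho(0,a) = sum_{s,a} rho(s,a) P(0|s,a)
b_eq = [1.0, 0.0]
A_ub = [cost.flatten()]                    # expected power <= budget
b_ub = [budget]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
rho = res.x.reshape(2, 2)
# The optimal (possibly randomized) policy is pi(a|s) = rho(s,a) / sum_a rho(s,a).
```

For two states, one stationarity equation plus normalization suffices; with more states one adds a flow-balance row per state. The optimal CMDP policy may randomize, which is exactly why the abstract treats deterministic ("pure") policies as a separate design question.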

  7. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
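    The baseline (unrectified) dynamics this record starts from, ascending the log-likelihood by matching model moments to data moments, can be sketched for a small pairwise Ising model where the partition function is computed by exact enumeration. The data are synthetic and the paper's rectified algorithm is not reproduced here; this is only the plain steepest-ascent loop it improves on.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recording": binary patterns for n = 3 units, spins in {-1, +1}.
n = 3
data = rng.integers(0, 2, size=(2000, n)) * 2 - 1
pairs = list(itertools.combinations(range(n), 2))
f_data = np.concatenate([data.mean(0),                       # empirical <s_i>
                         [(data[:, i] * data[:, j]).mean() for i, j in pairs]])

# Enumerate all 2^n states and their feature vectors (s_i and s_i s_j).
states = np.array(list(itertools.product([-1, 1], repeat=n)))
feats = np.concatenate([states,
                        np.stack([states[:, i] * states[:, j] for i, j in pairs], 1)], 1)

theta = np.zeros(feats.shape[1])          # fields h and couplings J
for _ in range(5000):
    logp = feats @ theta
    p = np.exp(logp - logp.max())
    p /= p.sum()                          # exact Boltzmann distribution
    grad = f_data - feats.T @ p           # data moments minus model moments
    theta += 0.2 * grad                   # plain (unrectified) ascent step

# Recompute the model moments at the final parameters.
logp = feats @ theta
p = np.exp(logp - logp.max())
p /= p.sum()
model_moments = feats.T @ p
```

At convergence the gradient vanishes and the model reproduces the empirical means and pairwise correlations; the paper's contribution is to rectify the metric of this parameter space and to replace exact enumeration with Gibbs sampling for large populations.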

  8. Methods of Helium Injection and Removal for Heat Transfer Augmentation

    NASA Technical Reports Server (NTRS)

    Haight, Harlan; Kegley, Jeff; Bourdreaux, Meghan

    2008-01-01

    While augmentation of heat transfer from a test article by helium gas at low pressures is well known, the method is rarely employed during space simulation testing because the test objectives usually involve simulation of an orbital thermal environment. Test objectives of cryogenic optical testing at Marshall Space Flight Center's X-ray Cryogenic Facility (XRCF) have typically not been constrained by orbital environment parameters. As a result, several methods of helium injection have been utilized at the XRCF since 1999 to decrease thermal transition times. A brief synopsis of these injection (and removal) methods will be presented.

  9. Methods of Helium Injection and Removal for Heat Transfer Augmentation

    NASA Technical Reports Server (NTRS)

    Kegley, Jeffrey

    2008-01-01

    While augmentation of heat transfer from a test article by helium gas at low pressures is well known, the method is rarely employed during space simulation testing because the test objectives are to simulate an orbital thermal environment. Test objectives of cryogenic optical testing at Marshall Space Flight Center's X-ray Calibration Facility (XRCF) have typically not been constrained by orbital environment parameters. As a result, several methods of helium injection have been utilized at the XRCF since 1999 to decrease thermal transition times. A brief synopsis of these injection (and removal) methods will be presented.

  10. Constraining the dark energy models with H (z ) data: An approach independent of H0

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Fotios K.; Basilakos, Spyros

    2018-03-01

    We study the performance of the latest H(z) data in constraining the cosmological parameters of different cosmological models, including the Chevallier-Polarski-Linder w_0-w_1 parametrization. First, we introduce a statistical procedure in which the chi-square estimator is not affected by the value of the Hubble constant. As a result, we find that the H(z) data do not rule out the possibility of either nonflat models or dynamical dark energy cosmological models. However, we verify that the time-varying equation-of-state parameter w(z) is not constrained by the current expansion data. Combining the H(z) and Type Ia supernova (SNIa) data, we find that the joint H(z)/SNIa statistical analysis provides a substantial improvement of the cosmological constraints with respect to those of the H(z) analysis alone. Moreover, the w_0-w_1 parameter space provided by the H(z)/SNIa joint analysis is in very good agreement with that of Planck 2015, which confirms that the present analysis with the H(z) and SNIa probes correctly recovers the expansion of the Universe as found by the Planck team. Finally, we generate sets of Monte Carlo realizations in order to quantify the ability of the H(z) data to provide strong constraints on the dark energy model parameters. The Monte Carlo approach shows significant improvement of the constraints when the sample is increased to 100 H(z) measurements. Such a goal can be achieved in the future, especially in the light of the next generation of surveys.
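    A standard way to make the chi-square estimator independent of H_0 is to write H(z) = H_0 E(z) and minimize analytically over the normalization H_0, leaving a statistic that depends only on the shape E(z). The paper's exact procedure may differ; the sketch below uses mock data, not the real H(z) compilation.

```python
import numpy as np

def chi2_H0_free(z, Hobs, sigma, E):
    """Chi-square for H(z) = H0 * E(z) data, analytically minimized over H0.

    With A = sum(Hobs^2/sigma^2), B = sum(Hobs*E/sigma^2), C = sum(E^2/sigma^2),
    the minimum over H0 is A - B^2/C, attained at H0 = B/C.
    """
    Ez = E(z)
    A = np.sum(Hobs**2 / sigma**2)
    B = np.sum(Hobs * Ez / sigma**2)
    C = np.sum(Ez**2 / sigma**2)
    return A - B**2 / C

# Flat LCDM shape: E(z) = sqrt(Om*(1+z)^3 + 1 - Om).
E_lcdm = lambda z, Om=0.3: np.sqrt(Om * (1 + z)**3 + 1 - Om)

# Mock noiseless measurements generated with H0 = 70 and Om = 0.3.
z = np.array([0.1, 0.5, 1.0, 1.5, 2.0])
Hobs = 70.0 * E_lcdm(z)
sigma = np.full_like(z, 5.0)

chi2 = chi2_H0_free(z, Hobs, sigma, E_lcdm)   # zero for the true model shape
```

By the Cauchy-Schwarz inequality the statistic is non-negative and vanishes only when the data are exactly proportional to the model shape, so a wrong Ω_M yields a strictly positive value regardless of H_0.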

  11. The Probabilistic Admissible Region with Additional Constraints

    NASA Astrophysics Data System (ADS)

    Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.

    The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. 
Noting that the concepts presented are general and can be applied to any measurement scenario, the idea will be illustrated using a short-arc, angles-only observation scenario.

  12. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled-data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.

  13. Constraining screened fifth forces with the electron magnetic moment

    NASA Astrophysics Data System (ADS)

    Brax, Philippe; Davis, Anne-Christine; Elder, Benjamin; Wong, Leong Khim

    2018-04-01

    Chameleon and symmetron theories serve as archetypal models for how light scalar fields can couple to matter with gravitational strength or greater, yet evade the stringent constraints from classical tests of gravity on Earth and in the Solar System. They do so by employing screening mechanisms that dynamically alter the scalar's properties based on the local environment. Nevertheless, these do not hide the scalar completely, as screening leads to a distinct phenomenology that can be well constrained by looking for specific signatures. In this work, we investigate how a precision measurement of the electron magnetic moment places meaningful constraints on both chameleons and symmetrons. Two effects are identified: First, virtual chameleons and symmetrons run in loops to generate quantum corrections to the intrinsic value of the magnetic moment—a common process widely considered in the literature for many scenarios beyond the Standard Model. A second effect, however, is unique to scalar fields that exhibit screening. A scalar bubblelike profile forms inside the experimental vacuum chamber and exerts a fifth force on the electron, leading to a systematic shift in the experimental measurement. In quantifying this latter effect, we present a novel approach that combines analytic arguments and a small number of numerical simulations to solve for the bubblelike profile quickly for a large range of model parameters. Taken together, both effects yield interesting constraints in complementary regions of parameter space. While the constraints we obtain for the chameleon are largely uncompetitive with those in the existing literature, this still represents the tightest constraint achievable yet from an experiment not originally designed to search for fifth forces. We break more ground with the symmetron, for which our results exclude a large and previously unexplored region of parameter space. 
Central to this achievement are the quantum correction terms, which are able to constrain symmetrons with masses in the range μ ∈ [10^-3.88, 10^8] eV, whereas other experiments have hitherto been sensitive to only one or two orders of magnitude at a time.

  14. Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach

    NASA Astrophysics Data System (ADS)

    Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto

    2017-12-01

In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple ‘low’ minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space.
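The role of the frequency constraint can be illustrated with a toy sketch (NumPy only; the cosine "data", frequency grid, and tolerance are invented for illustration, not taken from the paper): candidate parameters whose simulated output shares the data's dominant FFT frequency form the iso-frequency set on which a time-domain least-squares search could then be restricted.

```python
import numpy as np

def dominant_freq(signal, dt):
    """Dominant (non-DC) frequency of a real signal via the FFT."""
    spec = np.abs(np.fft.rfft(signal))
    spec[0] = 0.0                       # ignore the DC component
    return np.fft.rfftfreq(len(signal), d=dt)[np.argmax(spec)]

# Synthetic oscillatory "data": y(t) = cos(2*pi*f0*t), f0 = 1 Hz (invented)
dt, n, f0 = 0.01, 1000, 1.0
t = dt * np.arange(n)
data = np.cos(2 * np.pi * f0 * t)
target = dominant_freq(data, dt)

# Frequency-matching penalty over candidate model frequencies: candidates
# whose simulated output shares the data's dominant frequency form the
# iso-frequency set to which the least-squares search is then restricted.
candidates = np.linspace(0.5, 1.5, 101)
penalty = np.array([(dominant_freq(np.cos(2 * np.pi * f * t), dt) - target) ** 2
                    for f in candidates])
iso = candidates[penalty < 1e-6]        # coarse: the FFT bin width is 0.1 Hz
print(round(iso.min(), 2), round(iso.max(), 2))
```

With a 10 s window the frequency resolution is 0.1 Hz, so the recovered iso-frequency set is a band of candidates around the true frequency; a real PIP would refine within that set using the time-domain cost.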

  15. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
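The key reduction described above, treating the network model as a linear regression with positivity restrictions on the parameters, can be sketched with nonnegative least squares. The feature coding, weights, and noise level below are hypothetical, and the standard errors use the unconstrained OLS formula purely as a rough stand-in for the theoretical standard errors the paper derives:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical design: rows = object pairs, columns = features;
# X[i, j] = 1 if feature j discriminates pair i. d = observed proximities.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(30, 5)).astype(float)
true_w = np.array([0.5, 1.0, 0.0, 2.0, 0.3])   # nonnegative feature weights
d = X @ true_w + rng.normal(0.0, 0.01, size=30)

w_hat, _ = nnls(X, d)                          # regression with w >= 0

# Rough "theoretical" standard errors from the unconstrained OLS formula
# (illustration only; constrained estimators need adjusted formulas).
resid = d - X @ w_hat
sigma2 = resid @ resid / (len(d) - X.shape[1])
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
print(np.round(w_hat, 2), np.round(se, 3))
```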

  16. Intelligent Sampling of Hazardous Particle Populations in Resource-Constrained Environments

    NASA Astrophysics Data System (ADS)

    McCollough, J. P.; Quinn, J. M.; Starks, M. J.; Johnston, W. R.

    2017-10-01

    Sampling of anomaly-causing space environment drivers is necessary for both real-time operations and satellite design efforts, and optimizing measurement sampling helps minimize resource demands. Relating these measurements to spacecraft anomalies requires the ability to resolve spatial and temporal variability in the energetic charged particle hazard of interest. Here we describe a method for sampling particle fluxes informed by magnetospheric phenomenology so that, along a given trajectory, the variations from both temporal dynamics and spatial structure are adequately captured while minimizing oversampling. We describe the coordinates, sampling method, and specific regions and parameters employed. We compare resulting sampling cadences with data from spacecraft spanning the regions of interest during a geomagnetically active period, showing that the algorithm retains the gross features necessary to characterize environmental impacts on space systems in diverse orbital regimes while greatly reducing the amount of sampling required. This enables sufficient environmental specification within a resource-constrained context, such as limited telemetry bandwidth, processing requirements, and timeliness.

  17. Gaining insight into the T2*-T2 relationship in surface NMR free-induction decay measurements

    NASA Astrophysics Data System (ADS)

    Grombacher, Denys; Auken, Esben

    2018-05-01

One of the primary shortcomings of the surface nuclear magnetic resonance (NMR) free-induction decay (FID) measurement is the uncertainty surrounding which mechanism controls the signal's time dependence. Ideally, the FID-estimated relaxation time T2* that describes the signal's decay carries an intimate link to the geometry of the pore space. In this limit T2* is closely linked to a related parameter, T2, which is more directly tied to pore geometry. If T2* ≈ T2, the FID can provide valuable insight into relative pore size and can be used to make quantitative permeability estimates. However, given only FID measurements it is difficult to determine whether T2* is linked to pore geometry or whether it has been strongly influenced by background magnetic field inhomogeneity. If the link between an observed T2* and the underlying T2 could be further constrained, the utility of the standard surface NMR FID measurement would be greatly improved. We hypothesize that an updated surface NMR forward model that solves the full Bloch equations with appropriately weighted relaxation terms can be used to help constrain the T2*-T2 relationship. Weighting the relaxation terms requires estimating the poorly constrained parameters T2 and T1; to deal with this uncertainty we propose to conduct a parameter search involving multiple inversions that employ a suite of forward models, each describing a distinct but plausible T2*-T2 relationship. We hypothesize that forward models given poor T2 estimates will produce poor data fits under the complex inversion, while forward models given reliable T2 estimates will produce satisfactory data fits. By examining the data fits produced by the suite of plausible forward models, the likely T2*-T2 relationship can be constrained by identifying the range of T2 estimates that produce reliable data fits. Synthetic and field results are presented to investigate the feasibility of the proposed technique.

  18. Testing general relativity and alternative theories of gravity with space-based atomic clocks and atom interferometers

    NASA Astrophysics Data System (ADS)

    Bondarescu, Ruxandra; Schärer, Andreas; Jetzer, Philippe; Angélil, Raymond; Saha, Prasenjit; Lundgren, Andrew

    2015-05-01

The successful miniaturisation of extremely accurate atomic clocks and atom interferometers invites prospects for satellite missions to perform precision experiments. We discuss the effects predicted by general relativity and alternative theories of gravity that can be detected by a clock orbiting the Earth. Our experiment relies on the precise tracking of the spacecraft using its observed tick-rate. The spacecraft's reconstructed four-dimensional trajectory will reveal the nature of gravitational perturbations in Earth's gravitational field, potentially differentiating between different theories of gravity. This mission can measure multiple relativistic effects during the course of a single experiment, and constrain the Parametrized Post-Newtonian (PPN) parameters around the Earth. A satellite carrying a clock of fractional timing inaccuracy Δf/f ∼ 10^-16 in an elliptic orbit around the Earth would constrain the PPN parameters to |β - 1|, |γ - 1| ≲ 10^-6. We also briefly review potential constraints by atom interferometers on scalar-tensor theories, in particular on chameleon and dilaton models.
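For orientation, the size of the relativistic tick-rate signal such a clock would track can be estimated from the first-order gravitational and velocity time dilation, -GM/(r c^2) - v^2/(2 c^2), with v^2 from the vis-viva equation. This is only a back-of-envelope sketch; the orbit below is invented, not the mission design:

```python
import numpy as np

GM = 3.986004418e14      # Earth's gravitational parameter [m^3/s^2]
c = 299792458.0          # speed of light [m/s]

def fractional_shift(r, a):
    """First-order rate shift of an orbiting clock relative to a distant
    observer: gravitational redshift -GM/(r c^2) plus velocity time dilation
    -v^2/(2 c^2), with v^2 from the vis-viva equation for semi-major axis a."""
    v2 = GM * (2.0 / r - 1.0 / a)
    return -GM / (r * c * c) - v2 / (2.0 * c * c)

# Invented eccentric orbit: a = 20,000 km, e = 0.5
a, e = 2.0e7, 0.5
r_peri, r_apo = a * (1.0 - e), a * (1.0 + e)
modulation = fractional_shift(r_apo, a) - fractional_shift(r_peri, a)
print(f"{modulation:.2e}")
```

The perigee-to-apogee modulation comes out of order 10^-10, roughly six orders of magnitude above the assumed 10^-16 clock inaccuracy, consistent with the quoted ~10^-6 PPN sensitivity.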

  19. OPTIMASS: a package for the minimization of kinematic mass functions with constraints

    NASA Astrophysics Data System (ADS)

    Cho, Won Sang; Gainer, James S.; Kim, Doojin; Lim, Sung Hak; Matchev, Konstantin T.; Moortgat, Filip; Pape, Luc; Park, Myeonghun

    2016-01-01

Reconstructed mass variables, such as M2, M2C, MT*, and MT2W, play an essential role in searches for new physics at hadron colliders. The calculation of these variables generally involves constrained minimization in a large parameter space, which is numerically challenging. We provide a C++ code, Optimass, which interfaces with the Minuit library to perform this constrained minimization using the Augmented Lagrangian Method. The code can be applied to arbitrarily general event topologies, thus allowing the user to significantly extend the existing set of kinematic variables. We describe this code, explain its physics motivation, and demonstrate its use in the analysis of the fully leptonic decay of pair-produced top quarks using M2 variables.
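The Augmented Lagrangian Method that Optimass delegates to Minuit can be sketched in a few lines. Here SciPy's minimizer stands in for Minuit, and the quadratic objective with one linear constraint is a toy stand-in for a kinematic mass function with its on-shell constraints:

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, mu=10.0, iters=10):
    """Toy Augmented Lagrangian Method: minimize f(x) subject to h(x) = 0
    by alternating unconstrained solves with multiplier updates.
    (Optimass couples this scheme to Minuit; SciPy stands in here.)"""
    lam, x = 0.0, np.asarray(x0, dtype=float)
    for _ in range(iters):
        aug = lambda z: f(z) + lam * h(z) + 0.5 * mu * h(z) ** 2
        x = minimize(aug, x).x
        lam += mu * h(x)            # first-order multiplier update
    return x

# Invented stand-in problem: minimize (x-2)^2 + (y-1)^2 subject to x + y = 1,
# whose analytic solution is (1, 0).
f = lambda z: (z[0] - 2.0) ** 2 + (z[1] - 1.0) ** 2
h = lambda z: z[0] + z[1] - 1.0
sol = augmented_lagrangian(f, h, [0.0, 0.0])
print(np.round(sol, 3))
```

Each outer iteration tightens the constraint violation geometrically, so a modest number of unconstrained solves suffices for this smooth problem.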

  20. Maximum entropy modeling of metabolic networks by constraining growth-rate moments predicts coexistence of phenotypes

    NASA Astrophysics Data System (ADS)

    De Martino, Daniele

    2017-12-01

    In this work maximum entropy distributions in the space of steady states of metabolic networks are considered upon constraining the first and second moments of the growth rate. Coexistence of fast and slow phenotypes, with bimodal flux distributions, emerges upon considering control on the average growth (optimization) and its fluctuations (heterogeneity). This is applied to the carbon catabolic core of Escherichia coli where it quantifies the metabolic activity of slow growing phenotypes and it provides a quantitative map with metabolic fluxes, opening the possibility to detect coexistence from flux data. A preliminary analysis on data for E. coli cultures in standard conditions shows degeneracy for the inferred parameters that extend in the coexistence region.
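Constraining the first two moments of the growth rate gives a maximum entropy density of exponential-quadratic form, p(g) ∝ exp(a g + b g^2), on the feasible interval. A short numerical sketch (with invented Lagrange multipliers a and b, not values fitted to E. coli data) shows how a positive quadratic multiplier produces the bimodal fast/slow coexistence:

```python
import numpy as np

# MaxEnt density on a bounded growth-rate interval [0, 1] with the first
# two moments constrained: p(g) ∝ exp(a*g + b*g^2).
g = np.linspace(0.0, 1.0, 2001)
dg = g[1] - g[0]

def maxent_density(a, b):
    w = np.exp(a * g + b * g ** 2)
    return w / (w.sum() * dg)       # normalize on the interval

p = maxent_density(a=-3.0, b=4.0)   # illustrative multipliers, b > 0
mean = (g * p).sum() * dg
# With b > 0 the density is largest at the interval's ends:
# slow and fast phenotypes coexist.
print(round(mean, 3), p[0] > p[1000], p[-1] > p[1000])
```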

  1. The eigenstate thermalization hypothesis in constrained Hilbert spaces: A case study in non-Abelian anyon chains

    NASA Astrophysics Data System (ADS)

    Chandran, A.; Schulz, Marc D.; Burnell, F. J.

    2016-12-01

Many phases of matter, including superconductors, fractional quantum Hall fluids, and spin liquids, are described by gauge theories with constrained Hilbert spaces. However, thermalization and the applicability of quantum statistical mechanics have primarily been studied in unconstrained Hilbert spaces. In this paper, we investigate whether constrained Hilbert spaces permit local thermalization. Specifically, we explore whether the eigenstate thermalization hypothesis (ETH) holds in a pinned Fibonacci anyon chain, which serves as a representative case study. We first establish that the constrained Hilbert space admits a notion of locality by showing that the influence of a measurement decays exponentially in space. This suggests that the constraints are no impediment to thermalization. We then provide numerical evidence that ETH holds for the diagonal and off-diagonal matrix elements of various local observables in a generic disorder-free nonintegrable model. We also find that certain nonlocal observables obey ETH.

  2. PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata

We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of the output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramér-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
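The constrained quantity, the peak-to-average power ratio of the time-domain OFDM symbol, is straightforward to compute directly. The QPSK subcarrier coefficients below are placeholders for the spectral parameters the multi-objective design would actually optimize:

```python
import numpy as np

def papr_db(subcarriers, oversample=4):
    """PAPR of one OFDM symbol: IFFT of the subcarrier coefficients
    (zero-padded for time-domain oversampling), then peak power over
    average power, in dB."""
    n = len(subcarriers)
    padded = np.concatenate([subcarriers,
                             np.zeros((oversample - 1) * n, dtype=complex)])
    p = np.abs(np.fft.ifft(padded)) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# 64 QPSK-modulated subcarriers -- invented coefficients standing in for
# the OFDM spectral parameters the MOO design would adjust.
rng = np.random.default_rng(1)
qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], 64) / np.sqrt(2)
print(round(papr_db(qpsk), 2))
```

Random QPSK loadings typically land in the high-single-digit dB range; a PAPR constraint caps this value while the other three objectives are optimized.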

  3. Hydrologic and hydraulic flood forecasting constrained by remote sensing data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Grimaldi, S.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2017-12-01

Flooding is one of the most destructive natural disasters, resulting in many deaths and billions of dollars of damages each year. An indispensable tool to mitigate the effect of floods is to provide accurate and timely forecasts. An operational flood forecasting system typically consists of a hydrologic model, converting rainfall data into flood volumes entering the river system, and a hydraulic model, converting these flood volumes into water levels and flood extents. Such a system is prone to various sources of uncertainties from the initial conditions, meteorological forcing, topographic data, model parameters and model structure. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using ground-based streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed remote sensing (RS) data offers new opportunities to improve flood forecasting skill. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture to constrain a hydrologic model, and 2) RS flood extent and level to constrain a hydraulic model. The GRKAL hydrological model is calibrated through a joint calibration scheme using both ground-based streamflow and RS soil moisture observations. A lag-aware data assimilation approach is tested through a set of synthetic experiments to integrate RS soil moisture to constrain the streamflow forecasting in real-time. The hydraulic model is LISFLOOD-FP, which solves the 2-dimensional inertial approximation of the Shallow Water Equations. Gauged water level time series and RS-derived flood extent and levels are used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space will be discussed.

  4. Study of constrained minimal supersymmetry

    NASA Astrophysics Data System (ADS)

    Kane, G. L.; Kolda, Chris; Roszkowski, Leszek; Wells, James D.

    1994-06-01

Taking seriously the phenomenological indications for supersymmetry we have made a detailed study of unified minimal SUSY, including many effects at the few percent level in a consistent fashion. We report here a general analysis of what can be studied without choosing a particular gauge group at the unification scale. Firstly, we find that the encouraging SUSY unification results of recent years do survive the challenge of a more complete and accurate analysis. Taking into account effects at the 5-10% level leads to several improvements of previous results and allows us to sharpen our predictions for SUSY in the light of unification. We perform a thorough study of the parameter space and look for patterns to indicate SUSY predictions, so that they do not depend on arbitrary choices of some parameters or untested assumptions. Our results can be viewed as a fully constrained minimal SUSY standard model. The resulting model forms a well-defined basis for comparing the physics potential of different facilities. Very little of the acceptable parameter space has been excluded by CERN LEP or Fermilab so far, but a significant fraction can be covered when these accelerators are upgraded. A number of initial applications to the understanding of the values of m_h and m_t, the SUSY spectrum, detectability of SUSY at LEP II or Fermilab, B(b → sγ), Γ(Z → bb̄), dark matter, etc., are included in a separate section that might be of more interest to some readers than the technical aspects of model building. We formulate an approach to extracting SUSY parameters from data when superpartners are detected. For small tan β or large m_t, both m_1/2 and m_0 are entirely bounded from above at ~1 TeV without having to use a fine-tuning constraint.

  5. SPACE-BASED MICROLENS PARALLAX OBSERVATION AS A WAY TO RESOLVE THE SEVERE DEGENERACY BETWEEN MICROLENS-PARALLAX AND LENS-ORBITAL EFFECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, C.; Udalski, A.; Szymański, M. K.

In this paper, we demonstrate the severity of the degeneracy between the microlens-parallax and lens-orbital effects by presenting the analysis of the gravitational binary-lens event OGLE-2015-BLG-0768. Despite the obvious deviation from the model based on the linear observer motion and the static binary, it is found that the residual can be almost equally well explained by either the parallactic motion of the Earth or the rotation of the binary-lens axis, resulting in the severe degeneracy between the two effects. We show that the degeneracy can be readily resolved with the additional data provided by space-based microlens parallax observations. By enabling us to distinguish between the two higher-order effects, space-based microlens parallax observations will not only make it possible to accurately determine the physical lens parameters but also to further constrain the orbital parameters of binary lenses.

  6. Application and optimization of input parameter spaces in mass flow modelling: a case study with r.randomwalk and r.ranger

    NASA Astrophysics Data System (ADS)

    Krenn, Julia; Zangerl, Christian; Mergili, Martin

    2017-04-01

    r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces represent a possibility to withdraw from discrete input values which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III) which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims at developing such a technique and at applying it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automatize the work flow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). 
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. Thereby we repeat the optimization procedure with conservative and non-conservative assumptions of a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) in applying the same strategy to the more complex, dynamic model r.avaflow.
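The AUROC performance indicator used above to score each subrange's impact-indicator pattern can be computed from the rank statistic alone. The "observed impact" mask and III scores below are synthetic stand-ins, invented purely to show the scoring step:

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic:
    the fraction of (positive, negative) pairs ordered correctly."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic stand-in for scoring one parameter subrange: an impact
# indicator index (III) per "pixel" against an observed-impact mask.
rng = np.random.default_rng(2)
observed = rng.random(500) < 0.3
iii_informative = np.clip(observed + rng.normal(0.0, 0.3, 500), 0.0, 1.0)
iii_random = rng.random(500)
print(round(auroc(iii_informative, observed), 2),
      round(auroc(iii_random, observed), 2))
```

An informative III pattern scores near 1, an uninformative one near 0.5; the nested search keeps the subranges whose AUROC (together with the conservativeness factor) is acceptable.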

  7. Constraints on modified gravity from Planck 2015: when the health of your theory makes the difference

    NASA Astrophysics Data System (ADS)

    Salvatelli, Valentina; Piazza, Federico; Marinoni, Christian

    2016-09-01

    We use the effective field theory of dark energy (EFT of DE) formalism to constrain dark energy models belonging to the Horndeski class with the recent Planck 2015 CMB data. The space of theories is spanned by a certain number of parameters determining the linear cosmological perturbations, while the expansion history is set to that of a standard ΛCDM model. We always demand that the theories be free of fatal instabilities. Additionally, we consider two optional conditions, namely that scalar and tensor perturbations propagate with subliminal speed. Such criteria severely restrict the allowed parameter space and are thus very effective in shaping the posteriors. As a result, we confirm that no theory performs better than ΛCDM when CMB data alone are analysed. Indeed, the healthy dark energy models considered here are not able to reproduce those phenomenological behaviours of the effective Newton constant and gravitational slip parameters that, according to previous studies, best fit the data.

  8. Constraints on modified gravity from Planck 2015: when the health of your theory makes the difference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvatelli, Valentina; Piazza, Federico; Marinoni, Christian, E-mail: Valentina.Salvatelli@cpt.univ-mrs.fr, E-mail: Federico.Piazza@cpt.univ-mrs.fr, E-mail: Christian.Marinoni@cpt.univ-mrs.fr

We use the effective field theory of dark energy (EFT of DE) formalism to constrain dark energy models belonging to the Horndeski class with the recent Planck 2015 CMB data. The space of theories is spanned by a certain number of parameters determining the linear cosmological perturbations, while the expansion history is set to that of a standard ΛCDM model. We always demand that the theories be free of fatal instabilities. Additionally, we consider two optional conditions, namely that scalar and tensor perturbations propagate with subliminal speed. Such criteria severely restrict the allowed parameter space and are thus very effective in shaping the posteriors. As a result, we confirm that no theory performs better than ΛCDM when CMB data alone are analysed. Indeed, the healthy dark energy models considered here are not able to reproduce those phenomenological behaviours of the effective Newton constant and gravitational slip parameters that, according to previous studies, best fit the data.

  9. Z boson mediated dark matter beyond the effective theory

    DOE PAGES

    Kearney, John; Orlofsky, Nicholas; Pierce, Aaron

    2017-02-17

Here, direct detection bounds are beginning to constrain a very simple model of weakly interacting dark matter: a Majorana fermion with a coupling to the Z boson. In a particularly straightforward gauge-invariant realization, this coupling is introduced via a higher-dimensional operator. While attractive in its simplicity, this model generically induces a large ρ parameter. An ultraviolet completion that avoids an overly large contribution to ρ is the singlet-doublet model. We revisit this model, focusing on the Higgs blind spot region of parameter space where spin-independent interactions are absent. This model successfully reproduces dark matter with direct detection mediated by the Z boson but whose cosmology may depend on additional couplings and states. Future direct detection experiments should effectively probe a significant portion of this parameter space, aside from a small coannihilating region. As such, Z-mediated thermal dark matter as realized in the singlet-doublet model represents an interesting target for future searches.

  10. Nonlinear programming extensions to rational function approximation methods for unsteady aerodynamic forces

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1988-01-01

The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
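The determination of the free linear parameters, least squares subject to linear equality constraints via a Lagrange multiplier formulation, amounts to solving a KKT system. A generic sketch with an invented line-fit example (not the paper's aerodynamic data):

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d by solving the KKT system
    [[2 A^T A, C^T], [C, 0]] [x; lam] = [2 A^T b; d]."""
    n, m = A.shape[1], C.shape[0]
    K = np.block([[2.0 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]

# Invented example: fit y = p0 + p1*t, constrained so the fit passes
# exactly through (t, y) = (0, 1), i.e. the equality constraint p0 = 1.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * t + rng.normal(0.0, 0.05, 50)
A = np.column_stack([np.ones_like(t), t])
p = constrained_lstsq(A, y, C=np.array([[1.0, 0.0]]), d=np.array([1.0]))
print(np.round(p, 2))
```

The constraint is satisfied exactly (up to solver precision) while the remaining parameters are fit in the least-squares sense, which is the role the equality constraints play in the rational-function approximations above.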

  11. The Supernovae Analysis Application (SNAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas

The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  12. The Supernovae Analysis Application (SNAP)

    DOE PAGES

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas; ...

    2017-09-06

The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  13. The Supernovae Analysis Application (SNAP)

    NASA Astrophysics Data System (ADS)

    Bayless, Amanda J.; Fryer, Chris L.; Wollaeger, Ryan; Wiggins, Brandon; Even, Wesley; de la Rosa, Janie; Roming, Peter W. A.; Frey, Lucy; Young, Patrick A.; Thorpe, Rob; Powell, Luke; Landers, Rachel; Persson, Heather D.; Hay, Rebecca

    2017-09-01

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  14. USING ForeCAT DEFLECTIONS AND ROTATIONS TO CONSTRAIN THE EARLY EVOLUTION OF CMEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kay, C.; Opher, M.; Colaninno, R. C.

    2016-08-10

To accurately predict the space weather effects of coronal mass ejection (CME) impacts at Earth, one must know if and when a CME will impact Earth and the CME parameters upon impact. In 2015 Kay et al. presented Forecasting a CME's Altered Trajectory (ForeCAT), a model for CME deflections based on the magnetic forces from the background solar magnetic field. Knowing the deflection and rotation of a CME enables prediction of Earth impacts and of the orientation of the CME upon impact. We first reconstruct the positions of the 2010 April 8 and the 2012 July 12 CMEs from the observations. The first of these CMEs exhibits significant deflection and rotation (34° deflection and 58° rotation), while the second shows almost no deflection or rotation (<3° each). Using ForeCAT, we explore a range of initial parameters, such as the CME's location and size, and find parameters that can successfully reproduce the behavior of each CME. Additionally, since the deflection depends strongly on the behavior of a CME in the low corona, we are able to constrain the expansion and propagation of these CMEs in the low corona.

  15. Reflected stochastic differential equation models for constrained animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
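A minimal Euler-Maruyama sketch of the central object, a one-dimensional stochastic process reflected back into its domain at hard barriers. The barriers, drift, and diffusion coefficients here are invented; the paper's inference machinery with latent unconstrained paths is not reproduced:

```python
import numpy as np

def reflected_path(x0, drift, sigma, dt, n, lo, hi, seed=4):
    """Euler-Maruyama for a 1-D SDE with constant drift and diffusion,
    reflected back into [lo, hi] -- a simple stand-in for movement
    constrained by barriers such as shorelines."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        step = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
        # reflect across whichever barrier was crossed
        if step < lo:
            step = 2.0 * lo - step
        elif step > hi:
            step = 2.0 * hi - step
        x[i + 1] = step
    return x

path = reflected_path(0.5, drift=0.2, sigma=1.0, dt=0.01, n=5000,
                      lo=0.0, hi=1.0)
print(round(path.min(), 3), round(path.max(), 3))
```

With this step size a single reflection keeps the path inside the domain; the process wanders over [0, 1] but never crosses the barriers, mimicking spatially constrained telemetry tracks.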

  16. A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.

    PubMed

    Kim, Joo H; Roberts, Dustyn

    2015-09-01

    Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.

    PubMed

    Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong

    2016-06-01

    Training feedforward neural networks (FNNs) is one of the most critical issues in FNN studies. However, most FNN training methods cannot be directly applied to very large datasets because of their high computational and space complexity. To tackle this problem, the CCMEB (Center-Constrained Minimum Enclosing Ball) problem in the hidden feature space of an FNN is discussed and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using an L2-norm penalty-based ε-insensitive function is formulated, and the parameters in the hidden nodes are generated randomly, independent of the training sets. Moreover, learning the parameters in its output layer is proved equivalent to a special CCMEB problem in the FNN hidden feature space. Like most CCMEB-approximation-based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: the maximal training time is linear in the size of the training dataset, and the maximal space consumption is independent of that size. Experiments on regression tasks confirm these conclusions. Copyright © 2016 Elsevier Ltd. All rights reserved.
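    The minimum-enclosing-ball computation underlying such methods can be sketched with the classic Badoiu-Clarkson core-set iteration, a standard (1 + ε)-approximation (this toy sketch is not the HFSR-GCVM implementation):

```python
import numpy as np

def approx_meb(points, n_iter=200):
    """Badoiu-Clarkson: step a shrinking amount toward the farthest point."""
    c = points[0].astype(float).copy()
    for i in range(1, n_iter + 1):
        # Find the point farthest from the current center estimate
        far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
        c += (far - c) / (i + 1)          # step size 1/(i+1)
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius

rng = np.random.default_rng(3)
pts = rng.standard_normal((500, 2))
center, radius = approx_meb(pts)
print(radius > 0)  # the ball covers all points by construction of radius
```

The appeal for large datasets is that the iteration touches only one farthest point per step, so the work per iteration scales linearly with the dataset size.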

  18. Illustration of microphysical processes in Amazonian deep convective clouds in the gamma phase space: introduction and potential applications

    NASA Astrophysics Data System (ADS)

    Cecchini, Micael A.; Machado, Luiz A. T.; Wendisch, Manfred; Costa, Anja; Krämer, Martina; Andreae, Meinrat O.; Afchine, Armin; Albrecht, Rachel I.; Artaxo, Paulo; Borrmann, Stephan; Fütterer, Daniel; Klimach, Thomas; Mahnke, Christoph; Martin, Scot T.; Minikin, Andreas; Molleker, Sergej; Pardo, Lianet H.; Pöhlker, Christopher; Pöhlker, Mira L.; Pöschl, Ulrich; Rosenfeld, Daniel; Weinzierl, Bernadett

    2017-12-01

    The behavior of tropical clouds remains a major open scientific question, resulting in poor representation by models. One challenge is to realistically reproduce cloud droplet size distributions (DSDs) and their evolution over time and space. Many applications, not limited to models, use the gamma function to represent DSDs. However, even though the statistical characteristics of the gamma parameters have been widely studied, there is almost no study dedicated to understanding the phase space of this function and the associated physics. This phase space can be defined by the three parameters that define the DSD intercept, shape, and curvature. Gamma phase space may provide a common framework for parameterizations and intercomparisons. Here, we introduce the phase space approach and its characteristics, focusing on warm-phase microphysical cloud properties and the transition to the mixed-phase layer. We show that trajectories in this phase space can represent DSD evolution and can be related to growth processes. Condensational and collisional growth may be interpreted as pseudo-forces that induce displacements in opposite directions within the phase space. The actually observed movements in the phase space are a result of the combination of such pseudo-forces. Additionally, aerosol effects can be evaluated given their significant impact on DSDs. The DSDs associated with liquid droplets that favor cloud glaciation can be delimited in the phase space, which can help models to adequately predict the transition to the mixed phase. We also consider possible ways to constrain the DSD in two-moment bulk microphysics schemes, in which the relative dispersion parameter of the DSD can play a significant role. Overall, the gamma phase space approach can be an invaluable tool for studying cloud microphysical evolution and can be readily applied in many scenarios that rely on gamma DSDs.
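    As a concrete anchor for the gamma representation, the sketch below (toy values, not the campaign's analysis code) defines a gamma DSD N(D) = N0 D^μ exp(-ΛD) and checks its analytic k-th moment, M_k = N0 Γ(μ+k+1)/Λ^(μ+k+1), against numerical integration; the three parameters (N0, μ, Λ) are the coordinates of the phase space discussed above:

```python
import numpy as np
from math import gamma

def gamma_dsd_moment(N0, mu, lam, k):
    """Analytic k-th moment of N(D) = N0 * D**mu * exp(-lam*D)."""
    return N0 * gamma(mu + k + 1) / lam ** (mu + k + 1)

N0, mu, lam = 1e3, 2.0, 50.0
D = np.linspace(1e-6, 1.0, 200_000)
dD = D[1] - D[0]
# 3rd moment (proportional to liquid water content), by Riemann sum
numeric = np.sum(N0 * D**mu * np.exp(-lam * D) * D**3) * dD
analytic = gamma_dsd_moment(N0, mu, lam, 3)
print(abs(numeric - analytic) / analytic)  # small relative error
```

A DSD trajectory in the phase space is then just a time series of (N0, μ, Λ) triples, and growth processes move the triple in characteristic directions.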

  19. Learning Maximal Entropy Models from finite size datasets: a fast Data-Driven algorithm allows to sample from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    A maximal entropy model provides the least constrained probability distribution that reproduces the experimental averages of a set of observables. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show that steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a ``rectified'' Data-Driven algorithm that is fast and, by sampling from the parameter posterior, avoids both under- and over-fitting along all directions of the parameter space. Through the learning of pairwise Ising models from recordings of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).
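    The rectification idea can be made concrete in a one-parameter toy model (an illustrative reduction, not the paper's algorithm): for p(s) ∝ exp(h s) with s = ±1, dividing the log-likelihood gradient by the Fisher information (here the model variance) removes the inhomogeneous curvature that slows plain gradient ascent:

```python
import numpy as np

def fit_max_entropy(target_mean, lr=1.0, n_iter=100):
    """Fit h in p(s) ~ exp(h*s), s in {-1,+1}, matching a target mean.
    The gradient (data mean - model mean) is divided by the Fisher
    information, rectifying the curvature of parameter space."""
    h = 0.0
    for _ in range(n_iter):
        model_mean = np.tanh(h)
        fisher = 1.0 - model_mean**2          # Var(s) under the model
        h += lr * (target_mean - model_mean) / fisher
    return h

h = fit_max_entropy(0.5)
print(np.tanh(h))  # model mean matches the target mean 0.5
```

With the Fisher rescaling this is Newton's method and converges in a handful of iterations; plain steepest descent stalls where the variance is small.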

  20. Conjoined constraints on modified gravity from the expansion history and cosmic growth

    NASA Astrophysics Data System (ADS)

    Basilakos, Spyros; Nesseris, Savvas

    2017-09-01

    In this paper we present conjoined constraints on several cosmological models from the expansion history H (z ) and cosmic growth f σ8. The models we study include the CPL w0wa parametrization, the holographic dark energy (HDE) model, the time-varying vacuum (ΛtCDM ) model, the Dvali, Gabadadze and Porrati (DGP) and Finsler-Randers (FRDE) models, a power-law f (T ) model, and finally the Hu-Sawicki f (R ) model. In all cases we perform a simultaneous fit to the SnIa, CMB, BAO, H (z ) and growth data, while also following the conjoined visualization of H (z ) and f σ8 as in Linder (2017). Furthermore, we introduce the figure of merit (FoM) in the H (z )-f σ8 parameter space as a way to constrain models that jointly fit both probes well. We use both the latest H (z ) and f σ8 data, but also LSST-like mocks with 1% measurements, and we find that the conjoined method of constraining the expansion history and cosmic growth simultaneously is able not only to place stringent constraints on these parameters, but also to provide an easy visual way to discriminate cosmological models. Finally, we confirm the existence of a tension between the growth-rate and Planck CMB data, and we find that the FoM in the conjoined parameter space of H (z )-f σ8(z ) can be used to discriminate between the Λ CDM model and certain classes of modified gravity models, namely the DGP and f (T ).

  1. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete-time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  2. Search for scalar dark matter via pseudoscalar portal interactions in light of the Galactic Center gamma-ray excess

    NASA Astrophysics Data System (ADS)

    Yang, Kwei-Chou

    2018-01-01

    In light of the observed Galactic center gamma-ray excess, we investigate a simplified model in which scalar dark matter interacts with quarks through a pseudoscalar mediator. We identify the viable regions of the parameter space that can also account for the relic density and evade the current searches, provided the low-velocity dark matter annihilates through an s-channel off-shell mediator mostly into b ¯b, and/or annihilates directly into two hidden on-shell mediators, which subsequently decay into quark pairs. Both kinds of annihilation are s wave. The projected monojet limit set by the high-luminosity LHC sensitivity could constrain the favored parameter space, where the mediator's mass is larger than the dark matter mass by a factor of 2. We show that the projected sensitivity of 15-year Fermi-LAT observations of dwarf spheroidal galaxies can provide a stringent constraint on most of the parameter space allowed in this model. If the on-shell mediator channel contributes over 50% of the dark matter annihilation cross section, this model with a lighter mediator can be probed in the projected PICO-500L experiment.

  3. OPTIMASS: A package for the minimization of kinematic mass functions with constraints

    DOE PAGES

    Cho, Won Sang; Gainer, James S.; Kim, Doojin; ...

    2016-01-07

    Reconstructed mass variables, such as M 2, M 2C, M* T, and M T2 W, play an essential role in searches for new physics at hadron colliders. The calculation of these variables generally involves constrained minimization in a large parameter space, which is numerically challenging. We provide a C++ code, Optimass, which interfaces with the Minuit library to perform this constrained minimization using the Augmented Lagrangian Method. The code can be applied to arbitrarily general event topologies, thus allowing the user to significantly extend the existing set of kinematic variables. Here, we describe this code, explain its physics motivation, and demonstrate its use in the analysis of the fully leptonic decay of pair-produced top quarks using M 2 variables.
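    The Augmented Lagrangian Method itself can be sketched on a toy quadratic problem (schematic only; Optimass delegates the inner minimization to Minuit): minimize f(x) = x0² + x1² subject to g(x) = x0 + x1 - 1 = 0 by alternating an inner unconstrained minimization with a multiplier update:

```python
import numpy as np

def augmented_lagrangian(rho=10.0, n_outer=20):
    """Minimize f + lam*g + (rho/2)*g**2 in an inner loop, then update lam."""
    lam = 0.0
    x = np.zeros(2)
    for _ in range(n_outer):
        # Inner minimization by gradient descent on the augmented Lagrangian
        for _ in range(500):
            g = x.sum() - 1.0
            grad = 2 * x + (lam + rho * g) * np.ones(2)
            x -= 0.01 * grad
        lam += rho * (x.sum() - 1.0)   # multiplier update
    return x

x = augmented_lagrangian()
print(x)  # converges to the constrained minimum [0.5, 0.5]
```

The penalty term keeps each inner problem well conditioned, while the multiplier update drives the constraint violation to zero without requiring rho to grow unboundedly.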

  4. Freeze-in through portals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blennow, Mattias; Fernandez-Martínez, Enrique; Zaldívar, Bryan, E-mail: emb@kth.se, E-mail: enrique.fernandez-martinez@uam.es, E-mail: b.zaldivar.m@csic.es

    2014-01-01

    The popular freeze-out paradigm for Dark Matter (DM) production relies on DM-baryon couplings of the order of the weak interactions. However, different search strategies for DM have failed to provide conclusive evidence of such (non-gravitational) interactions, while greatly reducing the parameter space of many representative models. This motivates the study of alternative mechanisms for DM genesis. In the freeze-in framework, the DM is slowly populated from the thermal bath while never reaching equilibrium. In this work, we analyse in detail the possibility of producing frozen-in DM via a mediator particle which acts as a portal. We give analytical estimates of different freeze-in regimes and support them with full numerical analyses, taking into account the proper distribution functions of bath particles. Finally, we constrain the parameter space of generic models by requiring agreement with DM relic abundance observations.

  5. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    NASA Astrophysics Data System (ADS)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models when applied to constrain spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions that fall within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied first to find, and then to populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
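    The acceptance test at the core of the methodology can be sketched as follows (the toy model and intervals are illustrative assumptions, not the PIHM setup): a sampled parameter set is behavioural only if every simulated signature falls inside its acceptance interval:

```python
import numpy as np

def is_behavioural(signatures, intervals):
    """True only if every signature lies inside its acceptance interval."""
    return all(lo <= s <= hi for s, (lo, hi) in zip(signatures, intervals))

def toy_signatures(params):
    """Stand-in for model output signatures derived from two parameters."""
    a, b = params
    return [a + b, a * b]

# Intervals might come from perceptual knowledge or regionalised signatures
intervals = [(0.9, 1.1), (0.2, 0.3)]
rng = np.random.default_rng(1)
samples = rng.uniform(0, 1, size=(5000, 2))
behavioural = [p for p in samples if is_behavioural(toy_signatures(p), intervals)]
print(len(behavioural))  # the retained, behavioural parameter sets
```

In the full methodology the brute-force sampling above is replaced by the Borg MOEA, which treats distance-to-interval as objectives and searches the space far more efficiently.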

  6. Improved Parameter-Estimation With MRI-Constrained PET Kinetic Modeling: A Simulation Study

    NASA Astrophysics Data System (ADS)

    Erlandsson, Kjell; Liljeroth, Maria; Atkinson, David; Arridge, Simon; Ourselin, Sebastien; Hutton, Brian F.

    2016-10-01

    Kinetic analysis can be applied both to dynamic PET and dynamic contrast enhanced (DCE) MRI data. We have investigated the potential of MRI-constrained PET kinetic modeling using simulated [18F]2-FDG data for skeletal muscle. The volume of distribution, Ve, for the extra-vascular extra-cellular space (EES) is the link between the two models: It can be estimated by DCE-MRI, and then used to reduce the number of parameters to estimate in the PET model. We used a 3 tissue-compartment model with 5 rate constants (3TC5k), in order to distinguish between EES and the intra-cellular space (ICS). Time-activity curves were generated by simulation using the 3TC5k model for 3 different Ve values under basal and insulin stimulated conditions. Noise was added and the data were fitted with the 2TC3k model and with the 3TC5k model with and without Ve constraint. One hundred noise-realisations were generated at 4 different noise-levels. The results showed reductions in bias and variance with Ve constraint in the 3TC5k model. We calculated the parameter k3", representing the combined effect of glucose transport across the cellular membrane and phosphorylation, as an extra outcome measure. For k3", the average coefficient of variation was reduced from 52% to 9.7%, while for k3 in the standard 2TC3k model it was 3.4%. The accuracy of the parameters estimated with our new modeling approach depends on the accuracy of the assumed Ve value. In conclusion, we have shown that, by utilising information that could be obtained from DCE-MRI in the kinetic analysis of [18F]2-FDG-PET data, it is in principle possible to obtain better parameter estimates with a more complex model, which may provide additional information as compared to the standard model.
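    The standard two-tissue compartment model (the "2TC3k" baseline mentioned above) can be sketched as a forward-Euler simulation of the compartment ODEs; the rate constants and input function below are illustrative, not values from the study. Constraining a parameter (e.g. from an MRI-derived Ve) simply means fixing it instead of fitting it:

```python
import numpy as np

def tac_2tc3k(K1, k2, k3, t, Cp):
    """Tissue time-activity curve for the two-tissue compartment model:
    dC1/dt = K1*Cp - (k2+k3)*C1,  dC2/dt = k3*C1  (forward Euler)."""
    C1 = np.zeros_like(t)   # free/exchangeable compartment
    C2 = np.zeros_like(t)   # trapped (phosphorylated) compartment
    dt = t[1] - t[0]
    for i in range(1, len(t)):
        C1[i] = C1[i-1] + dt * (K1 * Cp[i-1] - (k2 + k3) * C1[i-1])
        C2[i] = C2[i-1] + dt * (k3 * C1[i-1])
    return C1 + C2

t = np.linspace(0.0, 60.0, 6000)   # minutes
Cp = np.exp(-0.1 * t)              # toy plasma input function
tac = tac_2tc3k(K1=0.1, k2=0.2, k3=0.05, t=t, Cp=Cp)
print(tac[-1])  # activity remains non-negative throughout
```

Fitting then amounts to adjusting the free rate constants until the simulated curve matches the measured one; each fixed parameter removes one direction from the search space.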

  7. Digital robust control law synthesis using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivekananda

    1989-01-01

    Development of digital robust control laws for active control of high performance flexible aircraft and large space structures is a research area of significant practical importance. The flexible system is typically modeled by a large order state space system of equations in order to accurately represent the dynamics. The active control law must satisfy multiple conflicting design requirements and maintain certain stability margins, yet should be simple enough to be implementable on an onboard digital computer. Described here is an application of a generic digital control law synthesis procedure for such a system, using optimal control theory and constrained optimization techniques. A linear quadratic Gaussian type cost function is minimized by updating the free parameters of the digital control law, while trying to satisfy a set of constraints on the design loads, responses and stability margins. Analytical expressions for the gradients of the cost function and the constraints with respect to the control law design variables are used to facilitate rapid numerical convergence. These gradients can be used for sensitivity studies and may be integrated into a simultaneous structure and control optimization scheme.

  8. Science with the space-based interferometer eLISA. III: probing the expansion of the universe using gravitational wave standard sirens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamanini, Nicola; Caprini, Chiara; Barausse, Enrico

    We investigate the capability of various configurations of the space interferometer eLISA to probe the late-time background expansion of the universe using gravitational wave standard sirens. We simulate catalogues of standard sirens composed of massive black hole binaries whose gravitational radiation is detectable by eLISA, and which are likely to produce an electromagnetic counterpart observable by future surveys. The main issue for the identification of a counterpart resides in the capability of obtaining an accurate enough sky localisation with eLISA. This seriously challenges the capability of four-link (2 arm) configurations to successfully constrain the cosmological parameters. Conversely, six-link (3 arm) configurations have the potential to provide a test of the expansion of the universe up to z ∼ 8 which is complementary to other cosmological probes based on electromagnetic observations only. In particular, in the most favourable scenarios, they can provide a significant constraint on H{sub 0} at the level of 0.5%. Furthermore, (Ω{sub M}, Ω{sub Λ}) can be constrained to a level competitive with present SNIa results. On the other hand, the lack of massive black hole binary standard sirens at low redshift allows dark energy to be constrained only at the level of a few percent.

  9. Tensor non-Gaussianity from axion-gauge-fields dynamics: parameter search

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Fujita, Tomohiro; Komatsu, Eiichiro

    2018-06-01

    We calculate the bispectrum of scale-invariant tensor modes sourced by spectator SU(2) gauge fields during inflation in a model containing a scalar inflaton, a pseudoscalar axion and SU(2) gauge fields. A large bispectrum is generated in this model at tree-level as the gauge fields contain a tensor degree of freedom, and its production is dominated by self-coupling of the gauge fields. This is a unique feature of non-Abelian gauge theory. The shape of the tensor bispectrum is approximately an equilateral shape for 3 ≲ m_Q ≲ 4, where m_Q is an effective dimensionless mass of the SU(2) field normalised by the Hubble expansion rate during inflation. The amplitude of non-Gaussianity of the tensor modes, characterised by the ratio B_h/P_h^2, is inversely proportional to the energy density fraction of the gauge field. This ratio can be much greater than unity, whereas the ratio from the vacuum fluctuation of the metric is of order unity. The bispectrum is effective at constraining large m_Q regions of the parameter space, whereas the power spectrum constrains small m_Q regions.

  10. Phenomenological Consequences of the Constrained Exceptional Supersymmetric Standard Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athron, Peter; King, S. F.; Miller, D. J.

    2010-02-10

    The Exceptional Supersymmetric Standard Model (E{sub 6}SSM) provides a low energy alternative to the MSSM, with an extra gauged U(1){sub N} symmetry, solving the mu-problem of the MSSM. Inspired by the possible embedding into an E{sub 6} GUT, the matter content fills three generations of E{sub 6} multiplets, thus predicting exciting exotic matter such as diquarks or leptoquarks. We present predictions from a constrained version of the model (cE{sub 6}SSM), with a universal scalar mass m{sub 0}, trilinear mass A and gaugino mass M{sub 1/2}. We reveal a large volume of the cE{sub 6}SSM parameter space where the correct breakdown of the gauge symmetry is achieved and all experimental constraints satisfied. We predict a hierarchical particle spectrum with heavy scalars and light gauginos, while the new exotic matter can be light or heavy depending on parameters. We present representative cE{sub 6}SSM scenarios, demonstrating that there could be light exotic particles, like leptoquarks and a U(1){sub N} Z' boson, with spectacular signals at the LHC.

  11. Effective theory of flavor for Minimal Mirror Twin Higgs

    NASA Astrophysics Data System (ADS)

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2017-10-01

    We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ɛ^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ɛ'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ɛ'/ɛ, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n i , including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ɛ'/ɛ, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. In each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.

  12. Changes in brain cell shape create residual extracellular space volume and explain tortuosity behavior during osmotic challenge.

    PubMed

    Chen, K C; Nicholson, C

    2000-07-18

    Diffusion of molecules in brain extracellular space is constrained by two macroscopic parameters, the tortuosity factor lambda and the volume fraction alpha. Recent studies in brain slices show that when osmolarity is reduced, lambda increases while alpha decreases. In contrast, with increased osmolarity, alpha increases, but lambda attains a plateau. Using homogenization theory and a variety of lattice models, we found that the plateau behavior of lambda can be explained if the shape of brain cells changes nonuniformly during the shrinking or swelling induced by osmotic challenge. The nonuniform cellular shrinkage creates residual extracellular space that temporarily traps diffusing molecules, thus impeding the macroscopic diffusion. The paper also discusses the definition of tortuosity and its independence of the measurement frame of reference.
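    The trapping mechanism can be illustrated with a toy lattice random walk (an illustrative sketch, not the paper's homogenization calculation): obstacles reject proposed moves, lowering the effective diffusion, and λ² can be estimated as the ratio of free to obstructed mean squared displacement:

```python
import numpy as np

def msd_after(n_steps, blocked, rng, n_walkers=5000):
    """Mean squared displacement of lattice walkers; moves into blocked
    sites are rejected, mimicking impermeable cell boundaries."""
    pos = np.zeros((n_walkers, 2), dtype=int)
    moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    for _ in range(n_steps):
        new = pos + moves[rng.integers(0, 4, n_walkers)]
        ok = ~np.array([blocked(x, y) for x, y in new])
        pos[ok] = new[ok]
    return np.mean(np.sum(pos ** 2, axis=1))

rng = np.random.default_rng(2)
free = msd_after(200, lambda x, y: False, rng)
# Block a quarter of the lattice sites (both coordinates odd)
obstructed = msd_after(200, lambda x, y: x % 2 == 1 and y % 2 == 1, rng)
tortuosity = np.sqrt(free / obstructed)
print(tortuosity)  # exceeds 1: obstacles impede macroscopic diffusion
```

Making the obstacle geometry change nonuniformly with "osmolarity" is the lattice-model analogue of the residual-space trapping invoked above.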

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barack, Leor; Cutler, Curt

    Inspirals of stellar-mass compact objects (COs) into {approx}10{sup 6}M{sub {center_dot}} black holes are especially interesting sources of gravitational waves for the planned Laser Interferometer Space Antenna (LISA). The orbits of these extreme-mass-ratio inspirals (EMRIs) are highly relativistic, displaying extreme versions of both perihelion precession and Lense-Thirring precession of the orbital plane. We investigate the question of whether the emitted waveforms can be used to strongly constrain the geometry of the central massive object, and in essence check that it corresponds to a Kerr black hole (BH). For a Kerr BH, all multipole moments of the spacetime have a simple, unique relation to M and S, the BH mass and spin; in particular, the spacetime's mass quadrupole moment Q is given by Q=-S{sup 2}/M. Here we treat Q as an additional parameter, independent of S and M, and ask how well observation can constrain its difference from the Kerr value. This was already estimated by Ryan, but for the simplified case of circular, equatorial orbits, and Ryan also neglected the signal modulations arising from the motion of the LISA satellites. We consider generic orbits and include the modulations due to the satellite motions. For this analysis, we use a family of approximate (basically post-Newtonian) waveforms, which represent the full parameter space of EMRI sources, and which exhibit the main qualitative features of true, general relativistic waveforms. We extend this parameter space to include (in an approximate manner) an arbitrary value of Q, and then construct the Fisher information matrix for the extended parameter space. By inverting the Fisher matrix, we estimate how accurately Q could be extracted from LISA observations of EMRIs.
For 1 yr of coherent data from the inspiral of a 10M{sub {center_dot}} black hole into rotating black holes of masses 10{sup 5.5}M{sub {center_dot}}, 10{sup 6}M{sub {center_dot}}, or 10{sup 6.5}M{sub {center_dot}}, we find {delta}(Q/M{sup 3}){approx}10{sup -4}, 10{sup -3}, or 10{sup -2}, respectively (assuming total signal-to-noise ratio of 100, typical of the brightest detectable EMRIs). These results depend only weakly on the eccentricity of the inspiral orbit or the spin of the central object.
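    The Fisher-matrix error estimate used above can be sketched on a toy two-parameter signal with white noise (a made-up waveform and numbers, nothing LISA-specific): F_ij = Σ_t (∂h/∂θ_i)(∂h/∂θ_j)/σ², and the 1σ uncertainty on θ_i is sqrt((F⁻¹)_ii):

```python
import numpy as np

def fisher_errors(derivs, sigma=1.0):
    """1-sigma parameter uncertainties from signal derivatives dh/dtheta_i
    (rows of `derivs`) under white noise of standard deviation sigma."""
    F = derivs @ derivs.T / sigma**2      # Fisher information matrix
    cov = np.linalg.inv(F)                # covariance = inverse Fisher
    return np.sqrt(np.diag(cov))

# Toy signal h(t) = A*sin(2*pi*f*t); derivatives w.r.t. amplitude and frequency
t = np.linspace(0.0, 1.0, 500)
A, f = 1.0, 5.0
dh_dA = np.sin(2 * np.pi * f * t)
dh_df = A * 2 * np.pi * t * np.cos(2 * np.pi * f * t)
errs = fisher_errors(np.vstack([dh_dA, dh_df]), sigma=0.1)
print(errs)  # finite, positive 1-sigma uncertainties on (A, f)
```

Adding a parameter such as Q simply appends another derivative row; its marginalized uncertainty then accounts for correlations with all other parameters through the matrix inverse.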

  14. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    USGS Publications Warehouse

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. 
With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
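    The null-space projection at the heart of NSMC can be sketched for a linearized model (toy Jacobian; not the PEST implementation): perturbations confined to the null space of the Jacobian leave the simulated observations, and hence the calibration fit, unchanged:

```python
import numpy as np

def null_space_samples(J, p_cal, n_samples, rng):
    """Add random perturbations that lie in the null space of the model
    Jacobian J, so each sample reproduces the calibration data."""
    U, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > 1e-10))
    V_null = Vt[rank:].T                  # columns span the null space of J
    perturb = V_null @ rng.standard_normal((V_null.shape[1], n_samples))
    return p_cal[:, None] + perturb

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 6))          # 3 observations, 6 parameters
p_cal = np.ones(6)                       # calibrated parameter vector
samples = null_space_samples(J, p_cal, 100, rng)
# Every sample fits the (linearized) data exactly as well as p_cal does
print(np.allclose(J @ samples, (J @ p_cal)[:, None]))
```

For a nonlinear model the projection is only approximate, which is why the NSMC workflow re-runs a short recalibration on each perturbed field; the dimensionality choice discussed above corresponds to where the singular-value cutoff is placed.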

  15. Constraints on modified gravity models from white dwarfs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Srimanta; Singh, Tejinder P.; Shankar, Swapnil, E-mail: srimanta.banerjee@tifr.res.in, E-mail: swapnil.shankar@cbs.ac.in, E-mail: tpsingh@tifr.res.in

    Modified gravity theories can introduce modifications to the Poisson equation in the Newtonian limit. As a result, we expect to see interesting features of these modifications inside stellar objects. White dwarfs are among the most well-studied stars in stellar astrophysics. We explore the effects of modified gravity theories inside white dwarfs. We derive the modified stellar structure equations and solve them to study the mass-radius relationships for various modified gravity theories. We also constrain the parameter space of these theories from observations.
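    The effect described above can be caricatured with a toy polytrope under a schematic modification G_eff = (1 + α)G (an illustrative assumption, not the paper's field equations): integrating hydrostatic equilibrium outward, a stronger effective coupling yields a more compact star:

```python
import numpy as np

def polytrope_radius(alpha, K=1.0, n_exp=2.0, rho_c=1.0, dr=1e-3):
    """Outward Euler integration of hydrostatic equilibrium with
    P = K*rho**n_exp and effective coupling G_eff = (1 + alpha)*G."""
    G, r, m, rho = 1.0, dr, 0.0, rho_c
    P = K * rho ** n_exp
    while rho > 1e-6 * rho_c:
        m += 4.0 * np.pi * r ** 2 * rho * dr            # enclosed mass
        P -= G * (1.0 + alpha) * m * rho / r ** 2 * dr  # dP/dr step
        rho = max(P / K, 0.0) ** (1.0 / n_exp)
        r += dr
    return r

r_newtonian = polytrope_radius(alpha=0.0)
r_modified = polytrope_radius(alpha=0.5)
print(r_modified < r_newtonian)  # stronger gravity -> smaller radius
```

Comparing such predicted radii (in the full theory, the mass-radius relation) against observed white dwarfs is what allows the parameter α-like quantities to be bounded.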

  16. Effect of soil property uncertainties on permafrost thaw projections: a calibration-constrained analysis: Modeling Archive

    DOE Data Explorer

    J.C. Rowland; D.R. Harp; C.J. Wilson; A.L. Atchley; V.E. Romanovsky; E.T. Coon; S.L. Painter

    2016-02-02

    This Modeling Archive is in support of an NGEE Arctic publication available at doi:10.5194/tc-10-341-2016. This dataset contains an ensemble of thermal-hydro soil parameters including porosity, thermal conductivity, thermal conductivity shape parameters, and residual saturation of peat and mineral soil. The ensemble was generated using a Null-Space Monte Carlo analysis of parameter uncertainty based on a calibration to soil temperatures collected at the Barrow Environmental Observatory site by the NGEE team. The micro-topography of ice wedge polygons present at the site is included in the analysis using three 1D column models to represent polygon center, rim and trough features. The Arctic Terrestrial Simulator (ATS) was used in the calibration to model multiphase thermal and hydrological processes in the subsurface.

  17. A Coordinated X-Ray and Optical Campaign of the Nearest Massive Eclipsing Binary, δ Orionis Aa. III. Analysis of Optical Photometric (MOST) and Spectroscopic (Ground-based) Variations

    NASA Astrophysics Data System (ADS)

    Pablo, Herbert; Richardson, Noel D.; Moffat, Anthony F. J.; Corcoran, Michael; Shenar, Tomer; Benvenuto, Omar; Fuller, Jim; Nazé, Yaël; Hoffman, Jennifer L.; Miroshnichenko, Anatoly; Maíz Apellániz, Jesús; Evans, Nancy; Eversberg, Thomas; Gayley, Ken; Gull, Ted; Hamaguchi, Kenji; Hamann, Wolf-Rainer; Henrichs, Huib; Hole, Tabetha; Ignace, Richard; Iping, Rosina; Lauer, Jennifer; Leutenegger, Maurice; Lomax, Jamie; Nichols, Joy; Oskinova, Lida; Owocki, Stan; Pollock, Andy; Russell, Christopher M. P.; Waldron, Wayne; Buil, Christian; Garrel, Thierry; Graham, Keith; Heathcote, Bernard; Lemoult, Thierry; Li, Dong; Mauclaire, Benjamin; Potter, Mike; Ribeiro, Jose; Matthews, Jaymie; Cameron, Chris; Guenther, David; Kuschnig, Rainer; Rowe, Jason; Rucinski, Slavek; Sasselov, Dimitar; Weiss, Werner

    2015-08-01

    We report on both high-precision photometry from the Microvariability and Oscillations of Stars (MOST) space telescope and ground-based spectroscopy of the triple system δ Ori A, consisting of a binary O9.5II+early-B (Aa1 and Aa2) with P = 5.7 days, and a more distant tertiary (O9 IV, P > 400 years). These data were collected in concert with X-ray spectroscopy from the Chandra X-ray Observatory. Thanks to continuous coverage for three weeks, the MOST light curve reveals clear eclipses between Aa1 and Aa2 for the first time in non-phased data. From the spectroscopy, we have a well-constrained radial velocity (RV) curve of Aa1. While we are unable to recover RV variations of the secondary star, we are able to constrain several fundamental parameters of this system and determine an approximate mass of the primary using apsidal motion. We also detected second-order modulations at 12 separate frequencies, with spacings indicative of tidally influenced oscillations. Such spacings have never been seen in a massive binary, making this system one of only a handful of binaries that show evidence for tidally induced pulsations.

  18. Transport Regimes Spanning Magnetization-Coupling Phase Space

    NASA Astrophysics Data System (ADS)

    Baalrud, Scott D.; Tiwari, Sanat; Daligault, Jerome

    2017-10-01

    The manner in which transport properties vary over the entire parameter space of coupling and magnetization strength is explored. Four regimes are identified based on the relative size of the gyroradius compared to other fundamental length scales: the collision mean free path, Debye length, distance of closest approach, and interparticle spacing. Molecular dynamics simulations of self-diffusion and temperature anisotropy relaxation spanning the parameter space are found to agree well with the predicted boundaries. Comparison with existing theories reveals regimes where they succeed, where they fail, and where no theory has yet been developed. The results suggest that magnetic fields may be used to assist ultracold neutral plasma experiments to reach regimes of stronger electron coupling by reducing heating of electrons in the direction perpendicular to the magnetic field. By constraining electron motion along the direction of the magnetic field, the overall electron temperature is reduced by nearly a factor of three. A large temperature anisotropy develops as a result, which can be maintained for a long time in the regime of high electron magnetization. Work supported by LDRD project 20150520ER at LANL, AFOSR FA9550-16-1-0221 and US DOE Award DE-SC00161.
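    As a rough illustration of the length-scale comparison that defines these regimes, the sketch below evaluates the four lengths for assumed ultracold-neutral-plasma conditions using only standard textbook formulas; the specific density, temperature, and field values are hypothetical:

```python
import math

# Physical constants (SI units).
e    = 1.602176634e-19      # elementary charge, C
me   = 9.1093837015e-31     # electron mass, kg
kB   = 1.380649e-23         # Boltzmann constant, J/K
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m

def length_scales(n, T, B):
    """Return (gyroradius, Debye length, closest approach, spacing) in metres."""
    v_t  = math.sqrt(kB * T / me)                   # electron thermal speed
    r_c  = me * v_t / (e * B)                       # gyroradius
    l_D  = math.sqrt(eps0 * kB * T / (n * e**2))    # Debye length
    r_cl = e**2 / (4 * math.pi * eps0 * kB * T)     # distance of closest approach
    a    = (3 / (4 * math.pi * n)) ** (1 / 3)       # interparticle spacing
    return r_c, l_D, r_cl, a

# Assumed (hypothetical) conditions: n = 1e15 m^-3, T = 1 K, B = 0.1 T.
r_c, l_D, r_cl, a = length_scales(1e15, 1.0, 0.1)
magnetized = r_c < l_D   # gyroradius below the Debye length: the field
print(magnetized)        # strongly affects collective electron motion
```

    Where the gyroradius falls in the ordering of these four lengths is what selects the transport regime in the description above.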

  19. Robust on-off pulse control of flexible space vehicles

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Sinha, Ravi

    1993-01-01

    The on-off reaction jet control system is often used for attitude and orbital maneuvering of various spacecraft. Future space vehicles such as orbital transfer vehicles, orbital maneuvering vehicles, and the space station will make extensive use of reaction jets for orbital maneuvering and attitude stabilization. The proposed robust fuel- and time-optimal control algorithm is applied to a three-mass spring model of a flexible spacecraft. A fuel-efficient on-off control logic is developed for robust rest-to-rest maneuvers of a flexible vehicle with minimum excitation of structural modes. The first part of this report is concerned with the problem of selecting a proper pair of jets for practical trade-offs among maneuvering time, fuel consumption, structural mode excitation, and performance robustness. A time-optimal control problem subject to parameter robustness constraints is formulated and solved. The second part of this report deals with obtaining parameter-insensitive fuel- and time-optimal control inputs by solving a constrained optimization problem subject to robustness constraints. It is shown that sensitivity to modeling errors can be significantly reduced by the proposed robustified open-loop control approach. The final part of this report deals with sliding mode control design for uncertain flexible structures. The benchmark problem of a flexible structure is used as an example for feedback sliding mode controller design with bounded control inputs, and robustness to parameter variations is investigated.

  20. Non-standard neutrino interactions in the mu–tau sector

    DOE PAGES

    Mocioiu, Irina; Wright, Warren

    2015-04-01

    We discuss neutrino mass hierarchy implications arising from the effects of non-standard neutrino interactions on muon rates in high statistics atmospheric neutrino oscillation experiments like IceCube DeepCore. We concentrate on the mu–tau sector, which is presently the least constrained. It is shown that the magnitude of the effects depends strongly on the sign of the ϵμτ parameter describing this non-standard interaction. A simple analytic model is used to understand the parameter space where differences between the two signs are maximized. We discuss how this effect is partially degenerate with changing the neutrino mass hierarchy, as well as how this degeneracy could be lifted.

  1. Constraining 3-PG with a new δ13C submodel: a test using the δ13C of tree rings.

    PubMed

    Wei, Liang; Marshall, John D; Link, Timothy E; Kavanagh, Kathleen L; Du, Enhao; Pangle, Robert E; Gag, Peter J; Ubierna, Nerea

    2014-01-01

    A semi-mechanistic forest growth model, 3-PG (Physiological Principles Predicting Growth), was extended to calculate δ13C in tree rings. The δ13C estimates were based on the model's existing description of carbon assimilation and canopy conductance. The model was tested in two ~80-year-old natural stands of Abies grandis (grand fir) in northern Idaho. We used as many independent measurements as possible to parameterize the model. Measured parameters included quantum yield, specific leaf area, soil water content and litterfall rate. Predictions were compared with measurements of transpiration by sap flux, stem biomass, tree diameter growth, leaf area index and δ13C. Sensitivity analysis showed that the model's predictions of δ13C were sensitive to key parameters controlling carbon assimilation and canopy conductance, which would have allowed it to fail had the model been parameterized or programmed incorrectly. Instead, the simulated δ13C of tree rings was no different from measurements (P > 0.05). The δ13C submodel provides a convenient means of constraining parameter space and avoiding model artefacts. This δ13C test may be applied to any forest growth model that includes realistic simulations of carbon assimilation and transpiration. © 2013 John Wiley & Sons Ltd.

  2. Information gains from cosmic microwave background experiments

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Amara, Adam; Refregier, Alexandre; Paranjape, Aseem; Akeret, Joël

    2014-07-01

    To shed light on the fundamental problems posed by dark energy and dark matter, a large number of experiments have been performed and combined to constrain cosmological models. We propose a novel way of quantifying the information gained by updates on the parameter constraints from a series of experiments which can either complement earlier measurements or replace them. For this purpose, we use the Kullback-Leibler divergence or relative entropy from information theory to measure differences in the posterior distributions in model parameter space from a pair of experiments. We apply this formalism to a historical series of cosmic microwave background experiments ranging from Boomerang to WMAP, SPT, and Planck. Considering different combinations of these experiments, we thus estimate the information gain in units of bits and distinguish contributions from the reduction of statistical errors and the "surprise" corresponding to a significant shift of the parameters' central values. For this experiment series, we find individual relative entropy gains ranging from about 1 to 30 bits. In some cases, e.g. when comparing WMAP and Planck results, we find that the gains are dominated by the surprise rather than by improvements in statistical precision. We discuss how this technique provides a useful tool for both quantifying the constraining power of data from cosmological probes and detecting the tensions between experiments.
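    The relative entropy between two Gaussian posteriors has a closed form, so the information-gain measure described above can be illustrated for a single parameter. The paper works with full multi-dimensional posteriors; this 1-D Gaussian reduction is an assumption made purely for illustration:

```python
import math

def relative_entropy_bits(mu1, sigma1, mu2, sigma2):
    """KL divergence D(p1 || p2) between two 1-D Gaussians, in bits.

    p1 is the updated posterior, p2 the earlier one.  The (mu1 - mu2)^2
    term is the 'surprise' from a shift in the central value; the other
    terms come from the change in statistical precision.
    """
    nats = (math.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2 * sigma2**2)
            - 0.5)
    return nats / math.log(2)

# No update at all carries no information:
print(relative_entropy_bits(0.0, 1.0, 0.0, 1.0))             # 0.0
# Halving the error bar without moving the centre:
print(round(relative_entropy_bits(0.0, 0.5, 0.0, 1.0), 3))   # 0.459
```

    In the multi-dimensional case the same decomposition into precision and surprise contributions holds, with covariance matrices in place of the variances.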

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Yong-Seon; Zhao, Gongbo

    We explore the complementarity of weak lensing and galaxy peculiar velocity measurements to better constrain modifications to General Relativity. We find no evidence for deviations from General Relativity on cosmological scales from a combination of peculiar velocity measurements (for Luminous Red Galaxies in the Sloan Digital Sky Survey) with weak lensing measurements (from the Canada-France-Hawaii Telescope Legacy Survey). We provide a Fisher error forecast for a Euclid-like space-based survey including both lensing and peculiar velocity measurements and show that the expected constraints on modified gravity will be at least an order of magnitude better than with present data, i.e. we will obtain ≈5% errors on the modified gravity parametrization described here. We also present a model-independent method for constraining modified gravity parameters using tomographic peculiar velocity information, and apply this methodology to the present data set.

  4. Sub-TeV quintuplet minimal dark matter with left-right symmetry

    NASA Astrophysics Data System (ADS)

    Agarwalla, Sanjib Kumar; Ghosh, Kirtiman; Patra, Ayon

    2018-05-01

    A detailed study of a fermionic quintuplet dark matter in a left-right symmetric scenario is performed in this article. The minimal quintuplet dark matter model is highly constrained by the WMAP dark matter relic density (RD) data. To alleviate this constraint, an extra singlet scalar is introduced. It provides a host of new annihilation and co-annihilation channels for the dark matter, allowing even sub-TeV masses. The phenomenology of this singlet scalar is studied in detail in the context of the Large Hadron Collider (LHC) experiment. The production and decay of this singlet scalar at the LHC give rise to interesting resonant di-Higgs or diphoton final states. We also constrain the RD-allowed parameter space of this model in light of the ATLAS bounds on the resonant di-Higgs and diphoton cross-sections.

  5. Global fits of GUT-scale SUSY models with GAMBIT

    NASA Astrophysics Data System (ADS)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  6. Population Synthesis of Radio and γ-ray Millisecond Pulsars Using Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Gonthier, Peter L.; Billman, C.; Harding, A. K.

    2013-04-01

    We present preliminary results of a new population synthesis of millisecond pulsars (MSP) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and γ-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of ten radio surveys and by Fermi, predicting the MSP birth rate in the Galaxy. We follow a similar set of assumptions that we have used in previous, more constrained Monte Carlo simulations. The parameters associated with the birth distributions such as those for the accretion rate, magnetic field and period distributions are also free to vary. With the large set of free parameters, we employ Markov Chain Monte Carlo simulations to explore the large and small worlds of the parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and γ-ray pulsar characteristics. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
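    A Markov Chain Monte Carlo exploration of the kind described can be sketched with a minimal Metropolis-Hastings sampler. The Gaussian "log-likelihood" below is a hypothetical stand-in for the real simulation-based likelihood of the population model:

```python
import math
import random

def log_like(theta):
    """Hypothetical 1-D target: log-likelihood of N(2.0, 0.5)."""
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def metropolis(n_steps, step=0.3, seed=1):
    """Metropolis-Hastings with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    theta, ll = 0.0, log_like(0.0)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        ll_prop = log_like(prop)
        # Accept with probability min(1, exp(delta log-likelihood)).
        if rng.random() < math.exp(min(0.0, ll_prop - ll)):
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

chain = metropolis(20000)
burned = chain[5000:]            # discard burn-in
mean = sum(burned) / len(burned)
print(round(mean, 1))            # posterior mean, close to 2.0
```

    The real application replaces `log_like` with a comparison between simulated and detected pulsar populations, and the chain wanders through the full set of birth-distribution and luminosity parameters rather than a single scalar.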

  7. Maximal compression of the redshift-space galaxy power spectrum and bispectrum

    NASA Astrophysics Data System (ADS)

    Gualdi, Davide; Manera, Marc; Joachimi, Benjamin; Lahav, Ofer

    2018-05-01

    We explore two methods of compressing the redshift-space galaxy power spectrum and bispectrum with respect to a chosen set of cosmological parameters. Both methods involve reducing the dimension of the original data vector (e.g. 1000 elements) to the number of cosmological parameters considered (e.g. seven) using the Karhunen-Loève algorithm. In the first case, we run MCMC sampling on the compressed data vector in order to recover the 1D and 2D posterior distributions. The second option, approximately 2000 times faster, works by orthogonalizing the parameter space through diagonalization of the Fisher information matrix before the compression, obtaining the posterior distributions without the need of MCMC sampling. Using these methods for future spectroscopic redshift surveys like DESI, Euclid, and PFS would drastically reduce the number of simulations needed to compute accurate covariance matrices with minimal loss of constraining power. We consider a redshift bin of a DESI-like experiment. Using the power spectrum combined with the bispectrum as a data vector, both compression methods on average recover the 68 per cent credible regions to within 0.7 per cent and 2 per cent of those resulting from standard MCMC sampling, respectively. These confidence intervals are also smaller than the ones obtained using only the power spectrum by 81 per cent, 80 per cent, and 82 per cent, respectively, for the bias parameter b1, the growth rate f, and the scalar amplitude parameter As.
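    The spirit of the compression step — collapsing a long data vector to one number per parameter via an inverse-variance-weighted linear projection — can be sketched for a one-parameter linear model. This is a MOPED-style special case in the same spirit as the Karhunen-Loève reduction above; the template and covariance below are hypothetical:

```python
# Hypothetical linear model: data d_k = theta * template_k + noise,
# with a diagonal noise covariance.
template = [0.5, 1.0, 1.5, 2.0, 2.5]   # d(mu)/d(theta) per data element
var      = [0.1, 0.1, 0.2, 0.2, 0.4]   # noise variance per data element

def compress(data):
    """Project the data vector onto the parameter direction: t = b . d,
    with weights b_k = template_k / var_k (inverse-variance weighting)."""
    return sum(t / v * d for t, v, d in zip(template, var, data))

def estimate_theta(data):
    """For a linear model the compressed statistic is a sufficient estimator;
    dividing by the Fisher information recovers theta directly."""
    fisher = sum(t * t / v for t, v in zip(template, var))
    return compress(data) / fisher

# Noise-free check: data generated with theta = 0.7 is recovered exactly.
data = [0.7 * t for t in template]
print(round(estimate_theta(data), 6))   # 0.7
```

    With several parameters, one such weight vector per parameter is built and orthogonalized, which is where the Fisher-matrix diagonalization described above enters.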

  8. Superconducting cosmic strings as sources of cosmological fast radio bursts

    NASA Astrophysics Data System (ADS)

    Ye, Jiani; Wang, Kai; Cai, Yi-Fu

    2017-11-01

    In this paper we calculate the radio burst signals from three kinds of structures of superconducting cosmic strings. By taking into account observational factors including scattering and relativistic effects, we derive the event rate of radio bursts as a function of redshift in terms of the theoretical parameters Gμ and I of superconducting strings. Our analyses show that cusps and kinks may have noticeable contributions to the event rate, and in most cases cusps would dominate the contribution, while kink-kink collisions tend to have secondary effects. By fitting theoretical predictions to the normalized data of fast radio bursts, we for the first time constrain the parameter space of superconducting strings and report that the parameter space of Gμ ˜ [10^{-14}, 10^{-12}] and I ˜ [10^{-1}, 10^{2}] GeV fits the observations well, although the statistical significance is low due to the lack of observational data. Moreover, we derive two types of best fittings, with one being dominated by cusps at a redshift z = 1.3, and the other dominated by kinks at the range of the maximal event rate.

  9. Uncertainty quantification of crustal scale thermo-chemical properties in Southeast Australia

    NASA Astrophysics Data System (ADS)

    Mather, B.; Moresi, L. N.; Rayner, P. J.

    2017-12-01

    The thermo-chemical properties of the crust are essential to understanding the mechanical and thermal state of the lithosphere. The uncertainties associated with these parameters are connected to the available geophysical observations and a priori information to constrain the objective function. Often, it is computationally efficient to reduce the parameter space by mapping large portions of the crust into lithologies that have assumed homogeneity. However, the boundaries of these lithologies are, in themselves, uncertain and should also be included in the inverse problem. We assimilate geological uncertainties from an a priori geological model of Southeast Australia with geophysical uncertainties from S-wave tomography and 174 heat flow observations within an adjoint inversion framework. This reduces the computational cost of inverting high dimensional probability spaces, compared to probabilistic inversion techniques that operate in the `forward' mode, but at the sacrifice of uncertainty and covariance information. We overcome this restriction using a sensitivity analysis, that perturbs our observations and a priori information within their probability distributions, to estimate the posterior uncertainty of thermo-chemical parameters in the crust.

  10. Multi-objective trajectory optimization for the space exploration vehicle

    NASA Astrophysics Data System (ADS)

    Qin, Xiaoli; Xiao, Zhen

    2016-07-01

    The research determines a temperature-constrained optimal trajectory for the space exploration vehicle by developing an optimal control formulation and solving it using a variable-order quadrature collocation method with a Non-linear Programming (NLP) solver. The vehicle is assumed to be a space reconnaissance aircraft that has specified takeoff/landing locations, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom aircraft model is adapted from previous work and includes flight dynamics and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and exploration of space targets. In addition, the vehicle models include environmental models (gravity and atmosphere). How these models are appropriately employed is key to gaining confidence in the results and conclusions of the research. Optimal trajectories are developed using several performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum distance. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for large-scale space exploration.

  11. Determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1991-01-01

    The final report for work on the determination of design and operation parameters for upper atmospheric research instrumentation to yield optimum resolution with deconvolution is presented. Papers and theses prepared during the reporting period are included. The principal result is a methodology for determining design and operation parameters that minimize error when deconvolution is included in the data analysis. An error surface is plotted versus the signal-to-noise ratio (SNR) and all parameters of interest. Instrumental characteristics determine a curve in this space. The SNR and parameter values which give the projection from the curve to the surface corresponding to the smallest value of the error are the optimum values. These values are constrained by the curve and so will not necessarily correspond to an absolute minimum of the error surface.
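    The procedure described — tabulating an error surface over SNR and a design parameter, restricting attention to the curve the instrument actually allows, and picking the point of smallest error on that curve — can be sketched as a constrained grid search. The error function and instrument-characteristic curve below are hypothetical stand-ins:

```python
def error_surface(snr, width):
    """Hypothetical deconvolution error: noise amplification grows as SNR
    drops; resolution loss grows with the instrument response width."""
    return 1.0 / snr + 0.05 * width ** 2

def instrument_curve(width):
    """Hypothetical instrument characteristic: a wider response integrates
    more signal, so SNR rises with width, tying the two axes together."""
    return 20.0 * width

# Search only along the curve, not over the whole (SNR, width) plane.
candidates = [w / 10.0 for w in range(1, 51)]        # widths 0.1 .. 5.0
best_width = min(candidates,
                 key=lambda w: error_surface(instrument_curve(w), w))
best_error = error_surface(instrument_curve(best_width), best_width)
print(best_width, round(best_error, 4))              # 0.8 0.0945
```

    Note the constrained optimum (width 0.8 here) need not coincide with the unconstrained minimum of the surface, which is exactly the point made in the abstract.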

  12. Observational constraints on variable equation of state parameters of dark matter and dark energy after Planck

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh; Xu, Lixin

    2014-10-01

    In this paper, we study a cosmological model in general relativity within the framework of spatially flat Friedmann-Robertson-Walker space-time filled with ordinary matter (baryonic), radiation, dark matter and dark energy, where the latter two components are described by Chevallier-Polarski-Linder equation of state parameters. We utilize the observational data sets from SNLS3, BAO and Planck + WMAP9 + WiggleZ measurements of matter power spectrum to constrain the model parameters. We find that the current observational data offer tight constraints on the equation of state parameter of dark matter. We consider the perturbations and study the behavior of dark matter by observing its effects on CMB and matter power spectra. We find that the current observational data favor the cold dark matter scenario with the cosmological constant type dark energy at the present epoch.

  13. Asteroseismic Diagram for Subgiants and Red Giants

    NASA Astrophysics Data System (ADS)

    Gai, Ning; Tang, Yanke; Yu, Peng; Dou, Xianghua

    2017-02-01

    Asteroseismology is a powerful tool for constraining stellar parameters. NASA’s Kepler mission is providing individual eigenfrequencies for a huge number of stars, including thousands of red giants. Besides the frequencies of acoustic modes, an important breakthrough of the Kepler mission is the detection of nonradial gravity-dominated mixed-mode oscillations in red giants. Unlike pure acoustic modes, mixed modes probe deeply into the interior of stars, allowing the stellar core properties and evolution of stars to be derived. In this work, using the gravity-mode period spacing and the large frequency separation, we construct the ΔΠ1-Δν asteroseismic diagram from models of subgiants and red giants with various masses and metallicities. The relationship ΔΠ1-Δν is able to constrain the ages and masses of the subgiants. Meanwhile, for red giants with masses above 1.5 M ⊙, the ΔΠ1-Δν asteroseismic diagram can also work well to constrain the stellar age and mass. Additionally, we calculate the relative “isochrones” τ, which indicate similar evolution states especially for similar mass stars, on the ΔΠ1-Δν diagram.
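    One relation underlying such diagrams, the mean-density scaling of the large frequency separation Δν ∝ √(M/R³), is simple enough to sketch. The g-mode period spacing ΔΠ1 requires interior models and is not attempted here; the solar reference value of ≈135 μHz is an assumed round number:

```python
import math

DNU_SUN = 135.0   # muHz, approximate solar large frequency separation

def delta_nu(mass, radius):
    """Large frequency separation from the mean-density scaling relation.
    Mass and radius are in solar units."""
    return DNU_SUN * math.sqrt(mass / radius ** 3)

def mass_from_seismo(dnu, radius):
    """Invert the relation for mass, given an independent radius."""
    return (dnu / DNU_SUN) ** 2 * radius ** 3

# A 1.5 M_sun star expanded to 5 R_sun (subgiant / early red giant):
dnu = delta_nu(1.5, 5.0)
print(round(dnu, 2))                          # ~14.8 muHz
print(round(mass_from_seismo(dnu, 5.0), 6))   # recovers 1.5
```

    Pairing Δν with ΔΠ1 breaks the mass-radius-age degeneracies that Δν alone leaves, which is why the ΔΠ1-Δν plane works as the diagnostic diagram described above.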

  14. Small-scale effects of thermal inflation on halo abundance at high-z, galaxy substructure abundance, and 21-cm power spectrum

    NASA Astrophysics Data System (ADS)

    Hong, Sungwook E.; Zoe, Heeseung; Ahn, Kyungjin

    2017-11-01

    We study the impact of thermal inflation on the formation of cosmological structures and present astrophysical observables which can be used to constrain and possibly probe the thermal inflation scenario. These are dark matter halo abundance at high redshifts, satellite galaxy abundance in the Milky Way, and fluctuation in the 21-cm radiation background before the epoch of reionization. The thermal inflation scenario leaves a characteristic signature on the matter power spectrum by boosting the amplitude at a specific wave number determined by the number of e-foldings during thermal inflation (N_{bc}), and strongly suppressing the amplitude for modes at smaller scales. For a reasonable range of parameter space, one of the consequences is the suppression of minihalo formation at high redshifts and that of satellite galaxies in the Milky Way. While this effect is substantial, it is degenerate with other cosmological or astrophysical effects. The power spectrum of the 21-cm background probes this impact more directly, and its observation may be the best way to constrain the thermal inflation scenario due to the characteristic signature in the power spectrum. The Square Kilometre Array (SKA) in phase 1 (SKA1) has sensitivity large enough to achieve this goal for models with N_{bc} ≳ 26 if a 10000-hr observation is performed. The final phase SKA, with anticipated sensitivity about an order of magnitude higher, seems more promising and will cover a wider parameter space.

  15. Electroweak baryogenesis in two Higgs doublet models and B meson anomalies

    NASA Astrophysics Data System (ADS)

    Cline, James M.; Kainulainen, Kimmo; Trott, Michael

    2011-11-01

    Motivated by 3.9σ evidence of a CP-violating phase beyond the standard model in the like-sign dimuon asymmetry reported by DØ, we examine the potential for two Higgs doublet models (2HDMs) to achieve successful electroweak baryogenesis (EWBG) while explaining the dimuon anomaly. Our emphasis is on the minimal flavour violating 2HDM, but our numerical scans of model parameter space include type I and type II models as special cases. We incorporate relevant particle physics constraints, including electroweak precision data, b → sγ, the neutron electric dipole moment, R_b, and perturbative coupling bounds to constrain the model. Surprisingly, we find that a large enough baryon asymmetry is only consistently achieved in a small subset of parameter space in 2HDMs, regardless of trying to simultaneously account for any B physics anomaly. There is some tension between simultaneous explanation of the dimuon anomaly and baryogenesis, but using a Markov chain Monte Carlo we find several models within 1σ of the central values. We point out shortcomings with previous studies that reached different conclusions. The restricted parameter space that allows for EWBG makes this scenario highly predictive for collider searches. We discuss the most promising signatures to pursue at the LHC for EWBG-compatible models.

  16. Determination of wave-function functionals: The constrained-search variational method

    NASA Astrophysics Data System (ADS)

    Pan, Xiao-Yin; Sahni, Viraht; Massa, Lou

    2005-09-01

    In a recent paper [Phys. Rev. Lett. 93, 130401 (2004)], we proposed the idea of expanding the space of variations in variational calculations of the energy by considering the approximate wave function ψ to be a functional of functions χ, ψ = ψ[χ], rather than a function. A constrained search is first performed over all functions χ such that the wave-function functional ψ[χ] satisfies a physical constraint or leads to the known value of an observable. A rigorous upper bound to the energy is then obtained via the variational principle. In this paper we generalize the constrained-search variational method, applicable to both ground and excited states, to the determination of arbitrary Hermitian single-particle operators as applied to two-electron atomic and ionic systems. We construct analytical three-parameter ground-state functionals for the H⁻ ion and the He atom through the constraint of normalization. We present the results for the total energy E, the expectations of the single-particle operators W = ∑_i r_i^n for n = −2, −1, 1, 2, W = ∑_i δ(r_i), and W = ∑_i δ(r_i − r), the structure of the nonlocal Coulomb hole charge ρ_c(r, r′), and the expectations of the two-particle operators u², u, 1/u, 1/u², where u = |r_i − r_j|. The results for all the expectation values are remarkably accurate when compared with the 1078-parameter wave function of Pekeris, and other wave functions that are not functionals. We conclude by describing our current work on how the constrained-search variational method in conjunction with quantal density-functional theory is being applied to the many-electron case.
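    The variational-principle step invoked above can be illustrated with the textbook one-parameter hydrogen trial function, a far simpler case than the two-electron functionals of the paper. For the trial state exp(−a r) in atomic units, the energy expectation is E(a) = a²/2 − a, and scanning the parameter yields a rigorous upper bound that happens to be exact at a = 1:

```python
# Textbook illustration of the variational principle (hydrogen atom, not the
# two-electron systems of the paper): every trial value of the parameter a
# gives an energy that is an upper bound on the true ground-state energy.

def energy(a):
    """<T> + <V> for the exp(-a r) trial state, in atomic units (hartree)."""
    return 0.5 * a * a - a

params = [i / 1000.0 for i in range(1, 3001)]   # scan a in (0, 3]
best_a = min(params, key=energy)
print(best_a, energy(best_a))                   # 1.0 -0.5 (exact ground state)
```

    The constrained-search extension replaces the single scan with a search over functions χ subject to a physical constraint, but the upper-bound logic is the same.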

  17. Implementation of remote sensing data for flood forecasting

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Li, Y.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2016-12-01

    Flooding is one of the most frequent and destructive natural disasters. A timely, accurate and reliable flood forecast can provide vital information for flood preparedness, warning delivery, and emergency response. An operational flood forecasting system typically consists of a hydrologic model, which simulates runoff generation and concentration, and a hydraulic model, which models riverine flood wave routing and floodplain inundation. However, these two types of models suffer from various sources of uncertainty, e.g., forcing data, initial conditions, model structure and parameters. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed Remote Sensing (RS) data offers new opportunities for flood event investigation and forecasting. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture data to constrain a hydrologic model, and 2) RS-derived flood extent and levels to constrain a hydraulic model. The hydrological model is based on a semi-distributed system coupling a two-soil-layer rainfall-runoff model, GRKAL, with a linear Muskingum routing model. Model calibration was performed using either 1) streamflow data only or 2) both streamflow and RS soil moisture data. The model was then further constrained through the integration of real-time soil moisture data. The hydraulic model is based on LISFLOOD-FP, which solves the 2D inertial approximation of the Shallow Water Equations. Streamflow data and RS-derived flood extent and levels were used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space was quantified and discussed.

  18. Numerical Estimation of Balanced and Falling States for Constrained Legged Systems

    NASA Astrophysics Data System (ADS)

    Mummolo, Carlotta; Mangialardi, Luigi; Kim, Joo H.

    2017-08-01

Instability and risk of fall during standing and walking are common challenges for biped robots. While existing criteria from the state-space dynamical systems approach or ground reference points are useful in some applications, complete system models and constraints have not been taken into account for prediction and indication of fall for general legged robots. In this study, a general numerical framework that estimates the balanced and falling states of legged systems is introduced. The overall approach is based on the integration of joint-space and Cartesian-space dynamics of a legged system model. The full-body constrained joint-space dynamics includes the contact forces and moments term due to current foot (or feet) support and another term due to altered contact configuration. According to the refined notions of balanced, falling, and fallen, the system parameters, physical constraints, and initial/final/boundary conditions for balancing are incorporated into constrained nonlinear optimization problems to solve for the velocity extrema (representing the maximum perturbation allowed to maintain balance without changing contacts) in the Cartesian space at each center-of-mass (COM) position within its workspace. The iterative algorithm constructs the stability boundary as a COM state-space partition between balanced and falling states. Inclusion in the resulting six-dimensional manifold is a necessary condition for a state of the given system to be balanced under the given contact configuration, while exclusion is a sufficient condition for falling. The framework is used to analyze the balance stability of example systems with various degrees of complexity. The manifold for a 1-degree-of-freedom (DOF) legged system is consistent with the experimental and simulation results in the existing studies for specific controller designs.
The results for a 2-DOF system demonstrate the dependency of the COM state-space partition upon joint-space configuration (elbow-up vs. elbow-down). For both 1- and 2-DOF systems, the results are validated in simulation environments. Finally, the manifold for a biped walking robot is constructed and illustrated against its single-support walking trajectories. The manifold identified by the proposed framework for any given legged system can be evaluated beforehand as a system property and serves as a map for either a specified state or a specific controller's performance.
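The velocity-extrema computation described above reduces, for the simplest 1-DOF case, to the well-known capture-point condition of the linear inverted pendulum. A minimal sketch under hypothetical parameters (foot geometry, COM height; none taken from the paper):

```python
import numpy as np

# Hypothetical parameters (not from the paper): COM height, gravity, and the
# foot support interval of a planar legged system.
g, z_com = 9.81, 1.0               # m/s^2, m
omega = np.sqrt(g / z_com)         # linear-inverted-pendulum frequency
x_heel, x_toe = -0.05, 0.15        # support interval along x (m)

def velocity_extrema(x):
    """A COM state (x, v) of a linear inverted pendulum is balanced without
    changing contacts iff the instantaneous capture point x + v/omega lies
    inside the support interval; solving for v gives the extrema."""
    return omega * (x_heel - x), omega * (x_toe - x)

# Sweep COM positions across the workspace to trace the state-space
# partition between balanced and falling states.
xs = np.linspace(x_heel, x_toe, 21)
boundary = [(x, *velocity_extrema(x)) for x in xs]
```

The list `boundary` is the 1-DOF analogue of the paper's COM state-space partition: at each position, velocities between the two extrema are balanced and velocities outside them lead to falling under the given contact configuration.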

  19. On The Computation Of The Best-fit Okada-type Tsunami Source

    NASA Astrophysics Data System (ADS)

    Miranda, J. M. A.; Luis, J. M. F.; Baptista, M. A.

    2017-12-01

The forward simulation of earthquake-induced tsunamis usually assumes that the initial sea surface elevation mimics the co-seismic deformation of the ocean bottom described by a simple "Okada-type" source (a rectangular fault with constant slip in a homogeneous elastic half-space). This approach is highly effective, in particular in far-field conditions. With this assumption, and a given set of tsunami waveforms recorded by deep-sea pressure sensors and/or coastal tide stations, it is possible to deduce the set of parameters of the Okada-type solution that best fits the sea level observations. To do this, we build a "space of possible tsunami sources" (the solution space). Each solution consists of a combination of parameters: earthquake magnitude, length, width, slip, depth and angles (strike, rake, and dip). To constrain the number of possible solutions, we use the earthquake parameters defined by seismology and establish a range of possible values for each parameter. We select the "best Okada source" by comparison of the results of direct tsunami modeling using the solution space of tsunami sources. However, direct tsunami modeling is a time-consuming process for the whole solution space. To overcome this problem, we use a precomputed database of empirical Green functions to compute the tsunami waveforms resulting from unit water sources and search for the one that best matches the observations. In this study, we use as a test case the Solomon Islands tsunami of 6 February 2013, caused by a magnitude 8.0 earthquake. The "best Okada" source is the solution that best matches the tsunami recorded at six DART stations in the area. We discuss the differences between the initial seismic solution and the final one obtained from tsunami data. This publication received funding from FCT project UID/GEO/50019/2013 - Instituto Dom Luiz.
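The precomputed-database search in this record amounts to linear superposition of unit-source waveforms plus a misfit scan over the constrained solution space. A sketch with stand-in data (the Green functions, candidate weights, record length, and noise level below are all hypothetical, not the study's actual database):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical precomputed database: one waveform per "unit water source"
# (rows: unit sources, cols: time samples at a gauge).
n_unit, n_t = 8, 200
G = rng.normal(size=(n_unit, n_t))          # stand-in Green functions

# Each candidate Okada-type source maps to a weight vector: its initial
# sea-surface elevation projected onto the unit sources.  Because tsunami
# propagation is linear here, the synthetic waveform is a superposition.
def synth(weights):
    return weights @ G

# "Observed" waveform from a hidden true source plus measurement noise.
w_true = rng.uniform(0, 1, n_unit)
obs = synth(w_true) + 0.05 * rng.normal(size=n_t)

# Grid search over the (seismologically pre-constrained) solution space:
# keep the candidate minimizing the RMS misfit to the observed record.
candidates = [rng.uniform(0, 1, n_unit) for _ in range(500)] + [w_true]
misfit = [np.sqrt(np.mean((synth(w) - obs) ** 2)) for w in candidates]
best = candidates[int(np.argmin(misfit))]
```

The point of the database is visible in the cost profile: each candidate is scored with one cheap matrix product instead of one full tsunami simulation.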

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gair, Jonathan R.; Tang, Christopher; Volonteri, Marta

One of the sources of gravitational waves for the proposed space-based gravitational wave detector, the Laser Interferometer Space Antenna (LISA), are the inspirals of compact objects into supermassive black holes in the centers of galaxies--extreme-mass-ratio inspirals (EMRIs). Using LISA observations, we will be able to measure the parameters of each EMRI system detected to very high precision. However, the statistics of the set of EMRI events observed by LISA will be more important in constraining astrophysical models than extremely precise measurements for individual systems. The black holes to which LISA is most sensitive are in a mass range that is difficult to probe using other techniques, so LISA provides an almost unique window onto these objects. In this paper we explore, using Bayesian techniques, the constraints that LISA EMRI observations can place on the mass function of black holes at low redshift. We describe a general framework for approaching inference of this type--using multiple observations in combination to constrain a parametrized source population. Assuming that the scaling of the EMRI rate with the black-hole mass is known and taking a black-hole distribution given by a simple power law, dn/dln M = A_0 (M/M_*)^alpha_0, we find that LISA could measure the parameters to a precision of Delta(ln A_0) approx. 0.08 and Delta(alpha_0) approx. 0.03 for a reference model that predicts approx. 1000 events. Even with as few as 10 events, LISA should constrain the slope to a precision approx. 0.3, which is the current level of observational uncertainty in the low-mass slope of the black-hole mass function. We also consider a model in which A_0 and alpha_0 evolve with redshift, but find that EMRI observations alone do not have much power to probe such an evolution.
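A toy version of this power-law inference can be written in a few lines: draw event masses from dn/dln M proportional to (M/M_*)^alpha_0 over a fixed mass range, then compute a grid posterior for the slope. The mass range, event count, and the neglect of EMRI measurement errors are all simplifications, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: x = ln M drawn from p(x) ~ exp(alpha * x)
# on a fixed range, mimicking a power-law black-hole mass function.
alpha_true, lnM_lo, lnM_hi, n_events = -0.3, np.log(1e5), np.log(1e7), 1000

# Inverse-CDF sampling of x from the truncated exponential density.
u = rng.uniform(size=n_events)
ea, eb = np.exp(alpha_true * lnM_lo), np.exp(alpha_true * lnM_hi)
x = np.log(ea + u * (eb - ea)) / alpha_true

# Grid posterior for the slope with a flat prior (grid avoids alpha = 0,
# where the normalization takes a different limiting form).
alphas = np.linspace(-1.0, -0.01, 199)
norm = (np.exp(alphas[:, None] * lnM_hi)
        - np.exp(alphas[:, None] * lnM_lo)) / alphas[:, None]
loglike = alphas[:, None] * x[None, :] - np.log(norm)
logpost = loglike.sum(axis=1)
alpha_map = alphas[np.argmax(logpost)]
```

The recovered MAP slope lands close to the truth, and the Fisher estimate of its standard error, roughly 1/sqrt(n Var(ln M)) ~ 0.02-0.03 for 1000 events over two decades of mass, is of the same order as the Delta(alpha_0) approx. 0.03 quoted above.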

  1. Geometrically constrained kinematic global navigation satellite systems positioning: Implementation and performance

    NASA Astrophysics Data System (ADS)

    Asgari, Jamal; Mohammadloo, Tannaz H.; Amiri-Simkooei, Ali Reza

    2015-09-01

GNSS kinematic techniques are capable of providing precise coordinates in extremely short observation time-spans. These methods usually determine the coordinates of an unknown station with respect to a reference one. To enhance the precision, accuracy, reliability and integrity of the estimated unknown parameters, GNSS kinematic equations are to be augmented by possible constraints. Such constraints could be derived from the geometric relation of the receiver positions in motion. This contribution presents the formulation of constrained kinematic global navigation satellite systems positioning. Constraints effectively restrict the definition domain of the unknown parameters from the three-dimensional space to a subspace defined by the equation of motion. To test the concept of the constrained kinematic positioning method, the equation of a circle is employed as a constraint. A device capable of moving on a circle was made, and the observations from 11 positions on the circle were analyzed. Relative positioning was conducted by considering the center of the circle as the reference station. The equation of the receiver's motion was rewritten in the ECEF coordinate system. Special attention is drawn to how a constraint is applied to kinematic positioning. Implementing the constraint in the positioning process provides much more precise results compared to the unconstrained case. This has been verified based on the results obtained from the covariance matrix of the estimated parameters, as well as on empirical results from kinematic positioning samples. The theoretical standard deviations of the horizontal components are reduced by a factor ranging from 1.24 to 2.64. The improvement in the empirical standard deviation of the horizontal components ranges from 1.08 to 2.2.
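To illustrate why restricting the definition domain tightens the solution: the constraint removes the error component normal to the constraint surface. In the Monte Carlo sketch below a simple projection onto the circle stands in for the rigorous constrained least-squares adjustment, and all numbers (radius, noise, sample count) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical circular trajectory: radius R, center at the reference
# station, and noisy 2-D position fixes of the moving receiver.
R, sigma, n = 1.0, 0.05, 2000
theta = rng.uniform(0, 2 * np.pi, n)
truth = R * np.c_[np.cos(theta), np.sin(theta)]
fixes = truth + sigma * rng.normal(size=(n, 2))

# Unconstrained estimate: the raw fix.  Constrained estimate: project the
# fix onto the circle (the least-squares point satisfying ||p|| = R).
r = np.linalg.norm(fixes, axis=1, keepdims=True)
constrained = fixes * (R / r)

rmse_u = np.sqrt(np.mean(np.sum((fixes - truth) ** 2, axis=1)))
rmse_c = np.sqrt(np.mean(np.sum((constrained - truth) ** 2, axis=1)))
# Removing the radial error component leaves only the tangential one,
# so the RMSE drops by roughly sqrt(2) in this isotropic-noise toy.
```

The ~sqrt(2) improvement here is of the same character as, though not numerically comparable to, the 1.24 to 2.64 factors reported in the record above.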

  2. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Technical Reports Server (NTRS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2010-01-01

The extremely massive (> 90 Solar Mass) and luminous (= 5 x 10(exp 6) Solar Luminosity) star Eta Carinae, with its spectacular bipolar "Homunculus" nebula, comprises one of the most remarkable and intensely observed stellar systems in the galaxy. However, many of its underlying physical parameters remain a mystery. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e approx. 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision in Eta Car, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of i approx. 40deg, an argument of periapsis omega approx. 255deg, and a projected orbital axis with a position angle of approx. 312deg east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  3. Constraining the Properties of the Eta Carinae System via 3-D SPH Models of Space-Based Observations: The Absolute Orientation of the Binary Orbit

    NASA Technical Reports Server (NTRS)

    Madura, Thomas I.; Gull, Theodore R.; Owocki, Stanley P.; Okazaki, Atsuo T.; Russell, Christopher M. P.

    2011-01-01

The extremely massive (> 90 Solar Mass) and luminous (= 5 x 10(exp 6) Solar Luminosity) star Eta Carinae, with its spectacular bipolar "Homunculus" nebula, comprises one of the most remarkable and intensely observed stellar systems in the Galaxy. However, many of its underlying physical parameters remain unknown. Multiwavelength variations observed to occur every 5.54 years are interpreted as being due to the collision of a massive wind from the primary star with the fast, less dense wind of a hot companion star in a highly elliptical (e approx. 0.9) orbit. Using three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) simulations of the binary wind-wind collision, together with radiative transfer codes, we compute synthetic spectral images of [Fe III] emission line structures and compare them to existing Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS) observations. We are thus able, for the first time, to tightly constrain the absolute orientation of the binary orbit on the sky. An orbit with an inclination of approx. 40deg, an argument of periapsis omega approx. 255deg, and a projected orbital axis with a position angle of approx. 312deg east of north provides the best fit to the observations, implying that the orbital axis is closely aligned in 3-D space with the Homunculus symmetry axis, and that the companion star orbits clockwise on the sky relative to the primary.

  4. A Risk-Constrained Multi-Stage Decision Making Approach to the Architectural Analysis of Mars Missions

    NASA Technical Reports Server (NTRS)

    Kuwata, Yoshiaki; Pavone, Marco; Balaram, J. (Bob)

    2012-01-01

This paper presents a novel risk-constrained multi-stage decision making approach to the architectural analysis of planetary rover missions. In particular, focusing on a 2018 Mars rover concept, which was considered as part of a potential Mars Sample Return campaign, we model the entry, descent, and landing (EDL) phase and the rover traverse phase as four sequential decision-making stages. The problem is to find a sequence of divert and driving maneuvers so that the rover drive is minimized and the probability of a mission failure (e.g., due to a failed landing) is below a user-specified bound. By solving this problem for several different values of the model parameters (e.g., divert authority), this approach enables rigorous, accurate and systematic trade-offs for the EDL system vs. the mobility system and, more generally, cross-domain trade-offs for the different phases of a space mission. The overall optimization problem can be seen as a chance-constrained dynamic programming problem, with the additional complexity that 1) in some stages the disturbances do not have any probabilistic characterization, and 2) the state space is extremely large (i.e., hundreds of millions of states for trade-offs with high-resolution Martian maps). For this purpose, we solve the problem by performing an unconventional combination of average and minimax cost analysis and by leveraging highly efficient computational tools from the image processing community. Preliminary trade-off results are presented.

  5. Exploring JWST's Capability to Constrain Habitability on Simulated Terrestrial TESS Planets

    NASA Astrophysics Data System (ADS)

    Tremblay, Luke; Britt, Amber; Batalha, Natasha; Schwieterman, Edward; Arney, Giada; Domagal-Goldman, Shawn; Mandell, Avi; Planetary Systems Laboratory; Virtual Planetary Laboratory

    2017-01-01

In the following, we have worked to develop a flexible "observability" scale of biologically relevant molecules in the atmospheres of newly discovered exoplanets for the instruments aboard NASA's next flagship mission, the James Webb Space Telescope (JWST). We sought to create such a scale in order to provide the community with a tool with which to optimize target selection for JWST observations based on detections by the upcoming Transiting Exoplanet Survey Satellite (TESS). Current literature has laid the groundwork for defining both biologically relevant molecules as well as what characteristics would make a new world "habitable", but it has so far lacked a cohesive analysis of JWST's capabilities to observe these molecules in exoplanet atmospheres and thereby constrain habitability. In developing our Observability Scale, we utilized a range of hypothetical planets (over planetary radii and stellar insolation) and generated three self-consistent atmospheric models (of different molecular compositions) for each of our simulated planets. With these planets and their corresponding atmospheres, we utilized the most accurate JWST instrument simulator, created specifically to process transiting exoplanet spectra. Through careful analysis of these simulated outputs, we were able to determine the relevant parameters that affected JWST's ability to constrain each individual molecular band with statistical accuracy and therefore generate a scale based on those key parameters. As a preliminary test of our Observability Scale, we have also applied it to the list of TESS candidate stars in order to determine JWST's observational capabilities for any soon-to-be-detected planet in those systems.

  6. Lepton flavorful fifth force and depth-dependent neutrino matter interactions

    NASA Astrophysics Data System (ADS)

    Wise, Mark B.; Zhang, Yue

    2018-06-01

We consider a fifth force to be an interaction that couples to matter with a strength that grows with the number of atoms. In addition to competing with the strength of gravity, a fifth force can give rise to violations of the equivalence principle. Current long-range constraints on the strength and range of fifth forces are very impressive. Amongst possible fifth forces are those that couple to the lepton flavorful charges L_e - L_μ or L_e - L_τ. They have the property that their range and strength are also constrained by neutrino interactions with matter. In this brief note we review the existing constraints on the allowed parameter space in gauged U(1)_{L_e - L_{μ,τ}}. We find two regions where neutrino oscillation experiments are at the frontier of probing such a new force. In particular, there is an allowed range of parameter space where neutrino matter interactions relevant for long baseline oscillation experiments depend on the depth of the neutrino beam below the surface of the earth.

  7. PyLDTk: Python toolkit for calculating stellar limb darkening profiles and model-specific coefficients for arbitrary filters

    NASA Astrophysics Data System (ADS)

    Parviainen, Hannu

    2015-10-01

PyLDTk automates the calculation of custom stellar limb darkening (LD) profiles and model-specific limb darkening coefficients (LDC) using the library of PHOENIX-generated specific intensity spectra by Husser et al. (2013). It facilitates exoplanet transit light curve modeling, especially transmission spectroscopy where the modeling is carried out for custom narrow passbands. PyLDTk constructs model-specific priors on the limb darkening coefficients prior to the transit light curve modeling. It can also be directly integrated into the log posterior computation of any pre-existing transit modeling code, with minimal modifications, to constrain the LD model parameter space directly by the LD profile, allowing for marginalization over the whole parameter space that can explain the profile without the need to approximate this constraint by a prior distribution. This is useful when using a high-order limb darkening model where the coefficients are often correlated and the priors estimated from the tabulated values usually fail to include these correlations.
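This is not PyLDTk's actual API, just a sketch of the underlying idea: least-squares fitting of limb-darkening-law coefficients to a tabulated intensity profile, which also exposes the coefficient correlation mentioned above. The profile, noise level, and "true" coefficients are fabricated:

```python
import numpy as np

# Quadratic limb-darkening law: I(mu)/I(1) = 1 - u1*(1 - mu) - u2*(1 - mu)^2.
# Fit (u1, u2) to a tabulated specific-intensity profile by linear least
# squares; the intercept is fixed at 1 by fitting 1 - profile.
mu = np.linspace(0.05, 1.0, 40)
u1_true, u2_true = 0.45, 0.20                       # assumed "truth"
profile = 1 - u1_true * (1 - mu) - u2_true * (1 - mu) ** 2
noise = 1e-3
profile_noisy = profile + noise * np.random.default_rng(3).normal(size=mu.size)

# Design matrix for the two coefficients.
A = np.c_[1 - mu, (1 - mu) ** 2]
u1, u2 = np.linalg.lstsq(A, 1 - profile_noisy, rcond=None)[0]
cov = noise ** 2 * np.linalg.inv(A.T @ A)           # coefficient covariance
```

The negative off-diagonal of `cov` is the u1-u2 correlation that a product of independent tabulated priors would miss, which is the motivation for constraining the coefficients jointly by the profile itself.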

  8. Diagnostic Simulations of the Lunar Exosphere using Coma and Tail

    NASA Astrophysics Data System (ADS)

    Lee, Dong Wook; Kim, Sang J.

    2017-10-01

The characteristics of the lunar exosphere can be constrained by comparing simulated models with observational data of the coma and tail (Lee et al., JGR, 2011); thus far, a few independent approaches to this issue have been performed and presented in the literature. Since there are two different observational constraints on the lunar exosphere, it is interesting to find the best exospheric model that can account for the observed characteristics of both the coma and the tail. Considering various initial conditions of different sources and space weather, we present preliminary time-dependent simulations between the initial and final stages of the development of the lunar tail. Based on an updated 3-D model, we are planning to conduct numerous simulations to constrain the best model parameters from the coma images obtained from coronagraph observations supported by a NASA monitoring program (Morgan, Killen, and Potter, AGU, 2015) and future tail data.

  9. Constraining chameleon field theories using the GammeV afterglow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, A.; Steffen, J. H.; Weltman, A.

    2010-01-01

The GammeV experiment has constrained the couplings of chameleon scalar fields to matter and photons. Here, we present a detailed calculation of the chameleon afterglow rate underlying these constraints. The dependence of GammeV constraints on various assumptions in the calculation is studied. We discuss the GammeV-CHameleon Afterglow SEarch, a second-generation GammeV experiment, which will improve upon GammeV in several major ways. Using our calculation of the chameleon afterglow rate, we forecast model-independent constraints achievable by GammeV-CHameleon Afterglow SEarch. We then apply these constraints to a variety of chameleon models, including quartic chameleons and chameleon dark energy models. The new experiment will be able to probe a large region of parameter space that is beyond the reach of current tests, such as fifth force searches, constraints on the dimming of distant astrophysical objects, and bounds on the variation of the fine structure constant.

  10. Constraining chameleon field theories using the GammeV afterglow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, A.; /Chicago U., EFI /KICP, Chicago; Steffen, J.H.

    2009-11-01

The GammeV experiment has constrained the couplings of chameleon scalar fields to matter and photons. Here we present a detailed calculation of the chameleon afterglow rate underlying these constraints. The dependence of GammeV constraints on various assumptions in the calculation is studied. We discuss GammeV-CHASE, a second-generation GammeV experiment, which will improve upon GammeV in several major ways. Using our calculation of the chameleon afterglow rate, we forecast model-independent constraints achievable by GammeV-CHASE. We then apply these constraints to a variety of chameleon models, including quartic chameleons and chameleon dark energy models. The new experiment will be able to probe a large region of parameter space that is beyond the reach of current tests, such as fifth force searches, constraints on the dimming of distant astrophysical objects, and bounds on the variation of the fine structure constant.

  11. Constraining viscous dark energy models with the latest cosmological data

    NASA Astrophysics Data System (ADS)

    Wang, Deng; Yan, Yang-Jie; Meng, Xin-He

    2017-10-01

Based on the assumption that dark energy possessing bulk viscosity permeates the universe homogeneously and isotropically, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they deviate only very slightly from the standard cosmological model and can effectively alleviate the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck satellite. Interestingly, we conclude that a spatially flat universe is still supported by current data in our VDE model with cosmic curvature, and that the scale-invariant primordial power spectrum is strongly excluded, at least at the 5.5σ confidence level, in the three VDE models, consistent with the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios.

  12. Planck and the reionization of the universe

    NASA Astrophysics Data System (ADS)

    Crill, Brendan

    2016-03-01

    Planck is the third-generation satellite aimed at measuring the cosmic microwave background, a relic of the hot big bang. Planck's temperature and polarization maps of the millimeter-wave sky have constrained parameters of the standard lambda-CDM model of cosmology to incredible precision, and have provided constraints on inflation in the very early universe. Planck's all-sky survey of polarization in seven frequency bands can remove contamination from nearby Galactic emission and constrain the optical depth of the reionized Universe, giving insight into the properties of the earliest star formation. The final 2016 data release from Planck will include a refined optical depth measurement using the full sensitivity of both the High Frequency and Low Frequency instruments. I present the status of the reionization measurement and discuss future prospects for further measurements of the early Universe with the CMB from Planck and future space and suborbital platforms.

  13. Tunnelling in Dante's Inferno

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furuuchi, Kazuyuki; Sperling, Marcus, E-mail: kazuyuki.furuuchi@manipal.edu, E-mail: marcus.sperling@univie.ac.at

    2017-05-01

    We study quantum tunnelling in Dante's Inferno model of large field inflation. Such a tunnelling process, which will terminate inflation, becomes problematic if the tunnelling rate is rapid compared to the Hubble time scale at the time of inflation. Consequently, we constrain the parameter space of Dante's Inferno model by demanding a suppressed tunnelling rate during inflation. The constraints are derived and explicit numerical bounds are provided for representative examples. Our considerations are at the level of an effective field theory; hence, the presented constraints have to hold regardless of any UV completion.

  14. Effective theory of flavor for Minimal Mirror Twin Higgs

    DOE PAGES

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2017-10-03

We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ε^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ε'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ε'/ε, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ε'/ε, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. Lastly, in each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.

  15. Cost-optimized methods extending the solution space of lightweight spaceborne monolithic ZERODUR® mirrors to larger sizes

    NASA Astrophysics Data System (ADS)

    Leys, Antoine; Hull, Tony; Westerhoff, Thomas

    2015-09-01

We address the problem that larger spaceborne mirrors require greater sectional thickness to achieve a first eigenfrequency sufficient to be resilient to launch loads and to be stable during optical telescope assembly integration and test; this added thickness results in unacceptable added mass if we simply scale up solutions for smaller mirrors. Special features, like cathedral ribs, arches, chamfers, and a back side following the contour of the mirror face, have been considered for these studies. For computational efficiency, we have conducted detailed analysis on various configurations of an 800 mm hexagonal segment and of a 1.2-m mirror, in a manner that they are constrained by the same manufacturing parameters as a 4-m mirror would be. Furthermore, each model considered has also been constrained by cost-effective machining practice as defined in the SCHOTT Mainz factory. Analysis on variants of this 1.2-m mirror has shown a favorable configuration. We have then scaled this optimal configuration to 4-m aperture. We discuss resulting parameters of cost-optimized 4-m mirrors. We also discuss the advantages and disadvantages this analysis reveals of going to a cathedral rib architecture on 1-m class mirror substrates.

  16. Baryon acoustic oscillations in 2D: Modeling redshift-space power spectrum from perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taruya, Atsushi; Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa, Chiba 277-8568; Nishimichi, Takahiro

    2010-09-15

We present an improved prescription for the matter power spectrum in redshift space taking proper account of both nonlinear gravitational clustering and redshift distortion, which are of particular importance for accurately modeling baryon acoustic oscillations (BAOs). Contrary to the models of redshift distortion phenomenologically introduced but frequently used in the literature, the new model includes the corrections arising from the nonlinear coupling between the density and velocity fields associated with the two competing effects of redshift distortion, i.e., the Kaiser and Finger-of-God effects. Based on the improved treatment of perturbation theory for gravitational clustering, we compare our model predictions with the monopole and quadrupole power spectra of N-body simulations, and an excellent agreement is achieved over the scales of BAOs. Potential impacts on constraining dark energy and modified gravity from the redshift-space power spectrum are also investigated based on the Fisher-matrix formalism, particularly focusing on the measurements of the Hubble parameter, angular diameter distance, and growth rate for structure formation. We find that the existing phenomenological models of redshift distortion produce a systematic error on measurements of the angular diameter distance and Hubble parameter by 1%-2%, and of the growth-rate parameter by approx. 5%, which would become non-negligible for future galaxy surveys. Correctly modeling redshift distortion is thus essential, and the new prescription for the redshift-space power spectrum including the nonlinear corrections can be used as an accurate theoretical template for anisotropic BAOs.
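The phenomenological ingredients that the improved model supersedes can be written down compactly: a linear Kaiser boost times a Gaussian Finger-of-God damping, projected onto Legendre multipoles. A sketch with a toy linear spectrum (bias, growth rate, velocity dispersion, and the spectrum shape are all illustrative values, not the paper's):

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Phenomenological redshift-space model:
#   P_s(k, mu) = (b + f*mu^2)^2 * P_lin(k) * exp(-(k*mu*sigma_v)^2),
# projected onto multipoles P_ell(k) = (2l+1)/2 * Int_{-1}^{1} P_s L_l(mu) dmu.
def trap_int(y, x):
    """Trapezoidal integration along the last axis."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

def multipole(ell, k, P_lin, b=2.0, f=0.5, sigma_v=0.0, n_mu=2001):
    mu = np.linspace(-1.0, 1.0, n_mu)
    c = np.zeros(ell + 1); c[ell] = 1.0
    L_ell = legval(mu, c)                       # Legendre polynomial P_ell(mu)
    Ps = ((b + f * mu[None, :] ** 2) ** 2 * P_lin[:, None]
          * np.exp(-(k[:, None] * mu[None, :] * sigma_v) ** 2))
    return (2 * ell + 1) / 2 * trap_int(Ps * L_ell[None, :], mu)

k = np.linspace(0.01, 0.3, 30)
P_lin = k ** -1.5                               # toy linear power spectrum
P0 = multipole(0, k, P_lin)                     # monopole
```

At sigma_v = 0 the projection reduces to the closed-form Kaiser multipoles (monopole factor b^2 + 2bf/3 + f^2/5, quadrupole factor 4bf/3 + 4f^2/7), which gives a direct check of the numerics.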

  17. Constrained optimization of image restoration filters

    NASA Technical Reports Server (NTRS)

    Riemer, T. E.; Mcgillem, C. D.

    1973-01-01

    A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.

  18. Detecting kinematic boundary surfaces in phase space: particle mass measurements in SUSY-like events

    DOE PAGES

    Debnath, Dipsikha; Gainer, James S.; Kilic, Can; ...

    2017-06-19

We critically examine the classic endpoint method for particle mass determination, focusing on difficult corners of parameter space, where some of the measurements are not independent, while others are adversely affected by the experimental resolution. In such scenarios, mass differences can be measured relatively well, but the overall mass scale remains poorly constrained. Using the example of the standard SUSY decay chain q̃ → χ̃2^0 → ℓ̃ → χ̃1^0, we demonstrate that sensitivity to the remaining mass scale parameter can be recovered by measuring the two-dimensional kinematical boundary in the relevant three-dimensional phase space of invariant masses squared. We develop an algorithm for detecting this boundary, which uses the geometric properties of the Voronoi tessellation of the data, and in particular, the relative standard deviation (RSD) of the volumes of the neighbors for each Voronoi cell in the tessellation. We propose a new observable, Σ̄, which is the average RSD per unit area, calculated over the hypothesized boundary. We show that the location of the Σ̄ maximum correlates very well with the true values of the new particle masses. Our approach represents the natural extension of the one-dimensional kinematic endpoint method to the relevant three dimensions of invariant mass phase space.

  19. Detecting kinematic boundary surfaces in phase space: particle mass measurements in SUSY-like events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Debnath, Dipsikha; Gainer, James S.; Kilic, Can

We critically examine the classic endpoint method for particle mass determination, focusing on difficult corners of parameter space, where some of the measurements are not independent, while others are adversely affected by the experimental resolution. In such scenarios, mass differences can be measured relatively well, but the overall mass scale remains poorly constrained. Using the example of the standard SUSY decay chain q̃ → χ̃₂⁰ → ℓ̃ → χ̃₁⁰, we demonstrate that sensitivity to the remaining mass scale parameter can be recovered by measuring the two-dimensional kinematical boundary in the relevant three-dimensional phase space of invariant masses squared. We develop an algorithm for detecting this boundary, which uses the geometric properties of the Voronoi tessellation of the data, and in particular, the relative standard deviation (RSD) of the volumes of the neighbors for each Voronoi cell in the tessellation. We propose a new observable, Σ̄, which is the average RSD per unit area, calculated over the hypothesized boundary. We show that the location of the Σ̄ maximum correlates very well with the true values of the new particle masses. Our approach represents the natural extension of the one-dimensional kinematic endpoint method to the relevant three dimensions of invariant mass phase space.

  20. Detecting kinematic boundary surfaces in phase space: particle mass measurements in SUSY-like events

    NASA Astrophysics Data System (ADS)

    Debnath, Dipsikha; Gainer, James S.; Kilic, Can; Kim, Doojin; Matchev, Konstantin T.; Yang, Yuan-Pao

    2017-06-01

We critically examine the classic endpoint method for particle mass determination, focusing on difficult corners of parameter space, where some of the measurements are not independent, while others are adversely affected by the experimental resolution. In such scenarios, mass differences can be measured relatively well, but the overall mass scale remains poorly constrained. Using the example of the standard SUSY decay chain q̃ → χ̃₂⁰ → ℓ̃ → χ̃₁⁰, we demonstrate that sensitivity to the remaining mass scale parameter can be recovered by measuring the two-dimensional kinematical boundary in the relevant three-dimensional phase space of invariant masses squared. We develop an algorithm for detecting this boundary, which uses the geometric properties of the Voronoi tessellation of the data, and in particular, the relative standard deviation (RSD) of the volumes of the neighbors for each Voronoi cell in the tessellation. We propose a new observable, Σ̄, which is the average RSD per unit area, calculated over the hypothesized boundary. We show that the location of the Σ̄ maximum correlates very well with the true values of the new particle masses. Our approach represents the natural extension of the one-dimensional kinematic endpoint method to the relevant three dimensions of invariant mass phase space.

  1. High-Contrast Near-Infrared Imaging Polarimetry of the Protoplanetary Disk around RY Tau

    NASA Technical Reports Server (NTRS)

    Takami, Michihiro; Karr, Jennifer L.; Hashimoto, Jun; Kim, Hyosun; Wisenewski, John; Henning, Thomas; Grady, Carol; Kandori, Ryo; Hodapp, Klaus W.; Kudo, Tomoyuki

    2013-01-01

    We present near-infrared coronagraphic imaging polarimetry of RY Tau. The scattered light in the circumstellar environment was imaged at H-band at a high resolution (approx. 0.05) for the first time, using Subaru-HiCIAO. The observed polarized intensity (PI) distribution shows a butterfly-like distribution of bright emission with an angular scale similar to the disk observed at millimeter wavelengths. This distribution is offset toward the blueshifted jet, indicating the presence of a geometrically thick disk or a remnant envelope, and therefore the earliest stage of the Class II evolutionary phase. We perform comparisons between the observed PI distribution and disk models with (1) a full radiative transfer code, using the spectral energy distribution (SED) to constrain the disk parameters; and (2) monochromatic simulations of scattered light which explore a wide range of parameter space to constrain the disk and dust parameters. We show that these models cannot consistently explain the observed PI distribution, SED, and the viewing angle inferred from millimeter interferometry. We suggest that the scattered light in the near-infrared is associated with an optically thin and geometrically thick layer above the disk surface, with the surface responsible for the infrared SED. Half of the scattered light and thermal radiation in this layer illuminates the disk surface, and this process may significantly affect the thermal structure of the disk.

  2. An object correlation and maneuver detection approach for space surveillance

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Hu, Wei-Dong; Xin, Qin; Du, Xiao-Yong

    2012-10-01

    Object correlation and maneuver detection are persistent problems in space surveillance and maintenance of a space object catalog. We integrate these two problems into one interrelated problem and consider them simultaneously under a scenario where space objects perform only a single in-track orbital maneuver during the time intervals between observations. We mathematically formulate this integrated scenario as a maximum a posteriori (MAP) estimation. In this work, we propose a novel approach to solve the MAP estimation. More precisely, the corresponding posterior probability of an orbital maneuver and a joint association event is approximated by the Joint Probabilistic Data Association (JPDA) algorithm. Subsequently, the maneuvering parameters are estimated by iteratively solving a constrained nonlinear least-squares problem based on the second-order cone programming (SOCP) algorithm. The desired solution is derived according to the MAP criteria. The performance and advantages of the proposed approach are shown by both theoretical analysis and simulation results. We hope that our work will stimulate future work on space surveillance and maintenance of a space object catalog.

  3. System Analysis and Evaluation of Greenhouse Modules within Moon/Mars Habitats

    NASA Astrophysics Data System (ADS)

    Prasad Nagendra, Narayan; Schubert, Daniel; Zabel, Paul

    2012-07-01

    Long-term settlement on other planets of the solar system has long fascinated mankind. Some researchers contemplate that planetary settlement is a necessity for the survival of the human race over millions of years. The generation of food for self-sufficiency in space or on planetary bases is a vital part of this vision of space habitation. The amount of mass that can be transported in deep space missions is constrained by launcher capability and cost. The space community has proposed and designed various greenhouse modules to cater to human culinary requirements and act as part of life support systems. A survey of the different greenhouse space concepts and terrestrial test facilities is presented, drawing up a list of measurable factors (e.g. growth area, power consumption, human activity index, etc.) for the evaluation of greenhouse modules. These factors include tangible and intangible parameters that have been used in the development of an evaluation method for greenhouse concepts as a subsystem of planetary habitats at the DLR Institute of Space Systems, Bremen.

  4. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    NASA Astrophysics Data System (ADS)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

    Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to a large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in these parameters is restored if we can employ the prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to a larger wave number in the nonlinear regime can be included.

  5. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

    We propose a dimension reduction algorithm for the receiver of the downlink of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays of multiple elements. To estimate the high-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with the multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by using path-wise maximum ratio combining. Computer simulations show that, compared to traditional techniques such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.

  6. Supernova 1987A Constraints on Sub-GeV Dark Sectors, Millicharged Particles, the QCD Axion, and an Axion-like Particle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Jae Hyeok; Essig, Rouven; McDermott, Samuel D.

    We consider the constraints from Supernova 1987A on particles with small couplings to the Standard Model. We discuss a model with a fermion coupled to a dark photon, with various mass relations in the dark sector; millicharged particles; dark-sector fermions with inelastic transitions; the hadronic QCD axion; and an axion-like particle that couples to Standard Model fermions with couplings proportional to their mass. In the fermion cases, we develop a new diagnostic for assessing when such a particle is trapped at large mixing angles. Our bounds for a fermion coupled to a dark photon constrain small couplings and masses < 200 MeV, and do not decouple for low fermion masses. They exclude parameter space that is otherwise unconstrained by existing accelerator-based and direct-detection searches. In addition, our bounds are complementary to proposed laboratory searches for sub-GeV dark matter, and do not constrain several "thermal" benchmark-model targets. For a millicharged particle, we exclude charges between 10^(-9) and a few times 10^(-6) in units of the electron charge; this excludes parameter space to higher millicharges and masses than previous bounds. For the QCD axion and an axion-like particle, we apply several updated nuclear physics calculations and include the energy dependence of the optical depth to accurately account for energy loss at large couplings. We rule out a hadronic axion of mass between 0.1 and a few hundred eV, or equivalently bound the PQ scale between a few times 10^4 and 10^8 GeV, closing the hadronic axion window. For an axion-like particle, our bounds disfavor decay constants between a few times 10^5 GeV up to a few times 10^8 GeV. In all cases, our bounds differ from previous work by more than an order of magnitude across the entire parameter space. We also provide estimated systematic errors due to the uncertainties of the progenitor.

  7. A New Family of Solvable Pearson-Dirichlet Random Walks

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2011-07-01

    An n-step Pearson-Gamma random walk in ℝᵈ starts at the origin and consists of n independent steps with gamma distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q > 0. Constrained random walks of n steps in ℝᵈ are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d₀ and any n ≥ 2 when q is either q = d/2 − 1 (d₀ = 3) or q = d − 1 (d₀ = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, then the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + ⌊n/2⌋ endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, both having the same n and the same q = d, to obtain a closed-form expression of the endpoint density. The latter is a weighted mixture of 1 + ⌊n/2⌋ densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors which depend both on d and n and Bessel numbers independent of d.
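The constrained-walk construction described above (gamma step lengths normalized to total length 1, hence Dirichlet-distributed, combined with independent uniform orientations) is straightforward to simulate. A minimal Monte Carlo sketch, with all names hypothetical:

```python
# Sketch of a Pearson-Dirichlet random walk: n steps in R^d, Dirichlet(q,...,q)
# step lengths (total length 1), uniform step directions.
import numpy as np

rng = np.random.default_rng(1)

def pearson_dirichlet_endpoints(n, d, q, samples=10000):
    # Gamma(q) lengths normalized to sum to 1 are Dirichlet(q,...,q).
    lengths = rng.gamma(q, size=(samples, n))
    lengths /= lengths.sum(axis=1, keepdims=True)
    # Normalized Gaussian vectors are uniform on the unit sphere in R^d.
    dirs = rng.normal(size=(samples, n, d))
    dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)
    # Endpoint = sum of length-scaled direction vectors; shape (samples, d).
    return (lengths[..., None] * dirs).sum(axis=1)

end = pearson_dirichlet_endpoints(n=3, d=2, q=2.0)
r = np.linalg.norm(end, axis=1)  # endpoint distances, all <= 1 by construction
```

A histogram of `r` would approximate the endpoint density whose closed form the paper derives for q = d.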

  8. Top-philic dark matter within and beyond the WIMP paradigm

    NASA Astrophysics Data System (ADS)

    Garny, Mathias; Heisig, Jan; Hufnagel, Marco; Lülf, Benedikt

    2018-04-01

    We present a comprehensive analysis of top-philic Majorana dark matter that interacts via a colored t-channel mediator. Despite the simplicity of the model, which introduces three parameters only, it provides an extremely rich phenomenology allowing us to accommodate the relic density for a large range of coupling strengths spanning over 6 orders of magnitude. This model features all "exceptional" mechanisms for dark matter freeze-out, including the recently discovered conversion-driven freeze-out mode, with interesting signatures of long-lived colored particles at colliders. We constrain the cosmologically allowed parameter space with current experimental limits from direct, indirect and collider searches, with special emphasis on light dark matter below the top mass. In particular, we explore the interplay between limits from Xenon1T, Fermi-LAT and AMS-02 as well as limits from stop, monojet and Higgs invisible decay searches at the LHC. We find that several blind spots for light dark matter evade current constraints. The region in parameter space where the relic density is set by the mechanism of conversion-driven freeze-out can be conclusively tested by R-hadron searches at the LHC with 300 fb⁻¹.

  9. Constraints of beyond Standard Model parameters from the study of neutrinoless double beta decay

    NASA Astrophysics Data System (ADS)

    Stoica, Sabin

    2017-12-01

    Neutrinoless double beta (0νββ) decay is a beyond Standard Model (BSM) process whose discovery would clarify whether lepton number is conserved, decide on the character of neutrinos (are they Dirac or Majorana particles?) and give a hint on the scale of their absolute masses. Also, from the study of 0νββ one can constrain other BSM parameters related to the different scenarios by which this process can occur. In this paper I first give a short review of the current challenges in precisely calculating the phase space factors and nuclear matrix elements entering the 0νββ decay lifetimes, and I report our group's results for these quantities. Then, taking advantage of the most recent experimental limits on 0νββ lifetimes, I present new constraints on the neutrino mass parameters associated with different mechanisms of occurrence of the 0νββ decay mode.

  10. Two algorithms for neural-network design and training with application to channel equalization.

    PubMed

    Sweatman, C Z; Mulgrew, B; Gibson, G J

    1998-01-01

    We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.

  11. Statistics of equivalent width data and new oscillator strengths for Si II, Fe II, and Mn II. [in interstellar medium

    NASA Technical Reports Server (NTRS)

    Van Buren, Dave

    1986-01-01

    Equivalent width data from Copernicus and IUE appear to have an exponential, rather than a Gaussian distribution of errors. This is probably because there is one dominant source of error: the assignment of the background continuum shape. The maximum likelihood method of parameter estimation is presented for the case of exponential statistics, in enough generality for application to many problems. The method is applied to global fitting of Si II, Fe II, and Mn II oscillator strengths and interstellar gas parameters along many lines of sight. The new values agree in general with previous determinations but are usually much more tightly constrained. Finally, it is shown that care must be taken in deriving acceptable regions of parameter space because the probability contours are not generally ellipses whose axes are parallel to the coordinate axes.
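The abstract's central point, that exponentially distributed errors change the estimator, can be made concrete: for double-exponential (Laplace) errors, maximizing the likelihood amounts to minimizing the sum of absolute residuals rather than their squares. A minimal sketch on synthetic data (the straight-line model and all numbers here are illustrative, not from the paper):

```python
# Maximum likelihood under Laplace (double-exponential) error statistics:
# the negative log-likelihood is, up to constants, the sum of |residuals|,
# so the ML fit is a least-absolute-deviations fit. Illustrative sketch only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.5 + rng.laplace(scale=0.1, size=x.size)  # synthetic "data"

def neg_log_like(p):
    # Laplace log-likelihood ~ -sum(|y - model|)/b; b drops out of the argmax.
    slope, intercept = p
    return np.abs(y - (slope * x + intercept)).sum()

fit = minimize(neg_log_like, x0=[1.0, 0.0], method="Nelder-Mead")
slope_ml, intercept_ml = fit.x
```

Compared with least squares, this estimator downweights outliers, which matters when, as argued in the abstract, a single dominant error source (continuum placement) produces exponential rather than Gaussian residuals.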

  12. An opinion-driven behavioral dynamics model for addictive behaviors

    NASA Astrophysics Data System (ADS)

    Moore, Thomas W.; Finley, Patrick D.; Apelberg, Benjamin J.; Ambrose, Bridget K.; Brodsky, Nancy S.; Brown, Theresa J.; Husten, Corinne; Glass, Robert J.

    2015-04-01

    We present a model of behavioral dynamics that combines a social network-based opinion dynamics model with behavioral mapping. The behavioral component is discrete and history-dependent to represent situations in which an individual's behavior is initially driven by opinion and later constrained by physiological or psychological conditions that serve to maintain the behavior. Individuals are modeled as nodes in a social network connected by directed edges. Parameter sweeps illustrate model behavior and the effects of individual parameters and parameter interactions on model results. Mapping a continuous opinion variable into a discrete behavioral space induces clustering on directed networks. Clusters provide targets of opportunity for influencing the network state; however, the smaller the network the greater the stochasticity and potential variability in outcomes. This has implications both for behaviors that are influenced by close relationships versus those influenced by societal norms and for the effectiveness of strategies for influencing those behaviors.

  13. Dilepton production from the quark-gluon plasma using (3 +1 )-dimensional anisotropic dissipative hydrodynamics

    NASA Astrophysics Data System (ADS)

    Ryblewski, Radoslaw; Strickland, Michael

    2015-07-01

    We compute dilepton production from the deconfined phase of the quark-gluon plasma using leading-order (3 +1 )-dimensional anisotropic hydrodynamics. The anisotropic hydrodynamics equations employed describe the full spatiotemporal evolution of the transverse temperature, spheroidal momentum-space anisotropy parameter, and the associated three-dimensional collective flow of the matter. The momentum-space anisotropy is also taken into account in the computation of the dilepton production rate, allowing for a self-consistent description of dilepton production from the quark-gluon plasma. For our final results, we present predictions for high-energy dilepton yields as a function of invariant mass, transverse momentum, and pair rapidity. We demonstrate that high-energy dilepton production is extremely sensitive to the assumed level of initial momentum-space anisotropy of the quark-gluon plasma. As a result, it may be possible to experimentally constrain the early-time momentum-space anisotropy of the quark-gluon plasma generated in relativistic heavy-ion collisions using high-energy dilepton yields.

  14. Asteroseismic Diagram for Subgiants and Red Giants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gai, Ning; Tang, Yanke; Yu, Peng

    Asteroseismology is a powerful tool for constraining stellar parameters. NASA's Kepler mission is providing individual eigenfrequencies for a huge number of stars, including thousands of red giants. Besides the frequencies of acoustic modes, an important breakthrough of the Kepler mission is the detection of nonradial gravity-dominated mixed-mode oscillations in red giants. Unlike pure acoustic modes, mixed modes probe deeply into the interior of stars, allowing the stellar core properties and evolution of stars to be derived. In this work, using the gravity-mode period spacing and the large frequency separation, we construct the ΔΠ₁–Δν asteroseismic diagram from models of subgiants and red giants with various masses and metallicities. The ΔΠ₁–Δν relationship is able to constrain the ages and masses of the subgiants. Meanwhile, for red giants with masses above 1.5 M⊙, the ΔΠ₁–Δν asteroseismic diagram can also work well to constrain the stellar age and mass. Additionally, we calculate the relative "isochrones" τ, which indicate similar evolution states especially for similar mass stars, on the ΔΠ₁–Δν diagram.

  15. Designing a space-based galaxy redshift survey to probe dark energy

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Percival, Will; Cimatti, Andrea; Mukherjee, Pia; Guzzo, Luigi; Baugh, Carlton M.; Carbone, Carmelita; Franzetti, Paolo; Garilli, Bianca; Geach, James E.; Lacey, Cedric G.; Majerotto, Elisabetta; Orsi, Alvaro; Rosati, Piero; Samushia, Lado; Zamorani, Giovanni

    2010-12-01

    A space-based galaxy redshift survey would have enormous power in constraining dark energy and testing general relativity, provided that its parameters are suitably optimized. We study viable space-based galaxy redshift surveys, exploring the dependence of the Dark Energy Task Force (DETF) figure-of-merit (FoM) on redshift accuracy, redshift range, survey area, target selection and forecast method. Fitting formulae are provided for convenience. We also consider the dependence on the information used: the full galaxy power spectrum P(k), P(k) marginalized over its shape, or just the Baryon Acoustic Oscillations (BAO). We find that the inclusion of growth rate information (extracted using redshift space distortion and galaxy clustering amplitude measurements) leads to a factor of ~3 improvement in the FoM, assuming general relativity is not modified. This inclusion partially compensates for the loss of information when only the BAO are used to give geometrical constraints, rather than using the full P(k) as a standard ruler. We find that a space-based galaxy redshift survey covering ~20 000 deg² with σz/(1 + z) <= 0.001 exploits a redshift range that is only easily accessible from space, extends to sufficiently low redshifts to allow a vast 3D map of the universe using a single tracer population, and overlaps with ground-based surveys to enable robust modelling of systematic effects. We argue that these parameters are close to their optimal values given current instrumental and practical constraints.
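For reference, the DETF figure of merit optimized in this record is, up to a convention-dependent constant, the inverse area of the (w0, wa) error ellipse, i.e. 1/√(det C) for the marginalized 2×2 covariance C. A minimal numerical sketch with a purely hypothetical Fisher matrix (the numbers carry no physical meaning):

```python
# DETF FoM ~ 1/sqrt(det C) for the marginalized (w0, wa) covariance C = F^-1.
# The Fisher matrix F below is an illustrative placeholder, not a forecast.
import numpy as np

F = np.array([[40.0, 12.0],
              [12.0, 8.0]])           # hypothetical marginalized Fisher matrix
C = np.linalg.inv(F)                  # (w0, wa) covariance
fom = 1.0 / np.sqrt(np.linalg.det(C))  # equal to sqrt(det F)
```

The FoM improvements quoted in the abstract (e.g. the factor of ~3 from growth rate information) correspond to shrinking this ellipse area by that factor.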

  16. Constraining the near-core rotation of the γ Doradus star 43 Cygni using BRITE-Constellation data

    NASA Astrophysics Data System (ADS)

    Zwintz, K.; Van Reeth, T.; Tkachenko, A.; Gössl, S.; Pigulski, A.; Kuschnig, R.; Handler, G.; Moffat, A. F. J.; Popowicz, A.; Wade, G.; Weiss, W. W.

    2017-12-01

    Context. Photometric time series of the γ Doradus star 43 Cyg obtained with the BRITE-Constellation nano-satellites allow us to study its pulsational properties in detail and to constrain its interior structure. Aims: We aim to find a g-mode period-spacing pattern that allows us to determine the near-core rotation rate of 43 Cyg and redetermine the star's fundamental atmospheric parameters and chemical composition. Methods: We conducted a frequency analysis using the 156-day long data set obtained with the BRITE-Toronto satellite and employed a suite of MESA/GYRE models to derive the mode identification, asymptotic period-spacing, and near-core rotation rate. We also used high-resolution spectroscopic data with high signal-to-noise ratio obtained at the 1.2 m Mercator telescope with the HERMES spectrograph to redetermine the fundamental atmospheric parameters and chemical composition of 43 Cyg using the software Spectroscopy Made Easy (SME). Results: We detected 43 intrinsic pulsation frequencies and identified 18 of them to be part of a period-spacing pattern consisting of prograde dipole modes with an asymptotic period-spacing ΔΠ_{ℓ=1} of 2970^{+700}_{-570} s. The near-core rotation rate was determined to be f_rot = 0.56^{+0.12}_{-0.14} d⁻¹. The atmosphere of 43 Cyg shows solar chemical composition at an effective temperature, Teff, of 7150 ± 150 K, a log g of 4.2 ± 0.6 dex, and a projected rotational velocity, v sin i, of 44 ± 4 km s⁻¹. Conclusions: The morphology of the observed period-spacing patterns shows indications of a significant chemical gradient in the stellar interior.
Based on data collected by the BRITE Constellation satellite mission, designed, built, launched, operated and supported by the Austrian Research Promotion Agency (FFG), the University of Vienna, the Technical University of Graz, the Canadian Space Agency (CSA), the University of Toronto Institute for Aerospace Studies (UTIAS), the Foundation for Polish Science & Technology (FNiTP MNiSW), and National Science Centre (NCN).The light curves (in tabular form) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A103
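For context, the asymptotic period spacing ΔΠ_ℓ referred to in this record follows the standard asymptotic expression for high-order g modes, where N is the Brunt–Väisälä frequency and the integral runs over the g-mode cavity (r₁, r₂):

```latex
\Delta\Pi_\ell = \frac{2\pi^2}{\sqrt{\ell(\ell+1)}}
\left( \int_{r_1}^{r_2} N \,\frac{\mathrm{d}r}{r} \right)^{-1}
```

Deviations from this uniform spacing, and the tilt of the pattern with rotation, are what allow the near-core rotation rate and chemical gradients to be inferred.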

  17. New method to design stellarator coils without the winding surface

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-01-01

    Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized, which includes both physical requirements and engineering constraints, is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Applications to a simple stellarator configuration, W7-X, and LHD vacuum fields are presented.

  18. Constraining the loop quantum gravity parameter space from phenomenology

    NASA Astrophysics Data System (ADS)

    Brahma, Suddhasattwa; Ronco, Michele

    2018-03-01

    Development of quantum gravity theories rarely takes inputs from experimental physics. In this letter, we take a small step towards correcting this by establishing a paradigm for incorporating putative quantum corrections, arising from canonical quantum gravity (QG) theories, in deriving falsifiable modified dispersion relations (MDRs) for particles on a deformed Minkowski space-time. This allows us to differentiate and, hopefully, pick between several quantization choices via testable, state-of-the-art phenomenological predictions. Although a few explicit examples from loop quantum gravity (LQG) (such as the regularization scheme used or the representation of the gauge group) are shown here to establish the claim, our framework is more general and is capable of addressing other quantization ambiguities within LQG and also those arising from other similar QG approaches.

  19. Explaining postseismic and aseismic transient deformation in subduction zones with rate and state friction modeling constrained by lab and geodetic observations

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Dedontney, N. L.; Rice, J. R.

    2007-12-01

    Rate and state friction, as applied to modeling subduction earthquake sequences, routinely predicts postseismic slip. It also predicts spontaneous aseismic slip transients, at least when pore pressure p is highly elevated near and downdip from the stability transition [Liu and Rice, 2007]. Here we address how to make such postseismic and transient predictions more fully compatible with geophysical observations. For example, lab observations can determine the a, b parameters and state evolution slip L of rate and state friction as functions of lithology and temperature and, with the aid of a structural and thermal model of the subduction zone, as functions of downdip distance. Geodetic observations constrain interseismic, postseismic and aseismic transient deformations, which are controlled in the modeling by the distributions of aσ̄ and bσ̄ (parameters which also partly control the seismic rupture phase), where σ̄ = σ − p. Elevated p, controlled by tectonic compression and dehydration, may be constrained by petrologic and seismic observations. The amount of deformation and downdip extent of the slipping zone associated with the spontaneous quasi-periodic transients, as thus far modeled [Liu and Rice, 2007], is generally smaller than that observed during episodes of slow slip events in the northern Cascadia and SW Japan subduction zones. However, the modeling was based on lab data for granite gouge under hydrothermal conditions because data is most complete for that case. We here report modeling based on lab data on dry granite gouge [Stesky, 1975; Lockner et al., 1986], involving no or lessened chemical interaction with water and hence being a possibly closer analog to dehydrated oceanic crust, and limited data on gabbro gouge [He et al., 2007], an expected lithology.
Both data sets show a much less rapid increase of a-b with temperature above the stability transition (~350 °C) than does wet granite gouge; a-b increases to ~0.08 for wet granite at 600 °C, but to only ~0.01 in the dry granite and gabbro cases. We find that the lessened high-T a-b does, for the same σ̄, modestly extend the transient slip episodes further downdip, although a majority of slip is still contributed near and in the updip rate-weakening region. However, postseismic slip, for the same σ̄, propagates much further downdip into the rate-strengthening region. To better constrain the downdip distribution of (a-b)σ̄, and possibly aσ̄ and L, we focus on the geodetically constrained [Hutton et al., 2001] space-time distribution of postseismic slip for the 1995 Mw = 8.0 Colima-Jalisco earthquake. This is a similarly shallow-dipping subduction zone with a thermal profile [Currie et al., 2001] comparable to those that have thus far been shown to exhibit aseismic transients and non-volcanic tremor [Peacock et al., 2002]. We extrapolate the modeled 2-D postseismic slip, following a thrust earthquake with coseismic slip similar to the 1995 event, to a spatial-temporal 3-D distribution. Surface deformation due to such slip on the thrust fault in an elastic half space is calculated and compared to that observed at western Mexico GPS stations, to constrain the above depth-variable model parameters.
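The role of the effective normal stress σ̄ = σ - p and of the sign of a - b described above can be illustrated with the standard steady-state limit of rate and state friction. This is a minimal sketch, assuming the usual logarithmic steady-state law; the numerical values of a, b, σ, p, and the reference velocity are illustrative, not the cited lab data:

```python
import math

def steady_state_strength(v, sigma, p, a, b, f0=0.6, v_ref=1e-6):
    """Steady-state frictional strength tau_ss = sigma_bar * f_ss with
    f_ss = f0 + (a - b) * ln(v / v_ref); all parameter values illustrative."""
    sigma_eff = sigma - p          # effective normal stress, sigma_bar = sigma - p
    return sigma_eff * (f0 + (a - b) * math.log(v / v_ref))

# Velocity weakening (a - b < 0): steady-state strength drops as slip rate
# rises, while elevated pore pressure p lowers the overall strength scale.
tau_slow = steady_state_strength(v=1e-9, sigma=50e6, p=45e6, a=0.010, b=0.014)
tau_fast = steady_state_strength(v=1e-3, sigma=50e6, p=45e6, a=0.010, b=0.014)
assert tau_fast < tau_slow
```

In this picture a transition to a - b > 0 downdip (as in the dry granite and gabbro data) makes the deeper fault rate strengthening, which is what allows postseismic slip to propagate there.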

  20. Constraining the optical potential in the search for η-mesic 4He

    NASA Astrophysics Data System (ADS)

    Skurzok, M.; Moskal, P.; Kelkar, N. G.; Hirenzaki, S.; Nagahiro, H.; Ikeno, N.

    2018-07-01

A consistent description of the dd → 4Heη and dd → (4Heη)bound → X cross sections was recently proposed, with a broad range of real (V0) and imaginary (W0) η-4He optical potential parameters leading to good agreement with the dd → 4Heη data. Here we compare the predictions of the model below the η production threshold with the WASA-at-COSY excitation functions for the dd → 3HeNπ reactions to put stronger constraints on (V0, W0). The allowed parameter space (with |V0| ≲ 60 MeV and |W0| ≲ 7 MeV estimated at the 90% CL) excludes most optical model predictions of η-4He nuclei except for some loosely bound narrow states.

  1. Missing Title

    NASA Astrophysics Data System (ADS)

    Cook, T. A.; Chakrabarti, S.; Bifano, T. G.; Lane, B.; Levine, B. M.; Shao, M.

    2004-05-01

The study of extrasolar planets is one of the most exciting research endeavors of modern astrophysics. While the list of known planets continues to grow, no direct image of any extrasolar planet has been obtained to date. Ground-breaking radial velocity measurements have identified many potential targets, but other measurements are needed to obtain the physical parameters of the extrasolar planets. For example, for most extrasolar giant planets we only know the minimum projected mass (M sin i). Even a single image of one extrasolar planet would fully determine its orbital parameters and thus its true mass. A single image would also provide albedo information, which would begin to constrain its atmospheric properties. This is the objective of PICTURE, a low-cost space mission specifically designed to obtain the first direct image of extrasolar giant planets.

  2. Methodology for comparing worldwide performance of diverse weight-constrained high energy laser systems

    NASA Astrophysics Data System (ADS)

    Bartell, Richard J.; Perram, Glen P.; Fiorino, Steven T.; Long, Scott N.; Houle, Marken J.; Rice, Christopher A.; Manning, Zachary P.; Bunch, Dustin W.; Krizo, Matthew J.; Gravley, Liesebet E.

    2005-06-01

The Air Force Institute of Technology's Center for Directed Energy has developed a software model, the High Energy Laser End-to-End Operational Simulation (HELEEOS), under the sponsorship of the High Energy Laser Joint Technology Office (JTO), to facilitate worldwide comparisons of the expected performance of diverse weight-constrained high energy laser system types across a broad range of engagement scenarios. HELEEOS has been designed to meet the JTO's goals of supporting a broad range of analyses applicable to the operational requirements of all the military services, constraining weapon effectiveness through accurate engineering performance assessments that allow its use as an investment strategy tool, and establishing trust among military leaders. HELEEOS is anchored to respected wave optics codes, and all significant degradation effects, including thermal blooming and optical turbulence, are represented in the model. The model features operationally oriented performance metrics, e.g., dwell time required to achieve a prescribed probability of kill, and effective range. Key features of HELEEOS include estimation of the level of uncertainty in the calculated Pk and generation of interactive nomographs that allow the user to further explore a desired parameter space. Worldwide analyses are enabled at five wavelengths via recently available databases capturing climatological, seasonal, diurnal, and geographical spatial-temporal variability in atmospheric parameters, including molecular and aerosol absorption and scattering profiles and optical turbulence strength. Examples are provided of the impact of uncertainty in weight-power relationships, coupled with operating condition variability, on performance comparisons between chemical and solid state lasers.

  3. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2011-12-01

The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced into an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and to the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation had a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess strategies for UQ and parameter optimization at both global and regional scales.
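The MVFSA algorithm itself is not spelled out in the abstract; the following is a minimal, single-chain simulated-annealing sketch of the general idea, with a toy quadratic error standing in for the model skill score (the cooling schedule, step size, and error function are all assumptions, not the MVFSA specification):

```python
import math, random

def anneal(error, bounds, n_iter=2000, t0=1.0, seed=1):
    """Minimal simulated-annealing sampler over a box-bounded parameter
    space.  `error` plays the role of the model skill score; MVFSA's
    multiple chains and its specific cooling schedule are not reproduced."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    best, best_err = list(x), error(x)
    cur_err = best_err
    for i in range(1, n_iter + 1):
        t = t0 / i                                   # fast (1/i) cooling
        cand = [min(hi, max(lo, xi + rng.gauss(0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]  # clamped Gaussian step
        e = error(cand)
        if e < cur_err or rng.random() < math.exp(-(e - cur_err) / t):
            x, cur_err = cand, e                     # Metropolis acceptance
            if e < best_err:
                best, best_err = list(cand), e
    return best, best_err

# Toy stand-in for a precipitation-bias skill score, optimum at (0.3, 0.7).
err = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
p_opt, e_opt = anneal(err, bounds=[(0.0, 1.0), (0.0, 1.0)])
assert e_opt < 0.05
```

In the actual study each error evaluation would wrap a full WRF run, which is what motivates fast-cooling, sample-efficient variants such as MVFSA.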

  4. Uncertainty Quantification and Parameter Tuning: A Case Study of Convective Parameterization Scheme in the WRF Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.

    2012-04-01

The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced into an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and to the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation had a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess strategies for UQ and parameter optimization at both global and regional scales.

  5. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2012-03-01

The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced into an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and to the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation had a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess strategies for UQ and parameter optimization at both global and regional scales.

  6. Search for Muonic Dark Forces at BABAR

    NASA Astrophysics Data System (ADS)

    Godang, Romulus

    2017-04-01

Many models of physics beyond the Standard Model predict the existence of light Higgs states, dark photons, and new gauge bosons mediating interactions between dark sectors and the Standard Model. Using the full data sample collected with the BABAR detector at the PEP-II e+e- collider, we report searches for a light non-Standard Model Higgs boson, a dark photon, and a new muonic dark force mediated by a gauge boson (Z') coupling only to the second and third lepton families. Our results significantly improve upon the current bounds and further constrain the remaining region of the allowed parameter space.

  7. Optimal control of first order distributed systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Johnson, T. L.

    1972-01-01

The problem of characterizing optimal controls for a class of distributed-parameter systems is considered. The system dynamics are characterized mathematically by a finite number of coupled partial differential equations involving first-order time and space derivatives of the state variables, which are constrained at the boundary by a finite number of algebraic relations. Multiple control inputs, extending over the entire spatial region occupied by the system ("distributed controls"), are to be designed so that the response of the system is optimal. A major example involving boundary control of an unstable low-density plasma is developed from physical laws.

  8. Trajectory Design Considerations for Exploration Mission 1

    NASA Technical Reports Server (NTRS)

    Dawn, Timothy F.; Gutkowski, Jeffrey P.; Batcha, Amelia L.

    2017-01-01

Exploration Mission 1 (EM-1) will be the first mission to send an uncrewed Orion vehicle to cislunar space in 2018, targeted to a Distant Retrograde Orbit (DRO). Analysis of EM-1 DRO mission opportunities in 2018 helps characterize mission parameters that are of interest to other subsystems (e.g., power, thermal, communications, and flight operations). Subsystems requested mission design trades that include landing lighting, the addition of an Orion main engine checkout burn, and auxiliary-thruster-only cases. This paper examines the evolving trade studies that incorporate subsystem feedback and demonstrates the feasibility of these constrained mission trajectory designs and contingencies.

  9. Dark-matter decay as a complementary probe of multicomponent dark sectors.

    PubMed

    Dienes, Keith R; Kumar, Jason; Thomas, Brooks; Yaylali, David

    2015-02-06

    In single-component theories of dark matter, the 2→2 amplitudes for dark-matter production, annihilation, and scattering can be related to each other through various crossing symmetries. The detection techniques based on these processes are thus complementary. However, multicomponent theories exhibit an additional direction for dark-matter complementarity: the possibility of dark-matter decay from heavier to lighter components. We discuss how this new detection channel may be correlated with the others, and demonstrate that the enhanced complementarity which emerges can be an important ingredient in probing and constraining the parameter spaces of such models.

  10. QCD axion dark matter from long-lived domain walls during matter domination

    NASA Astrophysics Data System (ADS)

    Harigaya, Keisuke; Kawasaki, Masahiro

    2018-07-01

    The domain wall problem of the Peccei-Quinn mechanism can be solved if the Peccei-Quinn symmetry is explicitly broken by a small amount. Domain walls decay into axions, which may account for dark matter of the universe. This scheme is however strongly constrained by overproduction of axions unless the phase of the explicit breaking term is tuned. We investigate the case where the universe is matter-dominated around the temperature of the MeV scale and domain walls decay during this matter dominated epoch. We show how the viable parameter space is expanded.

  11. The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: constraining modified gravity

    NASA Astrophysics Data System (ADS)

    Mueller, Eva-Maria; Percival, Will; Linder, Eric; Alam, Shadab; Zhao, Gong-Bo; Sánchez, Ariel G.; Beutler, Florian; Brinkmann, Jon

    2018-04-01

We use baryon acoustic oscillation and redshift space distortion measurements from the combined sample of the completed Baryon Oscillation Spectroscopic Survey, corresponding to Data Release 12 of the Sloan Digital Sky Survey, in combination with cosmic microwave background, supernova, and redshift space distortion measurements from additional spectroscopic surveys, to test deviations from general relativity. We present constraints on several phenomenological models of modified gravity. First, we parametrize the growth of structure using the growth index γ, finding γ = 0.566 ± 0.058 (68 per cent C.L.). Secondly, we modify the relation between the two Newtonian potentials by introducing two additional parameters, GM and GL. In this approach, GM refers to modifications of the growth of structure whereas GL refers to modifications of the lensing potential. We consider a power law to model the redshift dependence of GM and GL, as well as binning in redshift space, introducing four additional degrees of freedom: GM(z < 0.5), GM(z > 0.5), GL(z < 0.5), and GL(z > 0.5). At 68 per cent C.L., we measure GM = 0.980 ± 0.096 and GL = 1.082 ± 0.060 for a linear model, GM = 1.01 ± 0.36 and GL = 1.31 ± 0.19 for a cubic model, as well as GM(z < 0.5) = 1.26 ± 0.32, GM(z > 0.5) = 0.986 ± 0.022, GL(z < 0.5) = 1.067 ± 0.058, and GL(z > 0.5) = 1.037 ± 0.029. Thirdly, we investigate general scalar tensor theories of gravity, finding the model to be mostly unconstrained by current data. Assuming a one-parameter f(R) model, we can constrain B0 < 7.7 × 10⁻⁵ (95 per cent C.L.). For all models we considered, we find good agreement with general relativity.
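The growth-index parametrization used in the first test can be written down directly. A minimal sketch, assuming a flat ΛCDM background with an illustrative Ωm,0 = 0.31 and the fitted central value γ = 0.566:

```python
def omega_m(z, om0=0.31):
    """Matter density parameter Omega_m(z) in flat LCDM (om0 illustrative)."""
    a3 = (1.0 + z) ** 3
    return om0 * a3 / (om0 * a3 + 1.0 - om0)

def growth_rate(z, gamma=0.566, om0=0.31):
    """Growth-index parametrization f(z) = Omega_m(z)**gamma; general
    relativity predicts gamma close to 0.55, near the fitted 0.566."""
    return omega_m(z, om0) ** gamma

# The universe is matter dominated at high z, so f approaches 1 from below.
assert growth_rate(0.0) < growth_rate(5.0) < 1.0
```

Comparing f(z) computed this way against measured fσ8 values is what turns redshift space distortions into a constraint on γ.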

  12. Post-LHC7 fine-tuning in the minimal supergravity/CMSSM model with a 125 GeV Higgs boson

    NASA Astrophysics Data System (ADS)

    Baer, Howard; Barger, Vernon; Huang, Peisi; Mickelson, Dan; Mustafayev, Azar; Tata, Xerxes

    2013-02-01

The recent discovery of a 125 GeV Higgs-like resonance at the LHC, coupled with the lack of evidence for weak scale supersymmetry (SUSY), has severely constrained SUSY models such as minimal supergravity (mSUGRA)/CMSSM. As the LHC probes deeper into SUSY model parameter space, the little hierarchy problem, i.e., how to reconcile the Z and Higgs boson mass scale with the scale of SUSY breaking, will become increasingly exacerbated unless a sparticle signal is found. We evaluate two different measures of fine-tuning in the mSUGRA/CMSSM model. The more stringent of these, ΔHS, includes effects that arise from the high-scale origin of the mSUGRA parameters, while the second measure, ΔEW, is determined only by weak scale parameters: hence, it is universal to any model with the same particle spectrum and couplings. Our results incorporate the latest constraints from LHC7 sparticle searches, LHCb limits from Bs → μ+μ-, and also require a light Higgs scalar with mh ≈ 123-127 GeV. We present fine-tuning contours in the m0 vs m1/2 plane for several sets of A0 and tan β values. We also present results for ΔHS and ΔEW from a scan over the entire viable model parameter space. We find ΔHS ≳ 10³, or at best 0.1%, fine-tuning. For the less stringent electroweak fine-tuning, we find ΔEW ≳ 10², or at best 1%, fine-tuning. Two benchmark points are presented that have the lowest values of ΔHS and ΔEW. Our results provide a quantitative measure for ascertaining whether or not the remaining mSUGRA/CMSSM model parameter space is excessively fine-tuned and so could provide impetus for considering alternative SUSY models.

  13. Hot-spot model for accretion disc variability as random process. II. Mathematics of the power-spectrum break frequency

    NASA Astrophysics Data System (ADS)

    Pecháček, T.; Goosmann, R. W.; Karas, V.; Czerny, B.; Dovčiak, M.

    2013-08-01

    Context. We study some general properties of accretion disc variability in the context of stationary random processes. In particular, we are interested in mathematical constraints that can be imposed on the functional form of the Fourier power-spectrum density (PSD) that exhibits a multiply broken shape and several local maxima. Aims: We develop a methodology for determining the regions of the model parameter space that can in principle reproduce a PSD shape with a given number and position of local peaks and breaks of the PSD slope. Given the vast space of possible parameters, it is an important requirement that the method is fast in estimating the PSD shape for a given parameter set of the model. Methods: We generated and discuss the theoretical PSD profiles of a shot-noise-type random process with exponentially decaying flares. Then we determined conditions under which one, two, or more breaks or local maxima occur in the PSD. We calculated positions of these features and determined the changing slope of the model PSD. Furthermore, we considered the influence of the modulation by the orbital motion for a variability pattern assumed to result from an orbiting-spot model. Results: We suggest that our general methodology can be useful for describing non-monotonic PSD profiles (such as the trend seen, on different scales, in exemplary cases of the high-mass X-ray binary Cygnus X-1 and the narrow-line Seyfert galaxy Ark 564). We adopt a model where these power spectra are reproduced as a superposition of several Lorentzians with varying amplitudes in the X-ray-band light curve. Our general approach can help in constraining the model parameters and in determining which parts of the parameter space are accessible under various circumstances.
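For a shot-noise process built from exponentially decaying flares, each flare family contributes a Lorentzian to the PSD, which is the superposition picture described above. A minimal sketch; the amplitudes and decay rates are illustrative, not fits to Cygnus X-1 or Ark 564:

```python
def psd(omega, flares):
    """Shot-noise PSD as a superposition of Lorentzians: a flare family
    decaying as exp(-lam * t) contributes A / (lam**2 + omega**2), with a
    slope break near omega ~ lam.  Values below are illustrative."""
    return sum(A / (lam ** 2 + omega ** 2) for A, lam in flares)

flares = [(1.0, 0.1), (100.0, 10.0)]   # (amplitude, decay rate) pairs

# Flat plateau well below the lowest decay rate; omega**-2 falloff far above.
low, high = psd(0.001, flares), psd(1000.0, flares)
assert abs(low - psd(0.0001, flares)) / low < 1e-3      # flat at low omega
assert low > high                                       # declining overall
```

Scanning such a profile for the zeros of its logarithmic-slope derivative is one fast way to locate the break frequencies and local maxima for a given parameter set.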

  14. SU-G-BRA-08: Diaphragm Motion Tracking Based On KV CBCT Projections with a Constrained Linear Regression Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, J; Chao, M

    2016-06-15

Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections using a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancy, the enabling phenomenology in video compression and encoding techniques, is inherent in the dynamic properties of diaphragm motion; it was integrated with the geometrical shape of the diaphragm boundary and an associated algebraic constraint that significantly reduced the search space of viable parabolic parameters, which could then be effectively optimized by a constrained linear regression approach on the subsequent projections. The algebraic constraints stipulating the kinetic range of the motion, together with the spatial constraint preventing unphysical deviations, made it possible to obtain the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired at a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. Automatic tracking by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm across all enrolled patients. This submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images.
Conclusion: The new algorithm provides a potential solution for rendering diaphragm motion and ultimately improving tumor motion management for radiation therapy of cancer patients.
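The geometric model above is a parabola fit by linear least squares under temporal constraints. The sketch below illustrates the idea with a simple apex-shift constraint between frames; it is not the authors' exact constrained-regression formulation, and the constraint form and threshold are assumptions:

```python
import numpy as np

def fit_parabola(xs, ys, prev=None, max_shift=5.0):
    """Least-squares fit of y = a*x**2 + b*x + c to diaphragm edge points.
    Temporal constraint (illustrative): the apex may move at most
    `max_shift` pixels from the previous frame's fit, else the previous
    fit is kept as an unphysical-jump rejection."""
    A = np.vstack([xs ** 2, xs, np.ones_like(xs)]).T
    a, b, c = np.linalg.lstsq(A, ys, rcond=None)[0]
    if prev is not None:
        apex_prev = -prev[1] / (2.0 * prev[0])   # apex of previous parabola
        apex = -b / (2.0 * a)
        if abs(apex - apex_prev) > max_shift:
            return prev
    return (a, b, c)

xs = np.linspace(-10, 10, 21)
ys = 0.5 * xs ** 2 - 2.0 * xs + 3.0              # noise-free test curve
a, b, c = fit_parabola(xs, ys)
assert np.allclose([a, b, c], [0.5, -2.0, 3.0], atol=1e-8)
```

Because the parabola is linear in its coefficients, each frame's fit stays a cheap linear regression; the constraints only gate which solutions are accepted.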

  15. Analysing 21cm signal with artificial neural network

    NASA Astrophysics Data System (ADS)

Shimabukuro, Hayato; Semelin, Benoit

    2018-05-01

The 21cm signal at the epoch of reionization (EoR) should be observed within the next decade. We expect the cosmic 21cm signal at the EoR to provide us with both cosmological and astrophysical information. In order to extract fruitful information from observational data, we need to develop an inversion method. For such a method, we introduce the artificial neural network (ANN), one of the machine learning techniques. We apply the ANN to the inversion problem of constraining astrophysical parameters from the 21cm power spectrum. We train the architecture of the neural network with 70 training datasets and apply it to 54 test datasets with different parameter values. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameter sets at a given redshift, and that the accuracy of the reconstruction improves as the number of given redshifts increases. We conclude that the ANN is a viable inversion method whose main strength is that it requires only sparse sampling of the parameter space, and thus should be usable with full simulations.
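A minimal sketch of such an ANN inversion, using a pure-NumPy one-hidden-layer network. The power-law forward model standing in for a 21cm simulation, the parameter ranges, and the network size are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.array([0.1, 0.2, 0.4, 0.8, 1.6])

# Toy forward model standing in for a 21cm simulation: log power spectrum
# from two "astrophysical" parameters (log-amplitude, slope).
def log_spectrum(theta):
    log_amp, slope = theta
    return log_amp + slope * np.log(k)

theta_train = rng.uniform([-0.7, -2.0], [0.7, -0.5], size=(70, 2))
X = np.array([log_spectrum(t) for t in theta_train])
mu, sd = X.mean(0), X.std(0)
Xn = (X - mu) / sd                     # standardize the 5 spectrum bins

# One-hidden-layer network (5 -> 8 tanh -> 2), full-batch gradient descent,
# trained to map spectrum -> parameters (the inversion direction).
W1 = rng.normal(0.0, 0.3, (5, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.3, (8, 2)); b2 = np.zeros(2)
for _ in range(5000):
    H = np.tanh(Xn @ W1 + b1)
    err = (H @ W2 + b2) - theta_train  # prediction residual
    dH = (err @ W2.T) * (1.0 - H ** 2)
    W2 -= 0.1 * H.T @ err / 70;  b2 -= 0.1 * err.mean(0)
    W1 -= 0.1 * Xn.T @ dH / 70;  b1 -= 0.1 * dH.mean(0)

theta_true = np.array([0.3, -1.1])
x_test = (log_spectrum(theta_true) - mu) / sd
theta_hat = np.tanh(x_test @ W1 + b1) @ W2 + b2
assert np.abs(theta_hat - theta_true).max() < 0.15
```

The 70-sample training set mirrors the paper's point that the network only needs a sparse sampling of parameter space, since it interpolates between training simulations at prediction time.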

  16. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and an analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identifying model components that can be omitted without affecting the fit to the parameterising data.
Our analysis revealed that the model parameters could be constrained to a standard deviation of, on average, 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and for reducing models to the minimum complexity that replicates the measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identifying redundant components of large biophysical models, and for increasing their predictive capacity. PMID:24886522
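The two-stage idea, approximating the simulator with a statistical metamodel and then zooming into the feasible region near the measured data, can be sketched as follows. The quadratic response surface and the toy simulator are illustrative stand-ins for the multivariate metamodelling used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulator(theta):
    """Deterministic 'biophysical' model output (purely illustrative)."""
    a, b = theta
    return a ** 2 + np.sin(b)

# Stage 1: sample the design space and fit a quadratic response surface,
# a simple stand-in for multivariate metamodelling of input-output maps.
T = rng.uniform(-1, 1, size=(200, 2))
Y = np.array([simulator(t) for t in T])
F = np.column_stack([np.ones(200), T[:, 0], T[:, 1],
                     T[:, 0] ** 2, T[:, 1] ** 2, T[:, 0] * T[:, 1]])
coef = np.linalg.lstsq(F, Y, rcond=None)[0]

def metamodel(theta):
    a, b = theta
    return np.array([1, a, b, a * a, b * b, a * b]) @ coef

# Stage 2: zoom into the feasible region, keeping candidate parameter sets
# whose (cheap) metamodel prediction lies near the "measured" value.
y_meas = simulator((0.5, 0.2))
cands = rng.uniform(-1, 1, size=(5000, 2))
feasible = [t for t in cands if abs(metamodel(t) - y_meas) < 0.05]
assert len(feasible) > 0
assert all(abs(simulator(t) - y_meas) < 0.2 for t in feasible)
```

The surviving candidate cloud also exposes identifiability: here a whole curve of (a, b) pairs reproduces the measurement, showing the parameters are coupled and individually under-constrained.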

  17. Universally Sloppy Parameter Sensitivities in Systems Biology Models

    PubMed Central

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-01-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568

  18. Universally sloppy parameter sensitivities in systems biology models.

    PubMed

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
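The "sloppy" eigenvalue spectrum described above can be illustrated on a deliberately tiny toy problem: for a sum-of-two-exponentials model with similar decay rates, the eigenvalues of JᵀJ (the Gauss–Newton approximation to the fit Hessian) already differ by orders of magnitude. A minimal pure-Python sketch, not taken from the paper; the model, rates, and sample times are invented for illustration:

```python
import math

def jtj_eigenvalues(th1, th2, times):
    """Eigenvalues of J^T J for the toy model y(t) = exp(-th1*t) + exp(-th2*t)."""
    # Jacobian columns: dy/dth_i = -t * exp(-th_i * t)
    c1 = [-t * math.exp(-th1 * t) for t in times]
    c2 = [-t * math.exp(-th2 * t) for t in times]
    a = sum(x * x for x in c1)
    b = sum(x * y for x, y in zip(c1, c2))
    c = sum(y * y for y in c2)
    # Closed-form eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]
    disc = math.sqrt((a - c) ** 2 + 4.0 * b * b)
    return (a + c + disc) / 2.0, (a + c - disc) / 2.0

times = [0.5 * k for k in range(1, 9)]
lam_max, lam_min = jtj_eigenvalues(1.0, 1.1, times)
sloppiness = lam_max / lam_min  # stiff/sloppy eigenvalue ratio, >> 1
```

The ratio grows without bound as the two rates approach each other: the data constrain one stiff combination of the parameters tightly while the orthogonal combination stays sloppy.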

  19. Lepton Flavorful Fifth Force and Depth-Dependent Neutrino Matter Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wise, Mark B.; Zhang, Yue

    We consider a fifth force to be an interaction that couples to matter with a strength that grows with the number of atoms. In addition to competing with the strength of gravity, a fifth force can give rise to violations of the equivalence principle. Current long-range constraints on the strength and range of fifth forces are very impressive. Amongst possible fifth forces are those that couple to lepton flavorful charges $L_e-L_\mu$ or $L_e-L_\tau$. They have the property that their range and strength are also constrained by neutrino interactions with matter. In this brief note we review the existing constraints on the allowed parameter space in gauged $U(1)_{L_e-L_{\mu,\tau}}$. We find two regions where neutrino oscillation experiments are at the frontier of probing such a new force. In particular, there is an allowed range of parameter space where neutrino matter interactions relevant for long-baseline oscillation experiments depend on the depth of the neutrino beam below the surface of the Earth.

  20. Predicting Instability Timescales in Closely-Packed Planetary Systems

    NASA Astrophysics Data System (ADS)

    Tamayo, Daniel; Hadden, Samuel; Hussain, Naireen; Silburt, Ari; Gilbertson, Christian; Rein, Hanno; Menou, Kristen

    2018-04-01

    Many of the multi-planet systems discovered around other stars are maximally packed. This implies that simulations with masses or orbital parameters too far from the actual values will destabilize on short timescales; thus, long-term dynamics allows one to constrain the orbital architectures of many closely packed multi-planet systems. A central challenge in such efforts is the large computational cost of N-body simulations, which precludes a full survey of the high-dimensional parameter space of orbital architectures allowed by observations. I will present our recent successes in training machine learning models capable of reliably predicting orbital stability a million times faster than N-body simulations. By engineering dynamically relevant features that we feed to a gradient-boosted decision tree algorithm (XGBoost), we are able to achieve a precision and recall of 90% on a holdout test set of N-body simulations. This opens a wide discovery space for characterizing new exoplanet discoveries and for elucidating how orbital architectures evolve through time as the next generation of spaceborne exoplanet surveys prepares for launch this year.
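The precision and recall figures quoted above are straightforward to compute from a labelled hold-out set. The sketch below stands in for the trained XGBoost model with a plain threshold on a single hypothetical engineered feature (minimum pairwise separation in mutual Hill radii); the feature values and survival labels are invented for illustration:

```python
def precision_recall(y_true, y_pred):
    """Precision and recall for binary labels (1 = stable, 0 = unstable)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fp), tp / (tp + fn)

# Invented feature: minimum pairwise separation in mutual Hill radii,
# with survival of an N-body integration as the ground-truth label.
min_sep = [4.0, 5.5, 9.0, 12.0, 3.0, 10.5, 7.0, 11.0]
survived = [0,   0,   0,   1,    0,   1,    1,   1]
predicted = [1 if f > 8.0 else 0 for f in min_sep]  # threshold stand-in for the classifier
precision, recall = precision_recall(survived, predicted)
```

On this toy set the threshold mislabels one unstable system as stable (a false positive) and one stable system as unstable (a false negative), giving precision = recall = 0.75.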

  1. Cosmic structures and gravitational waves in ghost-free scalar-tensor theories of gravity

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Karmakar, Purnendu; Matarrese, Sabino; Scomparin, Mattia

    2018-05-01

    We study cosmic structures in the quadratic Degenerate Higher Order Scalar Tensor (qDHOST) model, which has been proposed as the most general scalar-tensor theory (up to quadratic dependence on the covariant derivatives of the scalar field), which is not plagued by the presence of ghost instabilities. We then study a static, spherically symmetric object embedded in de Sitter space-time for the qDHOST model. This model exhibits breaking of the Vainshtein mechanism inside the cosmic structure and Schwarzschild-de Sitter space-time outside, where General Relativity (GR) can be recovered within the Vainshtein radius. We constrained the parameters of the qDHOST model by requiring the validity of the Vainshtein screening mechanism inside the cosmic structures and the consistency with the recently established bounds on gravitational wave speed from GW170817/GRB170817A event. We find that these two constraints rule out the same set of parameters, corresponding to the Lagrangians that are quadratic in second-order derivatives of the scalar field, for the shift symmetric qDHOST.

  2. Strong constraints on sub-GeV dark sectors from SLAC beam dump E137.

    PubMed

    Batell, Brian; Essig, Rouven; Surujon, Ze'ev

    2014-10-24

    We present new constraints on sub-GeV dark matter and dark photons from the electron beam-dump experiment E137 conducted at SLAC in 1980-1982. Dark matter interacting with electrons (e.g., via a dark photon) could have been produced in the electron-target collisions and scattered off electrons in the E137 detector, producing the striking, zero-background signature of a high-energy electromagnetic shower that points back to the beam dump. E137 probes new and significant ranges of parameter space and constrains the well-motivated possibility that dark photons that decay to light dark-sector particles can explain the ∼3.6σ discrepancy between the measured and standard model value of the muon anomalous magnetic moment. It also restricts the parameter space in which the relic density of dark matter in these models is obtained from thermal freeze-out. E137 also convincingly demonstrates that (cosmic) backgrounds can be controlled and thus serves as a powerful proof of principle for future beam-dump searches for sub-GeV dark-sector particles scattering off electrons in the detector.

  3. PLUTO'S SEASONS: NEW PREDICTIONS FOR NEW HORIZONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, L. A.

    Since the last Pluto volatile transport models were published in 1996, we have (1) new stellar occultation data from 2002 and 2006-2012 that show roughly twice the pressure of the first definitive occultation from 1988, (2) new information about the surface properties of Pluto, (3) a spacecraft due to arrive at Pluto in 2015, and (4) a new volatile transport model that is rapid enough to allow a large parameter-space search. Such a parameter-space search coarsely constrained by occultation results reveals three broad solutions: a high-thermal-inertia, large volatile inventory solution with permanent northern volatiles (PNVs; using the rotational north pole convention); a lower-thermal-inertia, smaller volatile inventory solution with exchanges of volatiles between hemispheres and a pressure plateau beyond 2015 (exchange with pressure plateau, EPP); and solutions with still smaller volatile inventories, with exchanges of volatiles between hemispheres and an early collapse of the atmosphere prior to 2015 (exchange with early collapse, EEC). PNV and EPP are favored by stellar occultation data, but EEC cannot yet be definitively ruled out without more atmospheric modeling or additional occultation observations and analysis.

  4. Explaining dark matter and B decay anomalies with an L μ - L τ model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altmannshofer, Wolfgang; Gori, Stefania; Profumo, Stefano

    We present a dark sector model based on gauging the L μ - L τ symmetry that addresses anomalies in b → sμ⁺μ⁻ decays and that features a particle dark matter candidate. The dark matter particle candidate is a vector-like Dirac fermion coupled to the Z' gauge boson of the L μ - L τ symmetry. We compute the dark matter thermal relic density, its pair-annihilation cross section, and the loop-suppressed dark matter-nucleon scattering cross section, and compare our predictions with current and future experimental results. We demonstrate that after taking into account bounds from Bs meson oscillations, dark matter direct detection, and the CMB, the model is highly predictive: B physics anomalies and a viable particle dark matter candidate, with a mass of ~ (5 - 23) GeV, can be accommodated only in a tightly-constrained region of parameter space, with sharp predictions for future experimental tests. The viable region of parameter space expands if the dark matter is allowed to have L μ - L τ charges that are smaller than those of the SM leptons.

  5. Explaining dark matter and B decay anomalies with an L μ - L τ model

    DOE PAGES

    Altmannshofer, Wolfgang; Gori, Stefania; Profumo, Stefano; ...

    2016-12-20

    We present a dark sector model based on gauging the L μ - L τ symmetry that addresses anomalies in b → sμ⁺μ⁻ decays and that features a particle dark matter candidate. The dark matter particle candidate is a vector-like Dirac fermion coupled to the Z' gauge boson of the L μ - L τ symmetry. We compute the dark matter thermal relic density, its pair-annihilation cross section, and the loop-suppressed dark matter-nucleon scattering cross section, and compare our predictions with current and future experimental results. We demonstrate that after taking into account bounds from Bs meson oscillations, dark matter direct detection, and the CMB, the model is highly predictive: B physics anomalies and a viable particle dark matter candidate, with a mass of ~ (5 - 23) GeV, can be accommodated only in a tightly-constrained region of parameter space, with sharp predictions for future experimental tests. The viable region of parameter space expands if the dark matter is allowed to have L μ - L τ charges that are smaller than those of the SM leptons.

  6. Optimal design of focused experiments and surveys

    NASA Astrophysics Data System (ADS)

    Curtis, Andrew

    1999-10-01

    Experiments and surveys are often performed to obtain data that constrain some previously underconstrained model. Often, constraints are most desired in a particular subspace of model space. Experiment design optimization requires that the quality of any particular design can be first quantified and then maximized. This study shows how the quality can be defined such that it depends on the amount of information that is focused in the particular subspace of interest. In addition, algorithms are presented which allow one particular focused quality measure (from the class of focused measures) to be evaluated efficiently. A subclass of focused quality measures is also related to the standard variance and resolution measures from linearized inverse theory. The theory presented here requires that the relationship between model parameters and data can be linearized around a reference model without significant loss of information. Physical and financial constraints define the space of possible experiment designs. Cross-well tomographic examples are presented, plus a strategy for survey design to maximize information about linear combinations of parameters such as the bulk modulus, κ = λ + 2μ/3.
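One concrete instance of a "focused" quality measure of the kind described above, for a linearized two-parameter model with unit-variance data noise, is the posterior variance of a target combination w·m, i.e. wᵀ(GᵀG)⁻¹w; a design that constrains only an unwanted combination of parameters scores poorly on it. A hedged sketch, with invented designs and target:

```python
def focused_quality(design, w):
    """Posterior variance of the target combination w . m for a two-parameter
    linearized model with unit data noise: w^T (G^T G)^{-1} w (smaller = better)."""
    a = sum(g1 * g1 for g1, g2 in design)
    b = sum(g1 * g2 for g1, g2 in design)
    c = sum(g2 * g2 for g1, g2 in design)
    det = a * c - b * b                    # assumes G^T G is invertible
    inv = ((c / det, -b / det), (-b / det, a / det))
    return sum(w[i] * inv[i][j] * w[j] for i in range(2) for j in range(2))

w = (1.0, 0.0)                             # only parameter 1 is of interest
design_A = [(1.0, 1.0), (1.0, 1.1)]        # both data sense nearly the same combination
design_B = [(1.0, 0.0), (0.0, 1.0)]        # data separate the two parameters
qA = focused_quality(design_A, w)
qB = focused_quality(design_B, w)
```

Design A constrains m1 + m2 well yet leaves the target m1 with a variance of roughly 221, while design B achieves variance 1; optimizing the focused measure would therefore select B.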

  7. Cornering pseudoscalar-mediated dark matter with the LHC and cosmology

    NASA Astrophysics Data System (ADS)

    Banerjee, Shankha; Barducci, Daniele; Bélanger, Geneviève; Fuks, Benjamin; Goudelis, Andreas; Zaldivar, Bryan

    2017-07-01

    Models in which dark matter particles communicate with the visible sector through a pseudoscalar mediator are well-motivated both from a theoretical and from a phenomenological standpoint. With direct detection bounds being typically subleading in such scenarios, the main constraints stem either from collider searches for dark matter, or from indirect detection experiments. However, LHC searches for the mediator particles themselves can not only compete with, or even supersede, the reach of direct collider dark matter probes, but they can also test scenarios in which traditional monojet searches become irrelevant, especially when the mediator cannot decay on-shell into dark matter particles or its decay is suppressed. In this work we perform a detailed analysis of a pseudoscalar-mediated dark matter simplified model, taking into account a large set of collider constraints and concentrating on the parameter space regions favoured by cosmological and astrophysical data. We find that mediator masses above 100-200 GeV are essentially excluded by LHC searches in the case of large couplings to the top quark, while forthcoming collider and astrophysical measurements will further constrain the available parameter space.

  8. Mitigating direct detection bounds in non-minimal Higgs portal scalar dark matter models

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Subhaditya; Ghosh, Purusottam; Maity, Tarak Nath; Ray, Tirtha Sankar

    2017-10-01

    The minimal Higgs portal dark matter model is increasingly in tension with recent results from direct detection experiments like LUX and XENON. In this paper we make a systematic study of simple extensions of the Z_2 stabilized singlet scalar Higgs portal scenario in terms of their prospects at direct detection experiments. We consider both enlarging the stabilizing symmetry to Z_3 and incorporating multipartite features in the dark sector. We demonstrate that in these non-minimal models the interplay of annihilation, co-annihilation and semi-annihilation processes considerably relaxes constraints from present and proposed direct detection experiments while simultaneously saturating the observed dark matter relic density. We explore in particular the resonant semi-annihilation channel within the multipartite Z_3 framework, which results in new unexplored regions of parameter space that would be difficult to constrain by direct detection experiments in the near future. The role of dark matter exchange processes within the multi-component Z_3 × Z_3' framework is illustrated. We make quantitative estimates to elucidate the role of various annihilation processes in the different allowed regions of parameter space within these models.

  9. Use of strontium isotopes to identify buried water main leakage into groundwater in a highly urbanized coastal area.

    PubMed

    Leung, Chi-Man; Jiao, Jiu Jimmy

    2006-11-01

    Previous studies indicate that the local aquifer systems in the Mid-Levels, a highly urbanized coastal area in Hong Kong, have commonly been affected by leakage from water mains. Leakage locations have conventionally been identified using water quality parameters, including major and trace elements. However, these parameters may lead to ambiguous results and fail to identify leakage locations, especially where the leakage is from drinking water mains, because the chemical composition of drinking water is similar to that of natural groundwater. In this study, natural groundwater, seepage in the developed spaces, leakage from water mains, and parent aquifer materials were measured for strontium isotope (87Sr/86Sr) compositions to explore the feasibility of using these ratios to better constrain the seepage sources. The results show that the 87Sr/86Sr ratios of natural groundwater and of leakage from water mains are distinctly different and thus can provide additional information on the sources of seepage in developed spaces. A classification system based on the aqueous 87Sr/86Sr ratio is proposed for seepage source identification.

  10. The retrieval of a buried cylindrical obstacle by a constrained modified gradient method in the H-polarization case and for Maxwellian materials

    NASA Astrophysics Data System (ADS)

    Lambert, M.; Lesselier, D.; Kooij, B. J.

    1998-10-01

    The retrieval of an unknown, possibly inhomogeneous, penetrable cylindrical obstacle buried entirely in a known homogeneous half-space - the constitutive material parameters of the obstacle and of its embedding obey a Maxwell model - is considered from single- or multiple-frequency aspect-limited data collected by ideal sensors located in air above the embedding half-space, when a small number of time-harmonic transverse electric (TE)-polarized line sources - the magnetic field H is directed along the axis of the cylinder - is also placed in air. The wavefield is modelled from a rigorous H-field domain integral-differential formulation which involves the dot product of the gradients of the single component of H and of the Green function of the stratified environment times a scalar-valued contrast function which contains the obstacle parameters (the frequency-independent, position-dependent relative permittivity and conductivity). A modified gradient method is developed in order to reconstruct the maps of such parameters within a prescribed search domain from the iterative minimization of a cost functional which incorporates both the error in reproducing the data and the error on the field built inside this domain. Non-physical values are excluded and convergence reached by incorporating in the solution algorithm, from a proper choice of unknowns, the condition that the relative permittivity be larger than or equal to 1, and the conductivity be non-negative. The efficiency of the constrained method is illustrated from noiseless and noisy synthetic data acquired independently. The importance of the choice of the initial values of the sought quantities, the need for a periodic refreshment of the constitutive parameters to avoid the algorithm providing inconsistent results, and the interest of a frequency-hopping strategy to obtain finer and finer features of the obstacle when the frequency is raised, are underlined. 
It is also shown that though either the permittivity map or the conductivity map can be obtained for a fair variety of cases, retrieving both of them may be difficult unless further information is made available.
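The non-negativity conditions described above (relative permittivity at least 1, conductivity non-negative) can be built into the solution algorithm by a change of variables, so that the optimiser acts on unconstrained quantities while every iterate stays physical. The following is a minimal sketch of that idea on a scalar toy misfit, not the authors' full modified gradient scheme; the substitution eps_r = 1 + a², sigma = b², the targets, and the learning rate are all illustrative choices:

```python
def to_physical(a, b):
    """Change of variables guaranteeing eps_r >= 1 and sigma >= 0."""
    return 1.0 + a * a, b * b

def gradient_step(a, b, eps_target, sig_target, lr=0.05):
    """One descent step on the toy misfit (eps_r - t1)^2 + (sigma - t2)^2,
    with the chain rule applied through the substitution."""
    eps_r, sigma = to_physical(a, b)
    grad_a = 2.0 * (eps_r - eps_target) * 2.0 * a
    grad_b = 2.0 * (sigma - sig_target) * 2.0 * b
    return a - lr * grad_a, b - lr * grad_b

a, b = 0.5, 0.5
for _ in range(200):
    a, b = gradient_step(a, b, eps_target=4.0, sig_target=0.02)
eps_r, sigma = to_physical(a, b)  # physical bounds hold at every iterate
```

Because the bounds are encoded in the parameterization itself, no projection or penalty term is needed to keep the reconstructed maps physical.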

  11. Supersymmetry searches in GUT models with non-universal scalar masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannoni, M.; Gómez, M.E.; Ellis, J.

    2016-03-01

    We study SO(10), SU(5) and flipped SU(5) GUT models with non-universal soft supersymmetry-breaking scalar masses, exploring how they are constrained by LHC supersymmetry searches and cold dark matter experiments, and how they can be probed and distinguished in future experiments. We find characteristic differences between the various GUT scenarios, particularly in the coannihilation region, which is very sensitive to changes of parameters. For example, the flipped SU(5) GUT predicts the possibility of t̃_1 − χ coannihilation, which is absent in the regions of the SO(10) and SU(5) GUT parameter spaces that we study. We use the relic density predictions in different models to determine upper bounds for the neutralino masses, and we find large differences between different GUT models in the sparticle spectra for the same LSP mass, leading to direct connections of distinctive possible experimental measurements with the structure of the GUT group. We find that future LHC searches for generic missing E_T, charginos and stops will be able to constrain the different GUT models in complementary ways, as will the XENON1T and DARWIN dark matter scattering experiments and future Fermi or CTA γ-ray searches.

  12. Time and Memory Efficient Online Piecewise Linear Approximation of Sensor Signals.

    PubMed

    Grützmacher, Florian; Beichler, Benjamin; Hein, Albert; Kirste, Thomas; Haubelt, Christian

    2018-05-23

    Piecewise linear approximation of sensor signals is a well-known technique in the fields of Data Mining and Activity Recognition. In this context, several algorithms have been developed, some of them with the purpose to be performed on resource constrained microcontroller architectures of wireless sensor nodes. While microcontrollers are usually constrained in computational power and memory resources, all state-of-the-art piecewise linear approximation techniques either need to buffer sensor data or have an execution time depending on the segment’s length. In the paper at hand, we propose a novel piecewise linear approximation algorithm, with a constant computational complexity as well as a constant memory complexity. Our proposed algorithm’s worst-case execution time is one to three orders of magnitude smaller and its average execution time is three to seventy times smaller compared to the state-of-the-art Piecewise Linear Approximation (PLA) algorithms in our experiments. In our evaluations, we show that our algorithm is time and memory efficient without sacrificing the approximation quality compared to other state-of-the-art piecewise linear approximation techniques, while providing a maximum error guarantee per segment, a small parameter space of only one parameter, and a maximum latency of one sample period plus its worst-case execution time.
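A constant-time, constant-memory online PLA of the general kind discussed above can be sketched with the classical feasible-slope-interval idea: keep the interval of slopes that holds every sample of the current segment within ±eps of the line through the segment's anchor, and close the segment when that interval becomes empty. This is an illustrative stand-in under that assumption, not the specific algorithm proposed in the paper:

```python
def online_pla(samples, eps):
    """O(1)-per-sample online PLA with a max-error guarantee per segment.
    [lo, hi] is the interval of slopes keeping every sample of the current
    segment within +/- eps of the line through the segment anchor."""
    segments = []                       # (t_anchor, y_anchor, slope, t_end)
    t0, y0 = 0, samples[0]
    lo, hi = float("-inf"), float("inf")
    for t in range(1, len(samples)):
        dt = float(t - t0)
        new_lo = max(lo, (samples[t] - eps - y0) / dt)
        new_hi = min(hi, (samples[t] + eps - y0) / dt)
        if new_lo <= new_hi:            # segment still feasible: O(1) update
            lo, hi = new_lo, new_hi
        else:                           # close segment, re-anchor at this sample
            segments.append((t0, y0, (lo + hi) / 2.0, t - 1))
            t0, y0 = t, samples[t]
            lo, hi = float("-inf"), float("inf")
    slope = 0.0 if lo == float("-inf") else (lo + hi) / 2.0
    segments.append((t0, y0, slope, len(samples) - 1))
    return segments

def reconstruct(segments, n):
    out = [0.0] * n
    for t0, y0, s, t_end in segments:
        for t in range(t0, t_end + 1):
            out[t] = y0 + s * (t - t0)
    return out

signal = [abs(25 - t) * 0.5 for t in range(50)]   # V-shaped test signal
segs = online_pla(signal, eps=0.1)
approx = reconstruct(segs, len(signal))
```

Each sample triggers only a bounded amount of work and only the anchor and two slopes are stored, which is the kind of constant time and memory budget the abstract targets for resource-constrained microcontrollers.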

  13. Asteroseismic inversions in the Kepler era: application to the Kepler Legacy sample

    NASA Astrophysics Data System (ADS)

    Buldgen, Gaël; Reese, Daniel; Dupret, Marc-Antoine

    2017-10-01

    In the past few years, the CoRoT and Kepler missions have carried out what is now called the space photometry revolution. This revolution is still ongoing thanks to K2 and will be continued by the TESS and PLATO 2.0 missions. However, the photometry revolution must also be followed by progress in stellar modelling, in order to lead to more precise and accurate determinations of fundamental stellar parameters such as masses, radii and ages. In this context, the long-standing problems related to mixing processes in stellar interiors are the main obstacle to further improvements of stellar modelling. In this contribution, we will apply structural asteroseismic inversion techniques to targets from the Kepler Legacy sample and analyse how these can help us constrain the fundamental parameters and mixing processes in these stars. Our approach is based on previous studies using the SOLA inversion technique [1] to determine integrated quantities such as the mean density [2], the acoustic radius, and core condition indicators [3], and has already been successfully applied to the 16Cyg binary system [4]. We will show how this technique can be applied to the Kepler Legacy sample and how new indicators can help us to further constrain the chemical composition profiles of stars as well as provide stringent constraints on stellar ages.

  14. Constraining the Structure of Hot Jupiter Atmospheres Using a Hybrid Version of the NEMESIS Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Badhan, Mahmuda A.; Mandell, Avi M.; Hesman, Brigette; Nixon, Conor; Deming, Drake; Irwin, Patrick; Barstow, Joanna; Garland, Ryan

    2015-11-01

    Understanding the formation environments and evolution scenarios of planets in nearby planetary systems requires robust measures for constraining their atmospheric physical properties. Here we have utilized a combination of two different parameter retrieval approaches, Optimal Estimation and Markov Chain Monte Carlo, as part of the well-validated NEMESIS atmospheric retrieval code, to infer a range of temperature profiles and molecular abundances of H2O, CO2, CH4 and CO from available dayside thermal emission observations of several hot-Jupiter candidates. In order to keep the number of parameters low and thereby retrieve more plausible profile shapes, we have used a parametrized form of the temperature profile based upon an analytic radiative equilibrium derivation in Guillot et al. 2010 (Line et al. 2012, 2014). We show retrieval results on published spectroscopic and photometric data from both the Hubble Space Telescope and Spitzer missions, and compare them with simulations from the upcoming JWST mission. In addition, since NEMESIS utilizes correlated distributions of absorption coefficients (k-distributions) amongst atmospheric layers to compute these models, updates to spectroscopic databases can impact retrievals quite significantly for such high-temperature atmospheres. As high-temperature line databases are continually being improved, we also compare retrievals between old and newer databases.

  15. Transition disks: four candidates for ongoing giant planet formation in Ophiuchus

    NASA Astrophysics Data System (ADS)

    Orellana, M.; Cieza, L. A.; Schreiber, M. R.; Merín, B.; Brown, J. M.; Pellizza, L. J.; Romero, G. A.

    2012-03-01

    Among the large set of Spitzer-selected transitional disks that we have examined in the Ophiuchus molecular cloud, four disks have been identified as (giant) planet-forming candidates based on the morphology of their spectral energy distributions (SEDs), their apparent lack of stellar companions, and evidence of accretion. Here we characterize the structures of these disks by modeling their optical, infrared, and (sub)millimeter SEDs. We use the Monte Carlo radiative transfer package RADMC to construct a parametric model of the dust distribution in a flared disk with an inner cavity and calculate the temperature structure that is consistent with the density profile when the disk is in thermal equilibrium with the irradiating star. For each object, we conducted a Bayesian exploration of the parameter space, generating Markov chain Monte Carlo (MCMC) samples that allow us to identify the best-fit model parameters and to constrain their range of statistical confidence. Our calculations imply that evacuated cavities with radii ~2-8 AU are present, which appear to have been carved by embedded giant planets. We found parameter values that are consistent with those previously given in the literature, indicating that there has been a mild degree of grain growth and dust settling, which deserves to be investigated with further modeling and follow-up observations. Resolved images with (sub)millimeter interferometers would be required to break some of the degeneracies of the models and to more tightly constrain the physical properties of these fascinating disks.
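The Bayesian parameter-space exploration described above can be sketched with a textbook random-walk Metropolis sampler. The toy "model" below is a one-parameter straight line rather than a RADMC disk model, and the data, noise level, step size, and chain length are all invented for illustration:

```python
import math
import random

def log_likelihood(theta, xs, ys, sigma=1.0):
    """Gaussian log-likelihood for the toy model y = theta * x."""
    return -sum((y - theta * x) ** 2 for x, y in zip(xs, ys)) / (2.0 * sigma ** 2)

def metropolis(xs, ys, n_steps=4000, step=0.1, seed=1):
    """Random-walk Metropolis sampler over the single parameter theta."""
    rng = random.Random(seed)
    theta = 0.0                          # deliberately poor starting point
    ll = log_likelihood(theta, xs, ys)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        ll_prop = log_likelihood(prop, xs, ys)
        if math.log(rng.random()) < ll_prop - ll:   # Metropolis accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

xs = list(range(1, 11))
ys = [2.0 * x for x in xs]               # synthetic data, true slope = 2
chain = metropolis(xs, ys)
estimate = sum(chain[1000:]) / len(chain[1000:])   # posterior mean after burn-in
```

The post-burn-in spread of the chain plays the role of the statistical-confidence ranges quoted in the abstract: quantiles of the chain give credible intervals for the parameter rather than a single best fit.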

  16. Acoustic interference and recognition space within a complex assemblage of dendrobatid frogs

    PubMed Central

    Amézquita, Adolfo; Flechas, Sandra Victoria; Lima, Albertina Pimentel; Gasser, Herbert; Hödl, Walter

    2011-01-01

    In species-rich assemblages of acoustically communicating animals, heterospecific sounds may constrain not only the evolution of signal traits but also the much less-studied signal-processing mechanisms that define the recognition space of a signal. To test the hypothesis that the recognition space is optimally designed, i.e., that it is narrower toward the species that represent the higher potential for acoustic interference, we studied an acoustic assemblage of 10 diurnally active frog species. We characterized their calls, estimated pairwise correlations in calling activity, and, to model the recognition spaces of five species, conducted playback experiments with 577 synthetic signals on 531 males. Acoustic co-occurrence was not related to multivariate distance in call parameters, suggesting a minor role for spectral or temporal segregation among species uttering similar calls. In most cases, the recognition space overlapped but was greater than the signal space, indicating that signal-processing traits do not act as strictly matched filters against sounds other than homospecific calls. Indeed, the range of the recognition space was strongly predicted by the acoustic distance to neighboring species in the signal space. Thus, our data provide compelling evidence of a role of heterospecific calls in evolutionarily shaping the frogs' recognition space within a complex acoustic assemblage without obvious concomitant effects on the signal. PMID:21969562

  17. A Stochastic Fractional Dynamics Model of Space-time Variability of Rain

    NASA Technical Reports Server (NTRS)

    Kundu, Prasun K.; Travis, James E.

    2013-01-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of the radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment.

  18. Effect of soil property uncertainties on permafrost thaw projections: A calibration-constrained analysis

    DOE PAGES

    Harp, Dylan R.; Atchley, Adam L.; Painter, Scott L.; ...

    2016-02-11

    Here, the effect of soil property uncertainties on permafrost thaw projections is studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year-to-year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness (ALT) and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume- and time-integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.
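The null-space projection at the heart of the Null-Space Monte Carlo method can be sketched in a few lines: perturb the calibrated parameters only along directions to which the calibration data are insensitive. The Jacobian, parameter dimensions, and truncation threshold below are illustrative stand-ins, not values from this study:

```python
import numpy as np

# Sketch of Null-Space Monte Carlo: generate parameter sets that leave
# the calibrated observations (here, borehole temperatures) unchanged
# to first order, by sampling only in the null space of the Jacobian.
rng = np.random.default_rng(0)
J = rng.standard_normal((5, 10))   # stand-in Jacobian: 5 obs x 10 parameters
p_cal = np.ones(10)                # stand-in calibrated parameter vector

# Right singular vectors with (near-)zero singular values span the null space.
_, s, Vt = np.linalg.svd(J)
null_basis = Vt[np.count_nonzero(s > 1e-8):]

def nsmc_sample(scale=0.1):
    """One calibration-constrained sample: a random null-space step."""
    coeffs = scale * rng.standard_normal(null_basis.shape[0])
    return p_cal + null_basis.T @ coeffs

samples = np.array([nsmc_sample() for _ in range(100)])
# Each sample reproduces the calibration data to first order:
# J @ (sample - p_cal) is ~0 for every row of `samples`.
```

In practice each candidate parameter set is re-run through the forward model and re-checked against the calibration targets before being admitted to the ensemble.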

  19. Effect of soil property uncertainties on permafrost thaw projections: A calibration-constrained analysis

    DOE PAGES

    Harp, D. R.; Atchley, A. L.; Painter, S. L.; ...

    2015-06-29

    The effect of soil property uncertainties on permafrost thaw projections is studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year-to-year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness (ALT) and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume- and time-integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. As a result, by comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.

  20. Effect of soil property uncertainties on permafrost thaw projections: a calibration-constrained analysis

    NASA Astrophysics Data System (ADS)

    Harp, D. R.; Atchley, A. L.; Painter, S. L.; Coon, E. T.; Wilson, C. J.; Romanovsky, V. E.; Rowland, J. C.

    2016-02-01

    The effects of soil property uncertainties on permafrost thaw projections are studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The null-space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of predictive uncertainty due to soil property (parametric) uncertainty and of the inter-annual climate variability due to year-to-year differences in CESM climate forcings. After calibrating to measured borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant predictive uncertainties in projected active layer thickness (ALT) and annual thaw depth-duration even with a specified future climate. Inter-annual climate variability in projected soil moisture content and Stefan number is small. A volume- and time-integrated Stefan number decreases significantly, indicating a shift in subsurface energy utilization in the future climate (latent heat of phase change becomes more important than heat conduction). Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we quantify the relative magnitude of soil property uncertainty against another source of permafrost uncertainty, structural climate model uncertainty. We show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.

  1. Effect of soil property uncertainties on permafrost thaw projections: a calibration-constrained analysis

    NASA Astrophysics Data System (ADS)

    Harp, D. R.; Atchley, A. L.; Painter, S. L.; Coon, E. T.; Wilson, C. J.; Romanovsky, V. E.; Rowland, J. C.

    2015-06-01

    The effect of soil property uncertainties on permafrost thaw projections is studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year-to-year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness (ALT) and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume- and time-integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.

  2. Global Gross Primary Productivity for 2015 Inferred from OCO-2 SIF and a Carbon-Cycle Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Norton, A.; Rayner, P. J.; Scholze, M.; Koffi, E. N. D.

    2016-12-01

    The CMIP5 intercomparison, among other studies (e.g., Bodman et al., 2013), has shown that the land carbon flux contributes significantly to the uncertainty in projections of future CO2 concentration and climate (Friedlingstein et al., 2014). The main challenge lies in disaggregating the relatively well-known net land carbon flux into its component fluxes, gross primary production (GPP) and respiration. Model simulations of these processes disagree considerably, and accurate observations of photosynthetic activity have proved elusive. Here we build upon the Carbon Cycle Data Assimilation System (CCDAS) (Rayner et al., 2005) to constrain estimates of one of these uncertain fluxes, GPP, using satellite observations of solar-induced fluorescence (SIF). SIF has considerable benefits over other proxy observations as it tracks not just the presence of vegetation but actual photosynthetic activity (Walther et al., 2016; Yang et al., 2015). To combine these observations with process-based simulations of GPP we have coupled the model SCOPE with the CCDAS model BETHY. This provides a mechanistic relationship between SIF and GPP, and the means to constrain the processes relevant to SIF and GPP via model parameters in a data assimilation system. We ingest SIF observations from NASA's Orbiting Carbon Observatory-2 (OCO-2) for 2015 into the data assimilation system to constrain estimates of GPP in space and time, while allowing for explicit consideration of uncertainties in parameters and observations. Here we present first results of the assimilation with SIF. Preliminary results indicate an uncertainty reduction on global annual GPP of at least 75% when using SIF observations, reducing the uncertainty to < 3 PgC yr-1. A large portion of the constraint is propagated via parameters that describe leaf phenology. These results help to bring together state-of-the-art observations and models to improve understanding and predictive capability of GPP.

  3. Determining Crust and Upper Mantle Structure by Bayesian Joint Inversion of Receiver Functions and Surface Wave Dispersion at a Single Station: Preparation for Data from the InSight Mission

    NASA Astrophysics Data System (ADS)

    Jia, M.; Panning, M. P.; Lekic, V.; Gao, C.

    2017-12-01

    The InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission will deploy a geophysical station on Mars in 2018. Using seismology to explore the interior structure of Mars is one of the main targets, and as part of the mission, we will use 3-component seismic data to constrain the crust and upper mantle structure, including P and S wave velocities and densities underneath the station. We will apply a reversible-jump Markov chain Monte Carlo algorithm in a transdimensional hierarchical Bayesian inversion framework, in which the number of parameters in the model space and the noise level of the observed data are also treated as unknowns in the inversion process. Bayesian methods produce an ensemble of models which can be analyzed to quantify uncertainties and trade-offs of the model parameters. In order to get better resolution, we will simultaneously invert three different types of seismic data: receiver functions, surface wave dispersion (SWD), and ZH ratios. Because the InSight mission will only deliver a single seismic station to Mars, and both the source location and the interior structure will be unknown, we will jointly invert for the ray parameter in our approach. In preparation for this work, we first verify our approach by using a set of synthetic data. We find that SWD can constrain the absolute value of velocities while receiver functions constrain the discontinuities. By joint inversion, the velocity structure in the crust and upper mantle is well recovered. Then, we apply our approach to real data from the Earth-based seismic station BFO at the Black Forest Observatory in Germany, as already used in a demonstration study for single-station location methods. From the comparison of the results, our hierarchical treatment shows its advantage over the conventional method, in which the noise level of the observed data is fixed a priori.
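One ingredient of the hierarchical treatment described above, sampling the noise level of the data as an unknown, can be illustrated compactly. The residuals, prior, and sample sizes below are invented for illustration; with Gaussian residuals and a Jeffreys prior, the conditional posterior of the noise variance is inverse-gamma and can be Gibbs-sampled:

```python
import numpy as np

# Toy illustration of hierarchical noise estimation: treat the data
# noise variance sigma^2 as an unknown. For Gaussian residuals r_i and
# a Jeffreys prior p(sigma^2) ~ 1/sigma^2, the conditional posterior is
#   sigma^2 | r ~ Inv-Gamma(n/2, sum(r_i^2)/2),
# which can be drawn directly inside a Gibbs/MCMC loop.
rng = np.random.default_rng(2)
residuals = 0.5 * rng.standard_normal(200)   # stand-in: data minus model

n = residuals.size
ss = np.sum(residuals ** 2)
# Inverse-gamma draw via the reciprocal of a gamma variate.
sigma2_draws = 1.0 / rng.gamma(n / 2.0, 2.0 / ss, size=5000)
```

In a full transdimensional sampler this draw would alternate with updates of the structural parameters (and of the number of layers), so that the inferred noise level feeds back into how closely the model is required to fit the data.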

  4. Constraining the cosmology of the phantom brane using distance measures

    NASA Astrophysics Data System (ADS)

    Alam, Ujjaini; Bag, Satadru; Sahni, Varun

    2017-01-01

    The phantom brane has several important distinctive features: (i) its equation of state is phantomlike, but there is no future "big rip" singularity, and (ii) the effective cosmological constant on the brane is dynamically screened, because of which the expansion rate is smaller than that in ΛCDM at high redshifts. In this paper, we constrain the phantom braneworld using distance measures such as type Ia supernovae (SNeIa), baryon acoustic oscillations (BAO), and the compressed cosmic microwave background (CMB) data. We find that the simplest braneworld models provide a good fit to the data. For instance, BAO + SNeIa data can be accommodated by the braneworld for a large region in parameter space, 0 ≤ Ωℓ ≲ 0.3 at 1σ. The Hubble parameter can be as high as H0 ≲ 78 km s-1 Mpc-1, and the effective equation of state at present can show phantomlike behavior with w0 ≲ -1.2 at 1σ. We note a correlation between H0 and w0, with higher values of H0 leading to a lower, and more phantomlike, value of w0. Inclusion of CMB data provides tighter constraints, Ωℓ ≲ 0.1. (Here Ωℓ encodes the ratio of the five- and four-dimensional Planck masses.) The Hubble parameter in this case is more tightly constrained to H0 ≲ 71 km s-1 Mpc-1, and the effective equation of state to w0 ≲ -1.1. Interestingly, we find that the Universe is allowed to be closed or open, with -0.5 ≲ Ωκ ≲ 0.5, even on including the compressed CMB data. There appears to be some tension between the low- and high-z BAO data, which may either be resolved by future data or act as a pointer to interesting new cosmology.

  5. The tractable cognition thesis.

    PubMed

    Van Rooij, Iris

    2008-09-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the Tractable Cognition thesis: Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories of cognition. To utilize this constraint, a precise and workable definition of "computational tractability" is needed. Following computer science tradition, many cognitive scientists and psychologists define computational tractability as polynomial-time computability, leading to the P-Cognition thesis. This article explains how and why the P-Cognition thesis may be overly restrictive, risking the exclusion of veridical computational-level theories from scientific investigation. An argument is made to replace the P-Cognition thesis by the FPT-Cognition thesis as an alternative formalization of the Tractable Cognition thesis (here, FPT stands for fixed-parameter tractable). Possible objections to the Tractable Cognition thesis, and its proposed formalization, are discussed, and existing misconceptions are clarified. 2008 Cognitive Science Society, Inc.
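The distinction between polynomial-time and fixed-parameter tractability can be made concrete with the textbook example of Vertex Cover: deciding whether a graph has a vertex cover of size at most k takes O(2^k · |E|) time with a bounded search tree, exponential only in the parameter k, not in the input size. A minimal sketch (this example is not from the article, which is purely theoretical):

```python
# Bounded-search-tree algorithm for k-Vertex Cover, a classic
# fixed-parameter tractable (FPT) problem: O(2^k * |E|) time,
# exponential only in the parameter k, not in the input size.

def has_vertex_cover(edges, k):
    """Return True if the graph has a vertex cover of size <= k."""
    # If no edges remain, the vertices chosen so far cover everything.
    if not edges:
        return True
    if k == 0:
        return False
    u, v = next(iter(edges))
    # Branch: either u or v must be in the cover (2 branches, depth <= k).
    edges_without_u = [e for e in edges if u not in e]
    edges_without_v = [e for e in edges if v not in e]
    return (has_vertex_cover(edges_without_u, k - 1)
            or has_vertex_cover(edges_without_v, k - 1))
```

For a triangle graph, `has_vertex_cover([(1, 2), (2, 3), (1, 3)], 1)` is False but with k = 2 it is True; the running time depends exponentially on k alone, which is exactly the kind of tractability the FPT-Cognition thesis admits.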

  6. X-ray lines from dark matter: the good, the bad, and the unlikely

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frandsen, Mads T.; Sannino, Francesco; Shoemaker, Ian M.

    2014-05-01

    We consider three classes of dark matter (DM) models to account for the recently observed 3.5 keV line: metastable excited-state DM, annihilating DM, and decaying DM. We study two examples of metastable excited-state DM. The first, millicharged composite DM, has both inelasticity and photon emission built in, but with a very constrained parameter space. In the second example, up-scattering and decay come from separate sectors, and the model is thus less constrained. The decay of the excited state can potentially be detectable at direct detection experiments. However, we find that CMB constraints are at the border of excluding this as an interpretation of the DAMA signal. The annihilating DM interpretation of the X-ray line is found to be in mild tension with CMB constraints. Lastly, a generalized version of decaying DM can account for the data with a lifetime exceeding the age of the Universe for masses ≲ 10^6 GeV.

  7. Constraining the Optical Emission from the Double Pulsar System J0737-3039

    NASA Astrophysics Data System (ADS)

    Ferraro, F. R.; Mignani, R. P.; Pallanca, C.; Dalessandro, E.; Lanzoni, B.; Pellizzoni, A.; Possenti, A.; Burgay, M.; Camilo, F.; D'Amico, N.; Lyne, A. G.; Kramer, M.; Manchester, R. N.

    2012-04-01

    We present the first optical observations of the unique system J0737-3039 (composed of two pulsars, hereafter PSR-A and PSR-B). Ultra-deep optical observations, performed with the High Resolution Camera of the Advanced Camera for Surveys on board the Hubble Space Telescope, could not detect any optical emission from the system down to m_F435W = 27.0 and m_F606W = 28.3. The estimated optical flux limits are used to constrain the three-component (two thermal and one non-thermal) model recently proposed to reproduce the XMM-Newton X-ray spectrum. They suggest the presence of a break at low energies in the non-thermal power-law component of PSR-A and are compatible with the expected blackbody emission from the PSR-B surface. The corresponding efficiency of the optical emission from PSR-A's magnetosphere would be comparable to that of other Myr-old pulsars, thus suggesting that this parameter may not dramatically evolve over a timescale of a few Myr.

  8. Pseudoscalar portal dark matter and new signatures of vector-like fermions

    DOE PAGES

    Fan, JiJi; Koushiappas, Savvas M.; Landsberg, Greg

    2016-01-19

    Fermionic dark matter interacting with the Standard Model sector through a pseudoscalar portal could evade the direct detection constraints while preserving a WIMP miracle. Here, we study the LHC constraints on pseudoscalar production in simplified models with the pseudoscalar dominantly coupled to either b quarks or τ leptons, and explore their implications for the GeV excesses in gamma-ray observations. We also investigate models with new vector-like fermions that could realize the simplified models of pseudoscalar-portal dark matter. Furthermore, these models yield new decay channels and signatures of vector-like fermions, for instance, bbb, bττ, and τττ resonances. Some of the signatures have already been strongly constrained by the existing LHC searches, and the parameter space fitting the gamma-ray excess is further restricted. Conversely, the pure τ-rich final state is only weakly constrained so far due to the small electroweak production rate.

  9. RX J1856-3754: Evidence for a Stiff Equation of State

    NASA Astrophysics Data System (ADS)

    Braje, Timothy M.; Romani, Roger W.

    2002-12-01

    We have examined the soft X-ray plus optical/UV spectrum of the nearby isolated neutron star RX J1856-3754, comparing it with detailed models of a thermally emitting surface. Like previous investigators, we find that the spectrum is best fitted by a two-temperature blackbody model. In addition, our simulations constrain the allowed viewing geometry from the observed pulse fraction upper limits. These simulations show that RX J1856-3754 is very likely to be a normal young pulsar, with the nonthermal radio beam missing Earth's line of sight. The spectral energy distribution limits on the model parameter space put a strong constraint on the star's M/R. At the measured parallax distance, the allowed range for M_NS = 1.5 M_solar is R_NS = 13.7 ± 0.6 km. Under this interpretation, the equation of state (EOS) is relatively stiff near nuclear density, and the quark star EOS posited in some previous studies is strongly excluded. The data also constrain the surface T distribution over the polar cap.

  10. DECIPHERING THERMAL PHASE CURVES OF DRY, TIDALLY LOCKED TERRESTRIAL PLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koll, Daniel D. B.; Abbot, Dorian S., E-mail: dkoll@uchicago.edu

    2015-03-20

    Next-generation space telescopes will allow us to characterize terrestrial exoplanets. To do so effectively it will be crucial to make use of all available data. We investigate which atmospheric properties can, and cannot, be inferred from the broadband thermal phase curve of a dry and tidally locked terrestrial planet. First, we use dimensional analysis to show that phase curves are controlled by six nondimensional parameters. Second, we use an idealized general circulation model to explore the relative sensitivity of phase curves to these parameters. We find that the feature of phase curves most sensitive to atmospheric parameters is the peak-to-trough amplitude. Moreover, except for hot and rapidly rotating planets, the phase amplitude is primarily sensitive to only two nondimensional parameters: (1) the ratio of dynamical to radiative timescales and (2) the longwave optical depth at the surface. As an application of this technique, we show how phase curve measurements can be combined with transit or emission spectroscopy to yield a new constraint for the surface pressure and atmospheric mass of terrestrial planets. We estimate that a single broadband phase curve, measured over half an orbit with the James Webb Space Telescope, could meaningfully constrain the atmospheric mass of a nearby super-Earth. Such constraints will be important for studying the atmospheric evolution of terrestrial exoplanets as well as characterizing the surface conditions on potentially habitable planets.

  11. Population synthesis of radio and gamma-ray millisecond pulsars using Markov Chain Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice

    2016-04-01

    We present preliminary results of a new population synthesis of millisecond pulsars (MSPs) from the Galactic disk using Markov chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that are dependent on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys, as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries such as the slot gap, outer gap, two-pole caustic, and pair-starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics. We estimate the contribution of MSPs to the diffuse gamma-ray background with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), the Fermi Guest Investigator Program, and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).
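The Markov chain Monte Carlo exploration described in this record can be reduced to a minimal Metropolis-Hastings sketch. The two-parameter Gaussian "likelihood" below is a stand-in; in the actual study the likelihood compares simulated and detected pulsar distributions:

```python
import numpy as np

# Minimal Metropolis-Hastings walk through a model parameter space:
# propose a random step, accept with probability min(1, L'/L).
# The Gaussian log-likelihood centered on (1.0, -0.5) is a toy stand-in.

def log_like(theta):
    return -0.5 * np.sum((theta - np.array([1.0, -0.5])) ** 2)

def metropolis(n_steps, step=0.3, rng=None):
    rng = np.random.default_rng(rng)
    theta = np.zeros(2)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(2)
        # Symmetric proposal, so the acceptance ratio is just L'/L.
        if np.log(rng.random()) < log_like(prop) - log_like(theta):
            theta = prop
        chain.append(theta)
    return np.array(chain)

chain = metropolis(20000, rng=1)
```

After discarding an initial burn-in, the chain's histogram approximates the posterior over the model parameters, which is what lets such studies report constrained birth distributions rather than single best-fit values.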

  12. CP4 miracle: shaping Yukawa sector with CP symmetry of order four

    NASA Astrophysics Data System (ADS)

    Ferreira, P. M.; Ivanov, Igor P.; Jiménez, Enrique; Pasechnik, Roman; Serôdio, Hugo

    2018-01-01

    We explore the phenomenology of a unique three-Higgs-doublet model based on the single CP symmetry of order 4 (CP4) without any accidental symmetries. The CP4 symmetry is imposed on the scalar potential and Yukawa interactions, strongly shaping both sectors of the model and leading to a very characteristic phenomenology. The scalar sector is analyzed in detail, and in the Yukawa sector we list all possible CP4-symmetric structures which do not run into immediate conflict with experiment, namely, do not lead to massless or mass-degenerate quarks nor to insufficient mixing or CP violation in the CKM matrix. We show that the parameter space of the model, although very constrained by CP4, is large enough to comply with the electroweak precision data and the LHC results for the 125 GeV Higgs boson phenomenology, as well as to perfectly reproduce all fermion masses, mixing, and CP violation. Despite the presence of flavor-changing neutral currents mediated by heavy Higgs scalars, we find through a parameter space scan many points which accurately reproduce the kaon CP-violating parameter ε_K as well as oscillation parameters in K and B(s) mesons. Thus, CP4 offers a novel minimalistic framework for building models with very few assumptions, sufficient predictive power, and rich phenomenology yet to be explored.

  13. A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions

    NASA Astrophysics Data System (ADS)

    Lienert, Sebastian; Joos, Fortunat

    2018-05-01

    A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties, Latin hypercube sampling (LHS) is used to create a 1000-member perturbed-parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90% confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC for the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.
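Latin hypercube sampling, used above to build the 1000-member ensemble, stratifies each parameter range into N intervals and draws exactly one value per interval, shuffling the strata independently per dimension. A self-contained sketch with two hypothetical parameter ranges (the real ensemble perturbs many DGVM parameters):

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Draw an (n_samples x n_params) Latin hypercube sample.

    bounds: list of (low, high) per parameter. Each range is split into
    n_samples equal strata; one point is drawn per stratum, and the
    strata are shuffled independently for each dimension.
    """
    rng = np.random.default_rng(rng)
    n_params = len(bounds)
    # Shuffled stratum indices per dimension, jittered within each stratum.
    strata = rng.permuted(np.tile(np.arange(n_samples), (n_params, 1)), axis=1).T
    u = (strata + rng.random((n_samples, n_params))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# e.g., a 1000-member ensemble over two hypothetical parameter ranges:
ensemble = latin_hypercube(1000, [(0.1, 0.9), (1e-7, 1e-5)], rng=0)
```

Compared with plain random sampling, this guarantees every marginal range is covered evenly even for modest ensemble sizes, which matters when each member costs a full model run.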

  14. Cosmology with galaxy cluster phase spaces

    NASA Astrophysics Data System (ADS)

    Stark, Alejo; Miller, Christopher J.; Huterer, Dragan

    2017-07-01

    We present a novel approach to constrain accelerating cosmologies with galaxy cluster phase spaces. With the Fisher matrix formalism we forecast constraints on the cosmological parameters that describe the cosmological expansion history. We find that our probe has the potential of providing constraints comparable to, or even stronger than, those from other cosmological probes. More specifically, with 1000 (100) clusters uniformly distributed in the redshift range 0 ≤ z ≤ 0.8, after applying a conservative 80% mass scatter prior on each cluster and marginalizing over all other parameters, we forecast 1σ constraints on the dark energy equation of state w and matter density parameter ΩM of σw = 0.138 (0.431) and σΩM = 0.007 (0.025) in a flat universe. Assuming 40% mass scatter and adding a prior on the Hubble constant, we can achieve a constraint on the Chevallier-Polarski-Linder parametrization of the dark energy equation of state parameters w0 and wa with 100 clusters in the same redshift range: σw0 = 0.191 and σwa = 2.712. Dropping the assumption of flatness and assuming w = -1, we also attain competitive constraints on the matter and dark energy density parameters: σΩM = 0.101 and σΩΛ = 0.197 for 100 clusters uniformly distributed in the range 0 ≤ z ≤ 0.8 after applying a prior on the Hubble constant. We also discuss various observational strategies for tightening constraints in both the near and far future.
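A Fisher matrix forecast of the kind used above follows a standard recipe: build F_ij = Σ_k (∂μ_k/∂θ_i)(∂μ_k/∂θ_j)/σ_k² from the model derivatives and measurement errors, invert it, and read marginalized 1σ errors off the diagonal of F⁻¹. The toy linear observable and noise level below are invented for illustration; the study's actual observables are galaxy-cluster phase-space measurements:

```python
import numpy as np

# Fisher-matrix forecast for a toy model mu(x) = a + b*x with Gaussian
# errors. The marginalized 1-sigma error on parameter i is sqrt((F^-1)_ii).
x = np.linspace(0.0, 0.8, 100)           # e.g., 100 cluster redshifts
sigma = 0.1 * np.ones_like(x)            # per-point measurement error

# Derivatives of the model mean w.r.t. (a, b): d mu/da = 1, d mu/db = x.
J = np.column_stack([np.ones_like(x), x])
F = J.T @ (J / sigma[:, None] ** 2)      # 2x2 Fisher matrix

cov = np.linalg.inv(F)                   # forecast parameter covariance
sigma_a, sigma_b = np.sqrt(np.diag(cov)) # marginalized 1-sigma errors
```

Marginalizing (inverting the full matrix) always gives errors at least as large as fixing the other parameters (1/√F_ii), which is why priors on nuisance parameters such as the Hubble constant tighten the forecast.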

  15. CONSTRAINING RELATIVISTIC BOW SHOCK PROPERTIES IN ROTATION-POWERED MILLISECOND PULSAR BINARIES.

    PubMed

    Wadiasingh, Zorawar; Harding, Alice K; Venter, Christo; Böttcher, Markus; Baring, Matthew G

    2017-04-20

    Multiwavelength followup of unidentified Fermi sources has vastly expanded the number of known galactic-field "black widow" and "redback" millisecond pulsar binaries. Focusing on their rotation-powered state, we interpret the radio to X-ray phenomenology in a consistent framework. We advocate the existence of two distinct modes differing in their intrabinary shock orientation, distinguished by the phase-centering of the double-peaked X-ray orbital modulation originating from mildly relativistic Doppler boosting. By constructing a geometric model for radio eclipses, we constrain the shock geometry as functions of binary inclination and shock stand-off R0. We develop synthetic X-ray synchrotron orbital light curves and explore the model parameter space allowed by radio eclipse constraints applied to the archetypal systems B1957+20 and J1023+0038. For B1957+20, radio eclipses imply a stand-off of R0 ~ 0.15-0.3 of the binary separation from the companion center, depending on the orbit inclination. Constructed X-ray light curves for B1957+20 using these values are qualitatively consistent with those observed, and we find occultation of the shock by the companion to be a minor influence, demanding significant Doppler factors to yield double peaks. For J1023+0038, radio eclipses imply R0 ≲ 0.4 while X-ray light curves suggest 0.1 ≲ R0 ≲ 0.3 (from the pulsar). Degeneracies in the model parameter space encourage further development to include transport considerations. Generically, the spatial variation along the shock of the underlying electron power-law index should yield energy dependence in the shape of light curves, motivating future X-ray phase-resolved spectroscopic studies to probe the unknown physics of pulsar winds and relativistic shock acceleration therein.

  16. CONSTRAINING RELATIVISTIC BOW SHOCK PROPERTIES IN ROTATION-POWERED MILLISECOND PULSAR BINARIES

    PubMed Central

    Wadiasingh, Zorawar; Harding, Alice K.; Venter, Christo; Böttcher, Markus; Baring, Matthew G.

    2018-01-01

    Multiwavelength followup of unidentified Fermi sources has vastly expanded the number of known galactic-field “black widow” and “redback” millisecond pulsar binaries. Focusing on their rotation-powered state, we interpret the radio to X-ray phenomenology in a consistent framework. We advocate the existence of two distinct modes differing in their intrabinary shock orientation, distinguished by the phase-centering of the double-peaked X-ray orbital modulation originating from mildly-relativistic Doppler boosting. By constructing a geometric model for radio eclipses, we constrain the shock geometry as functions of binary inclination and shock stand-off R0. We develop synthetic X-ray synchrotron orbital light curves and explore the model parameter space allowed by radio eclipse constraints applied on archetypal systems B1957+20 and J1023+0038. For B1957+20, from radio eclipses the stand-off is R0 ~ 0.15–0.3 fraction of binary separation from the companion center, depending on the orbit inclination. Constructed X-ray light curves for B1957+20 using these values are qualitatively consistent with those observed, and we find occultation of the shock by the companion as a minor influence, demanding significant Doppler factors to yield double peaks. For J1023+0038, radio eclipses imply R0 ≲ 0.4 while X-ray light curves suggest 0.1 ≲ R0 ≲ 0.3 (from the pulsar). Degeneracies in the model parameter space encourage further development to include transport considerations. Generically, the spatial variation along the shock of the underlying electron power-law index should yield energy-dependence in the shape of light curves motivating future X-ray phase-resolved spectroscopic studies to probe the unknown physics of pulsar winds and relativistic shock acceleration therein. PMID:29651167

  17. Constraining Relativistic Bow Shock Properties in Rotation-Powered Millisecond Pulsar Binaries

    NASA Technical Reports Server (NTRS)

    Wadiasingh, Zorawar; Harding, Alice K.; Venter, Christo; Bottcher, Markus; Baring, Matthew G.

    2017-01-01

    Multiwavelength follow-up of unidentified Fermi sources has vastly expanded the number of known galactic-field "black widow" and "redback" millisecond pulsar binaries. Focusing on their rotation-powered state, we interpret the radio to X-ray phenomenology in a consistent framework. We advocate the existence of two distinct modes differing in their intrabinary shock orientation, distinguished by the phase-centering of the double-peaked X-ray orbital modulation originating from mildly-relativistic Doppler boosting. By constructing a geometric model for radio eclipses, we constrain the shock geometry as functions of binary inclination and shock stand-off R0. We develop synthetic X-ray synchrotron orbital light curves and explore the model parameter space allowed by radio eclipse constraints applied on archetypal systems B1957+20 and J1023+0038. For B1957+20, from radio eclipses the stand-off is R0 ≈ 0.15–0.3 fraction of binary separation from the companion center, depending on the orbit inclination. Constructed X-ray light curves for B1957+20 using these values are qualitatively consistent with those observed, and we find occultation of the shock by the companion as a minor influence, demanding significant Doppler factors to yield double peaks. For J1023+0038, radio eclipses imply R0 ≲ 0.4 while X-ray light curves suggest 0.1 ≲ R0 ≲ 0.3 (from the pulsar). Degeneracies in the model parameter space encourage further development to include transport considerations. Generically, the spatial variation along the shock of the underlying electron power-law index should yield energy-dependence in the shape of light curves motivating future X-ray phase-resolved spectroscopic studies to probe the unknown physics of pulsar winds and relativistic shock acceleration therein.

  18. Constraining Relativistic Bow Shock Properties in Rotation-powered Millisecond Pulsar Binaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wadiasingh, Zorawar; Venter, Christo; Böttcher, Markus

    2017-04-20

    Multiwavelength follow-up of unidentified Fermi sources has vastly expanded the number of known galactic-field “black widow” and “redback” millisecond pulsar binaries. Focusing on their rotation-powered state, we interpret the radio to X-ray phenomenology in a consistent framework. We advocate the existence of two distinct modes differing in their intrabinary shock orientation, distinguished by the phase centering of the double-peaked X-ray orbital modulation originating from mildly relativistic Doppler boosting. By constructing a geometric model for radio eclipses, we constrain the shock geometry as functions of binary inclination and shock standoff R0. We develop synthetic X-ray synchrotron orbital light curves and explore the model parameter space allowed by radio eclipse constraints applied on archetypal systems B1957+20 and J1023+0038. For B1957+20, from radio eclipses the standoff is R0 ∼ 0.15–0.3 fraction of binary separation from the companion center, depending on the orbit inclination. Constructed X-ray light curves for B1957+20 using these values are qualitatively consistent with those observed, and we find occultation of the shock by the companion as a minor influence, demanding significant Doppler factors to yield double peaks. For J1023+0038, radio eclipses imply R0 ≲ 0.4, while X-ray light curves suggest 0.1 ≲ R0 ≲ 0.3 (from the pulsar). Degeneracies in the model parameter space encourage further development to include transport considerations. Generically, the spatial variation along the shock of the underlying electron power-law index should yield energy dependence in the shape of light curves, motivating future X-ray phase-resolved spectroscopic studies to probe the unknown physics of pulsar winds and relativistic shock acceleration therein.

  19. Extending a multi-scale parameter regionalization (MPR) method by introducing parameter constrained optimization and flexible transfer functions

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2015-04-01

    A multi-scale parameter-estimation method, as presented by Samaniego et al. (2010), is implemented and extended for the conceptual hydrological model COSERO. COSERO is an HBV-type model that is specialized for alpine environments, but has been applied over a wide range of basins all over the world (see Kling et al., 2014 for an overview). Within the methodology, available small-scale information (DEM, soil texture, land cover, etc.) is used to estimate the coarse-scale model parameters by applying a set of transfer functions (TFs) and subsequent averaging methods, whereby only TF hyper-parameters are optimized against available observations (e.g. runoff data). The parameter regionalization approach was extended to allow a more metaheuristic handling of the transfer functions. The two main novelties are: 1. An explicit introduction of constraints into the parameter-estimation scheme: The constraint scheme replaces invalid parts of the transfer-function solution space with valid solutions. It is inspired by applications in evolutionary algorithms and is related to the combination of learning and evolution. This allows the consideration of physical and numerical constraints as well as the incorporation of a priori modeller experience into the parameter estimation. 2. Spline-based transfer functions: Spline-based functions enable arbitrary forms of transfer functions. This is of importance since in many cases the general relationship between sub-grid information and parameters is known, but not the form of the transfer function itself. The contribution presents the results and experiences with the adopted method and the introduced extensions. Simulations are performed for the pre-alpine/alpine Traisen catchment in Lower Austria. References: Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., doi: 10.1029/2008WR007327. Kling, H., Stanzel, P., Fuchs, M., and Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi: 10.1080/02626667.2014.959956.
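
    The two extensions can be illustrated with a minimal sketch (hypothetical names and values; a piecewise-linear "spline" stands in for the paper's more general spline TFs): a transfer function maps sub-grid attributes to a parameter, the point values are averaged to the coarse scale, and the constraint scheme clips the result to a valid range:

```python
def linear_spline(xs, ys, x):
    """Evaluate a piecewise-linear transfer function defined by knots (xs, ys)."""
    if x <= xs[0]:
        return ys[0]
    for j in range(1, len(xs)):
        if x <= xs[j]:
            t = (x - xs[j - 1]) / (xs[j] - xs[j - 1])
            return ys[j - 1] + t * (ys[j] - ys[j - 1])
    return ys[-1]

def regionalize(subgrid, xs, ys, lower, upper):
    """Apply the transfer function to each sub-grid value, average to the
    coarse scale, then clip to the [lower, upper] constraint interval."""
    coarse = sum(linear_spline(xs, ys, v) for v in subgrid) / len(subgrid)
    return min(max(coarse, lower), upper)
```

    In a calibration loop, only the knot values (the TF hyper-parameters) would be optimized against runoff observations; the sub-grid inputs stay fixed.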

  20. New method to design stellarator coils without the winding surface

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...

    2017-11-06

    Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Furthermore, applications to a simple stellarator configuration, W7-X and LHD vacuum fields are presented.
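
    The core idea, descending analytic gradients of a target over free closed-curve parameters, can be shown on a toy 2D analogue (all quantities illustrative, not the FOCUS target): a discretized closed curve minimizes its length plus a penalty holding each node near a prescribed radius, and relaxes toward a circle:

```python
import math

def target_and_grad(pts, R=1.0, k=20.0):
    """Toy target: total curve length + penalty holding nodes at radius R.
    Returns the target value and its analytic gradient (2D points as complex)."""
    n = len(pts)
    grad = [0j] * n
    T = 0.0
    for i in range(n):
        e = pts[i] - pts[i - 1]        # edge from node i-1 to node i (closed curve)
        d = abs(e)
        T += d
        u = e / d                      # d|e|/d(pts[i]) is the unit edge vector
        grad[i] += u
        grad[i - 1] -= u
        r = abs(pts[i])
        T += k * (r - R) ** 2          # stand-in for an 'engineering' constraint term
        grad[i] += 2 * k * (r - R) * pts[i] / r
    return T, grad

# start from an ellipse and follow the analytic gradient downhill
n = 40
pts = [complex(1.4 * math.cos(2 * math.pi * i / n),
               0.6 * math.sin(2 * math.pi * i / n)) for i in range(n)]
for _ in range(2000):
    _, g = target_and_grad(pts)
    pts = [p - 0.005 * gi for p, gi in zip(pts, g)]

radii = [abs(p) for p in pts]          # near-uniform: the curve has rounded out
```

    The analytic gradient is what makes this scale: FOCUS differentiates its target with respect to every curve parameter and current in closed form rather than by finite differences.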

  1. Modeling of exoplanets interiors in the framework of future space missions

    NASA Astrophysics Data System (ADS)

    Brugger, B.; Mousis, O.; Deleuil, M.

    2017-12-01

    Probing the interior of exoplanets with known masses and radii is possible via the use of models of internal structure. Here we present a model able to handle various planetary compositions, from terrestrial bodies to ocean worlds or carbon-rich planets, and its application to the case of CoRoT-7b. Using the elemental abundances of an exoplanet’s host star, we significantly reduce the degeneracy limiting such models. This further constrains the type and state of material present at the surface, and helps estimate the composition of a secondary atmosphere that could form in these conditions through potential outgassing. Upcoming space missions dedicated to exoplanet characterization, such as PLATO, will provide accurate fundamental parameters of Earth-like planets orbiting in the habitable zone, for which our model is well adapted.

  2. Emittance preservation during bunch compression with a magnetized beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stratakis, Diktys

    2015-09-02

    The deleterious effects of coherent synchrotron radiation (CSR) on the phase-space and energy spread of high-energy beams in accelerator light sources can significantly constrain the machine design and performance. In this paper, we present a simple method to preserve the beam emittance by means of using magnetized beams that exhibit a large aspect ratio in their transverse dimensions. The concept is based on combining a finite solenoid field where the beam is generated together with a special optics adapter. Numerical simulations of this new type of beam source show that the induced phase-space density growth can be notably suppressed to less than 1% for any bunch charge. This work elucidates the key parameters that are needed for emittance preservation, such as the required field and aspect ratio for a given bunch charge.

  3. New method to design stellarator coils without the winding surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao

    Finding an easy-to-build coil set has been a critical issue for stellarator design for decades. Conventional approaches assume a toroidal 'winding' surface, but a poorly chosen winding surface can unnecessarily constrain the coil optimization algorithm. This article presents a new method to design coils for stellarators. Each discrete coil is represented as an arbitrary, closed, one-dimensional curve embedded in three-dimensional space. A target function to be minimized that includes both physical requirements and engineering constraints is constructed. The derivatives of the target function with respect to the parameters describing the coil geometries and currents are calculated analytically. A numerical code, named flexible optimized coils using space curves (FOCUS), has been developed. Furthermore, applications to a simple stellarator configuration, W7-X and LHD vacuum fields are presented.

  4. Two statistics for evaluating parameter identifiability and error reduction

    USGS Publications Warehouse

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level, where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero, and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
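
    A minimal sketch of the identifiability statistic (a simplification: with no singular-value truncation, the calibration solution space is just the row space of the weighted sensitivity matrix J, so Gram-Schmidt can stand in for the SVD):

```python
import math

def orthonormal_rows(J, tol=1e-12):
    """Gram-Schmidt orthonormal basis for the row space of J
    (the calibration solution space, absent truncation)."""
    basis = []
    for row in J:
        v = list(row)
        for b in basis:
            c = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - c * bi for vi, bi in zip(v, b)]
        norm = math.sqrt(sum(vi * vi for vi in v))
        if norm > tol:
            basis.append([vi / norm for vi in v])
    return basis

def identifiability(J, i):
    """Direction cosine between parameter axis e_i and its projection onto
    the solution space: 0 = completely non-identifiable, 1 = fully identifiable."""
    basis = orthonormal_rows(J)
    return math.sqrt(sum(b[i] ** 2 for b in basis))

# two observations constraining three parameters: the third is unconstrained
J = [[1.0, 0.0, 0.0],
     [1.0, 1.0, 0.0]]
```

    Here `identifiability(J, 2)` is zero because no observation is sensitive to the third parameter, while the first two parameters lie entirely in the solution space.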

  5. Design of the 1.5 MW, 30-96 MHz ultra-wideband 3 dB high power hybrid coupler for Ion Cyclotron Resonance Frequency (ICRF) heating in fusion grade reactor.

    PubMed

    Yadav, Rana Pratap; Kumar, Sunil; Kulkarni, S V

    2016-01-01

    The design and development procedure of a strip-line-based 1.5 MW, 30-96 MHz, ultra-wideband high power 3 dB hybrid coupler is presented, and its applicability to ion cyclotron resonance heating (ICRH) in a tokamak is discussed. For high power handling capability, the spacing between the conductors and ground needs to be very high. Hence other structural parameters such as strip width, strip thickness, coupling gap, and junction size also become large; these can be increased only up to an optimum limit, beyond which various constraints such as fabrication tolerance, discontinuities, and the excitation of higher TE and TM modes become prominent and significantly deteriorate the desired parameters of the coupled-line system. In the designed hybrid coupler, two 8.34 dB coupled lines are connected in tandem to achieve the desired coupling of 3 dB, and air is used as the dielectric. The spacing between ground and conductors is taken as 0.164 m for 1.5 MW power handling capability. To achieve the desired spacing, each of the 8.34 dB segments is designed with inner dimensions of 3.6 × 1.0 × 40 cm, where the constraints have been realized, compensated, and applied in the design of the 1.5 MW hybrid coupler, as presented in the paper.

  6. Design of the 1.5 MW, 30-96 MHz ultra-wideband 3 dB high power hybrid coupler for Ion Cyclotron Resonance Frequency (ICRF) heating in fusion grade reactor

    NASA Astrophysics Data System (ADS)

    Yadav, Rana Pratap; Kumar, Sunil; Kulkarni, S. V.

    2016-01-01

    The design and development procedure of a strip-line-based 1.5 MW, 30-96 MHz, ultra-wideband high power 3 dB hybrid coupler is presented, and its applicability to ion cyclotron resonance heating (ICRH) in a tokamak is discussed. For high power handling capability, the spacing between the conductors and ground needs to be very high. Hence other structural parameters such as strip width, strip thickness, coupling gap, and junction size also become large; these can be increased only up to an optimum limit, beyond which various constraints such as fabrication tolerance, discontinuities, and the excitation of higher TE and TM modes become prominent and significantly deteriorate the desired parameters of the coupled-line system. In the designed hybrid coupler, two 8.34 dB coupled lines are connected in tandem to achieve the desired coupling of 3 dB, and air is used as the dielectric. The spacing between ground and conductors is taken as 0.164 m for 1.5 MW power handling capability. To achieve the desired spacing, each of the 8.34 dB segments is designed with inner dimensions of 3.6 × 1.0 × 40 cm, where the constraints have been realized, compensated, and applied in the design of the 1.5 MW hybrid coupler, as presented in the paper.

  7. Space Weathering of Itokawa Particles: Implications for Regolith Evolution

    NASA Technical Reports Server (NTRS)

    Berger, Eve L.; Keller, Lindsay P.

    2015-01-01

    Space weathering processes such as solar wind irradiation and micrometeorite impacts are known to alter the properties of regolith materials exposed on airless bodies. The rates of space weathering processes, however, are poorly constrained for asteroid regoliths, with recent estimates ranging over many orders of magnitude. The return of surface samples by JAXA's Hayabusa mission to asteroid 25143 Itokawa, and their laboratory analysis, provides "ground truth" to anchor the timescales for space weathering processes on airless bodies. Here, we use the effects of solar wind irradiation and the accumulation of solar flare tracks recorded in Itokawa grains to constrain the rates of space weathering and yield information about regolith dynamics on these timescales.

  8. Nonlinear programming extensions to rational function approximations of unsteady aerodynamics

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1987-01-01

    This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.

  9. New bounds on axionlike particles from the Fermi Large Area Telescope observation of PKS 2155 -304

    NASA Astrophysics Data System (ADS)

    Zhang, Cun; Liang, Yun-Feng; Li, Shang; Liao, Neng-Hui; Feng, Lei; Yuan, Qiang; Fan, Yi-Zhong; Ren, Zhong-Zhou

    2018-03-01

    The axionlike particle (ALP)-photon mixing in the magnetic field around γ-ray sources or along the line of sight could induce oscillation between photons and ALPs, which then causes irregularities in the γ-ray spectra. In this work we search for such spectral irregularities in the spectrum of PKS 2155-304 using 8.6 years of data from the Fermi Large Area Telescope (Fermi-LAT). No significant evidence for the presence of ALP-photon oscillation is obtained, and the parameter space of ALPs is constrained. The exclusion region sensitively depends on the poorly known magnetic field of the host galaxy cluster of PKS 2155-304. If the magnetic field is as high as ~10 μG, the "holelike" parameter region allowed in Ref. [1] can be ruled out.

  10. Killing the cMSSM softly

    DOE PAGES

    Bechtle, Philip; Camargo-Molina, José Eliel; Desch, Klaus; ...

    2016-02-24

    We investigate the constrained Minimal Supersymmetric Standard Model (cMSSM) in the light of constraining experimental and observational data from precision measurements, astrophysics, direct supersymmetry searches at the LHC and measurements of the properties of the Higgs boson, by means of a global fit using the program Fittino. As in previous studies, we find rather poor agreement of the best fit point with the global data. We also investigate the stability of the electro-weak vacuum in the preferred region of parameter space around the best fit point. We find that the vacuum is metastable, with a lifetime significantly longer than the age of the Universe. For the first time in a global fit of supersymmetry, we employ a consistent methodology to evaluate the goodness-of-fit of the cMSSM in a frequentist approach by deriving p values from large sets of toy experiments. We analyse analytically and quantitatively the impact of the choice of the observable set on the p value, and in particular its dilution when confronting the model with a large number of barely constraining measurements. Lastly, for the preferred sets of observables, we obtain p values for the cMSSM below 10%, i.e. we exclude the cMSSM as a model at the 90% confidence level.

  11. Tests of chameleon gravity

    NASA Astrophysics Data System (ADS)

    Burrage, Clare; Sakstein, Jeremy

    2018-03-01

    Theories of modified gravity, where light scalars with non-trivial self-interactions and non-minimal couplings to matter—chameleon and symmetron theories—dynamically suppress deviations from general relativity in the solar system. On other scales, the environmental nature of the screening means that such scalars may be relevant. The highly-nonlinear nature of screening mechanisms means that they evade classical fifth-force searches, and there has been an intense effort towards designing new and novel tests to probe them, both in the laboratory and using astrophysical objects, and by reinterpreting existing datasets. The results of these searches are often presented using different parametrizations, which can make it difficult to compare constraints coming from different probes. The purpose of this review is to summarize the present state-of-the-art searches for screened scalars coupled to matter, and to translate the current bounds into a single parametrization to survey the state of the models. Presently, commonly studied chameleon models are well-constrained but less commonly studied models have large regions of parameter space that are still viable. Symmetron models are constrained well by astrophysical and laboratory tests, but there is a desert separating the two scales where the model is unconstrained. The coupling of chameleons to photons is tightly constrained but the symmetron coupling has yet to be explored. We also summarize the current bounds on f(R) models that exhibit the chameleon mechanism (Hu and Sawicki models). The simplest of these are well constrained by astrophysical probes, but there are currently few reported bounds for theories with higher powers of R. The review ends by discussing the future prospects for constraining screened modified gravity models further using upcoming and planned experiments.

  12. 3-D model-based Bayesian classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soenneland, L.; Tenneboe, P.; Gehrmann, T.

    1994-12-31

    The challenging task of the interpreter is to integrate different pieces of information and combine them into an earth model. The sophistication level of this earth model might vary from the simplest geometrical description to the most complex set of reservoir parameters related to the geometrical description. Obviously the sophistication level also depends on the completeness of the available information. The authors describe the interpreter's task as a mapping between the observation space and the model space. The information available to the interpreter exists in observation space, and the task is to infer a model in model space. It is well-known that this inversion problem is non-unique. Therefore any attempt to find a solution depends on constraints being added in some manner. The solution will obviously depend on which constraints are introduced, and it would be desirable to allow the interpreter to modify the constraints in a problem-dependent manner. The authors present a probabilistic framework that gives the interpreter the tools to integrate the different types of information and produce constrained solutions. The constraints can be adapted to the problem at hand.

  13. The HVAC Challenges of Upgrading an Old Lab for High-end Light Microscopes

    PubMed Central

    Richard, R.; Martone, P.; Callahan, L.M.

    2014-01-01

    The University of Rochester Medical Center forms the centerpiece of the University of Rochester's health research, teaching, patient care, and community outreach missions. Within this large facility of over 5 million square feet, demolition and remodeling of existing spaces is a constant activity. With more than $145 million in federal research funding, lab space is frequently repurposed and renovated to support this work. The URMC Medical Center Facilities Organization supporting small to medium space renovations is constantly challenged and constrained by the existing mechanical infrastructure and budgets to deliver a renovated space that functions within the equipment environmental parameters. One recent project, sponsored by the URMC Shared Resources Laboratory, demonstrates these points. The URMC Light Microscopy Shared Resource Laboratory requested renovation of a 121 sq. ft. room in a 40 year old building which would enable placement of a laser capture microdissection microscope and a Pascal 5 laser scanning confocal microscope with the instruments separated by a blackout curtain. This poster discusses the engineering approach implemented to bring an older lab into the environmental specifications needed for the proper operation of the high-end light microscopes.

  14. Lightweight Radiator for in Space Nuclear Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Craven, Paul; Tomboulian, Briana; SanSoucie, Michael

    2014-01-01

    Nuclear electric propulsion (NEP) is a promising option for high-speed in-space travel due to the high energy density of nuclear fission power sources and efficient electric thrusters. Advanced power conversion technologies may require high operating temperatures and would benefit from lightweight radiator materials. Radiator performance dictates power output for nuclear electric propulsion systems. Game-changing propulsion systems are often enabled by novel designs using advanced materials. Pitch-based carbon fiber materials have the potential to offer significant improvements in operating temperature, thermal conductivity, and mass. These properties combine to allow advances in operational efficiency and high temperature feasibility. An effort at the NASA Marshall Space Flight Center to show that woven high thermal conductivity carbon fiber mats can be used to replace standard metal and composite radiator fins to dissipate waste heat from NEP systems is ongoing. The goals of this effort are to demonstrate a proof of concept, to show that a significant improvement of specific power (power/mass) can be achieved, and to develop a thermal model with predictive capabilities making use of constrained input parameter space. A description of this effort is presented.

  15. Coordinated trajectory planning of dual-arm space robot using constrained particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich

    2018-05-01

    Application of the multi-arm space robot will be more effective than single arm especially when the target is tumbling. This paper investigates the application of particle swarm optimization (PSO) strategy to coordinated trajectory planning of the dual-arm space robot in free-floating mode. In order to overcome the dynamics singularities issue, the direct kinematics equations in conjunction with constrained PSO are employed for coordinated trajectory planning of dual-arm space robot. The joint trajectories are parametrized with Bézier curve to simplify the calculation. Constrained PSO scheme with adaptive inertia weight is implemented to find the optimal solution of joint trajectories while specific objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue due to the application of forward kinematic equations. Simulation results are presented for coordinated trajectory planning of two kinematically redundant manipulators mounted on a free-floating spacecraft and demonstrate the effectiveness of the proposed method.
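
    The optimization machinery can be sketched in miniature (the objective and constraint below are illustrative stand-ins, not the paper's free-floating dynamics or Bézier parametrization): a PSO with a linearly decreasing ("adaptive") inertia weight minimizes a cost while a quadratic penalty enforces an imposed constraint:

```python
import random

random.seed(0)

def objective(p):
    """Illustrative cost with a quadratic penalty for the constraint x + y <= 2.5."""
    x, y = p
    f = (x - 1.0) ** 2 + (y - 2.0) ** 2
    f += 1e3 * max(0.0, x + y - 2.5) ** 2
    return f

n_particles, n_iter = 30, 200
pos = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n_particles)]
vel = [[0.0, 0.0] for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=objective)[:]

for it in range(n_iter):
    w = 0.9 - 0.5 * it / n_iter          # inertia shrinks as the swarm converges
    for i in range(n_particles):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            vel[i][d] = max(-1.0, min(1.0, vel[i][d]))   # velocity clamp
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=objective)[:]
```

    In the paper's setting, each particle would instead encode the Bézier control points of all joint trajectories, and the penalty would cover joint limits and task constraints.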

  16. The extended Baryon Oscillation Spectroscopic Survey: a cosmological forecast

    NASA Astrophysics Data System (ADS)

    Zhao, Gong-Bo; Wang, Yuting; Ross, Ashley J.; Shandera, Sarah; Percival, Will J.; Dawson, Kyle S.; Kneib, Jean-Paul; Myers, Adam D.; Brownstein, Joel R.; Comparat, Johan; Delubac, Timothée; Gao, Pengyuan; Hojjati, Alireza; Koyama, Kazuya; McBride, Cameron K.; Meza, Andrés; Newman, Jeffrey A.; Palanque-Delabrouille, Nathalie; Pogosian, Levon; Prada, Francisco; Rossi, Graziano; Schneider, Donald P.; Seo, Hee-Jong; Tao, Charling; Wang, Dandan; Yèche, Christophe; Zhang, Hanyu; Zhang, Yuecheng; Zhou, Xu; Zhu, Fangzhou; Zou, Hu

    2016-04-01

    We present a science forecast for the extended Baryon Oscillation Spectroscopic Survey (eBOSS) survey. Focusing on discrete tracers, we forecast the expected accuracy of the baryonic acoustic oscillation (BAO), the redshift-space distortion (RSD) measurements, the fNL parameter quantifying the primordial non-Gaussianity, the dark energy and modified gravity parameters. We also use the line-of-sight clustering in the Lyman α forest to constrain the total neutrino mass. We find that eBOSS luminous red galaxies, emission line galaxies and clustering quasars can achieve a precision of 1, 2.2 and 1.6 per cent, respectively, for spherically averaged BAO distance measurements. Using the same samples, the constraint on fσ8 is expected to be 2.5, 3.3 and 2.8 per cent, respectively. For primordial non-Gaussianity, eBOSS alone can reach an accuracy of σ(fNL) ~ 10-15. eBOSS can at most improve the dark energy figure of merit by a factor of 3 for the Chevallier-Polarski-Linder parametrization, and can well constrain three eigenmodes for the general equation-of-state parameter. eBOSS can also significantly improve constraints on modified gravity parameters by providing the RSD information, which is highly complementary to constraints obtained from weak lensing measurements. A principal component analysis shows that eBOSS can measure the eigenmodes of the effective Newton's constant to 2 per cent precision; this is a factor of 10 improvement over that achievable without eBOSS. Finally, we derive the eBOSS constraint (combined with Planck, Dark Energy Survey and BOSS) on the total neutrino mass, σ(Σmν) = 0.03 eV (68 per cent CL), which in principle makes it possible to distinguish between the two scenarios of neutrino mass hierarchies.

  17. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is important because it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
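    A minimal random-walk Metropolis sampler illustrates the Bayesian sampling idea behind such an inversion. This is a generic sketch, not the QUESO implementation; the linear "forward model", the parameter values, and the step size are all invented for illustration.

```python
import numpy as np

def log_posterior(theta, data, forward_model, sigma=1.0):
    """Gaussian log-likelihood of the waveform misfit plus a flat prior."""
    synthetic = forward_model(theta)
    return -0.5 * np.sum((data - synthetic) ** 2) / sigma**2

def metropolis(log_post, theta0, n_steps=5000, step=1.0, rng=None):
    """Random-walk Metropolis sampler over the rupture-parameter vector."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_new = log_post(proposal)
        if np.log(rng.random()) < lp_new - lp:  # Metropolis accept/reject
            theta, lp = proposal, lp_new
        samples.append(theta.copy())
    return np.array(samples)

# Toy forward model: the "waveform" is a linear map of two rupture parameters.
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
true_theta = np.array([2.0, -1.0])
data = A @ true_theta
chain = metropolis(lambda t: log_posterior(t, data, lambda th: A @ th),
                   theta0=np.zeros(2))
posterior_mean = chain[1000:].mean(axis=0)  # discard burn-in
```

The full posterior chain, not just the mean, is what allows the uncertainty assessment described in the abstract.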

  18. Characterization of the High-Albedo NEA 3691 Bede

    NASA Technical Reports Server (NTRS)

    Wooden, Diane H.; Lederer, Susan M.; Jehin, Emmanuel; Rozitis, Benjamin; Jefferson, Jeffrey D.; Nelson, Tyler W.; Dotson, Jessie L.; Ryan, Erin L.; Howell, Ellen S.; Fernandez, Yanga R.; et al.

    2016-01-01

    Characterization of NEAs provides important inputs to models for atmospheric entry, risk assessment and mitigation. Diameter is a key parameter because diameter translates to kinetic energy in atmospheric entry. Diameters can be derived from the absolute magnitude, H(PA=0deg), and from thermal modeling of observed IR fluxes. For both methods, the albedo (pv) is important: high-pv surfaces have cooler temperatures, larger diameters for a given Hmag, and shallower phase curves (larger slope parameter G). Thermal model parameters are coupled, however, so that a higher thermal inertia also results in a cooler surface temperature. Multiple parameters contribute to constraining the diameter. Observations made at multiple observing geometries can contribute to understanding the relationships between parameters and potentially breaking some of the degeneracies between them. We present data and analyses on NEA 3691 Bede with the aim of best constraining the diameter and pv from a combination of thermal modeling and light curve analyses. We employ our UKIRT+Michelle mid-IR photometric observations of 3691 Bede's thermal emission at 2 phase angles (27 & 43 deg; 2015-03-19 & 04-13), in addition to WISE data (33 deg; 2010-05-27, Mainzer+2011). Observing geometries differ by solar phase angle and by moderate changes in heliocentric distance (e.g., larger distances produce somewhat cooler surface temperatures). With the NEATM model and for a constant IR beaming parameter (eta = constant), there is a family of solutions for (diameter, pv, G, eta), where G is the slope parameter from the H-G relation. NEATM models employing Pravec+2012's choice of G=0.43 produce D=1.8 km and pv ≈ 0.4, given that G=0.43 is assumed from studies of main belt asteroids (Warner+2009). We present an analysis of the light curve of 3691 Bede to constrain G from observations.
We also investigate fitting thermophysical models (TPM, Rozitis+11) to constrain the coupled parameters of thermal inertia (Gamma) and surface roughness, which in turn affect diameter and pv. Surface composition can be related to pv. This study focuses on understanding and characterizing the dependencies of parameters with the aim of constraining diameter, pv and thermal inertia for 3691 Bede.
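    For reference, the standard conversion from absolute magnitude and geometric albedo to diameter, D(km) = 1329 · pv^(-1/2) · 10^(-H/5), shows how strongly the albedo drives the inferred size. The H value below is hypothetical, chosen only for illustration; it is not Bede's measured magnitude.

```python
import math

def diameter_km(H, pv):
    """Standard asteroid diameter from absolute magnitude H and
    geometric albedo pv: D = 1329 / sqrt(pv) * 10**(-H/5), in km."""
    return 1329.0 / math.sqrt(pv) * 10.0 ** (-H / 5.0)

# Hypothetical absolute magnitude, for illustration only.
H = 15.0
d_bright = diameter_km(H, 0.40)  # high-albedo solution
d_dark   = diameter_km(H, 0.10)  # low-albedo solution
# A factor-of-4 drop in albedo doubles the inferred diameter.
```

This D-pv degeneracy at fixed H is exactly why the thermal-model solutions above form a family rather than a unique point.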

  19. Characterization of the high-albedo NEA 3691 Bede

    NASA Astrophysics Data System (ADS)

    Wooden, Diane H.; Lederer, Susan M.; Jehin, Emmanuel; Rozitis, Benjamin; Jefferson, Jeffrey D.; Nelson, Tyler W.; Dotson, Jessie L.; Ryan, Erin L.; Howell, Ellen S.; Fernandez, Yanga R.; Lovell, Amy J.; Woodward, Charles E.; Harker, David Emerson

    2016-10-01

    Characterization of NEAs provides important inputs to models for atmospheric entry, risk assessment and mitigation. Diameter is a key parameter because diameter translates to kinetic energy in atmospheric entry. Diameters can be derived from the absolute magnitude, H(PA=0deg), and from thermal modeling of observed IR fluxes. For both methods, the albedo (pv) is important - high pv surfaces have cooler temperatures, larger diameters for a given Hmag, and shallower phase curves (larger slope parameter G). Thermal model parameters are coupled, however, so that a higher thermal inertia also results in a cooler surface temperature. Multiple parameters contribute to constraining the diameter.Observations made at multiple observing geometries can contribute to understanding the relationships between and potentially breaking some of the degeneracies between parameters. We present data and analyses on NEA 3691 Bede with the aim of best constraining the diameter and pv from a combination of thermal modeling and light curve analyses. We employ our UKIRT+Michelle mid-IR photometric observations of 3691 Bede's thermal emission at 2 phase angles (27&43 deg 2015-03-19 & 04-13), in addition to WISE data (33deg 2010-05-27, Mainzer+2011).Observing geometries differ by solar phase angles and by moderate changes in heliocentric distance (e.g., further distances produce somewhat cooler surface temperatures). With the NEATM model and for a constant IR beaming parameter (eta=constant), there is a family of solutions for (diameter, pv, G, eta) where G is the slope parameter from the H-G Relation. NEATM models employing Pravec+2012's choice of G=0.43, produce D=1.8 km and pv≈0.4, given that G=0.43 is assumed from studies of main belt asteroids (Warner+2009). We present an analysis of the light curve of 3691 Bede to constrain G from observations. 
We also investigate fitting thermophysical models (TPM, Rozitis+11) to constrain the coupled parameters of thermal inertia (Gamma) and surface roughness, which in turn affect diameter and pv. Surface composition can be related to pv. This study focuses on understanding and characterizing the dependency of parameters with the aim of constraining diameter, pv and thermal inertia for 3691 Bede.

  20. Effects of crustal layering on source parameter inversion from coseismic geodetic data

    NASA Astrophysics Data System (ADS)

    Amoruso, A.; Crescentini, L.; Fidani, C.

    2004-10-01

    We study the effect of a superficial layer overlying a half-space on the surface displacements caused by uniform slip of a dip-slip normal rectangular fault. We compute static coseismic displacements using a 3-D analytical code for different characteristics of the layered medium, different fault geometries and different configurations of bench marks to simulate different kinds of geodetic data (GPS, Synthetic Aperture Radar, and levelling). We perform both joint and separate inversions of the three components of synthetic displacement without constraining fault parameters, apart from strike and rake, using a non-linear global inversion technique under the assumption of a homogeneous half-space. Differences between synthetic displacements computed in the presence of the superficial soft layer and in a homogeneous half-space do not show a simple regular behaviour, even if a few features can be identified. Consequently, the parameters of the equivalent homogeneous fault retrieved by unconstrained inversion of surface displacements also do not show a simple regular behaviour. We point out that the presence of a superficial layer may lead to misestimating several fault parameters, using both joint and separate inversions of the three components of synthetic displacement, and that its effects can change depending on whether or not all fault parameters are left free in the inversions. In the inversion of any kind of coseismic geodetic data, fault size and slip can be largely misestimated, but the product (fault length) × (fault width) × slip, which is proportional to the seismic moment for a given rigidity modulus, is often well determined (within a few per cent).
Because inversion of coseismic geodetic data assuming a layered medium is impracticable, we suggest that a case-by-case study involving some form of recursive determination of fault parameters through data correction is the proper approach when layering is important.
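    The well-determined product noted above is the geometric part of the seismic moment, M0 = μ · L · W · s. A toy calculation (with an assumed rigidity μ) shows how compensating biases in length and width can leave the moment unchanged:

```python
# Seismic moment M0 = mu * L * W * s (rigidity x fault area x mean slip).
mu = 3.0e10                                   # Pa, assumed crustal rigidity
true_fault   = dict(L=20e3, W=10e3, s=1.0)    # length, width, slip in metres
biased_fault = dict(L=25e3, W=8.0e3, s=1.0)   # L overestimated, W underestimated

M0_true   = mu * true_fault["L"] * true_fault["W"] * true_fault["s"]
M0_biased = mu * biased_fault["L"] * biased_fault["W"] * biased_fault["s"]
# The individual dimensions disagree, but the moment is identical: 6.0e18 N m.
```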

  1. An opinion-driven behavioral dynamics model for addictive behaviors

    DOE PAGES

    Moore, Thomas W.; Finley, Patrick D.; Apelberg, Benjamin J.; ...

    2015-04-08

    We present a model of behavioral dynamics that combines a social network-based opinion dynamics model with behavioral mapping. The behavioral component is discrete and history-dependent to represent situations in which an individual's behavior is initially driven by opinion and later constrained by physiological or psychological conditions that serve to maintain the behavior. Additionally, individuals are modeled as nodes in a social network connected by directed edges. Parameter sweeps illustrate model behavior and the effects of individual parameters and parameter interactions on model results. Mapping a continuous opinion variable into a discrete behavioral space induces clustering on directed networks. Clusters provide targets of opportunity for influencing the network state; however, the smaller the network, the greater the stochasticity and potential variability in outcomes. This has implications both for behaviors that are influenced by close relationships versus those influenced by societal norms and for the effectiveness of strategies for influencing those behaviors.
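    A minimal sketch of coupling a continuous opinion update on a directed network with a hysteretic discrete behavior. All network sizes, thresholds, and update rules here are invented for illustration; this is not the authors' calibrated model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30

# Random directed influence network; row i holds the weights of i's influencers.
W = (rng.random((n, n)) < 0.1).astype(float)
np.fill_diagonal(W, 0.0)
row_sums = W.sum(axis=1, keepdims=True)
W = np.divide(W, row_sums, out=np.zeros_like(W), where=row_sums > 0)

opinion = rng.uniform(-1.0, 1.0, n)
behavior = (opinion > 0.5).astype(int)      # discrete behavioral state

for _ in range(100):
    # Opinion relaxes toward the weighted mean of in-neighbors.
    opinion = np.clip(0.8 * opinion + 0.2 * (W @ opinion), -1.0, 1.0)
    # History dependence: adopting the behavior requires opinion > 0.5,
    # but once adopted it persists until opinion drops below 0.0.
    behavior = np.where(behavior == 1,
                        (opinion > 0.0).astype(int),
                        (opinion > 0.5).astype(int))
```

The asymmetric thresholds are what make the behavior history-dependent: the same opinion value can correspond to either behavioral state depending on the individual's past.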

  2. Jet array impingement with crossflow-correlation of streamwise resolved flow and heat transfer distributions

    NASA Technical Reports Server (NTRS)

    Florschuetz, L. W.; Metzger, D. E.; Truman, C. R.

    1981-01-01

    Correlations are presented for heat transfer coefficients of circular air jets issuing from orifices and impinging on a surface parallel to the jet orifice plate. The air, following impingement, is constrained to exit in a single direction along the channel formed by the jet orifice plate and the heat transfer (impingement) surface. The downstream jets are subjected to a crossflow originating from the upstream jets. Impingement surface heat transfer coefficients, resolved to one streamwise jet orifice spacing and averaged across the channel span, are correlated with the associated individual spanwise-row jet and crossflow velocities, and with the geometric parameters.

  3. Wavy and Cycloidal Lineament Formation on Europa from Combined Diurnal and Nonsynchronous Stresses

    NASA Technical Reports Server (NTRS)

    Gleeson, Damhnait; Crawford, Zane; Barr, Amy C.; Mullen, McCall; Pappalardo, Robert T.; Prockter, Louise M.; Stempel, Michelle M.; Wahr, John

    2005-01-01

    In a companion abstract, we show that fractures propagated into combined diurnal and nonsynchronous rotation (NSR) stress fields can be cycloidal, "wavy," or arcuate in planform as the relative proportion of NSR stress is increased. These transitions occur as NSR stress accumulates over approx. 0 to 10 deg of ice shell rotation, for average fracture propagation speeds of approx. 1 to 3 m/s. Here we consider the NSR speed parameter space for these morphological transitions, and explore the effects on cycloids of adding NSR to diurnal stress. Fitting individual Europan lineaments can constrain the combined NSR plus diurnal stress field at the time of formation.

  4. On-Board Generation of Three-Dimensional Constrained Entry Trajectories

    NASA Technical Reports Server (NTRS)

    Shen, Zuojun; Lu, Ping; Jackson, Scott (Technical Monitor)

    2002-01-01

    A methodology for very fast design of 3DOF entry trajectories subject to all common inequality and equality constraints is developed. The approach makes novel use of the well-known quasi-equilibrium glide phenomenon in lifting entry as a centerpiece for conveniently enforcing the inequality constraints, which are otherwise difficult to handle. The algorithm is able to generate a complete feasible 3DOF entry trajectory, given the entry conditions, values of constraint parameters, and final conditions, in about 2 seconds on a PC. Numerical simulations with the X-33 vehicle model for various entry missions landing at Kennedy Space Center will be presented.

  5. Catchment Tomography - Joint Estimation of Surface Roughness and Hydraulic Conductivity with the EnKF

    NASA Astrophysics Data System (ADS)

    Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.

    2017-12-01

    Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. Catchment tomography provides a moving transmitter-receiver concept for estimating spatially distributed hydrological parameters. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. In response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges, which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high-resolution, radar-based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events; every precipitation event constrains the possible parameter space. In this approach, forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model. ParFlow is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach.
A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single-parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show up to which degree of spatial heterogeneity, and of subsurface flow parameter uncertainty, the Manning's coefficient and hydraulic conductivity can be estimated efficiently.
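    A toy stochastic EnKF analysis step on an augmented [state, parameters] vector illustrates how an unobserved parameter such as Manning's n is updated through its ensemble correlation with the observed water level. This is a generic sketch with invented numbers, not the ParFlow/PDAF setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, H, y, obs_err):
    """Stochastic EnKF analysis on an augmented [state, parameters] ensemble.
    ensemble: (n_ens, n_dim); H: (n_obs, n_dim) observation operator;
    y: (n_obs,) observed stream water levels; obs_err: obs std dev."""
    n_ens = ensemble.shape[0]
    R = obs_err**2 * np.eye(len(y))
    X = ensemble - ensemble.mean(axis=0)          # ensemble anomalies
    HX = X @ H.T
    P_hh = HX.T @ HX / (n_ens - 1) + R            # innovation covariance
    P_xh = X.T @ HX / (n_ens - 1)                 # state-obs cross covariance
    K = P_xh @ np.linalg.inv(P_hh)                # Kalman gain
    perturbed = y + obs_err * rng.standard_normal((n_ens, len(y)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# Toy: dim 0 = water level (observed); dims 1-2 = log-Manning's n, log-K
# (unobserved parameters, updated only via their correlation with the level).
truth = np.array([1.0, -2.0, -4.0])
ens = truth + 0.5 * rng.standard_normal((200, 3))
ens[:, 0] += 0.8 * (ens[:, 1] + 2.0)              # level correlated with n
H = np.array([[1.0, 0.0, 0.0]])
analysis = enkf_update(ens, H, y=np.array([1.0]), obs_err=0.05)
```

Because the gain carries the cross covariance between level and parameters, assimilating a water-level observation tightens the parameter ensemble even though the parameters are never observed directly.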

  6. OCO-2 Column Carbon Dioxide and Biometric Data Jointly Constrain Parameterization and Projection of a Global Land Model

    NASA Astrophysics Data System (ADS)

    Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III

    2015-12-01

    Uncertainty in the predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging, for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of estimating model parameters is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of the dry air mole fraction XCO2 and solar-induced fluorescence (SIF) to independently constrain estimation of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 based on the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be effectively used for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters (e.g., maximum carboxylation rate, turnover times and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration) for the carbon cycle. The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how the sampling frequency and record length affect the model constraints and predictions.

  7. On the Landau-de Gennes Elastic Energy of a Q-Tensor Model for Soft Biaxial Nematics

    NASA Astrophysics Data System (ADS)

    Mucci, Domenico; Nicolodi, Lorenzo

    2017-12-01

    In the Landau-de Gennes theory of liquid crystals, the propensities for alignments of molecules are represented at each point of the fluid by an element Q of the vector space S_0 of 3×3 real symmetric traceless matrices, or Q-tensors. According to Longa and Trebin (1989), a biaxial nematic system is called soft biaxial if the tensor order parameter Q satisfies the constraint tr(Q^2) = const. After the introduction of a Q-tensor model for soft biaxial nematic systems and the description of its geometric structure, we address the question of coercivity for the most common four-elastic-constant form of the Landau-de Gennes elastic free energy (Iyer et al. 2015) in this model. For a soft biaxial nematic system, the tensor field Q takes values in a four-dimensional sphere S^4_ρ of radius ρ ≤ √(2/3) in the five-dimensional space S_0 with inner product ⟨Q, P⟩ = tr(QP). The rotation group SO(3) acts orthogonally on S_0 by conjugation and hence induces an action on S^4_ρ ⊂ S_0. This action has generic orbits of codimension one that are diffeomorphic to an eightfold quotient S^3/H of the unit three-sphere S^3, where H = {±1, ±i, ±j, ±k} is the quaternion group, and has two degenerate orbits of codimension two that are diffeomorphic to the projective plane RP^2. Each generic orbit can be interpreted as the order parameter space of a constrained biaxial nematic system, and each singular orbit as the order parameter space of a constrained uniaxial nematic system. It turns out that S^4_ρ is a cohomogeneity-one manifold, i.e., a manifold with a group action whose orbit space is one-dimensional. Another important geometric feature of the model is that the set Σ_ρ of diagonal Q-tensors of fixed norm ρ is a (geodesic) great circle in S^4_ρ which meets every orbit of S^4_ρ orthogonally and is thus a section for S^4_ρ in the sense of the general theory of canonical forms.
We compute necessary and sufficient coercivity conditions for the elastic energy by exploiting the SO(3)-invariance of the elastic energy (frame-indifference), the existence of the section Σ_ρ for S^4_ρ, and the geometry of the model, which allow us to reduce to a suitable invariant problem on (an arc of) Σ_ρ. Our approach can ultimately be seen as an application of the general method of reduction of variables, or the cohomogeneity method.

  8. The signal of mantle anisotropy in the coupling of normal modes

    NASA Astrophysics Data System (ADS)

    Beghein, Caroline; Resovsky, Joseph; van der Hilst, Robert D.

    2008-12-01

    We investigate whether the coupling of normal mode (NM) multiplets can help us constrain mantle anisotropy. We first derive explicit expressions of the generalized structure coefficients of coupled modes in terms of elastic coefficients, including the Love parameters describing radial anisotropy and the parameters describing azimuthal anisotropy (Jc, Js, Kc, Ks, Mc, Ms, Bc, Bs, Gc, Gs, Ec, Es, Hc, Hs, Dc and Ds). We detail the selection rules that describe which modes can couple together and which elastic parameters govern their coupling. We then focus on modes of type 0Sl - 0Tl+1 and determine whether they can be used to constrain mantle anisotropy. We show that they are sensitive to six elastic parameters describing azimuthal anisotropy, in addition to the two shear-wave elastic parameters L and N (i.e. VSV and VSH). We find that neither isotropic nor radially anisotropic mantle models can fully explain the observed degree two signal. We show that the NM signal that remains after correction for the effect of the crust and mantle radial anisotropy can be explained by the presence of azimuthal anisotropy in the upper mantle. Although the data favour locating azimuthal anisotropy below 400km, its depth extent and distribution is still not well constrained by the data. Consideration of NM coupling can thus help constrain azimuthal anisotropy in the mantle, but joint analyses with surface-wave phase velocities is needed to reduce the parameter trade-offs and improve our constraints on the individual elastic parameters and the depth location of the azimuthal anisotropy.

  9. Heat transfer characteristics within an array of impinging jets. Effects of crossflow temperature relative to jet temperature

    NASA Technical Reports Server (NTRS)

    Florschuetz, L. W.; Su, C. C.

    1985-01-01

    Spanwise average heat fluxes, resolved in the streamwise direction to one stream-wise hole spacing were measured for two-dimensional arrays of circular air jets impinging on a heat transfer surface parallel to the jet orifice plate. The jet flow, after impingement, was constrained to exit in a single direction along the channel formed by the jet orifice plate and heat transfer surface. The crossflow originated from the jets following impingement and an initial crossflow was present that approached the array through an upstream extension of the channel. The regional average heat fluxes are considered as a function of parameters associated with corresponding individual spanwise rows within the array. A linear superposition model was employed to formulate appropriate governing parameters for the individual row domain. The effects of flow history upstream of an individual row domain are also considered. The results are formulated in terms of individual spanwise row parameters. A corresponding set of streamwise resolved heat transfer characteristics formulated in terms of flow and geometric parameters characterizing the overall arrays is described.

  10. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian (chi-squared) functional form for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions about the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD, and a distance metric based on the galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
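    A minimal ABC rejection sampler shows the core idea of the likelihood-free approach described above: draw from the prior, simulate, and keep draws whose summary statistic lies close to the observed one. This is a one-parameter toy with an invented summary statistic, not the actual HOD pipeline or its population Monte Carlo refinement.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(observed_summary, simulate, prior_sample,
                  distance, epsilon, n_draws=20000):
    """Basic ABC rejection: keep prior draws whose simulated summary
    statistic lies within epsilon of the observed one."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed_summary) < epsilon:
            accepted.append(theta)
    return np.array(accepted)

# Toy stand-in for the HOD pipeline: the "summary" is a noisy mean
# number density produced by a generative model with one parameter theta.
def simulate(theta):
    return np.mean(theta + 0.1 * rng.standard_normal(50))

true_theta = 0.7
obs = simulate(true_theta)
posterior = abc_rejection(
    observed_summary=obs,
    simulate=simulate,
    prior_sample=lambda: rng.uniform(0.0, 2.0),
    distance=lambda a, b: abs(a - b),
    epsilon=0.02,
)
```

Note that no likelihood is ever evaluated: the accepted draws approximate the posterior purely through forward simulation and the distance threshold.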

  11. Constrained spectral clustering under a local proximity structure assumption

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Xu, Qianjun; des Jardins, Marie

    2005-01-01

    This work focuses on incorporating pairwise constraints into a spectral clustering algorithm. A new constrained spectral clustering method is proposed, as well as an active constraint acquisition technique and a heuristic for parameter selection. We demonstrate that our constrained spectral clustering method, CSC, works well when the data exhibits what we term local proximity structure.

  12. A viable dark fluid model

    NASA Astrophysics Data System (ADS)

    Elkhateeb, Esraa

    2018-01-01

    We consider a cosmological model based on a generalization of the equation of state proposed by Nojiri and Odintsov (2004) and Štefančić (2005, 2006). We argue that this model works as a dark fluid model which can interpolate between the dust equation of state and the dark energy equation of state. We show how the asymptotic behavior of the equation of state constrains the parameters of the model. The causality condition of the model is also studied to constrain the parameters, and the fixed points are examined to determine different solution classes. Observations of the Hubble diagram of Type Ia supernovae are used to further constrain the model. We present an exact solution of the model and calculate the luminosity distance and the evolution of the energy density. We also calculate the deceleration parameter to probe the expansion state of the universe.

  13. A stochastic fractional dynamics model of space-time variability of rain

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun K.; Travis, James E.

    2013-09-01

    Rain varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed, based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted, together with other model parameters, to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and on the Kwajalein Atoll, Marshall Islands, in the tropical Pacific. We estimate the parameters by tuning them to fit the second-moment statistics of the radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well at these scales without any further adjustment.

  14. Constraining the noncommutative spectral action via astrophysical observations.

    PubMed

    Nelson, William; Ochoa, Joseph; Sakellariadou, Mairi

    2010-09-03

    The noncommutative spectral action extends our familiar notion of commutative spaces, using the data encoded in a spectral triple on an almost commutative space. Varying a rather simple action, one can derive all of the standard model of particle physics in this setting, in addition to a modified version of Einstein-Hilbert gravity. In this Letter we use observations of pulsar timings, assuming that no deviation from general relativity has been observed, to constrain the gravitational sector of this theory. While the bounds on the coupling constants remain rather weak, they are comparable to existing bounds on deviations from general relativity in other settings and are likely to be further constrained by future observations.

  15. Constraining the relative velocity effect using the Baryon Oscillation Spectroscopic Survey

    DOE PAGES

    Beutler, Florian; Seljak, Uroš; Vlah, Zvonimir

    2017-05-16

    Here, we analyse the power spectrum of the Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12 to constrain the relative velocity effect, which represents a potential systematic for measurements of the baryon acoustic oscillation (BAO) scale. The relative velocity effect is sourced by the different evolution of baryon and cold dark matter perturbations before decoupling. Our power spectrum model includes all one-loop redshift-space terms corresponding to v_bc, parametrized by the bias parameter b_v^2. We also include the linear terms proportional to the relative density, δ_bc, and relative velocity dispersion, θ_bc, which we parametrize with the bias parameters b_δ^bc and b_θ^bc. The data do not support a detection of the relative velocity effect in any of these parameters. Combining the low- and high-redshift bins of BOSS, we find limits of b_v^2 = 0.012 ± 0.015 (±0.031), b_δ^bc = -1.0 ± 2.5 (±6.2) and b_θ^bc = -114 ± 55 (±175) at the 68 per cent (95 per cent) confidence level. These constraints restrict the potential systematic shift in D_A(z), H(z) and fσ8, due to the relative velocity, to 1 per cent, 0.8 per cent and 2 per cent, respectively. Given the current uncertainties on the BAO measurements of BOSS, these shifts correspond to 0.53σ, 0.5σ and 0.22σ for D_A(z), H(z) and fσ8, respectively.

  16. Constraining the relative velocity effect using the Baryon Oscillation Spectroscopic Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutler, Florian; Seljak, Uroš; Vlah, Zvonimir

    Here, we analyse the power spectrum of the Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12 to constrain the relative velocity effect, which represents a potential systematic for measurements of the baryon acoustic oscillation (BAO) scale. The relative velocity effect is sourced by the different evolution of baryon and cold dark matter perturbations before decoupling. Our power spectrum model includes all one-loop redshift-space terms corresponding to v_bc, parametrized by the bias parameter b_v^2. We also include the linear terms proportional to the relative density, δ_bc, and relative velocity dispersion, θ_bc, which we parametrize with the bias parameters b_δ^bc and b_θ^bc. The data do not support a detection of the relative velocity effect in any of these parameters. Combining the low- and high-redshift bins of BOSS, we find limits of b_v^2 = 0.012 ± 0.015 (±0.031), b_δ^bc = -1.0 ± 2.5 (±6.2) and b_θ^bc = -114 ± 55 (±175) at the 68 per cent (95 per cent) confidence level. These constraints restrict the potential systematic shift in D_A(z), H(z) and fσ8, due to the relative velocity, to 1 per cent, 0.8 per cent and 2 per cent, respectively. Given the current uncertainties on the BAO measurements of BOSS, these shifts correspond to 0.53σ, 0.5σ and 0.22σ for D_A(z), H(z) and fσ8, respectively.

  17. Limits on Active to Sterile Neutrino Oscillations from Disappearance Searches in the MINOS, Daya Bay, and Bugey-3 Experiments

    NASA Astrophysics Data System (ADS)

    Adamson, P.; An, F. P.; Anghel, I.; Aurisano, A.; Balantekin, A. B.; Band, H. R.; Barr, G.; Bishai, M.; Blake, A.; Blyth, S.; Bock, G. J.; Bogert, D.; Cao, D.; Cao, G. F.; Cao, J.; Cao, S. V.; Carroll, T. J.; Castromonte, C. M.; Cen, W. R.; Chan, Y. L.; Chang, J. F.; Chang, L. C.; Chang, Y.; Chen, H. S.; Chen, Q. Y.; Chen, R.; Chen, S. M.; Chen, Y.; Chen, Y. X.; Cheng, J.; Cheng, J.-H.; Cheng, Y. P.; Cheng, Z. K.; Cherwinka, J. J.; Childress, S.; Chu, M. C.; Chukanov, A.; Coelho, J. A. B.; Corwin, L.; Cronin-Hennessy, D.; Cummings, J. P.; de Arcos, J.; De Rijck, S.; Deng, Z. Y.; Devan, A. V.; Devenish, N. E.; Ding, X. F.; Ding, Y. Y.; Diwan, M. V.; Dolgareva, M.; Dove, J.; Dwyer, D. A.; Edwards, W. R.; Escobar, C. O.; Evans, J. J.; Falk, E.; Feldman, G. J.; Flanagan, W.; Frohne, M. V.; Gabrielyan, M.; Gallagher, H. R.; Germani, S.; Gill, R.; Gomes, R. A.; Gonchar, M.; Gong, G. H.; Gong, H.; Goodman, M. C.; Gouffon, P.; Graf, N.; Gran, R.; Grassi, M.; Grzelak, K.; Gu, W. Q.; Guan, M. Y.; Guo, L.; Guo, R. P.; Guo, X. H.; Guo, Z.; Habig, A.; Hackenburg, R. W.; Hahn, S. R.; Han, R.; Hans, S.; Hartnell, J.; Hatcher, R.; He, M.; Heeger, K. M.; Heng, Y. K.; Higuera, A.; Holin, A.; Hor, Y. K.; Hsiung, Y. B.; Hu, B. Z.; Hu, T.; Hu, W.; Huang, E. C.; Huang, H. X.; Huang, J.; Huang, X. T.; Huber, P.; Huo, W.; Hussain, G.; Hylen, J.; Irwin, G. M.; Isvan, Z.; Jaffe, D. E.; Jaffke, P.; James, C.; Jen, K. L.; Jensen, D.; Jetter, S.; Ji, X. L.; Ji, X. P.; Jiao, J. B.; Johnson, R. A.; de Jong, J. K.; Joshi, J.; Kafka, T.; Kang, L.; Kasahara, S. M. S.; Kettell, S. H.; Kohn, S.; Koizumi, G.; Kordosky, M.; Kramer, M.; Kreymer, A.; Kwan, K. K.; Kwok, M. W.; Kwok, T.; Lang, K.; Langford, T. J.; Lau, K.; Lebanowski, L.; Lee, J.; Lee, J. H. C.; Lei, R. T.; Leitner, R.; Leung, J. K. C.; Li, C.; Li, D. J.; Li, F.; Li, G. S.; Li, Q. J.; Li, S.; Li, S. C.; Li, W. D.; Li, X. N.; Li, Y. F.; Li, Z. B.; Liang, H.; Lin, C. J.; Lin, G. L.; Lin, S.; Lin, S. K.; Lin, Y.-C.; Ling, J. J.; Link, J. 
M.; Litchfield, P. J.; Littenberg, L.; Littlejohn, B. R.; Liu, D. W.; Liu, J. C.; Liu, J. L.; Loh, C. W.; Lu, C.; Lu, H. Q.; Lu, J. S.; Lucas, P.; Luk, K. B.; Lv, Z.; Ma, Q. M.; Ma, X. B.; Ma, X. Y.; Ma, Y. Q.; Malyshkin, Y.; Mann, W. A.; Marshak, M. L.; Martinez Caicedo, D. A.; Mayer, N.; McDonald, K. T.; McGivern, C.; McKeown, R. D.; Medeiros, M. M.; Mehdiyev, R.; Meier, J. R.; Messier, M. D.; Miller, W. H.; Mishra, S. R.; Mitchell, I.; Mooney, M.; Moore, C. D.; Mualem, L.; Musser, J.; Nakajima, Y.; Naples, D.; Napolitano, J.; Naumov, D.; Naumova, E.; Nelson, J. K.; Newman, H. B.; Ngai, H. Y.; Nichol, R. J.; Ning, Z.; Nowak, J. A.; O'Connor, J.; Ochoa-Ricoux, J. P.; Olshevskiy, A.; Orchanian, M.; Pahlka, R. B.; Paley, J.; Pan, H.-R.; Park, J.; Patterson, R. B.; Patton, S.; Pawloski, G.; Pec, V.; Peng, J. C.; Perch, A.; Pfützner, M. M.; Phan, D. D.; Phan-Budd, S.; Pinsky, L.; Plunkett, R. K.; Poonthottathil, N.; Pun, C. S. J.; Qi, F. Z.; Qi, M.; Qian, X.; Qiu, X.; Radovic, A.; Raper, N.; Rebel, B.; Ren, J.; Rosenfeld, C.; Rosero, R.; Roskovec, B.; Ruan, X. C.; Rubin, H. A.; Sail, P.; Sanchez, M. C.; Schneps, J.; Schreckenberger, A.; Schreiner, P.; Sharma, R.; Moed Sher, S.; Sousa, A.; Steiner, H.; Sun, G. X.; Sun, J. L.; Tagg, N.; Talaga, R. L.; Tang, W.; Taychenachev, D.; Thomas, J.; Thomson, M. A.; Tian, X.; Timmons, A.; Todd, J.; Tognini, S. C.; Toner, R.; Torretta, D.; Treskov, K.; Tsang, K. V.; Tull, C. E.; Tzanakos, G.; Urheim, J.; Vahle, P.; Viaux, N.; Viren, B.; Vorobel, V.; Wang, C. H.; Wang, M.; Wang, N. Y.; Wang, R. G.; Wang, W.; Wang, X.; Wang, Y. F.; Wang, Z.; Wang, Z. M.; Webb, R. C.; Weber, A.; Wei, H. Y.; Wen, L. J.; Whisnant, K.; White, C.; Whitehead, L.; Whitehead, L. H.; Wise, T.; Wojcicki, S. G.; Wong, H. L. H.; Wong, S. C. F.; Worcester, E.; Wu, C.-H.; Wu, Q.; Wu, W. J.; Xia, D. M.; Xia, J. K.; Xing, Z. Z.; Xu, J. L.; Xu, J. Y.; Xu, Y.; Xue, T.; Yang, C. G.; Yang, H.; Yang, L.; Yang, M. S.; Yang, M. T.; Ye, M.; Ye, Z.; Yeh, M.; Young, B. 
L.; Yu, Z. Y.; Zeng, S.; Zhan, L.; Zhang, C.; Zhang, H. H.; Zhang, J. W.; Zhang, Q. M.; Zhang, X. T.; Zhang, Y. M.; Zhang, Y. X.; Zhang, Z. J.; Zhang, Z. P.; Zhang, Z. Y.; Zhao, J.; Zhao, Q. W.; Zhao, Y. B.; Zhong, W. L.; Zhou, L.; Zhou, N.; Zhuang, H. L.; Zou, J. H.; Daya Bay Collaboration

    2016-10-01

    Searches for a light sterile neutrino have been performed independently by the MINOS and the Daya Bay experiments using the muon (anti)neutrino and electron antineutrino disappearance channels, respectively. In this Letter, results from both experiments are combined with those from the Bugey-3 reactor neutrino experiment to constrain oscillations into light sterile neutrinos. The three experiments are sensitive to complementary regions of parameter space, enabling the combined analysis to probe regions allowed by the Liquid Scintillator Neutrino Detector (LSND) and MiniBooNE experiments in a minimally extended four-neutrino flavor framework. Stringent limits on sin²2θ_μe are set over 6 orders of magnitude in the sterile mass-squared splitting Δm²_41. The sterile-neutrino mixing phase space allowed by the LSND and MiniBooNE experiments is excluded for Δm²_41 < 0.8 eV² at 95% CL_s.
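
    The two-flavor short-baseline approximation behind these limits relates the appearance amplitude sin²2θ_μe and the splitting Δm²_41 to an oscillation probability. A minimal sketch (the function name and example values are illustrative, not from the Letter):

```python
import math

def p_appearance(sin2_2theta_mue, dm2_41, L_km, E_GeV):
    """Short-baseline nu_mu -> nu_e appearance probability in the 3+1
    two-flavor approximation: P = sin^2(2 theta_mue) * sin^2(1.267 dm^2 L/E),
    with dm2_41 in eV^2, baseline L in km and neutrino energy E in GeV."""
    return sin2_2theta_mue * math.sin(1.267 * dm2_41 * L_km / E_GeV) ** 2

# The probability is bounded above by the mixing amplitude and vanishes at L = 0.
```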

  18. Mapping the Milky Way Galaxy with LISA

    NASA Technical Reports Server (NTRS)

    McKinnon, Jose A.; Littenberg, Tyson

    2012-01-01

    Gravitational wave detectors in the mHz band (such as the Laser Interferometer Space Antenna, or LISA) will observe thousands of compact binaries in the galaxy, which can be used to better understand the structure of the Milky Way. To test the effectiveness of LISA in measuring the distribution of the galaxy, we simulated the Close White Dwarf Binary (CWDB) gravitational wave sky using different models for the Milky Way. To do so, we have developed a galaxy density distribution modeling code based on the Markov Chain Monte Carlo method. The code uses different distributions to construct realizations of the galaxy. We then use the Fisher Information Matrix to estimate the variance and covariance of the recovered parameters for each detected CWDB. This is the first step toward characterizing the capabilities of space-based gravitational wave detectors to constrain models for galactic structure, such as the size and orientation of the bar in the center of the Milky Way.
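
    The Fisher Information Matrix step described above can be sketched for a toy monochromatic signal: for a model h(θ) in white Gaussian noise of standard deviation σ, F_ij = Σ_k (∂h_k/∂θ_i)(∂h_k/∂θ_j)/σ², and F⁻¹ estimates the parameter covariance. The sinusoid model and numbers below are illustrative assumptions, not the CWDB waveform used in the study:

```python
import numpy as np

def fisher_matrix(model, theta, t, sigma, eps=1e-6):
    """Numerical Fisher matrix F_ij = sum_k dh_k/dtheta_i * dh_k/dtheta_j / sigma^2
    for a signal model h(theta) sampled at times t in white noise of std sigma."""
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for i in range(len(theta)):
        dp, dm = theta.copy(), theta.copy()
        dp[i] += eps
        dm[i] -= eps
        # central finite difference of the model with respect to parameter i
        derivs.append((model(dp, t) - model(dm, t)) / (2.0 * eps))
    D = np.array(derivs)
    return D @ D.T / sigma**2

# Toy monochromatic-binary signal: amplitude and frequency as parameters.
def h(theta, t):
    A, f = theta
    return A * np.sin(2.0 * np.pi * f * t)

t = np.linspace(0.0, 10.0, 1000)
F = fisher_matrix(h, [1.0, 0.5], t, sigma=0.1)
cov = np.linalg.inv(F)   # Cramer-Rao estimate of parameter (co)variances
```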

  19. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for ecosystem carbon cycle studies

    Treesearch

    Y. He; Q. Zhuang; A.D. McGuire; Y. Liu; M. Chen

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the...

  20. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
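
    The Pareto-optimality test at the core of the technique (keeping parameter sets that no other set beats on every training objective simultaneously) can be sketched as follows; this is a minimal illustration, not the JuPOETs implementation, and is in Python rather than Julia for consistency with the other sketches here:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Four candidate parameter sets scored on two conflicting objectives:
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
# (3, 3) is dominated by (2, 2); the rest form the tradeoff surface.
```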

  1. The N2HDM under theoretical and experimental scrutiny

    NASA Astrophysics Data System (ADS)

    Mühlleitner, Margarete; Sampaio, Marco O. P.; Santos, Rui; Wittbrodt, Jonas

    2017-03-01

    The N2HDM is based on the CP-conserving 2HDM extended by a real scalar singlet field. Its enlarged parameter space and its fewer symmetry conditions as compared to supersymmetric models allow for an interesting phenomenology compatible with current experimental constraints, while adding to the 2HDM sector the possibility of Higgs-to-Higgs decays with three different Higgs bosons. In this paper the N2HDM is subjected to detailed scrutiny. Regarding the theoretical constraints we implement tests of tree-level perturbativity and vacuum stability. Moreover, we present, for the first time, a thorough analysis of the global minimum of the N2HDM. The model and the theoretical constraints have been implemented in ScannerS, and we provide N2HDECAY, a code based on HDECAY, for the computation of the N2HDM branching ratios and total widths including the state-of-the-art higher order QCD corrections and off-shell decays. We then perform an extensive parameter scan in the N2HDM parameter space, with all theoretical and experimental constraints applied, and analyse its allowed regions. We find that large singlet admixtures are still compatible with the Higgs data and investigate which observables will allow us to restrict the singlet nature most effectively in the next runs of the LHC. Similarly to the 2HDM, the N2HDM exhibits a wrong-sign parameter regime, which will be constrained by future Higgs precision measurements.

  2. Instant preheating in quintessential inflation with α -attractors

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Konstantinos; Wood, Leonora Donaldson; Owen, Charlotte

    2018-03-01

    We investigate a compelling model of quintessential inflation in the context of α -attractors, which naturally result in a scalar potential featuring two flat regions; the inflationary plateau and the quintessential tail. The "asymptotic freedom" of α -attractors, near the kinetic poles, suppresses radiative corrections and interactions, which would otherwise threaten to lift the flatness of the quintessential tail and cause a 5th-force problem respectively. Since this is a nonoscillatory inflation model, we reheat the Universe through instant preheating. The parameter space is constrained by both inflation and dark energy requirements. We find an excellent correlation between the inflationary observables and model predictions, in agreement with the α -attractors setup. We also obtain successful quintessence for natural values of the parameters. Our model predicts potentially sizeable tensor perturbations (at the level of 1%) and a slightly varying equation of state for dark energy, to be probed in the near future.

  3. Isocurvature forecast in the anthropic axion window

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamann, J.; Hannestad, S.; Raffelt, G.G.

    2009-06-01

    We explore the cosmological sensitivity to the amplitude of isocurvature fluctuations that would be caused by axions in the "anthropic window" where the axion decay constant f_a ≫ 10^12 GeV and the initial misalignment angle Θ_i ≪ 1. In a minimal ΛCDM cosmology extended with subdominant scale-invariant isocurvature fluctuations, existing data constrain the isocurvature fraction to α < 0.09 at 95% C.L. If no signal shows up, Planck can improve this constraint to 0.042, while an ultimate CMB probe limited only by cosmic variance in both temperature and E-polarisation can reach 0.017, about a factor of five better than the current limit. In the parameter space of f_a and H_I (Hubble parameter during inflation) we identify a small region where axion detection remains within the reach of realistic cosmological probes.

  4. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Space Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three-degree-of-freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.
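
    The controls/targets/cost structure described above is a standard constrained parameter optimization. A toy sketch using SciPy's SLSQP in place of NPSOL (both are SQP-type codes; the two-burn cost and displacement target below are invented for illustration):

```python
from scipy.optimize import minimize

# Hypothetical targeting problem: controls x = (burn1, burn2),
# cost = quadratic "delta-v" proxy, target = required total displacement.
cost = lambda x: x[0] ** 2 + x[1] ** 2
target = {"type": "eq", "fun": lambda x: x[0] + 2.0 * x[1] - 10.0}

res = minimize(cost, x0=[1.0, 1.0], method="SLSQP", constraints=[target])
# Analytic optimum: x = (2, 4) with cost 20.
```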

  5. MM Algorithms for Geometric and Signomial Programming

    PubMed Central

    Lange, Kenneth; Zhou, Hua

    2013-01-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
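
    The AM-GM majorization can be made concrete on a toy geometric program: minimize f(x1, x2) = x1*x2 + 1/x1 + 1/x2 over positive variables. Majorizing the coupled monomial x1*x2 at the current iterate separates the parameters, and each one-dimensional surrogate minimization has a closed form (this example is ours, not one from the paper):

```python
def mm_geometric_program(x1, x2, iters=100):
    """MM iterations for: minimize f(x1, x2) = x1*x2 + 1/x1 + 1/x2, x > 0.
    AM-GM majorization at the iterate (x1n, x2n):
        x1*x2 <= 0.5*(x2n/x1n)*x1**2 + 0.5*(x1n/x2n)*x2**2,
    with equality at the iterate. Setting each separated surrogate's
    derivative to zero gives x1 = (x1n/x2n)**(1/3), x2 = (x2n/x1n)**(1/3)."""
    for _ in range(iters):
        # tuple assignment evaluates both right-hand sides at the old iterate
        x1, x2 = (x1 / x2) ** (1.0 / 3.0), (x2 / x1) ** (1.0 / 3.0)
    return x1, x2

# Iterates converge to the true minimizer (1, 1), where f = 3.
```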

  6. Computing Fault Displacements from Surface Deformations

    NASA Technical Reports Server (NTRS)

    Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy

    2006-01-01

    Simplex is a computer program that calculates locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault, a forward model calculates the surface deformations) for the displacements and strains caused by a fault located in an isotropic, elastic half-space. The inversion involves the use of nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning. The input data can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults. Estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations. This work
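
    The inversion loop (run the forward model, compare with observed deformations, adjust source parameters) can be sketched with an invented toy forward model, a Lorentzian-shaped deformation profile standing in for the elastic half-space solution, and SciPy's Nelder-Mead downhill simplex as the nonlinear estimator; this is a sketch, not the program's actual model or algorithm:

```python
import numpy as np
from scipy.optimize import minimize

# Toy forward model (illustrative only): surface deformation at station x
# due to a point source at horizontal position x0 and depth d.
def forward(params, x):
    x0, d = params
    return d / ((x - x0) ** 2 + d ** 2)

x = np.linspace(-10.0, 10.0, 41)
observed = forward([2.0, 3.0], x)   # synthetic noise-free "observations"

# Invert the forward model for the source parameters by nonlinear least squares.
misfit = lambda p: float(np.sum((forward(p, x) - observed) ** 2))
res = minimize(misfit, x0=[1.0, 2.0], method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14})
# res.x recovers the true source parameters (2, 3).
```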

  7. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  8. Thermal dark matter through the Dirac neutrino portal

    NASA Astrophysics Data System (ADS)

    Batell, Brian; Han, Tao; McKeen, David; Haghi, Barmak Shams Es

    2018-04-01

    We study a simple model of thermal dark matter annihilating to standard model neutrinos via the neutrino portal. A (pseudo-)Dirac sterile neutrino serves as a mediator between the visible and the dark sectors, while an approximate lepton number symmetry allows for a large neutrino Yukawa coupling and, in turn, efficient dark matter annihilation. The dark sector consists of two particles, a Dirac fermion and complex scalar, charged under a symmetry that ensures the stability of the dark matter. A generic prediction of the model is a sterile neutrino with a large active-sterile mixing angle that decays primarily invisibly. We derive existing constraints and future projections from direct detection experiments, colliders, rare meson and tau decays, electroweak precision tests, and small scale structure observations. Along with these phenomenological tests, we investigate the consequences of perturbativity and scalar mass fine tuning on the model parameter space. A simple, conservative scheme to confront the various tests with the thermal relic target is outlined, and we demonstrate that much of the cosmologically-motivated parameter space is already constrained. We also identify new probes of this scenario such as multibody kaon decays and Drell-Yan production of W bosons at the LHC.

  9. Search for supersymmetry in hadronic final states using M_T2 in pp collisions at √s = 7 TeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.

    A search for supersymmetry or other new physics resulting in similar final states is presented using a data sample of 4.73 inverse femtobarns of pp collisions collected at √s = 7 TeV with the CMS detector at the LHC. Fully hadronic final states are selected based on the variable M_T2, an extension of the transverse mass in events with two invisible particles. Two complementary studies are performed. The first targets the region of parameter space with medium to high squark and gluino masses, in which the signal can be separated from the standard model backgrounds by a tight requirement on M_T2. The second is optimized to be sensitive to events with a light gluino and heavy squarks. In this case, the M_T2 requirement is relaxed, but a higher jet multiplicity and at least one b-tagged jet are required. No significant excess of events over the standard model expectations is observed. Exclusion limits are derived for the parameter space of the constrained minimal supersymmetric extension of the standard model, as well as on a variety of simplified model spectra.

  10. The Ground Flash Fraction Retrieval Algorithm Employing Differential Evolution: Simulations and Applications

    NASA Technical Reports Server (NTRS)

    Koshak, William; Solakiewicz, Richard

    2012-01-01

    The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network™ (NLDN) data. Solution error plots are provided for both the simulations and actual data analyses.
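
    A simplified version of the idea, with SciPy's differential evolution standing in for the GoFFRA's DE implementation: fit a two-component mixed exponential by maximum likelihood, using bounds on the search space to pin each parameter to a fixed role. The data, bounds, and parametrization below are invented for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy two-component mixed-exponential fit by maximum likelihood. Bounding
# the two scale parameters to disjoint intervals (a choice made here for
# illustration) pins each parameter to a fixed role; this is the kind of
# search-space constraint that removes the label-switching ambiguity.
rng = np.random.default_rng(0)
data = np.concatenate([rng.exponential(1.0, 2000),    # short-scale component
                       rng.exponential(10.0, 2000)])  # long-scale component

def neg_log_like(params):
    f, s1, s2 = params   # mixture fraction and the two exponential scales
    pdf = f * np.exp(-data / s1) / s1 + (1.0 - f) * np.exp(-data / s2) / s2
    return -np.sum(np.log(pdf))

bounds = [(0.01, 0.99), (0.1, 3.0), (3.0, 30.0)]   # s1 < s2 by construction
res = differential_evolution(neg_log_like, bounds, seed=1)
# res.x approaches (0.5, 1.0, 10.0), the generating parameters.
```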

  11. Scenarios for gluino coannihilation

    DOE PAGES

    Ellis, John; Evans, Jason L.; Luo, Feng; ...

    2016-02-11

    In this article, we study supersymmetric scenarios in which the gluino is the next-to-lightest supersymmetric particle (NLSP), with a mass sufficiently close to that of the lightest supersymmetric particle (LSP) that gluino coannihilation becomes important. One of these scenarios is the MSSM with soft supersymmetry-breaking squark and slepton masses that are universal at an input GUT renormalization scale, but with non-universal gaugino masses. The other scenario is an extension of the MSSM to include vector-like supermultiplets. In both scenarios, we identify the regions of parameter space where gluino coannihilation is important, and discuss their relations to other regions of parameter space where other mechanisms bring the dark matter density into the range allowed by cosmology. In the case of the non-universal MSSM scenario, we find that the allowed range of parameter space is constrained by the requirement of electroweak symmetry breaking, the avoidance of a charged LSP and the measured mass of the Higgs boson, in particular, as well as the appearance of other dark matter (co)annihilation processes. Nevertheless, LSP masses m_X ≲ 8 TeV with the correct dark matter density are quite possible. In the case of pure gravity mediation with additional vector-like supermultiplets, changes to the anomaly-mediated gluino mass and the threshold effects associated with these states can make the gluino almost degenerate with the LSP, and we find a similar upper bound.

  12. Design of the 1.5 MW, 30-96 MHz ultra-wideband 3 dB high power hybrid coupler for Ion Cyclotron Resonance Frequency (ICRF) heating in fusion grade reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadav, Rana Pratap, E-mail: ranayadav97@gmail.com; Kumar, Sunil; Kulkarni, S. V.

    2016-01-15

    The design and development procedure of a strip-line-based 1.5 MW, 30-96 MHz, ultra-wideband high-power 3 dB hybrid coupler is presented, and its applicability to ion cyclotron resonance heating (ICRH) in a tokamak is discussed. For high power handling capability, the spacing between the conductors and ground needs to be very high. Hence other structural parameters, such as the strip width, strip thickness, coupling gap, and junction, also become large. These can be increased up to an optimum limit, beyond which various constraints such as fabrication tolerance, discontinuities, and the excitation of higher TE and TM modes become prominent and significantly deteriorate the desired parameters of the coupled-line system. In the designed hybrid coupler, two 8.34 dB coupled lines are connected in tandem to obtain the desired coupling of 3 dB, and air is used as the dielectric. The spacing between ground and conductors is taken as 0.164 m for 1.5 MW power handling capability. To achieve the desired spacing, each of the 8.34 dB segments is designed with inner dimensions of 3.6 × 1.0 × 40 cm, where the constraints have been significantly realized, compensated, and applied in the design of the 1.5 MW hybrid coupler, as presented in the paper.

  13. Sensitivity of finite helical axis parameters to temporally varying realistic motion utilizing an idealized knee model.

    PubMed

    Johnson, T S; Andriacchi, T P; Erdman, A G

    2004-01-01

    Various uses of the screw or helical axis have previously been reported in the literature in an attempt to quantify the complex displacements and coupled rotations of in vivo human knee kinematics. Multiple methods have been used by previous authors to calculate the axis parameters, and it has been theorized that the mathematical stability and accuracy of the finite helical axis (FHA) is highly dependent on experimental variability and rotation increment spacing between axis calculations. Previous research has not addressed the sensitivity of the FHA for true in vivo data collection, as required for gait laboratory analysis. This research presents a controlled series of experiments simulating continuous data collection as utilized in gait analysis to investigate the sensitivity of the three-dimensional finite screw axis parameters of rotation, displacement, orientation and location with regard to time step increment spacing, utilizing two different methods for spatial location. Six-degree-of-freedom motion parameters are measured for an idealized rigid body knee model that is constrained to a planar motion profile for the purposes of error analysis. The kinematic data are collected using a multicamera optoelectronic system combined with an error minimization algorithm known as the point cluster method. Rotation about the screw axis is seen to be repeatable, accurate and time step increment insensitive. Displacement along the axis is highly dependent on time step increment sizing, with smaller rotation angles between calculations producing more accuracy. Orientation of the axis in space is accurate with only a slight filtering effect noticed during motion reversal. Locating the screw axis by projecting the mid-point of the finite displacement onto the axis is found to be less sensitive to motion reversal than finding the intersection of the axis with a reference plane. A filtering effect of the spatial location parameters was noted for larger time step increments during periods of little or no rotation.
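
    The finite helical axis parameters studied above can be extracted from a rigid-body displacement x → Rx + t in a few lines; the division by sin φ below is precisely why the axis becomes numerically unstable for the small rotation increments discussed in the abstract. This generic construction is a sketch, not the point cluster method itself:

```python
import numpy as np

def finite_helical_axis(R, t):
    """Extract finite helical (screw) axis parameters from a rigid-body
    displacement x -> R @ x + t: rotation angle phi about a unit axis n,
    plus the translation component along that axis."""
    phi = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    # Axis direction from the skew-symmetric part of R; the 1/sin(phi)
    # factor is ill-conditioned for small rotation increments.
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    n = w / (2.0 * np.sin(phi))
    slide = float(n @ t)        # displacement along the screw axis
    return phi, n, slide

# 90-degree rotation about z combined with a unit slide along z:
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
phi, n, slide = finite_helical_axis(Rz, np.array([0.0, 0.0, 1.0]))
```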

  14. Constraints on moduli cosmology from the production of dark matter and baryon isocurvature fluctuations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemoine, Martin; Martin, Jerome; Yokoyama, Jun'ichi

    2009-12-15

    We set constraints on moduli cosmology from the production of dark matter-radiation and baryon-radiation isocurvature fluctuations through modulus decay, assuming the modulus remains light during inflation. We find that the moduli problem becomes worse at the perturbative level, as a significant part of the parameter space of m_σ (modulus mass) and σ_inf (modulus vacuum expectation value at the end of inflation) is constrained by the nonobservation of significant isocurvature fluctuations. We discuss in detail the evolution of the modulus vacuum expectation value and perturbations, in particular, the consequences of Hubble scale corrections to the modulus potential, and the stochastic motion of the modulus during inflation. We show, in particular, that a high modulus mass scale m_σ ≳ 100 TeV, which allows the modulus to evade big bang nucleosynthesis constraints, is strongly constrained at the perturbative level. We find that, generically, solving the moduli problem requires the inflationary scale to be much smaller than 10^13 GeV.

  15. Search for intermediate mass black hole binaries in the first observing run of Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Afrough, M.; Agarwal, B.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Allen, B.; Allen, G.; Allocca, A.; Almoubayyed, H.; Altin, P. A.; Amato, A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Antier, S.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; AultONeal, K.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Bae, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Banagiri, S.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bawaj, M.; Bazzan, M.; Bécsy, B.; Beer, C.; Bejger, M.; Belahcene, I.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Bode, N.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Canepa, M.; Canizares, P.; Cannon, K. 
C.; Cao, H.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Carney, M. F.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chatterjee, D.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, A. K. W.; Chung, S.; Ciani, G.; Ciolfi, R.; Cirelli, C. E.; Cirone, A.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L. R.; Constancio, M.; Conti, L.; Cooper, S. J.; Corban, P.; Corbitt, T. R.; Corley, K. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Creighton, J. D. E.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; De, S.; DeBra, D.; Deelman, E.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Renzo, F.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Duncan, J.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. 
S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z. B.; Etzel, T.; Evans, M.; Evans, T. M.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Feicht, J.; Fejer, M. M.; Fernandez-Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, P. W. F.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gabel, M.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Ganija, M. R.; Gaonkar, S. G.; Garufi, F.; Gaudio, S.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, D.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glover, L.; Goetz, E.; Goetz, R.; Gomes, S.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Gruning, P.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hannuksela, O. A.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Horst, C.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. 
H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Intini, G.; Isa, H. N.; Isac, J.-M.; Isi, M.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katolik, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kemball, A. J.; Kennedy, R.; Kent, C.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, W.; Kim, W. S.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kumar, S.; Kuo, L.; Kutynia, A.; Kwang, S.; Lackey, B. D.; Lai, K. H.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, H. W.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; London, L. T.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lousto, C. O.; Lovelace, G.; Lück, H.; Lumaca, D.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña Hernandez, I.; Magaña-Sandoval, F.; Magaña Zertuche, L.; Magee, R. M.; Majorana, E.; Maksimovic, I.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markakis, C.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. 
W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matas, A.; Matichard, F.; Matone, L.; Mavalvala, N.; Mayani, R.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McCuller, L.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Mejuto-Villa, E.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minazzoli, O.; Minenkov, Y.; Ming, J.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Ng, K. K. Y.; Nguyen, T. T.; Nichols, D.; Nielsen, A. B.; Nissanke, S.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; Ormiston, R.; Ortega, L. F.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pace, A. E.; Page, J.; Page, M. A.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pang, B.; Pang, P. T. H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. 
L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poggiani, R.; Popolizio, P.; Porter, E. K.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Ramirez, K. E.; Rapagnani, P.; Raymond, V.; Razzano, M.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Ricker, P. M.; Rieger, S.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romel, C. L.; Romie, J. H.; Rosińska, D.; Ross, M. P.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Rynge, M.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schulte, B. W.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Seidel, E.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Shaddock, D. A.; Shaffer, T. J.; Shah, A. A.; Shahriar, M. S.; Shao, L.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. 
R.; Smith, R. J. E.; Son, E. J.; Sonnenberg, J. A.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Stratta, G.; Strigin, S. E.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, J. A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tsang, K. W.; Tse, M.; Tso, R.; Tuyenbayev, D.; Ueno, K.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahi, K.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walet, R.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, J. Z.; Wang, M.; Wang, Y.-F.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wessel, E. K.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Wofford, J.; Wong, K. W. 
K.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zanolin, M.; Zelenova, T.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.-H.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration

    2017-07-01

    During their first observational run, the two Advanced LIGO detectors attained an unprecedented sensitivity, resulting in the first direct detections of gravitational-wave signals produced by stellar-mass binary black hole systems. This paper reports on an all-sky search for gravitational waves (GWs) from merging intermediate mass black hole binaries (IMBHBs). The combined results from two independent search techniques were used in this study: the first employs a matched-filter algorithm that uses a bank of filters covering the GW signal parameter space, while the second is a generic search for GW transients (bursts). No GWs from IMBHBs were detected; therefore, we constrain the rate of several classes of IMBHB mergers. The most stringent limit is obtained for black holes of individual mass 100 M⊙, with spins aligned with the binary orbital angular momentum. For such systems, the merger rate is constrained to be less than 0.93 Gpc⁻³ yr⁻¹ in comoving units at the 90% confidence level, an improvement of nearly 2 orders of magnitude over previous upper limits.
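The matched-filter statistic at the heart of the first search technique can be sketched in a few lines. This is a hedged toy illustration, not the search pipeline: the chirp-like waveform, noise level and injected amplitude are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative chirp-like template buried in white noise (all values assumed).
n = 1024
t = np.linspace(0.0, 1.0, n)
template = np.sin(2 * np.pi * (20.0 + 40.0 * t) * t) * np.exp(-8.0 * (t - 0.5) ** 2)
template /= np.linalg.norm(template)               # unit-norm matched filter

data = 5.0 * template + rng.normal(0.0, 1.0, n)    # injected signal at SNR ~ 5

# Matched-filter statistic: correlate the data against the template.
snr = float(np.dot(data, template))
```

A real bank search repeats this correlation over many templates covering the signal parameter space and maximizes over them.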

  16. Exotic Leptons: Higgs, Flavor and Collider Phenomenology

    DOE PAGES

    Altmannshofer, Wolfgang; Bauer, Martin; Carena, Marcela

    2014-01-15

    We study extensions of the standard model by one generation of vector-like leptons with non-standard hypercharges, which allow for a sizable modification of the h → γγ decay rate for new lepton masses in the 300 GeV-1 TeV range. We also analyze vacuum stability implications for different hypercharges. Effects in h → Zγ are typically much smaller than in h → γγ, but distinct among the considered hypercharge assignments. Non-standard hypercharges constrain or entirely forbid possible mixing operators with standard model leptons. As a consequence, the leading contributions to the experimentally strongly constrained electric dipole moments of standard model fermions are only generated at the two-loop level by the new CP-violating sources of the considered setups. Furthermore, we derive the bounds from dipole moments, electroweak precision observables and lepton flavor violating processes, and discuss their implications. Finally, we examine the production and decay channels of the vector-like leptons at the LHC, and find that signatures with multiple light leptons or taus are already probing interesting regions of parameter space.

  17. Probing Models of Dark Matter and the Early Universe

    NASA Astrophysics Data System (ADS)

    Orlofsky, Nicholas David

    This thesis discusses models for dark matter (DM) and their behavior in the early universe. An important question is how phenomenological probes can directly search for signals of DM today. Another topic of investigation is how the DM and other processes in the early universe must evolve. Then, astrophysical bounds on early universe dynamics can constrain DM. We will consider these questions in the context of three classes of DM models--weakly interacting massive particles (WIMPs), axions, and primordial black holes (PBHs). Starting with WIMPs, we consider models where the DM is charged under the electroweak gauge group of the Standard Model. Such WIMPs, if generated by a thermal cosmological history, are constrained by direct detection experiments. To avoid present or near-future bounds, the WIMP model or cosmological history must be altered in some way. This may be accomplished by the inclusion of new states that coannihilate with the WIMP or a period of non-thermal evolution in the early universe. Future experiments are likely to probe some of these altered scenarios, and a non-observation would require a high degree of tuning in some of the model parameters in these scenarios. Next, axions, as light pseudo-Nambu-Goldstone bosons, are susceptible to quantum fluctuations in the early universe that lead to isocurvature perturbations, which are constrained by observations of the cosmic microwave background (CMB). We ask what it would take to allow axion models in the face of these strong CMB bounds. We revisit models where inflationary dynamics modify the axion potential and discuss how isocurvature bounds can be relaxed, elucidating the difficulties in these constructions. Avoiding disruption of inflationary dynamics provides important limits on the parameter space. Finally, PBHs have received interest in part due to observations by LIGO of merging black hole binaries. 
We ask how these PBHs could arise through inflationary models and investigate the opportunity for corroboration through experimental probes of gravitational waves at pulsar timing arrays. We provide examples of theories that are already ruled out, theories that will soon be probed, and theories that will not be tested in the foreseeable future. The models that are most strongly constrained are those with relatively broad primordial power spectra.

  18. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    NASA Astrophysics Data System (ADS)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
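The information-criterion step described above can be illustrated with a toy comparison. The model names, error sums and parameter counts below are invented for the sketch, which assumes Gaussian errors so that AIC reduces (up to a constant) to n·ln(SSE/n) + 2k: a more complex structure must reduce the error enough to pay for its extra parameters.

```python
import math

n_obs = 360  # e.g. 30 years of monthly flows (illustrative)

# Hypothetical candidate water-balance structures:
# name -> (sum of squared errors, number of free parameters)
candidates = {
    "2-store": (820.0, 4),
    "3-store": (700.0, 6),
    "4-store": (695.0, 9),
}

def aic(sse, k, n):
    # Gaussian-error AIC up to an additive constant.
    return n * math.log(sse / n) + 2 * k

scores = {name: aic(sse, k, n_obs) for name, (sse, k) in candidates.items()}
best = min(scores, key=scores.get)
```

Here the 4-store model fits slightly better than the 3-store one, but the parameter penalty makes the 3-store structure the preferred choice; hydrologic-consistency tests would then be applied on top of this ranking.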

  19. Constrained coding for the deep-space optical channel

    NASA Technical Reports Server (NTRS)

    Moision, B. E.; Hamkins, J.

    2002-01-01

    We investigate methods of coding for a channel subject to a large dead-time constraint, i.e., a constraint on the minimum spacing between transmitted pulses, with the deep-space optical channel as the motivating example.
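A dead-time constraint of this kind can be modeled as a (d, ∞) runlength-limited code: any two pulses must be separated by at least d non-pulse slots. The sketch below, a hedged illustration rather than the paper's method, computes the capacity of that constraint via Shannon's formula, log₂ of the spectral radius of the constraint-graph adjacency matrix.

```python
import numpy as np

def rll_capacity(d):
    """Capacity (bits/symbol) of the (d, inf) runlength constraint:
    any two '1's (pulses) must be separated by at least d '0's.
    Computed as log2 of the spectral radius of the constraint graph."""
    m = d + 1                      # state = zeros emitted since last pulse
    A = np.zeros((m, m))
    for s in range(m):
        A[s, min(s + 1, d)] = 1.0  # emit '0': zero-count grows (saturating)
        if s == d:
            A[s, 0] = 1.0          # emit '1': allowed only after d zeros
    return float(np.log2(max(abs(np.linalg.eigvals(A)))))
```

For d = 1 the spectral radius is the golden ratio, giving capacity ≈ 0.694 bits/symbol, and the capacity shrinks as the required dead time d grows.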

  20. Constraining extended scalar sectors at the LHC and beyond

    NASA Astrophysics Data System (ADS)

    Ilnicka, Agnieszka; Robens, Tania; Stefaniak, Tim

    2018-04-01

    We give a brief overview of beyond the Standard Model (BSM) theories with an extended scalar sector and their phenomenological status in the light of recent experimental results. We discuss the relevant theoretical and experimental constraints, and show their impact on the allowed parameter space of two specific models: the real scalar singlet extension of the Standard Model (SM) and the Inert Doublet Model. We emphasize the importance of the LHC measurements, both the direct searches for additional scalar bosons, as well as the precise measurements of properties of the Higgs boson of mass 125 GeV. We show the complementarity of these measurements to electroweak and dark matter observables.

  1. The Sommerfeld enhancement in the scotogenic model with large electroweak scalar multiplets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chowdhury, Talal Ahmed; Nasri, Salah, E-mail: talal@du.ac.bd, E-mail: snasri@uaeu.ac.ae

    2017-01-01

    We investigate the Sommerfeld enhancement (SE) in the generalized scotogenic model with large electroweak multiplets. We focus on the scalar dark matter (DM) candidate of the model and compare present-day DM annihilation cross sections into WW, ZZ, γγ and γZ in the galactic halo for the scalar doublet and its immediate generalization, the quartet, in their respective viable regions of parameter space. We find that the larger multiplet has a sizable Sommerfeld-enhanced annihilation cross section compared to the doublet, and is therefore more likely to be constrained by current H.E.S.S. results and future CTA sensitivity limits.
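For intuition, here is a hedged sketch of the Sommerfeld factor for an attractive Coulomb-like potential, a standard textbook approximation rather than the paper's full multi-channel electroweak calculation; the coupling and velocity values are illustrative.

```python
import math

def sommerfeld_coulomb(alpha, v):
    """Sommerfeld factor S = x / (1 - exp(-x)), x = pi*alpha/v, for an
    attractive Coulomb-like potential with coupling alpha and DM relative
    velocity v (in units of c)."""
    x = math.pi * alpha / v
    return x / (1.0 - math.exp(-x))

# Enhancement grows roughly as 1/v at small velocity, which is why
# present-day halo DM (v ~ 1e-3) can be strongly enhanced.
s_halo = sommerfeld_coulomb(0.03, 1e-3)
```

At large velocities the factor approaches 1, recovering the unenhanced cross section.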

  2. Lepton flavor violating radiative decays in EW-scale ν R model: an update

    DOE PAGES

    Hung, P. Q.; Le, Trinh; Tran, Van Que; ...

    2015-12-28

    Here, we perform an updated analysis for the one-loop induced charged lepton flavor violating radiative decays l_i → l_j γ in an extended mirror model. Mixing effects of the neutrinos and charged leptons constructed with a horizontal A_4 symmetry are also taken into account. The current experimental limit and projected sensitivity on the branching ratio of μ → eγ are used to constrain the parameter space of the model. Calculations of two related observables, the electric and magnetic dipole moments of the leptons, are included. Implications concerning the possible detection of mirror leptons at the LHC and the ILC are also discussed.

  3. Bottom-quark fusion processes at the LHC for probing Z' models and B-meson decay anomalies

    NASA Astrophysics Data System (ADS)

    Abdullah, Mohammad; Dalchenko, Mykhailo; Dutta, Bhaskar; Eusebi, Ricardo; Huang, Peisi; Kamon, Teruki; Rathjens, Denis; Thompson, Adrian

    2018-04-01

    We investigate models of a heavy neutral gauge boson Z' coupling mostly to third-generation quarks and second-generation leptons. In this scenario, bottom quarks arising from gluon splitting can fuse into a Z', allowing the LHC to probe it. In the generic framework presented, anomalies in B-meson decays reported by the LHCb experiment imply a flavor-violating b-s coupling of the featured Z', constraining the lowest possible production cross section. A novel approach searching for a Z'(→μμ) in association with at least one bottom-tagged jet can probe regions of model parameter space to which existing analyses are not sensitive.

  4. MIDAS - Mission design and analysis software for the optimization of ballistic interplanetary trajectories

    NASA Technical Reports Server (NTRS)

    Sauer, Carl G., Jr.

    1989-01-01

    A patched conic trajectory optimization program MIDAS is described that was developed to investigate a wide variety of complex ballistic heliocentric transfer trajectories. MIDAS includes the capability of optimizing trajectory event times such as departure date, arrival date, and intermediate planetary flyby dates and is able to both add and delete deep space maneuvers when dictated by the optimization process. Both powered and unpowered flyby or gravity assist trajectories of intermediate bodies can be handled and capability is included to optimize trajectories having a rendezvous with an intermediate body such as for a sample return mission. Capability is included in the optimization process to constrain launch energy and launch vehicle parking orbit parameters.

  5. Insight and search in Katona's five-square problem.

    PubMed

    Ollinger, Michael; Jones, Gary; Knoblich, Günther

    2014-01-01

    Insights are often productive outcomes of human thinking. We provide a cognitive model that explains insight problem solving by the interplay of problem space search and representational change, whereby the problem space is constrained or relaxed based on the problem representation. By introducing different experimental conditions that either constrained the initial search space or helped solvers to initiate a representational change, we investigated the interplay of problem space search and representational change in Katona's five-square problem. Testing 168 participants, we demonstrated that independent hints relating to the initial search space and to representational change had little effect on solution rates. However, providing both hints caused a significant increase in solution rates. Our results show the interplay between problem space search and representational change in insight problem solving: The initial problem space can be so large that people fail to encounter impasse, but even when representational change is achieved the resulting problem space can still provide a major obstacle to finding the solution.

  6. The Tractable Cognition Thesis

    ERIC Educational Resources Information Center

    van Rooij, Iris

    2008-01-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the "Tractable Cognition thesis": Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories…

  7. Bayesian inference of Earth's radial seismic structure from body-wave traveltimes using neural networks

    NASA Astrophysics Data System (ADS)

    de Wit, Ralph W. L.; Valentine, Andrew P.; Trampert, Jeannot

    2013-10-01

    How do body-wave traveltimes constrain the Earth's radial (1-D) seismic structure? Existing 1-D seismological models underpin 3-D seismic tomography and earthquake location algorithms. It is therefore crucial to assess the quality of such 1-D models, yet quantifying uncertainties in seismological models is challenging and thus often ignored. Ideally, quality assessment should be an integral part of the inverse method. Our aim in this study is twofold: (i) we show how to solve a general Bayesian non-linear inverse problem and quantify model uncertainties, and (ii) we investigate the constraint on spherically symmetric P-wave velocity (VP) structure provided by body-wave traveltimes from the EHB bulletin (phases Pn, P, PP and PKP). Our approach is based on artificial neural networks, which are very common in pattern recognition problems and can be used to approximate an arbitrary function. We use a Mixture Density Network to obtain 1-D marginal posterior probability density functions (pdfs), which provide a quantitative description of our knowledge on the individual Earth parameters. No linearization or model damping is required, which allows us to infer a model which is constrained purely by the data. We present 1-D marginal posterior pdfs for the 22 VP parameters and seven discontinuity depths in our model. P-wave velocities in the inner core, outer core and lower mantle are resolved well, with standard deviations of ~0.2 to 1 per cent with respect to the mean of the posterior pdfs. The maximum likelihoods of VP are in general similar to the corresponding ak135 values, which lie within one or two standard deviations from the posterior means, thus providing an independent validation of ak135 in this part of the radial model. Conversely, the data contain little or no information on P-wave velocity in the D'' layer, the upper mantle and the homogeneous crustal layers. Further, the data do not constrain the depth of the discontinuities in our model. 
Using additional phases available in the ISC bulletin, such as PcP, PKKP and the converted phases SP and ScP, may enhance the resolvability of these parameters. Finally, we show how the method can be extended to obtain a posterior pdf for a multidimensional model space. This enables us to investigate correlations between model parameters.
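A Mixture Density Network outputs, for each model parameter, the weights, means and widths of a 1-D Gaussian mixture that represents the marginal posterior. The sketch below, with invented mixture parameters (not values from the study), shows how such an output would be turned into a pdf and summary statistics.

```python
import numpy as np

# Hypothetical MDN output for one velocity parameter (illustrative numbers).
weights = np.array([0.7, 0.3])
means   = np.array([11.05, 11.20])   # e.g. km/s at some radial node
sigmas  = np.array([0.05, 0.10])

def posterior_pdf(v):
    """Evaluate the 1-D Gaussian-mixture marginal posterior at v."""
    comps = weights * np.exp(-0.5 * ((v - means) / sigmas) ** 2) \
            / (sigmas * np.sqrt(2.0 * np.pi))
    return float(comps.sum())

# Closed-form posterior mean and variance of the mixture.
post_mean = float(np.sum(weights * means))
post_var = float(np.sum(weights * (sigmas ** 2 + means ** 2)) - post_mean ** 2)
```

The standard deviation derived this way is the kind of "per cent level" uncertainty quoted in the abstract for well-resolved parameters.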

  8. A Bayesian approach to earthquake source studies

    NASA Astrophysics Data System (ADS)

    Minson, Sarah

    Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. 
The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters is not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well-constrained and which are not.
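The tempering idea behind CATMIP can be sketched on a toy 1-D problem: bridge from the prior to the posterior through intermediate targets prior × likelihood^β, raising β in stages and running Metropolis updates at each stage. This simplified sketch omits the resampling, adaptation and parallel-chain machinery of the real algorithm, and the prior and likelihood are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: prior N(0, 10^2), likelihood N(3, 1^2).
def log_like(x):
    return -0.5 * (x - 3.0) ** 2

def log_prior(x):
    return -0.5 * (x / 10.0) ** 2

n_chains = 500
x = rng.normal(0.0, 10.0, n_chains)          # start from prior samples

for beta in (0.1, 0.3, 1.0):                 # tempering schedule
    for _ in range(200):                     # Metropolis passes per stage
        prop = x + rng.normal(0.0, 0.8, n_chains)
        log_a = (log_prior(prop) + beta * log_like(prop)
                 - log_prior(x) - beta * log_like(x))
        accept = np.log(rng.uniform(size=n_chains)) < log_a
        x = np.where(accept, prop, x)
```

By the final stage (β = 1) the ensemble approximates the posterior, whose mean here is close to 3; the spread of the samples directly gives the parameter uncertainty.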

  9. Traversable braneworld wormholes supported by astrophysical observations

    NASA Astrophysics Data System (ADS)

    Wang, Deng; Meng, Xin-He

    2018-02-01

    In this study, we investigate the characteristics and properties of a traversable wormhole constrained by the current astrophysical observations in the framework of modified theories of gravity (MOG). As a concrete case, we study traversable wormhole space-time configurations in the Dvali-Gabadadze-Porrati (DGP) braneworld scenario, which are supported by the effects of the gravity leakage of extra dimensions. We find that the wormhole space-time structure will open at the 2σ confidence level when we utilize the joint constraints from supernovae (SNe) Ia + observational Hubble parameter data (OHD) + Planck + gravitational waves (GW) and z < 0.2874. Furthermore, we obtain several model-independent conclusions, such as (i) the exotic matter threading the wormholes can be divided into four classes during the evolutionary processes of the universe based on various energy conditions; (ii) we can offer a strict restriction to the local wormhole space-time structure by using the current astrophysical observations; and (iii) we can clearly identify a physical gravitational resource for the wormholes supported by astrophysical observations, namely the dark energy components of the universe or equivalent space-time curvature effects from MOG. Moreover, we find that the strong energy condition is always violated at low redshifts.
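The closing statement about the strong energy condition (SEC), ρ + 3p ≥ 0, can be checked in a toy flat ΛCDM mix (matter plus a cosmological constant with p_Λ = -ρ_Λ). The density parameters below are illustrative assumptions, not the paper's fitted values, in units of today's critical density.

```python
# Illustrative flat-LCDM density parameters (assumed, not fitted).
om_m, om_l = 0.3, 0.7

def sec_holds(z):
    """Strong energy condition rho + 3p >= 0 at redshift z for
    pressureless matter plus a cosmological constant."""
    rho = om_m * (1.0 + z) ** 3 + om_l   # total density
    p = -om_l                            # only Lambda contributes pressure
    return rho + 3.0 * p >= 0.0
```

With these numbers the SEC fails at z = 0 but holds at z = 1: dark-energy domination at low redshift is exactly what violates it, matching the abstract's statement.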

  10. The added value of remote sensing products in constraining hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus

    2017-04-01

    The calibration of hydrological models still depends on the availability of streamflow data, even though additional sources of information (i.e. remotely sensed data products) have become widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow with modelled snow states), after which new a-posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis to assess a model's ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Ultimately, this research will enhance our understanding of, and recommendations for, the use of remotely sensed products in constraining conceptual hydrological models and improving predictive capability, especially for data-sparse regions.
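The weighting procedure described above, with each candidate parameter set weighted by the coefficient of determination between modelled states/fluxes and the corresponding product, can be sketched as follows. This is a schematic stand-in, not the study's code: the exponential "snow model", the noise level, and the candidate parameter values are all invented for illustration.

```python
import math
import random
import statistics

random.seed(1)

def r_squared(obs, sim):
    # Coefficient of determination of the regression between the two series.
    n = len(obs)
    mo, ms = statistics.mean(obs), statistics.mean(sim)
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / n
    vo, vs = statistics.pvariance(obs), statistics.pvariance(sim)
    return (cov * cov) / (vo * vs) if vo > 0 and vs > 0 else 0.0

def weight_parameter_sets(param_sets, model, product):
    """Turn per-parameter-set R^2 scores against a remote-sensing product
    into normalised a-posteriori weights (no discharge data involved)."""
    weights = [r_squared(product, model(p)) for p in param_sets]
    total = sum(weights)
    if total == 0:
        return [1.0 / len(weights)] * len(weights)
    return [w / total for w in weights]

# Invented 'snow state' model: exponential depletion with rate parameter p.
days = list(range(30))

def snow_model(p):
    return [100.0 * math.exp(-p * t) for t in days]

truth = snow_model(0.12)
product = [v + random.gauss(0.0, 2.0) for v in truth]  # noisy MOD10A-like series

candidates = [0.02, 0.08, 0.12, 0.25]
post = weight_parameter_sets(candidates, snow_model, product)
```

Parameter sets whose simulated depletion correlates best with the product receive the most posterior mass; in the paper this weighting is applied jointly across several products and model states.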

  11. Explorations in dark energy

    NASA Astrophysics Data System (ADS)

    Bozek, Brandon

    This dissertation describes three research projects on the topic of dark energy. The first project is an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov chain Monte Carlo sampling techniques, we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their "w0-wa" parameterization of the dark energy. We find that the respective increases in constraining power from one stage to the next produced by our analysis are consistent with DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF Stage 2, and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ. The second project applies this analysis to an Inverse Power Law (IPL), or "Ratra-Peebles" (RP), model. This model is a member of a popular subset of scalar field quintessence models exhibiting "tracking" behavior, which makes it particularly interesting theoretically. We find that the relative increase in constraining power on the parameter space of this model is consistent with what was found in the first project and the DETF report. We also show, using a background cosmology based on an IPL scalar field model that is consistent with a cosmological constant at Stage 2, that good DETF Stage 4 data would exclude a cosmological constant by better than 3σ. The third project extends the Causal Entropic Principle to predict the preferred curvature within the "multiverse". The Causal Entropic Principle (Bousso, et al.) 
provides an alternative approach to anthropic attempts to predict our observed value of the cosmological constant by calculating the entropy created within a causal diamond. We find that values larger than ρ_k = 40ρ_m are disfavored by more than 99.99%, with a peak value at ρ_Λ = 7.9 × 10^-123 and ρ_k = 4.3ρ_m for open universes. For universes that allow only positive curvature, or both positive and negative curvature, we find a correlation between curvature and dark energy that leads to an extended region of preferred values. Our universe is found to be disfavored to an extent depending on the priors on curvature. We also provide a comparison to previous anthropic constraints on open universes and discuss future directions for this work.

  12. Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods

    NASA Technical Reports Server (NTRS)

    Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark

    2002-01-01

    Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.

  13. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    NASA Astrophysics Data System (ADS)

    Gato-Rivera, B.; Semikhatov, A. M.

    1992-08-01

    A direct relation between the conformal formalism for 2D quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the W^(l)-constrained KP hierarchy to the (p′, p) minimal model, with the tau function being given by the correlator of a product of (dressed) (l, 1) [or (1, l)] operators, provided the Miwa parameter n_i and the free parameter (an abstract bc spin) present in the constraint are expressed through the ratio p′/p and the level l.

  14. The 6dF Galaxy Survey: Mass and Motions in the Local Universe

    NASA Astrophysics Data System (ADS)

    Colless, M.; Jones, H.; Campbell, L.; Burkey, D.; Taylor, A.; Saunders, W.

    2005-01-01

    The 6dF Galaxy Survey will provide 167000 redshifts and about 15000 peculiar velocities for galaxies over most of the southern sky out to about cz = 30000 km/s. The survey is currently almost half complete, with the final observations due in mid-2005. An initial data release was made public in December 2002; the first third of the dataset will be released at the end of 2003, with the remaining thirds being released at the end of 2004 and 2005. The status of the survey, the survey database and other relevant information can be obtained from the 6dFGS web site at http://www.mso.anu.edu.au/6dFGS. In terms of constraining cosmological parameters, combining the 6dFGS redshift and peculiar velocity surveys will allow us to: (1) break the degeneracy between the redshift-space distortion parameter β = Ω_m^0.6/b and the galaxy-mass correlation parameter r_g; (2) measure the four parameters A_g, Γ, β and r_g with precisions of between 1% and 3%; (3) measure the variation of r_g and b with scale to within a few percent over a wide range of scales.
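For reference, the distortion parameter in point (1) is simply β = Ω_m^0.6/b and can be evaluated directly; the Ω_m and bias values below are illustrative round numbers, not survey results.

```python
def distortion_beta(omega_m, bias):
    """Redshift-space distortion parameter beta = Omega_m^0.6 / b."""
    return omega_m ** 0.6 / bias

# Illustrative values only (not 6dFGS measurements):
# matter density Omega_m = 0.3, linear galaxy bias b = 1.2.
beta = distortion_beta(0.3, 1.2)
```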

  15. Long-range interacting systems in the unconstrained ensemble.

    PubMed

    Latella, Ivan; Pérez-Madrid, Agustín; Campa, Alessandro; Casetti, Lapo; Ruffo, Stefano

    2017-01-01

    Completely open systems can exchange heat, work, and matter with the environment. While energy, volume, and number of particles fluctuate under completely open conditions, the equilibrium states of the system, if they exist, can be specified using the temperature, pressure, and chemical potential as control parameters. The unconstrained ensemble is the statistical ensemble describing completely open systems and the replica energy is the appropriate free energy for these control parameters from which the thermodynamics must be derived. It turns out that macroscopic systems with short-range interactions cannot attain equilibrium configurations in the unconstrained ensemble, since temperature, pressure, and chemical potential cannot be taken as a set of independent variables in this case. In contrast, we show that systems with long-range interactions can reach states of thermodynamic equilibrium in the unconstrained ensemble. To illustrate this fact, we consider a modification of the Thirring model and compare the unconstrained ensemble with the canonical and grand-canonical ones: The more the ensemble is constrained by fixing the volume or number of particles, the larger the space of parameters defining the equilibrium configurations.

  16. Robust Constrained Optimization Approach to Control Design for International Space Station Centrifuge Rotor Auto Balancing Control System

    NASA Technical Reports Server (NTRS)

    Postma, Barry Dirk

    2005-01-01

    This thesis discusses the application of a robust constrained optimization approach to control design to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.

  17. Dynamic Parameters of the 2015 Nepal Gorkha Mw7.8 Earthquake Constrained by Multi-observations

    NASA Astrophysics Data System (ADS)

    Weng, H.; Yang, H.

    2017-12-01

    Dynamic rupture models can provide detailed insights into rupture physics that are useful for assessing future seismic risk. Many studies have attempted to constrain the slip-weakening distance, an important parameter controlling the frictional behavior of rock, for several earthquakes based on dynamic models, kinematic models, and direct estimations from near-field ground motion. However, large uncertainties in the values of the slip-weakening distance remain, mostly because of the intrinsic trade-offs between the slip-weakening distance and fault strength. Here we use a spontaneous dynamic rupture model to constrain the frictional parameters of the 25 April 2015 Mw7.8 Nepal earthquake, combining it with multiple seismic observations such as high-rate cGPS data, strong motion data, and kinematic source models. With numerous tests we find that the trade-off patterns of final slip, rupture speed, static GPS ground displacements, and dynamic ground waveforms are quite different. Combining all the seismic constraints, we obtain a robust estimate of the average slip-weakening distance, 0.6 m, without a substantial trade-off, in contrast to a previous kinematic estimate of 5 m. To the best of our knowledge, this is the first robust determination of the slip-weakening distance on a seismogenic fault from seismic observations. The well-constrained frictional parameters may be used in future dynamic models to assess seismic hazard, for example by estimating the peak ground acceleration (PGA). A similar approach could also be applied to other great earthquakes, enabling broad estimation of dynamic parameters in a global perspective that can better reveal the intrinsic physics of earthquakes.
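The slip-weakening distance constrained here enters dynamic rupture models through the standard linear slip-weakening friction law, which is simple to write down. In the sketch below the stress levels are invented round numbers; only D_c = 0.6 m echoes the value inferred in the abstract.

```python
def slip_weakening_stress(slip, tau_s, tau_d, d_c):
    """Linear slip-weakening law: shear strength drops linearly from the
    static level tau_s to the dynamic level tau_d as slip grows to the
    slip-weakening distance d_c, and stays at tau_d thereafter."""
    if slip >= d_c:
        return tau_d
    return tau_s - (tau_s - tau_d) * slip / d_c

# Illustrative numbers: 30 MPa static strength, 20 MPa dynamic strength,
# D_c = 0.6 m (the average value quoted in the abstract).
profile = [slip_weakening_stress(s / 10.0, 30e6, 20e6, 0.6) for s in range(10)]
```

The trade-off discussed in the abstract arises because a larger d_c with a higher tau_s can produce similar fracture energy, hence the need for multiple independent observations to pin both down.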

  18. Impact Flash Monitoring Facility on the Deep Space Gateway

    NASA Astrophysics Data System (ADS)

    Needham, D. H.; Moser, D. E.; Suggs, R. M.; Cooke, W. J.; Kring, D. A.; Neal, C. R.; Fassett, C. I.

    2018-02-01

    Cameras mounted to the Deep Space Gateway exterior will detect flashes caused by impacts on the lunar surface. Observed flashes will help constrain the current lunar impact flux and assess hazards faced by crews living and working in cislunar space.

  19. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
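The idea of choosing design points that pin down dynamical uncertainty can be illustrated with a much simpler stand-in than the paper's sparse-grid machinery: sample the bounded parameter space and greedily pick measurement times where the ensemble of plausible model responses disagrees most. The first-order decay model and all numbers below are hypothetical.

```python
import math
import random

random.seed(2)

def response(k, t):
    # Toy dynamical response: first-order decay with uncertain rate k.
    return math.exp(-k * t)

# Bounded uncertain parameter space; no initial point estimate is needed.
k_samples = [random.uniform(0.1, 2.0) for _ in range(200)]
candidate_times = [0.25 * i for i in range(1, 21)]

def spread(t):
    # Predictive variance of the measurable response across plausible models.
    vals = [response(k, t) for k in k_samples]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

# Greedy 'parallel design' stand-in: pick the measurement times where the
# ensemble of plausible models disagrees most (largest predictive variance).
design = sorted(candidate_times, key=spread, reverse=True)[:3]
```

Measuring at these times shrinks the plausible parameter set fastest; the paper's method instead propagates a sparse-grid representation of the bounded parameter space and targets uncertainty below the experimental noise floor.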

  20. Extracting Prior Distributions from a Large Dataset of In-Situ Measurements to Support SWOT-based Estimation of River Discharge

    NASA Astrophysics Data System (ADS)

    Hagemann, M.; Gleason, C. J.

    2017-12-01

    The upcoming (2021) Surface Water and Ocean Topography (SWOT) NASA satellite mission aims, in part, to estimate discharge on major rivers worldwide using reach-scale measurements of stream width, slope, and height. Current formalizations of channel and floodplain hydraulics are insufficient to fully constrain this problem mathematically, resulting in an infinitely large solution set for any set of satellite observations. Recent work has reformulated this problem in a Bayesian statistical setting, in which the likelihood distributions derive directly from hydraulic flow-law equations. When coupled with prior distributions on unknown flow-law parameters, this formulation probabilistically constrains the parameter space, and results in a computationally tractable description of discharge. Using a curated dataset of over 200,000 in-situ acoustic Doppler current profiler (ADCP) discharge measurements from over 10,000 USGS gaging stations throughout the United States, we developed empirical prior distributions for flow-law parameters that are not observable by SWOT, but that are required in order to estimate discharge. This analysis quantified prior uncertainties on quantities including cross-sectional area, at-a-station hydraulic geometry width exponent, and discharge variability, that are dependent on SWOT-observable variables including reach-scale statistics of width and height. When compared against discharge estimation approaches that do not use this prior information, the Bayesian approach using ADCP-derived priors demonstrated consistently improved performance across a range of performance metrics. This Bayesian approach formally transfers information from in-situ gaging stations to remote-sensed estimation of discharge, in which the desired quantities are not directly observable. Further investigation using large in-situ datasets is therefore a promising way forward in improving satellite-based estimates of river discharge.
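Extracting an empirical prior from a large in-situ dataset can be sketched under the simplifying assumption (mine, not stated in the abstract) that a quantity such as cross-sectional area is lognormal: fit the log-mean and log-standard-deviation from samples and use the resulting density as the prior in the Bayesian inversion. The synthetic "ADCP" samples below are generated, not real data.

```python
import math
import random
import statistics

random.seed(3)

# Stand-in for a large in-situ dataset: synthetic cross-sectional areas (m^2)
# drawn from a lognormal with known log-mean 5.0 and log-sd 0.8.
areas = [math.exp(random.gauss(5.0, 0.8)) for _ in range(5000)]

def fit_lognormal(samples):
    """Empirical prior: fit a lognormal by the mean/std of the log-samples."""
    logs = [math.log(s) for s in samples]
    return statistics.mean(logs), statistics.stdev(logs)

mu, sigma = fit_lognormal(areas)

def log_prior(area, mu, sigma):
    """Log-density of the fitted lognormal prior, usable in a Bayesian
    discharge inversion where area itself is not SWOT-observable."""
    z = (math.log(area) - mu) / sigma
    return -math.log(area * sigma * math.sqrt(2 * math.pi)) - 0.5 * z * z
```

In the study, such priors are additionally conditioned on SWOT-observable quantities (reach-scale width and height statistics) rather than fitted unconditionally as here.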

  1. Estimating the frequency interval of a regularly spaced multicomponent harmonic line signal in colored noise

    NASA Astrophysics Data System (ADS)

    Frazer, Gordon J.; Anderson, Stuart J.

    1997-10-01

    The radar returns from some classes of time-varying point targets can be represented by the discrete-time signal-plus-noise model: x_t = s_t + [v_t + η_t] = Σ_{i=0}^{P-1} A_i e^{j2π(f_i/f_s)t} + v_t + η_t, t ∈ {0, …, N-1}, f_i = k f_I + f_0, where the received signal x_t corresponds to the radar return from the target of interest from one azimuth-range cell. The signal has an unknown number of components, P, unknown complex amplitudes A_i and frequencies f_i. The frequency parameters f_0 and f_I are unknown, although constrained such that f_0 < f_I/2, and the parameter k ∈ {-u, …, -2, -1, 0, 1, 2, …, v} is constrained such that the component frequencies f_i are bounded by (-f_s/2, f_s/2). The noise term v_t is typically colored, and represents clutter, interference and various noise sources. It is unknown, except that Σ_t v_t² < ∞; in general, v_t is not well modelled as an auto-regressive process of known order. The additional noise term η_t represents time-invariant point targets in the same azimuth-range cell. An important characteristic of the target is the unknown parameter f_I, representing the frequency interval between harmonic lines. It is desired to determine an estimate of f_I from N samples of x_t. We propose an algorithm to estimate f_I based on Thomson's harmonic line F-test, which is part of the multi-window spectrum estimation method, and demonstrate the proposed estimator applied to target echo time series collected using an experimental HF skywave radar.
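A crude way to see how f_I can be recovered from samples of x_t (not Thomson's F-test, which the paper actually uses) is a comb search: score each candidate spacing by the periodogram power summed over the comb frequencies k·f_I + f_0. All numbers below are synthetic.

```python
import cmath
import math

# Synthetic noiseless harmonic-line signal with components at f_i = k*f_I + f_0.
f_s, N = 100.0, 256
f_I_true, f_0 = 12.5, 3.0
ks = [-2, -1, 0, 1, 2]
x = [sum(cmath.exp(2j * math.pi * ((k * f_I_true + f_0) / f_s) * t) for k in ks)
     for t in range(N)]

def periodogram(x, f, f_s):
    # Periodogram evaluated at an arbitrary frequency by direct summation.
    n = len(x)
    s = sum(x[t] * cmath.exp(-2j * math.pi * (f / f_s) * t) for t in range(n))
    return abs(s) ** 2 / n

def estimate_interval(x, f_s, candidates, f_0, n_lines=2):
    """Score each candidate line spacing f_I by the periodogram power
    summed on the comb k*f_I + f_0, and return the best-scoring spacing."""
    def score(f_i):
        return sum(periodogram(x, k * f_i + f_0, f_s)
                   for k in range(-n_lines, n_lines + 1))
    return max(candidates, key=score)

candidates = [10.0 + 0.5 * i for i in range(11)]   # 10.0 .. 15.0 Hz
f_I_hat = estimate_interval(x, f_s, candidates, f_0)
```

With colored noise and unknown f_0 the problem is much harder, which is what motivates the multi-window F-test approach of the paper.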

  2. Estimating contrast transfer function and associated parameters by constrained non-linear optimization.

    PubMed

    Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W

    2009-03-01

    The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
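The flavor of the estimation problem can be conveyed with a deliberately over-simplified 1-D CTF model, sin(πλΔz f²), neglecting spherical aberration, amplitude contrast and the envelope, fitted by a bounded search; the actual algorithm fits all CTF and envelope parameters simultaneously by constrained non-linear optimization. The wavelength and defocus values below are illustrative.

```python
import math

WAVELENGTH = 0.0197  # approximate electron wavelength (angstroms) at 300 kV

def ctf(freq, defocus):
    """Highly simplified 1-D phase CTF: sin(pi * lambda * defocus * f^2),
    neglecting spherical aberration, amplitude contrast and the envelope."""
    return math.sin(math.pi * WAVELENGTH * defocus * freq ** 2)

freqs = [0.01 * i for i in range(1, 40)]         # spatial frequencies (1/A)
true_defocus = 15000.0                           # angstroms (1.5 um)
observed = [ctf(f, true_defocus) for f in freqs]

def fit_defocus(observed, freqs, lo=5000.0, hi=30000.0, steps=2501):
    """Constrained 1-D fit: minimise squared error over a bounded defocus
    range, a toy stand-in for the paper's constrained non-linear optimisation."""
    best, best_err = lo, float("inf")
    for i in range(steps):
        d = lo + (hi - lo) * i / (steps - 1)
        err = sum((o - ctf(f, d)) ** 2 for o, f in zip(observed, freqs))
        if err < best_err:
            best, best_err = d, err
    return best

defocus_hat = fit_defocus(observed, freqs)
```

The bounds (lo, hi) play the role of the physical constraints on the parameters; in practice the fit must also handle noise, astigmatism and the envelope decay simultaneously.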

  3. EDITORIAL: Interrelationship between plasma phenomena in the laboratory and in space

    NASA Astrophysics Data System (ADS)

    Koepke, Mark

    2008-07-01

    The premise of investigating basic plasma phenomena relevant to space is that an alliance exists between both basic plasma physicists, using theory, computer modelling and laboratory experiments, and space science experimenters, using different instruments, either flown on different spacecraft in various orbits or stationed on the ground. The intent of this special issue on interrelated phenomena in laboratory and space plasmas is to promote the interpretation of scientific results in a broader context by sharing data, methods, knowledge, perspectives, and reasoning within this alliance. The desired outcomes are practical theories, predictive models, and credible interpretations based on the findings and expertise available. Laboratory-experiment papers that explicitly address a specific space mission or a specific manifestation of a space-plasma phenomenon, space-observation papers that explicitly address a specific laboratory experiment or a specific laboratory result, and theory or modelling papers that explicitly address a connection between both laboratory and space investigations were encouraged. Attention was given to the utility of the references for readers who seek further background, examples, and details. With the advent of instrumented spacecraft, the observation of waves (fluctuations), wind (flows), and weather (dynamics) in space plasmas was approached within the framework provided by theory with intuition provided by the laboratory experiments. Ideas on parallel electric field, magnetic topology, inhomogeneity, and anisotropy have been refined substantially by laboratory experiments. Satellite and rocket observations, theory and simulations, and laboratory experiments have contributed to the revelation of a complex set of processes affecting the accelerations of electrons and ions in the geospace plasma. The processes range from meso-scale of several thousands of kilometers to micro-scale of a few meters to kilometers. 
Papers included in this special issue serve to synthesise our current understanding of processes related to the coupling and feedback at disparate scales. Categories of topics included here are (1) ionospheric physics and (2) Alfvén-wave physics, both of which are related to the particle acceleration responsible for auroral displays, (3) whistler-mode triggering mechanism, which is relevant to radiation-belt dynamics, (4) plasmoid encountering a barrier, which has applications throughout the realm of space and astrophysical plasmas, and (5) laboratory investigations of the entire magnetosphere or the plasma surrounding the magnetosphere. The papers are ordered from processes that take place nearest the Earth to processes that take place at increasing distances from Earth. Many advances in understanding space plasma phenomena have been linked to insight derived from theoretical modeling and/or laboratory experiments. Observations from space-borne instruments are typically interpreted using theoretical models developed to predict the properties and dynamics of space and astrophysical plasmas. The usefulness of customized laboratory experiments for providing confirmation of theory by identifying, isolating, and studying physical phenomena efficiently, quickly, and economically has been demonstrated in the past. The benefits of laboratory experiments to investigating space-plasma physics are their reproducibility, controllability, diagnosability, reconfigurability, and affordability compared to a satellite mission or rocket campaign. Certainly, the plasma being investigated in a laboratory device is quite different from that being measured by a spaceborne instrument; nevertheless, laboratory experiments discover unexpected phenomena, benchmark theoretical models, develop physical insight, establish observational signatures, and pioneer diagnostic techniques. 
Explicit reference to such beneficial laboratory contributions is occasionally left out of the citations in the space-physics literature in favor of theory-paper counterparts and, thus, the scientific support that laboratory results can provide to the development of space-relevant theoretical models is often under-recognized. It is unrealistic to expect the dimensional parameters corresponding to space plasma to be matchable in the laboratory. However, a laboratory experiment is considered well designed if the subset of parameters relevant to a specific process shares the same phenomenological regime as the subset of analogous space parameters, even if less important parameters are mismatched. Regime boundaries are assigned by normalizing a dimensional parameter to an appropriate reference or scale value to make it dimensionless and noting the values at which transitions occur in the physical behavior or approximations. An example of matching regimes for cold-plasma waves is finding a 45° diagonal line on the log-log CMA diagram along which lie both a laboratory-observed wave and a space-observed wave. In such a circumstance, a space plasma and a lab plasma will support the same kind of modes if the dimensionless parameters are scaled properly (Bellan 2006 Fundamentals of Plasma Physics (Cambridge: Cambridge University Press) p 227). The plasma source, configuration geometry, and boundary conditions associated with a specific laboratory experiment are characteristic elements that affect the plasma and plasma processes that are being investigated. Space plasma is not exempt from an analogous set of constraining factors that likewise influence the phenomena that occur. Typically, each morphologically distinct region of space has associated with it plasma that is unique by virtue of the various mechanisms responsible for the plasma's presence there, as if the plasma were produced by a unique source. 
Boundary effects that typically constrain the possible parameter values to lie within one or more restricted ranges are inescapable in laboratory plasma. The goal of a laboratory experiment is to examine the relevant physics within these ranges and extrapolate the results to space conditions that may or may not be subject to any restrictions on the values of the plasma parameters. The interrelationship between laboratory and space plasma experiments has been cultivated at a low level and the potential scientific benefit in this area has yet to be realized. The few but excellent examples of joint papers, joint experiments, and directly relevant cross-disciplinary citations are a direct result of the emphasis placed on this interrelationship two decades ago. Building on this special issue Plasma Physics and Controlled Fusion plans to create a dedicated webpage to highlight papers directly relevant to this field published either in the recent past or in the future. It is hoped that this resource will appeal to the readership in the laboratory-experiment and space-plasma communities and improve the cross-fertilization between them.

  4. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    NASA Astrophysics Data System (ADS)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernovae of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/-1.6)% of the IMF explodes as core-collapse supernovae (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. 
Chempy could be a powerful tool to confront predictions from stellar nucleosynthesis with far more complex abundance data sets and to refine the physical processes governing the chemical evolution of stellar systems.

  5. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best suited to the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) in producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem - including NP-complete ones - before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
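A minimal real-coded GA makes the role of these parameters concrete. This is a generic textbook sketch, not the authors' preprocessor or GA; the test function, operator choices, and parameter values are all hypothetical.

```python
import random

random.seed(4)

def fitness(x):
    # Unconstrained test function: maximize -(x - 3)^2, peaked at x = 3.
    return -(x - 3.0) ** 2

def genetic_algorithm(pop_size=40, generations=60, search=(-10.0, 10.0),
                      crossover_p=0.9, mutation_p=0.2, sigma=0.5):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. The keyword arguments mirror the parameters the
    preprocessor would tune (population size, search space, crossover
    and mutation probabilities)."""
    lo, hi = search
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection of two parents.
            a = max(random.sample(pop, 3), key=fitness)
            b = max(random.sample(pop, 3), key=fitness)
            # Blend crossover, then Gaussian mutation, clipped to the bounds.
            child = (a + b) / 2.0 if random.random() < crossover_p else a
            if random.random() < mutation_p:
                child += random.gauss(0.0, sigma)
            nxt.append(min(hi, max(lo, child)))
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm()
```

Changing any of the keyword arguments changes convergence behavior markedly, which is exactly why the paper argues these settings should be chosen per problem by a preprocessor rather than fixed a priori.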

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dierickx, Marion I. P.; Loeb, Abraham, E-mail: mdierickx@cfa.harvard.edu, E-mail: aloeb@cfa.harvard.edu

    The extensive span of the Sagittarius (Sgr) stream makes it a promising tool for studying the gravitational potential of the Milky Way (MW). Characterizing its stellar kinematics can constrain halo properties and provide a benchmark for the paradigm of galaxy formation from cold dark matter. Accurate models of the disruption dynamics of the Sgr progenitor are necessary to employ this tool. Using a combination of analytic modeling and N-body simulations, we build a new model of the Sgr orbit and resulting stellar stream. In contrast to previous models, we simulate the full infall trajectory of the Sgr progenitor from the time it first crossed the MW virial radius 8 Gyr ago. An exploration of the parameter space of initial phase-space conditions yields tight constraints on the angular momentum of the Sgr progenitor. Our best-fit model is the first to accurately reproduce existing data on the 3D positions and radial velocities of the debris detected 100 kpc away in the MW halo. In addition to replicating the mapped stream, the simulation also predicts the existence of several arms of the Sgr stream extending to hundreds of kiloparsecs. The two most distant stars known in the MW halo coincide with the predicted structure. Additional stars in the newly predicted arms can be found with future data from the Large Synoptic Survey Telescope. Detecting a statistical sample of stars in the most distant Sgr arms would provide an opportunity to constrain the MW potential out to unprecedented Galactocentric radii.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Powalka, Mathieu; Lançon, Ariane; Duc, Pierre-Alain

    Large samples of globular clusters (GC) with precise multi-wavelength photometry are becoming increasingly available and can be used to constrain the formation history of galaxies. We present the results of an analysis of Milky Way (MW) and Virgo core GCs based on 5 optical-near-infrared colors and 10 synthetic stellar population models. For the MW GCs, the models tend to agree on photometric ages and metallicities, with values similar to those obtained with previous studies. When used with Virgo core GCs, for which photometry is provided by the Next Generation Virgo cluster Survey (NGVS), the same models generically return younger ages. This is a consequence of the systematic differences observed between the locus occupied by Virgo core GCs and models in panchromatic color space. Only extreme fine-tuning of the adjustable parameters available to us can make the majority of the best-fit ages old. Although we cannot exclude that the formation history of the Virgo core may lead to more conspicuous populations of relatively young GCs than in other environments, we emphasize that the intrinsic properties of the Virgo GCs are likely to differ systematically from those assumed in the models. Thus, the large wavelength coverage and photometric quality of modern GC samples, such as those used here, is not by itself sufficient to better constrain the GC formation histories. Models matching the environment-dependent characteristics of GCs in multi-dimensional color space are needed to improve the situation.

  8. Wireless Technology Recognition Based on RSSI Distribution at Sub-Nyquist Sampling Rate for Constrained Devices.

    PubMed

    Liu, Wei; Kulin, Merima; Kazaz, Tarik; Shahid, Adnan; Moerman, Ingrid; De Poorter, Eli

    2017-09-12

    Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies becomes increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and the strict condition of sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.
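The distribution-based idea can be sketched as follows: bin a stream of RSSI samples into a normalized histogram and compare it against per-technology reference histograms. The bin edges, the synthetic "technology" classes, and the nearest-histogram rule are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

BINS = [-90, -80, -70, -60, -50, -40]      # dBm bin edges (assumed)

def histogram(rssi):
    """Normalized histogram of RSSI samples over the fixed bins."""
    counts = [0] * (len(BINS) - 1)
    for v in rssi:
        for i in range(len(BINS) - 1):
            if BINS[i] <= v < BINS[i + 1]:
                counts[i] += 1
                break
    total = sum(counts) or 1
    return [c / total for c in counts]

def classify(sample, references):
    """Nearest reference histogram by Euclidean distance."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    h = histogram(sample)
    return min(references, key=lambda name: dist(h, references[name]))

# Two synthetic reference classes with well-separated RSSI levels.
rng = random.Random(1)
refs = {
    "bursty":    histogram([rng.gauss(-45, 3) for _ in range(500)]),
    "streaming": histogram([rng.gauss(-75, 3) for _ in range(500)]),
}
label = classify([rng.gauss(-74, 3) for _ in range(200)], refs)
```

In the paper's setting the feature space is richer (e.g., packet durations derived from RSSI), but the core observation is the same: the shape of the RSSI distribution alone already separates technologies.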

  9. Wireless Technology Recognition Based on RSSI Distribution at Sub-Nyquist Sampling Rate for Constrained Devices

    PubMed Central

    Liu, Wei; Kulin, Merima; Kazaz, Tarik; De Poorter, Eli

    2017-01-01

    Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies becomes increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and the strict condition of sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals’ modulation schemes and medium access mechanisms, and RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or directly using RSSI’s probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for recognition of wideband technologies on constrained devices in the context of dynamic spectrum access. PMID:28895879

  10. Phase space flows for non-Hamiltonian systems with constraints

    NASA Astrophysics Data System (ADS)

    Sergi, Alessandro

    2005-09-01

    In this paper, non-Hamiltonian systems with holonomic constraints are treated by a generalization of Dirac’s formalism. Non-Hamiltonian phase space flows can be described by generalized antisymmetric brackets or by general Liouville operators which cannot be derived from brackets. Both situations are treated. In the first case, a Nosé-Dirac bracket is introduced as an example. In the second one, Dirac’s recipe for projecting out constrained variables from time translation operators is generalized and then applied to non-Hamiltonian linear response. Dirac’s formalism avoids spurious terms in the response function of constrained systems. However, corrections coming from phase space measure must be considered for general perturbations.

  11. Species richness and morphological diversity of passerine birds

    PubMed Central

    Ricklefs, Robert E.

    2012-01-01

    The relationship between species richness and the occupation of niche space can provide insight into the processes that shape patterns of biodiversity. For example, if species interactions constrained coexistence, one might expect tendencies toward even spacing within niche space and positive relationships between diversity and total niche volume. I use morphological diversity of passerine birds as a proxy for diet, foraging maneuvers, and foraging substrates and examine the morphological space occupied by regional and local passerine avifaunas. Although independently diversified regional faunas exhibit convergent morphology, species are clustered rather than evenly distributed, the volume of the morphological space is weakly related to number of species per taxonomic family, and morphological volume is unrelated to number of species within both regional avifaunas and local assemblages. These results seemingly contradict patterns expected when species interactions constrain regional or local diversity, and they suggest a larger role for diversification, extinction, and dispersal limitation in shaping species richness. PMID:22908271

  12. Comprehensive, Process-based Identification of Hydrologic Models using Satellite and In-situ Water Storage Data: A Multi-objective calibration Approach

    NASA Astrophysics Data System (ADS)

    Abdo Yassin, Fuad; Wheater, Howard; Razavi, Saman; Sapriza, Gonzalo; Davison, Bruce; Pietroniro, Alain

    2015-04-01

    The credible identification of vertical and horizontal hydrological components and their associated parameters is very challenging (if not impossible) by only constraining the model to streamflow data, especially in regions where the vertical processes significantly dominate the horizontal processes. The prairie areas of the Saskatchewan River basin, a major water system in Canada, demonstrate such behavior, where the hydrologic connectivity and vertical fluxes are mainly controlled by the amount of surface and sub-surface water storages. In this study, we develop a framework for distributed hydrologic model identification and calibration that jointly constrains the model response (i.e., streamflows) as well as a set of model state variables (i.e., water storages) to observations. This framework is set up in the form of multi-objective optimization, where multiple performance criteria are defined and used to simultaneously evaluate the fidelity of the model to streamflow observations and observed (estimated) changes of water storage in the gridded landscape over daily and monthly time scales. The time series of estimated changes in total water storage (including soil, canopy, snow and pond storages) used in this study were derived from an experimental study enhanced by the information obtained from the GRACE satellite. We test this framework on the calibration of a Land Surface Scheme-Hydrology model, called MESH (Modélisation Environnementale Communautaire - Surface and Hydrology), for the Saskatchewan River basin. Pareto Archived Dynamically Dimensioned Search (PA-DDS) is used as the multi-objective optimization engine. The significance of using the developed framework is demonstrated in comparison with the results obtained through a conventional calibration approach to streamflow observations.
The approach of incorporating water storage data into the model identification process can more tightly constrain the posterior parameter space, more comprehensively evaluate the model fidelity, and yield more credible predictions.
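The multi-objective selection step behind such a calibration can be illustrated with a minimal Pareto filter: keep only parameter sets whose pair of objective values (say, a streamflow error and a storage error) is not dominated by any other candidate. The objective values below are hypothetical, not MESH/PA-DDS output.

```python
def dominates(a, b):
    """a dominates b if no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of (obj1, obj2) tuples."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (streamflow error, storage error) pairs for five candidates.
candidates = [(0.30, 0.50), (0.25, 0.60), (0.40, 0.40), (0.35, 0.35), (0.50, 0.30)]
front = pareto_front(candidates)
```

Here (0.40, 0.40) is dominated by (0.35, 0.35) and drops out, while the remaining four candidates trade one objective off against the other; a multi-objective engine such as PA-DDS maintains exactly this kind of archive while it searches.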

  13. James Webb Space Telescope Studies of Dark Energy

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.; Stiavelli, Massimo; Mather, John C.

    2010-01-01

    The Hubble Space Telescope (HST) has contributed significantly to studies of dark energy. It was used to find the first evidence of deceleration at z=1.8 (Riess et al. 2001) through the serendipitous discovery of a type 1a supernova (SN1a) in the Hubble Deep Field. The discovery of deceleration at z greater than 1 was confirmation that the apparent acceleration at low redshift (Riess et al. 1998; Perlmutter et al. 1999) was due to dark energy rather than observational or astrophysical effects such as systematic errors, evolution in the SN1a population or intergalactic dust. The GOODS project and associated follow-up discovered 21 SN1a, expanding on this result (Riess et al. 2007). HST has also been used to constrain cosmological parameters and dark energy through weak lensing measurements in the COSMOS survey (Massey et al 2007; Schrabback et al 2009) and strong gravitational lensing with measured time delays (Suyu et al 2010). Constraints on dark energy are often parameterized as the equation of state, w = P/ρ. For the cosmological constant model, w = -1 at all times; other models predict a change with time, sometimes parameterized generally as w(a) or approximated as w_0 + (1-a)w_a, where a = (1+z)^(-1) is the scale factor of the universe relative to its current scale. Dark energy can be constrained through several measurements. Standard candles, such as SN1a, provide a direct measurement of the luminosity distance as a function of redshift, which can be converted to H(z), the change in the Hubble constant with redshift. An analysis of weak lensing in a galaxy field can be used to derive the angular-diameter distance from the weak-lensing equation and to measure the power spectrum of dark-matter halos, which constrains the growth of structure in the Universe. Baryonic acoustic oscillations (BAO), imprinted on the distribution of matter at recombination, provide a standard rod for measuring the cosmological geometry.
Strong gravitational lensing of a time-variable source gives the angular diameter distance through measured time delays of multiple images. Finally, the growth of structure can also be constrained by measuring the mass of the largest galaxy clusters over cosmic time. HST has contributed to the study of dark energy through SN1a and gravitational lensing, as discussed above. HST has also helped to characterize galaxy clusters and the HST-measured constraints on the current Hubble constant H_0 are relevant to the interpretation of dark energy measurements (Riess et al 2009a). HST has not been used to constrain BAO as the large number of galaxy redshifts required, of order 100 million, is poorly matched to HST's capabilities. As the successor to HST, the James Webb Space Telescope (JWST; Gardner et al 2006) will continue and extend HST's dark energy work in several ways.
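The equation-of-state parameterization described above can be evaluated directly. The sample (w0, wa) values below are placeholders for illustration, not fits quoted in the text.

```python
def w_of_z(z, w0=-1.0, wa=0.0):
    """Equation of state w(a) = w0 + (1 - a)*wa with a = 1/(1 + z)."""
    a = 1.0 / (1.0 + z)          # scale factor relative to today
    return w0 + (1.0 - a) * wa

# A cosmological constant (w0 = -1, wa = 0) gives w = -1 at every redshift,
# while a nonzero wa makes w drift toward w0 + wa at high z.
lcdm = [w_of_z(z) for z in (0.0, 0.5, 1.0, 2.0)]
evolving = w_of_z(1.0, w0=-1.0, wa=0.5)    # a = 0.5, so w = -1 + 0.25 = -0.75
```

Distinguishing a constant w from an evolving w(a) is precisely what the luminosity-distance, weak-lensing, BAO, and cluster-growth measurements listed above are combined to do.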

  14. Researching Children's Understanding of Safety: An Auto-Driven Visual Approach

    ERIC Educational Resources Information Center

    Agbenyega, Joseph S.

    2011-01-01

    Safe learning spaces allow children to explore their environment in an open and inquiring way, whereas unsafe spaces constrain, frustrate and disengage children from experiencing the fullness of their learning spaces. This study explores how children make sense of safe and unsafe learning spaces, and how this understanding affects the ways they…

  15. Optimization of Modeled Land-Atmosphere Exchanges of Water and Energy in an Isotopically-Enabled Land Surface Model by Bayesian Parameter Calibration

    NASA Astrophysics Data System (ADS)

    Wong, T. E.; Noone, D. C.; Kleiber, W.

    2014-12-01

    The single largest uncertainty in climate model energy balance is the surface latent heating over tropical land. Furthermore, the partitioning of the total latent heat flux into contributions from surface evaporation and plant transpiration is of great importance, but notoriously poorly constrained. Resolving these issues will require better exploiting information which lies at the interface between observations and advanced modeling tools, both of which are imperfect. There are remarkably few observations which can constrain these fluxes, placing strict requirements on developing statistical methods to maximize the use of limited information to best improve models. Previous work has demonstrated the power of incorporating stable water isotopes into land surface models for further constraining ecosystem processes. We present results from a stable water isotopically-enabled land surface model (iCLM4), including model experiments partitioning the latent heat flux into contributions from plant transpiration and surface evaporation. It is shown that the partitioning results are sensitive to the parameterization of kinetic fractionation used. We discuss and demonstrate an approach to calibrating select model parameters to observational data in a Bayesian estimation framework, requiring Markov Chain Monte Carlo sampling of the posterior distribution, which is shown to constrain uncertain parameters as well as inform relevant values for operational use. Finally, we discuss the application of the estimation scheme to iCLM4, including entropy as a measure of information content and specific challenges which arise in calibrating models with a large number of parameters.
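A minimal sketch of such a Markov Chain Monte Carlo calibration loop, assuming a one-parameter toy model with Gaussian observation noise (none of these quantities are iCLM4 values):

```python
import math
import random

def log_posterior(theta, obs, sigma=0.5):
    """Flat prior on [0, 10]; Gaussian likelihood around model(theta) = theta."""
    if not 0.0 <= theta <= 10.0:
        return -math.inf
    return -sum((y - theta) ** 2 for y in obs) / (2 * sigma ** 2)

def metropolis(obs, n_steps=5000, step=0.3, seed=2):
    """Random-walk Metropolis sampling of the posterior over theta."""
    rng = random.Random(seed)
    theta, lp = 5.0, log_posterior(5.0, obs)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0, step)
        lp_prop = log_posterior(prop, obs)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Synthetic observations around a "true" value of 3.0.
rng = random.Random(0)
obs = [3.0 + rng.gauss(0, 0.5) for _ in range(50)]
chain = metropolis(obs)
estimate = sum(chain[1000:]) / len(chain[1000:])   # discard burn-in
```

The posterior mean recovered from the chain concentrates near the true parameter as observations accumulate, which is the sense in which the abstract's calibration "constrains uncertain parameters"; in a model with many parameters the same loop runs over a vector of parameters and convergence becomes much harder, which is the challenge the abstract flags.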

  16. Impact of TRMM and SSM/I-derived Precipitation and Moisture Data on the GEOS Global Analysis

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.; Olson, William S.

    1999-01-01

    Current global analyses contain significant errors in primary hydrological fields such as precipitation, evaporation, and related cloud and moisture in the tropics. The Data Assimilation Office at NASA's Goddard Space Flight Center has been exploring the use of space-based rainfall and total precipitable water (TPW) estimates to constrain these hydrological parameters in the Goddard Earth Observing System (GEOS) global data assimilation system. We present results showing that assimilating the 6-hour averaged rain rates and TPW estimates from the Tropical Rainfall Measuring Mission (TRMM) and Special Sensor Microwave/Imager (SSM/I) instruments improves not only the precipitation and moisture estimates but also reduces state-dependent systematic errors in key climate parameters directly linked to convection, such as the outgoing longwave radiation, clouds, and the large-scale circulation. The improved analysis also yields better short-range forecasts beyond 1 day, but the impact is relatively modest compared with improvements in the time-averaged analysis. The study shows that, in the presence of biases and other errors of the forecast model, improving the short-range forecast is not necessarily a prerequisite for improving the assimilation as a climate data set. The full impact of a given type of observation on the assimilated data set should not be measured solely in terms of forecast skills.

  17. The four-loop six-gluon NMHV ratio function

    DOE PAGES

    Dixon, Lance J.; von Hippel, Matt; McLeod, Andrew J.

    2016-01-11

    We use the hexagon function bootstrap to compute the ratio function which characterizes the next-to-maximally-helicity-violating (NMHV) six-point amplitude in planar N=4 super-Yang-Mills theory at four loops. A powerful constraint comes from dual superconformal invariance, in the form of a Q̄ differential equation, which heavily constrains the first derivatives of the transcendental functions entering the ratio function. At four loops, it leaves only a 34-parameter space of functions. Constraints from the collinear limits, and from the multi-Regge limit at the leading-logarithmic (LL) and next-to-leading-logarithmic (NLL) order, suffice to fix these parameters and obtain a unique result. We test the result against multi-Regge predictions at NNLL and N^3LL, and against predictions from the operator product expansion involving one and two flux-tube excitations; all cross-checks are satisfied. We study the analytical and numerical behavior of the parity-even and parity-odd parts on various lines and surfaces traversing the three-dimensional space of cross ratios. As part of this program, we characterize all irreducible hexagon functions through weight eight in terms of their coproduct. As a result, we also provide representations of the ratio function in particular kinematic regions in terms of multiple polylogarithms.

  18. The four-loop six-gluon NMHV ratio function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Lance J.; von Hippel, Matt; McLeod, Andrew J.

    2016-01-11

    We use the hexagon function bootstrap to compute the ratio function which characterizes the next-to-maximally-helicity-violating (NMHV) six-point amplitude in planar N = 4 super-Yang-Mills theory at four loops. A powerful constraint comes from dual superconformal invariance, in the form of a Q̄ differential equation, which heavily constrains the first derivatives of the transcendental functions entering the ratio function. At four loops, it leaves only a 34-parameter space of functions. Constraints from the collinear limits, and from the multi-Regge limit at the leading-logarithmic (LL) and next-to-leading-logarithmic (NLL) order, suffice to fix these parameters and obtain a unique result. We test the result against multi-Regge predictions at NNLL and N^3LL, and against predictions from the operator product expansion involving one and two flux-tube excitations; all cross-checks are satisfied. We also study the analytical and numerical behavior of the parity-even and parity-odd parts on various lines and surfaces traversing the three-dimensional space of cross ratios. As part of this program, we characterize all irreducible hexagon functions through weight eight in terms of their coproduct. Furthermore, we provide representations of the ratio function in particular kinematic regions in terms of multiple polylogarithms.

  19. Electroweak Symmetry Breaking and the Higgs Boson: Confronting Theories at Colliders

    NASA Astrophysics Data System (ADS)

    Azatov, Aleksandr; Galloway, Jamison

    2013-01-01

    In this review, we discuss methods of parsing direct information from collider experiments regarding the Higgs boson and describe simple ways in which experimental likelihoods can be consistently reconstructed and interfaced with model predictions in pertinent parameter spaces. We review prevalent scenarios for extending the electroweak symmetry breaking sector and emphasize their predictions for nonstandard Higgs phenomenology that could be observed in Large Hadron Collider (LHC) data if naturalness is realized in particular ways. Specifically we identify how measurements of Higgs couplings can be used to imply the existence of new physics at particular scales within various contexts. The most dominant production and decay modes of the Higgs-like state observed in the early data sets have proven to be consistent with predictions of the Higgs boson of the Standard Model, though interesting directions in subdominant channels still exist and will require our careful attention in further experimental tests. Slightly anomalous rates in certain channels at the early LHC have spurred effort in model building and spectra analyses of particular theories, and we discuss these developments in some detail. Finally, we highlight some parameter spaces of interest in order to give examples of how the data surrounding the new state can most effectively be used to constrain specific models of weak scale physics.

  20. Convective-core Overshoot and Suppression of Oscillations: Constraints from Red Giants in NGC 6811

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arentoft, T.; Brogaard, K.; Jessen-Hansen, J.

    Using data from the NASA spacecraft Kepler, we study solar-like oscillations in red giant stars in the open cluster NGC 6811. We determine oscillation frequencies, frequency separations, period spacings of mixed modes, and mode visibilities for eight cluster giants. The oscillation parameters show that these stars are helium-core-burning red giants. The eight stars form two groups with very different oscillation power spectra; the four stars with the lowest Δν values display rich sets of mixed l = 1 modes, while this is not the case for the four stars with higher Δν. For the four stars with lowest Δν, we determine the asymptotic period spacing of the mixed modes, ΔP, which together with the masses we derive for all eight stars suggest that they belong to the so-called secondary clump. Based on the global oscillation parameters, we present initial theoretical stellar modeling that indicates that we can constrain convective-core overshoot on the main sequence and in the helium-burning phase for these ~2 M_⊙ stars. Finally, our results indicate less mode suppression than predicted by recent theories for magnetic suppression of certain oscillation modes in red giants.

  1. Closed-form solutions for linear regulator-design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions of the algebraic Riccati equation to design spaces that can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring-mass systems are shown to illustrate key points.
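For intuition, the scalar version of the LQR/Riccati machinery has a simple closed form: for x' = a*x + b*u with cost ∫(q*x² + r*u²)dt, the algebraic Riccati equation 2*a*p - (b²/r)*p² + q = 0 has the positive root computed below. The system numbers are a toy damper example, not taken from the paper.

```python
import math

def lqr_scalar(a, b, q, r):
    """Closed-form positive root of the scalar algebraic Riccati equation,
    plus the resulting optimal gain k (control law u = -k*x)."""
    p = r * (a + math.sqrt(a * a + (b * b / r) * q)) / (b * b)
    k = b * p / r
    return p, k

# Toy stable plant x' = -x + u with unit state and control weights.
p, k = lqr_scalar(a=-1.0, b=1.0, q=1.0, r=1.0)

# The residual of the Riccati equation should vanish at the solution,
# and the closed-loop pole a - b*k must sit in the left half plane.
residual = 2 * (-1.0) * p - p * p + 1.0
closed_loop_pole = -1.0 - 1.0 * k
```

Choosing the weights q and r is exactly the "constrained choice among several parameters" the abstract refers to: in the matrix case, restricting Q and R so that the resulting gains correspond to realizable stiffness and damping is what confines the design to passive hardware.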

  2. Hard X-ray Detectability of Small-Scale Coronal Heating Events

    NASA Astrophysics Data System (ADS)

    Marsh, A.; Glesener, L.; Klimchuk, J. A.; Bradshaw, S. J.; Smith, D. M.; Hannah, I. G.

    2016-12-01

    The nanoflare heating theory predicts the ubiquitous presence of hot ( >5 MK) plasma in the solar corona, but evidence for this high-temperature component has been scarce. Current hard X-ray instruments such as RHESSI lack the sensitivity to see the trace amounts of this plasma that are predicted by theoretical models. New hard X-ray instruments that use focusing optics, such as FOXSI (the Focusing Optics X-ray Solar Imager) and NuSTAR (the Nuclear Spectroscopic Telescope Array) can extend the visible parameter space of nanoflare "storms" that create hot plasma. We compare active-region data from FOXSI and NuSTAR with a series of EBTEL hydrodynamic simulations, and constrain nanoflare properties to give good agreement with observations.

  3. Hard X-ray Detectability of Small-Scale Coronal Heating Events

    NASA Astrophysics Data System (ADS)

    Marsh, Andrew; Glesener, Lindsay; Klimchuk, James A.; Bradshaw, Stephen; Smith, David; Hannah, Iain

    2016-05-01

    The nanoflare heating theory predicts the ubiquitous presence of hot (~>5 MK) plasma in the solar corona, but evidence for this high-temperature component has been scarce. Current hard X-ray instruments such as RHESSI lack the sensitivity to see the trace amounts of this plasma that are predicted by theoretical models. New hard X-ray instruments that use focusing optics, such as FOXSI (the Focusing Optics X-ray Solar Imager) and NuSTAR (the Nuclear Spectroscopic Telescope Array) can extend the visible parameter space of nanoflare “storms” that create hot plasma. We compare active-region data from FOXSI and NuSTAR with a series of EBTEL hydrodynamic simulations, and constrain nanoflare properties to give good agreement with observations.

  4. Hard X-Ray Constraints on Small-Scale Coronal Heating Events

    NASA Astrophysics Data System (ADS)

    Marsh, Andrew; Smith, David M.; Glesener, Lindsay; Klimchuk, James A.; Bradshaw, Stephen; Hannah, Iain; Vievering, Juliana; Ishikawa, Shin-Nosuke; Krucker, Sam; Christe, Steven

    2017-08-01

    A large body of evidence suggests that the solar corona is heated impulsively. Small-scale heating events known as nanoflares may be ubiquitous in quiet and active regions of the Sun. Hard X-ray (HXR) observations with unprecedented sensitivity at energies >3 keV have recently been enabled through the use of focusing optics. We analyze active region spectra from the FOXSI-2 sounding rocket and the NuSTAR satellite to constrain the physical properties of nanoflares simulated with the EBTEL field-line-averaged hydrodynamics code. We model a wide range of X-ray spectra by varying the nanoflare heating amplitude, duration, delay time, and filling factor. Additional constraints on the nanoflare parameter space are determined from energy constraints and EUV/SXR data.

  5. Constraining scalar resonances with top-quark pair production at the LHC

    NASA Astrophysics Data System (ADS)

    Franzosi, Diogo Buarque; Fabbri, Federica; Schumann, Steffen

    2018-03-01

    Constraints on models which predict resonant top-quark pair production at the LHC are provided via a reinterpretation of the Standard Model (SM) particle-level measurement of the top-antitop invariant mass distribution, m(tt̄). We make use of state-of-the-art Monte Carlo event simulation to perform a direct comparison with measurements of m(tt̄) in the semi-leptonic channels, considering both the boosted and the resolved regime of the hadronic top decays. A simplified model to describe various scalar resonances decaying into top quarks is considered, including CP-even and CP-odd, color-singlet and color-octet states, and the excluded regions in the respective parameter spaces are provided.

  6. Six-quark decays of the Higgs boson in supersymmetry with R-parity violation.

    PubMed

    Carpenter, Linda M; Kaplan, David E; Rhee, Eun-Jung

    2007-11-23

    Both electroweak precision measurements and simple supersymmetric extensions of the standard model prefer a mass of the Higgs boson less than the experimental lower limit (on a standard-model-like Higgs boson) of 114 GeV. We show that supersymmetric models with R parity violation and baryon-number violation have a significant range of parameter space in which the Higgs boson dominantly decays to six jets. These decays are much more weakly constrained by current CERN LEP analyses and would allow for a Higgs boson mass near that of the Z. In general, lighter scalar quark and other superpartner masses are allowed. The Higgs boson would potentially be discovered at hadron colliders via the appearance of new displaced vertices.

  7. Impulsive acceleration and scatter-free transport of about 1 MeV per nucleon ions in (He-3)-rich solar particle events

    NASA Technical Reports Server (NTRS)

    Mason, G. M.; Ng, C. K.; Klecker, B.; Green, G.

    1989-01-01

    Impulsive solar energetic particle (SEP) events are studied to: (1) describe a distinct class of SEP ion events observed in interplanetary space, and (2) test models of focused transport through detailed comparisons of numerical model prediction with the data. An attempt will also be made to describe the transport and scattering properties of the interplanetary medium during the times these events are observed and to derive source injection profiles in these events. ISEE 3 and Helios 1 magnetic field and plasma data are used to locate the approximate coronal connection points of the spacecraft to organize the particle anisotropy data and to constrain some free parameters in the modeling of flare events.

  8. Majorana dark matter with B+L gauge symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Wei; Guo, Huai-Ke; Zhang, Yongchao

    Here, we present a new model that extends the Standard Model (SM) with the local B + L symmetry, and point out that the lightest new fermion, introduced to cancel anomalies and stabilized automatically by the B + L symmetry, can serve as the cold dark matter candidate. We also study constraints on the model from Higgs measurements, electroweak precision measurements, as well as the relic density and direct detection of the dark matter. Our numerical results reveal that the pseudo-vector coupling of the dark matter with Z and the Yukawa coupling with the SM Higgs are highly constrained by the latest results of LUX, while there is viable parameter space that could satisfy all the constraints and give testable predictions.

  9. Majorana dark matter with B+L gauge symmetry

    DOE PAGES

    Chao, Wei; Guo, Huai-Ke; Zhang, Yongchao

    2017-04-07

    Here, we present a new model that extends the Standard Model (SM) with the local B + L symmetry, and point out that the lightest new fermion, introduced to cancel anomalies and stabilized automatically by the B + L symmetry, can serve as the cold dark matter candidate. We also study constraints on the model from Higgs measurements, electroweak precision measurements, as well as the relic density and direct detection of the dark matter. Our numerical results reveal that the pseudo-vector coupling of the dark matter with Z and the Yukawa coupling with the SM Higgs are highly constrained by the latest results of LUX, while there is viable parameter space that could satisfy all the constraints and give testable predictions.

  10. Improving LHC searches for dark photons using lepton-jet substructure

    NASA Astrophysics Data System (ADS)

    Barello, G.; Chang, Spencer; Newby, Christopher A.; Ostdiek, Bryan

    2017-03-01

    Collider signals of dark photons are an exciting probe for new gauge forces and are characterized by events with boosted lepton jets. Existing techniques are efficient in searching for muonic lepton jets but due to substantial backgrounds have difficulty constraining lepton jets containing only electrons. This is unfortunate since upcoming intensity frontier experiments are sensitive to dark photon masses which only allow electron decays. Analyzing a recently proposed model of kinetic mixing, with new scalar particles decaying into dark photons, we find that existing techniques for electron jets can be substantially improved. We show that using lepton-jet-substructure variables, in association with a boosted decision tree, improves background rejection, significantly increasing the LHC's reach for dark photons in this region of parameter space.
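    The gain from exploiting lepton-jet substructure can be illustrated with a toy stand-in. Instead of the boosted decision tree used in the paper, the sketch below cuts on a single invented angular-separation variable dR between the two electron candidates in a lepton jet; the widths, sample sizes, and the cut value are all assumptions, not the paper's simulation. Collimated signal pairs survive the cut while broader background pairs are rejected.

```python
import random

random.seed(1)

def toy_events(n, spread):
    # Each "lepton jet" is reduced to one substructure variable: the
    # angular separation dR of its two electron candidates (toy model).
    return [abs(random.gauss(0.0, spread)) for _ in range(n)]

# Assumed, illustrative widths: a boosted dark-photon decay gives a
# collimated pair (small dR); QCD-like background pairs are broader.
signal = toy_events(5000, 0.05)
background = toy_events(5000, 0.30)

cut = 0.1  # keep events with dR below the cut
sig_eff = sum(dr < cut for dr in signal) / len(signal)
bkg_eff = sum(dr < cut for dr in background) / len(background)
print(f"signal efficiency {sig_eff:.2f}, background efficiency {bkg_eff:.2f}")
```

A multivariate classifier combining several such variables, as in the paper, would improve on any single cut; this sketch only shows why substructure carries discriminating power at all.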

  11. Qualitative simulation for process modeling and control

    NASA Technical Reports Server (NTRS)

    Dalle Molle, D. T.; Edgar, T. F.

    1989-01-01

    A qualitative model is developed for a first-order system with a proportional-integral controller, without precise knowledge of the process or controller parameters. Simulation of the qualitative model yields all of the solutions to the system equations. In developing the qualitative model, a necessary condition for the occurrence of oscillatory behavior is identified. Initializations that cannot exhibit oscillatory behavior produce a finite set of behaviors. When the phase-space trajectory of the oscillatory solutions is properly constrained, these initializations produce an infinite but comprehensible set of asymptotically stable behaviors. While the predictions include all possible behaviors of the real system, a class of spurious behaviors has been identified. When limited numerical information is included in the model, the number of predictions is significantly reduced.

  12. Two Higgs doublet model with vectorlike leptons and contributions to pp → WW and H → WW

    NASA Astrophysics Data System (ADS)

    Dermíšek, Radovan; Lunghi, Enrico; Shin, Seodong

    2016-02-01

    We study a two Higgs doublet model extended by vectorlike leptons mixing with one family of standard model leptons. The generated flavor-violating couplings between heavy and light leptons can dramatically alter the decay patterns of heavier Higgs bosons. We focus on pp → H → ν4νμ → Wμνμ, where ν4 is a new neutral lepton, and study possible effects of this process on the measurements of pp → WW and H → WW, since it leads to the same final states. We discuss predictions for contributions to pp → WW and H → WW and their correlations from the region of the parameter space that satisfies all available constraints, including precision electroweak observables and limits from pair production of vectorlike leptons. Large contributions, close to current limits, favor the small tan β region of the parameter space. We find that, as a result of the cuts adopted in experimental analyses, the contribution to pp → WW can be an order of magnitude larger than the contribution to H → WW. Thus, future precise measurements of pp → WW will further constrain the parameters of the model. In addition, we also consider possible contributions to pp → WW from heavy Higgs decays into a new charged lepton e4 (H → e4μ → Wμνμ), exotic SM Higgs decays, and pair production of vectorlike leptons.

  13. Obtaining the Gröbner Initialization for the Ground Flash Fraction Retrieval Algorithm

    NASA Technical Reports Server (NTRS)

    Solakiewicz, R.; Attele, R.; Koshak, W.

    2011-01-01

    At optical wavelengths and from the vantage point of space, the multiple-scattering cloud medium obscures one's view and prevents one from easily determining which flashes strike the ground. However, recent investigations have made some progress on the (easier, but still difficult) problem of estimating the ground flash fraction in a set of N flashes observed from space. In the study by Koshak, a Bayesian inversion method was introduced for retrieving the fraction of ground flashes in a set of flashes observed from a (low Earth orbiting or geostationary) satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters, a scalar function of three variables (one of which is the ground flash fraction) was minimized by a numerical method. This method has formed the basis of a Ground Flash Fraction Retrieval Algorithm (GoFFRA) that is being tested as part of GOES-R GLM risk reduction.
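    The mixed exponential idea can be sketched in miniature. Below, optical amplitudes are drawn from a two-component mixture of exponentials whose mixing weight alpha plays the role of the ground flash fraction, and the three model parameters are recovered by minimizing the negative log-likelihood over a coarse grid. All numbers (scales, sample size, grid) are invented for illustration; the actual GoFFRA uses a constrained model and a proper numerical minimizer.

```python
import math, random

random.seed(7)

# Toy stand-in for the GoFFRA idea: optical amplitudes follow a
# two-component (ground/cloud) mixture of exponentials; the mixing
# weight alpha is the ground flash fraction. All values are invented.
TRUE_ALPHA, SCALE_G, SCALE_C = 0.3, 1.0, 6.0
data = [random.expovariate(1 / SCALE_G) if random.random() < TRUE_ALPHA
        else random.expovariate(1 / SCALE_C) for _ in range(800)]

def nll(alpha, sg, sc):
    # negative log-likelihood of the mixed exponential model
    return -sum(math.log(alpha / sg * math.exp(-x / sg)
                         + (1 - alpha) / sc * math.exp(-x / sc)) for x in data)

# Coarse grid search over the three parameters (the paper minimizes the
# same kind of three-variable scalar function with a numerical method).
best = min(((a / 20, sg / 4, sc / 2)
            for a in range(1, 20)
            for sg in range(2, 9)
            for sc in range(4, 17, 2)),
           key=lambda p: nll(*p))
print("estimated ground flash fraction:", best[0])
```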

  14. Anisotropic hydrodynamics for conformal Gubser flow

    NASA Astrophysics Data System (ADS)

    Nopoush, Mohammad; Ryblewski, Radoslaw; Strickland, Michael

    2015-02-01

    We derive the equations of motion for a system undergoing boost-invariant longitudinal and azimuthally symmetric transverse "Gubser flow" using leading-order anisotropic hydrodynamics. This is accomplished by assuming that the one-particle distribution function is ellipsoidally symmetric in the momenta conjugate to the de Sitter coordinates used to parametrize the Gubser flow. We then demonstrate that the SO(3)_q symmetry in de Sitter space further constrains the anisotropy tensor to be of spheroidal form. The resulting system of two coupled ordinary differential equations for the de Sitter-space momentum scale and anisotropy parameter is solved numerically and compared to a recently obtained exact solution of the relaxation-time-approximation Boltzmann equation subject to the same flow. We show that anisotropic hydrodynamics describes the spatiotemporal evolution of the system better than all currently known dissipative hydrodynamics approaches. In addition, we prove that anisotropic hydrodynamics gives the exact solution of the relaxation-time-approximation Boltzmann equation in the ideal, η/s → 0, and free-streaming, η/s → ∞, limits.

  15. Kepler eclipsing binaries with δ Scuti components and tidally induced heartbeat stars

    NASA Astrophysics Data System (ADS)

    Guo, Zhao; Gies, Douglas R.; Matson, Rachel A.

    δ Scuti stars are generally fast rotators and their pulsations are not in the asymptotic regime, so the interpretation of their pulsation spectra is a very difficult task. Binary stars, especially eclipsing systems, offer us the opportunity to constrain the space of fundamental stellar parameters. First, we show the results for KIC9851944 and KIC4851217 as two case studies. We found the signature of the large frequency separation in the pulsational spectrum of both stars. The observed mean stellar density and the large frequency separation obey the linear relation in log-log space found by Suárez et al. (2014) and García Hernández et al. (2015). Second, we apply the simple 'one-layer model' of Moreno & Koenigsberger (1999) to the prototype heartbeat star KOI-54. The model naturally reproduces the tidally induced high-frequency oscillations, and their frequencies are very close to the observed frequencies at 90 and 91 times the orbital frequency.
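    The density-versus-large-frequency-separation relation invoked above is, to leading order, Δν ∝ √ρ̄. A minimal sketch of how an observed Δν constrains the mean stellar density, using solar-scaled units; the solar reference values and the example Δν of 55 μHz are illustrative, and the published fits give a log-log slope close to, but not exactly, the ideal value used here.

```python
# Scaling sketch: dnu ∝ sqrt(rho) implies rho/rho_sun = (dnu/dnu_sun)^2.
DNU_SUN = 134.8  # muHz, approximate solar large frequency separation
RHO_SUN = 1.408  # g/cm^3, approximate solar mean density

def mean_density(dnu_muhz, exponent=2.0):
    """Estimate stellar mean density (g/cm^3) from Delta-nu (muHz)."""
    return RHO_SUN * (dnu_muhz / DNU_SUN) ** exponent

# A delta Scuti star with Delta-nu ~ 55 muHz (illustrative value):
print(round(mean_density(55.0), 3))
```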

  16. Limits on active to sterile neutrino oscillations from disappearance searches in the MINOS, Daya Bay, and Bugey-3 experiments

    DOE PAGES

    Adamson, P.; An, F. P.; Anghel, I.; ...

    2016-10-07

    Searches for a light sterile neutrino have been performed independently by the MINOS and the Daya Bay experiments using the muon (anti)neutrino and electron antineutrino disappearance channels, respectively. In this Letter, results from both experiments are combined with those from the Bugey-3 reactor neutrino experiment to constrain oscillations into light sterile neutrinos. The three experiments are sensitive to complementary regions of parameter space, enabling the combined analysis to probe regions allowed by the Liquid Scintillator Neutrino Detector (LSND) and MiniBooNE experiments in a minimally extended four-neutrino flavor framework. Here, stringent limits on sin²2θ_μe are set over 6 orders of magnitude in the sterile mass-squared splitting Δm²_41. The sterile-neutrino mixing phase space allowed by the LSND and MiniBooNE experiments is excluded for Δm²_41 < 0.8 eV² at 95% CL_s.

  17. Gyrokinetic equations and full f solution method based on Dirac's constrained Hamiltonian and inverse Kruskal iteration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heikkinen, J. A.; Nora, M.

    2011-02-15

    Gyrokinetic equations of motion, the Poisson equation, and energy and momentum conservation laws are derived based on the reduced-phase-space Lagrangian and inverse Kruskal iteration introduced by Pfirsch and Correa-Restrepo [J. Plasma Phys. 70, 719 (2004)]. This formalism, together with the choice of the adiabatic invariant J as one of the averaging coordinates in phase space, provides an alternative to standard gyrokinetics. Within second order in the gyrokinetic parameter, the new equations do not show explicit ponderomotive-like or polarization-like terms. Pullback of particle information, with an iterated gyrophase and a field-dependent gyroradius function from the gyrocenter position defined by gyroaveraged coordinates, allows direct numerical integration of the gyrokinetic equations in particle simulation of the field and particles with the full distribution function. As an example, gyrokinetic systems with the polarization drift either present or absent in the equations of motion are considered.

  18. Limits on Active to Sterile Neutrino Oscillations from Disappearance Searches in the MINOS, Daya Bay, and Bugey-3 Experiments.

    PubMed

    Adamson, P; An, F P; Anghel, I; Aurisano, A; Balantekin, A B; Band, H R; Barr, G; Bishai, M; Blake, A; Blyth, S; Bock, G J; Bogert, D; Cao, D; Cao, G F; Cao, J; Cao, S V; Carroll, T J; Castromonte, C M; Cen, W R; Chan, Y L; Chang, J F; Chang, L C; Chang, Y; Chen, H S; Chen, Q Y; Chen, R; Chen, S M; Chen, Y; Chen, Y X; Cheng, J; Cheng, J-H; Cheng, Y P; Cheng, Z K; Cherwinka, J J; Childress, S; Chu, M C; Chukanov, A; Coelho, J A B; Corwin, L; Cronin-Hennessy, D; Cummings, J P; de Arcos, J; De Rijck, S; Deng, Z Y; Devan, A V; Devenish, N E; Ding, X F; Ding, Y Y; Diwan, M V; Dolgareva, M; Dove, J; Dwyer, D A; Edwards, W R; Escobar, C O; Evans, J J; Falk, E; Feldman, G J; Flanagan, W; Frohne, M V; Gabrielyan, M; Gallagher, H R; Germani, S; Gill, R; Gomes, R A; Gonchar, M; Gong, G H; Gong, H; Goodman, M C; Gouffon, P; Graf, N; Gran, R; Grassi, M; Grzelak, K; Gu, W Q; Guan, M Y; Guo, L; Guo, R P; Guo, X H; Guo, Z; Habig, A; Hackenburg, R W; Hahn, S R; Han, R; Hans, S; Hartnell, J; Hatcher, R; He, M; Heeger, K M; Heng, Y K; Higuera, A; Holin, A; Hor, Y K; Hsiung, Y B; Hu, B Z; Hu, T; Hu, W; Huang, E C; Huang, H X; Huang, J; Huang, X T; Huber, P; Huo, W; Hussain, G; Hylen, J; Irwin, G M; Isvan, Z; Jaffe, D E; Jaffke, P; James, C; Jen, K L; Jensen, D; Jetter, S; Ji, X L; Ji, X P; Jiao, J B; Johnson, R A; de Jong, J K; Joshi, J; Kafka, T; Kang, L; Kasahara, S M S; Kettell, S H; Kohn, S; Koizumi, G; Kordosky, M; Kramer, M; Kreymer, A; Kwan, K K; Kwok, M W; Kwok, T; Lang, K; Langford, T J; Lau, K; Lebanowski, L; Lee, J; Lee, J H C; Lei, R T; Leitner, R; Leung, J K C; Li, C; Li, D J; Li, F; Li, G S; Li, Q J; Li, S; Li, S C; Li, W D; Li, X N; Li, Y F; Li, Z B; Liang, H; Lin, C J; Lin, G L; Lin, S; Lin, S K; Lin, Y-C; Ling, J J; Link, J M; Litchfield, P J; Littenberg, L; Littlejohn, B R; Liu, D W; Liu, J C; Liu, J L; Loh, C W; Lu, C; Lu, H Q; Lu, J S; Lucas, P; Luk, K B; Lv, Z; Ma, Q M; Ma, X B; Ma, X Y; Ma, Y Q; Malyshkin, Y; Mann, W A; Marshak, M L; Martinez Caicedo, D A; 
Mayer, N; McDonald, K T; McGivern, C; McKeown, R D; Medeiros, M M; Mehdiyev, R; Meier, J R; Messier, M D; Miller, W H; Mishra, S R; Mitchell, I; Mooney, M; Moore, C D; Mualem, L; Musser, J; Nakajima, Y; Naples, D; Napolitano, J; Naumov, D; Naumova, E; Nelson, J K; Newman, H B; Ngai, H Y; Nichol, R J; Ning, Z; Nowak, J A; O'Connor, J; Ochoa-Ricoux, J P; Olshevskiy, A; Orchanian, M; Pahlka, R B; Paley, J; Pan, H-R; Park, J; Patterson, R B; Patton, S; Pawloski, G; Pec, V; Peng, J C; Perch, A; Pfützner, M M; Phan, D D; Phan-Budd, S; Pinsky, L; Plunkett, R K; Poonthottathil, N; Pun, C S J; Qi, F Z; Qi, M; Qian, X; Qiu, X; Radovic, A; Raper, N; Rebel, B; Ren, J; Rosenfeld, C; Rosero, R; Roskovec, B; Ruan, X C; Rubin, H A; Sail, P; Sanchez, M C; Schneps, J; Schreckenberger, A; Schreiner, P; Sharma, R; Moed Sher, S; Sousa, A; Steiner, H; Sun, G X; Sun, J L; Tagg, N; Talaga, R L; Tang, W; Taychenachev, D; Thomas, J; Thomson, M A; Tian, X; Timmons, A; Todd, J; Tognini, S C; Toner, R; Torretta, D; Treskov, K; Tsang, K V; Tull, C E; Tzanakos, G; Urheim, J; Vahle, P; Viaux, N; Viren, B; Vorobel, V; Wang, C H; Wang, M; Wang, N Y; Wang, R G; Wang, W; Wang, X; Wang, Y F; Wang, Z; Wang, Z M; Webb, R C; Weber, A; Wei, H Y; Wen, L J; Whisnant, K; White, C; Whitehead, L; Whitehead, L H; Wise, T; Wojcicki, S G; Wong, H L H; Wong, S C F; Worcester, E; Wu, C-H; Wu, Q; Wu, W J; Xia, D M; Xia, J K; Xing, Z Z; Xu, J L; Xu, J Y; Xu, Y; Xue, T; Yang, C G; Yang, H; Yang, L; Yang, M S; Yang, M T; Ye, M; Ye, Z; Yeh, M; Young, B L; Yu, Z Y; Zeng, S; Zhan, L; Zhang, C; Zhang, H H; Zhang, J W; Zhang, Q M; Zhang, X T; Zhang, Y M; Zhang, Y X; Zhang, Z J; Zhang, Z P; Zhang, Z Y; Zhao, J; Zhao, Q W; Zhao, Y B; Zhong, W L; Zhou, L; Zhou, N; Zhuang, H L; Zou, J H

    2016-10-07

    Searches for a light sterile neutrino have been performed independently by the MINOS and the Daya Bay experiments using the muon (anti)neutrino and electron antineutrino disappearance channels, respectively. In this Letter, results from both experiments are combined with those from the Bugey-3 reactor neutrino experiment to constrain oscillations into light sterile neutrinos. The three experiments are sensitive to complementary regions of parameter space, enabling the combined analysis to probe regions allowed by the Liquid Scintillator Neutrino Detector (LSND) and MiniBooNE experiments in a minimally extended four-neutrino flavor framework. Stringent limits on sin^{2}2θ_{μe} are set over 6 orders of magnitude in the sterile mass-squared splitting Δm_{41}^{2}. The sterile-neutrino mixing phase space allowed by the LSND and MiniBooNE experiments is excluded for Δm_{41}^{2}<0.8  eV^{2} at 95%  CL_{s}.
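    In the minimal 3+1 framework referred to above, the short-baseline appearance probability driven by the effective amplitude sin²2θ_μe takes the standard two-flavor vacuum form, which is why two disappearance searches can jointly constrain it. A sketch of that formula; the baseline, energy, and mixing value below are illustrative only, not the experiments' parameters.

```python
import math

def p_mue(sin2_2theta_mue, dm2_41, L_km, E_GeV):
    """3+1 short-baseline appearance probability (vacuum approximation):
    P(nu_mu -> nu_e) = sin^2(2 theta_mue) * sin^2(1.27 * dm^2 * L / E),
    with dm^2 in eV^2, L in km, and E in GeV."""
    return sin2_2theta_mue * math.sin(1.27 * dm2_41 * L_km / E_GeV) ** 2

# Illustrative inputs only: a mixing amplitude near the quoted bound,
# evaluated at a reactor-like baseline and energy.
p = p_mue(1e-3, 0.8, L_km=0.5, E_GeV=0.004)
print(f"P(nu_mu -> nu_e) = {p:.2e}")
```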

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
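    The objects in this record can be made concrete with a toy finite instance. The sketch below builds an invented 2-state, 2-action discounted MDP with a cost budget, evaluates each deterministic stationary policy exactly by solving v = r + γ v(next), and keeps the best feasible one. The paper's linear-programming formulation over occupancy measures also covers randomized policies (which can do strictly better) and general Borel spaces; this only illustrates constrained policy evaluation.

```python
# Toy constrained discounted MDP: 2 states, 2 actions, all numbers invented.
GAMMA = 0.9
# P[a][s] = next state, R[a][s] = reward, C[a][s] = cost
P = {0: [0, 1], 1: [1, 0]}
R = {0: [1.0, 0.2], 1: [0.6, 0.4]}
C = {0: [1.0, 0.0], 1: [0.2, 0.1]}
BUDGET = 5.0  # bound on expected discounted cost from state 0

def evaluate(policy, vals):
    # Exact policy evaluation on the 2-state chain: v = r + gamma*v[next].
    r = [vals[policy[s]][s] for s in (0, 1)]
    n = [P[policy[s]][s] for s in (0, 1)]
    if n == [0, 1]:                      # both states self-loop
        return [r[0] / (1 - GAMMA), r[1] / (1 - GAMMA)]
    if n == [1, 0]:                      # states swap every step
        v0 = (r[0] + GAMMA * r[1]) / (1 - GAMMA ** 2)
        return [v0, r[1] + GAMMA * v0]
    if n == [1, 1]:                      # both move to state 1
        v1 = r[1] / (1 - GAMMA)
        return [r[0] + GAMMA * v1, v1]
    v0 = r[0] / (1 - GAMMA)              # both move to state 0
    return [v0, r[1] + GAMMA * v0]

best = None
for a0 in (0, 1):
    for a1 in (0, 1):
        pol = (a0, a1)
        value = evaluate(pol, R)[0]
        cost = evaluate(pol, C)[0]
        if cost <= BUDGET and (best is None or value > best[1]):
            best = (pol, value, cost)
print("best feasible deterministic policy:", best)
```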

  20. The combined effect of wet granulation process parameters and dried granule moisture content on tablet quality attributes.

    PubMed

    Gabbott, Ian P; Al Husban, Farhan; Reynolds, Gavin K

    2016-09-01

    A pharmaceutical compound was used to study the effect of batch wet granulation process parameters in combination with the residual moisture content remaining after drying on granule and tablet quality attributes. The effect of three batch wet granulation process parameters was evaluated using a multivariate experimental design, with a novel constrained design space. Batches were characterised for moisture content, granule density, crushing strength, porosity, disintegration time and dissolution. Mechanisms of the effect of the process parameters on the granule and tablet quality attributes are proposed. Water quantity added during granulation showed a significant effect on granule density and tablet dissolution rate. Mixing time showed a significant effect on tablet crushing strength, and mixing speed showed a significant effect on the distribution of tablet crushing strengths obtained. The residual moisture content remaining after granule drying showed a significant effect on tablet crushing strength. The effect of moisture on tablet tensile strength has been reported before, but not in combination with granulation parameters and granule properties, and the impact on tablet dissolution was not assessed. Correlations between the energy input during granulation, the density of granules produced, and the quality attributes of the final tablets were also identified. Understanding the impact of the granulation and drying process parameters on granule and tablet properties provides a basis for process optimisation and scaling. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Using a constrained formulation based on probability summation to fit receiver operating characteristic (ROC) curves

    NASA Astrophysics Data System (ADS)

    Swensson, Richard G.; King, Jill L.; Good, Walter F.; Gur, David

    2000-04-01

    A constrained ROC formulation from probability summation is proposed for measuring observer performance in detecting abnormal findings on medical images. This assumes the observer's detection or rating decision on each image is determined by a latent variable that characterizes the specific finding (type and location) considered most likely to be a target abnormality. For positive cases, this 'maximum-suspicion' variable is assumed to be either the value for the actual target or for the most suspicious non-target finding, whichever is greater (more suspicious). Unlike the usual ROC formulation, this constrained formulation guarantees a 'well-behaved' ROC curve that always equals or exceeds chance-level decisions and cannot exhibit an upward 'hook.' Its estimated parameters specify the accuracy for separating positive from negative cases, and they also predict accuracy in locating or identifying the actual abnormal findings. The present maximum-likelihood procedure (which runs on a PC with Windows 95 or NT) fits this constrained formulation to rating-ROC data using normal distributions with two free parameters. Fits of the conventional and constrained ROC formulations are compared for continuous and discrete-scale ratings of chest films in a variety of detection problems, both for localized lesions (nodules, rib fractures) and for diffuse abnormalities (interstitial disease, infiltrates or pneumothorax). The two fitted ROC curves are nearly identical unless the conventional ROC has an ill-behaved 'hook' that dips below the constrained ROC.
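    The 'well-behaved' guarantee follows directly from the max construction: if the positive-case variable is the maximum of the target value and a noise value, its CDF is the product of the two CDFs, so TPF ≥ FPF at every threshold. A sketch with normal distributions (the μ and σ below are illustrative, not fitted values from the paper):

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Probability-summation ("maximum-suspicion") ROC: on positive cases the
# decision variable is max(target, most suspicious non-target finding).
# With unit-normal noise and a normal target distribution, the
# positive-case CDF is the product of the two CDFs, so TPF >= FPF and
# the curve can never hook below the chance line.
MU, SIGMA = 1.5, 1.2  # illustrative target distribution parameters

def roc_point(c):
    fpf = 1.0 - Phi(c)                          # negative (noise-only) cases
    tpf = 1.0 - Phi(c) * Phi((c - MU) / SIGMA)  # max of noise and target
    return fpf, tpf

points = [roc_point(x / 10.0) for x in range(-30, 31)]
print(points[30])  # operating point at threshold c = 0
```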

  2. Modeling Coronal Mass Ejections with EUHFORIA: A Parameter Study of the Gibson-Low Flux Rope Model using Multi-Viewpoint Observations

    NASA Astrophysics Data System (ADS)

    Verbeke, C.; Asvestari, E.; Scolini, C.; Pomoell, J.; Poedts, S.; Kilpua, E.

    2017-12-01

    Coronal Mass Ejections (CMEs) are among the main drivers of coronal and interplanetary dynamics. Understanding their origin and evolution from the Sun to the Earth is crucial in order to determine their impact on Earth and society. One of the key parameters that determine the geo-effectiveness of a coronal mass ejection is its internal magnetic configuration. We present a detailed parameter study of the Gibson-Low flux rope model. We focus on changes in the input parameters and how these changes affect the characteristics of the CME at Earth. Recently, the Gibson-Low flux rope model was implemented into the inner heliosphere model EUHFORIA, a magnetohydrodynamic forecasting model of large-scale dynamics from 0.1 AU up to 2 AU. Coronagraph observations can be used to constrain the kinematics and morphology of the flux rope. One of the key parameters, the magnetic field, is difficult to determine directly from observations. In this work, we approach the problem by conducting a parameter study in which flux ropes with varying magnetic configurations are simulated. We then use the obtained dataset to look for signatures in imaging and in-situ observations in order to find an empirical way of constraining the parameters related to the magnetic field of the flux rope. In particular, we focus on events observed by at least two spacecraft (STEREO + L1) in order to discuss the merits of using observations from multiple viewpoints in constraining the parameters.

  3. Configurational entropy as a tool to select a physical thick brane model

    NASA Astrophysics Data System (ADS)

    Chinaglia, M.; Cruz, W. T.; Correa, R. A. C.; de Paula, W.; Moraes, P. H. R. S.

    2018-04-01

    We analyze braneworld scenarios via a configurational entropy (CE) formalism. Braneworld scenarios have drawn attention mainly because they can explain the hierarchy problem and unify the fundamental forces through a symmetry-breaking procedure. Those scenarios localize matter on a (3 + 1) hypersurface, the brane, which is embedded in a higher-dimensional space, the bulk. Novel analytical braneworld models, in which the warp factor depends on a free parameter n, were recently proposed in the literature. In this article we provide a way to constrain this parameter through the relation between the information and dynamics of a system described by the CE. We demonstrate that in some cases the CE is an important tool for identifying the most probable physical system among all the possibilities. In addition, we show that the highest CE is correlated with a tachyonic sector of the configuration, where the solutions for the corresponding model are dynamically unstable.
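    The configurational-entropy idea can be illustrated numerically: take a localized field profile, compute its power spectrum, normalize the spectrum to a probability distribution over modes (the modal fraction), and evaluate a Shannon-type entropy. The sketch below is a simplified discrete variant of that recipe, with an invented sech profile, not the continuum definition or the brane solutions used in the paper.

```python
import math, cmath

N = 128

def profile(width):
    # Localized toy field configuration (invented sech profile).
    return [1.0 / math.cosh((i - N // 2) / width) for i in range(N)]

def ce(field):
    # Naive O(N^2) DFT, then a Shannon entropy of the normalized
    # power spectrum (a discrete stand-in for the modal fraction).
    spec = []
    for k in range(N):
        F = sum(field[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N))
        spec.append(abs(F) ** 2)
    total = sum(spec)
    return -sum(p / total * math.log(p / total) for p in spec if p > 0)

# Broader spatial profiles concentrate power in fewer Fourier modes,
# hence lower configurational entropy.
print(ce(profile(2.0)), ce(profile(10.0)))
```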

  4. Origin and propagation of galactic cosmic rays

    NASA Technical Reports Server (NTRS)

    Cesarsky, Catherine J.; Ormes, Jonathan F.

    1987-01-01

    The study of systematic trends in elemental abundances is important for unfolding the nuclear and/or atomic effects that should govern the shaping of source abundances and for constraining the parameters of cosmic ray acceleration models. In principle, much can be learned about the large-scale distributions of cosmic rays in the galaxy from all-sky gamma ray surveys such as COS-B and SAS-2. Because of the uncertainties in the matter distribution, which stem from the inability to measure the abundance of molecular hydrogen, the results are somewhat controversial. The leaky-box model accounts for a surprising amount of the data on heavy nuclei. However, a growing body of data indicates that this simple picture may have to be abandoned in favor of more complex models which contain additional parameters. Future experiments on Spacelab and the space station will hopefully measure the spectra of individual nuclei at high energy. Antiprotons must be studied in the background-free environment above the atmosphere with much higher reliability and precision to obtain spectral information.

  5. Preliminary gravity inversion model of Frenchman Flat Basin, Nevada Test Site, Nevada

    USGS Publications Warehouse

    Phelps, Geoffrey A.; Graham, Scott E.

    2002-01-01

    The depth of the basin beneath Frenchman Flat is estimated using a gravity inversion method. Gamma-gamma density logs from two wells in Frenchman Flat constrained the density profiles used to create the gravity inversion model. Three initial models were considered using data from one well, then a final model is proposed based on new information from the second well. The preferred model indicates that a northeast-trending oval-shaped basin underlies Frenchman Flat at least 2,100 m deep, with a maximum depth of 2,400 m at its northeast end. No major horst and graben structures are predicted. Sensitivity analysis of the model indicates that each parameter contributes the same magnitude change to the model, up to 30 meters change in depth for a 1% change in density, but some parameters affect a broader area of the basin. The horizontal resolution of the model was determined by examining the spacing between data stations, and was set to 500 square meters.
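    The quoted sensitivity ("up to 30 meters change in depth for a 1% change in density") can be sanity-checked with an infinite Bouguer slab, for which the anomaly is g = 2πG·|Δρ|·t and the inferred thickness therefore scales as 1/|Δρ|. The density contrast below is an assumption and the slab is a crude stand-in for the actual 3-D inversion geometry, but it reproduces the right order of magnitude.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
drho = 350.0         # assumed basin-fill density contrast, kg/m^3
t = 2100.0           # basin depth from the model, m

# For a fixed observed anomaly, a +1% density contrast shrinks the
# inferred slab thickness by about 1%: here roughly 21 m on 2,100 m,
# the same order as the ~30 m reported for the full model.
anomaly = 2.0 * math.pi * G * drho * t                # slab anomaly, m/s^2
t_1pct = anomaly / (2.0 * math.pi * G * drho * 1.01)  # depth at +1% density
print(f"depth change for a 1% density change: {t - t_1pct:.1f} m")
```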

  6. Optical Design and Sensitivity of the Probe of Inflation and Cosmic Origins

    NASA Astrophysics Data System (ADS)

    Young, Karl S.; Hanany, Shaul; Wen, Qi

    2018-01-01

    The Probe of Inflation and Cosmic Origins (PICO) is a NASA probe-class mission concept being studied in preparation for the 2020 Astronomy and Astrophysics Decadal Survey. PICO will detect, or place new limits on, the energy scale of inflation and the physics of quantum gravity, determine the effective number of neutrino species and constrain the sum of neutrino masses, measure the optical depth to reionization to the cosmic variance limit, and shed new light on the role of magnetic fields in galactic evolution and star formation by making polarimetric maps of the full mm-wave sky with sensitivity 70 times higher than that of the Planck space mission. The maps made by PICO will provide a catalog of thousands of new protoclusters and infrared galaxies, as well as tens of thousands of galaxy clusters, which will further constrain cosmological parameters. PICO will have a 1.4 meter aperture telescope with 21 bands from 20 to 800 GHz. We show the current PICO optics and discuss trade-offs between types of optical systems, limits imposed by scan strategies, and maximizing the number of detectors on the sky. We present the instrument's focal plane and the expected mission sensitivity.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Yidong; Chen, Xuelei; Wang, Xin

    The Tianlai experiment is dedicated to the observation of large-scale structures (LSS) by the 21 cm intensity mapping technique. In this paper, we make forecasts concerning its ability to observe or constrain the dark energy parameters and the primordial non-Gaussianity. From the LSS data, one can use the baryon acoustic oscillation (BAO) and growth rate derived from the redshift space distortion (RSD) to measure the dark energy density and equation of state. The primordial non-Gaussianity can be constrained either by looking for scale-dependent bias in the power spectrum, or by using the bispectrum. Here, we consider three cases: the Tianlai cylinder array pathfinder that is currently being built, an upgrade of the pathfinder array with more receiver units, and the full-scale Tianlai cylinder array. Using the full-scale Tianlai experiment, we expect σ(w_0) ∼ 0.082 and σ(w_a) ∼ 0.21 from the BAO and RSD measurements, σ(f_NL^local) ∼ 14 from the power spectrum measurements with scale-dependent bias, and σ(f_NL^local) ∼ 22 and σ(f_NL^equil) ∼ 157 from the bispectrum measurements.

  8. Simulation of Constrained Musculoskeletal Systems in Task Space.

    PubMed

    Stanev, Dimitar; Moustakas, Konstantinos

    2018-02-01

    This paper proposes an operational task space formalization of constrained musculoskeletal systems, motivated by its promising results in the field of robotics. The change of representation requires different algorithms for solving the inverse and forward dynamics simulation in the task space domain. We propose an extension to the direct marker control and an adaptation of the computed muscle control algorithms for solving the inverse kinematics and muscle redundancy problems, respectively. Experimental evaluation demonstrates that this framework is not only successful in dealing with the inverse dynamics problem, but also provides an intuitive way of studying and designing simulations, facilitating assessment prior to any experimental data collection. The incorporation of constraints in the derivation unveils an important extension of this framework toward addressing systems that use absolute coordinates and topologies that contain closed kinematic chains. Task space projection reveals a more intuitive encoding of the motion planning problem, allows for better correspondence between observed and estimated variables, provides the means to effectively study the role of kinematic redundancy, and most importantly, offers an abstract point of view and control, which can be advantageous toward further integration with high level models of the precommand level. Task-based approaches could be adopted in the design of simulation related to the study of constrained musculoskeletal systems.
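    One small piece of the task-space machinery can be shown concretely: mapping a desired end-effector force into joint torques via τ = Jᵀf for a planar 2-link arm. The link lengths and joint angles below are arbitrary illustrative values; the paper's framework additionally handles dynamics, constraints, closed chains, and muscle redundancy, none of which appear in this sketch.

```python
import math

L1, L2 = 0.3, 0.25                        # link lengths (m), invented
q1, q2 = math.radians(30), math.radians(45)  # joint angles, invented

def jacobian(q1, q2):
    # Geometric Jacobian of the planar 2R arm's end-effector position.
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [ L1 * c1 + L2 * c12,  L2 * c12]]

def joint_torques(force_xy):
    # Static task-to-joint mapping: tau = J^T f.
    J = jacobian(q1, q2)
    return [J[0][0] * force_xy[0] + J[1][0] * force_xy[1],
            J[0][1] * force_xy[0] + J[1][1] * force_xy[1]]

tau = joint_torques((0.0, -9.81))  # exert 9.81 N downward at the hand
print(tau)
```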

  9. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient-constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was applied in order to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.
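    The key device, a generalized Newton derivative for the pointwise max function, is easiest to see in one dimension. The sketch below solves the invented nonsmooth equation F(x) = x + max(0, x) − 1 = 0 by semismooth Newton, choosing the slope 1 or 2 depending on the sign of x; this is a toy, not the paper's function-space algorithm.

```python
def F(x):
    # Nonsmooth residual involving the pointwise max function.
    return x + max(0.0, x) - 1.0

def dF(x):
    # An element of the generalized (Newton) derivative of F:
    # max(0, x) contributes slope 1 for x > 0 and 0 otherwise.
    return 2.0 if x > 0 else 1.0

x = -3.0  # start on the "wrong" side of the kink
for _ in range(20):
    x = x - F(x) / dF(x)   # semismooth Newton step
    if abs(F(x)) < 1e-12:
        break
print(x)  # root of the nonsmooth equation
```

Despite the kink at x = 0, the iteration identifies the active branch after one step and then converges immediately, which is the local superlinear behavior the semismooth Newton theory guarantees.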

  10. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach, as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, and thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions, were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.
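    The coupling idea can be sketched with the SRTM forward model: when all regions share one reference clearance (so k2 = R1 · kr2), each region keeps only two free parameters (R1, BP) plus the single coupled clearance. Below is a discrete-convolution forward simulation of two regional time-activity curves sharing one clearance value; the reference input and all rate constants are invented illustrative values, and no fitting is performed.

```python
import math

DT, T = 0.05, 60.0
times = [i * DT for i in range(int(T / DT))]
cr = [t * math.exp(-t / 10.0) for t in times]   # toy reference-region TAC

def srtm_tac(r1, bp, k2r):
    """SRTM forward model: C_T = R1*C_R + (k2 - R1*k2a) * (C_R ⊗ e^{-k2a t}),
    with k2 = R1*k2r (the coupling) and k2a = k2 / (1 + BP)."""
    k2 = r1 * k2r
    k2a = k2 / (1.0 + bp)
    decay = math.exp(-k2a * DT)
    conv, tac = 0.0, []
    for i in range(len(times)):
        conv = conv * decay + cr[i] * DT        # running convolution
        tac.append(r1 * cr[i] + (k2 - r1 * k2a) * conv)
    return tac

K2R = 0.15  # the single coupled reference clearance, shared by regions
high = srtm_tac(r1=1.0, bp=2.0, k2r=K2R)   # high-binding region
low = srtm_tac(r1=1.0, bp=0.5, k2r=K2R)    # low-binding region
print(max(high), max(low))
```

Fitting many such regions jointly with one shared clearance is what reduces the parameter count (from 3N to 2N + 1) and hence the variance of the estimates.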

  11. A Stochastic Fractional Dynamics Model of Rainfall Statistics

    NASA Astrophysics Data System (ADS)

    Kundu, Prasun; Travis, James

    2013-04-01

    Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed, based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second-moment statistics of rain at any prescribed space-time averaging scale. The model is designed to faithfully reflect this scale dependence and is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted, together with other model parameters, to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain, but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. The main restriction is the assumption that the statistics of the precipitation field are spatially homogeneous, isotropic, and stationary in time. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second-moment statistics of the radar data. The model predictions are then found to fit the second-moment statistics of the gauge data reasonably well without any further adjustment. Some data sets, containing periods of non-stationary behavior with occasional anomalously correlated rain events, present a challenge for the model.
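    The central point, that second-moment statistics depend on the averaging scale, can be illustrated with a much simpler surrogate. Below, a correlated AR(1) series stands in for the point rain rate (it is not the paper's fractional model), and the variance of block averages is shown to fall as the averaging window grows; a usable model must reproduce exactly this kind of dependence to unify gauge (point) and radar (area-averaged) statistics.

```python
import random, statistics

random.seed(3)

# AR(1) surrogate for a correlated point process (phi is invented).
phi, n = 0.9, 200_000
x, series = 0.0, []
for _ in range(n):
    x = phi * x + random.gauss(0.0, 1.0)
    series.append(x)

def block_var(window):
    # Variance of non-overlapping block averages at a given scale.
    means = [sum(series[i:i + window]) / window
             for i in range(0, n - window + 1, window)]
    return statistics.pvariance(means)

# Second moments shrink as the averaging window grows.
print([round(block_var(w), 3) for w in (1, 10, 100)])
```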

  12. Constraints on Lunar Structure from Combined Geochemical, Mineralogical, and Geophysical modeling

    NASA Astrophysics Data System (ADS)

    Bremner, P. M.; Fuqua, H.; Mallik, A.; Diamond, M. R.; Lock, S. J.; Panovska, S.; Nishikawa, Y.; Jiménez-Pérez, H.; Shahar, A.; Panero, W. R.; Lognonne, P. H.; Faul, U.

    2016-12-01

    The internal physical and geochemical structure of the Moon is still poorly constrained. Here, we take a multidisciplinary approach to attempt to constrain key parameters of the lunar structure. We use an ensemble of 1-D lunar compositional models with chemically and mineralogically distinct layers, and forward calculated physical parameters, in order to constrain the internal structure. We consider both a chemically well-mixed model with uniform bulk composition, and a chemically stratified model that includes a mantle with preserved mineralogical stratigraphy from magma ocean crystallization. Additionally, we use four different lunar temperature profiles that span the range of proposed selenotherms, giving eight separate sets of lunar models. In each set, we employed a grid search and a differential evolution genetic search algorithm to extensively explore model space, where the thickness of individual compositional layers was varied. In total, we forward calculated over one hundred thousand lunar models. It has been proposed that a dense, partially molten layer exists at the core-mantle boundary (CMB) to explain the lack of observed far-side deep moonquakes, the observation of reflected seismic phases from deep moonquakes, and enhanced tidal dissipation. However, subsequent models have proposed that these observables can be explained in other ways. In this study, using a variety of modeling techniques, we find that such a layer may have been formed by overturn of an ilmenite-rich layer, formed after the crystallization of a magma ocean. We therefore include a denser layer (modeled as an ilmenite-rich layer) at both the top and bottom of the lunar mantle in our models. For each set of models, we find models that explain the observed lunar mass and moment of inertia. We find that only a narrow range of core radii is consistent with the mass and moment of inertia constraints. Furthermore, in the chemically well-mixed models, we find that a dense layer is required in the upper mantle to meet the moment of inertia requirement. In no set of models is the mass of the lower dense layer well constrained. For the models that fit the observed mass and moment of inertia, we calculated 1-D seismic velocity profiles, the majority of which compare well with those determined by inverting the Apollo seismic data (Garcia et al., 2011; Weber et al., 2011).

  13. Aircraft wake vortex measurements at Denver International Airport

    DOT National Transportation Integrated Search

    2004-05-10

    Airport capacity is constrained, in part, by spacing requirements associated with the wake vortex hazard. NASA's Wake Vortex Avoidance Project has a goal to establish the feasibility of reducing this spacing while maintaining safety. Passive acoustic...

  14. Revisiting CMB constraints on warm inflation

    NASA Astrophysics Data System (ADS)

    Arya, Richa; Dasgupta, Arnab; Goswami, Gaurav; Prasad, Jayanti; Rangarajan, Raghavan

    2018-02-01

    We revisit the constraints that Planck 2015 temperature, polarization, and lensing data impose on the parameters of warm inflation. To this end, we study warm inflation driven by a single scalar field with a quartic self-interaction potential in the weak dissipative regime. We analyse the effect of the parameters of warm inflation, namely the inflaton self-coupling λ and the inflaton dissipation parameter Q_P, on the CMB angular power spectrum. We constrain λ and Q_P for 50 and 60 e-foldings with the full Planck 2015 data (TT, TE, EE + lowP and lensing) by performing a Markov-Chain Monte Carlo analysis using the publicly available code CosmoMC, and obtain the joint as well as marginalized distributions of those parameters. We present our results in the form of means and 68% confidence limits on the parameters and also highlight the degeneracy between λ and Q_P in our analysis. From this analysis we show how warm inflation parameters can be well constrained using the Planck 2015 data.
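
    The analysis above samples a two-parameter posterior with a Markov-Chain Monte Carlo code. A minimal random-walk Metropolis sketch of the same idea, on a toy Gaussian likelihood rather than the CMB likelihood (the target values 1.0 and 0.5, the widths, and the step sizes are all illustrative, and this is not CosmoMC):

```python
import math
import random

random.seed(1)

def log_likelihood(lam, q):
    # Toy Gaussian likelihood standing in for the CMB likelihood;
    # the "true" values and widths are illustrative only.
    return -0.5 * (((lam - 1.0) / 0.1) ** 2 + ((q - 0.5) / 0.05) ** 2)

def metropolis(n_steps, step=(0.05, 0.02), start=(0.8, 0.4)):
    """Random-walk Metropolis sampler over the 2-D parameter space."""
    lam, q = start
    logp = log_likelihood(lam, q)
    chain = []
    for _ in range(n_steps):
        lam_new = lam + random.gauss(0.0, step[0])
        q_new = q + random.gauss(0.0, step[1])
        logp_new = log_likelihood(lam_new, q_new)
        # Accept with probability min(1, p_new / p_old).
        if math.log(random.random()) < logp_new - logp:
            lam, q, logp = lam_new, q_new, logp_new
        chain.append((lam, q))
    return chain

chain = metropolis(20000)
burn = chain[5000:]  # discard burn-in
mean_lam = sum(s[0] for s in burn) / len(burn)
mean_q = sum(s[1] for s in burn) / len(burn)
print(mean_lam, mean_q)  # posterior means near 1.0 and 0.5
```

    Marginalized means and confidence limits then come from the per-parameter histograms of the chain.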

  15. Using internal discharge data in a distributed conceptual model to reduce uncertainty in streamflow simulations

    NASA Astrophysics Data System (ADS)

    Guerrero, J.; Halldin, S.; Xu, C.; Lundin, L.

    2011-12-01

    Distributed hydrological models are important tools in water management as they account for the spatial variability of hydrological data, as well as being able to produce spatially distributed outputs. They can directly incorporate and assess potential changes in the characteristics of our basins. A recognized problem for models in general is equifinality, which is only exacerbated for distributed models, which tend to have a large number of parameters. We need to deal with the fundamentally ill-posed nature of the problem that such models force us to face, i.e. a large number of parameters and very few variables that can be used to constrain them, often only the catchment discharge. There is a growing but still limited literature showing how the internal states of a distributed model can be used to calibrate/validate its predictions. In this paper, a distributed version of WASMOD, a conceptual rainfall-runoff model with only three parameters, combined with a routing algorithm based on the high-resolution HydroSHEDS data, was used to simulate the discharge in the Paso La Ceiba basin in Honduras. The parameter space was explored using Monte-Carlo simulations, and the region of space containing the parameter sets that were considered behavioral according to two different criteria was delimited using the geometric concept of alpha-shapes. The discharge data from five internal sub-basins were used to aid in the calibration of the model and to answer the following questions: Can this information improve the simulations at the outlet of the catchment, or decrease their uncertainty? Also, after reducing the number of model parameters needing calibration through sensitivity analysis: Is it possible to relate them to basin characteristics? The analysis revealed that in most cases the internal discharge data can be used to reduce the uncertainty in the discharge at the outlet, albeit with little improvement in the overall simulation results.
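
    The Monte-Carlo exploration of parameter space and the retention of "behavioral" parameter sets can be sketched in miniature. The toy two-parameter linear model, the Nash-Sutcliffe efficiency threshold of 0.9, and all numbers below are illustrative stand-ins for the three-parameter WASMOD and the criteria used in the study:

```python
import random

random.seed(0)

# Synthetic "observed" discharge from a toy two-parameter model
# (a stand-in for the three-parameter WASMOD used in the study).
def model(a, b, rain):
    return [a * r + b for r in rain]

rain = [1.0, 3.0, 2.0, 5.0, 4.0, 2.5]
observed = model(0.6, 0.2, rain)

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Monte-Carlo sampling of the parameter space; keep "behavioral" sets.
behavioral = []
for _ in range(10000):
    a = random.uniform(0.0, 1.0)
    b = random.uniform(0.0, 1.0)
    if nse(model(a, b, rain), observed) > 0.9:
        behavioral.append((a, b))

print(len(behavioral) > 1)  # many parameter sets survive: equifinality
```

    The surviving cloud of parameter sets is what the study then delimits geometrically with alpha-shapes.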

  16. An MCMC determination of the primordial helium abundance

    NASA Astrophysics Data System (ADS)

    Aver, Erik; Olive, Keith A.; Skillman, Evan D.

    2012-04-01

    Spectroscopic observations of the chemical abundances in metal-poor H II regions provide an independent method for estimating the primordial helium abundance. H II regions are described by several physical parameters such as electron density, electron temperature, and reddening, in addition to y, the ratio of helium to hydrogen. It had been customary to estimate or determine these parameters self-consistently in order to calculate y. Frequentist analyses of the parameter space have been shown to be successful in these parameter determinations, and Markov Chain Monte Carlo (MCMC) techniques have proven to be very efficient in sampling this parameter space. Nevertheless, accurate determination of the primordial helium abundance from observations of H II regions is constrained by both systematic and statistical uncertainties. In an attempt to better reduce the latter, and continue to better characterize the former, we apply MCMC methods to the large dataset recently compiled by Izotov, Thuan, & Stasińska (2007). To improve the reliability of the determination, a high-quality dataset is needed. In pursuit of this, a variety of cuts are explored. The efficacy of the He I λ4026 emission line as a constraint on the solutions is first examined, revealing the introduction of systematic bias through its absence. As a clear measure of the quality of the physical solution, a χ2 analysis proves instrumental in the selection of data compatible with the theoretical model. Nearly two-thirds of the observations fall outside a standard 95% confidence level cut, which highlights the care necessary in selecting systems and warrants further investigation into potential deficiencies of the model or data. In addition, the method also allows us to exclude systems for which parameter estimations are statistical outliers. As a result, the final selected dataset gains in reliability and exhibits improved consistency. Regression to zero metallicity yields Yp = 0.2534 ± 0.0083, in broad agreement with the WMAP result. The inclusion of more observations shows promise for further reducing the uncertainty, but more high-quality spectra are required.
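
    A 95% confidence-level χ2 cut of the kind described might look like the following sketch. The region names and χ2 values are invented, and the critical value 5.991 assumes 2 degrees of freedom purely for illustration (the study's actual degrees of freedom differ):

```python
# 95% cut on chi-squared goodness-of-fit values. The critical value
# 5.991 is the standard chi^2 95% quantile for 2 degrees of freedom;
# the dof choice and the sample values below are illustrative only.
CHI2_CRIT_95_DOF2 = 5.991

observations = {
    "region_a": 1.2,
    "region_b": 7.4,
    "region_c": 4.9,
    "region_d": 12.3,
}

selected = {name: chi2 for name, chi2 in observations.items()
            if chi2 <= CHI2_CRIT_95_DOF2}
rejected = set(observations) - set(selected)

print(sorted(selected))  # ['region_a', 'region_c']
print(sorted(rejected))  # ['region_b', 'region_d']
```

    Only the selected systems would then enter the regression to zero metallicity.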

  17. The Magnetar Model for Type I Superluminous Supernovae. I. Bayesian Analysis of the Full Multicolor Light-curve Sample with MOSFiT

    NASA Astrophysics Data System (ADS)

    Nicholl, Matt; Guillochon, James; Berger, Edo

    2017-11-01

    We use the new Modular Open Source Fitter for Transients to model 38 hydrogen-poor superluminous supernovae (SLSNe). We fit their multicolor light curves with a magnetar spin-down model and present posterior distributions of magnetar and ejecta parameters. The color evolution can be fit with a simple absorbed blackbody. The medians (1σ ranges) for key parameters are spin period 2.4 ms (1.2-4 ms), magnetic field 0.8 × 10^14 G (0.2-1.8 × 10^14 G), ejecta mass 4.8 M⊙ (2.2-12.9 M⊙), and kinetic energy 3.9 × 10^51 erg (1.9-9.8 × 10^51 erg). This significantly narrows the parameter space compared to our uninformed priors, showing that although the magnetar model is flexible, the parameter space relevant to SLSNe is well constrained by existing data. The requirement that the instantaneous engine power is ~10^44 erg s^-1 at the light-curve peak necessitates either large rotational energy (P < 2 ms), or more commonly that the spin-down and diffusion timescales be well matched. We find no evidence for separate populations of fast- and slow-declining SLSNe, which instead form a continuum in light-curve widths and inferred parameters. Variations in the spectra are explained through differences in spin-down power and photospheric radii at maximum light. We find no significant correlations between model parameters and host galaxy properties. Comparing our posteriors to stellar evolution models, we show that SLSNe require rapidly rotating (fastest 10%) massive stars (≳20 M⊙), which is consistent with their observed rate. High mass, low metallicity, and likely binary interaction all serve to maintain the rapid rotation essential for magnetar formation. By reproducing the full set of light curves, our posteriors can inform photometric searches for SLSNe in future surveys.

  18. How CMB and large-scale structure constrain chameleon interacting dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y., E-mail: boriero@physik.uni-bielefeld.de, E-mail: subinoy@iiap.res.in, E-mail: yvonne.y.wong@unsw.edu.au

    2015-07-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H_0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H_0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  19. The use of multiobjective calibration and regional sensitivity analysis in simulating hyporheic exchange

    USGS Publications Warehouse

    Naranjo, Ramon C.; Niswonger, Richard G.; Stone, Mark; Davis, Clinton; McKay, Alan

    2012-01-01

    We describe an approach for calibrating a two-dimensional (2-D) flow model of hyporheic exchange using observations of temperature and pressure to estimate hydraulic and thermal properties. A longitudinal 2-D heat and flow model was constructed for a riffle-pool sequence to simulate flow paths and flux rates for variable discharge conditions. A uniform random sampling approach was used to examine the solution space and identify optimal values at local and regional scales. We used a regional sensitivity analysis to examine the effects of parameter correlation and nonuniqueness commonly encountered in multidimensional modeling. The results from this study demonstrate the ability to estimate hydraulic and thermal parameters using measurements of temperature and pressure to simulate exchange and flow paths. Examination of the local parameter space provides the potential for refinement of zones that are used to represent sediment heterogeneity within the model. The results indicate vertical hydraulic conductivity was not identifiable solely using pressure observations; however, a distinct minimum was identified using temperature observations. The measured temperature and pressure and estimated vertical hydraulic conductivity values indicate the presence of a discontinuous low-permeability deposit that limits the vertical penetration of seepage beneath the riffle, whereas there is a much greater exchange where the low-permeability deposit is absent. Using both temperature and pressure to constrain the parameter estimation process provides the lowest overall root-mean-square error as compared to using solely temperature or pressure observations. This study demonstrates the benefits of combining continuous temperature and pressure for simulating hyporheic exchange and flow in a riffle-pool sequence. Copyright 2012 by the American Geophysical Union.

  20. CONSTRAINTS ON THE SYNCHROTRON EMISSION MECHANISM IN GAMMA-RAY BURSTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beniamini, Paz; Piran, Tsvi, E-mail: paz.beniamini@mail.huji.ac.il, E-mail: tsvi.piran@mail.huji.ac.il

    2013-05-20

    We reexamine the general synchrotron model for gamma-ray bursts' (GRBs') prompt emission and determine the regime in the parameter phase space in which it is viable. We characterize a typical GRB pulse in terms of its peak energy, peak flux, and duration and use the latest Fermi observations to constrain the high-energy part of the spectrum. We solve for the intrinsic parameters at the emission region and find the possible parameter phase space for synchrotron emission. Our approach is general and does not depend on a specific energy dissipation mechanism. Reasonable synchrotron solutions are found with energy ratios of 10^-4 < ε_B/ε_e < 10, bulk Lorentz factor values of 300 < Γ < 3000, typical electron Lorentz factor values of 3 × 10^3 < γ_e < 10^5, and emission radii of the order of 10^15 cm < R < 10^17 cm. Most remarkable among these are the rather large values of the emission radius and the electron Lorentz factor. We find that soft (with peak energy less than 100 keV) but luminous (isotropic luminosity of 1.5 × 10^53 erg s^-1) pulses are inefficient. This may explain the lack of strong soft bursts. In cases when most of the energy is carried out by the kinetic energy of the flow, such as in the internal shocks, the synchrotron solution requires that only a small fraction of the electrons are accelerated to relativistic velocities by the shocks. We show that future observations of very high energy photons from GRBs by CTA could possibly determine all parameters of the synchrotron model or rule it out altogether.

  1. Kaon BSM B-parameters using improved staggered fermions from N_f = 2 + 1 unquenched QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Benjamin J.

    2016-01-28

    In this paper, we present results for the matrix elements of the additional ΔS = 2 operators that appear in models of physics beyond the Standard Model (BSM), expressed in terms of four BSM B-parameters. Combined with experimental results for ΔM_K and ε_K, these constrain the parameters of BSM models. We use improved staggered fermions, with valence hypercubic blocking transformation (HYP)-smeared quarks and N_f = 2 + 1 flavors of "asqtad" sea quarks. The configurations have been generated by the MILC Collaboration. The matching between lattice and continuum four-fermion operators and bilinears is done perturbatively at one-loop order. We use three lattice spacings for the continuum extrapolation: a ≈ 0.09, 0.06, and 0.045 fm. Valence light-quark masses range down to ≈ m_s^phys/13, while the light sea-quark masses range down to ≈ m_s^phys/20. Compared to our previous published work, we have added four additional lattice ensembles, leading to better controlled extrapolations in the lattice spacing and sea-quark masses. We report final results for two renormalization scales, μ = 2 and 3 GeV, and compare them to those obtained by other collaborations. Agreement is found for two of the four BSM B-parameters (B_2 and B_3^SUSY). The other two (B_4 and B_5) differ significantly from those obtained using regularization independent momentum subtraction (RI-MOM) renormalization as an intermediate scheme, but are in agreement with recent preliminary results obtained by the RBC-UKQCD Collaboration using regularization independent symmetric momentum subtraction (RI-SMOM) intermediate schemes.

  2. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program, written in FORTRAN subroutine form, for the solution of linear and nonlinear constrained and unconstrained function minimization problems is presented. The algorithm is a sequence of unconstrained minimizations, with Newton's method used for each unconstrained function minimization. The use of NEWSUMT and the definitions of all parameters are described.
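
    The sequence-of-unconstrained-minimizations idea can be sketched with a simple exterior quadratic penalty; NEWSUMT itself uses a different (extended) penalty formulation, so this one-dimensional example is only an illustration of the general scheme:

```python
def f(x):
    # Objective: unconstrained minimum at x = 2.
    return (x - 2.0) ** 2

def g(x):
    # Inequality constraint g(x) <= 0, i.e. x <= 1,
    # so the constrained optimum is x = 1.
    return x - 1.0

def penalized(x, r):
    """Exterior quadratic penalty function: f + r * max(0, g)^2."""
    v = max(0.0, g(x))
    return f(x) + r * v * v

def newton_minimize(x, r, iters=50):
    """Newton's method on the penalized function (1-D)."""
    for _ in range(iters):
        v = max(0.0, g(x))
        grad = 2.0 * (x - 2.0) + 2.0 * r * v
        hess = 2.0 + (2.0 * r if v > 0.0 else 0.0)
        x -= grad / hess
    return x

# SUMT: solve a sequence of unconstrained problems with growing penalty.
x = 3.0
for r in [1.0, 10.0, 100.0, 1000.0]:
    x = newton_minimize(x, r)

print(round(x, 2))  # approaches the constrained optimum x = 1
```

    Each pass warm-starts from the previous solution, so the iterates track the penalized minimizer as the penalty weight grows.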

  3. Multipole analysis of redshift-space distortions around cosmic voids

    NASA Astrophysics Data System (ADS)

    Hamaus, Nico; Cousinou, Marie-Claude; Pisani, Alice; Aubert, Marie; Escoffier, Stéphanie; Weller, Jochen

    2017-07-01

    We perform a comprehensive redshift-space distortion analysis based on cosmic voids in the large-scale distribution of galaxies observed with the Sloan Digital Sky Survey. To this end, we measure multipoles of the void-galaxy cross-correlation function and compare them with standard model predictions in cosmology. Merely considering linear-order theory allows us to accurately describe the data on the entire available range of scales and to probe void-centric distances down to about 2 h^-1 Mpc. Common systematics, such as the Fingers-of-God effect, scale-dependent galaxy bias, and nonlinear clustering do not seem to play a significant role in our analysis. We constrain the growth rate of structure via the redshift-space distortion parameter β at two median redshifts, β(z̄ = 0.32) = 0.599 (+0.134/-0.124) and β(z̄ = 0.54) = 0.457 (+0.056/-0.054), with a precision that is competitive with state-of-the-art galaxy-clustering results. While the high-redshift constraint perfectly agrees with model expectations, we observe a mild 2σ deviation at z̄ = 0.32, which increases to 3σ when the data are restricted to the lowest available redshift range of 0.15

  4. CLFs-based optimization control for a class of constrained visual servoing systems.

    PubMed

    Song, Xiulan; Miaomiao, Fu

    2017-03-01

    In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e. translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal values of those adjustable parameters, which yields an optimized control law that satisfies the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that there is no requirement to compute online the pseudo-inverse of the image Jacobian matrix or the homography matrix. Simulation and experimental results illustrate the effectiveness of the method proposed here. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
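
    The Fibonacci method referred to here is a derivative-free search that minimizes a unimodal function on an interval with a fixed budget of evaluations. A generic sketch (the quadratic cost is a hypothetical stand-in for the closed-loop performance index being tuned online):

```python
def fibonacci_search(func, lo, hi, n=25):
    """Fibonacci search: minimize a unimodal func on [lo, hi]
    using n function evaluations."""
    fib = [1, 1]                      # Fibonacci numbers F[0..n]
    while len(fib) < n + 1:
        fib.append(fib[-1] + fib[-2])
    x1 = lo + (fib[n - 2] / fib[n]) * (hi - lo)
    x2 = lo + (fib[n - 1] / fib[n]) * (hi - lo)
    f1, f2 = func(x1), func(x2)
    for i in range(1, n - 1):
        if f1 < f2:
            # Minimum lies in [lo, x2]: reuse x1 as the new upper probe.
            hi = x2
            x2, f2 = x1, f1
            x1 = lo + (fib[n - i - 2] / fib[n - i]) * (hi - lo)
            f1 = func(x1)
        else:
            # Minimum lies in [x1, hi]: reuse x2 as the new lower probe.
            lo = x1
            x1, f1 = x2, f2
            x2 = lo + (fib[n - i - 1] / fib[n - i]) * (hi - lo)
            f2 = func(x2)
    return (lo + hi) / 2.0

def cost(x):
    # Hypothetical stand-in for the closed-loop performance index.
    return (x - 0.3) ** 2

best = fibonacci_search(cost, 0.0, 1.0, n=25)
print(round(best, 3))  # 0.3
```

    Each iteration shrinks the bracket by a ratio of consecutive Fibonacci numbers while reusing one of the two previous evaluations, which is what makes the method attractive for online computation.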

  5. Cosmology and the Bispectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sefusatti, Emiliano; /Fermilab /CCPP, New York; Crocce, Martin

    The present spatial distribution of galaxies in the Universe is non-Gaussian, with 40% skewness in 50 h^-1 Mpc spheres, and remarkably little is known about the information encoded in it about cosmological parameters beyond the power spectrum. In this work they present an attempt to bridge this gap by studying the bispectrum, paying particular attention to a joint analysis with the power spectrum and their combination with CMB data. They address the covariance properties of the power spectrum and bispectrum, including the effects of beat coupling that lead to interesting cross-correlations, and discuss how baryon acoustic oscillations break degeneracies. They show that the bispectrum has significant information on cosmological parameters well beyond its power in constraining galaxy bias, and when combined with the power spectrum is more complementary than combining power spectra of different samples of galaxies, since non-Gaussianity provides a somewhat different direction in parameter space. In the framework of flat cosmological models they show that most of the improvement of adding bispectrum information corresponds to parameters related to the amplitude and effective spectral index of perturbations, which can be improved by almost a factor of two. Moreover, they demonstrate that the expected statistical uncertainties in σ_8 of a few percent are robust to relaxing the dark energy beyond a cosmological constant.

  6. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved, which satisfies robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  7. Space Science for the Third Millennium

    NASA Technical Reports Server (NTRS)

    Frewing, Kent

    1996-01-01

    As NASA approaches the beginning of its fifth decade in 1998, and as the calendar approaches the beginning of its third millennium, America's civilian space agency is changing its historic ideas about conducting space science so that it will still be able to perform the desired scientific studies in an era of constrained NASA budgets.

  8. GRAVITATIONAL-WAVE OBSERVATIONS MAY CONSTRAIN GAMMA-RAY BURST MODELS: THE CASE OF GW150914–GBM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veres, P.; Preece, R. D.; Goldstein, A.

    The possible short gamma-ray burst (GRB) observed by Fermi/GBM in coincidence with the first gravitational-wave (GW) detection offers new ways to test GRB prompt emission models. GW observations provide previously inaccessible physical parameters for the black hole central engine, such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitely ruled out. Dissipative photosphere models, however, have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density (~10^-3 cm^-3) and a high Lorentz factor (~2000). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Blandford-Znajek model. If future joint observations confirm the GW-short-GRB association we can provide similar but more detailed tests for prompt emission models.

  9. Mapping the Pressure-radius Relationship of Exoplanets

    NASA Astrophysics Data System (ADS)

    Cubillos, Patricio; Fossati, Luca; Kubyshkina, Darya

    2017-10-01

    The radius of a planet is one of the most physically meaningful and readily accessible parameters of extra-solar planets. This parameter is extensively used in the literature to compare planets or study trends in the known population of exoplanets. However, in an atmosphere, the concept of a planet radius is inherently fuzzy. The atmospheric pressures probed by transmission (transit) or emission (eclipse) spectra are not directly constrained by the observations; they vary as a function of the atmospheric properties and observing wavelengths, and further correlate with the atmospheric properties, producing degenerate solutions. Here, we characterize the properties of exoplanet radii using a radiative-transfer model to compute clear-atmosphere transmission and emission spectra of gas-dominated planets. We explore a wide range of planetary temperatures, masses, and radii, sampling from 300 to 3000 K and Jupiter- to Earth-like values. We will discuss how transit and photospheric radii vary over the parameter space, and map the global trends in the atmospheric pressures associated with these radii. We will also highlight the biases introduced by the choice of an observing band, or the assumption of a clear/cloudy atmosphere, and the relevance that these biases take on as better instrumentation improves the precision of photometric observations.

  10. On parametrized cold dense matter equation-of-state inference

    NASA Astrophysics Data System (ADS)

    Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.

    2018-07-01

    Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrized dense matter equations of state. In particular, we generalize and examine two inference paradigms from the literature: (i) direct posterior equation-of-state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective while the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilizing archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation-of-state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.

  11. On parametrised cold dense matter equation of state inference

    NASA Astrophysics Data System (ADS)

    Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.

    2018-04-01

    Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrised dense matter equations of state. In particular we generalise and examine two inference paradigms from the literature: (i) direct posterior equation of state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective whilst the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilising archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation of state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.

  12. Cosmological structure formation in Decaying Dark Matter models

    NASA Astrophysics Data System (ADS)

    Cheng, Dalong; Chu, M.-C.; Tang, Jiayu

    2015-07-01

    The standard cold dark matter (CDM) model predicts too many and too dense small structures. We consider an alternative model in which the dark matter undergoes two-body decays, with cosmological lifetime τ, into only one type of massive daughter with non-relativistic recoil velocity V_k. This decaying dark matter model (DDM) can suppress structure formation below its free-streaming scale at time scales comparable to τ. Compared with warm dark matter (WDM), DDM can better reduce the small structures while remaining consistent with high-redshift observations. We study cosmological structure formation in DDM by performing self-consistent N-body simulations and point out that cosmological simulations are necessary to understand the DDM structures, especially on non-linear scales. We propose empirical fitting functions for the DDM suppression of the mass function and the concentration-mass relation, which depend on the decay parameters: lifetime τ, recoil velocity V_k, and redshift. The fitting functions lead to accurate reconstruction of the non-linear power transfer function of DDM to CDM in the framework of the halo model. Using these results, we set constraints on the DDM parameter space by demanding that DDM does not induce larger suppression than the Lyman-α constrained WDM models. We further generalize and constrain the DDM models to initial conditions with non-trivial mother fractions and show that the halo model predictions are still valid after considering a global decayed fraction. Finally, we point out that DDM is unlikely to resolve the disagreement on cluster numbers between the Planck primary CMB prediction and the Sunyaev-Zeldovich (SZ) effect number count for τ ~ H_0^-1.

  13. 4D-tomographic reconstruction of water vapor using the hybrid regularization technique with application to the North West of Iran

    NASA Astrophysics Data System (ADS)

    Adavi, Zohre; Mashhadi-Hossainali, Masoud

    2015-04-01

    Water vapor is considered one of the most important weather parameters in meteorology. Its non-uniform distribution, which is due to the atmospheric phenomena above the surface of the earth, depends on both space and time. Due to the limited spatial and temporal coverage of observations, estimating water vapor is still a challenge in meteorology and related fields such as positioning and geodetic techniques. Tomography is a method for modeling the spatio-temporal variations of this parameter. In this approach, inversion techniques are used to model water vapor by analyzing the impact of the troposphere on Global Navigation Satellite System (GNSS) signals. Non-uniqueness and instability of the solution are the two characteristic features of this problem. Horizontal and/or vertical constraints are usually used to compute a unique solution. Here, a hybrid regularization method is used for computing a regularized solution. The adopted method is based on the Least-Squares QR (LSQR) and Tikhonov regularization techniques. This method benefits from the advantages of both iterative and direct techniques. Moreover, it is independent of initial values. Based on this property, and using an appropriate resolution for the model, the number of model elements that are not constrained by GPS measurements is first minimized; water vapor density is then estimated only at the voxels that are constrained by these measurements. In other words, no constraint is added to solve the problem. Reconstructed profiles of water vapor are validated using radiosonde measurements.
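    The hybrid idea pairs an iterative solver with Tikhonov damping, and SciPy's `lsqr` exposes exactly this combination through its `damp` argument. A minimal sketch on a synthetic, underdetermined ray geometry (dimensions and noise level invented for illustration):

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Toy ill-posed tomography system: more unknowns (voxels) than rays.
n_rays, n_voxels = 40, 60
A = rng.random((n_rays, n_voxels))     # ray-intersection design matrix
x_true = rng.random(n_voxels)          # "true" water vapor densities
b = A @ x_true + 0.01 * rng.standard_normal(n_rays)

# Damped LSQR: iteratively solves min ||A x - b||^2 + damp^2 ||x||^2,
# i.e. zeroth-order Tikhonov regularization inside an LSQR solver.
damp = 0.1
x_reg = lsqr(A, b, damp=damp)[0]

# Without damping, lsqr returns the minimum-norm least-squares solution
# of the underdetermined system for comparison.
x_min_norm = lsqr(A, b)[0]
print(np.linalg.norm(x_reg), np.linalg.norm(x_min_norm))
```

    Damping shrinks every singular-value component of the solution, which is what stabilizes the inversion at voxels that rays barely constrain.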

  14. Linear and non-linear Modified Gravity forecasts with future surveys

    NASA Astrophysics Data System (ADS)

    Casas, Santiago; Kunz, Martin; Martinelli, Matteo; Pettorino, Valeria

    2017-12-01

    Modified Gravity theories generally affect the Poisson equation and the gravitational slip in an observable way that can be parameterized by two generic functions (η and μ) of time and space. We bin their time dependence in redshift and present forecasts on each bin for future surveys like Euclid. We consider both Galaxy Clustering and Weak Lensing surveys, showing the impact of the non-linear regime with two different semi-analytical approximations. In addition to these future observables, we use a prior covariance matrix derived from the Planck observations of the Cosmic Microwave Background. In this work we neglect the information from the cross-correlation of these observables and treat them as independent. Our results show that η and μ in different redshift bins are significantly correlated, but including non-linear scales reduces or even eliminates the correlation, breaking the degeneracy between Modified Gravity parameters and the overall amplitude of the matter power spectrum. We further apply a Zero-phase Component Analysis and identify which combinations of the Modified Gravity parameter amplitudes, in different redshift bins, are best constrained by future surveys. We extend the analysis to two particular parameterizations of μ and η and consider, in addition to Euclid, also SKA1, SKA2, and DESI. We find in this case that future surveys will be able to constrain the current values of η and μ at the 2-5% level when using only linear scales (wavevector k < 0.15 h/Mpc), depending on the specific time parameterization; sensitivity improves to about 1% when non-linearities are included.
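    The forecasting machinery reduces to a Fisher matrix: its inverse gives marginalized errors and bin-bin correlations, and its eigenvectors give decorrelated parameter combinations, as in a zero-phase/principal component analysis. A hedged toy example with an invented 2-bin Fisher matrix (not the paper's actual forecast):

```python
import numpy as np

# Invented Fisher matrix for two binned modified-gravity amplitudes
# (mu1, mu2); the strong off-diagonal term mimics the bin-bin
# correlation discussed in the text.
F = np.array([[40.0, 36.0],
              [36.0, 40.0]])

cov = np.linalg.inv(F)                       # parameter covariance
sigmas = np.sqrt(np.diag(cov))               # marginalized 1-sigma errors
corr = cov[0, 1] / (sigmas[0] * sigmas[1])   # bin-bin correlation

# Eigenvectors of F are independently constrained combinations; the
# largest eigenvalue picks out the best-constrained mode.
eigvals, eigvecs = np.linalg.eigh(F)
best_mode = eigvecs[:, np.argmax(eigvals)]
print(sigmas.round(3), round(float(corr), 2), best_mode.round(2))
```

    For this symmetric toy matrix, the best-constrained mode is the sum of the two bin amplitudes and the poorly constrained mode is their difference, which is the qualitative behaviour a decorrelation analysis exposes.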

  15. Use of constrained optimization in the conceptual design of a medium-range subsonic transport

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1980-01-01

    Constrained parameter optimization was used to perform the optimal conceptual design of a medium-range transport configuration. The impact of choosing a given performance index was studied, and the required income for a 15 percent return on investment was proposed as a figure of merit. A number of design constants and constraint functions were systematically varied to document the sensitivities of the optimal design to a variety of economic and technological assumptions. For each of the parameter variations, a comparison was made between the baseline configuration and the optimally redesigned configuration.
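    Constrained parameter optimization of this kind can be sketched with a generic nonlinear programming solver. The objective and constraint below are invented stand-ins, not Sliwa's actual economic model:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical figure of merit: required income for a fixed return on
# investment as a function of two design variables (say wing area in m^2
# and aspect ratio). The quadratic form is purely illustrative.
def required_income(x):
    wing_area, aspect_ratio = x
    return (wing_area - 150.0) ** 2 + 10.0 * (aspect_ratio - 8.0) ** 2

# One invented inequality constraint coupling the variables (e.g. a
# field-length limit), expressed as fun(x) >= 0, plus simple bounds.
constraints = [{"type": "ineq", "fun": lambda x: x[0] - 20.0 * x[1]}]
bounds = [(100.0, 300.0), (6.0, 12.0)]

result = minimize(required_income, x0=[200.0, 10.0], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x.round(2), round(result.fun, 2))
```

    In this toy case the unconstrained optimum is infeasible, so the solver lands on the constraint boundary, exactly the situation in which the redesigned optimum differs from the baseline.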

  16. Lensing convergence in galaxy clustering in ΛCDM and beyond

    NASA Astrophysics Data System (ADS)

    Villa, Eleonora; Di Dio, Enea; Lepori, Francesca

    2018-04-01

    We study the impact of neglecting lensing magnification in galaxy clustering analyses for future galaxy surveys, considering the ΛCDM model and two extensions: massive neutrinos and modifications of General Relativity. Our study focuses on the biases on the constraints and on the estimation of the cosmological parameters. We perform a comprehensive investigation of these two effects for the upcoming photometric and spectroscopic galaxy surveys Euclid and SKA for different redshift binning configurations. We also provide a fitting formula for the magnification bias of SKA. Our results show that the information present in the lensing contribution does improve the constraints on the modified gravity parameters whereas the lensing constraining power is negligible for the ΛCDM parameters. For photometric surveys the estimation is biased for all the parameters if lensing is not taken into account. This effect is particularly significant for the modified gravity parameters. Conversely for spectroscopic surveys the bias is below one sigma for all the parameters. Our findings show the importance of including lensing in galaxy clustering analyses for testing General Relativity and to constrain the parameters which describe its modifications.

  17. Testing Einstein's equivalence principle with polarized gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Yang, Chao; Zou, Yuan-Chuan; Zhang, Yue-Yang; Liao, Bin; Lei, Wei-Hua

    2017-07-01

    Einstein's equivalence principle can be tested using parametrized post-Newtonian parameters, of which the parameter γ has been constrained by comparing the arrival times of photons with different energies. It has been constrained by a variety of astronomical transient events, such as gamma-ray bursts (GRBs), fast radio bursts, and pulses of pulsars, with the most stringent constraint being Δγ ≲ 10^{-15}. In this Letter, we consider the arrival times of light with different circular polarizations. Linearly polarized light is the combination of two circularly polarized components. If the arrival time difference between the two circular components is too large, their combination may lose its linear polarization. We constrain the value of Δγ_p < 1.6 × 10^{-27} from the measurement of the polarization of GRB 110721A, which is the most stringent constraint ever achieved.

  18. A 750 GeV portal: LHC phenomenology and dark matter candidates

    DOE PAGES

    D’Eramo, Francesco; de Vries, Jordy; Panci, Paolo

    2016-05-16

    We study the effective field theory obtained by extending the Standard Model field content with two singlets: a 750 GeV (pseudo-)scalar and a stable fermion. Accounting for collider production initiated by both gluon and photon fusion, we investigate where the theory is consistent with both the LHC diphoton excess and bounds from Run 1. We analyze dark matter phenomenology in such regions, including relic density constraints as well as collider, direct, and indirect bounds. Scalar portal dark matter models are very close to limits from direct detection and mono-jet searches if gluon fusion dominates, and not constrained at all otherwise. In conclusion, pseudo-scalar models are challenged by photon line limits and mono-jet searches in most of the parameter space.

  20. First LIGO search for gravitational wave bursts from cosmic (super)strings

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Adhikari, R.; Ajith, P.; Allen, B.; Allen, G.; Amin, R. S.; Anderson, S. B.; Anderson, W. G.; Arain, M. A.; Araya, M.; Armandula, H.; Armor, P.; Aso, Y.; Aston, S.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P.; Ballmer, S.; Barker, C.; Barker, D.; Barr, B.; Barriga, P.; Barsotti, L.; Barton, M. A.; Bartos, I.; Bassiri, R.; Bastarrika, M.; Behnke, B.; Benacquista, M.; Betzwieser, J.; Beyersdorf, P. T.; Bilenko, I. A.; Billingsley, G.; Biswas, R.; Black, E.; Blackburn, J. K.; Blackburn, L.; Blair, D.; Bland, B.; Bodiya, T. P.; Bogue, L.; Bork, R.; Boschi, V.; Bose, S.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Bridges, D. O.; Brinkmann, M.; Brooks, A. F.; Brown, D. A.; Brummit, A.; Brunet, G.; Bullington, A.; Buonanno, A.; Burmeister, O.; Byer, R. L.; Cadonati, L.; Camp, J. B.; Cannizzo, J.; Cannon, K. C.; Cao, J.; Cardenas, L.; Caride, S.; Castaldi, G.; Caudill, S.; Cavaglià, M.; Cepeda, C.; Chalermsongsak, T.; Chalkley, E.; Charlton, P.; Chatterji, S.; Chelkowski, S.; Chen, Y.; Christensen, N.; Chung, C. T. Y.; Clark, D.; Clark, J.; Clayton, J. H.; Cokelaer, T.; Colacino, C. N.; Conte, R.; Cook, D.; Corbitt, T. R. C.; Cornish, N.; Coward, D.; Coyne, D. C.; Creighton, J. D. E.; Creighton, T. D.; Cruise, A. M.; Culter, R. M.; Cumming, A.; Cunningham, L.; Danilishin, S. L.; Danzmann, K.; Daudert, B.; Davies, G.; Daw, E. J.; Debra, D.; Degallaix, J.; Dergachev, V.; Desai, S.; Desalvo, R.; Dhurandhar, S.; Díaz, M.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doomes, E. E.; Drever, R. W. P.; Dueck, J.; Duke, I.; Dumas, J.-C.; Dwyer, J. G.; Echols, C.; Edgar, M.; Effler, A.; Ehrens, P.; Espinoza, E.; Etzel, T.; Evans, M.; Evans, T.; Fairhurst, S.; Faltas, Y.; Fan, Y.; Fazi, D.; Fehrmann, H.; Finn, L. S.; Flasch, K.; Foley, S.; Forrest, C.; Fotopoulos, N.; Franzen, A.; Frede, M.; Frei, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T.; Fritschel, P.; Frolov, V. V.; Fyffe, M.; Galdi, V.; Garofoli, J. A.; Gholami, I.; Giaime, J. 
A.; Giampanis, S.; Giardina, K. D.; Goda, K.; Goetz, E.; Goggin, L. M.; González, G.; Gorodetsky, M. L.; Goßler, S.; Gouaty, R.; Grant, A.; Gras, S.; Gray, C.; Gray, M.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Grimaldi, F.; Grosso, R.; Grote, H.; Grunewald, S.; Guenther, M.; Gustafson, E. K.; Gustafson, R.; Hage, B.; Hallam, J. M.; Hammer, D.; Hammond, G. D.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G. M.; Harry, I. W.; Harstad, E. D.; Haughian, K.; Hayama, K.; Heefner, J.; Heng, I. S.; Heptonstall, A.; Hewitson, M.; Hild, S.; Hirose, E.; Hoak, D.; Hodge, K. A.; Holt, K.; Hosken, D. J.; Hough, J.; Hoyland, D.; Hughey, B.; Huttner, S. H.; Ingram, D. R.; Isogai, T.; Ito, M.; Ivanov, A.; Johnson, B.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kandhasamy, S.; Kanner, J.; Kasprzyk, D.; Katsavounidis, E.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Khalaidovski, A.; Khalili, F. Y.; Khan, R.; Khazanov, E.; King, P.; Kissel, J. S.; Klimenko, S.; Kokeyama, K.; Kondrashov, V.; Kopparapu, R.; Koranda, S.; Kozak, D.; Krishnan, B.; Kumar, R.; Kwee, P.; Lam, P. K.; Landry, M.; Lantz, B.; Lazzarini, A.; Lei, H.; Lei, M.; Leindecker, N.; Leonor, I.; Li, C.; Lin, H.; Lindquist, P. E.; Littenberg, T. B.; Lockerbie, N. A.; Lodhia, D.; Longo, M.; Lormand, M.; Lu, P.; Lubiński, M.; Lucianetti, A.; Lück, H.; Machenschalk, B.; Macinnis, M.; Mageswaran, M.; Mailand, K.; Mandel, I.; Mandic, V.; Márka, S.; Márka, Z.; Markosyan, A.; Markowitz, J.; Maros, E.; Martin, I. W.; Martin, R. M.; Marx, J. N.; Mason, K.; Matichard, F.; Matone, L.; Matzner, R. A.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McHugh, M.; McIntyre, G.; McKechan, D. J. A.; McKenzie, K.; Mehmet, M.; Melatos, A.; Melissinos, A. C.; Menéndez, D. F.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C.; Meyer, M. S.; Miller, J.; Minelli, J.; Mino, Y.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Miyakawa, O.; Moe, B.; Mohanty, S. 
D.; Mohapatra, S. R. P.; Moreno, G.; Morioka, T.; Mors, K.; Mossavi, K.; Mowlowry, C.; Mueller, G.; Müller-Ebhardt, H.; Muhammad, D.; Mukherjee, S.; Mukhopadhyay, H.; Mullavey, A.; Munch, J.; Murray, P. G.; Myers, E.; Myers, J.; Nash, T.; Nelson, J.; Newton, G.; Nishizawa, A.; Numata, K.; O'Dell, J.; O'Reilly, B.; O'Shaughnessy, R.; Ochsner, E.; Ogin, G. H.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pan, Y.; Pankow, C.; Papa, M. A.; Parameshwaraiah, V.; Patel, P.; Pedraza, M.; Penn, S.; Perreca, A.; Pierro, V.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Postiglione, F.; Principe, M.; Prix, R.; Prokhorov, L.; Punken, O.; Quetschke, V.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raics, Z.; Rainer, N.; Rakhmanov, M.; Raymond, V.; Reed, C. M.; Reed, T.; Rehbein, H.; Reid, S.; Reitze, D. H.; Riesen, R.; Riles, K.; Rivera, B.; Roberts, P.; Robertson, N. A.; Robinson, C.; Robinson, E. L.; Roddy, S.; Röver, C.; Rollins, J.; Romano, J. D.; Romie, J. H.; Rowan, S.; Rüdiger, A.; Russell, P.; Ryan, K.; Sakata, S.; Sancho de La Jordana, L.; Sandberg, V.; Sannibale, V.; Santamaría, L.; Saraf, S.; Sarin, P.; Sathyaprakash, B. S.; Sato, S.; Satterthwaite, M.; Saulson, P. R.; Savage, R.; Savov, P.; Scanlan, M.; Schilling, R.; Schnabel, R.; Schofield, R.; Schulz, B.; Schutz, B. F.; Schwinberg, P.; Scott, J.; Scott, S. M.; Searle, A. C.; Sears, B.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Sergeev, A.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Sinha, S.; Sintes, A. M.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Somiya, K.; Sorazu, B.; Stein, A.; Stein, L. C.; Steplewski, S.; Stochino, A.; Stone, R.; Strain, K. A.; Strigin, S.; Stroeer, A.; Stuver, A. L.; Summerscales, T. Z.; Sun, K.-X.; Sung, M.; Sutton, P. J.; Szokoly, G. P.; Talukder, D.; Tang, L.; Tanner, D. B.; Tarabrin, S. P.; Taylor, J. R.; Taylor, R.; Thacker, J.; Thorne, K. A.; Thorne, K. 
S.; Thüring, A.; Tokmakov, K. V.; Torres, C.; Torrie, C.; Traylor, G.; Trias, M.; Ugolini, D.; Ulmen, J.; Urbanek, K.; Vahlbruch, H.; Vallisneri, M.; van den Broeck, C.; van der Sluys, M. V.; van Veggel, A. A.; Vass, S.; Vaulin, R.; Vecchio, A.; Veitch, J.; Veitch, P.; Veltkamp, C.; Villar, A.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Ward, R. L.; Weidner, A.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Wen, S.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, H. R.; Williams, L.; Willke, B.; Wilmut, I.; Winkelmann, L.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Woan, G.; Wooley, R.; Worden, J.; Wu, W.; Yakushin, I.; Yamamoto, H.; Yan, Z.; Yoshida, S.; Zanolin, M.; Zhang, J.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M. E.; Zur Mühlen, H.; Zweizig, J.; Robinet, F.

    2009-09-01

    We report on a matched-filter search for gravitational wave bursts from cosmic string cusps using LIGO data from the fourth science run (S4), which took place in February and March 2005. No gravitational waves were detected in 14.9 days of data from times when all three LIGO detectors were operating. We interpret the result in terms of a frequentist upper limit on the rate of gravitational wave bursts and use the limits on the rate to constrain the parameter space (string tension, reconnection probability, and loop sizes) of cosmic string models. Many grand unified theory-scale models (with string tension Gμ/c^2 ≈ 10^{-6}) can be ruled out at 90% confidence for reconnection probabilities p ≤ 10^{-3} if loop sizes are set by gravitational back-reaction.
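    For intuition, the simplest version of a frequentist rate limit (zero detected events in a livetime T, ignoring the loudest-event refinements actually used in LIGO analyses) follows directly from Poisson statistics:

```python
import math

# Textbook counting-experiment limit: with zero detected events,
# P(0 | rate R, livetime T) = exp(-R*T) = 1 - CL  =>  R = -ln(1 - CL) / T.
T = 14.9 / 365.25            # 14.9 days of triple-coincident data, in years
CL = 0.90
rate_limit = -math.log(1.0 - CL) / T   # events per year at 90% confidence
print(round(rate_limit, 1))
```

    The actual paper's limits additionally fold in the detection efficiency as a function of signal amplitude, which weakens the bound relative to this idealized zero-background number.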

  1. Measuring rare and exclusive Higgs boson decays into light resonances

    NASA Astrophysics Data System (ADS)

    Chisholm, Andrew S.; Kuttimalai, Silvan; Nikolopoulos, Konstantinos; Spannowsky, Michael

    2016-09-01

    We evaluate the LHC's potential of observing Higgs boson decays into light elementary or composite resonances through their hadronic decay channels. We focus on the Higgs boson production processes with the largest cross sections, pp → h and pp → h + jet, with subsequent decays h → ZA or h → Zη_c, and comment on the production process pp → hZ. By exploiting track-based jet substructure observables and extrapolating to 3000 fb^{-1}, we find BR(h → ZA) ≃ BR(h → Zη_c) ≲ 0.02 at 95% CL. We interpret this limit in terms of the 2HDM Type 1. We find that searches for h → ZA are complementary to existing measurements and can constrain large parts of the currently allowed parameter space.

  2. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    NASA Astrophysics Data System (ADS)

    Xavier, Marcelo A.; Trimboli, M. Scott

    2015-07-01

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a second-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well suited to the dynamics of the battery control problem and further suggest that significant performance improvements might be achieved by extending the result to electrochemical models.
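    A minimal receding-horizon sketch of the idea, assuming a crude first-order equivalent circuit with invented parameters (the paper uses a second-order model and a dedicated MPC formulation; a generic NLP solver stands in for the QP here):

```python
import numpy as np
from scipy.optimize import minimize

# Toy first-order equivalent-circuit cell; all parameters are invented.
dt, Q, R0 = 1.0, 3600.0, 0.01            # s, A*s (1 Ah), ohm
v_max, i_max, z_target = 4.1, 50.0, 0.8
ocv = lambda z: 3.5 + 0.7 * z            # crude linear open-circuit voltage

def mpc_step(z0, horizon=10):
    """One receding-horizon step: optimize a current sequence subject to
    current and voltage limits, then apply only the first input."""
    def cost(i_seq):                     # track the target state of charge
        z, c = z0, 0.0
        for i in i_seq:
            z = z + dt / Q * i           # SOC integrator dynamics
            c += (z - z_target) ** 2
        return c

    def v_margin(i_seq):                 # v_max - v(k) >= 0 at every step;
        z, margins = z0, []              # R0*i is the direct feed-through term
        for i in i_seq:
            margins.append(v_max - (ocv(z) + R0 * i))
            z = z + dt / Q * i
        return np.array(margins)

    res = minimize(cost, np.full(horizon, i_max / 2), method="SLSQP",
                   bounds=[(0.0, i_max)] * horizon,
                   constraints=[{"type": "ineq", "fun": v_margin}])
    return res.x[0]

# Closed-loop simulation of charging from 20% SOC.
z, v_log = 0.2, []
for _ in range(30):
    i = mpc_step(z)
    v_log.append(ocv(z) + R0 * i)        # terminal voltage actually applied
    z += dt / Q * i
print(round(z, 3))
```

    Even this toy controller reproduces the qualitative constant-current/constant-voltage taper: current is driven to whatever value keeps the terminal voltage pinned at its limit.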

  3. A variational approach to dynamics of flexible multibody systems

    NASA Technical Reports Server (NTRS)

    Wu, Shih-Chin; Haug, Edward J.; Kim, Sung-Soo

    1989-01-01

    This paper presents a variational formulation of constrained dynamics of flexible multibody systems, using a vector-variational calculus approach. Body reference frames are used to define the global position and orientation of individual bodies in the system, each located and oriented by the position of its origin and by Euler parameters, respectively. Small-strain linear elastic deformation of individual components, relative to their body reference frames, is defined by linear combinations of deformation modes that are induced by constraint reaction forces and normal modes of vibration. A library of kinematic couplings between flexible and/or rigid bodies is defined and analyzed. Variational equations of motion for multibody systems are obtained and reduced to mixed differential-algebraic equations of motion. A space structure that must deform during deployment is analyzed to illustrate use of the methods developed.

  4. Control of the constrained planar simple inverted pendulum

    NASA Technical Reports Server (NTRS)

    Bavarian, B.; Wyman, B. F.; Hemami, H.

    1983-01-01

    Control of a constrained planar inverted pendulum by eigenstructure assignment is considered. Linear feedback is used to stabilize and decouple the system in such a way that specified subspaces of the state space are invariant for the closed-loop system. The effectiveness of the feedback law is tested by digital computer simulation. Pre-compensation by an inverse plant is used to improve performance.

  5. How to Test the SME with Space Missions?

    NASA Technical Reports Server (NTRS)

    Hees, A.; Lamine, B.; Le Poncin-Lafitte, C.; Wolf, P.

    2013-01-01

    In this communication, we focus on possibilities to constrain SME coefficients using Cassini and Messenger data. We present simulations of radio science observables within the framework of the SME, identify the linear combinations of SME coefficients the observations depend on and determine the sensitivity of these measurements to the SME coefficients. We show that these datasets are very powerful for constraining SME coefficients.

  6. Constraining Dust and Color Variations of High-z SNe Using NICMOS on the Hubble Space Telescope

    Science.gov Websites

    SAO/NASA ADS Astronomy Abstract Service record. Publication Date: 08/2009. Origin: IOP. Keywords: cosmology: observations, cosmological parameters.

  7. A Multidimensional Item Response Model: Constrained Latent Class Analysis Using the Gibbs Sampler and Posterior Predictive Checks.

    ERIC Educational Resources Information Center

    Hoijtink, Herbert; Molenaar, Ivo W.

    1997-01-01

    This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)
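    A bare-bones Gibbs sampler for a two-class latent class model illustrates the approach. All data here are simulated, and the inequality (order) constraint is enforced by simple relabeling, a stand-in for the constrained sampling used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate binary item responses from a two-class latent class model
# (sample size, item count, and probabilities invented for illustration).
n, J = 300, 4
true_p = np.array([[0.2] * J, [0.8] * J])   # item probabilities per class
z_true = rng.integers(0, 2, n)
X = (rng.random((n, J)) < true_p[z_true]).astype(int)

# Gibbs sampler with Beta(1, 1) priors; the constraint p[0, j] <= p[1, j]
# is maintained by relabeling the classes after each sweep.
p = np.array([[0.3] * J, [0.7] * J])
w = 0.5                                      # mixing weight of class 1
draws = []
for it in range(500):
    # 1) sample latent class memberships given item probabilities
    like = np.stack([(p[c] ** X * (1 - p[c]) ** (1 - X)).prod(1)
                     for c in (0, 1)], axis=1)
    prob1 = w * like[:, 1] / ((1 - w) * like[:, 0] + w * like[:, 1])
    z = (rng.random(n) < prob1).astype(int)
    # 2) sample item probabilities given memberships (Beta conjugacy)
    for c in (0, 1):
        s, m = X[z == c].sum(0), int((z == c).sum())
        p[c] = rng.beta(1 + s, 1 + m - s)
    if p[0].mean() > p[1].mean():            # enforce the order constraint
        p = p[::-1].copy()
    w = rng.beta(1 + (z == 1).sum(), 1 + (z == 0).sum())
    if it >= 100:                            # discard burn-in
        draws.append(p.copy())

post_mean = np.mean(draws, axis=0)
print(post_mean.round(2))
```

    Posterior predictive checks would then compare statistics of replicated response matrices, drawn using the retained `p` and `w` samples, against the observed `X`.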

  8. Combined Constraints on the Equation of State of Dense Neutron-rich Matter from Terrestrial Nuclear Experiments and Observations of Neutron Stars

    NASA Astrophysics Data System (ADS)

    Zhang, Nai-Bo; Li, Bao-An; Xu, Jun

    2018-06-01

    Within the parameter space of the equation of state (EOS) of dense neutron-rich matter limited by existing constraints mainly from terrestrial nuclear experiments, we investigate how the neutron star maximum mass M_max > 2.01 ± 0.04 M_⊙, radius 10.62 km < R_1.4 < 12.83 km, and tidal deformability Λ_1.4 ≤ 800 of canonical neutron stars together constrain the EOS of dense neutron-rich nucleonic matter. While the 3D parameter space of K_sym (curvature of the nuclear symmetry energy), J_sym, and J_0 (skewness of the symmetry energy and of the EOS of symmetric nuclear matter, respectively) is narrowed down significantly by the observational constraints, more data are needed to pin down the individual values of K_sym, J_sym, and J_0. J_0 largely controls the maximum mass of neutron stars. While the EOS with J_0 = 0 is sufficiently stiff to support neutron stars as massive as 2.37 M_⊙, supporting the hypothetical ones as massive as 2.74 M_⊙ (composite mass of GW170817) requires J_0 to be larger than its currently known maximum value of about 400 MeV and beyond the causality limit. The upper limit on the tidal deformability of Λ_1.4 = 800 from the recent observation of GW170817 is found to provide upper limits on some EOS parameters consistent with, but far less restrictive than, the existing constraints from other observables studied.

  9. Two Higgs doublet model with vectorlike leptons and contributions to pp → WW and H → WW

    DOE PAGES

    Dermíšek, Radovan; Lunghi, Enrico; Shin, Seodong

    2016-02-18

    In this paper, we study a two Higgs doublet model extended by vectorlike leptons mixing with one family of standard model leptons. The generated flavor-violating couplings between heavy and light leptons can dramatically alter the decay patterns of heavier Higgs bosons. We focus on pp → H → ν_4 ν_μ → Wμν_μ, where ν_4 is a new neutral lepton, and study possible effects of this process on the measurements of pp → WW and H → WW, since it leads to the same final states. We discuss predictions for contributions to pp → WW and H → WW, and their correlations, from the region of the parameter space that satisfies all available constraints, including precision electroweak observables and limits from pair production of vectorlike leptons. Large contributions, close to current limits, favor the small tan β region of the parameter space. We find that, as a result of adopted cuts in experimental analyses, the contribution to pp → WW can be an order of magnitude larger than the contribution to H → WW. Thus, future precise measurements of pp → WW will further constrain the parameters of the model. We also consider possible contributions to pp → WW from the heavy Higgs decays into a new charged lepton e_4 (H → e_4 μ → Wμν_μ), exotic SM Higgs decays, and pair production of vectorlike leptons.

  10. Hierarchical Bayesian Model for Combining Geochemical and Geophysical Data for Environmental Applications Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong

    2013-05-01

    A hierarchical Bayesian model is developed to estimate the spatiotemporal distribution of aqueous geochemical parameters associated with in-situ bioremediation, using surface spectral induced polarization (SIP) data and borehole geochemical measurements collected during a bioremediation experiment at a uranium-contaminated site near Rifle, Colorado. The SIP data are first inverted for Cole-Cole parameters, including chargeability, time constant, resistivity at the DC frequency, and dependence factor, at each pixel of two-dimensional grids using a previously developed stochastic method. Correlations between the inverted Cole-Cole parameters and the wellbore-based groundwater chemistry measurements indicative of key metabolic processes within the aquifer (e.g. ferrous iron, sulfate, uranium) were established and used as a basis for petrophysical model development. The developed Bayesian model consists of three levels of statistical sub-models: 1) a data model, providing links between geochemical and geophysical attributes; 2) a process model, describing the spatial and temporal variability of geochemical properties in the subsurface system; and 3) a parameter model, describing prior distributions of various parameters and initial conditions. The unknown parameters are estimated using Markov chain Monte Carlo methods. By combining the temporally distributed geochemical data with the spatially distributed geophysical data, we obtain the spatio-temporal distributions of ferrous iron, sulfate, and sulfide, together with their associated uncertainty information. The results can be used to assess the efficacy of the bioremediation treatment over space and time and to constrain reactive transport models.

  11. Extracting foreground-obscured μ-distortion anisotropies to constrain primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Remazeilles, M.; Chluba, J.

    2018-07-01

    Correlations between cosmic microwave background (CMB) temperature, polarization, and spectral distortion anisotropies can be used as a probe of primordial non-Gaussianity. Here, we perform a reconstruction of μ-distortion anisotropies in the presence of Galactic and extragalactic foregrounds, applying the so-called Constrained ILC component separation method to simulations of proposed CMB space missions (PIXIE, LiteBIRD, CORE, and PICO). Our sky simulations include Galactic dust, Galactic synchrotron, Galactic free-free emission, the thermal Sunyaev-Zeldovich effect, and primary CMB temperature and μ-distortion anisotropies, the latter being added as a correlated field. The Constrained ILC method allows us to null the CMB temperature anisotropies in the reconstructed μ-map (and vice versa), in addition to mitigating the contamination from astrophysical foregrounds and instrumental noise. We compute the cross-power spectrum between the reconstructed (CMB-free) μ-distortion map and the (μ-free) CMB temperature map, after foreground removal and component separation. Since the cross-power spectrum is proportional to the primordial non-Gaussianity parameter, fNL, on scales k ≃ 740 Mpc^{-1}, this allows us to derive fNL-detection limits for the aforementioned future CMB experiments. Our analysis shows that foregrounds degrade the theoretical detection limits (based mostly on instrumental noise) by more than one order of magnitude, with PICO standing the best chance at placing upper limits on scale-dependent non-Gaussianity. We also discuss the dependence of the constraints on the channel sensitivities and chosen bands. As for B-mode polarization measurements, extended coverage at frequencies ν ≲ 40 GHz and ν ≳ 400 GHz provides more leverage than increased channel sensitivity.
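    The Constrained ILC weights minimize the output variance subject to unit response to the target component and zero response to the nulled one, w = C^{-1} A (A^T C^{-1} A)^{-1} f. A toy numerical sketch with invented spectral response vectors (not real distortion spectra):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy constrained ILC over 5 frequency channels; a_mu and a_cmb are
# illustrative spectral responses of the mu distortion and the CMB.
a_mu = np.array([1.0, 0.8, 0.5, 0.2, -0.1])
a_cmb = np.ones(5)               # CMB is flat in thermodynamic units

n_pix = 10000
maps = (np.outer(rng.standard_normal(n_pix), a_mu) * 0.3   # mu signal
        + np.outer(rng.standard_normal(n_pix), a_cmb)      # CMB
        + 0.1 * rng.standard_normal((n_pix, 5)))           # noise

C = np.cov(maps.T)               # channel-channel covariance
A = np.column_stack([a_mu, a_cmb])
f = np.array([1.0, 0.0])         # unit response to mu, zero to CMB

# Constrained ILC weights: minimize variance subject to A^T w = f.
Ci = np.linalg.inv(C)
w = Ci @ A @ np.linalg.solve(A.T @ Ci @ A, f)
print(w @ a_mu, w @ a_cmb)       # the two imposed responses
```

    The two constraints hold by construction, so the reconstructed μ-map is free of CMB temperature leakage; the variance minimization then suppresses (but cannot exactly null) unmodelled foregrounds and noise.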

  12. Extracting foreground-obscured μ-distortion anisotropies to constrain primordial non-Gaussianity

    NASA Astrophysics Data System (ADS)

    Remazeilles, M.; Chluba, J.

    2018-04-01

    Correlations between cosmic microwave background (CMB) temperature, polarization, and spectral distortion anisotropies can be used as a probe of primordial non-Gaussianity. Here, we perform a reconstruction of μ-distortion anisotropies in the presence of Galactic and extragalactic foregrounds, applying the so-called Constrained ILC component separation method to simulations of proposed CMB space missions (PIXIE, LiteBIRD, CORE, PICO). Our sky simulations include Galactic dust, Galactic synchrotron, Galactic free-free emission, the thermal Sunyaev-Zeldovich effect, and primary CMB temperature and μ-distortion anisotropies, the latter being added as a correlated field. The Constrained ILC method allows us to null the CMB temperature anisotropies in the reconstructed μ-map (and vice versa), in addition to mitigating the contamination from astrophysical foregrounds and instrumental noise. We compute the cross-power spectrum between the reconstructed (CMB-free) μ-distortion map and the (μ-free) CMB temperature map, after foreground removal and component separation. Since the cross-power spectrum is proportional to the primordial non-Gaussianity parameter, fNL, on scales k ≃ 740 Mpc^{-1}, this allows us to derive fNL-detection limits for the aforementioned future CMB experiments. Our analysis shows that foregrounds degrade the theoretical detection limits (based mostly on instrumental noise) by more than one order of magnitude, with PICO standing the best chance at placing upper limits on scale-dependent non-Gaussianity. We also discuss the dependence of the constraints on the channel sensitivities and chosen bands. As for B-mode polarization measurements, extended coverage at frequencies ν ≲ 40 GHz and ν ≳ 400 GHz provides more leverage than increased channel sensitivity.

  13. Using Parameter Constraints to Choose State Structures in Cost-Effectiveness Modelling.

    PubMed

    Thom, Howard; Jackson, Chris; Welton, Nicky; Sharples, Linda

    2017-09-01

    This article addresses the choice of state structure in a cost-effectiveness multi-state model. Key model outputs, such as treatment recommendations and prioritisation of future research, may be sensitive to state structure choice. For example, it may be uncertain whether to consider similar disease severities or similar clinical events as the same state or as separate states. Standard statistical methods for comparing models require a common reference dataset but merging states in a model aggregates the data, rendering these methods invalid. We propose a method that involves re-expressing a model with merged states as a model on the larger state space in which particular transition probabilities, costs and utilities are constrained to be equal between states. This produces a model that gives identical estimates of cost effectiveness to the model with merged states, while leaving the data unchanged. The comparison of state structures can be achieved by comparing maximised likelihoods or information criteria between constrained and unconstrained models. We can thus test whether the costs and/or health consequences for a patient in two states are the same, and hence if the states can be merged. We note that different structures can be used for rates, costs and utilities, as appropriate. We illustrate our method with applications to two recent models evaluating the cost effectiveness of prescribing anti-depressant medications by depression severity and the cost effectiveness of diagnostic tests for coronary artery disease. State structures in cost-effectiveness models can be compared using standard methods to compare constrained and unconstrained models.
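    The comparison of constrained and unconstrained structures via maximized likelihoods or information criteria can be illustrated with the simplest possible case: two transition probabilities that merging the states would force to be equal (all counts invented):

```python
from scipy.stats import binom

# Observed transitions out of two disease-severity states (hypothetical
# counts): n at risk, k progressing. Merging the two states is
# equivalent to constraining their transition probabilities to be equal.
n1, k1 = 120, 30
n2, k2 = 80, 25

def binom_loglik(k, n, p):
    return binom.logpmf(k, n, p)

# Unconstrained model: a separate probability per state (2 parameters).
ll_free = binom_loglik(k1, n1, k1 / n1) + binom_loglik(k2, n2, k2 / n2)

# Constrained model: one common probability; the MLE pools the data.
p_pool = (k1 + k2) / (n1 + n2)
ll_con = binom_loglik(k1, n1, p_pool) + binom_loglik(k2, n2, p_pool)

aic_free = 2 * 2 - 2 * ll_free
aic_con = 2 * 1 - 2 * ll_con
print(round(float(aic_free), 2), round(float(aic_con), 2))
```

    The unconstrained fit always attains at least as high a likelihood; the information criterion decides whether the improvement justifies the extra parameter, i.e. whether the states should stay separate. The full method applies the same logic with transition rates, costs, and utilities constrained jointly.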

  14. Supernova Cosmology Project

    Science.gov Websites

    Linked publications (truncated excerpts): … Space Telescope Cluster Supernova Survey: II. The Type Ia Supernova Rate in High-Redshift Galaxy … (arXiv:0809.1648); Constraining Dust and Color Variations of High-z SNe Using NICMOS on the Hubble Space … (arXiv:0804.4142); A New Determination of the High-Redshift Type Ia Supernova Rates with the Hubble Space Telescope.

  15. Task-space separation principle: a force-field approach to motion planning for redundant manipulators.

    PubMed

    Tommasino, Paolo; Campolo, Domenico

    2017-02-03

    In this work, we address human-like motor planning in redundant manipulators. Specifically, we want to capture postural synergies such as Donders' law, experimentally observed in humans during kinematically redundant tasks, and infer a minimal set of parameters to implement similar postural synergies in a kinematic model. For the model itself, although the focus of this paper is to solve redundancy by implementing postural strategies derived from experimental data, we also want to ensure that such postural control strategies do not interfere with other possible forms of motion control (in the task-space), i.e. solving the posture/movement problem. The redundancy problem is framed as a constrained optimization problem, traditionally solved via the method of Lagrange multipliers. The posture/movement problem can be tackled via the separation principle which, derived from experimental evidence, posits that the brain processes static torques (i.e. posture-dependent, such as gravitational torques) separately from dynamic torques (i.e. velocity-dependent). The separation principle has traditionally been applied at the joint-torque level. Our main contribution is to apply the separation principle to the Lagrange multipliers, which act as task-space force fields, leading to a task-space separation principle. In this way, we can separate postural control (implementing Donders' law) from various types of task-space movement planners. As an example, the proposed framework is applied to the (redundant) task of pointing with the human wrist. Nonlinear inverse optimization (NIO) is used to fit the model parameters and to capture the motor strategies displayed by six human subjects during pointing tasks. The novelty of our NIO approach is that (i) the fitted motor strategy, rather than raw data, is used to filter and down-sample human behaviours; (ii) our framework is used to efficiently simulate model behaviour iteratively, until it converges towards the experimental human strategies.
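
    The constrained-optimization core of this framework - a posture preference resolved against a task constraint via Lagrange multipliers - can be sketched with a toy linear task (the Jacobian, target and preferred posture below are made-up, not the wrist model of the paper):

```python
import numpy as np

# Toy redundancy resolution: two joint angles q must satisfy a 1-D task
# J @ q = x, leaving one degree of freedom; a posture preference (a
# stand-in for Donders' law) picks the solution closest to q0.
# Stationarity of the Lagrangian: (q - q0) + J^T lam = 0 and J q = x,
# solved jointly as one linear (KKT) system.
J = np.array([[1.0, 1.0]])       # toy task Jacobian
x = np.array([1.2])              # task target
q0 = np.array([0.9, 0.1])        # preferred posture

KKT = np.block([[np.eye(2), J.T],
                [J, np.zeros((1, 1))]])
rhs = np.concatenate([q0, x])
sol = np.linalg.solve(KKT, rhs)
q, lam = sol[:2], sol[2:]
print(q)    # posture satisfying the task, closest to q0
print(lam)  # Lagrange multiplier: the 'task-space force' enforcing the task
```

The multiplier is exactly the task-space force field to which the paper applies the separation principle.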

  16. Cosmological parameters, shear maps and power spectra from CFHTLenS using Bayesian hierarchical inference

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Heavens, Alan; Jaffe, Andrew H.

    2017-04-01

    We apply two Bayesian hierarchical inference schemes to infer shear power spectra, shear maps and cosmological parameters from the Canada-France-Hawaii Telescope (CFHTLenS) weak lensing survey - the first application of this method to data. In the first approach, we sample the joint posterior distribution of the shear maps and power spectra by Gibbs sampling, with minimal model assumptions. In the second approach, we sample the joint posterior of the shear maps and cosmological parameters, providing a new, accurate and principled approach to cosmological parameter inference from cosmic shear data. As a first demonstration on data, we perform a two-bin tomographic analysis to constrain cosmological parameters and investigate the possibility of photometric redshift bias in the CFHTLenS data. Under the baseline ΛCDM (Λ cold dark matter) model, we constrain S_8 = σ_8(Ω_m/0.3)^{0.5} = 0.67^{+0.03}_{-0.03} (68 per cent), consistent with previous CFHTLenS analyses but in tension with Planck. Adding neutrino mass as a free parameter, we are able to constrain ∑mν < 4.6 eV (95 per cent) using CFHTLenS data alone. Including a linear redshift-dependent photo-z bias Δz = p_2(z - p_1), we find p_1 = -0.25^{+0.53}_{-0.60} and p_2 = -0.15^{+0.17}_{-0.15}, and tension with Planck is only alleviated under very conservative prior assumptions. Neither the non-minimal neutrino mass nor the photo-z bias model is significantly preferred by the CFHTLenS (two-bin tomography) data.

  17. Exploring the squeezed three-point galaxy correlation function with generalized halo occupation distribution models

    NASA Astrophysics Data System (ADS)

    Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.

    2018-04-01

    We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5-parameter halo occupation distribution (HOD) model with various halo-scale physics and assembly bias. We describe the methodology of four generalizations: satellite distribution generalization, velocity bias, closest-approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
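
    The standard 5-parameter HOD that GRAND-HOD generalizes has a familiar closed form (a common Zheng-et-al-style parameterization; the parameter values below are illustrative, not those of the paper):

```python
import math

def n_cen(M, logMmin=13.0, sigma_logM=0.3):
    """Mean central occupation: a smooth step in log halo mass."""
    return 0.5 * (1.0 + math.erf((math.log10(M) - logMmin) / sigma_logM))

def n_sat(M, logM0=13.2, logM1=14.1, alpha=1.0):
    """Mean satellite occupation: power law above a cutoff mass M0."""
    M0, M1 = 10.0 ** logM0, 10.0 ** logM1
    return ((M - M0) / M1) ** alpha if M > M0 else 0.0

# centrals rise from ~0 to ~1 around Mmin; satellites grow as a power law
print(n_cen(1e12), n_cen(1e13), n_cen(1e14))  # ~0, 0.5, ~1
print(n_sat(1e15))                            # (1e15 - 10**13.2) / 10**14.1
```

The paper's generalizations then add halo-scale freedom (satellite profiles, velocity bias, assembly bias) on top of these mean occupations.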

  18. Artificial Intelligence in planetary spectroscopy

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2017-10-01

    The field of exoplanetary spectroscopy is as fast moving as it is new. Analysing currently available observations of exoplanetary atmospheres often invokes large and correlated parameter spaces that can be difficult to map or constrain. This is true both for the data analysis of observations and for the theoretical modelling of their atmospheres. Issues of low signal-to-noise data and large, non-linear parameter spaces are nothing new and are commonly found in many fields of engineering and the physical sciences. Recent years have seen vast improvements in statistical data analysis and machine learning that have revolutionised fields as diverse as telecommunication, pattern recognition, medical physics and cosmology. In many aspects, the data mining and non-linearity challenges encountered in other data-intensive fields are directly transferable to the field of extrasolar planets. In this conference, I will discuss how deep neural networks can be designed to facilitate solving said issues both in exoplanet atmospheres and for atmospheres in our own solar system. I will present a deep belief network, RobERt (Robotic Exoplanet Recognition), able to learn to recognise exoplanetary spectra and provide artificial intelligence to state-of-the-art atmospheric retrieval algorithms. Furthermore, I will present a new deep convolutional network that is able to map planetary surface compositions using hyper-spectral imaging and demonstrate its use on Cassini-VIMS data of Saturn.

  19. AGN Accretion Physics in the Time Domain: Survey Cadences, Stochastic Analysis, and Physical Interpretations

    NASA Astrophysics Data System (ADS)

    Moreno, Jackeline; Vogeley, Michael S.; Richards, Gordon; O'Brien, John T.; Kasliwal, Vishal

    2018-01-01

    We present rigorous testing of survey cadences (K2, SDSS, CRTS, and Pan-STARRS) for quasar variability science using a magnetohydrodynamic synthetic lightcurve and the canonical Kepler lightcurve of Zw 229-15. We explain where the state of the art stands with regard to physical interpretations of stochastic models (CARMA) applied to AGN variability. Quasar variability offers a time domain approach to probing accretion physics at the SMBH scale. Evidence shows that the strongest amplitude changes in the brightness of AGN occur on long timescales, ranging from months to hundreds of days. These global behaviors can be constrained by survey data despite low sampling resolution. CARMA processes provide a flexible family of models used to interpolate between data points, predict future observations and describe behaviors in a lightcurve. This is accomplished by decomposing a signal into rise and decay timescales, frequencies for cyclic behavior and shock amplitudes. Characteristic timescales may point to length-scales over which a physical process operates, such as turbulent eddies, warping or hotspots due to local thermal instabilities. We present the distribution in CARMA parameter space of the SDSS Stripe 82 quasars that pass our cadence tests, and also explain how the Damped Harmonic Oscillator model, CARMA(2,1), reduces to the Damped Random Walk, CARMA(1,0), given the data in a specific region of the parameter space.
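
    The Damped Random Walk, CARMA(1,0), mentioned above can be sampled exactly on an arbitrary, irregular time grid, which is what makes cadence testing tractable. A minimal sketch with made-up timescale and variance:

```python
import numpy as np

def simulate_drw(times, tau=100.0, var=0.04, mean=0.0, seed=1):
    """Exact sampling of a damped random walk (CARMA(1,0), i.e. an
    Ornstein-Uhlenbeck process) on a possibly irregular time grid."""
    rng = np.random.default_rng(seed)
    x = np.empty(len(times))
    x[0] = mean + rng.normal(0.0, np.sqrt(var))
    for i in range(1, len(times)):
        rho = np.exp(-(times[i] - times[i - 1]) / tau)   # decorrelation factor
        x[i] = mean + rho * (x[i - 1] - mean) \
             + rng.normal(0.0, np.sqrt(var * (1.0 - rho ** 2)))
    return x

# survey-like irregular cadence over a long baseline (arbitrary units)
t = np.sort(np.random.default_rng(2).uniform(0, 3000, 500))
lc = simulate_drw(t)
print(lc.std())  # close to sqrt(var) = 0.2 for a baseline >> tau
```

Because the conditional update is exact, the same recursion underlies the likelihood used to fit DRW parameters to gappy survey lightcurves.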

  20. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
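
    The baseline two-stage pseudo-least-squares idea that the authors improve upon can be sketched for a toy decay model: a local polynomial regression supplies smoothed states and derivatives, then the ODE is fit by least squares (the constrained version additionally feeds the ODE into the local regression; all parameters below are illustrative):

```python
import numpy as np

# Toy model: x'(t) = -theta * x(t) with theta = 0.5
rng = np.random.default_rng(3)
theta_true = 0.5
t = np.linspace(0, 4, 200)
x_obs = np.exp(-theta_true * t) + 0.01 * rng.standard_normal(t.size)

# Stage 1: local quadratic regression gives smoothed x and x' at each point
def local_poly(t, y, t0, h=0.4, deg=2):
    k = np.exp(-0.5 * ((t - t0) / h) ** 2)           # Gaussian kernel weights
    coef = np.polyfit(t - t0, y, deg, w=np.sqrt(k))  # weighted LS polynomial
    return coef[-1], coef[-2]                        # value, derivative at t0

fits = np.array([local_poly(t, x_obs, t0) for t0 in t])
x_hat, dx_hat = fits[:, 0], fits[:, 1]

# Stage 2: least squares for theta in dx = -theta * x
theta_hat = -np.sum(dx_hat * x_hat) / np.sum(x_hat ** 2)
print(theta_hat)  # close to 0.5
```

Note `np.polyfit` squares its weights internally, hence the `sqrt` on the kernel; the derivative estimate in stage 1 is where the equation constraints of the paper's method would enter.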

  1. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093

  2. Reconciling a geophysical model to data using a Markov chain Monte Carlo algorithm: An application to the Yellow Sea-Korean Peninsula region

    NASA Astrophysics Data System (ADS)

    Pasyanos, Michael E.; Franz, Gregory A.; Ramirez, Abelardo L.

    2006-03-01

    In an effort to build seismic models that are the most consistent with multiple data sets we have applied a new probabilistic inverse technique. This method uses a Markov chain Monte Carlo (MCMC) algorithm to sample models from a prior distribution and test them against multiple data types to generate a posterior distribution. While computationally expensive, this approach has several advantages over deterministic models, notably the seamless reconciliation of different data types that constrain the model, the proper handling of both data and model uncertainties, and the ability to easily incorporate a variety of prior information, all in a straightforward, natural fashion. A real advantage of the technique is that it provides a more complete picture of the solution space. By mapping out the posterior probability density function, we can avoid simplistic assumptions about the model space and allow alternative solutions to be identified, compared, and ranked. Here we use this method to determine the crust and upper mantle structure of the Yellow Sea and Korean Peninsula region. The model is parameterized as a series of seven layers in a regular latitude-longitude grid, each of which is characterized by thickness and seismic parameters (Vp, Vs, and density). We use surface wave dispersion and body wave traveltime data to drive the model. We find that when properly tuned (i.e., the Markov chains have had adequate time to fully sample the model space and the inversion has converged), the technique behaves as expected. The posterior model reflects the prior information at the edge of the model where there is little or no data to constrain adjustments, but the range of acceptable models is significantly reduced in data-rich regions, producing values of sediment thickness, crustal thickness, and upper mantle velocities consistent with expectations based on knowledge of the regional tectonic setting.
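
    The MCMC machinery can be illustrated with a one-parameter toy version of the multi-data-type reconciliation: a prior and two independent likelihood terms combine into one posterior that a random-walk Metropolis sampler explores (all numbers below are hypothetical):

```python
import numpy as np

def metropolis(logpost, x0, step, n, seed=0):
    """Random-walk Metropolis sampler: draws from the density exp(logpost)."""
    rng = np.random.default_rng(seed)
    chain = [x0]
    lp = logpost(x0)
    for _ in range(n - 1):
        prop = chain[-1] + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept
            chain.append(prop); lp = lp_prop
        else:                                      # reject: repeat last state
            chain.append(chain[-1])
    return np.array(chain)

# toy reconciliation: two data types constraining one velocity parameter
def logpost(v):
    prior = -0.5 * ((v - 4.0) / 1.0) ** 2   # prior: v ~ N(4, 1)
    surf  = -0.5 * ((v - 4.6) / 0.2) ** 2   # surface-wave-like constraint
    body  = -0.5 * ((v - 4.4) / 0.3) ** 2   # body-wave-like constraint
    return prior + surf + body

chain = metropolis(logpost, x0=4.0, step=0.3, n=20000)
print(chain[2000:].mean())  # posterior mean, between the two data constraints
```

As in the paper, the posterior reflects the prior where data are weak and tightens wherever any data type is informative; here the burn-in (first 2000 samples) is discarded before summarizing.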

  3. Which products are available for subsetting?

    Atmospheric Science Data Center

    2014-12-08

    ... users to create smaller files (subsets) of the original data by selecting desired parameters, parameter criterion, or latitude and ... fluxes, where the net flux is constrained to the global heat storage in netCDF format. Single Scanner Footprint TOA/Surface Fluxes ...

  4. Uncertainty assessment and implications for data acquisition in support of integrated hydrologic models

    NASA Astrophysics Data System (ADS)

    Brunner, Philip; Doherty, J.; Simmons, Craig T.

    2012-07-01

    The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is to be expected therefore, that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are (1) Linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types. Its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels, measurements of vadose ET or soil moisture poorly constrain regional groundwater system forcing functions.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seljak, Uroš, E-mail: useljak@berkeley.edu

    On large scales a nonlinear transformation of the matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale dependent bias on large scales. We derive analytic expressions for the large scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale dependent bias from primordial nongaussianity, for a general nonlinear transformation. These biases can be expressed entirely in terms of the one point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large scale bias different from unity, and the primordial nongaussianity bias, as a consequence of converting higher order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large scale clustering limit. We apply the results to the ionizing equilibrium model of the Lyman-α forest, in which the Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. The velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict a velocity bias of -0.1, compared to the observed value of -0.13±0.03. The bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations. Matching to the observed values, we predict the ratio of the primordial nongaussianity bias to the bias to have the opposite sign and lower magnitude than the corresponding values for highly biased galaxies, but this depends on the model parameters and can also vanish or change sign.
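
    The key statement - that the large-scale bias of a nonlinear transform is an average over the one-point PDF - can be checked numerically for a Gaussian field with a toy transform g(δ) = exp(δ), where the (additive-convention) bias is b = ⟨g′(δ)⟩; the paper's flux bias uses a different normalization, so this is a sketch of the mechanism only:

```python
import numpy as np

# Two Gaussian fields with small large-scale correlation xi; the 2-point
# function of the transformed fields should be b^2 * xi with b = <g'(delta)>.
rng = np.random.default_rng(4)
sigma, xi, n = 0.5, 0.01, 2_000_000
d1 = sigma * rng.standard_normal(n)
d2 = (xi / sigma**2) * d1 \
   + np.sqrt(sigma**2 - xi**2 / sigma**2) * rng.standard_normal(n)

g = np.exp                                  # toy nonlinear transformation
b_pdf = np.mean(np.exp(d1))                 # <g'> = <exp(delta)> for g = exp
b2_meas = np.cov(g(d1), g(d2))[0, 1] / xi   # measured 2-point b^2
print(b_pdf**2, b2_meas)                    # agree in the large-scale limit
```

The agreement is the PDF-average statement in miniature: higher-order correlations of δ have been converted into a 2-point amplitude of its transform.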

  6. Experimental design approach to the process parameter optimization for laser welding of martensitic stainless steels in a constrained overlap configuration

    NASA Astrophysics Data System (ADS)

    Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.

    2011-02-01

    This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models was developed relating the welding parameters to each of the weld characteristics. These were validated both statistically and experimentally. The quality criteria set for the weld to determine the optimal parameters were the minimization of weld width and the maximization of weld penetration depth, resistance length and shearing force. Laser power and welding speed in the range 855-930 W and 4.50-4.65 m/min, respectively, with a fiber diameter of 300 μm were identified as the optimal set of process parameters. However, the laser power and welding speed can be reduced to 800-840 W and increased to 4.75-5.37 m/min, respectively, to obtain stronger and better welds.

  7. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.

  8. Rapid Slewing of Flexible Space Structures

    DTIC Science & Technology

    2015-09-01

    (Excerpts) … axis gimbal with elastic joints. The performance of the system can be enhanced by designing antenna maneuvers in which the flexible effects are properly constrained … the effects of the nonlinearities so the vibrational motion can be constrained for a time-optimal slew. It is shown that by constructing an …

  9. A Multiple Group Measurement Model of Children's Reports of Parental Socioeconomic Status. Discussion Papers No. 531-78.

    ERIC Educational Resources Information Center

    Mare, Robert D.; Mason, William M.

    An important class of applications of measurement error or constrained factor analytic models consists of comparing models for several populations. In such cases, it is appropriate to make explicit statistical tests of model similarity across groups and to constrain some parameters of the models to be equal across groups using a priori substantive…

  10. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.

  11. Constrained orbital intercept-evasion

    NASA Astrophysics Data System (ADS)

    Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh

    2014-06-01

    An effective characterization of intercept-evasion confrontations in various space environments, and the derivation of corresponding solutions under a variety of real-world constraints, are daunting theoretical and practical challenges. Current and future space-based platforms have to operate simultaneously as components of satellite formations and/or systems and, at the same time, be capable of evading potential collisions with other maneuver-constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic guaranteed capture-evasion strategies. The space intercept-evasion approach is based on the Liapunov methodology that has been successfully implemented in a number of air- and ground-based multi-player multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient, orbital-propagator-independent methods previously developed for Space Situational Awareness (SSA). This game-theoretic yet robust and practical approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and the Simplified General Perturbation-4 (SGP-4) propagator.

  12. On the BV formalism of open superstring field theory in the large Hilbert space

    NASA Astrophysics Data System (ADS)

    Matsunaga, Hiroaki; Nomura, Mitsuru

    2018-05-01

    We construct several BV master actions for open superstring field theory in the large Hilbert space. First, we show that a naive use of the conventional BV approach breaks down at the third order of the antifield number expansion, although it enables us to define a simple "string antibracket" taking the Darboux form as spacetime antibrackets. This fact implies that in the large Hilbert space, "string fields-antifields" should be reassembled to obtain master actions in a simple manner. We determine the assembly of the string anti-fields on the basis of Berkovits' constrained BV approach, and give solutions to the master equation defined by Dirac antibrackets on the constrained string field-antifield space. It is expected that partial gauge-fixing enables us to relate superstring field theories based on the large and small Hilbert spaces directly: reassembling string fields-antifields is rather natural from this point of view. Finally, inspired by these results, we revisit the conventional BV approach and construct a BV master action based on the minimal set of string fields-antifields.

  13. Lunar Heat Flux Measurements Enabled by a Microwave Radiometer Aboard the Deep Space Gateway

    NASA Astrophysics Data System (ADS)

    Siegler, M.; Ruf, C.; Putzig, N.; Morgan, G.; Hayne, P.; Paige, D.; Nagihara, S.; Weber, R.

    2018-02-01

    We would like to present a concept to use the Deep Space Gateway as a platform for constraining the geothermal heat production, surface and near-surface rocks, and dielectric properties of the Moon from orbit with passive microwave radiometry.

  14. Strategic considerations for support of humans in space and Moon/Mars exploration missions. Life sciences research and technology programs, volume 2

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Summary charts of the following topics are presented: the Percentage of Critical Questions in Constrained and Robust Programs; the Executive Committee and AMAC Disposition of Critical Questions for Constrained and Robust Programs; and the Requirements for Ground-based Research and Flight Platforms for Constrained and Robust Programs. Data Tables are also presented and cover the following: critical questions from all Life Sciences Division Discipline Science Plans; critical questions listed by category and criticality; all critical questions which require ground-based research; critical questions that would utilize spacelabs listed by category and criticality; critical questions that would utilize Space Station Freedom (SSF) listed by category and criticality; critical questions that would utilize the SSF Centrifuge; facility listed by category and criticality; critical questions that would utilize a Moon base listed by category and criticality; critical questions that would utilize robotic missions listed by category and criticality; critical questions that would utilize free flyers listed by category and criticality; and critical questions by deliverables.

  15. Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Lee, Charles H.

    2012-01-01

    We developed a framework and the mathematical formulation for optimizing communication networks using mixed integer programming. The design yields a system with a much smaller search space than the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer program into the more manageable problem of searching in a continuous space. We proposed to solve the constrained optimization problem in two stages: first using the heuristic Particle Swarm Optimization algorithm to get a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so problems with larger numbers of constraints and larger networks can be easily adapted and solved.
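
    The idea of a penalty that turns integer scheduling decisions into a continuous search can be sketched as follows; the penalty x(1-x), which vanishes only at x = 0 or x = 1, and all problem numbers are hypothetical stand-ins, not the paper's actual formulation:

```python
import numpy as np

def objective(x, mu):
    """Continuous relaxation of a tiny 0/1 link-scheduling problem."""
    value = -(3.0 * x[0] + 2.0 * x[1])             # reward for scheduled links
    capacity = max(0.0, x[0] + x[1] - 1.0) ** 2    # at most one link at a time
    integrality = np.sum(x * (1.0 - x))            # zero only at 0/1 decisions
    return value + 50.0 * capacity + mu * integrality

# crude grid search standing in for the PSO + SQP two-stage solver
grid = np.linspace(0, 1, 101)
best = min(((objective(np.array([a, b]), mu=10.0), a, b)
            for a in grid for b in grid))
print(best[1], best[2])  # picks the single more valuable link: x = (1, 0)
```

With the penalty weight large enough, the continuous optimum coincides with the integer optimum, which is what makes gradient-based refinement (the SQP stage) applicable.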

  16. Wavefield reconstruction inversion with a multiplicative cost function

    NASA Astrophysics Data System (ADS)

    da Silva, Nuno V.; Yao, Gang

    2018-01-01

    We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem, minimising the data misfit while penalising with a wave-equation constraining term. The trade-off between the two terms is set by a scaling factor that weighs the contributions of the data-misfit term and the constraining term in the objective function. If this parameter is too large, the wave equation is penalised so heavily that it imposes a hard constraint on the inversion. If it is too small, the solution is poorly constrained, as the inversion essentially penalises the data misfit without taking into account the physics that explains the data. This paper introduces a new approach to the formulation of WRI, recasting it as a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function even when the trade-off parameter in the latter is appropriately scaled and adapted throughout the iterations, and when the data are contaminated with Gaussian random noise. This work thus contributes a framework for a more automated application of WRI.
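
    The contrast between the additive and multiplicative formulations can be seen in one dimension with toy quadratic terms (illustrative only, not the WRI operators):

```python
import numpy as np

def j_data(m):       # toy data-misfit term
    return 1.0 + (m - 2.0) ** 2

def j_constr(m):     # toy wave-equation constraint term
    return 1.0 + 4.0 * (m - 3.0) ** 2

m = np.linspace(0, 5, 5001)
# additive cost: the answer swings with the hand-tuned trade-off lambda
m_add_small = m[np.argmin(j_data(m) + 0.01 * j_constr(m))]   # ~ data only
m_add_big   = m[np.argmin(j_data(m) + 100.0 * j_constr(m))]  # ~ constraint only
# multiplicative cost: no trade-off parameter, a balanced compromise
m_mult = m[np.argmin(j_data(m) * j_constr(m))]
print(m_add_small, m_mult, m_add_big)
```

The multiplicative minimizer lands between the two additive extremes without any tuning, which is the scale-freeness the paper exploits.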

  17. Computer-Aided Discovery Tools for Volcano Deformation Studies with InSAR and GPS

    NASA Astrophysics Data System (ADS)

    Pankratius, V.; Pilewskie, J.; Rude, C. M.; Li, J. D.; Gowanlock, M.; Bechor, N.; Herring, T.; Wauthier, C.

    2016-12-01

    We present a Computer-Aided Discovery approach that facilitates the cloud-scalable fusion of different data sources, such as GPS time series and Interferometric Synthetic Aperture Radar (InSAR), for the purpose of identifying the expansion centers and deformation styles of volcanoes. The tools currently developed at MIT allow the definition of alternatives for data processing pipelines that use various analysis algorithms. The Computer-Aided Discovery system automatically generates algorithmic and parameter variants to help researchers explore multidimensional data processing search spaces efficiently. We present first application examples of this technique using GPS data on volcanoes on the Aleutian Islands and work in progress on combined GPS and InSAR data in Hawaii. In the model search context, we also illustrate work in progress combining time series Principal Component Analysis with InSAR augmentation to constrain the space of possible model explanations on current empirical data sets and achieve a better identification of deformation patterns. This work is supported by NASA AIST-NNX15AG84G and NSF ACI-1442997 (PI: V. Pankratius).

  18. Signal decomposition for surrogate modeling of a constrained ultrasonic design space

    NASA Astrophysics Data System (ADS)

    Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.

    2018-04-01

    The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model using a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and to allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
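
    The idea of representing a high-dimensional scan by a few chirplet parameters can be sketched as follows. This is an illustrative Gaussian chirplet atom and a single greedy matching-pursuit step, not the paper's exact parameterization; all names and the candidate-dictionary approach are assumptions for the sketch.

```python
import numpy as np

def chirplet(t, tc, dur, f0, chirp_rate, phase=0.0):
    """Gaussian chirplet atom. The tuple (tc, dur, f0, chirp_rate, phase)
    is the low-dimensional parameter space a surrogate can interpolate over."""
    tau = t - tc
    envelope = np.exp(-0.5 * (tau / dur) ** 2)
    return envelope * np.cos(2 * np.pi * (f0 * tau + 0.5 * chirp_rate * tau ** 2) + phase)

def best_atom(signal, t, candidate_params):
    """One greedy step: pick the candidate atom with the largest normalized
    correlation with the signal (the core of a matching-pursuit decomposition)."""
    atoms = [chirplet(t, *p) for p in candidate_params]
    scores = [abs(np.dot(signal, a)) / (np.linalg.norm(a) + 1e-30) for a in atoms]
    return candidate_params[int(np.argmax(scores))]
```

    A full decomposition would iterate this step, subtracting each selected atom's projection from the residual, so that the B-scan is summarized by a short list of parameter tuples.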

  19. On the nullspace of TLS multi-station adjustment

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen

    2018-07-01

    In the article we present an analytic aspect of TLS multi-station least-squares adjustment, with the main focus on the datum problem. In contrast to previously published research, the datum problem is theoretically analyzed and solved, with the solution based on the derivation of the nullspace of the mathematical model. The importance of the datum-problem solution lies in a complete description of TLS multi-station adjustment solutions as the set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of minimally constrained and inner-constrained TLS multi-station least-squares adjustment solutions.

  20. Towards adjoint-based inversion of time-dependent mantle convection with nonlinear viscosity

    NASA Astrophysics Data System (ADS)

    Li, Dunzhu; Gurnis, Michael; Stadler, Georg

    2017-04-01

    We develop and study an adjoint-based inversion method for the simultaneous recovery of initial temperature conditions and viscosity parameters in time-dependent mantle convection from the current mantle temperature and historic plate motion. Based on a realistic rheological model with temperature-dependent and strain-rate-dependent viscosity, we formulate the inversion as a PDE-constrained optimization problem. The objective functional includes the misfit of surface velocity (plate motion) history, the misfit of the current mantle temperature, and a regularization for the uncertain initial condition. The gradient of this functional with respect to the initial temperature and the uncertain viscosity parameters is computed by solving the adjoint of the mantle convection equations. This gradient is used in a pre-conditioned quasi-Newton minimization algorithm. We study the prospects and limitations of the inversion, as well as the computational performance of the method using two synthetic problems, a sinking cylinder and a realistic subduction model. The subduction model is characterized by the migration of a ridge toward a trench whereby both plate motions and subduction evolve. The results demonstrate: (1) for known viscosity parameters, the initial temperature can be well recovered, as in previous initial condition-only inversions where the effective viscosity was given; (2) for known initial temperature, viscosity parameters can be recovered accurately, despite the existence of trade-offs due to ill-conditioning; (3) for the joint inversion of initial condition and viscosity parameters, initial condition and effective viscosity can be reasonably recovered, but the high dimension of the parameter space and the resulting ill-posedness may limit recovery of viscosity parameters.
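
    The structure of the inversion, an adjoint-computed gradient of a regularized misfit fed to a quasi-Newton method, can be illustrated on a toy problem. Here a random linear operator G stands in for the (nonlinear) convection model, so the "adjoint" is simply G transposed; all names, sizes, and the regularization weight are assumptions for the sketch, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
G = rng.normal(size=(40, 10))   # toy linear forward operator (illustrative)
m_true = rng.normal(size=10)    # "true" parameters (e.g. initial temperature)
d = G @ m_true                  # synthetic observations (e.g. plate motions)
alpha = 1e-3                    # Tikhonov regularization weight

def objective_and_grad(m):
    """Misfit + regularization, with the gradient obtained by applying
    the adjoint operator (here G.T) to the residual."""
    r = G @ m - d
    J = 0.5 * r @ r + 0.5 * alpha * m @ m
    grad = G.T @ r + alpha * m
    return J, grad

# Quasi-Newton minimization using the adjoint-based gradient.
res = minimize(objective_and_grad, np.zeros(10), jac=True, method="L-BFGS-B")
```

    In the actual application the gradient comes from solving the adjoint of the time-dependent convection equations, which costs roughly one extra forward-like solve per iteration regardless of the number of parameters.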

  1. Characterizing the Evolution of Circumstellar Systems with the Hubble Space Telescope and the Gemini Planet Imager

    NASA Astrophysics Data System (ADS)

    Wolff, Schuyler G.

    2018-01-01

    The study of circumstellar disks at a variety of evolutionary stages is essential to understanding the physical processes leading to planet formation. The recent development of high-contrast instruments designed to directly image the structures surrounding nearby stars, such as the Gemini Planet Imager (GPI), together with coronagraphic data from the Hubble Space Telescope (HST), has made detailed studies of circumstellar systems possible. In my thesis work I detail the observation and characterization of three systems. GPI polarization data for the transition disk PDS 66 show a double ring and gap structure with a temporally variable azimuthal asymmetry. This evolved morphology could indicate shadowing from some feature in the innermost regions of the disk, a gap-clearing planet, or a localized change in the dust properties of the disk. Millimeter continuum data of the DH Tau system place limits on the dust mass that is contributing to the strong accretion signature on the wide-separation planetary-mass companion, DH Tau b. The lower than expected dust mass constrains the possible formation mechanism, with core accretion followed by dynamical scattering being the most likely. Finally, I present HST scattered-light observations of the flared, edge-on protoplanetary disk ESO Hα 569. I combine these data with a spectral energy distribution to model the key structural parameters such as the geometry (disk outer radius, vertical scale height, radial flaring profile), total mass, and dust grain properties of the disk using the radiative transfer code MCFOST. To conduct this work, I developed a new tool set that optimizes the fitting of disk parameters using the MCMC code emcee to efficiently explore the high-dimensional parameter space.
This approach allows us to self-consistently and simultaneously fit a wide variety of observables in order to place constraints on the physical properties of a given disk, while also rigorously assessing the uncertainties in those derived properties.
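
    The sampling strategy behind this kind of fit can be sketched with a Gaussian log-posterior and a minimal random-walk MCMC. The affine-invariant ensemble sampler in emcee plays this role far more efficiently in high dimensions; the function names and the toy `model` mapping are assumptions for the sketch, not the thesis code.

```python
import numpy as np

def log_posterior(theta, data, model, sigma):
    """Gaussian log-likelihood with implicit flat priors; `model` maps a
    parameter vector (e.g. disk geometry) to a predicted observable."""
    pred = model(theta)
    return -0.5 * np.sum(((data - pred) / sigma) ** 2)

def metropolis(log_post, theta0, nsteps, step, rng):
    """Minimal random-walk Metropolis sampler: propose, then accept with
    probability exp(delta log-posterior)."""
    chain = [np.asarray(theta0, dtype=float)]
    lp = log_post(chain[0])
    for _ in range(nsteps):
        proposal = chain[-1] + step * rng.normal(size=len(chain[-1]))
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:
            chain.append(proposal)
            lp = lp_prop
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)
```

    After discarding a burn-in segment, the spread of the chain gives the parameter uncertainties that the abstract refers to assessing rigorously.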

  2. The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: Cosmological implications of the configuration-space clustering wedges

    NASA Astrophysics Data System (ADS)

    Sánchez, Ariel G.; Scoccimarro, Román; Crocce, Martín; Grieb, Jan Niklas; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio; Lippich, Martha; Beutler, Florian; Brownstein, Joel R.; Chuang, Chia-Hsun; Eisenstein, Daniel J.; Kitaura, Francisco-Shu; Olmstead, Matthew D.; Percival, Will J.; Prada, Francisco; Rodríguez-Torres, Sergio; Ross, Ashley J.; Samushia, Lado; Seo, Hee-Jong; Tinker, Jeremy; Tojeiro, Rita; Vargas-Magaña, Mariana; Wang, Yuting; Zhao, Gong-Bo

    2017-01-01

    We explore the cosmological implications of anisotropic clustering measurements in configuration space of the final galaxy samples from Data Release 12 of the Sloan Digital Sky Survey III Baryon Oscillation Spectroscopic Survey. We implement a new detailed modelling of the effects of non-linearities, bias and redshift-space distortions that can be used to extract unbiased cosmological information from our measurements for scales s ≳ 20 h-1 Mpc. We combined the information from Baryon Oscillation Spectroscopic Survey (BOSS) with the latest cosmic microwave background (CMB) observations and Type Ia supernovae samples and found no significant evidence for a deviation from the Λ cold dark matter (ΛCDM) cosmological model. In particular, these data sets can constrain the dark energy equation-of-state parameter to wDE = -0.996 ± 0.042 when it is assumed to be time independent, the curvature of the Universe to Ωk = -0.0007 ± 0.0030 and the sum of the neutrino masses to ∑mν < 0.25 eV at 95 per cent confidence levels. We explore the constraints on the growth rate of cosmic structures assuming f(z) = Ωm(z)γ and obtain γ = 0.609 ± 0.079, in good agreement with the predictions of general relativity of γ = 0.55. We compress the information of our clustering measurements into constraints on the parameter combinations DV(z)/rd, FAP(z) and fσ8(z) at zeff = 0.38, 0.51 and 0.61 with their respective covariance matrices and find good agreement with the predictions for these parameters obtained from the best-fitting ΛCDM model to the CMB data from the Planck satellite. This paper is part of a set that analyses the final galaxy clustering data set from BOSS. The measurements and likelihoods presented here are combined with others by Alam et al. to produce the final cosmological constraints from BOSS.
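
    The growth-rate parameterization f(z) = Ωm(z)^γ used above is simple to evaluate directly. The sketch below assumes a flat ΛCDM background with an illustrative fiducial Ωm,0 = 0.31 (the exact fiducial value is not stated in the abstract); γ = 0.55 is the general-relativity prediction quoted there.

```python
import numpy as np

def omega_m(z, om0=0.31):
    """Matter density parameter at redshift z in flat LCDM:
    Omega_m(z) = om0 (1+z)^3 / [om0 (1+z)^3 + (1 - om0)]."""
    num = om0 * (1.0 + z) ** 3
    return num / (num + (1.0 - om0))

def growth_rate(z, gamma=0.55, om0=0.31):
    """Growth rate f(z) = Omega_m(z)^gamma; gamma = 0.55 reproduces GR,
    while the fit quoted in the abstract gives gamma = 0.609 +/- 0.079."""
    return omega_m(z, om0) ** gamma
```

    Evaluating at the survey's effective redshifts (0.38, 0.51, 0.61) gives the f values that enter the fσ8(z) combinations.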

  3. The shape of galaxy dark matter halos in massive galaxy clusters: Insights from strong gravitational lensing

    NASA Astrophysics Data System (ADS)

    Jauzac, Mathilde; Harvey, David; Massey, Richard

    2018-04-01

    We assess how much unused strong lensing information is available in the deep Hubble Space Telescope imaging and VLT/MUSE spectroscopy of the Frontier Field clusters. As a pilot study, we analyse galaxy cluster MACS J0416.1-2403 (z=0.397, M(R < 200 kpc)=1.6×1014M⊙), which has 141 multiple images with spectroscopic redshifts. We find that many additional parameters in a cluster mass model can be constrained, and that adding even small amounts of extra freedom to a model can dramatically improve its figures of merit. We use this information to constrain the distribution of dark matter around cluster member galaxies, simultaneously with the cluster's large-scale mass distribution. We find tentative evidence that some galaxies' dark matter has surprisingly similar ellipticity to their stars (unlike in the field, where it is more spherical), but that its orientation is often misaligned. When non-coincident dark matter and stellar halos are allowed, the model improves by 35%. This technique may provide a new way to investigate the processes and timescales on which dark matter is stripped from galaxies as they fall into a massive cluster. Our preliminary conclusions will be made more robust by analysing the remaining five Frontier Field clusters.

  4. Titan's interior constrained from its obliquity and tidal Love number

    NASA Astrophysics Data System (ADS)

    Baland, Rose-Marie; Coyette, Alexis; Yseboodt, Marie; Beuthe, Mikael; Van Hoolst, Tim

    2016-04-01

    In the last few years, the Cassini-Huygens mission to the Saturn system has measured the shape, the obliquity, the static gravity field, and the tidally induced gravity field of Titan. The large values of the obliquity and of the k2 Love number both point to the existence of a global internal ocean below the icy crust. In order to constrain interior models of Titan, we combine the above-mentioned data as follows: (1) we build four-layer density profiles consistent with Titan's bulk properties; (2) we determine the corresponding internal flattening compatible with the observed gravity and topography; (3) we compute the obliquity and tidal Love number for each interior model; (4) we compare these predictions with the observations. Previously, we found that Titan is more differentiated than expected (assuming hydrostatic equilibrium), and that its ocean is dense and less than 100 km thick. Here, we revisit these conclusions using a more complete Cassini state model, including: (1) gravitational and pressure torques due to internal tidal deformations; (2) atmosphere/lakes-surface exchange of angular momentum; (3) inertial torque due to Poincaré flow. We also adopt faster methods to evaluate Love numbers (i.e. the membrane approach) in order to explore a larger parameter space.

  5. The shape of galaxy dark matter haloes in massive galaxy clusters: insights from strong gravitational lensing

    NASA Astrophysics Data System (ADS)

    Jauzac, Mathilde; Harvey, David; Massey, Richard

    2018-07-01

    We assess how much unused strong lensing information is available in the deep Hubble Space Telescope imaging and Very Large Telescope/Multi Unit Spectroscopic Explorer spectroscopy of the Frontier Field clusters. As a pilot study, we analyse galaxy cluster MACS J0416.1-2403 (z = 0.397, M(R < 200 kpc) = 1.6 × 1014 M⊙), which has 141 multiple images with spectroscopic redshifts. We find that many additional parameters in a cluster mass model can be constrained, and that adding even small amounts of extra freedom to a model can dramatically improve its figures of merit. We use this information to constrain the distribution of dark matter around cluster member galaxies, simultaneously with the cluster's large-scale mass distribution. We find tentative evidence that some galaxies' dark matter has surprisingly similar ellipticity to their stars (unlike in the field, where it is more spherical), but that its orientation is often misaligned. When non-coincident dark matter and stellar haloes are allowed, the model improves by 35 per cent. This technique may provide a new way to investigate the processes and time-scales on which dark matter is stripped from galaxies as they fall into a massive cluster. Our preliminary conclusions will be made more robust by analysing the remaining five Frontier Field clusters.

  6. HIFI observations of water in the atmosphere of comet C/2008 Q3 (Garradd)

    NASA Astrophysics Data System (ADS)

    Hartogh, P.; Crovisier, J.; de Val-Borro, M.; Bockelée-Morvan, D.; Biver, N.; Lis, D. C.; Moreno, R.; Jarchow, C.; Rengel, M.; Emprechtinger, M.; Szutowicz, S.; Banaszkiewicz, M.; Bensch, F.; Blecka, M. I.; Cavalié, T.; Encrenaz, T.; Jehin, E.; Küppers, M.; Lara, L.-M.; Lellouch, E.; Swinyard, B. M.; Vandenbussche, B.; Bergin, E. A.; Blake, G. A.; Blommaert, J. A. D. L.; Cernicharo, J.; Decin, L.; Encrenaz, P.; de Graauw, T.; Hutsemekers, D.; Kidger, M.; Manfroid, J.; Medvedev, A. S.; Naylor, D. A.; Schieder, R.; Thomas, N.; Waelkens, C.; Roelfsema, P. R.; Dieleman, P.; Güsten, R.; Klein, T.; Kasemann, C.; Caris, M.; Olberg, M.; Benz, A. O.

    2010-07-01

    High-resolution far-infrared and sub-millimetre spectroscopy of water lines is an important tool to understand the physical and chemical properties of cometary atmospheres. We present observations of several rotational ortho- and para-water transitions in comet C/2008 Q3 (Garradd) performed with HIFI on Herschel. These observations have provided the first detection of the 212-101 (1669 GHz) ortho and 111-000 (1113 GHz) para transitions of water in a cometary spectrum. In addition, the ground-state transition 110-101 at 557 GHz is detected and mapped. By detecting several water lines quasi-simultaneously and mapping their emission we can constrain the excitation parameters in the coma. Synthetic line profiles are computed using excitation models which include excitation by collisions, solar infrared radiation, and radiation trapping. We obtain the gas kinetic temperature, constrain the electron density profile, and estimate the coma expansion velocity by analyzing the map and line shapes. We derive water production rates of 1.7-2.8 × 1028 s-1 over the range rh = 1.83-1.85 AU. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. Figure 5 is only available in electronic form at http://www.aanda.org

  7. Optical identification of two nearby Isolated Neutron Stars through proper motion measurement.

    NASA Astrophysics Data System (ADS)

    Zane, Silvia

    2004-07-01

    The aim of this proposal is to perform high-resolution imaging of the proposed optical counterparts of the two radio-silent isolated neutron stars RX J1308.6+2127 and RX J1605.3+3249 with the STIS/50CCD. Imaging both fields with the same instrumental configuration used in mid-2001 by Kaplan et al. (2002, 2003) will allow us to measure the objects' positions and to determine their proper motions over a time base of nearly four years. The measurement of proper motions at the level of at least a few tens of mas/yr, expected for relatively nearby neutron stars, would unambiguously secure the proposed optical identifications, which are not achievable otherwise. In addition, knowledge of the proper motion will provide useful indications of the space velocity and distance of these neutron stars, as well as of their radii. Constraining these parameters is of paramount importance to discriminate between the variety of emission mechanisms invoked to explain their observed thermal X-ray spectra and to probe the neutron star equation of state (EOS). The determination of the proper motion is a decisive step toward a dedicated follow-up program, again to be performed with the STIS/50CCD, aimed at measuring the objects' optical parallax and thus providing much firmer constraints on the star properties.

  8. Uncertainty Quantification and Regional Sensitivity Analysis of Snow-related Parameters in the Canadian LAnd Surface Scheme (CLASS)

    NASA Astrophysics Data System (ADS)

    Badawy, B.; Fletcher, C. G.

    2017-12-01

    The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable reduction of the plausible ranges of the parameter values, and hence of their uncertainty, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
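
    The emulator step, training a cheap statistical model on a few hundred expensive simulator runs and then predicting at many new parameter settings, can be sketched as below. For a self-contained example this uses kernel ridge regression rather than SVR (the two play the same surrogate role); the target function, sizes, and hyperparameters are assumptions for the sketch.

```python
import numpy as np

def rbf_kernel(X, Y, length_scale=1.0):
    """Squared-exponential kernel between two sets of parameter vectors."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

class KernelEmulator:
    """Kernel ridge regression surrogate: fit on expensive simulator
    output, then predict model output cheaply at new parameter values."""
    def __init__(self, length_scale=1.0, ridge=1e-6):
        self.length_scale = length_scale
        self.ridge = ridge

    def fit(self, X, y):
        self.X_train = X
        K = rbf_kernel(X, X, self.length_scale)
        self.alpha = np.linalg.solve(K + self.ridge * np.eye(len(X)), y)
        return self

    def predict(self, X_query):
        return rbf_kernel(X_query, self.X_train, self.length_scale) @ self.alpha
```

    With the emulator in hand, a skill score can be evaluated over a dense sweep of parameter settings at negligible cost, which is what makes constraining the plausible ranges feasible.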

  9. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. 
    The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.

  10. Constraining torsion with Gravity Probe B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao Yi; Guth, Alan H.; Cabi, Serkan

    2007-11-15

    It is well-entrenched folklore that all torsion gravity theories predict observationally negligible torsion in the solar system, since torsion (if it exists) couples only to the intrinsic spin of elementary particles, not to rotational angular momentum. We argue that this assumption has a logical loophole which can and should be tested experimentally, and consider nonstandard torsion theories in which torsion can be generated by macroscopic rotating objects. In the spirit of action=reaction, if a rotating mass like a planet can generate torsion, then a gyroscope would be expected to feel torsion. An experiment with a gyroscope (without nuclear spin) such as Gravity Probe B (GPB) can test theories where this is the case. Using symmetry arguments, we show that to lowest order, any torsion field around a uniformly rotating spherical mass is determined by seven dimensionless parameters. These parameters effectively generalize the parametrized post-Newtonian formalism and provide a concrete framework for further testing Einstein's general theory of relativity (GR). We construct a parametrized Lagrangian that includes both standard torsion-free GR and Hayashi-Shirafuji maximal torsion gravity as special cases. We demonstrate that classic solar system tests rule out the latter and constrain two observable parameters. We show that Gravity Probe B is an ideal experiment for further constraining nonstandard torsion theories, and work out the most general torsion-induced precession of its gyroscope in terms of our torsion parameters.

  11. Performance Analysis of BDS Medium-Long Baseline RTK Positioning Using an Empirical Troposphere Model.

    PubMed

    Shu, Bao; Liu, Hui; Xu, Longwei; Qian, Chuang; Gong, Xiaopeng; An, Xiangdong

    2018-04-14

    For GPS medium-long baseline real-time kinematic (RTK) positioning, the troposphere parameter is introduced along with coordinates, and the model is ill-conditioned due to its strong correlation with the height parameter. For BeiDou Navigation Satellite System (BDS), additional difficulties occur due to its special satellite constellation. In fact, relative zenith troposphere delay (RZTD) derived from high-precision empirical zenith troposphere models can be introduced. Thus, the model strength can be improved, which is also called the RZTD-constrained RTK model. In this contribution, we first analyze the factors affecting the precision of BDS medium-long baseline RTK; thereafter, 15 baselines ranging from 38 km to 167 km in different troposphere conditions are processed to assess the performance of RZTD-constrained RTK. Results show that the troposphere parameter is difficult to distinguish from the height component, even with long time filtering for BDS-only RTK. Due to the lack of variation in geometry for the BDS geostationary Earth orbit satellite, the long convergence time of ambiguity parameters may reduce the height precision of GPS/BDS-combined RTK in the initial period. When the RZTD-constrained model was used in BDS and GPS/BDS-combined situations compared with the traditional RTK, the standard deviation of the height component for the fixed solution was reduced by 52.4% and 34.0%, respectively.

  12. Performance Analysis of BDS Medium-Long Baseline RTK Positioning Using an Empirical Troposphere Model

    PubMed Central

    Liu, Hui; Xu, Longwei; Qian, Chuang; Gong, Xiaopeng; An, Xiangdong

    2018-01-01

    For GPS medium-long baseline real-time kinematic (RTK) positioning, the troposphere parameter is introduced along with coordinates, and the model is ill-conditioned due to its strong correlation with the height parameter. For BeiDou Navigation Satellite System (BDS), additional difficulties occur due to its special satellite constellation. In fact, relative zenith troposphere delay (RZTD) derived from high-precision empirical zenith troposphere models can be introduced. Thus, the model strength can be improved, which is also called the RZTD-constrained RTK model. In this contribution, we first analyze the factors affecting the precision of BDS medium-long baseline RTK; thereafter, 15 baselines ranging from 38 km to 167 km in different troposphere conditions are processed to assess the performance of RZTD-constrained RTK. Results show that the troposphere parameter is difficult to distinguish from the height component, even with long time filtering for BDS-only RTK. Due to the lack of variation in geometry for the BDS geostationary Earth orbit satellite, the long convergence time of ambiguity parameters may reduce the height precision of GPS/BDS-combined RTK in the initial period. When the RZTD-constrained model was used in BDS and GPS/BDS-combined situations compared with the traditional RTK, the standard deviation of the height component for the fixed solution was reduced by 52.4% and 34.0%, respectively. PMID:29661999

  13. Precise Masses in the WASP-47 Multi-Transiting Hot Jupiter System

    NASA Astrophysics Data System (ADS)

    Vanderburg, Andrew; Becker, Juliette; Buchhave, Lars A.; Mortier, Annelies; Latham, David W.; Charbonneau, David; Lopez-Morales, Mercedes; HARPS-N Collaboration

    2017-06-01

    We present precise radial velocity observations of WASP-47, a star known to host a hot Jupiter, a distant Jovian companion, and, uniquely, two additional transiting planets in short-period orbits: a super-Earth in a 19 hour orbit, and a Neptune in a 9 day orbit. We combine our observations, collected with the HARPS-N spectrograph, with previously published data to measure the most precise planet masses yet for this system. When combined with new stellar parameters (from analysis of the HARPS-N spectra) and a reanalysis of the transit photometry, our mass measurements yield strong constraints on the small planets’ compositions. Finally, we probabilistically constrain the orbital inclination of the outer Jovian planet through a dynamical analysis that requires the system to reproduce its observed parameters. This work was supported by the National Science Foundation Graduate Research Fellowship Program. HARPS-N was funded by the Swiss Space Office, the Harvard Origin of Life Initiative, the Scottish Universities Physics Alliance, the University of Geneva, the Smithsonian Astrophysical Observatory, the Italian National Astrophysical Institute, the University of St. Andrews, Queens University Belfast, and the University of Edinburgh.

  14. A Modern Take on the RV Classics: N-body Analysis of GJ 876 and 55 Cnc

    NASA Astrophysics Data System (ADS)

    Nelson, Benjamin E.; Ford, E. B.; Wright, J.

    2013-01-01

    Over the past two decades, radial velocity (RV) observations have uncovered a diverse population of exoplanet systems, in particular a subset of multi-planet systems that exhibit strong dynamical interactions. To extract the model parameters (and uncertainties) accurately from these observations, one requires self-consistent n-body integrations and must explore a high-dimensional (7 × number of planets) parameter space, both of which are computationally challenging. Utilizing the power of modern computing resources, we apply our Radial velocity Using N-body Differential Evolution Markov Chain Monte Carlo code (RUN DEMCMC) to two landmark systems from early exoplanet surveys: GJ 876 and 55 Cnc. For GJ 876, we analyze the Keck HIRES (Rivera et al. 2010) and HARPS (Correia et al. 2010) data and constrain the distribution of the Laplace argument. For 55 Cnc, we investigate the orbital architecture based on a cumulative 1086 RV observations from various sources and transit constraints from Winn et al. 2011. In both cases, we also test for long-term orbital stability.

  15. Primordial anisotropies in gauged hybrid inflation

    NASA Astrophysics Data System (ADS)

    Akbar Abolhasani, Ali; Emami, Razieh; Firouzjahi, Hassan

    2014-05-01

    We study primordial anisotropies generated in the model of gauged hybrid inflation in which the complex waterfall field is charged under a U(1) gauge field. Primordial anisotropies are generated either actively during inflation or from inhomogeneities modulating the surface of end of inflation during the waterfall transition. We present a consistent δN mechanism to calculate the anisotropic power spectrum and bispectrum. We show that the primordial anisotropies generated at the surface of end of inflation do not depend on the number of e-folds and therefore do not produce dangerously large anisotropies associated with the IR modes. Furthermore, one can find a region of parameter space in which the anisotropies generated at the surface of end of inflation cancel those generated during inflation, thereby relaxing the constraints on model parameters imposed by IR anisotropies. We also show that the gauge field fluctuations induce a red-tilted power spectrum, so the averaged power spectrum from the gauge field can change the total power spectrum from blue to red. Therefore, hybrid inflation, once gauged under a U(1) field, can be consistent with the cosmological observations.

  16. Using Ice and Dust Lines to Constrain the Surface Densities of Protoplanetary Disks

    NASA Astrophysics Data System (ADS)

    Powell, Diana; Murray-Clay, Ruth; Schlichting, Hilke

    2018-04-01

    The surface density of protoplanetary disks is a fundamental parameter that still remains largely unconstrained due to uncertainties in the dust-to-gas ratio and CO abundance. In this talk I will present a novel method for determining the surface density of protoplanetary disks through consideration of disk “dust lines,” which indicate the observed disk radial scale at different observational wavelengths. I will provide an initial proof of concept of our model through an application to the disk TW Hya where we are able to estimate the disk dust-to-gas ratio, CO abundance, and accretion rate in addition to the total disk surface density. We find that our derived surface density profile and dust-to-gas ratio are consistent with the lower limits found through measurements of HD gas. We further apply our model to a large parameter space of theoretical disks and find three observational diagnostics that may be used to test its validity. Using this method we derive disks that may be much more massive than previously thought, often approaching the limit of gravitational stability.

  17. Big-bang nucleosynthesis and leptogenesis in the CMSSM

    NASA Astrophysics Data System (ADS)

    Kubo, Munehiro; Sato, Joe; Shimomura, Takashi; Takanishi, Yasutaka; Yamanaka, Masato

    2018-06-01

    We have studied the constrained minimal supersymmetric standard model with three right-handed neutrinos, and investigated whether there is still a parameter region consistent with all experimental data/limits, such as the baryon asymmetry of the Universe, the dark matter abundance and the lithium primordial abundance. Using the Casas-Ibarra parametrization, we have found a very narrow parameter space of the complex orthogonal matrix elements where the lightest slepton can have a long lifetime, which is necessary for solving the lithium problem. We have studied three cases of the right-handed neutrino mass ratio: (i) M2 = 2×M1, (ii) M2 = 4×M1, (iii) M2 = 10×M1, while M3 = 40×M1 is fixed. We have obtained the mass range of the lightest right-handed neutrino, which lies between 10^9 and 10^11 GeV. The important result is that its upper limit is derived by solving the lithium problem and the lower limit comes from leptogenesis. Lepton flavor violating decays such as μ → eγ in our scenario are within the reach of MEG-II and Mu3e.

  18. Effective field theory of cosmic acceleration: Constraining dark energy with CMB data

    NASA Astrophysics Data System (ADS)

    Raveri, Marco; Hu, Bin; Frusciante, Noemi; Silvestri, Alessandra

    2014-08-01

    We introduce EFTCAMB/EFTCosmoMC as publicly available patches to the commonly used camb/CosmoMC codes. We briefly describe the structure of the codes, their applicability and main features. To illustrate the use of these patches, we obtain constraints on parametrized pure effective field theory and designer f(R) models, both on ΛCDM and wCDM background expansion histories, using data from Planck temperature and lensing potential spectra, WMAP low-ℓ polarization spectra (WP), and baryon acoustic oscillations (BAO). Upon inspecting the theoretical stability of the models on the given background, we find nontrivial parameter spaces that we translate into viability priors. We use different combinations of data sets to show their individual effects on cosmological and model parameters. Our data analysis results show that, depending on the adopted data sets, in the wCDM background case these viability priors could dominate the marginalized posterior distributions. Interestingly, with Planck +WP+BAO+lensing data, in f(R) gravity models, we get very strong constraints on the constant dark energy equation of state, w0∈(-1,-0.9997) (95% C.L.).

  19. Numerical simulation of the geodynamo reaches Earth's core dynamical regime

    NASA Astrophysics Data System (ADS)

    Aubert, J.; Gastine, T.; Fournier, A.

    2016-12-01

    Numerical simulations of the geodynamo have been successful at reproducing a number of static (field morphology) and kinematic (secular variation patterns, core surface flows and westward drift) features of Earth's magnetic field, making them a tool of choice for the analysis and retrieval of geophysical information on Earth's core. However, classical numerical models have been run in a parameter regime far from that of the real system, prompting the question of whether we do get "the right answers for the wrong reasons", i.e. whether the agreement between models and nature simply occurs by chance and without physical relevance in the dynamics. In this presentation, we show that classical models succeed in describing the geodynamo because their large-scale spatial structure is essentially invariant as one progresses along a well-chosen path in parameter space to Earth's core conditions. This path is constrained by the need to enforce the relevant force balance (MAC or Magneto-Archimedes-Coriolis) and preserve the ratio of the convective overturn and magnetic diffusion times. Numerical simulations performed along this path are shown to be spatially invariant at scales larger than that where the magnetic energy is ohmically dissipated. This property enables the definition of large-eddy simulations that show good agreement with direct numerical simulations in the range where both are feasible, and that can be computed at unprecedented values of the control parameters, such as an Ekman number E=10-8. Combining direct and large-eddy simulations, large-scale invariance is observed over half the logarithmic distance in parameter space between classical models and Earth. 
The conditions reached at this mid-point of the path are furthermore shown to be representative of the rapidly-rotating, asymptotic dynamical regime in which Earth's core resides, with a MAC force balance undisturbed by viscosity or inertia, the enforcement of a Taylor state and strong-field dynamo action. We conclude that numerical modelling has advanced to a stage where it is possible to use models correctly representing the statics, kinematics and now the dynamics of the geodynamo. This opens the way to a better analysis of the geomagnetic field in the time and space domains.

  20. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    Timely estimation of deviations from optimal performance in complex systems, and the ability to identify corrective measures in response to the estimated parameter deviations, has been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes and the operation of large-scale public works projects, and the volume of the published literature on this topic, clearly indicate the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.) to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space and creates a lower-dimensional feature space in which fault estimation results can be effectively presented to the operations personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts have focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. 
(2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the output of the SVM (i.e. the parameters of the beam lifetime model) are physically meaningful. (3) Numerical Efficiency of the Training - We investigated the numerical efficiency of the SVM training. More specifically, for the primal formulation of the training, we have developed a problem formulation that avoids the linear increase in the number of the constraints as a function of the number of data points. (4) Flexibility of Software Architecture - The software framework for the training of the support vector machines was designed to enable experimentation with different solvers. We experimented with two commonly used nonlinear solvers for our simulations. The primary application of interest for this project has been the sustained optimal operation of particle accelerators at the Stanford Linear Accelerator Center (SLAC). Particle storage rings are used for a variety of applications ranging from 'colliding beam' systems for high-energy physics research to highly collimated x-ray generators for synchrotron radiation science. Linear accelerators are also used for collider research such as International Linear Collider (ILC), as well as for free electron lasers, such as the Linear Coherent Light Source (LCLS) at SLAC. One common theme in the operation of storage rings and linear accelerators is the need to precisely control the particle beams over long periods of time with minimum beam loss and stable, yet challenging, beam parameters. We strongly believe that beyond applications in particle accelerators, the high fidelity and cost benefits of a combined model-based fault estimation/correction system will attract customers from a wide variety of commercial and scientific industries. Even though the acquisition of Pavilion Technologies, Inc. by Rockwell Automation Inc. 
in 2007 altered the small-business status of Pavilion, so that it no longer qualifies for Phase II funding, our findings in the course of the Phase I research have convinced us that further research will render a workable model-based fault estimation and correction system for particle accelerators and industrial plants feasible.

  1. Constraining the Absolute Orientation of eta Carinae's Binary Orbit: A 3-D Dynamical Model for the Broad [Fe III] Emission

    NASA Technical Reports Server (NTRS)

    Madura, T. I.; Gull, T. R.; Owocki, S. P.; Groh, J. H.; Okazaki, A. T.; Russell, C. M. P.

    2011-01-01

    We present a three-dimensional (3-D) dynamical model for the broad [Fe III] emission observed in Eta Carinae using the Hubble Space Telescope/Space Telescope Imaging Spectrograph (HST/STIS). This model is based on full 3-D Smoothed Particle Hydrodynamics (SPH) simulations of Eta Car's binary colliding winds. Radiative transfer codes are used to generate synthetic spectro-images of [Fe III] emission line structures at various observed orbital phases and STIS slit position angles (PAs). Through a parameter study that varies the orbital inclination i, the PA(theta) that the orbital plane projection of the line-of-sight makes with the apastron side of the semi-major axis, and the PA on the sky of the orbital axis, we are able, for the first time, to tightly constrain the absolute 3-D orientation of the binary orbit. To simultaneously reproduce the blue-shifted emission arcs observed at orbital phase 0.976, STIS slit PA = +38deg, and the temporal variations in emission seen at negative slit PAs, the binary needs to have an i approx. = 130deg to 145deg, Theta approx. = -15deg to +30deg, and an orbital axis projected on the sky at a PA approx. = 302deg to 327deg east of north. This represents a system with an orbital axis that is closely aligned with the inferred polar axis of the Homunculus nebula, in 3-D. The companion star, Eta(sub B), thus orbits clockwise on the sky and is on the observer's side of the system at apastron. This orientation has important implications for theories for the formation of the Homunculus and helps lay the groundwork for orbital modeling to determine the stellar masses.

  2. Constraints on the dark matter and dark energy interactions from weak lensing bispectrum tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Rui; Feng, Chang; Wang, Bin, E-mail: an_rui@sjtu.edu.cn, E-mail: chang.feng@uci.edu, E-mail: wang_b@sjtu.edu.cn

    We estimate uncertainties of cosmological parameters for phenomenological interacting dark energy models using the weak lensing convergence power spectrum and bispectrum. We focus on bispectrum tomography and examine how well the weak lensing bispectrum with tomography can constrain the interactions between dark sectors, as well as other cosmological parameters. Employing the Fisher matrix analysis, we forecast parameter uncertainties derived from weak lensing bispectra with a two-bin tomography and place upper bounds on the strength of the interactions between the dark sectors. The cosmic shear will be measured from upcoming weak lensing surveys with high sensitivity, thus it enables us to use the higher-order correlation functions of weak lensing to constrain the interaction between dark sectors, and will potentially provide more stringent results when combined with other observations.
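
    As a rough illustration of the Fisher-matrix forecasting step described above, the sketch below builds the Fisher matrix from parameter derivatives of a set of observables and reads marginalized 1-sigma uncertainties off its inverse; the derivative and covariance values are invented toy numbers, not the paper's lensing bispectra.

```python
import numpy as np

def fisher_forecast(derivs, cov):
    """Fisher matrix F = D C^-1 D^T, where D holds the derivatives of
    the observables with respect to each parameter (n_params x n_obs)
    and C is the data covariance. Returns the marginalized 1-sigma
    uncertainties sqrt((F^-1)_ii)."""
    F = derivs @ np.linalg.inv(cov) @ derivs.T
    return np.sqrt(np.diag(np.linalg.inv(F)))

# toy numbers: two parameters, three observables (illustrative only)
derivs = np.array([[1.0, 0.5, 0.2],
                   [0.0, 1.0, 0.8]])
cov = np.diag([0.1, 0.1, 0.2])
sigmas = fisher_forecast(derivs, cov)
```

    Off-diagonal terms of F^-1 would give the forecast parameter correlations; inverting the full matrix before taking the diagonal is what marginalizes over the other parameters.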

  3. Fundamental Parameters of Main-Sequence Stars in an Instant with Machine Learning

    NASA Astrophysics Data System (ADS)

    Bellinger, Earl P.; Angelou, George C.; Hekker, Saskia; Basu, Sarbani; Ball, Warrick H.; Guggenberger, Elisabeth

    2016-10-01

    Owing to the remarkable photometric precision of space observatories like Kepler, stellar and planetary systems beyond our own are now being characterized en masse for the first time. These characterizations are pivotal for endeavors such as searching for Earth-like planets and solar twins, understanding the mechanisms that govern stellar evolution, and tracing the dynamics of our Galaxy. The volume of data that is becoming available, however, brings with it the need to process this information accurately and rapidly. While existing methods can constrain fundamental stellar parameters such as ages, masses, and radii from these observations, they require substantial computational effort to do so. We develop a method based on machine learning for rapidly estimating fundamental parameters of main-sequence solar-like stars from classical and asteroseismic observations. We first demonstrate this method on a hare-and-hound exercise and then apply it to the Sun, 16 Cyg A and B, and 34 planet-hosting candidates that have been observed by the Kepler spacecraft. We find that our estimates and their associated uncertainties are comparable to the results of other methods, but with the additional benefit of being able to explore many more stellar parameters while using much less computation time. We furthermore use this method to present evidence for an empirical diffusion-mass relation. Our method is open source and freely available for the community to use.
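
    The abstract does not spell out the learning algorithm, so the following is only a minimal stand-in for the general idea of regressing stellar parameters on observables using a precomputed grid of models: a k-nearest-neighbour average over a toy grid plays the role of the trained learner, and the linear "parameter" relation is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical training grid: 3 scaled observables -> one stellar parameter
X_train = rng.uniform(0.0, 1.0, size=(500, 3))
y_train = X_train @ np.array([0.6, 0.1, 0.3])   # invented toy relation

def knn_estimate(x_obs, X, y, k=10):
    """Estimate a stellar parameter as the mean over the k training
    models closest (Euclidean distance) to the observed point, with
    the neighbour scatter as a crude uncertainty."""
    nearest = np.argsort(np.linalg.norm(X - x_obs, axis=1))[:k]
    return y[nearest].mean(), y[nearest].std()

est, unc = knn_estimate(np.array([0.5, 0.5, 0.5]), X_train, y_train)
```

    The speed advantage claimed in the abstract comes from replacing a per-star model-fitting run with a single lookup in an already-trained learner, which any regression method of this family provides.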

  4. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    PubMed Central

    Trucco, Andrea; Traverso, Federico; Crocco, Marco

    2015-01-01

    For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987
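
    The broadside-versus-end-fire directivity comparison above can be reproduced numerically. This sketch evaluates D = |w^H d|^2 / (w^H Psi w) for an equally spaced line array in isotropic noise, with Psi_mn = sinc(2·spacing·(m−n)) and spacing in wavelengths; the 8-element, quarter-wavelength geometry is an illustrative choice, not taken from the paper.

```python
import numpy as np

def directivity(w, spacing_wl, steer_cos):
    """D = |w^H d|^2 / (w^H Psi w) for an N-element equally spaced
    line array; Psi is the isotropic-noise covariance matrix,
    Psi_mn = sinc(2 * spacing * (m - n)), spacing in wavelengths."""
    m = np.arange(len(w))
    d = np.exp(1j * 2 * np.pi * spacing_wl * steer_cos * m)
    psi = np.sinc(2 * spacing_wl * (m[:, None] - m[None, :]))
    return abs(np.vdot(w, d)) ** 2 / np.real(np.conj(w) @ psi @ w)

n, spacing = 8, 0.25                                     # quarter-wave spacing
idx = np.arange(n)
w_broadside = np.ones(n) / n                             # uniform real weights
w_endfire = np.exp(1j * 2 * np.pi * spacing * idx) / n   # phased end-fire weights
d_bs = directivity(w_broadside, spacing, 0.0)
d_ef = directivity(w_endfire, spacing, 1.0)
```

    With these numbers the end-fire steering reaches a higher directivity than broadside, consistent with the paper's starting observation; note that the end-fire weights are complex, which is exactly the processing cost the oversteering technique aims to avoid.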

  5. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    NASA Astrophysics Data System (ADS)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information - beyond locally observed discharge - can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes in terms of two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular for rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. 
The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for an - assumed to be - ungauged basin. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.

  6. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xavier, MA; Trimboli, MS

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent-circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest significant performance improvements might be achieved by extending the result to electrochemical models. (C) 2015 Elsevier B.V. All rights reserved.
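
    To make the modelling ingredients concrete, here is a minimal discrete-time 2nd-order equivalent-circuit simulation with a direct-feedthrough ohmic term, driven by a greedy one-step constrained charging rule as a simple stand-in for a full MPC horizon; all circuit values and limits are invented for illustration, not the paper's cell parameters.

```python
import numpy as np

# hypothetical 2nd-order RC equivalent-circuit cell model (discrete time);
# values are invented, not identified from a real cell
A = np.diag([0.99, 0.95])        # RC relaxation poles
B = np.array([[0.01], [0.02]])   # charging current -> RC branch voltages
C = np.array([[1.0, 1.0]])       # output sums the RC branch voltages
D = 0.05                         # ohmic resistance (direct feedthrough)
v_ocv, v_max, i_max = 3.6, 4.2, 2.0

x = np.zeros((2, 1))
currents, voltages = [], []
for k in range(50):
    # greedy one-step constrained choice (stand-in for an MPC horizon):
    # apply the largest current keeping terminal voltage at or below v_max
    i = float(np.clip((v_max - v_ocv - (C @ x).item()) / D, 0.0, i_max))
    v = v_ocv + (C @ x).item() + D * i   # feedthrough enters the output
    currents.append(i)
    voltages.append(v)
    x = A @ x + B * i                    # state update
```

    The profile starts at the current limit and tapers as the voltage constraint becomes active, which is the qualitative shape of a constrained fast-charge trajectory; a real MPC would optimize over a multi-step horizon instead of this one-step rule.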

  7. A simple testable model of baryon number violation: Baryogenesis, dark matter, neutron-antineutron oscillation and collider signals

    NASA Astrophysics Data System (ADS)

    Allahverdi, Rouzbeh; Dev, P. S. Bhupal; Dutta, Bhaskar

    2018-04-01

    We study a simple TeV-scale model of baryon number violation which explains the observed proximity of the dark matter and baryon abundances. The model has constraints arising from both low and high-energy processes, and in particular, predicts a sizable rate for the neutron-antineutron (n - n bar) oscillation at low energy and the monojet signal at the LHC. We find an interesting complementarity among the constraints arising from the observed baryon asymmetry, ratio of dark matter and baryon abundances, n - n bar oscillation lifetime and the LHC monojet signal. There are regions in the parameter space where the n - n bar oscillation lifetime is found to be more constraining than the LHC constraints, which illustrates the importance of the next-generation n - n bar oscillation experiments.

  8. Millimeter-scale MEMS enabled autonomous systems: system feasibility and mobility

    NASA Astrophysics Data System (ADS)

    Pulskamp, Jeffrey S.

    2012-06-01

    Millimeter-scale robotic systems based on highly integrated microelectronics and micro-electromechanical systems (MEMS) could offer unique benefits and attributes for small-scale autonomous systems. This extreme scale for robotics will naturally constrain the realizable system capabilities significantly. This paper assesses the feasibility of developing such systems by defining the fundamental design trade spaces between component design variables and system level performance parameters. This permits the development of mobility enabling component technologies within a system relevant context. Feasible ranges of system mass, required aerodynamic power, available battery power, load supported power, flight endurance, and required leg load bearing capability are presented for millimeter-scale platforms. The analysis illustrates the feasibility of developing both flight capable and ground mobile millimeter-scale autonomous systems while highlighting the significant challenges that must be overcome to realize their potential.

  9. Cosmological constraints from strong gravitational lensing in clusters of galaxies.

    PubMed

    Jullo, Eric; Natarajan, Priyamvada; Kneib, Jean-Paul; D'Aloisio, Anson; Limousin, Marceau; Richard, Johan; Schimd, Carlo

    2010-08-20

    Current efforts in observational cosmology are focused on characterizing the mass-energy content of the universe. We present results from a geometric test based on strong lensing in galaxy clusters. Based on Hubble Space Telescope images and extensive ground-based spectroscopic follow-up of the massive galaxy cluster Abell 1689, we used a parametric model to simultaneously constrain the cluster mass distribution and dark energy equation of state. Combining our cosmological constraints with those from x-ray clusters and the Wilkinson Microwave Anisotropy Probe 5-year data gives Omega(m) = 0.25 +/- 0.05 and w(x) = -0.97 +/- 0.07, which are consistent with results from other methods. Inclusion of our method with all other available techniques brings down the current 2sigma contours on the dark energy equation-of-state parameter w(x) by approximately 30%.

  10. Detection and modelling of the ionospheric perturbation caused by a Space Shuttle launch using a network of ground-based Global Positioning System stations

    NASA Astrophysics Data System (ADS)

    Bowling, Timothy; Calais, Eric; Haase, Jennifer S.

    2013-03-01

    The exhaust plume of the Space Shuttle during its ascent triggers acoustic waves which propagate through the atmosphere and induce electron density changes at ionospheric heights; these changes can be measured using ground-based Global Positioning System (GPS) phase data. Here, we use a network of GPS stations to study the acoustic wave generated by the STS-125 Space Shuttle launch on May 11, 2009. We detect the resulting changes in ionospheric electron density, with characteristics that are typical of acoustic waves triggered by explosions at or near the Earth's surface or in the atmosphere. We successfully reproduce the amplitude and timing of the observed signal using a ray-tracing model with a moving source whose amplitude is directly scaled by a physical model of the shuttle exhaust energy, acoustic propagation in a dispersive atmosphere and a simplified two-fluid model of collisions between neutral gas and free electrons in the ionosphere. The close match between observed and model waveforms validates the modelling approach. This raises the possibility of using ground-based GPS networks to estimate the acoustic energy release of explosive sources near the Earth's surface or in the atmosphere, and to constrain some atmospheric acoustic parameters.

  11. Wavefront Control Toolbox for James Webb Space Telescope Testbed

    NASA Technical Reports Server (NTRS)

    Shiri, Ron; Aronstein, David L.; Smith, Jeffery Scott; Dean, Bruce H.; Sabatke, Erin

    2007-01-01

    We have developed a Matlab toolbox for wavefront control of optical systems. We have applied this toolbox to the optical models of the James Webb Space Telescope (JWST) in general and to the JWST Testbed Telescope (TBT) in particular, implementing both unconstrained and constrained wavefront optimization to correct for possible misalignments present on the segmented primary mirror or the monolithic secondary mirror. The optical models are implemented in the Zemax optical design program, and information is exchanged between Matlab and Zemax via the Dynamic Data Exchange (DDE) interface. The model configuration is managed using the XML protocol. The optimization algorithm uses influence functions for each adjustable degree of freedom of the optical model. Iterative and non-iterative algorithms have been developed to converge to a local minimum of the root-mean-square (rms) wavefront error using a singular value decomposition of the control matrix of influence functions. The toolkit is highly modular and allows the user to choose control strategies for the degrees of freedom to be adjusted on a given iteration and the wavefront convergence criterion. As the influence functions are nonlinear over the control parameter space, the toolkit also allows for trade-offs between the frequency of updating the local influence functions and execution speed. The functionality of the toolbox and the validity of the underlying algorithms have been verified through extensive simulations.
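
    The SVD-based correction step described above, inverting a control matrix of influence functions to obtain corrective moves, can be sketched as follows; the influence matrix and misalignments here are random placeholders rather than the JWST testbed optics, and small singular values are truncated as in typical least-squares controllers.

```python
import numpy as np

rng = np.random.default_rng(1)

# placeholder influence matrix: wavefront response (100 samples) per
# unit move of each of 5 adjustable degrees of freedom
G = rng.standard_normal((100, 5))
misalign = np.array([0.3, -0.1, 0.2, 0.0, -0.4])   # hypothetical state
wfe = G @ misalign                                  # measured wavefront error

# pseudo-inverse of the control matrix via SVD, truncating tiny
# singular values for robustness against ill-conditioning
U, s, Vt = np.linalg.svd(G, full_matrices=False)
s_inv = np.where(s > 1e-6 * s.max(), 1.0 / s, 0.0)
correction = -(Vt.T * s_inv) @ (U.T @ wfe)

rms_after = np.sqrt(np.mean((wfe + G @ correction) ** 2))
```

    In this linear toy case a single step drives the rms wavefront error to numerical zero; with the nonlinear influence functions noted in the abstract, the same solve would be repeated iteratively with periodically refreshed local influence matrices.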

  12. KiDS+GAMA: cosmology constraints from a joint analysis of cosmic shear, galaxy-galaxy lensing, and angular clustering

    NASA Astrophysics Data System (ADS)

    van Uitert, Edo; Joachimi, Benjamin; Joudaki, Shahab; Amon, Alexandra; Heymans, Catherine; Köhlinger, Fabian; Asgari, Marika; Blake, Chris; Choi, Ami; Erben, Thomas; Farrow, Daniel J.; Harnois-Déraps, Joachim; Hildebrandt, Hendrik; Hoekstra, Henk; Kitching, Thomas D.; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Miller, Lance; Nakajima, Reiko; Schneider, Peter; Valentijn, Edwin; Viola, Massimo

    2018-06-01

    We present cosmological parameter constraints from a joint analysis of three cosmological probes: the tomographic cosmic shear signal in ˜450 deg2 of data from the Kilo Degree Survey (KiDS), the galaxy-matter cross-correlation signal of galaxies from the Galaxies And Mass Assembly (GAMA) survey determined with KiDS weak lensing, and the angular correlation function of the same GAMA galaxies. We use fast power spectrum estimators that are based on simple integrals over the real-space correlation functions, and show that they are practically unbiased over relevant angular frequency ranges. We test our full pipeline on numerical simulations that are tailored to KiDS and retrieve the input cosmology. By fitting different combinations of power spectra, we demonstrate that the three probes are internally consistent. For all probes combined, we obtain S_8 ≡ σ_8 √(Ω_m/0.3) = 0.800^{+0.029}_{-0.027}, consistent with Planck and the fiducial KiDS-450 cosmic shear correlation function results. Marginalizing over wide priors on the mean of the tomographic redshift distributions yields consistent results for S_8 with an increase of 28 per cent in the error. The combination of probes results in a 26 per cent reduction in uncertainties of S_8 over using the cosmic shear power spectra alone. The main gain from these additional probes comes through their constraining power on nuisance parameters, such as the galaxy intrinsic alignment amplitude or potential shifts in the redshift distributions, which are up to a factor of 2 better constrained compared to using cosmic shear alone, demonstrating the value of large-scale structure probe combination.
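
    For reference, the reported parameter is the fixed combination S_8 = σ_8 √(Ω_m/0.3), the quantity cosmic shear constrains most tightly; a one-line helper with illustrative input values (not drawn from the paper's chains):

```python
import math

def s8(sigma8, omega_m):
    """S_8 = sigma_8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

value = s8(0.81, 0.31)   # illustrative inputs only
```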

  13. Discovery of wide low and very low-mass binary systems using Virtual Observatory tools

    NASA Astrophysics Data System (ADS)

    Gálvez-Ortiz, M. C.; Solano, E.; Lodieu, N.; Aberasturi, M.

    2017-04-01

    The frequency of multiple systems and their properties are key constraints on stellar formation and evolution. Formation mechanisms of very low-mass (VLM) objects are still under considerable debate, and an accurate assessment of their multiplicity and orbital properties is essential for constraining current theoretical models. Taking advantage of the virtual observatory capabilities, we looked for comoving low- and VLM binary (or multiple) systems using the Large Area Survey of UKIDSS LAS DR10, SDSS DR9 and the 2MASS catalogues. Other catalogues (WISE, GLIMPSE, SuperCosmos, etc.) were used to derive the physical parameters of the systems. We report the identification of 36 low- and VLM (˜M0-L0 spectral types) candidate binary/multiple systems (separations between 200 and 92 000 au), whose physical association is confirmed through common proper motion, distance and a low probability of chance alignment. This new list of systems notably increases the previous sampling of their mass-separation parameter space (˜100). We have also found 50 low-mass objects that we can classify as ˜L0-T2 according to their photometric information. Only one of these objects presents a common proper motion high-mass companion. Although we could not constrain the age of the majority of the candidates, most of them are probably still bound, except four that may be undergoing disruption. We suggest that our sample could be divided into two populations: one of tightly bound wide VLM systems that are expected to last more than 10 Gyr, and another of weakly bound wide VLM systems that will dissipate within a few Gyr.

  14. Constraints on high-energy neutrino emission from SN 2008D

    NASA Astrophysics Data System (ADS)

    IceCube Collaboration; Abbasi, R.; Abdou, Y.; Abu-Zayyad, T.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Andeen, K.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Bay, R.; Bazo Alba, J. L.; Beattie, K.; Beatty, J. J.; Bechet, S.; Becker, J. K.; Becker, K.-H.; Benabderrahmane, M. L.; Ben Zvi, S.; Berdermann, J.; Berghaus, P.; Berley, D.; Bernardini, E.; Bertrand, D.; Besson, D. Z.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Buitink, S.; Carson, M.; Chirkin, D.; Christy, B.; Clem, J.; Clevermann, F.; Cohen, S.; Colnard, C.; Cowen, D. F.; D'Agostino, M. V.; Danninger, M.; Davis, J. C.; De Clercq, C.; Demirörs, L.; Depaepe, O.; Descamps, F.; Desiati, P.; de Vries-Uiterweerd, G.; DeYoung, T.; Díaz-Vélez, J. C.; Dierckxsens, M.; Dreyer, J.; Dumm, J. P.; Duvoort, M. R.; Ehrlich, R.; Eisch, J.; Ellsworth, R. W.; Engdegård, O.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feusels, T.; Filimonov, K.; Finley, C.; Foerster, M. M.; Fox, B. D.; Franckowiak, A.; Franke, R.; Gaisser, T. K.; Gallagher, J.; Geisler, M.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Goodman, J. A.; Grant, D.; Griesel, T.; Gro, A.; Grullon, S.; Gurtner, M.; Ha, C.; Hallgren, A.; Halzen, F.; Han, K.; Hanson, K.; Helbing, K.; Herquet, P.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Homeier, A.; Hoshina, K.; Hubert, D.; Huelsnitz, W.; Hül, J. P.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jacobsen, J.; Japaridze, G. S.; Johansson, H.; Joseph, J. M.; Kampert, K. H.; Kappes A.; Karg, T.; Karle, A.; Kelley, J. L.; Kemming, N.; Kenny, P.; Kiryluk, J.; Kislat, F.; Klein, S. R.; Köhne, J. H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Koskinen, D. J.; Kowalski, M.; Kowarik, T.; Krasberg, M.; Krings, T.; Kroll, G.; Kuehn, K.; Kuwabara, T.; Labare, M.; Lafebre, S.; Laihem, K.; Landsman, H.; Larson, M. 
J.; Lauer, R.; Lehmann, R.; Lünemann, J.; Madsen, J.; Majumdar, P.; Marotta, A.; Maruyama, R.; Mase, K.; Matis, H. S.; Matusik, M.; Meagher, K.; Merck, M.; Mészáros, P.; Meures, T.; Middell, E.; Milke, N.; Miller, J.; Montaruli, T.; Morse, R.; Movit, S. M.; Nahnhauer, R.; Nam, J. W.; Naumann, U.; Nießen, P.; Nygren, D. R.; Odrowski, S.; Olivas, A.; Olivo, M.; O'Murchadha, A.; Ono, M.; Panknin, S.; Paul, L.; Pérez de los Heros, C.; Petrovic, J.; Piegsa, A.; Pieloth, D.; Porrata, R.; Posselt, J.; Price, P. B.; Prikockis, M.; Przybylski, G. T.; Rawlins, K.; Redl, P.; Resconi, E.; Rhode, W.; Ribordy, M.; Rizzo, A.; Rodrigues, J. P.; Roth, P.; Rothmaier, F.; Rott, C.; Ruhe, T.; Rutledge, D.; Ruzybayev, B.; Ryckbosch, D.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Schlenstedt, S.; Schmidt, T.; Schukraft, A.; Schultes, A.; Schulz, O.; Schunck, M.; Seckel, D.; Semburg, B.; Seo, S. H.; Sestayo, Y.; Seunarine, S.; Silvestri, A.; Singh, K.; Slipak, A.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stephens, G.; Stezelberger, T.; Stokstad, R. G.; Stoyanov, S.; Strahler, E. A.; Straszheim, T.; Sullivan, G. W.; Swillens, Q.; Taavola, H.; Taboada, I.; Tamburro, A.; Tarasova, O.; Tepe, A.; Ter-Antonyan, S.; Tilav, S.; Toale, P. A.; Toscano, S.; Tosi, D.; Turčan, D.; van Eijndhoven, N.; Vandenbroucke, J.; Van Overloop, A.; van Santen, J.; Voge, M.; Voigt, B.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Walter, M.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Wikström, G.; Williams, D. R.; Wischnewski, R.; Wissing, H.; Wolf, M.; Woschnagg, K.; Xu, C.; Xu, X. W.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.

    2011-03-01

SN 2008D, a core collapse supernova at a distance of 27 Mpc, was serendipitously discovered by the Swift satellite through an associated X-ray flash. Core collapse supernovae have been observed in association with long gamma-ray bursts and X-ray flashes, and a physical connection is widely assumed. This connection could imply that some core collapse supernovae possess mildly relativistic jets in which high-energy neutrinos are produced through proton-proton collisions. The predicted neutrino spectra would be detectable by Cherenkov neutrino detectors like IceCube. A search for a neutrino signal in temporal and spatial correlation with the observed X-ray flash of SN 2008D was conducted using data taken in 2007-2008 with 22 strings of the IceCube detector. Events were selected with a boosted decision tree classifier trained on simulated signal and experimental background data. The classifier was optimized for the position of SN 2008D and for the "soft jet" neutrino spectrum assumed for it. Using three search windows placed around the X-ray peak, emission time scales from 100 to 10 000 s were probed. No events passing the cuts were observed, in agreement with the signal expectation of 0.13 events. Upper limits on the muon neutrino flux from core collapse supernovae were derived for different emission time scales, and the principal model parameters were constrained. While no meaningful limits can be given in the case of isotropic neutrino emission, the parameter space for jetted emission can be constrained. Future analyses with the full 86-string IceCube detector could detect up to ~100 events for a core-collapse supernova at 10 Mpc according to the soft jet model.
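The limit-setting step in such a counting search can be sketched in a few lines: with zero events passing the cuts and negligible background, a classical upper limit on the Poisson signal mean s solves P(n ≤ n_obs | s) = 1 − CL. The function below is an illustrative reconstruction of that generic procedure, not the collaboration's actual analysis code.

```python
import math

def poisson_upper_limit(n_obs, cl=0.9):
    """Classical upper limit on a Poisson mean given n_obs observed events
    and no background subtraction: the smallest s with
    P(n <= n_obs | s) <= 1 - cl. For n_obs = 0 this reduces to s = -ln(1 - cl)."""
    def p_le(n, s):
        # cumulative Poisson probability P(n' <= n | mean s)
        return sum(math.exp(-s) * s**k / math.factorial(k) for k in range(n + 1))
    lo, hi = 0.0, 100.0
    for _ in range(100):  # bisection on s
        mid = 0.5 * (lo + hi)
        if p_le(n_obs, mid) > 1 - cl:
            lo = mid  # s too small: cumulative probability still above 1 - cl
        else:
            hi = mid
    return 0.5 * (lo + hi)

s_up = poisson_upper_limit(0)  # ~2.30 signal events at 90% CL for zero observed
```

The limit on the event count is then divided by the detector's effective exposure to obtain a flux limit; that conversion depends on detector-specific response functions not reproduced here.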

  15. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and are therefore a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained with a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters of different ages at such wells than the simple lumped parameter models can represent. Binary (or compound) mixing models can represent this more complex mixing, combining water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and all of their parameters difficult to constrain. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model.
We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its currently steep input gradient. We will show age tracer data from drinking water wells that enabled identification of young water ingress into wells, which poses a risk of bacteriological contamination of the drinking water from the surface.
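The convolution at the heart of an LPM can be illustrated with a minimal sketch. Here the exponential mixing model serves as the transfer function, and a binary mixture is formed as a weighted sum of two such outputs; the function names are hypothetical, and real applications use tracer-specific input functions (with decay for tritium) and transfer functions carrying an additional mixing parameter, such as the exponential piston flow model.

```python
import numpy as np

def exponential_age_pdf(t, mrt):
    """Exponential mixing model: age distribution with mean residence time mrt."""
    return np.exp(-t / mrt) / mrt

def tracer_output(input_series, mrt, dt=1.0):
    """Convolve a tracer input history (oldest-first, one value per dt)
    with the age distribution to get the concentration at the outlet."""
    # midpoint ages reduce discretization bias of the left-endpoint rule
    ages = (np.arange(len(input_series)) + 0.5) * dt
    g = exponential_age_pdf(ages, mrt) * dt  # discrete age weights, sums to ~1
    # output[t] = sum over ages k of input[t - k] * g[k]
    return np.convolve(input_series, g)[: len(input_series)]

def binary_mixture(input_series, mrt_young, mrt_old, frac_young, dt=1.0):
    """Binary mixing model: weighted sum of two single-LPM outputs."""
    young = tracer_output(input_series, mrt_young, dt)
    old = tracer_output(input_series, mrt_old, dt)
    return frac_young * young + (1 - frac_young) * old
```

Fitting the model then amounts to adjusting the residence times and the young fraction until the simulated concentrations match the measured multi-tracer time series.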

  16. Butterfly Encryption Scheme for Resource-Constrained Wireless Networks †

    PubMed Central

    Sampangi, Raghav V.; Sampalli, Srinivas

    2015-01-01

Resource-constrained wireless networks are emerging networks such as Radio Frequency Identification (RFID) and Wireless Body Area Networks (WBAN) that might have restrictions on the available resources and the computations that can be performed. These emerging technologies are increasing in popularity, particularly in defence, anti-counterfeiting, logistics and medical applications, and in consumer applications with the growing popularity of the Internet of Things. With communication over wireless channels, it is essential to focus attention on securing data. In this paper, we present an encryption scheme called the Butterfly encryption scheme. We first discuss a seed update mechanism for pseudorandom number generators (PRNG), and employ this technique to generate keys and authentication parameters for resource-constrained wireless networks. Our scheme is lightweight, in that it requires fewer resources when implemented, and offers high security through increased unpredictability, owing to continuously changing parameters. Our work focuses on accomplishing high security through simplicity and reuse. We evaluate our encryption scheme using simulation, key similarity assessment, key sequence randomness assessment, protocol analysis and security analysis. PMID:26389899
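The abstract does not spell out the seed-update mechanism, but the general idea of deriving forward-evolving keys from a hash-chained seed can be sketched as follows. `update_seed` and `derive_key` are hypothetical names, and plain SHA-256 stands in for the paper's actual PRNG construction; this is a sketch of the concept, not the Butterfly scheme itself.

```python
import hashlib

def update_seed(seed: bytes, counter: int) -> bytes:
    """Hypothetical seed-update step: hash the old seed with a round counter,
    so the seed evolves continuously and every round starts from fresh state."""
    return hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()

def derive_key(seed: bytes, label: bytes = b"enc-key") -> bytes:
    """Derive a session key (or, with another label, an authentication
    parameter) from the current seed without exposing the seed itself."""
    return hashlib.sha256(label + seed).digest()

seed = b"initial-shared-secret"  # provisioned out of band
keys = []
for round_no in range(3):
    seed = update_seed(seed, round_no)   # parameters change every round
    keys.append(derive_key(seed))        # fresh 32-byte key per round
```

Because both endpoints hold the same seed and counter, they derive identical keys without transmitting key material, which is what makes this style of scheme attractive for RFID- and WBAN-class devices.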

  17. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

This work addresses the problem of Magnetic Resonance Image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Łojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and penalization parameters by means of a continuation technique allows us to obtain good quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.
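The reweighting idea can be illustrated on a simplified 1-D problem. Instead of the image-gradient total variation penalty with data-fidelity constraints used by FNCR, the sketch below applies iteratively reweighted soft-thresholding to a sparse coefficient vector: weights 1/(|x| + eps) make the weighted l1 penalty approximate l0, penalizing small coefficients heavily while leaving large ones nearly untouched. The function name and parameter values are illustrative.

```python
import numpy as np

def reweighted_l1_denoise(y, lam=0.5, eps=1e-2, n_outer=5):
    """Iteratively reweighted soft-thresholding on a coefficient vector y.
    Each outer pass re-solves a weighted l1 problem whose weights
    1/(|x| + eps) come from the previous iterate, approximating an l0 penalty."""
    x = y.copy()
    for _ in range(n_outer):
        w = 1.0 / (np.abs(x) + eps)  # heavy penalty on small coefficients
        t = lam * w                  # per-coefficient soft threshold
        # exact prox of the separable weighted l1 term for this iterate
        x = np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
    return x

y = np.array([3.0, 0.1, -2.5, 0.05, 0.0])
x = reweighted_l1_denoise(y)  # small entries driven to zero, large ones kept
```

In the full reconstruction problem the same outer reweighting wraps a gradient-domain solve with the Fourier-domain data constraint, and the continuation over eps and the penalization parameter is what steers the iterates away from poor local minima.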

  18. Butterfly Encryption Scheme for Resource-Constrained Wireless Networks.

    PubMed

    Sampangi, Raghav V; Sampalli, Srinivas

    2015-09-15

Resource-constrained wireless networks are emerging networks such as Radio Frequency Identification (RFID) and Wireless Body Area Networks (WBAN) that might have restrictions on the available resources and the computations that can be performed. These emerging technologies are increasing in popularity, particularly in defence, anti-counterfeiting, logistics and medical applications, and in consumer applications with the growing popularity of the Internet of Things. With communication over wireless channels, it is essential to focus attention on securing data. In this paper, we present an encryption scheme called the Butterfly encryption scheme. We first discuss a seed update mechanism for pseudorandom number generators (PRNG), and employ this technique to generate keys and authentication parameters for resource-constrained wireless networks. Our scheme is lightweight, in that it requires fewer resources when implemented, and offers high security through increased unpredictability, owing to continuously changing parameters. Our work focuses on accomplishing high security through simplicity and reuse. We evaluate our encryption scheme using simulation, key similarity assessment, key sequence randomness assessment, protocol analysis and security analysis.

  19. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable at the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
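The Kalman filtering stage, given the AR parameters (which the paper estimates with the proposed recurrent neural network), can be sketched in companion form as below. Here the AR coefficients and noise variances are assumed known, and the function name is illustrative; the paper's contribution is the neural estimation of these parameters, not this standard filter.

```python
import numpy as np

def kalman_ar_denoise(y, ar_coeffs, q, r):
    """Kalman filter for a noisy signal y, assuming the clean signal follows
    an AR(p) process x_t = sum_i a_i x_{t-i} + w_t with w ~ N(0, q),
    observed as y_t = x_t + v_t with v ~ N(0, r)."""
    p = len(ar_coeffs)
    F = np.zeros((p, p))          # companion-form state transition
    F[0, :] = ar_coeffs
    F[1:, :-1] = np.eye(p - 1)
    H = np.zeros((1, p)); H[0, 0] = 1.0   # observe only the current sample
    Q = np.zeros((p, p)); Q[0, 0] = q
    x = np.zeros((p, 1))
    P = np.eye(p)
    out = np.empty(len(y))
    for t, yt in enumerate(y):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + r               # innovation variance
        K = P @ H.T / S                   # Kalman gain
        x = x + K * (yt - (H @ x)[0, 0])
        P = (np.eye(p) - K @ H) @ P
        out[t] = x[0, 0]                  # filtered speech sample
    return out
```

In practice the AR coefficients are re-estimated frame by frame, so the filter above would run per frame with the parameters supplied by the estimator.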

  20. Constraining dark sector perturbations I: cosmic shear and CMB lensing

    NASA Astrophysics Data System (ADS)

    Battye, Richard A.; Moss, Adam; Pearson, Jonathan A.

    2015-04-01

We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and to time-diffeomorphism-invariant L(g) theories, which are equivalent to models of a relativistic elastic medium and also to Lorentz-violating massive gravity. We develop a theoretical understanding of the observable impact of these models. To constrain them we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remain compatible with w = -1. We gauge how future experiments will help constrain the parameters via a likelihood analysis for CMB experiments such as CoRE and PRISM, and for tomographic galaxy weak lensing surveys, focusing on the potential discriminatory power of Euclid on mildly non-linear scales.
