Sample records for exploring parameter constraints

  1. Emulating Simulations of Cosmic Dawn for 21 cm Power Spectrum Constraints on Cosmology, Reionization, and X-Ray Heating

    NASA Astrophysics Data System (ADS)

    Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley

    2017-10-01

    Current and upcoming radio interferometric experiments aim to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds fast enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on σ_8. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
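The emulator-in-MCMC idea from this abstract can be sketched in a few lines. The toy below is purely illustrative, not the paper's pipeline: the "expensive simulation", the one-dimensional parameter, the linear surrogate (standing in for the paper's machine-learning regression), and all numerical values are assumptions for the sketch.

```python
import math, random

random.seed(0)

# Stand-in "expensive simulation": maps one parameter to an observable.
# (The real pipeline maps many reionization/heating/cosmology parameters
# to 21 cm power spectra; this toy keeps only the structure.)
def expensive_model(x):
    return 1.0 / (1.0 + math.exp(-x))

# Train a cheap emulator on a handful of expensive runs
# (linear least squares here; the paper uses ML regression).
xs = [-2.0 + 0.5 * i for i in range(9)]
ys = [expensive_model(x) for x in xs]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
intercept = ybar - slope * xbar
emulate = lambda x: intercept + slope * x    # fast surrogate

# Embed the emulator in a Metropolis-Hastings MCMC.
x_true = 0.3
data, sigma = expensive_model(x_true), 0.05  # mock observation

def log_like(x):
    r = (emulate(x) - data) / sigma
    return -0.5 * r * r

chain, x, lp = [], 0.0, log_like(0.0)
for _ in range(20000):
    prop = x + random.gauss(0.0, 0.3)
    lpp = log_like(prop)
    if math.log(random.random()) < lpp - lp:  # accept/reject step
        x, lp = prop, lpp
    chain.append(x)

posterior_mean = sum(chain[5000:]) / len(chain[5000:])
```

The point of the design is that each MCMC step calls the cheap surrogate instead of the expensive simulation, so hundreds of thousands of posterior evaluations become affordable; the emulator's own interpolation error then enters the error budget, which is why the paper validates the emulator's accuracy across the parameter space.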

  2. Constraints on Cosmological Parameters from the Angular Power Spectrum of a Combined 2500 deg$^2$ SPT-SZ and Planck Gravitational Lensing Map

    DOE PAGES

    Simard, G.; et al.

    2018-06-20

    We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg$^2$ of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant ($\Lambda$CDM), and to models with single-parameter extensions to $\Lambda$CDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find $\sigma_8 \Omega_{\rm m}^{0.25} = 0.598 \pm 0.024$ from the lensing data alone with relatively weak priors placed on the other $\Lambda$CDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the $\Lambda$CDM model. We find $\Omega_k = -0.012^{+0.021}_{-0.023}$ or $M_{

  5. Constraining neutron guide optimizations with phase-space considerations

    NASA Astrophysics Data System (ADS)

    Bertelsen, Mads; Lefmann, Kim

    2016-09-01

    We introduce a method named the Minimalist Principle that serves to reduce the parameter space for neutron guide optimization when the required beam divergence is limited. The reduced parameter space restricts the optimization to guides with a minimal neutron intake that are still theoretically able to deliver the maximal possible performance. The geometrical constraints are derived using phase-space propagation from moderator to guide and from guide to sample, while assuming that the optimized guides will achieve perfect transport of the limited neutron intake. Guide systems optimized using these constraints are shown to provide performance close to guides optimized without any constraints; however, the divergence received at the sample is limited to the desired interval, even when the neutron transport is not limited by the supermirrors used in the guide. As the constraints strongly limit the parameter space for the optimizer, two control parameters are introduced that can be used to adjust the selected subspace, effectively balancing between maximizing neutron transport and avoiding background from unnecessary neutrons. One parameter describes the expected focusing ability of the guide to be optimized, ranging from perfect focusing to no correlation between position and velocity. The second parameter controls the neutron intake into the guide, so that one can select exactly how aggressively the background should be limited. We show examples of guides optimized using these constraints which demonstrate a higher signal-to-noise ratio than conventional optimizations. Furthermore, the parameter controlling neutron intake is explored, showing that the simulated optimal neutron intake is close to the analytically predicted value, when assuming that the guide is dominated by multiple scattering events.

  6. Constraints on brane-world inflation from the CMB power spectrum: revisited

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, Mayukh R.; Mathews, Grant J.

    2018-03-01

    We analyze the Randall-Sundrum brane-world inflation scenario in the context of the latest CMB constraints from Planck. We summarize constraints on the most popular classes of models and explore some more realistic inflaton effective potentials. The constraints on standard inflationary parameters change in the brane-world scenario. We confirm that in general the brane-world scenario increases the tensor-to-scalar ratio, thus making this paradigm less consistent with the Planck constraints. Indeed, when BICEP2/Keck constraints are included, all monomial potentials in the brane-world scenario become disfavored compared to the standard scenario. However, for natural inflation the brane-world scenario could fit the constraints better due to larger allowed values of the number of e-foldings N before the end of inflation in the brane-world.

  7. Performance of convolutional codes on fading channels typical of planetary entry missions

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.; Reale, T. J.

    1974-01-01

    The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, while for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to modeling the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes, the bit error probability performance was investigated as a function of E_b/N_0, parameterized by the fading channel parameters. For longer constraint length codes, the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effects of simple block interleaving in combating the memory of the channel are explored, using an analytic approach or digital computer simulation.
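A minimal sketch of the machinery this abstract discusses: a rate-1/2, constraint length K=3 convolutional encoder with the standard (7, 5) octal generators and a hard-decision Viterbi decoder. The specific code, message, and error positions are illustrative choices, not taken from the paper (which also considers longer constraint lengths and sequential decoding).

```python
# Rate-1/2, constraint length K=3 convolutional code with the standard
# (7, 5) octal generators, decoded with hard-decision Viterbi.
G = (0b111, 0b101)          # generator polynomials (octal 7 and 5)
N_STATES = 4                # 2^(K-1) encoder states

def conv_encode(bits):
    """Encode a bit list, flushing with K-1 zeros to end in state 0."""
    state, out = 0, []
    for b in bits + [0, 0]:
        reg = (b << 2) | state                 # [b, b-1, b-2] as a 3-bit word
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(rx):
    """Maximum-likelihood decoding with Hamming branch metrics."""
    INF = float("inf")
    metric = [0.0] + [INF] * (N_STATES - 1)    # encoder starts in state 0
    path = [[] for _ in range(N_STATES)]
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_metric = [INF] * N_STATES
        new_path = [None] * N_STATES
        for s in range(N_STATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                ns = reg >> 1
                expected = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(r, expected))
                if m < new_metric[ns]:
                    new_metric[ns], new_path[ns] = m, path[s] + [b]
        metric, path = new_metric, new_path
    return path[0][:-2]       # terminated code ends in state 0; drop flush bits

msg = [1, 0, 1, 1, 0, 0, 1]
coded = conv_encode(msg)
coded[3] ^= 1                 # two well-separated channel errors
coded[12] ^= 1
decoded = viterbi_decode(coded)
```

Because this code has free distance 5, the decoder corrects the two well-separated bit errors and recovers the message; fading channels with memory defeat exactly this assumption of isolated errors, which is why the abstract examines interleaving.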

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lazkoz, Ruth; Escamilla-Rivera, Celia; Salzano, Vincenzo

    Cosmography provides a model-independent way to map the expansion history of the Universe. In this paper we simulate a Euclid-like survey and explore cosmographic constraints from future Baryonic Acoustic Oscillations (BAO) observations. We derive general expressions for the BAO transverse and radial modes and discuss the optimal order of the cosmographic expansion that provides reliable cosmological constraints. Through constraints on the deceleration and jerk parameters, we show that future BAO data have the potential to provide a model-independent check of the cosmic acceleration as well as a discrimination between the standard ΛCDM model and alternative mechanisms of cosmic acceleration.
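The model-independent cosmographic expansion mentioned here can be made concrete with a short check. The sketch below compares a second-order cosmographic expansion of H(z) against an exact flat ΛCDM calculation; the parameter values are illustrative assumptions, while the relations q0 = (3/2)Ωm - 1 and j0 = 1 are exact for flat ΛCDM, which is why the jerk parameter can discriminate ΛCDM from alternatives.

```python
import math

# Cosmographic (model-independent) expansion of the Hubble rate to
# second order in redshift:
#   H(z) ~= H0 * (1 + (1 + q0) z + 0.5 * (j0 - q0**2) z**2)
H0, Om = 70.0, 0.3            # illustrative values

def H_lcdm(z):
    """Exact Hubble rate for flat LCDM."""
    return H0 * math.sqrt(Om * (1 + z) ** 3 + 1 - Om)

q0 = 1.5 * Om - 1.0           # deceleration parameter (flat LCDM)
j0 = 1.0                      # jerk parameter: exactly 1 for LCDM

def H_cosmography(z):
    return H0 * (1 + (1 + q0) * z + 0.5 * (j0 - q0 ** 2) * z ** 2)

# At low redshift the truncated expansion tracks the exact model closely;
# at BAO redshifts the truncation order matters, which is the "optimal
# order" question the abstract raises.
err = abs(H_lcdm(0.1) - H_cosmography(0.1)) / H0
```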

  9. Multi-objective trajectory optimization for the space exploration vehicle

    NASA Astrophysics Data System (ADS)

    Qin, Xiaoli; Xiao, Zhen

    2016-07-01

    The research determines a temperature-constrained optimal trajectory for the space exploration vehicle by developing an optimal control formulation and solving it using a variable-order quadrature collocation method with a Non-linear Programming (NLP) solver. The vehicle is assumed to be a space reconnaissance aircraft that has specified takeoff/landing locations, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom aircraft model is adapted from previous work and includes flight dynamics and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and exploration of space targets. In addition, the vehicle models include environmental models (gravity and atmosphere). How these models are appropriately employed is key to gaining confidence in the results and conclusions of the research. Optimal trajectories are developed using several performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum distance. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for large-scale space exploration.

  10. Hilltop supernatural inflation and gravitino problem

    NASA Astrophysics Data System (ADS)

    Kohri, Kazunori; Lin, Chia-Min

    2010-11-01

    In this paper, we explore the parameter space of the hilltop supernatural inflation model and show the regime within which there is no gravitino problem, even if we consider both thermal and nonthermal production mechanisms. We make plots for the allowed reheating temperature as a function of the gravitino mass, subject to constraints from big-bang nucleosynthesis. We also plot the constraint obtained when the gravitino is assumed to be stable and to play the role of dark matter.

  11. Misspecification in Latent Change Score Models: Consequences for Parameter Estimation, Model Evaluation, and Predicting Change.

    PubMed

    Clark, D Angus; Nuttall, Amy K; Bowles, Ryan P

    2018-01-01

    Latent change score models (LCS) are conceptually powerful tools for analyzing longitudinal data (McArdle & Hamagami, 2001). However, applications of these models typically include constraints on key parameters over time. Although practically useful, strict invariance over time in these parameters is unlikely in real data. This study investigates the robustness of LCS when invariance over time is incorrectly imposed on key change-related parameters. Monte Carlo simulation methods were used to explore the impact of misspecification on parameter estimation, predicted trajectories of change, and model fit in the dual change score model, the foundational LCS. When constraints were incorrectly applied, several parameters, most notably the slope (i.e., constant change) factor mean and autoproportion coefficient, were severely and consistently biased, as were regression paths to the slope factor when external predictors of change were included. Standard fit indices indicated that the misspecified models fit well, partly because mean level trajectories over time were accurately captured. Loosening constraints improved the accuracy of parameter estimates, but estimates were more unstable, and models frequently failed to converge. Results suggest that potentially common sources of misspecification in LCS can produce distorted impressions of developmental processes, and that identifying and rectifying the situation is a challenge.
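The misspecification the abstract describes can be illustrated with a stripped-down simulation. The sketch below is a toy, not the authors' Monte Carlo design: it generates noiseless dual-change-score-style data in which the autoproportion parameter drifts over waves, then contrasts a pooled fit that wrongly imposes invariance with wave-specific fits; the specific values of s and the betas are assumptions.

```python
import random

random.seed(3)

# Toy dual change score process: between waves t-1 and t,
#   dy_t = s + beta_t * y_{t-1}
# with a constant change component s and an autoproportion beta_t.
# beta_t drifts over time here, violating the invariance constraint a
# standard LCS application would impose.
s_true = 2.0
betas = [-0.10, -0.20, -0.30, -0.40]           # time-varying, not invariant

subjects = [random.gauss(10.0, 2.0) for _ in range(200)]
waves = []                                      # (wave, y_prev, dy) records
for y0 in subjects:
    y = y0
    for t, beta in enumerate(betas):
        dy = s_true + beta * y                  # noiseless, for clarity
        waves.append((t, y, dy))
        y += dy

def ols(pairs):
    """Regression dy = s + beta * y_prev; returns (s, beta, sse)."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    beta = sxy / sxx
    s = my - beta * mx
    sse = sum((y - s - beta * x) ** 2 for x, y in pairs)
    return s, beta, sse

# Misspecified: one invariant beta pooled across all waves.
s_bad, beta_bad, sse_bad = ols([(y, dy) for _, y, dy in waves])

# Correctly specified: a separate beta per wave recovers s exactly.
s_by_wave = [ols([(y, dy) for w, y, dy in waves if w == t])[0]
             for t in range(len(betas))]
```

Even with noiseless data, the invariance-constrained fit cannot reproduce the trajectories (its residual sum of squares is far from zero) and its estimate of the constant-change component is distorted, mirroring the biased slope-factor means the study reports.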

  12. Optimal Design of Integrated Systems Health Management (ISHM) Systems for improving safety in NASA's Exploration Vehicles: A Two-Level Multidisciplinary Design Approach

    NASA Technical Reports Server (NTRS)

    Tumer, Irem; Mehr, Ali Farhang

    2005-01-01

    In this paper, a two-level multidisciplinary design approach is described to optimize the effectiveness of ISHM systems. At the top level, the overall safety of the mission consists of system-level variables, parameters, objectives, and constraints that are shared throughout the system and by all subsystems. Each subsystem level then comprises these shared values in addition to subsystem-specific variables, parameters, objectives, and constraints. A hierarchical structure is established to pass shared values up or down between the two levels with system-level and subsystem-level optimization routines.

  13. Propagation of error from parameter constraints in quantitative MRI: Example application of multiple spin echo T2 mapping.

    PubMed

    Lankford, Christopher L; Does, Mark D

    2018-02-01

    Quantitative MRI may require correcting for nuisance parameters which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. As an example, the case of refocusing pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂2 for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
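The core idea, bias in a constrained nuisance parameter propagating to the fitted parameter, can be seen in a closed-form toy. The two-echo signal model below and all numbers are assumptions for illustration (the paper's multi-echo model and θ definition differ); it compares first-order error propagation against the exact bias.

```python
import math

# Toy two-echo T2 estimation with a nuisance "refocusing efficiency"
# theta (a hypothetical stand-in for the flip-angle constraint).
# Assumed signal model, chosen so T2 has a closed form:
#   S_k = A * theta**(k-1) * exp(-k * TE / T2),  k = 1, 2
#   =>  T2 = TE / ln(theta * S1 / S2)
TE, T2_true, theta_true = 10.0, 80.0, 0.90

def t2_estimate(s1, s2, theta):
    return TE / math.log(theta * s1 / s2)

s1 = math.exp(-TE / T2_true)
s2 = theta_true * math.exp(-2 * TE / T2_true)

# Sanity check: with the true theta the estimate is exact.
t2_exact = t2_estimate(s1, s2, theta_true)

# Constrain theta to a biased value and propagate the bias to T2.
theta_bias = 0.02
t2_biased = t2_estimate(s1, s2, theta_true + theta_bias)

# First-order propagation: dT2/dtheta = -(T2**2 / TE) / theta, so
# bias(T2) ~= (dT2/dtheta) * bias(theta).
dT2_dtheta = -(T2_true ** 2 / TE) / theta_true
predicted_bias = dT2_dtheta * theta_bias   # first-order prediction
actual_bias = t2_biased - T2_true          # exact nonlinear bias
```

A ~2% bias in the constrained nuisance parameter produces a T2 error of order 10 ms in this toy, and the first-order (delta-method) prediction lands close to the exact value; the paper's analytical MSE expression generalizes exactly this kind of propagation to include the constraint's variance as well.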

  14. Constraint damping for the Z4c formulation of general relativity

    NASA Astrophysics Data System (ADS)

    Weyhausen, Andreas; Bernuzzi, Sebastiano; Hilditch, David

    2012-01-01

    One possibility for avoiding constraint violation in numerical relativity simulations adopting free-evolution schemes is to modify the continuum evolution equations so that constraint violations are damped away. Gundlach et al. demonstrated that such a scheme damps low-amplitude, high-frequency constraint-violating modes exponentially for the Z4 formulation of general relativity. Here we analyze the effect of the damping scheme in numerical applications on a conformal decomposition of Z4. After reproducing the theoretically predicted damping rates of constraint violations in the linear regime, we explore numerical solutions not covered by the theoretical analysis. In particular, we examine the effect of the damping scheme on low-frequency and high-amplitude perturbations of flat spacetime, as well as on the long-term dynamics of puncture and compact star initial data in the context of spherical symmetry. We find that the damping scheme is effective provided that the constraint violation is resolved on the numerical grid. On grid noise, the combination of artificial dissipation and damping helps to suppress constraint violations. We find that care must be taken in choosing the damping parameter in simulations of puncture black holes; otherwise the damping scheme can cause undesirable growth of the constraints, and even qualitatively incorrect evolutions. In the numerical evolution of a compact static star, the choice of the damping parameter is even more delicate: it may lead to a small decrease of constraint violation, but for a large range of values it results in unphysical behavior.

  15. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    NASA Astrophysics Data System (ADS)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels is compared with a problem with several severe constraints, and with a problem whose constraint thresholds are less severe. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single-year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first uses control maps of algorithm performance to determine whether the algorithm's performance is sensitive to user-defined parameters. The second uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets.
Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or more metrics.
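To make the notion of constraints-as-feasibility-limits concrete: the comparison rule below is Deb's feasibility-first rule, a common way MOEAs fold performance constraints into selection. It is offered only as an illustration of the mechanism; the study's algorithms use restricted tournament selection, and the toy "portfolio" objectives, constraint threshold, and solution values are invented for the sketch.

```python
# Feasibility-first pairwise comparison (Deb's rule): feasible beats
# infeasible, less violation beats more, and among feasible solutions
# ordinary Pareto dominance applies.
def violation(sol, constraints):
    """Total violation of g(x) <= 0 constraints (0 means feasible)."""
    return sum(max(0.0, g(sol)) for g in constraints)

def constrained_dominates(a, b, objectives, constraints):
    va, vb = violation(a, constraints), violation(b, constraints)
    if va == 0.0 and vb > 0.0:
        return True                       # feasible beats infeasible
    if va > 0.0 and vb > 0.0:
        return va < vb                    # less violation wins
    if va > 0.0:
        return False
    fa = [f(a) for f in objectives]       # both feasible: Pareto dominance
    fb = [f(b) for f in objectives]
    return all(x <= y for x, y in zip(fa, fb)) and fa != fb

# Toy water-portfolio flavour: minimize cost and cost variability,
# subject to reliability >= 0.95 (written as 0.95 - reliability <= 0).
objectives = [lambda s: s["cost"], lambda s: s["var"]]
constraints = [lambda s: 0.95 - s["rel"]]

cheap_unreliable = {"cost": 1.0, "var": 0.1, "rel": 0.80}
pricey_reliable = {"cost": 3.0, "var": 0.2, "rel": 0.97}
```

Under this rule the expensive but reliable portfolio dominates the cheap infeasible one regardless of objective values, which is exactly how severe constraint thresholds can reshape, and potentially hinder, the search the study diagnoses.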

  16. Equivalence between the Lovelock-Cartan action and a constrained gauge theory

    NASA Astrophysics Data System (ADS)

    Junqueira, O. C.; Pereira, A. D.; Sadovski, G.; Santos, T. R. S.; Sobreiro, R. F.; Tomaz, A. A.

    2017-04-01

    We show that the four-dimensional Lovelock-Cartan action can be derived from a massless gauge theory for the SO(1, 3) group with an additional BRST trivial part. The model is originally composed of a topological sector and a BRST exact piece and has no explicit dependence on the metric, the vierbein or a mass parameter. The vierbein is introduced together with a mass parameter through some BRST trivial constraints. The effect of the constraints is to identify the vierbein with some of the additional fields, transforming the original action into the Lovelock-Cartan one. In this scenario, the mass parameter is identified with Newton's constant, while the gauge field is identified with the spin connection. The symmetries of the model are also explored. Moreover, the extension of the model to a quantum version is qualitatively discussed.

  17. Supersymmetry Breaking, Gauge Mediation, and the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shih, David

    2015-04-14

    Gauge mediated SUSY breaking (GMSB) is a promising class of supersymmetric models that automatically satisfies the precision constraints. Prior work of Meade, Seiberg and Shih in 2008 established the full, model-independent parameter space of GMSB, which they called "General Gauge Mediation" (GGM). During the first half of 2010-2015, Shih and his collaborators thoroughly explored the parameter space of GGM and established many well-motivated benchmark models for use by the experimentalists at the LHC. Through their work, the current constraints on GGM from LEP, the Tevatron and the LHC were fully elucidated, together with the possible collider signatures of GMSB at the LHC. This ensured that the full discovery potential for GGM could be completely realized at the LHC.

  18. Multipartite interacting scalar dark matter in the light of updated LUX data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Subhaditya; Ghosh, Purusottam; Poulose, Poulose

    2017-04-01

    We explore constraints on a multipartite dark matter (DM) framework composed of singlet scalar DM interacting with the Standard Model (SM) through a Higgs portal coupling. We compute relic density and direct search constraints, including the updated LUX bound, for a two-component scenario with non-zero interactions between the two DM components in a Z_2 × Z_2' framework, in comparison with the one having O(2) symmetry. We point out the availability of a significantly large region of parameter space of such a multipartite model with DM-DM interactions.

  19. Constraints on Stress Components at the Internal Singular Point of an Elastic Compound Structure

    NASA Astrophysics Data System (ADS)

    Pestrenin, V. M.; Pestrenina, I. V.

    2017-03-01

    The classical analytical and numerical methods for investigating the stress-strain state (SSS) in the vicinity of a singular point consider the point as a mathematical one (having no linear dimensions). The reliability of the solution obtained by such methods is valid only outside a small vicinity of the singular point, because the macroscopic equations become incorrect and microscopic ones have to be used to describe the SSS in this vicinity. Also, it is impossible to set constraints or to formulate solutions in stress-strain terms for a mathematical point. These problems do not arise if the singular point is identified with the representative volume of material of the structure studied. In the authors' opinion, this approach is consistent with the postulates of continuum mechanics. In this case, the formulation of constraints at a singular point and their investigation becomes an independent problem of mechanics for bodies with singularities. This method was used to explore constraints at an internal singular point (representative volume) of a compound wedge and a compound rib. It is shown that, in addition to the constraints given in the classical approach, there are also constraints depending on the macroscopic parameters of the constituent materials. These constraints turn the problems of deformable bodies with an internal singular point into nonclassical ones. Combinations of material parameters determine the number of additional constraints and the critical stress state at the singular point. Results of this research can be used in the mechanics of composite materials and fracture mechanics, and in studying stress concentrations in composite structural elements.

  20. Cosmological constraints with clustering-based redshifts

    NASA Astrophysics Data System (ADS)

    Kovetz, Ely D.; Raccanelli, Alvise; Rahman, Mubdi

    2017-07-01

    We demonstrate that observations lacking reliable redshift information, such as photometric and radio continuum surveys, can produce robust measurements of cosmological parameters when empowered by clustering-based redshift estimation. This method infers the redshift distribution based on the spatial clustering of sources, using cross-correlation with a reference data set with known redshifts. Applying this method to the existing Sloan Digital Sky Survey (SDSS) photometric galaxies, and projecting to future radio continuum surveys, we show that sources can be efficiently divided into several redshift bins, increasing their ability to constrain cosmological parameters. We forecast constraints on the dark-energy equation of state and on local non-Gaussianity parameters. We explore several pertinent issues, including the trade-off between including more sources and minimizing the overlap between bins, the shot-noise limitations on binning, and the predicted performance of the method at high redshifts; most importantly, we pay special attention to possible degeneracies with the galaxy bias. Remarkably, we find that once this technique is implemented, constraints on dynamical dark energy from the SDSS imaging catalogue can be competitive with, or better than, those from the spectroscopic BOSS survey and even future planned experiments. Further, constraints on primordial non-Gaussianity from future large-sky radio-continuum surveys can outperform those from the Planck cosmic microwave background experiment and rival those from future spectroscopic galaxy surveys. The application of this method thus holds tremendous promise for cosmology.

  1. NEPTUNE'S WILD DAYS: CONSTRAINTS FROM THE ECCENTRICITY DISTRIBUTION OF THE CLASSICAL KUIPER BELT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, Rebekah I.; Murray-Clay, Ruth

    2012-05-01

    Neptune's dynamical history shaped the current orbits of Kuiper Belt objects (KBOs), leaving clues to the planet's orbital evolution. In the 'classical' region, a population of dynamically 'hot' high-inclination KBOs overlies a flat 'cold' population with distinct physical properties. Simulations of qualitatively different histories for Neptune, including smooth migration on a circular orbit or scattering by other planets to a high eccentricity, have not simultaneously produced both populations. We explore a general Kuiper Belt assembly model that forms hot classical KBOs interior to Neptune and delivers them to the classical region, where the cold population forms in situ. First, we present evidence that the cold population is confined to eccentricities well below the limit dictated by long-term survival. Therefore, Neptune must deliver hot KBOs into the long-term survival region without excessively exciting the eccentricities of the cold population. Imposing this constraint, we explore the parameter space of Neptune's eccentricity and eccentricity damping, migration, and apsidal precession. We rule out much of parameter space, except where Neptune is scattered to a moderately eccentric orbit (e > 0.15) and subsequently migrates a distance Δa_N = 1-6 AU. Neptune's moderate eccentricity must either damp quickly or be accompanied by fast apsidal precession. We find that Neptune's high eccentricity alone does not generate a chaotic sea in the classical region. Chaos can result from Neptune's interactions with Uranus, exciting the cold KBOs and placing additional constraints. Finally, we discuss how to interpret our constraints in the context of the full, complex dynamical history of the solar system.

  2. Cut-off characterisation of energy spectra of bright fermi sources: Current instrument limits and future possibilities

    NASA Astrophysics Data System (ADS)

    Romoli, C.; Taylor, A. M.; Aharonian, F.

    2017-02-01

    In this paper, some of the brightest GeV sources observed by the Fermi-LAT are analysed, focusing on their spectral cut-off region. The sources chosen for this investigation are the brightest blazar flares of 3C 454.3 and 3C 279, and the Vela pulsar, reanalysed with the latest Fermi-LAT software. For the study of the spectral cut-off, we first explored the Vela pulsar spectrum, whose statistics in the time interval of the 3FGL catalog allowed strong constraints to be obtained on the parameters. We subsequently performed a new analysis of the flaring blazar SEDs. For these sources we obtained constraints on the cut-off parameters under the assumption that their underlying spectral distribution is described by a power law with a stretched exponential cut-off. We then highlighted the significant potential improvements on such constraints from observations with next-generation ground-based Cherenkov telescopes, represented in our study by the Cherenkov Telescope Array (CTA). Adopting currently available simulations for this future observatory, we demonstrate the considerable improvement in cut-off constraints achievable by observations with this new instrument when compared with that achievable by satellite observations.

  3. Testing backreaction effects with observational Hubble parameter data

    NASA Astrophysics Data System (ADS)

    Cao, Shu-Lei; Teng, Huan-Yu; Wan, Hao-Yi; Yu, Hao-Ran; Zhang, Tong-Jie

    2018-02-01

    The spatially averaged inhomogeneous Universe includes a kinematical backreaction term Q_D that is related to the averaged spatial Ricci scalar ⟨R⟩_D in the framework of general relativity. Under the assumption that Q_D and ⟨R⟩_D obey scaling laws in the volume scale factor a_D, a direct coupling between them with a scaling index n arises. In order to explore the generic properties of a backreaction model for explaining the accelerated expansion of the Universe, we exploit two metrics to describe the late-time Universe. Since the standard FLRW metric cannot precisely describe the late-time Universe on small scales, a template metric with an evolving curvature parameter κ_D(t) is employed. However, we doubt the validity of the prescription for κ_D, which motivates us to apply observational Hubble parameter data (OHD) to constrain parameters in dust cosmology. First, for the FLRW metric, we obtain the best-fit constraints Ω_m^{D_0} = 0.25^{+0.03}_{-0.03}, n = 0.02^{+0.69}_{-0.66}, and H_{D_0} = 70.54^{+4.24}_{-3.97} km s^{-1} Mpc^{-1}, and explore the evolutions of the parameters. Second, in the template metric context, by marginalizing over H_{D_0} with a uniform prior, we obtain best-fit values of n = -1.22^{+0.68}_{-0.41} and Ω_m^{D_0} = 0.12^{+0.04}_{-0.02}. Moreover, we utilize three different Gaussian priors on H_{D_0}, which result in different best fits of n but almost the same best-fit value Ω_m^{D_0} ≈ 0.12. The absolute constraints without marginalization of the parameter are also obtained: n = -1.1^{+0.58}_{-0.50} and Ω_m^{D_0} = 0.13 ± 0.03. With these constraints, the evolution of the effective deceleration parameter q^D indicates that the backreaction can account for the accelerated expansion of the Universe without involving an extra dark energy component in the scaling-solution context. Nevertheless, the results also verify that the prescription for κ_D is insufficient and should be improved.
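The constraint procedure, fitting a two-parameter H(z) model to OHD by minimizing χ², can be sketched in miniature. The model below is a deliberately simplified toy in which the non-standard term scales as (1+z)^n (an assumption for illustration, not the paper's averaged equations), fit to noiseless mock data by brute-force grid search.

```python
import math

# Toy chi-square fit of mock Hubble-parameter data: a matter term plus
# a non-standard (backreaction-like) component assumed to scale as
# (1+z)^n, giving
#   H(z)^2 = H0^2 * (Om * (1+z)**3 + (1 - Om) * (1+z)**n)
H0 = 70.0

def model_H(z, om, n):
    return H0 * math.sqrt(om * (1 + z) ** 3 + (1 - om) * (1 + z) ** n)

# Noiseless mock OHD generated from (Om, n) = (0.25, 0.5).
zs = [0.1, 0.3, 0.5, 0.9, 1.3, 1.8]
obs = [model_H(z, 0.25, 0.5) for z in zs]
sigma = 5.0                                   # assumed uncertainty, km/s/Mpc

def chi2(om, n):
    return sum(((o - model_H(z, om, n)) / sigma) ** 2
               for z, o in zip(zs, obs))

# Brute-force grid search over the two parameters (a real analysis
# would use MCMC and marginalize over H0, as the paper does).
grid_om = [0.05 * i for i in range(2, 9)]     # 0.10 .. 0.40
grid_n = [0.25 * i for i in range(-8, 9)]     # -2.00 .. 2.00
best = min(((om, n) for om in grid_om for n in grid_n),
           key=lambda p: chi2(*p))
```

With noiseless mock data the grid search recovers the generating parameters exactly; with real OHD, the broad, correlated n constraints quoted in the abstract reflect how weakly current data distinguish the scaling index.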

  4. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

    Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and support more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
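The hierarchical filtering idea can be sketched as successive masks over a sample of parameter sets. The metric names and thresholds below are hypothetical; in the study they would come from DHSVM runs compared against regional signatures, the hydrograph, and snow observations:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sets = 10_000

# Hypothetical per-parameter-set performance metrics (stand-ins for model output):
runoff_ratio_err = rng.uniform(0, 1, n_sets)   # mismatch vs. regional runoff ratio
nse_flow = rng.uniform(-1, 1, n_sets)          # Nash-Sutcliffe efficiency on streamflow
swe_rmse = rng.uniform(0, 2, n_sets)           # snow-water-equivalent RMSE (arbitrary units)

behavioral = np.ones(n_sets, dtype=bool)
# Apply constraint classes in sequence: regional -> observation -> internal state.
behavioral &= runoff_ratio_err < 0.2           # regional signature constraint
behavioral &= nse_flow > 0.6                   # hydrograph constraint
behavioral &= swe_rmse < 0.5                   # internal SWE constraint
print(behavioral.sum(), "behavioral sets remain out of", n_sets)
```

Each additional mask can only shrink the behavioral subset, mirroring how the study's constraint classes progressively reduce equifinality.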

  5. Constraining the location of gamma-ray flares in luminous blazars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nalewajko, Krzysztof; Begelman, Mitchell C.; Sikora, Marek, E-mail: knalew@jila.colorado.edu

    2014-07-10

    Locating the gamma-ray emission sites in blazar jets is a long-standing and highly controversial issue. We jointly investigate several constraints on the distance scale r and Lorentz factor Γ of the gamma-ray emitting regions in luminous blazars (primarily flat spectrum radio quasars). Working in the framework of one-zone external radiation Comptonization models, we perform a parameter space study for several representative cases of actual gamma-ray flares in their multiwavelength context. We find a particularly useful combination of three constraints: from an upper limit on the collimation parameter Γθ ≲ 1, from an upper limit on the synchrotron self-Compton (SSC) luminosity L{sub SSC} ≲ L{sub X}, and from an upper limit on the efficient cooling photon energy E{sub cool,obs} ≲ 100 MeV. These three constraints are particularly strong for sources with low accretion disk luminosity L{sub d}. The commonly used intrinsic pair-production opacity constraint on Γ is usually much weaker than the SSC constraint. The SSC and cooling constraints provide a robust lower limit on the collimation parameter Γθ ≳ 0.1-0.7. Typical values of r corresponding to moderate values of Γ ∼ 20 are in the range 0.1-1 pc, and are determined primarily by the observed variability timescale t{sub var,obs}. Alternative scenarios motivated by the observed gamma-ray/millimeter connection, in which gamma-ray flares of t{sub var,obs} ∼ a few days are located at r ∼ 10 pc, are in conflict with both the SSC and cooling constraints. Moreover, we use a simple light travel time argument to point out that the gamma-ray/millimeter connection does not provide a significant constraint on the location of gamma-ray flares. We argue that spine-sheath models of the jet structure do not offer a plausible alternative to external radiation fields at large distances; however, an extended broad-line region is an idea worth exploring. We propose that the most definite additional constraint could be provided by determination of the synchrotron self-absorption frequency for correlated synchrotron and gamma-ray flares.

  6. Constraining modified theories of gravity with the galaxy bispectrum

    NASA Astrophysics Data System (ADS)

    Yamauchi, Daisuke; Yokoyama, Shuichiro; Tashiro, Hiroyuki

    2017-12-01

    We explore the use of the galaxy bispectrum induced by nonlinear gravitational evolution as a possible probe to test general scalar-tensor theories with second-order equations of motion. We find that the time dependence of the leading second-order kernel is approximately characterized by one parameter, the second-order index, which is expected to trace the higher-order growth history of the Universe. We show that this new parameter carries significant new information about the nonlinear growth of structure. We forecast future constraints on the second-order index as well as the equation-of-state parameter and the growth index.

  7. Scale-dependent CMB power asymmetry from primordial speed of sound and a generalized δ N formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dong-Gang; Cai, Yi-Fu; Zhao, Wen

    2016-02-01

    We explore a plausible mechanism in which the hemispherical power asymmetry in the CMB is produced by spatial variation of the primordial sound speed parameter. We suggest that, in a generalized approach to the δ N formalism, the local e-folding number may depend on other primordial parameters besides the initial values of the inflaton. Here the δ N formalism is extended by considering the effects of a spatially varying sound speed parameter caused by a super-Hubble perturbation of a light field. Using this generalized δ N formalism, we systematically calculate the asymmetric primordial spectrum in the model of multi-speed inflation by taking into account the constraints of primordial non-Gaussianities. We further discuss specific model constraints, and the corresponding asymmetry amplitudes are found to be scale-dependent, which can accommodate current observations of the power asymmetry at different length scales.

  8. First X-ray Statistical Tests for Clumpy Torii Models: Constraints from RXTE monitoring of Seyfert AGN

    NASA Astrophysics Data System (ADS)

    Markowitz, A.

    2015-09-01

    We summarize two papers providing the first X-ray-derived statistical constraints for both clumpy-torus model parameters and cloud ensemble properties. In Markowitz, Krumpe, & Nikutta (2014), we explored multi-timescale variability in line-of-sight X-ray absorbing gas as a function of optical classification. We examined 55 Seyferts monitored with the Rossi X-ray Timing Explorer, and found in 8 objects a total of 12 eclipses, with durations between hours and years. Most clouds are commensurate with the outer portions of the BLR, or the inner regions of infrared-emitting dusty tori. The detection of eclipses in type Is disfavors sharp-edged tori. We provide probabilities to observe a source undergoing an absorption event for both type Is and IIs, yielding constraints in [N_0, sigma, i] parameter space. In Nikutta et al., in prep., we infer that the small cloud angular sizes, as seen from the SMBH, imply the presence of >10^7 clouds in BLR+torus to explain observed covering factors. Cloud size is roughly proportional to distance from the SMBH, hinting at the formation processes (e.g. disk fragmentation). All observed clouds are sub-critical with respect to tidal disruption; self-gravity alone cannot contain them. External forces (e.g. magnetic fields, ambient pressure) are needed to contain them, or otherwise the clouds must be short-lived. Finally, we infer that the radial cloud density distribution behaves as 1/r^{0.7}, compatible with VLTI observations. Our results span both dusty and non-dusty clumpy media, and probe model parameter space complementary to that for short-term eclipses observed with XMM-Newton, Suzaku, and Chandra.

  9. High-order tracking differentiator based adaptive neural control of a flexible air-breathing hypersonic vehicle subject to actuators constraints.

    PubMed

    Bu, Xiangwei; Wu, Xiaoyan; Tian, Mingyan; Huang, Jiaqi; Zhang, Rui; Ma, Zhen

    2015-09-01

    In this paper, an adaptive neural controller based on a high-order tracking differentiator (HTD) is developed for a constrained flexible air-breathing hypersonic vehicle (FAHV). Using functional decomposition, the dynamic model is decomposed into a velocity subsystem and an altitude subsystem. For the velocity subsystem, a dynamic-inversion-based neural controller is constructed. By introducing the HTD to adaptively estimate the newly defined states generated during model transformation, a novel neural altitude controller, considerably simpler than those derived from back-stepping, is designed from the normal output-feedback form instead of the strict-feedback formulation. Based on a minimal-learning-parameter scheme, only two neural networks with two adaptive parameters are needed for neural approximation. In particular, a novel auxiliary system is explored to deal with the problem of control input constraints. Finally, simulation results are presented to test the effectiveness of the proposed control strategy in the presence of system uncertainties and actuator constraints.

  10. Cosmological constraints and comparison of viable f (R ) models

    NASA Astrophysics Data System (ADS)

    Pérez-Romero, Judit; Nesseris, Savvas

    2018-01-01

    In this paper we present cosmological constraints on several well-known f(R) models, and also on a new class of models that are variants of the Hu-Sawicki one, of the form f(R) = R - 2Λ/(1 + b y(R, Λ)), which interpolate between the cosmological constant model and a matter-dominated universe for different values of the parameter b. This parameter is usually expected to be small for viable models and in practice measures the deviation from general relativity. We use the latest growth-rate, cosmic microwave background, baryon acoustic oscillation, type Ia supernova and Hubble parameter data to place stringent constraints on the models and to compare them to the cosmological constant model as well as other viable f(R) models such as the Starobinsky or the degenerate hypergeometric models. We find that these Hu-Sawicki variant parametrizations are in general compatible with the currently available data and can provide useful toy models for exploring the available functional space of f(R) models, something very useful for the current and upcoming surveys that will test deviations from general relativity.

  11. A derivation of the Cramer-Rao lower bound of Euclidean parameters under equality constraints via score function

    NASA Astrophysics Data System (ADS)

    Susyanto, Nanang

    2017-12-01

    We propose a simple derivation of the Cramer-Rao Lower Bound (CRLB) of parameters under equality constraints from the CRLB without constraints in regular parametric models. When a regular parametric model and an equality constraint of the parameter are given, a parametric submodel can be defined by restricting the parameter under that constraint. The tangent space of this submodel is then computed with the help of the implicit function theorem. Finally, the score function of the restricted parameter is obtained by projecting the efficient influence function of the unrestricted parameter on the appropriate inner product spaces.
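For reference, the general closed form that such derivations recover (a standard result from the constrained-CRLB literature, e.g. Stoica & Ng, stated here for comparison rather than taken from this paper) can be written as:

```latex
% Let F(\theta) be the Fisher information, g(\theta) = 0 the equality constraint
% with Jacobian G(\theta) = \partial g / \partial \theta^{T}, and U(\theta) a matrix
% whose columns span the null space of G (so that G U = 0). Then any unbiased
% estimator satisfying the constraint obeys
\operatorname{Cov}\bigl(\hat{\theta}\bigr) \;\succeq\; U \left( U^{T} F(\theta)\, U \right)^{-1} U^{T},
% which reduces to the unconstrained bound F(\theta)^{-1} when there is no constraint.
```

The projection onto the null space of the constraint Jacobian plays the same role as the tangent space of the submodel in the abstract's derivation.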

  12. Cosmological implications of primordial black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luis Bernal, José; Bellomo, Nicola; Raccanelli, Alvise

    The possibility that a relevant fraction of the dark matter might be comprised of Primordial Black Holes (PBHs) has been seriously reconsidered after LIGO's detection of a ∼ 30 M {sub ⊙} binary black hole merger. Despite the strong interest in the model, there is a lack of studies on its possible cosmological implications and effects on cosmological parameter inference. We investigate correlations with the other standard cosmological parameters using cosmic microwave background observations, finding significant degeneracies, especially with the tilt of the primordial power spectrum and the sound horizon at radiation drag. However, these degeneracies can be greatly reduced with the inclusion of small-scale polarization data. We also explore whether PBHs as dark matter in simple extensions of the standard ΛCDM cosmological model induce extra degeneracies, especially between the additional parameters and the PBH ones. Finally, we present cosmic microwave background constraints on the fraction of dark matter in PBHs, not only for monochromatic PBH mass distributions but also for popular extended mass distributions. Our results show that constraints for extended mass distributions are tighter, but also that a considerable amount of constraining power comes from the high-ℓ polarization data. Moreover, we constrain the shape of such mass distributions in terms of the corresponding constraints on the PBH mass fraction.

  13. WMAP7 constraints on oscillations in the primordial power spectrum

    NASA Astrophysics Data System (ADS)

    Meerburg, P. Daniel; Wijers, Ralph A. M. J.; van der Schaar, Jan Pieter

    2012-03-01

    We use the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) data to place constraints on oscillations supplementing an almost scale-invariant primordial power spectrum. Such oscillations are predicted by a variety of models, some of which amount to assuming that there is some non-trivial choice of the vacuum state at the onset of inflation. In this paper, we will explore data-driven constraints on two distinct models of initial state modifications. In both models, the frequency, phase and amplitude are degrees of freedom of the theory for which the theoretical bounds are rather weak: both the amplitude and frequency have allowed values ranging over several orders of magnitude. This requires many computationally expensive evaluations of the model cosmic microwave background (CMB) spectra and their goodness of fit, even in a Markov chain Monte Carlo (MCMC), normally the most efficient fitting method for such a problem. To search more efficiently, we first run a densely-spaced grid, with only three varying parameters: the frequency, the amplitude and the baryon density. We obtain the optimal frequency and run an MCMC at the best-fitting frequency, randomly varying all other relevant parameters. To reduce the computational time of each power spectrum computation, we adjust both comoving momentum integration and spline interpolation (in l) as a function of frequency and amplitude of the primordial power spectrum. Applying this to the WMAP7 data allows us to improve existing constraints on the presence of oscillations. We confirm earlier findings that certain frequencies can improve the fitting over a model without oscillations. For those frequencies we compute the posterior probability, allowing us to put some constraints on the primordial parameter space of both models.
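The two-stage search strategy (a dense grid over the multimodal frequency direction, then an MCMC over the remaining parameters at the best-fitting frequency) can be sketched with a toy likelihood. Every function and number below is illustrative, not the paper's actual CMB likelihood:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy log-likelihood with an oscillatory term in frequency, standing in for the
# CMB goodness of fit; the functional form and constants are invented.
def loglike(freq, amp):
    return -0.5 * ((amp - 0.3) / 0.1) ** 2 + 0.5 * np.cos(freq - 2.0)

# Step 1: dense grid over frequency, the multimodal direction that MCMC
# alone samples poorly.
freqs = np.linspace(0, 10, 1001)
best_freq = freqs[np.argmax([loglike(f, 0.3) for f in freqs])]

# Step 2: Metropolis MCMC over the remaining parameter at the best frequency.
amp, samples = 0.0, []
for _ in range(5000):
    prop = amp + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < loglike(best_freq, prop) - loglike(best_freq, amp):
        amp = prop
    samples.append(amp)
post_mean = np.mean(samples[1000:])  # discard burn-in
```

The grid pins down the frequency mode cheaply; the chain then explores the well-behaved directions around it, which is the essence of the strategy described above.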

  14. Exploring JLA supernova data with improved flux-averaging technique

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Wen, Sixiang; Li, Miao

    2017-03-01

    In this work, we explore the cosmological consequences of the ``Joint Light-curve Analysis'' (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the Figure of Merit (FoM) criterion and considering six dark energy (DE) parameterizations, we search for the best FA recipe, i.e. the one that gives the tightest DE constraints in the (z_cut, Δz) plane, where z_cut and Δz are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z_cut and varying Δz, revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) the best FA recipe is (z_cut = 0.6, Δz = 0.06), which is insensitive to the specific DE parameterization; (2) flux-averaging JLA samples at z_cut ≥ 0.4 yields tighter DE constraints than the case without FA; (3) using FA can significantly reduce the redshift evolution of β; (4) the best FA recipe favors a larger fractional matter density Ωm. In summary, we present an alternative method of dealing with the JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.
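The core of the FA recipe is simple to sketch: keep low-redshift SNe individually and replace the high-redshift ones with bin-averaged points. The toy fluxes below are invented; the actual JLA recipe averages model-normalized fluxes with covariance weighting rather than raw values:

```python
import numpy as np

# Toy SN sample: redshifts and "fluxes" (illustrative stand-ins for JLA
# light-curve quantities).
rng = np.random.default_rng(2)
z = np.sort(rng.uniform(0.01, 1.3, 500))
flux = 1.0 / (1 + z) ** 2 + rng.normal(0, 0.01, 500)

def flux_average(z, flux, z_cut=0.6, dz=0.06):
    """Keep SNe below z_cut as-is; flux-average those above in bins of width dz."""
    keep = z < z_cut
    z_out, f_out = list(z[keep]), list(flux[keep])
    edges = np.arange(z_cut, z.max() + dz, dz)
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (z >= lo) & (z < hi)
        if m.any():
            z_out.append(z[m].mean())
            f_out.append(flux[m].mean())
    return np.array(z_out), np.array(f_out)

z_fa, f_fa = flux_average(z, flux)  # (z_cut = 0.6, dz = 0.06) is the paper's best recipe
```

Averaging within redshift bins suppresses scatter from effects uncorrelated with redshift, which is why it reduces certain SN systematics.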

  15. Mixtures of GAMs for habitat suitability analysis with overdispersed presence / absence data

    PubMed Central

    Pleydell, David R.J.; Chrétien, Stéphane

    2009-01-01

    A new approach to species distribution modelling based on unsupervised classification via a finite mixture of GAMs incorporating habitat suitability curves is proposed. A tailored EM algorithm is outlined for computing maximum likelihood estimates. Several submodels incorporating various parameter constraints are explored. Simulation studies confirm that, under certain constraints, the habitat suitability curves are recovered with good precision. The method is also applied to a set of real data concerning presence/absence of observable small mammal indices collected on the Tibetan plateau. The resulting classification was found to correspond to species-level differences in habitat preference described in previous ecological work. PMID:20401331
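The EM machinery can be illustrated with a deliberately simplified relative of the model: a two-component Binomial mixture over repeat-visit presence counts (no covariates, so no GAM smooths). All data and starting values below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 2000 sites, 10 repeat visits each; presence counts drawn from two
# latent habitat classes with detection probabilities 0.8 and 0.1.
n, visits = 2000, 10
cls = rng.uniform(size=n) < 0.4
y = np.where(cls, rng.binomial(visits, 0.8, n), rng.binomial(visits, 0.1, n))

# EM for a two-component Binomial mixture.
pi, p = 0.5, np.array([0.6, 0.3])   # initial mixing weight and detection probs
for _ in range(200):
    # E-step: responsibilities (binomial coefficients cancel in the ratio)
    l0 = pi * p[0] ** y * (1 - p[0]) ** (visits - y)
    l1 = (1 - pi) * p[1] ** y * (1 - p[1]) ** (visits - y)
    r = l0 / (l0 + l1)
    # M-step: closed-form updates of the weight and per-component probabilities
    pi = r.mean()
    p = np.array([np.sum(r * y) / (visits * np.sum(r)),
                  np.sum((1 - r) * y) / (visits * np.sum(1 - r))])
```

In the paper's model the M-step instead refits penalized GAM smooths weighted by the responsibilities, but the E-step/M-step alternation is the same.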

  16. Dark energy models through nonextensive Tsallis' statistics

    NASA Astrophysics Data System (ADS)

    Barboza, Edésio M.; Nunes, Rafael da C.; Abreu, Everton M. C.; Ananias Neto, Jorge

    2015-10-01

    The accelerated expansion of the Universe is one of the greatest challenges of modern physics. One candidate to explain this phenomenon is a new field called dark energy. In this work we have used the Tsallis nonextensive statistical formulation of the Friedmann equation to explore the Barboza-Alcaniz and Chevallier-Polarski-Linder parametric dark energy models and the Wang-Meng and Dalal vacuum decay models. After that, we have discussed the observational tests and the constraints concerning the Tsallis nonextensive parameter. Finally, we have described the dark energy physics through the role of the q-parameter.

  17. Constraints on Cosmological Parameters from the Angular Power Spectrum of a Combined 2500 deg2 SPT-SZ and Planck Gravitational Lensing Map

    NASA Astrophysics Data System (ADS)

    Simard, G.; Omori, Y.; Aylor, K.; Baxter, E. J.; Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Cho, H.-M.; Chown, R.; Crawford, T. M.; Crites, A. T.; de Haan, T.; Dobbs, M. A.; Everett, W. B.; George, E. M.; Halverson, N. W.; Harrington, N. L.; Henning, J. W.; Holder, G. P.; Hou, Z.; Holzapfel, W. L.; Hrubes, J. D.; Knox, L.; Lee, A. T.; Leitch, E. M.; Luong-Van, D.; Manzotti, A.; McMahon, J. J.; Meyer, S. S.; Mocanu, L. M.; Mohr, J. J.; Natoli, T.; Padin, S.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Sayre, J. T.; Schaffer, K. K.; Shirokoff, E.; Staniszewski, Z.; Stark, A. A.; Story, K. T.; Vanderlinde, K.; Vieira, J. D.; Williamson, R.; Wu, W. L. K.

    2018-06-01

    We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg2 of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the lensing power spectrum to a model including cold dark matter and a cosmological constant (ΛCDM), and to models with single-parameter extensions to ΛCDM. We find constraints that are comparable to and consistent with those found using the full-sky Planck CMB lensing data, e.g., σ8 Ωm^{0.25} = 0.598 ± 0.024 from the lensing data alone with weak priors placed on other parameters. Combining with primary CMB data, we explore single-parameter extensions to ΛCDM. We find Ωk = -0.012^{+0.021}_{-0.023} or Mν < 0.70 eV at 95% confidence, in good agreement with results including the lensing potential as measured by Planck. We include two parameters that scale the effect of lensing on the CMB: A_L, which scales the lensing power spectrum in both the lens reconstruction power and in the smearing of the acoustic peaks, and A_φφ, which scales only the amplitude of the lensing reconstruction power spectrum. We find A_φφ × A_L = 1.01 ± 0.08 for the lensing map made from combined SPT and Planck data, indicating that the amount of lensing is in excellent agreement with expectations from the observed CMB angular power spectrum when not including the information from smearing of the acoustic peaks.

  18. Direct constraints on minimal supersymmetry from Fermi-LAT observations of the dwarf galaxy Segue 1

    DOE PAGES

    Scott, Pat; Conrad, Jan; Edsjö, Joakim; ...

    2010-01-26

    The dwarf galaxy Segue 1 is one of the most promising targets for the indirect detection of dark matter. We examine what constraints 9 months of Fermi-LAT gamma-ray observations of Segue 1 place upon the Constrained Minimal Supersymmetric Standard Model (CMSSM), with the lightest neutralino as the dark matter particle. We also use nested sampling to explore the CMSSM parameter space, simultaneously fitting other relevant constraints from accelerator bounds, the relic density, electroweak precision observables, the anomalous magnetic moment of the muon and B-physics. We include spectral and spatial fits to the Fermi observations, a full treatment of the instrumental response and its related uncertainty, and detailed background models. We also perform an extrapolation to 5 years of observations, assuming no signal is observed from Segue 1 in that time. Our results marginally disfavour models with low neutralino masses and high annihilation cross-sections. Virtually all of these models are however already disfavoured by existing experimental or relic density constraints.

  19. Natural implementation of neutralino dark matter

    NASA Astrophysics Data System (ADS)

    King, Steve F.; Roberts, Jonathan P.

    2006-09-01

    The prediction of neutralino dark matter is generally regarded as one of the successes of the Minimal Supersymmetric Standard Model (MSSM). However the successful regions of parameter space allowed by WMAP and collider constraints are quite restricted. We discuss fine-tuning with respect to both dark matter and Electroweak Symmetry Breaking (EWSB) and explore regions of MSSM parameter space with non-universal gaugino and third family scalar masses in which neutralino dark matter may be implemented naturally. In particular allowing non-universal gauginos opens up the bulk region that allows Bino annihilation via t-channel slepton exchange, leading to ``supernatural dark matter'' corresponding to no fine-tuning at all with respect to dark matter. By contrast we find that the recently proposed ``well tempered neutralino'' regions involve substantial fine-tuning of MSSM parameters in order to satisfy the dark matter constraints, although the fine tuning may be ameliorated if several annihilation channels act simultaneously. Although we have identified regions of ``supernatural dark matter'' in which there is no fine tuning to achieve successful dark matter, the usual MSSM fine tuning to achieve EWSB always remains.

  20. A Method of Trajectory Design for Manned Asteroid Explorations

    NASA Astrophysics Data System (ADS)

    Gan, Qing-Bo; Zhang, Yang; Zhu, Zheng-Fan; Han, Wei-Hua; Dong, Xin

    2015-07-01

    A trajectory optimization method for nuclear-electric propulsion manned asteroid exploration missions is presented. For launches between 2035 and 2065, based on the two-pulse single-cycle Lambert transfer orbit, the phases of departure from and return to the Earth are searched first. Then the optimal flight trajectory is selected by pruning the flight sequences in two feasible regions. Setting the flight strategy of propelling-taxiing-propelling, and taking the minimal fuel consumption as the performance index, the nuclear-electric propulsion flight trajectory is optimized using a hybrid method. Finally, taking the segmentally optimized parameters as initial values, and in accordance with the overall mission constraints, the globally optimized parameters are obtained. Numerical and graphical results are presented as well.

  1. Exploring JLA supernova data with improved flux-averaging technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Shuang; Wen, Sixiang; Li, Miao, E-mail: wangshuang@mail.sysu.edu.cn, E-mail: wensx@mail2.sysu.edu.cn, E-mail: limiao9@mail.sysu.edu.cn

    2017-03-01

    In this work, we explore the cosmological consequences of the ''Joint Light-curve Analysis'' (JLA) supernova (SN) data by using an improved flux-averaging (FA) technique, in which only the type Ia supernovae (SNe Ia) at high redshift are flux-averaged. Adopting the Figure of Merit (FoM) criterion and considering six dark energy (DE) parameterizations, we search for the best FA recipe that gives the tightest DE constraints in the ( z {sub cut}, Δ z ) plane, where z {sub cut} and Δ z are the redshift cut-off and redshift interval of FA, respectively. Then, based on the best FA recipe obtained, we discuss the impacts of varying z {sub cut} and varying Δ z , revisit the evolution of the SN color luminosity parameter β, and study the effects of adopting different FA recipes on parameter estimation. We find that: (1) the best FA recipe is ( z {sub cut} = 0.6, Δ z = 0.06), which is insensitive to the specific DE parameterization; (2) flux-averaging JLA samples at z {sub cut} ≥ 0.4 will yield tighter DE constraints than the case without FA; (3) using FA can significantly reduce the redshift evolution of β; (4) the best FA recipe favors a larger fractional matter density Ω {sub m}. In summary, we present an alternative method of dealing with the JLA data, which can reduce the systematic uncertainties of SNe Ia and give tighter DE constraints at the same time. Our method will be useful in the use of SNe Ia data for precision cosmology.

  2. Power spectrum constraints from spectral distortions in the cosmic microwave background

    NASA Technical Reports Server (NTRS)

    Hu, Wayne; Scott, Douglas; Silk, Joseph

    1994-01-01

    Using recent experimental limits on chemical potential distortions from the Cosmic Background Explorer (COBE) Far InfraRed Absolute Spectrophotometer (FIRAS), and the large lever arm spanning the damping of sub-Jeans-scale fluctuations to the COBE DMR fluctuations, we set a constraint on the slope of the primordial power spectrum n. It is possible to analytically calculate the contribution over the full range of scales and redshifts, correctly taking into account fluctuation growth and damping as well as thermalization processes. Assuming conservatively that μ < 1.76 × 10^{-4}, we find that the 95% upper limit on n is only weakly dependent on other cosmological parameters, e.g., n < 1.60 (h = 0.5) and n < 1.63 (h = 1.0) for Ω_0 = 1, with marginally weaker constraints for Ω_0 < 1 in a flat model with a cosmological constant.

  3. Cosmological Constraint on Brans-Dicke Theory

    NASA Astrophysics Data System (ADS)

    Chen, Xuelei; Wu, Fengquan

    We develop the covariant formalism of cosmological perturbation theory for Brans-Dicke gravity, and use it to calculate the cosmic microwave background (CMB) anisotropy and large scale structure (LSS) power spectrum. We introduce a new parameter ζ related to the Brans-Dicke parameter by ζ = ln(1/ω + 1), and use the Markov-Chain Monte Carlo (MCMC) method to explore the parameter space. Using the latest CMB data published by the WMAP, ACBAR, CBI and Boomerang teams, and the LSS data from the SDSS survey DR4, we find that the 2σ (95.5%) bound on ζ is about |ζ| < 10^{-2}, or |ω| > 10^2; the precise limit depends somewhat on the prior used.
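The reparameterization is a monotonic map, so bounds on one parameter translate directly into bounds on the other. A tiny helper (function names are ours, not the paper's) makes the correspondence between small ζ and large ω explicit:

```python
import math

def zeta_from_omega(omega):
    """zeta = ln(1/omega + 1), the paper's reparameterization of Brans-Dicke omega."""
    return math.log(1.0 / omega + 1.0)

def omega_from_zeta(zeta):
    """Inverse map: omega = 1 / (exp(zeta) - 1)."""
    return 1.0 / (math.exp(zeta) - 1.0)

# omega = 10^2 gives zeta = ln(1.01) ~ 0.00995, just under 10^-2, so an upper
# bound |zeta| < 10^-2 corresponds to roughly |omega| > 10^2.
```

Working in ζ is convenient for sampling because the general-relativity limit ω → ∞ maps to the finite point ζ = 0.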

  4. γ parameter and Solar System constraint in chameleon-Brans-Dicke theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saaidi, Kh.; Mohammadi, A.; Sheikhahmadi, H.

    2011-05-15

    The post-Newtonian parameter γ is considered in the chameleon-Brans-Dicke model. In the first step, the general form of this parameter and also the effective gravitational constant are obtained. An arbitrary function f(Φ), which encodes the coupling between matter and the scalar field, is introduced to investigate the validity of the Solar System constraint. It is shown that the chameleon-Brans-Dicke model can satisfy the Solar System constraint and gives an ω parameter of order 10^4, which is comparable to the constraint indicated in [19].

  5. Towards a rational theory for CFD global stability

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Iannelli, G. S.

    1989-01-01

    The fundamental notion of the consistent stability of semidiscrete analogues of evolution PDEs is explored. Lyapunov's direct method is used to develop CFD semidiscrete algorithms which yield the TVD constraint as a special case. A general formula for supplying dissipation parameters for arbitrary multidimensional conservation law systems is proposed. The reliability of the method is demonstrated by the results of two numerical tests for representative Euler shocked flows.

  6. Observational Constraints on the Nature of the Dark Energy: First Cosmological Results From the ESSENCE Supernova Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood-Vasey, W.Michael; Miknaitis, G.; Stubbs, C.W.

    We present constraints on the dark energy equation-of-state parameter, w = P/({rho}c{sup 2}), using 60 Type Ia supernovae (SNe Ia) from the ESSENCE supernova survey. We derive a set of constraints on the nature of the dark energy assuming a flat Universe. By including constraints on ({Omega}{sub M}, w) from baryon acoustic oscillations, we obtain a value for a static equation-of-state parameter w = -1.05{sub -0.12}{sup +0.13} (stat 1{sigma}) {+-} 0.13 (sys) and {Omega}{sub M} = 0.274{sub -0.020}{sup +0.033} (stat 1{sigma}) with a best-fit {chi}{sup 2}/DoF of 0.96. These results are consistent with those reported by the SuperNova Legacy Survey in a similar program measuring supernova distances and redshifts. We evaluate sources of systematic error that afflict supernova observations and present Monte Carlo simulations that explore these effects. The largest systematic currently with the potential to affect our measurements is the treatment of extinction due to dust in the supernova host galaxies. Combining our set of ESSENCE SNe Ia with the SuperNova Legacy Survey SNe Ia, we obtain a joint constraint of w = -1.07{sub -0.09}{sup +0.09} (stat 1{sigma}) {+-} 0.13 (sys), {Omega}{sub M} = 0.267{sub -0.018}{sup +0.028} (stat 1{sigma}) with a best-fit {chi}{sup 2}/DoF of 0.91. The current SNe Ia data are fully consistent with a cosmological constant.

  7. Cosmological constraints from the CFHTLenS shear measurements using a new, accurate, and flexible way of predicting non-linear mass clustering

    NASA Astrophysics Data System (ADS)

    Angulo, Raul E.; Hilbert, Stefan

    2015-03-01

    We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ˜10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ˜5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.

  8. Exploring cosmic origins with CORE: Cosmological parameters

    NASA Astrophysics Data System (ADS)

    Di Valentino, E.; Brinckmann, T.; Gerbino, M.; Poulin, V.; Bouchet, F. R.; Lesgourgues, J.; Melchiorri, A.; Chluba, J.; Clesse, S.; Delabrouille, J.; Dvorkin, C.; Forastieri, F.; Galli, S.; Hooper, D. C.; Lattanzi, M.; Martins, C. J. A. P.; Salvati, L.; Cabass, G.; Caputo, A.; Giusarma, E.; Hivon, E.; Natoli, P.; Pagano, L.; Paradiso, S.; Rubiño-Martin, J. A.; Achúcarro, A.; Ade, P.; Allison, R.; Arroja, F.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartolo, N.; Bartlett, J. G.; Basak, S.; Baumann, D.; de Bernardis, P.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Borrill, J.; Boulanger, F.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C. S.; Castellano, G.; Challinor, A.; Charles, I.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; De Petris, M.; De Zotti, G.; Diego, J. M.; Errard, J.; Feeney, S.; Fernandez-Cobos, R.; Ferraro, S.; Finelli, F.; de Gasperis, G.; Génova-Santos, R. T.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Handley, W.; Hazra, D. K.; Hernández-Monteagudo, C.; Hervias-Caimapo, C.; Hills, M.; Kiiveri, K.; Kisner, T.; Kitching, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lewis, A.; Liguori, M.; Lindholm, V.; Lopez-Caniego, M.; Luzzi, G.; Maffei, B.; Martin, S.; Martinez-Gonzalez, E.; Masi, S.; Matarrese, S.; McCarthy, D.; Melin, J.-B.; Mohr, J. J.; Molinari, D.; Monfardini, A.; Negrello, M.; Notari, A.; Paiella, A.; Paoletti, D.; Patanchon, G.; Piacentini, F.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Quartin, M.; Remazeilles, M.; Roman, M.; Ringeval, C.; Tartari, A.; Tomasi, M.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Väliviita, J.; van de Weygaert, R.; Van Tent, B.; Vennin, V.; Vermeulen, G.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.

    2018-04-01

    We forecast the main cosmological parameter constraints achievable with the CORE space mission which is dedicated to mapping the polarisation of the Cosmic Microwave Background (CMB). CORE was recently submitted in response to ESA's fifth call for medium-sized mission proposals (M5). Here we report the results from our pre-submission study of the impact of various instrumental options, in particular the telescope size and sensitivity level, and review the great, transformative potential of the mission as proposed. Specifically, we assess the impact on a broad range of fundamental parameters of our Universe as a function of the expected CMB characteristics, with other papers in the series focusing on controlling astrophysical and instrumental residual systematics. In this paper, we assume that only a few central CORE frequency channels are usable for our purpose, all others being devoted to the cleaning of astrophysical contaminants. On the theoretical side, we assume ΛCDM as our general framework and quantify the improvement provided by CORE over the current constraints from the Planck 2015 release. We also study the joint sensitivity of CORE and of future Baryon Acoustic Oscillation and Large Scale Structure experiments like DESI and Euclid. Specific constraints on the physics of inflation are presented in another paper of the series. In addition to the six parameters of the base ΛCDM, which describe the matter content of a spatially flat universe with adiabatic and scalar primordial fluctuations from inflation, we derive the precision achievable on parameters like those describing curvature, neutrino physics, extra light relics, primordial helium abundance, dark matter annihilation, recombination physics, variation of fundamental constants, dark energy, modified gravity, reionization and cosmic birefringence. 
In addition to assessing the improvement in the precision of individual parameters, we also forecast the post-CORE overall reduction of the allowed parameter space, with figures of merit for various models increasing by as much as ~10^7 compared to Planck 2015, and ~10^5 with respect to Planck 2015 + future BAO measurements.
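    One common reading of such a figure of merit (an assumption here, not necessarily the exact definition used in the CORE forecast) is the inverse volume of the parameter confidence ellipsoid, i.e. 1/sqrt(det C) for a parameter covariance matrix C. A toy sketch:

```python
import numpy as np

def figure_of_merit(cov):
    """Inverse square root of the determinant of a parameter covariance
    matrix: proportional to 1/volume of the confidence ellipsoid, so a
    tighter experiment has a larger figure of merit."""
    return 1.0 / np.sqrt(np.linalg.det(np.asarray(cov)))

# Toy example with 6 uncorrelated parameters: halving every marginal
# uncertainty boosts the figure of merit by 2**6 = 64.
cov_old = np.diag([0.01, 0.02, 0.05, 0.1, 0.03, 0.04])**2
cov_new = np.diag([0.005, 0.01, 0.025, 0.05, 0.015, 0.02])**2
print(figure_of_merit(cov_new) / figure_of_merit(cov_old))  # ratio ~ 64
```

    This scaling is why figure-of-merit gains of ~10^7 are plausible for many-parameter models even when each individual constraint improves by a more modest factor.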

  9. The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: Cosmological implications of the configuration-space clustering wedges

    NASA Astrophysics Data System (ADS)

    Sánchez, Ariel G.; Scoccimarro, Román; Crocce, Martín; Grieb, Jan Niklas; Salazar-Albornoz, Salvador; Dalla Vecchia, Claudio; Lippich, Martha; Beutler, Florian; Brownstein, Joel R.; Chuang, Chia-Hsun; Eisenstein, Daniel J.; Kitaura, Francisco-Shu; Olmstead, Matthew D.; Percival, Will J.; Prada, Francisco; Rodríguez-Torres, Sergio; Ross, Ashley J.; Samushia, Lado; Seo, Hee-Jong; Tinker, Jeremy; Tojeiro, Rita; Vargas-Magaña, Mariana; Wang, Yuting; Zhao, Gong-Bo

    2017-01-01

    We explore the cosmological implications of anisotropic clustering measurements in configuration space of the final galaxy samples from Data Release 12 of the Sloan Digital Sky Survey III Baryon Oscillation Spectroscopic Survey. We implement a new detailed modelling of the effects of non-linearities, bias and redshift-space distortions that can be used to extract unbiased cosmological information from our measurements for scales s ≳ 20 h^-1 Mpc. We combined the information from the Baryon Oscillation Spectroscopic Survey (BOSS) with the latest cosmic microwave background (CMB) observations and Type Ia supernovae samples and found no significant evidence for a deviation from the Λ cold dark matter (ΛCDM) cosmological model. In particular, these data sets constrain the dark energy equation-of-state parameter to wDE = -0.996 ± 0.042 when it is assumed to be time-independent, the curvature of the Universe to Ωk = -0.0007 ± 0.0030 and the sum of the neutrino masses to ∑mν < 0.25 eV at 95 per cent confidence level. We explore the constraints on the growth rate of cosmic structures assuming f(z) = Ωm(z)^γ and obtain γ = 0.609 ± 0.079, in good agreement with the prediction of general relativity, γ = 0.55. We compress the information of our clustering measurements into constraints on the parameter combinations DV(z)/rd, FAP(z) and fσ8(z) at zeff = 0.38, 0.51 and 0.61, with their respective covariance matrices, and find good agreement with the predictions for these parameters obtained from the best-fitting ΛCDM model to the CMB data from the Planck satellite. This paper is part of a set that analyses the final galaxy clustering data set from BOSS. The measurements and likelihoods presented here are combined with others by Alam et al. to produce the final cosmological constraints from BOSS.
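    The growth-rate parametrization quoted above, f(z) = Ωm(z)^γ, is simple to evaluate. A minimal sketch for flat ΛCDM follows; the fiducial Ωm0 below is an illustrative assumption, not the BOSS best fit.

```python
import numpy as np

def growth_rate(z, omega_m0=0.31, gamma=0.55):
    """Growth rate f(z) = Omega_m(z)**gamma for flat LambdaCDM.
    gamma = 0.55 is the general-relativity prediction cited above."""
    e2 = omega_m0 * (1 + z)**3 + (1 - omega_m0)   # E(z)^2 for flat LCDM
    omega_m_z = omega_m0 * (1 + z)**3 / e2        # matter fraction at z
    return omega_m_z**gamma

# Compare GR (gamma = 0.55) with the BOSS best fit gamma = 0.609 at the
# three effective redshifts of the clustering wedges:
for z in (0.38, 0.51, 0.61):
    print(z, growth_rate(z, gamma=0.55), growth_rate(z, gamma=0.609))
```

    Since Ωm(z) < 1 at these redshifts, a larger γ gives a smaller f(z), which is what makes fσ8(z) measurements sensitive to modified-gravity deviations from γ = 0.55.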

  10. New constraints on Mars rotation determined from radiometric tracking of the Opportunity Mars Exploration Rover

    NASA Astrophysics Data System (ADS)

    Kuchynka, Petr; Folkner, William M.; Konopliv, Alex S.; Parker, Timothy J.; Park, Ryan S.; Le Maistre, Sebastien; Dehant, Veronique

    2014-02-01

    The Opportunity Mars Exploration Rover remained stationary between January and May 2012 in order to conserve solar energy for running its survival heaters during martian winter. While stationary, extra Doppler tracking was performed in order to allow an improved estimate of the martian precession rate. In this study, we determine Mars rotation by combining the new Opportunity tracking data with historic tracking data from the Viking and Pathfinder landers and tracking data from Mars orbiters (Mars Global Surveyor, Mars Odyssey and Mars Reconnaissance Orbiter). The estimated rotation parameters are stable in cross-validation tests and compare well with previously published values. In particular, the Mars precession rate is estimated to be -7606.1 ± 3.5 mas/yr. A representation of Mars rotation as a series expansion based on the determined rotation parameters is provided.

  11. Constraints on the symmetry energy from neutron star observations

    NASA Astrophysics Data System (ADS)

    Newton, W. G.; Gearheart, M.; Wen, De-Hua; Li, Bao-An

    2013-03-01

    The modeling of many neutron star observables incorporates the microphysics of both the stellar crust and core, which is tied intimately to the properties of the nuclear matter equation of state (EoS). We explore the predictions of such models over the range of experimentally constrained nuclear matter parameters, focusing on the slope of the symmetry energy at nuclear saturation density L. We use a consistent model of the composition and EoS of neutron star crust and core matter to model the binding energy of pulsar B of the double pulsar system J0737-3039, the frequencies of torsional oscillations of the neutron star crust and the instability region for r-modes in the neutron star core damped by electron-electron viscosity at the crust-core interface. By confronting these models with observations, we illustrate the potential of astrophysical observables to offer constraints on poorly known nuclear matter parameters complementary to terrestrial experiments, and demonstrate that our models consistently predict L < 70 MeV.

  12. Not-so-well-tempered neutralino

    NASA Astrophysics Data System (ADS)

    Profumo, Stefano; Stefaniak, Tim; Stephenson-Haskins, Laurel

    2017-09-01

    Light electroweakinos, the neutral and charged fermionic supersymmetric partners of the standard model SU(2)×U(1) gauge bosons and of the two SU(2) Higgs doublets, are an important target for searches for new physics with the Large Hadron Collider (LHC). However, if the lightest neutralino is the dark matter, constraints from direct dark matter detection experiments rule out large swaths of the parameter space accessible to the LHC, including in large part the so-called "well-tempered" neutralinos. We focus on the minimal supersymmetric standard model (MSSM) and explore in detail which regions of parameter space are not excluded by null results from direct dark matter detection, assuming exclusive thermal production of neutralinos in the early universe, and illustrate the complementarity with current and future LHC searches for electroweak gauginos. We consider both bino-Higgsino and bino-wino "not-so-well-tempered" neutralinos, i.e. we include models where the lightest neutralino constitutes only part of the cosmological dark matter, with the consequent suppression of the constraints from direct and indirect dark matter searches.

  13. Influence of Constraint in Parameter Space on Quantum Games

    NASA Astrophysics Data System (ADS)

    Zhao, Hai-Jun; Fang, Xi-Ming

    2004-04-01

    We study the influence of the constraint in the parameter space on quantum games. Decomposing SU(2) operator into product of three rotation operators and controlling one kind of them, we impose a constraint on the parameter space of the players' operator. We find that the constraint can provide a tuner to make the bilateral payoffs equal, so that the mismatch of the players' action at multi-equilibrium could be avoided. We also find that the game exhibits an intriguing structure as a function of the parameter of the controlled operators, which is useful for making game models.
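    The decomposition described above can be sketched as the standard z-y-z Euler factorization of an SU(2) operator; this is a hedged illustration, and the paper's exact parametrization and choice of controlled factor may differ.

```python
import numpy as np

def rz(a):
    """SU(2) rotation about the z axis by angle a."""
    return np.array([[np.exp(-1j * a / 2), 0],
                     [0, np.exp(1j * a / 2)]])

def ry(t):
    """SU(2) rotation about the y axis by angle t."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def su2(alpha, theta, beta):
    """Euler (z-y-z) product U = Rz(alpha) Ry(theta) Rz(beta).
    Constraining one factor (e.g. fixing beta) restricts the player's
    strategy space, which is the kind of constraint studied above."""
    return rz(alpha) @ ry(theta) @ rz(beta)

U = su2(0.3, 1.1, -0.7)
print(np.allclose(U @ U.conj().T, np.eye(2)))   # unitary
print(np.isclose(np.linalg.det(U), 1.0))        # det = 1, so U is in SU(2)
```

    Fixing one of the three angles reduces the three-parameter strategy space to two parameters, which is the "tuner" mechanism the abstract refers to.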

  14. Reproduction in the space environment: Part II. Concerns for human reproduction

    NASA Technical Reports Server (NTRS)

    Jennings, R. T.; Santy, P. A.

    1990-01-01

    Long-duration space flight and eventual colonization of our solar system will require successful control of reproductive function and a thorough understanding of factors unique to space flight and their impact on gynecologic and obstetric parameters. Part II of this paper examines the specific environmental factors associated with space flight and the implications for human reproduction. Space environmental hazards discussed include radiation, alteration in atmospheric pressure and breathing gas partial pressures, prolonged toxicological exposure, and microgravity. The effects of countermeasures necessary to reduce cardiovascular deconditioning, calcium loss, muscle wasting, and neurovestibular problems are also considered. In addition, the impact of microgravity on male fertility and gamete quality is explored. Due to current constraints, human pregnancy is now contraindicated for space flight. However, a program to explore effective countermeasures to current constraints and develop the required health care delivery capability for extended-duration space flight is suggested. A program of Earth- and space-based research to provide further answers to reproductive questions is suggested.

  15. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models.

    PubMed

    Cotten, Cameron; Reed, Jennifer L

    2013-01-30

    Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. 
We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.
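    The way kinetically derived flux limits tighten a constraint-based prediction can be sketched with a toy flux-balance problem; the network, bounds, and numbers below are illustrative, not the E. coli model from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Toy linear network:  -> A (v1),  A -> B (v2),  B -> biomass (v3).
# Rows of S are metabolites A and B; steady state requires S v = 0.
S = np.array([[1, -1,  0],
              [0,  1, -1]], dtype=float)

def max_biomass(v2_upper):
    """Maximize v3 subject to S v = 0 and flux bounds. v2_upper plays
    the role of a kinetically derived flux limit on reaction 2."""
    bounds = [(0, 10), (0, v2_upper), (0, 10)]
    res = linprog(c=[0, 0, -1],           # linprog minimizes, so use -v3
                  A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun

print(max_biomass(10.0))   # no kinetic limit binds: optimum is 10
print(max_biomass(3.5))    # the kinetic limit propagates to biomass: 3.5
```

    Because mass balance forces v1 = v2 = v3 here, any kinetic ceiling on v2 immediately caps the predicted biomass flux, which is the mechanism by which the kinetically derived limits improved the constraint-based predictions above.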

  16. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models

    PubMed Central

    2013-01-01

    Background Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. Results In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. Conclusions This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. 
We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets. PMID:23360254

  17. A model with isospin doublet U(1)D gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-05-01

    We propose a model with an extra isospin doublet U(1)D gauge symmetry, in which we introduce several extra fermions with odd parity under a discrete Z2 symmetry in order to cancel the gauge anomalies out. A remarkable issue is that we impose nonzero U(1)D charge to the Standard Model Higgs, and it gives the most stringent constraint to the vacuum expectation value of a scalar field breaking the U(1)D symmetry that is severer than the LEP bound. We then explore relic density of a Majorana dark matter candidate without conflict of constraints from lepton flavor violating processes. A global analysis is carried out to search for parameters which can accommodate with the observed data.

  18. Discovery Planetary Mission Operations Concepts

    NASA Technical Reports Server (NTRS)

    Coffin, R.

    1994-01-01

    The NASA Discovery Program of small planetary missions will provide opportunities to continue scientific exploration of the solar system in today's cost-constrained environment. Using a multidisciplinary team, JPL has developed plans to provide mission operations within the financial parameters established by the Discovery Program. This paper describes experiences and methods that show promise of allowing the Discovery missions to operate within the program cost constraints while maintaining low mission risk, high data quality, and responsive operations.

  19. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, or in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l-infinity formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
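    For a monotone (e.g. affine) requirement g(p) ≤ 0 over a hyper-rectangular uncertainty set, the hard-constraint check reduces to the vertices of the box, and a soft-constraint failure probability can be estimated by sampling. A minimal sketch under those assumptions; the helper names and the example g are hypothetical, not the paper's formulation.

```python
import itertools
import numpy as np

def hard_constraint_ok(g, lo, hi):
    """For an affine g, the worst case over the box [lo, hi] is attained
    at a vertex, so checking all 2**d vertices certifies g(p) <= 0."""
    return all(g(np.array(v)) <= 0
               for v in itertools.product(*zip(lo, hi)))

def soft_failure_prob(g, lo, hi, n=100_000, seed=0):
    """Monte Carlo estimate of P[g(p) > 0] for p uniform on [lo, hi]."""
    rng = np.random.default_rng(seed)
    p = rng.uniform(lo, hi, size=(n, len(lo)))
    return float(np.mean([g(pi) > 0 for pi in p]))

g = lambda p: p[0] + 2 * p[1] - 2.5       # example requirement: g(p) <= 0
lo, hi = [0.0, 0.0], [1.0, 1.0]
print(hard_constraint_ok(g, lo, hi))      # False: vertex (1, 1) gives 0.5 > 0
print(soft_failure_prob(g, lo, hi))       # near 0.0625, the failure-corner area
```

    The contrast mirrors the paper's distinction: the hard check is conclusive for every realization in the box, while the soft estimate quantifies how often the violated corner is actually reached.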

  20. Exploring the relationship between posttraumatic growth, cognitive processing, psychological distress, and social constraints in a sample of breast cancer patients.

    PubMed

    Koutrouli, Natalia; Anagnostopoulos, Fotios; Griva, Fay; Gourounti, Kleanthi; Kolokotroni, Filippa; Efstathiou, Vasia; Mellon, Robert; Papastylianou, Dona; Niakas, Dimitris; Potamianos, Gregory

    2016-01-01

    Posttraumatic growth (the perception of positive life changes after an encounter with a trauma) often occurs among breast cancer patients and can be influenced by certain demographic, medical, and psychosocial parameters. Social constraints on disclosure (the deprivation of the opportunity to express feelings and thoughts regarding the trauma) and the cognitive processing of the disease seem to be involved in the development of posttraumatic growth. Through the present study the authors aim to: investigate the levels of posttraumatic growth in a sample of 202 women with breast cancer in Greece, explore the relationships between posttraumatic growth and particular demographic, medical, and psychosocial variables according to a proposed model, and test the role of social constraints in the relationship between automatic and deliberate cognitive processing of the trauma. The results showed that posttraumatic growth was evident in the majority of the sample and was associated inversely with age at diagnosis (β = -0.174, p < .05) and psychological distress (β = -0.394, p = .001), directly with time since diagnosis (β = 0.181, p < .05), and indirectly with intrusions and psychological distress, through reflective rumination (β = 0.323, p = .001). Social constraints were found to moderate the relationship between intrusions and reflective rumination. Implications of the results and suggestions for future research and practice are outlined.

  1. Exploring the hyperchargeless Higgs triplet model up to the Planck scale

    NASA Astrophysics Data System (ADS)

    Khan, Najimuddin

    2018-04-01

    We examine an extension of the SM Higgs sector by a Higgs triplet taking into consideration the discovery of a Higgs-like particle at the LHC with mass around 125 GeV. We evaluate the bounds on the scalar potential through the unitarity of the scattering matrix. Considering the cases with and without Z_2-symmetry of the extra triplet, we derive constraints on the parameter space. We identify the region of the parameter space that corresponds to the stability and metastability of the electroweak vacuum. We also show that at large field values the scalar potential of this model is suitable to explain inflation.

  2. Gravitational-wave stochastic background from cosmic strings.

    PubMed

    Siemens, Xavier; Mandic, Vuk; Creighton, Jolien

    2007-03-16

    We consider the stochastic background of gravitational waves produced by a network of cosmic strings and assess their accessibility to current and planned gravitational wave detectors, as well as to big bang nucleosynthesis (BBN), cosmic microwave background (CMB), and pulsar timing constraints. We find that current data from interferometric gravitational wave detectors, such as Laser Interferometer Gravitational Wave Observatory (LIGO), are sensitive to areas of parameter space of cosmic string models complementary to those accessible to pulsar, BBN, and CMB bounds. Future more sensitive LIGO runs and interferometers such as Advanced LIGO and Laser Interferometer Space Antenna (LISA) will be able to explore substantial parts of the parameter space.

  3. Termites: a Retinex implementation based on a colony of agents

    NASA Astrophysics Data System (ADS)

    Simone, Gabriele; Audino, Giuseppe; Farup, Ivar; Rizzi, Alessandro

    2012-01-01

    This paper describes a novel implementation of the Retinex algorithm in which the exploration of the image is done by an ant swarm. Here the purpose of the ant colony is not the optimization of some constraint but rather an alternative way to explore the image content as diffusely as possible, with the possibility of tuning the exploration parameters to the image content so as to better approach the behavior of the Human Visual System. For this reason, we use "termites" instead of ants, to underline the idea of eager exploration of the image. The paper presents the spatial characteristics of locality and discusses differences in path exploration with respect to other Retinex implementations. Furthermore, a psychophysical experiment has been carried out on eight images with 20 observers; results indicate that a termite swarm should investigate a particular region of an image to find the local reference white.

  4. Testing for Lorentz violation: constraints on standard-model-extension parameters via lunar laser ranging.

    PubMed

    Battat, James B R; Chandler, John F; Stubbs, Christopher W

    2007-12-14

    We present constraints on violations of Lorentz invariance based on archival lunar laser-ranging (LLR) data. LLR measures the Earth-Moon separation by timing the round-trip travel of light between the two bodies and is currently accurate to the equivalent of a few centimeters (parts in 10^11 of the total distance). By analyzing these LLR data under the standard-model extension (SME) framework, we derived six observational constraints on dimensionless SME parameters that describe potential Lorentz violation. We found no evidence for Lorentz violation at the 10^-6 to 10^-11 level in these parameters. This work constitutes the first LLR constraints on SME parameters.

  5. Rephasing invariant parametrization of flavor mixing

    NASA Astrophysics Data System (ADS)

    Lee, Tae-Hun

    A new rephasing invariant parametrization for the 3×3 CKM matrix, called the (x, y) parametrization, is introduced, and the properties and applications of the parametrization are discussed. The overall phase condition leaves this parametrization with only six rephasing invariant parameters and two constraints. Its simplicity and regularity become apparent when it is applied to the one-loop RGE (renormalization group equations) for the Yukawa couplings. The implications of this parametrization for unification of the Yukawa couplings are also explored.

  6. Exploring the Diffuse X-ray Emission of Supernova Remnant Kesteven 69 with XMM-Newton

    NASA Astrophysics Data System (ADS)

    Seo, Kyoung-Ae; Hui, Chung Yue

    2013-06-01

    We have investigated the X-ray emission from the shock-heated plasma of the Galactic supernova remnant Kesteven 69 with XMM-Newton. Assuming the plasma is in collisional ionization equilibrium, imaging spectroscopy yields a plasma temperature of kT ≈ 0.62 keV and an absorbing column of NH ≈ 2.85 × 10^22 cm^-2. Together with the deduced emission measure, we place constraints on its Sedov parameters.
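    The Sedov parameters mentioned above follow from the standard Sedov-Taylor relations for the adiabatic phase of a remnant. A rough sketch is below; the remnant radius and ambient density are illustrative placeholders, not the Kes 69 measurements, and only the temperature matches the value quoted above.

```python
import numpy as np

# Sedov-Taylor relations (adiabatic phase, gamma = 5/3): turn a remnant
# radius and post-shock temperature into an age and explosion energy.
K_B = 1.380649e-16      # Boltzmann constant, erg/K
M_H = 1.6726e-24        # hydrogen mass, g
MU = 0.6                # mean molecular weight, fully ionized plasma

def sedov_age_energy(radius_pc, kT_keV, n0_cm3):
    r = radius_pc * 3.086e18                      # radius in cm
    T = kT_keV * 1.1605e7                         # K (1 keV ~ 1.16e7 K)
    v_s = np.sqrt(16 * K_B * T / (3 * MU * M_H))  # strong-shock T_s -> v_s
    t = 2 * r / (5 * v_s)                         # R ~ t^{2/5} => v = 2R/(5t)
    rho0 = 1.4 * M_H * n0_cm3                     # ambient density (He included)
    E = (r / 1.15)**5 * rho0 / t**2               # invert R = 1.15 (E t^2 / rho)^{1/5}
    return t / 3.156e7, E                         # age in yr, energy in erg

age_yr, E_erg = sedov_age_energy(10.0, 0.62, 1.0)
print(f"age ~ {age_yr:.0f} yr, E ~ {E_erg:.1e} erg")
```

    With plausible inputs this returns an age of a few thousand years and an energy near the canonical 10^51 erg, which is the consistency check such "Sedov parameter" constraints provide.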

  7. Masked areas in shear peak statistics. A forward modeling approach

    DOE PAGES

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-09

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  8. MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bard, D.; Kratochvil, J. M.; Dawson, W., E-mail: djbard@slac.stanford.edu

    2016-03-10

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  9. Consistency among distance measurements: transparency, BAO scale and accelerated expansion

    NASA Astrophysics Data System (ADS)

    Avgoustidis, Anastasios; Verde, Licia; Jimenez, Raul

    2009-06-01

    We explore consistency among different distance measures, including Supernovae Type Ia data, measurements of the Hubble parameter, and determinations of the baryon acoustic oscillation scale. We present new constraints on cosmic transparency combining H(z) data with the latest Supernovae Type Ia data compilation. This combination, in the context of a flat ΛCDM model, improves current constraints by nearly an order of magnitude, although the constraints presented here are parametric rather than non-parametric. We re-examine the recently reported tension between the baryon acoustic oscillation scale and Supernovae data in light of possible deviations from transparency, concluding that the source of the discrepancy is most likely to be found among systematic effects in the modelling of the low-redshift data, or in a simple ~2σ statistical fluke, rather than in exotic physics. Finally, we attempt to draw model-independent conclusions about the recent accelerated expansion, determining the acceleration redshift to be zacc = 0.35 (+0.20, −0.13) at 1σ.

  10. Constraints on the dark matter neutralinos from the radio emissions of galaxy clusters

    NASA Astrophysics Data System (ADS)

    Kiew, Ching-Yee; Hwang, Chorng-Yuan; Zainal Abibin, Zamri

    2017-05-01

    Assuming the dark matter to be composed of neutralinos, we used upper limits on the diffuse radio emission from a sample of galaxy clusters to put constraints on the properties of neutralinos. We derived upper-limit constraints in the <σv>-mχ plane for neutralino annihilation through the bb̄ and μ+μ- channels. The best constraints come from the galaxy clusters A2199 and A1367. We also quantified the uncertainties due to the assumed density profile and cluster magnetic field; the largest uncertainty comes from the dark matter spatial distribution. We further investigated the constraints on the minimal supergravity (mSUGRA) and minimal supersymmetric standard model (MSSM) parameter spaces by scanning the parameters with the darksusy package. Using current radio observations, we were able to exclude 40 combinations of mSUGRA parameters and 573 combinations of MSSM parameters.

  11. Size constraints on a Majorana beam-splitter interferometer: Majorana coupling and surface-bulk scattering

    NASA Astrophysics Data System (ADS)

    Røising, Henrik Schou; Simon, Steven H.

    2018-03-01

    Topological insulator surfaces in proximity to superconductors have been proposed as a way to produce Majorana fermions in condensed matter physics. One of the simplest proposed experiments with such a system is Majorana interferometry. Here we consider two possibly conflicting constraints on the size of such an interferometer. Coupling of a Majorana mode from the edge (the arms) of the interferometer to vortices in the center of the device sets a lower bound on the size of the device. On the other hand, scattering to the usually imperfectly insulating bulk sets an upper bound. From estimates of experimental parameters, we find that typical samples may have no size window in which the Majorana interferometer can operate, implying that a new generation of more highly insulating samples must be explored.

  12. Exploring extended scalar sectors with di-Higgs signals: a Higgs EFT perspective

    NASA Astrophysics Data System (ADS)

    Corbett, Tyler; Joglekar, Aniket; Li, Hao-Lin; Yu, Jiang-Hao

    2018-05-01

    We consider extended scalar sectors of the Standard Model as ultraviolet-complete motivations for studying the effective Higgs self-interaction operators of the Standard Model effective field theory. We investigate all motivated heavy scalar models which generate the dimension-six effective operator |H|⁶ at tree level and proceed to identify the full set of tree-level dimension-six operators by integrating out the heavy scalars. Of the seven models which generate |H|⁶ at tree level, only two, quadruplets of hypercharge Y = 3Y_H and Y = Y_H, generate only this operator. Next we perform global fits to constrain the relevant Wilson coefficients from the LHC single-Higgs measurements as well as the electroweak oblique parameters S and T. We find that the T parameter puts very strong constraints on the Wilson coefficient of the |H|⁶ operator in the triplet and quadruplet models, while the singlet and doublet models could still have Higgs self-couplings which deviate significantly from the Standard Model prediction. To determine the extent to which the |H|⁶ operator could be constrained, we study the di-Higgs signatures at a future 100 TeV collider and explore the future sensitivity to this operator. Projected onto the Higgs potential parameters of the extended scalar sectors, with 30 ab⁻¹ of luminosity data we will be able to explore the Higgs potential parameters in all seven models.

  13. DECIPHERING THERMAL PHASE CURVES OF DRY, TIDALLY LOCKED TERRESTRIAL PLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koll, Daniel D. B.; Abbot, Dorian S., E-mail: dkoll@uchicago.edu

    2015-03-20

    Next-generation space telescopes will allow us to characterize terrestrial exoplanets. To do so effectively it will be crucial to make use of all available data. We investigate which atmospheric properties can, and cannot, be inferred from the broadband thermal phase curve of a dry and tidally locked terrestrial planet. First, we use dimensional analysis to show that phase curves are controlled by six nondimensional parameters. Second, we use an idealized general circulation model to explore the relative sensitivity of phase curves to these parameters. We find that the feature of phase curves most sensitive to atmospheric parameters is the peak-to-trough amplitude. Moreover, except for hot and rapidly rotating planets, the phase amplitude is primarily sensitive to only two nondimensional parameters: (1) the ratio of dynamical to radiative timescales and (2) the longwave optical depth at the surface. As an application of this technique, we show how phase curve measurements can be combined with transit or emission spectroscopy to yield a new constraint for the surface pressure and atmospheric mass of terrestrial planets. We estimate that a single broadband phase curve, measured over half an orbit with the James Webb Space Telescope, could meaningfully constrain the atmospheric mass of a nearby super-Earth. Such constraints will be important for studying the atmospheric evolution of terrestrial exoplanets as well as characterizing the surface conditions on potentially habitable planets.

  14. Cosmic microwave background theory

    PubMed Central

    Bond, J. Richard

    1998-01-01

    A long-standing goal of theorists has been to constrain cosmological parameters that define the structure formation theory from cosmic microwave background (CMB) anisotropy experiments and large-scale structure (LSS) observations. The status and future promise of this enterprise is described. Current band-powers in ℓ-space are consistent with a ΔT flat in frequency and broadly follow inflation-based expectations. That the levels are ∼(10⁻⁵)² provides strong support for the gravitational instability theory, while the Far Infrared Absolute Spectrophotometer (FIRAS) constraints on energy injection rule out cosmic explosions as a dominant source of LSS. Band-powers at ℓ ≳ 100 suggest that the universe could not have re-ionized too early. To get the LSS of Cosmic Background Explorer (COBE)-normalized fluctuations right provides encouraging support that the initial fluctuation spectrum was not far off the scale-invariant form that inflation models prefer: e.g., for tilted Λ cold dark matter sequences of fixed 13-Gyr age (with the Hubble constant H0 marginalized), ns = 1.17 ± 0.3 for Differential Microwave Radiometer (DMR) only; 1.15 ± 0.08 for DMR plus the SK95 experiment; 1.00 ± 0.04 for DMR plus all smaller angle experiments; 1.00 ± 0.05 when LSS constraints are included as well. The CMB alone currently gives weak constraints on Λ and moderate constraints on Ωtot, but theoretical forecasts of future long-duration balloon and satellite experiments are shown which predict percent-level accuracy among a large fraction of the 10+ parameters characterizing the cosmic structure formation theory, at least if it is an inflation variant. PMID:9419321

  15. Redshift drift constraints on holographic dark energy

    NASA Astrophysics Data System (ADS)

    He, Dong-Ze; Zhang, Jing-Fei; Zhang, Xin

    2017-03-01

    The Sandage-Loeb (SL) test is a promising method for probing dark energy because it measures the redshift drift in the Lyman-α forest spectra of distant quasars, covering the "redshift desert" of 2 ≲ z ≲ 5, which is not covered by existing cosmological observations. It could therefore provide an important supplement to current cosmological observations. In this paper, we explore the impact of the SL test on the precision of cosmological constraints for two typical holographic dark energy models, i.e., the original holographic dark energy (HDE) model and the Ricci holographic dark energy (RDE) model. To avoid data inconsistency, we use the best-fit models based on current combined observational data as the fiducial models to simulate 30 mock SL test data points. The results show that the SL test can effectively break the strong degeneracy between the present-day matter density Ωm0 and the Hubble constant H0 that exists in other cosmological observations. For the two dark energy models considered, a 30-year observation of the SL test can not only improve the constraint precision of Ωm0 and h dramatically, but also enhance the constraint precision of the model parameters c and α significantly.
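
    The size of the effect is easy to estimate: for a flat ΛCDM background the drift is ż = (1 + z)H0 − H(z). A minimal sketch, where the fiducial H0 and Ωm values are assumptions for illustration rather than the paper's best fits:

```python
import math

def hubble(z, H0=70.0, Om=0.3):
    """H(z) in km/s/Mpc for a flat LambdaCDM background."""
    return H0 * math.sqrt(Om * (1.0 + z) ** 3 + 1.0 - Om)

def redshift_drift(z, dt_yr=30.0, H0=70.0, Om=0.3):
    """Redshift change accumulated over dt_yr years: zdot = (1+z)*H0 - H(z)."""
    SEC_PER_YR = 3.156e7
    KM_PER_MPC = 3.0857e19
    to_per_yr = SEC_PER_YR / KM_PER_MPC      # converts km/s/Mpc to 1/yr
    zdot = ((1.0 + z) * H0 - hubble(z, H0, Om)) * to_per_yr
    return zdot * dt_yr

# In the redshift desert the expansion is still decelerating, so the
# 30-year drift at z = 3 is negative and of order 1e-9 in redshift.
dz = redshift_drift(3.0)
```

The tiny magnitude of dz is why the SL test requires decade-scale monitoring with ultra-stable spectrographs.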

  16. Modeling driver behavior in a cognitive architecture.

    PubMed

    Salvucci, Dario D

    2006-01-01

    This paper explores the development of a rigorous computational model of driver behavior in a cognitive architecture, a computational framework with underlying psychological theories that incorporate basic properties and limitations of the human system. Computational modeling has emerged as a powerful tool for studying the complex task of driving, allowing researchers to simulate driver behavior and explore the parameters and constraints of this behavior. An integrated driver model developed in the ACT-R (Adaptive Control of Thought-Rational) cognitive architecture is described that focuses on the component processes of control, monitoring, and decision making in a multilane highway environment. This model accounts for the steering profiles, lateral position profiles, and gaze distributions of human drivers during lane keeping, curve negotiation, and lane changing. The model demonstrates how cognitive architectures facilitate understanding of driver behavior in the context of general human abilities and constraints and how the driving domain benefits cognitive architectures by pushing model development toward more complex, realistic tasks. The model can also serve as a core computational engine for practical applications that predict and recognize driver behavior and distraction.

  17. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    NASA Astrophysics Data System (ADS)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central American region. We therefore explored the use of climate-variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, treated as ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes, using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular by rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not.
The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for a basin treated as ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
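
    The rejection step at the heart of such an approach can be illustrated with a GLUE-style sketch: randomly sampled parameter sets are kept only if their simulated signatures fall inside acceptability limits. The toy model, signature names and limits below are invented for illustration, not taken from the study:

```python
import random

def behavioural_sets(simulate, sample_params, limits, n=2000, seed=0):
    """GLUE-style rejection sampling: keep only parameter sets whose
    simulated signatures fall within the acceptability limits derived
    from (here, climate-based) constraints."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n):
        p = sample_params(rng)
        sig = simulate(p)
        if all(lo <= sig[k] <= hi for k, (lo, hi) in limits.items()):
            kept.append(p)
    return kept

# toy "hydrological model" exposing two signatures of its parameters
def simulate(p):
    return {"runoff_ratio": p["a"], "baseflow_index": 1.0 - p["a"] * p["b"]}

sample = lambda rng: {"a": rng.uniform(0.0, 1.0), "b": rng.uniform(0.0, 1.0)}
limits = {"runoff_ratio": (0.3, 0.6), "baseflow_index": (0.5, 0.9)}
kept = behavioural_sets(simulate, sample, limits)
```

The surviving ("behavioural") sets then define the simulated discharge uncertainty band.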

  18. Constraints on modified gravity models from white dwarfs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Srimanta; Singh, Tejinder P.; Shankar, Swapnil, E-mail: srimanta.banerjee@tifr.res.in, E-mail: swapnil.shankar@cbs.ac.in, E-mail: tpsingh@tifr.res.in

    Modified gravity theories can introduce modifications to the Poisson equation in the Newtonian limit. As a result, we expect to see interesting features of these modifications inside stellar objects. White dwarfs are among the best-studied objects in stellar astrophysics. We explore the effect of modified gravity theories inside white dwarfs. We derive the modified stellar structure equations and solve them to study the mass-radius relationships for various modified gravity theories. We also constrain the parameter space of these theories from observations.

  19. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

    PubMed Central

    Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander

    2018-01-01

    Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases—the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces and a better understanding of neural network models and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is being computed. PMID:29937723

  20. Thermal inflation with a thermal waterfall scalar field coupled to a light spectator scalar field

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Konstantinos; Lyth, David H.; Rumsey, Arron

    2017-05-01

    A new model of thermal inflation is introduced, in which the mass of the thermal waterfall field depends on a light spectator scalar field. Using the δN formalism, the "end of inflation" scenario is investigated in order to ascertain whether this model is able to produce the dominant contribution to the primordial curvature perturbation. A multitude of constraints are considered so as to explore the parameter space, with particular emphasis on key observational signatures. For natural values of the parameters, the model is found to yield a sharp prediction for the scalar spectral index and its running, well within the current observational bounds.

  1. Constraints on CDM cosmology from galaxy power spectrum, CMB and SNIa evolution

    NASA Astrophysics Data System (ADS)

    Ferramacho, L. D.; Blanchard, A.; Zolnierowski, Y.

    2009-05-01

    Aims: We examine the constraints that can be obtained on standard cold dark matter models from the most currently used data sets: CMB anisotropies, type Ia supernovae and the SDSS luminous red galaxies. We also examine how these constraints widen when the equation of state parameter w and the curvature parameter Ωk are left as free parameters. Finally, we investigate the impact on these constraints of a possible form of evolution in SNIa intrinsic luminosity. Methods: We obtained our results from MCMC analysis using the full likelihood of each data set. Results: For the ΛCDM model, our “vanilla” model, cosmological parameters are tightly constrained and consistent with current estimates from various methods. When the dark energy parameter w is free we find that the constraints remain mostly unchanged, i.e. changes are smaller than the 1σ uncertainties. Similarly, relaxing the assumption of a flat universe leads to nearly identical constraints on the dark energy density parameter Ω_Λ, the baryon density Ω_b, the optical depth τ, and the index of the power spectrum of primordial fluctuations n_S, with most 1σ uncertainties better than 5%. More significant changes appear for other parameters: while preferred values are almost unchanged, uncertainties for the physical dark matter density Ω_ch^2, the Hubble constant H0 and σ8 are typically twice as large. The constraint on the age of the Universe, which is very accurate for the vanilla model, is the most degraded. We found that different methodological approaches to large-scale structure estimates lead to appreciable differences in preferred values and uncertainty widths. We found that possible evolution in SNIa intrinsic luminosity does not alter these constraints much, except for w, for which the uncertainty is twice as large. At the same time, this possible evolution is severely constrained.
Conclusions: We conclude that systematic uncertainties for some estimated quantities are similar to or larger than the statistical ones.
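
    At its core, the MCMC analysis used for such constraints rests on a Metropolis accept/reject rule. A minimal sketch on a toy one-parameter posterior; production analyses use far more elaborate samplers and likelihoods:

```python
import math
import random

def metropolis(logpost, theta0, step, n=20000, seed=0):
    """Plain Metropolis sampler with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    theta, lp = list(theta0), logpost(theta0)
    chain = []
    for _ in range(n):
        prop = [t + rng.gauss(0.0, s) for t, s in zip(theta, step)]
        lp_prop = logpost(prop)
        # accept uphill moves always, downhill moves with probability exp(dlp)
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            theta, lp = prop, lp_prop
        chain.append(list(theta))
    return chain

# toy posterior: a unit normal in one parameter, started off-peak
chain = metropolis(lambda th: -0.5 * th[0] ** 2, [3.0], [1.0])
mean = sum(c[0] for c in chain) / len(chain)
```

The chain's histogram approximates the posterior, from which marginal constraints like those quoted above are read off.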

  2. Power-rate-distortion analysis for wireless video communication under energy constraint

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Liang, Yongfang; Ahmad, Ishfaq

    2004-01-01

    In video coding and streaming over wireless communication networks, the power-demanding video encoding operates on mobile devices with a limited energy supply. To analyze, control, and optimize the rate-distortion (R-D) behavior of a wireless video communication system under an energy constraint, we need to develop a power-rate-distortion (P-R-D) analysis framework, which extends traditional R-D analysis by including another dimension: power consumption. Specifically, in this paper, we analyze the encoding mechanism of typical video encoding systems and develop a parametric video encoding architecture which is fully scalable in computational complexity. Using dynamic voltage scaling (DVS), a hardware technology recently developed in CMOS circuit design, the complexity scalability can be translated into power-consumption scalability of the video encoder. We investigate the rate-distortion behaviors of the complexity control parameters and establish an analytic framework to explore the P-R-D behavior of the video encoding system. Both theoretically and experimentally, we show that, using this P-R-D model, the encoding system is able to automatically adjust its complexity control parameters to match the available energy supply of the mobile device while maximizing picture quality. The P-R-D model provides a theoretical guideline for system design and performance optimization in wireless video communication under an energy constraint, especially over wireless video sensor networks.
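
    The shape of such a power-rate-distortion tradeoff can be sketched with a toy parametric surface; the functional form and the constants below are illustrative stand-ins, not the model fitted in the paper:

```python
def distortion(rate_bpp, power_frac, sigma2=1.0, lam=1.5):
    """Toy P-R-D surface: distortion falls with bit rate and with the
    fraction of full encoding power used; the 2/3 exponent loosely mimics
    voltage-scaling behaviour. Illustrative only."""
    return sigma2 * 2.0 ** (-lam * rate_bpp * power_frac ** (2.0 / 3.0))

def best_power(rate_bpp, energy_budget, grid=101):
    """Choose the power fraction (on a grid) minimizing distortion subject
    to the energy budget."""
    feasible = [i / (grid - 1) for i in range(grid)
                if i / (grid - 1) <= energy_budget]
    return min(feasible, key=lambda p: distortion(rate_bpp, p))
```

Because this toy distortion is monotone in power, the optimizer simply spends the whole budget; richer models trade complexity against rate control as well.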

  3. Constraining Nonperturbative Strong-Field Effects in Scalar-Tensor Gravity by Combining Pulsar Timing and Laser-Interferometer Gravitational-Wave Detectors

    NASA Astrophysics Data System (ADS)

    Shao, Lijing; Sennett, Noah; Buonanno, Alessandra; Kramer, Michael; Wex, Norbert

    2017-10-01

    Pulsar timing and laser-interferometer gravitational-wave (GW) detectors are superb laboratories to study gravity theories in the strong-field regime. Here, we combine these tools to test the mono-scalar-tensor theory of Damour and Esposito-Farèse (DEF), which predicts nonperturbative scalarization phenomena for neutron stars (NSs). First, applying Markov-chain Monte Carlo techniques, we use the absence of dipolar radiation in the pulsar-timing observations of five binary systems composed of a NS and a white dwarf, and eleven equations of state (EOSs) for NSs, to derive the most stringent constraints on the two free parameters of the DEF scalar-tensor theory. Since the binary-pulsar bounds depend on the NS mass and the EOS, we find that current pulsar-timing observations leave scalarization windows, i.e., regions of parameter space where scalarization can still be prominent. Then, we investigate if these scalarization windows could be closed and if pulsar-timing constraints could be improved by laser-interferometer GW detectors, when spontaneous (or dynamical) scalarization sets in during the early (or late) stages of a binary NS (BNS) evolution. For the early inspiral of a BNS carrying constant scalar charge, we employ a Fisher-matrix analysis to show that Advanced LIGO can improve pulsar-timing constraints for some EOSs, and next-generation detectors, such as the Cosmic Explorer and Einstein Telescope, will be able to improve those bounds for all eleven EOSs. Using the late inspiral of a BNS, we estimate that for some of the EOSs under consideration, the onset of dynamical scalarization can happen early enough to improve the constraints on the DEF parameters obtained by combining the five binary pulsars. Thus, in the near future, the complementarity of pulsar timing and direct observations of GWs on the ground will be extremely valuable in probing gravity theories in the strong-field regime.
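
    A Fisher-matrix forecast of the kind employed here is compact to write down for Gaussian data with a parameter-independent covariance: F_ij = ∂_i μᵀ C⁻¹ ∂_j μ. A sketch with a toy linear model, all values illustrative:

```python
import numpy as np

def fisher_matrix(model, theta0, cov, eps=1e-6):
    """Numerical Fisher matrix F_ij = d_i mu . C^-1 . d_j mu for a Gaussian
    likelihood with parameter-independent data covariance C."""
    theta0 = np.asarray(theta0, dtype=float)
    cinv = np.linalg.inv(cov)
    derivs = []
    for i in range(len(theta0)):
        h = eps * max(abs(theta0[i]), 1.0)
        tp, tm = theta0.copy(), theta0.copy()
        tp[i] += h
        tm[i] -= h
        derivs.append((model(tp) - model(tm)) / (2.0 * h))  # central difference
    d = np.array(derivs)
    return d @ cinv @ d.T

# toy linear model observed at 10 points with unit noise
x = np.arange(10.0)
F = fisher_matrix(lambda th: th[0] + th[1] * x, [1.0, 2.0], np.eye(10))
sigmas = np.sqrt(np.diag(np.linalg.inv(F)))  # forecast 1-sigma errors
```

Inverting F gives the forecast parameter covariance, the quantity used to decide whether a detector can shrink the binary-pulsar bounds.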

  4. Integrated optimization of planetary rover layout and exploration routes

    NASA Astrophysics Data System (ADS)

    Lee, Dongoo; Ahn, Jaemyung

    2018-01-01

    This article introduces an optimization framework for the integrated design of a planetary surface rover and its exploration route that is applicable to the initial phase of a planetary exploration campaign composed of multiple surface missions. The scientific capability and the mobility of a rover are modelled as functions of the science weight fraction, a key parameter characterizing the rover. The proposed problem is formulated as a mixed-integer nonlinear program that maximizes the sum of profits obtained through a planetary surface exploration mission by simultaneously determining the science weight fraction of the rover, the sites to visit and their visiting sequences under resource consumption constraints imposed on each route and collectively on a mission. A solution procedure for the proposed problem composed of two loops (the outer loop and the inner loop) is developed. The results of test cases demonstrating the effectiveness of the proposed framework are presented.
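
    The site-selection core of such a program can be illustrated, at toy scale, by exhaustive search over subsets of candidate sites under a resource budget; the profits, costs and budget below are invented, and a real instance would use a mixed-integer solver rather than enumeration:

```python
from itertools import combinations

def best_plan(profits, costs, budget):
    """Exhaustively pick the subset of sites maximizing total science
    profit subject to a resource budget (toy stand-in for the paper's
    mixed-integer nonlinear program)."""
    best, best_profit = (), 0.0
    for r in range(1, len(profits) + 1):
        for subset in combinations(range(len(profits)), r):
            cost = sum(costs[i] for i in subset)
            profit = sum(profits[i] for i in subset)
            if cost <= budget and profit > best_profit:
                best, best_profit = subset, profit
    return best, best_profit

# three candidate sites: profit per site, resource cost per site, budget
sites, profit = best_plan([6.0, 10.0, 12.0], [1.0, 2.0, 3.0], 4.0)
```

The full problem additionally optimizes the science weight fraction, which couples profits and costs, and orders the selected sites into routes.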

  5. Joint cosmic microwave background and weak lensing analysis: constraints on cosmological parameters.

    PubMed

    Contaldi, Carlo R; Hoekstra, Henk; Lewis, Antony

    2003-06-06

    We use cosmic microwave background (CMB) observations together with the Red-Sequence Cluster Survey weak lensing results to derive constraints on a range of cosmological parameters. This particular choice of observations is motivated by their robust physical interpretation and complementarity. Our combined analysis, including a weak nucleosynthesis constraint, yields accurate determinations of a number of parameters, including the amplitude of fluctuations σ8 = 0.89 ± 0.05 and the matter density Ωm = 0.30 ± 0.03. We also find a value for the Hubble parameter of H0 = 70 ± 3 km s⁻¹ Mpc⁻¹, in good agreement with the Hubble Space Telescope key-project result. We conclude that the combination of CMB and weak lensing data provides some of the most powerful constraints available in cosmology today.

  6. The super-GUT CMSSM revisited

    DOE PAGES

    Ellis, John; Evans, Jason L.; Mustafayev, Azar; ...

    2016-10-28

    Here, we revisit minimal supersymmetric SU(5) grand unification (GUT) models in which the soft supersymmetry-breaking parameters of the minimal supersymmetric Standard Model (MSSM) are universal at some input scale, Min, above the supersymmetric gauge-coupling unification scale, MGUT. As in the constrained MSSM (CMSSM), we assume that the scalar masses and gaugino masses have common values, m0 and m1/2, respectively, at Min, as do the trilinear soft supersymmetry-breaking parameters A0. Going beyond previous studies of such a super-GUT CMSSM scenario, we explore the constraints imposed by the lower limit on the proton lifetime and the LHC measurement of the Higgs mass, mh. We find regions of m0, m1/2, A0 and the parameters of the SU(5) superpotential that are compatible with these and other phenomenological constraints, such as the density of cold dark matter, which we assume to be provided by the lightest neutralino. Typically, these allowed regions appear for m0 and m1/2 in the multi-TeV region, for suitable values of the unknown SU(5) GUT-scale phases and superpotential couplings, and with the ratio of supersymmetric Higgs vacuum expectation values tan β ≲ 6.

  7. Exploring Sulfur & Argon Abundances in Planetary Nebulae as Metallicity-Indicator Surrogates for Iron in the Interstellar Medium

    NASA Astrophysics Data System (ADS)

    Kwitter, Karen B.; Henry, Richard C.

    1999-02-01

    Our primary motivation for studying S and Ar distributions in planetary nebulae (PNe) across the Galactic disk is to explore the possibility of a surrogacy between (S+Ar)/O and Fe/O for use as a metallicity indicator in the interstellar medium. The chemical history of the Galaxy is usually studied through O and Fe distributions among objects of different ages. Historically, though, Fe and O have not been measured in the same systems: Fe is easily seen in stars but hard to detect in nebulae; the reverse is true for O. We know that S and Ar abundances are not affected by PN progenitor evolution, and we therefore seek to exploit both their unaltered abundances and ease of detectability in PNe to explore their surrogacy for Fe. If proven valid, this surrogacy carries broad and important ramifications for bridging the gap between stellar and interstellar abundances in the Galaxy, and potentially beyond. Observed S/O and Ar/O gradients will also provide constraints on theoretical stellar yields of S and Ar, since they can be compared with chemical evolution models (which incorporate theoretically-predicted stellar yields, an initial mass function, and rates of star formation and infall) to help place constraints on model parameters.
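
    Measuring such a gradient amounts to a least-squares fit of abundance against Galactocentric radius. A self-contained sketch with invented numbers, not survey data:

```python
def fit_gradient(radii, abundances):
    """Ordinary least-squares slope and intercept, e.g. for log(S/O)
    versus Galactocentric radius."""
    n = len(radii)
    mx = sum(radii) / n
    my = sum(abundances) / n
    sxx = sum((x - mx) ** 2 for x in radii)
    sxy = sum((x - mx) * (y - my) for x, y in zip(radii, abundances))
    slope = sxy / sxx
    return slope, my - slope * mx

# invented abundances falling by 0.05 dex per kpc
slope, intercept = fit_gradient([4.0, 6.0, 8.0, 10.0, 12.0],
                                [-0.2, -0.3, -0.4, -0.5, -0.6])
```

The fitted slope is the quantity compared against chemical evolution models to constrain stellar yields.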

  8. Analysis of Network Topologies Underlying Ethylene Growth Response Kinetics

    PubMed Central

    Prescott, Aaron M.; McCollough, Forest W.; Eldreth, Bryan L.; Binder, Brad M.; Abel, Steven M.

    2016-01-01

    Most models for ethylene signaling involve a linear pathway. However, measurements of seedling growth kinetics when ethylene is applied and removed have resulted in more complex network models that include coherent feedforward, negative feedback, and positive feedback motifs. The dynamical responses of the proposed networks have not been explored in a quantitative manner. Here, we explore (i) whether any of the proposed models are capable of producing growth-response behaviors consistent with experimental observations and (ii) what mechanistic roles various parts of the network topologies play in ethylene signaling. To address this, we used computational methods to explore two general network topologies: The first contains a coherent feedforward loop that inhibits growth and a negative feedback from growth onto itself (CFF/NFB). In the second, ethylene promotes the cleavage of EIN2, with the product of the cleavage inhibiting growth and promoting the production of EIN2 through a positive feedback loop (PFB). Since few network parameters for ethylene signaling are known in detail, we used an evolutionary algorithm to explore sets of parameters that produce behaviors similar to experimental growth response kinetics of both wildtype and mutant seedlings. We generated a library of parameter sets by independently running the evolutionary algorithm many times. Both network topologies produce behavior consistent with experimental observations, and analysis of the parameter sets allows us to identify important network interactions and parameter constraints. We additionally screened these parameter sets for growth recovery in the presence of sub-saturating ethylene doses, which is an experimentally observed property that emerges in some of the evolved parameter sets. Finally, we probed simplified networks maintaining key features of the CFF/NFB and PFB topologies. 
From this, we verified observations drawn from the larger networks about mechanisms underlying ethylene signaling. Analysis of each network topology results in predictions about changes that occur in network components that can be experimentally tested to give insights into which, if either, network underlies ethylene responses. PMID:27625669
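
    The evolutionary-algorithm step can be illustrated with a minimal elitist search over real-valued parameter sets; the objective and all settings below are illustrative stand-ins for the paper's fitness based on growth-response kinetics:

```python
import random

def evolve(fitness, ndim, pop_size=20, gens=60, sigma=0.3, decay=0.95, seed=1):
    """Minimal elitist evolutionary search; 'fitness' is minimized
    (e.g. misfit to observed growth-response curves)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(ndim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        children = [[g + rng.gauss(0.0, sigma) for g in rng.choice(pop)]
                    for _ in range(pop_size)]
        pop = sorted(pop + children, key=fitness)[:pop_size]  # keep the best
        sigma *= decay  # anneal the mutation size
    return pop[0]

# toy objective with optimum at (0.5, -0.2), standing in for a misfit measure
target = (0.5, -0.2)
misfit = lambda p: sum((a - b) ** 2 for a, b in zip(p, target))
best = evolve(misfit, 2)
```

Running many independent searches, as the authors do, builds the library of parameter sets whose common features reveal the network constraints.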

  9. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decision making. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. 
As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters, and the predictions that depend on them, arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.
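In the linear case, the pre- and post-calibration predictive uncertainty described in this report reduces to a Schur-complement update of a prior parameter covariance. A minimal sketch for a hypothetical two-parameter model with a single calibration observation (all matrices and sensitivities below are made up for illustration; no PEST utility is invoked):

```python
# Toy linear pre-/post-calibration predictive uncertainty, in the spirit of
# the PEST error-variance utilities (model and values entirely hypothetical).
# Prior covariance Ck encodes knowledge constraints; conditioning on one
# noisy observation encodes calibration constraints.

Ck = [[1.0, 0.0], [0.0, 4.0]]   # prior covariance of two parameters
J = [0.5, 1.0]                  # sensitivity of the observation to parameters
Ce = 0.01                       # observation noise variance
s = [1.0, -2.0]                 # sensitivity of the prediction to parameters

def quad(u, M, v):              # u^T M v for a 2x2 matrix M
    return sum(u[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

prior_var = quad(s, Ck, s)                           # before calibration
denom = quad(J, Ck, J) + Ce                          # J Ck J^T + Ce (scalar)
post_var = prior_var - quad(s, Ck, J) ** 2 / denom   # after calibration

print(prior_var, post_var)
```

Calibration reduces this prediction's variance without eliminating it; a prediction whose sensitivity vector s were orthogonal to J would see no reduction at all, which is the report's point about predictions that depend on small-scale detail.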

  10. Constraints on a generalized deceleration parameter from cosmic chronometers

    NASA Astrophysics Data System (ADS)

    Mamon, Abdulla Al

    2018-04-01

In this paper, we have proposed a generalized parametrization for the deceleration parameter q in order to study the evolutionary history of the universe. We have shown that the proposed model can reproduce three well-known q-parametrized models for some specific values of the model parameter α. We have used the latest compilation of the Hubble parameter measurements obtained from the cosmic chronometer (CC) method (in combination with the local value of the Hubble constant H0) and the Type Ia supernova (SNIa) data to place constraints on the parameters of the model for different values of α. We have found that the resulting constraints on the deceleration parameter and the dark energy equation of state support the ΛCDM model within 1σ confidence level at the present epoch.
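Any parametrized deceleration parameter maps to an expansion history via H(z) = H0 exp(∫₀ᶻ [1 + q(z′)]/(1 + z′) dz′), which is the quantity compared against the CC measurements. A sketch with a generic two-parameter q(z) (an illustrative form, not the paper's α-parametrization):

```python
import math

# Recover H(z) from a parametrized deceleration parameter q(z) via
#   H(z) = H0 * exp( integral_0^z [1 + q(z')] / (1 + z') dz' ).
# The q(z) below is a generic illustration, NOT the paper's alpha-model.

H0 = 70.0   # km/s/Mpc, fiducial value (assumption)

def q(z, q0=-0.55, q1=0.35):
    """Hypothetical two-parameter deceleration history; q(0) < 0 means
    present-day acceleration, q -> q0 + q1 at high z."""
    return q0 + q1 * z / (1.0 + z)

def H(z, n=10000):
    """Trapezoidal integration of the exponent."""
    if z == 0.0:
        return H0
    dz = z / n
    integral = 0.0
    for i in range(n):
        z1, z2 = i * dz, (i + 1) * dz
        f1 = (1.0 + q(z1)) / (1.0 + z1)
        f2 = (1.0 + q(z2)) / (1.0 + z2)
        integral += 0.5 * (f1 + f2) * dz
    return H0 * math.exp(integral)

print(H(0.0), H(1.0), H(2.0))
```

An MCMC fit would compare this H(z) to the CC data points and the SNIa distances, with (q0, q1) playing the role of the model parameters being constrained.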

  11. UCMS - A new signal parameter measurement system using digital signal processing techniques. [User Constraint Measurement System

    NASA Technical Reports Server (NTRS)

    Choi, H. J.; Su, Y. T.

    1986-01-01

    The User Constraint Measurement System (UCMS) is a hardware/software package developed by NASA Goddard to measure the signal parameter constraints of the user transponder in the TDRSS environment by means of an all-digital signal sampling technique. An account is presently given of the features of UCMS design and of its performance capabilities and applications; attention is given to such important aspects of the system as RF interface parameter definitions, hardware minimization, the emphasis on offline software signal processing, and end-to-end link performance. Applications to the measurement of other signal parameters are also discussed.

  12. Future Cosmological Constraints From Fast Radio Bursts

    NASA Astrophysics Data System (ADS)

    Walters, Anthony; Weltman, Amanda; Gaensler, B. M.; Ma, Yin-Zhe; Witzemann, Amadeus

    2018-03-01

We consider the possible observation of fast radio bursts (FRBs) with planned future radio telescopes, and investigate how well the dispersions and redshifts of these signals might constrain cosmological parameters. We construct mock catalogs of FRB dispersion measure (DM) data and employ Markov Chain Monte Carlo analysis, with which we forecast and compare with existing constraints in the flat ΛCDM model, as well as some popular extensions that include dark energy equation of state and curvature parameters. We find that the scatter in DM observations caused by inhomogeneities in the intergalactic medium (IGM) poses a big challenge to the utility of FRBs as a cosmic probe. Only in the most optimistic case, with a high number of events and low IGM variance, do FRBs aid in improving current constraints. In particular, when FRBs are combined with CMB+BAO+SNe+H0 data, we find the biggest improvement comes in the Ω_b h² constraint. Also, we find that the dark energy equation of state is poorly constrained, while the constraint on the curvature parameter, Ω_k, shows some improvement when combined with current constraints. When FRBs are combined with future baryon acoustic oscillation (BAO) data from 21 cm Intensity Mapping, we find little improvement over the constraints from BAOs alone. However, the inclusion of FRBs introduces an additional parameter constraint, Ω_b h², which turns out to be comparable to existing constraints. This suggests that FRBs provide valuable information about the cosmological baryon density in the intermediate redshift universe, independent of high-redshift CMB data.
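The sensitivity to Ω_b h² comes from the mean cosmological dispersion measure, which is an integral of the free-electron density along the line of sight. A back-of-envelope sketch (fiducial densities and the diffuse-baryon and electron fractions below are illustrative assumptions, not the paper's values):

```python
import math

# Sketch of the mean IGM dispersion measure of an FRB at redshift z,
# showing why DM(z) is sensitive to Omega_b h^2. All fiducial numbers
# (densities, f_igm, chi_e) are illustrative assumptions.

H0 = 67.7e3 / 3.086e22      # Hubble constant in 1/s (67.7 km/s/Mpc)
h = 0.677
Om, OL = 0.31, 0.69         # flat LCDM densities
Ob_h2 = 0.0224              # baryon density Omega_b * h^2
c = 2.998e8                 # m/s
G = 6.674e-11               # m^3 kg^-1 s^-2
m_p = 1.673e-27             # proton mass, kg
f_igm, chi_e = 0.83, 0.875  # diffuse-baryon fraction, electrons per baryon

def E(z):
    return math.sqrt(Om * (1 + z) ** 3 + OL)

def dm_igm(z, n=4000):
    """Mean IGM dispersion measure out to redshift z, in pc/cm^3."""
    pref = 3 * c * H0 * (Ob_h2 / h**2) * f_igm * chi_e / (8 * math.pi * G * m_p)
    dz = z / n
    integral = sum((1 + (i + 0.5) * dz) / E((i + 0.5) * dz) * dz
                   for i in range(n))
    return pref * integral / 3.086e22   # convert m^-2 to pc/cm^3

print(dm_igm(1.0))
```

The IGM scatter that the abstract identifies as the main obstacle would enter as a large stochastic spread around this mean relation.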

  13. The Relationship Between Constraint and Ductile Fracture Initiation as Defined by Micromechanical Analyses

    NASA Technical Reports Server (NTRS)

    Panontin, Tina L.; Sheppard, Sheri D.

    1994-01-01

    The use of small laboratory specimens to predict the integrity of large, complex structures relies on the validity of single parameter fracture mechanics. Unfortunately, the constraint loss associated with large scale yielding, whether in a laboratory specimen because of its small size or in a structure because it contains shallow flaws loaded in tension, can cause the breakdown of classical fracture mechanics and the loss of transferability of critical, global fracture parameters. Although the issue of constraint loss can be eliminated by testing actual structural configurations, such an approach can be prohibitively costly. Hence, a methodology that can correct global fracture parameters for constraint effects is desirable. This research uses micromechanical analyses to define the relationship between global, ductile fracture initiation parameters and constraint in two specimen geometries (SECT and SECB with varying a/w ratios) and one structural geometry (circumferentially cracked pipe). Two local fracture criteria corresponding to ductile fracture micromechanisms are evaluated: a constraint-modified, critical strain criterion for void coalescence proposed by Hancock and Cowling and a critical void ratio criterion for void growth based on the Rice and Tracey model. Crack initiation is assumed to occur when the critical value in each case is reached over some critical length. The primary material of interest is A516-70, a high-hardening pressure vessel steel sensitive to constraint; however, a low-hardening structural steel that is less sensitive to constraint is also being studied. Critical values of local fracture parameters are obtained by numerical analysis and experimental testing of circumferentially notched tensile specimens of varying constraint (e.g., notch radius). 
These parameters are then used in conjunction with large strain, large deformation, two- and three-dimensional finite element analyses of the geometries listed above to predict crack initiation loads and to calculate the associated (critical) global fracture parameters. The loads are verified experimentally, and microscopy is used to measure pre-crack length, crack tip opening displacement (CTOD), and the amount of stable crack growth. Results for A516-70 steel indicate that the constraint-modified, critical strain criterion with a critical length approximately equal to the grain size (0.0025 inch) provides accurate predictions of crack initiation. The critical void growth criterion is shown to considerably underpredict crack initiation loads with the same critical length. The relationship between the critical value of the J-integral for ductile crack initiation and crack depth for SECT and SECB specimens has been determined using the constraint-modified, critical strain criterion, demonstrating that this micromechanical model can be used to correct in-plane constraint effects due to crack depth and bending vs. tension loading. Finally, the relationship developed for the SECT specimens is used to predict the behavior of circumferentially cracked pipe specimens.
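The Rice and Tracey void-growth criterion used above has a compact closed form at constant stress triaxiality T = σ_m/σ_e, which makes the constraint effect easy to see: the critical value below is a made-up illustration, not a measured value for A516-70.

```python
import math

# Rice-Tracey void growth at constant stress triaxiality T = sigma_m/sigma_e:
#   d(ln R/R0)/d(eps_plastic) = 0.283 * exp(1.5 * T)
# Crack initiation is assumed when ln(R/R0) reaches a critical value over a
# critical length. LN_R_CRIT here is hypothetical, for illustration only.

LN_R_CRIT = 0.28   # hypothetical critical void-growth ratio

def void_growth(triaxiality, eps_plastic):
    """Accumulated ln(R/R0) at constant triaxiality."""
    return 0.283 * math.exp(1.5 * triaxiality) * eps_plastic

def critical_strain(triaxiality):
    """Plastic strain at which the void-growth criterion is met."""
    return LN_R_CRIT / (0.283 * math.exp(1.5 * triaxiality))

eps_low = critical_strain(1.0)    # lower-constraint geometry (shallow crack)
eps_high = critical_strain(2.5)   # higher-constraint geometry (deep crack)
print(eps_low, eps_high)
```

Higher constraint (triaxiality) drives fracture initiation at a smaller plastic strain, which is exactly the transferability problem between shallow- and deep-cracked geometries that the study addresses.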

  14. Constraints on deviations from ΛCDM within Horndeski gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellini, Emilio; Cuesta, Antonio J.; Jimenez, Raul

    2016-02-01

Recent anomalies found in cosmological datasets such as the low multipoles of the Cosmic Microwave Background or the low redshift amplitude and growth of clustering measured by e.g., abundance of galaxy clusters and redshift space distortions in galaxy surveys, have motivated explorations of models beyond standard ΛCDM. Of particular interest are models where general relativity (GR) is modified on large cosmological scales. Here we consider deviations from ΛCDM+GR within the context of Horndeski gravity, which is the most general scalar-tensor theory of gravity with second-order equations of motion. We adopt a parametrization in which the four additional Horndeski functions of time α{sub i}(t) are proportional to the cosmological density of dark energy Ω{sub DE}(t). Constraints on this extended parameter space using a suite of state-of-the-art cosmological observations are presented for the first time. Although the theory is able to accommodate the low multipoles of the Cosmic Microwave Background and the low amplitude of fluctuations from redshift space distortions, we find no significant tension with ΛCDM+GR when performing a global fit to recent cosmological data, and thus there is no evidence against ΛCDM+GR from an analysis of the value of the Bayesian evidence ratio of the modified gravity models with respect to ΛCDM, despite the extra parameters they introduce. The posterior distributions of these extra parameters that we derive place strong constraints on any possible deviations from ΛCDM+GR in the context of Horndeski gravity. We illustrate how our results can be applied to more general frameworks of modified gravity models.

  15. Design Parameters for Subwavelength Transparent Conductive Nanolattices

    DOE PAGES

    Diaz Leon, Juan J.; Feigenbaum, Eyal; Kobayashi, Nobuhiko P.; ...

    2017-09-29

Recent advancements with the directed assembly of block copolymers have enabled the fabrication over cm² areas of highly ordered metal nanowire meshes, or nanolattices, which are of significant interest as transparent electrodes. Compared to randomly dispersed metal nanowire networks that have long been considered the most promising next-generation transparent electrode material, such ordered nanolattices represent a new design paradigm that is yet to be optimized. Here, through optical and electrical simulations, we explore the potential design parameters for such nanolattices as transparent conductive electrodes, elucidating relationships between the nanowire dimensions, defects, and the nanolattices’ conductivity and transmissivity. We find that having an ordered nanowire network significantly decreases the length of nanowires required to attain both high transmissivity and high conductivity, and we quantify the network’s tolerance to defects in relation to other design constraints. Furthermore, we explore how both optical and electrical anisotropy can be introduced to such nanolattices, opening an even broader materials design space and possible set of applications.

  17. Small Nuclear-powered Hot Air Balloons for the Exploration of the Deep Atmosphere of Uranus and Neptune

    NASA Astrophysics Data System (ADS)

    Van Cleve, J. E.; Grillmair, C. J.

    2001-01-01

The Galileo probe gathered data in the Jovian atmosphere for about one hour before its destruction. For a wider perspective on the atmospheres of the outer planets, multiple, long-lived observation platforms would be useful. In this paper we examine the basic physics of hot-air ballooning in a hydrogen atmosphere, using plutonium RTGs as a heat source. We find that such balloons are buoyant at a sufficiently great depth in these atmospheres, and derive equations for the balloon radius and mass of plutonium required as a function of atmospheric mass density and balloon material parameters. We solve for the buoyancy depth given the constraint that each probe may contain 1.0 kg of Pu, and find that the temperature at that depth is too great for conventional electronics (>70 °C) for Jupiter and Saturn. However, the Pu mass constraint and the operating temperature constraint are consistent for Uranus and Neptune, and this concept may be applicable to those planets. Additional information is contained in the original extended abstract.
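The basic buoyancy relation behind the balloon sizing follows from the ideal-gas law at equal pressure inside and outside the envelope: lift per unit volume is ρ_atm(1 − T_atm/T_gas). A back-of-envelope sketch, ignoring envelope mass (all numbers are illustrative assumptions, not the abstract's results):

```python
import math

# Back-of-envelope sizing of a hot-hydrogen balloon in a hydrogen atmosphere
# (all payloads, pressures, and temperatures below are illustrative).
# Lift per unit volume = rho_atm * (1 - T_atm / T_gas), from the ideal-gas
# law at equal inside/outside pressure.

def balloon_radius(payload_kg, p_pa, t_atm, t_gas, r_gas=4124.0):
    """Radius (m) for neutral buoyancy, ignoring envelope mass.
    r_gas is the specific gas constant of H2 in J/(kg K)."""
    rho_atm = p_pa / (r_gas * t_atm)              # ambient density
    lift_per_m3 = rho_atm * (1.0 - t_atm / t_gas)
    volume = payload_kg / lift_per_m3
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

# e.g. 50 kg payload at 10 bar, 300 K ambient, gas heated by an RTG to 330 K
r = balloon_radius(50.0, 1.0e6, 300.0, 330.0)
print(r)
```

Descending deeper (higher ambient pressure, hence density) shrinks the required radius, which is why the concept trades buoyancy depth against the electronics temperature limit.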

  18. Research on Bifurcation and Chaos in a Dynamic Mixed Game System with Oligopolies Under Carbon Emission Constraint

    NASA Astrophysics Data System (ADS)

    Ma, Junhai; Yang, Wenhui; Lou, Wandong

This paper establishes an oligopolistic game model under a carbon emission reduction constraint and investigates its complex characteristics, such as bifurcation and chaos. Two oligopolistic manufacturers form three mixed game models, with the aim of exploring how the status of the operating system varies as the benchmark reward-penalty mechanism is upgraded. First, we set up these basic models, which are distinguished by carbon emission quantity, and study them using different game methods. Then, we concentrate on one typical game model to further study the dynamic complexity of variations in the system status, through 2D bifurcation diagrams and 4D parameter adjustment features, based on the bounded rationality scheme for price and the adaptive scheme for carbon emission. The results show that the carbon emission constraint has a significant influence on the status variation of two-oligopolist game operating systems, whether stable or chaotic. Besides, the new carbon emission regulation meets the government supervision target and achieves the goal of being environmentally friendly by motivating the system to operate with lower carbon emission.
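The bounded-rationality price scheme mentioned above is a discrete-time map in which each firm adjusts its price along its own marginal profit. A toy version for a linear-demand duopoly (illustrative dynamics only, without the paper's carbon-emission constraint or reward-penalty mechanism):

```python
# Toy bounded-rationality price-adjustment map for two oligopolists
# (illustrative only; not the paper's carbon-constrained model).
# Each firm moves its price along its marginal profit:
#   p_i(t+1) = p_i(t) + speed * p_i(t) * d(profit_i)/d(p_i)

A, B, D, C = 10.0, 1.0, 0.5, 2.0   # demand intercept/slopes and unit cost

def marginal_profit(p_own, p_other):
    # profit_i = (p_i - C) * (A - B*p_i + D*p_j); differentiate wrt p_i
    return A - 2.0 * B * p_own + D * p_other + B * C

def iterate(speed, steps=2000, p0=(1.0, 1.5)):
    p1, p2 = p0
    for _ in range(steps):
        p1, p2 = (p1 + speed * p1 * marginal_profit(p1, p2),
                  p2 + speed * p2 * marginal_profit(p2, p1))
    return p1, p2

# At low adjustment speed the map settles on the Nash prices
# p* = (A + B*C) / (2*B - D) = 8 here.
p_slow = iterate(0.02)
print(p_slow)
```

Raising the adjustment speed destabilizes this fixed point through a period-doubling cascade, which is what the paper's 2D bifurcation diagrams trace out for its richer carbon-constrained system.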

  19. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

Standard approaches to Bayesian parameter inference in large-scale structure (LSS) assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using HOD and a distance metric based on galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
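The core ABC loop is simple: draw parameters from the prior, forward-simulate, and keep draws whose summary statistics land within a tolerance of the observed summaries. A minimal rejection-sampling sketch with a toy Gaussian-mean problem standing in for the HOD setup (the population Monte Carlo refinement used in the paper is omitted):

```python
import random
import statistics

# Minimal ABC rejection sketch (toy stand-in for the paper's HOD setup):
# infer the mean of a Gaussian with no explicit likelihood, using only a
# forward simulator and a distance between summary statistics.

random.seed(42)
TRUE_MU = 2.0
data = [random.gauss(TRUE_MU, 1.0) for _ in range(100)]
obs_summary = statistics.mean(data)          # observed summary statistic

def simulate(mu, n=100):
    """Forward generative model: summary statistic of simulated data."""
    return statistics.mean(random.gauss(mu, 1.0) for _ in range(n))

epsilon = 0.1                                # distance tolerance
accepted = []
while len(accepted) < 200:
    mu = random.uniform(-5.0, 5.0)           # draw from a broad uniform prior
    if abs(simulate(mu) - obs_summary) < epsilon:
        accepted.append(mu)                  # accepted draws sample posterior

posterior_mean = statistics.mean(accepted)
print(posterior_mean)
```

In the paper's application the summary statistics are the galaxy number density, two-point correlation function, and group multiplicity function, and importance sampling replaces the brute-force rejection used here.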

  20. Boom Minimization Framework for Supersonic Aircraft Using CFD Analysis

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Rallabhandi, Sriram K.

    2010-01-01

    A new framework is presented for shape optimization using analytical shape functions and high-fidelity computational fluid dynamics (CFD) via Cart3D. The focus of the paper is the system-level integration of several key enabling analysis tools and automation methods to perform shape optimization and reduce sonic boom footprint. A boom mitigation case study subject to performance, stability and geometrical requirements is presented to demonstrate a subset of the capabilities of the framework. Lastly, a design space exploration is carried out to assess the key parameters and constraints driving the design.

  1. A cosmological exclusion plot: towards model-independent constraints on modified gravity from current and future growth rate data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taddei, Laura; Amendola, Luca, E-mail: laura.taddei@fis.unipr.it, E-mail: l.amendola@thphys.uni-heidelberg.de

Most cosmological constraints on modified gravity are obtained assuming that the cosmic evolution was standard ΛCDM in the past and that the present matter density and power spectrum normalization are the same as in a ΛCDM model. Here we examine how the constraints change when these assumptions are lifted. We focus in particular on the parameter Y (also called G{sub eff}) that quantifies the deviation from the Poisson equation. This parameter can be estimated by comparing with the model-independent growth rate quantity fσ{sub 8}(z) obtained through redshift distortions. We reduce the model dependency in evaluating Y by marginalizing over σ{sub 8} and over the initial conditions, and by absorbing the degenerate parameter Ω{sub m,0} into Y. We use all currently available values of fσ{sub 8}(z). We find that the combination Ŷ = YΩ{sub m,0}, assumed constant in the observed redshift range, can be constrained only very weakly by current data, Ŷ = 0.28{sub −0.23}{sup +0.35} at 68% c.l. We also forecast the precision of a future estimation of Ŷ in a Euclid-like redshift survey. We find that the future constraints will reduce substantially the uncertainty, Ŷ = 0.30{sub −0.09}{sup +0.08} at 68% c.l., but the relative error on Ŷ around the fiducial remains quite high, of the order of 30%. The main reason for these weak constraints is that Ŷ is strongly degenerate with the initial conditions, so that large or small values of Ŷ are compensated by choosing non-standard initial values of the derivative of the matter density contrast. Finally, we produce a forecast of a cosmological exclusion plot on the Yukawa strength and range parameters, which complements similar plots on laboratory scales but explores scales and epochs reachable only with large-scale galaxy surveys. We find that future data can constrain the Yukawa strength to within 3% of the Newtonian one if the range is around a few Megaparsecs. In the particular case of f(R) models, we find that the Yukawa range will be constrained to be larger than 80 Mpc/h or smaller than 2 Mpc/h (95% c.l.), regardless of the specific f(R) model.
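The way Y enters the observable is through the source term of the linear growth equation: a larger effective Newton's constant speeds up growth and raises f = dln δ/dln a. A sketch of this mechanism with a fixed flat ΛCDM background (fiducial numbers are illustrative, and none of the paper's marginalizations are performed):

```python
import math

# Sketch of how a Poisson-equation modifier Y (a.k.a. G_eff/G) changes the
# growth rate f = dln(delta)/dln(a), the quantity probed through f*sigma_8.
# Background fixed to flat LCDM; fiducial numbers are illustrative.

Om0 = 0.3

def Om(a):
    return Om0 / (Om0 + (1.0 - Om0) * a ** 3)

def growth_f(Y, n=20000):
    """Integrate the linear growth ODE in N = ln(a) and return f today:
    delta'' + (2 + dlnH/dN) delta' = 1.5 * Y * Om(a) * delta."""
    N = math.log(1e-3)
    dN = -N / n                     # integrate from a = 1e-3 up to a = 1
    delta, delta_p = 1e-3, 1e-3     # matter-era growing mode: delta ~ a
    for _ in range(n):
        a = math.exp(N)
        dlnH = -1.5 * Om(a)         # flat LCDM background expansion
        d2 = -(2.0 + dlnH) * delta_p + 1.5 * Y * Om(a) * delta
        delta_p += d2 * dN
        delta += delta_p * dN
        N += dN
    return delta_p / delta

f_gr = growth_f(1.0)   # Y = 1: standard Poisson equation
f_mg = growth_f(1.2)   # 20% stronger effective gravity -> faster growth
print(f_gr, f_mg)
```

The degeneracy the authors emphasize is visible here: changing the initial value of delta_p relative to delta can mimic part of the effect of changing Y, which is why they must marginalize over initial conditions.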

  2. Anatomy of the inert two-Higgs-doublet model in the light of the LHC and non-LHC dark matter searches

    NASA Astrophysics Data System (ADS)

    Belyaev, Alexander; Cacciapaglia, Giacomo; Ivanov, Igor P.; Rojas-Abatte, Felipe; Thomas, Marc

    2018-02-01

The inert two-Higgs-doublet model (i2HDM) is a theoretically well-motivated example of a minimal consistent dark matter (DM) model which provides monojet, mono-Z, mono-Higgs, and vector-boson-fusion + E_T^miss signatures at the LHC, complemented by signals in direct and indirect DM search experiments. In this paper we have performed a detailed analysis of the constraints in the full five-dimensional parameter space of the i2HDM, coming from perturbativity, unitarity, electroweak precision data, Higgs data from the LHC, DM relic density, direct/indirect DM detection, and LHC monojet analysis, as well as implications of experimental LHC studies on disappearing charged tracks relevant to a high DM mass region. We demonstrate the complementarity of the above constraints and present projections for future LHC data and direct DM detection experiments to probe further i2HDM parameter space. The model is implemented into the CalcHEP and micrOMEGAs packages, which are publicly available at the HEPMDB database, and it is ready for further exploration in the context of the LHC, relic density, and DM direct detection.

  3. Rock Driller

    NASA Technical Reports Server (NTRS)

    Peterson, Thomas M.

    2001-01-01

The next series of planetary exploration missions requires a method of extracting rock and soil core samples. Therefore, a prototype ultrasonic core driller (UTCD) was developed to meet the constraints of Small Bodies Exploration and Mars Sample Return Missions. The constraints in the design are size, weight, power, and axial loading. The ultrasonic transducer requires a relatively low axial load, which is one of the reasons this technology was chosen. The ultrasonic generator breadboard section can be contained within the 5x5x3 limits and weighs less than two pounds. Based on the results attained, the objectives for the first phase were achieved. A number of transducer probes were made and tested. One version only drills, and the other will actually provide a small core from a rock. Because of a more efficient transducer/probe, it will run at very low power (less than 5 Watts) and still drill/core. The prototype generator was built to allow for variation of all the performance-affecting elements of the transducer/probe/end effector, i.e., pulse, duty cycle, frequency, etc. The heart of the circuitry is what will be converted to a surface mounted board for the next phase, after all the parameters have been optimized and the microprocessor feedback can be installed.

  4. Cosmology with galaxy cluster phase spaces

    NASA Astrophysics Data System (ADS)

    Stark, Alejo; Miller, Christopher J.; Huterer, Dragan

    2017-07-01

We present a novel approach to constrain accelerating cosmologies with galaxy cluster phase spaces. With the Fisher matrix formalism we forecast constraints on the cosmological parameters that describe the cosmological expansion history. We find that our probe has the potential of providing constraints comparable to, or even stronger than, those from other cosmological probes. More specifically, with 1000 (100) clusters uniformly distributed in the redshift range 0 ≤ z ≤ 0.8, after applying a conservative 80% mass scatter prior on each cluster and marginalizing over all other parameters, we forecast 1σ constraints on the dark energy equation of state w and matter density parameter Ω_M of σ_w = 0.138 (0.431) and σ_ΩM = 0.007 (0.025) in a flat universe. Assuming 40% mass scatter and adding a prior on the Hubble constant, we can achieve a constraint on the Chevallier-Polarski-Linder parametrization of the dark energy equation of state parameters w0 and wa with 100 clusters in the same redshift range: σ_w0 = 0.191 and σ_wa = 2.712. Dropping the assumption of flatness and assuming w = -1, we also attain competitive constraints on the matter and dark energy density parameters: σ_ΩM = 0.101 and σ_ΩΛ = 0.197 for 100 clusters uniformly distributed in the range 0 ≤ z ≤ 0.8 after applying a prior on the Hubble constant. We also discuss various observational strategies for tightening constraints in both the near and far future.
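A Fisher forecast of this kind sums the outer products of model derivatives weighted by measurement errors, then inverts to get marginalized parameter errors. A minimal sketch with a hypothetical linear observable standing in for the cluster phase-space observable:

```python
import math

# Toy Fisher-matrix forecast (illustrative; not the paper's observable).
# Model: m(z; a, b) = a + b*z with independent Gaussian errors sigma.
# F_ij = sum_k (dm/dtheta_i)(dm/dtheta_j) / sigma^2; Cov = F^{-1}.

zs = [0.1 * i for i in range(1, 9)]   # mock redshifts 0.1 .. 0.8
sigma = 0.05                          # per-point measurement error (assumed)

F = [[0.0, 0.0], [0.0, 0.0]]
for z in zs:
    grad = [1.0, z]                   # dm/da = 1, dm/db = z
    for i in range(2):
        for j in range(2):
            F[i][j] += grad[i] * grad[j] / sigma ** 2

# Invert the 2x2 Fisher matrix to get the parameter covariance
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
cov = [[F[1][1] / det, -F[0][1] / det],
       [-F[1][0] / det, F[0][0] / det]]
sigma_a, sigma_b = math.sqrt(cov[0][0]), math.sqrt(cov[1][1])
print(sigma_a, sigma_b)
```

The marginalized error sqrt(cov[1][1]) always meets or exceeds the unmarginalized 1/sqrt(F[1][1]), which is the formal counterpart of the paper's "after marginalizing over all other parameters" caveat.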

  5. New Boundary Constraints for Elliptic Systems used in Grid Generation Problems

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.; Clancy, Daniel (Technical Monitor)

    2002-01-01

This paper discusses new boundary constraints for elliptic partial differential equations as used in grid generation problems in generalized curvilinear coordinate systems. These constraints, based on the principle of local conservation of thermal energy in the vicinity of the boundaries, are derived using Green's theorem. They uniquely determine the so-called decay parameters in the source terms of these elliptic systems. These constraints are designed for boundary clustered grids where large gradients in physical quantities need to be resolved adequately. It is observed that the present formulation also works satisfactorily for mild clustering. Therefore, a closure for the decay parameter specification for elliptic grid generation problems has been provided, resulting in a fully automated elliptic grid generation technique. Thus, there is no need for a parametric study of these decay parameters since the new constraints fix them uniquely. It is also shown that for Neumann type boundary conditions, these boundary constraints uniquely determine the solution to the internal elliptic problem, thus eliminating the non-uniqueness of the solution of an internal Neumann boundary value grid generation problem.

  6. Trade-off between Multiple Constraints Enables Simultaneous Formation of Modules and Hubs in Neural Systems

    PubMed Central

    Chen, Yuhan; Wang, Shengjun; Hilgetag, Claus C.; Zhou, Changsong

    2013-01-01

The formation of the complex network architecture of neural systems is subject to multiple structural and functional constraints. Two obvious but apparently contradictory constraints are low wiring cost and high processing efficiency, characterized by short overall wiring length and a small average number of processing steps, respectively. Growing evidence shows that neural networks result from a trade-off between physical cost and functional value of the topology. However, the relationship between these competing constraints and complex topology is not well understood quantitatively. We explored this relationship systematically by reconstructing two known neural networks, Macaque cortical connectivity and C. elegans neuronal connections, from combinatory optimization of wiring cost and processing efficiency constraints, using a control parameter, and comparing the reconstructed networks to the real networks. We found that in both neural systems, the reconstructed networks derived from the two constraints can reveal some important relations between the spatial layout of nodes and the topological connectivity, and match several properties of the real networks. The reconstructed and real networks had a similar modular organization in a broad range of the control parameter, resulting from spatial clustering of network nodes. Hubs emerged due to the competition of the two constraints, and their positions were close to, and partly coincided with, the real hubs over a range of parameter values. The degree of nodes was correlated with the density of nodes in their spatial neighborhood in both reconstructed and real networks. Generally, the rebuilt network matched a significant portion of real links, especially short-distance ones. These findings provide clear evidence to support the hypothesis of trade-off between multiple constraints on brain networks. The two constraints of wiring cost and processing efficiency, however, cannot explain all salient features in the real networks. 
The discrepancy suggests that there are further relevant factors that are not yet captured here. PMID:23505352
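The cost-efficiency trade-off can be made concrete with a tiny toy: score candidate topologies by a weighted sum of wiring length and average path steps, and watch the optimal topology flip as the control parameter changes (the 4-node layout and weights below are illustrative, not the Macaque or C. elegans data):

```python
# Toy version of the wiring-cost vs processing-efficiency trade-off:
#   E = (1 - alpha) * wiring_length + alpha * avg_path_steps
# with a control parameter alpha, on a hypothetical 4-node line layout.

pos = [0, 1, 2, 3]   # node coordinates on a line (arbitrary units)

def avg_steps(edges, n=4):
    """All-pairs mean shortest path length in hops, via BFS."""
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    total = 0
    for s in range(n):
        seen, frontier, d = {s}, {s}, 0
        dists = {s: 0}
        while frontier:
            d += 1
            frontier = {v for u in frontier for v in adj[u] if v not in seen}
            seen |= frontier
            for v in frontier:
                dists[v] = d
        total += sum(dists.values())
    return total / (n * (n - 1))

chain = [(0, 1), (1, 2), (2, 3)]        # cheap wiring, longer paths
chain_plus_shortcut = chain + [(0, 3)]  # extra wiring, shorter paths

def energy(edges, alpha):
    wiring = sum(abs(pos[i] - pos[j]) for i, j in edges)
    return (1 - alpha) * wiring + alpha * avg_steps(edges)
```

At small alpha (cost-dominated) the bare chain wins; at alpha near 1 (efficiency-dominated) the long-range shortcut pays for itself, which is the mechanism behind the emergence of hubs in the reconstructed networks.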

  7. A motion-constraint logic for moving-base simulators based on variable filter parameters

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.

    1974-01-01

    A motion-constraint logic for moving-base simulators has been developed that is a modification to the linear second-order filters generally employed in conventional constraints. In the modified constraint logic, the filter parameters are not constant but vary with the instantaneous motion-base position to increase the constraint as the system approaches the positional limits. With the modified constraint logic, accelerations larger than originally expected are limited while conventional linear filters would result in automatic shutdown of the motion base. In addition, the modified washout logic has frequency-response characteristics that are an improvement over conventional linear filters with braking for low-frequency pilot inputs. During simulated landing approaches of an externally blown flap short take-off and landing (STOL) transport using decoupled longitudinal controls, the pilots were unable to detect much difference between the modified constraint logic and the logic based on linear filters with braking.
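The central idea, filter parameters that vary with instantaneous base position so the constraint stiffens near the travel limits, can be sketched with a simple second-order system (all gains, limits, and the constant-command test below are hypothetical; this illustrates the idea, not the NASA logic itself):

```python
# Sketch of a variable-parameter motion constraint: a second-order filter
# whose natural frequency rises as the base nears its positional limit,
# so large sustained commands are contained instead of triggering shutdown.
# All numbers are hypothetical.

LIMIT = 1.0   # m, positional travel limit
DT = 0.001    # s, integration step
ZETA = 0.7    # fixed damping ratio

def natural_freq(x, w_min=1.0, w_max=8.0):
    """Filter natural frequency grows as |x| approaches the limit."""
    return w_min + (w_max - w_min) * min(abs(x) / LIMIT, 1.0)

def simulate(accel_cmd, t_end=5.0):
    """Respond to a sustained commanded acceleration; return peak excursion."""
    x = v = 0.0
    peak = 0.0
    for _ in range(int(t_end / DT)):
        w = natural_freq(x)
        a = accel_cmd - 2.0 * ZETA * w * v - w * w * x
        v += a * DT
        x += v * DT
        peak = max(peak, abs(x))
    return peak

# A fixed w = 1 filter would settle at x = 2 m and overrun the 1 m limit;
# the variable-parameter filter contains the same command within it.
peak = simulate(2.0)
print(peak)
```

This is the behavior the abstract describes: accelerations larger than originally expected are limited rather than driving the base into an automatic shutdown.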

  8. Single-field consistency relations of large scale structure part III: test of the equivalence principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Creminelli, Paolo; Gleyzes, Jérôme; Vernizzi, Filippo

    2014-06-01

The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10{sup −3}–10{sup −4} on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.

  9. Artificial Intelligence Support for Landing Site Selection on Mars

    NASA Astrophysics Data System (ADS)

    Rongier, G.; Pankratius, V.

    2017-12-01

    Mars is a key target for planetary exploration; a better understanding of its evolution and habitability requires roving in situ. Landing site selection is becoming more challenging for scientists as new instruments generate higher data volumes. The involved engineering and scientific constraints make site selection and the anticipation of possible onsite actions into a complex optimization problem: there may be multiple acceptable solutions depending on various goals and assumptions. Solutions must also account for missing data, errors, and potential biases. To address these problems, we propose an AI-informed decision support system that allows scientists, mission designers, engineers, and committees to explore alternative site selection choices based on data. In particular, we demonstrate first results of an exploratory case study using fuzzy logic and a simulation of a rover's mobility map based on the fast marching algorithm. Our system computes favorability maps of the entire planet to facilitate landing site selection and allows a definition of different configurations for rovers, science target priorities, landing ellipses, and other constraints. For a rover similar to NASA's Mars 2020 rover, we present results in form of a site favorability map as well as four derived exploration scenarios that depend on different prioritized scientific targets, all visualizing inherent tradeoffs. Our method uses the NASA PDS Geosciences Node and the NASA/ICA Integrated Database of Planetary Features. Under common assumptions, the data products reveal Eastern Margaritifer Terra and Meridiani Planum to be the most favorable sites due to a high concentration of scientific targets and a flat, easily navigable surface. Our method also allows mission designers to investigate which constraints have the highest impact on the mission exploration potential and to change parameter ranges. 
Increasing the elevation limit for landing, for example, provides access to many additional, more interesting sites on the southern terrains of Mars. The speed of current rovers is another limit to exploration capabilities: our system helps quantify how speed increases can improve the number of reachable targets in the search space. We acknowledge support from NASA AISTNNX15AG84G (PI Pankratius) and NSF ACI1442997 (PI Pankratius).
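
    The mobility-map idea above can be sketched in a few lines. The grid costs, the start cell, and the Dijkstra-style traversal below are illustrative stand-ins: a true fast marching solver propagates a continuous arrival-time front, which this discrete approximation only mimics.

```python
import heapq

# Hypothetical slope-dependent traversal costs (e.g., seconds per cell);
# None marks cells too steep for the rover to cross.
cost = [
    [1, 1, 2, None],
    [1, 3, 2, 1],
    [1, None, 1, 1],
    [2, 1, 1, 1],
]

def mobility_map(cost, start):
    """Minimum cumulative traversal cost from `start` to every reachable
    cell (a Dijkstra-style stand-in for the fast marching algorithm)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and cost[nr][nc] is not None:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist

reach = mobility_map(cost, (0, 0))
```

    Cells absent from the returned map are unreachable under the chosen constraints, which is exactly the kind of information a favorability map aggregates.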

  10. Exploring short-GRB afterglow parameter space for observations in coincidence with gravitational waves

    NASA Astrophysics Data System (ADS)

    Saleem, M.; Resmi, L.; Misra, Kuntal; Pai, Archana; Arun, K. G.

    2018-03-01

    Short duration Gamma Ray Bursts (SGRBs) and their afterglows are among the most promising electromagnetic (EM) counterparts of Neutron Star (NS) mergers. The afterglow emission is broad-band, visible across the entire electromagnetic window from γ-ray to radio frequencies. The flux evolution in these frequencies is sensitive to the multidimensional afterglow physical parameter space. Observations of gravitational waves (GW) from BNS mergers in spatial and temporal coincidence with SGRBs and their associated afterglows can provide valuable constraints on afterglow physics. We run simulations of GW-detected BNS events and, assuming that all of them are associated with a GRB jet that also produces an afterglow, investigate how detections or non-detections in X-ray, optical and radio frequencies are influenced by the parameter space. We narrow down the regions of afterglow parameter space for a uniform top-hat jet model that would result in different detection scenarios. We list the inferences that can be drawn on the physics of GRB afterglows from multimessenger astronomy with coincident GW-EM observations.

  11. The Downwind Hemisphere of the Heliosphere as Observed with IBEX-Lo from 2009 to 2015

    NASA Astrophysics Data System (ADS)

    Wurz, P.; Galli, A.; Schwadron, N.; Kucharek, H.; Moebius, E.; Bzowski, M.; Sokol, J. M.; Kubiak, M. A.; Funsten, H. O.; Fuselier, S. A.; McComas, D. J.

    2017-12-01

    The topic of this study is the vast region towards the tail of the heliosphere. To this end, we comprehensively analyzed energetic neutral hydrogen atoms (ENAs) of energies 10 eV to 2.5 keV from the downwind hemisphere of the heliosheath measured during the first 7 years of the IBEX (Interstellar Boundary Explorer) mission. Neutralized ions from the heliosheath (the region of slow solar wind plasma between termination shock and heliopause) can be remotely observed as ENAs down to 10 eV with the IBEX-Lo sensor onboard IBEX. This sensor covers those energies of the ion spectrum that dominate the total plasma pressure in the downwind region. So far, this region of the heliosphere has never been explored in-situ. Converting observations obtained near Earth orbit at these low energies to the original ion distributions in the heliocentric rest frame at 100 AU is very challenging, making the assessment of uncertainties and implicit assumptions crucial. From the maps of observed ENAs from the heliosheath and their uncertainties we derive observational constraints on heliospheric models for the downwind hemisphere. These constraints limit the possible range of 1) the distance of the termination shock, 2) the total plasma pressure across the termination shock, 3) the radial flow velocity of the heliosheath plasma, 4) the extinction length of said plasma, and finally 5) the dimension of the heliosheath in downwind directions. Because these parameters are coupled and because of observational limitations, we also need to characterize the degeneracy, i.e., the fact that different sets of parameters may reproduce the observations.

  12. Exploring CP violation in the MSSM.

    PubMed

    Arbey, Alexandre; Ellis, John; Godbole, Rohini M; Mahmoudi, Farvah

    We explore the prospects for observing CP violation in the minimal supersymmetric extension of the Standard Model (MSSM) with six CP-violating parameters, three gaugino mass phases and three phases in trilinear soft supersymmetry-breaking parameters, using the CPsuperH code combined with a geometric approach to maximise CP-violating observables subject to the experimental upper bounds on electric dipole moments. We also implement CP-conserving constraints from Higgs physics, flavour physics and the upper limits on the cosmological dark matter density and spin-independent scattering. We study possible values of observables within the constrained MSSM (CMSSM), the non-universal Higgs model (NUHM), the CPX scenario and a variant of the phenomenological MSSM (pMSSM). We find values of the CP-violating asymmetry [Formula: see text] in [Formula: see text] decay that may be as large as 3 %, so future measurements of [Formula: see text] may provide independent information about CP violation in the MSSM. We find that CP-violating MSSM contributions to the [Formula: see text] meson mass mixing term [Formula: see text] are in general below the present upper limit, which is dominated by theoretical uncertainties. If these could be reduced, [Formula: see text] could also provide an interesting and complementary constraint on the six CP-violating MSSM phases, enabling them all to be determined experimentally, in principle. We also find that CP violation in the [Formula: see text] and [Formula: see text] couplings can be quite large, and so may offer interesting prospects for future [Formula: see text], [Formula: see text], [Formula: see text] and [Formula: see text] colliders.

  13. 6 DOF synchronized control for spacecraft formation flying with input constraint and parameter uncertainties.

    PubMed

    Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang

    2011-10-01

    This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraints and parameter uncertainties. More specifically, a backstepping-based robust control is first developed for the full 6 DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. This controller is then redesigned to deal with the input constraint problem by incorporating a command filter, such that the generated control remains implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Illustrative simulations of spacecraft formation flying, compared with conventional methods, verify that the proposed approach enables the spacecraft to track the desired attitude and position trajectories in a synchronized fashion even in the presence of uncertainties, external disturbances and control saturation constraints. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
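
    As a hedged illustration of the command-filter idea (a generic sketch, not the paper's actual backstepping design), a first-order filter with magnitude saturation keeps the applied command within actuator limits while smoothly tracking the raw control signal. The gains and limits below are invented.

```python
def command_filter(u_cmd, u_prev, dt, tau, u_max):
    """First-order command filter with magnitude saturation: the output
    tracks the saturated command with time constant tau, so the control
    actually applied always respects the actuator limit u_max."""
    sat = max(-u_max, min(u_max, u_cmd))          # magnitude constraint
    return u_prev + (dt / tau) * (sat - u_prev)   # first-order lag

# Filter a step command (5.0) that exceeds the actuator limit (2.0)
u, history = 0.0, []
for _ in range(200):
    u = command_filter(5.0, u, dt=0.01, tau=0.1, u_max=2.0)
    history.append(u)
```

    The filtered command rises smoothly and settles at the saturation limit instead of demanding an unrealizable control effort.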

  14. Multiparameter elastic full waveform inversion with facies-based constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-06-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize FWI beyond improved acoustic imaging, as in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface-collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches include such constraints either as a priori knowledge, mostly valid around the well, or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (which enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of the inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of the estimated facies maps. Four numerical examples corresponding to different acquisition geometries, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.

  15. Constraints on a scale-dependent bias from galaxy clustering

    NASA Astrophysics Data System (ADS)

    Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.

    2017-01-01

    We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of H α emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. In our analysis we have obtained two main results. First, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor up to 2, depending on the bias model adopted. Second, we find that the linear bias parameter b0 can be estimated with 1%-2% accuracy at various redshifts regardless of the fiducial model. The nonlinear bias parameters have significantly larger errors that depend on the model adopted. Despite this, in the more realistic scenarios departures from the simple linear bias prescription can be detected with a ∼2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar to or even larger than the statistical ones.
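
    The Fisher-matrix machinery behind such forecasts can be sketched for a toy two-parameter model. The observable, data abscissas and errors below are invented for illustration; the paper's actual observable is the galaxy power spectrum, not this power law.

```python
import math

def fisher(xs, sigma, b0, g):
    """Fisher matrix F_ij = sum_k (dm/dp_i)(dm/dp_j)/sigma^2 for the toy
    observable m(x) = b0 * x**g with parameters p = (b0, g)."""
    F = [[0.0, 0.0], [0.0, 0.0]]
    for x in xs:
        m = b0 * x ** g
        dm = [x ** g, m * math.log(x)]   # dm/db0, dm/dg
        for i in range(2):
            for j in range(2):
                F[i][j] += dm[i] * dm[j] / sigma ** 2
    return F

def marginalized_errors(F):
    """1-sigma marginalized errors: sqrt of the diagonal of F^-1 (2x2)."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return (math.sqrt(F[1][1] / det), math.sqrt(F[0][0] / det))

xs = [1.5, 2.0, 2.5, 3.0]                 # made-up data abscissas
F = fisher(xs, sigma=0.05, b0=1.0, g=0.5)
errs = marginalized_errors(F)
```

    Marginalizing over the second parameter can only inflate the error on the first, which is how allowing extra bias parameters degrades σ8 and γ in the analysis above.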

  16. Structural constraints to wilderness: Impacts on visitation and experience

    Treesearch

    Ingrid E. Schneider; Sierra L. Schroeder; Ann. Schwaller

    2011-01-01

    A significant research body on recreation constraints exists, but wilderness constraints research is limited. Like other recreationists, wilderness visitors likely experience a number of constraints, factors that limit leisure preference formation or participation and enjoyment. This project explored how visitors' experiences with and in wilderness are constrained...

  17. Two-dimensional probabilistic inversion of plane-wave electromagnetic data: methodology, model constraints and joint inversion with electrical resistivity data

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, Marina; Linde, Niklas; Kalscheuer, Thomas; Vrugt, Jasper A.

    2014-03-01

    Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
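
    A minimal sketch of the MCMC ingredients described above: a Gaussian data misfit plus a smoothness penalty that plays the role of the model structure constraint, sampled with a Metropolis update. The forward model here is just the identity on a three-cell model, and all numbers are invented.

```python
import math
import random

random.seed(0)

d_obs = [1.0, 1.2, 3.0]   # synthetic data; forward model = identity
sigma = 0.2               # assumed data standard deviation
lam = 5.0                 # model-structure (smoothness) weight

def log_post(m):
    """Log posterior: Gaussian misfit plus a roughness penalty on
    neighboring cells (the model structure constraint in miniature)."""
    misfit = sum((mi - di) ** 2 for mi, di in zip(m, d_obs)) / (2 * sigma ** 2)
    rough = lam * sum((m[i + 1] - m[i]) ** 2 for i in range(len(m) - 1))
    return -(misfit + rough)

m = [0.0, 0.0, 0.0]
samples = []
for it in range(40000):
    i = random.randrange(len(m))      # perturb one model cell at a time
    prop = list(m)
    prop[i] += random.gauss(0.0, 0.1)
    if math.log(random.random()) < log_post(prop) - log_post(m):
        m = prop                      # Metropolis accept
    if it >= 10000:                   # discard burn-in
        samples.append(list(m))

mean = [sum(s[i] for s in samples) / len(samples) for i in range(3)]
```

    With a nonzero smoothness weight the posterior mean is pulled toward a flatter profile than the data alone would give, illustrating how such constraints stabilize the inversion at the price of biasing poorly resolved features.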

  18. THE LITTLEST HIGGS MODEL AND ONE-LOOP ELECTROWEAK PRECISION CONSTRAINTS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHEN, M.C.; DAWSON,S.

    2004-06-16

    We present in this talk the one-loop electroweak precision constraints in the Littlest Higgs model, including the logarithmically enhanced contributions from both fermion and scalar loops. We find the one-loop contributions are comparable to the tree level corrections in some regions of parameter space. A low cutoff scale is allowed for a non-zero triplet VEV. Constraints on various other parameters in the model are also discussed. The role of triplet scalars in constructing a consistent renormalization scheme is emphasized.

  19. Behavioral Dynamics in Swimming: The Appropriate Use of Inertial Measurement Units.

    PubMed

    Guignard, Brice; Rouard, Annie; Chollet, Didier; Seifert, Ludovic

    2017-01-01

    Motor control in swimming can be analyzed using low- and high-order parameters of behavior. Low-order parameters generally refer to the superficial aspects of movement (i.e., position, velocity, acceleration), whereas high-order parameters capture the dynamics of movement coordination. To assess human aquatic behavior, both types have usually been investigated with multi-camera systems, as they offer high three-dimensional spatial accuracy. Research in ecological dynamics has shown that movement system variability can be viewed as a functional property of skilled performers, helping them adapt their movements to the surrounding constraints. Yet to determine the variability of swimming behavior, a large number of stroke cycles (i.e., inter-cyclic variability) has to be analyzed, which is impossible with camera-based systems as they simply record behaviors over restricted volumes of water. Inertial measurement units (IMUs) were designed to explore the parameters and variability of coordination dynamics. These light, transportable and easy-to-use devices offer new perspectives for swimming research because they can record low- to high-order behavioral parameters over long periods. We first review how the low-order behavioral parameters (i.e., speed, stroke length, stroke rate) of human aquatic locomotion and their variability can be assessed using IMUs. We then review the way high-order parameters are assessed and the adaptive role of movement and coordination variability in swimming. We give special focus to the circumstances in which determining the variability between stroke cycles provides insight into how behavior oscillates between stable and flexible states to functionally respond to environmental and task constraints. The last section of the review is dedicated to practical recommendations for coaches on using IMUs to monitor swimming performance. We therefore highlight the need for rigor in dealing with these sensors appropriately in water. 
We explain the fundamental and mandatory steps to follow for accurate results with IMUs, from data acquisition (e.g., waterproofing procedures) to interpretation (e.g., drift correction).
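
    On the drift-correction point, a commonly used first step, assuming the sensor is stationary at the start and end of a trial, is to subtract a linear drift estimate from the integrated gyroscope signal. The sampling rate, bias value and trial length below are invented for illustration.

```python
def integrate(rates, dt):
    """Integrate angular rate samples into an angle time series."""
    angle, out = 0.0, []
    for r in rates:
        angle += r * dt
        out.append(angle)
    return out

def remove_linear_drift(signal):
    """Subtract the straight line through the first and last samples,
    a simple linear drift-correction model."""
    n = len(signal)
    a, b = signal[0], signal[-1]
    return [s - (a + (b - a) * i / (n - 1)) for i, s in enumerate(signal)]

# A constant gyro bias of 0.5 deg/s over 10 s at 100 Hz accumulates
# 5 deg of spurious angle, which the linear correction removes exactly.
dt = 0.01
rates = [0.5] * 1000          # pure bias, no real rotation
drift = integrate(rates, dt)
corrected = remove_linear_drift(drift)
```

    Real IMU drift is only approximately linear, so practical pipelines combine this with sensor fusion or periodic re-zeroing; the sketch shows the principle, not a complete procedure.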

  1. Asymptotic freedom in certain S O (N ) and S U (N ) models

    NASA Astrophysics Data System (ADS)

    Einhorn, Martin B.; Jones, D. R. Timothy

    2017-09-01

    We calculate the β -functions for S O (N ) and S U (N ) gauge theories coupled to adjoint and fundamental scalar representations, correcting longstanding, previous results. We explore the constraints on N resulting from requiring asymptotic freedom for all couplings. When we take into account the actual allowed behavior of the gauge coupling, the minimum value of N in both cases turns out to be larger than realized in earlier treatments. We also show that in the large N limit, both models have large regions of parameter space corresponding to total asymptotic freedom.

  2. Sterile neutrino searches via displaced vertices at LHCb

    NASA Astrophysics Data System (ADS)

    Antusch, Stefan; Cazzato, Eros; Fischer, Oliver

    2017-11-01

    We explore the sensitivity of displaced vertex searches at LHCb for testing sterile neutrino extensions of the Standard Model towards explaining the observed neutrino masses. We derive estimates for the constraints on sterile neutrino parameters from a recently published displaced vertex search at LHCb based on run 1 data. They yield the currently most stringent limit on active-sterile neutrino mixing in the sterile neutrino mass range between 4.5 GeV and 10 GeV. Furthermore, we present forecasts for the sensitivities that could be obtained from the run 2 data and also for the high-luminosity phase of the LHC.

  3. General gauge mediation at the weak scale

    DOE PAGES

    Knapen, Simon; Redigolo, Diego; Shih, David

    2016-03-09

    We completely characterize General Gauge Mediation (GGM) at the weak scale by solving all IR constraints over the full parameter space. This is made possible through a combination of numerical and analytical methods, based on a set of algebraic relations among the IR soft masses derived from the GGM boundary conditions in the UV. We show how tensions between just a few constraints determine the boundaries of the parameter space: electroweak symmetry breaking (EWSB), the Higgs mass, slepton tachyons, and left-handed stop/sbottom tachyons. While these constraints allow the left-handed squarks to be arbitrarily light, they place strong lower bounds on all of the right-handed squarks. Meanwhile, light EW superpartners are generic throughout much of the parameter space. This is especially the case at lower messenger scales, where a positive threshold correction to m_h coming from light Higgsinos and winos is essential in order to satisfy the Higgs mass constraint.

  4. Fundamental properties of Fanaroff-Riley type II radio galaxies investigated via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Kapińska, A. D.; Uttley, P.; Kaiser, C. R.

    2012-08-01

    Radio galaxies and quasars are among the largest and most powerful single objects known and are believed to have had a significant impact on the evolving Universe and its large-scale structure. We explore the intrinsic and extrinsic properties of the population of Fanaroff-Riley type II (FR II) objects, i.e. their kinetic luminosities, lifetimes and the central densities of their environments. In particular, the radio and kinetic luminosity functions of these powerful radio sources are investigated using the complete, flux-limited radio catalogues of the Third Cambridge Revised Revised Catalogue (3CRR) and Best et al. We construct multidimensional Monte Carlo simulations using semi-analytical models of FR II source time evolution to create artificial samples of radio galaxies. Unlike previous studies, we compare radio luminosity functions found with both the observed and simulated data to explore the best-fitting fundamental source parameters. The new Monte Carlo method we present here allows us to (i) set better limits on the predicted fundamental parameters, for which confidence intervals estimated over broad ranges are presented, and (ii) generate the most plausible underlying parent populations of these radio sources. Moreover, as has not been done before, we allow the source physical properties (kinetic luminosities, lifetimes and central densities) to co-evolve with redshift, and we find that all the investigated parameters most likely undergo cosmological evolution. Strikingly, we find that the break in the kinetic luminosity function must undergo redshift evolution of at least (1 + z)3. The fundamental parameters are strongly degenerate, and independent constraints are necessary to draw more precise conclusions. We use the estimated kinetic luminosity functions to set constraints on the duty cycles of these powerful radio sources. 
A comparison of the duty cycles of powerful FR IIs with those determined from radiative luminosities of active galactic nuclei of comparable black hole mass suggests a transition in behaviour from high to low redshifts, corresponding to either a drop in the typical black hole mass of powerful FR IIs at low redshifts, or a transition to a kinetically dominated, radiatively inefficient FR II population.

  5. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  6. Constraints on cosmological parameters in power-law cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rani, Sarita; Singh, J.K.; Altaibayeva, A.

    In this paper, we examine observational constraints on power-law cosmology, which essentially depends on two parameters: H{sub 0} (Hubble constant) and q (deceleration parameter). We investigate the constraints on these parameters using the latest 28 points of H(z) data and 580 points of Union2.1 compilation data, and compare the results with those of ΛCDM. We also forecast constraints using a simulated data set for the future JDEM supernovae survey. Our studies give better insight into power-law cosmology than the earlier analysis by Kumar [arXiv:1109.6924], indicating that it agrees well with the Union2.1 compilation data but not with the H(z) data. However, the constraints obtained on H{sub 0} and q (i.e., the average H{sub 0} and average q) using the simulated data set for the future JDEM supernovae survey are found to be inconsistent with the values obtained from the H(z) and Union2.1 compilation data. We also perform the statefinder analysis and find that the power-law cosmological models approach the standard ΛCDM model as q → −1. Finally, we observe that although power-law cosmology explains several prominent features of the evolution of the Universe, it fails in the details.
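
    In power-law cosmology a(t) ∝ t^β, the Hubble rate obeys H(z) = H{sub 0}(1+z)^(1+q) with q = 1/β − 1, so a χ² comparison against H(z) data is straightforward to sketch. The three data points and the grid below are fabricated for illustration, not the 28-point compilation used in the paper.

```python
def H(z, H0, q):
    """Hubble rate in power-law cosmology: H(z) = H0 * (1+z)**(1+q)."""
    return H0 * (1.0 + z) ** (1.0 + q)

def chi2(data, H0, q):
    """Chi-square of the model against (z, H_obs, error) triples."""
    return sum(((H(z, H0, q) - h) / err) ** 2 for z, h, err in data)

# Fabricated H(z) points, roughly consistent with H0 = 70, q = -0.5
data = [(0.5, 86.0, 5.0), (1.0, 99.0, 6.0), (1.5, 110.0, 8.0)]

# Coarse grid scan over (H0, q); a real analysis would use MCMC
best = min(
    ((chi2(data, H0, q), H0, q)
     for H0 in range(60, 81)
     for q in [i / 20 - 1.0 for i in range(21)]),
    key=lambda t: t[0],
)
best_chi2, best_H0, best_q = best
```

    Because H{sub 0} and q enter multiplicatively and in the exponent, they are only mildly degenerate over a modest redshift range, which is why even a small H(z) sample constrains both.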

  7. Definition of the Design Trajectory and Entry Flight Corridor for the NASA Orion Exploration Mission 1 Entry Trajectory Using an Integrated Approach and Optimization

    NASA Technical Reports Server (NTRS)

    McNamara, Luke W.; Braun, Robert D.

    2014-01-01

    One of the key design objectives of NASA's Orion Exploration Mission 1 (EM-1) is to execute a guided entry trajectory demonstrating GN&C capability. The focus of this paper is defining the flyable entry corridor for EM-1, taking into account multiple subsystem constraints such as complex aerothermal heating constraints, aerothermal heating objectives, landing accuracy constraints, structural load limits, Human-System-Integration Requirements, Service Module debris disposal limits and other flight test objectives. During EM-1 Design Analysis Cycle 1, design challenges arose that made defining the flyable entry corridor critical to mission success. This document details the optimization techniques that were explored for use with the 6-DOF ANTARES simulation to assist in defining the design entry interface state and entry corridor with respect to key flight test constraints and objectives.

  8. Fourier transform inequalities for phylogenetic trees.

    PubMed

    Matsen, Frederick A

    2009-01-01

    Phylogenetic invariants are not the only constraints on site-pattern frequency vectors for phylogenetic trees. A mutation matrix, by its definition, is the exponential of a matrix with non-negative off-diagonal entries; this positivity requirement implies non-trivial constraints on the site-pattern frequency vectors. We call these additional constraints "edge-parameter inequalities". In this paper, we first motivate the edge-parameter inequalities by considering a pathological site-pattern frequency vector corresponding to a quartet tree with a negative internal edge. This site-pattern frequency vector nevertheless satisfies all of the constraints described up to now in the literature. We next describe two complete sets of edge-parameter inequalities for the group-based models; these constraints are square-free monomial inequalities in the Fourier transformed coordinates. These inequalities, along with the phylogenetic invariants, form a complete description of the set of site-pattern frequency vectors corresponding to bona fide trees. Said in mathematical language, this paper explicitly presents two finite lists of inequalities in Fourier coordinates of the form "monomial ≤ 1", each list characterizing the phylogenetically relevant semialgebraic subsets of the phylogenetic varieties.
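
    A two-state (CFN) simplification illustrates the edge-parameter inequality, though it is assumed here for illustration and is simpler than the general group-based setting of the paper. For a symmetric rate matrix with non-negative rate a, the substitution probability is p = (1 − e^(−2a))/2, so the Fourier coordinate θ = 1 − 2p must lie in (0, 1]; p > 1/2 cannot arise from any non-negative rate matrix.

```python
import math

def edge_parameter(p):
    """Branch length a recovered from substitution probability p under
    the two-state symmetric (CFN) model; returns None when the Fourier
    coordinate theta = 1 - 2p violates the positivity inequality."""
    theta = 1.0 - 2.0 * p
    if theta <= 0.0:
        return None                  # inequality theta > 0 violated
    return -0.5 * math.log(theta)    # a >= 0 whenever 0 <= p < 1/2
```

    A vector with p = 0.6 on some edge may satisfy all polynomial invariants yet correspond to no bona fide tree, which is exactly the gap the edge-parameter inequalities close.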

  9. User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Coleman, Kayla; Gilkey, Lindsay N.

    Sandia’s Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. In its simplest mode, Dakota can automate typical parameter variation studies through a generic interface to a physics-based computational model. This can lend efficiency and rigor to manual parameter perturbation studies already being conducted by analysts. However, Dakota also delivers advanced parametric analysis techniques enabling design exploration, optimization, model calibration, risk analysis, and quantification of margins and uncertainty with such models. It directly supports verification and validation activities. Dakota algorithms enrich complex science and engineering models, enabling an analyst to answer crucial questions of: Sensitivity: Which are the most important input factors or parameters entering the simulation, and how do they influence key outputs? Uncertainty: What is the uncertainty or variability in simulation output, given uncertainties in input parameters? How safe, reliable, robust, or variable is my system? (Quantification of margins and uncertainty, QMU.) Optimization: What parameter values yield the best performing design or operating condition, given constraints? Calibration: What models and/or parameters best match experimental data? In general, Dakota is the Consortium for Advanced Simulation of Light Water Reactors (CASL) delivery vehicle for verification, validation, and uncertainty quantification (VUQ) algorithms. It permits ready application of the VUQ methods described above to simulation codes by CASL researchers, code developers, and application engineers.
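
    The "simplest mode" described above, automated parameter variation with a sensitivity summary, can be emulated in plain Python. The toy model and parameter ranges are invented; Dakota itself drives a real simulation through its generic interface rather than a Python function.

```python
import random

random.seed(42)

def model(k, q):
    """Stand-in for an expensive physics simulation with inputs k, q."""
    return k ** 2 + 0.1 * q

# Sample the input space (a crude Monte Carlo parameter study)
samples = [(random.uniform(0.5, 1.5), random.uniform(-1.0, 1.0))
           for _ in range(2000)]
outputs = [model(k, q) for k, q in samples]

# Uncertainty: output mean and variance induced by input variability
mean = sum(outputs) / len(outputs)
var = sum((y - mean) ** 2 for y in outputs) / len(outputs)

def corr(xs, ys):
    """Pearson correlation, a simple input-output sensitivity measure."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

sens_k = corr([k for k, _ in samples], outputs)
sens_q = corr([q for _, q in samples], outputs)
```

    The correlation-based sensitivities correctly identify k as the dominant input here; Dakota provides far richer variants (Sobol indices, surrogate-based studies) of the same workflow.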

  10. Exploring the implication of climate process uncertainties within the Earth System Framework

    NASA Astrophysics Data System (ADS)

    Booth, B.; Lambert, F. H.; McNeal, D.; Harris, G.; Sexton, D.; Boulton, C.; Murphy, J.

    2011-12-01

    Uncertainties in the magnitude of future climate change have been a focus of a great deal of research. Much of the work with General Circulation Models has focused on the atmospheric response to changes in atmospheric composition, while other processes remain outside these frameworks. Here we introduce an ensemble of new simulations, based on an Earth System configuration of HadCM3C, designed to explore uncertainties in both physical (atmospheric, oceanic and aerosol physics) and carbon cycle processes, using perturbed parameter approaches previously used to explore atmospheric uncertainty. Framed in the context of the climate response to future changes in emissions, the resulting future projections represent significantly broader uncertainty than existing concentration-driven GCM assessments. The systematic nature of the ensemble design enables interactions between components to be explored. For example, we show how metrics of physical processes (such as climate sensitivity) are also influenced by carbon cycle parameters. The suggestion from this work is that carbon cycle processes contribute as much to uncertainty in future climate projections as the atmospheric feedbacks more conventionally explored. The broad range of climate responses explored within these ensembles, rather than representing a reason for inaction, provides information on lower-likelihood but high-impact changes. For example, while the majority of these simulations suggest that future Amazon forest extent is resilient to the projected climate changes, a small number simulate dramatic forest dieback. This ensemble represents a framework to examine these risks, breaking them down into physical processes (such as ocean temperature drivers of rainfall change) and vegetation processes (where uncertainties point towards requirements for new observational constraints).

  11. Observing binary black hole ringdowns by advanced gravitational wave detectors

    NASA Astrophysics Data System (ADS)

    Maselli, Andrea; Kokkotas, Kostas D.; Laguna, Pablo

    2017-05-01

    The direct discovery of gravitational waves from compact binary systems makes it possible, for the first time, to explore black hole spectroscopy. Newly formed black holes produced by coalescence events are copious emitters of gravitational radiation in the form of damped sinusoids, the quasinormal modes. These modes provide a precious source of information on the nature of gravity in the strong-field regime, as they represent a powerful tool to investigate the validity of the no-hair theorem. In this work we perform a systematic study of the accuracy with which current and future interferometers will measure the fundamental parameters of ringdown events, such as frequencies and damping times. We analyze how these errors affect the estimates of the mass and the angular momentum of the final black hole, constraining the region of parameter space that will lead to the most precise measurements. We explore both single-mode and multimode events, showing how the uncertainties evolve when multiple detectors are available. We also prove that, for the second generation of interferometers, a network of instruments is a crucial and necessary ingredient to perform strong-gravity tests of the no-hair theorem. Finally, we analyze the constraints that a third generation of detectors may be able to set on the modes' parameters, comparing the projected bounds against those obtained for current facilities.
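
    The ringdown analysis above rests on modeling each quasinormal mode as a damped sinusoid and asking how well its frequency and damping time can be recovered. A minimal illustration of that recovery on synthetic data (the values of f and tau are arbitrary, not physical black-hole modes, and the zero-crossing/peak-decay estimators below are a toy stand-in for the paper's matched-filter error analysis):

```python
import numpy as np

# synthetic ringdown: a single damped sinusoid (illustrative values)
f_true, tau_true = 250.0, 0.02          # Hz, seconds
dt = 1e-5
t = np.arange(0.0, 0.05, dt)
h = np.exp(-t / tau_true) * np.sin(2 * np.pi * f_true * t)

# frequency from the zero-crossing rate
crossings = np.sum(np.sign(h[1:]) != np.sign(h[:-1]))
f_est = crossings / (2.0 * t[-1])

# damping time from the decay of the positive oscillation peaks:
# peak values lie on the envelope, so log(peak) vs. time has slope -1/tau
peaks = (h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]) & (h[1:-1] > 0)
tp, hp = t[1:-1][peaks], h[1:-1][peaks]
slope = np.polyfit(tp, np.log(hp), 1)[0]
tau_est = -1.0 / slope

assert abs(f_est - f_true) / f_true < 0.05
assert abs(tau_est - tau_true) / tau_true < 0.10
```

    With detector noise added, the spread of such estimates over many realizations is what sets the measurement errors the abstract refers to.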

  12. Tokunaga river networks: New empirical evidence and applications to transport problems

    NASA Astrophysics Data System (ADS)

    Tejedor, A.; Zaliapin, I. V.

    2013-12-01

    The Tokunaga self-similarity has proven to be an important constraint for observed river networks. Notably, various Horton laws are naturally satisfied by Tokunaga networks, which makes this model of considerable interest for theoretical analysis and modeling of environmental transport. Recall that Horton self-similarity is a weaker property of a tree graph that addresses its principal branching; it is a counterpart of the power-law size distribution for a system's elements. The stronger Tokunaga self-similarity addresses so-called side branching; it ensures that different levels of a hierarchy have the same probabilistic structure (in a sense that can be rigorously defined). We describe an improved statistical framework for testing self-similarity in a finite tree and estimating the related parameters. The developed inference is applied to the major river basins of the continental United States and the Iberian Peninsula. The results demonstrate the validity of the Tokunaga model for the majority of the examined networks, with a very narrow (universal) range of parameter values. Next, we explore possible relationships between Tokunaga parameter anomalies (deviations from the universal values) and the climatic and geomorphologic characteristics of a region. Finally, we apply the Tokunaga model to explore the vulnerability of river networks, defined via the reaction of river discharge to a storm.

  13. Rational Design of Glucose-Responsive Insulin Using Pharmacokinetic Modeling.

    PubMed

    Bakh, Naveed A; Bisker, Gili; Lee, Michael A; Gong, Xun; Strano, Michael S

    2017-11-01

    A glucose-responsive insulin (GRI) is a therapeutic that modulates its potency, concentration, or dosing in relation to a patient's dynamic glucose concentration, thereby approximating aspects of a normally functioning pancreas. Current GRI design lacks a theoretical framework on which to base fundamental design parameters such as glucose reactivity, dissociation constant or potency, and in vivo efficacy. In this work, an approach to mathematically model the relevant parameter space for effective GRIs is introduced, and design rules for linking GRI performance to therapeutic benefit are developed. Well-developed pharmacokinetic models of human glucose and insulin metabolism, coupled to a kinetic model representation of a freely circulating GRI, are used to determine the desired kinetic parameters and dosing for optimal glycemic control. The model identifies a subcutaneous dose of GRI with kinetic parameters in an optimal range that results in successful glycemic control within prescribed constraints over a 24 h period. Additionally, it is demonstrated that the modeling approach can find GRI parameters that enable stable glucose levels that persist through a skipped meal. The results provide a framework for exploring the parameter space of GRIs, potentially without extensive, iterative in vivo animal testing. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
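
    The strategy above couples a glucose-insulin pharmacokinetic model to a kinetic model of the GRI and scans the GRI's kinetic parameters. As a heavily simplified, hypothetical sketch (a single glucose pool, a Hill-function glucose-responsive activity, and illustrative rate constants that are not from the paper), scanning a GRI parameter such as the dissociation constant Kd looks like:

```python
import numpy as np

def simulate(Kd=7.0, n=2.0, k=0.3, hours=24.0, dt=0.01):
    """Euler integration of a toy single-pool glucose model with a GRI
    whose activity follows a Hill function of glucose (all values and
    the model form are illustrative, not the authors' PK/PD model)."""
    G, trace = 5.0, []                   # mmol/L, start at normoglycemia
    for _ in range(int(hours / dt)):
        activity = G**n / (G**n + Kd**n)          # glucose-responsive potency
        dG = 0.5 - k * activity * G - 0.02 * (G - 5.0)   # input - clearance
        G += dt * dG
        trace.append(G)
    return np.array(trace)

G = simulate()
assert G.min() > 3.0 and G.max() < 8.0   # glycemia stays in a safe band
```

    Sweeping Kd (and the other kinetic parameters) over a grid and checking which combinations keep the trace inside prescribed bounds is the kind of parameter-space exploration the abstract describes; a weaker glucose response (larger Kd) settles at a higher glucose level.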

  14. Earthquake sequence simulations with measured properties for JFAST core samples

    NASA Astrophysics Data System (ADS)

    Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro

    2017-08-01

    Since the 2011 Tohoku-Oki earthquake, multi-disciplinary observational studies have advanced our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. Laboratory experiments have likewise suggested values for the mechanical properties of the fault. In the present study, numerical models of earthquake sequences are presented that account for the experimental outcomes and are consistent with observations of both long-term and coseismic fault behaviour and with thermal measurements. Among the constraints, a previous study of friction experiments on samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex rate dependences: the a and a-b values change with the slip rate. In order to express such complexity, we generalize a rate- and state-dependent friction law to a quadratic form in the logarithmic slip rate. The experimental constraints reduced the degrees of freedom of the model significantly, and we managed to find a plausible model by changing only a few parameters. Although potential scale effects between laboratory experiments and natural faults remain an important problem, experimental data may be useful as a guide in exploring the huge model parameter space. This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'.
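
    One plausible reading of the quadratic generalization described above (the coefficient names and values here are illustrative, not the JFAST-derived ones) is a steady-state friction coefficient that is quadratic in the logarithmic slip rate, so the effective rate dependence d(mu)/d(ln V) itself varies with slip rate and can change sign:

```python
import math

def steady_state_friction(V, mu0=0.6, c1=0.01, c2=0.004, V0=1e-6):
    """Friction quadratic in the log slip rate, x = ln(V/V0):
    mu = mu0 + c1*x + c2*x**2.  Coefficients are illustrative only."""
    x = math.log(V / V0)
    return mu0 + c1 * x + c2 * x * x

def rate_dependence(V, c1=0.01, c2=0.004, V0=1e-6):
    """d(mu)/d(ln V) for the quadratic law; a linear function of ln V."""
    return c1 + 2.0 * c2 * math.log(V / V0)

# the quadratic term lets the fault be velocity-weakening at slow slip
# and velocity-strengthening at fast slip (or vice versa):
assert rate_dependence(1e-9) < 0    # weakening at slow slip rates
assert rate_dependence(1e-3) > 0    # strengthening at fast slip rates
```

    In a standard rate-and-state law the rate dependence a-b is a constant; making it a linear function of ln V, as above, is what allows a single parameter set to match experiments spanning many decades of slip rate.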

  15. Cosmological Constraints from the Redshift Dependence of the Alcock–Paczynski Effect: Dynamical Dark Energy

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Dong; Sabiu, Cristiano G.; Park, Changbom; Wang, Yuting; Zhao, Gong-bo; Park, Hyunbae; Shafieloo, Arman; Kim, Juhan; Hong, Sungwook E.

    2018-04-01

    We perform an anisotropic clustering analysis of 1,133,326 galaxies from the Sloan Digital Sky Survey (SDSS-III) Baryon Oscillation Spectroscopic Survey Data Release 12, covering the redshift range 0.15 < z < 0.69. The geometrical distortions of the galaxy positions, caused by incorrect assumptions in the cosmological model, are captured in the anisotropic two-point correlation function on scales of 6–40 h⁻¹ Mpc. The redshift evolution of this anisotropic clustering is used to place constraints on the cosmological parameters. We improve the methodology of Li et al. to enable efficient exploration of high-dimensional cosmological parameter spaces, and apply it to the Chevallier–Polarski–Linder parameterization of dark energy, w = w_0 + w_a z/(1 + z). In combination with data on the cosmic microwave background, baryon acoustic oscillations, Type Ia supernovae, and H_0 from Cepheids, we obtain Ω_m = 0.301 ± 0.008, w_0 = −1.042 ± 0.067, and w_a = −0.07 ± 0.29 (68.3% CL). Adding our new Alcock–Paczynski measurements to the aforementioned results reduces the error bars by ∼30%–40% and improves the dark-energy figure of merit by a factor of ∼2. We check the robustness of the results using realistic mock galaxy catalogs.
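
    For reference, the Chevallier-Polarski-Linder equation of state used above is a one-line function of redshift; evaluating it with the best-fit values quoted in the abstract as illustrative defaults:

```python
def w_cpl(z, w0=-1.042, wa=-0.07):
    """Chevallier-Polarski-Linder dark-energy equation of state:
    w(z) = w0 + wa * z / (1 + z).  Defaults are the best-fit values
    quoted in the abstract, used here purely for illustration."""
    return w0 + wa * z / (1.0 + z)

assert w_cpl(0.0) == -1.042                 # today, w -> w0
assert abs(w_cpl(1e6) - (-1.042 - 0.07)) < 1e-4   # high z, w -> w0 + wa
```

    Note that w(z) interpolates smoothly between w0 at z = 0 and w0 + wa in the early universe, which is why the pair (w0, wa) is a convenient two-parameter test of dynamical dark energy.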

  16. Behavioural variability and motor performance: Effect of practice specialization in front crawl swimming.

    PubMed

    Seifert, L; De Jesus, K; Komar, J; Ribeiro, J; Abraldes, J A; Figueiredo, P; Vilas-Boas, J P; Fernandes, R J

    2016-06-01

    The aim was to examine behavioural variability within and between individuals in a swimming task, exploring how swimmers with different specialties (competitive short-distance swimming vs. triathlon) adapt to repeated events of sub-maximal intensity, controlled in speed but of various distances. Five swimmers and five triathletes randomly performed three variants (with steps of 200, 300 and 400 m) of a front crawl incremental step test until exhaustion. A multi-camera system was used to collect and analyse eight kinematic and swimming-efficiency parameters. Analysis of variance showed significant differences between swimmers and triathletes, with a significant individual effect. Cluster analysis grouped these parameters to investigate whether each individual used the same pattern(s), and one or several patterns, to achieve the task goal. Results exhibited ten patterns for the whole population, with only two behavioural patterns shared between swimmers and triathletes. Swimmers tended to use a higher hand velocity and index of coordination than triathletes. Mono-stability occurred in swimmers whatever the task constraint, showing high stability, while triathletes revealed bi-stability because they switched to another pattern at mid-distance of the task. Finally, our analysis helped to explain and understand the effect of specialty and, more broadly, individual adaptation to task constraints. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Earthquake sequence simulations with measured properties for JFAST core samples.

    PubMed

    Noda, Hiroyuki; Sawai, Michiyo; Shibazaki, Bunichiro

    2017-09-28

    Since the 2011 Tohoku-Oki earthquake, multi-disciplinary observational studies have advanced our understanding of both the coseismic and long-term behaviour of the Japan Trench subduction zone. Laboratory experiments have likewise suggested values for the mechanical properties of the fault. In the present study, numerical models of earthquake sequences are presented that account for the experimental outcomes and are consistent with observations of both long-term and coseismic fault behaviour and with thermal measurements. Among the constraints, a previous study of friction experiments on samples collected in the Japan Trench Fast Drilling Project (JFAST) showed complex rate dependences: the a and a - b values change with the slip rate. In order to express such complexity, we generalize a rate- and state-dependent friction law to a quadratic form in the logarithmic slip rate. The experimental constraints reduced the degrees of freedom of the model significantly, and we managed to find a plausible model by changing only a few parameters. Although potential scale effects between laboratory experiments and natural faults remain an important problem, experimental data may be useful as a guide in exploring the huge model parameter space. This article is part of the themed issue 'Faulting, friction and weakening: from slow to fast motion'. © 2017 The Author(s).

  18. Constraints on the Energy Content of the Universe from a Combination of Galaxy Cluster Observables

    NASA Technical Reports Server (NTRS)

    Molnar, Sandor M.; Haiman, Zoltan; Birkinshaw, Mark; Mushotzky, Richard F.

    2003-01-01

    We demonstrate that constraints on cosmological parameters from the distribution of clusters as a function of redshift (dN/dz) are complementary to accurate angular diameter distance (D_A) measurements to clusters, and that their combination significantly tightens constraints on the energy density content of the Universe. The number counts can be obtained from X-ray and/or SZ (Sunyaev-Zel'dovich effect) surveys, and the angular diameter distances can be determined from deep observations of the intra-cluster gas using its thermal bremsstrahlung X-ray emission and the SZ effect. We combine constraints from simulated cluster number counts expected from a 12 deg² SZ cluster survey and constraints from simulated angular diameter distance measurements based on the X-ray/SZ method, assuming a statistical accuracy of 10% in the angular diameter distance determination of 100 clusters with redshifts less than 1.5. We find that Ω_m can be determined to within about 25%, Ω_Λ to within 20%, and w to within 16%. We show that combined dN/dz + D_A constraints can be used to constrain the different energy densities in the Universe even in the presence of a few percent redshift-dependent systematic error in D_A. We also address the question of how best to select clusters of galaxies for accurate angular diameter distance determinations. We show that the joint dN/dz + D_A constraints on cosmological parameters for a fixed target accuracy in the energy density parameters are optimized by selecting clusters with redshift upper cut-offs roughly in the range 0.55 to 1. Subject headings: cosmological parameters - cosmology: theory - galaxies: clusters: general

  19. Constraints on dark matter annihilation in clusters of galaxies with the Fermi large area telescope

    DOE PAGES

    Ackermann, M.; Ajello, M.; Allafort, A.; ...

    2010-05-20

    Nearby clusters and groups of galaxies are potentially bright sources of high-energy gamma-ray emission resulting from the pair-annihilation of dark matter particles. However, no significant gamma-ray emission has been detected so far from clusters in the first 11 months of observations with the Fermi Large Area Telescope. We interpret this non-detection in terms of constraints on dark matter particle properties. In particular for leptonic annihilation final states and particle masses greater than ~ 200 GeV, gamma-ray emission from inverse Compton scattering of CMB photons is expected to dominate the dark matter annihilation signal from clusters, and our gamma-ray limits exclude large regions of the parameter space that would give a good fit to the recent anomalous Pamela and Fermi-LAT electron-positron measurements. We also present constraints on the annihilation of more standard dark matter candidates, such as the lightest neutralino of supersymmetric models. The constraints are particularly strong when including the fact that clusters are known to contain substructure at least on galaxy scales, increasing the expected gamma-ray flux by a factor of ~ 5 over a smooth-halo assumption. Here, we also explore the effect of uncertainties in cluster dark matter density profiles, finding a systematic uncertainty in the constraints of roughly a factor of two, but similar overall conclusions. Finally, in this work, we focus on deriving limits on dark matter models; a more general consideration of the Fermi-LAT data on clusters and clusters as gamma-ray sources is forthcoming.

  20. Advanced Health Management Algorithms for Crew Exploration Applications

    NASA Technical Reports Server (NTRS)

    Davidson, Matt; Stephens, John; Jones, Judit

    2005-01-01

    Achieving the goals of the President's Vision for Exploration will require new and innovative ways to achieve reliability increases of key systems and sub-systems. The most prominent approach used in current systems is to maintain hardware redundancy. This imposes constraints on the system and utilizes weight that could be used for payload on extended lunar, Martian, or other deep space missions. A technique to improve reliability while reducing system weight and constraints is the use of an Advanced Health Management System (AHMS). This system contains diagnostic algorithms and decision logic to mitigate or minimize the impact of system anomalies on propulsion system performance throughout the powered flight regime. The purposes of the AHMS are to increase the probability of successfully placing the vehicle into the intended orbit (Earth, Lunar, or Martian escape trajectory), increase the probability of being able to safely execute an abort after it has developed anomalous performance during launch or ascent phases of the mission, and to minimize or mitigate anomalies during the cruise portion of the mission. This is accomplished by improving knowledge of the state of the propulsion system operation at any given time through turbomachinery vibration protection logic and an overall system analysis algorithm that utilizes an underlying physical model and a wide array of engine system operational parameters to detect and mitigate predefined engine anomalies. These algorithms are generic enough to be utilized on any propulsion system yet can be easily tailored to each application by changing input data and engine-specific parameters. The key to the advancement of such a system is the verification of the algorithms. These algorithms will be validated through the use of a database of nominal and anomalous performance from a large propulsion system where data exists for catastrophic and noncatastrophic propulsion system failures.

  1. Radiative Transfer Photometric Analysis of Surface Materials at the Mars Exploration Rover Landing Sites

    NASA Astrophysics Data System (ADS)

    Seelos, F. P.; Arvidson, R. E.; Guinness, E. A.; Wolff, M. J.

    2004-12-01

    The Mars Exploration Rover (MER) Panoramic Camera (Pancam) observation strategy included the acquisition of multispectral data sets specifically designed to support the photometric analysis of Martian surface materials (J. R. Johnson, this conference). We report on the numerical inversion of observed Pancam radiance-on-sensor data to determine the best-fit surface bidirectional reflectance parameters as defined by Hapke theory. The model bidirectional reflectance parameters for the Martian surface provide constraints on physical and material properties and allow for the direct comparison of Pancam and orbital data sets. The parameter optimization procedure consists of a spatial multigridding strategy driving a Levenberg-Marquardt nonlinear least squares optimization engine. The forward radiance models and partial derivatives (via finite-difference approximation) are calculated using an implementation of the DIScrete Ordinate Radiative Transfer (DISORT) algorithm with the four-parameter Hapke bidirectional reflectance function and the two-parameter Henyey-Greenstein phase function defining the lower boundary. The DISORT implementation includes a plane-parallel model of the Martian atmosphere derived from a combination of Thermal Emission Spectrometer (TES), Pancam, and Mini-TES atmospheric data acquired near in time to the surface observations. This model accounts for bidirectional illumination from the attenuated solar beam and hemispherical-directional skylight illumination. The initial investigation was limited to treating the materials surrounding the rover as a single surface type, consistent with the spatial resolution of orbital observations. For more detailed analyses the observation geometry can be calculated from the correlation of Pancam stereo pairs (J. M. Soderblom et al., this conference). 
With improved geometric control, the radiance inversion can be applied to constituent surface material classes such as ripple and dune forms in addition to the soils on the Meridiani plain. Under the assumption of a Henyey-Greenstein phase function, initial results for the Opportunity site suggest a single scattering albedo on the order of 0.25 and a Henyey-Greenstein forward fraction approaching unity at an effective wavelength of 753 nm. As an extension of the photometric modeling, the radiance inversion also provides a means of calculating surface reflectance independent of the radiometric calibration target. This method for determining observed reflectance will provide an additional constraint on the dust deposition model for the calibration target.
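
    The optimization loop described above, Levenberg-Marquardt with finite-difference partial derivatives, can be sketched generically. The example below fits the two-parameter Henyey-Greenstein phase function (forward fraction f and asymmetry g) to noiseless synthetic data; it is a schematic stand-in for the full DISORT forward model, not the authors' pipeline:

```python
import numpy as np

def hg(theta, g):
    """Single-lobe Henyey-Greenstein phase function."""
    return (1 - g**2) / (1 + g**2 - 2 * g * np.cos(theta)) ** 1.5

def model(theta, p):
    f, g = p          # forward fraction, asymmetry (two-parameter HG)
    return f * hg(theta, g) + (1 - f) * hg(theta, -g)

def lm_fit(theta, y, p, n_iter=50, lam=1e-2, h=1e-6):
    """Minimal Levenberg-Marquardt with a finite-difference Jacobian."""
    for _ in range(n_iter):
        r = y - model(theta, p)
        J = np.empty((len(theta), len(p)))
        for j in range(len(p)):           # forward-difference derivatives
            dp = p.copy(); dp[j] += h
            J[:, j] = (model(theta, dp) - model(theta, p)) / h
        A = J.T @ J + lam * np.eye(len(p))
        step = np.linalg.solve(A, J.T @ r)
        if np.sum((y - model(theta, p + step)) ** 2) < np.sum(r ** 2):
            p = p + step; lam *= 0.5      # accept step, trust model more
        else:
            lam *= 2.0                    # reject step, damp harder
    return p

theta = np.linspace(0.1, np.pi - 0.1, 60)   # scattering angles (rad)
true = np.array([0.8, 0.6])                 # illustrative f, g
y = model(theta, true)
fit = lm_fit(theta, y, np.array([0.5, 0.3]))
```

    In the actual inversion the forward evaluation is a DISORT radiative-transfer run rather than a closed-form phase function, and the multigrid strategy supplies progressively refined starting points, but the damped Gauss-Newton update is the same.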

  2. Skin-electrode circuit model for use in optimizing energy transfer in volume conduction systems.

    PubMed

    Hackworth, Steven A; Sun, Mingui; Sclabassi, Robert J

    2009-01-01

    The X-Delta model for through-skin volume conduction systems is introduced and analyzed. This new model has advantages over our previous X model in that it explicitly represents current pathways in the skin. A vector network analyzer is used to take measurements on pig skin to obtain data for finding the model's impedance parameters. An optimization method for obtaining this more complex model's parameters is described. Results show the model to accurately represent the impedance behavior of the skin system, with errors generally below one percent. Uses for the model include optimizing energy transfer across the skin in a volume conduction system under appropriate current-exposure constraints, and exploring non-linear behavior of the electrode-skin system at moderate voltages (below ten volts) and frequencies (kilohertz to megahertz).

  3. Analysing the 21 cm signal from the epoch of reionization with artificial neural networks

    NASA Astrophysics Data System (ADS)

    Shimabukuro, Hayato; Semelin, Benoit

    2017-07-01

    The 21 cm signal from the epoch of reionization should be observed within the next decade. While a simple statistical detection is expected with Square Kilometre Array (SKA) pathfinders, the SKA will hopefully produce a full 3D mapping of the signal. To extract from the observed data constraints on the parameters describing the underlying astrophysical processes, inversion methods must be developed. For example, the Markov Chain Monte Carlo method has been successfully applied. Here, we test another possible inversion method: artificial neural networks (ANNs). We produce a training set that consists of 70 individual samples. Each sample is made of the 21 cm power spectrum at different redshifts produced with the 21cmFast code plus the value of three parameters used in the seminumerical simulations that describe astrophysical processes. Using this set, we train the network to minimize the error between the parameter values it produces as an output and the true values. We explore the impact of the architecture of the network on the quality of the training. Then we test the trained network on the new set of 54 test samples with different values of the parameters. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameters at a given redshift, that including thermal noise and sample variance decreases the quality of the reconstruction and that using the power spectrum at several redshifts as an input to the ANN improves the quality of the reconstruction. We conclude that ANNs are a viable inversion method whose main strength is that they require a sparse exploration of the parameter space and thus should be usable with full numerical simulations.
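
    The training procedure described above, a network that maps power spectra to astrophysical parameters by minimizing the error between its outputs and the true values, can be sketched with a single-hidden-layer network in plain NumPy. The "spectra" below are a synthetic stand-in for the 21cmFast outputs (the mapping, layer sizes, and learning rate are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for the training set: 3 parameters -> 10-bin "spectrum"
def fake_spectrum(theta):
    k = np.linspace(0.1, 1.0, 10)
    return theta[:, :1] * np.exp(-k * theta[:, 1:2]) + 0.1 * theta[:, 2:3] * k

theta_train = rng.uniform(0.5, 1.5, size=(70, 3))   # 70 samples, as in the paper
X = fake_spectrum(theta_train)                      # inputs: spectra
Y = theta_train                                     # targets: parameters

# one hidden layer, trained by full-batch gradient descent on the MSE
W1 = rng.normal(0, 0.5, (10, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 3));  b2 = np.zeros(3)
lr, losses = 0.05, []
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                # forward pass
    P = H @ W2 + b2
    err = P - Y
    losses.append(float(np.mean(err ** 2)))
    gP = 2 * err / len(X)                   # backward pass (MSE gradients)
    gW2 = H.T @ gP; gb2 = gP.sum(0)
    gH = gP @ W2.T * (1 - H ** 2)
    gW1 = X.T @ gH; gb1 = gH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

assert losses[-1] < 0.5 * losses[0]         # the network learned the inversion
```

    Once trained, evaluating the network on a new spectrum is nearly instantaneous, which is the practical advantage over running the simulator inside an inversion loop.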

  4. Estimating kinetic mechanisms with prior knowledge I: Linear parameter constraints.

    PubMed

    Salari, Autoosa; Navarro, Marco A; Milescu, Mirela; Milescu, Lorin S

    2018-02-05

    To understand how ion channels and other proteins function at the molecular and cellular levels, one must decrypt their kinetic mechanisms. Sophisticated algorithms have been developed that can be used to extract kinetic parameters from a variety of experimental data types. However, formulating models that not only explain new data, but are also consistent with existing knowledge, remains a challenge. Here, we present a two-part study describing a mathematical and computational formalism that can be used to enforce prior knowledge into the model using constraints. In this first part, we focus on constraints that enforce explicit linear relationships involving rate constants or other model parameters. We develop a simple, linear algebra-based transformation that can be applied to enforce many types of model properties and assumptions, such as microscopic reversibility, allosteric gating, and equality and inequality parameter relationships. This transformation converts the set of linearly interdependent model parameters into a reduced set of independent parameters, which can be passed to an automated search engine for model optimization. In the companion article, we introduce a complementary method that can be used to enforce arbitrary parameter relationships and any constraints that quantify the behavior of the model under certain conditions. The procedures described in this study can, in principle, be coupled to any of the existing methods for solving molecular kinetics for ion channels or other proteins. These concepts can be used not only to enforce existing knowledge but also to formulate and test new hypotheses. © 2018 Salari et al.
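
    The linear-algebra transformation described above can be made concrete: given linear constraints A p = b among the parameters, every feasible parameter vector is a particular solution plus a combination of null-space basis vectors, and the search engine then optimizes over the reduced coordinates. A minimal sketch of this idea (not the authors' code; the example constraint is hypothetical):

```python
import numpy as np

def reduce_parameters(A, b):
    """Given linear constraints A @ p = b, return (p0, N) such that every
    feasible parameter vector is p = p0 + N @ c for a free vector c."""
    p0, *_ = np.linalg.lstsq(A, b, rcond=None)   # one particular solution
    U, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-12))
    N = Vt[rank:].T                              # orthonormal null-space basis
    return p0, N

# example: 3 rate constants with an equality tie k1 = 2*k2
A = np.array([[1.0, -2.0, 0.0]])    # encodes k1 - 2*k2 = 0
b = np.array([0.0])
p0, N = reduce_parameters(A, b)

# the optimizer now searches over c (2 free parameters instead of 3);
# the constraint holds automatically for any choice of c:
c = np.array([0.3, 1.1])
p = p0 + N @ c
assert abs((A @ p - b)[0]) < 1e-10
```

    Inequality constraints and log-space parameterizations need extra bookkeeping, but the core reduction from interdependent to independent parameters is this null-space construction.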

  5. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  6. Test Design and Speededness

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2011-01-01

    A critical component of test speededness is the distribution of the test taker's total time on the test. A simple set of constraints on the item parameters in the lognormal model for response times is derived that can be used to control the distribution when assembling a new test form. As the constraints are linear in the item parameters, they can…
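
    Under the lognormal response-time model referenced above, each item's time is lognormal, so the mean and variance of the total test time follow in closed form from the item parameters; linear constraints on those parameters therefore translate directly into control of the total-time distribution. A small sketch with illustrative values (notation loosely follows a beta/alpha time-intensity/discrimination convention, with tau the test taker's speed; this is an assumption, not the article's exact formulation):

```python
import numpy as np

def total_time_moments(beta, alpha, tau=0.0):
    """Mean and variance of total test time when item times are independent
    lognormals: ln T_i ~ N(beta_i - tau, 1/alpha_i**2)."""
    mu = beta - tau
    s2 = 1.0 / alpha ** 2
    m = np.exp(mu + s2 / 2)                       # lognormal means
    v = (np.exp(s2) - 1) * np.exp(2 * mu + s2)    # lognormal variances
    return m.sum(), v.sum()

beta = np.array([3.0, 3.2, 2.8])     # illustrative item time intensities
alpha = np.array([2.0, 1.5, 2.5])    # illustrative discriminations
mean_T, var_T = total_time_moments(beta, alpha)

# Monte Carlo check of the closed-form mean
rng = np.random.default_rng(1)
sims = np.exp(rng.normal(beta, 1 / alpha, size=(200_000, 3))).sum(axis=1)
assert abs(sims.mean() - mean_T) / mean_T < 0.01
```

    Because the total-time mean is a sum of per-item terms, a cap on it is linear in the quantities exp(beta_i + 1/(2 alpha_i**2)), which is what makes the speededness constraint compatible with linear test-assembly solvers.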

  7. LHC constraints on color octet scalars

    NASA Astrophysics Data System (ADS)

    Hayreter, Alper; Valencia, German

    2017-08-01

    We extract constraints on the parameter space of the Manohar and Wise model by comparing the cross sections for dijet, top-pair, dijet-pair, $t\bar{t}t\bar{t}$, and $b\bar{b}b\bar{b}$ production at the LHC with the strongest available experimental limits from ATLAS or CMS at 8 or 13 TeV. Overall we find mass limits around 1 TeV in the most sensitive regions of parameter space, and lower ones elsewhere. This is at odds with generic limits for color-octet scalars often quoted in the literature, where much larger production cross sections are assumed. The constraints that can be placed on coupling constants are typically weaker than those from existing theoretical considerations, with the exception of the parameter $\eta_D$.

  8. Precision constraints on the top-quark effective field theory at future lepton colliders

    NASA Astrophysics Data System (ADS)

    Durieux, G.

    We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the $e^+e^- \to bW^+ \bar{b}W^-$ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on limits, which are therefore basis independent. This is contrasted with the measurements of cross sections and forward-backward asymmetries. An overall measure of constraint strength, the global determinant parameter, is used to determine which run parameters impose the strongest restrictions on the multidimensional effective-field-theory parameter space.

  9. Updated observational constraints on quintessence dark energy models

    NASA Astrophysics Data System (ADS)

    Durrive, Jean-Baptiste; Ooba, Junpei; Ichiki, Kiyotomo; Sugiyama, Naoshi

    2018-02-01

    The recent GW170817 measurement favors the simplest dark energy models, such as a single scalar field. Quintessence models can be classified in two classes, freezing and thawing, depending on whether the equation of state decreases towards -1 or departs from it. In this paper, we put observational constraints on the parameters governing the equations of state of tracking freezing, scaling freezing, and thawing models using updated data, from the Planck 2015 release, joint light-curve analysis, and baryonic acoustic oscillations. Because of the current tensions on the value of the Hubble parameter H0, unlike previous authors, we let this parameter vary, which modifies significantly the results. Finally, we also derive constraints on neutrino masses in each of these scenarios.

  10. Pseudoscalar portal dark matter and new signatures of vector-like fermions

    DOE PAGES

    Fan, JiJi; Koushiappas, Savvas M.; Landsberg, Greg

    2016-01-19

    Fermionic dark matter interacting with the Standard Model sector through a pseudoscalar portal could evade the direct detection constraints while preserving a WIMP miracle. Here, we study the LHC constraints on pseudoscalar production in simplified models with the pseudoscalar dominantly coupled either to b quarks or to τ leptons, and explore their implications for the GeV excesses in gamma-ray observations. We also investigate models with new vector-like fermions that could realize the simplified models of pseudoscalar portal dark matter. Furthermore, these models yield new decay channels and signatures of vector-like fermions, for instance bbb, bττ, and τττ resonances. Some of the signatures have already been strongly constrained by existing LHC searches, and the parameter space fitting the gamma-ray excess is further restricted. Conversely, the pure τ-rich final state is only weakly constrained so far due to the small electroweak production rate.

  11. Skew-flavored dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agrawal, Prateek; Chacko, Zackaria; Fortes, Elaine C. F. S.

    We explore a novel flavor structure in the interactions of dark matter with the Standard Model. We consider theories in which both the dark matter candidate, and the particles that mediate its interactions with the Standard Model fields, carry flavor quantum numbers. The interactions are skewed in flavor space, so that a dark matter particle does not directly couple to the Standard Model matter fields of the same flavor, but only to the other two flavors. This framework respects minimal flavor violation and is, therefore, naturally consistent with flavor constraints. We study the phenomenology of a benchmark model in which dark matter couples to right-handed charged leptons. In large regions of parameter space, the dark matter can emerge as a thermal relic, while remaining consistent with the constraints from direct and indirect detection. The collider signatures of this scenario include events with multiple leptons and missing energy. In conclusion, these events exhibit a characteristic flavor pattern that may allow this class of models to be distinguished from other theories of dark matter.

  12. Skew-flavored dark matter

    DOE PAGES

    Agrawal, Prateek; Chacko, Zackaria; Fortes, Elaine C. F. S.; ...

    2016-05-10

    We explore a novel flavor structure in the interactions of dark matter with the Standard Model. We consider theories in which both the dark matter candidate, and the particles that mediate its interactions with the Standard Model fields, carry flavor quantum numbers. The interactions are skewed in flavor space, so that a dark matter particle does not directly couple to the Standard Model matter fields of the same flavor, but only to the other two flavors. This framework respects minimal flavor violation and is, therefore, naturally consistent with flavor constraints. We study the phenomenology of a benchmark model in which dark matter couples to right-handed charged leptons. In large regions of parameter space, the dark matter can emerge as a thermal relic, while remaining consistent with the constraints from direct and indirect detection. The collider signatures of this scenario include events with multiple leptons and missing energy. In conclusion, these events exhibit a characteristic flavor pattern that may allow this class of models to be distinguished from other theories of dark matter.

  13. Observational constraints on transverse gravity: A generalization of unimodular gravity

    NASA Astrophysics Data System (ADS)

    Lopez-Villarejo, J. J.

    2010-04-01

    We explore the hypothesis that the set of symmetries enjoyed by the theory that describes gravity is not the full group of diffeomorphisms (Diff(M)), as in General Relativity, but a maximal subgroup of it (TransverseDiff(M)), with its elements having a Jacobian equal to unity; at the infinitesimal level, the parameter describing the coordinate change xμ → xμ + ξμ(x) is transverse, i.e., ∂μξμ = 0. Incidentally, this is the smallest symmetry one needs to propagate a graviton consistently, which is a strong theoretical motivation for considering these theories. Also, the determinant of the metric, g, behaves as a "transverse scalar", so that these theories can be seen as a generalization of the better-known unimodular gravity. We present our results on the observational constraints on transverse gravity, in close relation with the claim of equivalence with general scalar-tensor theory. We also comment on the structure of the divergences of the quantum theory at one-loop order.

  14. Gamma-ray Signal from Dark Matter Annihilation Mediated by Mixing Slepton

    NASA Astrophysics Data System (ADS)

    Teng, Fei

    2016-03-01

    In order to reconcile the tension between collider SUSY particle searches and the dark matter relic density constraint, we free ourselves from the simplest CMSSM model and find a large parameter space in which a sub-TeV bino dark matter may comply with all current experimental constraints. In this so-called incredible bulk region, dark matter mainly annihilates into a leptonic final state through the t-channel exchange of a mixing slepton. We have explored this proposal and studied the resulting spectral features. We show that the line signal produced by the γγ and γZ final states provides an indication of the mixing angle and CP-violating phase of the slepton sector. On the other hand, the internal bremsstrahlung (IB) feature will be easier to observe in future experiments, with sensitivity around 10^-29 cm^3/s. Unlike some other models, our IB signal is dominated by the collinear limit of the final-state radiation amplitude and shows a bump-like feature.

  15. Implications of direct dark matter constraints for minimal supersymmetric standard model Higgs boson searches at the Tevatron.

    PubMed

    Carena, Marcela; Hooper, Dan; Skands, Peter

    2006-08-04

    In regions of large tan β and small m_A, searches for heavy neutral minimal supersymmetric standard model (MSSM) Higgs bosons at the Tevatron are promising. At the same time, rates in direct dark matter experiments, such as CDMS, are enhanced for large tan β and small m_A. As a result, there is a natural interplay between the heavy neutral Higgs searches at the Tevatron and the region of parameter space explored by CDMS. We show that if the lightest neutralino makes up the dark matter of our universe, current limits from CDMS strongly constrain the prospects of heavy neutral MSSM Higgs discovery at the Tevatron unless |μ| ≳ 400 GeV. The limits projected for CDMS in 2007 will increase this constraint to |μ| ≳ 800 GeV. If CDMS does observe neutralinos in the near future, however, it will make the discovery of Higgs bosons at the Tevatron far more likely.

  16. Parameter and prediction uncertainty in an optimized terrestrial carbon cycle model: Effects of constraining variables and data record length

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.

    2011-03-01

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  17. Cosmological constraints from Galaxy Clusters in 2500 square-degree SPT-SZ survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haan, T. de; Benson, B. A.; Bleem, L. E.

    We present cosmological parameter constraints obtained from galaxy clusters identified by their Sunyaev-Zel'dovich effect signature in the 2500 square-degree South Pole Telescope Sunyaev-Zel'dovich (SPT-SZ) survey. We consider the 377 cluster candidates identified at z > 0.25 with a detection significance greater than five, corresponding to the 95% purity threshold for the survey. We compute constraints on cosmological models using the measured cluster abundance as a function of mass and redshift. We include additional constraints from multi-wavelength observations, including Chandra X-ray data for 82 clusters and a weak-lensing-based prior on the normalization of the mass-observable scaling relations. Assuming a spatially flat ΛCDM cosmology, we combine the cluster data with a prior on H0 and find σ8 = 0.784 ± 0.039 and Ωm = 0.289 ± 0.042, with the parameter combination σ8(Ωm/0.27)^0.3 = 0.797 ± 0.031. These results are in good agreement with constraints from the cosmic microwave background (CMB) from SPT, WMAP, and Planck, as well as with constraints from other cluster data sets. We also consider several extensions to ΛCDM, including models in which the equation of state of dark energy w, the species-summed neutrino mass, and/or the effective number of relativistic species (Neff) are free parameters. When combined with constraints from the Planck CMB, H0, baryon acoustic oscillations, and SNe, adding the SPT cluster data improves the w constraint by 14%, to w = -1.023 ± 0.042.

  18. Supersymmetry without prejudice at the LHC

    NASA Astrophysics Data System (ADS)

    Conley, John A.; Gainer, James S.; Hewett, JoAnne L.; Le, My Phuong; Rizzo, Thomas G.

    2011-07-01

    The discovery and exploration of Supersymmetry in a model-independent fashion will be a daunting task due to the large number of soft-breaking parameters in the MSSM. In this paper, we explore the capability of the ATLAS detector at the LHC (√s = 14 TeV, 1 fb^-1) to find SUSY within the 19-dimensional pMSSM subspace of the MSSM using their standard missing transverse energy and long-lived particle searches, which were essentially designed for mSUGRA. To this end, we employ a set of ~71k previously generated model points in the 19-dimensional parameter space that satisfy all of the existing experimental and theoretical constraints. Employing ATLAS-generated SM backgrounds and following their approach in each of 11 missing energy analyses as closely as possible, we explore all of these 71k model points for a possible SUSY signal. To test our analysis procedure, we first verify that we faithfully reproduce the published ATLAS results for the signal distributions for their benchmark mSUGRA model points. We then show that, requiring all sparticle masses to lie below 1 (3) TeV, almost all (two-thirds) of the pMSSM model points are discovered with a significance S > 5 in at least one of these 11 analyses, assuming a 50% systematic error on the SM background. If this systematic error can be reduced to 20%, this parameter-space coverage increases further. These results indicate that the ATLAS SUSY search strategy is robust for a broad class of Supersymmetric models. We then explore in detail the properties of the kinematically accessible model points that remain unobservable by these search analyses in order to ascertain problematic cases that may arise in general SUSY searches.

  19. Cosmological constraints on Brans-Dicke theory.

    PubMed

    Avilez, A; Skordis, C

    2014-07-04

    We report strong cosmological constraints on the Brans-Dicke (BD) theory of gravity using cosmic microwave background data from Planck. We consider two types of models. First, the initial condition of the scalar field is fixed to give the same effective gravitational strength Geff today as the one measured on Earth, GN. In this case, the BD parameter ω is constrained to ω>692 at the 99% confidence level, an order of magnitude improvement over previous constraints. In the second type, the initial condition for the scalar is a free parameter leading to a somewhat stronger constraint of ω>890, while Geff is constrained to 0.981

  20. Uncertainty relation based on unbiased parameter estimations

    NASA Astrophysics Data System (ADS)

    Sun, Liang-Liang; Song, Yong-Shun; Qiao, Cong-Feng; Yu, Sixia; Chen, Zeng-Bing

    2017-02-01

    Heisenberg's uncertainty relation has been extensively studied in the spirit of its well-known original form, in which the inaccuracy measures used exhibit some controversial properties and do not conform with quantum metrology, where measurement precision is well defined in terms of estimation theory. In this paper, we treat the joint measurement of incompatible observables as a parameter estimation problem, i.e., estimating the parameters characterizing the statistics of the incompatible observables. Our crucial observation is that, in a sequential measurement scenario, the bias induced by the first unbiased measurement in the subsequent measurement can be eradicated by the information acquired, allowing one to extract unbiased information of the second measurement of an incompatible observable. In terms of Fisher information we propose a kind of information comparison measure and explore various trade-offs between information gains and measurement precisions, which interpret the uncertainty relation as a surplus-variance trade-off over individual perfect measurements instead of a constraint on extracting complete information about incompatible observables.
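    The estimation-theoretic notion of precision invoked in this abstract can be made concrete with a short numerical check of the Cramér-Rao bound, which says that any unbiased estimator's variance is at least the inverse Fisher information. The Bernoulli example below is purely illustrative and is not taken from the paper; all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fisher information for n Bernoulli(p) samples is I(p) = n / (p (1 - p));
# the Cramér-Rao bound states Var(p_hat) >= 1 / I(p) for unbiased p_hat.
# The sample mean is unbiased and in fact saturates the bound.
p, n, trials = 0.3, 200, 5000
fisher = n / (p * (1 - p))

# Simulate many repetitions of the experiment and estimate p each time.
estimates = rng.binomial(n, p, size=trials) / n
empirical_var = estimates.var()

print(empirical_var, 1.0 / fisher)  # the two should nearly coincide
```

The empirical variance of the sample-mean estimator matches 1/I(p) up to Monte Carlo noise, which is the sense in which "measurement precision is well defined in terms of estimation theory."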

  1. Reforming Lao Teacher Education to Include Females and Ethnic Minorities--Exploring Possibilities and Constraints

    ERIC Educational Resources Information Center

    Berge, Britt-Marie; Chounlamany, Kongsy; Khounphilaphanh, Bounchanh; Silfver, Ann-Louise

    2017-01-01

    This article explores possibilities and constraints for the inclusion of female and ethnic minority students in Lao education in order to provide education for all. Females and ethnic minorities have traditionally been disadvantaged in Lao education and reforms for the inclusion of these groups are therefore welcome. The article provides rich…

  2. The Mission Assessment Post Processor (MAPP): A New Tool for Performance Evaluation of Human Lunar Missions

    NASA Technical Reports Server (NTRS)

    Williams, Jacob; Stewart, Shaun M.; Lee, David E.; Davis, Elizabeth C.; Condon, Gerald L.; Senent, Juan

    2010-01-01

    The National Aeronautics and Space Administration's (NASA) Constellation Program paves the way for a series of lunar missions leading to a sustained human presence on the Moon. The proposed mission design includes an Earth Departure Stage (EDS), a Crew Exploration Vehicle (Orion), and a lunar lander (Altair), which support the transfer to and from the lunar surface. This report addresses the design, development and implementation of a new mission scan tool called the Mission Assessment Post Processor (MAPP) and its use to provide insight into the integrated (i.e., EDS, Orion, and Altair based) mission cost as a function of various mission parameters and constraints. The Constellation architecture calls for semiannual launches to the Moon and will support a number of missions, beginning with 7-day sortie missions, culminating in a lunar outpost at a specified location. The operational lifetime of the Constellation Program can cover a period of decades over which the Earth-Moon geometry (particularly, the lunar inclination) will go through a complete cycle (i.e., the lunar nodal cycle lasting 18.6 years). This geometry variation, along with other parameters such as flight time, landing site location, and mission related constraints, affects the outbound (Earth to Moon) and inbound (Moon to Earth) translational performance cost. The mission designer must determine the ability of the vehicles to perform lunar missions as a function of this complex set of interdependent parameters. Trade-offs among these parameters provide essential insights for properly assessing the ability of a mission architecture to meet desired goals and objectives. These trades also aid in determining the overall usable propellant required for supporting nominal and off-nominal missions over the entire operational lifetime of the program, thus supporting vehicle sizing.

  3. A new model to predict weak-lensing peak counts. II. Parameter constraint strategies

    NASA Astrophysics Data System (ADS)

    Lin, Chieh-An; Kilbinger, Martin

    2015-11-01

    Context. Peak counts have been shown to be an excellent tool for extracting the non-Gaussian part of the weak lensing signal. Recently, we developed a fast stochastic forward model to predict weak-lensing peak counts. Our model is able to reconstruct the underlying distribution of observables for analysis. Aims: In this work, we explore and compare various strategies for constraining parameters using our model, focusing on the matter density Ωm and the density fluctuation amplitude σ8. Methods: First, we examine the impact from the cosmological dependency of covariances (CDC). Second, we perform the analysis with the copula likelihood, a technique that makes a weaker assumption than does the Gaussian likelihood. Third, direct, non-analytic parameter estimations are applied using the full information of the distribution. Fourth, we obtain constraints with approximate Bayesian computation (ABC), an efficient, robust, and likelihood-free algorithm based on accept-reject sampling. Results: We find that neglecting the CDC effect enlarges parameter contours by 22% and that the covariance-varying copula likelihood is a very good approximation to the true likelihood. The direct techniques work well in spite of noisier contours. Concerning ABC, the iterative process converges quickly to a posterior distribution that is in excellent agreement with results from our other analyses. The time cost for ABC is reduced by two orders of magnitude. Conclusions: The stochastic nature of our weak-lensing peak count model allows us to use various techniques that approach the true underlying probability distribution of observables, without making simplifying assumptions. Our work can be generalized to other observables where forward simulations provide samples of the underlying distribution.
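    Of the four strategies in this abstract, ABC is the most self-contained to illustrate. The sketch below shows the bare accept-reject loop with a toy Gaussian forward model standing in for the stochastic peak-count simulator; the prior range, tolerance, and all names are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic forward model: simulate a summary statistic from a
# parameter theta. In the paper's setting this would be a forward
# simulation producing peak counts.
def simulate(theta, n=100):
    return rng.normal(theta, 1.0, size=n).mean()

theta_true = 0.8
observed = simulate(theta_true)

# ABC accept-reject: draw theta from the prior, run the simulator, and
# keep draws whose simulated summary lies within a tolerance of the
# observation. The accepted draws approximate the posterior.
def abc_rejection(observed, prior_low=-2.0, prior_high=2.0,
                  n_draws=20000, tol=0.05):
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(prior_low, prior_high)
        if abs(simulate(theta) - observed) < tol:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection(observed)
print(posterior.mean(), posterior.size)
```

As the tolerance shrinks (and the simulation budget grows), the accepted sample converges toward the true posterior, which is why no explicit likelihood is ever needed.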

  4. Domain General Constraints on Statistical Learning

    ERIC Educational Resources Information Center

    Thiessen, Erik D.

    2011-01-01

    All theories of language development suggest that learning is constrained. However, theories differ on whether these constraints arise from language-specific processes or have domain-general origins such as the characteristics of human perception and information processing. The current experiments explored constraints on statistical learning of…

  5. Constrained creation of poetic forms during theme-driven exploration of a domain defined by an N-gram model

    NASA Astrophysics Data System (ADS)

    Gervás, Pablo

    2016-04-01

    Most poetry-generation systems apply opportunistic approaches where algorithmic procedures are applied to explore the conceptual space defined by a given knowledge resource in search of solutions that might be aesthetically valuable. Aesthetical value is assumed to arise from compliance to a given poetic form - such as rhyme or metrical regularity - or from evidence of semantic relations between the words in the resulting poems that can be interpreted as rhetorical tropes - such as similes, analogies, or metaphors. This approach tends to fix a priori the aesthetic parameters of the results, and imposes no constraints on the message to be conveyed. The present paper describes an attempt to initiate a shift in this balance, introducing means for constraining the output to certain topics and allowing a looser mechanism for constraining form. This goal arose as a result of the need to produce poems for a themed collection commissioned to be included in a book. The solution adopted explores an approach to creativity where the goals are not solely aesthetic and where the results may be surprising in their poetic form. An existing computer poet, originally developed to produce poems in a given form but with no specific constraints on their content, is put to the task of producing a set of poems with explicit restrictions on content, and allowing for an exploration of poetic form. Alternative generation methods are devised to overcome the difficulties, and the various insights arising from these new methods and the impact they have on the set of resulting poems are discussed in terms of their potential contribution to better poetry-generation systems.

  6. Exploring cosmic origins with CORE: Inflation

    NASA Astrophysics Data System (ADS)

    Finelli, F.; Bucher, M.; Achúcarro, A.; Ballardini, M.; Bartolo, N.; Baumann, D.; Clesse, S.; Errard, J.; Handley, W.; Hindmarsh, M.; Kiiveri, K.; Kunz, M.; Lasenby, A.; Liguori, M.; Paoletti, D.; Ringeval, C.; Väliviita, J.; van Tent, B.; Vennin, V.; Ade, P.; Allison, R.; Arroja, F.; Ashdown, M.; Banday, A. J.; Banerji, R.; Bartlett, J. G.; Basak, S.; de Bernardis, P.; Bersanelli, M.; Bonaldi, A.; Borril, J.; Bouchet, F. R.; Boulanger, F.; Brinckmann, T.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C. S.; Castellano, G.; Challinor, A.; Chluba, J.; Colantoni, I.; Coppolecchia, A.; Crook, M.; D'Alessandro, G.; D'Amico, G.; Delabrouille, J.; Desjacques, V.; De Zotti, G.; Diego, J. M.; Di Valentino, E.; Feeney, S.; Fergusson, J. R.; Fernandez-Cobos, R.; Ferraro, S.; Forastieri, F.; Galli, S.; García-Bellido, J.; de Gasperis, G.; Génova-Santos, R. T.; Gerbino, M.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Hazra, D. K.; Hernández-Monteagudo, C.; Hervias-Caimapo, C.; Hills, M.; Hivon, E.; Hu, B.; Kisner, T.; Kitching, T.; Kovetz, E. D.; Kurki-Suonio, H.; Lamagna, L.; Lattanzi, M.; Lesgourgues, J.; Lewis, A.; Lindholm, V.; Lizarraga, J.; López-Caniego, M.; Luzzi, G.; Maffei, B.; Mandolesi, N.; Martínez-González, E.; Martins, C. J. A. P.; Masi, S.; McCarthy, D.; Matarrese, S.; Melchiorri, A.; Melin, J.-B.; Molinari, D.; Monfardini, A.; Natoli, P.; Negrello, M.; Notari, A.; Oppizzi, F.; Paiella, A.; Pajer, E.; Patanchon, G.; Patil, S. P.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Poulin, V.; Quartin, M.; Ravenni, A.; Remazeilles, M.; Renzi, A.; Roest, D.; Roman, M.; Rubiño-Martin, J. A.; Salvati, L.; Starobinsky, A. A.; Tartari, A.; Tasinato, G.; Tomasi, M.; Torrado, J.; Trappe, N.; Trombetti, T.; Tucci, M.; Tucker, C.; Urrestilla, J.; van de Weygaert, R.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.

    2018-04-01

    We forecast the scientific capabilities of CORE, a proposed CMB space satellite submitted in response to the ESA fifth call for a medium-size mission opportunity, to improve our understanding of cosmic inflation. The CORE satellite will map the CMB anisotropies in temperature and polarization in 19 frequency channels spanning the range 60–600 GHz. CORE will have an aggregate noise sensitivity of 1.7 μK·arcmin and an angular resolution of 5' at 200 GHz. We explore the impact of telescope size and noise sensitivity on the inflation science return by making forecasts for several instrumental configurations. This study assumes that the lower and higher frequency channels suffice to remove foreground contamination and complements other related studies of component separation and systematic effects, which will be reported in other papers of the series "Exploring Cosmic Origins with CORE." We forecast the capability to determine key inflationary parameters, to lower the detection limit for the tensor-to-scalar ratio down to the 10^-3 level, to chart the landscape of single-field slow-roll inflationary models, to constrain the epoch of reheating, thus connecting inflation to the standard radiation-matter dominated Big Bang era, to reconstruct the primordial power spectrum, to constrain the contribution from isocurvature perturbations to the 10^-3 level, to improve constraints on the cosmic string tension to a level below the presumptive GUT scale, and to improve the current measurements of primordial non-Gaussianities down to the f_NL^local < 1 level. For all the models explored, CORE alone will improve significantly on the present constraints on the physics of inflation. Its capabilities will be further enhanced by combining with complementary future cosmological observations.

  7. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization means. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude and motion discontinuities, and produces accurate piecewise-smooth motion fields.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, Wilfred M; King, Anthony Wayne; Dragoni, Danilo

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  9. Exploring Black Hole Accretion in Active Galactic Nuclei with Simbol-X

    NASA Astrophysics Data System (ADS)

    Goosmann, R. W.; Dovčiak, M.; Mouchet, M.; Czerny, B.; Karas, V.; Gonçalves, A.

    2009-05-01

    A major goal of the Simbol-X mission is to improve our knowledge about black hole accretion. By opening up the X-ray window above 10 keV with unprecedented sensitivity and resolution, we obtain new constraints on the X-ray spectral and variability properties of active galactic nuclei. To interpret the future data, detailed X-ray modeling of the dynamics and radiation processes in the black hole vicinity is required. Relativistic effects must be taken into account, which then allow us to constrain the fundamental black hole parameters and the emission pattern of the accretion disk from the spectra that will be obtained with Simbol-X.

  10. Are there reliable constitutive laws for dynamic friction?

    PubMed

    Woodhouse, Jim; Putelat, Thibaut; McKay, Andrew

    2015-09-28

    Structural vibration controlled by interfacial friction is widespread, ranging from friction dampers in gas turbines to the motion of violin strings. To predict, control or prevent such vibration, a constitutive description of frictional interactions is inevitably required. A variety of friction models are discussed to assess their scope and validity, in the light of constraints provided by different experimental observations. Three contrasting case studies are used to illustrate how predicted behaviour can be extremely sensitive to the choice of frictional constitutive model, and to explore possible experimental paths to discriminate between and calibrate dynamic friction models over the full parameter range needed for real applications. © 2015 The Author(s).

  11. Exploring the roles of cannot-link constraint in community detection via Multi-variance Mixed Gaussian Generative Model.

    PubMed

    Yang, Liang; Ge, Meng; Jin, Di; He, Dongxiao; Fu, Huazhu; Wang, Jing; Cao, Xiaochun

    2017-01-01

    Due to the demand for performance improvement and the existence of prior information, semi-supervised community detection with pairwise constraints has become a hot topic. Most existing methods successfully encode must-link constraints but neglect the opposite ones, i.e., the cannot-link constraints, which can force the exclusion between nodes. In this paper, we are interested in understanding the role of cannot-link constraints and in effectively encoding pairwise constraints. Towards these goals, we define an integral generative process jointly considering the network topology, must-link, and cannot-link constraints. We propose to characterize this process as a Multi-variance Mixed Gaussian Generative (MMGG) model to address the diverse degrees of confidence that exist in network topology and pairwise constraints, and formulate it as a weighted nonnegative matrix factorization problem. The experiments on artificial and real-world networks not only illustrate the superiority of our proposed MMGG but also, most importantly, reveal the roles of pairwise constraints: though must-link is more important than cannot-link when only one of them is available, both are equally important when both are available. To the best of our knowledge, this is the first work on discovering and exploring the importance of cannot-link constraints in semi-supervised community detection.
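    The weighted nonnegative matrix factorization that this abstract reduces its model to can be sketched with standard multiplicative updates, where a per-entry weight matrix stands in for the confidence levels the authors describe. This is a generic weighted-NMF sketch, not the authors' MMGG code; the block-diagonal toy network, the uniform weights, and all parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Weighted NMF by multiplicative updates: minimize || W * (A - U V) ||_F^2
# with U, V >= 0, where W weights each entry of the residual (higher weight
# where confidence is higher, e.g. entries backed by pairwise constraints).
def weighted_nmf(A, W, k=2, iters=500, eps=1e-9):
    n, m = A.shape
    U = rng.random((n, k)) + 0.1
    V = rng.random((k, m)) + 0.1
    for _ in range(iters):
        UV = U @ V
        U *= ((W * A) @ V.T) / ((W * UV) @ V.T + eps)
        UV = U @ V
        V *= (U.T @ (W * A)) / (U.T @ (W * UV) + eps)
    return U, V

# Toy adjacency matrix with two obvious communities, and uniform weights.
A = np.zeros((6, 6))
A[:3, :3] = 1.0
A[3:, 3:] = 1.0
W = np.ones_like(A)

U, V = weighted_nmf(A, W)
labels = U.argmax(axis=1)  # community assignment per node
print(labels)
```

With W uniform this reduces to the classic Lee-Seung updates; encoding cannot-link information would amount to raising the weight (or adjusting the target value) of the corresponding entries of A.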

  12. Exploring the roles of cannot-link constraint in community detection via Multi-variance Mixed Gaussian Generative Model

    PubMed Central

    Ge, Meng; Jin, Di; He, Dongxiao; Fu, Huazhu; Wang, Jing; Cao, Xiaochun

    2017-01-01

    Due to the demand for performance improvement and the existence of prior information, semi-supervised community detection with pairwise constraints has become a hot topic. Most existing methods successfully encode must-link constraints but neglect the opposite ones, i.e., the cannot-link constraints, which can force the exclusion between nodes. In this paper, we are interested in understanding the role of cannot-link constraints and in effectively encoding pairwise constraints. Towards these goals, we define an integral generative process jointly considering the network topology, must-link, and cannot-link constraints. We propose to characterize this process as a Multi-variance Mixed Gaussian Generative (MMGG) model to address the diverse degrees of confidence that exist in network topology and pairwise constraints, and formulate it as a weighted nonnegative matrix factorization problem. The experiments on artificial and real-world networks not only illustrate the superiority of our proposed MMGG but also, most importantly, reveal the roles of pairwise constraints: though must-link is more important than cannot-link when only one of them is available, both are equally important when both are available. To the best of our knowledge, this is the first work on discovering and exploring the importance of cannot-link constraints in semi-supervised community detection. PMID:28678864

  13. Model-independent cosmological constraints from growth and expansion

    NASA Astrophysics Data System (ADS)

    L'Huillier, Benjamin; Shafieloo, Arman; Kim, Hyungjin

    2018-05-01

    Reconstructing the expansion history of the Universe from Type Ia supernovae data, we fit the growth rate measurements and put model-independent constraints on some key cosmological parameters, namely, Ωm, γ, and σ8. The constraints are consistent with those from the concordance model within the framework of general relativity, but the current quality of the data is not sufficient to rule out modified gravity models. Adding the condition that dark energy density should be positive at all redshifts, independently of its equation of state, further constrains the parameters and interestingly supports the concordance model.

  14. Model-independent indirect detection constraints on hidden sector dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elor, Gilly; Rodd, Nicholas L.; Slatyer, Tracy R.

    2016-06-10

    If dark matter inhabits an expanded "hidden sector", annihilations may proceed through sequential decays or multi-body final states. We map out the potential signals and current constraints on such a framework in indirect searches, using a model-independent setup based on multi-step hierarchical cascade decays. While remaining agnostic to the details of the hidden sector model, our framework captures the generic broadening of the spectrum of secondary particles (photons, neutrinos, e+e− and p̄p) relative to the case of direct annihilation to Standard Model particles. We explore how indirect constraints on dark matter annihilation limit the parameter space for such cascade/multi-particle decays. We investigate limits from the cosmic microwave background by Planck, the Fermi measurement of photons from the dwarf galaxies, and positron data from AMS-02. The presence of a hidden sector can change the constraints on the dark matter by up to an order of magnitude in either direction (although the effect can be much smaller). We find that generally the bound from the Fermi dwarfs is most constraining for annihilations to photon-rich final states, while AMS-02 is most constraining for electron and muon final states; however, in certain instances the CMB bounds overtake both, due to their approximate independence of the details of the hidden sector cascade. We provide the full set of cascade spectra considered here as publicly available code with examples at http://web.mit.edu/lns/research/CascadeSpectra.html.

  15. Model-independent indirect detection constraints on hidden sector dark matter

    DOE PAGES

    Elor, Gilly; Rodd, Nicholas L.; Slatyer, Tracy R.; ...

    2016-06-10

    If dark matter inhabits an expanded ``hidden sector'', annihilations may proceed through sequential decays or multi-body final states. We map out the potential signals and current constraints on such a framework in indirect searches, using a model-independent setup based on multi-step hierarchical cascade decays. While remaining agnostic to the details of the hidden sector model, our framework captures the generic broadening of the spectrum of secondary particles (photons, neutrinos, e⁺e⁻ and $$\\overline{p}p$$) relative to the case of direct annihilation to Standard Model particles. We explore how indirect constraints on dark matter annihilation limit the parameter space for such cascade/multi-particle decays. We investigate limits from the cosmic microwave background by Planck, the Fermi measurement of photons from the dwarf galaxies, and positron data from AMS-02. The presence of a hidden sector can change the constraints on the dark matter by up to an order of magnitude in either direction (although the effect can be much smaller). We find that generally the bound from the Fermi dwarfs is most constraining for annihilations to photon-rich final states, while AMS-02 is most constraining for electron and muon final states; however, in certain instances the CMB bounds overtake both, due to their approximate independence of the details of the hidden sector cascade. We provide the full set of cascade spectra considered here as publicly available code with examples at http://web.mit.edu/lns/research/CascadeSpectra.html.
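
    The spectral broadening described above can be illustrated with a toy Monte Carlo (a hypothetical kinematics sketch under assumed masses, not the authors' CascadeSpectra code): two DM particles annihilate at rest into two hidden mediators φ, each of which decays to two photons. In the mediator rest frame each photon is monochromatic; boosting to the lab frame smears the line into a box spectrum, in contrast to direct annihilation to γγ, which would give a line at E = M_DM.

```python
import math
import random

random.seed(1)

M_DM = 100.0   # DM mass (GeV) -- hypothetical values for illustration
m_phi = 10.0   # hidden mediator mass (GeV)

def cascade_photon_energy():
    """Energy of one photon from DM DM -> phi phi, phi -> gamma gamma."""
    E_phi = M_DM                                 # each mediator carries E = M_DM in the CM frame
    p_phi = math.sqrt(E_phi**2 - m_phi**2)
    beta, gamma = p_phi / E_phi, E_phi / m_phi
    cos_t = random.uniform(-1.0, 1.0)            # isotropic decay in the phi rest frame
    return gamma * (m_phi / 2.0) * (1.0 + beta * cos_t)

energies = [cascade_photon_energy() for _ in range(100_000)]
# Direct annihilation would give a line at E = 100 GeV; the one-step cascade
# smears it into a box between gamma*E'*(1 - beta) and gamma*E'*(1 + beta).
print(f"box spectrum spans roughly [{min(energies):.2f}, {max(energies):.2f}] GeV")
```

    Adding further cascade steps repeats this convolution and broadens the spectrum toward lower energies, which is the generic feature the constraints in the abstract respond to.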

  16. Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks

    PubMed Central

    Burroni, Javier; Taylor, P.; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T.

    2017-01-01

    Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications. PMID:28289370
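
    The Lotka-Volterra analysis mentioned above can be sketched at the population level (an illustrative toy, not the SNEP implementation; all parameter values are hypothetical): network activity a acts as the "predator" and the shared energy pool e as the "prey", so spiking drains a regenerating resource and self-sustained oscillations emerge.

```python
# Population-level sketch of an energy-limited network: activity consumes a
# shared energy pool that regenerates over time (Lotka-Volterra form).

def simulate(steps=50_000, dt=0.001,
             regen=1.0,   # energy regeneration rate (hypothetical units)
             cost=1.0,    # energy consumed per unit of activity
             gain=1.0,    # activity growth per unit of available energy
             decay=1.0):  # activity decay when energy is scarce
    a, e = 0.5, 1.5
    trace = []
    for _ in range(steps):
        da = a * (gain * e - decay)   # activity grows only while energy lasts
        de = e * (regen - cost * a)   # spiking drains the regenerating pool
        a += dt * da
        e += dt * de
        trace.append(a)
    return trace

trace = simulate()
# Count local maxima of the activity trace: a self-sustained oscillation
# shows repeated peaks rather than settling to a fixed level.
peaks = sum(1 for i in range(1, len(trace) - 1)
            if trace[i - 1] < trace[i] > trace[i + 1])
print("activity peaks:", peaks)
```

    Raising the energetic cost or slowing regeneration changes the oscillation amplitude and period, mirroring observations (2) and (3) in the abstract.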

  17. An Adaptive Methodological Inquiry: Exploring a TESOL Teacher Education Program's Affordances and Constraints in Libya as a Conflict Zone

    ERIC Educational Resources Information Center

    Elsherif, Entisar

    2017-01-01

    This adaptive methodological inquiry explored the affordances and constraints of one TESOL teacher education program in Libya as a conflict zone. Data was collected through seven documents and 33 questionnaires. Questionnaires were gathered from the investigated program's teacher-educators, student-teachers, and graduates, who were in-service…

  18. Recent Advances in Stellarator Optimization

    NASA Astrophysics Data System (ADS)

    Gates, David; Brown, T.; Breslau, J.; Landreman, M.; Lazerson, S. A.; Mynick, H.; Neilson, G. H.; Pomphrey, N.

    2016-10-01

    Computational optimization has revolutionized the field of stellarator design. To date, optimizations have focused primarily on neoclassical confinement and ideal MHD stability, although limited optimization of other parameters has also been performed. One criticism that has been levelled at this method of design is the complexity of the resultant field coils. Recently, a new coil optimization code, COILOPT++, was written and included in the STELLOPT suite of codes. The advantage of this method is that it allows the addition of real-space constraints on the locations of the coils. As an initial exercise, a constraint that the windings be vertical was placed on the large-major-radius half of the non-planar coils. Further constraints were also imposed that guaranteed that sector blanket modules could be removed from between the coils, enabling a sector maintenance scheme. Results of this exercise will be presented. We have also explored possibilities for generating an experimental database that could check whether the reduction in turbulent transport that is predicted by GENE as a function of local shear would be consistent with experiments. To this end, a series of equilibria that can be made in the now latent QUASAR experiment have been identified. This work was supported by U.S. DoE Contract #DE-AC02-09CH11466.

  19. Figures of merit for present and future dark energy probes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mortonson, Michael J.; Huterer, Dragan; Hu, Wayne

    2010-09-15

    We compare current and forecasted constraints on dynamical dark energy models from Type Ia supernovae and the cosmic microwave background using figures of merit based on the volume of the allowed dark energy parameter space. For a two-parameter dark energy equation of state that varies linearly with the scale factor, and assuming a flat universe, the area of the error ellipse can be reduced by a factor of ~10 relative to current constraints by future space-based supernova data and CMB measurements from the Planck satellite. If the dark energy equation of state is described by a more general basis of principal components, the expected improvement in volume-based figures of merit is much greater. While the forecasted precision for any single parameter is only a factor of 2-5 smaller than current uncertainties, the constraints on dark energy models bounded by −1 ≤ w ≤ 1 improve for approximately 6 independent dark energy parameters, resulting in a reduction of the total allowed volume of principal component parameter space by a factor of ~100. Typical quintessence models can be adequately described by just 2-3 of these parameters even given the precision of future data, leading to a more modest but still significant improvement. In addition to advances in supernova and CMB data, percent-level measurement of absolute distance and/or the expansion rate is required to ensure that dark energy constraints remain robust to variations in spatial curvature.

  20. Constraints of Motor Skill Acquisition: Implications for Teaching and Learning.

    ERIC Educational Resources Information Center

    Hamilton, Michelle L.; Pankey, Robert; Kinnunen, David

    This article presents various solutions to possible problems associated with providing skill-based instruction in physical education. It explores and applies Newell's (1986) constraints model to the analysis and teaching of motor skills in physical education, describing the role of individual, task, and environmental constraints in physical…

  1. Cosmological constraints from galaxy clusters in the 2500 square-degree SPT-SZ survey

    DOE PAGES

    Haan, T. de; Benson, B. A.; Bleem, L. E.; ...

    2016-11-18

    Here, we present cosmological parameter constraints obtained from galaxy clusters identified by their Sunyaev–Zel’dovich effect signature in the 2500 square-degree South Pole Telescope Sunyaev Zel’dovich (SPT-SZ) survey. We consider the 377 cluster candidates identified at $$z\\gt 0.25$$ with a detection significance greater than five, corresponding to the 95% purity threshold for the survey. We compute constraints on cosmological models using the measured cluster abundance as a function of mass and redshift. We include additional constraints from multi-wavelength observations, including Chandra X-ray data for 82 clusters and a weak lensing-based prior on the normalization of the mass-observable scaling relations. Assuming a spatially flat ΛCDM cosmology, we combine the cluster data with a prior on $$H_0$$ and find $${\\sigma }_{8}=0.784\\pm 0.039$$ and $${{\\rm{\\Omega }}}_{m}=0.289\\pm 0.042$$, with the parameter combination $${\\sigma }_{8}{({{\\rm{\\Omega }}}_{m}/0.27)}^{0.3}=0.797\\pm 0.031$$. These results are in good agreement with constraints from the cosmic microwave background (CMB) from SPT, WMAP, and Planck, as well as with constraints from other cluster data sets. We also consider several extensions to ΛCDM, including models in which the equation of state of dark energy w, the species-summed neutrino mass, and/or the effective number of relativistic species ($${N}_{\\mathrm{eff}}$$) are free parameters. When combined with constraints from the Planck CMB, $$H_0$$, baryon acoustic oscillation, and SNe, adding the SPT cluster data improves the w constraint by 14%, to $$w=-1.023\\pm 0.042$$.

  2. Cosmological constraints from galaxy clusters in the 2500 square-degree SPT-SZ survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haan, T. de; Benson, B. A.; Bleem, L. E.

    Here, we present cosmological parameter constraints obtained from galaxy clusters identified by their Sunyaev–Zel’dovich effect signature in the 2500 square-degree South Pole Telescope Sunyaev Zel’dovich (SPT-SZ) survey. We consider the 377 cluster candidates identified at $$z\\gt 0.25$$ with a detection significance greater than five, corresponding to the 95% purity threshold for the survey. We compute constraints on cosmological models using the measured cluster abundance as a function of mass and redshift. We include additional constraints from multi-wavelength observations, including Chandra X-ray data for 82 clusters and a weak lensing-based prior on the normalization of the mass-observable scaling relations. Assuming a spatially flat ΛCDM cosmology, we combine the cluster data with a prior on $$H_0$$ and find $${\\sigma }_{8}=0.784\\pm 0.039$$ and $${{\\rm{\\Omega }}}_{m}=0.289\\pm 0.042$$, with the parameter combination $${\\sigma }_{8}{({{\\rm{\\Omega }}}_{m}/0.27)}^{0.3}=0.797\\pm 0.031$$. These results are in good agreement with constraints from the cosmic microwave background (CMB) from SPT, WMAP, and Planck, as well as with constraints from other cluster data sets. We also consider several extensions to ΛCDM, including models in which the equation of state of dark energy w, the species-summed neutrino mass, and/or the effective number of relativistic species ($${N}_{\\mathrm{eff}}$$) are free parameters. When combined with constraints from the Planck CMB, $$H_0$$, baryon acoustic oscillation, and SNe, adding the SPT cluster data improves the w constraint by 14%, to $$w=-1.023\\pm 0.042$$.

  3. Throughput and latency programmable optical transceiver by using DSP and FEC control.

    PubMed

    Tanimura, Takahito; Hoshida, Takeshi; Kato, Tomoyuki; Watanabe, Shigeki; Suzuki, Makoto; Morikawa, Hiroyuki

    2017-05-15

    We propose and experimentally demonstrate a proof-of-concept programmable optical transceiver that enables simultaneous optimization of multiple programmable parameters (modulation format, symbol rate, power allocation, and FEC) to satisfy throughput, signal quality, and latency requirements. The proposed optical transceiver also accommodates multiple sub-channels that can transport different optical signals with different requirements. The many degrees of freedom among the parameters often make it difficult to find the optimum combination, because the number of combinations explodes. The proposed optical transceiver reduces the number of combinations and finds feasible sets of programmable parameters by using constraints on the parameters combined with a precise analytical model. For precise BER prediction with a specified set of parameters, we model the sub-channel BER as a function of OSNR, modulation format, symbol rate, and the power difference between sub-channels. Next, we formulate simple constraints on the parameters and combine them with the analytical model to seek feasible sets of programmable parameters. Finally, we experimentally demonstrate end-to-end operation of the proposed optical transceiver in an offline manner, including low-density parity-check (LDPC) FEC encoding and decoding, under a specific use case with a latency-sensitive application and 40-km transmission.
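
    The pruning idea above can be sketched in a few lines (a toy stand-in: the OSNR model and all numbers below are hypothetical, not the paper's analytical BER model): enumerate parameter combinations, then discard any combination that violates the throughput or signal-quality constraint before detailed evaluation.

```python
# Constraint-based pruning of transceiver parameter combinations.
from itertools import product

formats = {"QPSK": 2, "16QAM": 4, "64QAM": 6}   # bits per symbol
symbol_rates = [16e9, 32e9, 64e9]               # baud
osnr_db = 22.0                                  # available OSNR (assumed)
target_throughput = 100e9                       # bit/s (assumed requirement)

def required_osnr_db(bits_per_symbol, baud):
    # Toy stand-in for the analytical model: denser formats and faster
    # symbol rates need more OSNR to reach the target BER.
    return 6.0 + 3.0 * bits_per_symbol + 3.0 * (baud / 32e9)

feasible = []
for (name, bps), baud in product(formats.items(), symbol_rates):
    throughput = bps * baud
    if throughput < target_throughput:
        continue                                # constraint 1: throughput
    if required_osnr_db(bps, baud) > osnr_db:
        continue                                # constraint 2: signal quality
    feasible.append((name, baud, throughput))

print(feasible)
```

    Of the nine raw combinations, only two survive both constraints here, which is the kind of search-space reduction the transceiver exploits before fine-tuning the remaining parameters.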

  4. Finding Mass Constraints Through Third Neutrino Mass Eigenstate Decay

    NASA Astrophysics Data System (ADS)

    Gangolli, Nakul; de Gouvêa, André; Kelly, Kevin

    2018-01-01

    In this paper we aim to constrain the decay parameter for the third neutrino mass eigenstate, utilizing already accepted constraints on the other mixing parameters of the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix. The main purpose of this project is to determine the parameters that would allow the Jiangmen Underground Neutrino Observatory (JUNO) to observe a decay parameter with statistical significance. Another goal is to determine the parameters that JUNO could detect in the case that the third neutrino mass eigenstate is lighter than the first two. We also replicate the results found in the JUNO Conceptual Design Report (CDR). Using χ² analysis, constraints have been placed on the mixing angles, mass-squared differences, and the third neutrino decay parameter. These statistical tests take into account background noise and normalization corrections, so the finalized bounds are a good approximation to the true bounds that JUNO can detect. If the decay parameter is not included in our models, the 99% confidence interval lies within the bounds 0 s to 2.80×10⁻¹² s. However, if we account for a decay parameter of 3×10⁻⁵ eV², then the 99% confidence interval lies within 8.73×10⁻¹² s to 8.73×10⁻¹¹ s.
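
    The χ² interval construction used above can be illustrated with a minimal one-parameter scan (a toy linear model with made-up pseudo-data, not the JUNO spectrum fit): scan the parameter, find the χ² minimum, and keep the region where Δχ² stays below 6.63, the 99% CL threshold for one degree of freedom.

```python
# Toy 1D Delta-chi-squared scan for a 99% confidence interval.

# Pseudo-data: y = theta * x plus small offsets (hypothetical numbers).
truth = 1.5
xs = [0.5, 1.0, 1.5, 2.0, 2.5]
obs = [truth * x + d for x, d in zip(xs, [0.05, -0.1, 0.08, -0.02, 0.1])]
sigma = 0.1  # assumed per-point Gaussian uncertainty

def chi2(theta):
    return sum(((o - theta * x) / sigma) ** 2 for x, o in zip(xs, obs))

# Scan theta in [1.0, 2.0] and keep the Delta-chi2 < 6.63 region
# (99% CL for one free parameter).
grid = [i * 0.001 for i in range(1000, 2001)]
chi = [chi2(t) for t in grid]
chi_min = min(chi)
inside = [t for t, c in zip(grid, chi) if c - chi_min < 6.63]
print(f"99% CI: [{min(inside):.3f}, {max(inside):.3f}]")
```

    A full analysis adds nuisance parameters for background and normalization, profiling them at each scan point, but the interval logic is the same.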

  5. Tune-stabilized, non-scaling, fixed-field, alternating gradient accelerator

    DOEpatents

    Johnstone, Carol J [Warrenville, IL

    2011-02-01

    An FFAG is a particle accelerator having turning magnets with a linear field gradient for confinement and a large edge angle to compensate for acceleration. FODO cells contain focus magnets and defocus magnets that are specified by a number of parameters. A set of seven equations, called the FFAG equations, relates the parameters to one another. A set of constraints, called the FFAG constraints, constrains the FFAG equations. Selecting a few parameters, such as injection momentum, extraction momentum, and drift distance, reduces the number of unknown parameters to seven. Seven equations with seven unknowns can be solved to yield the values of all the parameters and thereby fully specify an FFAG.
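
    The design procedure reduces to solving a square nonlinear system (seven equations, seven unknowns). As a sketch of how such a solve works, here is a Newton iteration with a finite-difference Jacobian applied to a small stand-in system; the FFAG equations themselves are not reproduced here, so the toy system below is purely illustrative.

```python
# Damped-free Newton iteration for a square nonlinear system F(x) = 0,
# with a finite-difference Jacobian and Gaussian elimination.

def newton_solve(F, x, tol=1e-10, h=1e-7, max_iter=50):
    n = len(x)
    for _ in range(max_iter):
        f = F(x)
        if max(abs(v) for v in f) < tol:
            return x
        # Finite-difference Jacobian J[i][j] = dF_i/dx_j
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            xp = list(x)
            xp[j] += h
            fp = F(xp)
            for i in range(n):
                J[i][j] = (fp[i] - f[i]) / h
        # Solve J dx = -f by Gaussian elimination with partial pivoting
        A = [row[:] + [-fi] for row, fi in zip(J, f)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(A[r][c]))
            A[c], A[p] = A[p], A[c]
            for r in range(c + 1, n):
                m = A[r][c] / A[c][c]
                for k in range(c, n + 1):
                    A[r][k] -= m * A[c][k]
        dx = [0.0] * n
        for i in reversed(range(n)):
            dx[i] = (A[i][n] - sum(A[i][k] * dx[k]
                                   for k in range(i + 1, n))) / A[i][i]
        x = [xi + di for xi, di in zip(x, dx)]
    return x

# Toy 2-equation system standing in for the seven FFAG equations:
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]]
root = newton_solve(F, [1.0, 0.2])
print(root)
```

    In practice one would fix the chosen design parameters (injection/extraction momentum, drift distance) and let the remaining seven unknowns be the Newton variables.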

  6. In situ acoustic-based analysis system for physical and chemical properties of the lower Martian atmosphere

    NASA Astrophysics Data System (ADS)

    Farrelly, F. A.; Petri, A.; Pitolli, L.; Pontuale, G.

    2004-01-01

    The environmental acoustic reconnaissance and sounding experiment (EARS) is composed of two parts: the environmental acoustic reconnaissance (EAR) instrument and the environmental acoustic sounding experiment (EASE). They are distinct, but have the common objective of characterizing the acoustic environment of Mars. The principal goal of the EAR instrument is "listening" to Mars. This could be a most significant experiment: in everyday life, hearing is possibly the most important sense after sight. Not only will this contribute to opening up this important area of planetary exploration, which has been essentially ignored until now, but it will also bring the general public into closer contact with our most proximate planet. EASE is directed at characterizing acoustic propagation parameters, specifically sound velocity and absorption, and will provide information regarding important physical and chemical parameters of the lower Martian atmosphere; in particular, water vapor content, specific heat capacity, heat conductivity, and shear viscosity, which will provide specific constraints in determining its composition. This would enable one to gain a deeper understanding of Mars and its analogues on Earth. Furthermore, knowledge of the physical and chemical parameters of the Martian atmosphere, which influence its circulation, will improve the comprehension of its climate now and in the past, so as to gain insight into the possibility of the past presence of life on Mars. These aspects are considered strategic in the context of its exploration, as is clearly indicated in NASA's four main objectives in the "Long Term Mars Exploration Program" (http://marsweb.jpl.nasa.gov/mer/science).

  7. Strict Constraint Feasibility in Analysis and Design of Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity-norm approach. The suite of tools developed enables us to determine whether the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
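
    To make the "hard constraint over a hyper-rectangle" idea concrete, here is a minimal worst-case check (all numbers hypothetical, and not the paper's method: the paper avoids enumeration, whereas vertex enumeration is exact below only because the constraint is linear in the uncertain parameters, so its maximum over the box sits at a vertex).

```python
# Worst-case feasibility of a hard constraint g(design, p) <= 0 over a
# hyper-rectangular uncertainty set, via vertex enumeration.
from itertools import product

# Independently bounded uncertain parameters p1, p2 (hypothetical bounds)
bounds = [(0.9, 1.1), (-0.2, 0.2)]

def g(design, p):
    # Hard constraint: must hold for ALL p in the box. Linear in p.
    return design * p[0] + 2.0 * p[1] - 1.6

def worst_case(design):
    # Linear in p => the maximum over the box is attained at a vertex.
    return max(g(design, v) for v in product(*bounds))

for d in (1.0, 1.2):
    status = "feasible" if worst_case(d) <= 0 else "infeasible"
    print(f"design={d}: worst-case g = {worst_case(d):+.2f} ({status})")
```

    The vertex achieving the maximum is exactly the "critical combination of uncertain parameters" the paper identifies, here read off directly rather than found by optimization.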

  8. Modeling Real-Time Human-Automation Collaborative Scheduling of Unmanned Vehicles

    DTIC Science & Technology

    2013-06-01

    …that they can only take into account those quantifiable variables, parameters, objectives, and constraints identified in the design stages that were deemed to be critical. Previous… increased training and operating costs (Haddal & Gertler, 2010) and challenges in meeting the ever-increasing demand for more UV operations (U.S. Air…

  9. High scale flavor alignment in two-Higgs doublet models and its phenomenology

    DOE PAGES

    Gori, Stefania; Haber, Howard E.; Santos, Edward

    2017-06-21

    The most general two-Higgs doublet model (2HDM) includes potentially large sources of flavor changing neutral currents (FCNCs) that must be suppressed in order to achieve a phenomenologically viable model. The flavor alignment ansatz postulates that all Yukawa coupling matrices are diagonal when expressed in the basis of mass-eigenstate fermion fields, in which case tree-level Higgs-mediated FCNCs are eliminated. In this work, we explore models with the flavor alignment condition imposed at a very high energy scale, which results in the generation of Higgs-mediated FCNCs via renormalization group running from the high energy scale to the electroweak scale. Using the current experimental bounds on flavor changing observables, constraints are derived on the aligned 2HDM parameter space. In the favored parameter region, we analyze the implications for Higgs boson phenomenology.

  10. Isocurvature forecast in the anthropic axion window

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamann, J.; Hannestad, S.; Raffelt, G.G.

    2009-06-01

    We explore the cosmological sensitivity to the amplitude of isocurvature fluctuations that would be caused by axions in the "anthropic window" where the axion decay constant f_a >> 10¹² GeV and the initial misalignment angle Θ_i << 1. In a minimal ΛCDM cosmology extended with subdominant scale-invariant isocurvature fluctuations, existing data constrain the isocurvature fraction to α < 0.09 at 95% C.L. If no signal shows up, Planck can improve this constraint to 0.042, while an ultimate CMB probe limited only by cosmic variance in both temperature and E-polarisation can reach 0.017, about a factor of five better than the current limit. In the parameter space of f_a and H_I (the Hubble parameter during inflation) we identify a small region where axion detection remains within the reach of realistic cosmological probes.

  11. Speededness and Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Xiong, Xinhui

    2013-01-01

    Two simple constraints on the item parameters in a response-time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…

  12. Three-dimensional elastic-plastic finite-element analyses of constraint variations in cracked bodies

    NASA Technical Reports Server (NTRS)

    Newman, J. C., Jr.; Bigelow, C. A.; Shivakumar, K. N.

    1993-01-01

    Three-dimensional elastic-plastic (small-strain) finite-element analyses were used to study the stresses, deformations, and constraint variations around a straight-through crack in finite-thickness plates for an elastic-perfectly plastic material under monotonic and cyclic loading. Middle-crack tension specimens were analyzed for thicknesses ranging from 1.25 to 20 mm with various crack lengths. Three local constraint parameters, related to the normal, tangential, and hydrostatic stresses, showed similar variations along the crack front for a given thickness and applied stress level. Numerical analyses indicated that cyclic stress history and crack growth reduced the local constraint parameters in the interior of a plate, especially at high applied stress levels. A global constraint factor α_g was defined to simulate three-dimensional effects in two-dimensional crack analyses. The global constraint factor was calculated as an average through-the-thickness value over the crack-front plastic region. Values of α_g were found to be nearly independent of crack length and were related to the stress-intensity factor for a given thickness.

  13. Model-independent constraints on modified gravity from current data and from the Euclid and SKA future surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taddei, Laura; Martinelli, Matteo; Amendola, Luca, E-mail: taddei@thphys.uni-heidelberg.de, E-mail: martinelli@lorentz.leidenuniv.nl, E-mail: amendola@thphys.uni-heidelberg.de

    2016-12-01

    The aim of this paper is to constrain modified gravity with redshift space distortion observations and supernovae measurements. Compared with a standard ΛCDM analysis, we include three additional free parameters, namely the initial conditions of the matter perturbations, the overall perturbation normalization, and a scale-dependent modified gravity parameter modifying the Poisson equation, in an attempt to perform a more model-independent analysis. First, we constrain the Poisson parameter Y (also called G_eff) by using currently available fσ₈ data and the recent SN catalog JLA. We find that the inclusion of the additional free parameters makes the constraints significantly weaker than when fixing them to the standard cosmological value. Second, we forecast future constraints on Y by using the predicted growth-rate data for the Euclid and SKA missions. Here again we point out the weakening of the constraints when the additional parameters are included. Finally, we adopt as the modified gravity Poisson parameter the specific Horndeski form, and use scale-dependent forecasts to build an exclusion plot for the Yukawa potential akin to the ones realized in laboratory experiments, both for the Euclid and the SKA surveys.

  14. Rate-gyro-integral constraint for ambiguity resolution in GNSS attitude determination applications.

    PubMed

    Zhu, Jiancheng; Li, Tao; Wang, Jinling; Hu, Xiaoping; Wu, Meiping

    2013-06-21

    In the field of Global Navigation Satellite System (GNSS) attitude determination, constraints usually play a critical role in resolving the unknown ambiguities quickly and correctly. Many constraints, such as the baseline length, the geometry of multiple baselines, and the horizontal attitude angles, have been used extensively to improve the performance of ambiguity resolution. In GNSS/Inertial Navigation System (INS) integrated attitude determination systems using a low-grade Inertial Measurement Unit (IMU), the initial heading parameters of the vehicle are usually worked out by the GNSS subsystem rather than by the IMU sensors independently. However, when a rotation occurs, the angle through which the vehicle has turned within a short time span can be measured accurately by the IMU. This measurement can be treated as a constraint, namely the rate-gyro-integral constraint, which can aid GNSS ambiguity resolution. We use this constraint to filter the candidates in the ambiguity search stage. The ambiguity search space shrinks significantly when this constraint is imposed during the rotation, which helps speed up the initialization of attitude parameters under dynamic circumstances. This paper studies only the applications of this new constraint to land vehicles. The impacts of measurement errors on the effect of the new constraint are assessed for different grades of IMU and the current average precision level of GNSS receivers. Simulations and experiments in urban areas have demonstrated the validity and efficacy of the new constraint in aiding GNSS attitude determination.
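
    The filtering step can be sketched as follows (hypothetical numbers throughout; in a real system each candidate ambiguity fix implies a baseline orientation, and headings come from carrier-phase solutions): integrate the gyro rate over the turn, then keep only candidates whose implied heading change matches that integral within a tolerance set by gyro bias and noise.

```python
# Rate-gyro-integral constraint as a filter on ambiguity candidates.
import math

def wrap(a):
    """Wrap an angle difference into (-180, 180] degrees."""
    return (a + 180.0) % 360.0 - 180.0

# Heading change measured by integrating the rate gyro over the turn
gyro_rates = [5.0] * 40                        # deg/s, sampled at 10 Hz
dt = 0.1
gyro_delta = sum(r * dt for r in gyro_rates)   # integrated rotation: 20 deg

# Candidate ambiguity fixes, each implying a (heading_t0, heading_t1) pair
candidates = {
    "A": (30.0, 50.2),    # consistent with the gyro integral
    "B": (30.0, 95.0),    # wrong fix: implies an implausible turn
    "C": (30.0, -12.0),   # wrong fix: implies a turn the other way
}

tol = 1.0  # deg, budget for gyro bias/noise over the integration window
kept = [name for name, (h0, h1) in candidates.items()
        if abs(wrap(h1 - h0) - gyro_delta) < tol]
print("candidates surviving the constraint:", kept)
```

    Shrinking the candidate set this way before the ambiguity validation step is what speeds up initialization during dynamics.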

  15. The impact of temporal sampling resolution on parameter inference for biological transport models.

    PubMed

    Harrison, Jonathan U; Baker, Ruth E

    2018-06-25

    Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models, performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
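
    The temporal-resolution effect is easy to reproduce with a toy velocity jump process (an illustrative sketch with made-up parameters, not the paper's exact-inference method): a particle runs at constant speed and reorients at a Poisson rate; the apparent number of reorientations per unit time then depends strongly on how often the trajectory is sampled, since coarse sampling hides events that occur between frames.

```python
# Toy velocity jump process and a naive, sampling-dependent turn-rate estimate.
import math
import random

random.seed(0)

def simulate(lam=1.0, speed=1.0, T=2000.0, dt=0.01):
    """Run-and-reorient motion: new uniform heading at Poisson rate lam."""
    x = y = theta = 0.0
    xs, ys = [x], [y]
    for _ in range(int(T / dt)):
        if random.random() < lam * dt:          # reorientation event
            theta = random.uniform(-math.pi, math.pi)
        x += speed * math.cos(theta) * dt
        y += speed * math.sin(theta) * dt
        xs.append(x)
        ys.append(y)
    return xs, ys

def naive_turn_rate(xs, ys, stride, dt):
    """Detected heading changes per unit time at a given sampling stride."""
    turns, frames, prev = 0, 0, None
    for i in range(0, len(xs) - stride, stride):
        h = math.atan2(ys[i + stride] - ys[i], xs[i + stride] - xs[i])
        if prev is not None:
            frames += 1
            if abs((h - prev + math.pi) % (2 * math.pi) - math.pi) > 0.2:
                turns += 1
        prev = h
    return turns / (frames * stride * dt)

xs, ys = simulate()
fine = naive_turn_rate(xs, ys, stride=5, dt=0.01)      # 20 Hz sampling
coarse = naive_turn_rate(xs, ys, stride=200, dt=0.01)  # 0.5 Hz sampling
print(f"naive rate, fine sampling:   {fine:.2f} /s")
print(f"naive rate, coarse sampling: {coarse:.2f} /s")
```

    The naive estimate at coarse sampling falls well below the fine-sampling value because multiple reorientations between frames register as at most one turn; the hidden-states formulation in the paper is one principled way to correct for exactly this discretisation effect.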

  16. Teaching Australian Football in Physical Education: Constraints Theory in Practice

    ERIC Educational Resources Information Center

    Pill, Shane

    2013-01-01

    This article outlines a constraints-led process of exploring, modifying, experimenting, adapting, and developing game appreciation known as Game Sense (Australian Sports Commission, 1997; den Duyn, 1996, 1997) for the teaching of Australian football. The game acts as teacher in this constraints-led process. Rather than a linear system that…

  17. Prepositioning emergency supplies under uncertainty: a parametric optimization method

    NASA Astrophysics Data System (ADS)

    Bai, Xuejie; Gao, Jinwu; Liu, Yankui

    2018-07-01

    Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.

  18. Towards the blackbox computation of magnetic exchange coupling parameters in polynuclear transition-metal complexes: theory, implementation, and application.

    PubMed

    Phillips, Jordan J; Peralta, Juan E

    2013-05-07

    We present a method for calculating magnetic coupling parameters from a single spin-configuration via analytic derivatives of the electronic energy with respect to the local spin direction. This method does not introduce new approximations beyond those found in the Heisenberg-Dirac Hamiltonian and a standard Kohn-Sham Density Functional Theory calculation, and in the limit of an ideal Heisenberg system it reproduces the coupling as determined from spin-projected energy-differences. Our method employs a generalized perturbative approach to constrained density functional theory, where exact expressions for the energy to second order in the constraints are obtained by analytic derivatives from coupled-perturbed theory. When the relative angle between magnetization vectors of metal atoms enters as a constraint, this allows us to calculate all the magnetic exchange couplings of a system from derivatives with respect to local spin directions from the high-spin configuration. Because of the favorable computational scaling of our method with respect to the number of spin-centers, as compared to the broken-symmetry energy-differences approach, this opens the possibility for the blackbox exploration of magnetic properties in large polynuclear transition-metal complexes. In this work we outline the motivation, theory, and implementation of this method, and present results for several model systems and transition-metal complexes with a variety of density functional approximations and Hartree-Fock.

  19. Constraints on single-field inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirtskhalava, David; Santoni, Luca; Trincherini, Enrico

    2016-06-28

Many alternatives to canonical slow-roll inflation have been proposed over the years, one of the main motivations being to have a model capable of generating observable values of non-Gaussianity. In this work, we (re-)explore the physical implications of a great majority of such models within a single effective field theory framework (including novel models with large non-Gaussianity discussed for the first time below). The constraints we apply, both theoretical and experimental, are found to be rather robust, determined to a great extent by just three parameters: the coefficients of the quadratic EFT operators (δN)^2 and δNδE, and the slow-roll parameter ε. This allows us to significantly limit the majority of single-field alternatives to canonical slow-roll inflation. While the existing data still leave some room for most of the considered models, the situation would change dramatically if the current upper limit on the tensor-to-scalar ratio decreased to r < 10^{-2}. Apart from inflationary models driven by plateau-like potentials, the single-field model that would have a chance of surviving this bound is the recently proposed slow-roll inflation with weakly broken galileon symmetry. In contrast to canonical slow-roll inflation, the latter model can support r < 10^{-2} even if driven by a convex potential, as well as generate observable values for the amplitude of non-Gaussianity.

  20. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  1. European Train Control System: A Case Study in Formal Verification

    NASA Astrophysics Data System (ADS)

    Platzer, André; Quesel, Jan-David

    Complex physical systems have several degrees of freedom. They only work correctly when their control parameters obey corresponding constraints. Based on the informal specification of the European Train Control System (ETCS), we design a controller for its cooperation protocol. For its free parameters, we successively identify constraints that are required to ensure collision freedom. We formally prove the parameter constraints to be sharp by characterizing them equivalently in terms of reachability properties of the hybrid system dynamics. Using our deductive verification tool KeYmaera, we formally verify controllability, safety, liveness, and reactivity properties of the ETCS protocol that entail collision freedom. We prove that the ETCS protocol remains correct even in the presence of perturbation by disturbances in the dynamics. We verify that safety is preserved when a PI controlled speed supervision is used.

  2. Multi-objective control of nonlinear boiler-turbine dynamics with actuator magnitude and rate constraints.

    PubMed

    Chen, Pang-Chia

    2013-01-01

    This paper investigates multi-objective controller design approaches for nonlinear boiler-turbine dynamics subject to actuator magnitude and rate constraints. System nonlinearity is handled by a suitable linear parameter varying system representation with drum pressure as the system varying parameter. Variation of the drum pressure is represented by suitable norm-bounded uncertainty and affine dependence on system matrices. Based on linear matrix inequality algorithms, the magnitude and rate constraints on the actuator and the deviations of fluid density and water level are formulated while the tracking abilities on the drum pressure and power output are optimized. Variation ranges of drum pressure and magnitude tracking commands are used as controller design parameters, determined according to the boiler-turbine's operation range. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Adaptive control of a quadrotor aerial vehicle with input constraints and uncertain parameters

    NASA Astrophysics Data System (ADS)

    Tran, Trong-Toan; Ge, Shuzhi Sam; He, Wei

    2018-05-01

    In this paper, we address the problem of adaptive bounded control for the trajectory tracking of a Quadrotor Aerial Vehicle (QAV) while the input saturations and uncertain parameters with the known bounds are simultaneously taken into account. First, to deal with the underactuated property of the QAV model, we decouple and construct the QAV model as a cascaded structure which consists of two fully actuated subsystems. Second, to handle the input constraints and uncertain parameters, we use a combination of the smooth saturation function and smooth projection operator in the control design. Third, to ensure the stability of the overall system of the QAV, we develop the technique for the cascaded system in the presence of both the input constraints and uncertain parameters. Finally, the region of stability of the closed-loop system is constructed explicitly, and our design ensures the asymptotic convergence of the tracking errors to the origin. The simulation results are provided to illustrate the effectiveness of the proposed method.

  4. Constraints on the production of primordial magnetic seeds in pre-big bang cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasperini, M., E-mail: gasperini@ba.infn.it

    We study the amplification of the electromagnetic fluctuations, and the production of 'seeds' for the cosmic magnetic fields, in a class of string cosmology models whose scalar and tensor perturbations reproduce current observations and satisfy known phenomenological constraints. We find that the condition of efficient seeds production can be satisfied and compatible with all constraints only in a restricted region of parameter space, but we show that such a region has significant intersections with the portions of parameter space where the produced background of relic gravitational waves is strong enough to be detectable by aLIGO/Virgo and/or by eLISA.

  6. Blind Deconvolution of Astronomical Images with a Constraint on Bandwidth Determined by the Parameters of the Optical System

    NASA Astrophysics Data System (ADS)

    Luo, Lin; Fan, Min; Shen, Mang-zuo

    2008-01-01

Atmospheric turbulence severely restricts the spatial resolution of astronomical images obtained by a large ground-based telescope. To effectively reduce this effect, we propose a method of blind deconvolution with a bandwidth constraint determined by the parameters of the telescope's optical system, based on the principle of maximum likelihood estimation, in which the convolution error function is minimized using the conjugate gradient algorithm. A relation between the parameters of the telescope optical system and the image's frequency-domain bandwidth is established, and the speed of convergence of the algorithm is improved by using the positivity constraint on the variables and the limited-bandwidth constraint on the point spread function. To prevent the effective Fourier frequencies from exceeding the cut-off frequency, each single image element (e.g., a pixel in CCD imaging) in the sampling focal plane should be smaller than one fourth of the diameter of the diffraction spot. The algorithm uses no object-centered constraint, so the proposed method is suitable for the image restoration of a whole field of objects. Computer simulations and the restoration of an actually observed image of α Piscium demonstrate the effectiveness of the proposed method.
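The combination of positivity and band-limit constraints described above can be sketched as follows. This is a minimal illustration only: it uses a Richardson-Lucy-style alternating multiplicative scheme rather than the paper's conjugate gradient minimization, and `cutoff` is a hypothetical normalized frequency standing in for the value derived from the optical parameters.

```python
import numpy as np

def bandlimit_mask(shape, cutoff):
    """Boolean frequency-domain mask: True inside the optical cut-off frequency."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.hypot(fy, fx) <= cutoff

def blind_deconv(image, psf0, cutoff, n_iter=20, eps=1e-12):
    """Alternating multiplicative updates for object and PSF, with positivity
    on both and a band-limit projection on the PSF."""
    obj = np.full_like(image, image.mean())
    psf = psf0 / psf0.sum()
    mask = bandlimit_mask(image.shape, cutoff)
    for _ in range(n_iter):
        # Object step: Richardson-Lucy-like update with the PSF held fixed.
        conv = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
        ratio = image / np.maximum(conv, eps)
        obj *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(np.fft.fft2(psf))))
        obj = np.maximum(obj, 0.0)                 # positivity constraint
        # PSF step: same update with the object held fixed, then band-limit it.
        conv = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
        ratio = image / np.maximum(conv, eps)
        psf *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(np.fft.fft2(obj))))
        psf = np.maximum(np.real(np.fft.ifft2(np.fft.fft2(psf) * mask)), 0.0)
        psf /= max(psf.sum(), eps)                 # flux normalisation
    return obj, psf
```

The band-limit projection is applied only to the PSF, mirroring the abstract's limited-bandwidth constraint on the point spread function.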

  7. Improving the Efficiency and Effectiveness of Community Detection via Prior-Induced Equivalent Super-Network.

    PubMed

    Yang, Liang; Jin, Di; He, Dongxiao; Fu, Huazhu; Cao, Xiaochun; Fogelman-Soulie, Francoise

    2017-03-29

Due to the importance of community structure in understanding networks and a surge of interest in community detectability, how to improve community identification performance with pairwise prior information has become a hot topic. However, most existing semi-supervised community detection algorithms focus only on improving accuracy and ignore the impact of priors on speeding up detection. Besides, they usually require tuning additional parameters and cannot guarantee the pairwise constraints. To address these drawbacks, we propose a general, high-speed, effective and parameter-free semi-supervised community detection framework. By constructing indivisible super-nodes from the connected subgraphs of the must-link constraints and by forming weighted super-edges based on network topology and cannot-link constraints, our framework transforms the original network into an equivalent but much smaller super-network. The super-network perfectly ensures the must-link constraints and effectively encodes the cannot-link constraints. Furthermore, the time complexity of the super-network construction process is linear in the original network size, which makes it efficient. Meanwhile, since the constructed super-network is much smaller than the original one, any existing community detection algorithm runs much faster within our framework. The overall process introduces no additional parameters, making it more practical.
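The super-network construction can be illustrated with a short sketch: must-link components are contracted into super-nodes with a union-find structure, and cannot-link pairs are encoded as repulsive super-edges. The `penalty` weight and function names are hypothetical choices for illustration, not values from the paper.

```python
from collections import defaultdict

def build_super_network(edges, must_link, cannot_link, penalty=10.0):
    """Contract must-link components into super-nodes and aggregate edge
    weights; cannot-link pairs become negative (repulsive) super-edges."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)
    for a, b in must_link:                  # must-link pairs share a super-node
        union(a, b)
    super_edges = defaultdict(float)
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                        # edges inside a super-node vanish
            super_edges[frozenset((ra, rb))] += 1.0
    for a, b in cannot_link:
        ra, rb = find(a), find(b)
        if ra != rb:
            super_edges[frozenset((ra, rb))] -= penalty
    return super_edges
```

Both passes are linear in the number of edges and constraints, consistent with the linear construction cost claimed in the abstract.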

  8. MICROSCOPE Mission: First Constraints on the Violation of the Weak Equivalence Principle by a Light Scalar Dilaton

    NASA Astrophysics Data System (ADS)

    Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe

    2018-04-01

The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12} eV (i.e., range larger than a few 10^{5} m), we improve existing constraints by one order of magnitude to |α| < 10^{-11} if the scalar field couples to the baryon number and to |α| < 10^{-12} if the scalar field couples to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: we find that, for masses smaller than 10^{-12} eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.

  9. Constraints on supersymmetric dark matter for heavy scalar superpartners

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Peisi; Roglans, Roger A.; Spiegel, Daniel D.

    2017-05-01

We study the constraints on neutralino dark matter in minimal low energy supersymmetry models for the case of heavy lepton and quark scalar superpartners. For values of the Higgsino and gaugino mass parameters of the order of the weak scale, direct detection experiments are already putting strong bounds on models in which the dominant interactions between the dark matter candidates and nuclei are governed by Higgs boson exchange processes, particularly for positive values of the Higgsino mass parameter μ. For negative values of μ, there can be destructive interference between the amplitudes associated with the exchange of the standard CP-even Higgs boson and the exchange of the nonstandard one. This leads to specific regions of parameter space which are consistent with the current experimental constraints and a thermal origin of the observed relic density. In this article, we study the current experimental constraints on these scenarios, as well as future experimental probes, using a combination of direct and indirect dark matter detection and heavy Higgs and electroweakino searches at hadron colliders.

  11. 3D galaxy clustering with future wide-field surveys: Advantages of a spherical Fourier-Bessel analysis

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Rassat, A.; Starck, J.-L.

    2015-06-01

Context. Upcoming spectroscopic galaxy surveys are extremely promising to help in addressing the major challenges of cosmology, in particular in understanding the nature of the dark universe. The strength of these surveys, naturally described in spherical geometry, comes from their unprecedented depth and width, but an optimal extraction of their three-dimensional information is of utmost importance to best constrain the properties of the dark universe. Aims: Although there is theoretical motivation and novel tools to explore these surveys using the 3D spherical Fourier-Bessel (SFB) power spectrum of galaxy number counts C_ℓ(k,k'), most survey optimisations and forecasts are based on the tomographic spherical harmonic power spectrum C_ℓ^(ij). The goal of this paper is to perform a new investigation of the information that can be extracted from these two analyses in the context of planned stage IV wide-field galaxy surveys. Methods: We compared tomographic and 3D SFB techniques by comparing the forecast cosmological parameter constraints obtained from a Fisher analysis. The comparison was made possible by careful and coherent treatment of non-linear scales in the two analyses, which makes this study the first to compare 3D SFB and tomographic constraints on an equal footing. Nuisance parameters related to a scale- and redshift-dependent galaxy bias were also included in the computation of the 3D SFB and tomographic power spectra for the first time. Results: Tomographic and 3D SFB methods can recover similar constraints in the absence of systematics. This requires choosing an optimal number of redshift bins for the tomographic analysis, which we computed to be N = 26 for zmed ≃ 0.4, N = 30 for zmed ≃ 1.0, and N = 42 for zmed ≃ 1.7. When marginalising over nuisance parameters related to the galaxy bias, the forecast 3D SFB constraints are less affected by this source of systematics than the tomographic constraints. 
In addition, the rate of increase of the figure of merit as a function of median redshift is higher for the 3D SFB method than for the 2D tomographic method. Conclusions: Constraints from the 3D SFB analysis are less sensitive to unavoidable systematics stemming from a redshift- and scale-dependent galaxy bias. Even for surveys that are optimised with tomography in mind, a 3D SFB analysis is more powerful. In addition, for survey optimisation, the figure of merit for the 3D SFB method increases more rapidly with redshift, especially at higher redshifts, suggesting that the 3D SFB method should be preferred for designing and analysing future wide-field spectroscopic surveys. CosmicPy, the Python package developed for this paper, is freely available at https://cosmicpy.github.io. Appendices are available in electronic form at http://www.aanda.org

  12. A New Limit on Planck Scale Lorentz Violation from Gamma-ray Burst Polarization

    NASA Technical Reports Server (NTRS)

    Stecker, Floyd W.

    2011-01-01

Constraints on possible Lorentz invariance violation (LIV) to first order in E/M_Planck for photons in the framework of effective field theory (EFT) are discussed, taking cosmological factors into account. Then, using the reported detection of polarized soft gamma-ray emission from the gamma-ray burst GRB041219a that is indicative of an absence of vacuum birefringence, together with a very recent improved method for estimating the redshift of the burst, we derive constraints on the dimension-5 Lorentz-violating modification to the Lagrangian of an effective local QFT for QED. Our new constraints are more than five orders of magnitude better than recent constraints from observations of the Crab Nebula. We obtain an upper limit on the dimension-5 Lorentz-violating EFT parameter of |ζ| < 2.4 × 10^{-15}, corresponding to a constraint on the dimension-5 standard model extension parameter k^{(5)}_{(V)00} of much less than 4.2 × 10^{-34}/GeV.

  13. Thermodynamically consistent model calibration in chemical kinetics

    PubMed Central

    2011-01-01

    Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. 
Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
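The core idea of thermodynamic feasibility can be sketched concretely: around any closed reaction cycle, the signed sum of log rate constants must vanish (a Wegscheider condition), and estimated parameters can be projected onto this linear constraint set in the least squares sense. This is a simplified stand-in for TCMC's full constrained optimization; the function and variable names are hypothetical.

```python
import numpy as np

def thermo_project(log_k_obs, cycles):
    """Project estimated log rate constants onto the thermodynamically
    feasible set. Each row of `cycles` encodes a Wegscheider condition
    sum_i c_i * log k_i = 0 (forward rates +1, reverse rates -1 around a
    reaction loop)."""
    A = np.atleast_2d(np.asarray(cycles, dtype=float))
    x = np.asarray(log_k_obs, dtype=float)
    # Closest feasible point in the least-squares sense:
    # x - A^T (A A^T)^{-1} A x, the orthogonal projection onto {A x = 0}.
    correction = A.T @ np.linalg.solve(A @ A.T, A @ x)
    return x - correction
```

The projection changes the estimated parameters as little as possible while making every cycle constraint hold exactly, which is the qualitative behaviour TCMC aims for.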

  14. An implicit adaptation algorithm for a linear model reference control system

    NASA Technical Reports Server (NTRS)

    Mabius, L.; Kaufman, H.

    1975-01-01

    This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.

  15. Statistical Techniques to Explore the Quality of Constraints in Constraint-Based Modeling Environments

    ERIC Educational Resources Information Center

    Gálvez, Jaime; Conejo, Ricardo; Guzmán, Eduardo

    2013-01-01

    One of the most popular student modeling approaches is Constraint-Based Modeling (CBM). It is an efficient approach that can be easily applied inside an Intelligent Tutoring System (ITS). Even with these characteristics, building new ITSs requires carefully designing the domain model to be taught because different sources of errors could affect…

  16. Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
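The hard-constraint idea can be sketched for the special case of box constraints, where the constrained quadratic program with identity weighting reduces to clipping the updated estimate. This is a simplified illustration under that assumption, not the paper's general quadratic programming formulation.

```python
import numpy as np

def kf_update_constrained(x_pred, P_pred, z, H, R, lo, hi):
    """One Kalman measurement update followed by projection of the state
    estimate onto box inequality constraints lo <= x <= hi (with identity
    weighting, the hard-constraint QP reduces to elementwise clipping)."""
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)           # unconstrained update
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred  # covariance update
    x_c = np.clip(x, lo, hi)                    # project onto constraint set
    return x_c, P
```

For physically motivated bounds such as non-negative health parameters, the projection step guarantees the estimate stays in the admissible region at every update.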

  17. Practical input optimization for aircraft parameter estimation experiments. Ph.D. Thesis, 1990

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1993-01-01

The object of this research was to develop an algorithm for the design of practical, optimal flight test inputs for aircraft parameter estimation experiments. A general, single-pass technique was developed which allows global optimization of the flight test input design for parameter estimation using the principles of dynamic programming, with the input forms limited to square waves only. Provision was made for practical constraints on the input, including amplitude constraints, control system dynamics, and selected input frequency range exclusions. In addition, the input design was accomplished while imposing output amplitude constraints required by model validity and considerations of safety during the flight test. The algorithm has multiple input design capability, with optional inclusion of a constraint that only one control moves at a time, so that a human pilot can implement the inputs. It is shown that the technique can be used to design experiments for estimation of open loop model parameters from closed loop flight test data. The report includes a new formulation of the optimal input design problem, a description of a new approach to the solution, and a summary of the characteristics of the algorithm, followed by three example applications which demonstrate the quality and expanded capabilities of the input designs produced by the new technique. In all cases, the new input design approach showed significant improvement over previous input design methods in terms of achievable parameter accuracies.

  18. Cosmology with photometric weak lensing surveys: Constraints with redshift tomography of convergence peaks and moments

    NASA Astrophysics Data System (ADS)

    Petri, Andrea; May, Morgan; Haiman, Zoltán

    2016-09-01

Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w. When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ω_m, w, σ_8) for an LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts, and higher convergence moments. We find that redshift tomography with the power spectrum reduces the area of the 1σ confidence interval in (Ω_m, w) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ω_m, w) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. We find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.
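As a small illustration of the peak-count statistic, the sketch below counts local maxima of a convergence map above a set of thresholds. The 8-neighbour peak definition is a common convention assumed here for illustration, not necessarily the paper's exact choice.

```python
import numpy as np

def peak_counts(kappa, thresholds):
    """Count local maxima of a convergence map above each threshold.
    A pixel is a peak if it strictly exceeds its 8 neighbours
    (map edges are excluded)."""
    c = kappa[1:-1, 1:-1]
    neigh = [kappa[1 + dy:kappa.shape[0] - 1 + dy, 1 + dx:kappa.shape[1] - 1 + dx]
             for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    is_peak = np.all([c > n for n in neigh], axis=0)
    return [int(np.sum(is_peak & (c > t))) for t in thresholds]
```

Binning peaks by threshold in this way produces the histogram-like statistic whose cosmology dependence the analysis exploits alongside the power spectrum and moments.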

  19. Least Squares Approach to the Alignment of the Generic High Precision Tracking System

    NASA Astrophysics Data System (ADS)

    de Renstrom, Pawel Brückman; Haywood, Stephen

    2006-04-01

A least squares method to solve a generic alignment problem of a high-granularity tracking system is presented. The algorithm is based on an analytical linear expansion and allows for multiple nested fits; e.g., imposing a common vertex for groups of particle tracks is of particular interest. We present a consistent and complete recipe to impose constraints on either implicit or explicit parameters. The method has been applied to the full simulation of a subset of the ATLAS silicon tracking system. The ultimate goal is to determine ≈35,000 degrees of freedom (DoFs). We present a limited-scale exercise exploring various aspects of the solution.
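Imposing exact linear constraints on fitted parameters can be sketched with the bordered ("KKT") form of the normal equations, a generic textbook construction assumed here as an illustration of the constraint recipe rather than the paper's full nested-fit machinery.

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Least squares: minimize ||A x - b||^2 subject to exact linear
    constraints C x = d, solved via the bordered (KKT) normal equations
    [[A^T A, C^T], [C, 0]] [x; lam] = [A^T b; d]."""
    n, m = A.shape[1], C.shape[0]
    K = np.zeros((n + m, n + m))
    K[:n, :n] = A.T @ A
    K[:n, n:] = C.T
    K[n:, :n] = C
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]          # parameters; sol[n:] are Lagrange multipliers
```

In an alignment context, rows of C would encode conditions such as fixing the global position of the detector or sharing a common vertex among a group of tracks.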

  20. CONSTRAINTS ON HYBRID METRIC-PALATINI GRAVITY FROM BACKGROUND EVOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lima, N. A.; Barreto, V. S., E-mail: ndal@roe.ac.uk, E-mail: vsm@roe.ac.uk

    2016-02-20

In this work, we introduce two models of the hybrid metric-Palatini theory of gravitation. We explore their background evolution, showing explicitly that one recovers standard General Relativity with an effective cosmological constant at late times. This happens because the Palatini Ricci scalar evolves toward and asymptotically settles at the minimum of its effective potential during cosmological evolution. We then use a combination of cosmic microwave background, supernovae, and baryon acoustic oscillation background data to constrain the models' free parameters. For both models, we are able to constrain the maximum deviation from the gravitational constant G at early times to be around 1%.

  1. Microwave anisotropies in the light of the data from the COBE satellite

    NASA Technical Reports Server (NTRS)

    Dodelson, Scott; Jubas, Jay M.

    1993-01-01

    The recent measurement of anisotropies in the cosmic microwave background by the Cosmic Background Explorer (COBE) satellite and the recent South Pole experiment offer an excellent opportunity to probe cosmological theories. We test a class of theories in which the universe today is flat and matter dominated, and primordial perturbations are adiabatic parameterized by an index n. In this class of theories the predicted signal in the South Pole experiment depends on n, the Hubble constant, and the baryon density. For n = 1 a large region of this parameter space is ruled out, but there is still a window open which satisfies constraints from COBE, the South Pole experiment, and big bang nucleosynthesis.

  2. The ΩDE-ΩM Plane in Dark Energy Cosmology

    NASA Astrophysics Data System (ADS)

    Qiang, Yuan; Zhang, Tong-Jie

Dark energy cosmology with a constant equation of state w is considered in this paper. The ΩDE-ΩM plane has been used to study the present state and expansion history of the universe. Through mathematical analysis, we derive theoretical constraints on the cosmological parameters. Combined with observations such as the transition redshift from deceleration to acceleration, more precise constraints on the cosmological parameters can be obtained.
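For the w = -1 (cosmological-constant) corner of the constant-w models considered, the link between the ΩDE-ΩM plane and the deceleration-to-acceleration transition redshift has a simple closed form, sketched below. The concordance values Ω_DE = 0.7, Ω_M = 0.3 are illustrative choices, not taken from this paper.

```python
# In a flat w = -1 cosmology the deceleration parameter changes sign where
#   Ω_M (1+z)^3 = 2 Ω_DE,   i.e.   z_t = (2 Ω_DE / Ω_M)^(1/3) - 1.
# Values below are standard concordance numbers, chosen only for illustration.

def transition_redshift(omega_de, omega_m):
    """Redshift at which cosmic expansion switches from deceleration to acceleration."""
    return (2.0 * omega_de / omega_m) ** (1.0 / 3.0) - 1.0

z_t = transition_redshift(omega_de=0.7, omega_m=0.3)   # ≈ 0.67
```

This is why a measured transition redshift cuts a one-dimensional curve through the ΩDE-ΩM plane, sharpening the purely theoretical constraints.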

  3. Equation of state for dense nucleonic matter from metamodeling. I. Foundational aspects

    NASA Astrophysics Data System (ADS)

    Margueron, Jérôme; Hoffmann Casali, Rudiney; Gulminelli, Francesca

    2018-02-01

Metamodeling for the nucleonic equation of state (EOS), inspired by a Taylor expansion around the saturation density of symmetric nuclear matter, is proposed and parameterized in terms of the empirical parameters. The present knowledge of nuclear empirical parameters is first reviewed in order to estimate their average values and associated uncertainties, thus defining the parameter space of the metamodeling. They are divided into isoscalar and isovector types, and ordered according to their power in the density expansion. The goodness of the metamodeling is analyzed against the predictions of the original models. In addition, since no correlation among the empirical parameters is assumed a priori, arbitrary density dependences can be explored, including some that might not be accessible in existing functionals. Spurious correlations due to the assumed functional form are also removed. This meta-EOS allows direct relations between the uncertainties on the empirical parameters and the density dependence of the nuclear equation of state and its derivatives, and the mapping between the two can be done with standard Bayesian techniques. A sensitivity analysis shows that the most influential empirical parameters are the isovector parameters Lsym and Ksym, and that laboratory constraints at supersaturation densities are essential to reduce the present uncertainties. The present metamodeling of the EOS for nuclear matter is proposed for further applications to neutron stars and supernova matter.
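The lowest orders of the density expansion behind this metamodeling can be sketched in a few lines. The parameter values below are illustrative round numbers for the empirical parameters, not the paper's fitted averages, and the expansion is truncated at second order, whereas the paper carries it further.

```python
# Sketch of a second-order EOS metamodel: energy per nucleon e(n, δ) expanded
# around saturation density n_sat in the variable x = (n - n_sat)/(3 n_sat),
#   e(n, δ) = e_is(n) + δ² e_iv(n),
# with isoscalar coefficients (E_sat, K_sat) and isovector coefficients
# (E_sym, L_sym, K_sym). All numbers are illustrative, not fitted values.

N_SAT = 0.16   # saturation density [fm^-3], typical empirical value

def energy_per_nucleon(n, delta,
                       e_sat=-15.8, k_sat=230.0,              # isoscalar [MeV]
                       e_sym=32.0, l_sym=60.0, k_sym=-100.0):  # isovector [MeV]
    x = (n - N_SAT) / (3.0 * N_SAT)                # expansion variable
    e_is = e_sat + 0.5 * k_sat * x**2              # symmetric-matter part
    e_iv = e_sym + l_sym * x + 0.5 * k_sym * x**2  # symmetry-energy part
    return e_is + delta**2 * e_iv

# At saturation, symmetric matter (δ = 0) sits at the minimum e = E_sat, and
# pure neutron matter (δ = 1) is shifted up by the symmetry energy E_sym.
```

Varying the coefficients within their empirical uncertainties is what maps parameter uncertainty onto the density dependence of the EOS, which is the Bayesian exercise the abstract describes.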

  4. Patchy screening of the cosmic microwave background by inhomogeneous reionization

    NASA Astrophysics Data System (ADS)

    Gluscevic, Vera; Kamionkowski, Marc; Hanson, Duncan

    2013-02-01

We derive a constraint on patchy screening of the cosmic microwave background from inhomogeneous reionization using off-diagonal TB and TT correlations in WMAP-7 temperature/polarization data. We interpret this as a constraint on the rms optical-depth fluctuation Δτ as a function of a coherence multipole LC. We relate these parameters to a comoving coherence scale, or bubble size, RC in a phenomenological model where reionization is instantaneous but occurs on a crinkly surface, and also to the bubble size in a model of “Swiss cheese” reionization where bubbles of fixed size are spread over some range of redshifts. The current WMAP data are still too weak, by several orders of magnitude, to constrain reasonable models, but forthcoming Planck and future EPIC data should begin to approach interesting regimes of parameter space. We also present constraints on the parameter space imposed by the recent results from the EDGES experiment.

  5. Managing temporal relations

    NASA Technical Reports Server (NTRS)

    Britt, Daniel L.; Geoffroy, Amy L.; Gohring, John R.

    1990-01-01

Various temporal constraints on the execution of activities are described, and their representation in the scheduling system MAESTRO is discussed. Initial examples are presented using a sample activity. Those examples are expanded to include a second activity, and the types of temporal constraints that can obtain between two activities are explored. Soft constraints, or preferences, in activity placement are discussed. Multiple performances of activities are considered, with respect to both hard and soft constraints. The primary methods used in MAESTRO to handle temporal constraints are described, as are certain aspects of contingency handling with respect to temporal constraints. A discussion of the overall approach, with indications of future directions for this research, concludes the study.

  6. Allosteric Control of Icosahedral Capsid Assembly

    PubMed Central

    Lazaro, Guillermo R.

    2017-01-01

    During the lifecycle of a virus, viral proteins and other components self-assemble to form an ordered protein shell called a capsid. This assembly process is subject to multiple competing constraints, including the need to form a thermostable shell while avoiding kinetic traps. It has been proposed that viral assembly satisfies these constraints through allosteric regulation, including the interconversion of capsid proteins among conformations with different propensities for assembly. In this article we use computational and theoretical modeling to explore how such allostery affects the assembly of icosahedral shells. We simulate assembly under a wide range of protein concentrations, protein binding affinities, and two different mechanisms of allosteric control. We find that, above a threshold strength of allosteric control, assembly becomes robust over a broad range of subunit binding affinities and concentrations, allowing the formation of highly thermostable capsids. Our results suggest that allostery can significantly shift the range of protein binding affinities that lead to successful assembly, and thus should be accounted for in models that are used to estimate interaction parameters from experimental data. PMID:27117092

  7. Limits to the primordial helium abundance in the baryon-inhomogeneous big bang

    NASA Technical Reports Server (NTRS)

    Mathews, G. J.; Schramm, D. N.; Meyer, B. S.

    1993-01-01

The parameter space for baryon inhomogeneous big bang models is explored with the goal of determining the minimum helium abundance obtainable in such models while still satisfying the other light-element constraints. We find that the constraint of (D + He-3)/H less than 10^-4 restricts the primordial helium mass fraction from baryon-inhomogeneous big bang models to be greater than 0.231 even for a scenario which optimizes the effects of the inhomogeneities and destroys the excess lithium production. Thus, this modification to the standard big bang as well as the standard homogeneous big bang model itself would be falsifiable by observation if the primordial He-4 abundance were observed to be less than 0.231. Furthermore, a present upper limit to the observed helium mass fraction of Y(obs)(p) less than 0.24 implies that the maximum baryon-to-photon ratio allowable in the inhomogeneous models corresponds to eta less than 2.3 x 10^-9 (omega(b) h-squared less than 0.088) even if all conditions are optimized.

  8. EEHG Performance and Scaling Laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Penn, Gregory

This note calculates the idealized performance of echo-enabled harmonic generation (EEHG), explores the parameter settings, and looks at constraints determined by incoherent synchrotron radiation (ISR) and intrabeam scattering (IBS). Another important effect, time-of-flight variations related to transverse emittance, is included here but without detailed explanation because it has been described previously. The importance of ISR and IBS is that they produce random energy shifts that lead to temporal shifts after the various beam manipulations required by the EEHG scheme. These effects place competing constraints on the beamline. For chicane magnets that are too compact for a given R56, the magnetic fields will be sufficiently strong that ISR will blur out the complex phase-space structure of the echo scheme to the point where the bunching is strongly suppressed. The effect of IBS is more omnipresent, and requires an overall compact beamline. It is particularly challenging for the second pulse in a two-color attosecond beamline, due to the long delay between the first energy modulation and the modulator for the second pulse.

  9. Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks.

    PubMed

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney J

    2018-05-01

Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and provide mathematical proofs that guarantee the graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space, driven by the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
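The division of labor described above, competition inside each module, constraints coupling modules, can be caricatured in a few lines. This is a deliberately simplified discrete abstraction: each graph node is a winner-take-all over colors, and "constraint" inputs penalize a color in proportion to how many neighbors currently use it. The paper's actual networks use continuous nonsaturating linear-threshold dynamics, which this sketch does not reproduce.

```python
# Discrete caricature of constraint-steered winner-take-all graph coloring.
# Each node's WTA module picks the color receiving the least inhibition from
# its neighbors; sweeps repeat until nothing changes. (The paper's networks
# are continuous linear-threshold circuits; this only illustrates the idea.)

def color_graph(edges, n_nodes, n_colors, sweeps=20):
    nbrs = {v: set() for v in range(n_nodes)}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    color = [0] * n_nodes                     # all modules start on color 0
    for _ in range(sweeps):
        changed = False
        for v in range(n_nodes):              # asynchronous sweep over modules
            penalty = [0] * n_colors          # inhibition from neighbors per color
            for u in nbrs[v]:
                penalty[color[u]] += 1
            # Least-penalized color wins; ties favor the current color.
            best = min(range(n_colors),
                       key=lambda c: (penalty[c], c != color[v]))
            if best != color[v]:
                color[v], changed = best, True
        if not changed:                       # fixed point reached
            break
    return color

# 5-cycle with 3 colors: the sweeps settle on a proper coloring.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
coloring = color_graph(edges, n_nodes=5, n_colors=3)
```

In this discrete toy every conflicted node can always lower its penalty, so the greedy sweep monotonically removes conflicts; the paper's convergence proofs concern the continuous dynamics and are far more subtle.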

  10. Population Synthesis of Radio & Gamma-Ray Millisecond Pulsars

    NASA Astrophysics Data System (ADS)

    Frederick, Sara; Gonthier, P. L.; Harding, A. K.

    2014-01-01

    In recent years, the number of known gamma-ray millisecond pulsars (MSPs) in the Galactic disk has risen substantially thanks to confirmed detections by Fermi Gamma-ray Space Telescope (Fermi). We have developed a new population synthesis of gamma-ray and radio MSPs in the galaxy which uses Markov Chain Monte Carlo techniques to explore the large and small worlds of the model parameter space and allows for comparisons of the simulated and detected MSP distributions. The simulation employs empirical radio and gamma-ray luminosity models that are dependent upon the pulsar period and period derivative with freely varying exponents. Parameters associated with the birth distributions are also free to vary. The computer code adjusts the magnitudes of the model luminosities to reproduce the number of MSPs detected by a group of ten radio surveys, thus normalizing the simulation and predicting the MSP birth rates in the Galaxy. Computing many Markov chains leads to preferred sets of model parameters that are further explored through two statistical methods. Marginalized plots define confidence regions in the model parameter space using maximum likelihood methods. A secondary set of confidence regions is determined in parallel using Kuiper statistics calculated from comparisons of cumulative distributions. These two techniques provide feedback to affirm the results and to check for consistency. Radio flux and dispersion measure constraints have been imposed on the simulated gamma-ray distributions in order to reproduce realistic detection conditions. The simulated and detected distributions agree well for both sets of radio and gamma-ray pulsar characteristics, as evidenced by our various comparisons.

  11. Optimization of structures to satisfy a flutter velocity constraint by use of quadratic equation fitting. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Motiwalla, S. K.

    1973-01-01

    Using the first and the second derivative of flutter velocity with respect to the parameters, the velocity hypersurface is made quadratic. This greatly simplifies the numerical procedure developed for determining the values of the design parameters such that a specified flutter velocity constraint is satisfied and the total structural mass is near a relative minimum. A search procedure is presented utilizing two gradient search methods and a gradient projection method. The procedure is applied to the design of a box beam, using finite-element representation. The results indicate that the procedure developed yields substantial design improvement satisfying the specified constraint and does converge to near a local optimum.
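The core trick above, using first and second derivatives to make the flutter-velocity surface quadratic, reduces in one dimension to solving a scalar quadratic for the design step that meets the velocity constraint. The sketch below illustrates that; the polynomial and its derivative values are invented for illustration, whereas the thesis works with many design parameters and a finite-element model.

```python
import math

# 1-D sketch of quadratic fitting for a flutter-velocity constraint: model
#   V(x0 + h) ≈ V(x0) + V'(x0) h + ½ V''(x0) h²
# and solve for the step h that reaches a required velocity V_req.

def step_to_constraint(v0, dv, d2v, v_req):
    """Smallest-magnitude real root h of ½·d2v·h² + dv·h + (v0 - v_req) = 0."""
    a, b, c = 0.5 * d2v, dv, v0 - v_req
    if abs(a) < 1e-12:                      # model is locally linear
        return -c / b
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        raise ValueError("target velocity unreachable in quadratic model")
    r1 = (-b + math.sqrt(disc)) / (2.0 * a)
    r2 = (-b - math.sqrt(disc)) / (2.0 * a)
    return r1 if abs(r1) <= abs(r2) else r2

# If V really is quadratic, a single step lands exactly on the constraint:
# take V(x) = 100 + 20x + 5x², start at x0 = 0, and require V = 145.
h = step_to_constraint(v0=100.0, dv=20.0, d2v=10.0, v_req=145.0)
```

When the true surface is only approximately quadratic, this step is repeated inside the search procedure, alongside the mass-minimizing gradient moves the abstract mentions.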

  12. Contextual Shaping of Student Design Practices: The Role of Constraint in First-Year Engineering Design

    NASA Astrophysics Data System (ADS)

    Goncher, Andrea M.

Research on engineering design is a core area of concern within engineering education, and a fundamental understanding of how engineering students approach and undertake design is necessary in order to develop effective design models and pedagogies. This dissertation contributes to scholarship on engineering design by addressing a critical, but as yet underexplored, problem: how does the context in which students design shape their design practices? Using a qualitative study comprising video data of design sessions, focus group interviews with students, and archives of their design work, this research explored how design decisions and actions are shaped by context, specifically the context of higher education. To develop a theoretical explanation for observed behavior, this study used the nested structuration framework proposed by Perlow, Gittell, & Katz (2004). This framework explicated how teamwork is shaped by mutually reinforcing relationships at the individual, organizational, and institutional levels. I appropriated this framework to look specifically at how engineering students working on a course-related design project identify constraints that guide their design and how these constraints emerge as students interact while working on the project. I first identified and characterized the parameters associated with the design project from the student perspective and then, through multi-case studies of four design teams, I looked at the role these parameters play in student design practices. This qualitative investigation of first-year engineering student design teams revealed mutual and interconnected relationships between students and the organizations and institutions that they are a part of. In addition to contributing to research on engineering design, this work provides guidelines and practices to help design educators develop more effective design projects by incorporating constraints that enable effective design and learning.
Moreover, I found that when appropriated in the context of higher education, multiple sublevels existed within nested structuration's organizational context and included course-level and project-level factors. The implications of this research can be used to improve the design of engineering course projects as well as the design of research efforts related to design in engineering education.

  13. Likelihood analysis of the pMSSM11 in light of LHC 13-TeV data

    NASA Astrophysics Data System (ADS)

    Bagnaschi, E.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Citron, M.; Costa, J. C.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Lucio, M.; Martínez Santos, D.; Olive, K. A.; Richards, A.; Spanos, V. C.; Suárez Fernández, I.; Weiglein, G.

    2018-03-01

We use MasterCode to perform a frequentist analysis of the constraints on a phenomenological MSSM model with 11 parameters, the pMSSM11, including constraints from ~36/fb of LHC data at 13 TeV and PICO, XENON1T and PandaX-II searches for dark matter scattering, as well as previous accelerator and astrophysical measurements, presenting fits both with and without the (g-2)_μ constraint. The pMSSM11 is specified by the following parameters: 3 gaugino masses M_{1,2,3}, a common mass for the first- and second-generation squarks m_{\tilde{q}} and a distinct third-generation squark mass m_{\tilde{q}_3}, a common mass for the first- and second-generation sleptons m_{\tilde{ℓ}} and a distinct third-generation slepton mass m_{\tilde{τ}}, a common trilinear mixing parameter A, the Higgs mixing parameter μ, the pseudoscalar Higgs mass M_A and tan β. In the fit including (g-2)_μ, a Bino-like \tilde{χ}^0_1 is preferred, whereas a Higgsino-like \tilde{χ}^0_1 is mildly favoured when the (g-2)_μ constraint is dropped. We identify the mechanisms that operate in different regions of the pMSSM11 parameter space to bring the relic density of the lightest neutralino, \tilde{χ}^0_1, into the range indicated by cosmological data. In the fit including (g-2)_μ, coannihilations with \tilde{χ}^0_2 and the Wino-like \tilde{χ}^±_1 or with nearly-degenerate first- and second-generation sleptons are active, whereas coannihilations with the \tilde{χ}^0_2 and the Higgsino-like \tilde{χ}^±_1 or with first- and second-generation squarks may be important when the (g-2)_μ constraint is dropped. In the two cases, we present χ^2 functions in two-dimensional mass planes as well as their one-dimensional profile projections and best-fit spectra. 
Prospects remain for discovering strongly-interacting sparticles at the LHC, in both the scenarios with and without the (g-2)_μ constraint, as well as for discovering electroweakly-interacting sparticles at a future linear e^+ e^- collider such as the ILC or CLIC.

  14. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM that enables us to establish a modal model satisfying such physically motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  15. Effective Majorana mass matrix from tau and pseudoscalar meson lepton number violating decays

    NASA Astrophysics Data System (ADS)

    Abada, Asmaa; De Romeri, Valentina; Lucente, Michele; Teixeira, Ana M.; Toma, Takashi

    2018-02-01

An observation of any lepton number violating process will undoubtedly point towards the existence of new physics and, indirectly, to the Majorana nature of the exchanged fermion. In this work, we explore the potential of a minimal extension of the Standard Model via heavy sterile fermions with masses in the [0.1-10] GeV range concerning an extensive array of "neutrinoless" meson and tau decay processes. We assume that the Majorana neutrinos are produced on-shell, and focus on three-body decays. We update the bounds on the active-sterile mixing elements, |U_{ℓ_α 4} U_{ℓ_β 4}|, taking into account the most recent experimental bounds (and constraints) and new theoretical inputs, as well as the effects of a finite detector, imposing that the heavy neutrino decays within the detector. This allows us to establish up-to-date comprehensive constraints on the sterile fermion parameter space. Our results suggest that the branching fractions of several decays are close to current sensitivities (likely within reach of future facilities), some being already in conflict with current data (as is the case of K^+ → ℓ_α^+ ℓ_β^+ π^- and τ^- → μ^+ π^- π^-). We use these processes to extract constraints on all entries of an enlarged definition of a 3 × 3 "effective" Majorana neutrino mass matrix (m_ν)_{αβ}.

  16. Fundamental Activity Constraints Lead to Specific Interpretations of the Connectome.

    PubMed

    Schuecker, Jannis; Schmidt, Maximilian; van Albada, Sacha J; Diesmann, Markus; Helias, Moritz

    2017-02-01

    The continuous integration of experimental data into coherent models of the brain is an increasing challenge of modern neuroscience. Such models provide a bridge between structure and activity, and identify the mechanisms giving rise to experimental observations. Nevertheless, structurally realistic network models of spiking neurons are necessarily underconstrained even if experimental data on brain connectivity are incorporated to the best of our knowledge. Guided by physiological observations, any model must therefore explore the parameter ranges within the uncertainty of the data. Based on simulation results alone, however, the mechanisms underlying stable and physiologically realistic activity often remain obscure. We here employ a mean-field reduction of the dynamics, which allows us to include activity constraints into the process of model construction. We shape the phase space of a multi-scale network model of the vision-related areas of macaque cortex by systematically refining its connectivity. Fundamental constraints on the activity, i.e., prohibiting quiescence and requiring global stability, prove sufficient to obtain realistic layer- and area-specific activity. Only small adaptations of the structure are required, showing that the network operates close to an instability. The procedure identifies components of the network critical to its collective dynamics and creates hypotheses for structural data and future experiments. The method can be applied to networks involving any neuron model with a known gain function.

  17. Constraining Galactic cosmic-ray parameters with Z ≤ 2 nuclei

    NASA Astrophysics Data System (ADS)

    Coste, B.; Derome, L.; Maurin, D.; Putze, A.

    2012-03-01

    Context. The secondary-to-primary B/C ratio is widely used for studying Galactic cosmic-ray propagation processes. The 2H/4He and 3He/4He ratios probe a different Z/A regime, which provides a test for the "universality" of propagation. Aims: We revisit the constraints on diffusion-model parameters set by the quartet (1H, 2H, 3He, 4He), using the most recent data as well as updated formulae for the inelastic and production cross-sections. Methods: Our analysis relies on the USINE propagation package and a Markov Chain Monte Carlo technique to estimate the probability density functions of the parameters. Simulated data were also used to validate analysis strategies. Results: The fragmentation of CNO cosmic rays (resp. NeMgSiFe) on the interstellar medium during their propagation contributes to 20% (resp. 20%) of the 2H and 15% (resp. 10%) of the 3He flux at high energy. The C to Fe elements are also responsible for up to 10% of the 4He flux measured at 1 GeV/n. The analysis of 3He/4He (and to a lesser extent 2H/4He) data shows that the transport parameters are consistent with those from the B/C analysis: the diffusion model with δ ~ 0.7 (diffusion slope), Vc ~ 20 km s-1 (galactic wind), Va ~ 40 km s-1 (reacceleration) is favoured, but the combination δ ~ 0.2, Vc ~ 0, and Va ~ 80 km s-1 is a close second. The confidence intervals on the parameters show that the constraints set by the quartet data can compete with those derived from the B/C data. These constraints are tighter when adding the 3He (or 2H) flux measurements, and the tightest when the He flux is added as well. For the latter, the analysis of simulated and real data shows an increased sensitivity to biases. Using the secondary-to-primary ratio along with a loose prior on the source parameters is recommended to obtain the most robust constraints on the transport parameters. Conclusions: Light nuclei should be systematically considered in the analysis of transport parameters. 
They provide independent constraints that can compete with those obtained from the B/C analysis.
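The Markov Chain Monte Carlo machinery behind these parameter estimates can be illustrated with a minimal Metropolis sampler. The "posterior" below is a toy Gaussian centered on the favoured diffusion slope δ ~ 0.7; its width and the sampler settings are invented for illustration, and the real analysis (via the USINE package) samples a multi-dimensional posterior against actual flux data.

```python
import math
import random

# Minimal Metropolis sampler of the kind used, via far more elaborate
# machinery, to estimate posterior densities of propagation parameters.
# Toy 1-D posterior: Gaussian around δ = 0.7 with an invented width of 0.05.

def log_post(delta):
    return -0.5 * ((delta - 0.7) / 0.05) ** 2

def metropolis(n_steps, x0=0.5, step=0.03, seed=42):
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)      # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

chain = metropolis(20000)
burned = chain[2000:]                        # discard burn-in
mean = sum(burned) / len(burned)             # posterior mean estimate for δ
```

Histogramming `burned` approximates the probability density function of the parameter, which is exactly the quantity the abstract's confidence intervals are read off from.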

  18. Primordial black hole production in Critical Higgs Inflation

    NASA Astrophysics Data System (ADS)

    Ezquiaga, Jose María; García-Bellido, Juan; Ruiz Morales, Ester

    2018-01-01

Primordial Black Holes (PBH) arise naturally from high peaks in the curvature power spectrum of near-inflection-point single-field inflation, and could constitute today the dominant component of the dark matter in the universe. In this letter we explore the possibility that a broad spectrum of PBH is formed in models of Critical Higgs Inflation (CHI), where the near-inflection point is related to the critical value of the RGE running of both the Higgs self-coupling λ (μ) and its non-minimal coupling to gravity ξ (μ). We show that, for a wide range of model parameters, a half-dome-shaped peak in the matter spectrum arises at sufficiently small scales that it passes all the constraints from large scale structure observations. The predicted cosmic microwave background spectrum at large scales is in agreement with Planck 2015 data, and has a relatively large tensor-to-scalar ratio that may soon be detected by B-mode polarization experiments. Moreover, the wide peak in the power spectrum gives an approximately lognormal PBH distribution in the range of masses 0.01-100 M⊙, which could explain the LIGO merger events, while passing all present PBH observational constraints. The stochastic background of gravitational waves coming from the unresolved black-hole-binary mergers could also be detected by LISA or PTA. Furthermore, the parameters of the CHI model are consistent, within 2σ, with the measured Higgs parameters at the LHC and their running. Future measurements of the PBH mass spectrum could allow us to obtain complementary information about the Higgs couplings at energies well above the EW scale, and thus constrain new physics beyond the Standard Model.

  19. The clustering of galaxies in the completed SDSS-III Baryon Oscillation Spectroscopic Survey: Cosmological implications of the Fourier space wedges of the final sample

    NASA Astrophysics Data System (ADS)

    Grieb, Jan Niklas; Sánchez, Ariel G.; Salazar-Albornoz, Salvador; Scoccimarro, Román; Crocce, Martín; Dalla Vecchia, Claudio; Montesano, Francesco; Gil-Marín, Héctor; Ross, Ashley J.; Beutler, Florian; Rodríguez-Torres, Sergio; Chuang, Chia-Hsun; Prada, Francisco; Kitaura, Francisco-Shu; Cuesta, Antonio J.; Eisenstein, Daniel J.; Percival, Will J.; Vargas-Magaña, Mariana; Tinker, Jeremy L.; Tojeiro, Rita; Brownstein, Joel R.; Maraston, Claudia; Nichol, Robert C.; Olmstead, Matthew D.; Samushia, Lado; Seo, Hee-Jong; Streblyanska, Alina; Zhao, Gong-bo

    2017-05-01

We extract cosmological information from the anisotropic power-spectrum measurements from the recently completed Baryon Oscillation Spectroscopic Survey (BOSS), extending the concept of clustering wedges to Fourier space. Making use of new fast-Fourier-transform-based estimators, we measure the power-spectrum clustering wedges of the BOSS sample by filtering out the information of Legendre multipoles ℓ > 4. Our modelling of these measurements is based on novel approaches to describe non-linear evolution, bias and redshift-space distortions, which we test using synthetic catalogues based on large-volume N-body simulations. We are able to include smaller scales than in previous analyses, resulting in tighter cosmological constraints. Using three overlapping redshift bins, we measure the angular-diameter distance, the Hubble parameter and the cosmic growth rate, and explore the cosmological implications of our full-shape clustering measurements in combination with cosmic microwave background and Type Ia supernova data. Assuming a Λ cold dark matter (ΛCDM) cosmology, we constrain the matter density to Ω_M = 0.311_{-0.010}^{+0.009} and the Hubble parameter to H_0 = 67.6_{-0.6}^{+0.7} km s^{-1} Mpc^{-1}, at a confidence level of 68 per cent. We also allow for non-standard dark energy models and modifications of the growth rate, finding good agreement with the ΛCDM paradigm. For example, we constrain the equation-of-state parameter to w = -1.019_{-0.039}^{+0.048}. This paper is part of a set that analyses the final galaxy-clustering data set from BOSS. The measurements and likelihoods presented here are combined with others in Alam et al. to produce the final cosmological constraints from BOSS.

  20. Sensitivities of Earth's core and mantle compositions to accretion and differentiation processes

    NASA Astrophysics Data System (ADS)

    Fischer, Rebecca A.; Campbell, Andrew J.; Ciesla, Fred J.

    2017-01-01

    The Earth and other terrestrial planets formed through the accretion of smaller bodies, with their core and mantle compositions primarily set by metal-silicate interactions during accretion. The conditions of these interactions are poorly understood, but could provide insight into the mechanisms of planetary core formation and the composition of Earth's core. Here we present modeling of Earth's core formation, combining results of 100 N-body accretion simulations with high pressure-temperature metal-silicate partitioning experiments. We explored how various aspects of accretion and core formation influence the resulting core and mantle chemistry: depth of equilibration, amounts of metal and silicate that equilibrate, initial distribution of oxidation states in the disk, temperature distribution in the planet, and target:impactor ratio of equilibrating silicate. Virtually all sets of model parameters that are able to reproduce the Earth's mantle composition result in at least several weight percent of both silicon and oxygen in the core, with more silicon than oxygen. This implies that the core's light element budget may be dominated by these elements, and is consistent with ≤1-2 wt% of other light elements. Reproducing geochemical and geophysical constraints requires that Earth formed from reduced materials that equilibrated at temperatures near or slightly above the mantle liquidus during accretion. The results indicate a strong tradeoff between the compositional effects of the depth of equilibration and the amounts of metal and silicate that equilibrate, so these aspects should be targeted in future studies aiming to better understand core formation conditions. Over the range of allowed parameter space, core and mantle compositions are most sensitive to these factors as well as stochastic variations in what the planet accreted as a function of time, so tighter constraints on these parameters will lead to an improved understanding of Earth's core composition.

  1. Mechanistic Explanations for Restricted Evolutionary Paths That Emerge from Gene Regulatory Networks

    PubMed Central

    Cotterell, James; Sharpe, James

    2013-01-01

    The extent and the nature of the constraints to evolutionary trajectories are central issues in biology. Constraints can be the result of systems dynamics causing a non-linear mapping between genotype and phenotype. How prevalent are these developmental constraints and what is their mechanistic basis? Although this has been extensively explored at the level of epistatic interactions between nucleotides within a gene, or amino acids within a protein, selection acts at the level of the whole organism, and therefore epistasis between disparate genes in the genome is expected due to their functional interactions within gene regulatory networks (GRNs) which are responsible for many aspects of organismal phenotype. Here we explore epistasis within GRNs capable of performing a common developmental function – converting a continuous morphogen input into discrete spatial domains. By exploring the full complement of GRN wiring designs that are able to perform this function, we analyzed all possible mutational routes between functional GRNs. Through this study we demonstrate that mechanistic constraints are common for GRNs that perform even a simple function. We demonstrate a common mechanistic cause for such a constraint involving complementation between counter-balanced gene-gene interactions. Furthermore we show how such constraints can be bypassed by means of “permissive” mutations that buffer changes in a direct route between two GRN topologies that would normally be unviable. We show that such bypasses are common and thus we suggest that unlike what was observed in protein sequence-function relationships, the “tape of life” is less reproducible when one considers higher levels of biological organization. PMID:23613807
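The route analysis described above can be illustrated with a small sketch: genotypes are encoded as sign vectors over the possible regulatory links of a three-gene network, single-interaction mutations define neighbors, and a breadth-first search checks whether two functional topologies are connected through functional intermediates. The `is_functional` predicate below is a hypothetical placeholder, not the paper's morphogen-interpretation criterion.

```python
from collections import deque

# Genotype = tuple of interaction signs (-1 repression, 0 absent, +1
# activation) for the 6 possible regulatory links of a 3-gene network.
# NOTE: is_functional is a hypothetical stand-in predicate, not the
# paper's stripe-forming criterion.
N_LINKS = 6

def is_functional(g):
    # hypothetical: a "functional" design needs at least one activating
    # and one repressing interaction
    return (1 in g) and (-1 in g)

def neighbors(g):
    """All genotypes reachable by mutating a single interaction."""
    for i in range(N_LINKS):
        for s in (-1, 0, 1):
            if s != g[i]:
                yield g[:i] + (s,) + g[i + 1:]

def connected_via_functional(start, goal):
    """Breadth-first search restricted to functional intermediates:
    is there a viable mutational route from start to goal?"""
    seen, queue = {start}, deque([start])
    while queue:
        g = queue.popleft()
        if g == goal:
            return True
        for nb in neighbors(g):
            if nb not in seen and is_functional(nb):
                seen.add(nb)
                queue.append(nb)
    return False
```

Restricting the search to functional intermediates is what exposes constrained routes: two functional designs may be adjacent in genotype space yet unreachable without passing through a non-functional network.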

  2. Exploring the Relationship between Beginning Science Teachers' Practices, Institutional Constraints, and Adult Development

    NASA Astrophysics Data System (ADS)

    Wilcox, Jesse Lee

This year-long study explored how ten teachers--five first year, five second year--acclimated to their new school environment after leaving a master's-level university science teacher preparation program known for being highly effective. Furthermore, this study sought to explore whether a relationship existed between teachers' understanding and implementation of research-based science teaching practices, the barriers to enacting these practices--known as institutional constraints--and constructive-developmental theory, which explores meaning-making systems known as orders of consciousness. As a naturalistic inquiry mixed methods study, data were collected using both qualitative (e.g., semi-structured interviews, field notes) and quantitative methods (e.g., observation protocols, subject/object protocol). These data sources were used to construct participant summaries and a cross-case analysis. The findings provide evidence that teachers' orders of consciousness might help to explain why research-based science teaching practices are maintained by some new teachers and not others. Additionally, this study found that teachers' orders of consciousness relate to their perceptions of institutional constraints as well as to how a teacher chooses to navigate those constraints. Finally, the extent to which teachers implement research-based science teaching practices is related to orders of consciousness. While many studies have focused on what meaning teachers make, this study highlights the importance of considering how teachers make meaning.

  3. Experimental constraints on metric and non-metric theories of gravity

    NASA Technical Reports Server (NTRS)

    Will, Clifford M.

    1989-01-01

    Experimental constraints on metric and non-metric theories of gravitation are reviewed. Tests of the Einstein Equivalence Principle indicate that only metric theories of gravity are likely to be viable. Solar system experiments constrain the parameters of the weak field, post-Newtonian limit to be close to the values predicted by general relativity. Future space experiments will provide further constraints on post-Newtonian gravity.

  4. Cosmology with photometric weak lensing surveys: Constraints with redshift tomography of convergence peaks and moments

    DOE PAGES

    Petri, Andrea; May, Morgan; Haiman, Zoltán

    2016-09-30

Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w. When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ωm, w, σ8) for an LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. We find that redshift tomography with the power spectrum reduces the area of the 1σ confidence interval in (Ωm, w) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ωm, w) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. In conclusion, we find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.

  5. Cosmic shear results from the deep lens survey. II. Full cosmological parameter constraints from tomography

    DOE PAGES

    Jee, M. James; Tyson, J. Anthony; Hilbert, Stefan; ...

    2016-06-15

Here, we present a tomographic cosmic shear study from the Deep Lens Survey (DLS), which, with a limiting magnitude $r_{\mathrm{lim}} \sim 27$ ($5\sigma$), is designed as a precursor to the Large Synoptic Survey Telescope (LSST) survey with an emphasis on depth. Using five tomographic redshift bins, we study their auto- and cross-correlations to constrain cosmological parameters. We use a luminosity-dependent nonlinear model to account for the astrophysical systematics originating from intrinsic alignments of galaxy shapes. We find that the cosmological leverage of the DLS is among the highest of existing $>10$ deg$^2$ cosmic shear surveys. Combining the DLS tomography with the 9 yr results of the Wilkinson Microwave Anisotropy Probe (WMAP9) gives $\Omega_m = 0.293^{+0.012}_{-0.014}$, $\sigma_8 = 0.833^{+0.011}_{-0.018}$, $H_0 = 68.6^{+1.4}_{-1.2}$ km s$^{-1}$ Mpc$^{-1}$, and $\Omega_b = 0.0475 \pm 0.0012$ for ΛCDM, reducing the uncertainties of the WMAP9-only constraints by ~50%. When we do not assume flatness for ΛCDM, we obtain the curvature constraint $\Omega_k = -0.010^{+0.013}_{-0.015}$ from the DLS+WMAP9 combination, which is not well constrained when WMAP9 is used alone. The dark energy equation-of-state parameter w is tightly constrained when baryonic acoustic oscillation (BAO) data are added, yielding $w = -1.02^{+0.10}_{-0.09}$ with the DLS+WMAP9+BAO joint probe. The addition of supernova constraints further tightens the parameter to $w = -1.03 \pm 0.03$. Our joint constraints are fully consistent with the final Planck results and also with the predictions of a ΛCDM universe.

  6. Fluid convection, constraint and causation

    PubMed Central

    Bishop, Robert C.

    2012-01-01

Complexity—nonlinear dynamics for my purposes in this essay—is rich with metaphysical and epistemological implications but has only recently received sustained philosophical analysis. I will explore some of the subtleties of causation and constraint in Rayleigh–Bénard convection as an example of a complex phenomenon, and extract some lessons for further philosophical reflection on top-down constraint and causation, particularly with respect to causal foundationalism. PMID:23386955

  7. Constraints for transonic black hole accretion

    NASA Technical Reports Server (NTRS)

    Abramowicz, Marek A.; Kato, Shoji

    1989-01-01

Regularity conditions and global topological constraints leave forbidden regions in the parameter space of transonic accretion of isothermal, rotating matter onto black holes. Unstable flows occupy regions touching the boundaries of the forbidden regions. The astrophysical consequences of these results are discussed.

  8. Interstellar He Flow Analysis over the Past 9 Years with Observations over the Full IBEX-Lo Energy Range

    NASA Astrophysics Data System (ADS)

    Moebius, E.; Bower, E.; Bzowski, M.; Fuselier, S. A.; Heirtzler, D.; Kubiak, M. A.; Kucharek, H.; Lee, M. A.; McComas, D. J.; Schwadron, N.; Swaczyna, P.; Sokol, J. M.; Wurz, P.

    2017-12-01

The Sun's motion relative to the surrounding interstellar medium leads to an interstellar neutral (ISN) wind through the heliosphere. This wind is moderately depleted by ionization and can be analyzed in situ with pickup ions and direct neutral atom imaging. Since 2009, observations of the ISN wind at 1 AU with the Interstellar Boundary Explorer (IBEX) have returned a very precise 4-dimensional parameter tube for the flow vector (speed VISN, longitude λISN, and latitude βISN) and temperature TISN of interstellar He in the local cloud, which organizes VISN, βISN, and TISN as a function of λISN and the local flow Mach number (VThISN/VISN). Typically, the uncertainties along this functional dependence are larger than across it. Here we present important refinements of the determination of this parameter tube by analyzing the spin-integrated ISN flux for its maximum as a function of ecliptic longitude for each year through 2017. In particular, we include a weak energy dependence of the sensor efficiency by comparing the response in all four energy steps that record the ISN He flow. In addition, a recent operational extension, which lets the spin-axis pointing of IBEX drift to the maximum offset west of the Sun, provides an additional constraint that helps break the degeneracy of the ISN parameters along the 4D tube. This constraint complements the drivers for the determination of all four ISN parameters in the full χ2-minimization, in which the observed count rate distribution is compared with detailed modeling of the ISN flow (e.g. Bzowski et al., 2015, ApJS, 220:28; Schwadron et al., 2015, ApJS, 220:25), and is complementary to the independent determination of λISN using the longitude dependence of the He+ pickup ion cut-off speed with STEREO PLASTIC and ACE SWICS (Möbius et al., 2015, ApJ 815:20).

  9. Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.

    PubMed

    Capozziello, S; Lambiase, G; Saridakis, E N

    2017-01-01

We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power-law, the exponential, and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.

  10. Exploring the climate of Proxima B with the Met Office Unified Model

    NASA Astrophysics Data System (ADS)

    Boutle, Ian A.; Mayne, Nathan J.; Drummond, Benjamin; Manners, James; Goyal, Jayesh; Hugo Lambert, F.; Acreman, David M.; Earnshaw, Paul D.

    2017-05-01

    We present results of simulations of the climate of the newly discovered planet Proxima Centauri B, performed using the Met Office Unified Model (UM). We examine the responses of both an "Earth-like" atmosphere and simplified nitrogen and trace carbon dioxide atmosphere to the radiation likely received by Proxima Centauri B. Additionally, we explore the effects of orbital eccentricity on the planetary conditions using a range of eccentricities guided by the observational constraints. Overall, our results are in agreement with previous studies in suggesting Proxima Centauri B may well have surface temperatures conducive to the presence of liquid water. Moreover, we have expanded the parameter regime over which the planet may support liquid water to higher values of eccentricity (≳0.1) and lower incident fluxes (881.7 W m-2) than previous work. This increased parameter space arises because of the low sensitivity of the planet to changes in stellar flux, a consequence of the stellar spectrum and orbital configuration. However, we also find interesting differences from previous simulations, such as cooler mean surface temperatures for the tidally-locked case. Finally, we have produced high-resolution planetary emission and reflectance spectra, and highlight signatures of gases vital to the evolution of complex life on Earth (oxygen, ozone and carbon dioxide).

  11. Stability of the Kepler-11 system and its origin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahajan, Nikhil; Wu, Yanqin

    2014-11-01

A significant fraction of Kepler systems are closely packed, largely coplanar, and circular. We study the stability of a six-planet system, Kepler-11, to gain insights on the dynamics and formation history of such systems. Using a technique called 'frequency maps' as fast indicators of long-term stability, we explore the stability of the Kepler-11 system by analyzing the neighborhood space around its orbital parameters. Frequency maps provide a visual representation of chaos and stability, and their dependence on orbital parameters. We find that the current system is stable, but lies within a few percent of several dynamically dangerous two-body mean-motion resonances. Planet eccentricities are restricted below a small value, ∼0.04, for long-term stability, but planet masses can be more than twice their reported values (thus allowing for the possibility of mass loss by past photoevaporation). Based on our frequency maps, we speculate on the origin of instability in closely packed systems. We then proceed to investigate how the system could have been assembled. The stability constraints on Kepler-11 (mainly eccentricity constraints) suggest that if the system were assembled in situ, a dissipation mechanism must have been at work to neutralize the eccentricity excitation. On the other hand, if migration was responsible for assembling the planets, there has to be little differential migration among the planets to avoid them either getting trapped into mean motion resonances, or crashing into each other.

  12. Constraints on the Sunyaev-Zel'dovich signal from the warm-hot intergalactic medium from WMAP and SPT data

    NASA Astrophysics Data System (ADS)

    Génova-Santos, Ricardo; Suárez-Velásquez, I.; Atrio-Barandela, F.; Mücket, J. P.

    2013-07-01

    The fraction of ionized gas in the warm-hot intergalactic medium induces temperature anisotropies on the cosmic microwave background similar to those of clusters of galaxies. The Sunyaev-Zel'dovich (SZ) anisotropies due to these low-density, weakly non-linear, baryon filaments cannot be distinguished from that of clusters using frequency information, but they can be separated since their angular scales are very different. To determine the relative contribution of the WHIM SZ signal to the radiation power spectrum of temperature anisotropies, we explore the parameter space of the concordance Λ cold dark matter model using Monte Carlo Markov chains and the Wilkinson Microwave Anisotropy Probe 7 yr and South Pole Telescope data. We find marginal evidence of a contribution by diffuse gas, with amplitudes of AWHIM = 10-20 μK2, but the results are also compatible with a null contribution from the WHIM, allowing us to set an upper limit of AWHIM < 43 μK2 (95.4 per cent CL). The signal produced by galaxy clusters remains at ACL = 4.5 μK2, a value similar to what is obtained when no WHIM is included. From the measured WHIM amplitude, we constrain the temperature-density phase diagram of the diffuse gas, and find it to be compatible with numerical simulations. The corresponding baryon fraction in the WHIM varies from 0.43 to 0.47, depending on model parameters. The forthcoming Planck data could set tighter constraints on the temperature-density relation.

  13. Designing a space-based galaxy redshift survey to probe dark energy

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Percival, Will; Cimatti, Andrea; Mukherjee, Pia; Guzzo, Luigi; Baugh, Carlton M.; Carbone, Carmelita; Franzetti, Paolo; Garilli, Bianca; Geach, James E.; Lacey, Cedric G.; Majerotto, Elisabetta; Orsi, Alvaro; Rosati, Piero; Samushia, Lado; Zamorani, Giovanni

    2010-12-01

A space-based galaxy redshift survey would have enormous power in constraining dark energy and testing general relativity, provided that its parameters are suitably optimized. We study viable space-based galaxy redshift surveys, exploring the dependence of the Dark Energy Task Force (DETF) figure-of-merit (FoM) on redshift accuracy, redshift range, survey area, target selection and forecast method. Fitting formulae are provided for convenience. We also consider the dependence on the information used: the full galaxy power spectrum P(k), P(k) marginalized over its shape, or just the Baryon Acoustic Oscillations (BAO). We find that the inclusion of growth rate information (extracted using redshift space distortion and galaxy clustering amplitude measurements) leads to a factor of ~3 improvement in the FoM, assuming general relativity is not modified. This inclusion partially compensates for the loss of information when only the BAO are used to give geometrical constraints, rather than using the full P(k) as a standard ruler. We find that a space-based galaxy redshift survey covering ~20,000 deg2 with σz/(1 + z) ≤ 0.001 exploits a redshift range that is only easily accessible from space, extends to sufficiently low redshifts to allow a vast 3D map of the universe using a single tracer population, and overlaps with ground-based surveys to enable robust modelling of systematic effects. We argue that these parameters are close to their optimal values given current instrumental and practical constraints.
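The DETF figure of merit optimized above is, up to a conventional constant, the inverse area of the marginalized error ellipse for the dark energy parameters (w0, wa). A minimal sketch, assuming an illustrative Fisher matrix rather than the survey forecasts of the paper:

```python
import numpy as np

def detf_fom(fisher, i=0, j=1):
    """DETF figure of merit: inverse square root of the determinant of the
    marginalized 2x2 covariance of (w0, wa), i.e. ~1/(ellipse area) up to
    a conventional constant."""
    cov = np.linalg.inv(fisher)           # inverting marginalizes over the rest
    block = cov[np.ix_([i, j], [i, j])]   # 2x2 covariance of (w0, wa)
    return 1.0 / np.sqrt(np.linalg.det(block))

# Illustrative 4x4 Fisher matrix over (w0, wa, Omega_m, h); made-up numbers,
# not a forecast for any survey configuration in the abstract.
F = np.diag([4.0, 25.0, 100.0, 100.0])
fom = detf_fom(F)
```

Because the FoM scales with the inverse ellipse area, a factor-of-3 FoM improvement corresponds to a three-fold shrinkage of the allowed (w0, wa) region.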

  14. Computing Intrinsic Images.

    DTIC Science & Technology

    1986-08-01

...a large subset of real images), and so most of the algorithms fail when applied to real images. (2) Usually the constraints from the geometry and the physics of the problem are not enough to guarantee uniqueness of the computed parameters. In this case, strong...

  15. An effective parameter optimization with radiation balance constraints in the CAM5

    NASA Astrophysics Data System (ADS)

    Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.

    2017-12-01

Uncertain parameters in physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimization, so simulations run with the optimized parameters may not satisfy conditions that the model must maintain. In this study, the radiation balance constraint is taken as an example and is incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this optimization problem with constraints. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take a synthesized metric using global means of radiation, precipitation, relative humidity, and temperature as the goal of optimization, and simultaneously treat the conditions that FLUT and FSNTOA should satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are set to be approximately equal to 240 Wm-2 in CAM5. Experiment results show that the synthesized metric is 13.6% better than in the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions. The FLUT condition is well satisfied, obviously better than the average annual FLUT obtained with the default parameters. The FSNTOA has a slight deviation from the observed value, but the relative error is less than 7.7‰.
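The Lagrangian multiplier approach can be sketched on a toy quadratic problem: minimize a "metric" subject to a linear balance constraint by solving the KKT system directly. The matrices here are hypothetical stand-ins, not CAM5 quantities:

```python
import numpy as np

# Toy Lagrange-multiplier illustration: minimize f(x) = 0.5 x^T A x - b^T x
# subject to a linear "balance" constraint c^T x = d (standing in for a
# radiation-balance condition such as FLUT ~ 240 W m^-2).
def constrained_minimum(A, b, c, d):
    """Solve the KKT system  [[A, c], [c^T, 0]] [x; lam] = [b; d]."""
    n = len(b)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = A
    K[:n, n] = c
    K[n, :n] = c
    rhs = np.append(b, d)
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n]   # minimizer x and multiplier lam

A = np.diag([2.0, 4.0])
b = np.array([2.0, 4.0])
c = np.array([1.0, 1.0])
x, lam = constrained_minimum(A, b, c, d=1.0)   # x satisfies x[0] + x[1] = 1
```

The multiplier lam quantifies how much the optimal metric would change per unit relaxation of the balance constraint, which is exactly the trade-off a constrained tuning procedure manages.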

  16. Cosmological parameter constraints with the Deep Lens Survey using galaxy-shear correlations and galaxy clustering properties

    NASA Astrophysics Data System (ADS)

    Yoon, Mijin; Jee, Myungkook James; Tyson, Tony

    2018-01-01

    The Deep Lens Survey (DLS), a precursor to the Large Synoptic Survey Telescope (LSST), is a 20 sq. deg survey carried out with NOAO’s Blanco and Mayall telescopes. The strength of the survey lies in its depth reaching down to ~27th mag in BVRz bands. This enables a broad redshift baseline study and allows us to investigate cosmological evolution of the large-scale structure. In this poster, we present the first cosmological analysis from the DLS using galaxy-shear correlations and galaxy clustering signals. Our DLS shear calibration accuracy has been validated through the most recent public weak-lensing data challenge. Photometric redshift systematic errors are tested by performing lens-source flip tests. Instead of real-space correlations, we reconstruct band-limited power spectra for cosmological parameter constraints. Our analysis puts a tight constraint on the matter density and the power spectrum normalization parameters. Our results are highly consistent with our previous cosmic shear analysis and also with the Planck CMB results.

  17. Constraints on short-term mantle rheology from the J2 observation and the dispersion of the 18.6 y tidal Love number

    NASA Technical Reports Server (NTRS)

    Sabadini, R.; Yuen, D. A.; Widmer, R.

    1985-01-01

    Information derived from data recently acquired from the LAGEOS satellite is used to place some constraints on the rheological parameters of short-term mantle rheology. The validity of Lambeck and Nakiboglu's (1983) rheological model is assessed by formally developing an expression for the transformed shear modulus using a truncated retardation spectrum. This analytical formula is used to show that the parameters of the above mentioned model are not consistent at all with the amount of anelastic dispersion expected in the Chandler wobble and with the attenuation of seismic normal modes. The feasibility of a standard linear solid (SLS) rheology operating over intermediate timescales between 1 and 100 yr is investigated to determine whether the tidal dispersion at 18.6 yr can be explained by this model. An attempt is made to place some constraints on the parameters of the SLS model and the nature of short-term mantle rheology for timescales of less than 100 yr is discussed.

  18. Constraining Secluded Dark Matter models with the public data from the 79-string IceCube search for dark matter in the Sun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ardid, M.; Felis, I.; Martínez-Mora, J.A.

Public data from the 79-string IceCube search for dark matter in the Sun are used to test Secluded Dark Matter models. No significant excess over background is observed, and constraints on the parameters of the models are derived. Moreover, the search is also used to constrain the dark photon model in the region of parameter space with dark photon masses between 0.22 and ∼1 GeV and a kinetic mixing parameter ε ∼ 10^-9, which remains unconstrained. These are the first constraints on dark photons from neutrino telescopes. It is expected that neutrino telescopes will be efficient tools to test dark photons by means of different searches in the Sun, Earth and Galactic Center, which could complement constraints from direct detection, accelerators, astrophysics and indirect detection with other messengers, such as gamma rays or antiparticles.

  19. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

Economic load dispatch (ELD) is an important problem in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested on three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational-intelligence-based methods. PMID:26491710
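A minimal sketch of the DHSPM idea on a toy objective (the sphere function); the dynamic HMCR/PAR schedules and the polynomial-mutation form below are generic textbook choices, not necessarily the paper's exact formulas.

```python
import random

def dhspm(obj, bounds, hms=10, iters=2000, eta=20.0, seed=1):
    rng = random.Random(seed)
    # harmony memory: hms candidate solutions within the bounds
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    cost = [obj(h) for h in mem]
    for t in range(iters):
        frac = t / iters
        hmcr = 0.7 + 0.25 * frac   # rely on memory more as search matures
        par = 0.5 - 0.4 * frac     # pitch adjustment shrinks over time
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                x = mem[rng.randrange(hms)][j]
                if rng.random() < par:
                    # polynomial mutation of the chosen component
                    u = rng.random()
                    if u < 0.5:
                        delta = (2 * u) ** (1 / (eta + 1)) - 1
                    else:
                        delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))
                    x += delta * (hi - lo)
            else:
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=cost.__getitem__)
        new_cost = obj(new)
        if new_cost < cost[worst]:
            mem[worst], cost[worst] = new, new_cost
    best = min(range(hms), key=cost.__getitem__)
    return mem[best], cost[best]

sphere = lambda x: sum(v * v for v in x)
best, val = dhspm(sphere, [(-5.0, 5.0)] * 3)
```

Replacing the worst harmony only when the improvised one is better makes the best-so-far cost monotonically non-increasing, while the shrinking PAR shifts the balance from exploration to local refinement.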

  1. Effect of tartarate and citrate based food additives on the micellar properties of sodium dodecylsulfate for prospective use as food emulsifier.

    PubMed

    Banipal, Tarlok S; Kaur, Harjinder; Kaur, Amanpreet; Banipal, Parampaul K

    2016-01-01

Citrate and tartarate based food preservatives can be used to enhance the emulsifying properties of sodium dodecylsulfate (SDS) based micellar systems, making them appropriate for food applications. Exploration of the interactions between the two species is the key prerequisite for executing such ideas. In this work, various micellar and thermodynamic parameters of SDS, such as the critical micellar concentration (CMC) and the standard Gibbs free energy of micellization (ΔG0mic), have been calculated in different concentrations of disodium tartarate (DST) and trisodium citrate (TSC) in the temperature range (288.15-318.15) K from conductivity and surface tension measurements. The parameters obtained from these studies reveal the competitive nature of both additives with SDS for available positions at the air/water interface. TSC is found to be the more effective additive for making the SDS micellar system better suited to its potential applications as a food emulsifier. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Curvature constraints from large scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dio, Enea Di; Montanari, Francesco; Raccanelli, Alvise

We modified the CLASS code in order to include relativistic galaxy number counts in spatially curved geometries; we present the formalism and study the effect of relativistic corrections on spatial curvature. The new version of the code is now publicly available. Using a Fisher matrix analysis, we investigate how measurements of the spatial curvature parameter ΩK with future galaxy surveys are affected by relativistic effects, which influence observations of the large scale galaxy distribution. These effects include contributions from cosmic magnification, Doppler terms and terms involving the gravitational potential. As an application, we consider angle- and redshift-dependent power spectra, which are especially well suited for model-independent cosmological constraints. We compute our results for a representative deep, wide and spectroscopic survey, and our results show the impact of relativistic corrections on spatial curvature parameter estimation. We show that constraints on the curvature parameter may be strongly biased if, in particular, cosmic magnification is not included in the analysis. Other relativistic effects turn out to be subdominant in the studied configuration. We analyze how the shift in the estimated best-fit value for the curvature and other cosmological parameters depends on the magnification bias parameter, and find that significant biases are to be expected if this term is not properly considered in the analysis.
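The best-fit shift described in the last sentence has a standard Fisher-formalism form: a neglected systematic biases the parameters by delta_theta = F^-1 d, where d projects the systematic residual onto the model's parameter derivatives. A sketch with illustrative numbers, not the survey configuration of the paper:

```python
import numpy as np

# Fisher-bias sketch: shift in best-fit parameters when a systematic
# residual (e.g. neglected magnification) is left out of the model.
def fisher_matrix(derivs, inv_cov):
    # derivs: (n_params, n_data) matrix of d(model)/d(theta)
    return derivs @ inv_cov @ derivs.T

def parameter_bias(derivs, inv_cov, residual):
    """delta_theta = F^{-1} d,  d_a = (d model / d theta_a) . C^{-1} . residual"""
    F = fisher_matrix(derivs, inv_cov)
    d = derivs @ inv_cov @ residual
    return np.linalg.solve(F, d)

derivs = np.array([[1.0, 1.0, 0.0],
                   [0.0, 1.0, 1.0]])   # 2 parameters, 3 data points
inv_cov = np.eye(3)
residual = np.array([0.05, 0.0, -0.05])
bias = parameter_bias(derivs, inv_cov, residual)
```

Scaling the residual by a magnification-bias-like amplitude scales the parameter shift linearly, which is the dependence the abstract analyzes.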

  3. Geometrically constrained kinematic global navigation satellite systems positioning: Implementation and performance

    NASA Astrophysics Data System (ADS)

    Asgari, Jamal; Mohammadloo, Tannaz H.; Amiri-Simkooei, Ali Reza

    2015-09-01

GNSS kinematic techniques are capable of providing precise coordinates in extremely short observation time-spans. These methods usually determine the coordinates of an unknown station with respect to a reference one. To enhance the precision, accuracy, reliability and integrity of the estimated unknown parameters, GNSS kinematic equations can be augmented by possible constraints. Such constraints could be derived from the geometric relation of the receiver positions in motion. This contribution presents the formulation of constrained kinematic global navigation satellite systems positioning. Constraints effectively restrict the definition domain of the unknown parameters from three-dimensional space to a subspace defined by the equation of motion. To test the concept of the constrained kinematic positioning method, the equation of a circle is employed as a constraint. A device capable of moving on a circle was made and the observations from 11 positions on the circle were analyzed. Relative positioning was conducted by considering the center of the circle as the reference station. The equation of the receiver's motion was rewritten in the ECEF coordinate system. Special attention is drawn to how a constraint is applied to kinematic positioning. Implementing the constraint in the positioning process provides much more precise results compared to the unconstrained case. This has been verified based on the results obtained from the covariance matrix of the estimated parameters and on empirical results from kinematic positioning samples. The theoretical standard deviations of the horizontal components are reduced by a factor ranging from 1.24 to 2.64. The improvement in the empirical standard deviation of the horizontal components ranges from 1.08 to 2.2.
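The way a motion constraint shrinks the solution domain can be sketched with the circle example: parameterizing the position as (r cos t, r sin t) reduces a 2-D estimation problem to a single angle. Toy planar fixes stand in for the GNSS observations here; this is a geometric illustration, not the paper's ECEF formulation.

```python
import math

def mean_position(points):
    """Unconstrained least-squares estimate: the sample mean of the fixes."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def circle_constrained(points, r):
    """Least-squares position on the circle x^2 + y^2 = r^2: for a fixed
    radius, the optimal angle is the direction of the mean of the fixes
    (it maximizes the projection of the summed fixes onto the circle)."""
    mx, my = mean_position(points)
    t = math.atan2(my, mx)
    return (r * math.cos(t), r * math.sin(t))

fixes = [(2.1, 0.1), (1.9, -0.1), (2.0, 0.0)]
x, y = circle_constrained(fixes, r=2.0)   # lies exactly on the circle
```

Because the constrained estimate has one free parameter instead of two, its formal covariance is smaller, mirroring the standard-deviation reductions reported in the abstract.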

  4. Planck 2015 results. XXIV. Cosmology from Sunyaev-Zeldovich cluster counts

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Battye, R.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Church, S.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Comis, B.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. 
R.; Melchiorri, A.; Melin, J.-B.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Popa, L.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Roman, M.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Weller, J.; White, S. D. M.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. Improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.
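    The quoted "degrees of tension" can be made concrete: for two independent Gaussian determinations of the same parameter, the n-sigma tension is the difference of the means divided by the quadrature sum of the uncertainties. A minimal sketch with hypothetical numbers (not the paper's actual posteriors):

```python
import numpy as np

# Illustrative tension metric between two Gaussian measurements of a
# parameter such as sigma_8; the input values below are hypothetical.
def tension(mu1, s1, mu2, s2):
    """n-sigma tension: |mu1 - mu2| / sqrt(s1^2 + s2^2)."""
    return abs(mu1 - mu2) / np.hypot(s1, s2)

# e.g. a cluster-based value vs. a primary-CMB value (made-up numbers):
print(f"{tension(0.75, 0.03, 0.83, 0.013):.1f} sigma")
```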

  5. Planck 2015 results: XXIV. Cosmology from Sunyaev-Zeldovich cluster counts

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.; ...

    2016-09-20

    In this work, we present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. In conclusion, improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.

  6. Planck 2015 results: XXIV. Cosmology from Sunyaev-Zeldovich cluster counts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ade, P. A. R.; Aghanim, N.; Arnaud, M.

    In this work, we present cluster counts and corresponding cosmological constraints from the Planck full mission data set. Our catalogue consists of 439 clusters detected via their Sunyaev-Zeldovich (SZ) signal down to a signal-to-noise ratio of 6, and is more than a factor of 2 larger than the 2013 Planck cluster cosmology sample. The counts are consistent with those from 2013 and yield compatible constraints under the same modelling assumptions. Taking advantage of the larger catalogue, we extend our analysis to the two-dimensional distribution in redshift and signal-to-noise. We use mass estimates from two recent studies of gravitational lensing of background galaxies by Planck clusters to provide priors on the hydrostatic bias parameter, (1-b). In addition, we use lensing of cosmic microwave background (CMB) temperature fluctuations by Planck clusters as an independent constraint on this parameter. These various calibrations imply constraints on the present-day amplitude of matter fluctuations in varying degrees of tension with those from the Planck analysis of primary fluctuations in the CMB; for the lowest estimated values of (1-b) the tension is mild, only a little over one standard deviation, while it remains substantial (3.7σ) for the largest estimated value. We also examine constraints on extensions to the base flat ΛCDM model by combining the cluster and CMB constraints. The combination appears to favour non-minimal neutrino masses, but this possibility does little to relieve the overall tension because it simultaneously lowers the implied value of the Hubble parameter, thereby exacerbating the discrepancy with most current astrophysical estimates. In conclusion, improving the precision of cluster mass calibrations from the current 10%-level to 1% would significantly strengthen these combined analyses and provide a stringent test of the base ΛCDM model.

  7. Evaluation of the pollution load discharged at an upstream industry--Egypt--and methods for its reduction.

    PubMed

    El-Dars, F M S E; Mohammed, H A; Farag, A B

    2011-01-01

    Oil exploration in Egypt is a major contributor to the national Gross Domestic Product (GDP). With 50-65% of the oil resources located in the Gulf of Suez (GoS) region, the impact of such activity upon the region's water environment and its quality cannot be overlooked because of the volume of effluent generated. The objective of this study (September 2000-September 2001) was to assess the impact of a 650,000 barrels/day (bl/d) (100,000 m3/d) effluent arising from a major oil exploration site located south of GoS upon the local water environment. Another objective was to identify the pollutant contents amenable for reduction relative to the new Egyptian regulations. This was achieved by the characterization of the main contributing streams and the identification of the final effluent parameter constraints relative to the type of injection waters used. Subsequent investigations for the reduction of these contents were conducted on site and the results obtained are reviewed herewith.

  8. SU-E-T-13: A Comparative Dosimetric Study On Radio-Dynamic Therapy for Pelvic Cancer Treatment: Strategies for Bone Marrow Dose and Volume Reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C; Renmin Hospital of Wuhan University, Wuhan, Hubei Province; Wang, B

    Purpose: Radio-dynamic therapy (RDT) is a potentially effective modality for local and systemic cancer treatment. Using RDT, the administration of a radio-sensitizer enhances the biological effect of high-energy photons. Although the sensitizer uptake ratio of tumor to normal tissue is normally high, one cannot simply neglect its effect on critical structures. In this study, we aim to explore planning strategies to improve bone marrow sparing without compromising the plan quality for RDT treatment of pelvic cancers. Methods: Ten cervical and ten prostate cancer patients who previously received radiotherapy at our institution were selected for this study. For each patient, nine plans were created using the Varian Eclipse treatment-planning system (TPS) with 3D-CRT, IMRT, and VMAT delivery techniques containing various gantry angle combinations and optimization parameters (dose constraints to the bone marrow). To evaluate the plans for bone marrow sparing, the dose-volume parameters V5, V10, V15, V20, V30, and V40 for bone marrow were examined. Effective dose-enhancement factors for the sensitizer were used to weigh the dose-volume histograms for various tissues from individual fractions. Results: The planning strategies had different impacts on bone marrow sparing for the cervical and prostate cases. For the cervical cases, provided the bone marrow constraints were properly set during optimization, bone marrow sparing was found to be comparable between different IMRT and VMAT plans regardless of the gantry angle selection. For the prostate cases, however, careful selection of gantry angles could dramatically improve bone marrow sparing, although the dose distribution in bone marrow was clinically acceptable for all prostate plans that we created.
    Conclusion: For intensity-modulated RDT planning for cervical cancer, planners should set bone marrow constraints properly to avoid any adverse damage, while for prostate cancer one can carefully select gantry angles to improve bone marrow sparing when necessary.

  9. Exploring fermionic dark matter via Higgs boson precision measurements at the Circular Electron Positron Collider

    NASA Astrophysics Data System (ADS)

    Xiang, Qian-Fei; Bi, Xiao-Jun; Yin, Peng-Fei; Yu, Zhao-Huan

    2018-03-01

    We study the impact of fermionic dark matter (DM) on projected Higgs precision measurements at the Circular Electron Positron Collider (CEPC), including the one-loop effects on the e+e-→Z h cross section and the Higgs boson diphoton decay, as well as the tree-level effects on the Higgs boson invisible decay. As illuminating examples, we discuss two UV-complete DM models, whose dark sector contains electroweak multiplets that interact with the Higgs boson via Yukawa couplings. The CEPC sensitivity to these models and current constraints from DM detection and collider experiments are investigated. We find that there exist some parameter regions where the Higgs measurements at the CEPC will be complementary to current DM searches.

  10. Hybrid Genetic Algorithms and Line Search Method for Industrial Production Planning with Non-Linear Fitness Function

    NASA Astrophysics Data System (ADS)

    Vasant, Pandian; Barsoum, Nader

    2008-10-01

    Many engineering, science, information technology, and management optimization problems can be considered as non-linear programming real-world problems where all or some of the parameters and variables involved are uncertain in nature. These can only be quantified using intelligent computational techniques such as evolutionary computation and fuzzy logic. The main objective of this research paper is to solve a non-linear fuzzy optimization problem in which the technological coefficients in the constraints are fuzzy numbers represented by logistic membership functions, using a hybrid evolutionary optimization approach. To explore the applicability of the present study, a numerical example is considered to determine the production planning for the decision variables and profit of the company.
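    A logistic membership function of the kind mentioned can be sketched as follows (an assumed generic form with an illustrative steepness parameter, not the paper's exact function): it maps a fuzzy technological coefficient between its optimistic and pessimistic bounds to a degree of membership in (0, 1).

```python
import numpy as np

# Illustrative logistic membership function for a fuzzy coefficient c
# lying between an optimistic bound c_lo and a pessimistic bound c_hi.
def logistic_membership(c, c_lo, c_hi, gamma=13.8):
    """Degree of membership of coefficient value c.

    gamma (assumed value) controls the vagueness/steepness of the curve.
    """
    t = (c - c_lo) / (c_hi - c_lo)          # normalise to [0, 1]
    return 1.0 / (1.0 + np.exp(gamma * (t - 0.5)))

# Membership is near 1 at the optimistic bound, 0.5 at the midpoint,
# and near 0 at the pessimistic bound.
print(logistic_membership(1.0, 1.0, 2.0))
print(logistic_membership(1.5, 1.0, 2.0))
print(logistic_membership(2.0, 1.0, 2.0))
```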

  11. On the stellar rotation-activity connection

    NASA Technical Reports Server (NTRS)

    Rosner, R.

    1983-01-01

    The relationship between rotation rates and surface activity in late-type dwarf stars is explored in a survey of recent theoretical and observational studies. Current theoretical models of stellar-magnetic-field production and coronal activity are examined, including linear kinematic dynamo theory, nonlinear dynamos using approximations, and full numerical simulations of the MHD equations; and some typical results are presented graphically. The limitations of the modeling procedures and the constraints imposed by the physics are indicated. The statistical techniques used in establishing correlations between various observational parameters are analyzed critically, and the methods developed for quasar luminosity functions by Avni et al. (1980) are used to evaluate the effects of upper detection bounds, incomplete samples, and missing data for the case of rotation and X-ray flux data.

  12. High Density Memory Based on Quantum Device Technology

    NASA Technical Reports Server (NTRS)

    vanderWagt, Paul; Frazier, Gary; Tang, Hao

    1995-01-01

    We explore the feasibility of ultra-high-density memory based on quantum devices. Starting from overall constraints on chip area, power consumption, access speed, and noise margin, we deduce boundaries on single-cell parameters such as required operating voltage and standby current. Next, the possible role of quantum devices is examined. Since the most mature quantum device, the resonant tunneling diode (RTD), can easily be integrated vertically, it naturally leads to the issue of 3D integrated memory. We propose a novel method of addressing vertically integrated bistable two-terminal devices, such as resonant tunneling diodes (RTDs) and Esaki diodes, that avoids individual physical contacts. The new concept has been demonstrated experimentally in memory cells of field-effect transistors (FETs) and stacked RTDs.

  13. The N2HDM under theoretical and experimental scrutiny

    NASA Astrophysics Data System (ADS)

    Mühlleitner, Margarete; Sampaio, Marco O. P.; Santos, Rui; Wittbrodt, Jonas

    2017-03-01

    The N2HDM is based on the CP-conserving 2HDM extended by a real scalar singlet field. Its enlarged parameter space and its fewer symmetry conditions as compared to supersymmetric models allow for an interesting phenomenology compatible with current experimental constraints, while adding to the 2HDM sector the possibility of Higgs-to-Higgs decays with three different Higgs bosons. In this paper the N2HDM is subjected to detailed scrutiny. Regarding the theoretical constraints, we implement tests of tree-level perturbativity and vacuum stability. Moreover, we present, for the first time, a thorough analysis of the global minimum of the N2HDM. The model and the theoretical constraints have been implemented in ScannerS, and we provide N2HDECAY, a code based on HDECAY, for the computation of the N2HDM branching ratios and total widths including state-of-the-art higher-order QCD corrections and off-shell decays. We then perform an extensive parameter scan in the N2HDM parameter space, with all theoretical and experimental constraints applied, and analyse its allowed regions. We find that large singlet admixtures are still compatible with the Higgs data and investigate which observables will allow us to restrict the singlet nature most effectively in the next runs of the LHC. Similarly to the 2HDM, the N2HDM exhibits a wrong-sign parameter regime, which will be constrained by future Higgs precision measurements.

  14. Lower Tropospheric Ozone Retrievals from Infrared Satellite Observations Using a Self-Adapting Regularization Method

    NASA Astrophysics Data System (ADS)

    Eremenko, M.; Sgheri, L.; Ridolfi, M.; Dufour, G.; Cuesta, J.

    2017-12-01

    Lower tropospheric ozone (O3) retrieval from nadir sounders is challenging due to the limited vertical sensitivity of the measurements towards the lowest layers. Although improvements have been made during the last decade, it is still important to explore possibilities to improve the retrieval algorithms themselves. O3 retrieval from nadir satellite observations is an ill-conditioned problem, which requires regularization using constraint matrices. Up to now, most retrieval algorithms rely on a fixed constraint, determined beforehand on the basis of sensitivity tests. This does not allow one to take full advantage of the capabilities of the satellite measurements, which vary with the thermal conditions of the observed scenes. To overcome this limitation, we developed a self-adapting and altitude-dependent regularization scheme. A crucial step is the choice of the strength of the constraint. This choice is made during an iterative process and depends on the measurement errors and on the sensitivity of the measurements to the target parameters at the different altitudes. The challenge is to limit the use of a priori constraints to the minimal amount needed to perform the inversion. The algorithm has been tested on synthetic observations matching the future IASI-NG satellite instrument. IASI-NG measurements are simulated on the basis of O3 concentrations taken from an atmospheric model and retrieved using two retrieval schemes (the standard and self-adapting ones). Comparison of the results shows that the sensitivity of the observations to the O3 amount in the lowest layers (given by the degrees of freedom for the solution) is increased, which allows a better description of the ozone distribution, especially in the case of large ozone plumes. A tentative application to real observations from IASI, currently onboard the Metop satellite, will also be presented. Biases are reduced and the spatial correlation is improved.
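    The link between constraint strength and degrees of freedom for the solution can be sketched with a generic Tikhonov-regularized linear retrieval (an illustrative toy problem with an assumed smoothing kernel and identity constraint matrix, not the authors' scheme): the trace of the averaging kernel A = GK gives the degrees of freedom, which drop as the regularization strength grows.

```python
import numpy as np

# Toy ill-conditioned retrieval: x_hat = (K^T K + lam L^T L)^-1 K^T y,
# with degrees of freedom for signal = trace(A), A = G K.
rng = np.random.default_rng(1)

n = 20                                    # number of retrieval layers (assumed)
idx = np.arange(n)
K = np.exp(-np.abs(np.subtract.outer(idx, idx)) / 3.0)  # smoothing kernel
x_true = np.sin(np.linspace(0.0, np.pi, n))
y = K @ x_true + rng.normal(0.0, 0.01, n)

L = np.eye(n)                             # simple identity constraint matrix

def retrieve(lam):
    G = np.linalg.solve(K.T @ K + lam * (L.T @ L), K.T)  # gain matrix
    A = G @ K                             # averaging kernel
    return G @ y, np.trace(A)

# A stronger constraint (larger lam) lowers the degrees of freedom:
x_weak, dof_weak = retrieve(1.0)
x_strong, dof_strong = retrieve(100.0)
print(f"DOFs, weak constraint:   {dof_weak:.2f}")
print(f"DOFs, strong constraint: {dof_strong:.2f}")
```

    A self-adapting scheme in this spirit would iterate on lam per altitude until the retrieval uses only the minimal a priori constraint needed for a stable inversion.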

  15. A parallel implementation of the Wuchty algorithm with additional experimental filters to more thoroughly explore RNA conformational space.

    PubMed

    Stone, Jonathan W; Bleckley, Samuel; Lavelle, Sean; Schroeder, Susan J

    2015-01-01

    We present new modifications to the Wuchty algorithm in order to better define and explore possible conformations for an RNA sequence. The new features, including parallelization, energy-independent lonely pair constraints, context-dependent chemical probing constraints, helix filters, and optional multibranch loops, provide useful tools for exploring the landscape of RNA folding. Chemical probing alone may not necessarily define a single unique structure. The helix filters and optional multibranch loops are global constraints on RNA structure that are an especially useful tool for generating models of encapsidated viral RNA for which cryoelectron microscopy or crystallography data may be available. The computations generate a combinatorially complete set of structures near a free energy minimum and thus provide data on the density and diversity of structures near the bottom of a folding funnel for an RNA sequence. The conformational landscapes for some RNA sequences may resemble a low, wide basin rather than a steep funnel that converges to a single structure.

  16. Construction of optimal 3-node plate bending triangles by templates

    NASA Astrophysics Data System (ADS)

    Felippa, C. A.; Militello, C.

    A finite element template is a parametrized algebraic form that reduces to specific finite elements by setting numerical values to the free parameters. The present study concerns Kirchhoff Plate-Bending Triangles (KPT) with 3 nodes and 9 degrees of freedom. A 37-parameter template is constructed using the Assumed Natural Deviatoric Strain (ANDES) formulation. Specializations of this template include well-known elements such as DKT and HCT. The question addressed here is: can these parameters be selected to produce high-performance elements? The study is carried out by staged application of constraints on the free parameters. The first stage produces element families satisfying invariance and aspect-ratio insensitivity conditions. Application of energy balance constraints produces specific elements. The performance of such elements in benchmark tests is presently under study.

  17. A Case Study of Obsolescence Management Constraints During Development of Sustainment-Dominated Systems

    NASA Astrophysics Data System (ADS)

    Welch, Jonathan

    This case study focused on obsolescence management constraints that occur during development of sustainment-dominated systems. Obsolescence management constraints were explored in systems expected to last 20 years or more and that tend to use commercial off-the-shelf products. The field of obsolescence has received little study, but obsolescence has a large cost for military systems. Because developing complex systems takes an average of 3 to 8 years, and commercial off-the-shelf components are typically obsolete within 3 to 5 years, military systems are often deployed with obsolescence issues that are transferred to the sustainment community to determine solutions. The main problem addressed in the study was to identify the constraints that have caused 70% of military systems under development to be obsolete when they are delivered. The purpose of the study was to use a qualitative case study to identify constraints that interfered with obsolescence management occurring during the development stages of a program. The participants of this case study were managers, subordinates, and end-users who were logistics and obsolescence experts. Researchers largely agree that proactive obsolescence management is a lower cost solution for sustainment-dominated systems. Program managers must understand the constraints and understand the impact of not implementing proactive solutions early in the development program lifecycle. The conclusion of the study found several constraints that prevented the development program from early adoption of obsolescence management theories, specifically pro-active theories. There were three major themes identified: (a) management commitment, (b) lack of details in the statement of work, and (c) vendor management. Each major theme includes several subthemes. 
    The recommendation is that future researchers explore two areas: (a) comparing the cost of managing obsolescence early in the development process versus the cost of managing it later, and (b) exploring the costs and value of starting a centralized obsolescence group at each major defense contractor location.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakstein, Jeremy; Wilcox, Harry; Bacon, David

    The Beyond Horndeski class of alternative gravity theories allows for self-accelerating de Sitter cosmologies with no need for a cosmological constant. This makes them viable alternatives to ΛCDM, and so testing their small-scale predictions against General Relativity is of paramount importance. These theories generically predict deviations in both the Newtonian force law and the gravitational lensing of light inside extended objects. Therefore, by simultaneously fitting the X-ray and lensing profiles of galaxy clusters, new constraints can be obtained. In this work, we apply this methodology to the stacked profiles of 58 high-redshift (0.1 < z < 1.2) clusters using X-ray surface brightness profiles from the XMM Cluster Survey and weak lensing profiles from CFHTLenS. By performing a multi-parameter Markov chain Monte Carlo analysis, we are able to place new constraints on the parameters governing deviations from Newton's law, Υ₁ = −0.11^{+0.93}_{−0.67}, and light bending, Υ₂ = −0.22^{+1.22}_{−1.19}. Both constraints are consistent with General Relativity, for which Υ₁ = Υ₂ = 0. We present here the first observational constraints on Υ₂, as well as the first extragalactic measurement of both parameters.
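    The multi-parameter MCMC step can be illustrated with a schematic Metropolis sampler (not the authors' pipeline; the Gaussian mock likelihood and its widths are stand-ins for the joint X-ray plus lensing fit) constraining two deviation parameters centred on their General Relativity values of zero.

```python
import numpy as np

# Schematic Metropolis MCMC over two parameters (Y1, Y2) with a mock
# Gaussian log-likelihood; widths below are illustrative assumptions.
rng = np.random.default_rng(2)

def log_like(theta):
    y1, y2 = theta
    return -0.5 * ((y1 / 0.8) ** 2 + (y2 / 1.2) ** 2)

theta = np.array([0.0, 0.0])
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0.0, 0.5, 2)   # symmetric random walk
    # Accept with probability min(1, L(proposal)/L(theta)):
    if np.log(rng.uniform()) < log_like(proposal) - log_like(theta):
        theta = proposal
    chain.append(theta)
chain = np.array(chain)[2000:]                   # discard burn-in

print("Y1 = %.2f +/- %.2f" % (chain[:, 0].mean(), chain[:, 0].std()))
print("Y2 = %.2f +/- %.2f" % (chain[:, 1].mean(), chain[:, 1].std()))
```

    The recovered means and widths approximate the mock posterior; a real analysis would replace `log_like` with the likelihood of the stacked cluster profiles.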

  19. Constraints on interquark interaction parameters with GW170817 in a binary strange star scenario

    NASA Astrophysics Data System (ADS)

    Zhou, En-Ping; Zhou, Xia; Li, Ang

    2018-04-01

    The LIGO/Virgo detection of gravitational waves from a binary merger system, GW170817, has put a clean and strong constraint on the tidal deformability of the merging objects. From this constraint, deep insights can be obtained into the compact star equation of state, which has been one of the most puzzling problems for nuclear physicists and astrophysicists. Employing one of the most widely used quark star EOS models, we characterize the star properties by the strange quark mass (ms), an effective bag constant (Beff), the perturbative QCD correction (a4), as well as the gap parameter (Δ) when considering quark pairing, and investigate the dependence of the tidal deformability on them. We find that the tidal deformability is dominated by Beff and insensitive to ms and a4. We discuss the correlation between the tidal deformability and the maximum mass (MTOV) of a static quark star, which opens the possibility of ruling out the existence of quark stars with future gravitational wave observations and mass measurements. The current tidal deformability measurement implies MTOV ≤ 2.18 M⊙ (2.32 M⊙ when pairing is considered) for quark stars. Combining with two-solar-mass pulsar observations, we also place constraints on the poorly known gap parameter Δ for color-flavor-locked quark matter.

  20. Model independent constraints on transition redshift

    NASA Astrophysics Data System (ADS)

    Jesus, J. F.; Holanda, R. F. L.; Pereira, S. H.

    2018-05-01

    This paper aims to put constraints on the transition redshift zt, which determines the onset of cosmic acceleration, in cosmological-model-independent frameworks. In order to perform our analyses, we consider a flat universe and assume a parametrization for the comoving distance DC(z) up to third degree in z, a second-degree parametrization for the Hubble parameter H(z), and a linear parametrization for the deceleration parameter q(z). For each case, we show that type Ia supernovae and H(z) data complement each other in the parameter space, and tighter constraints on the transition redshift are obtained. By combining the type Ia supernovae observations and Hubble parameter measurements, it is possible to constrain the values of zt, for each approach, as 0.806 ± 0.094, 0.870 ± 0.063, and 0.973 ± 0.058 at 1σ c.l., respectively. These approaches thus provide cosmological-model-independent estimates for this parameter.
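    For the linear parametrization, the transition redshift follows directly from the root of q(z). A back-of-the-envelope check with hypothetical best-fit values (not the paper's fitted numbers):

```python
# For q(z) = q0 + q1*z, the deceleration-to-acceleration transition is
# the root q(zt) = 0, i.e. zt = -q0/q1. Values below are illustrative:
q0, q1 = -0.60, 0.62   # hypothetical fit; q0 < 0 means acceleration today

zt = -q0 / q1
print(f"transition redshift zt = {zt:.3f}")
```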

  1. Characterizing Exoplanets with WFIRST

    NASA Astrophysics Data System (ADS)

    Robinson, Tyler D.; Stapelfeldt, Karl R.; Marley, Mark S.; Marchis, Franck; Fortney, Jonathan J.

    2017-01-01

    The Wide-Field Infrared Survey Telescope (WFIRST) mission is expected to be equipped with a Coronagraph Instrument (CGI) that will study and explore a diversity of exoplanets in reflected light. Beyond being a technology demonstration, the CGI will provide our first glimpses of temperate worlds around our nearest stellar neighbors. In this presentation, we explore how instrumental and astrophysical parameters will affect the ability of the WFIRST/CGI to obtain spectral and photometric observations that are useful for characterizing its planetary targets. We discuss the development of an instrument noise model suitable for studying the spectral characterization potential of a coronagraph-equipped, space-based telescope. To be consistent with planned technologies, we assume a baseline set of telescope and instrument parameters that include a 2.4 meter diameter primary aperture, an up-to-date filter set spanning the visible wavelength range, a spectroscopic wavelength range of 600-970 nm, and an instrument spectral resolution of 70. We present applications of our baseline model to a variety of spectral models of different planet types, emphasizing warm jovian exoplanets. With our exoplanet spectral models, we explore wavelength-dependent planet-star flux ratios for main sequence stars of various effective temperatures, and discuss how coronagraph inner and outer working angle constraints will influence the potential to study different types of planets. For planets most favorable to spectroscopic characterization—gas giants with extensive water vapor clouds—we study the integration times required to achieve moderate signal-to-noise ratio spectra. We also explore the sensitivity of the integration times required to detect key methane absorption bands to exozodiacal light levels. 
We conclude with a discussion of the opportunities for characterizing smaller, potentially rocky, worlds under a “rendezvous” scenario, where an external starshade is later paired with the WFIRST spacecraft.

  2. Relationship between participation in leisure activities and constraints on Taiwanese breastfeeding mothers during leisure activities

    PubMed Central

    2013-01-01

Background Participation in leisure activities is strongly associated with health and well-being. Little research has explored the relationship between participation in leisure activities and the constraints breastfeeding mothers face during leisure activities. The purposes of this study are: 1) to investigate constraints on breastfeeding mothers during leisure activities and their participation in leisure activities; 2) to investigate the differences between breastfeeding mothers' preferences for leisure activities and their actual participation; 3) to segment breastfeeding mothers with similar patterns, using a cluster analysis based on the delineated participation in leisure activities and leisure preferences; 4) to explore any differences between clusters of breastfeeding mothers with respect to socio-demographic characteristics, breastfeeding behaviours and leisure constraints. Methods This study has a cross-sectional design using an online survey of mothers with more than four months of breastfeeding experience. The questionnaire includes demographic variables, breastfeeding behaviours, preferences for participation in leisure activities, and constraints on leisure activities. Data were collected between March and July 2011, yielding 415 valid responses for analysis. Results For breastfeeding mothers, this study identifies breastfeeding-related constraints on leisure activities in addition to the three traditional constraint factors in the model. Constraints related to children, family, and nursing environments are reported most frequently. Breastfeeding mothers in Taiwan participate regularly in family activities or activities related to their children. Cluster analysis classified breastfeeding mothers into Action and Contemplation groups, and found that mothers in the latter group participate less in leisure activities and experience more breastfeeding-related constraints. 
Conclusions The findings inform the design of public health policies for nursing-friendly environments that increase opportunities for breastfeeding mothers to engage in leisure activities, and suggest various types of activities to increase participation in that population. PMID:23627993

  3. Positive signs in massive gravity

    NASA Astrophysics Data System (ADS)

    Cheung, Clifford; Remmen, Grant N.

    2016-04-01

    We derive new constraints on massive gravity from unitarity and analyticity of scattering amplitudes. Our results apply to a general effective theory defined by Einstein gravity plus the leading soft diffeomorphism-breaking corrections. We calculate scattering amplitudes for all combinations of tensor, vector, and scalar polarizations. The high-energy behavior of these amplitudes prescribes a specific choice of couplings that ameliorates the ultraviolet cutoff, in agreement with existing literature. We then derive consistency conditions from analytic dispersion relations, which dictate positivity of certain combinations of parameters appearing in the forward scattering amplitudes. These constraints exclude all but a small island in the parameter space of ghost-free massive gravity. While the theory of the "Galileon" scalar mode alone is known to be inconsistent with positivity constraints, this is remedied in the full massive gravity theory.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, Clifford; Remmen, Grant N.

Here, we derive new constraints on massive gravity from unitarity and analyticity of scattering amplitudes. Our results apply to a general effective theory defined by Einstein gravity plus the leading soft diffeomorphism-breaking corrections. We calculate scattering amplitudes for all combinations of tensor, vector, and scalar polarizations. Furthermore, the high-energy behavior of these amplitudes prescribes a specific choice of couplings that ameliorates the ultraviolet cutoff, in agreement with existing literature. We then derive consistency conditions from analytic dispersion relations, which dictate positivity of certain combinations of parameters appearing in the forward scattering amplitudes. These constraints exclude all but a small island in the parameter space of ghost-free massive gravity. And while the theory of the "Galileon" scalar mode alone is known to be inconsistent with positivity constraints, this is remedied in the full massive gravity theory.

  5. Constraints on the pre-impact orbits of Solar system giant impactors

    NASA Astrophysics Data System (ADS)

    Jackson, Alan P.; Gabriel, Travis S. J.; Asphaug, Erik I.

    2018-03-01

    We provide a fast method for computing constraints on impactor pre-impact orbits, applying this to the late giant impacts in the Solar system. These constraints can be used to make quick, broad comparisons of different collision scenarios, identifying some immediately as low-probability events, and narrowing the parameter space in which to target follow-up studies with expensive N-body simulations. We benchmark our parameter space predictions, finding good agreement with existing N-body studies for the Moon. We suggest that high-velocity impact scenarios in the inner Solar system, including all currently proposed single impact scenarios for the formation of Mercury, should be disfavoured. This leaves a multiple hit-and-run scenario as the most probable currently proposed for the formation of Mercury.
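The kind of fast constraint described above ultimately rests on standard two-body encounter kinematics: the impact speed and the mutual escape speed fix the hyperbolic excess speed via v_imp² = v_esc² + v_∞², and it is v_∞ that bounds the impactor's pre-impact heliocentric orbit. A minimal sketch of that core relation; the function names and numerical inputs are illustrative assumptions, not taken from the paper:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def v_escape(m_total, r_contact):
    """Mutual escape speed of the colliding pair at contact (m/s)."""
    return math.sqrt(2.0 * G * m_total / r_contact)

def v_infinity(v_impact, m_total, r_contact):
    """Hyperbolic excess (encounter) speed from the two-body energy
    relation v_imp^2 = v_esc^2 + v_inf^2; this is the quantity that
    constrains the impactor's pre-impact orbit."""
    v_esc = v_escape(m_total, r_contact)
    if v_impact < v_esc:
        raise ValueError("impact speed cannot be below mutual escape speed")
    return math.sqrt(v_impact ** 2 - v_esc ** 2)

# Illustrative numbers for an Earth-mass target plus a Mars-sized impactor:
m_total = 6.0e24            # combined mass, kg
r_contact = 6.4e6 + 3.4e6   # sum of radii, m
v_imp = 1.3 * v_escape(m_total, r_contact)  # a mildly hyperbolic impact
v_inf = v_infinity(v_imp, m_total, r_contact)
print(f"v_esc = {v_escape(m_total, r_contact)/1e3:.1f} km/s, "
      f"v_inf = {v_inf/1e3:.1f} km/s")
```

High-velocity scenarios imply large v_∞, hence eccentric or inclined pre-impact orbits that are statistically harder to arrange, which is the sense in which such scenarios can be disfavoured quickly before running N-body simulations.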

  6. Traverse Planning with Temporal-Spatial Constraints

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Morris, Paul H.; Deans, Mathew C.; Cohen, Tamar E.; Lees, David S.

    2017-01-01

We present an approach to planning rover traverses in a domain that includes temporal-spatial constraints. We are using the NASA Resource Prospector mission as a reference mission in our research. The objective of this mission is to explore permanently shadowed regions at a lunar pole. Most of the time the rover is required to avoid being in shadow; this requirement depends on where the rover is located and when it is at that location. Such a temporal-spatial constraint makes traverse planning more challenging for both humans and machines. We present a mixed-initiative traverse planner that addresses this challenge. This traverse planner is part of the Exploration Ground Data Systems (xGDS), which we have enhanced with new visualization features, new analysis tools, and new automation for path planning, in order to be applicable to the Resource Prospector mission. The key concept underlying the analysis tools and the automated path planning is reachability in this dynamic environment, as induced by the temporal-spatial constraints.

  7. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most existing feature selection algorithms do not allow interactive input from users during the feature selection optimization process. This study addresses this question by fixing a few user-input features in the final selected feature subset and formulating these user-input features as constraints in a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs comparably to or much better than the existing feature selection algorithms, even with the constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
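The constrained-selection idea can be sketched minimally: user-input features are fixed in the subset before any data-driven search fills the remaining slots. The function names, toy weights, and the greedy forward strategy below are illustrative assumptions; fsCoP itself embeds the constraints in a programming model rather than a greedy loop.

```python
def greedy_select(features, score, must_include, k):
    """Greedy forward feature selection honouring user constraints:
    every feature in `must_include` is fixed in the subset up front,
    then remaining slots are filled by maximizing `score`, any
    subset-quality function (e.g. a cross-validated classifier score)."""
    selected = list(must_include)
    candidates = [f for f in features if f not in selected]
    while len(selected) < k and candidates:
        best = max(candidates, key=lambda f: score(selected + [f]))
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy example: the score simply sums per-feature "relevance" weights.
weights = {"geneA": 0.9, "geneB": 0.1, "geneC": 0.7, "geneD": 0.4}
score = lambda subset: sum(weights[f] for f in subset)
# The user insists geneB (say, a biomarker known from the literature)
# stays in the panel even though its data-driven weight is low:
panel = greedy_select(list(weights), score, must_include=["geneB"], k=3)
print(panel)  # geneB is kept; the other slots are filled greedily
```

The point of the sketch is the interaction contract: the data-driven optimizer never evicts a user-pinned feature, mirroring how user knowledge enters as a hard constraint.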

  8. Reducing bias in survival under non-random temporary emigration

    USGS Publications Warehouse

    Peñaloza, Claudia L.; Kendall, William L.; Langtimm, Catherine Ann

    2014-01-01

Despite intensive monitoring, temporary emigration from the sampling area can induce bias severe enough for managers to discard life-history parameter estimates toward the terminus of the time series (terminal bias). Under random temporary emigration, unbiased parameters can be estimated with CJS models. However, unmodeled Markovian temporary emigration causes bias in parameter estimates, and an unobservable state is required to model this type of emigration. The robust design is most flexible when modeling temporary emigration, and partial solutions to mitigate bias have been identified; nonetheless, there are conditions where terminal bias prevails. Long-lived species with high adult survival and highly variable non-random temporary emigration present terminal bias in survival estimates, despite being modeled with the robust design and suggested constraints. Because this bias is due to uncertainty about the fate of individuals that are undetected toward the end of the time series, solutions should involve using additional information on the survival status or location of these individuals at that time. Using simulation, we evaluated the performance of models that jointly analyze robust design data and an additional source of ancillary data (predictive covariate on temporary emigration, telemetry, dead recovery, or auxiliary resightings) in reducing terminal bias in survival estimates. The auxiliary resighting and predictive covariate models reduced terminal bias the most. Additional telemetry data were effective at reducing terminal bias only when individuals were tracked for a minimum of two years. High adult survival of long-lived species made the joint model with recovery data ineffective at reducing terminal bias because of small-sample bias. The naïve constraint model (last and penultimate temporary emigration parameters made equal) was the least efficient, though still able to reduce terminal bias when compared to an unconstrained model. 
Joint analysis of several sources of data improved parameter estimates and reduced terminal bias. Efforts to incorporate or acquire such data should be considered by researchers and wildlife managers, especially in the years leading up to status assessments of species of interest. Simulation modeling is a very cost effective method to explore the potential impacts of using different sources of data to produce high quality demographic data to inform management.

  9. Neighboring extremals of dynamic optimization problems with path equality constraints

    NASA Technical Reports Server (NTRS)

    Lee, A. Y.

    1988-01-01

    Neighboring extremals of dynamic optimization problems with path equality constraints and with an unknown parameter vector are considered in this paper. With some simplifications, the problem is reduced to solving a linear, time-varying two-point boundary-value problem with integral path equality constraints. A modified backward sweep method is used to solve this problem. Two example problems are solved to illustrate the validity and usefulness of the solution technique.

  10. Influence of Hatha yoga on physical activity constraints, physical fitness, and body image of breast cancer survivors: a pilot study.

    PubMed

    Van Puymbroeck, Marieke; Schmid, Arlene; Shinew, Kimberly J; Hsieh, Pei-Chun

    2011-01-01

    Breast cancer survivors often experience changes in their perception of their bodies following surgical treatment. These changes in body image may increase self-consciousness and perceptions of physical activity constraints and reduce participation in physical activity. While the number of studies examining different types of yoga targeting women with breast cancer has increased, studies thus far have not studied the influence that Hatha yoga has on body image and physical activity constraints. The objective of this study was to explore the changes that occur in breast cancer survivors in terms of body image, perceived constraints, and physical fitness following an 8-week Hatha yoga intervention. This study used a nonrandomized two-group pilot study, comparing an 8-week Hatha yoga intervention with a light exercise group, both designed for women who were at least nine months post-treatment for breast cancer. Both quantitative and qualitative data were collected in the areas of body image, physical activity constraints, and physical fitness. Findings indicated that quantitatively, yoga participants experienced reductions in physical activity constraints and improvements in lower- and upper-body strength and flexibility, while control participants experienced improvements in abdominal strength and lower-body strength. Qualitative findings support changes in body image, physical activity constraints, and physical fitness for the participants in the yoga group. In conclusion, Hatha yoga may reduce constraints to physical activity and improve fitness in breast cancer survivors. More research is needed to explore the relationship between Hatha yoga and improvements in body image.

  11. Biological optimization of simultaneous boost on intra-prostatic lesions (DILs): sensitivity to TCP parameters.

    PubMed

    Azzeroni, R; Maggio, A; Fiorino, C; Mangili, P; Cozzarini, C; De Cobelli, F; Di Muzio, N G; Calandrino, R

    2013-11-01

The aim of this investigation was to explore the potential of biological optimization in the case of a simultaneous integrated boost on intra-prostatic dominant lesions (DILs) and to evaluate the impact of TCP parameter uncertainty. Different combinations of TCP parameters (TD50 and γ50 in the Poisson-like model) were considered for the DILs and the prostate outside the DILs (CTV) for 7 intermediate/high-risk prostate patients. The aim was to maximize TCP while constraining NTCPs below 5% for all organs at risk. TCP values were highly dependent on the parameters used and ranged between 38.4% and 99.9%; the optimized median physical doses were in the range 94-116 Gy and 69-77 Gy for the DIL and CTV respectively. TCP values were correlated with the PTV-rectum overlap and the minimum distance between the rectum and the DIL. In conclusion, biological optimization for selective dose escalation is feasible and suggests prescribed doses around 90-120 Gy to the DILs. The obtained result depends critically on the assumptions concerning the higher radioresistance in the DILs. In the case of very resistant clonogens in the DIL, it may be difficult to maximize TCP to acceptable levels without violating NTCP constraints. Copyright © 2012 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
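For orientation, one common closed form of a Poisson-like TCP model parametrized by TD50 and γ50 can be sketched as follows. This specific variant is an assumption on our part (the paper's exact formulation may differ), but it shows how strongly the computed TCP swings with the assumed parameters, which is the sensitivity the abstract highlights:

```python
import math

def tcp_poisson(dose, td50, gamma50):
    """One common Poisson-like TCP closed form:
        TCP(D) = 2^(-exp[(2*gamma50 / ln 2) * (1 - D/TD50)])
    It satisfies TCP(TD50) = 0.5 and has normalized slope gamma50
    at the 50% point. (A sketch, not the paper's exact variant.)"""
    x = (2.0 * gamma50 / math.log(2.0)) * (1.0 - dose / td50)
    return 2.0 ** (-math.exp(x))

# Sensitivity of TCP at a fixed boost dose to the assumed parameters
# (illustrative TD50 / gamma50 combinations, not fitted values):
for td50, g50 in [(80.0, 2.0), (100.0, 2.0), (80.0, 4.0)]:
    print(f"TD50={td50} Gy, gamma50={g50}: "
          f"TCP(110 Gy) = {tcp_poisson(110.0, td50, g50):.3f}")
```

Raising the assumed TD50 (more radioresistant clonogens) pulls the TCP at a given dose down sharply, which is why the optimized DIL prescriptions depend so critically on that assumption.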

  12. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Observational constraints on Hubble parameter in viscous generalized Chaplygin gas

    NASA Astrophysics Data System (ADS)

    Thakur, P.

    2018-04-01

A cosmological model with viscous generalized Chaplygin gas (in short, VGCG) is considered here to determine observational constraints on its equation of state (EoS) parameters from background data. These data consist of H(z)-z (OHD) data, the baryon acoustic oscillation peak parameter, the CMB shift parameter and SN Ia data (Union 2.1). Best-fit values of the EoS parameters, including the present Hubble parameter (H0), and their acceptable ranges at different confidence limits are determined. In this model the permitted ranges for the present Hubble parameter and the transition redshift (zt) at the 1σ confidence limit are H0 = 70.24^{+0.34}_{-0.36} and zt = 0.76^{+0.07}_{-0.07} respectively. These EoS parameters are then compared with those of other models. The present age of the Universe (t0) has also been determined. The Akaike information criterion and Bayesian information criterion have been adopted for model selection and comparison with other models. It is noted that the VGCG model satisfactorily accommodates the present accelerating phase of the Universe.
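The information-criterion comparison mentioned above reduces to two short formulas, AIC = 2k − 2 ln L_max and BIC = k ln n − 2 ln L_max, penalizing models with more parameters (k) fitted to n data points. A sketch with illustrative likelihood values (the model names and numbers below are placeholders, not the paper's results):

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: AIC = 2k - 2 ln L_max."""
    return 2.0 * k - 2.0 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: BIC = k ln n - 2 ln L_max."""
    return k * math.log(n) - 2.0 * log_likelihood

# Hypothetical comparison of two background-cosmology fits to n = 580
# SN Ia points (illustrative log-likelihoods, not the paper's values):
n = 580
models = {"VGCG (3 params)": (-273.0, 3), "flat LCDM (2 params)": (-274.5, 2)}
for name, (lnL, k) in models.items():
    print(f"{name}: AIC = {aic(lnL, k):.1f}, BIC = {bic(lnL, k, n):.1f}")
```

Lower values win; because BIC's k ln n penalty grows with the sample size, it punishes the extra VGCG parameter harder than AIC does, which is why quoting both criteria is standard in this kind of model comparison.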

  14. Effective theories of universal theories

    DOE PAGES

    Wells, James D.; Zhang, Zhengkang

    2016-01-20

It is well-known but sometimes overlooked that constraints on the oblique parameters (most notably S and T parameters) are generally speaking only applicable to a special class of new physics scenarios known as universal theories. The oblique parameters should not be associated with Wilson coefficients in a particular operator basis in the effective field theory (EFT) framework, unless restrictions have been imposed on the EFT so that it describes universal theories. Here, we work out these restrictions, and present a detailed EFT analysis of universal theories. We find that at the dimension-6 level, universal theories are completely characterized by 16 parameters. They are conveniently chosen to be: 5 oblique parameters that agree with the commonly-adopted ones, 4 anomalous triple-gauge couplings, 3 rescaling factors for the h^3, hff, hVV vertices, 3 parameters for hVV vertices absent in the Standard Model, and 1 four-fermion coupling of order y_f^2. Furthermore, all these parameters are defined in an unambiguous and basis-independent way, allowing for consistent constraints on the universal theories parameter space from precision electroweak and Higgs data.

  15. Effective theories of universal theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, James D.; Zhang, Zhengkang

It is well-known but sometimes overlooked that constraints on the oblique parameters (most notably S and T parameters) are generally speaking only applicable to a special class of new physics scenarios known as universal theories. The oblique parameters should not be associated with Wilson coefficients in a particular operator basis in the effective field theory (EFT) framework, unless restrictions have been imposed on the EFT so that it describes universal theories. Here, we work out these restrictions, and present a detailed EFT analysis of universal theories. We find that at the dimension-6 level, universal theories are completely characterized by 16 parameters. They are conveniently chosen to be: 5 oblique parameters that agree with the commonly-adopted ones, 4 anomalous triple-gauge couplings, 3 rescaling factors for the h^3, hff, hVV vertices, 3 parameters for hVV vertices absent in the Standard Model, and 1 four-fermion coupling of order y_f^2. Furthermore, all these parameters are defined in an unambiguous and basis-independent way, allowing for consistent constraints on the universal theories parameter space from precision electroweak and Higgs data.

  16. Transoptr — A second order beam transport design code with optimization and constraints

    NASA Astrophysics Data System (ADS)

    Heighway, E. A.; Hutcheon, R. M.

    1981-08-01

    This code was written initially to design an achromatic and isochronous reflecting magnet and has been extended to compete in capability (for constrained problems) with TRANSPORT. Its advantage is its flexibility in that the user writes a routine to describe his transport system. The routine allows the definition of general variables from which the system parameters can be derived. Further, the user can write any constraints he requires as algebraic equations relating the parameters. All variables may be used in either a first or second order optimization.

  17. Estimating free-body modal parameters from tests of a constrained structure

    NASA Technical Reports Server (NTRS)

    Cooley, Victor M.

    1993-01-01

    Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.

  18. Simplicity constraints: A 3D toy model for loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Charles, Christoph

    2018-05-01

In loop quantum gravity, tremendous progress has been made using the Ashtekar-Barbero variables. These variables, defined in a gauge fixing of the theory, correspond to a parametrization of the solutions of the so-called simplicity constraints. Their geometrical interpretation is however unsatisfactory as they do not constitute a space-time connection. It would be possible to resolve this point by using a full Lorentz connection or, equivalently, by using the self-dual Ashtekar variables. This leads however to simplicity constraints or reality conditions which are notoriously difficult to implement in the quantum theory. We explore in this paper the possibility of using completely degenerate actions to impose such constraints at the quantum level in the context of canonical quantization. To do so, we define a simpler model, in 3D, with similar constraints by extending the phase space to include an independent vielbein. We define the classical model and show that a precise quantum theory can be defined from it by gauge unfixing, completely equivalent to standard 3D Euclidean quantum gravity. We discuss possible future explorations around this model, as it could serve as a stepping stone toward full-fledged covariant loop quantum gravity.

  19. Constraining the cosmic deceleration-acceleration transition with type Ia supernova, BAO/CMB and H(z) data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, M. Vargas dos; Reis, R.R.R.; Waga, I., E-mail: vargas@if.ufrj.br, E-mail: ribamar@if.ufrj.br, E-mail: ioav@if.ufrj.br

    2016-02-01

We revisit the kink-like parametrization of the deceleration parameter q(z) [1], which considers a transition, at redshift z_t, from cosmic deceleration to acceleration. In this parametrization the initial (z >> z_t) value of the q-parameter is q_i, its final (z = −1) value is q_f, and the duration of the transition is parametrized by τ. By assuming a flat space geometry we obtain constraints on the free parameters of the model using recent data from type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), the cosmic microwave background (CMB) and the Hubble parameter H(z). The use of H(z) data introduces an explicit dependence of the combined likelihood on the present value of the Hubble parameter H_0, allowing us to explore the influence of different priors when marginalizing over this parameter. We also study the importance of the CMB information in the results by considering data from WMAP7, WMAP9 (Wilkinson Microwave Anisotropy Probe, 7 and 9 years) and Planck 2015. We show that the contours and best fit do not depend much on the different CMB data used and that the new BAO data considered are responsible for most of the improvement in the results. Assuming a flat space geometry, q_i = 1/2 and expressing the present value of the deceleration parameter q_0 as a function of the other three free parameters, we obtain z_t = 0.67^{+0.10}_{−0.08}, τ = 0.26^{+0.14}_{−0.10} and q_0 = −0.48^{+0.11}_{−0.13}, at the 68% confidence level, with a uniform prior over H_0. If in addition we fix q_f = −1, as in flat ΛCDM, DGP and Chaplygin quartessence, which are special models described by our parametrization, we get z_t = 0.66^{+0.03}_{−0.04}, τ = 0.33^{+0.04}_{−0.04} and q_0 = −0.54^{+0.05}_{−0.07}, in excellent agreement with flat ΛCDM for which τ = 1/3. 
We also obtain for flat wCDM, another dark energy model described by our parametrization, the constraint on the equation-of-state parameter −1.22 < w < −0.78 at more than the 99% confidence level.
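Whatever the specific kink-like form of q(z), the H(z) data enter through the exact FRW relation H(z) = H0 · exp[∫₀ᶻ (1+q(z′))/(1+z′) dz′]. A sketch of that mapping with an illustrative tanh transition between q_i and q_f (an assumption for demonstration only, not the paper's exact parametrization):

```python
import math

def hubble_from_q(z, q_of_z, h0=70.0, steps=2000):
    """H(z) = H0 * exp( integral_0^z (1 + q(z')) / (1 + z') dz' ),
    evaluated with the trapezoidal rule. The relation holds exactly
    in FRW cosmology for any deceleration history q(z)."""
    zs = [i * z / steps for i in range(steps + 1)]
    f = [(1.0 + q_of_z(zp)) / (1.0 + zp) for zp in zs]
    integral = sum((f[i] + f[i + 1]) * (zs[i + 1] - zs[i]) / 2.0
                   for i in range(steps))
    return h0 * math.exp(integral)

# Illustrative smooth deceleration-to-acceleration transition using
# the abstract's best-fit numbers (a tanh kink chosen for simplicity,
# NOT the paper's exact functional form):
q_i, q_f, z_t, tau = 0.5, -1.0, 0.67, 0.26
q = lambda z: q_f + (q_i - q_f) * 0.5 * (1.0 + math.tanh((z - z_t) / tau))
print(f"H(z=1) = {hubble_from_q(1.0, q):.1f} km/s/Mpc")
```

Two sanity checks anchor the relation: constant q = 1/2 (matter domination) gives H = H0 (1+z)^{3/2}, and constant q = −1 (de Sitter) gives H = H0 at all redshifts.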

  20. Revisiting CMB constraints on warm inflation

    NASA Astrophysics Data System (ADS)

    Arya, Richa; Dasgupta, Arnab; Goswami, Gaurav; Prasad, Jayanti; Rangarajan, Raghavan

    2018-02-01

We revisit the constraints that Planck 2015 temperature, polarization and lensing data impose on the parameters of warm inflation. To this end, we study warm inflation driven by a single scalar field with a quartic self-interaction potential in the weak dissipative regime. We analyse the effect of the parameters of warm inflation, namely the inflaton self-coupling λ and the inflaton dissipation parameter QP, on the CMB angular power spectrum. We constrain λ and QP for 50 and 60 e-foldings with the full Planck 2015 data (TT, TE, EE + lowP and lensing) by performing a Markov chain Monte Carlo analysis using the publicly available code CosmoMC, and obtain the joint as well as marginalized distributions of those parameters. We present our results in the form of means and 68% confidence limits on the parameters and also highlight the degeneracy between λ and QP in our analysis. From this analysis we show how the warm inflation parameters can be well constrained using the Planck 2015 data.
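The Markov chain Monte Carlo machinery behind such constraints can be illustrated with a minimal Metropolis sampler: given only the log posterior density, it produces samples whose histogram traces out the joint and marginalized distributions. The toy two-parameter Gaussian "posterior" below merely mimics a λ–QP-style degeneracy and is in no way the actual warm-inflation likelihood:

```python
import math
import random

def metropolis(log_post, start, step, n_samples, seed=1):
    """Minimal Metropolis MCMC: random-walk proposals, accepted with
    probability min(1, exp(lp_prop - lp)). A toy stand-in for what
    CosmoMC does over a cosmological parameter space."""
    rng = random.Random(seed)
    x = list(start)
    lp = log_post(x)
    chain = []
    for _ in range(n_samples):
        prop = [xi + rng.gauss(0.0, s) for xi, s in zip(x, step)]
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain.append(list(x))
    return chain

# Toy correlated 2D Gaussian log posterior: the strong correlation
# imitates a parameter degeneracy like the one between lambda and QP.
def log_post(p):
    a, b = p
    return -0.5 * (a * a + b * b + 1.8 * a * b) / (1.0 - 0.81)

chain = metropolis(log_post, start=[0.0, 0.0], step=[0.5, 0.5], n_samples=20000)
burn = chain[5000:]                      # discard burn-in
mean_a = sum(p[0] for p in burn) / len(burn)
print(f"posterior mean of first parameter: {mean_a:.2f}")  # near 0 here
```

Marginalized constraints fall out of the same chain by simply histogramming one coordinate; the degeneracy shows up as an elongated cloud in the two-dimensional scatter of samples.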

  1. A Multiphase Model for the Intracluster Medium

    NASA Technical Reports Server (NTRS)

    Nagai, Daisuke; Sulkanen, Martin E.; Evrard, August E.

    1999-01-01

Constraints on the clustered mass density of the universe derived from the observed population mean intracluster gas fraction of X-ray clusters may be biased by reliance on a single-phase assumption for the thermodynamic structure of the intracluster medium (ICM). We propose a descriptive model for multiphase structure in which a spherically symmetric ICM contains isobaric density perturbations with a radially dependent variance. Fixing the X-ray emission and emission-weighted temperature, we explore two independently observable signatures of the model in the parameter space. For bremsstrahlung-dominated emission, the central Sunyaev-Zel'dovich (SZ) decrement in the multiphase case is increased over the single-phase case, and multiphase X-ray spectra in the range 0.1-20 keV are flatter in the continuum and exhibit stronger low-energy emission lines than their single-phase counterparts. We quantify these effects for a fiducial 10^8 K cluster and demonstrate how the combination of SZ and X-ray spectroscopy can be used to identify a preferred location in the plane of the model parameter space. From these parameters the correct value of the mean intracluster gas fraction in the multiphase model follows, allowing an unbiased estimate of the clustered mass density to be recovered.

  2. Effective theory of dark energy at redshift survey scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gleyzes, Jérôme; Mancarella, Michele; Vernizzi, Filippo

    2016-02-01

We explore the phenomenological consequences of general late-time modifications of gravity in the quasi-static approximation, in the case where cold dark matter is non-minimally coupled to the gravitational sector. Assuming spectroscopic and photometric surveys with configuration parameters similar to those of the Euclid mission, we derive constraints on our effective description from three observables: the galaxy power spectrum in redshift space, tomographic weak-lensing shear power spectrum and the correlation spectrum between the integrated Sachs-Wolfe effect and the galaxy distribution. In particular, with ΛCDM as fiducial model and a specific choice for the time dependence of our effective functions, we perform a Fisher matrix analysis and find that the unmarginalized 68% CL errors on the parameters describing the modifications of gravity are of order σ ∼ 10^{−2}–10^{−3}. We also consider two other fiducial models. A nonminimal coupling of CDM enhances the effects of modified gravity and reduces the above statistical errors accordingly. In all cases, we find that the parameters are highly degenerate, which prevents the inversion of the Fisher matrices. Some of these degeneracies can be broken by combining all three observational probes.

  3. Top-philic dark matter within and beyond the WIMP paradigm

    NASA Astrophysics Data System (ADS)

    Garny, Mathias; Heisig, Jan; Hufnagel, Marco; Lülf, Benedikt

    2018-04-01

We present a comprehensive analysis of top-philic Majorana dark matter that interacts via a colored t-channel mediator. Despite the simplicity of the model—introducing three parameters only—it provides an extremely rich phenomenology allowing us to accommodate the relic density for a large range of coupling strengths spanning over 6 orders of magnitude. This model features all "exceptional" mechanisms for dark matter freeze-out, including the recently discovered conversion-driven freeze-out mode, with interesting signatures of long-lived colored particles at colliders. We constrain the cosmologically allowed parameter space with current experimental limits from direct, indirect and collider searches, with special emphasis on light dark matter below the top mass. In particular, we explore the interplay between limits from Xenon1T, Fermi-LAT and AMS-02 as well as limits from stop, monojet and Higgs invisible decay searches at the LHC. We find that several blind spots for light dark matter evade current constraints. The region in parameter space where the relic density is set by the mechanism of conversion-driven freeze-out can be conclusively tested by R-hadron searches at the LHC with 300 fb^{−1}.

  4. Severely Constraining Dark Matter Interpretations of the 21-cm Anomaly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlin, Asher; Hooper, Dan; Krnjaic, Gordan

The EDGES Collaboration has recently reported the detection of a stronger-than-expected absorption feature in the global 21-cm spectrum, centered at a frequency corresponding to a redshift of z ~ 17. This observation has been interpreted as evidence that the gas was cooled during this era as a result of scattering with dark matter. In this study, we explore this possibility, applying constraints from the cosmic microwave background, light element abundances, Supernova 1987A, and a variety of laboratory experiments. After taking these constraints into account, we find that the vast majority of the parameter space capable of generating the observed 21-cm signal is ruled out. The only range of models that remains viable is that in which a small fraction, ~0.3-2%, of the dark matter consists of particles with a mass of ~10-80 MeV and which couple to the photon through a small electric charge, epsilon ~ 10^{-6}-10^{-4}. Furthermore, in order to avoid being overproduced in the early universe, such models must be supplemented with an additional depletion mechanism, such as annihilations through a L_{\mu}-L_{\tau} gauge boson or annihilations to a pair of rapidly decaying hidden sector scalars.

  5. Exploring extensions to multi-state models with multiple unobservable states

    USGS Publications Warehouse

    Bailey, L.L.; Kendall, W.L.; Church, D.R.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    Many biological systems include a portion of the target population that is unobservable during certain life history stages. Transition to and from an unobservable state may be of primary interest in many ecological studies and such movements are easily incorporated into multi-state models. Several authors have investigated properties of open-population multi-state mark-recapture models with unobservable states, and determined the scope and constraints under which parameters are identifiable (or, conversely, are redundant), but only in the context of a single observable and a single unobservable state (Schmidt et al. 2002; Kendall and Nichols 2002; Schaub et al. 2004; Kendall 2004). Some of these constraints can be relaxed if data are collected under a version of the robust design (Kendall and Bjorkland 2001; Kendall and Nichols 2002; Kendall 2004; Bailey et al. 2004), which entails >1 capture period per primary period of interest (e.g., 2 sampling periods within a breeding season). The critical assumption shared by all versions of the robust design is that the state of the individual (e.g. observable or unobservable) remains static for the duration of the primary period (Kendall 2004). In this paper, we extend previous work by relaxing this assumption to allow movement among observable states within primary periods while maintaining static observable or unobservable states. Stated otherwise, both demographic and geographic closure assumptions are relaxed, but all individuals are either observable or unobservable within primary periods. Within these primary periods transitions are possible among multiple observable states, but transitions are not allowed among the corresponding unobservable states. Our motivation for this work is exploring potential differences in population parameters for pond-breeding amphibians, where the quality of habitat surrounding the pond is not spatially uniform. 
The scenario is an example of a more general case where individuals move between habitats both during the breeding season (within primary periods; transitions among observable states only) and during the non-breeding season (between primary periods; transitions between observable and unobservable states). Presumably, habitat quality affects demographic parameters (e.g. survival and breeding probabilities). Using this model we are able to test this prediction for amphibians and determine if individuals move to more favorable habitats to increase survival and breeding probabilities.

  6. Full Two-Body Problem Mass Parameter Observability Explored Through Doubly Synchronous Systems

    NASA Astrophysics Data System (ADS)

    Davis, Alex Benjamin; Scheeres, Daniel

    2018-04-01

    The full two-body problem (F2BP) is often used to model binary asteroid systems, representing the bodies as two finite mass distributions whose dynamics are influenced by their mutual gravity potential. The emergent behavior of the F2BP is highly coupled translational and rotational mutual motion of the mass distributions. For these systems the doubly synchronous equilibrium occurs when both bodies are tidally-locked and in a circular co-orbit. Stable oscillations about this equilibrium can be shown, for the nonplanar system, to be combinations of seven fundamental frequencies of the system and the mutual orbit rate. The fundamental frequencies arise as the linear periods of center manifolds identified about the equilibrium which are heavily influenced by each body’s mass parameters. We leverage these eight dynamical constraints to investigate the observability of binary asteroid mass parameters via dynamical observations. This is accomplished by proving the nonsingularity of the relationship between the frequencies and mass parameters for doubly synchronous systems. Thus we can invert the relationship to show that given observations of the frequencies, we can solve for the mass parameters of a target system. In so doing we are able to predict the estimation covariance of the mass parameters based on observation quality and define necessary observation accuracies for desired mass parameter certainties. We apply these tools to 617 Patroclus, a doubly synchronous Trojan binary and flyby target of the Lucy mission, as well as the Pluto and Charon system in order to predict mutual behaviors of these doubly synchronous systems and to provide observational requirements for these systems’ mass parameters.

  7. Spatial Coverage Planning and Optimization for Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Gaines, Daniel M.; Estlin, Tara; Chouinard, Caroline

    2008-01-01

    We are developing onboard planning and scheduling technology to enable in situ robotic explorers, such as rovers and aerobots, to more effectively assist scientists in planetary exploration. In our current work, we are focusing on situations in which the robot is exploring large geographical features such as craters, channels or regional boundaries. In order to develop valid, high-quality plans, the robot must take into account a range of scientific and engineering constraints and preferences. We have developed a system that incorporates multiobjective optimization and planning, allowing the robot to generate high-quality mission operations plans that respect resource limitations and mission constraints while attempting to maximize science and engineering objectives. An important scientific objective for the exploration of geological features is selecting observations that spatially cover an area of interest. We have developed a metric to enable an in situ explorer to reason about and track the spatial coverage quality of a plan. We describe this technique and show how it is combined in the overall multiobjective optimization and planning algorithm.
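The spatial-coverage idea described above can be illustrated with a simple grid-based sketch: discretize the area of interest into cells and score a plan by the fraction of cells within sensing range of at least one planned observation. This is a hypothetical stand-in (the function name, grid, and `sensor_range` are illustrative assumptions, not the paper's actual metric):

```python
import math

def coverage_fraction(cells, observations, sensor_range):
    """Fraction of area-of-interest grid cells within range of an observation.

    A hypothetical stand-in for the paper's spatial coverage quality metric.
    """
    covered = sum(
        1 for cx, cy in cells
        if any(math.hypot(cx - ox, cy - oy) <= sensor_range
               for ox, oy in observations)
    )
    return covered / len(cells)

# 4x4 grid of cell centers; two planned observation points.
cells = [(x + 0.5, y + 0.5) for x in range(4) for y in range(4)]
observations = [(1.0, 1.0), (3.0, 3.0)]
frac = coverage_fraction(cells, observations, sensor_range=1.5)
```

A multiobjective planner could then trade such a coverage score against resource costs when ranking candidate plans.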

  8. Clustering by soft-constraint affinity propagation: applications to gene-expression data.

    PubMed

    Leone, Michele; Sumedha; Weigt, Martin

    2007-10-15

    Similarity-measure-based clustering is a crucial problem appearing throughout scientific data analysis. Recently, a powerful new algorithm called Affinity Propagation (AP) based on message-passing techniques was proposed by Frey and Dueck (2007a). In AP, each cluster is identified by a common exemplar to which all other data points of the same cluster refer, and exemplars have to refer to themselves. Despite its proven power, AP in its present form suffers from a number of drawbacks. The hard constraint of having exactly one exemplar per cluster restricts AP to classes of regularly shaped clusters, and leads to suboptimal performance, e.g. in analyzing gene expression data. This limitation can be overcome by relaxing the AP hard constraints. A new parameter controls the importance of the constraints compared to the aim of maximizing the overall similarity, and allows one to interpolate between the simple case where each data point selects its closest neighbor as an exemplar and the original AP. The resulting soft-constraint affinity propagation (SCAP) becomes more informative and accurate, and leads to more stable clustering. Even though a new a priori free parameter is introduced, the overall dependence of the algorithm on external tuning is reduced, as robustness is increased and an optimal strategy for parameter selection emerges more naturally. SCAP is tested on biological benchmark data, including in particular microarray data related to various cancer types. We show that the algorithm efficiently unveils the hierarchical cluster structure present in the data sets. Furthermore, it allows the extraction of sparse gene expression signatures for each cluster.
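The exemplar-based message passing underlying AP can be sketched in a few lines of NumPy. This is a generic implementation of the original hard-constraint AP updates (responsibilities and availabilities, with damping), not Frey and Dueck's reference code or the SCAP variant; the toy data and preference choice are illustrative assumptions:

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=200):
    """Hard-constraint AP message passing: responsibilities R, availabilities A."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # Responsibilities: r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = np.max(AS, axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * R_new
        # Availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = np.diag(A_new).copy()
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, diag)
        A = damping * A + (1 - damping) * A_new
    exemplars = np.where(np.diag(R + A) > 0)[0]
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    labels[exemplars] = exemplars  # exemplars label themselves
    return exemplars, labels

# Toy data: two well-separated groups on a line; similarity = -squared distance,
# shared preference set to the median off-diagonal similarity.
x = np.array([0.0, 0.1, 5.0, 5.1])
S = -(x[:, None] - x[None, :]) ** 2
pref = np.median(S[~np.eye(len(x), dtype=bool)])
np.fill_diagonal(S, pref)
exemplars, labels = affinity_propagation(S)
```

Lowering the shared preference (the diagonal of S) yields fewer exemplars; SCAP's relaxation would additionally soften the requirement that exemplars point to themselves.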

  9. A heuristic constraint programmed planner for deep space exploration problems

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao; Xu, Rui; Cui, Pingyuan

    2017-10-01

    In recent years, the increasing number of scientific payloads and growing constraints on the probe have made constraint processing technology a hotspot in the deep space planning field. In the planning procedure, the ordering of variables and values plays a vital role. In this paper we present two heuristic ordering methods for variables and values. On this basis, a Graphplan-like constraint-programmed planner is proposed. In the planner we convert the traditional constraint satisfaction problem to a time-tagged form with different levels. Inspired by the most-constrained-first principle of constraint satisfaction problems (CSPs), the variable heuristic is designed around the number of unassigned variables in a constraint, and the value heuristic is designed around the completion degree of the support set. The simulation experiments show that the proposed planner is effective and its performance is competitive with other kinds of planners.
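The "most constrained first" principle that inspires the variable heuristic can be illustrated with a generic backtracking CSP solver that always branches on the unassigned variable with the fewest remaining values (the MRV heuristic). This is a minimal sketch of the principle, not the planner described in the paper:

```python
def select_variable(assignment, domains):
    """MRV / "most constrained first": branch on the variable with the
    fewest remaining domain values."""
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def backtrack(assignment, domains, constraints):
    """Depth-first search using the MRV variable-ordering heuristic."""
    if len(assignment) == len(domains):
        return dict(assignment)
    var = select_variable(assignment, domains)
    for value in domains[var]:
        assignment[var] = value
        if all(check(assignment) for check in constraints):
            result = backtrack(assignment, domains, constraints)
            if result is not None:
                return result
        del assignment[var]
    return None

# Toy instance: three activities that must occupy pairwise-distinct time slots.
domains = {"a": [1, 2], "b": [2], "c": [1, 2, 3]}

def all_different(assignment):
    values = list(assignment.values())
    return len(values) == len(set(values))

solution = backtrack({}, domains, [all_different])
```

Because "b" has only one candidate slot, MRV assigns it first, pruning the search before "a" and "c" are even considered.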

  10. Principals' Self-Efficacy: Relations with Job Autonomy, Job Satisfaction, and Contextual Constraints

    ERIC Educational Resources Information Center

    Federici, Roger A.

    2013-01-01

    The purpose of the present study was to explore relations between principals' self-efficacy, perceived job autonomy, job satisfaction, and perceived contextual constraints to autonomy. Principal self-efficacy was measured by a multidimensional scale called the Norwegian Principal Self-Efficacy Scale. Job autonomy, job satisfaction, and contextual…

  11. Exploring Matter: An Interactive, Inexpensive Chemistry Exhibit for Museums

    ERIC Educational Resources Information Center

    Murov, Steven; Chavez, Arnold

    2017-01-01

    Despite its vital importance in our lives, chemistry is inadequately represented in most museums. Issues such as safety, replenishing and disposal of chemicals, supervision required, and cost are constraints that have limited the number and size of chemistry exhibits. Taking into account the constraints, a 21-station interactive and inexpensive…

  12. Deterrents to Women's Participation in Continuing Professional Development

    ERIC Educational Resources Information Center

    Chuang, Szu-Fang

    2015-01-01

    This study was designed to explore and define key factors that deter women from participating in continuing professional development (CPD) in the workplace. Four dimensions of deterrents that are caused by women's social roles, gender inequality and gender dimensions are discussed: family and time constraints, cost and work constraints, lack of…

  13. Parameter sensitivity and identifiability for a biogeochemical model of hypoxia in the northern Gulf of Mexico

    EPA Science Inventory

    Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...

  14. META II Complex Systems Design and Analysis (CODA)

    DTIC Science & Technology

    2011-08-01

    Report excerpt (table of contents): 3.8.7 Variables, Parameters and Constraints; 3.8.8 Objective...; Figure 7: Inputs, States, Outputs and Parameters of System Requirements Specifications; Design Rule Based on Device Parameter; Figure 35: AEE Device Design Rules (excerpt)

  15. MAPGEN: Mixed-Initiative Activity Planning for the Mars Exploration Rover Mission

    NASA Technical Reports Server (NTRS)

    Ai-Chang, Mitchell; Bresina, John; Hsu, Jennifer; Jonsson, Ari; Kanefsky, Bob; McCurdy, Michael; Morris, Paul; Rajan, Kanna; Vera, Alonso; Yglesias, Jeffrey

    2004-01-01

    This document describes the Mixed-initiative Activity Plan Generation system (MAPGEN). This system is one of the critical tools in Mars Exploration Rover mission surface operations, where it is used to build activity plans for each of the rovers, each Martian day. The MAPGEN system combines an existing tool for activity plan editing and resource modeling with an advanced constraint-based reasoning and planning framework. The constraint-based planning component provides active constraint and rule enforcement, automated planning capabilities, and a variety of tools and functions that are useful for building activity plans in an interactive fashion. In this demonstration, we will show the capabilities of the system and demonstrate how the system has been used in actual Mars rover operations. In contrast to the demonstration given at ICAPS 03, significant improvements have been made to the system. These include various additional capabilities that are based on automated reasoning and planning techniques, as well as a new Constraint Editor (CE) support tool. As part of the process for generating command loads, MAPGEN provides engineers and scientists with an intelligent activity planning tool that allows them to more effectively generate complex plans that maximize the science return each day. The key to the effectiveness of the MAPGEN tool is an underlying constraint-based planning and reasoning engine.

  16. How CMB and large-scale structure constrain chameleon interacting dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y., E-mail: boriero@physik.uni-bielefeld.de, E-mail: subinoy@iiap.res.in, E-mail: yvonne.y.wong@unsw.edu.au

    2015-07-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H{sub 0} tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H{sub 0} value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  17. Positive signs in massive gravity

    DOE PAGES

    Cheung, Clifford; Remmen, Grant N.

    2016-04-01

    Here, we derive new constraints on massive gravity from unitarity and analyticity of scattering amplitudes. Our results apply to a general effective theory defined by Einstein gravity plus the leading soft diffeomorphism-breaking corrections. We calculate scattering amplitudes for all combinations of tensor, vector, and scalar polarizations. Furthermore, the high-energy behavior of these amplitudes prescribes a specific choice of couplings that ameliorates the ultraviolet cutoff, in agreement with existing literature. We then derive consistency conditions from analytic dispersion relations, which dictate positivity of certain combinations of parameters appearing in the forward scattering amplitudes. These constraints exclude all but a small island in the parameter space of ghost-free massive gravity. And while the theory of the "Galileon" scalar mode alone is known to be inconsistent with positivity constraints, this is remedied in the full massive gravity theory.

  18. Adaptive fuzzy dynamic surface control of nonlinear systems with input saturation and time-varying output constraints

    NASA Astrophysics Data System (ADS)

    Edalati, L.; Khaki Sedigh, A.; Aliyari Shooredeli, M.; Moarefianpour, A.

    2018-02-01

    This paper deals with the design of adaptive fuzzy dynamic surface control for uncertain strict-feedback nonlinear systems with asymmetric time-varying output constraints in the presence of input saturation. To approximate the unknown nonlinear functions and overcome the problem of explosion of complexity, a fuzzy logic system is combined with dynamic surface control in the backstepping design technique. To ensure satisfaction of the output constraints, an asymmetric time-varying Barrier Lyapunov Function (BLF) is used. Moreover, by applying the minimal learning parameter technique, the number of online parameter updates for each subsystem is reduced to two. Hence, semi-global uniform ultimate boundedness (SGUUB) of all the closed-loop signals with appropriate tracking error convergence is guaranteed. The effectiveness of the proposed control is demonstrated by two simulation examples.

  19. Constraint on reconstructed f(R) gravity models from gravitational waves

    NASA Astrophysics Data System (ADS)

    Lee, Seokcheon

    2018-06-01

    The gravitational wave (GW) detection of a binary neutron star inspiral made by Advanced LIGO and Advanced Virgo paves an unprecedented way for multi-messenger observations. The propagation speed of this GW can be scrutinized by comparing the arrival times of the GW and of neutrinos or photons, which constrains the mass of the graviton. f(R) gravity theories characteristically contain massive gravitons in addition to the usual massless ones. Previously, we showed that model-independent f(R) gravity theories can be constructed from both the background evolution and the matter growth, with one undetermined parameter. We show that this parameter can be constrained by the graviton mass bound obtained from the GW detection. Thus, GW detection provides an invaluable constraint on the validity of f(R) gravity theories.

  20. Atmospheric neutrino oscillation analysis with external constraints in Super-Kamiokande I-IV

    NASA Astrophysics Data System (ADS)

    Abe, K.; Bronner, C.; Haga, Y.; Hayato, Y.; Ikeda, M.; Iyogi, K.; Kameda, J.; Kato, Y.; Kishimoto, Y.; Marti, Ll.; Miura, M.; Moriyama, S.; Nakahata, M.; Nakajima, T.; Nakano, Y.; Nakayama, S.; Okajima, Y.; Orii, A.; Pronost, G.; Sekiya, H.; Shiozawa, M.; Sonoda, Y.; Takeda, A.; Takenaka, A.; Tanaka, H.; Tasaka, S.; Tomura, T.; Akutsu, R.; Irvine, T.; Kajita, T.; Kametani, I.; Kaneyuki, K.; Nishimura, Y.; Okumura, K.; Richard, E.; Tsui, K. M.; Labarga, L.; Fernandez, P.; Blaszczyk, F. d. M.; Gustafson, J.; Kachulis, C.; Kearns, E.; Raaf, J. L.; Stone, J. L.; Sulak, L. R.; Berkman, S.; Tobayama, S.; Goldhaber, M.; Carminati, G.; Elnimr, M.; Kropp, W. R.; Mine, S.; Locke, S.; Renshaw, A.; Smy, M. B.; Sobel, H. W.; Takhistov, V.; Weatherly, P.; Ganezer, K. S.; Hartfiel, B. L.; Hill, J.; Hong, N.; Kim, J. Y.; Lim, I. T.; Park, R. G.; Akiri, T.; Himmel, A.; Li, Z.; O'Sullivan, E.; Scholberg, K.; Walter, C. W.; Wongjirad, T.; Ishizuka, T.; Nakamura, T.; Jang, J. S.; Choi, K.; Learned, J. G.; Matsuno, S.; Smith, S. N.; Amey, J.; Litchfield, R. P.; Ma, W. Y.; Uchida, Y.; Wascko, M. O.; Cao, S.; Friend, M.; Hasegawa, T.; Ishida, T.; Ishii, T.; Kobayashi, T.; Nakadaira, T.; Nakamura, K.; Oyama, Y.; Sakashita, K.; Sekiguchi, T.; Tsukamoto, T.; Abe, KE.; Hasegawa, M.; Suzuki, A. T.; Takeuchi, Y.; Yano, T.; Hayashino, T.; Hirota, S.; Huang, K.; Ieki, K.; Jiang, M.; Kikawa, T.; Nakamura, KE.; Nakaya, T.; Patel, N. D.; Suzuki, K.; Takahashi, S.; Wendell, R. A.; Anthony, L. H. V.; McCauley, N.; Pritchard, A.; Fukuda, Y.; Itow, Y.; Mitsuka, G.; Murase, M.; Muto, F.; Suzuki, T.; Mijakowski, P.; Frankiewicz, K.; Hignight, J.; Imber, J.; Jung, C. K.; Li, X.; Palomino, J. L.; Santucci, G.; Vilela, C.; Wilking, M. J.; Yanagisawa, C.; Ito, S.; Fukuda, D.; Ishino, H.; Kayano, T.; Kibayashi, A.; Koshio, Y.; Mori, T.; Nagata, H.; Sakuda, M.; Xu, C.; Kuno, Y.; Wark, D.; Di Lodovico, F.; Richards, B.; Tacik, R.; Kim, S. 
B.; Cole, A.; Thompson, L.; Okazawa, H.; Choi, Y.; Ito, K.; Nishijima, K.; Koshiba, M.; Totsuka, Y.; Suda, Y.; Yokoyama, M.; Calland, R. G.; Hartz, M.; Martens, K.; Quilain, B.; Simpson, C.; Suzuki, Y.; Vagins, M. R.; Hamabe, D.; Kuze, M.; Yoshida, T.; Ishitsuka, M.; Martin, J. F.; Nantais, C. M.; de Perio, P.; Tanaka, H. A.; Konaka, A.; Chen, S.; Wan, L.; Zhang, Y.; Wilkes, R. J.; Minamino, A.; Super-Kamiokande Collaboration

    2018-04-01

    An analysis of atmospheric neutrino data from all four run periods of Super-Kamiokande, optimized for sensitivity to the neutrino mass hierarchy, is presented. Confidence intervals for Δm²₃₂, sin²θ₂₃, sin²θ₁₃ and δCP are presented for normal neutrino mass hierarchy and inverted neutrino mass hierarchy hypotheses, based on atmospheric neutrino data alone. Additional constraints from reactor data on θ₁₃ and from published binned T2K data on muon neutrino disappearance and electron neutrino appearance are added to the atmospheric neutrino fit to give enhanced constraints on the above parameters. Over the range of parameters allowed at the 90% confidence level, the normal mass hierarchy is favored by between 91.9% and 94.5% based on the combined Super-Kamiokande plus T2K result.
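The roles of Δm²₃₂ and sin²θ₂₃ in such fits can be seen in the standard two-flavor vacuum approximation for muon-neutrino survival, P = 1 - sin²2θ₂₃ sin²(1.27 Δm² L/E). A minimal sketch (the parameter values are illustrative only, and the full Super-Kamiokande analysis is three-flavor with matter effects):

```python
import math

def p_mumu_survival(L_km, E_GeV, dm2_eV2=2.5e-3, sin2_theta23=0.5):
    """Two-flavor vacuum survival probability for muon neutrinos.

    P = 1 - sin^2(2 theta23) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV])
    Illustrative parameter values, not the fitted Super-Kamiokande results.
    """
    sin2_2theta = 4.0 * sin2_theta23 * (1.0 - sin2_theta23)
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# First oscillation maximum: 1.27 * dm2 * L/E = pi/2, i.e. L/E ~ 495 km/GeV here.
p_near = p_mumu_survival(10.0, 1.0)    # short baseline: little disappearance
p_dip = p_mumu_survival(494.8, 1.0)    # near-maximal disappearance
```

Atmospheric neutrinos sample a huge range of L/E, which is what lets a single data set constrain both the mixing angle (dip depth) and the mass splitting (dip position).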

  1. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    NASA Astrophysics Data System (ADS)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-10-01

    When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicate that the ASC is superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter, compared with other methods such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
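A regularized inversion of the kind discussed above solves min_m ||Gm - d||² + α²||Rm||², where R encodes the smoothness constraint and α is the regularization parameter. A minimal NumPy sketch with the classic Laplacian smoothness operator (the toy forward model and noise level are illustrative assumptions, not the paper's ASC):

```python
import numpy as np

def regularized_inversion(G, d, R, alpha):
    """Solve min_m ||G m - d||^2 + alpha^2 ||R m||^2 via the normal equations."""
    A = G.T @ G + alpha ** 2 * (R.T @ R)
    return np.linalg.solve(A, G.T @ d)

def laplacian(n):
    """Second-difference (Laplacian) smoothness operator, as in the classic LSC."""
    R = np.zeros((n - 2, n))
    for i in range(n - 2):
        R[i, i], R[i, i + 1], R[i, i + 2] = 1.0, -2.0, 1.0
    return R

# Toy problem (illustrative only): recover a smooth slip profile from noisy
# direct observations; the smoothness constraint should beat the raw data.
rng = np.random.default_rng(0)
n = 50
m_true = np.sin(np.linspace(0.0, np.pi, n))   # smooth "slip" distribution
G = np.eye(n)                                 # trivial forward operator
d = m_true + 0.1 * rng.standard_normal(n)     # noisy observations
m_hat = regularized_inversion(G, d, laplacian(n), alpha=2.0)
```

Choosing α is exactly the regularization-parameter selection problem the abstract addresses with Helmert variance component estimation; generalized cross-validation would pick it by minimizing prediction error instead.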

  2. Markov Chain Monte Carlo Inversion of Mantle Temperature and Composition, with Application to Iceland

    NASA Astrophysics Data System (ADS)

    Brown, Eric; Petersen, Kenni; Lesher, Charles

    2017-04-01

    Basalts are formed by adiabatic decompression melting of the asthenosphere, and thus provide records of the thermal, chemical and dynamical state of the upper mantle. However, uniquely constraining the importance of these factors through the lens of melting is challenging given the inevitability that primary basalts are the product of variable mixing of melts derived from distinct lithologies having different melting behaviors (e.g. peridotite vs. pyroxenite). Forward mantle melting models, such as REEBOX PRO [1], are useful tools in this regard, because they can account for differences in melting behavior and melt pooling processes, and provide estimates of bulk crust composition and volume that can be compared with geochemical and geophysical constraints, respectively. Nevertheless, these models require critical assumptions regarding mantle temperature, and lithologic abundance(s)/composition(s), all of which are poorly constrained. To provide better constraints on these parameters and their uncertainties, we have coupled a Markov Chain Monte Carlo (MCMC) sampling technique with the REEBOX PRO melting model. The MCMC method systematically samples distributions of key REEBOX PRO input parameters (mantle potential temperature, and initial abundances and compositions of the source lithologies) based on a likelihood function that describes the 'fit' of the model outputs (bulk crust composition and volume and end-member peridotite and pyroxenite melts) relative to geochemical and geophysical constraints and their associated uncertainties. As a case study, we have tested and applied the model to magmatism along Reykjanes Peninsula in Iceland, where pyroxenite has been inferred to be present in the mantle source. This locale is ideal because there exist sufficient geochemical and geophysical data to estimate bulk crust compositions and volumes, as well as the range of near-parental melts derived from the mantle. 
We find that for the case of passive upwelling, the models that best fit the geochemical and geophysical observables require elevated mantle potential temperatures (~120 °C above ambient mantle) and ~5% pyroxenite. The modeled peridotite source has a trace element composition similar to depleted MORB mantle, whereas the trace element composition of the pyroxenite is similar to enriched mid-ocean ridge basalt. These results highlight the promise of this method for efficiently exploring the range of mantle temperatures, lithologic abundances, and mantle source compositions that are most consistent with available observational constraints in individual volcanic systems. [1] Brown and Lesher (2016), G-cubed, 17, 3929-3968.
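The MCMC coupling described above can be sketched with a random-walk Metropolis sampler driven by a likelihood that scores a forward model against observations and their uncertainties. The forward model below is a hypothetical one-parameter stand-in for REEBOX PRO (the linear crust-thickness relation and the observed value are invented for illustration):

```python
import math
import random

def metropolis(log_like, x0, step, n_samples, seed=1):
    """Random-walk Metropolis sampling of a single model parameter."""
    random.seed(seed)
    x, lp = x0, log_like(x0)
    samples = []
    for _ in range(n_samples):
        x_prop = x + random.gauss(0.0, step)
        lp_prop = log_like(x_prop)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if lp_prop >= lp or random.random() < math.exp(lp_prop - lp):
            x, lp = x_prop, lp_prop
        samples.append(x)
    return samples

# Hypothetical forward model: bulk crust thickness (km) as a linear function of
# mantle potential temperature T (deg C); "observed" thickness 7.0 +/- 0.5 km.
def log_like(T):
    thickness = 0.05 * (T - 1300.0)
    return -0.5 * ((thickness - 7.0) / 0.5) ** 2

samples = metropolis(log_like, x0=1350.0, step=20.0, n_samples=20000)
post = samples[5000:]                  # discard burn-in
posterior_mean = sum(post) / len(post)
```

The retained chain approximates the posterior over T; in the paper's setup the parameter vector also includes lithologic abundances and compositions, and the likelihood compares REEBOX PRO outputs to the geochemical and geophysical constraints.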

  3. Lensing convergence in galaxy clustering in ΛCDM and beyond

    NASA Astrophysics Data System (ADS)

    Villa, Eleonora; Di Dio, Enea; Lepori, Francesca

    2018-04-01

    We study the impact of neglecting lensing magnification in galaxy clustering analyses for future galaxy surveys, considering the ΛCDM model and two extensions: massive neutrinos and modifications of General Relativity. Our study focuses on the biases on the constraints and on the estimation of the cosmological parameters. We perform a comprehensive investigation of these two effects for the upcoming photometric and spectroscopic galaxy surveys Euclid and SKA for different redshift binning configurations. We also provide a fitting formula for the magnification bias of SKA. Our results show that the information present in the lensing contribution does improve the constraints on the modified gravity parameters whereas the lensing constraining power is negligible for the ΛCDM parameters. For photometric surveys the estimation is biased for all the parameters if lensing is not taken into account. This effect is particularly significant for the modified gravity parameters. Conversely for spectroscopic surveys the bias is below one sigma for all the parameters. Our findings show the importance of including lensing in galaxy clustering analyses for testing General Relativity and to constrain the parameters which describe its modifications.

  4. Likelihood analysis of the sub-GUT MSSM in light of LHC 13-TeV data

    NASA Astrophysics Data System (ADS)

    Costa, J. C.; Bagnaschi, E.; Sakurai, K.; Borsato, M.; Buchmueller, O.; Citron, M.; De Roeck, A.; Dolan, M. J.; Ellis, J. R.; Flächer, H.; Heinemeyer, S.; Lucio, M.; Santos, D. Martínez; Olive, K. A.; Richards, A.; Weiglein, G.

    2018-02-01

    We describe a likelihood analysis using MasterCode of variants of the MSSM in which the soft supersymmetry-breaking parameters are assumed to have universal values at some scale M_in below the supersymmetric grand unification scale M_GUT, as can occur in mirage mediation and other models. In addition to M_in, such `sub-GUT' models have the 4 parameters of the CMSSM, namely a common gaugino mass m_{1/2}, a common soft supersymmetry-breaking scalar mass m_0, a common trilinear mixing parameter A and the ratio of MSSM Higgs vevs tan β, assuming that the Higgs mixing parameter μ > 0. We take into account constraints on strongly- and electroweakly-interacting sparticles from ˜36/fb of LHC data at 13 TeV and the LUX and 2017 PICO, XENON1T and PandaX-II searches for dark matter scattering, in addition to the previous LHC and dark matter constraints as well as full sets of flavour and electroweak constraints. We find a preference for M_in ˜ 10^5 to 10^9 GeV, with M_in ˜ M_GUT disfavoured by Δχ² ˜ 3 due to the BR(B_{s,d} → μ⁺μ⁻) constraint. The lower limits on strongly-interacting sparticles are largely determined by LHC searches, and similar to those in the CMSSM. We find a preference for the LSP to be a Bino or Higgsino with m_{χ̃⁰₁} ˜ 1 TeV, with annihilation via heavy Higgs bosons H/A and stop coannihilation, or chargino coannihilation, bringing the cold dark matter density into the cosmological range. We find that spin-independent dark matter scattering is likely to be within reach of the planned LUX-Zeplin and XENONnT experiments. We probe the impact of the (g-2)_μ constraint, finding similar results whether or not it is included.

  5. Dark energy equation of state parameter and its evolution at low redshift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Ashutosh; Sangwan, Archana; Jassal, H.K., E-mail: ashutosh_tripathi@fudan.edu.cn, E-mail: archanakumari@iisermohali.ac.in, E-mail: hkjassal@iisermohali.ac.in

    In this paper, we constrain dark energy models using a compendium of observations at low redshifts. We model dark energy as a barotropic fluid, considering both a constant equation of state and the case where the equation of state is a function of time. The observations considered here are Supernova Type Ia data, Baryon Acoustic Oscillation data and Hubble parameter measurements. We compare constraints obtained from these data sets and also perform a combined analysis. The combined observational constraints put strong limits on the variation of dark energy density with redshift. For varying dark energy models, the range of parameters preferred by the Supernova Type Ia data is in tension with the other low redshift distance measurements.
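Constraining a constant dark energy equation of state from Hubble parameter measurements amounts to a χ² comparison of H(z; w) against the data. A minimal sketch for flat wCDM (the data points here are synthetic, not the compilation used in the paper):

```python
import numpy as np

def hubble(z, H0, Om, w):
    """H(z) in flat wCDM with a constant dark energy equation of state w."""
    return H0 * np.sqrt(Om * (1 + z) ** 3 + (1 - Om) * (1 + z) ** (3 * (1 + w)))

# Synthetic H(z) "data" generated at w = -1 with 2% errors (illustrative only).
z_obs = np.array([0.1, 0.4, 0.8, 1.3])
H_obs = hubble(z_obs, H0=70.0, Om=0.3, w=-1.0)
sigma = 0.02 * H_obs

# Grid scan of chi^2 over the equation-of-state parameter.
w_grid = np.linspace(-1.5, -0.5, 101)
chi2 = np.array([np.sum(((hubble(z_obs, 70.0, 0.3, w) - H_obs) / sigma) ** 2)
                 for w in w_grid])
w_best = w_grid[np.argmin(chi2)]
# Delta chi^2 = 1 around the minimum gives the 1-sigma interval on w.
```

A combined analysis of the kind described above simply sums the χ² contributions from the supernova, BAO and H(z) data sets before minimizing.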

  6. Experimental determination of J-Q in the two-parameter characterization of fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, S.; Chiang, F.P.

    1995-11-01

    It is well recognized that using a single parameter to characterize crack tip deformation is no longer adequate if constraint is present. Several two-parameter characterization schemes have been proposed, including the J-T approach, the J-Q approach of Shih et al. and the J-Q approach of Sharma and Aravas. The authors propose a scheme to measure the J and Q of the J-Q theory of Sharma and Aravas. They find that, with the addition of the Q term, the experimentally measured U-field displacement component agrees well with the theoretical prediction. The agreement improves as the crack tip constraint increases. The results for a SEN and a CN specimen are presented.

  7. SKA weak lensing - I. Cosmological forecasts and the power of radio-optical cross-correlations

    NASA Astrophysics Data System (ADS)

    Harrison, Ian; Camera, Stefano; Zuntz, Joe; Brown, Michael L.

    2016-12-01

    We construct forecasts for cosmological parameter constraints from weak gravitational lensing surveys involving the Square Kilometre Array (SKA). Considering matter content, dark energy and modified gravity parameters, we show that the first phase of the SKA (SKA1) can be competitive with other Stage III experiments such as the Dark Energy Survey and that the full SKA (SKA2) can potentially form tighter constraints than Stage IV optical weak lensing experiments, such as those that will be conducted with LSST, WFIRST-AFTA or Euclid-like facilities. Using weak lensing alone, going from SKA1 to SKA2 represents improvements by factors of ˜10 in matter, ˜10 in dark energy and ˜5 in modified gravity parameters. We also show, for the first time, the powerful result that comparably tight constraints (within ˜5 per cent) for both Stage III and Stage IV experiments, can be gained from cross-correlating shear maps between the optical and radio wavebands, a process which can also eliminate a number of potential sources of systematic errors which can otherwise limit the utility of weak lensing cosmology.
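Forecasts of this kind are commonly built from Fisher matrices: independent experiments combine by adding their Fisher matrices, and the 1σ marginalized error on each parameter is the square root of the corresponding diagonal element of the inverse. A minimal sketch with invented two-parameter Fisher matrices (the numbers are illustrative, not the SKA forecasts):

```python
import numpy as np

def marginalized_errors(F):
    """1-sigma marginalized parameter errors from a Fisher matrix F."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Invented Fisher matrices for two independent surveys over, say, (Omega_m, w).
F_radio = np.array([[4.0e4, 1.0e4],
                    [1.0e4, 9.0e3]])
F_optical = np.array([[3.0e4, 0.8e4],
                      [0.8e4, 7.0e3]])

errs_radio = marginalized_errors(F_radio)
errs_joint = marginalized_errors(F_radio + F_optical)  # independent data: add F's
```

A cross-correlation analysis of the kind highlighted in the abstract goes beyond simple Fisher addition, since the shear maps share cosmological signal while many systematics remain uncorrelated between wavebands.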

  8. Parameterized Complexity Results for General Factors in Bipartite Graphs with an Application to Constraint Programming

    NASA Astrophysics Data System (ADS)

    Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders

    The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V |, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.
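    For small instances, the consistency subproblem described above (every vertex in U assigned the list {1}) can be checked directly: each u must choose exactly one incident edge, and the resulting degree of each v must lie in its list. The brute-force sketch below is illustrative only; it does not implement the paper's fixed-parameter algorithm, whose dynamic programming over acyclic instances is more involved.

```python
from itertools import product

def general_factor_feasible(adj, v_lists):
    """Brute-force check of the consistency subproblem: every vertex u in U
    must have degree exactly 1 (choose one incident edge), and the resulting
    degree of each v in V must lie in its assigned list.
    adj: dict u -> list of neighbours in V; v_lists: dict v -> set of ints."""
    us = list(adj)
    for choice in product(*(adj[u] for u in us)):  # one neighbour per u
        load = {v: 0 for v in v_lists}
        for v in choice:
            load[v] += 1
        if all(load[v] in v_lists[v] for v in v_lists):
            return True
    return False

# Two u-vertices, both adjacent to v1 and v2; the v-lists force one each.
adj = {"u1": ["v1", "v2"], "u2": ["v1", "v2"]}
print(general_factor_feasible(adj, {"v1": {1}, "v2": {1}}))  # True
print(general_factor_feasible(adj, {"v1": {0}, "v2": {0}}))  # False
```

    The exponential enumeration over |U| choices is exactly what the fixed-parameter result avoids when the parameter is |V| instead.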

  9. A globally convergent Lagrange and barrier function iterative algorithm for the traveling salesman problem.

    PubMed

    Dang, C; Xu, L

    2001-03-01

    In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.
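    As an illustration of the entropy-type barrier idea (not the paper's full algorithm, which also handles the TSP's equality constraints via Lagrange multipliers), the sketch below minimizes a linear objective over the probability simplex for a descending sequence of barrier parameters; with this simple objective the barrier minimizer has a closed softmax form, so every iterate satisfies the nonnegativity constraints automatically and approaches a vertex as the parameter decreases.

```python
import numpy as np

def entropy_barrier_min(c, mus):
    """Minimise c.x + mu * sum(x_i log x_i) over the probability simplex
    for a descending sequence of barrier parameters mu. For this linear
    objective the minimiser is the softmax x_i proportional to exp(-c_i/mu),
    so nonnegativity holds automatically at every barrier level."""
    x = None
    for mu in mus:
        z = np.exp(-(c - c.min()) / mu)  # shift by c.min() for stability
        x = z / z.sum()
    return x

c = np.array([3.0, 1.0, 2.0])
x = entropy_barrier_min(c, [1.0, 0.1, 0.01])
print(np.round(x, 3))  # mass concentrates on the cheapest coordinate
```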

  10. Elimination of spiral waves in a locally connected chaotic neural network by a dynamic phase space constraint.

    PubMed

    Li, Yang; Oku, Makito; He, Guoguang; Aihara, Kazuyuki

    2017-04-01

    In this study, a method is proposed that eliminates spiral waves in a locally connected chaotic neural network (CNN) under some simplified conditions, using a dynamic phase space constraint (DPSC) as a control method. In this method, a control signal is constructed from the feedback internal states of the neurons to detect phase singularities based on their amplitude reduction, before modulating a threshold value to truncate the refractory internal states of the neurons and terminate the spirals. Simulations showed that with appropriate parameter settings, the network was directed from a spiral wave state into either a plane wave (PW) state or a synchronized oscillation (SO) state, where the control vanished automatically and left the original CNN model unaltered. Each type of state had a characteristic oscillation frequency, where spiral wave states had the highest, and the intra-control dynamics was dominated by low-frequency components, thereby indicating slow adjustments to the state variables. In addition, the PW-inducing and SO-inducing control processes were distinct, where the former generally had longer durations but smaller average proportions of affected neurons in the network. Furthermore, variations in the control parameter allowed partial selectivity of the control results, which were accompanied by modulation of the control processes. The results of this study broaden the applicability of DPSC to chaos control and they may also facilitate the utilization of locally connected CNNs in memory retrieval and the exploration of traveling wave dynamics in biological neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Estimation and detection information trade-off for x-ray system optimization

    NASA Astrophysics Data System (ADS)

    Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali

    2016-05-01

    X-ray Computed Tomography (CT) systems perform complex imaging tasks that involve both detecting targets and estimating system parameters; a baggage imaging system, for example, performs threat detection while also generating reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices, there is a need for a concise metric that considers both detection information and estimation information and presents the user with the collection of possible optimal outcomes. In this paper a graphical analysis, the Estimation and Detection Information Trade-off (EDIT), will be explored. EDIT produces curves that allow a decision to be made for system optimization based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the joint space of estimation information and detection information, where the user is free to pick their own method of calculating these measures: any desired figure of merit may be chosen for detection information and estimation information, and the EDIT curves will then provide the collection of optimal outcomes. The paper first looks at two methods of creating EDIT curves. The curves can be calculated by evaluating a wide variety of systems and finding the optimal system that maximizes a figure of merit; alternatively, EDIT can be found as an upper bound on the information from a collection of systems. These two methods allow the user to choose the method of calculation that best fits the constraints of their actual system.
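    The idea of an EDIT curve as the collection of optimal outcomes can be illustrated with a Pareto-frontier computation; the sketch below is a hypothetical reconstruction, not the authors' code, and assumes each candidate system has already been scored with a (detection information, estimation information) pair.

```python
def edit_frontier(points):
    """Given (detection_info, estimation_info) pairs for candidate systems,
    keep only the non-dominated ones: the collection of optimal outcomes
    that a trade-off curve like EDIT traces out."""
    frontier = []
    for det, est in sorted(points, reverse=True):  # descending detection info
        if not frontier or est > frontier[-1][1]:
            frontier.append((det, est))
    return frontier[::-1]  # ascending in detection information

systems = [(0.5, 2.0), (1.0, 1.5), (1.5, 1.8), (2.0, 0.5)]
print(edit_frontier(systems))  # (1.0, 1.5) is dominated by (1.5, 1.8)
```

    A designer would then pick a point on this frontier according to the costs assigned to detection and estimation performance.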

  12. Clouds on the hot Jupiter HD189733b: Constraints from the reflection spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barstow, J. K.; Aigrain, S.; Irwin, P. G. J.

    2014-05-10

    The hot Jupiter HD 189733b is probably the best studied of the known extrasolar planets, with published transit and eclipse spectra covering the near UV to mid-IR range. Recent work on the transmission spectrum has shown clear evidence for the presence of clouds in its atmosphere, which significantly increases the model atmosphere parameter space that must be explored in order to fully characterize this planet. In this work, we apply the NEMESIS atmospheric retrieval code to the recently published HST/STIS reflection spectrum, and also to the dayside thermal emission spectrum in light of new Spitzer/IRAC measurements, as well as our own re-analysis of the HST/NICMOS data. We first use the STIS data to place some constraints on the nature of clouds on HD 189733b and explore solution degeneracy between different cloud properties and the abundance of Na in the atmosphere; as already noted in previous work, absorption due to Na plays a significant role in determining the shape of the reflection spectrum. We then perform a new retrieval of the temperature profile and abundances of H2O, CO2, CO, and CH4 from the dayside thermal emission spectrum. Finally, we investigate the effect of including cloud in the model on this retrieval process. We find that the current quality of data does not warrant the extra complexity introduced by including cloud in the model; however, future data are likely to be of sufficient resolution and signal-to-noise that a more complete model, including scattering particles, will be required.

  13. Clouds on the Hot Jupiter HD189733b: Constraints from the Reflection Spectrum

    NASA Astrophysics Data System (ADS)

    Barstow, J. K.; Aigrain, S.; Irwin, P. G. J.; Hackler, T.; Fletcher, L. N.; Lee, J. M.; Gibson, N. P.

    2014-05-01

    The hot Jupiter HD 189733b is probably the best studied of the known extrasolar planets, with published transit and eclipse spectra covering the near UV to mid-IR range. Recent work on the transmission spectrum has shown clear evidence for the presence of clouds in its atmosphere, which significantly increases the model atmosphere parameter space that must be explored in order to fully characterize this planet. In this work, we apply the NEMESIS atmospheric retrieval code to the recently published HST/STIS reflection spectrum, and also to the dayside thermal emission spectrum in light of new Spitzer/IRAC measurements, as well as our own re-analysis of the HST/NICMOS data. We first use the STIS data to place some constraints on the nature of clouds on HD 189733b and explore solution degeneracy between different cloud properties and the abundance of Na in the atmosphere; as already noted in previous work, absorption due to Na plays a significant role in determining the shape of the reflection spectrum. We then perform a new retrieval of the temperature profile and abundances of H2O, CO2, CO, and CH4 from the dayside thermal emission spectrum. Finally, we investigate the effect of including cloud in the model on this retrieval process. We find that the current quality of data does not warrant the extra complexity introduced by including cloud in the model; however, future data are likely to be of sufficient resolution and signal-to-noise that a more complete model, including scattering particles, will be required.

  14. Optimum structural design with static aeroelastic constraints

    NASA Technical Reports Server (NTRS)

    Bowman, Keith B; Grandhi, Ramana V.; Eastep, F. E.

    1989-01-01

    The static aeroelastic performance characteristics (divergence velocity, control effectiveness, and lift effectiveness) are considered in obtaining an optimum-weight structure. A typical swept wing structure is used, with upper and lower skin, spar, and rib thicknesses and spar cap and vertical post cross-sectional areas as the design parameters. Incompressible aerodynamic strip theory is used to derive the constraint formulations and aerodynamic load matrices. A Sequential Unconstrained Minimization Technique (SUMT) algorithm is used to optimize the wing structure to meet the desired performance constraints.

  15. Constraints on Black Hole Spin in a Sample of Broad Iron Line AGN

    NASA Technical Reports Server (NTRS)

    Brenneman, Laura W.; Reynolds, Christopher S.

    2008-01-01

    We present a uniform X-ray spectral analysis of nine type-1 active galactic nuclei (AGN) that have been previously found to harbor relativistically broadened iron emission lines. We show that the need for relativistic effects in the spectrum is robust even when one includes continuum "reflection" from the accretion disk. We then proceed to model these relativistic effects in order to constrain the spin of the supermassive black holes in these AGN. Our principal assumption, supported by recent simulations of geometrically-thin accretion disks, is that no iron line emission (or any associated X-ray reflection features) can originate from the disk within the innermost stable circular orbit. Under this assumption, which tends to lead to constraints in the form of lower limits on the spin parameter, we obtain non-trivial spin constraints on five AGN. The spin parameters of these sources range from moderate (a ≈ 0.6) to high (a > 0.96). Our results allow, for the first time, an observational constraint on the spin distribution function of local supermassive black holes. Parameterizing this as a power law in the dimensionless spin parameter, f(a) ∝ |a|^ζ, we present the probability distribution for ζ implied by our results. Our results suggest 90% and 95% confidence limits of ζ > -0.09 and ζ > -0.3, respectively.

  16. A 125 GeV fat Higgs at large tan β

    DOE PAGES

    Menon, Arjun; Raj, Nirmal

    2015-12-02

    In this paper, we study the viability of regions of large tan β within the framework of Fat Higgs/λ-SUSY models. We compute the one-loop effective potential to find the corrections to the Higgs boson mass due to the heavy non-standard Higgs bosons. As the tree-level contribution to the Higgs boson mass is suppressed at large tan β, these one-loop corrections are crucial to raising the Higgs boson mass to the measured LHC value. By raising the Higgsino and singlino mass parameters, typical electroweak precision constraints can also be avoided. We illustrate these new regions of Fat Higgs/λ-SUSY parameter space by finding regions of large tan β that are consistent with all experimental constraints, including direct dark matter detection experiments, relic density limits and the invisible decay width of the Z boson. We find that there exist regions around λ = 1.25, tan β = 50 and a uniform pseudo-scalar mass 4 TeV ≲ M_A ≲ 8 TeV which are consistent with all present phenomenological constraints. In this region the dark matter relic abundance and direct detection limits are satisfied by a lightest neutralino that is mostly bino or singlino. As an interesting aside, we also find a region of low tan β and small singlino mass parameter where a well-tempered neutralino avoids all cosmological and direct detection constraints.

  17. A 125 GeV fat Higgs at large tan β

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menon, Arjun; Raj, Nirmal

    In this paper, we study the viability of regions of large tan β within the framework of Fat Higgs/λ-SUSY models. We compute the one-loop effective potential to find the corrections to the Higgs boson mass due to the heavy non-standard Higgs bosons. As the tree-level contribution to the Higgs boson mass is suppressed at large tan β, these one-loop corrections are crucial to raising the Higgs boson mass to the measured LHC value. By raising the Higgsino and singlino mass parameters, typical electroweak precision constraints can also be avoided. We illustrate these new regions of Fat Higgs/λ-SUSY parameter space by finding regions of large tan β that are consistent with all experimental constraints, including direct dark matter detection experiments, relic density limits and the invisible decay width of the Z boson. We find that there exist regions around λ = 1.25, tan β = 50 and a uniform pseudo-scalar mass 4 TeV ≲ M_A ≲ 8 TeV which are consistent with all present phenomenological constraints. In this region the dark matter relic abundance and direct detection limits are satisfied by a lightest neutralino that is mostly bino or singlino. As an interesting aside, we also find a region of low tan β and small singlino mass parameter where a well-tempered neutralino avoids all cosmological and direct detection constraints.

  18. A constraint-based evolutionary learning approach to the expectation maximization for optimal estimation of the hidden Markov model for speech signal modeling.

    PubMed

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2009-02-01

    This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of the CEL-EM is that it is applicable to the estimation of constraint-based models with many constraints and large numbers of parameters that are normally estimated with EM, such as the HMM. Two constraint-based versions of the CEL-EM with different fusion strategies are proposed, using a constraint-based EA together with EM for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism of EA; the other transforms the constrained optimization problem into an unconstrained problem using Lagrange multipliers. The fusion strategies of the CEL-EM use a staged-fusion approach, in which EM is plugged into the EA periodically, after the EA has executed for a specific period of time, to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) is proposed that uses variable segmentation to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).

  19. Multi-Objective Trajectory Optimization of a Hypersonic Reconnaissance Vehicle with Temperature Constraints

    NASA Astrophysics Data System (ADS)

    Masternak, Tadeusz J.

    This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three degree of freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.

  20. Resolving mobility constraints impeding rural seniors' access to regionalized services.

    PubMed

    Ryser, Laura; Halseth, Greg

    2012-01-01

    Rural and small town places in developed economies are aging. While attention has been paid to the local transportation needs of rural seniors, fewer researchers have explored their regional transportation needs. This is important given policies that have reduced and regionalized many services and supports. This article explores mobility constraints impeding rural seniors' access to regionalized services using the example of northern British Columbia. Drawing upon several qualitative studies, we explore geographical, maintenance, organizational, communication, human resources, infrastructure, and financial constraints that affect seniors' regional mobility. Our findings indicate that greater coordination across multiple government agencies and jurisdictions is needed and more supportive policies and resources must be in place to facilitate a comprehensive regional transportation strategy. In addition to discussing the complexities of these geographies, the article identifies innovative solutions that have been deployed in northern British Columbia to support an aging population. This research provides a foundation for developing a comprehensive understanding of the key issues that need to be addressed to inform strategic investments in infrastructure and programs that support the regional mobility and, hence, healthy aging of rural seniors.

  1. Regional-scale estimates of surface moisture availability and thermal inertia using remote thermal measurements

    NASA Technical Reports Server (NTRS)

    Carlson, T. N.

    1986-01-01

    A review is presented of numerical models which were developed to interpret thermal IR data and to identify the governing parameters and surface energy fluxes recorded in the images. Analytic, predictive, diagnostic and empirical models are described. The limitations of each type of modeling approach are explored in terms of the error sources and inherent constraints due to theoretical or measurement limitations. Sample results of regional-scale soil moisture or evaporation patterns derived from the Heat Capacity Mapping Mission and GOES satellite data through application of the predictive model devised by Carlson (1981) are discussed. The analysis indicates that pattern recognition will probably be highest when data are collected over flat, arid, sparsely vegetated terrain. The soil moisture data then obtained may be accurate to within 10-20 percent.

  2. Office of exploration overview

    NASA Technical Reports Server (NTRS)

    Alred, John

    1989-01-01

    The NASA Office of Exploration case studies for FY89 are reviewed with regard to study ground rules and constraints. Three study scenarios are presented: lunar evolution, Mars evolution, and Mars expedition with emphasis on the key mission objectives.

  3. Re-Assembling Formal Features in Second Language Acquisition: Beyond Minimalism

    ERIC Educational Resources Information Center

    Carroll, Susanne E.

    2009-01-01

    In this commentary, Lardiere's discussion of features is compared with the use of features in constraint-based theories, and it is argued that constraint-based theories might offer a more elegant account of second language acquisition (SLA). Further evidence is reported to question the accuracy of Chierchia's (1998) Nominal Mapping Parameter.…

  4. Light weakly coupled axial forces: models, constraints, and projections

    DOE PAGES

    Kahn, Yonatan; Krnjaic, Gordan; Mishra-Sharma, Siddharth; ...

    2017-05-01

    Here, we investigate the landscape of constraints on MeV-GeV scale, hidden U(1) forces with nonzero axial-vector couplings to Standard Model fermions. While the purely vector-coupled dark photon, which may arise from kinetic mixing, is a well-motivated scenario, several MeV-scale anomalies motivate a theory with axial couplings which can be UV-completed consistent with Standard Model gauge invariance. Moreover, existing constraints on dark photons depend on products of various combinations of axial and vector couplings, making it difficult to isolate the effects of axial couplings for particular flavors of SM fermions. We present a representative renormalizable, UV-complete model of a dark photon with adjustable axial and vector couplings, discuss its general features, and show how some UV constraints may be relaxed in a model with nonrenormalizable Yukawa couplings at the expense of fine-tuning. We survey the existing parameter space and the projected reach of planned experiments, briefly commenting on the relevance of the allowed parameter space to low-energy anomalies in π0 and 8Be* decay.

  5. Light weakly coupled axial forces: models, constraints, and projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kahn, Yonatan; Krnjaic, Gordan; Mishra-Sharma, Siddharth

    Here, we investigate the landscape of constraints on MeV-GeV scale, hidden U(1) forces with nonzero axial-vector couplings to Standard Model fermions. While the purely vector-coupled dark photon, which may arise from kinetic mixing, is a well-motivated scenario, several MeV-scale anomalies motivate a theory with axial couplings which can be UV-completed consistent with Standard Model gauge invariance. Moreover, existing constraints on dark photons depend on products of various combinations of axial and vector couplings, making it difficult to isolate the effects of axial couplings for particular flavors of SM fermions. We present a representative renormalizable, UV-complete model of a dark photon with adjustable axial and vector couplings, discuss its general features, and show how some UV constraints may be relaxed in a model with nonrenormalizable Yukawa couplings at the expense of fine-tuning. We survey the existing parameter space and the projected reach of planned experiments, briefly commenting on the relevance of the allowed parameter space to low-energy anomalies in π0 and 8Be* decay.

  6. Hard Constraints in Optimization Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2008-01-01

    This paper proposes a methodology for the analysis and design of systems subject to parametric uncertainty where design requirements are specified via hard inequality constraints. Hard constraints are those that must be satisfied for all parameter realizations within a given uncertainty model. Uncertainty models given by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles, are the focus of this paper. These models, which are also quite practical, allow for a rigorous mathematical treatment within the proposed framework. Hard constraint feasibility is determined by sizing the largest uncertainty set for which the design requirements are satisfied. Analytically verifiable assessments of robustness are attained by comparing this set with the actual uncertainty model. Strategies that enable the comparison of the robustness characteristics of competing design alternatives, the description and approximation of the robust design space, and the systematic search for designs with improved robustness are also proposed. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, this methodology is applicable to a broad range of engineering problems.
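    The sizing step can be sketched as a one-dimensional search: given a routine that returns the worst-case constraint value over a hyper-sphere of radius r (assumed increasing in r), bisection finds the largest radius for which the hard constraint still holds. The example below is illustrative only, using a hypothetical linear requirement whose worst case is known in closed form; it is not the paper's implementation.

```python
import numpy as np

def max_feasible_radius(g_worst, r_hi, tol=1e-6):
    """Bisection for the largest radius r in [0, r_hi] such that the
    worst-case constraint value over the uncertainty hyper-sphere of
    radius r stays non-positive (g_worst is assumed increasing in r)."""
    lo, hi = 0.0, r_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g_worst(mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical linear requirement g(p) = a.p - 1 around a nominal p0 = 0:
# the worst case over a sphere of radius r is r * ||a|| - 1, so the exact
# maximal feasible radius is 1 / ||a||.
a = np.array([3.0, 4.0])                       # ||a|| = 5
r = max_feasible_radius(lambda r: r * np.linalg.norm(a) - 1.0, r_hi=1.0)
print(round(r, 4))  # 0.2
```

    Comparing the returned radius with the radius of the actual uncertainty model is then the robustness assessment described above.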

  7. Likelihood analysis of supersymmetric SU(5) GUTs

    DOE PAGES

    Bagnaschi, Emanuele; Costa, J. C.; Sakurai, K.; ...

    2017-02-16

    Here, we perform a likelihood analysis of the constraints from accelerator experiments and astrophysical observations on supersymmetric (SUSY) models with SU(5) boundary conditions on soft SUSY-breaking parameters at the GUT scale. The parameter space of the models studied has 7 parameters: a universal gaugino mass $$m_{1/2}$$, distinct masses for the scalar partners of matter fermions in five- and ten-dimensional representations of SU(5), $$m_5$$ and $$m_{10}$$, and for the $$\\mathbf{5}$$ and $$\\mathbf{\\bar 5}$$ Higgs representations $$m_{H_u}$$ and $$m_{H_d}$$, a universal trilinear soft SUSY-breaking parameter $$A_0$$, and the ratio of Higgs vevs $$\\tan \\beta$$. In addition to previous constraints from direct sparticle searches, low-energy and flavour observables, we incorporate constraints based on preliminary results from 13 TeV LHC searches for jets + MET events and long-lived particles, as well as the latest PandaX-II and LUX searches for direct Dark Matter detection. In addition to previously-identified mechanisms for bringing the supersymmetric relic density into the range allowed by cosmology, we identify a novel $${\\tilde u_R}/{\\tilde c_R} - \\tilde{\\chi}^0_1$$ coannihilation mechanism that appears in the supersymmetric SU(5) GUT model and discuss the role of $${\\tilde \

  8. Integrated cosmological probes: concordance quantified

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicola, Andrina; Amara, Adam; Refregier, Alexandre, E-mail: andrina.nicola@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch

    2017-10-01

    Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure for consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance on the parameter choice. When consistent model assumptions are used, the data sets considered in this work all agree with each other and ΛCDM, without evidence for tensions.

  9. Constraints on pulsed emission model for repeating FRB 121102

    NASA Astrophysics Data System (ADS)

    Kisaka, Shota; Enoto, Teruaki; Shibata, Shinpei

    2017-12-01

    Recent localization of the repeating fast radio burst (FRB) 121102 revealed the distance of its host galaxy and the luminosities of the bursts. We investigated constraints on the young neutron star (NS) model, under which (a) the FRB intrinsic luminosity is supported by the spin-down energy, and (b) the FRB duration is shorter than the NS rotation period. In the case of a circular cone emission geometry, conditions (a) and (b) determine the NS parameters within very narrow ranges, compared with those obtained from condition (a) alone as discussed in previous works. Anisotropy of the pulsed emission does not affect the area of the allowed parameter region, by virtue of condition (b). The determined parameters are consistent with those independently limited by the properties of the possible persistent radio counterpart and the circumburst environment, such as the surrounding material. Since the NS in the allowed parameter region is older than the spin-down timescale, the hypothetical GRP (giant radio pulse)-like model predicts a rapid radio flux decay of ≲1 Jy within a few years as the spin-down luminosity decreases. Continuous monitoring will provide constraints on the young NS models. If no flux evolution is seen, we need to consider an alternative model, e.g., a magnetically powered flare.

  10. Constrained inference in mixed-effects models for longitudinal data with application to hearing loss.

    PubMed

    Davidov, Ori; Rosen, Sophia

    2011-04-01

    In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, in hearing loss studies, we expect hearing to deteriorate with time. This means that hearing thresholds, which reflect hearing acuity, will, on average, increase over time. Therefore, the regression coefficients associated with the mean effect of time on hearing ability will be constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation-conditional maximization either (ECME) algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean square error, on the unconstrained estimators. In some settings, the improvement may be substantial. Hypothesis testing procedures that incorporate the constraints are developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated. Their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests. These improvements may be substantial. The methodology is used to analyze a hearing loss study.
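    The benefit of respecting such a constraint can be seen in a toy version of the hearing-loss setting: if the true time slope is non-negative, truncating a negative least-squares slope at zero can never increase the squared error. The simulation below is a hypothetical illustration of that projection idea, not the authors' ECME-based procedure for mixed-effects models.

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true = 0.02          # true slope: hearing thresholds drift slowly upward
n, reps = 20, 2000
t = np.arange(n, dtype=float)

mse_u = mse_c = 0.0
for _ in range(reps):
    y = beta_true * t + rng.normal(0.0, 2.0, size=n)
    b_hat = t @ y / (t @ t)     # unconstrained least-squares slope
    b_con = max(b_hat, 0.0)     # projected onto the constraint slope >= 0
    mse_u += (b_hat - beta_true) ** 2
    mse_c += (b_con - beta_true) ** 2

# Projection never increases the squared error when the true slope is >= 0.
print(mse_c <= mse_u)  # True
```

    The gain is largest when the noise is big relative to the true slope, so that the unconstrained estimate is often negative.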

  11. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure is presented for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled-data system. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions are presented for the gradients of the cost function and of the design constraints on the mean square responses with respect to the control law design variables.
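    The steady-state rms responses being constrained can be evaluated from a discrete Lyapunov equation; a minimal sketch with hypothetical closed-loop matrices (not the paper's model):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative 2-state sampled-data closed-loop model (hypothetical numbers).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])   # stable closed-loop state matrix
G = np.array([[0.0],
              [1.0]])        # disturbance input
W = G @ G.T                  # unit-intensity white-noise covariance
C = np.array([[1.0, 0.0]])   # design response of interest

# Steady-state state covariance X solves  X = A X A^T + W.
X = solve_discrete_lyapunov(A, W)

# Steady-state rms of the design response; a design constraint caps this value.
rms = float(np.sqrt(C @ X @ C.T))
print(rms)
```

    In the optimization described above, this rms value would be recomputed (along with its gradient) at each iterate of the control-law design variables.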

  12. The Influence of Individual Driver Characteristics on Congestion Formation

    NASA Astrophysics Data System (ADS)

    Wang, Lanjun; Zhang, Hao; Meng, Huadong; Wang, Xiqin

    Previous works have pointed out that one of the causes of traffic congestion is instability in traffic flow. In this study, we investigate theoretically how the characteristics of individual drivers influence this instability. The discussion is based on the optimal velocity model, which has three parameters related to individual driver characteristics, and we specify the mappings between the model parameters and those characteristics. With linear stability analysis, we obtain a condition for when instability occurs and a constraint on how the model parameters influence the unstable traffic flow. We also determine how the region of unstable flow densities depends on these parameters. In addition, a Langevin approach validates theoretically that, under this constraint, the macroscopic character of the unstable traffic flow becomes a mixture of free flow and congestion. All of these results imply that both overly aggressive and overly conservative drivers are capable of triggering traffic congestion.
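    For the standard optimal velocity model, uniform flow with headway h* is linearly stable only when the driver sensitivity a exceeds 2 V'(h*); a short sketch of that condition using the common Bando form of V, which may differ from this paper's exact parameterization:

```python
import numpy as np

# Bando-type optimal velocity function (a common choice; the paper's exact
# parameterization may differ).
def V(h):
    return np.tanh(h - 2.0) + np.tanh(2.0)

def dV(h):
    return 1.0 / np.cosh(h - 2.0) ** 2

# Linear stability of uniform flow with headway h_star: stable iff
# a > 2 V'(h_star), where a is the sensitivity (inverse reaction time).
def is_stable(a, h_star):
    return bool(a > 2.0 * dV(h_star))

# V'(2) = 1, so the threshold at h_star = 2 is a > 2: a responsive driver
# keeps the flow stable, a sluggish one lets perturbations grow.
print(is_stable(2.5, 2.0))  # -> True
print(is_stable(1.5, 2.0))  # -> False
```

    Sweeping h_star at fixed a traces out exactly the unstable density region the abstract refers to.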

  13. The reconstruction of tachyon inflationary potentials

    NASA Astrophysics Data System (ADS)

    Fei, Qin; Gong, Yungui; Lin, Jiong; Yi, Zhu

    2017-08-01

    We derive a lower bound on the field excursion for tachyon inflation, which is determined by the amplitude of the scalar perturbation and the number of e-folds before the end of inflation. Using the relations between observables such as ns and r and the slow-roll parameters, we reconstruct three classes of tachyon potentials. The model parameters are determined from the observations before the potentials are reconstructed, and the observations prefer a concave potential. We also discuss the constraints from the reheating phase preceding radiation domination for the three classes of models, assuming that the equation of state parameter wre during reheating is constant. Depending on the model parameters and the value of wre, the constraints on Nre and Tre differ. As ns increases, the allowed reheating epoch becomes longer for wre = -1/3, 0 and 1/6, while it becomes shorter for wre = 2/3.

  14. Observational constraints on variable equation of state parameters of dark matter and dark energy after Planck

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh; Xu, Lixin

    2014-10-01

    In this paper, we study a cosmological model in general relativity within the framework of spatially flat Friedmann-Robertson-Walker space-time filled with ordinary matter (baryonic), radiation, dark matter and dark energy, where the latter two components are described by Chevallier-Polarski-Linder equation of state parameters. We utilize the observational data sets from SNLS3, BAO and Planck + WMAP9 + WiggleZ measurements of matter power spectrum to constrain the model parameters. We find that the current observational data offer tight constraints on the equation of state parameter of dark matter. We consider the perturbations and study the behavior of dark matter by observing its effects on CMB and matter power spectra. We find that the current observational data favor the cold dark matter scenario with the cosmological constant type dark energy at the present epoch.
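    The Chevallier-Polarski-Linder parameterization used here has a closed-form density evolution; a minimal sketch with illustrative parameter values (not the paper's fits):

```python
import numpy as np

# CPL equation of state: w(a) = w0 + wa * (1 - a), with a = 1/(1+z).
def w_cpl(a, w0=-1.0, wa=0.0):
    return w0 + wa * (1.0 - a)

# Density evolution rho(a)/rho0 implied by the CPL form:
# rho(a)/rho0 = a^{-3(1+w0+wa)} * exp(-3 wa (1-a)).
def rho_ratio(a, w0=-1.0, wa=0.0):
    return a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))

# Dimensionless expansion rate for a flat model with matter plus a CPL
# dark-energy component (illustrative Omega_m, not the paper's constraint).
def E(z, om=0.3, w0=-1.0, wa=0.0):
    a = 1.0 / (1.0 + z)
    return np.sqrt(om * a ** -3 + (1.0 - om) * rho_ratio(a, w0, wa))

print(E(0.0))  # -> 1.0 by construction
```

    The paper applies the same w(a) form to the dark-matter component as well, replacing the usual a^-3 scaling of cold dark matter.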

  15. Political Economy of Cost-Sharing in Higher Education: The Case of Jordan

    ERIC Educational Resources Information Center

    Kanaan, Taher H.; Al-Salamat, Mamdouh N.; Hanania, May D.

    2011-01-01

    This article analyzes patterns of expenditure on higher education in Jordan, explores the current system's adequacy, efficiency, and equity, and identifies its strengths and weaknesses in light of current constraints and future challenges. Among the constraints are the relatively low public expenditure on higher education, leaving households to…

  16. Constraints on Statistical Computations at 10 Months of Age: The Use of Phonological Features

    ERIC Educational Resources Information Center

    Gonzalez-Gomez, Nayeli; Nazzi, Thierry

    2015-01-01

    Recently, several studies have argued that infants capitalize on the statistical properties of natural languages to acquire the linguistic structure of their native language, but the kinds of constraints which apply to statistical computations remain largely unknown. Here we explored French-learning infants' perceptual preference for…

  17. Green School Grounds as Sites for Outdoor Learning: Barriers and Opportunities

    ERIC Educational Resources Information Center

    Dyment, Janet E.

    2005-01-01

    In their review of evidence-based research entitled "A Review of Research on Outdoor Learning," Rickinson "et al." (2004) identify five key constraints that limit the amount of outdoor learning. This paper explores whether green school grounds might be a location where these constraints could be minimised. Specifically, it…

  18. Perceived constraints to art museum attendance

    Treesearch

    Jinhee Jun; Gerard Kyle; Joseph T. O'Leary

    2007-01-01

    We explored selected socio-demographic factors that influence the perception of constraints to art museum attendance among a sample of interested individuals who were not currently visiting art museums. Data from the Survey of Public Participation in the Arts (SPPA), a nationwide survey, were used for this study. Using multivariate analysis of variance, we...

  19. "Starting from Ground Zero:" Constraints and Experiences of Adult Women Returning to College

    ERIC Educational Resources Information Center

    Deutsch, Nancy L.; Schmertz, Barbara

    2011-01-01

    Women adult students face particular constraints when pursuing degrees. This paper uses focus group data to explore the educational pathways, barriers, and supports of women students. Women's educations are shaped by personal and structural gendered forces, including family, economic, and workplace issues. Women report conflict over short-term…

  20. Choice within Constraints: Mothers and Schooling.

    ERIC Educational Resources Information Center

    David, Miriam; Davies, Jackie; Edwards, Rosalind; Reay, Diane; Standing, Kay

    1997-01-01

    Explores, from a feminist perspective, the discourses of choice regarding how women make their choices as consumers in the education marketplace. It argues that mothers as parents are not free to choose but act within a range of constraints, i.e., their choices are limited by structural and moral possibilities in a patriarchal and racist society.…

  1. Open innovation in the European space sector: Existing practices, constraints and opportunities

    NASA Astrophysics Data System (ADS)

    van Burg, Elco; Giannopapa, Christina; Reymen, Isabelle M. M. J.

    2017-12-01

    To enhance the innovative output and societal spillover of the European space sector, the open innovation approach is becoming popular. Yet open innovation, which refers to innovation practices that cross the borders of individual firms, faces constraints. To explore these constraints and identify opportunities, this study draws on interviews with government/agency officials and space technology entrepreneurs. The interviews highlight three topic areas with constraints and opportunities: 1) mainly one-directional knowledge flows (from outside the space sector to inside), 2) knowledge and property management, and 3) the role of small- and medium-sized companies. These results bear important implications for innovation practices in the space sector.

  2. Cluster functions and scattering amplitudes for six and seven points

    DOE PAGES

    Harrington, Thomas; Spradlin, Marcus

    2017-07-05

    Scattering amplitudes in planar super-Yang-Mills theory satisfy several basic physical and mathematical constraints, including physical constraints on their branch cut structure and various empirically discovered connections to the mathematics of cluster algebras. The power of the bootstrap program for amplitudes is inversely proportional to the size of the intersection between these physical and mathematical constraints: ideally we would like a list of constraints which determine scattering amplitudes uniquely. Here, we explore this intersection quantitatively for two-loop six- and seven-point amplitudes by providing a complete taxonomy of the Gr(4, 6) and Gr(4, 7) cluster polylogarithm functions of [15] at weight 4.

  3. LPV Modeling and Control for Active Flutter Suppression of a Smart Airfoil

    NASA Technical Reports Server (NTRS)

    Al-Hajjar, Ali M. H.; Al-Jiboory, Ali Khudhair; Swei, Sean Shan-Min; Zhu, Guoming

    2018-01-01

    In this paper, a novel technique for linear parameter varying (LPV) modeling and control of a smart airfoil for active flutter suppression is proposed, where the smart airfoil has a groove along its chord and contains a moving mass that is used to control the airfoil pitching and plunging motions. A new LPV modeling technique is proposed that uses the mass position as a scheduling parameter to describe the physical constraint of the moving mass; in addition, the hard constraint at the boundaries is realized by proper selection of the parameter varying function. The position of the moving mass and the free-stream airspeed are therefore the scheduling parameters in this study. A state-feedback LPV gain-scheduling controller with guaranteed H-infinity performance is presented that utilizes the dynamics of the moving mass as a scheduling parameter at a given airspeed. Numerical simulations demonstrate the effectiveness of the proposed LPV control architecture, which significantly improves performance while reducing the control effort.

  4. Updated constraints on self-interacting dark matter from Supernova 1987A

    NASA Astrophysics Data System (ADS)

    Mahoney, Cameron; Leibovich, Adam K.; Zentner, Andrew R.

    2017-08-01

    We revisit SN1987A constraints on light, hidden-sector gauge bosons ("dark photons") that are coupled to the standard model through kinetic mixing with the photon. These constraints arise because excessive bremsstrahlung radiation of the dark photon can lead to rapid cooling of the SN1987A progenitor core, in contradiction to the observed neutrinos from that event. The models we consider are of interest as phenomenological models of strongly self-interacting dark matter. We clarify several possible ambiguities in the literature and identify errors in prior analyses. We find constraints on the dark photon mixing parameter that are in rough agreement with the early estimates of Dent et al. [arXiv:1201.2683], but only because significant errors in their analyses fortuitously canceled. Our constraints are in good agreement with subsequent analyses by Rrapaj & Reddy [Phys. Rev. C 94, 045805 (2016), 10.1103/PhysRevC.94.045805] and Hardy & Lasenby [J. High Energy Phys. 02 (2017) 33, 10.1007/JHEP02(2017)033]. We estimate the dark photon bremsstrahlung rate using one-pion exchange (OPE), while Rrapaj & Reddy use a soft radiation approximation (SRA) to exploit measured nuclear scattering cross sections. We find that the mixing parameter constraints obtained through the OPE and SRA approximations differ by roughly a factor of ˜2-3. Hardy & Lasenby include plasma effects in their calculations, finding significantly weaker constraints on dark photon mixing for dark photon masses below ˜10 MeV; we do not consider plasma effects. Lastly, we point out that the properties of the SN1987A progenitor core remain somewhat uncertain, and that this uncertainty alone causes uncertainty of at least a factor of ˜2-3 in the excluded values of the dark photon mixing parameter. Further refinement of these estimates is unwarranted until either the interior of the SN1987A progenitor is better understood or additional large and heretofore neglected effects, such as the plasma interactions studied by Hardy & Lasenby, are identified.

  5. Transfer as a function of exploration and stabilization in original practice.

    PubMed

    Pacheco, Matheus M; Newell, Karl M

    2015-12-01

    The identification of practice conditions that provide flexibility to perform successfully in transfer is a long-standing issue in motor learning but is still not well understood. Here we investigated the hypothesis that a search strategy that encompasses both exploration and stabilization of the perceptual-motor workspace will enhance performance in transfer. Twenty-two participants practiced a virtual projection task (120 trials on each of 3 days) and subsequently performed two transfer conditions (20 trials/condition) with different constraints in the angle to project the object. The findings revealed a quadratic relation between exploration in practice (indexed by autocorrelation and distribution of error) and subsequent performance error in transfer. The integration of exploration and stabilization of the perceptual-motor workspace enhances transfer to tasks with different constraints on the scaling of motor output. Copyright © 2015 Elsevier B.V. All rights reserved.
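    The autocorrelation index of exploration mentioned above can be sketched for a trial-by-trial error series as a lag-1 autocorrelation (an illustrative index only, not the paper's full analysis):

```python
import numpy as np

# Lag-1 autocorrelation of a trial-by-trial error series: values near zero
# suggest exploration (uncorrelated adjustments between trials), values near
# one suggest persistent drift around a stabilized solution.
def lag1_autocorr(errors):
    e = np.asarray(errors, dtype=float)
    e = e - e.mean()
    return float(np.dot(e[:-1], e[1:]) / np.dot(e, e))

rng = np.random.default_rng(1)
white = rng.normal(size=500)              # exploratory, uncorrelated errors
drift = np.cumsum(rng.normal(size=500))   # persistent, correlated errors
print(lag1_autocorr(white), lag1_autocorr(drift))
```

    The quadratic relation reported in the abstract suggests that transfer is best at intermediate values of such an index, where practice mixes exploration with stabilization.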

  6. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism assumes Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, at only marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints from future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of weak-lensing three-point statistics to break this degeneracy is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospect of performing statistical data analysis steps in the transformed, Gaussianized parameter space.
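    The core step, Gaussianizing a skewed distribution with a Box-Cox transform before applying Gaussian (Fisher-matrix-style) machinery, can be sketched in one dimension with scipy on toy samples:

```python
import numpy as np
from scipy import stats

# Gaussianize a skewed 1-D "posterior sample" with a Box-Cox transform, then a
# Gaussian summary of the transformed variable plays the role of the Fisher
# approximation in the transformed space. Toy data, not a real posterior.
rng = np.random.default_rng(2)
samples = rng.lognormal(mean=0.0, sigma=0.5, size=20000)  # positively skewed

# scipy chooses the Box-Cox exponent lambda by maximum likelihood; for
# lognormal data the optimum is near lambda = 0 (the log transform).
transformed, lam = stats.boxcox(samples)

print(stats.skew(samples), stats.skew(transformed), lam)
```

    The full method generalizes this to multivariate parameter spaces, fitting one Box-Cox exponent per (shifted) parameter before computing the Fisher matrix in the transformed coordinates.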

  7. A compendium of chameleon constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burrage, Clare; Sakstein, Jeremy, E-mail: clare.burrage@nottingham.ac.uk, E-mail: jeremy.sakstein@port.ac.uk

    2016-11-01

    The chameleon model is a scalar field theory with a screening mechanism that explains how a cosmologically relevant light scalar can avoid the constraints of intra-solar-system searches for fifth forces. The chameleon is a popular dark energy candidate and also arises in f(R) theories of gravity. Whilst the chameleon is designed to avoid historical searches for fifth forces, it is not unobservable, and much effort has gone into identifying the best observables and experiments to detect it. These results are not always presented for the same models or in the same language, a particular problem when comparing astrophysical and laboratory searches, making it difficult to understand what regions of parameter space remain. Here we present combined constraints on the chameleon model from astrophysical and laboratory searches for the first time and identify the remaining windows of parameter space. We discuss the implications for cosmological chameleon searches and future small-scale probes.

  8. Interpreting short tandem repeat variations in humans using mutational constraint

    PubMed Central

    Gymrek, Melissa; Willems, Thomas; Reich, David; Erlich, Yaniv

    2017-01-01

    Identifying regions of the genome that are depleted of mutations can reveal potentially deleterious variants. Short tandem repeats (STRs), also known as microsatellites, are among the largest contributors of de novo mutations in humans. However, per-locus studies of STR mutations have been limited to highly ascertained panels of several dozen loci. Here, we harnessed bioinformatics tools and a novel analytical framework to estimate mutation parameters for each STR in the human genome by correlating STR genotypes with local sequence heterozygosity. We applied our method to obtain robust estimates of the impact of local sequence features on mutation parameters and used this to create a framework for measuring constraint at STRs by comparing observed vs. expected mutation rates. Constraint scores identified known pathogenic variants with early onset effects. Our metric will provide a valuable tool for prioritizing pathogenic STRs in medical genetics studies. PMID:28892063
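    A minimal, hypothetical version of a constraint-style score, comparing the observed with the expected number of mutations at a locus under a Poisson approximation (not the paper's estimator), looks like this:

```python
import math

# Constraint-style z-score: positive values flag loci with fewer mutations
# than the fitted mutation model expects (candidate constrained loci).
# Poisson approximation: variance of the expected count equals its mean.
def constraint_z(observed, expected):
    return (expected - observed) / math.sqrt(expected)

print(constraint_z(observed=2, expected=12.0))   # depleted locus, score ~2.9
print(constraint_z(observed=12, expected=12.0))  # -> 0.0, as expected
```

    Ranking loci by such a score is what allows depleted (potentially pathogenic) STRs to be prioritized genome-wide.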

  9. A robust fuzzy local Information c-means clustering algorithm with noise detection

    NASA Astrophysics Data System (ADS)

    Shang, Jiayu; Li, Shiren; Huang, Junwei

    2018-04-01

    Fuzzy c-means clustering (FCM), especially with spatial constraints (FCM_S), is an effective algorithm for image segmentation. Its reliability stems not only from representing the fuzziness of each pixel's cluster membership but also from exploiting spatial contextual information. However, these algorithms still have problems when processing noisy images: they are sensitive to parameters that must be tuned according to prior knowledge of the noise. In this paper, we propose a new FCM algorithm that combines gray-level and spatial constraints, called the spatial and gray-level denoised fuzzy c-means (SGDFCM) algorithm. The new algorithm overcomes the parameter drawbacks mentioned above by considering the noise possibility of each pixel, which improves robustness and preserves more detail. Furthermore, the noise possibility can be calculated in advance, making the algorithm both effective and efficient.
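    For reference, the plain fuzzy c-means baseline that FCM_S-style variants such as SGDFCM build on can be sketched in a few lines (1-D toy gray levels; no spatial or gray-level constraints):

```python
import numpy as np

# Plain fuzzy c-means: alternate the membership update and the center update.
# m > 1 is the fuzzifier; m = 2 is the usual choice.
def fcm(x, c=2, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    u = rng.dirichlet(np.ones(c), size=len(x))        # memberships, rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        v = (um.T @ x) / um.sum(axis=0)[:, None]      # weighted cluster centers
        d = np.abs(x - v.T) + 1e-12                   # distance of each pixel to each center
        u = d ** (-2.0 / (m - 1.0))                   # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, v.ravel()

# Two well-separated gray-level populations.
data = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
u, centers = fcm(data)
print(sorted(centers.round(1)))
```

    The spatially constrained variants modify the membership update so that a pixel's neighbors (and, in SGDFCM, its estimated noise possibility) also pull on its membership.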

  10. New dynamic variables for rotating spacecraft

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    This paper introduces two new seven-parameter representations for spacecraft attitude dynamics modeling. The seven parameters are the three components of the total system angular momentum in the spacecraft body frame; the three components of the angular momentum in the inertial reference frame; and an angle variable. These obey a single constraint as do parameterizations that include a quaternion; in this case the constraint is the equality of the sum of the squares of the angular momentum components in the two frames. The two representations are nonsingular if the system angular momentum is non-zero and obeys certain orientation constraints. The new parameterizations of the attitude matrix, the equations of motion, and the relation of the solution of these equations to Euler angles for torque-free motion are developed and analyzed. The superiority of the new parameterizations for numerical integration is shown in a specific example.
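    The single constraint described here follows from the fact that a rotation preserves vector norms; a quick numerical check with an arbitrary rotation:

```python
import numpy as np

# Seven parameters: angular momentum in the body frame (h_body), in the
# inertial frame (h_inertial), and one angle variable. Since h_body = R h_inertial
# for a rotation matrix R, the two vectors must have equal norms -- the single
# constraint analogous to quaternion normalization.
def rotation_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

h_inertial = np.array([1.0, 2.0, 3.0])
h_body = rotation_z(0.7) @ h_inertial

# Constraint residual: sum of squares in one frame minus the other.
# Should vanish to machine precision.
residual = np.dot(h_body, h_body) - np.dot(h_inertial, h_inertial)
print(residual)
```

    In a numerical integrator this residual can be monitored (or renormalized) exactly as the unit-norm constraint is for quaternions.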

  11. Influence of flow constraints on the properties of the critical endpoint of symmetric nuclear matter

    NASA Astrophysics Data System (ADS)

    Ivanytskyi, A. I.; Bugaev, K. A.; Sagun, V. V.; Bravina, L. V.; Zabrodin, E. E.

    2018-06-01

    We propose a novel family of equations of state for symmetric nuclear matter based on the induced surface tension concept for the hard-core repulsion. It is shown that, with only four adjustable parameters, the suggested equations of state can simultaneously reproduce not only the main properties of the nuclear matter ground state but also the proton flow constraint up to its maximal particle number density. Varying the model parameters, we carefully examine the ranges of the incompressibility constant of normal nuclear matter and of its critical temperature that are consistent with the proton flow constraint. This analysis allows us to show that the physically best justified value of the nuclear matter critical temperature is 15.5-18 MeV, the incompressibility constant is 270-315 MeV, and the hard-core radius of nucleons is less than 0.4 fm.

  12. Crack Instability Predictions Using a Multi-Term Approach

    NASA Technical Reports Server (NTRS)

    Zanganeh, Mohammad; Forman, Royce G.

    2015-01-01

    Present crack instability analysis for fracture-critical flight hardware is normally performed using a single-parameter fracture toughness value, K(sub C), obtained from standard ASTM 2D geometry test specimens made from the appropriate material. These specimens do not sufficiently match the boundary conditions and the elastic-plastic constraint characteristics of the hardware component; moreover, most commonly used aircraft and aerospace structural materials exhibit some stable crack growth before fracture, which makes the normal use of a single-parameter K(sub C) toughness value highly approximate. In the past, extensive studies have been conducted to improve the single-parameter (K- or J-controlled) approaches by introducing parameters accounting for geometry or in-plane constraint effects. Using the J-integral together with the 'A' parameter as a measure of constraint is one of the most accurate elastic-plastic crack solutions currently available. In this work, the feasibility of the J-A approach for predicting crack instability was investigated, first by ignoring the effects of stable crack growth, i.e., using a critical J and A, and second by considering stable crack growth using the J-delta-a curve corrected with the 'A' parameter. A broad range of initial crack lengths and a wide range of specimen geometries, including C(T), M(T), ESE(T), SE(T), double edge crack (DEC), three-hole tension (THT), and NC (crack from a notch) specimens manufactured from Al 7075, were studied. Improvements in crack instability predictions were observed compared with the other methods available in the literature.

  13. Variation in the thermal parameters of Odontophrynus occidentalis in the Monte desert, Argentina: response to the environmental constraints.

    PubMed

    Sanabria, Eduardo Alfredo; Quiroga, Lorena Beatriz; Martino, Adolfo Ludovico

    2012-03-01

    We studied the variation of the thermal parameters of Odontophrynus occidentalis between seasons (wet and dry) in the Monte desert, Argentina. In the field we measured body temperatures, microhabitat temperatures, and operative temperatures, while in the laboratory we measured the selected body temperatures. Our results show a change in the thermal parameters of O. occidentalis that is related to the environmental constraints of its thermal niche. These constraints are present in both seasons (dry and wet) and produce variation in the thermal parameters studied. Apparently because of these environmental restrictions, the toads in nature always show body temperatures below the set point. Acclimatization is an advantage for the toads because it allows them to bring their body temperatures to the set point more frequently. The selected body temperature shows seasonal intraindividual variability. These variations can be due to the thermo-sensitivity of the toads and to individual life histories that limit their acquisition and allocation of resources. Possibly the range of variation found in the selected body temperature is a consequence of the thermal environmental variation over the year. Such variation in thermal parameters is commonly found in desert environments and among nocturnal ectotherms. The plasticity of the selected body temperature allows O. occidentalis to have longer periods of activity for foraging and reproduction, while maintaining reasonably high performance at different temperatures. The plasticity in seasonal variation of thermal parameters has been poorly studied and is greatly advantageous to desert species during both seasonal and daily temperature changes, as these environments are known for their high variability. © 2012 WILEY PERIODICALS, INC.

  14. Haptic discrimination of bilateral symmetry in 2-dimensional and 3-dimensional unfamiliar displays.

    PubMed

    Ballesteros, S; Manga, D; Reales, J M

    1997-01-01

    In five experiments, we tested the accuracy and sensitivity of the haptic system in detecting the bilateral symmetry of raised-line shapes (Experiments 1 and 2) and unfamiliar 3-D objects (Experiments 3-5) under different time constraints and modes of exploration. Touch was moderately accurate at detecting this property in raised displays. Experiment 1 showed that asymmetry judgments were systematically more accurate than symmetry judgments with scanning by one finger. Experiment 2 confirmed the results of Experiment 1 but also showed that bimanual exploration facilitated the processing of symmetric shapes without improving asymmetry detection. Bimanual exploration of 3-D objects was very accurate and significantly facilitated the processing of symmetric objects under different time constraints (Experiment 3). Unimanual exploration did not differ from bimanual exploration (Experiment 4), but restricting hand movements to one enclosure reduced performance significantly (Experiment 5). Spatial reference information, signal detection measures, and hand movements in the processing of bilateral symmetry by touch are discussed.

  15. The Higgs properties in the MSSM after the LHC Run-2

    NASA Astrophysics Data System (ADS)

    Zhao, Jun

    2018-04-01

    We scrutinize the parameter space of the SM-like Higgs boson in the minimal supersymmetric standard model (MSSM) under current experimental constraints. The constraints come from (i) precision electroweak data and various flavor observables; (ii) 22 separate direct ATLAS searches in Run-1; and (iii) the latest LHC Run-2 Higgs data and the tri-lepton search for electroweakinos. We perform a scan over the parameter space and find that the Run-2 data further exclude part of the parameter space. Regarding the properties of the SM-like Higgs boson, its gauge couplings approach the SM values still more closely, with a deviation below 0.1%, while its Yukawa couplings hbb¯ and hτ+τ‑ can still differ sizably from the SM predictions, by several tens of percent.

  16. Activity Planning for the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Jonsson, Ari K.; Morris, Paul H.; Rajan, Kanna

    2004-01-01

    Operating the Mars Exploration Rovers is a challenging, time-pressured task. Each day, the operations team must generate a new plan describing the rover activities for the next day. These plans must abide by resource limitations, safety rules, and temporal constraints. The objective is to achieve as much science as possible, choosing from a set of observation requests that oversubscribe rover resources. In order to accomplish this objective, given the short amount of planning time available, the MAPGEN (Mixed-initiative Activity Plan GENerator) system was made a mission-critical part of the ground operations system. MAPGEN is a mixed-initiative system that employs automated constraint-based planning, scheduling, and temporal reasoning to assist operations staff in generating the daily activity plans. This paper describes the adaptation of constraint-based planning and temporal reasoning to a mixed-initiative setting and the key technical solutions developed for the mission deployment of MAPGEN.
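    The temporal reasoning MAPGEN relies on can be illustrated with a simple temporal network consistency check: the network is consistent iff its distance graph has no negative cycle, which Floyd-Warshall detects. A toy sketch, not MAPGEN's actual implementation:

```python
# Simple temporal network (STN) consistency check. Each edge (i, j, w) encodes
# the bound t_j - t_i <= w; an interval [lo, hi] between two events becomes a
# pair of edges. Consistent iff no diagonal entry goes negative.
def consistent(n, edges):
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j, w in edges:
        d[i][j] = min(d[i][j], w)
    for k in range(n):                      # Floyd-Warshall shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return all(d[i][i] >= 0 for i in range(n))

# "Activity 1 starts 5 to 10 minutes after activity 0":
# t1 - t0 <= 10 and t0 - t1 <= -5.
print(consistent(2, [(0, 1, 10), (1, 0, -5)]))  # -> True  (satisfiable)
print(consistent(2, [(0, 1, 4), (1, 0, -5)]))   # -> False (t1-t0 <= 4 yet >= 5)
```

    A planner runs checks of this kind every time an activity is added or moved, which is what makes the mixed-initiative interaction responsive.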

  17. The extended Baryon Oscillation Spectroscopic Survey: a cosmological forecast

    NASA Astrophysics Data System (ADS)

    Zhao, Gong-Bo; Wang, Yuting; Ross, Ashley J.; Shandera, Sarah; Percival, Will J.; Dawson, Kyle S.; Kneib, Jean-Paul; Myers, Adam D.; Brownstein, Joel R.; Comparat, Johan; Delubac, Timothée; Gao, Pengyuan; Hojjati, Alireza; Koyama, Kazuya; McBride, Cameron K.; Meza, Andrés; Newman, Jeffrey A.; Palanque-Delabrouille, Nathalie; Pogosian, Levon; Prada, Francisco; Rossi, Graziano; Schneider, Donald P.; Seo, Hee-Jong; Tao, Charling; Wang, Dandan; Yèche, Christophe; Zhang, Hanyu; Zhang, Yuecheng; Zhou, Xu; Zhu, Fangzhou; Zou, Hu

    2016-04-01

    We present a science forecast for the extended Baryon Oscillation Spectroscopic Survey (eBOSS) survey. Focusing on discrete tracers, we forecast the expected accuracy of the baryonic acoustic oscillation (BAO), the redshift-space distortion (RSD) measurements, the fNL parameter quantifying the primordial non-Gaussianity, the dark energy and modified gravity parameters. We also use the line-of-sight clustering in the Lyman α forest to constrain the total neutrino mass. We find that eBOSS luminous red galaxies, emission line galaxies and clustering quasars can achieve a precision of 1, 2.2 and 1.6 per cent, respectively, for spherically averaged BAO distance measurements. Using the same samples, the constraint on fσ8 is expected to be 2.5, 3.3 and 2.8 per cent, respectively. For primordial non-Gaussianity, eBOSS alone can reach an accuracy of σ(fNL) ˜ 10-15. eBOSS can at most improve the dark energy figure of merit by a factor of 3 for the Chevallier-Polarski-Linder parametrization, and can well constrain three eigenmodes for the general equation-of-state parameter. eBOSS can also significantly improve constraints on modified gravity parameters by providing the RSD information, which is highly complementary to constraints obtained from weak lensing measurements. A principal component analysis shows that eBOSS can measure the eigenmodes of the effective Newton's constant to 2 per cent precision; this is a factor of 10 improvement over that achievable without eBOSS. Finally, we derive the eBOSS constraint (combined with Planck, Dark Energy Survey and BOSS) on the total neutrino mass, σ(Σmν) = 0.03 eV (68 per cent CL), which in principle makes it possible to distinguish between the two scenarios of neutrino mass hierarchies.

  18. Cosmological implications of different baryon acoustic oscillation data

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; Hu, YaZhou; Li, Miao

    2017-04-01

    In this work, we explore the cosmological implications of different baryon acoustic oscillation (BAO) data, including the BAO data extracted by using the spherically averaged one-dimensional galaxy clustering (GC) statistics (hereafter BAO1) and the BAO data obtained by using the anisotropic two-dimensional GC statistics (hereafter BAO2). To make a comparison, we also take into account the case without BAO data (hereafter NO BAO). Firstly, making use of these BAO data, as well as the SNLS3 type Ia supernovae sample and the Planck distance priors data, we give the cosmological constraints of the ΛCDM, the wCDM, and the Chevallier-Polarski-Linder (CPL) models. Then, we discuss the impacts of different BAO data on the cosmological consequences, including the effects on the parameter space, the equation of state (EoS), the figure of merit (FoM), the deceleration-acceleration transition redshift, the Hubble parameter H(z), the deceleration parameter q(z), the statefinder hierarchy S3(1)(z), S4(1)(z), and the cosmic age t(z). We find that: (1) NO BAO data always give the smallest fractional matter density Ωm0, the largest fractional curvature density Ωk0 and the largest Hubble constant h; in contrast, BAO1 data always give the largest Ωm0, the smallest Ωk0 and the smallest h. (2) For the wCDM and the CPL models, NO BAO data always give the largest EoS w; in contrast, BAO2 data always give the smallest w. (3) Compared with BAO1, BAO2 data always give a slightly larger FoM, and thus a cosmological constraint with slightly better accuracy. (4) The impacts of different BAO data on the cosmic evolution and the cosmic age are very small and cannot be distinguished by using various dark energy diagnostics or the cosmic age data.

  19. Ultracool dwarf benchmarks with Gaia primaries

    NASA Astrophysics Data System (ADS)

    Marocco, F.; Pinfield, D. J.; Cook, N. J.; Zapatero Osorio, M. R.; Montes, D.; Caballero, J. A.; Gálvez-Ortiz, M. C.; Gromadzki, M.; Jones, H. R. A.; Kurtev, R.; Smart, R. L.; Zhang, Z.; Cabrera Lavers, A. L.; García Álvarez, D.; Qi, Z. X.; Rickard, M. J.; Dover, L.

    2017-10-01

    We explore the potential of Gaia for the field of benchmark ultracool/brown dwarf companions, and present the results of an initial search for metal-rich/metal-poor systems. A simulated population of resolved ultracool dwarf companions to Gaia primary stars is generated and assessed. Of the order of ˜24 000 companions should be identifiable outside of the Galactic plane (|b| > 10 deg) with large-scale ground- and space-based surveys including late M, L, T and Y types. Our simulated companion parameter space covers 0.02 ≤ M/M⊙ ≤ 0.1, 0.1 ≤ age/Gyr ≤ 14 and -2.5 ≤ [Fe/H] ≤ 0.5, with systems required to have a false alarm probability <10-4, based on projected separation and expected constraints on common distance, common proper motion and/or common radial velocity. Within this bulk population, we identify smaller target subsets of rarer systems whose collective properties still span the full parameter space of the population, as well as systems containing primary stars that are good age calibrators. Our simulation analysis leads to a series of recommendations for candidate selection and observational follow-up that could identify ˜500 diverse Gaia benchmarks. As a test of the veracity of our methodology and simulations, our initial search uses UKIRT Infrared Deep Sky Survey and Sloan Digital Sky Survey to select secondaries, with the parameters of primaries taken from Tycho-2, Radial Velocity Experiment, Large sky Area Multi-Object fibre Spectroscopic Telescope and Tycho-Gaia Astrometric Solution. We identify and follow up 13 new benchmarks. These include M8-L2 companions, with metallicity constraints ranging in quality, but robust in the range -0.39 ≤ [Fe/H] ≤ +0.36, and with projected physical separation in the range 0.6 < s/kau < 76. Going forward, Gaia offers a very high yield of benchmark systems, from which diverse subsamples may be able to calibrate a range of foundational ultracool/sub-stellar theory and observation.

  20. Feasibility of employing model-based optimization of pulse amplitude and electrode distance for effective tumor electropermeabilization.

    PubMed

    Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan

    2007-05-01

    In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold and so achieve effective ECT. In this paper, we present a model-based optimization approach toward determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and the electric field intensities computed by a finite element model at selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computed tomography images, representing brain tissue with a tumor. The continuous parameter subject to optimization was pulse amplitude; the distance between electrode pairs was optimized as a discrete parameter. Optimization also considered the pulse generator's constraints on voltage and current. During optimization these two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, although with the particular needle array holder and pulse generator the entire volume of the tumor could not be permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be achieved by adding constraints on the maximum electric field intensity to the optimization procedure.
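    The optimization loop described above, with a continuous pulse amplitude, a discrete electrode distance, and generator limits, can be sketched in miniature. Here a uniform-field approximation stands in for the finite element solver, and every threshold, geometry factor, and limit is a hypothetical placeholder, not a value from the paper:

```python
import numpy as np

# Toy stand-in for the finite element field solver: a uniform-field
# approximation E = U/d (V/cm) at each monitored tumor point, scaled by a
# per-point geometry factor. All numbers here are illustrative.
E_REV = 400.0    # assumed reversible permeabilization threshold, V/cm
U_MAX = 1000.0   # assumed generator voltage limit, V
geometry = np.array([1.0, 0.8, 0.6, 0.5])  # field attenuation at 4 tumor points

def coverage_deficit(U, d_cm):
    """Sum of shortfalls below the permeabilization threshold."""
    E = geometry * U / d_cm
    return np.sum(np.maximum(E_REV - E, 0.0))

best = None
for d_cm in (0.5, 0.7, 1.0):                 # discrete electrode distances, cm
    for U in np.linspace(100.0, U_MAX, 91):  # continuous amplitude (gridded)
        deficit = coverage_deficit(U, d_cm)
        if best is None or deficit < best[0]:
            best = (deficit, U, d_cm)
print(best)  # lowest amplitude/distance pair covering the whole target
```

    With these toy numbers the search finds a fully covered target (zero deficit) at the smallest electrode distance and the lowest amplitude that clears the threshold everywhere.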

  1. Constrained spectral clustering under a local proximity structure assumption

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Xu, Qianjun; des Jardins, Marie

    2005-01-01

    This work focuses on incorporating pairwise constraints into a spectral clustering algorithm. A new constrained spectral clustering method is proposed, as well as an active constraint acquisition technique and a heuristic for parameter selection. We demonstrate that our constrained spectral clustering method, CSC, works well when the data exhibits what we term local proximity structure.
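    One common way to fold pairwise constraints into spectral clustering, in the spirit of (though not necessarily identical to) the CSC method above, is to overwrite affinities before the spectral embedding: must-link pairs get maximal affinity and cannot-link pairs get zero. A self-contained two-cluster sketch:

```python
import numpy as np

def constrained_spectral_bipartition(X, must_link=(), cannot_link=(), sigma=1.0):
    """Two-way spectral clustering with pairwise constraints imposed on the
    affinity matrix: must-link -> affinity 1, cannot-link -> affinity 0."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))           # Gaussian affinities
    for i, j in must_link:
        W[i, j] = W[j, i] = 1.0
    for i, j in cannot_link:
        W[i, j] = W[j, i] = 0.0
    L = np.diag(W.sum(1)) - W                    # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                         # 2nd-smallest eigenvalue's vector
    return (fiedler > np.median(fiedler)).astype(int)

# Two well-separated 1-D groups; the partition should split them.
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
labels = constrained_spectral_bipartition(X, must_link=[(0, 1)], cannot_link=[(0, 3)])
print(labels)
```

    The constraint-modified affinities flow through the Laplacian into the Fiedler vector, so well-chosen constraints can flip an otherwise ambiguous partition.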

  2. Level-Set Topology Optimization with Aeroelastic Constraints

    NASA Technical Reports Server (NTRS)

    Dunning, Peter D.; Stanford, Bret K.; Kim, H. Alicia

    2015-01-01

    Level-set topology optimization is used to design a wing considering skin buckling under static aeroelastic trim loading, as well as dynamic aeroelastic stability (flutter). The level-set function is defined over the entire 3D volume of a transport aircraft wing box. Therefore, the approach is not limited by any predefined structure and can explore novel configurations. The Sequential Linear Programming (SLP) level-set method is used to solve the constrained optimization problems. The proposed method is demonstrated on three problems with mass, linear buckling and flutter objectives and/or constraints. A constraint aggregation method is used to handle multiple buckling constraints in the wing skins. A continuous flutter constraint formulation is used to handle difficulties arising from discontinuities in the design space caused by a switching of the critical flutter mode.
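    A standard choice for the constraint aggregation mentioned above is the Kreisselmeier-Steinhauser (KS) function, which replaces many buckling constraints g_i <= 0 with one smooth, conservative bound (whether this paper uses KS specifically is an assumption here):

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate of constraints g_i <= 0: a smooth,
    conservative upper bound on max(g), letting many buckling constraints be
    treated as a single differentiable one."""
    gmax = np.max(g)  # shift for numerical stability
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

g = np.array([-0.30, -0.05, -0.22, -0.41])  # illustrative buckling margins
print(ks_aggregate(g))  # slightly above max(g) = -0.05
```

    Larger rho tracks the true maximum more tightly at the cost of a stiffer, less smooth aggregate.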

  3. Optimal control of singularly perturbed nonlinear systems with state-variable inequality constraints

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Corban, J. E.

    1990-01-01

    The established necessary conditions for optimality in nonlinear control problems that involve state-variable inequality constraints are applied to a class of singularly perturbed systems. The distinguishing feature of this class of two-time-scale systems is a transformation of the state-variable inequality constraint, present in the full order problem, to a constraint involving states and controls in the reduced problem. It is shown that, when a state constraint is active in the reduced problem, the boundary layer problem can be of finite time in the stretched time variable. Thus, the usual requirement for asymptotic stability of the boundary layer system is not applicable, and cannot be used to construct approximate boundary layer solutions. Several alternative solution methods are explored and illustrated with simple examples.

  4. Fast Prediction of Blast Damage from Airbursts: An Empirical Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Brown, Peter G.; Stokan, Ed

    2016-10-01

    The February 15, 2013 Chelyabinsk airburst was the first modern bolide whose associated shockwave caused blast damage at the ground (Popova et al., 2013). Near-Earth Object (NEO) impacts in the Chelyabinsk-size range (~20 m) are expected to occur every few decades (Boslough et al., 2015), and therefore we expect ground damage from meteoric airbursts to be the next planetary defense threat to be confronted. With pre-impact detections of small NEOs certain to become more common, decision makers will be faced with estimating blast damage from impactors with uncertain physical properties on short timescales. High fidelity numerical bolide entry models have been developed in recent years (e.g., Boslough and Crawford, 2008; Shuvalov et al., 2013), but the wide range in a priori data about strength, fragmentation behavior, and other physical properties for a specific impactor makes predictions of bolide behavior difficult. The long computational running times for hydrocode models make the exploration of a wide parameter space challenging in the days to hours before an actual impact. Our approach to this problem is to use an analytical bolide entry model, the triggered-progressive fragmentation model (TPFM) developed by ReVelle (2005), within a Monte Carlo formalism. In particular, we couple this model with empirical constraints on the statistical spread in strength for meter-scale impactors from Brown et al. (2015) based on the observed height at maximum bolide brightness. We also use the correlation of peak bolide brightness with total energy given by Brown (2016) as a proxy for fragmentation behavior. 
Using these constraints, we are able to quickly generate a large set of realizations of probable bolide energy deposition curves and produce simple estimates of expected blast damage using existing analytical relations.We validate this code with the known parameters of the Chelyabinsk airburst and explore how changes to the entry conditions of the observed bolide may have modified the blast damage at the ground. We will also present how this approach could be used in an actual short-warning impact scenario.
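    The Monte Carlo structure described above can be illustrated with a toy version: draw impactor properties from assumed distributions, push each draw through a cheap damage rule, and read off percentiles. The distributions and the cube-root damage scaling below are hypothetical placeholders for the TPFM and the empirical strength statistics, not values from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sample burst energy (kT) and burst height (km) from assumed lognormals,
# then score a crude damage radius via cube-root yield scaling.
N = 10_000
energy_kt = rng.lognormal(mean=np.log(500.0), sigma=0.5, size=N)
burst_km = rng.lognormal(mean=np.log(30.0), sigma=0.2, size=N)

# Hypothetical rule: damage radius ~ k * E^(1/3), reduced for high bursts.
k = 2.0
radius_km = k * energy_kt ** (1 / 3) * np.exp(-(burst_km - 25.0) / 50.0)

print(np.percentile(radius_km, [5, 50, 95]))  # quick damage-footprint forecast
```

    Because each realization is analytic rather than a hydrocode run, tens of thousands of draws complete in well under a second, which is the point of the approach in a short-warning scenario.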

  5. Bi-dimensional null model analysis of presence-absence binary matrices.

    PubMed

    Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J

    2018-01-01

    Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially in respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. 
We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
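    For context, the fixed-fixed limit that the Tuning Peg algorithm generalizes is the classic checkerboard swap, which randomizes a binary matrix while preserving row and column totals exactly; the Tuning Peg's two parameters then relax that preservation continuously. A sketch of the limiting case only:

```python
import numpy as np

def swap_randomize(M, n_swaps=5000, seed=0):
    """Classic 'swap' null model for a binary matrix: repeatedly flip
    checkerboard 2x2 submatrices ([[1,0],[0,1]] <-> [[0,1],[1,0]]), which
    preserves row and column marginal totals exactly."""
    M = M.copy()
    rng = np.random.default_rng(seed)
    n, m = M.shape
    for _ in range(n_swaps):
        i, j = rng.choice(n, 2, replace=False)
        k, l = rng.choice(m, 2, replace=False)
        sub = M[np.ix_([i, j], [k, l])]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            M[np.ix_([i, j], [k, l])] = 1 - sub  # flip the checkerboard
    return M

A = (np.random.default_rng(1).random((8, 10)) < 0.4).astype(int)
B = swap_randomize(A)
print((B.sum(0) == A.sum(0)).all(), (B.sum(1) == A.sum(1)).all())  # True True
```

    Tuning both discrepancy parameters away from zero would admit swaps that perturb the marginals, moving the null model continuously toward the unconstrained ("equiprobable") end of the spectrum.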

  6. Discrete Event Simulation Modeling and Analysis of Key Leader Engagements

    DTIC Science & Technology

    2012-06-01

    to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK , which give probabilities for being corrupt, having key leader...HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component...whether the local Green player has resource critical knowledge by using the parameter pRK . It schedules an EndResourceKnowledgeRequest event, passing

  7. Geocoronal Balmer α line profile observations and forward-model analysis

    NASA Astrophysics Data System (ADS)

    Mierkiewicz, E. J.; Bishop, J.; Roesler, F. L.; Nossal, S. M.

    2006-05-01

    High spectral resolution geocoronal Balmer α line profile observations from Pine Bluff Observatory (PBO) are presented in the context of forward-model analysis. Because Balmer series column emissions depend significantly on multiple scattering, retrieval of hydrogen parameters of general aeronomic interest from these observations (e.g., the hydrogen column abundance) currently requires a forward modeling approach. This capability is provided by the resonance radiative transfer code LYAO_RT. We have recently developed a parametric data-model comparison search procedure employing an extensive grid of radiative transport model input parameters (defining a 6-dimensional parameter space) to map out bounds for feasible forward model retrieved atomic hydrogen density distributions. We applied this technique to same-night (March 2000) ground-based Balmer α data from PBO and geocoronal Lyman β measurements from the Espectrógrafo Ultravioleta extremo para la Radiación Difusa (EURD) instrument on the Spanish satellite MINISAT-1 (provided by J.F. Gómez and C. Morales of the Laboratorio de Astrofisica Espacial y Física Fundamental, INTA, Madrid, Spain) in order to investigate the modeling constraints imposed by two sets of independent geocoronal intensity measurements, both of which rely on astronomical calibration methods. In this poster we explore extending this analysis to the line profile information also contained in the March 2000 PBO Balmer α data set. In general, a decrease in the Doppler width of the Balmer α emission with shadow altitude is a persistent feature in every night of PBO observations in which a wide range of shadow altitudes are observed. Preliminary applications of the LYAO_RT code, which includes the ability to output Doppler line profiles for both the singly and multiply scattered contributions to the Balmer α emission line, display good qualitative agreement with regard to geocoronal Doppler width trends observed from PBO. 
Model-data Balmer α Doppler width comparisons, using the best-fit model parameters obtained during the March 2000 PBO/EURD forward-model study, will be presented and discussed, including the feasibility of using Balmer α observed Doppler widths as an additional model constraint in our forward-model search procedure.

  8. Slowly-rotating neutron stars in massive bigravity

    NASA Astrophysics Data System (ADS)

    Sullivan, A.; Yunes, N.

    2018-02-01

    We study slowly-rotating neutron stars in ghost-free massive bigravity. This theory modifies general relativity by introducing a second, auxiliary but dynamical tensor field that couples to matter via the physical metric tensor through non-linear interactions. We expand the field equations to linear order in slow rotation and numerically construct solutions in the interior and exterior of the star with a set of realistic equations of state. We calculate the physical mass function with respect to observer radius and find that, unlike in general relativity, this function does not remain constant outside the star; rather, it asymptotes to a constant a distance away from the surface, whose magnitude is controlled by the ratio of gravitational constants. The Vainshtein-like radius at which the physical and auxiliary mass functions asymptote to a constant is controlled by the graviton mass scaling parameter, and outside this radius, bigravity modifications are suppressed. We also calculate the frame-dragging metric function and find that bigravity modifications are typically small in the entire range of coupling parameters explored. We finally calculate both the mass-radius and the moment of inertia-mass relations for a wide range of coupling parameters and find that both the graviton mass scaling parameter and the ratio of the gravitational constants introduce large modifications to both. These results could be used to place future constraints on bigravity with electromagnetic and gravitational-wave observations of isolated and binary neutron stars.

  9. Torsion as a dark matter candidate from the Higgs portal

    NASA Astrophysics Data System (ADS)

    Belyaev, Alexander S.; Thomas, Marc C.; Shapiro, Ilya L.

    2017-05-01

    Torsion is a metric-independent component of gravitation, which may provide a more general geometry than the one taking place within general relativity. On the other hand, torsion could lead to interesting phenomenology in both particle physics and cosmology. In the present work it is shown that a torsion field interacting with the SM Higgs doublet and having a negligible coupling to standard model (SM) fermions is protected from decaying by a Z2 symmetry, and therefore becomes a promising dark matter (DM) candidate. This model provides a good motivation for the Higgs portal vector DM scenario. We evaluate the DM relic density and explore direct DM detection and collider constraints on this model to understand its consistency with experimental data and establish the most up-to-date limits on its parameter space. We find that, in the model where the Higgs boson is only partly responsible for the generation of the torsion mass, there is a region of parameter space where torsion contributes 100% to the DM budget of the Universe. Furthermore, we present the first results on the potential of the LHC to probe the parameter space of the minimal scenario with Higgs portal vector DM using mono-jet searches, and find that the LHC at high luminosity will be sensitive to a substantial part of the model parameter space which cannot be probed by other experiments.

  10. An Implanted, Stimulated Muscle Powered Piezoelectric Generator

    NASA Technical Reports Server (NTRS)

    Lewandowski, Beth; Gustafson, Kenneth; Kilgore, Kevin

    2007-01-01

    A totally implantable piezoelectric generator system able to harness power from electrically activated muscle could be used to augment the power systems of implanted medical devices, such as neural prostheses, by reducing the number of battery replacement surgeries or by allowing periods of untethered functionality. The features of our generator design are no moving parts and the use of a portion of the generated power for system operation and regulation. A software model of the system has been developed and simulations have been performed to predict the output power as the system parameters were varied within their constraints. Mechanical forces that mimic muscle forces have been experimentally applied to a piezoelectric generator to verify the accuracy of the simulations and to explore losses due to mechanical coupling. Depending on the selection of system parameters, software simulations predict that this generator concept can generate up to approximately 700 μW of power, which is greater than the power necessary to drive the generator, conservatively estimated to be 50 μW. These results suggest that this concept has the potential to be an implantable, self-replenishing power source and further investigation is underway.

  11. A Modern Take on the RV Classics: N-body Analysis of GJ 876 and 55 Cnc

    NASA Astrophysics Data System (ADS)

    Nelson, Benjamin E.; Ford, E. B.; Wright, J.

    2013-01-01

    Over the past two decades, radial velocity (RV) observations have uncovered a diverse population of exoplanet systems, in particular a subset of multi-planet systems that exhibit strong dynamical interactions. To extract the model parameters (and uncertainties) accurately from these observations, one requires self-consistent n-body integrations and must explore a high-dimensional (~7 x number of planets) parameter space, both of which are computationally challenging. Utilizing the power of modern computing resources, we apply our Radial velocity Using N-body Differential Evolution Markov Chain Monte Carlo code (RUN DEMCMC) to two landmark systems from early exoplanet surveys: GJ 876 and 55 Cnc. For GJ 876, we analyze the Keck HIRES (Rivera et al. 2010) and HARPS (Correia et al. 2010) data and constrain the distribution of the Laplace argument. For 55 Cnc, we investigate the orbital architecture based on a cumulative 1086 RV observations from various sources and transit constraints from Winn et al. 2011. In both cases, we also test for long-term orbital stability.
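    The generic Differential Evolution MCMC move (ter Braak 2006) used by samplers of this family proposes a step for one chain along the difference of two others, which self-tunes to the posterior's covariance; whether RUN DEMCMC implements exactly this variant is an assumption here:

```python
import numpy as np

rng = np.random.default_rng(0)

def demc_proposal(chains, i, gamma=None, eps=1e-4):
    """Differential Evolution MCMC proposal (ter Braak 2006): perturb chain i
    along the difference of two other randomly chosen chains, plus a small
    jitter. gamma defaults to the standard 2.38/sqrt(2d)."""
    n, d = chains.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)
    a, b = rng.choice([j for j in range(n) if j != i], 2, replace=False)
    return chains[i] + gamma * (chains[a] - chains[b]) + eps * rng.standard_normal(d)

chains = rng.standard_normal((10, 7))   # 10 chains in a 7-parameter space
prop = demc_proposal(chains, 0)
print(prop.shape)  # (7,)
```

    The proposal would then be accepted or rejected with the usual Metropolis ratio against the n-body likelihood, which is where the expensive integrations enter.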

  12. Overview and Evaluation of Bluetooth Low Energy: An Emerging Low-Power Wireless Technology

    PubMed Central

    Gomez, Carles; Oller, Joaquim; Paradells, Josep

    2012-01-01

    Bluetooth Low Energy (BLE) is an emerging low-power wireless technology developed for short-range control and monitoring applications that is expected to be incorporated into billions of devices in the next few years. This paper describes the main features of BLE, explores its potential applications, and investigates the impact of various critical parameters on its performance. BLE represents a trade-off between energy consumption, latency, piconet size, and throughput that mainly depends on parameters such as connInterval and connSlaveLatency. According to theoretical results, the lifetime of a BLE device powered by a coin cell battery ranges between 2.0 days and 14.1 years. The number of simultaneous slaves per master ranges between 2 and 5,917. The minimum latency for a master to obtain a sensor reading is 676 μs, although simulation results show that, under high bit error rate, average latency increases by up to three orders of magnitude. The paper provides experimental results that complement the theoretical and simulation findings, and indicates implementation constraints that may reduce BLE performance.
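    The dependence of lifetime on connInterval and connSlaveLatency can be seen in a back-of-the-envelope current model: average current is a sleep-current floor plus the per-connection-event charge amortized over the effective interval. All numbers below are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope BLE lifetime model (illustrative numbers).
CAPACITY_MAH = 230.0   # assumed CR2032 coin cell capacity
I_SLEEP_UA = 1.0       # assumed sleep current, microamps
Q_EVENT_UC = 25.0      # assumed charge per connection event, microcoulombs

def lifetime_years(conn_interval_s, slave_latency=0):
    # connSlaveLatency lets the slave skip events, stretching the interval.
    effective_interval = conn_interval_s * (1 + slave_latency)
    i_avg_ua = I_SLEEP_UA + Q_EVENT_UC / effective_interval  # uC/s = uA
    hours = CAPACITY_MAH * 1000.0 / i_avg_ua
    return hours / (24 * 365)

for interval in (0.0075, 0.1, 1.0, 4.0):
    print(f"connInterval={interval:>6}s -> {lifetime_years(interval):7.3f} yr")
```

    Even this crude model reproduces the qualitative spread the paper reports: days of lifetime at the shortest connection interval, years at the longest.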

  13. Reducing Design Risk Using Robust Design Methods: A Dual Response Surface Approach

    NASA Technical Reports Server (NTRS)

    Unal, Resit; Yeniay, Ozgur; Lepsch, Roger A. (Technical Monitor)

    2003-01-01

    Space transportation system conceptual design is a multidisciplinary process containing a considerable element of risk. Risk here is defined as the variability in the estimated (output) performance characteristic of interest resulting from the uncertainties in the values of several disciplinary design and/or operational parameters. Uncertainties from one discipline (and/or subsystem) may propagate to another through linking parameters, and the final system output may have a significant accumulation of risk. This variability can result in significant deviations from the expected performance. Therefore, an estimate of variability (which is called design risk in this study) together with the expected performance characteristic value (e.g. mean empty weight) is necessary for multidisciplinary optimization for a robust design. Robust design in this study is defined as a solution that minimizes variability subject to a constraint on mean performance characteristics. Even though multidisciplinary design optimization has gained wide attention and applications, the treatment of uncertainties to quantify and analyze design risk has received little attention. This research effort explores the dual response surface approach to quantify variability (risk) in critical performance characteristics (such as weight) during conceptual design.
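    The dual response surface idea, one surface for the mean response and one for its variability, fitted from replicated runs, then variability minimized subject to a mean constraint, can be sketched on a toy "weight" model (all functions and numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# At each design point x, replicate a noisy "empty weight" simulation, fit
# separate quadratic surfaces to the replicate means and standard deviations,
# then minimize predicted variability subject to a cap on predicted mean.
x = np.linspace(-1, 1, 9)
reps = np.array([10 + 3 * xi + 2 * xi**2 + (0.5 + xi**2) * rng.standard_normal(20)
                 for xi in x])
mean_coef = np.polyfit(x, reps.mean(axis=1), 2)  # response surface for the mean
std_coef = np.polyfit(x, reps.std(axis=1), 2)    # response surface for the std

grid = np.linspace(-1, 1, 201)
mean_hat = np.polyval(mean_coef, grid)
std_hat = np.polyval(std_coef, grid)
feasible = mean_hat <= 11.0                      # constraint on mean performance
best_x = grid[feasible][np.argmin(std_hat[feasible])]
print(best_x)                                    # robust design near x = 0
```

    The robust optimum deliberately differs from the minimum-mean design: it trades a little expected weight for much lower sensitivity to the underlying uncertainties.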

  14. Mitigating direct detection bounds in non-minimal Higgs portal scalar dark matter models

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Subhaditya; Ghosh, Purusottam; Maity, Tarak Nath; Ray, Tirtha Sankar

    2017-10-01

    The minimal Higgs portal dark matter model is increasingly in tension with recent results from direct detection experiments like LUX and XENON. In this paper we make a systematic study of simple extensions of the Z_2 stabilized singlet scalar Higgs portal scenario in terms of their prospects at direct detection experiments. We consider both enlarging the stabilizing symmetry to Z_3 and incorporating multipartite features in the dark sector. We demonstrate that in these non-minimal models the interplay of annihilation, co-annihilation and semi-annihilation processes considerably relaxes constraints from present and proposed direct detection experiments while simultaneously saturating the observed dark matter relic density. We explore in particular the resonant semi-annihilation channel within the multipartite Z_3 framework, which results in new unexplored regions of parameter space that would be difficult to constrain by direct detection experiments in the near future. The role of dark matter exchange processes within the multi-component Z_3 x Z_3' framework is illustrated. We make quantitative estimates to elucidate the role of various annihilation processes in the different allowed regions of parameter space within these models.

  15. How similar are nut-cracking and stone-flaking? A functional approach to percussive technology

    PubMed Central

    Bril, Blandine; Parry, Ross; Dietrich, Gilles

    2015-01-01

    Various authors have suggested similarities between tool use in early hominins and chimpanzees. This has been particularly evident in studies of nut-cracking which is considered to be the most complex skill exhibited by wild apes, and has also been interpreted as a precursor of more complex stone-flaking abilities. It has been argued that there is no major qualitative difference between what the chimpanzee does when he cracks a nut and what early hominins did when they detached a flake from a core. In this paper, similarities and differences between skills involved in stone-flaking and nut-cracking are explored through an experimental protocol with human subjects performing both tasks. We suggest that a ‘functional’ approach to percussive action, based on the distinction between functional parameters that characterize each task and parameters that characterize the agent's actions and movements, is a fruitful method for understanding those constraints which need to be mastered to perform each task successfully, and subsequently, the nature of skill involved in both tasks. PMID:26483533

  16. Simultaneous state-parameter estimation supports the evaluation of data assimilation performance and measurement design for soil-water-atmosphere-plant system

    NASA Astrophysics Data System (ADS)

    Hu, Shun; Shi, Liangsheng; Zha, Yuanyuan; Williams, Mathew; Lin, Lin

    2017-12-01

    Improvements to agricultural water and crop management require detailed information on crop and soil states and their evolution. Data assimilation provides an attractive way of obtaining this information by integrating measurements with a model in a sequential manner. However, data assimilation for the soil-water-atmosphere-plant (SWAP) system still lacks comprehensive exploration, owing to the large number of variables and parameters in the system. In this study, simultaneous state-parameter estimation using the ensemble Kalman filter (EnKF) was employed to evaluate data assimilation performance and provide advice on measurement design for the SWAP system. The results demonstrated that a proper selection of the state vector is critical to effective data assimilation. In particular, updating the development stage was able to avoid the negative effect of "phenological shift", which was caused by the contrasting phenological stages of different ensemble members. The simultaneous state-parameter estimation (SSPE) assimilation strategy outperformed the updating-state-only (USO) assimilation strategy because of its ability to alleviate the inconsistency between model variables and parameters. However, the performance of the SSPE assimilation strategy could deteriorate with an increasing number of uncertain parameters as a result of soil stratification and limited knowledge of crop parameters. In addition to the most easily available surface soil moisture (SSM) and leaf area index (LAI) measurements, deep soil moisture, grain yield or other auxiliary data were required to provide sufficient constraints on parameter estimation and to assure data assimilation performance. This study provides insight into the response of soil moisture and grain yield to data assimilation in the SWAP system and is helpful for soil moisture movement and crop growth modeling and for measurement design in practice.
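    The simultaneous state-parameter estimation strategy amounts to running the EnKF on an augmented vector of states and parameters, so that unobserved parameters are corrected through their sampled correlation with observed states. A minimal sketch with one observed state and one hypothetical parameter (the model tie and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

def enkf_update(ensemble, obs, obs_sd, H):
    """Stochastic EnKF analysis step on an augmented [state; parameter] vector:
    parameters are never observed directly, but are corrected through their
    ensemble-sampled correlation with the observed states."""
    n_ens = ensemble.shape[1]
    Hx = H @ ensemble                              # predicted observations
    A = ensemble - ensemble.mean(1, keepdims=True)
    HA = Hx - Hx.mean(1, keepdims=True)
    P_xy = A @ HA.T / (n_ens - 1)                  # state-obs cross-covariance
    P_yy = HA @ HA.T / (n_ens - 1) + obs_sd**2 * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
    perturbed = obs[:, None] + obs_sd * rng.standard_normal((len(obs), n_ens))
    return ensemble + K @ (perturbed - Hx)

# Row 0: surface soil moisture (observed); row 1: a hydraulic parameter
# (unobserved) that drives it.
param = 0.3 + 0.1 * rng.standard_normal(200)
sm = 0.9 * param + 0.01 * rng.standard_normal(200)
ens = np.vstack([sm, param])
post = enkf_update(ens, np.array([0.36]), 0.02, np.array([[1.0, 0.0]]))
print(param.mean(), post[1].mean())  # parameter mean is pulled upward
```

    Because the update to the parameter row comes entirely from the cross-covariance, the correction weakens as more uncertain parameters dilute those correlations, consistent with the degradation the abstract describes.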

  17. Constraints on the Energy Density Content of the Universe Using Only Clusters of Galaxies

    NASA Technical Reports Server (NTRS)

    Molnar, Sandor M.; Haiman, Zoltan; Birkinshaw, Mark

    2003-01-01

    We demonstrate that it is possible to constrain the energy content of the Universe with high accuracy using observations of clusters of galaxies only. The degeneracies in the cosmological parameters are lifted by combining constraints from different observables of galaxy clusters. We show that constraints on cosmological parameters from galaxy cluster number counts as a function of redshift and from accurate angular diameter distance measurements to clusters are complementary to each other, and their combination can constrain the energy density content of the Universe well. The number counts can be obtained from X-ray and/or SZ (Sunyaev-Zeldovich effect) surveys; the angular diameter distances can be determined from deep observations of the intra-cluster gas using their thermal bremsstrahlung X-ray emission and the SZ effect (X-SZ method). In this letter we combine constraints from simulated cluster number counts expected from a 12 deg2 SZ cluster survey with constraints from simulated angular diameter distance measurements based on the X-SZ method, assuming an expected accuracy of 7% in the angular diameter distance determination of 70 clusters with redshifts less than 1.5. We find that Ωm can be determined to within about 25%, ΩΛ within 20%, and w within 16%. Any cluster survey can be used to select clusters for high accuracy distance measurements, but we assumed accurate angular diameter distance measurements for only 70 clusters, since long observations are necessary to achieve high accuracy in distance measurements. Thus the question naturally arises: how should clusters of galaxies be selected for accurate angular diameter distance determinations? In this letter, as an example, we demonstrate that it is possible to optimize this selection by changing the number of clusters observed and the upper cutoff of their redshift range. 
We show that constraints on cosmological parameters from combining cluster number counts and angular diameter distance measurements, contrary to general expectations, will not improve substantially when selecting clusters with redshifts higher than one. This important conclusion allows us to restrict our cluster sample to clusters with redshifts less than one, a range in which the observation times required for accurate distance measurements are more manageable. Subject headings: cosmological parameters - cosmology: theory - galaxies: clusters: general - X-rays: galaxies: clusters

  18. Tailored parameter optimization methods for ordinary differential equation models with steady-state constraints.

    PubMed

    Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan

    2016-08-22

    Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The information that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data, while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. 
In addition to the theoretical results, the analysis of the dataset for Raf/MEK/ERK signaling provides novel biological insights regarding the existence of feedback regulation. Many optimization problems considered in systems and computational biology are subject to steady-state constraints. While most optimization methods have convergence problems if these steady-state constraints are highly nonlinear, the methods presented here recover the convergence properties of optimizers that can exploit an analytical expression for the parameter-dependent steady state. This renders them an excellent alternative to methods which are currently employed in systems and computational biology.
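The continuous-analogue idea can be illustrated with a minimal sketch: for a hypothetical toy model dx/dt = -k·x + u whose steady state x* = u/k is known in closed form, the parameter follows a gradient-flow ODE whose equilibrium is the constrained optimum. The model, loss, and step sizes below are illustrative assumptions, not the paper's actual methods:

```python
# Toy sketch of the "continuous analogue" idea: integrate a gradient-flow ODE
# whose equilibrium point is the optimum of the steady-state-constrained fit.
# The model dx/dt = -k*x + u and all numbers are hypothetical illustrations.
def steady_state(k, u):
    # Analytical steady state of dx/dt = -k*x + u  =>  x* = u/k
    return u / k

def loss(k, u, x_obs):
    # Squared mismatch between the model's steady state and the observation.
    return (steady_state(k, u) - x_obs) ** 2

def gradient_flow(k0, u, x_obs, dt=0.01, steps=5000):
    # Forward-Euler integration of dk/dt = -dL/dk; equilibria are optima of L.
    k = k0
    for _ in range(steps):
        eps = 1e-6
        grad = (loss(k + eps, u, x_obs) - loss(k - eps, u, x_obs)) / (2 * eps)
        k -= dt * grad
    return k

k_fit = gradient_flow(k0=0.5, u=3.0, x_obs=1.5)  # true optimum: k = u/x_obs = 2.0
```

For realistic models without a closed-form steady state, the parameter flow would be coupled to a relaxation of the state equation itself, which is where the adaptive ODE solvers mentioned above pay off.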

  19. Convective dynamics and chemical disequilibrium in the atmospheres of substellar objects

    NASA Astrophysics Data System (ADS)

    Bordwell, Baylee; Brown, Benjamin P.; Oishi, Jeffrey S.

    2017-11-01

    The thousands of substellar objects now known provide a unique opportunity to test our understanding of atmospheric dynamics across a range of environments. The chemical timescales of certain species transition from being much shorter than the dynamical timescales to being much longer than them at a point in the atmosphere known as the quench point. This transition leads to a state of dynamical disequilibrium, the effects of which can be used to probe the atmospheric dynamics of these objects. Unfortunately, due to computational constraints, models that inform the interpretation of these observations are run at dynamical parameters which are far from realistic values. In this study, we explore the behavior of a disequilibrium chemical process with increasingly realistic planetary conditions, to quantify the effects of the approximations used in current models. We simulate convection in 2-D, plane-parallel, polytropically-stratified atmospheres, into which we add reactive passive tracers that explore disequilibrium behavior. We find that as we increase the Rayleigh number, and thus achieve more realistic planetary conditions, the behavior of these tracers does not conform to the classical predictions of disequilibrium chemistry.
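The quench-point idea described above can be sketched numerically: the chemical timescale falls steeply with temperature while the mixing (dynamical) timescale varies slowly, and the quench point is where the two cross. The Arrhenius-style rate and all numbers below are illustrative assumptions, not values from the study:

```python
import math

# Toy sketch of the quench point: find the temperature at which an assumed
# Arrhenius-like chemical timescale equals an assumed constant mixing timescale.
def tau_chem(T, E_a=2.0e4, tau0=1e-6):
    # Chemical timescale ~ inverse Arrhenius rate: grows sharply as T drops.
    return tau0 * math.exp(E_a / T)

def tau_mix(T):
    return 1.0e5  # roughly constant mixing timescale, seconds (assumed)

def quench_temperature(T_lo=500.0, T_hi=3000.0, tol=1e-6):
    # Bisection on f(T) = log(tau_chem) - log(tau_mix); f decreases with T.
    f = lambda T: math.log(tau_chem(T)) - math.log(tau_mix(T))
    while T_hi - T_lo > tol:
        T_mid = 0.5 * (T_lo + T_hi)
        if f(T_mid) > 0.0:
            T_lo = T_mid   # chemistry still slower than mixing: quench point is hotter
        else:
            T_hi = T_mid
    return 0.5 * (T_lo + T_hi)
```

Above the quench temperature the abundances track chemical equilibrium; below it, mixing is faster than the chemistry and abundances are frozen at their quench-point values.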

  20. Risk management for the Space Exploration Initiative

    NASA Technical Reports Server (NTRS)

    Buchbinder, Ben

    1993-01-01

    Probabilistic Risk Assessment (PRA) is a quantitative engineering process that provides the analytic structure and decision-making framework for total programmatic risk management. Ideally, it is initiated in the conceptual design phase and used throughout the program life cycle. Although PRA was developed for assessment of safety, reliability, and availability risk, it has far greater application. Throughout the design phase, PRA can guide trade-off studies among system performance, safety, reliability, cost, and schedule. These studies are based on the assessment of the risk of meeting each parameter goal, with full consideration of the uncertainties. Quantitative trade-off studies are essential, but without full identification, propagation, and display of uncertainties, poor decisions may result. PRA also can focus attention on risk drivers in situations where risk is too high. For example, if safety risk is unacceptable, the PRA prioritizes the risk contributors to guide the use of resources for risk mitigation. PRA is used in the Space Exploration Initiative (SEI) Program. To meet the stringent requirements of the SEI mission, within strict budgetary constraints, the PRA structure supports informed and traceable decision-making. This paper briefly describes the SEI PRA process.
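The uncertainty-propagation step emphasized above can be sketched as a simple Monte Carlo exercise: sample uncertain inputs, push them through a risk model, and report the spread rather than a point estimate. The series-system failure model and lognormal spreads below are illustrative assumptions, not NASA's actual PRA models:

```python
import random

# Toy sketch of PRA-style uncertainty propagation: sample uncertain subsystem
# failure probabilities and propagate them to a mission-level risk distribution.
random.seed(42)

def mission_failure_prob(p_engine, p_avionics):
    # Series system: the mission fails if either subsystem fails.
    return 1.0 - (1.0 - p_engine) * (1.0 - p_avionics)

samples = []
for _ in range(100_000):
    # Lognormal uncertainty on each subsystem failure probability (assumed).
    p_e = min(1.0, random.lognormvariate(-6.0, 0.5))
    p_a = min(1.0, random.lognormvariate(-7.0, 0.5))
    samples.append(mission_failure_prob(p_e, p_a))

samples.sort()
median = samples[len(samples) // 2]      # central estimate
p95 = samples[int(0.95 * len(samples))]  # upper bound for decision-making
```

Reporting the 95th percentile alongside the median is one way the "full identification, propagation, and display of uncertainties" can inform a trade study.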

  1. "If You Don't Abstain, You Will Die of AIDS": AIDS Education in Kenyan Public Schools

    ERIC Educational Resources Information Center

    Njue, Carolyne; Nzioka, Charles; Ahlberg, Beth-Maina; Pertet, Anne M.; Voeten, Helene A. C. M.

    2009-01-01

    We explored constraints of implementing AIDS education in public schools in Kenya. Sixty interviews with teachers and 60 focus group discussions with students were conducted in 21 primary and nine secondary schools. System/school-level constraints included lack of time in the curriculum, limited reach of secondary-school students (because AIDS…

  2. Management as the enabling technology for space exploration

    NASA Technical Reports Server (NTRS)

    Mandell, Humboldt C., Jr.; Griffin, Michael D.

    1992-01-01

    This paper addresses the dilemma which NASA faces in starting a major new initiative within the constraints of the current national budget. It addressed the fact that unlike previous NASA programs, the major mission constraints come from management factors as opposed to technologies. An action plan is presented, along with some results from early management simplification processes.

  3. Primordial black holes as dark matter: constraints from compact ultra-faint dwarfs

    NASA Astrophysics Data System (ADS)

    Zhu, Qirong; Vasiliev, Eugene; Li, Yuexing; Jing, Yipeng

    2018-05-01

The ground-breaking detections of gravitational waves from black hole mergers by LIGO have rekindled interest in primordial black holes (PBHs) and the possibility of dark matter being composed of PBHs. It has been suggested that PBHs of tens of solar masses could serve as dark matter candidates. Recent analytical studies demonstrated that compact ultra-faint dwarf galaxies can serve as a sensitive test of the PBH dark matter hypothesis: because stars in such a halo-dominated system would be heated by the more massive PBHs, their present-day distribution can provide strong constraints on the PBH mass. In this study, we further explore this scenario with more detailed calculations, using a combination of dynamical simulations and Bayesian inference methods. The joint evolution of stars and PBH dark matter is followed with the Fokker-Planck code PHASEFLOW. We run a large suite of such simulations for different dark matter parameters, then use a Markov chain Monte Carlo approach to constrain the PBH properties with observations of ultra-faint galaxies. We find that two-body relaxation between the stars and PBHs drives up the stellar core size and increases the central stellar velocity dispersion. Using the observed half-light radius and velocity dispersion of stars in the compact ultra-faint dwarf galaxies as joint constraints, we infer that these dwarfs may have a cored dark matter halo with a central density in the range of 1-2 M⊙ pc⁻³, and that the PBHs may have a mass range of 2-14 M⊙ if they constitute all or a substantial fraction of the dark matter.
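The heating mechanism invoked here operates on the two-body relaxation timescale; in the standard Spitzer form (with σ the stellar velocity dispersion, ρ the perturber mass density, and ln Λ the Coulomb logarithm — quoted here as general background, not as the paper's specific expression):

```latex
t_{\mathrm{relax}} \simeq \frac{0.34\,\sigma^{3}}{G^{2}\,m_{\mathrm{PBH}}\,\rho\,\ln\Lambda}
```

At fixed density, heavier perturbers shorten t_relax, which is why the stellar core size and velocity dispersion of an ultra-faint dwarf respond to the PBH mass.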

  4. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  5. Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.

    PubMed

    Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis

    2008-10-01

    We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
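The chance-constraint idea can be sketched via sample-average approximation: choose the cheapest vaccination fraction v such that the post-vaccination reproduction number R* = R0·(1 − v) falls below 1 with probability at least 95%, where R0 is uncertain. The model and all numbers below are illustrative assumptions, not the paper's formulation:

```python
import random

# Toy sketch of a chance-constrained program via sample-average approximation:
# minimize vaccination coverage v subject to P(R0 * (1 - v) < 1) >= 0.95.
random.seed(0)
R0_samples = [random.uniform(1.2, 2.5) for _ in range(5_000)]  # assumed R0 uncertainty

def chance_satisfied(v, alpha=0.95):
    # Empirical probability that the epidemic is controlled at coverage v.
    hits = sum(1 for r0 in R0_samples if r0 * (1.0 - v) < 1.0)
    return hits / len(R0_samples) >= alpha

# Coverage is monotone, so the first v meeting the constraint is the minimum-cost policy.
v_opt = next(v / 1000.0 for v in range(1001) if chance_satisfied(v / 1000.0))
```

With these assumed numbers the binding quantity is the 95th percentile of R0 (≈ 2.44), so the optimal coverage sits near v = 1 − 1/2.44 ≈ 0.59.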

  6. Implicit Motives and Men’s Perceived Constraint in Fatherhood

    PubMed Central

    Ruppen, Jessica; Waldvogel, Patricia; Ehlert, Ulrike

    2016-01-01

    Research shows that implicit motives influence social relationships. However, little is known about their role in fatherhood and, particularly, how men experience their paternal role. Therefore, this study examined the association of implicit motives and fathers’ perceived constraint due to fatherhood. Furthermore, we explored their relation to fathers’ life satisfaction. Participants were fathers with biological children (N = 276). They were asked to write picture stories, which were then coded for implicit affiliation and power motives. Perceived constraint and life satisfaction were assessed on a visual analog scale. A higher implicit need for affiliation was significantly associated with lower perceived constraint, whereas the implicit need for power had the opposite effect. Perceived constraint had a negative influence on life satisfaction. Structural equation modeling revealed significant indirect effects of implicit affiliation and power motives on life satisfaction mediated by perceived constraint. Our findings indicate that men with a higher implicit need for affiliation experience less constraint due to fatherhood, resulting in higher life satisfaction. The implicit need for power, however, results in more perceived constraint and is related to decreased life satisfaction. PMID:27933023

  7. Theoretical physics implications of gravitational wave observation with future detectors

    NASA Astrophysics Data System (ADS)

    Chamberlain, Katie; Yunes, Nicolás

    2017-10-01

    Gravitational waves encode invaluable information about the nature of the relatively unexplored extreme gravity regime, where the gravitational interaction is strong, nonlinear and highly dynamical. Recent gravitational wave observations by advanced LIGO have provided the first glimpses into this regime, allowing for the extraction of new inferences on different aspects of theoretical physics. For example, these detections provide constraints on the mass of the graviton, Lorentz violation in the gravitational sector, the existence of large extra dimensions, the temporal variability of Newton's gravitational constant, and modified dispersion relations of gravitational waves. Many of these constraints, however, are not yet competitive with constraints obtained, for example, through Solar System observations or binary pulsar observations. In this paper, we study the degree to which theoretical physics inferences drawn from gravitational wave observations will strengthen with detections from future detectors. We consider future ground-based detectors, such as the LIGO-class expansions A + , Voyager, Cosmic Explorer and the Einstein Telescope, as well as space-based detectors, such as various configurations of eLISA and the recently proposed LISA mission. We find that space-based detectors will place constraints on general relativity up to 12 orders of magnitude more stringently than current aLIGO bounds, but these space-based constraints are comparable to those obtained with the ground-based Cosmic Explorer or the Einstein Telescope (A + and Voyager only lead to modest improvements in constraints). We also generically find that improvements in the instrument sensitivity band at low frequencies lead to large improvements in certain classes of constraints, while sensitivity improvements at high frequencies lead to more modest gains. 
These results strengthen the case for the development of future detectors, while providing additional information that could be useful in future design decisions.

  8. Cosmology Constraints from the Weak Lensing Peak Counts and the Power Spectrum in CFHTLenS

    DOE PAGES

    Liu, Jia; May, Morgan; Petri, Andrea; ...

    2015-03-04

Lensing peaks have been proposed as a useful statistic, containing cosmological information from non-Gaussianities that is inaccessible from traditional two-point statistics such as the power spectrum or two-point correlation functions. Here we examine constraints on cosmological parameters from weak lensing peak counts, using the publicly available data from the 154 deg² CFHTLenS survey. We utilize a new suite of ray-tracing N-body simulations on a grid of 91 cosmological models, covering broad ranges of the three parameters Ω_m, σ_8, and w, and replicating the galaxy sky positions, redshifts, and shape noise in the CFHTLenS observations. We then build an emulator that interpolates the power spectrum and the peak counts to an accuracy of ≤ 5%, and compute the likelihood in the three-dimensional parameter space (Ω_m, σ_8, w) from both observables. We find that constraints from peak counts are comparable to those from the power spectrum, and somewhat tighter when different smoothing scales are combined. Neither observable can constrain w without external data. When the power spectrum and peak counts are combined, the area of the error "banana" in the (Ω_m, σ_8) plane reduces by a factor of ≈ 2, compared to using the power spectrum alone. For a flat Λ cold dark matter model, combining both statistics, we obtain the constraint σ_8(Ω_m/0.27)^0.63 = 0.85 ± 0.03.
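The emulator idea can be sketched in miniature: tabulate an "expensive" statistic on a coarse parameter grid once, then answer arbitrary parameter queries by cheap interpolation. The target function below is a hypothetical stand-in, not the actual simulated power spectrum or peak counts:

```python
import math

# Toy sketch of a simulation emulator: precompute on a parameter grid, then
# interpolate. The stand-in statistic is an arbitrary smooth function of sigma8.
def expensive_statistic(sigma8):
    return sigma8 ** 2.5 * math.exp(0.3 * sigma8)   # stand-in for a simulation

grid = [0.6 + 0.05 * i for i in range(9)]           # sigma8 grid over [0.6, 1.0]
table = [expensive_statistic(s) for s in grid]      # precomputed once, reused forever

def emulate(sigma8):
    # Piecewise-linear interpolation between the two bracketing grid points.
    for i in range(len(grid) - 1):
        if grid[i] <= sigma8 <= grid[i + 1]:
            t = (sigma8 - grid[i]) / (grid[i + 1] - grid[i])
            return (1 - t) * table[i] + t * table[i + 1]
    raise ValueError("query outside emulator training range")
```

Inside an MCMC likelihood, `emulate` replaces the simulation at every proposed parameter point; the ≤ 5% interpolation accuracy quoted above is the analogous error budget for the real multi-dimensional emulator.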

  10. Optimizing Environmental Flow Operation Rules based on Explicit IHA Constraints

    NASA Astrophysics Data System (ADS)

    Dongnan, L.; Wan, W.; Zhao, J.

    2017-12-01

Multi-objective reservoir operation is increasingly asked to consider environmental flow to support ecosystem health. Indicators of Hydrologic Alteration (IHA) are widely used to describe environmental flow regimes, but few studies have explicitly formulated them into optimization models, making it difficult to direct reservoir release. In an attempt to weigh the benefit of environmental flow against economic achievement, a two-objective reservoir optimization model is developed in which all 33 hydrologic parameters of the IHA are explicitly formulated as constraints. The economic benefit is defined by Hydropower Production (HP), while the environmental flow benefit is transformed into an Eco-Index (EI) that combines 5 of the 33 IHA parameters, chosen by the principal component analysis method. Five scenarios (A to E) with different constraints are tested and solved by nonlinear programming. The case study of the Jing Hong reservoir, located in the upstream Mekong basin, China, shows: 1. A Pareto frontier is formed by maximizing only the HP objective in scenario A and only the EI objective in scenario B. 2. Scenario D, using IHA parameters as constraints, obtains the optimal benefits of both economy and ecology. 3. A sensitive weight coefficient is found in scenario E, but the trade-offs between the HP and EI objectives are not within the Pareto frontier. 4. When the fraction of reservoir utilizable capacity reaches 0.8, both HP and EI capture acceptable values. Finally, to make this model more conveniently applicable to everyday practice, a simplified operation rule curve is extracted.
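The two-objective structure can be sketched with a weighted scalarization: sweep a weight w between the hydropower and ecological objectives, subject to a simple environmental-flow bound, and trace out the Pareto frontier. The objective shapes, the single low-flow bound standing in for the full IHA constraint set, and all numbers are illustrative assumptions, not the paper's model:

```python
# Toy sketch of a two-objective reservoir problem: sweep a weight w in a
# scalarized objective w*HP + (1-w)*EI over feasible releases to trace a
# Pareto frontier. Objective forms and bounds are hypothetical.
def hp(release):             # hydropower grows with release (toy, concave)
    return release ** 0.5

def ei(release):             # ecological index prefers a natural-flow target
    return 1.0 - abs(release - 0.4)

def best_release(w, min_env_flow=0.2):
    # IHA-style constraint reduced to a single minimum environmental flow.
    grid = [i / 1000.0 for i in range(1001)]
    feasible = [r for r in grid if r >= min_env_flow]
    return max(feasible, key=lambda r: w * hp(r) + (1.0 - w) * ei(r))

# Endpoints w=0 and w=1 recover the single-objective optima (scenarios A and B above).
pareto = [(hp(best_release(w / 10)), ei(best_release(w / 10))) for w in range(11)]
```

Moving along `pareto` from w = 0 to w = 1 trades ecological benefit for hydropower, which is the frontier the five scenarios explore.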

  11. Lesbians and Gay Men's Vacation Motivations, Perceptions, and Constraints: A Study of Cruise Vacation Choice.

    PubMed

    Weeden, Clare; Lester, Jo-Anne; Jarvis, Nigel

    2016-08-01

    This study explores the push-pull vacation motivations of gay male and lesbian consumers and examines how these underpin their perceptions and purchase constraints of a mainstream and LGBT(1) cruise. Findings highlight a complex vacation market. Although lesbians and gay men share many of the same travel motivations as their heterosexual counterparts, the study reveals sexuality is a significant variable in their perception of cruise vacations, which further influences purchase constraints and destination choice. Gay men have more favorable perceptions than lesbians of both mainstream and LGBT cruises. The article recommends further inquiry into the multifaceted nature of motivations, perception, and constraints within the LGBT market in relation to cruise vacations.

  12. Variations in the fine-structure constant constraining gravity theories

    NASA Astrophysics Data System (ADS)

    Bezerra, V. B.; Cunha, M. S.; Muniz, C. R.; Tahim, M. O.; Vieira, H. S.

    2016-08-01

    In this paper, we investigate how the fine-structure constant, α, locally varies in the presence of a static and spherically symmetric gravitational source. The procedure consists in calculating the solution and the energy eigenvalues of a massive scalar field around that source, considering the weak-field regime. From this result, we obtain expressions for a spatially variable fine-structure constant by considering suitable modifications in the involved parameters admitting some scenarios of semi-classical and quantum gravities. Constraints on free parameters of the approached theories are calculated from astrophysical observations of the emission spectra of a white dwarf. Such constraints are finally compared with those obtained in the literature.

  13. General squark flavour mixing: constraints, phenomenology and benchmarks

    DOE PAGES

    De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...

    2015-11-19

    Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.

  14. Constraints on cosmological parameters from the analysis of the cosmic lens all sky survey radio-selected gravitational lens statistics.

    PubMed

    Chae, K-H; Biggs, A D; Blandford, R D; Browne, I W A; De Bruyn, A G; Fassnacht, C D; Helbig, P; Jackson, N J; King, L J; Koopmans, L V E; Mao, S; Marlow, D R; McKean, J P; Myers, S T; Norbury, M; Pearson, T J; Phillips, P M; Readhead, A C S; Rusin, D; Sykes, C M; Wilkinson, P N; Xanthopoulos, E; York, T

    2002-10-07

We derive constraints on cosmological parameters and the properties of the lensing galaxies from gravitational lens statistics based on the final Cosmic Lens All Sky Survey data. For a flat universe with a classical cosmological constant, we find that the present matter fraction of the critical density is Ω_m = 0.31 +0.27/−0.14 (68%) +0.12/−0.10 (syst). For a flat universe with a constant equation of state for dark energy, w = p_x (pressure) / ρ_x (energy density), we find w < −0.55 +0.18/−0.11 (68%).

  15. Buckling analysis of planar compression micro-springs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jing; Sui, Li; Shi, Gengchen

    2015-04-15

Large compression deformation causes micro-springs to buckle and lose load capacity. We analyzed the impact of structural parameters and boundary conditions for planar micro-springs and obtained the change rules for the two factors that affect buckling. A formula for the critical buckling deformation of micro-springs under compressive load was derived based on elastic thin plate theory. Results from this formula were compared with finite element analysis results, but these did not always correlate. Therefore, finite element analysis is necessary for micro-spring buckling analysis. We studied the variation of micro-spring critical buckling deformation caused by four structural parameters using ANSYS software under two constraint conditions. The simulation results show that when an x-direction constraint is added, the critical buckling deformation increases by 32.3-297.9%. The critical buckling deformation decreases with increase in micro-spring arc radius or section width and increases with increase in micro-spring thickness or straight beam width. We conducted experiments to confirm the simulation results, and the experimental and simulation trends were found to agree. Buckling analysis of the micro-spring establishes a theoretical foundation for optimizing micro-spring structural parameters and constraint conditions to maximize the critical buckling load.

  16. Comprehensive cosmographic analysis by Markov chain method

    NASA Astrophysics Data System (ADS)

    Capozziello, S.; Lazkoz, R.; Salzano, V.

    2011-12-01

    We study the possibility of extracting model independent information about the dynamics of the Universe by using cosmography. We intend to explore it systematically, to learn about its limitations and its real possibilities. Here we are sticking to the series expansion approach on which cosmography is based. We apply it to different data sets: Supernovae type Ia (SNeIa), Hubble parameter extracted from differential galaxy ages, gamma ray bursts, and the baryon acoustic oscillations data. We go beyond past results in the literature extending the series expansion up to the fourth order in the scale factor, which implies the analysis of the deceleration q0, the jerk j0, and the snap s0. We use the Markov chain Monte Carlo method (MCMC) to analyze the data statistically. We also try to relate direct results from cosmography to dark energy (DE) dynamical models parametrized by the Chevallier-Polarski-Linder model, extracting clues about the matter content and the dark energy parameters. The main results are: (a) even if relying on a mathematical approximate assumption such as the scale factor series expansion in terms of time, cosmography can be extremely useful in assessing dynamical properties of the Universe; (b) the deceleration parameter clearly confirms the present acceleration phase; (c) the MCMC method can help giving narrower constraints in parameter estimation, in particular for higher order cosmographic parameters (the jerk and the snap), with respect to the literature; and (d) both the estimation of the jerk and the DE parameters reflect the possibility of a deviation from the ΛCDM cosmological model.
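The fourth-order expansion referred to above is the standard cosmographic series; in the usual notation, with Δt = t − t₀ and all derivatives evaluated today:

```latex
\frac{a(t)}{a_0} = 1 + H_0\,\Delta t - \frac{q_0}{2} H_0^2 \Delta t^2
  + \frac{j_0}{6} H_0^3 \Delta t^3 + \frac{s_0}{24} H_0^4 \Delta t^4
  + \mathcal{O}\!\left(\Delta t^5\right),
\qquad
q_0 = -\frac{\ddot a\, a}{\dot a^2}\bigg|_{t_0},\quad
j_0 = \frac{\dddot a\, a^2}{\dot a^3}\bigg|_{t_0},\quad
s_0 = \frac{\ddddot a\, a^3}{\dot a^4}\bigg|_{t_0}.
```

Each data set constrains the distance-redshift relation built from this series, which is why pushing to fourth order brings the jerk j₀ and snap s₀ into play.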

  17. Fragmentation uncertainties in hadronic observables for top-quark mass measurements

    NASA Astrophysics Data System (ADS)

    Corcella, Gennaro; Franceschini, Roberto; Kim, Doojin

    2018-04-01

We study the Monte Carlo uncertainties due to modeling of hadronization and showering in the extraction of the top-quark mass from observables that use exclusive hadronic final states in top decays, such as t → anything + J/ψ or t → anything + (B → charged tracks), where B is a B-hadron. To this end, we investigate the sensitivity of the top-quark mass, determined by means of a few observables already proposed in the literature as well as some new proposals, to the relevant parameters of event generators, such as HERWIG 6 and PYTHIA 8. We find that constraining those parameters at O(1%-10%) is required to avoid a Monte Carlo uncertainty on m_t greater than 500 MeV. For the sake of achieving the needed accuracy on such parameters, we examine the sensitivity of the top-quark mass measured from spectral features, such as peaks, endpoints, and distributions of E_B, m_Bℓ, and some m_T2-like variables. We find that restricting oneself to regions sufficiently close to the endpoints enables one to substantially decrease the dependence on the Monte Carlo parameters, but at the price of inflating significantly the statistical uncertainties. To ameliorate this situation we study how well the data on top-quark production and decay at the LHC can be utilized to constrain the showering and hadronization variables. We find that a global exploration of several calibration observables, sensitive to the Monte Carlo parameters but only very mildly to m_t, can offer useful constraints on the parameters, as long as such quantities are measured with 1% precision.

  18. Constrained positive matrix factorization: Elemental ratios, spatial distinction, and chemical transport model source contributions

    NASA Astrophysics Data System (ADS)

    Sturtz, Timothy M.

    Source apportionment models attempt to untangle the relationship between pollution sources and the impacts at downwind receptors. Two frameworks of source apportionment models exist: source-oriented and receptor-oriented. Source based apportionment models use presumed emissions and atmospheric processes to estimate the downwind source contributions. Conversely, receptor based models leverage speciated concentration data from downwind receptors and apply statistical methods to predict source contributions. Integration of both source-oriented and receptor-oriented models could lead to a better understanding of the implications sources have on the environment and society. The research presented here investigated three different types of constraints applied to the Positive Matrix Factorization (PMF) receptor model within the framework of the Multilinear Engine (ME-2): element ratio constraints, spatial separation constraints, and chemical transport model (CTM) source attribution constraints. PM10-2.5 mass and trace element concentrations were measured in Winston-Salem, Chicago, and St. Paul at up to 60 sites per city during two different seasons in 2010. PMF was used to explore the underlying sources of variability. Information on previously reported PM10-2.5 tire and brake wear profiles were used to constrain these features in PMF by prior specification of selected species ratios. We also modified PMF to allow for combining the measurements from all three cities into a single model while preserving city-specific soil features. Relatively minor differences were observed between model predictions with and without the prior ratio constraints, increasing confidence in our ability to identify separate brake wear and tire wear features. Using separate data, source contributions to total fine particle carbon predicted by a CTM were incorporated into the PMF receptor model to form a receptor-oriented hybrid model. 
The level of influence of the CTM versus traditional PMF was varied using a weighting parameter applied to an objective function as implemented in ME-2. The resulting hybrid model was used to quantify the contributions of total carbon from both wildfires and biogenic sources at two Interagency Monitoring of Protected Visual Environments (IMPROVE) monitoring sites, Monture and Sula Peak, Montana, from 2006 through 2008.
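The hybrid objective can be sketched as the usual PMF chi-square fit to the receptor data plus a penalty pulling one factor's contributions toward the CTM's predicted source contributions, weighted by a tunable parameter w. The shapes and the quadratic penalty form are illustrative assumptions about the ME-2 setup, not its actual implementation:

```python
# Toy sketch of a CTM-constrained PMF objective: chi-square data fit plus a
# weighted soft constraint tying one factor's time series to CTM output.
# X: n x m data matrix, sigma: matching uncertainties, G: n x k contributions,
# F: k x m profiles, ctm_target: length-n CTM prediction for one source.
def hybrid_objective(X, sigma, G, F, ctm_target, factor_idx, w):
    n, m, k = len(X), len(X[0]), len(F)
    chi2 = 0.0
    for i in range(n):
        for j in range(m):
            model_ij = sum(G[i][p] * F[p][j] for p in range(k))
            chi2 += ((X[i][j] - model_ij) / sigma[i][j]) ** 2
    # Soft constraint: factor `factor_idx` contributions track the CTM output.
    penalty = sum((G[i][factor_idx] - ctm_target[i]) ** 2 for i in range(n))
    return chi2 + w * penalty
```

Setting w = 0 recovers ordinary PMF, while large w forces that factor to reproduce the CTM, which is the knob the study varies between the two frameworks.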

  19. Real-Time Climate Simulations in the Interactive 3D Game Universe Sandbox ²

    NASA Astrophysics Data System (ADS)

    Goldenson, N. L.

    2014-12-01

    Exploration in an open-ended computer game is an engaging way to explore climate and climate change. Everyone can explore physical models with real-time visualization in the educational simulator Universe Sandbox ² (universesandbox.com/2), which includes basic climate simulations on planets. I have implemented a time-dependent, one-dimensional meridional heat transport energy balance model to run and be adjustable in real time in the midst of a larger simulated system. Universe Sandbox ² is based on the original game - at its core a gravity simulator - with other new physically-based content for stellar evolution, and handling collisions between bodies. Existing users are mostly science enthusiasts in informal settings. We believe that this is the first climate simulation to be implemented in a professionally developed computer game with modern 3D graphical output in real time. The type of simple climate model we've adopted helps us depict the seasonal cycle and the more drastic changes that come from changing the orbit or other external forcings. Users can alter the climate as the simulation is running by altering the star(s) in the simulation, dragging to change orbits and obliquity, adjusting the climate simulation parameters directly or changing other properties like CO2 concentration that affect the model parameters in representative ways. Ongoing visuals of the expansion and contraction of sea ice and snow-cover respond to the temperature calculations, and make it accessible to explore a variety of scenarios and intuitive to understand the output. Variables like temperature can also be graphed in real time. We balance computational constraints with the ability to capture the physical phenomena we wish to visualize, giving everyone access to a simple open-ended meridional energy balance climate simulation to explore and experiment with. 
The software lends itself to labs at a variety of levels about climate concepts including seasons, the Greenhouse effect, reservoirs and flows, albedo feedback, Snowball Earth, climate sensitivity, and model experiment design. Climate calculations are extended to Mars with some modifications to the Earth climate component, and could be used in lessons about the Mars atmosphere, and exploring scenarios of Mars climate history.
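The kind of one-dimensional meridional energy balance model described above can be sketched in a few lines: temperature T(x) on x = sin(latitude), with annual-mean insolation, a crude ice-albedo switch, linearized outgoing longwave radiation, and spherical diffusion. Parameter values are common textbook choices assumed here, not the game's actual tuning (with these numbers the equilibrium happens to stay ice-free, so the albedo switch is a hook for experimentation):

```python
# Toy Budyko/Sellers-style 1-D energy balance model, explicit time stepping.
# Q: solar constant/4, A + B*T: outgoing longwave, D: diffusion coefficient.
Q, A, B, D = 340.0, 210.0, 2.0, 0.55    # W/m^2, W/m^2, W/m^2/K, W/m^2/K (assumed)

def p2(x):
    return 0.5 * (3.0 * x * x - 1.0)    # second Legendre polynomial

def step(T, x, dx, dt_over_C):
    n = len(T)
    # Diffusive fluxes at cell interfaces; zero flux at the domain boundaries.
    flux = [0.0] * (n + 1)
    for i in range(1, n):
        xm = 0.5 * (x[i - 1] + x[i])
        flux[i] = D * (1.0 - xm * xm) * (T[i] - T[i - 1]) / dx
    new = T[:]
    for i in range(n):
        S = Q * (1.0 - 0.482 * p2(x[i]))        # annual-mean insolation
        albedo = 0.6 if T[i] < -10.0 else 0.3   # crude ice-albedo switch
        divergence = (flux[i + 1] - flux[i]) / dx
        new[i] = T[i] + dt_over_C * (S * (1.0 - albedo) - (A + B * T[i]) + divergence)
    return new

n = 21
x = [-0.95 + 1.9 * i / (n - 1) for i in range(n)]   # x = sin(latitude) grid
dx = x[1] - x[0]
T = [10.0] * n                                      # start from a mild state
for _ in range(20000):                              # integrate to equilibrium
    T = step(T, x, dx, dt_over_C=0.005)
```

Dragging the orbit or the star in the game corresponds to rescaling Q or S(x) here, and a changed CO2 concentration maps onto the longwave parameters A and B, which is how such a simple model can respond to those controls in real time.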

  20. Langlands Parameters of Quivers in the Sato Grassmannian

    NASA Astrophysics Data System (ADS)

    Luu, Martin T.; Penciak, Matej

    2018-01-01

    Motivated by quantum field theoretic partition functions that can be expressed as products of tau functions of the KP hierarchy we attach several types of local geometric Langlands parameters to quivers in the Sato Grassmannian. We study related questions of Virasoro constraints, of moduli spaces of relevant quivers, and of classical limits of the Langlands parameters.

  1. Constraining alternative theories of gravity using GW150914 and GW151226

    NASA Astrophysics Data System (ADS)

    De Laurentis, Mariafelicia; Porth, Oliver; Bovard, Luke; Ahmedov, Bobomurat; Abdujabbarov, Ahmadjon

    2016-12-01

    The recently reported gravitational wave events GW150914 and GW151226, caused by the mergers of binary black holes [Abbott et al., Phys. Rev. Lett. 116, 221101 (2016); Phys. Rev. Lett. 116, 241103 (2016); Phys. Rev. X 6, 041015], provide a formidable way to set constraints on alternative metric theories of gravity in the strong field regime. In this paper, we develop an approach in which an arbitrary theory of gravity can be parametrized by an effective coupling G_eff and an effective gravitational potential Φ(r). The standard Newtonian limit of general relativity is recovered as soon as G_eff → G_N and Φ(r) → Φ_N. The upper bound on the graviton mass and the gravitational interaction length, reported by the LIGO-Virgo Collaboration, can be directly recast in terms of the parameters of the theory, allowing an analysis in which the gravitational wave frequency modulation sets constraints on the range of possible alternative models of gravity. Numerical results based on published parameters for the binary black hole mergers are also reported. The comparison of the observed phases of GW150914 and GW151226 with the modulated phase in alternative theories of gravity does not give reasonable constraints, due to the large uncertainties in the estimated parameters of the coalescing black holes. In addition to these general considerations, we obtain limits on the frequency dependence of the α parameter in scalar-tensor theories of gravity.

  2. A Poisson nonnegative matrix factorization method with parameter subspace clustering constraint for endmember extraction in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Sun, Weiwei; Ma, Jun; Yang, Gang; Du, Bo; Zhang, Liangpei

    2017-06-01

    A new Bayesian method named Poisson Nonnegative Matrix Factorization with Parameter Subspace Clustering Constraint (PNMF-PSCC) is presented to extract endmembers from Hyperspectral Imagery (HSI). First, the method integrates the linear spectral mixture model with the Bayesian framework and formulates endmember extraction as a Bayesian inference problem. Second, the Parameter Subspace Clustering Constraint (PSCC) is incorporated into the statistical program to account for the clustering of all pixels in the parameter subspace. The PSCC enlarges differences among ground objects and helps find endmembers with smaller spectral divergences. Meanwhile, the PNMF-PSCC method utilizes the Poisson distribution as prior knowledge of the spectral signals to better describe the quantum nature of light in the imaging spectrometer. Third, the optimization problem of PNMF-PSCC is formulated as maximizing the joint density via the Maximum A Posteriori (MAP) estimator. The program is solved by iteratively optimizing two sub-problems within the Alternating Direction Method of Multipliers (ADMM) framework, with the FURTHESTSUM initialization scheme. Five state-of-the-art methods are implemented for comparison with PNMF-PSCC on both synthetic and real HSI datasets. Experimental results show that PNMF-PSCC outperforms all five methods in Spectral Angle Distance (SAD) and Root-Mean-Square Error (RMSE), and in particular it identifies good endmembers for ground objects with smaller spectral divergences.
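For context on the Poisson likelihood at the core of the method: nonnegative matrix factorization under the generalized Kullback-Leibler divergence is equivalent to maximum-likelihood estimation with a Poisson observation model. A minimal sketch of that baseline (the PSCC term, MAP priors, and FURTHESTSUM initialization of the paper are omitted; dimensions and iteration counts are illustrative):

```python
import numpy as np

def poisson_nmf(V, r, iters=200, seed=0):
    """Multiplicative-update NMF under the generalized KL divergence,
    i.e. maximum likelihood under a Poisson observation model V ~ Poisson(WH).
    Plain Poisson NMF only; the clustering constraint is not included."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 0.1
    H = rng.random((r, n)) + 0.1
    eps = 1e-12
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```

On an HSI cube, V would hold the pixel spectra (bands by pixels), the columns of W the endmember signatures, and H the abundances.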

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M. R.

    We present the first constraints on cosmology from the Dark Energy Survey (DES), using weak lensing measurements from the preliminary Science Verification (SV) data. We use 139 square degrees of SV data, which is less than 3% of the full DES survey area. Using cosmic shear 2-point measurements over three redshift bins we find σ8(Ωm/0.3)^0.5 = 0.81 ± 0.06 (68% confidence), after marginalising over 7 systematics parameters and 3 other cosmological parameters. Furthermore, we examine the robustness of our results to the choice of data vector and systematics assumed, and find them to be stable. About 20% of our error bar comes from marginalising over shear and photometric redshift calibration uncertainties. The current state-of-the-art cosmic shear measurements from CFHTLenS are mildly discrepant with the cosmological constraints from Planck CMB data. Our results are consistent with both datasets. Our uncertainties are ~30% larger than those from CFHTLenS when we carry out a comparable analysis of the two datasets, which we attribute largely to the lower number density of our shear catalogue. We investigate constraints on dark energy and find that, with this small fraction of the full survey, the DES SV constraints make negligible impact on the Planck constraints. The moderate disagreement between the CFHTLenS and Planck values of σ8(Ωm/0.3)^0.5 is present regardless of the value of w.

  4. Testing for new physics: neutrinos and the primordial power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canac, Nicolas; Abazajian, Kevork N.; Aslanyan, Grigor

    2016-09-01

    We test the sensitivity of neutrino parameter constraints from combinations of CMB and LSS data sets to the assumed form of the primordial power spectrum (PPS) using Bayesian model selection. Significantly, none of the tested combinations, including recent high-precision local measurements of H_0 and cluster abundances, indicate a signal for massive neutrinos or extra relativistic degrees of freedom. For PPS models with a large, but fixed, number of degrees of freedom, neutrino parameter constraints do not change significantly if the locations of any features in the PPS are allowed to vary, although neutrino constraints are more sensitive to PPS features if they are known a priori to exist at fixed intervals in log k. Although there is no support for a non-standard neutrino sector from constraints on both neutrino mass and relativistic energy density, we see surprisingly strong evidence for features in the PPS when it is constrained with data from Planck 2015, SZ cluster counts, and recent high-precision local measurements of H_0. Conversely, combining Planck with matter power spectrum and BAO measurements yields a much weaker constraint. Given that this result is sensitive to the choice of data, this tension between SZ cluster counts, Planck, and H_0 measurements is likely an indication of unmodeled systematic bias that mimics PPS features, rather than new physics in the PPS or neutrino sector.

  5. Modeling phytoplankton community in reservoirs. A comparison between taxonomic and functional groups-based models.

    PubMed

    Di Maggio, Jimena; Fernández, Carolina; Parodi, Elisa R; Diaz, M Soledad; Estrada, Vanina

    2016-01-01

    In this paper we address the formulation of two mechanistic water quality models that differ in the way the phytoplankton community is described. We carry out parameter estimation subject to differential-algebraic constraints and validation for each model, and a comparison of model performance. The first approach aggregates phytoplankton species based on their phylogenetic characteristics (Taxonomic group model) and the second on their morpho-functional properties following Reynolds' classification (Functional group model). The latter approach takes into account tolerance and sensitivity to environmental conditions. The constrained parameter estimation problems are formulated within an equation-oriented framework, with a maximum likelihood objective function. The study site is Paso de las Piedras Reservoir (Argentina), which supplies drinking water to a population of 450,000. Numerical results show that phytoplankton morpho-functional groups more closely represent the growth requirements of each species within the group. Each model's performance is quantitatively assessed by three diagnostic measures. Parameter estimation results for the seasonal dynamics of the phytoplankton community and the main biogeochemical variables over a one-year time horizon are presented and compared for both models, showing the enhanced performance of the functional group model. Finally, we explore increasing nutrient loading scenarios and predict their effect on phytoplankton dynamics throughout a one-year time horizon. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. The Results of MINOS and the Future with MINOS+

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timmons, A.

    The MINOS experiment took data from 2005 until 2012, continuing beyond that as the MINOS+ experiment. The experiment is a two-detector, on-axis, long-baseline experiment, sending neutrinos from Fermilab to the Soudan Underground Laboratory in northern Minnesota. By searching for a deficit of muon neutrinos at the Far Detector, MINOS/MINOS+ is sensitive to the atmospheric neutrino oscillation parameters Δm²_32 and θ_23. Using the full MINOS data set, looking at both ν_μ disappearance and ν_e appearance in both neutrino and antineutrino configurations of the NuMI beam, along with atmospheric neutrino data recorded at the FD, MINOS has made the most precise measurement of Δm²_32. Using a full three-flavour framework and searching for ν_e appearance, MINOS/MINOS+ gains sensitivity to θ_13, the mass hierarchy, and the octant of θ_23. Exotic phenomena are also explored with the MINOS detectors, looking for nonstandard interactions and sterile neutrinos. The current MINOS+ era goals are to build on the previous MINOS results, improving the precision of the three-flavour oscillation parameter measurements and strengthening the constraints placed on the sterile neutrino parameter space.

  7. The Results of MINOS and the Future with MINOS+

    DOE PAGES

    Timmons, A.

    2016-01-01

    The MINOS experiment took data from 2005 until 2012, continuing beyond that as the MINOS+ experiment. The experiment is a two-detector, on-axis, long-baseline experiment, sending neutrinos from Fermilab to the Soudan Underground Laboratory in northern Minnesota. By searching for a deficit of muon neutrinos at the Far Detector, MINOS/MINOS+ is sensitive to the atmospheric neutrino oscillation parameters Δm²_32 and θ_23. Using the full MINOS data set, looking at both ν_μ disappearance and ν_e appearance in both neutrino and antineutrino configurations of the NuMI beam, along with atmospheric neutrino data recorded at the FD, MINOS has made the most precise measurement of Δm²_32. Using a full three-flavour framework and searching for ν_e appearance, MINOS/MINOS+ gains sensitivity to θ_13, the mass hierarchy, and the octant of θ_23. Exotic phenomena are also explored with the MINOS detectors, looking for nonstandard interactions and sterile neutrinos. The current MINOS+ era goals are to build on the previous MINOS results, improving the precision of the three-flavour oscillation parameter measurements and strengthening the constraints placed on the sterile neutrino parameter space.

  8. Size-distribution analysis of macromolecules by sedimentation velocity ultracentrifugation and lamm equation modeling.

    PubMed

    Schuck, P

    2000-03-01

    A new method for the size-distribution analysis of polymers by sedimentation velocity analytical ultracentrifugation is described. It exploits the ability of Lamm equation modeling to discriminate between the spreading of the sedimentation boundary arising from sample heterogeneity and from diffusion. Finite element solutions of the Lamm equation for a large number of discrete noninteracting species are combined with maximum entropy regularization to represent a continuous size-distribution. As in the program CONTIN, the parameter governing the regularization constraint is adjusted by variance analysis to a predefined confidence level. Estimates of the partial specific volume and the frictional ratio of the macromolecules are used to calculate the diffusion coefficients, resulting in relatively high-resolution sedimentation coefficient distributions c(s) or molar mass distributions c(M). It can be applied to interference optical data that exhibit systematic noise components, and it does not require solution or solvent plateaus to be established. More details on the size-distribution can be obtained than from van Holde-Weischet analysis. The sensitivity to the values of the regularization parameter and to the shape parameters is explored with the help of simulated sedimentation data of discrete and continuous model size distributions, and by applications to experimental data of continuous and discrete protein mixtures.
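At its core, the c(s) analysis is a regularized linear inversion: the observed boundary is fit as a nonnegative superposition of per-species model curves. The sketch below substitutes a generic Gaussian kernel for the finite-element Lamm-equation solutions and a Tikhonov penalty for the maximum-entropy regularizer, so it illustrates only the inversion step, not the method itself:

```python
import numpy as np

def regularized_distribution(M, d, alpha=0.01, iters=5000):
    """Minimize 0.5*||M c - d||^2 + 0.5*alpha*||c||^2 subject to c >= 0
    by projected gradient descent.  The columns of M stand in for
    simulated single-species signals; alpha weights the regularizer."""
    c = np.zeros(M.shape[1])
    L = np.linalg.norm(M, 2) ** 2 + alpha      # Lipschitz constant of the gradient
    for _ in range(iters):
        grad = M.T @ (M @ c - d) + alpha * c
        c = np.maximum(c - grad / L, 0.0)      # gradient step, then projection
    return c
```

In the actual method the regularization weight is tuned by variance analysis to a predefined confidence level, as in CONTIN.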

  9. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using a MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
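A generic PSO loop of the kind the authors apply can be written in a few lines; the inertia and acceleration constants, swarm size, and sphere test function below are illustrative choices, not the SAG calibration setup:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimizer: each particle tracks its personal
    best and the swarm shares a global best.  Constants are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.72, 1.5, 1.5            # inertia and acceleration weights
    for _ in range(iters):
        r1 = rng.random(x.shape); r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better] = x[better]; pbest_f[better] = fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())
```

For SAM calibration, f would evaluate the (negative log-)likelihood of the model against the observed stellar mass function and black hole to bulge mass relation at a given parameter vector.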

  10. Dopamine cells respond to predicted events during classical conditioning: evidence for eligibility traces in the reward-learning network.

    PubMed

    Pan, Wei-Xing; Schmidt, Robert; Wickens, Jeffery R; Hyland, Brian I

    2005-06-29

    Behavioral conditioning of cue-reward pairing results in a shift of midbrain dopamine (DA) cell activity from responding to the reward to responding to the predictive cue. However, the precise time course and mechanism underlying this shift remain unclear. Here, we report a combined single-unit recording and temporal difference (TD) modeling approach to this question. The data from recordings in conscious rats showed that DA cells retain responses to predicted reward after responses to conditioned cues have developed, at least early in training. This contrasts with previous TD models that predict a gradual stepwise shift in latency with responses to rewards lost before responses develop to the conditioned cue. By exploring the TD parameter space, we demonstrate that the persistent reward responses of DA cells during conditioning are only accurately replicated by a TD model with long-lasting eligibility traces (nonzero values for the parameter lambda) and low learning rate (alpha). These physiological constraints for TD parameters suggest that eligibility traces and low per-trial rates of plastic modification may be essential features of neural circuits for reward learning in the brain. Such properties enable rapid but stable initiation of learning when the number of stimulus-reward pairings is limited, conferring significant adaptive advantages in real-world environments.
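The eligibility-trace mechanism the authors constrain can be illustrated with a standard tabular TD(λ) update on a cue-reward chain; this is a textbook sketch with assumed parameter values, not the authors' model of dopamine responses:

```python
import numpy as np

def td_lambda(n_states=10, episodes=200, alpha=0.05, lam=0.9, gamma=0.98):
    """Tabular TD(lambda) on a deterministic chain: state 0 is the cue and
    the final transition delivers a reward of 1.  Accumulating eligibility
    traces (nonzero lam) spread each prediction error back along the chain."""
    V = np.zeros(n_states)                  # state values; last state is terminal
    for _ in range(episodes):
        e = np.zeros(n_states)              # eligibility trace
        for s in range(n_states - 1):
            r = 1.0 if s == n_states - 2 else 0.0
            delta = r + gamma * V[s + 1] - V[s]   # TD error (terminal V stays 0)
            e *= gamma * lam                # decay all traces
            e[s] += 1.0                     # accumulate at the visited state
            V += alpha * delta * e          # credit every recently visited state
        # (episode ends on entering the terminal state)
    return V

V = td_lambda()
```

With λ near 1 and small α, credit reaches the cue state within a few trials while per-trial value changes stay small, the regime the recordings are argued to favor.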

  11. V3885 Sagittarius: A Comparison With a Range of Standard Model Accretion Disks

    NASA Technical Reports Server (NTRS)

    Linnell, Albert P.; Godon, Patrick; Hubeny, Ivan; Sion, Edward M; Szkody, Paula; Barrett, Paul E.

    2009-01-01

    A chi-squared analysis of standard model accretion disk synthetic spectrum fits to combined Far Ultraviolet Spectroscopic Explorer and Space Telescope Imaging Spectrograph spectra of V3885 Sagittarius, on an absolute flux basis, selects a model that accurately represents the observed spectral energy distribution. Calculation of the synthetic spectrum requires the following system parameters. The cataclysmic variable secondary star period-mass relation calibrated by Knigge in 2006 and 2007 sets the secondary component mass. A mean white dwarf (WD) mass from the same study, which is consistent with an observationally determined mass ratio, sets the adopted WD mass of 0.7 M_sun, and the WD radius follows from standard theoretical models. The adopted inclination, i = 65 deg, is a literature consensus, and is subsequently supported by chi-squared analysis. The mass transfer rate is the remaining parameter to set the accretion disk T_eff profile, and the Hipparcos parallax constrains that parameter to Ṁ = (5.0 ± 2.0) × 10^-9 M_sun/yr by a comparison with observed spectra. The fit to the observed spectra adopts the contribution of a 57,000 ± 5000 K WD. The model thus provides realistic constraints on the mass transfer rate and T_eff for a large mass transfer system above the period gap.

  12. Effects of ordinary and superconducting cosmic strings on primordial nucleosynthesis

    NASA Technical Reports Server (NTRS)

    Hodges, Hardy M.; Turner, Michael S.

    1988-01-01

    A precise calculation is done of the primordial nucleosynthesis constraint on the energy per length of ordinary and superconducting cosmic strings. A general formula is provided for the constraint on the string tension for ordinary strings. Using the current values for the various parameters that describe the evolution of loops, the constraint for ordinary strings is Gμ < 2.2 × 10^-5. Our constraint is weaker than previously quoted limits by a factor of approximately 5. For superconducting loops, with currents generated by primordial magnetic fields, the constraint can be less or more stringent than this limit, depending on the strength of the magnetic field. It is also found in this case that there is a negligible amount of entropy production if the electromagnetic radiation from strings thermalizes with the radiation background.

  13. Numerical difficulties associated with using equality constraints to achieve multi-level decomposition in structural optimization

    NASA Technical Reports Server (NTRS)

    Thareja, R.; Haftka, R. T.

    1986-01-01

    There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.

  14. The Ostrogradsky Prescription for BFV Formalism

    NASA Astrophysics Data System (ADS)

    Nirov, Khazret S.

    Gauge-invariant systems of a general form with higher order time derivatives of gauge parameters are investigated within the framework of the BFV formalism. Higher order terms of the BRST charge and BRST-invariant Hamiltonian are obtained. It is shown that the identification rules for Lagrangian and Hamiltonian BRST ghost variables depend on the choice of the extension of constraints from the primary constraint surface.

  15. A systematic linear space approach to solving partially described inverse eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Hu, Sau-Lon James; Li, Haujun

    2008-06-01

    Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved in situations that have either unique or infinitely many solutions.
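For the symmetric Toeplitz case, the conversion to simultaneous linear equations is immediate, since each entry of Tv is linear in the first-row entries t_k; the sketch below solves the resulting system with an SVD-based least-squares routine. It is a toy instance of the conversion idea, not the paper's full procedure:

```python
import numpy as np

def toeplitz_from_eigenpair(lam, v):
    """Recover the first row t of a symmetric Toeplitz matrix
    (T[i,j] = t[|i-j|]) from one prescribed eigenpair (lam, v):
    T v = lam v is rewritten as linear equations A t = lam v and
    solved via the SVD-based least-squares routine."""
    n = v.size
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            A[i, abs(i - j)] += v[j]   # coefficient of t[|i-j|] in (T v)_i
    t, *_ = np.linalg.lstsq(A, lam * v, rcond=None)
    return t
```

Additional constraints on the model parameters could be appended as extra rows of the same linear system, which is the flexibility the conversion is meant to provide.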

  16. Estimation of Random Medium Parameters from 2D Post-Stack Seismic Data and Its Application in Seismic Inversion

    NASA Astrophysics Data System (ADS)

    Yang, X.; Zhu, P.; Gu, Y.; Xu, Z.

    2015-12-01

    Small-scale heterogeneities of the subsurface medium can be characterized conveniently and effectively using a few simple random medium parameters (RMP), such as the autocorrelation length, angle, and roughness factor. The estimation of these parameters is significant in both oil reservoir prediction and metallic mine exploration. The poor accuracy and low stability of current estimation approaches limit the application of random medium theory in seismic exploration. This study focuses on improving the accuracy and stability of RMP estimation from post-stack seismic data and its application in seismic inversion. Experiments and theoretical analysis indicate that, although the autocorrelation of a random medium is related to that of the corresponding post-stack seismic data, the relationship is markedly affected by the seismic dominant frequency, the autocorrelation length, the roughness factor, and so on. The error in computing the autocorrelation for a finite, discrete model also decreases the accuracy. In order to improve the precision of RMP estimation, we design two improved approaches. First, we apply a region growing algorithm, often used in image processing, to reduce the influence of noise in the autocorrelation calculated by the power spectrum method. Second, the orientation of the autocorrelation is used as a new constraint in the estimation algorithm. Numerical experiments prove that this is feasible. In addition, in post-stack seismic inversion of random media, the estimated RMP may be used to constrain the inversion procedure and to construct the initial model. The experimental results indicate that treating the inverted model as a random medium and using relatively accurate estimated RMP to construct the initial model yields a better inversion result, containing more detail consistent with the actual subsurface medium.
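The "power spectrum method" step rests on the Wiener-Khinchin relation: the autocorrelation of a field is the inverse Fourier transform of its power spectrum. A minimal 2D sketch of that step alone (the region-growing cleanup and the orientation constraint proposed in the abstract are not reproduced):

```python
import numpy as np

def autocorr_via_power_spectrum(field):
    """2D (circular) autocorrelation via the Wiener-Khinchin relation:
    inverse FFT of the power spectrum, normalized to 1 at zero lag and
    shifted so the central peak sits in the middle of the array."""
    f = field - field.mean()            # remove the mean before correlating
    F = np.fft.fft2(f)
    power = np.abs(F) ** 2              # power spectrum
    ac = np.fft.ifft2(power).real       # circular autocorrelation
    ac /= ac[0, 0]                      # normalize zero-lag to 1
    return np.fft.fftshift(ac)
```

The RMP (autocorrelation length, angle, roughness) would then be fit to the shape and orientation of the central peak of the returned array.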

  17. Super-Eddington accreting massive black holes explore high-z cosmology: Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Cai, Rong-Gen; Guo, Zong-Kuan; Huang, Qing-Guo; Yang, Tao

    2018-06-01

    In this paper, we simulate Super-Eddington accreting massive black holes (SEAMBHs) as the candles to probe cosmology for the first time. SEAMBHs have been demonstrated to be able to provide a new tool for estimating cosmological distance. Thus, we create a series of mock data sets of SEAMBHs, especially in the high redshift region, to check their abilities to probe the cosmology. To fulfill the potential of the SEAMBHs on the cosmology, we apply the simulated data to three projects. The first is the exploration of their abilities to constrain the cosmological parameters, in which we combine different data sets of current observations such as the cosmic microwave background from Planck and type Ia supernovae from Joint Light-curve Analysis (JLA). We find that the high redshift SEAMBHs can help to break the degeneracies of the background cosmological parameters constrained by Planck and JLA, thus giving much tighter constraints of the cosmological parameters. The second uses the high redshift SEAMBHs as the complements of the low redshift JLA to constrain the early expansion rate and the dark energy density evolution in the cold dark matter frame. Our results show that these high redshift SEAMBHs are very powerful on constraining the early Hubble rate and the evolution of the dark energy density; thus they can give us more information about the expansion history of our Universe, which is also crucial for testing the Λ CDM model in the high redshift region. Finally, we check the SEAMBH candles' abilities to reconstruct the equation of state for dark energy at high redshift. In summary, our results show that the SEAMBHs, as the rare candles in the high redshift region, can provide us a new and independent observation to probe cosmology in the future.

  18. Observational constraints on secret neutrino interactions from big bang nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Huang, Guo-yuan; Ohlsson, Tommy; Zhou, Shun

    2018-04-01

    We investigate possible interactions between neutrinos and massive scalar bosons via g_ϕ ν̄ν ϕ (or massive vector bosons via g_V ν̄γ^μν V_μ) and explore the allowed parameter space of the coupling constant g_ϕ (or g_V) and the scalar (or vector) boson mass m_ϕ (or m_V) by requiring that these secret neutrino interactions (SNIs) should not spoil the success of big bang nucleosynthesis (BBN). Incorporating the SNIs into the evolution of the early Universe in the BBN era, we numerically solve the Boltzmann equations and compare the predictions for the abundances of light elements with observations. It turns out that the constraint on g_ϕ and m_ϕ in the scalar-boson case is rather weak, due to a small number of degrees of freedom (d.o.f.). However, in the vector-boson case, the most stringent bound on the coupling, g_V ≲ 6 × 10^-10 at 95% confidence level, is obtained for m_V ≃ 1 MeV, while the bound becomes much weaker, g_V ≲ 8 × 10^-6, for smaller masses m_V ≲ 10^-4 MeV. Moreover, we discuss in some detail how the SNIs affect the cosmological evolution and the abundances of the lightest elements.

  19. Investigation of optimization-based reconstruction with an image-total-variation constraint in PET

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan

    2016-08-01

    Interest remains in reconstruction-algorithm research and development for possible improvement of image quality in current PET imaging and for enabling innovative PET systems to enhance existing, and facilitate new, preclinical and clinical applications. Optimization-based image reconstruction has in recent years been demonstrated to be of potential utility for CT imaging applications. In this work, we investigate tailoring the optimization-based techniques to image reconstruction for PET systems with standard and non-standard scan configurations. Specifically, given an image-total-variation (TV) constraint, we investigated how the selection of different data divergences and associated parameters impacts the optimization-based reconstruction of PET images. Reconstruction robustness was also explored with respect to different data conditions and activity uptakes of practical relevance. A study was conducted particularly for image reconstruction from data collected by use of a PET configuration with sparsely populated detectors. Overall, the study demonstrates the robustness of the TV-constrained, optimization-based reconstruction for considerably different data conditions in PET imaging, as well as its potential to enable PET configurations with reduced numbers of detectors. Insights gained in the study may be exploited for developing algorithms for PET-image reconstruction and for enabling PET-configuration design of practical usefulness in preclinical and clinical applications.
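To make the total-variation ingredient concrete, here is a 1D denoising toy that minimizes a data-fidelity term plus a smoothed TV penalty by subgradient descent with diminishing steps; the paper solves a constrained formulation for PET data, so this illustrates only the TV term, not the reconstruction algorithm itself:

```python
import numpy as np

def tv_denoise_1d(y, lam=2.0, eps=1e-8, iters=3000):
    """Descend on 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
    a smoothed total-variation penalty, using diminishing step sizes.
    The penalty drives the result toward piecewise-constant structure."""
    x = y.copy()
    for k in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)     # (smoothed) gradient of the TV term
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= g
        tv_grad[1:] += g
        step = 0.2 / np.sqrt(k + 1.0)    # diminishing step for stability
        x -= step * ((x - y) + lam * tv_grad)
    return x
```

Larger `lam` flattens the result more aggressively, which is why TV constraints tolerate the sparse and noisy data conditions discussed above.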

  20. Development and Implementation of an End-Effector Upper Limb Rehabilitation Robot for Hemiplegic Patients with Line and Circle Tracking Training

    PubMed Central

    Li, Chong; Bi, Sheng; Zhang, Xuemin; Huo, Jianfei

    2017-01-01

    Numerous robots have been widely used to deliver rehabilitative training for hemiplegic patients to improve their functional ability. Because of the complexity and diversity of upper limb motion, customization of training patterns is one key factor during upper limb rehabilitation training. Most current rehabilitation robots cannot intelligently provide adaptive training parameters, and they have not been widely used in clinical rehabilitation. This article proposes a new end-effector upper limb rehabilitation robot, which is a two-link robotic arm with two active degrees of freedom. This work investigated the kinematics and dynamics of the robot system, the control system, and the realization of different rehabilitation therapies. We also explored the influence of constraint in rehabilitation therapies on interaction force and muscle activation. The deviation between the trajectory of the end effector and the required trajectory was less than 1 mm during the tasks, which demonstrated the movement accuracy of the robot. In addition, the results demonstrated that the constraint exerted by the robot provided benefits for hemiplegic patients by changing muscle activation in a way similar to the movement pattern of healthy subjects, which indicates that the robot can improve a patient's functional ability by training the normal movement pattern. PMID:29065614

  1. MEASURING NEUTRON STAR RADII VIA PULSE PROFILE MODELING WITH NICER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Özel, Feryal; Psaltis, Dimitrios; Bauböck, Michi

    2016-11-20

    The Neutron-star Interior Composition Explorer is an X-ray astrophysics payload that will be placed on the International Space Station. Its primary science goal is to measure with high accuracy the pulse profiles that arise from the non-uniform thermal surface emission of rotation-powered pulsars. Modeling general relativistic effects on the profiles will lead to measuring the radii of these neutron stars and to constraining their equation of state. Achieving this goal will depend, among other things, on accurate knowledge of the source, sky, and instrument backgrounds. We use here simple analytic estimates to quantify the level at which these backgrounds need to be known in order for the upcoming measurements to provide significant constraints on the properties of neutron stars. We show that, even in the minimal-information scenario, knowledge of the background at the few percent level for a background-to-source count rate ratio of 0.2 allows for a measurement of the neutron star compactness to better than 10% uncertainty for most of the parameter space. These constraints improve further when more realistic assumptions are made about the neutron star emission and spin, and when additional information about the source itself, such as its mass or distance, is incorporated.

  2. Development and Implementation of an End-Effector Upper Limb Rehabilitation Robot for Hemiplegic Patients with Line and Circle Tracking Training.

    PubMed

    Liu, Yali; Li, Chong; Ji, Linhong; Bi, Sheng; Zhang, Xuemin; Huo, Jianfei; Ji, Run

    2017-01-01

    Numerous robots have been widely used to deliver rehabilitative training for hemiplegic patients to improve their functional ability. Because of the complexity and diversity of upper limb motion, customization of training patterns is one key factor during upper limb rehabilitation training. Most current rehabilitation robots cannot intelligently provide adaptive training parameters, and they have not been widely used in clinical rehabilitation. This article proposes a new end-effector upper limb rehabilitation robot, which is a two-link robotic arm with two active degrees of freedom. This work investigated the kinematics and dynamics of the robot system, the control system, and the realization of different rehabilitation therapies. We also explored the influence of constraint in rehabilitation therapies on interaction force and muscle activation. The deviation between the trajectory of the end effector and the required trajectory was less than 1 mm during the tasks, which demonstrated the movement accuracy of the robot. Moreover, the results also demonstrated that the constraint exerted by the robot provided benefits for hemiplegic patients by changing muscle activation in a way similar to the movement pattern of the healthy subjects, which indicated that the robot can improve the patient's functional ability by training the normal movement pattern. PMID:29065614
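
The planar two-link geometry described above has a standard closed-form forward kinematics; a minimal sketch (the link lengths here are illustrative, not the robot's actual dimensions):

```python
import math

def forward_kinematics(theta1, theta2, l1=0.35, l2=0.30):
    """End-effector (x, y) of a planar two-link arm.

    theta1: shoulder angle measured from the x-axis (rad)
    theta2: elbow angle relative to the first link (rad)
    l1, l2: link lengths in metres (illustrative values only)
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

A deviation figure like the sub-millimetre one quoted above comes from comparing measured end-effector positions against such a commanded trajectory.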

  3. Non-Gaussianity of the cosmic infrared background anisotropies - II. Predictions of the bispectrum and constraints forecast

    NASA Astrophysics Data System (ADS)

    Pénin, A.; Lacasa, F.; Aghanim, N.

    2014-03-01

    Using a full analytical computation of the bispectrum based on the halo model together with the halo occupation number, we derive the bispectrum of the cosmic infrared background (CIB) anisotropies that trace the clustering of dusty star-forming galaxies. We focus our analysis on wavelengths in the far-infrared and the sub-millimeter typical of the Planck/HFI and Herschel/SPIRE instruments: 350, 550, 850 and 1380 μm. We explore the bispectrum behaviour as a function of several models of galaxy evolution and show that it is strongly sensitive to that ingredient. Contrary to the power spectrum, the bispectrum at the four wavelengths appears dominated by low-redshift galaxies, and such a contribution can hardly be reduced by applying low flux cuts. We also discuss the contributions of halo mass as a function of redshift and wavelength, recovering that each term is sensitive to a different mass range. Furthermore, we show that the CIB bispectrum is a strong contaminant of the cosmic microwave background bispectrum at 850 μm and higher. Finally, a Fisher analysis of the power spectrum, of the bispectrum alone, and of the combination of both shows that degeneracies on the halo occupation distribution parameters are broken by including the bispectrum information, leading to tight constraints even when including foreground residuals.
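
The degeneracy-breaking argument can be illustrated numerically: Fisher matrices of independent probes add, and combining two probes whose degeneracy directions differ shrinks the marginalized errors. The matrices below are made-up two-parameter examples, not the paper's actual forecasts:

```python
import numpy as np

# Illustrative 2-parameter Fisher matrices (invented numbers).
F_power = np.array([[40.0, 30.0], [30.0, 25.0]])     # power spectrum: strong degeneracy
F_bispec = np.array([[30.0, -20.0], [-20.0, 25.0]])  # bispectrum: different degeneracy

def marginalized_errors(F):
    # 1-sigma marginalized errors: sqrt of the diagonal of the inverse Fisher matrix
    return np.sqrt(np.diag(np.linalg.inv(F)))

err_p = marginalized_errors(F_power)
err_combined = marginalized_errors(F_power + F_bispec)  # Fisher information adds
```

Because the two probes constrain different parameter combinations, the combined errors are tighter on both parameters than either probe alone.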

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Yong-Seon; Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Portsmouth, PO1 3FX; Zhao Gongbo

    We explore the complementarity of weak lensing and galaxy peculiar velocity measurements to better constrain modifications to General Relativity. We find no evidence for deviations from General Relativity on cosmological scales from a combination of peculiar velocity measurements (for Luminous Red Galaxies in the Sloan Digital Sky Survey) with weak lensing measurements (from the Canada-France-Hawaii Telescope Legacy Survey). We provide a Fisher error forecast for a Euclid-like space-based survey including both lensing and peculiar velocity measurements and show that the expected constraints on modified gravity will be at least an order of magnitude better than with present data, i.e. we will obtain ≃5% errors on the modified gravity parametrization described here. We also present a model-independent method for constraining modified gravity parameters using tomographic peculiar velocity information, and apply this methodology to the present data set.

  5. Speedup for quantum optimal control from automatic differentiation based on graphics processing units

    NASA Astrophysics Data System (ADS)

    Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David

    2017-04-01

    We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speed up calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control on the evolution path, suppression of departures from the truncated model subspace, as well as minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
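
A heavily stripped-down sketch of this kind of gradient-based pulse optimization (a two-level system, with finite-difference gradients standing in for automatic differentiation and no GPU): the Hamiltonian, step sizes, and pulse length are all invented for illustration.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def step(u, dt=0.2):
    # Closed-form exp(-i*H*dt) for H = sz + u*sx, using H^2 = (1 + u^2) I
    H = sz + u * sx
    w = np.sqrt(1.0 + u * u)
    return np.cos(w * dt) * I2 - 1j * np.sin(w * dt) * H / w

def infidelity(controls):
    # Propagate |0> through the piecewise-constant pulse; target state is |1>
    psi = np.array([1.0, 0.0], dtype=complex)
    for u in controls:
        psi = step(u) @ psi
    return 1.0 - abs(psi[1]) ** 2

def optimize(n_steps=10, iters=150, lr=0.2, eps=1e-6):
    u = np.full(n_steps, 0.5)                 # arbitrary initial pulse
    best_u, best_f = u.copy(), infidelity(u)
    for _ in range(iters):
        base = infidelity(u)
        g = np.zeros(n_steps)
        for k in range(n_steps):              # forward-difference gradient
            up = u.copy()
            up[k] += eps
            g[k] = (infidelity(up) - base) / eps
        u = u - lr * g                        # plain gradient descent
        f = infidelity(u)
        if f < best_f:
            best_f, best_u = f, u.copy()
    return best_u

u_opt = optimize()
```

Automatic differentiation replaces the inner finite-difference loop with one exact backward pass, which is where the paper's speedup on GPUs comes from.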

  6. A reduced order model to analytically infer atmospheric CO2 concentration from stomatal and climate data

    NASA Astrophysics Data System (ADS)

    Konrad, Wilfried; Katul, Gabriel; Roth-Nebelsick, Anita; Grein, Michaela

    2017-06-01

    To address questions related to the acceleration or deceleration of the global hydrological cycle or links between the carbon and water cycles over land, reliable data for past climatic conditions based on proxies are required. In particular, the reconstruction of palaeoatmospheric CO2 content (Ca) is needed to assist the separation of natural from anthropogenic Ca variability and to explore phase relations between Ca and air temperature Ta time series. Both Ta and Ca are needed to fingerprint anthropogenic signatures in vapor pressure deficit, a major driver used to explain acceleration or deceleration phases in the global hydrological cycle. Current approaches to Ca reconstruction rely on a robust inverse correlation between measured stomatal density in leaves (ν) of many plant taxa and Ca. There are two methods that exploit this correlation: The first uses calibration curves obtained from extant species assumed to represent the fossil taxa, thereby restricting the suitable taxa to those existing today. The second is a hybrid eco-hydrological/physiological approach that determines Ca with the aid of systems of equations based on quasi-instantaneous leaf-gas exchange theories and fossil stomatal data collected along with other measured leaf anatomical traits and parameters. In this contribution, a reduced order model (ROM) is proposed that derives Ca from a single equation incorporating the aforementioned stomatal data, basic climate (e.g. temperature), estimated biochemical parameters of assimilation and isotope data. The usage of the ROM is then illustrated by applying it to isotopic and anatomical measurements from three extant species. The ROM derivation is based on a balance between the biochemical demand and atmospheric supply of CO2 that leads to an explicit expression linking stomatal conductance to internal CO2 concentration (Ci) and Ca. 
The resulting expression of stomatal conductance from the carbon economy of the leaf is then equated to another expression derived from water vapor gas diffusion that includes anatomical traits. When combined with isotopic measurements for long-term Ci/Ca, Ca can be analytically determined and is interpreted as the time-averaged Ca that existed over the life-span of the leaf. Key advantages of the proposed ROM are: 1) the usage of isotopic data provides constraints on the reconstructed atmospheric CO2 concentration from ν, 2) the analytical form of this approach permits direct links between parameter uncertainties and reconstructed Ca, and 3) the time-scale mismatch between the application of instantaneous leaf-gas exchange expressions constrained with longer-term isotopic data is reconciled through averaging rules and sensitivity analysis. The latter point was rarely considered in prior reconstruction studies that combined models of leaf-gas exchange and isotopic data to reconstruct Ca from ν. The proposed ROM is not without its limitations given the need to a priori assume a parameter related to the control on photosynthetic rate. The work here further explores immanent constraints for the aforementioned photosynthetic parameter.
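
The supply-side balance at the core of the ROM can be illustrated in one line: equating biochemical demand A to diffusive supply g_c·Ca·(1 − Ci/Ca) and solving for Ca. The function below is a hedged sketch of that final inversion only, with made-up input values; the actual ROM also derives A and the conductance from anatomy, climate, and biochemistry.

```python
def reconstruct_ca(assimilation, g_co2, chi):
    """Illustrative supply-side inversion (not the paper's full ROM).

    assimilation: CO2 demand A (mol m^-2 s^-1), here assumed known
    g_co2:        stomatal conductance to CO2 (mol m^-2 s^-1), from anatomy
    chi:          long-term Ci/Ca inferred from carbon-isotope data
    Balance A = g_co2 * Ca * (1 - chi)  =>  Ca = A / (g_co2 * (1 - chi)).
    Returns Ca as a mole fraction (multiply by 1e6 for ppm).
    """
    return assimilation / (g_co2 * (1.0 - chi))
```

With the invented values A = 10 μmol m⁻² s⁻¹, g = 0.1 mol m⁻² s⁻¹, and Ci/Ca = 0.7, this yields roughly 333 ppm.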

  7. Observational constraints on cosmological models with Chaplygin gas and quadratic equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharov, G.S., E-mail: german.sharov@mail.ru

    Observational manifestations of accelerated expansion of the universe, in particular recent data for Type Ia supernovae, baryon acoustic oscillations, the Hubble parameter H(z), and cosmic microwave background constraints, are described with different cosmological models. We compare the ΛCDM, the models with generalized and modified Chaplygin gas, and the model with quadratic equation of state. For these models we estimate optimal model parameters and their permissible errors with different approaches to calculation of the sound horizon scale r_s(z_d). Among the considered models the best value of χ² is achieved for the model with quadratic equation of state, but it has 2 additional parameters in comparison with the ΛCDM and therefore is not favored by the Akaike information criterion.
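
The Akaike comparison invoked above penalizes each extra free parameter by 2 in the criterion AIC = χ²_min + 2k, so an extra parameter must improve χ² by more than 2 to be favored; a toy illustration with invented χ² values:

```python
def aic(chi2_min, n_params):
    """Akaike information criterion in its common chi-square form."""
    return chi2_min + 2 * n_params

# Invented numbers: a 7-parameter model that improves chi^2 by only 2
# over a 5-parameter model loses the AIC comparison (594 vs 592).
aic_quadratic = aic(580.0, 7)
aic_lcdm = aic(582.0, 5)
```

Lower AIC wins, so here the simpler model is preferred despite its slightly worse fit.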

  8. Hot as You Like It: Models of the Long-term Temperature History of Earth Under Different Geological Assumptions

    NASA Astrophysics Data System (ADS)

    Domagal-Goldman, S.; Sheldon, N. D.

    2012-12-01

    The long-term temperature history of the Earth is a subject of continued, vigorous debate. Past models of the climate of early Earth that utilize paleosol constraints on carbon dioxide struggle to maintain temperatures significantly greater than 0°C. In these models, the incoming stellar radiation is much lower than today, consistent with an expectation that the Sun was significantly fainter at that time. In contrast to these models, many proxies for ancient temperatures suggest much warmer conditions. The surface of the planet seems to have been generally free of glaciers throughout this period, other than a brief glaciation at ~2.9 billion years ago and extensive glaciation at ~2.4 billion years ago. Such glacier-free conditions suggest mean surface temperatures greater than 15°C. Measurements of oxygen isotopes in phosphates are consistent with temperatures in the range of 20-30°C; and similar measurements in cherts suggest temperatures over 50°C. This sets up a paradox. Models constrained by one set of geological proxies cannot reproduce the warm temperatures consistent with another set of geological proxies. In this presentation, we explore several potential resolutions to this paradox. First, we model the early Earth under modern-day conditions, but with the lower solar luminosity expected at the time. The next simulation allows carbon dioxide concentrations to increase up to the limits provided by paleosol constraints. Next, we lower the planet's surface albedo in a manner consistent with greater ocean coverage prior to the complete growth of continents. Finally, we remove all constraints on carbon dioxide and attempt to maximize surface temperatures without any geological constraints on model parameters. This set of experiments will allow us to set up potential resolutions to the paradox, and to drive a conversation on which solutions are capable of incorporating the greatest number of geological and geochemical constraints.

  9. Cosmological CPT violation and CMB polarization measurements

    NASA Astrophysics Data System (ADS)

    Xia, Jun-Qing

    2012-01-01

    In this paper we study the possibility of testing Charge-Parity-Time Reversal (CPT) symmetry with cosmic microwave background (CMB) experiments. We consider two kinds of Chern-Simons (CS) term, the electromagnetic CS term and the gravitational CS term, and study their effects on the CMB polarization power spectra in detail. By combining current CMB polarization measurements, the seven-year WMAP, BOOMERanG 2003 and BICEP observations, we obtain a tight constraint on the rotation angle Δα = -2.28±1.02 deg (1σ), indicating a 2.2σ detection of the CPT violation. Here, we particularly take the systematic errors of CMB measurements into account. After adding the QUaD polarization data, the constraint becomes -1.34 < Δα < 0.82 deg at 95% confidence level. Compared with the electromagnetic CS term, the gravitational CS term can only generate TB and EB power spectra with much smaller amplitude; therefore, the induced parameter epsilon cannot be constrained from the current polarization data. Furthermore, we study the capabilities of future CMB measurements, Planck and CMBPol, to constrain Δα and epsilon. We find that the constraint on Δα can be significantly improved, by a factor of 15. Therefore, if this rotation angle effect is not taken into account properly, the constraints on cosmological parameters will be obviously biased. For the gravitational CS term, the future Planck data still cannot constrain epsilon very well if the primordial tensor perturbations are small, r < 0.1. We need the more accurate CMBPol experiment to give a better constraint on epsilon.
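
The effect of an isotropic rotation angle on the polarization spectra has a standard closed form (sign conventions vary between papers); a small sketch with arbitrary input spectra:

```python
import math

def rotated_spectra(cl_te, cl_ee, cl_bb, alpha_deg):
    """Observed CMB spectra after rotation of the polarization plane by alpha.

    A nonzero alpha generates TB and EB power from TE and EE/BB; sign
    conventions for TB/EB differ in the literature. Inputs are single
    multipole values here, purely for illustration.
    """
    a = math.radians(alpha_deg)
    return {
        "TB": cl_te * math.sin(2 * a),
        "EB": 0.5 * (cl_ee - cl_bb) * math.sin(4 * a),
        "EE": cl_ee * math.cos(2 * a) ** 2 + cl_bb * math.sin(2 * a) ** 2,
        "BB": cl_bb * math.cos(2 * a) ** 2 + cl_ee * math.sin(2 * a) ** 2,
    }
```

Note that the rotation only shuffles power: EE + BB is conserved, while the induced TB/EB signals are what the quoted Δα constraint is fit to.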

  10. Waste management under multiple complexities: Inexact piecewise-linearization-based fuzzy flexible programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei; Huang, Guo H., E-mail: huang@iseis.org; Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan, S4S 0A2

    2012-06-15

    Highlights:
    - Inexact piecewise-linearization-based fuzzy flexible programming is proposed.
    - It is the first application to waste management under multiple complexities.
    - It tackles nonlinear economies-of-scale effects in interval-parameter constraints.
    - It estimates costs more accurately than the linear-regression-based model.
    - Uncertainties are decreased and more satisfactory interval solutions are obtained.

    Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness in providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities.
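
The piecewise-linearization step at the heart of such models replaces a concave economies-of-scale cost curve with chords between breakpoints, so it fits inside a (mixed-integer) linear framework. A generic sketch with an invented cost curve, not the paper's actual waste-management data:

```python
def piecewise_linearize(f, breakpoints):
    """Piecewise-linear approximation of f over the given breakpoints:
    each segment is the chord of f between two consecutive breakpoints."""
    segs = []
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        slope = (f(b) - f(a)) / (b - a)
        segs.append((a, b, slope, f(a) - slope * a))
    def approx(x):
        for a, b, m, c in segs:
            if a <= x <= b:
                return m * x + c
        raise ValueError("x outside linearized range")
    return approx

cost = lambda x: 12.0 * x ** 0.8          # invented concave EOS cost curve
approx = piecewise_linearize(cost, [0, 50, 100, 200, 400])
```

For a concave curve each chord lies at or below the true cost between its breakpoints, which is why breakpoint placement controls how strongly costs are under-estimated.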

  11. Seeking Time within Time: Exploring the Temporal Constraints of Women Teachers' Experiences as Graduate Students and Novice Researchers

    ERIC Educational Resources Information Center

    Kukner, Jennifer Mitton

    2014-01-01

    The primary focus of this qualitative study is an inquiry into three female teachers' experiences as novice researchers. Over the course of an academic year I maintained a focus upon participants' research experiences and their use of time as they conducted research studies. Delving into the temporal constraints that informed participants'…

  12. Experimental Matching of Instances to Heuristics for Constraint Satisfaction Problems.

    PubMed

    Moreno-Scott, Jorge Humberto; Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique

    2016-01-01

    Constraint satisfaction problems are of special interest for the artificial intelligence and operations research community due to their many applications. Although heuristics involved in solving these problems have largely been studied in the past, little is known about the relation between instances and the respective performance of the heuristics used to solve them. This paper focuses on both the exploration of the instance space to identify relations between instances and good performing heuristics and how to use such relations to improve the search. Firstly, the document describes a methodology to explore the instance space of constraint satisfaction problems and evaluate the corresponding performance of six variable ordering heuristics for such instances in order to find regions on the instance space where some heuristics outperform the others. Analyzing such regions favors the understanding of how these heuristics work and contribute to their improvement. Secondly, we use the information gathered from the first stage to predict the most suitable heuristic to use according to the features of the instance currently being solved. This approach proved to be competitive when compared against the heuristics applied in isolation on both randomly generated and structured instances of constraint satisfaction problems.
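
A minimal sketch of the second stage described above, mapping instance features to a suggested heuristic; here a nearest-neighbour lookup with invented features and labels, whereas the paper's feature set and predictor are richer:

```python
import math

# Toy algorithm-selection table: (constraint density, constraint tightness)
# of previously explored instances -> the heuristic that performed best there.
# All entries are invented for illustration.
history = [
    ((0.2, 0.3), "min-domain"),
    ((0.8, 0.3), "max-degree"),
    ((0.5, 0.9), "min-domain/degree"),
]

def pick_heuristic(features):
    """Recommend the heuristic of the nearest previously solved instance."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(history, key=lambda rec: dist(rec[0], features))[1]
```

A new instance is characterized once, then routed to the heuristic expected to perform best, instead of applying one heuristic in isolation.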

  13. Experimental Matching of Instances to Heuristics for Constraint Satisfaction Problems

    PubMed Central

    Moreno-Scott, Jorge Humberto; Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique

    2016-01-01

    Constraint satisfaction problems are of special interest for the artificial intelligence and operations research community due to their many applications. Although heuristics involved in solving these problems have largely been studied in the past, little is known about the relation between instances and the respective performance of the heuristics used to solve them. This paper focuses on both the exploration of the instance space to identify relations between instances and good performing heuristics and how to use such relations to improve the search. Firstly, the document describes a methodology to explore the instance space of constraint satisfaction problems and evaluate the corresponding performance of six variable ordering heuristics for such instances in order to find regions on the instance space where some heuristics outperform the others. Analyzing such regions favors the understanding of how these heuristics work and contribute to their improvement. Secondly, we use the information gathered from the first stage to predict the most suitable heuristic to use according to the features of the instance currently being solved. This approach proved to be competitive when compared against the heuristics applied in isolation on both randomly generated and structured instances of constraint satisfaction problems. PMID:26949383

  14. The role of logistic constraints in termite construction of chambers and tunnels.

    PubMed

    Ladley, Dan; Bullock, Seth

    2005-06-21

    In previous models of the building behaviour of termites, physical and logistic constraints that limit the movement of termites and pheromones have been neglected. Here, we present an individual-based model of termite construction that includes idealized constraints on the diffusion of pheromones, the movement of termites, and the integrity of the architecture that they construct. The model allows us to explore the extent to which the results of previous idealized models (typically realised in one or two dimensions via a set of coupled partial differential equations) generalize to a physical, 3-D environment. Moreover we are able to investigate new processes and architectures that rely upon these features. We explore the role of stigmergic recruitment in pillar formation, wall building, and the construction of royal chambers, tunnels and intersections. In addition, for the first time, we demonstrate the way in which the physicality of partially built structures can help termites to achieve efficient tunnel structures and to establish and maintain entrances in royal chambers. As such we show that, in at least some cases, logistic constraints can be important or even necessary in order for termites to achieve efficient, effective constructions.

  15. Fast Nonlinear Generalized Inversion of Gravity Data with Application to the Three-Dimensional Crustal Density Structure of Sichuan Basin, Southwest China

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Meng, Xiaohong; Li, Fang

    2017-11-01

    Generalized inversion is one of the important steps in the quantitative interpretation of gravity data. With an appropriate algorithm and parameters, it gives a view of the subsurface that characterizes different geological bodies. However, generalized inversion of gravity data is time consuming because of the large number of data points and model cells involved, and incorporating various kinds of prior information as constraints worsens this situation. In the work discussed in this paper, a method for fast nonlinear generalized inversion of gravity data is proposed. The fast multipole method is employed for forward modelling. The inversion objective function is established as a weighted data misfit function together with a model objective function. The total objective function is solved by a data-space algorithm. Moreover, a depth weighting factor is used to improve the depth resolution of the result, and a bound constraint is incorporated through a transfer function to limit the model parameters to a reliable range. The matrix inversion is accomplished by a preconditioned conjugate gradient method. With the above algorithm, equivalent density vectors can be obtained, and interpolation is performed to obtain the final density model on the fine mesh in the model domain. Testing on synthetic gravity data demonstrated that the proposed method is faster than a conventional generalized inversion algorithm at producing an acceptable solution to the gravity inversion problem. The newly developed inversion method was also applied to inversion of the gravity data collected over the Sichuan basin, southwest China. The density structure established in this study helps in understanding the crustal structure of the Sichuan basin and provides a reference for further oil and gas exploration in this area.
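
The regularized structure behind such inversions can be sketched in a few lines: a data misfit plus a depth-weighted model norm, solved through the normal equations. Here a dense solve on a toy problem with an invented sensitivity matrix; the paper uses the fast multipole method and preconditioned conjugate gradients at scale.

```python
import numpy as np

# Toy inverse problem: minimize ||G m - d||^2 + mu * ||Wm m||^2, with depth
# weighting in Wm to counteract the decay of the gravity kernel with depth.
rng = np.random.default_rng(0)
n_data, n_cells = 30, 20
G = rng.standard_normal((n_data, n_cells))      # stand-in sensitivity matrix
m_true = np.zeros(n_cells)
m_true[12] = 1.0                                # one anomalous density cell
d = G @ m_true                                  # noise-free synthetic data

depth = np.arange(1, n_cells + 1, dtype=float)
Wm = np.diag(depth ** -1.5)                     # depth weighting ~ z^(-beta/2)
mu = 1e-3                                       # regularization strength

A = G.T @ G + mu * Wm.T @ Wm                    # normal-equation operator
m = np.linalg.solve(A, G.T @ d)                 # CG would replace this at scale
```

With light regularization the recovered model peaks at the true anomalous cell while the depth weighting keeps deep cells from being unfairly suppressed.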

  16. Characterization with microturbulence simulations of the zero particle flux condition in case of a TCV discharge showing toroidal rotation reversal

    NASA Astrophysics Data System (ADS)

    Mariani, A.; Merlo, G.; Brunner, S.; Merle, A.; Sauter, O.; Görler, T.; Jenko, F.; Told, D.

    2016-11-01

    In view of the stabilization effect of sheared plasma rotation on microturbulence, it is important to study the intrinsic rotation that develops in tokamaks that present negligible external toroidal torque, like ITER. Remarkable observations have been made on TCV, analysing discharges without NBI injection, as reported in [A. Bortolon et al. 2006 Phys. Rev. Lett. 97] and exhibiting a rotation inversion occurring in conjunction with a relatively small change in the plasma density. We focus in particular on a limited L-mode TCV shot published in [B. P. Duval et al. 2008 Phys. Plasmas 15], that shows a rotation reversal during a density ramp up. In view of performing a momentum transport analysis on this TCV shot, some constraints have to be considered to reduce the uncertainty on the experimental parameters. One useful constraint is the zero particle flux condition, resulting from the absence of direct particle fuelling to the plasma core. In this work, a preliminary study of the reconstruction of the zero particle flux hyper-surface in the physical parameters space is presented, taking into account the effect of the main impurity (carbon) and beginning to explore the effect of collisions, in order to find a subset of this hyper-surface within the experimental error bars. The analysis is done performing gyrokinetic simulations with the local (flux-tube) version of the Eulerian code GENE [Jenko et al 2000 Phys. Plasmas 7 1904], computing the fluxes with a Quasi-Linear model, according to [E. Fable et al. 2010 PPCF 52], and validating the QL results with Non-Linear simulations in a subset of cases.

  17. High-resolution Spectroscopy of Extremely Metal-poor Stars from SDSS/SEGUE. III. Unevolved Stars with [Fe/H] ≲ -3.5

    NASA Astrophysics Data System (ADS)

    Matsuno, Tadafumi; Aoki, Wako; Beers, Timothy C.; Lee, Young Sun; Honda, Satoshi

    2017-08-01

    We present elemental abundances for eight unevolved extremely metal-poor (EMP) stars with T_eff > 5500 K, among which seven have [Fe/H] < -3.5. The sample is selected from the Sloan Digital Sky Survey/Sloan Extension for Galactic Understanding and Exploration (SDSS/SEGUE) and our previous high-resolution spectroscopic follow-up with the Subaru Telescope. Several methods to derive stellar parameters are compared, and no significant offset in the derived parameters is found in most cases. From an abundance analysis relative to the standard EMP star G64-12, an average Li abundance for stars with [Fe/H] < -3.5 is A(Li) = 1.90, with a standard deviation of σ = 0.10 dex. This result confirms that lower Li abundances are found at lower metallicity, as suggested by previous studies, and demonstrates that the star-to-star scatter is small. The small observed scatter could be a strong constraint on Li-depletion mechanisms proposed for explaining the low Li abundance at lower metallicity. Our analysis for other elements obtained the following results: (I) a statistically significant scatter in [X/Fe] for Na, Mg, Cr, Ti, Sr, and Ba, and an apparent bimodality in [Na/Fe] with a separation of ~0.8 dex, (II) an absence of a sharp drop in the metallicity distribution, and (III) the existence of a CEMP-s star at [Fe/H] ≃ -3.6 and possibly at [Fe/H] ≃ -4.0, which may provide a constraint on the mixing efficiency of unevolved stars during their main-sequence phase. Based on data collected with the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.

  18. Relativistic protons in the Coma galaxy cluster: first gamma-ray constraints ever on turbulent reacceleration

    NASA Astrophysics Data System (ADS)

    Brunetti, G.; Zimmer, S.; Zandanel, F.

    2017-12-01

    The Fermi-LAT (Large Area Telescope) collaboration recently published deep upper limits to the gamma-ray emission of the Coma cluster, a cluster hosting the prototype of giant radio haloes. In this paper, we extend previous studies and use a formalism that combines particle reacceleration by turbulence and the generation of secondary particles in the intracluster medium to constrain relativistic protons and their role for the origin of the radio halo. We conclude that a pure hadronic origin of the halo is clearly disfavoured as it would require excessively large magnetic fields. However, secondary particles can still generate the observed radio emission if they are reaccelerated. For the first time the deep gamma-ray limits allow us to derive meaningful constraints if the halo is generated during phases of reacceleration of relativistic protons and their secondaries by cluster-scale turbulence. In this paper, we explore a relevant range of parameter space of reacceleration models of secondaries. Within this parameter space, a fraction of model configurations is already ruled out by current gamma-ray limits, including the cases that assume weak magnetic fields in the cluster core, B ≤ 2-3 μG. Interestingly, we also find that the flux predicted by a large fraction of model configurations assuming magnetic fields consistent with Faraday rotation measures (RMs) is not far from the limits. This suggests that a detection of gamma-rays from the cluster might be possible in the near future, provided that the electrons generating the radio halo are secondaries reaccelerated and the magnetic field in the cluster is consistent with that inferred from RM.

  19. Ring rolling process simulation for geometry optimization

    NASA Astrophysics Data System (ADS)

    Franchi, Rodolfo; Del Prete, Antonio; Donatiello, Iolanda; Calabrese, Maurizio

    2017-10-01

    Ring rolling is a complex hot forming process where different rolls are involved in the production of seamless rings. Since each roll must be independently controlled, different speed laws must be set; usually, in the industrial environment, a milling curve is introduced to monitor the shape of the workpiece during the deformation in order to ensure correct ring production. In the present paper a ring rolling process has been studied and optimized in order to obtain annular components to be used in aerospace applications. In particular, the influence of process input parameters (feed rate of the mandrel and angular speed of the main roll) on geometrical features of the final ring has been evaluated. For this purpose, a three-dimensional finite element model for HRR (Hot Ring Rolling) has been implemented in SFTC DEFORM V11. The FEM model has been used to formulate a proper optimization problem. The optimization procedure has been implemented in the commercial software DS Isight in order to find the combination of process parameters that minimizes the percentage error of each obtained dimension with respect to its nominal value. The software finds the relationship between input and output parameters by applying Response Surface Methodology (RSM), using the exact values of the output parameters in the control points of the design space explored through FEM simulation. Once this relationship is known, the values of the output parameters can be calculated for each combination of the input parameters. After the calculation of the response surfaces for the selected output parameters, an optimization procedure based on genetic algorithms has been applied. At the end, the error between each obtained dimension and its nominal value has been minimized. The constraints imposed were the maximum values of the standard deviations of the dimensions obtained for the final ring.
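
The surrogate-plus-GA loop can be sketched generically: sample the design space, fit a quadratic response surface to the sampled points, then minimize the surrogate with a small genetic algorithm. Everything below (the stand-in "FEM" objective, population size, mutation scale) is invented for illustration and is not the DEFORM/Isight setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def fem_stand_in(x):
    # Pretend FEM output: dimensional error as a function of (feed, speed),
    # normalized to [0, 1]. A real run would be one expensive simulation.
    feed, speed = x
    return (feed - 0.6) ** 2 + 2.0 * (speed - 0.4) ** 2

X = rng.uniform(0, 1, size=(40, 2))                  # design points
y = np.array([fem_stand_in(x) for x in X])

def basis(x):
    f, s = x                                         # full quadratic basis in 2D
    return np.array([1.0, f, s, f * f, s * s, f * s])

coef, *_ = np.linalg.lstsq(np.array([basis(x) for x in X]), y, rcond=None)
surrogate = lambda x: basis(x) @ coef                # fitted response surface

def ga_minimize(obj, pop=30, gens=60):
    # Tiny elitist GA: keep the best half, mutate copies of it for the rest.
    P = rng.uniform(0, 1, size=(pop, 2))
    for _ in range(gens):
        P = P[np.argsort([obj(x) for x in P])]
        children = P[rng.integers(0, pop // 2, pop)] + rng.normal(0, 0.05, (pop, 2))
        P = np.clip(np.vstack([P[: pop // 2], children[: pop // 2]]), 0, 1)
    return min(P, key=obj)

best = ga_minimize(surrogate)
```

The expensive simulator is only called to build the surface; all GA evaluations hit the cheap surrogate, which is the point of the RSM step.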

  20. Micro-Logistics Analysis for Human Space Exploration

    NASA Technical Reports Server (NTRS)

    Cirillo, William; Stromgren, Chel; Galan, Ricardo

    2008-01-01

    Traditionally, logistics analysis for space missions has focused on the delivery of elements and goods to a destination. This type of logistics analysis can be referred to as "macro-logistics". While the delivery of goods is a critical component of mission analysis, it captures only a portion of the constraints that logistics planning may impose on a mission scenario. The other component of logistics analysis concerns the local handling of goods at the destination, including storage, usage, and disposal. This type of logistics analysis, referred to as "micro-logistics", may also be a primary driver in the viability of a human lunar exploration scenario. With the rigorous constraints that will be placed upon a human lunar outpost, it is necessary to accurately evaluate micro-logistics operations in order to develop exploration scenarios that will result in an acceptable level of system performance.

  1. The reconstruction of tachyon inflationary potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fei, Qin; Gong, Yungui; Lin, Jiong

    We derive a lower bound on the field excursion for tachyon inflation, determined by the amplitude of the scalar perturbation and the number of e-folds before the end of inflation. Using the relations between observables such as n_s and r and the slow-roll parameters, we reconstruct three classes of tachyon potentials. The model parameters are determined from the observations before the potentials are reconstructed, and the observations prefer a concave potential. We also discuss the constraints from the reheating phase preceding radiation domination for the three classes of models, assuming that the equation of state parameter w_re during reheating is constant. Depending on the model parameters and the value of w_re, the constraints on N_re and T_re differ. As n_s increases, the allowed reheating epoch becomes longer for w_re = -1/3, 0, and 1/6, while it becomes shorter for w_re = 2/3.

  2. Observational constraint on dynamical evolution of dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Yungui; Cai, Rong-Gen; Chen, Yun

    2010-01-01

    We use the Constitution supernova, baryon acoustic oscillation, cosmic microwave background, and Hubble parameter data to analyze the evolution of dark energy. We obtain different results when we fit different baryon acoustic oscillation data combined with the Constitution supernova data to the Chevallier-Polarski-Linder model, and we find that the difference stems from different values of Ω_m0. We also fit the observational data to the model-independent piecewise constant parametrization. Four redshift bins with boundaries at z = 0.22, 0.53, 0.85, and 1.8 were chosen for the piecewise constant parametrization of the equation of state parameter w(z) of dark energy. We find no significant evidence for an evolving w(z). With the addition of the Hubble parameter data, the constraint on the equation of state parameter at high redshift is improved by 70%. The marginalization of the nuisance parameter connected to the supernova distance modulus is discussed.
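
    For concreteness, the piecewise constant parametrization fixes w(z) to a constant w_i inside each redshift bin, so the dark energy density evolves as a product of power laws across the bins. A minimal sketch follows, using the bin edges quoted in the abstract; holding the density fixed above z = 1.8 is an assumption of the sketch, not the paper.

```python
import numpy as np

# Bin boundaries from the abstract: z = 0.22, 0.53, 0.85, 1.8 (0 is the implicit lower edge).
edges = [0.0, 0.22, 0.53, 0.85, 1.8]

def de_density_ratio(z, w_bins):
    """rho_DE(z) / rho_DE(0) for piecewise-constant w(z): one power law per bin,
    multiplied up through every bin that lies below z."""
    ratio = 1.0
    for lo, hi, w in zip(edges[:-1], edges[1:], w_bins):
        if z <= lo:
            break
        ztop = min(z, hi)
        ratio *= ((1 + ztop) / (1 + lo)) ** (3 * (1 + w))
    return ratio
```

    A constant w_i = -1 in every bin reproduces a cosmological constant (no density evolution), which is a handy sanity check on the bookkeeping.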

  3. Environmental design implications for two deep space SmallSats

    NASA Astrophysics Data System (ADS)

    Kahn, Peter; Imken, Travis; Elliott, John; Sherwood, Brent; Frick, Andreas; Sheldon, Douglas; Lunine, Jonathan

    2017-10-01

    The extreme environmental challenges of deep space exploration force unique solutions to small satellite design in order to enable their use as scientifically viable spacecraft. The challenges of implementing small satellites within limited resources can be daunting when faced with radiation effects on delicate electronics that require shielding or unique adaptations for protection, or mass, power and volume limitations due to constraints placed by the carrier spacecraft, or even Planetary Protection compliant design techniques that drive assembly and testing. This paper will explore two concept studies where the environmental constraints and/or planetary protection mitigations drove the design of the Flight System. The paper will describe the key technical drivers on the Sylph mission concept to explore a plume at Europa as a secondary free-flyer as a part of the planned Europa Mission. Sylph is a radiation-hardened smallsat concept that would utilize terrain relative navigation to fly at low altitudes through a plume, if found, and relay the mass spectra data back through the flyby spacecraft during its 24-h mission. The second topic to be discussed will be the mission design constraints of the Near Earth Asteroid (NEA) Scout concept. NEAScout is a 6U cubesat that would utilize an 86 sq. m solar sail as propulsion to execute a flyby with a near-Earth asteroid and help retire Strategic Knowledge Gaps for future human exploration. NEAScout would cruise for 24 months to reach and characterize one Near-Earth asteroid that is representative of Human Exploration targets and telemeter that data directly back to Earth at the end of its roughly 2.5 year mission.

  4. Implications of optimization cost for balancing exploration and exploitation in global search and for experimental optimization

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Anirban

    Global optimization based on expensive and time-consuming simulations or experiments usually cannot be carried out to convergence, but must be stopped because of time constraints, or because the cost of additional function evaluations exceeds the benefit of improving the objective(s). This dissertation sets out to explore the implications of such budget and time constraints for the balance between exploration and exploitation and for the decision of when to stop. Three aspects are considered in terms of their effects on this balance: 1) the history of the optimization, 2) a fixed evaluation budget, and 3) cost as part of the objective function. To this end, this research develops modifications to the surrogate-based Efficient Global Optimization (EGO) algorithm that better control the balance between exploration and exploitation, along with stopping criteria facilitated by these modifications. The focus then shifts to experimental optimization, which shares the issues of cost and time constraints. Through a study on optimization of thrust and power for a small flapping wing for micro air vehicles, important differences and similarities between experimental and simulation-based optimization are identified. The most important difference is that noise reduction in experiments becomes a major time and cost issue; a second difference is that parallelism as a way to cut cost is more challenging. The experimental optimization reveals the tendency of the surrogate to display optimistic bias near the surrogate optimum, and this tendency is then verified to also occur in simulation-based optimization.
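
    The exploration-exploitation balance in EGO is governed by the expected improvement (EI) acquisition function, which the dissertation modifies; the standard textbook form is sketched below purely as a reference point.

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Standard EI for minimization, given the surrogate's mean/std at a candidate point.

    Large sigma (untried regions) favours exploration; low mu (promising regions)
    favours exploitation -- EI trades the two off automatically.
    """
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)                               # no uncertainty: plain improvement
    u = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)       # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))              # standard normal cdf
    return (f_min - mu) * cdf + sigma * pdf
```

    Inflating the weight on the sigma term (or deflating it) is one simple knob for shifting the search toward exploration or exploitation, which is the kind of control the dissertation's modifications pursue.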

  5. The constraints of good governance practice in national solid waste management policy (NSWMP) implementation: A case study of Malaysia

    NASA Astrophysics Data System (ADS)

    Wee, Seow Ta; Abas, Muhamad Azahar; Chen, Goh Kai; Mohamed, Sulzakimin

    2017-10-01

    Nowadays, international donors have emphasised the adoption of good governance practices in solid waste management, including policy implementation. In Malaysia, the National Solid Waste Management Policy (NSWMP) was introduced as the main guideline for solid waste management, and the Malaysian government has adopted good governance practices in the NSWMP implementation. However, these practices have encountered several challenges. This study was conducted to explore the good governance constraints experienced by stakeholders in the NSWMP implementation. An exploratory research approach was applied through in-depth interviews with several government agencies and concessionaires involved in the NSWMP implementation in Malaysia. A total of six respondents took part in this study. The findings revealed three main good governance constraints in the NSWMP implementation, namely inadequate funding, poor staff competency, and ambiguity of the policy implementation system. Moreover, this study also found that the main constraint influenced the other constraints. Hence, it is crucial to identify the main constraint in order to minimise its impact on the others.

  6. A fluid model for the edge pressure pedestal height and width in tokamaks based on the transport constraint of particle, energy, and momentum balance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stacey, W. M., E-mail: weston.stacey@nre.gatech.edu

    2016-06-15

    A fluid model for the tokamak edge pressure profile required by the conservation of particles, momentum and energy in the presence of specified heating and fueling sources and electromagnetic and geometric parameters has been developed. Kinetics effects of ion orbit loss are incorporated into the model. The use of this model as a “transport” constraint together with a “Peeling-Ballooning (P-B)” instability constraint to achieve a prediction of edge pressure pedestal heights and widths in future tokamaks is discussed.

  7. Recent developments in INPOP planetary ephemerides

    NASA Astrophysics Data System (ADS)

    Fienga, Agnes; Viswanathan, Vishnu; Laskar, Jacques; Manche, Hervé; Gastineau, Mickael

    2015-08-01

    We present here the new version of the INPOP planetary ephemerides based on an update of the observational data sets, as well as new results in terms of asteroid masses and constraints obtained for the general relativity parameters PPN β, γ, J2 and the secular variation of G. New constraints on the hypothetical existence of a super-Earth beyond the orbit of Neptune will also be presented.

  8. Parameter transferability within homogeneous regions and comparisons with predictions from a priori parameters in the eastern United States

    NASA Astrophysics Data System (ADS)

    Chouaib, Wafa; Alila, Younes; Caldwell, Peter V.

    2018-05-01

    The need for predictions of flow time-series persists at ungauged catchments, motivating the research goals of our study. By means of the Sacramento model, this paper explores the use of parameter transfer within homogeneous regions of similar climate and flow characteristics and makes comparisons with predictions from a priori parameters. We assessed performance using the Nash-Sutcliffe efficiency (NS), bias, mean monthly hydrograph, and flow duration curve (FDC). The study was conducted on a large dataset of 73 catchments within the eastern US. Two approaches to parameter transferability were developed and evaluated: (i) parameter transfer within homogeneous regions using one donor catchment specific to each region, and (ii) parameter transfer disregarding the geographical limits of homogeneous regions, where one donor catchment was common to all regions. Comparing both parameter transfers made it possible to assess the gain in performance from parameter regionalization and its respective constraints and limitations. The parameter transfer within homogeneous regions outperformed the a priori parameters and led to a decrease in bias and an increase in efficiency, reaching a median NS of 0.77 and an NS of 0.85 at individual catchments. The use of the FDC revealed the effect of bias on the inaccuracy of predictions from parameter transfer. In one specific region, of mountainous and forested catchments, the prediction accuracy of the parameter transfer was less satisfactory and equivalent to the a priori parameters. In this region, the parameter transfer from the outsider catchment provided the best performance; it was less biased, with smaller uncertainty in the medium flow percentiles (40%-60%). The large disparity of energy conditions explained the lack of performance from parameter transfer in this region. In addition, subsurface stormflow is predominant there, and the likelihood of lateral preferential flow further explains the reduced efficiency. Testing parameter transferability using criteria of similar climate and flow characteristics at ungauged catchments, and comparing with predictions from a priori parameters, is a novel contribution. The ultimate limitations of both approaches are recognized and recommendations are made for future research.
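
    The two headline skill scores used above are standard and compact enough to state exactly; the sketch below assumes simple aligned arrays of observed and simulated flows.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means the model is no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """Positive values mean the simulation over-predicts total flow volume."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)
```

    Because NS is normalized by observed variance while bias tracks total volume, a transferred parameter set can score a respectable NS yet still carry the volume bias the FDC comparison exposes.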

  9. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study

    PubMed Central

    Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee

    2015-01-01

    Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequences of constraining the residual variances for class enumeration (finding the true number of latent classes) and for parameter estimates, under a number of simulation conditions meant to reflect the type of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. An inappropriate equality constraint on the residual variances also greatly affected estimated class sizes and showed the potential to greatly affect parameter estimates in each class. The results suggest that it is important to make assumptions about residual variances with care and to report carefully what assumptions were made. PMID:26139512

  10. Cosmological Parameters From Pre-Planck CMB Measurements: A 2017 Update

    NASA Technical Reports Server (NTRS)

    Calabrese, Erminia; Hlozek, Renée A.; Bond, J. Richard; Devlin, Mark J.; Dunkley, Joanna; Halpern, Mark; Hincks, Adam D.; Irwin, Kent D.; Kosowsky, Arthur; Moodley, Kavilan; et al.

    2017-01-01

    We present cosmological constraints from the combination of the full-mission nine-year WMAP release and small-scale temperature data from the pre-Planck Atacama Cosmology Telescope (ACT) and South Pole Telescope (SPT) generation of instruments. This is an update of the analysis presented in Calabrese et al. [Phys. Rev. D 87, 103012 (2013)], and highlights the impact on ΛCDM cosmology of a 0.06 eV massive neutrino, which was assumed in the Planck analysis but not in the ACT/SPT analyses, and of a Planck-cleaned measurement of the optical depth to reionization. We show that cosmological constraints are now strong enough that small differences in assumptions about reionization and neutrino mass give systematic differences which are clearly detectable in the data. We recommend that these updated results be used when comparing cosmological constraints from WMAP, ACT, and SPT with other surveys or with current and future full-mission Planck cosmology. Cosmological parameter chains are publicly available on NASA's LAMBDA data archive.

  11. Observational constraint on spherical inhomogeneity with CMB and local Hubble parameter

    NASA Astrophysics Data System (ADS)

    Tokutake, Masato; Ichiki, Kiyotomo; Yoo, Chul-Moon

    2018-03-01

    We derive an observational constraint on a spherical inhomogeneity of the void centered at our position from the angular power spectrum of the cosmic microwave background (CMB) and local measurements of the Hubble parameter. The late-time behaviour of the void is assumed to be well described by the so-called Λ-Lemaître-Tolman-Bondi (ΛLTB) solution. We restrict the models to asymptotically homogeneous ones, each of which is approximated by a flat Friedmann-Lemaître-Robertson-Walker model. The late-time ΛLTB models are parametrized by four parameters, including the value of the cosmological constant and the local Hubble parameter; the other two parameters are used to parametrize the observed distance-redshift relation. The ΛLTB models are then constructed so that they are compatible with the given distance-redshift relation. Including conventional parameters for the CMB analysis, we characterize our models by seven parameters in total. The local Hubble measurements are reflected in the prior distribution of the local Hubble parameter. As a result of a Markov chain Monte Carlo analysis of the CMB temperature and polarization anisotropies, we find that inhomogeneous universe models with vanishing cosmological constant are ruled out, as expected. However, a significant under-density around us is still compatible with the angular power spectrum of the CMB and the local Hubble parameter.

  12. A Multi-Week Behavioral Sampling Tag for Sound Effects Studies: Design Trade-Offs and Prototype Evaluation

    DTIC Science & Technology

    2013-09-30

    ...performance of algorithms detecting dives, strokes, clicks, respiration and gait changes. (ii) Calibration errors: Size and power constraints in... acceptance parameters used to detect and classify events. For example, swim stroke detection requires parameters defining the minimum magnitude and the min and max duration of a stroke. Species-dependent parameters can be selected from existing DTAG data but other parameters depend on the size of the...

  13. Optimum Strategies for Selecting Descent Flight-Path Angles

    NASA Technical Reports Server (NTRS)

    Wu, Minghong G. (Inventor); Green, Steven M. (Inventor)

    2016-01-01

    An information processing system and method for adaptively selecting an aircraft descent flight path are provided. The system receives flight adaptation parameters, including the aircraft flight descent time period, the aircraft flight descent airspace region, and aircraft flight descent flyability constraints. The system queries a plurality of flight data sources and retrieves flight information including any of winds and temperatures aloft data, airspace/navigation constraints, airspace traffic demand, and an airspace arrival delay model. The system calculates a set of candidate descent profiles, each defined by at least one of a flight path angle and a descent rate, and each including an aggregated total fuel consumption value for the aircraft following a calculated trajectory, and a flyability constraints metric for the calculated trajectory. The system selects the best candidate descent profile, having the least fuel consumption value while the flyability constraints metric remains within the aircraft flight descent flyability constraints.
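
    The selection step reduces to a constrained minimization over a finite candidate set. The container and field names below are hypothetical, chosen for illustration rather than taken from the patent text:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DescentProfile:
    flight_path_angle_deg: float   # candidate flight path angle
    fuel_kg: float                 # aggregated total fuel consumption
    flyability_metric: float       # constraint metric for the calculated trajectory

def select_best(candidates, flyability_limit: float) -> Optional[DescentProfile]:
    """Least-fuel candidate whose flyability metric stays within the constraint."""
    feasible = [c for c in candidates if c.flyability_metric <= flyability_limit]
    return min(feasible, key=lambda c: c.fuel_kg) if feasible else None
```

    Filtering for feasibility before minimizing fuel keeps the two criteria separate: a cheap but unflyable profile can never win.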

  14. Constraining chameleon field theories using the GammeV afterglow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, A.; Steffen, J. H.; Weltman, A.

    2010-01-01

    The GammeV experiment has constrained the couplings of chameleon scalar fields to matter and photons. Here, we present a detailed calculation of the chameleon afterglow rate underlying these constraints. The dependence of GammeV constraints on various assumptions in the calculation is studied. We discuss the GammeV-CHameleon Afterglow SEarch (CHASE), a second-generation GammeV experiment, which will improve upon GammeV in several major ways. Using our calculation of the chameleon afterglow rate, we forecast model-independent constraints achievable by GammeV-CHASE. We then apply these constraints to a variety of chameleon models, including quartic chameleons and chameleon dark energy models. The new experiment will be able to probe a large region of parameter space that is beyond the reach of current tests, such as fifth-force searches, constraints on the dimming of distant astrophysical objects, and bounds on the variation of the fine structure constant.

  15. Constraining chameleon field theories using the GammeV afterglow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, A.; /Chicago U., EFI /KICP, Chicago; Steffen, J.H.

    2009-11-01

    The GammeV experiment has constrained the couplings of chameleon scalar fields to matter and photons. Here we present a detailed calculation of the chameleon afterglow rate underlying these constraints. The dependence of GammeV constraints on various assumptions in the calculation is studied. We discuss GammeV-CHASE, a second-generation GammeV experiment, which will improve upon GammeV in several major ways. Using our calculation of the chameleon afterglow rate, we forecast model-independent constraints achievable by GammeV-CHASE. We then apply these constraints to a variety of chameleon models, including quartic chameleons and chameleon dark energy models. The new experiment will be able to probe a large region of parameter space that is beyond the reach of current tests, such as fifth-force searches, constraints on the dimming of distant astrophysical objects, and bounds on the variation of the fine structure constant.

  16. Expression level, cellular compartment and metabolic network position all influence the average selective constraint on mammalian enzymes

    PubMed Central

    2011-01-01

    Background A gene's position in regulatory, protein interaction or metabolic networks can be predictive of the strength of purifying selection acting on it, but these relationships are neither universal nor invariably strong. Following work in bacteria, fungi and invertebrate animals, we explore the relationship between selective constraint and metabolic function in mammals. Results We measure the association between selective constraint, estimated by the ratio of nonsynonymous (Ka) to synonymous (Ks) substitutions, and several, primarily metabolic, measures of gene function. We find significant differences between the selective constraints acting on enzyme-coding genes from different cellular compartments, with the nucleus showing higher constraint than genes from either the cytoplasm or the mitochondria. Among metabolic genes, the centrality of an enzyme in the metabolic network is significantly correlated with Ka/Ks. In contrast to yeasts, gene expression magnitude does not appear to be the primary predictor of selective constraint in these organisms. Conclusions Our results imply that the relationship between selective constraint and enzyme centrality is complex: the strength of selective constraint acting on mammalian genes is quite variable and does not appear to exclusively follow patterns seen in other organisms. PMID:21470417

  17. Lorentz invariance violation in the neutrino sector: a joint analysis from big bang nucleosynthesis and the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Dai, Wei-Ming; Guo, Zong-Kuan; Cai, Rong-Gen; Zhang, Yuan-Zhong

    2017-06-01

    We investigate constraints on Lorentz invariance violation in the neutrino sector from a joint analysis of big bang nucleosynthesis and the cosmic microwave background. The effect of Lorentz invariance violation during the epoch of big bang nucleosynthesis changes the predicted helium-4 abundance, which influences the power spectrum of the cosmic microwave background at the recombination epoch. In combination with the latest measurement of the primordial helium-4 abundance, the Planck 2015 data of the cosmic microwave background anisotropies give a strong constraint on the deformation parameter since adding the primordial helium measurement breaks the degeneracy between the deformation parameter and the physical dark matter density.

  18. Constrained Burn Optimization for the International Space Station

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.; Jones, Brandon A.

    2017-01-01

    In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints; the current approach does not employ gradient-based search techniques and is not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.
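
    The contrast between sequential and simultaneous targeting can be seen in a toy linearized problem, where each burn's Δv maps linearly onto the trajectory constraints. The sensitivity matrix and targets below are invented for illustration; the paper's method uses analytic constraint gradients inside an optimization solver rather than a pseudo-inverse.

```python
import numpy as np

# Toy linearized burn targeting: trajectory constraints are A @ dv = b,
# where dv stacks the delta-v of three burns (sensitivities are invented).
A = np.array([[1.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])
b = np.array([2.0, 1.0])

# Sequential targeting: burn 1 alone fixes constraint 1, burn 2 then fixes constraint 2.
dv_sequential = np.array([2.0, 0.0, 0.0])

# Simultaneous targeting: minimum-norm dv satisfying both constraints at once.
dv_simultaneous = np.linalg.pinv(A) @ b
```

    Both vectors satisfy the constraints, but the simultaneous solution spreads the correction across burns and needs less total Δv, which is the advantage the abstract describes.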

  19. Big game hunting practices, meanings, motivations and constraints: a survey of Oregon big game hunters

    Treesearch

    Suresh K. Shrestha; Robert C. Burns

    2012-01-01

    We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...

  20. Parameter estimation in 3D affine and similarity transformation: implementation of variance component estimation

    NASA Astrophysics Data System (ADS)

    Amiri-Simkooei, A. R.

    2018-01-01

    Three-dimensional (3D) coordinate transformations, generally consisting of origin shifts, axis rotations, scale changes, and skew parameters, are widely used in many geomatics applications. Although some geodetic applications use simplified transformation models based on the assumption of small transformation parameters, in other fields of application such parameters are indeed large. The algorithms of two recent papers on the weighted total least-squares (WTLS) problem are used for the 3D coordinate transformation. The methodology can be applied when the transformation parameters are large, and no approximate values of the parameters are required; direct linearization of the rotation and scale parameters is thus avoided. The WTLS formulation takes into consideration errors in both the start and target systems in the estimation of the transformation parameters. Two well-known 3D transformation methods, namely the affine (12, 9, and 8 parameters) and similarity (7 and 6 parameters) transformations, can be handled using the WTLS theory subject to hard constraints. Because the method can be formulated by the standard least-squares theory with constraints, the covariance matrix of the transformation parameters is directly available. The above characteristics of the 3D coordinate transformation are implemented in the presence of different variance components, which are estimated using least-squares variance component estimation. In particular, the estimability of the variance components is investigated. The efficacy of the proposed formulation is verified on two real data sets.
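
    As a point of reference for the similarity case, the forward 7-parameter (Helmert) model maps source to target coordinates with a shift, a scale, and a full (not small-angle) rotation. The rotation convention below (Rx·Ry·Rz) is one common choice, assumed here for the sketch:

```python
import numpy as np

def rotation_matrix(rx, ry, rz):
    """Full rotation matrix from three angles in radians (no linearization)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, sx], [0, -sx, cx]])
    Ry = np.array([[cy, 0, -sy], [0, 1, 0], [sy, 0, cy]])
    Rz = np.array([[cz, sz, 0], [-sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def similarity_transform(points, shift, scale, rx, ry, rz):
    """7-parameter transform: target = shift + scale * R @ source, one point per row."""
    R = rotation_matrix(rx, ry, rz)
    return np.asarray(shift) + scale * (R @ np.asarray(points, float).T).T
```

    The estimation problem in the paper runs the other way, solving for these seven parameters (plus variance components) by WTLS from point pairs observed with errors in both frames.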

  1. Stationkeeping for the Lunar Reconnaissance Orbiter (LRO)

    NASA Technical Reports Server (NTRS)

    Beckman, Mark; Lamb, Rivers

    2007-01-01

    The Lunar Reconnaissance Orbiter (LRO) is scheduled to launch in 2008 as the first mission under NASA's Vision for Space Exploration. Following several weeks in a quasi-frozen commissioning orbit, LRO will fly in a 50 km mean altitude lunar polar orbit. During the one year mission duration, the orbital dynamics of a low lunar orbit force LRO to perform periodic sets of stationkeeping maneuvers. This paper explores the characteristics of low lunar orbits and explains how the LRO stationkeeping plan is designed to accommodate the dynamics in such an orbit. The stationkeeping algorithm used for LRO must meet five mission constraints. These five constraints are to maintain ground station contact during maneuvers, to control the altitude variation of the orbit, to distribute periselene equally between northern and southern hemispheres, to match eccentricity at the beginning and the end of the sidereal period, and to minimize stationkeeping ΔV. This paper addresses how the maneuver plan for LRO is designed to meet all of the above constraints.

  3. Waste management under multiple complexities: inexact piecewise-linearization-based fuzzy flexible programming.

    PubMed

    Sun, Wei; Huang, Guo H; Lv, Ying; Li, Gongchen

    2012-06-01

    To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both the capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate IPFP's advantages, two alternative models are developed for comparison: a conventional linear-regression-based inexact fuzzy programming model (IPFP2), and an IPFP model with all right-hand sides of the fuzzy constraints set to the corresponding interval numbers (IPFP3). The comparison between IPFP and IPFP2 indicates that the optimized waste amounts have similar patterns in both models; however, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP estimates them more accurately. The comparison between IPFP and IPFP3 indicates that their solutions differ significantly, and the decreased system uncertainties in IPFP's solutions demonstrate its effectiveness in providing more satisfactory interval solutions than IPFP3. Following this first application to waste management, IPFP can potentially be applied to other environmental problems under multiple complexities. Copyright © 2012 Elsevier Ltd. All rights reserved.
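
    The piecewise linearization at the heart of IPFP replaces a concave economies-of-scale cost curve with chords between breakpoints, so the nonlinear term can sit inside a linear constraint set. The cost curve and segment count below are invented for illustration:

```python
import numpy as np

def piecewise_linearize(f, x_lo, x_hi, n_seg):
    """Approximate f on [x_lo, x_hi] by n_seg chords; returns breakpoints, slopes, intercepts."""
    xs = np.linspace(x_lo, x_hi, n_seg + 1)
    ys = f(xs)
    slopes = np.diff(ys) / np.diff(xs)
    intercepts = ys[:-1] - slopes * xs[:-1]
    return xs, slopes, intercepts

def pw_eval(x, xs, slopes, intercepts):
    """Evaluate the piecewise-linear approximation at a scalar x."""
    i = int(np.clip(np.searchsorted(xs, x) - 1, 0, len(slopes) - 1))
    return slopes[i] * x + intercepts[i]

cost = lambda x: 50.0 * x ** 0.8          # assumed EOS-type cost: concave in capacity
xs, m, c = piecewise_linearize(cost, 1.0, 100.0, 8)
```

    The approximation is exact at the breakpoints and, for a gently concave curve like this, stays within a fraction of a percent between them; more segments tighten the fit at the price of more constraints in the optimization model.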

  4. Combined Constraints on the Equation of State of Dense Neutron-rich Matter from Terrestrial Nuclear Experiments and Observations of Neutron Stars

    NASA Astrophysics Data System (ADS)

    Zhang, Nai-Bo; Li, Bao-An; Xu, Jun

    2018-06-01

    Within the parameter space of the equation of state (EOS) of dense neutron-rich matter limited by existing constraints, mainly from terrestrial nuclear experiments, we investigate how the neutron star maximum mass M_max > 2.01 ± 0.04 M_⊙, the radius 10.62 km < R_1.4 < 12.83 km, and the tidal deformability Λ_1.4 ≤ 800 of canonical neutron stars together constrain the EOS of dense neutron-rich nucleonic matter. While the 3D parameter space of K_sym (curvature of the nuclear symmetry energy), J_sym, and J_0 (skewness of the symmetry energy and of the EOS of symmetric nuclear matter, respectively) is narrowed down significantly by the observational constraints, more data are needed to pin down the individual values of K_sym, J_sym, and J_0. The J_0 parameter largely controls the maximum mass of neutron stars. While the EOS with J_0 = 0 is sufficiently stiff to support neutron stars as massive as 2.37 M_⊙, supporting hypothetical ones as massive as 2.74 M_⊙ (the composite mass of GW170817) requires J_0 to be larger than its currently known maximum value of about 400 MeV and beyond the causality limit. The upper limit on the tidal deformability, Λ_1.4 = 800, from the recent observation of GW170817 is found to provide upper limits on some EOS parameters consistent with, but far less restrictive than, the existing constraints from other observables studied.
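
    The K_sym and J_sym parameters enter through the standard expansion of the nuclear symmetry energy about saturation density. A sketch follows, with illustrative coefficient values chosen inside commonly quoted ranges, not the paper's fitted values:

```python
def symmetry_energy(rho, esym0=31.7, L=58.7, Ksym=-100.0, Jsym=200.0, rho0=0.16):
    """E_sym(rho) in MeV from the expansion in x = (rho - rho0) / (3 rho0):
    E_sym ~ E_sym(rho0) + L x + (K_sym / 2) x^2 + (J_sym / 6) x^3.
    Densities in fm^-3; coefficient defaults are illustrative, not fitted.
    """
    x = (rho - rho0) / (3.0 * rho0)
    return esym0 + L * x + 0.5 * Ksym * x ** 2 + Jsym * x ** 3 / 6.0
```

    The skewness J_0 plays the analogous third-order role in the expansion of the symmetric-matter EOS, which is why it dominates the high-density pressure and hence the maximum mass.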

  5. Interval-parameter chance-constraint programming model for end-of-life vehicles management under rigorous environmental regulations.

    PubMed

    Simic, Vladimir

    2016-06-01

    As the number of end-of-life vehicles (ELVs) is estimated to increase to 79.3 million units per year by 2020 (e.g., 40 million units were generated in 2010), there is strong motivation to effectively manage this fast-growing waste flow. Intensive work on the management of ELVs is necessary in order to more successfully tackle this important environmental challenge. This paper proposes an interval-parameter chance-constraint programming model for end-of-life vehicles management under rigorous environmental regulations. The proposed model can incorporate various uncertainty information in the modeling process. The complex relationships between different ELV management sub-systems are successfully addressed. In particular, the formulated model can help identify optimal patterns of procurement from multiple sources of ELV supply, production and inventory planning in multiple vehicle recycling factories, and allocation of sorted material flows to multiple final destinations under rigorous environmental regulations. A case study is conducted in order to demonstrate the potential and applicability of the proposed model. Various constraint-violation probability levels are examined in detail. Influences of parameter uncertainty on model solutions are thoroughly investigated. Useful solutions for the management of ELVs are obtained under different probabilities of violating system constraints. The formulated model is able to tackle a hard ELV management problem under uncertainty. The presented model has advantages in providing bases for determining long-term ELV management plans with desired compromises between the economic efficiency of the vehicle recycling system and system-reliability considerations. The results are helpful for supporting the generation and improvement of ELV management plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
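
Chance-constraint programming of the kind described above typically converts a probabilistic constraint Pr{a·x ≤ b} ≥ 1 − p into a deterministic equivalent by replacing the random right-hand side b with its p-quantile. A minimal sketch for a normally distributed capacity (the numbers are illustrative, not the paper's ELV case-study data):

```python
# Hedged sketch: deterministic equivalent of a chance constraint
# Pr{a.x <= b} >= 1 - p when b ~ Normal(mu, sigma). The capacity
# figures below are invented for illustration.
from statistics import NormalDist

def deterministic_rhs(mu_b, sigma_b, p_violation):
    """p-quantile of b: the tightened capacity the plan must satisfy
    so the original constraint holds with probability >= 1 - p."""
    return mu_b + sigma_b * NormalDist().inv_cdf(p_violation)

# Facility capacity b ~ N(1000, 50); two violation-probability levels.
rhs_05 = deterministic_rhs(1000.0, 50.0, 0.05)  # 5% violation allowed
rhs_01 = deterministic_rhs(1000.0, 50.0, 0.01)  # 1% violation allowed
# A smaller violation probability (higher reliability) tightens the
# usable capacity, trading economic efficiency for system reliability.
```

This is the trade-off the abstract refers to: sweeping p across several levels yields a family of plans with different compromises between cost and constraint reliability.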

  6. Cosmic Microwave Background: cosmology from the Planck perspective

    NASA Astrophysics Data System (ADS)

    De Zotti, Gianfranco

    2016-07-01

    The Planck mission has measured the angular anisotropies in the temperature of the Cosmic Microwave Background (CMB) with an accuracy set by fundamental limits. These data have allowed the determination of the cosmological parameters with extraordinary precision. These lecture notes present an overview of the mission and of its cosmological results. After a short history of the project, the Planck instruments and their performances are introduced and compared with those of the WMAP satellite. Next, the approach to data analysis adopted by the Planck collaboration is described. This includes the techniques for dealing with the contamination of the CMB signal by astrophysical foreground emissions and for determining cosmological parameters from the analysis of the CMB power spectrum. The power spectra measured by Planck were found to be very well described by the standard spatially flat six-parameter ΛCDM cosmology with a power-law spectrum of adiabatic scalar perturbations. This is a remarkable result, considering that the six parameters account for the roughly 2500 independent power spectrum values measured by Planck (the power was measured for about 2500 multipoles), not to mention the roughly one trillion science samples produced. A large grid of cosmological models was also explored, using a range of additional astrophysical data sets in addition to Planck and high-resolution CMB data from ground-based experiments. On the whole, the Planck analysis of the CMB power spectrum allowed 16 parameters to be varied and determined; many other interesting parameters were derived from them. Although Planck was not initially designed to carry out high-accuracy measurements of the CMB polarization anisotropies, its capabilities in this respect were significantly enhanced during its development, and the quality of its polarization measurements has exceeded all original expectations. Planck's polarization data confirmed and improved the understanding of the details of the cosmological picture determined from its temperature data. Moreover, they have provided an accurate determination of the optical depth for Thomson scattering, τ, due to cosmic reionization. The result for τ has provided key information on the end of the "dark ages" and largely removed the tension, indicated by earlier estimates, with the constraints on the reionization history provided by optical/UV data. This has removed the need for exotic energy sources beyond the ionizing power provided by massive stars during early galaxy evolution. A joint analysis of BICEP2, Keck Array, and Planck data has shown that the B-mode polarization detected by the BICEP2 team can be accounted for by polarized Galactic dust, and has tightened the constraint on the B-mode amplitude due to primordial tensor perturbations.

  8. Constraining compensated isocurvature perturbations using the CMB

    NASA Astrophysics Data System (ADS)

    Smith, Tristan L.; Smith, Rhiannon; Yee, Kyle; Munoz, Julian; Grin, Daniel

    2017-01-01

    Compensated isocurvature perturbations (CIPs) are variations in the cosmic baryon fraction which leave the total non-relativistic matter (and radiation) density unchanged. They are predicted by models of inflation which involve more than one scalar field, such as the curvaton scenario. At linear order, they leave the CMB two-point correlation function nearly unchanged: this is why existing constraints to CIPs are so much more permissive than constraints to typical isocurvature perturbations. Recent work articulated an efficient way to calculate the second order CIP effects on the CMB two-point correlation. We have implemented this method in order to explore constraints to the CIP amplitude using current Planck temperature and polarization data. In addition, we have computed the contribution of CIPs to the CMB lensing estimator which provides us with a novel method to use CMB data to place constraints on CIPs. We find that Planck data places a constraint to the CIP amplitude which is competitive with other methods.

  9. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may still be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability of adopting configurations with worse objective values), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably the basic principles that a starting configuration is randomly selected from within the parameter space, that the algorithm tests other configurations with the goal of finding the globally optimal solution, and that the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm a single path, or trajectory, is taken in parameter space from the starting point to the globally optimal solution, while in the RBSA algorithm many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space, to improve search efficiency by allowing fast fine-tuning of the continuous variables within the trust region at that configuration point.
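
The conventional SA loop described above (random start, shrinking search region, temperature-controlled acceptance of worse moves) can be sketched in a few lines. This is an illustration of the textbook algorithm, not NASA's RBSA implementation; the branching layer is omitted and the test function is invented:

```python
# Hedged sketch of a conventional simulated-annealing loop:
# single trajectory, shrinking sampling region, Boltzmann acceptance.
import math
import random

def simulated_annealing(objective, lo, hi, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)                      # random starting configuration
    best_x, best_f = x, objective(x)
    for i in range(steps):
        temp = t0 * (1.0 - i / steps)            # annealing schedule
        radius = (hi - lo) * (1.0 - i / steps)   # shrinking search region
        cand = min(hi, max(lo, x + rng.uniform(-radius, radius)))
        delta = objective(cand) - objective(x)
        # Always accept improvements; accept worse moves with a
        # probability that falls as the temperature is lowered.
        if delta < 0 or (temp > 0 and rng.random() < math.exp(-delta / temp)):
            x = cand
        if objective(x) < best_f:
            best_x, best_f = x, objective(x)
    return best_x, best_f

# Multimodal toy objective with its global minimum at x = 0.
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) ** 2
x_opt, f_opt = simulated_annealing(f, -10.0, 10.0)
```

RBSA's contribution, per the description above, is to launch many such trajectories from branch points and keep per-parameter trust regions, rather than following the single trajectory shown here.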

  10. Investigation of antenna pattern constraints for passive geosynchronous microwave imaging radiometers

    NASA Technical Reports Server (NTRS)

    Gasiewski, A. J.; Skofronick, G. M.

    1992-01-01

    Progress by investigators at Georgia Tech in defining the requirements for large space antennas for passive microwave Earth imaging systems is reviewed. In order to determine the antenna constraints (e.g., the aperture size, illumination taper, and gain uncertainty limits) necessary for the retrieval of geophysical parameters (e.g., rain rate) with adequate spatial resolution and accuracy, a numerical simulation of the passive microwave observation and retrieval process is being developed. Due to the small spatial scale of precipitation and the nonlinear relationships between precipitation parameters (e.g., rain rate, water density profile) and observed brightness temperatures, the retrieval of precipitation parameters is of primary interest in the simulation studies. Major components of the simulation are described, as well as progress and plans for completion. The overall goal of providing quantitative assessments of the accuracy of candidate geosynchronous and low-Earth orbiting imaging systems will continue under a separate grant.

  11. Constraints of beyond Standard Model parameters from the study of neutrinoless double beta decay

    NASA Astrophysics Data System (ADS)

    Stoica, Sabin

    2017-12-01

    Neutrinoless double beta (0νββ) decay is a beyond Standard Model (BSM) process whose discovery would clarify whether lepton number is conserved, decide on the neutrinos' character (are they Dirac or Majorana particles?), and give a hint on the scale of their absolute masses. Also, from the study of 0νββ decay one can constrain other BSM parameters related to the different scenarios by which this process can occur. In this paper I first give a short review of the current challenges in calculating precisely the phase space factors and nuclear matrix elements entering the 0νββ decay lifetimes, and I report our group's results for these quantities. Then, taking advantage of the most recent experimental limits on 0νββ lifetimes, I present new constraints on the neutrino mass parameters associated with different mechanisms of occurrence of the 0νββ decay mode.
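
For the most commonly assumed mechanism (light Majorana-neutrino exchange), the decay rate factorizes into exactly the quantities discussed above; a sketch in standard notation (not necessarily the author's specific conventions):

```latex
% 0\nu\beta\beta half-life: G^{0\nu} is the phase-space factor,
% M^{0\nu} the nuclear matrix element, m_e the electron mass.
\left[T^{0\nu}_{1/2}\right]^{-1}
  = G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}
    \left(\frac{\langle m_{\beta\beta}\rangle}{m_e}\right)^{2},
\qquad
\langle m_{\beta\beta}\rangle = \Bigl|\sum_i U_{ei}^{2}\, m_i\Bigr|
```

This factorization is why precise phase-space factors and nuclear matrix elements are the bottleneck: an experimental lower limit on the half-life translates into an upper limit on the effective mass ⟨m_ββ⟩ only as reliably as G^0ν and |M^0ν| are known.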

  12. Evaluation of parameters of Black Hole, stellar cluster and dark matter distribution from bright star orbits in the Galactic Center

    NASA Astrophysics Data System (ADS)

    Zakharov, Alexander

    It is well known that one can evaluate black hole (BH) parameters (including spin) by analyzing the trajectories of stars around the BH. A bulk distribution of matter (dark matter (DM) + stellar cluster) inside stellar orbits modifies the trajectories of stars; in particular, it generally produces an apoastron shift in the direction opposite to the general-relativistic one. Even now one can put constraints on the DM distribution and BH parameters, and these constraints will become more stringent in the future. An analysis of bright star trajectories therefore provides a relativistic test in the weak-gravitational-field approximation, but in the future one could test the strong gravitational field near the BH at the Galactic Center with the same technique, owing to rapid progress in observational facilities. References: A. F. Zakharov et al., Phys. Rev. D 76, 062001 (2007); A. F. Zakharov et al., Space Sci. Rev. 148, 301-313 (2009).

  13. Optimization of spectroscopic surveys for testing non-Gaussianity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raccanelli, Alvise; Doré, Olivier; Dalal, Neal, E-mail: alvise@caltech.edu, E-mail: Olivier.P.Dore@jpl.nasa.gov, E-mail: dalaln@illinois.edu

    We investigate optimization strategies to measure primordial non-Gaussianity with future spectroscopic surveys. We forecast measurements coming from the 3D galaxy power spectrum and compute constraints on the primordial non-Gaussianity parameters f_NL and n_NG. After studying the dependence of those constraints upon survey specifications such as redshift range, area, and number density, we assume a reference mock survey and investigate the trade-off between number density and area surveyed. We then define the observational requirements to reach a detection of f_NL of order 1. Our results show that power spectrum constraints on non-Gaussianity from future spectroscopic surveys can improve on current CMB limits, but the multi-tracer technique and higher-order correlations will be needed in order to reach an even better precision in the measurement of the non-Gaussianity parameter f_NL.
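
The power-spectrum sensitivity to f_NL in such forecasts typically rests on the scale-dependent halo bias induced by local-type non-Gaussianity; in one common convention (normalizations vary between papers):

```latex
% Scale-dependent correction to the Gaussian bias b of a tracer,
% with transfer function T(k), growth factor D(z), and the
% spherical-collapse threshold \delta_c:
\Delta b(k, z) = \frac{3\, f_{\rm NL}\,(b - 1)\,\delta_c\,\Omega_m H_0^2}
                      {c^2\, k^2\, T(k)\, D(z)},
\qquad \delta_c \simeq 1.686
```

The 1/k² growth of this correction on large scales is what makes wide-area, large-volume surveys (and the multi-tracer technique mentioned above) so effective for constraining f_NL.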

  14. Observational information for f(T) theories and dark torsion

    NASA Astrophysics Data System (ADS)

    Bengochea, Gabriel R.

    2011-01-01

    In the present work we analyze and compare the information coming from different observational data sets in the context of a class of f(T) theories. We perform a joint analysis with measurements of the most recent type Ia supernovae (SNe Ia), baryon acoustic oscillations (BAO), Cosmic Microwave Background radiation (CMB), gamma-ray burst data (GRBs), and Hubble parameter observations (OHD) to constrain the only new parameter these theories have. It is shown that when the new combined BAO/CMB parameter is used to place constraints, the result differs from previous works. We also show that when we include the OHD, the simpler ΛCDM model is excluded at the one-sigma level, leading the effective equation of state of these theories to be of phantom type. Finally, by analyzing a tension criterion for SNe Ia and the other observational sets, we obtain more consistent and better suited data sets to work with these theories.

  15. Constraints and tests of the OPERA superluminal neutrinos.

    PubMed

    Bi, Xiao-Jun; Yin, Peng-Fei; Yu, Zhao-Huan; Yuan, Qiang

    2011-12-09

    The superluminal neutrinos detected by OPERA indicate Lorentz invariance violation (LIV) of the neutrino sector at the order of 10⁻⁵. We study the implications of this result. We find that such a large LIV implied by the OPERA data would make the neutrino production process π → μ + ν_μ kinematically forbidden for neutrino energies greater than about 5 GeV. The OPERA detection of neutrinos at 40 GeV can constrain the LIV parameter to be smaller than 3×10⁻⁷. Furthermore, neutrino decay in the LIV framework would modify the neutrino spectrum greatly. The atmospheric neutrino spectrum measured by the IceCube Collaboration can constrain the LIV parameter to the level of 10⁻¹². The future detection of astrophysical neutrinos from galactic sources is expected to give an even stronger constraint on the LIV parameter of neutrinos.

  17. Connection dynamics of a gauge theory of gravity coupled with matter

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Banerjee, Kinjal; Ma, Yongge

    2013-10-01

    We study the coupling of the gravitational action, which is a linear combination of the Hilbert-Palatini term and the quadratic torsion term, to the action of Dirac fermions. The system possesses local Poincaré invariance and hence belongs to Poincaré gauge theory (PGT) with matter. The complete Hamiltonian analysis of the theory is carried out without gauge fixing but under a certain ansatz on the coupling parameters, which leads to a consistent connection dynamics with second-class constraints and torsion. After performing a partial gauge fixing, all second-class constraints can be solved, and an SU(2)-connection dynamical formalism of the theory can be obtained. Hence, the techniques of loop quantum gravity (LQG) can be employed to quantize this PGT with non-zero torsion. Moreover, the Barbero-Immirzi parameter in LQG acquires its physical meaning as the coupling parameter between the Hilbert-Palatini term and the quadratic torsion term in this gauge theory of gravity.

  18. Design Constraints Regarding The Use Of Fluids In Emergency Medical Systems For Space Flight

    NASA Technical Reports Server (NTRS)

    McQuillen, John

    2013-01-01

    The Exploration Medical Capability Project of the Human Research Program is tasked with identifying, investigating, and addressing existing gaps in knowledge or technology that must be closed to enable safer exploration missions. Several gaps involve treatment for emergency medical situations. Some of these treatments involve the handling of liquids in the spacecraft environment, which raises issues of gas-liquid mixture handling, dissolution chemistry, and thermal management. Recent technology efforts include the Intravenous Fluid Generation (IVGEN) experiment, the In-Suit Injection System (ISIS) experiment, and medical suction. Constraints include limited volume, shelf life, handling of biohazards, and the availability of power, crew time, and medical training.

  19. Evaluation of an artificial intelligence guided inverse planning system: clinical case study.

    PubMed

    Yan, Hui; Yin, Fang-Fang; Willett, Christopher

    2007-04-01

    An artificial intelligence (AI) guided method for parameter adjustment in inverse planning was implemented on a commercial inverse treatment planning system. For evaluation purposes, four typical clinical cases were tested, and the plans achieved by the automated and manual methods were compared. The parameter-adjustment procedure consists of three major loops, each in charge of modifying parameters of one category; each loop is carried out by a specially customized fuzzy inference system. Multiple physician-prescribed constraints for a selected volume were adopted to account for the tradeoff between the prescription dose to the PTV and dose-volume constraints for critical organs. The search for an optimal parameter combination began with the first constraint and proceeded to the next until a plan with acceptable dose was achieved. The initial setup of the plan parameters was the same for each case and was adjusted independently by both the manual and automated methods. After the parameters of one category were updated, the intensity maps of all fields were re-optimized and the plan dose was subsequently re-calculated. When the final plan was reached, dose statistics were calculated from both plans and compared. For the planning target volume (PTV), the dose to 95% of the volume is up to 10% higher in plans using the automated method than in those using the manual method. For critical organs, an average decrease in plan dose was achieved; however, the automated method cannot improve the plan dose for some critical organs due to limitations of the inference rules currently employed. For normal tissue, there was no significant difference between the plan doses achieved by the two methods. With the AI-guided method, the basic parameter-adjustment task can be accomplished automatically, yielding a plan dose comparable to that achieved manually. Future improvements incorporating case-specific inference rules are essential to fully automate the inverse planning process.
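
The nested adjust-evaluate-reoptimize structure described above can be sketched as a loop over parameter categories, with an inference step proposing updates until the plan meets its constraint. Everything below is a toy stand-in: the "dose" score, target weights, and proportional update are invented placeholders for the clinical system's dose engine and fuzzy inference rules.

```python
# Hedged sketch of the iterative parameter-adjustment strategy:
# loop over categories, let an inference step nudge each category's
# parameters, re-evaluate, and stop once the plan is acceptable.

def adjust_until_acceptable(params, categories, evaluate, infer_update,
                            acceptable, max_rounds=20):
    """Cycle through parameter categories until evaluate(params)
    satisfies the acceptance criterion (or rounds run out)."""
    for _ in range(max_rounds):
        score = evaluate(params)
        if acceptable(score):
            return params, score
        for category in categories:          # one inner loop per category
            params = infer_update(params, category, score)
    return params, evaluate(params)

# Toy stand-ins: 'score' is a deviation from prescription to minimize.
evaluate = lambda p: abs(p["ptv_weight"] - 8.0) + abs(p["oar_weight"] - 3.0)

def infer_update(p, category, score):
    # Crude proportional nudge in place of real fuzzy inference rules.
    target = {"ptv_weight": 8.0, "oar_weight": 3.0}[category]
    p = dict(p)
    p[category] += 0.5 * (target - p[category])
    return p

params, score = adjust_until_acceptable(
    {"ptv_weight": 1.0, "oar_weight": 1.0},
    ["ptv_weight", "oar_weight"],
    evaluate, infer_update, acceptable=lambda s: s < 0.1)
```

In the real system, `evaluate` would involve re-optimizing intensity maps and recomputing dose, and `infer_update` would apply the customized fuzzy inference rules for that parameter category.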

  20. Cosmology from cosmic shear with Dark Energy Survey Science Verification data

    DOE PAGES

    Becker, M. R.

    2016-07-06

    We present the first constraints on cosmology from the Dark Energy Survey (DES), using weak lensing measurements from the preliminary Science Verification (SV) data. We use 139 square degrees of SV data, which is less than 3% of the full DES survey area. Using cosmic shear two-point measurements over three redshift bins, we find σ8(Ωm/0.3)^0.5 = 0.81 ± 0.06 (68% confidence), after marginalizing over 7 systematics parameters and 3 other cosmological parameters. Furthermore, we examine the robustness of our results to the choice of data vector and systematics assumed, and find them to be stable. About 20% of our error bar comes from marginalizing over shear and photometric redshift calibration uncertainties. The current state-of-the-art cosmic shear measurements from CFHTLenS are mildly discrepant with the cosmological constraints from Planck CMB data. Our results are consistent with both datasets. Our uncertainties are ~30% larger than those from CFHTLenS when we carry out a comparable analysis of the two datasets, which we attribute largely to the lower number density of our shear catalogue. We investigate constraints on dark energy and find that, with this small fraction of the full survey, the DES SV constraints make negligible impact on the Planck constraints. The moderate disagreement between the CFHTLenS and Planck values of σ8(Ωm/0.3)^0.5 is present regardless of the value of w.
