Sample records for "constrain model parameters"

  1. Universally Sloppy Parameter Sensitivities in Systems Biology Models

    PubMed Central

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-01-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568
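
    As a concrete illustration of a sloppy spectrum (a minimal sketch, not the authors' code): for a toy sum-of-exponentials model, the eigenvalues of the Gauss-Newton Hessian J^T J taken with respect to log-parameters spread over many decades. The model form, decay rates and observation times below are illustrative.

    ```python
    # Sketch: eigenvalue spectrum of J^T J for y(t) = sum_i exp(-theta_i * t),
    # illustrating a "sloppy" spectrum spanning many decades. Values are
    # illustrative, not taken from the paper.
    import numpy as np

    t = np.linspace(0.0, 10.0, 200)                # observation times
    theta = np.array([0.1, 0.3, 1.0, 3.0])         # decay rates (illustrative)

    # sensitivities with respect to log-parameters: d y / d log(theta_i)
    J = np.stack([-th * t * np.exp(-th * t) for th in theta], axis=1)

    fim = J.T @ J                                  # Gauss-Newton Hessian / FIM
    eigvals = np.linalg.eigvalsh(fim)[::-1]        # descending order
    print("eigenvalues:", eigvals)
    print("decades spanned:", np.log10(eigvals[0] / eigvals[-1]))
    ```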

  2. Universally sloppy parameter sensitivities in systems biology models.

    PubMed

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  3. Astrophysical Model Selection in Gravitational Wave Astronomy

    NASA Technical Reports Server (NTRS)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
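
    A minimal sketch of the hierarchical step, with Gaussians standing in for the real source and population models: each of N sources has a true parameter drawn from a population with unknown mean, each measurement adds noise, and the hyperparameter posterior is evaluated on a grid after marginalizing the per-source parameters analytically. All numbers are illustrative.

    ```python
    # Sketch: grid posterior for a population hyperparameter (mean mu) given
    # N noisy source measurements; Gaussian-Gaussian, so each source's true
    # parameter marginalizes analytically. Stand-in for the paper's joint
    # source/population inference; all values are illustrative.
    import numpy as np

    rng = np.random.default_rng(7)
    mu_true, pop_sd, noise_sd, N = 0.3, 0.1, 0.05, 5000
    obs = rng.normal(mu_true, pop_sd, N) + rng.normal(0.0, noise_sd, N)

    mu_grid = np.linspace(0.25, 0.35, 401)
    var = pop_sd**2 + noise_sd**2          # marginal variance per observation
    loglike = -0.5 * ((obs[None, :] - mu_grid[:, None]) ** 2).sum(axis=1) / var
    post = np.exp(loglike - loglike.max())
    post /= np.trapz(post, mu_grid)

    mean = np.trapz(mu_grid * post, mu_grid)
    sd = np.sqrt(np.trapz((mu_grid - mean) ** 2 * post, mu_grid))
    print(f"mu = {mean:.4f} +/- {sd:.4f} (true {mu_true})")
    ```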

  4. A viable dark fluid model

    NASA Astrophysics Data System (ADS)

    Elkhateeb, Esraa

    2018-01-01

    We consider a cosmological model based on a generalization of the equation of state proposed by Nojiri and Odintsov (2004) and Štefančić (2005, 2006). We argue that this model works as a dark fluid model which can interpolate between the dust equation of state and the dark energy equation of state. We show how the asymptotic behavior of the equation of state constrains the parameters of the model. The causality condition for the model is also studied to constrain the parameters, and the fixed points are tested to determine different solution classes. Observations of the Type Ia supernova Hubble diagram are used to further constrain the model. We present an exact solution of the model and calculate the luminosity distance and the energy density evolution. We also calculate the deceleration parameter to test the expansion state of the universe.
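
    As a worked sketch of the observational step (illustrative only; the tanh interpolation below is a stand-in for the generalized Nojiri-Odintsov/Štefančić equation of state, and H0 is an assumed value): given w(z), the density follows from rho(z)/rho(0) = exp(3 ∫ (1+w)/(1+z') dz') and the luminosity distance from an integral over 1/H(z).

    ```python
    # Sketch: luminosity distance for a flat universe with a single dark fluid
    # whose w(z) interpolates between dust (w -> 0 at high z) and dark energy
    # (w ~ -1 today). Interpolation form and H0 are assumptions.
    import numpy as np
    from scipy.integrate import quad

    H0 = 70.0          # km/s/Mpc (assumed)
    c = 299792.458     # km/s

    def w(z):
        return -1.0 + 0.5 * (1.0 + np.tanh(z - 1.0))   # ~-1 today -> 0 at high z

    def rho_ratio(z):  # rho(z)/rho(0) = exp(3 * int_0^z (1+w)/(1+z') dz')
        integral, _ = quad(lambda zp: (1.0 + w(zp)) / (1.0 + zp), 0.0, z)
        return np.exp(3.0 * integral)

    def E(z):          # H(z)/H0 for this one-fluid toy universe
        return np.sqrt(rho_ratio(z))

    def d_L(z):        # luminosity distance, Mpc
        comoving, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
        return (1.0 + z) * (c / H0) * comoving

    for z in (0.1, 0.5, 1.0):
        print(f"z={z}: d_L = {d_L(z):.1f} Mpc")
    ```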

  5. OCO-2 Column Carbon Dioxide and Biometric Data Jointly Constrain Parameterization and Projection of a Global Land Model

    NASA Astrophysics Data System (ADS)

    Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III

    2015-12-01

    Uncertainty in predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of estimating model parameters is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of dry air mole fraction XCO2 and solar induced fluorescence (SIF) to independently constrain estimation of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 according to the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be effectively used for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters for the carbon cycle (e.g., maximum carboxylation rate, turnover time and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration). The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how sampling frequency and record length could affect model constraint and prediction.
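
    A minimal sketch of the assimilation step (not the CLM4.5 emulator itself): Metropolis MCMC calibrating the turnover time of a one-pool carbon model against synthetic observations. The one-pool model stands in for the matrix emulator; all numbers are invented.

    ```python
    # Sketch: random-walk Metropolis calibration of tau in the one-pool model
    # dC/dt = u - C/tau, whose solution is C(t) = u*tau + (C0 - u*tau)*exp(-t/tau).
    # Stand-in for MCMC on the matrix emulator; data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 50.0, 30)
    u, C0, tau_true, sigma = 2.0, 10.0, 15.0, 1.0

    def model(tau):
        return u * tau + (C0 - u * tau) * np.exp(-t / tau)

    obs = model(tau_true) + rng.normal(0.0, sigma, t.size)

    def log_post(tau):                      # flat prior on (1, 100)
        if not 1.0 < tau < 100.0:
            return -np.inf
        return -0.5 * np.sum((obs - model(tau)) ** 2) / sigma**2

    chain, tau = [], 5.0
    lp = log_post(tau)
    for _ in range(20000):
        prop = tau + rng.normal(0.0, 0.5)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            tau, lp = prop, lp_prop
        chain.append(tau)

    samples = np.array(chain[5000:])        # discard burn-in
    print(f"posterior tau: {samples.mean():.2f} +/- {samples.std():.2f}")
    ```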

  6. A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions

    NASA Astrophysics Data System (ADS)

    Lienert, Sebastian; Joos, Fortunat

    2018-05-01

    A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC of the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.
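
    A minimal sketch of the ensemble-constraint pattern described above (the parameter names and the toy observable are placeholders, not the actual LPX-Bern setup): draw a Latin hypercube over parameter ranges, score each member against an observational constraint, and weight the ensemble accordingly.

    ```python
    # Sketch: Latin hypercube ensemble + likelihood weighting against one
    # synthetic observational constraint. Parameters and observable are
    # placeholders for the DGVM quantities described above.
    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=2, seed=1)
    unit = sampler.random(n=1000)
    # columns: [q10 (temperature sensitivity), tau (soil turnover, yr)]
    params = qmc.scale(unit, l_bounds=[1.2, 5.0], u_bounds=[3.5, 60.0])

    def simulate_flux(p):                  # toy stand-in for a DGVM run
        q10, tau = p
        return 100.0 * q10 / tau

    obs, obs_sigma = 8.0, 1.0              # synthetic constraint
    sim = np.apply_along_axis(simulate_flux, 1, params)
    weights = np.exp(-0.5 * ((sim - obs) / obs_sigma) ** 2)
    weights /= weights.sum()

    print("constrained means (q10, tau):", weights @ params)
    ```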

  7. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
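
    JuPOETs itself is written in Julia; as a language-neutral sketch of the Pareto-ranking idea at its core (the simulated annealing component is omitted), the following keeps the candidates that are not dominated in any training objective. The objective values are random placeholders.

    ```python
    # Sketch: non-dominated (Pareto) filtering of candidate parameter sets
    # scored on two conflicting objectives (lower is better). Illustrative
    # stand-in for the ranking step inside a POETs-style algorithm.
    import numpy as np

    def pareto_mask(objectives):
        """Rows = candidates, columns = objectives. True = non-dominated."""
        n = objectives.shape[0]
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            dominated_by = (np.all(objectives <= objectives[i], axis=1) &
                            np.any(objectives < objectives[i], axis=1))
            if np.any(dominated_by):
                keep[i] = False
        return keep

    rng = np.random.default_rng(2)
    errs = rng.random((200, 2))            # two conflicting training errors
    front = errs[pareto_mask(errs)]
    print(f"{front.shape[0]} ensemble members on the Pareto front")
    ```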

  8. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for ecosystem carbon cycle studies

    Treesearch

    Y. He; Q. Zhuang; A.D. McGuire; Y. Liu; M. Chen

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the...

  9. The added value of remote sensing products in constraining hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus

    2017-04-01

    The calibration of a hydrological model still depends on the availability of streamflow data, even though additional sources of information (e.g. remotely sensed data products) have become more widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow with modelled snow states), after which new a posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis to assess a model's ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will enhance our understanding and recommendations in the use of remotely sensed products for constraining conceptual hydrological models and improving predictive capability, especially for data-sparse regions.
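
    A minimal sketch of the weighting step described above, with synthetic series standing in for the modelled states and the remote-sensing product: each parameter set is weighted by the coefficient of determination (r²) between its simulated series and the product.

    ```python
    # Sketch: weight each parameter set by r^2 between its modelled state
    # series and a remotely sensed product (both synthetic here), then keep
    # a weighted posterior ensemble. Stand-in for, e.g., MOD10A snow states.
    import numpy as np

    rng = np.random.default_rng(3)
    n_sets, n_t = 500, 120
    product = np.sin(np.linspace(0, 6 * np.pi, n_t))   # toy RS product
    noise_sd = rng.uniform(0.1, 2.0, (n_sets, 1))      # per-set model error
    modelled = product + rng.normal(0.0, 1.0, (n_sets, n_t)) * noise_sd

    def r_squared(y, yhat):
        r = np.corrcoef(y, yhat)[0, 1]
        return r * r

    w = np.array([r_squared(product, m) for m in modelled])
    w /= w.sum()                                # conditional-probability weights
    best = np.argsort(w)[::-1][:50]             # most plausible parameter sets
    print("top parameter sets:", best[:10])
    ```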

  10. Characterization of the High-Albedo NEA 3691 Bede

    NASA Technical Reports Server (NTRS)

    Wooden, Diane H.; Lederer, Susan M.; Jehin, Emmanuel; Rozitis, Benjamin; Jefferson, Jeffrey D.; Nelson, Tyler W.; Dotson, Jessie L.; Ryan, Erin L.; Howell, Ellen S.; Fernandez, Yanga R.; Lovell, Amy J.; Woodward, Charles E.; Harker, David Emerson

    2016-01-01

    Characterization of NEAs provides important inputs to models for atmospheric entry, risk assessment and mitigation. Diameter is a key parameter because diameter translates to kinetic energy in atmospheric entry. Diameters can be derived from the absolute magnitude, H(PA=0deg), and from thermal modeling of observed IR fluxes. For both methods, the albedo (pv) is important - high pv surfaces have cooler temperatures, larger diameters for a given Hmag, and shallower phase curves (larger slope parameter G). Thermal model parameters are coupled, however, so that a higher thermal inertia also results in a cooler surface temperature. Multiple parameters contribute to constraining the diameter. Observations made at multiple observing geometries can contribute to understanding the relationships between and potentially breaking some of the degeneracies between parameters. We present data and analyses on NEA 3691 Bede with the aim of best constraining the diameter and pv from a combination of thermal modeling and light curve analyses. We employ our UKIRT+Michelle mid-IR photometric observations of 3691 Bede's thermal emission at 2 phase angles (27&43 deg 2015-03-19 & 04-13), in addition to WISE data (33deg 2010-05-27, Mainzer+2011). Observing geometries differ by solar phase angles and by moderate changes in heliocentric distance (e.g., further distances produce somewhat cooler surface temperatures). With the NEATM model and for a constant IR beaming parameter (eta=constant), there is a family of solutions for (diameter, pv, G, eta) where G is the slope parameter from the H-G Relation. NEATM models employing Pravec+2012's choice of G=0.43, produce D=1.8 km and pv≈0.4, given that G=0.43 is assumed from studies of main belt asteroids (Warner+2009). We present an analysis of the light curve of 3691 Bede to constrain G from observations. We also investigate fitting thermophysical models (TPM, Rozitis+11) to constrain the coupled parameters of thermal inertia (Gamma) and surface roughness, which in turn affect diameter and pv. Surface composition can be related to pv. This study focuses on understanding and characterizing the dependency of parameters with the aim of constraining diameter, pv and thermal inertia for 3691 Bede.

  11. Characterization of the high-albedo NEA 3691 Bede

    NASA Astrophysics Data System (ADS)

    Wooden, Diane H.; Lederer, Susan M.; Jehin, Emmanuel; Rozitis, Benjamin; Jefferson, Jeffrey D.; Nelson, Tyler W.; Dotson, Jessie L.; Ryan, Erin L.; Howell, Ellen S.; Fernandez, Yanga R.; Lovell, Amy J.; Woodward, Charles E.; Harker, David Emerson

    2016-10-01

    Characterization of NEAs provides important inputs to models for atmospheric entry, risk assessment and mitigation. Diameter is a key parameter because diameter translates to kinetic energy in atmospheric entry. Diameters can be derived from the absolute magnitude, H(PA=0deg), and from thermal modeling of observed IR fluxes. For both methods, the albedo (pv) is important - high pv surfaces have cooler temperatures, larger diameters for a given Hmag, and shallower phase curves (larger slope parameter G). Thermal model parameters are coupled, however, so that a higher thermal inertia also results in a cooler surface temperature. Multiple parameters contribute to constraining the diameter. Observations made at multiple observing geometries can contribute to understanding the relationships between and potentially breaking some of the degeneracies between parameters. We present data and analyses on NEA 3691 Bede with the aim of best constraining the diameter and pv from a combination of thermal modeling and light curve analyses. We employ our UKIRT+Michelle mid-IR photometric observations of 3691 Bede's thermal emission at 2 phase angles (27&43 deg 2015-03-19 & 04-13), in addition to WISE data (33deg 2010-05-27, Mainzer+2011). Observing geometries differ by solar phase angles and by moderate changes in heliocentric distance (e.g., further distances produce somewhat cooler surface temperatures). With the NEATM model and for a constant IR beaming parameter (eta=constant), there is a family of solutions for (diameter, pv, G, eta) where G is the slope parameter from the H-G Relation. NEATM models employing Pravec+2012's choice of G=0.43, produce D=1.8 km and pv≈0.4, given that G=0.43 is assumed from studies of main belt asteroids (Warner+2009). We present an analysis of the light curve of 3691 Bede to constrain G from observations. We also investigate fitting thermophysical models (TPM, Rozitis+11) to constrain the coupled parameters of thermal inertia (Gamma) and surface roughness, which in turn affect diameter and pv. Surface composition can be related to pv. This study focuses on understanding and characterizing the dependency of parameters with the aim of constraining diameter, pv and thermal inertia for 3691 Bede.
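
    A worked sketch of the diameter-albedo coupling referred to above, using the standard relation D(km) = 1329 · 10^(-H/5) / sqrt(pv). The absolute magnitude H = 15.3 is an assumed, illustrative value chosen so that pv ≈ 0.4 reproduces the quoted D ≈ 1.8 km; it is not a measured value from the paper.

    ```python
    # Sketch: standard asteroid diameter-albedo-magnitude relation,
    # D(km) = 1329 * 10**(-H/5) / sqrt(p_v). H = 15.3 is an assumption.
    import math

    def diameter_km(H, p_v):
        return 1329.0 * 10.0 ** (-H / 5.0) / math.sqrt(p_v)

    H = 15.3
    for p_v in (0.2, 0.4):
        print(f"p_v = {p_v}: D = {diameter_km(H, p_v):.2f} km")
    ```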

  12. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
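
    A minimal sketch of a single unconstrained Wang-Landau walk (the paper's method runs many such walks, each constrained to fixed values of the macroscopic order parameters): a flat-histogram random walk in energy that estimates ln g(E) for a small periodic 2D Ising lattice. The modification-factor schedule below is crude; production codes check histogram flatness.

    ```python
    # Sketch: Wang-Landau estimation of the density of states g(E) for a
    # 4x4 periodic Ising model. Acceptance uses min(1, g(E_old)/g(E_new));
    # ln g is bumped by ln_f at every visited energy.
    import numpy as np

    rng = np.random.default_rng(4)
    L = 4
    spins = rng.choice([-1, 1], size=(L, L))

    def local_field(s, i, j):
        return (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                + s[i, (j + 1) % L] + s[i, (j - 1) % L])

    E = -sum(spins[i, j] * (spins[(i + 1) % L, j] + spins[i, (j + 1) % L])
             for i in range(L) for j in range(L))

    log_g, ln_f = {}, 1.0
    while ln_f > 1e-4:
        for _ in range(20000):
            i, j = rng.integers(L, size=2)
            E_new = E + 2 * spins[i, j] * local_field(spins, i, j)
            if np.log(rng.uniform()) < log_g.get(E, 0.0) - log_g.get(E_new, 0.0):
                spins[i, j] *= -1
                E = E_new
            log_g[E] = log_g.get(E, 0.0) + ln_f
        ln_f /= 2.0   # crude schedule; real codes test histogram flatness

    base = min(log_g.values())
    print({e: round(v - base, 1) for e, v in sorted(log_g.items())})
    ```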

  13. Optimization of Modeled Land-Atmosphere Exchanges of Water and Energy in an Isotopically-Enabled Land Surface Model by Bayesian Parameter Calibration

    NASA Astrophysics Data System (ADS)

    Wong, T. E.; Noone, D. C.; Kleiber, W.

    2014-12-01

    The single largest uncertainty in climate model energy balance is the surface latent heating over tropical land. Furthermore, the partitioning of the total latent heat flux into contributions from surface evaporation and plant transpiration is of great importance, but notoriously poorly constrained. Resolving these issues will require better exploiting information which lies at the interface between observations and advanced modeling tools, both of which are imperfect. There are remarkably few observations which can constrain these fluxes, placing strict requirements on developing statistical methods to maximize the use of limited information to best improve models. Previous work has demonstrated the power of incorporating stable water isotopes into land surface models for further constraining ecosystem processes. We present results from a stable water isotopically-enabled land surface model (iCLM4), including model experiments partitioning the latent heat flux into contributions from plant transpiration and surface evaporation. It is shown that the partitioning results are sensitive to the parameterization of kinetic fractionation used. We discuss and demonstrate an approach to calibrating select model parameters to observational data in a Bayesian estimation framework, requiring Markov Chain Monte Carlo sampling of the posterior distribution, which is shown to constrain uncertain parameters as well as inform relevant values for operational use. Finally, we discuss the application of the estimation scheme to iCLM4, including entropy as a measure of information content and specific challenges which arise in calibrating models with a large number of parameters.

  14. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without making any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD and a distance metric based on galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to constraints we obtain using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
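
    A minimal sketch of the basic ABC rejection step underlying the ABC-PMC scheme (the forward model, summary statistic and prior below are toy stand-ins, not an HOD mock pipeline):

    ```python
    # Sketch: ABC rejection. Draw theta from the prior, simulate data with a
    # forward model, keep draws whose summary statistic lies within epsilon
    # of the observed one. Toy Gaussian stand-in for the HOD forward model.
    import numpy as np

    rng = np.random.default_rng(5)

    def forward(theta, n=200):              # toy generative model
        return rng.normal(theta, 1.0, n)

    def distance(x, y):                     # metric on summary statistics
        return abs(x.mean() - y.mean())

    data = forward(1.3)                     # "observation", true theta = 1.3

    eps, accepted = 0.05, []
    while len(accepted) < 500:
        theta = rng.uniform(-5.0, 5.0)      # prior draw
        if distance(forward(theta), data) < eps:
            accepted.append(theta)

    post = np.array(accepted)
    print(f"ABC posterior: {post.mean():.3f} +/- {post.std():.3f}")
    ```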

  15. Dynamic Parameters of the 2015 Nepal Gorkha Mw7.8 Earthquake Constrained by Multi-observations

    NASA Astrophysics Data System (ADS)

    Weng, H.; Yang, H.

    2017-12-01

    Dynamic rupture models can provide detailed insights into rupture physics that are useful for assessing future seismic risk. Many studies have attempted to constrain the slip-weakening distance, an important parameter controlling the frictional behavior of rock, for several earthquakes based on dynamic models, kinematic models, and direct estimations from near-field ground motion. However, large uncertainties in the values of the slip-weakening distance remain, mostly because of the intrinsic trade-offs between the slip-weakening distance and fault strength. Here we use a spontaneous dynamic rupture model to constrain the frictional parameters of the 25 April 2015 Mw7.8 Nepal earthquake, combining multiple seismic observations such as high-rate cGPS data, strong motion data, and kinematic source models. With numerous tests we find that the trade-off patterns of final slip, rupture speed, static GPS ground displacements, and dynamic ground waveforms are quite different. Combining all the seismic constraints, we conclude a robust solution for the average slip-weakening distance, 0.6 m, without a substantial trade-off, in contrast to a previous kinematic estimate of 5 m. To the best of our knowledge, this is the first robust determination of the slip-weakening distance on a seismogenic fault from seismic observations. The well-constrained frictional parameters may be used in future dynamic models to assess seismic hazard, for example by estimating peak ground acceleration (PGA). A similar approach could also be applied to other great earthquakes, enabling broad estimation of dynamic parameters from a global perspective that can better reveal the intrinsic physics of earthquakes.

  16. A Multidimensional Item Response Model: Constrained Latent Class Analysis Using the Gibbs Sampler and Posterior Predictive Checks.

    ERIC Educational Resources Information Center

    Hoijtink, Herbert; Molenaar, Ivo W.

    1997-01-01

    This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)

  17. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    NASA Astrophysics Data System (ADS)

    Gato-Rivera, B.; Semikhatov, A. M.

    1992-08-01

    A direct relation between the conformal formalism for 2D quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the W(l)-constrained KP hierarchy to the (p′, p) minimal model, with the tau function being given by the correlator of a product of (dressed) (l, 1) [or (1, l)] operators, provided the Miwa parameter ni and the free parameter (an abstract bc spin) present in the constraint are expressed through the ratio p′/p and the level l.

  18. A Multiple Group Measurement Model of Children's Reports of Parental Socioeconomic Status. Discussion Papers No. 531-78.

    ERIC Educational Resources Information Center

    Mare, Robert D.; Mason, William M.

    An important class of applications of measurement error or constrained factor analytic models consists of comparing models for several populations. In such cases, it is appropriate to make explicit statistical tests of model similarity across groups and to constrain some parameters of the models to be equal across groups using a priori substantive…

  19. Modeling Coronal Mass Ejections with EUHFORIA: A Parameter Study of the Gibson-Low Flux Rope Model using Multi-Viewpoint Observations

    NASA Astrophysics Data System (ADS)

    Verbeke, C.; Asvestari, E.; Scolini, C.; Pomoell, J.; Poedts, S.; Kilpua, E.

    2017-12-01

    Coronal mass ejections (CMEs) are among the main drivers of coronal and interplanetary dynamics. Understanding their origin and evolution from the Sun to the Earth is crucial for determining their impact on Earth and society. One of the key parameters that determine the geo-effectiveness of a coronal mass ejection is its internal magnetic configuration. We present a detailed parameter study of the Gibson-Low flux rope model. We focus on changes in the input parameters and how these changes affect the characteristics of the CME at Earth. Recently, the Gibson-Low flux rope model has been implemented into the inner heliosphere model EUHFORIA, a magnetohydrodynamic forecasting model of large-scale dynamics from 0.1 AU up to 2 AU. Coronagraph observations can be used to constrain the kinematics and morphology of the flux rope. One of the key parameters, the magnetic field, is difficult to determine directly from observations. In this work, we approach the problem by conducting a parameter study in which flux ropes with varying magnetic configurations are simulated. We then use the obtained dataset to look for signatures in imaging observations and in-situ observations in order to find an empirical way of constraining the parameters related to the magnetic field of the flux rope. In particular, we focus on events observed by at least two spacecraft (STEREO + L1) in order to discuss the merits of using observations from multiple viewpoints in constraining the parameters.

  20. Maximizing the information learned from finite data selects a simple model

    NASA Astrophysics Data System (ADS)

    Mattingly, Henry H.; Transtrum, Mark K.; Abbott, Michael C.; Machta, Benjamin B.

    2018-02-01

    We use the language of uninformative Bayesian prior choice to study the selection of appropriately simple effective models. We advocate for the prior which maximizes the mutual information between parameters and predictions, learning as much as possible from limited data. When many parameters are poorly constrained by the available data, we find that this prior puts weight only on boundaries of the parameter space. Thus, it selects a lower-dimensional effective theory in a principled way, ignoring irrelevant parameter directions. In the limit where there are sufficient data to tightly constrain any number of parameters, this reduces to the Jeffreys prior. However, we argue that this limit is pathological when applied to the hyperribbon parameter manifolds generic in science, because it leads to dramatic dependence on effects invisible to experiment.

  21. Terrestrial Sagnac delay constraining modified gravity models

    NASA Astrophysics Data System (ADS)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of the accretion disk around constant-Ricci-curvature Kerr-f(R0) stellar-sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from a source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams re-unite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat-space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitude of the corrections is of the order of residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak-field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong-field accretion disk phenomenon.
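
    For scale (a sketch of the leading-order value only; the paper's f(R) corrections are not modelled here), the flat-spacetime Sagnac delay for counter-propagating beams around Earth's equator is Δt = 4AΩ/c² with A = πR²:

    ```python
    # Sketch: leading-order (flat-spacetime) terrestrial Sagnac delay,
    # Delta_t = 4 * A * Omega / c^2 with A = pi * R^2 (equatorial loop).
    import math

    R = 6.378e6        # Earth's equatorial radius, m
    Omega = 7.292e-5   # Earth's rotation rate, rad/s
    c = 2.998e8        # speed of light, m/s

    dt = 4.0 * math.pi * R**2 * Omega / c**2
    print(f"flat-space Sagnac delay: {dt * 1e9:.0f} ns")   # ~415 ns
    ```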

  22. A RSSI-based parameter tracking strategy for constrained position localization

    NASA Astrophysics Data System (ADS)

    Du, Jinze; Diouris, Jean-François; Wang, Yide

    2017-12-01

    In this paper, a received signal strength indicator (RSSI)-based parameter tracking strategy for constrained position localization is proposed. To estimate channel model parameters, the least mean squares (LMS) method is associated with the trilateration method. In the context of applications where the positions are constrained on a grid, a novel tracking strategy is proposed to determine the real position and obtain the actual parameters in the monitored region. Based on practical data acquired from a real localization system, an experimental channel model is constructed to provide RSSI values and verify the proposed tracking strategy. Quantitative criteria are given to guarantee the efficiency of the proposed tracking strategy by providing a trade-off between the grid resolution and parameter variation. The simulation results show good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more computation time. In addition, a tracking test is performed to validate the effectiveness of the proposed tracking strategy.
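
    A minimal sketch of the two ingredients described above, with invented geometry and channel values: fitting the log-distance path-loss model RSSI(d) = A - 10·n·log10(d) by least squares (the paper tracks the channel parameters online with LMS), followed by a linearized trilateration step.

    ```python
    # Sketch: (1) least-squares fit of the path-loss parameters (A, n);
    # (2) linearized trilateration from RSSI-derived ranges to 3 anchors.
    # All measurements and anchor positions are invented.
    import numpy as np

    # (1) channel parameters from calibration pairs (distance, RSSI)
    d = np.array([1.0, 2.0, 4.0, 8.0])               # m
    rssi = np.array([-40.2, -46.3, -52.1, -58.4])    # dBm
    X = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])
    (A, n), *_ = np.linalg.lstsq(X, rssi, rcond=None)
    print(f"A = {A:.1f} dBm, n = {n:.2f}")

    # (2) ranges from the fitted model, then linear trilateration:
    # subtracting the first circle equation from the others gives M x = b
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    meas = np.array([-55.0, -58.0, -61.0])
    ranges = 10.0 ** ((A - meas) / (10.0 * n))

    p1, r1 = anchors[0], ranges[0]
    M = 2.0 * (anchors[1:] - p1)
    b = (r1**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p1**2))
    pos, *_ = np.linalg.lstsq(M, b, rcond=None)
    print("estimated position:", pos)
    ```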

  23. How CMB and large-scale structure constrain chameleon interacting dark energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y.

    2015-07-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  24. Model independent constraints on transition redshift

    NASA Astrophysics Data System (ADS)

    Jesus, J. F.; Holanda, R. F. L.; Pereira, S. H.

    2018-05-01

    This paper aims to put constraints on the transition redshift zt, which determines the onset of cosmic acceleration, in cosmological-model independent frameworks. In order to perform our analyses, we consider a flat universe and assume a parametrization for the comoving distance DC(z) up to third degree in z, a second degree parametrization for the Hubble parameter H(z) and a linear parametrization for the deceleration parameter q(z). For each case, we show that type Ia supernovae and H(z) data complement each other on the parameter space and tighter constraints for the transition redshift are obtained. By combining the type Ia supernova observations and Hubble parameter measurements it is possible to constrain the values of zt, for each approach, as 0.806 ± 0.094, 0.870 ± 0.063 and 0.973 ± 0.058 at 1σ c.l., respectively. Thus, such approaches provide cosmological-model independent estimates for this parameter.
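
    For the linear case the algebra is immediate: with q(z) = q0 + q1·z, the transition redshift (where q crosses zero) is zt = -q0/q1. A one-line check with illustrative values in the spirit of the quoted fits:

    ```python
    # Sketch: transition redshift for the linear deceleration parametrization
    # q(z) = q0 + q1*z. The q0, q1 values are illustrative, not the paper's fit.
    q0, q1 = -0.73, 0.90
    z_t = -q0 / q1
    print(f"z_t = {z_t:.3f}")   # ~0.81, within the ~0.8-1.0 range quoted above
    ```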

  25. Attaining insight into interactions between hydrologic model parameters and geophysical attributes for national-scale model parameter estimation

    NASA Astrophysics Data System (ADS)

    Mizukami, N.; Clark, M. P.; Newman, A. J.; Wood, A.; Gutmann, E. D.

    2017-12-01

    Estimating spatially distributed model parameters is a grand challenge for large-domain hydrologic modeling, especially in the context of hydrologic model applications such as streamflow forecasting. Multi-scale Parameter Regionalization (MPR) is a promising technique that accounts for the effects of fine-scale geophysical attributes (e.g., soil texture, land cover, topography, climate) on model parameters, as well as nonlinear scaling effects on model parameters. MPR computes model parameters with transfer functions (TFs) that relate geophysical attributes to model parameters at the native input data resolution and then scales them using scaling functions to the spatial resolution of the model implementation. One of the biggest challenges in the use of MPR is the identification of TFs for each model parameter: both functional forms and geophysical predictors. TFs used to estimate the parameters of hydrologic models typically rely on previous studies or were derived in an ad hoc, heuristic manner, potentially not utilizing the maximum information content contained in the geophysical attributes for optimal parameter identification. Thus, it is necessary to first uncover relationships among geophysical attributes, model parameters, and hydrologic processes (i.e., hydrologic signatures) to obtain insight into which, and to what extent, geophysical attributes are related to model parameters. We perform multivariate statistical analysis on a large-sample catchment data set including various geophysical attributes as well as constrained VIC model parameters at 671 unimpaired basins over the CONUS. We first calibrate the VIC model at each catchment to obtain constrained parameter sets. Additionally, parameter sets sampled during the calibration process are used for sensitivity analysis using various hydrologic signatures as objectives to understand the relationships among geophysical attributes, parameters, and hydrologic processes.
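
    A minimal sketch of the MPR pattern described above (the transfer-function coefficients, the attribute, and the choice of a harmonic-mean scaling operator are all illustrative): apply a TF to a fine-scale attribute grid, then aggregate blocks to the model resolution.

    ```python
    # Sketch: MPR in two steps. (1) a transfer function maps a fine-scale
    # attribute (sand fraction) to a parameter field; (2) a scaling operator
    # (here a harmonic mean over 10x10 blocks) upscales to model resolution.
    import numpy as np

    rng = np.random.default_rng(8)
    sand = rng.uniform(0.1, 0.9, (100, 100))   # fine-scale attribute grid

    def transfer(sand, a=5.0, b=20.0):         # illustrative TF
        return a + b * sand

    k_fine = transfer(sand)
    blocks = (k_fine.reshape(10, 10, 10, 10)   # (block_row, row, block_col, col)
                    .swapaxes(1, 2)            # (block_row, block_col, row, col)
                    .reshape(10, 10, 100))
    k_model = 100.0 / (1.0 / blocks).sum(axis=2)   # harmonic mean per block
    print("model-resolution field:", k_model.shape, round(k_model.mean(), 2))
    ```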

  26. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.

  27. Performance Analysis of BDS Medium-Long Baseline RTK Positioning Using an Empirical Troposphere Model.

    PubMed

    Shu, Bao; Liu, Hui; Xu, Longwei; Qian, Chuang; Gong, Xiaopeng; An, Xiangdong

    2018-04-14

    For GPS medium-long baseline real-time kinematic (RTK) positioning, the troposphere parameter is introduced along with coordinates, and the model is ill-conditioned due to its strong correlation with the height parameter. For BeiDou Navigation Satellite System (BDS), additional difficulties occur due to its special satellite constellation. In fact, relative zenith troposphere delay (RZTD) derived from high-precision empirical zenith troposphere models can be introduced. Thus, the model strength can be improved, which is also called the RZTD-constrained RTK model. In this contribution, we first analyze the factors affecting the precision of BDS medium-long baseline RTK; thereafter, 15 baselines ranging from 38 km to 167 km in different troposphere conditions are processed to assess the performance of RZTD-constrained RTK. Results show that the troposphere parameter is difficult to distinguish from the height component, even with long time filtering for BDS-only RTK. Due to the lack of variation in geometry for the BDS geostationary Earth orbit satellite, the long convergence time of ambiguity parameters may reduce the height precision of GPS/BDS-combined RTK in the initial period. When the RZTD-constrained model was used in BDS and GPS/BDS-combined situations compared with the traditional RTK, the standard deviation of the height component for the fixed solution was reduced by 52.4% and 34.0%, respectively.

  28. Performance Analysis of BDS Medium-Long Baseline RTK Positioning Using an Empirical Troposphere Model

    PubMed Central

    Liu, Hui; Xu, Longwei; Qian, Chuang; Gong, Xiaopeng; An, Xiangdong

    2018-01-01

    For GPS medium-long baseline real-time kinematic (RTK) positioning, the troposphere parameter is introduced along with coordinates, and the model is ill-conditioned due to its strong correlation with the height parameter. For BeiDou Navigation Satellite System (BDS), additional difficulties occur due to its special satellite constellation. In fact, relative zenith troposphere delay (RZTD) derived from high-precision empirical zenith troposphere models can be introduced. Thus, the model strength can be improved, which is also called the RZTD-constrained RTK model. In this contribution, we first analyze the factors affecting the precision of BDS medium-long baseline RTK; thereafter, 15 baselines ranging from 38 km to 167 km in different troposphere conditions are processed to assess the performance of RZTD-constrained RTK. Results show that the troposphere parameter is difficult to distinguish from the height component, even with long time filtering for BDS-only RTK. Due to the lack of variation in geometry for the BDS geostationary Earth orbit satellite, the long convergence time of ambiguity parameters may reduce the height precision of GPS/BDS-combined RTK in the initial period. When the RZTD-constrained model was used in BDS and GPS/BDS-combined situations compared with the traditional RTK, the standard deviation of the height component for the fixed solution was reduced by 52.4% and 34.0%, respectively. PMID:29661999

  29. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    NASA Astrophysics Data System (ADS)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes in terms of two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular by rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for a basin assumed to be ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.

  30. Fine-structure constant constraints on dark energy. II. Extending the parameter space

    NASA Astrophysics Data System (ADS)

    Martins, C. J. A. P.; Pinho, A. M. M.; Carreira, P.; Gusart, A.; López, J.; Rocha, C. I. S. A.

    2016-01-01

    Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are a powerful probe of new physics. Recently these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, were used to constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. One caveat of these analyses was that they were based on fiducial models where the dark energy equation of state was described by a single parameter (effectively its present day value, w0). Here we relax this assumption and study broader dark energy model classes, including the Chevallier-Polarski-Linder and early dark energy parametrizations. Even in these extended cases we find that the current data constrains the coupling ζ at the 10^-6 level and w0 to a few percent (marginalizing over other parameters), thus confirming the robustness of earlier analyses. On the other hand, the additional parameters are typically not well constrained. We also highlight the implications of our results for constraints on violations of the weak equivalence principle and improvements to be expected from forthcoming measurements with high-resolution ultrastable spectrographs.

  31. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

    Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages at such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent more complex mixing, with mixing of water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and therefore difficult to constrain fully. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model. We obtained excellent results using tritium time series encompassing the passage of the bomb tritium through the aquifer, and SF6 with its steep gradient currently in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
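
    A minimal sketch of a binary-mixing LPM for tritium (the input series is synthetic, not a real atmospheric record; the 12.32-year half-life is the physical constant): each component convolves the input with an exponential age distribution plus radioactive decay, and the sample mixes a young and an old component.

    ```python
    # Sketch: binary exponential-mixing lumped parameter model for tritium.
    # Output = f * (young component) + (1 - f) * (old component), where each
    # component is a decay-weighted convolution of the input with an
    # exponential age distribution. Input series and parameters are invented.
    import numpy as np

    lam = np.log(2.0) / 12.32        # tritium decay constant, 1/yr
    years = np.arange(1950, 2021)
    c_in = 2.0 + 80.0 * np.exp(-0.5 * ((years - 1965) / 5.0) ** 2)  # toy bomb peak, TU

    def exp_model(T, t_index):
        """Exponential age distribution with mean T years, plus decay."""
        tau = np.arange(0, 200)      # ages, yr
        g = np.exp(-tau / T) / T
        g /= g.sum()                 # discrete normalization
        idx = t_index - tau
        c = np.where(idx >= 0, c_in[np.clip(idx, 0, None)], c_in[0])
        return np.sum(c * g * np.exp(-lam * tau))

    t_now = len(years) - 1           # sample in 2020
    f, T_young, T_old = 0.3, 5.0, 80.0   # mixing fraction and mean ages
    sample = f * exp_model(T_young, t_now) + (1 - f) * exp_model(T_old, t_now)
    print(f"modelled tritium in {years[t_now]}: {sample:.2f} TU")
    ```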

  32. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

    In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties in source properties and in the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on analysis of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant of the NSMC is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which could otherwise lead to poor quantification of predictive uncertainty. Application of the proposed approach to manage bioremediation of groundwater at a real site shows that it is effective in supporting management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for utilizing model predictive uncertainty methods in environmental management.

  33. Sensitivity to gaze-contingent contrast increments in naturalistic movies: An exploratory report and model comparison

    PubMed Central

    Wallis, Thomas S. A.; Dorr, Michael; Bex, Peter J.

    2015-01-01

    Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. PMID:26057546

  14. Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.

    2016-12-01

    The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients in pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the TCAT model are postulated based on microscale averages, and parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.

  15. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method that uses constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models, with the goal of improving on the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters of the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy, at a small cost in computation. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
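
    For orientation, a sketch of the smoothing-based two-stage pseudo-least-squares baseline that the proposed constrained estimator improves upon (Gaussian-kernel local linear smoothing; the logistic-growth test problem and all numerical values are hypothetical):

        import numpy as np
        from scipy.optimize import least_squares

        def two_stage_ode_fit(t, y, f, theta0, bandwidth=0.5):
            """Stage 1: local linear regression gives state and derivative
            estimates; stage 2: least squares on the ODE residual."""
            def local_linear(t0):
                w = np.sqrt(np.exp(-0.5 * ((t - t0) / bandwidth) ** 2))
                X = np.vstack([np.ones_like(t), t - t0]).T
                beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
                return beta          # beta[0] ~ x(t0), beta[1] ~ x'(t0)
            fits = np.array([local_linear(ti) for ti in t])
            xhat, dxhat = fits[:, 0], fits[:, 1]
            return least_squares(lambda th: dxhat - f(xhat, th), theta0).x

        # hypothetical test problem: logistic growth x' = a x (1 - x/K)
        rng = np.random.default_rng(1)
        t = np.linspace(0, 10, 80)
        x_true = 10 / (1 + 9 * np.exp(-0.8 * t))      # a = 0.8, K = 10
        y = x_true + rng.normal(scale=0.1, size=t.size)
        f = lambda x, th: th[0] * x * (1 - x / th[1])
        print(two_stage_ode_fit(t, y, f, theta0=[0.5, 5.0]))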

  16. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method that uses constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models, with the goal of improving on the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters of the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy, at a small cost in computation. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093

  17. Constraining dark sector perturbations I: cosmic shear and CMB lensing

    NASA Astrophysics Data System (ADS)

    Battye, Richard A.; Moss, Adam; Pearson, Jonathan A.

    2015-04-01

    We present current and future constraints on equations of state for dark sector perturbations. The equations of state considered are those corresponding to a generalized scalar field model and time-diffeomorphism-invariant ℒ(g) theories that are equivalent to models of a relativistic elastic medium and also Lorentz-violating massive gravity. We develop a theoretical understanding of the observable impact of these models. In order to constrain these models we use CMB temperature data from Planck, BAO measurements, CMB lensing data from Planck and the South Pole Telescope, and weak galaxy lensing data from CFHTLenS. We find non-trivial exclusions on the range of parameters, although the data remain compatible with w = -1. We gauge how future experiments will help to constrain the parameters. This is done via a likelihood analysis for CMB experiments such as CoRE and PRISM, and tomographic galaxy weak lensing surveys, focusing on the potential discriminatory power of Euclid on mildly non-linear scales.

  18. Quantifying the Uncertainties and Multi-parameter Trade-offs in Joint Inversion of Receiver Functions and Surface Wave Velocity and Ellipticity

    NASA Astrophysics Data System (ADS)

    Gao, C.; Lekic, V.

    2016-12-01

    When constraining the structure of the Earth's continental lithosphere, multiple seismic observables are often combined due to their complementary sensitivities. The transdimensional Bayesian (TB) approach in seismic inversion allows model parameter uncertainties and trade-offs to be quantified with few assumptions. TB sampling yields an adaptive parameterization that enables simultaneous inversion for different model parameters (Vp, Vs, density, radial anisotropy), without the need for strong prior information or regularization. We use a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate different seismic observables - surface wave dispersion (SWD), Rayleigh wave ellipticity (ZH ratio), and receiver functions - into the inversion for the profiles of shear velocity (Vs), compressional velocity (Vp), density (ρ), and radial anisotropy (ξ) beneath a seismic station. By analyzing all three data types individually and together, we show that TB sampling can eliminate the need for a fixed parameterization based on prior information, and reduce trade-offs in model estimates. We then explore the effect of different types of misfit functions for receiver function inversion, which is a highly non-unique problem. We compare the synthetic inversion results using the L2-norm, cross-correlation-type, and integral-type misfit functions in terms of their convergence rates and retrieved seismic structures. In inversions in which only one type of model parameter is inverted (Vs in the case of SWD), assumed scaling relationships are often applied to account for sensitivity to other model parameters (e.g. Vp, ρ, ξ). Here we show that under a TB framework we can eliminate scaling assumptions, while simultaneously constraining multiple model parameters to varying degrees. Furthermore, we compare the performance of TB inversion when different types of model parameters either share the same parameterization or use independent parameterizations. We show that different parameterizations can lead to differences in retrieved model parameters, consistent with limited data constraints. We then quantitatively examine the model parameter trade-offs and find that trade-offs between Vp and radial anisotropy might limit our ability to constrain shallow-layer radial anisotropy using current seismic observables.

  19. Constrained Maximum Likelihood Estimation for Two-Level Mean and Covariance Structure Models

    ERIC Educational Resources Information Center

    Bentler, Peter M.; Liang, Jiajuan; Tang, Man-Lai; Yuan, Ke-Hai

    2011-01-01

    Maximum likelihood is commonly used for the estimation of model parameters in the analysis of two-level structural equation models. Constraints on model parameters could be encountered in some situations such as equal factor loadings for different factors. Linear constraints are the most common ones and they are relatively easy to handle in…

  20. Probing dark energy in the scope of a Bianchi type I spacetime

    NASA Astrophysics Data System (ADS)

    Amirhashchi, Hassan

    2018-03-01

    It is well known that the flat Friedmann-Robertson-Walker metric is a special case of Bianchi type I spacetime. In this paper, we use 38 Hubble parameter H(z) measurements at intermediate redshifts 0.07 ≤ z ≤ 2.36 and their joint combination with the latest “joint light curves” (JLA) sample, comprising 740 type Ia supernovae in the redshift range z ∈ [0.01, 1.30], to constrain the parameters of the Bianchi type I dark energy model. We also use the same datasets to constrain a flat ΛCDM model. In both cases, we specifically address the determination of the expansion rate H0 as well as the transition redshift zt from these measurements. In both models, we found that using the joint combination of datasets gives rise to lower values for the model parameters. Finally, to compare the considered cosmologies, we have performed Akaike information criterion and Bayes factor (Ψ) tests.
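
    A minimal sketch of the kind of chi-square fit involved, here for the flat ΛCDM comparison model and with placeholder data (the actual analysis uses the 38-point H(z) compilation and the JLA supernova likelihood):

        import numpy as np
        from scipy.optimize import minimize

        def h_lcdm(z, H0, Om):
            """Expansion rate H(z) for flat LambdaCDM."""
            return H0 * np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

        def chi2(params, z, H_obs, sigma):
            H0, Om = params
            return np.sum(((H_obs - h_lcdm(z, H0, Om)) / sigma) ** 2)

        # placeholder H(z) data in km/s/Mpc, not the actual compilation
        z = np.array([0.1, 0.4, 0.9, 1.5, 2.3])
        H_obs = np.array([69.0, 83.0, 110.0, 150.0, 224.0])
        sigma = np.array([5.0, 8.0, 12.0, 15.0, 8.0])

        fit = minimize(chi2, x0=[70.0, 0.3], args=(z, H_obs, sigma),
                       bounds=[(50, 90), (0.05, 0.6)])
        print(fit.x)   # best-fit (H0, Omega_m)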

  1. Constraining f(T) teleparallel gravity by big bang nucleosynthesis: f(T) cosmology and BBN.

    PubMed

    Capozziello, S; Lambiase, G; Saridakis, E N

    2017-01-01

    We use Big Bang Nucleosynthesis (BBN) observational data on the primordial abundance of light elements to constrain f(T) gravity. The three most studied viable f(T) models, namely the power law, the exponential and the square-root exponential, are considered, and the BBN bounds are adopted in order to extract constraints on their free parameters. For the power-law model, we find that the constraints are in agreement with those obtained using late-time cosmological data. For the exponential and the square-root exponential models, we show that for reliable regions of parameter space they always satisfy the BBN bounds. We conclude that viable f(T) models can successfully satisfy the BBN constraints.

  2. Constraining new physics models with isotope shift spectroscopy

    NASA Astrophysics Data System (ADS)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation such as models with B−L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.

  3. Using groundwater temperature data to constrain parameter estimation in a groundwater flow model of a wetland system

    USGS Publications Warehouse

    Bravo, Hector R.; Jiang, Feng; Hunt, Randall J.

    2002-01-01

    Parameter estimation is a powerful way to calibrate models. While head data alone are often insufficient to estimate unique parameters due to model nonuniqueness, flow‐and‐heat‐transport modeling can constrain estimation and allow simultaneous estimation of boundary fluxes and hydraulic conductivity. In this work, synthetic and field models that did not converge when head data were used did converge when head and temperature were used. Furthermore, frequency domain analyses of head and temperature data allowed selection of appropriate modeling timescales. Inflows in the Wilton, Wisconsin, wetlands could be estimated over periods such as a growing season and over periods of a few days when heads were nearly steady and groundwater temperature varied during the day. While this methodology is computationally more demanding than traditional head calibration, the results gained are unobtainable using the traditional approach. These results suggest that temperature can efficiently supplement head data in systems where accurate flux calibration targets are unavailable.
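
    The essence of the approach is a composite objective in which temperature residuals supplement head residuals. A hedged sketch follows; the model interface and the weights are illustrative, and in practice the weighting would reflect the measurement error of each data type:

        import numpy as np

        def joint_objective(params, model, heads_obs, temps_obs,
                            w_head=1.0, w_temp=1.0):
            """Weighted least-squares objective over two data types.
            `model(params)` is assumed to return (heads_sim, temps_sim)."""
            heads_sim, temps_sim = model(params)
            return (w_head * np.sum((heads_obs - heads_sim) ** 2)
                    + w_temp * np.sum((temps_obs - temps_sim) ** 2))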

  4. Estimating contrast transfer function and associated parameters by constrained non-linear optimization.

    PubMed

    Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W

    2009-03-01

    The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
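
    A sketch of a constrained non-linear fit to a 1D rotationally averaged power spectrum, using a simplified CTF-times-Gaussian-envelope model (the paper's model and parameter set are richer; the values and bounds here are illustrative):

        import numpy as np
        from scipy.optimize import least_squares

        def ctf_power(s, defocus, B, amp, wavelength=0.0251, Cs_mm=2.0):
            """Squared CTF times a Gaussian envelope, 1D rotationally
            averaged. Units: s in 1/A, defocus in A, wavelength in A
            (0.0251 A ~ 200 kV), Cs in mm. A simplified stand-in for the
            paper's full CTF-plus-envelope model."""
            Cs = Cs_mm * 1e7                      # mm -> Angstrom
            gamma = (np.pi * wavelength * defocus * s**2
                     - 0.5 * np.pi * Cs * wavelength**3 * s**4)
            ctf = np.sqrt(1 - amp**2) * np.sin(gamma) + amp * np.cos(gamma)
            return ctf**2 * np.exp(-B * s**2)

        def fit_ctf(s, spectrum, x0=(15000.0, 50.0, 0.07)):
            """Simultaneously estimate defocus, envelope decay and amplitude
            contrast, with box constraints keeping the fit physical."""
            resid = lambda p: ctf_power(s, *p) - spectrum
            bounds = ([1000.0, 0.0, 0.0], [50000.0, 500.0, 0.3])
            return least_squares(resid, x0, bounds=bounds).x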

  5. Hydrologic and hydraulic flood forecasting constrained by remote sensing data

    NASA Astrophysics Data System (ADS)

    Li, Y.; Grimaldi, S.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2017-12-01

    Flooding is one of the most destructive natural disasters, resulting in many deaths and billions of dollars of damages each year. An indispensable tool to mitigate the effect of floods is to provide accurate and timely forecasts. An operational flood forecasting system typically consists of a hydrologic model, converting rainfall data into flood volumes entering the river system, and a hydraulic model, converting these flood volumes into water levels and flood extents. Such a system is prone to various sources of uncertainties from the initial conditions, meteorological forcing, topographic data, model parameters and model structure. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using ground-based streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed remote sensing (RS) data offers new opportunities to improve flood forecasting skill. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture to constrain a hydrologic model, and 2) RS flood extent and level to constrain a hydraulic model. The GRKAL hydrological model is calibrated through a joint calibration scheme using both ground-based streamflow and RS soil moisture observations. A lag-aware data assimilation approach is tested through a set of synthetic experiments to integrate RS soil moisture to constrain the streamflow forecasting in real time. The hydraulic model is LISFLOOD-FP, which solves the 2-dimensional inertial approximation of the Shallow Water Equations. Gauged water level time series and RS-derived flood extent and levels are used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space will be discussed.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petiteau, Antoine; Babak, Stanislav; Sesana, Alberto

    Gravitational wave (GW) signals from coalescing massive black hole (MBH) binaries could be used as standard sirens to measure cosmological parameters. The future space-based GW observatory Laser Interferometer Space Antenna (LISA) will detect up to a hundred of those events, providing very accurate measurements of their luminosity distances. To constrain the cosmological parameters, we also need to measure the redshift of the galaxy (or cluster of galaxies) hosting the merger. This requires the identification of a distinctive electromagnetic event associated with the binary coalescence. However, putative electromagnetic signatures may be too weak to be observed. Instead, we study here the possibility of constraining the cosmological parameters by enforcing statistical consistency between all the possible hosts detected within the measurement error box of a few dozen of low-redshift (z < 3) events. We construct MBH populations using merger tree realizations of the dark matter hierarchy in a ΛCDM universe, and we use data from the Millennium simulation to model the galaxy distribution in the LISA error box. We show that, assuming that all the other cosmological parameters are known, the parameter w describing the dark energy equation of state can be constrained to a 4%-8% level (2σ error), competitive with current uncertainties obtained by type Ia supernovae measurements, providing an independent test of our cosmological model.

  7. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    NASA Astrophysics Data System (ADS)

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.

    2011-04-01

    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between-group differences in specific binding.
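
    A sketch of SRTM fitting with the reference-tissue clearance (written kr2 in the abstract, often k2' in the literature) coupled across regions, so that all regional time-activity curves share a single clearance value. The time grid is assumed uniform, and the starting values and bounds are illustrative:

        import numpy as np
        from scipy.optimize import least_squares

        def srtm_curve(t, cref, R1, k2p, bp):
            """SRTM tissue curve with the reference clearance k2' explicit:
            k2 = R1 * k2'; k2a = k2 / (1 + BPnd). Uniform time grid assumed
            so the convolution can be evaluated discretely."""
            dt = t[1] - t[0]
            k2 = R1 * k2p
            k2a = k2 / (1.0 + bp)
            conv = np.convolve(cref, np.exp(-k2a * t))[: t.size] * dt
            return R1 * cref + (k2 - R1 * k2a) * conv

        def fit_coupled(t, cref, tissue_curves):
            """Fit several regions at once with one shared k2' (parameter
            coupling); parameter vector = [k2', R1_1, BP_1, R1_2, BP_2, ...]."""
            n = len(tissue_curves)
            x0 = [0.3] + [1.0, 1.0] * n
            def resid(p):
                k2p = p[0]
                parts = [srtm_curve(t, cref, p[1 + 2*i], k2p, p[2 + 2*i])
                         - tissue_curves[i] for i in range(n)]
                return np.concatenate(parts)
            lo = [0.01] + [0.1, 0.0] * n
            hi = [2.0] + [3.0, 10.0] * n
            return least_squares(resid, x0, bounds=(lo, hi)).x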

  8. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    NASA Astrophysics Data System (ADS)

    Quinn Thomas, R.; Brooks, Evan B.; Jersild, Annika L.; Ward, Eric J.; Wynne, Randolph H.; Albaugh, Timothy J.; Dinon-Aldridge, Heather; Burkhart, Harold E.; Domec, Jean-Christophe; Fox, Thomas R.; Gonzalez-Benecke, Carlos A.; Martin, Timothy A.; Noormets, Asko; Sampson, David A.; Teskey, Robert O.

    2017-07-01

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.

  9. Modeling slow-slip segmentation in Cascadia subduction zone constrained by tremor locations and gravity anomalies

    NASA Astrophysics Data System (ADS)

    Li, Duo; Liu, Yajing

    2017-04-01

    Along-strike segmentation of slow-slip events (SSEs) and nonvolcanic tremors in Cascadia may reflect heterogeneities of the subducting slab or overlying continental lithosphere. However, the nature behind this segmentation is not fully understood. We develop a 3-D model for episodic SSEs in northern and central Cascadia, incorporating both seismological and gravitational observations to constrain the heterogeneities in the megathrust fault properties. Six years of automatically detected tremors are used to constrain the rate-state friction parameters. The effective normal stress at SSE depths is constrained by along-margin free-air and Bouguer gravity anomalies. The along-strike variation in the long-term plate convergence rate is also taken into consideration. Simulation results show five segments of ~Mw 6.0 SSEs spontaneously appearing along strike, correlated with the distribution of tremor epicenters. Modeled SSE recurrence intervals are comparable to GPS observations under both types of gravity anomaly constraints. However, the model constrained by the free-air anomaly does a better job of reproducing the cumulative slip, as well as surface displacements more consistent with GPS observations. The modeled along-strike segmentation represents the averaged slip release over many SSE cycles, rather than permanent barriers. Individual slow-slip events can still propagate across the boundaries, which may cause interactions between adjacent SSEs, as observed in time-dependent GPS inversions. In addition, the moment-duration scaling is sensitive to the selection of the velocity criterion for determining when SSEs occur. Hence, the detection ability of the current GPS network should be considered in the interpretation of slow earthquake source parameter scaling relations.

  10. Uncertainty assessment and implications for data acquisition in support of integrated hydrologic models

    NASA Astrophysics Data System (ADS)

    Brunner, Philip; Doherty, J.; Simmons, Craig T.

    2012-07-01

    The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is to be expected, therefore, that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are: (1) Linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types. Its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels, vadose ET, or soil moisture poorly constrain regional groundwater system forcing functions.
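
    The linear analysis the authors refer to can be sketched as first-order (FOSM/Bayes) propagation: a posterior parameter covariance from the Jacobian, then a predictive variance through the prediction's sensitivities. A minimal version with diagonal prior and noise covariances, all names illustrative:

        import numpy as np

        def predictive_variance(J, obs_var, prior_var, y):
            """First-order (FOSM/Bayes) predictive variance:
            C_post = (J^T R^-1 J + C_p^-1)^-1 and var = y^T C_post y,
            with diagonal observation (R) and prior (C_p) covariances."""
            Rinv = np.diag(1.0 / np.asarray(obs_var, float))
            Cpinv = np.diag(1.0 / np.asarray(prior_var, float))
            C_post = np.linalg.inv(J.T @ Rinv @ J + Cpinv)
            return y @ C_post @ y

        def data_worth(J, obs_var, prior_var, y, keep):
            """Worth of the observation subset `keep` (boolean mask over the
            rows of J): reduction in predictive variance relative to the
            prior-only case."""
            prior_only = y @ np.diag(np.asarray(prior_var, float)) @ y
            constrained = predictive_variance(
                J[keep], np.asarray(obs_var, float)[keep], prior_var, y)
            return prior_only - constrained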

  11. Gaining insight into the T2*-T2 relationship in surface NMR free-induction decay measurements

    NASA Astrophysics Data System (ADS)

    Grombacher, Denys; Auken, Esben

    2018-05-01

    One of the primary shortcomings of the surface nuclear magnetic resonance (NMR) free-induction decay (FID) measurement is the uncertainty surrounding which mechanism controls the signal's time dependence. Ideally, the FID-estimated relaxation time T2* that describes the signal's decay carries an intimate link to the geometry of the pore space. In this limit the parameter T2* is closely linked to a related parameter T2, which is more closely linked to pore geometry. If T2* ≈ T2 the FID can provide valuable insight into relative pore size and can be used to make quantitative permeability estimates. However, given only FID measurements it is difficult to determine whether T2* is linked to pore geometry or whether it has been strongly influenced by background magnetic field inhomogeneity. If the link between an observed T2* and the underlying T2 could be further constrained, the utility of the standard surface NMR FID measurement would be greatly improved. We hypothesize that an approach employing an updated surface NMR forward model that solves the full Bloch equations with appropriately weighted relaxation terms can be used to help constrain the T2*-T2 relationship. Weighting the relaxation terms requires estimating the poorly constrained parameters T2 and T1; to deal with this uncertainty we propose to conduct a parameter search involving multiple inversions that employ a suite of forward models, each describing a distinct but plausible T2*-T2 relationship. We hypothesize that forward models given poor T2 estimates will produce poor data fits when using the complex inversion, while forward models given reliable T2 estimates will produce satisfactory data fits. By examining the data fits produced by the suite of plausible forward models, the likely T2*-T2 relationship can be constrained by identifying the range of T2 estimates that produce reliable data fits. Synthetic and field results are presented to investigate the feasibility of the proposed technique.
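
    A toy version of the proposed parameter search: invert the FID once per assumed T2 and compare data misfits, keeping the T2 values whose forward model can fit the data. The mono-exponential model below is a crude stand-in for the full Bloch-equation forward model, and the units and constants are only indicative:

        import numpy as np
        from scipy.optimize import least_squares

        GAMMA = 2 * np.pi * 42.58   # proton gyromagnetic ratio, rad s^-1 uT^-1

        def fid_forward(t, amplitude, t2, dB):
            """Toy FID: mono-exponential decay at an assumed T2, modulated by
            dephasing from a background field offset dB."""
            return amplitude * np.exp(-t / t2) * np.cos(GAMMA * dB * t)

        def t2_parameter_search(t, data, t2_candidates):
            """One inversion per assumed T2; candidates whose forward model
            cannot fit the data are ruled out by their larger misfit."""
            misfits = []
            for t2 in t2_candidates:
                sol = least_squares(
                    lambda p: fid_forward(t, p[0], t2, p[1]) - data,
                    x0=[1.0, 0.0])
                misfits.append(np.sqrt(np.mean(sol.fun ** 2)))
            return np.array(misfits)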

  12. Cosmological parameters, shear maps and power spectra from CFHTLenS using Bayesian hierarchical inference

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Heavens, Alan; Jaffe, Andrew H.

    2017-04-01

    We apply two Bayesian hierarchical inference schemes to infer shear power spectra, shear maps and cosmological parameters from the Canada-France-Hawaii Telescope (CFHTLenS) weak lensing survey - the first application of this method to data. In the first approach, we sample the joint posterior distribution of the shear maps and power spectra by Gibbs sampling, with minimal model assumptions. In the second approach, we sample the joint posterior of the shear maps and cosmological parameters, providing a new, accurate and principled approach to cosmological parameter inference from cosmic shear data. As a first demonstration on data, we perform a two-bin tomographic analysis to constrain cosmological parameters and investigate the possibility of photometric redshift bias in the CFHTLenS data. Under the baseline ΛCDM (Λ cold dark matter) model, we constrain S8 = σ8(Ωm/0.3)^0.5 = 0.67 ± 0.03 (68 per cent), consistent with previous CFHTLenS analyses but in tension with Planck. Adding neutrino mass as a free parameter, we are able to constrain ∑mν < 4.6 eV (95 per cent) using CFHTLenS data alone. Including a linear redshift-dependent photo-z bias Δz = p2(z - p1), we find p1 = -0.25 (+0.53/-0.60) and p2 = -0.15 (+0.17/-0.15), and tension with Planck is only alleviated under very conservative prior assumptions. Neither the non-minimal neutrino mass nor the photo-z bias model is significantly preferred by the CFHTLenS (two-bin tomography) data.

  13. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.

  14. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    DOE PAGES

    Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.; ...

    2017-07-26

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.

  15. Constraining Secluded Dark Matter models with the public data from the 79-string IceCube search for dark matter in the Sun

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ardid, M.; Felis, I.; Martínez-Mora, J.A.

    The 79-string IceCube search for dark matter in the Sun public data is used to test Secluded Dark Matter models. No significant excess over background is observed and constraints on the parameters of the models are derived. Moreover, the search is also used to constrain the dark photon model in the region of the parameter space with dark photon masses between 0.22 and ∼1 GeV and a kinetic mixing parameter ε ∼ 10^-9, which remains otherwise unconstrained. These are the first constraints on dark photons from neutrino telescopes. It is expected that neutrino telescopes will be efficient tools to test dark photons by means of different searches in the Sun, Earth and Galactic Center, which could complement constraints from direct detection, accelerators, astrophysics and indirect detection with other messengers, such as gamma rays or antiparticles.

  16. GROWTH AND INEQUALITY: MODEL EVALUATION BASED ON AN ESTIMATION-CALIBRATION STRATEGY

    PubMed Central

    Jeong, Hyeok; Townsend, Robert

    2010-01-01

    This paper evaluates two well-known models of growth with inequality that have explicit micro underpinnings related to household choice. With incomplete markets or transactions costs, wealth can constrain investment in business and the choice of occupation and also constrain the timing of entry into the formal financial sector. Using the Thai Socio-Economic Survey (SES), we estimate the distribution of wealth and the key parameters that best fit cross-sectional data on household choices and wealth. We then simulate the model economies for two decades at the estimated initial wealth distribution and analyze whether the model economies at those micro-fit parameter estimates can explain the observed macro and sectoral aspects of income growth and inequality change. Both models capture important features of Thai reality. Anomalies and comparisons across the two distinct models yield specific suggestions for improved research on the micro foundations of growth and inequality. PMID:20448833

  17. Constraining the dark energy models with H (z ) data: An approach independent of H0

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Fotios K.; Basilakos, Spyros

    2018-03-01

    We study the performance of the latest H(z) data in constraining the cosmological parameters of different cosmological models, including the Chevallier-Polarski-Linder w0-w1 parametrization. First, we introduce a statistical procedure in which the chi-square estimator is not affected by the value of the Hubble constant. As a result, we find that the H(z) data do not rule out the possibility of either non-flat models or dynamical dark energy cosmological models. However, we verify that the time-varying equation-of-state parameter w(z) is not constrained by the current expansion data. Combining the H(z) and the Type Ia supernova data, we find that the joint H(z)/SNIa statistical analysis provides a substantial improvement of the cosmological constraints with respect to those of the H(z) analysis alone. Moreover, the w0-w1 parameter space provided by the H(z)/SNIa joint analysis is in very good agreement with that of Planck 2015, which confirms that the present analysis with the H(z) and Type Ia supernova (SNIa) probes correctly reveals the expansion of the Universe as found by the Planck team. Finally, we generate sets of Monte Carlo realizations in order to quantify the ability of the H(z) data to place strong constraints on the dark energy model parameters. The Monte Carlo approach shows significant improvement of the constraints when increasing the sample to 100 H(z) measurements. Such a goal can be achieved in the future, especially in the light of the next generation of surveys.
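
    An H0-independent chi-square can be sketched by writing H(z) = H0 E(z; θ): chi-square is then quadratic in H0, so minimizing over H0 analytically leaves a statistic that depends only on the dimensionless expansion rate. This profiling trick is standard; the paper's exact procedure may differ in detail. All data values below are placeholders:

        import numpy as np

        def chi2_no_H0(E, H_obs, sigma):
            """chi2(H0) = A - 2 H0 B + H0^2 C has its minimum at H0 = B/C,
            leaving A - B^2/C, which involves only E(z; theta)."""
            A = np.sum(H_obs**2 / sigma**2)
            B = np.sum(H_obs * E / sigma**2)
            C = np.sum(E**2 / sigma**2)
            return A - B**2 / C

        # flat LCDM example: E(z) = sqrt(Om (1+z)^3 + 1 - Om)
        z = np.array([0.1, 0.5, 1.0, 2.0])
        H_obs = np.array([69.0, 88.0, 120.0, 200.0])
        sigma = np.array([5.0, 8.0, 15.0, 10.0])
        for Om in (0.25, 0.30, 0.35):
            E = np.sqrt(Om * (1 + z)**3 + 1 - Om)
            print(Om, chi2_no_H0(E, H_obs, sigma))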

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jennings, Elise; Wechsler, Risa H.

    We present the nonlinear 2D galaxy power spectrum, P(k, µ), in redshift space, measured from the Dark Sky simulations, using galaxy catalogs constructed with both halo occupation distribution and subhalo abundance matching methods, chosen to represent an intermediate-redshift sample of luminous red galaxies. We find that the information content in individual µ (cosine of the angle to the line of sight) bins is substantially richer than in multipole moments, and show that this can be used to isolate the impact of nonlinear growth and redshift space distortion (RSD) effects. Using the µ < 0.2 simulation data, which we show is not impacted by RSD effects, we can successfully measure the nonlinear bias to an accuracy of ~5% at k < 0.6 h Mpc^-1. This use of individual µ bins to extract the nonlinear bias successfully removes a large parameter degeneracy when constraining the linear growth rate of structure. We carry out a joint parameter estimation, using the low-µ simulation data to constrain the nonlinear bias and µ > 0.2 to constrain the growth rate, and show that f can be constrained to ~26 (22)% at k_max < 0.4 (0.6) h Mpc^-1 from clustering alone using a simple dispersion model, for a range of galaxy models. Our analysis of individual µ bins also reveals interesting physical effects which arise simply from different methods of populating halos with galaxies. We also find a prominent turnaround scale, at which RSD damping effects are greater than the nonlinear growth, which differs not only for each µ bin but also for each galaxy model. These features may provide unique signatures which could be used to shed light on the galaxy-dark matter connection. Furthermore, the idea of separating nonlinear growth and RSD effects making use of the full information in the 2D galaxy power spectrum yields significant improvements in constraining cosmological parameters and may be a promising probe of galaxy formation models.

  19. Disentangling Redshift-Space Distortions and Nonlinear Bias using the 2D Power Spectrum

    DOE PAGES

    Jennings, Elise; Wechsler, Risa H.

    2015-08-07

    We present the nonlinear 2D galaxy power spectrum, P(k, µ), in redshift space, measured from the Dark Sky simulations, using galaxy catalogs constructed with both halo occupation distribution and subhalo abundance matching methods, chosen to represent an intermediate-redshift sample of luminous red galaxies. We find that the information content in individual µ (cosine of the angle to the line of sight) bins is substantially richer than in multipole moments, and show that this can be used to isolate the impact of nonlinear growth and redshift space distortion (RSD) effects. Using the µ < 0.2 simulation data, which we show is not impacted by RSD effects, we can successfully measure the nonlinear bias to an accuracy of ~5% at k < 0.6 h Mpc^-1. This use of individual µ bins to extract the nonlinear bias successfully removes a large parameter degeneracy when constraining the linear growth rate of structure. We carry out a joint parameter estimation, using the low-µ simulation data to constrain the nonlinear bias and µ > 0.2 to constrain the growth rate, and show that f can be constrained to ~26 (22)% at k_max < 0.4 (0.6) h Mpc^-1 from clustering alone using a simple dispersion model, for a range of galaxy models. Our analysis of individual µ bins also reveals interesting physical effects which arise simply from different methods of populating halos with galaxies. We also find a prominent turnaround scale, at which RSD damping effects are greater than the nonlinear growth, which differs not only for each µ bin but also for each galaxy model. These features may provide unique signatures which could be used to shed light on the galaxy-dark matter connection. Furthermore, the idea of separating nonlinear growth and RSD effects making use of the full information in the 2D galaxy power spectrum yields significant improvements in constraining cosmological parameters and may be a promising probe of galaxy formation models.

  20. Implementation of remote sensing data for flood forecasting

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Li, Y.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2016-12-01

    Flooding is one of the most frequent and destructive natural disasters. A timely, accurate and reliable flood forecast can provide vital information for flood preparedness, warning delivery, and emergency response. An operational flood forecasting system typically consists of a hydrologic model, which simulates runoff generation and concentration, and a hydraulic model, which models riverine flood wave routing and floodplain inundation. However, these two types of models suffer from various sources of uncertainty, e.g., forcing data, initial conditions, model structure and parameters. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed Remote Sensing (RS) data offers new opportunities for flood event investigation and forecasting. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture data to constrain a hydrologic model, and 2) RS-derived flood extent and levels to constrain a hydraulic model. The hydrological model is based on a semi-distributed system coupling the two-soil-layer rainfall-runoff model GRKAL with a linear Muskingum routing model. Model calibration was performed using either 1) streamflow data only or 2) both streamflow and RS soil moisture data. The model was then further constrained through the integration of real-time soil moisture data. The hydraulic model is based on LISFLOOD-FP, which solves the 2D inertial approximation of the Shallow Water Equations. Streamflow data and RS-derived flood extent and levels were used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space was quantified and discussed.

  1. A Biologically Constrained, Mathematical Model of Cortical Wave Propagation Preceding Seizure Termination

    PubMed Central

    González-Ramírez, Laura R.; Ahmed, Omar J.; Cash, Sydney S.; Wayne, C. Eugene; Kramer, Mark A.

    2015-01-01

    Epilepsy—the condition of recurrent, unprovoked seizures—manifests in brain voltage activity with characteristic spatiotemporal patterns. These patterns include stereotyped semi-rhythmic activity produced by aggregate neuronal populations, and organized spatiotemporal phenomena, including waves. To assess these spatiotemporal patterns, we develop a mathematical model consistent with the observed neuronal population activity and determine analytically the parameter configurations that support traveling wave solutions. We then utilize high-density local field potential data recorded in vivo from human cortex preceding seizure termination from three patients to constrain the model parameters, and propose basic mechanisms that contribute to the observed traveling waves. We conclude that a relatively simple and abstract mathematical model consisting of localized interactions between excitatory cells with slow adaptation captures the quantitative features of wave propagation observed in the human local field potential preceding seizure termination. PMID:25689136

  2. Model selection as a science driver for dark energy surveys

    NASA Astrophysics Data System (ADS)

    Mukherjee, Pia; Parkinson, David; Corasaniti, Pier Stefano; Liddle, Andrew R.; Kunz, Martin

    2006-07-01

    A key science goal of upcoming dark energy surveys is to seek time-evolution of the dark energy. This problem is one of model selection, where the aim is to differentiate between cosmological models with different numbers of parameters. However, the power of these surveys is traditionally assessed by estimating their ability to constrain parameters, which is a different statistical problem. In this paper, we use Bayesian model selection techniques, specifically forecasting of the Bayes factors, to compare the abilities of different proposed surveys in discovering dark energy evolution. We consider six experiments - supernova luminosity measurements by the Supernova Legacy Survey, SNAP, JEDI and ALPACA, and baryon acoustic oscillation measurements by WFMOS and JEDI - and use Bayes factor plots to compare their statistical constraining power. The concept of Bayes factor forecasting has much broader applicability than dark energy surveys.
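
    Bayes factor forecasting needs model evidences. A common computational shortcut, not necessarily the one used by the authors, is the Laplace approximation around the posterior peak; a hedged sketch:

        import numpy as np

        def log_evidence_laplace(loglike_max, logprior_max, neg_hessian):
            """Laplace approximation to the log-evidence:
            log Z ~ log L* + log pi* + (d/2) log(2 pi) - 0.5 log|H|,
            with H the negative Hessian of the log-posterior at its peak."""
            d = neg_hessian.shape[0]
            _, logdet = np.linalg.slogdet(neg_hessian)
            return (loglike_max + logprior_max
                    + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet)

        # the Bayes factor between two models is then exp(logZ_1 - logZ_2)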

  3. Global optimization framework for solar building design

    NASA Astrophysics Data System (ADS)

    Silva, N.; Alves, N.; Pascoal-Faria, P.

    2017-07-01

    The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators. The result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are automatically generated according to an objective function. A generative model can be optimized according to its parameters; in this way, the best solution for a constrained problem is determined. Besides the establishment of an overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description for these main shapes, and the formulation of the objective function with respect to a building's energy consumption (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline, combining an energy calculation tool with a geometric scripting engine, is presented. The methods developed lead to an automated and optimized 3D shape generation for the projected building (based on the desired conditions and according to specific constraints). The approach proposed will help in the construction of real buildings that consume less energy and contribute to a more sustainable world.

  4. Dynamical insurance models with investment: Constrained singular problems for integrodifferential equations

    NASA Astrophysics Data System (ADS)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2016-01-01

    Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or its constant fraction is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, the passages to the limit with respect to the parameters, through which we proceed from the original problems to the degenerate ones, are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.

  5. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The particle swarm optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to solving the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can be easily constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic data as well as real Audio-Magnetotelluric (AMT) and Long Period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
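
    A generic bound-constrained PSO skeleton of the kind applied here; the geophysical objective (e.g. an AMT data misfit plus Occam-style regularization terms over log-resistivities) would be passed in as `objective`, and all hyper-parameters below are illustrative:

        import numpy as np

        def pso(objective, bounds, n_particles=40, n_iter=200,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal particle swarm optimizer for a bound-constrained
            objective; bounds = (lo, hi) arrays defining the search box."""
            rng = np.random.default_rng(seed)
            lo, hi = map(np.asarray, bounds)
            dim = lo.size
            x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
            v = np.zeros_like(x)                               # velocities
            pbest = x.copy()
            pbest_f = np.array([objective(p) for p in x])
            g = pbest[np.argmin(pbest_f)].copy()               # global best
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)                     # enforce bounds
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                g = pbest[np.argmin(pbest_f)].copy()
            return g, pbest_f.min()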

  6. A Computational Model of Active Vision for Visual Search in Human-Computer Interaction

    DTIC Science & Technology

    2010-08-01

    processors that interact with the production rules to produce behavior, and (c) parameters that constrain the behavior of the model (e.g., the...velocity of a saccadic eye movement). While the parameters can be task-specific, the majority of the parameters are usually fixed across a wide variety...previously estimated durations. Hooge and Erkelens (1996) review these four explanations of fixation duration control. A variety of research

  7. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of, on average, 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522

  8. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
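
    The following minimal Metropolis sampler illustrates the general idea of constraining a parametrized process-rate law against observations within a Bayesian framework; it is a generic sketch, not the BOSS code, and the power-law rate, priors, and noise level are invented for the example:

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical "observations": a process rate R = a * q**b with
        # a = 1.2, b = 1.5, observed with Gaussian noise.
        q = np.linspace(0.1, 2.0, 30)
        obs = 1.2 * q**1.5 + rng.normal(0, 0.05, q.size)

        def log_post(theta):
            a, b = theta
            if a <= 0 or not (0 < b < 3):              # flat priors with bounds
                return -np.inf
            resid = obs - a * q**b
            return -0.5 * np.sum(resid**2) / 0.05**2   # Gaussian likelihood

        theta = np.array([1.0, 1.0]); lp = log_post(theta)
        chain = []
        for i in range(20000):                          # Metropolis sampler
            prop = theta + rng.normal(0, 0.02, 2)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:     # accept/reject step
                theta, lp = prop, lp_prop
            chain.append(theta)
        chain = np.array(chain[5000:])                  # discard burn-in
        print("posterior mean (a, b):", chain.mean(axis=0))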

  9. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

    This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated using the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically stable at the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters in non-Gaussian noise. Furthermore, owing to its low-dimensional model structure, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed recurrent neural network-based speech enhancement algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
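
    For reference, here is a minimal sketch of the Kalman-filtering half of such a scheme: a clean signal modeled as an AR(2) process is recovered from noisy observations. The AR coefficients are assumed known here, whereas in the paper they are estimated by the proposed recurrent neural network; all numbers are invented:

        import numpy as np

        rng = np.random.default_rng(3)

        a = np.array([1.6, -0.8])                      # assumed AR(2) coefficients
        q, r = 0.1, 0.5                                # process / observation noise
        T = 300
        s = np.zeros(T)
        for t in range(2, T):                          # simulate the clean signal
            s[t] = a[0]*s[t-1] + a[1]*s[t-2] + rng.normal(0, np.sqrt(q))
        y = s + rng.normal(0, np.sqrt(r), T)           # noisy observation

        F = np.array([[a[0], a[1]], [1.0, 0.0]])       # companion-form transition
        H = np.array([[1.0, 0.0]])                     # observe current sample only
        Q = np.array([[q, 0.0], [0.0, 0.0]])

        x = np.zeros(2); P = np.eye(2); est = []
        for t in range(T):                             # standard KF recursion
            x = F @ x; P = F @ P @ F.T + Q             # predict
            K = P @ H.T / (H @ P @ H.T + r)            # Kalman gain
            x = x + (K * (y[t] - H @ x)).ravel()       # update
            P = (np.eye(2) - K @ H) @ P
            est.append(x[0])
        print("noisy MSE   :", np.mean((y - s)**2))
        print("filtered MSE:", np.mean((np.array(est) - s)**2))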

  10. Tachyon warm-intermediate inflationary universe model in high dissipative regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Setare, M.R.; Kamali, V., E-mail: rezakord@ipm.ir, E-mail: vkamali1362@gmail.com

    2012-08-01

    We consider tachyonic warm-inflationary models in the context of intermediate inflation. We derive the characteristics of this model in the slow-roll approximation and develop our model in two cases: (1) a constant dissipative parameter Γ, and (2) Γ as a function of the tachyon field φ. We also describe scalar and tensor perturbations for this scenario. The parameters appearing in our model are constrained by recent observational data. We find that the level of non-Gaussianity for this model is comparable with that of the non-tachyonic model.

  11. Traversable geometric dark energy wormholes constrained by astrophysical observations

    NASA Astrophysics Data System (ADS)

    Wang, Deng; Meng, Xin-he

    2016-09-01

    In this paper, we introduce astrophysical observations into wormhole research. We investigate the evolution of the dark energy equation of state parameter ω by constraining the dark energy model, so that we can determine in which stage of the universe wormholes can exist, using the condition ω < -1. As a concrete instance, we study Ricci dark energy (RDE) traversable wormholes constrained by astrophysical observations. In particular, we find (Fig. 5 of this work) that when the effective equation of state parameter ω_X < -1 (i.e., z < 0.109), so that the null energy condition (NEC) is clearly violated, wormholes can exist (open). Subsequently, six specific solutions for static, spherically symmetric traversable wormholes supported by the RDE fluids are obtained. Except for the case of a constant redshift function, where the solution is not only asymptotically flat but also traversable, the five remaining solutions are all non-asymptotically flat; the exotic matter from the RDE fluids is therefore spatially distributed in the vicinity of the throat. Furthermore, we analyze the physical characteristics and properties of the RDE traversable wormholes. It is worth noting that, using the astrophysical observations, we obtain constraints on the parameters of the RDE model, explore the types of exotic RDE fluids in different stages of the universe, limit the number of models available for wormhole research, theoretically reduce the number of wormholes corresponding to different parameters of the RDE model, and provide a clearer picture for wormhole investigations from the new perspective of observational cosmology.

  12. The signal of mantle anisotropy in the coupling of normal modes

    NASA Astrophysics Data System (ADS)

    Beghein, Caroline; Resovsky, Joseph; van der Hilst, Robert D.

    2008-12-01

    We investigate whether the coupling of normal mode (NM) multiplets can help us constrain mantle anisotropy. We first derive explicit expressions for the generalized structure coefficients of coupled modes in terms of elastic coefficients, including the Love parameters describing radial anisotropy and the parameters describing azimuthal anisotropy (Jc, Js, Kc, Ks, Mc, Ms, Bc, Bs, Gc, Gs, Ec, Es, Hc, Hs, Dc and Ds). We detail the selection rules that determine which modes can couple together and which elastic parameters govern their coupling. We then focus on modes of type 0Sl - 0Tl+1 and determine whether they can be used to constrain mantle anisotropy. We show that they are sensitive to six elastic parameters describing azimuthal anisotropy, in addition to the two shear-wave elastic parameters L and N (i.e. VSV and VSH). We find that neither isotropic nor radially anisotropic mantle models can fully explain the observed degree-two signal. We show that the NM signal that remains after correction for the effects of the crust and mantle radial anisotropy can be explained by the presence of azimuthal anisotropy in the upper mantle. Although the data favour locating azimuthal anisotropy below 400 km, its depth extent and distribution are still not well constrained by the data. Consideration of NM coupling can thus help constrain azimuthal anisotropy in the mantle, but joint analyses with surface-wave phase velocities are needed to reduce the parameter trade-offs and improve our constraints on the individual elastic parameters and the depth location of the azimuthal anisotropy.

  13. A Method to Constrain Mass and Spin of GRB Black Holes within the NDAF Model

    NASA Astrophysics Data System (ADS)

    Liu, Tong; Xue, Li; Zhao, Xiao-Hong; Zhang, Fu-Wen; Zhang, Bing

    2016-04-01

    Black holes (BHs) hide themselves behind various astronomical phenomena, and their properties, i.e., mass and spin, are usually difficult to constrain. One leading candidate for the central engine model of gamma-ray bursts (GRBs) invokes a stellar-mass BH and a neutrino-dominated accretion flow (NDAF), with the relativistic jet launched by neutrino-anti-neutrino annihilation. Such a model gives rise to a matter-dominated fireball and is suited to interpreting GRBs with a dominant thermal component of photospheric origin. We propose a method to constrain BH mass and spin within the framework of this model and apply it to the thermally dominant GRB 101219B, whose initial jet launching radius, r0, is constrained from the data. Using our numerical model of NDAF jets, we estimate the following constraints on the central BH: mass MBH ˜ 5-9 M⊙, spin parameter a* ≳ 0.6, and disk mass 3 M⊙ ≲ Mdisk ≲ 4 M⊙. Our results also suggest that the NDAF model is a competitive candidate for the central engine of GRBs with a strong thermal component.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cembranos, Jose A. R.; Diaz-Cruz, J. Lorenzo; Prado, Lilian

    Dark Matter direct detection experiments are able to exclude interesting parameter-space regions of particle models which predict a significant amount of thermal relics. We use recent data to constrain the branon model and to compute the region favored by CDMS measurements. Within this work, we also update present collider constraints with new studies coming from the LHC. Despite the present low luminosity, it is remarkable that for heavy branons, CMS and ATLAS measurements are already more constraining than previous analyses performed with Tevatron and LEP data.

  15. Cosmological constraints on extended Galileon models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Antonio De; Tsujikawa, Shinji, E-mail: antoniod@nu.ac.th, E-mail: shinji@rs.kagu.tus.ac.jp

    2012-03-01

    The extended Galileon models possess tracker solutions with de Sitter attractors along which the dark energy equation of state is constant during the matter-dominated epoch, i.e. w_DE = −1−s, where s is a positive constant. Even with this phantom equation of state there are viable parameter spaces in which ghosts and Laplacian instabilities are absent. Using the observational data of type Ia supernovae, the cosmic microwave background (CMB), and baryon acoustic oscillations, we place constraints on the tracker solutions at the background level and find that the parameter s is constrained to be s = 0.034^{+0.327}_{−0.034} (95% CL) in the flat Universe. In order to break the degeneracy between the models we also study the evolution of cosmological density perturbations relevant to the large-scale structure (LSS) and the integrated Sachs-Wolfe (ISW) effect in the CMB. We show that, depending on the model parameters, the LSS and the ISW effect are either positively or negatively correlated. It is then possible to constrain viable parameter spaces further from the observational data of the ISW-LSS cross-correlation as well as from the matter power spectrum.

  16. Figure of merit and different combinations of observational data sets

    NASA Astrophysics Data System (ADS)

    Su, Qiping; Tuo, Zhong-Liang; Cai, Rong-Gen

    2011-11-01

    To constrain cosmological parameters, one often performs a joint analysis with different combinations of observational data sets. In this paper we take the figure of merit (FoM) for the Dark Energy Task Force fiducial model (the Chevallier-Polarski-Linder model) to estimate the goodness of different combinations of data sets, which include 11 widely used observational data sets (type Ia supernovae, observational Hubble parameter data, baryon acoustic oscillations, the cosmic microwave background, the X-ray cluster baryon mass fraction, and gamma-ray bursts). We analyze different combinations and make a comparison for two types of combinations based on two types of basic combinations, which are often adopted in the literature. We find two combinations with a strong ability to constrain the dark energy parameters: one has the largest FoM, and the other contains less observational data with a relatively large FoM and a simple fitting procedure.
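
    The DETF figure of merit itself is straightforward to compute once a joint analysis has produced a covariance matrix for the CPL parameters (w0, wa); a minimal sketch with made-up covariances follows:

        import numpy as np

        # Hypothetical 2x2 covariances of (w0, wa) from two different
        # combinations of data sets (the numbers are invented).
        cov_A = np.array([[0.04, -0.10], [-0.10, 0.40]])
        cov_B = np.array([[0.02, -0.05], [-0.05, 0.30]])

        def fom(cov):
            # DETF figure of merit: inverse area of the w0-wa error
            # ellipse, commonly written as 1/sqrt(det Cov).
            return 1.0 / np.sqrt(np.linalg.det(cov))

        print("FoM A:", fom(cov_A), " FoM B:", fom(cov_B))

    A larger FoM means a smaller error ellipse, i.e. a stronger joint constraint on the dark energy parameters.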

  17. Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki

    2015-08-01

    In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with a distribution similar to that currently observed with VERA and the VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully recover the initial model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ~1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or the inner/outer Lindblad resonances. We also discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that the BIC can be used to discriminate between different dynamical models of the Galaxy.
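
    Model selection with the BIC, as mentioned above, reduces to comparing a simple score across candidate models; a minimal sketch with invented numbers:

        import numpy as np

        def bic(log_likelihood_max, k_params, n_data):
            # Bayesian information criterion: lower values are preferred.
            return k_params * np.log(n_data) - 2.0 * log_likelihood_max

        # Hypothetical comparison of a spiral vs. a bar model of the Galaxy,
        # each fit to 500 astrometric sources (all numbers are made up).
        print("spiral:", bic(-1240.0, k_params=6, n_data=500))
        print("bar   :", bic(-1236.0, k_params=9, n_data=500))

    The penalty term k*ln(n) is what lets the BIC prefer a simpler dynamical model unless the extra parameters buy a genuinely better fit.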

  18. Parameter estimation uncertainty: Comparing apples and apples?

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2012-12-01

    Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
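
    A linearized sketch of the null-space Monte Carlo idea: the SVD of the Jacobian splits parameter space into a solution space (informed by the data) and a null space (uninformed), and an ensemble is built by randomizing only the null-space component of a single calibrated model. Dimensions and values below are hypothetical:

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical linearized problem: Jacobian J maps 10 parameters to
        # 4 observations, so 6 parameter directions are uninformed by the data.
        J = rng.normal(size=(4, 10))
        m_cal = rng.normal(size=10)                    # single calibrated model

        U, s, Vt = np.linalg.svd(J)
        n_sol = np.sum(s > 1e-8)                       # rank = solution-space dim.
        V_null = Vt[n_sol:].T                          # null-space basis (10 x 6)

        # NSMC-style ensemble: keep the solution-space component of the
        # calibration, randomize only the null-space component.
        ensemble = np.array([m_cal + V_null @ rng.normal(size=V_null.shape[1])
                             for _ in range(100)])

        # All members reproduce the calibrated fit to first order:
        print("max change in simulated obs:",
              np.abs(J @ (ensemble - m_cal).T).max())

    The bias discussed in the abstract arises because every member shares the same solution-space component; the multiple-calibration variant (M-NSMC) randomizes that component as well by starting from several calibrated fields.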

  19. A Model-Data Fusion Approach for Constraining Modeled GPP at Global Scales Using GOME2 SIF Data

    NASA Astrophysics Data System (ADS)

    MacBean, N.; Maignan, F.; Lewis, P.; Guanter, L.; Koehler, P.; Bacour, C.; Peylin, P.; Gomez-Dans, J.; Disney, M.; Chevallier, F.

    2015-12-01

    Predicting the fate of ecosystem carbon (C) stocks and their sensitivity to climate change relies heavily on our ability to accurately model the gross carbon fluxes, i.e. photosynthesis and respiration. However, there are large differences in the Gross Primary Productivity (GPP) simulated by different land surface models (LSMs), not only in terms of mean value, but also in terms of phase and amplitude when compared to independent data-based estimates. This strongly limits our ability to provide accurate predictions of carbon-climate feedbacks. One possible source of this uncertainty is inaccurate parameter values resulting from incomplete model calibration. Solar-Induced Fluorescence (SIF) has been shown to have a linear relationship with GPP at the typical spatio-temporal scales used in LSMs (Guanter et al., 2011). New satellite-derived SIF datasets have the potential to constrain LSM parameters related to C uptake at global scales due to their coverage. Here we use SIF data derived from the GOME2 instrument (Köhler et al., 2014) to optimize parameters related to photosynthesis and leaf phenology of the ORCHIDEE LSM, as well as the linear relationship between SIF and GPP. We use a multi-site approach that combines many model grid cells covering a wide spatial distribution within the same optimization (e.g. Kuppel et al., 2014). The parameters are constrained per plant functional type, as the linear relationship described above varies depending on vegetation structural properties. The relative skill of the optimization is compared to a case where only satellite-derived vegetation index data are used to constrain the model, and to a case where both data streams are used. We evaluate the results using an independent data-driven estimate derived from FLUXNET data (Jung et al., 2011) and with a new atmospheric tracer, carbonyl sulphide (OCS), following the approach of Launois et al. (ACPD, in review). We show that the optimization reduces the strong positive bias of the ORCHIDEE model and increases the correlation compared to independent estimates. Differences in spatial patterns and gradients between simulated GPP and observed SIF remain largely unchanged, however, suggesting that the underlying representation of vegetation type and/or structure and functioning in the model requires further investigation.

  20. Assimilating AmeriFlux Site Data into the Community Land Model with Carbon-Nitrogen Coupling via the Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Pettijohn, J. C.; Law, B. E.; Williams, M. D.; Stoeckli, R.; Thornton, P. E.; Hudiburg, T. M.; Thomas, C. K.; Martin, J.; Hill, T. C.

    2009-12-01

    The assimilation of terrestrial carbon, water and nutrient cycle measurements into land surface models of these processes is fundamental to improving our ability to predict how these ecosystems may respond to climate change. A combination of measurements and models, each with their own systematic biases, must be considered when constraining the nonlinear behavior of these coupled dynamics. As such, we use the sequential Ensemble Kalman Filter (EnKF) to assimilate eddy covariance (EC) and other site-level AmeriFlux measurements into the NCAR Community Land Model with Carbon-Nitrogen coupling (CLM-CN v3.5), run in single-column mode at a 30-minute time step, to improve estimates of relatively unconstrained model state variables and parameters. Specifically, we focus on a semi-arid ponderosa pine site (US-ME2) in the Pacific Northwest to identify the mechanisms by which this ecosystem responds to severe late summer drought. Our EnKF analysis includes water, carbon, energy and nitrogen state variables (e.g., 10 volumetric soil moisture levels (0-3.43 m), ponderosa pine and shrub evapotranspiration and net ecosystem exchange of carbon dioxide stocks and flux components, snow depth, etc.) and associated parameters (e.g., PFT-level rooting distribution parameters, maximum subsurface runoff coefficient, soil hydraulic conductivity decay factor, snow aging parameters, maximum canopy conductance, C:N ratios, etc.). The effectiveness of the EnKF in constraining state variables and associated parameters is sensitive to their relative frequencies, in that C-N state variables and parameters with long time constants require similarly long time series in the analysis. We apply the EnKF kernel perturbation routine to disrupt preliminary convergence of covariances, which has been found in recent studies to be a problem more characteristic of low frequency vegetation state variables and parameters than high frequency ones more heavily coupled with highly varying climate (e.g., shallow soil moisture, snow depth). Preliminary results demonstrate that the assimilation of EC and other available AmeriFlux site physical, chemical and biological data significantly helps quantify and reduce CLM-CN model uncertainties and helps to constrain ‘hidden’ states and parameters that are essential in the coupled water, carbon, energy and nutrient dynamics of these sites. Such site-level calibration of CLM-CN is an initial step in identifying model deficiencies and in forecasts of future ecosystem responses to climate change.
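
    As a generic illustration of how a sequential EnKF can constrain a parameter through its ensemble correlation with an observed state variable (a minimal stochastic-EnKF update, not the CLM-CN assimilation system; the state, parameter, and noise values are invented):

        import numpy as np

        rng = np.random.default_rng(5)

        def enkf_update(X, y_obs, obs_var, H):
            # One stochastic EnKF analysis step.
            # X: (n_state, n_ens) ensemble of joint state+parameter vectors;
            # y_obs: scalar observation; H: (1, n_state) observation operator.
            n_ens = X.shape[1]
            Y = H @ X                                          # predicted obs
            X_a = X - X.mean(axis=1, keepdims=True)            # anomalies
            Y_a = Y - Y.mean(axis=1, keepdims=True)
            P_xy = X_a @ Y_a.T / (n_ens - 1)                   # cross-covariance
            P_yy = Y_a @ Y_a.T / (n_ens - 1) + obs_var
            K = P_xy / P_yy                                    # Kalman gain
            perturbed = y_obs + rng.normal(0, np.sqrt(obs_var), n_ens)
            return X + K @ (perturbed - Y)                     # analysis ensemble

        # Hypothetical joint vector: [soil moisture, rooting-depth parameter];
        # only soil moisture is observed, the parameter is updated through
        # its ensemble correlation with the observed state.
        X = np.vstack([rng.normal(0.30, 0.05, 50),             # soil moisture
                       rng.normal(1.50, 0.40, 50)])            # parameter
        H = np.array([[1.0, 0.0]])
        X = enkf_update(X, y_obs=0.22, obs_var=0.02**2, H=H)
        print("posterior parameter mean:", X[1].mean())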

  1. Improving the realism of hydrologic model through multivariate parameter estimation

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis

    2017-04-01

    Increased availability and quality of near real-time observations should improve understanding of the predictive skill of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed upon inclusion or exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of the complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of the hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10.1002/2016WR019430

  2. Thermodynamically consistent model calibration in chemical kinetics

    PubMed Central

    2011-01-01

    Background The dynamics of biochemical reaction systems are constrained by the fundamental laws of thermodynamics, which impose well-defined relationships among the reaction rate constants characterizing these systems. Constructing biochemical reaction systems from experimental observations often leads to parameter values that do not satisfy the necessary thermodynamic constraints. This can result in models that are not physically realizable and may lead to inaccurate, or even erroneous, descriptions of cellular function. Results We introduce a thermodynamically consistent model calibration (TCMC) method that can be effectively used to provide thermodynamically feasible values for the parameters of an open biochemical reaction system. The proposed method formulates the model calibration problem as a constrained optimization problem that takes thermodynamic constraints (and, if desired, additional non-thermodynamic constraints) into account. By calculating thermodynamically feasible values for the kinetic parameters of a well-known model of the EGF/ERK signaling cascade, we demonstrate the qualitative and quantitative significance of imposing thermodynamic constraints on these parameters and the effectiveness of our method for accomplishing this important task. MATLAB software, using the Systems Biology Toolbox 2.1, can be accessed from http://www.cis.jhu.edu/~goutsias/CSS lab/software.html. An SBML file containing the thermodynamically feasible EGF/ERK signaling cascade model can be found in the BioModels database. Conclusions TCMC is a simple and flexible method for obtaining physically plausible values for the kinetic parameters of open biochemical reaction systems. It can be effectively used to recalculate a thermodynamically consistent set of parameter values for existing thermodynamically infeasible biochemical reaction models of cellular function as well as to estimate thermodynamically feasible values for the parameters of new models. Furthermore, TCMC can provide dimensionality reduction, better estimation performance, and lower computational complexity, and can help to alleviate the problem of data overfitting. PMID:21548948
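
    A minimal sketch of the kind of thermodynamically constrained calibration described here (not the TCMC software): for a closed reaction loop, detailed balance imposes a Wegscheider condition on the rate constants, which can be enforced as an equality constraint during fitting. The three-reaction loop and the "measured" values are hypothetical:

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical loop A<->B<->C<->A. Detailed balance demands that the
        # product of forward constants around the loop equals the product of
        # reverse constants (the Wegscheider condition).
        k_measured = np.array([2.0, 0.5, 1.2, 0.8, 0.3, 2.5])  # noisy estimates

        def objective(log_k):
            # Stay close to the measured values (in log space).
            return np.sum((log_k - np.log(k_measured))**2)

        def loop_constraint(log_k):
            # log k1f + log k2f + log k3f - log k1r - log k2r - log k3r = 0
            return log_k[:3].sum() - log_k[3:].sum()

        res = minimize(objective, np.log(k_measured), method="SLSQP",
                       constraints=[{"type": "eq", "fun": loop_constraint}])
        k_feasible = np.exp(res.x)
        print("feasible rate constants:", k_feasible.round(3))
        print("loop product check:", k_feasible[:3].prod() / k_feasible[3:].prod())

    Working in log space keeps the rate constants positive automatically, which is why the constraint becomes linear.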

  3. Transfer-function-parameter estimation from frequency response data: A FORTRAN program

    NASA Technical Reports Server (NTRS)

    Seidel, R. C.

    1975-01-01

    A FORTRAN computer program designed to fit a linear transfer-function model to given frequency response magnitude and phase data is presented. A conjugate gradient search is used that minimizes the integral of the squared magnitude of the error between the model and the data. The search is constrained to ensure model stability. Scaling the model parameters by their own magnitudes aids search convergence. Efficient computer algorithms result in a small, fast program suitable for a minicomputer. A sample problem with different model structures and parameter estimates is reported.
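
    A present-day analogue of this program's approach, sketched in Python with a hypothetical first-order model and synthetic data (the stability constraint is only noted in a comment here, not enforced):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(6)

        # Hypothetical frequency-response data for G(s) = K/(tau*s + 1)
        # with K = 2, tau = 0.5, plus a little phase noise.
        w = np.logspace(-1, 2, 40)
        data = 2.0 / (0.5j * w + 1.0) * np.exp(1j * rng.normal(0, 0.02, w.size))

        def model(theta, w):
            K, tau = theta
            return K / (1j * tau * w + 1.0)

        def cost(theta_scaled, scale):
            theta = theta_scaled * scale        # undo the magnitude scaling
            err = model(theta, w) - data
            return np.sum(np.abs(err)**2)       # squared error in magnitude and phase

        # Scale parameters by their own magnitudes to aid convergence, as in
        # the abstract; the original program additionally constrains the
        # search so the model stays stable (here, keeping tau > 0).
        theta0 = np.array([1.0, 0.1])
        scale = np.abs(theta0)
        res = minimize(cost, theta0 / scale, args=(scale,), method="CG")
        print("estimated (K, tau):", res.x * scale)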

  4. Rheological constraints on ridge formation on icy satellites

    NASA Astrophysics Data System (ADS)

    Rudolph, M. L.; Manga, M.

    2010-12-01

    The processes responsible for forming ridges on Europa remain poorly understood. We use a continuum damage mechanics approach to model ridge formation. The main objectives of this contribution are to constrain (1) the choice of rheological parameters and (2) the maximum ridge size and rate of formation. The key rheological parameters to constrain appear in the evolution equation for a damage variable D, $\dot{D} = B\,\langle\langle\sigma\rangle\rangle^{r}(1-D)^{-k} - \alpha D\,p/\mu$, and in the equation relating damage accumulation to volumetric changes, $J\rho_0 = \delta(1-D)$. Similar damage evolution laws have been applied to terrestrial glaciers and to the analysis of rock mechanics experiments. However, it is reasonable to expect that, like viscosity, the rheological constants B, α, and δ depend strongly on temperature, composition, and ice grain size. In order to determine whether the damage model is appropriate for Europa's ridges, we must find values of the unknown damage parameters that reproduce ridge topography. We perform a suite of numerical experiments to identify the region of parameter space conducive to ridge production and show the sensitivity to changes in each unknown parameter.

  5. An effective parameter optimization with radiation balance constraints in the CAM5

    NASA Astrophysics Data System (ADS)

    Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.

    2017-12-01

    Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimization, so simulations run with the "optimal" parameters may violate conditions that the model must satisfy. In this study, the radiation balance constraint is taken as an example and incorporated into an automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this constrained optimization problem. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take a synthesized metric using global means of radiation, precipitation, relative humidity, and temperature as the optimization goal, and simultaneously treat the conditions that FLUT and FSNTOA must satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are required to be approximately equal to 240 W m-2 in CAM5. Experimental results show that the synthesized metric is 13.6% better than in the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions. The FLUT condition is well satisfied, clearly better than the annual-average FLUT obtained with the default parameters. FSNTOA deviates slightly from the observed value, but the relative error is less than 7.7‰.
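
    A toy version of constrained tuning with a Lagrangian multiplier (the "skill" and "radiation" functions below are two-parameter surrogates invented for illustration, not CAM5 diagnostics):

        import numpy as np
        from scipy.optimize import minimize

        def skill(p):
            # Stand-in for the synthesized metric to be minimized.
            return (p[0] - 1.0)**2 + (p[1] + 0.5)**2

        def radiation(p):
            # Stand-in diagnostic that must equal 240 (the balance constraint).
            return 239.0 + 2.0 * p[0] - 1.0 * p[1]

        lam, mu = 0.0, 10.0                        # multiplier, penalty weight
        p = np.zeros(2)
        for it in range(20):                       # augmented-Lagrangian loop
            def L(p):
                c = radiation(p) - 240.0
                return skill(p) + lam * c + 0.5 * mu * c**2
            p = minimize(L, p).x
            c = radiation(p) - 240.0
            lam += mu * c                          # multiplier update
            if abs(c) < 1e-8:
                break
        print("tuned parameters:", p, " radiation:", radiation(p))

    The multiplier update drives the constraint violation to zero while the inner minimization keeps improving the skill metric, which is the essential trade-off described in the abstract.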

  6. Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake

    NASA Astrophysics Data System (ADS)

    Muller, S. J.; Gerber, S.

    2013-12-01

    The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition at higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (>year 2050) projections of the planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake, we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. Though the results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from those of other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to better constrain projections of the land carbon sink.

  7. Modeling Volcanic Eruption Parameters by Near-Source Internal Gravity Waves.

    PubMed

    Ripepe, M; Barfucci, G; De Angelis, S; Delle Donne, D; Lacanna, G; Marchetti, E

    2016-11-10

    Volcanic explosions release large amounts of hot gas and ash into the atmosphere, forming plumes that rise several kilometers above eruptive vents and can pose serious risks to human health and aviation even several thousands of kilometers from the volcanic source. However, even the most sophisticated models of atmospheric and eruptive plume dynamics require input parameters such as the duration of the ejection phase and the total mass erupted to constrain the quantity of ash dispersed in the atmosphere and to efficiently evaluate the related hazard. The sudden ejection of this large quantity of ash can perturb the equilibrium of the whole atmosphere, triggering oscillations well below the frequencies of acoustic waves, down to the much longer periods typical of gravity waves. We show that atmospheric gravity oscillations induced by volcanic eruptions and recorded by pressure sensors can be modeled as a compact source representing the rate of erupted volcanic mass. We demonstrate the feasibility of using gravity waves to derive eruption source parameters such as the duration of the injection and the total erupted mass, with direct application in constraining plume and ash dispersal models.

  8. Modeling Volcanic Eruption Parameters by Near-Source Internal Gravity Waves

    PubMed Central

    Ripepe, M.; Barfucci, G.; De Angelis, S.; Delle Donne, D.; Lacanna, G.; Marchetti, E.

    2016-01-01

    Volcanic explosions release large amounts of hot gas and ash into the atmosphere, forming plumes that rise several kilometers above eruptive vents and can pose serious risks to human health and aviation even several thousands of kilometers from the volcanic source. However, even the most sophisticated models of atmospheric and eruptive plume dynamics require input parameters such as the duration of the ejection phase and the total mass erupted to constrain the quantity of ash dispersed in the atmosphere and to efficiently evaluate the related hazard. The sudden ejection of this large quantity of ash can perturb the equilibrium of the whole atmosphere, triggering oscillations well below the frequencies of acoustic waves, down to the much longer periods typical of gravity waves. We show that atmospheric gravity oscillations induced by volcanic eruptions and recorded by pressure sensors can be modeled as a compact source representing the rate of erupted volcanic mass. We demonstrate the feasibility of using gravity waves to derive eruption source parameters such as the duration of the injection and the total erupted mass, with direct application in constraining plume and ash dispersal models. PMID:27830768

  9. Constraints from triple gauge couplings on vectorlike leptons

    DOE PAGES

    Bertuzzo, Enrico; Machado, Pedro A. N.; Perez-Gonzalez, Yuber F.; ...

    2017-08-30

    Here, we study the contributions of colorless vectorlike fermions to the triple gauge couplings W^+W^-γ and W^+W^-Z^0. We consider models in which their coupling to the Standard Model Higgs boson is allowed or forbidden by quantum numbers. We assess the sensitivity of the future accelerators FCC-ee, ILC, and CLIC to the parameters of these models, assuming they will be able to constrain the anomalous triple gauge couplings with precision δκ_V ~ O(10^-4), V = γ, Z^0. We show that the combination of measurements at different center-of-mass energies helps to improve the sensitivity to the contribution of vectorlike fermions, in particular when they couple to the Higgs. In fact, the measurements at the FCC-ee and, especially, the ILC and the CLIC, may turn the triple gauge couplings into a new set of precision parameters able to constrain the models better than the oblique parameters or the H → γγ decay, even assuming the considerable improvement of the latter measurements achievable at the new machines.

  10. Constraints from triple gauge couplings on vectorlike leptons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertuzzo, Enrico; Machado, Pedro A. N.; Perez-Gonzalez, Yuber F.

    Here, we study the contributions of colorless vectorlike fermions to the triple gauge couplings W^+W^-γ and W^+W^-Z^0. We consider models in which their coupling to the Standard Model Higgs boson is allowed or forbidden by quantum numbers. We assess the sensitivity of the future accelerators FCC-ee, ILC, and CLIC to the parameters of these models, assuming they will be able to constrain the anomalous triple gauge couplings with precision δκ_V ~ O(10^-4), V = γ, Z^0. We show that the combination of measurements at different center-of-mass energies helps to improve the sensitivity to the contribution of vectorlike fermions, in particular when they couple to the Higgs. In fact, the measurements at the FCC-ee and, especially, the ILC and the CLIC, may turn the triple gauge couplings into a new set of precision parameters able to constrain the models better than the oblique parameters or the H → γγ decay, even assuming the considerable improvement of the latter measurements achievable at the new machines.

  11. Developing a particle tracking surrogate model to improve inversion of ground water - Surface water models

    NASA Astrophysics Data System (ADS)

    Cousquer, Yohann; Pryet, Alexandre; Atteia, Olivier; Ferré, Ty P. A.; Delbart, Célestine; Valois, Rémi; Dupuy, Alain

    2018-03-01

    The inverse problem of groundwater models is often ill-posed and model parameters are likely to be poorly constrained. Identifiability is improved if diverse data types are used for parameter estimation. However, some models, including detailed solute transport models, are further limited by prohibitive computation times. This often precludes the use of concentration data for parameter estimation, even if those data are available. In the case of surface water-groundwater (SW-GW) models, concentration data can provide SW-GW mixing ratios, which efficiently constrain the estimate of exchange flow, but are rarely used. We propose to reduce computational limits by simulating SW-GW exchange at a sink (well or drain) based on particle tracking under steady state flow conditions. Particle tracking is used to simulate advective transport. A comparison between the particle tracking surrogate model and an advective-dispersive model shows that dispersion can often be neglected when the mixing ratio is computed for a sink, allowing for use of the particle tracking surrogate model. The surrogate model was implemented to solve the inverse problem for a real SW-GW transport problem with heads and concentrations combined in a weighted hybrid objective function. The resulting inversion showed markedly reduced uncertainty in the transmissivity field compared to calibration on head data alone.
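
    A deliberately crude sketch of the surrogate idea: backward advective particle tracking from a pumping well in a toy steady-state flow field, counting the fraction of particles that terminate at the river to estimate the SW-GW mixing ratio. The velocity field and geometry are hypothetical; a real application would interpolate the calibrated model's velocity field:

        import numpy as np

        rng = np.random.default_rng(7)

        WELL, RIVER_X = np.array([50.0, 25.0]), 0.0

        def backward_velocity(pos):
            # Reversed-time velocity of a toy flow field converging on the
            # well: particles tracked backward move radially away from it.
            away = pos - WELL
            return away / (np.linalg.norm(away) + 1e-9)

        n, dt, steps = 500, 0.5, 400
        starts = WELL + rng.normal(0.0, 1.0, (n, 2))   # release around the well
        from_river = 0
        for p in starts.copy():
            for _ in range(steps):                     # explicit Euler, advection only
                p += backward_velocity(p) * dt
                if p[0] <= RIVER_X:                    # particle originated in the river
                    from_river += 1
                    break
        print("estimated SW-GW mixing ratio:", from_river / n)

    Because only advection is tracked, each forward run is cheap, which is what makes the mixing ratio usable inside an inversion loop alongside head data.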

  12. Exploring extended scalar sectors with di-Higgs signals: a Higgs EFT perspective

    NASA Astrophysics Data System (ADS)

    Corbett, Tyler; Joglekar, Aniket; Li, Hao-Lin; Yu, Jiang-Hao

    2018-05-01

    We consider extended scalar sectors of the Standard Model as ultraviolet-complete motivations for studying the effective Higgs self-interaction operators of the Standard Model effective field theory. We investigate all motivated heavy scalar models which generate the dimension-six effective operator |H|^6 at tree level and proceed to identify the full set of tree-level dimension-six operators by integrating out the heavy scalars. Of the seven models which generate |H|^6 at tree level, only two, quadruplets of hypercharge Y = 3Y_H and Y = Y_H, generate only this operator. Next we perform global fits to constrain the relevant Wilson coefficients from the LHC single-Higgs measurements as well as the electroweak oblique parameters S and T. We find that the T parameter puts very strong constraints on the Wilson coefficient of the |H|^6 operator in the triplet and quadruplet models, while the singlet and doublet models could still have Higgs self-couplings which deviate significantly from the standard model prediction. To determine the extent to which the |H|^6 operator could be constrained, we study the di-Higgs signatures at the future 100 TeV collider and explore the future sensitivity of this operator. Projected onto the Higgs potential parameters of the extended scalar sectors, with 30 ab^-1 of luminosity data we will be able to explore the Higgs potential parameters in all seven models.

  13. A predictive parameter estimation approach for the thermodynamically constrained averaging theory applied to diffusion in porous media

    NASA Astrophysics Data System (ADS)

    Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.

    2017-12-01

    Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.

  14. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt%) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  15. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    NASA Astrophysics Data System (ADS)

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle

    2017-10-01

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  16. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    DOE PAGES

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; ...

    2017-10-04

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  17. Constraining the magmatic system at Mount St. Helens (2004–2008) using Bayesian inversion with physics-based models including gas escape and crystallization

    USGS Publications Warehouse

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle R.

    2017-01-01

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt%) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping extrusion rate at the observed value.

  18. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of polynomial linear parameter-varying (PLPV) systems, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is obtained that satisfies both the robust performance and the decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  19. Precision constraints on the top-quark effective field theory at future lepton colliders

    NASA Astrophysics Data System (ADS)

    Durieux, G.

    We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the $e^+e^- \to bW^+ \bar{b}W^-$ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on limits, which are therefore basis independent. This is contrasted with measurements of cross sections and forward-backward asymmetries. An overall measure of constraint strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.

  20. A Bayesian approach to the modelling of α Cen A

    NASA Astrophysics Data System (ADS)

    Bazot, M.; Bourguignon, S.; Christensen-Dalsgaard, J.

    2012-12-01

    Determining the physical characteristics of a star is an inverse problem consisting of estimating the parameters of stellar structure and evolution models, given certain observable quantities. We use a Bayesian approach to solve this problem for α Cen A, which allows us to incorporate prior information on the parameters to be estimated in order to better constrain the problem. Our strategy is based on the use of a Markov chain Monte Carlo (MCMC) algorithm to estimate the posterior probability densities of the stellar parameters: mass, age, initial chemical composition, etc. We use the stellar evolutionary code ASTEC to model the star. To constrain this model, both seismic and non-seismic observations were considered. Several different strategies were tested to fit these values, using either two or five free parameters in ASTEC. We are thus able to show evidence that MCMC methods become more efficient than classical grid-based strategies as the number of parameters increases. The results of our MCMC algorithm allow us to derive estimates of the stellar parameters and robust uncertainties thanks to the statistical analysis of the posterior probability densities. We are also able to compute odds for the presence of a convective core in α Cen A. When using core-sensitive seismic observational constraints, these can rise above ˜40 per cent. The comparison of our results to those of previous studies also indicates that these seismic constraints are of critical importance for our knowledge of the structure of this star.

  1. Estimation of sum-to-one constrained parameters with non-Gaussian extensions of ensemble-based Kalman filters: application to a 1D ocean biogeochemical model

    NASA Astrophysics Data System (ADS)

    Simon, E.; Bertino, L.; Samuelsen, A.

    2011-12-01

    Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the positiveness constraints that apply to the variables and parameters, and the non-Gaussian distribution of the variables that results. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous work [1] demonstrated that Gaussian anamorphosis extensions of ensemble-based Kalman filters are relevant tools for performing combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized with the 1D coupled model GOTM-NORWECOM and Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon, E., Bertino, L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007. [2] Gelman, A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4(1), 36-54, 1995.
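
    The two changes of variables can be sketched as follows; the softmax-style map below is one common way to obtain normally distributed working variables in the spirit of the first reformulation, and the second map follows the unit-sphere construction described above (both are generic illustrations, not necessarily the authors' exact formulas):

        import numpy as np

        # Two maps from unconstrained (or simply bounded) working variables
        # to positive weights summing to one, so that an ensemble Kalman
        # filter can update the transformed variables freely.

        def from_gaussian(z):
            # Softmax-style map: z in R^(n-1) -> a point on the simplex,
            # with the last component pinned for identifiability.
            e = np.exp(np.append(z, 0.0))
            return e / e.sum()

        def from_spherical(phi):
            # Unit-sphere map: angles -> squared spherical coordinates,
            # which are nonnegative and sum to one by construction.
            w, rest = [], 1.0
            for a in phi:
                w.append(rest * np.sin(a)**2)
                rest *= np.cos(a)**2
            w.append(rest)
            return np.array(w)

        print(from_gaussian(np.array([0.3, -1.2])).round(3), "-> sums to 1")
        print(from_spherical(np.array([0.7, 1.1])).round(3), "-> sums to 1")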

  2. Model-independent constraints on modified gravity from current data and from the Euclid and SKA future surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taddei, Laura; Martinelli, Matteo; Amendola, Luca, E-mail: taddei@thphys.uni-heidelberg.de, E-mail: martinelli@lorentz.leidenuniv.nl, E-mail: amendola@thphys.uni-heidelberg.de

    2016-12-01

    The aim of this paper is to constrain modified gravity with redshift space distortion observations and supernovae measurements. Compared with a standard ΛCDM analysis, we include three additional free parameters, namely the initial conditions of the matter perturbations, the overall perturbation normalization, and a scale-dependent modified gravity parameter modifying the Poisson equation, in an attempt to perform a more model-independent analysis. First, we constrain the Poisson parameter Y (also called G_eff) by using currently available fσ_8 data and the recent SN catalog JLA. We find that the inclusion of the additional free parameters makes the constraints significantly weaker than when fixing them to the standard cosmological value. Second, we forecast future constraints on Y by using the predicted growth-rate data for the Euclid and SKA missions. Here again we point out the weakening of the constraints when the additional parameters are included. Finally, we adopt as the modified gravity Poisson parameter the specific Horndeski form, and use scale-dependent forecasts to build an exclusion plot for the Yukawa potential akin to the ones realized in laboratory experiments, both for the Euclid and the SKA surveys.

  3. A probabilistic assessment of calcium carbonate export and dissolution in the modern ocean

    NASA Astrophysics Data System (ADS)

    Battaglia, Gianna; Steinacher, Marco; Joos, Fortunat

    2016-05-01

    The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 export fluxes and the mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Monte Carlo scheme to construct a 1000-member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly and ocean-sediment fluxes are considered. For local dissolution rates, either a strong or a weak dependency on CaCO3 saturation is assumed. In addition, there is the option of saturation-independent dissolution above the saturation horizon. The median (and 68% confidence interval) of the constrained model ensemble for global biogenic CaCO3 export is 0.90 (0.72-1.05) Gt C yr-1, which is within the lower half of previously published estimates (0.4-1.8 Gt C yr-1). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo-Pacific and the northern Pacific, and relatively small in the Atlantic. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. Modelled ocean circulation and transport timescales for the different set-ups were further evaluated with CFC11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements, such that model realisations with and without saturation-dependent dissolution achieve skill. We suggest applying saturation-independent dissolution rates in Earth system models to minimise computational costs.
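
    The ensemble-constraining step described above can be sketched in a few lines of Python; the toy forward model, observation value and acceptance rule below are invented stand-ins for the Bern3D model and the TA* comparison, shown only to make the Monte Carlo logic concrete.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the ocean model: maps two parameters to a single
# diagnostic playing the role of the simulated TA* field.
def toy_model(export_rate, dissolution_rate):
    return export_rate * (1.0 - np.exp(-dissolution_rate))

obs, obs_err = 0.55, 0.05  # hypothetical observation and its uncertainty

# 1000-member ensemble drawn from prior parameter ranges.
export = rng.uniform(0.4, 1.8, 1000)
dissol = rng.uniform(0.1, 5.0, 1000)
sim = toy_model(export, dissol)

# Constrain the ensemble: accept members in proportion to their fit.
score = np.exp(-0.5 * ((sim - obs) / obs_err) ** 2)
keep = score > rng.uniform(0.0, 1.0, 1000)
constrained = export[keep]

# Median and 68% interval of the constrained parameter distribution.
lo, med, hi = np.percentile(constrained, [16, 50, 84])
print(f"export: {med:.2f} ({lo:.2f}-{hi:.2f}), members kept: {keep.sum()}")
```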

  4. Ten years of multiple data stream assimilation with the ORCHIDEE land surface model to improve regional to global simulated carbon budgets: synthesis and perspectives on directions for the future

    NASA Astrophysics Data System (ADS)

    Peylin, P. P.; Bacour, C.; MacBean, N.; Maignan, F.; Bastrikov, V.; Chevallier, F.

    2017-12-01

    Predicting the fate of carbon stocks and their sensitivity to climate change and land use/management strongly relies on our ability to accurately model net and gross carbon fluxes. However, simulated carbon and water fluxes remain subject to large uncertainties, partly because of unknown or poorly calibrated parameters. Over the past ten years, the carbon cycle data assimilation system at the Laboratoire des Sciences du Climat et de l'Environnement has investigated the benefit of assimilating multiple carbon cycle data streams into the ORCHIDEE LSM, the land surface component of the Institut Pierre Simon Laplace Earth System Model. These datasets have included FLUXNET eddy covariance data (net CO2 flux and latent heat flux) to constrain hourly to seasonal time-scale carbon cycle processes, remote sensing of the vegetation activity (MODIS NDVI) to constrain the leaf phenology, biomass data to constrain "slow" (yearly to decadal) processes of carbon allocation, and atmospheric CO2 concentrations to provide overall large-scale constraints on the land carbon sink. Furthermore, we have investigated technical issues related to multiple data stream assimilation and the choice of optimization algorithm. This has provided a wide-ranging perspective on the challenges we face in constraining model parameters and thus better quantifying, and reducing, model uncertainty in projections of the future global carbon sink. We review our past studies in terms of the impact of the optimization on key characteristics of the carbon cycle, e.g. the partitioning of the land carbon sink between the northern latitudes and the tropics, and compare to the classic atmospheric flux inversion approach. Throughout, we discuss our work in the context of the above-mentioned challenges, and propose solutions for the community going forward, including the potential of new observations such as atmospheric COS concentrations and satellite-derived Solar Induced Fluorescence to constrain the gross carbon fluxes of the ORCHIDEE model.

  5. Direct reconstruction of dark energy.

    PubMed

    Clarkson, Chris; Zunckel, Caroline

    2010-05-28

    An important issue in cosmology is reconstructing the effective dark energy equation of state directly from observations. With so few physically motivated models, future dark energy studies cannot be based solely on constraining a dark energy parameter space. We present a new nonparametric method which can accurately reconstruct a wide variety of dark energy behavior with no prior assumptions about it. It is simple, quick and relatively accurate, and involves no expensive explorations of parameter space. The technique uses principal component analysis and a combination of information criteria to identify real features in the data, and tailors the fitting functions to pick up trends and smooth over noise. We find that we can constrain a large variety of w(z) models to within 10%-20% at redshifts z≲1 using just SNAP-quality data.

  6. Parameter and prediction uncertainty in an optimized terrestrial carbon cycle model: Effects of constraining variables and data record length

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.

    2011-03-01

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  7. Constraining 3-PG with a new δ13C submodel: a test using the δ13C of tree rings.

    PubMed

    Wei, Liang; Marshall, John D; Link, Timothy E; Kavanagh, Kathleen L; DU, Enhao; Pangle, Robert E; Gag, Peter J; Ubierna, Nerea

    2014-01-01

    A semi-mechanistic forest growth model, 3-PG (Physiological Principles Predicting Growth), was extended to calculate δ13C in tree rings. The δ13C estimates were based on the model's existing description of carbon assimilation and canopy conductance. The model was tested in two ~80-year-old natural stands of Abies grandis (grand fir) in northern Idaho. We used as many independent measurements as possible to parameterize the model. Measured parameters included quantum yield, specific leaf area, soil water content and litterfall rate. Predictions were compared with measurements of transpiration by sap flux, stem biomass, tree diameter growth, leaf area index and δ13C. Sensitivity analysis showed that the model's predictions of δ13C were sensitive to key parameters controlling carbon assimilation and canopy conductance, which would have allowed it to fail had the model been parameterized or programmed incorrectly. Instead, the simulated δ13C of tree rings was no different from measurements (P > 0.05). The δ13C submodel provides a convenient means of constraining parameter space and avoiding model artefacts. This δ13C test may be applied to any forest growth model that includes realistic simulations of carbon assimilation and transpiration. © 2013 John Wiley & Sons Ltd.

  8. Multi-objective optimization in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, treating the dissipation rate as an unknown parameter. We show that while the precision of parameter estimation is improved, the control usually introduces a significant deformation of the system state. We therefore propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, which improves the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ε-constrained model demonstrate the feasibility of Hamiltonian control in improving the precision of quantum parameter estimation.
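
    The ε-constrained treatment of such a two-objective problem, where one objective is optimized while the other is capped at a budget ε, can be sketched as follows; the Fisher-information and deformation functions are toy stand-ins, not the paper's system.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Toy stand-ins: Fisher information and state deformation both grow
# with the control amplitudes u = (u1, u2).
def fisher(u):
    return np.log1p(4.0 * u[0] ** 2 + u[1] ** 2)

def deform(u):
    return 0.3 * u[0] ** 2 + 0.8 * u[1] ** 2

eps = 0.5  # admissible deformation budget (the epsilon of the method)
res = minimize(lambda u: -fisher(u),  # maximize Fisher information
               x0=[0.1, 0.1],
               method="trust-constr",
               constraints=[NonlinearConstraint(deform, -np.inf, eps)])
print("controls:", res.x,
      "Fisher:", fisher(res.x), "deformation:", deform(res.x))
```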

  9. Imposing constraints on parameter values of a conceptual hydrological model using baseflow response

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.

    Calibration of conceptual hydrological models is frequently limited by a lack of data about the area being studied. The result is that a broad range of parameter values can be identified that give an equally good calibration to the available observations, usually of stream flow. The use of total stream flow can bias analyses towards interpretation of rapid runoff, whereas water quality issues are more frequently associated with low-flow conditions. This paper demonstrates how model distinctions between surface and sub-surface runoff can be used to define a likelihood measure based on the sub-surface (or baseflow) response. This helps to provide more information about the model behaviour, constrain the acceptable parameter sets and reduce uncertainty in streamflow prediction. A conceptual model, DIY, is applied to two contrasting catchments in Scotland, the Ythan and the Carron Valley. Parameter ranges and envelopes of prediction are identified using criteria based on total flow efficiency, baseflow efficiency and combined efficiencies. The individual parameter ranges derived using the combined efficiency measures still cover relatively wide bands, but are better constrained for the Carron than for the Ythan. This reflects the fact that hydrological behaviour in the Carron is dominated by a much flashier surface response than in the Ythan. Hence, the total flow efficiency is more strongly controlled by surface runoff in the Carron and there is a greater contrast with the baseflow efficiency. Comparison of the predictions using different efficiency measures for the Ythan also suggests that there is a danger of confusing parameter uncertainties with data and model error if inadequate likelihood measures are defined.
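
    A minimal sketch of likelihood measures of this kind, assuming Nash-Sutcliffe-type efficiencies on the total flow and the baseflow series; the weighting and the behavioural threshold are illustrative choices, since the paper does not give exact values here.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 indicates a perfect fit."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def combined_efficiency(sim_total, sim_base, obs_total, obs_base, w=0.5):
    """Weight total-flow and baseflow efficiencies into one measure."""
    return w * nse(sim_total, obs_total) + (1.0 - w) * nse(sim_base, obs_base)

def behavioural(sim_total, sim_base, obs_total, obs_base, threshold=0.6):
    """Accept a parameter set only if both responses are matched."""
    return (nse(sim_total, obs_total) > threshold and
            nse(sim_base, obs_base) > threshold)
```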

  10. Electric dipole moments in natural supersymmetry

    NASA Astrophysics Data System (ADS)

    Nakai, Yuichiro; Reece, Matthew

    2017-08-01

    We discuss electric dipole moments (EDMs) in the framework of CP-violating natural supersymmetry (SUSY). Recent experimental results have significantly tightened constraints on the EDMs of electrons and of mercury, and substantial further progress is expected in the near future. We assess how these results constrain the parameter space of natural SUSY. In addition to our discussion of SUSY, we provide a set of general formulas for two-loop fermion EDMs, which can be applied to a wide range of models of new physics. In the SUSY context, the two-loop effects of stops and charginos respectively constrain the phases of A_t μ and M_2 μ to be small in the natural part of parameter space. If the Higgs mass is lifted to 125 GeV by a new tree-level superpotential interaction and soft term with CP-violating phases, significant EDMs can arise from the two-loop effects of W bosons and tops. We compare the bounds arising from EDMs to those from other probes of new physics including colliders, b → sγ, and dark matter searches. Importantly, improvements in reach not only constrain higher masses, but require the phases to be significantly smaller in the natural parameter space at low mass. The required smallness of phases sharpens the CP problem of natural SUSY model building.

  11. Stability analysis in tachyonic potential chameleon cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farajollahi, H.; Salehi, A.; Tayebi, F.

    2011-05-01

    We study general properties of attractors for the tachyonic potential chameleon scalar-field model, which possesses cosmological scaling solutions. An analytic formulation is given to obtain fixed points, with a discussion of their stability. The model predicts a dynamical equation of state parameter with phantom-crossing behavior for an accelerating universe. We constrain the parameters of the model by best-fitting to recent supernovae data-sets and to simulated data points for the redshift drift experiment generated by Monte Carlo simulations.

  12. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design used a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, under the single best model, variances that stem from uncertainty in the model structure are ignored. Second, a best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desired reliability. However, under the single best model, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage of the scavenger well. The results showed that, using models at different levels of the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate changed. Thus, we conclude that the optimal pumping rate is sensitive to the prediction variance. The prediction variance in turn changes with the extraction rate: a very high extraction rate causes prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA model is used.
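
    The reliability evaluation behind a chance-constrained design can be sketched as follows; the function names, the ensemble and the concentration standard are hypothetical, intended only to show how a chance constraint is checked against a model ensemble.

```python
import numpy as np

def design_reliability(chloride_samples, standard):
    """Fraction of ensemble predictions meeting the concentration standard.

    chloride_samples: predicted chloride concentrations at the production
    wells, pooled over the (weighted) HBMA model ensemble.
    """
    return np.mean(np.asarray(chloride_samples) <= standard)

def feasible(chloride_samples, standard, alpha=0.90):
    """Chance constraint P(concentration <= standard) >= alpha."""
    return design_reliability(chloride_samples, standard) >= alpha

# Example with invented numbers: 500 ensemble predictions (mg/L)
# checked against a 250 mg/L standard at 90% reliability.
rng = np.random.default_rng(7)
preds = rng.lognormal(mean=5.0, sigma=0.3, size=500)
print(design_reliability(preds, 250.0), feasible(preds, 250.0))
```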

  13. A generalized fuzzy credibility-constrained linear fractional programming approach for optimal irrigation water allocation under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Guo, Ping

    2017-10-01

    Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor for system efficiency, compared with the credibility level. These results can effectively support reasonable irrigation water resources management and agricultural production.

  14. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed with a log-normal distribution rather than a general normal distribution, avoiding deviations in solutions caused by unrealistic distributional assumptions. Agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show the characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The results show that the proposed model can help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
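
    A key convenience of the log-normal assumption is that the chance constraint still has a closed-form deterministic equivalent; a minimal sketch with invented distribution parameters follows (the constraint form is the standard textbook one, not necessarily the paper's exact formulation).

```python
import numpy as np
from scipy.stats import norm

def lognormal_capacity_limit(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint P(x <= B) >= alpha
    when the right-hand side B is log-normal with log-mean mu and log-sd
    sigma: the decision variable must satisfy
        x <= exp(mu + sigma * Phi^{-1}(1 - alpha)).
    """
    return np.exp(mu + sigma * norm.ppf(1.0 - alpha))

# Example with invented numbers: allowable pollutant load when the
# receiving capacity has log-mean 2.0 and log-sd 0.4, at three
# constraint-satisfaction levels.
for alpha in (0.80, 0.90, 0.95):
    print(alpha, round(lognormal_capacity_limit(2.0, 0.4, alpha), 3))
```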

  15. Recovering a Probabilistic Knowledge Structure by Constraining Its Parameter Space

    ERIC Educational Resources Information Center

    Stefanutti, Luca; Robusto, Egidio

    2009-01-01

    In the Basic Local Independence Model (BLIM) of Doignon and Falmagne ("Knowledge Spaces," Springer, Berlin, 1999), the probabilistic relationship between the latent knowledge states and the observable response patterns is established by the introduction of a pair of parameters for each of the problems: a lucky guess probability and a careless…

  16. A Global Analysis of Light and Charge Yields in Liquid Xenon

    DOE PAGES

    Lenardo, Brian; Kazkaz, Kareem; Manalaysay, Aaron; ...

    2015-11-04

    Here, we present an updated model of light and charge yields from nuclear recoils in liquid xenon with a simultaneously constrained parameter set. A global analysis is performed using measurements of electron and photon yields compiled from all available historical data, as well as measurements of the ratio of the two. These data sweep over recoil energies at the keV scale and external applied electric fields measured in V/cm. The model is constrained by constructing global cost functions and using a simulated annealing algorithm and a Markov chain Monte Carlo approach to optimize and find confidence intervals on all free parameters in the model. This analysis contrasts with previous work in that we do not unnecessarily exclude datasets nor impose artificially conservative assumptions, do not use spline functions, and reduce the number of parameters used in NEST v0.98. We report our results and the calculated best-fit charge and light yields. These quantities are crucial to understanding the response of liquid xenon detectors in the energy regime important for rare event searches such as the direct detection of dark matter particles.
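
    The two-stage strategy, a global stochastic optimization followed by an MCMC exploration around the optimum, can be sketched on a toy cost function; the data, model form and proposal scales below are invented and only illustrate the workflow, not the NEST analysis itself.

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(0)

# Toy yield curve standing in for the global cost function: two free
# parameters (a, b) fit to synthetic "measurements" with noise.
energies = np.linspace(1.0, 50.0, 20)
data = 10.0 * energies ** 0.7 + rng.normal(0.0, 2.0, 20)

def cost(p):
    a, b = p
    return np.sum((a * energies ** b - data) ** 2)

# Stage 1: simulated annealing locates the global minimum of the cost.
best = dual_annealing(cost, bounds=[(0.1, 50.0), (0.1, 2.0)])

# Stage 2: a crude Metropolis walk around the optimum maps out
# confidence intervals (the cost is treated as a chi-square).
chain, p = [], best.x.copy()
for _ in range(5000):
    prop = p + rng.normal(0.0, [0.1, 0.005])
    if rng.uniform() < np.exp(0.5 * (cost(p) - cost(prop))):
        p = prop
    chain.append(p.copy())
print(best.x, np.percentile(chain, [16, 84], axis=0))
```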

  17. Constraining magma physical properties and its temporal evolution from InSAR and topographic data only: a physics-based eruption model for the effusive phase of the Cordon Caulle 2011-2012 rhyodacitic eruption

    NASA Astrophysics Data System (ADS)

    Delgado, F.; Kubanek, J.; Anderson, K. R.; Lundgren, P.; Pritchard, M. E.

    2017-12-01

    The 2011-2012 eruption of Cordón Caulle volcano in Chile is the best scientifically observed rhyodacitic eruption and is thus a key place to understand the dynamics of these rare but powerful explosive rhyodacitic eruptions. Because the volatile phase controls both the temporal evolution of the eruption and the eruptive style, either explosive or effusive, it is important to constrain the physical parameters that drive these eruptions. The eruption began explosively and after two weeks evolved into a hybrid explosive - lava flow effusion whose volume-time evolution we constrain with a series of TanDEM-X digital elevation models. Our data show the intrusion of a large-volume laccolith or cryptodome during the first 2.5 months of the eruption and lava flow effusion only afterwards, with a total volume of 1.4 km3. InSAR data from the ENVISAT and TerraSAR-X missions show more than 2 m of subsidence during the effusive eruption phase, produced by deflation of a finite spheroidal source at a depth of 5 km. In order to constrain the magma total H2O content, crystal cargo, and reservoir pressure drop, we numerically solve the coupled equations of a pressurized magma reservoir and magma conduit flow with time-dependent density, volatile exsolution and viscosity, which we use to invert the InSAR and topographic data time series. We compare the best-fit model parameters with independent estimates of magma viscosity and total gas content measured from lava samples. Preliminary modeling shows that although it is not possible to fit both the InSAR and the topographic data during the onset of the laccolith emplacement, it is possible to constrain the magma H2O and crystal contents to 4 wt% and 30%, which agree well with published literature values.

  18. A Short Note on Estimating the Testlet Model with Different Estimators in Mplus

    ERIC Educational Resources Information Center

    Luo, Yong

    2018-01-01

    Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and…

  19. On the mutual relationship between conceptual models and datasets in geophysical monitoring of volcanic systems

    NASA Astrophysics Data System (ADS)

    Neuberg, J. W.; Thomas, M.; Pascal, K.; Karl, S.

    2012-04-01

    Geophysical datasets are essential to guide, in particular, short-term forecasting of volcanic activity. Key parameters are derived from these datasets and interpreted in different ways; however, the biggest impact on the interpretation is not determined by the range of parameters but is controlled by the parameterisation and the underlying conceptual model of the volcanic process. On the other hand, an increasing number of sophisticated geophysical models need to be constrained by monitoring data to transform a merely numerical exercise into a useful forecasting tool. We utilise datasets from the "big three", seismology, deformation and gas emissions, to gain insight into the mutual relationship between conceptual models and constraining data. We show that, for example, the same seismic dataset can be interpreted with respect to a wide variety of different models, with very different implications for forecasting. In turn, different data processing procedures lead to different outcomes even though they are based on the same conceptual model. Unsurprisingly, the most reliable interpretation will be achieved by employing multi-disciplinary models with overlapping constraints.

  20. Using dry and wet year hydroclimatic extremes to guide future hydrologic projections

    NASA Astrophysics Data System (ADS)

    Oni, Stephen; Futter, Martyn; Ledesma, Jose; Teutschbein, Claudia; Buttle, Jim; Laudon, Hjalmar

    2016-07-01

    There is a growing number of studies on climate change impacts on forest hydrology, but limited attempts have been made to use current hydroclimatic variability to constrain projections of future climatic conditions. Here we used historical wet and dry years as a proxy for expected future extreme conditions in a boreal catchment. We showed that runoff could be underestimated by at least 35% when dry-year parameterizations were used for wet-year conditions. Uncertainty analysis showed that behavioural parameter sets from wet and dry years separated mainly on precipitation-related parameters and, to a lesser extent, on parameters related to landscape processes, while uncertainties inherent in climate models (as opposed to differences in calibration or performance metrics) appeared to drive the overall uncertainty in runoff projections under dry and wet hydroclimatic conditions. Hydrologic model calibration for climate impact studies could be based on years that closely approximate anticipated conditions to better constrain uncertainty in projecting extreme conditions in boreal and temperate regions.

  1. The importance of diverse data types to calibrate a watershed model of the Trout Lake Basin, Northern Wisconsin, USA

    USGS Publications Warehouse

    Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.

    2006-01-01

    As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code: (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types; and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.

  2. On the nullspace of TLS multi-station adjustment

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen

    2018-07-01

    In the article we present an analytic treatment of TLS multi-station least-squares adjustment, with the main focus on the datum problem. In contrast to previously published research, the datum problem is analyzed and solved theoretically, the solution being based on deriving the nullspace of the mathematical model. The importance of solving the datum problem lies in a complete description of TLS multi-station adjustment solutions as the set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.

  3. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining the parameters of a theoretical model from observational data is an important exercise in cosmology. Many theoretically motivated models demand a greater number of cosmological parameters than the standard model of cosmology uses, which makes the problem of parameter estimation challenging. It is common practice to employ the Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there is a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
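
    A minimal particle swarm optimizer is only a few lines; the sketch below uses the standard PSO update rules and a toy cost surface in place of a real CMB likelihood, with inertia and acceleration coefficients set to common textbook values rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(cost, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (minimization).

    Velocities are pulled toward each particle's personal best and the
    swarm's global best; to maximize a likelihood, pass the negative
    log-likelihood as `cost`.
    """
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Toy stand-in for a likelihood surface (Rosenbrock valley).
cost = lambda p: (1.0 - p[0]) ** 2 + 100.0 * (p[1] - p[0] ** 2) ** 2
print(pso(cost, [(-2.0, 2.0), (-1.0, 3.0)]))
```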

  4. Deduction as Stochastic Simulation

    DTIC Science & Technology

    2013-07-01

    different tokens representing entities that it contains. The second parameter constrains the contents of a model, and in particular the different … of premises. In summary, the system manipulates stochastically the size, the contents, and the revisions of models. We now describe in detail each … [Figure residue removed: histograms of the contents of a mental model (parameter ε) for λ = 4 and λ = 5.]

  5. Fermion masses in SO(10)

    NASA Astrophysics Data System (ADS)

    Jungman, Gerard

    1992-11-01

    Yukawa-coupling-constant unification together with the known fermion masses is used to constrain SO(10) models. We consider the case of one (heavy) generation, with the tree-level relation m_b = m_τ, calculating the limits on the intermediate scales due to the known limits on fermion masses. This analysis extends previous analyses which addressed only the simplest symmetry-breaking schemes. In the case where the low-energy model is the standard model with one Higgs doublet, there are very strong constraints due to the known limits on the top-quark mass and the τ-neutrino mass. The two-Higgs-doublet case is less constrained, though we can make progress in constraining this model also. We identify those parameters to which the viability of the model is most sensitive. We also discuss the "triviality" bounds on m_t obtained from the analysis of the Yukawa renormalization-group equations. Finally we address the role of a speculative constraint on the τ-neutrino mass, arising from the cosmological implications of anomalous B+L violation in the early Universe.

  6. Constraining Unsaturated Hydraulic Parameters Using the Latin Hypercube Sampling Method and Coupled Hydrogeophysical Approach

    NASA Astrophysics Data System (ADS)

    Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.

    2017-12-01

    The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner, eliminating the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have demonstrated the advantages of the coupled approach; however, only a few attempts have been made to apply it to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, K_s, n, θ_r and α, from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. The van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to the hydrological models inferred from the van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters, and (3) correct for the influence of subsurface temperature fluctuations on the electrical resistivity data during the infiltration experiment. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
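
    The Latin hypercube sampling step can be reproduced with scipy's quasi-Monte Carlo module; the parameter ranges below are illustrative, not the values used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative ranges for the four van Genuchten-Mualem parameters:
# K_s (m/s), n (-), theta_r (-), alpha (1/m); not the study's values.
lower = [1e-6, 1.1, 0.01, 0.5]
upper = [1e-4, 3.0, 0.10, 5.0]

sampler = qmc.LatinHypercube(d=4, seed=0)
unit = sampler.random(n=200)             # 200 points in [0, 1)^4
samples = qmc.scale(unit, lower, upper)  # stretched to the parameter ranges

# Each row is one candidate parameter set for the coupled
# hydrogeophysical forward model.
print(samples[:3])
```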

  7. Enhancing model prediction reliability through improved soil representation and constrained model auto-calibration - A paired watershed study

    USDA-ARS?s Scientific Manuscript database

    Process based and distributed watershed models possess a large number of parameters that are not directly measured in field and need to be calibrated through matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...

  8. Warm inflationary model in loop quantum cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herrera, Ramon

    A warm inflationary universe model in loop quantum cosmology is studied. In general we discuss the condition of inflation in this framework. By using a chaotic potential, V(φ) ∝ φ², we develop a model where the dissipation coefficient Γ = Γ₀ = constant. We use recent astronomical observations for constraining the parameters appearing in our model.

  9. Lensing convergence in galaxy clustering in ΛCDM and beyond

    NASA Astrophysics Data System (ADS)

    Villa, Eleonora; Di Dio, Enea; Lepori, Francesca

    2018-04-01

    We study the impact of neglecting lensing magnification in galaxy clustering analyses for future galaxy surveys, considering the ΛCDM model and two extensions: massive neutrinos and modifications of General Relativity. Our study focuses on the biases in the constraints and in the estimation of the cosmological parameters. We perform a comprehensive investigation of these two effects for the upcoming photometric and spectroscopic galaxy surveys Euclid and SKA for different redshift binning configurations. We also provide a fitting formula for the magnification bias of SKA. Our results show that the information present in the lensing contribution does improve the constraints on the modified gravity parameters, whereas the lensing constraining power is negligible for the ΛCDM parameters. For photometric surveys, the estimation is biased for all the parameters if lensing is not taken into account. This effect is particularly significant for the modified gravity parameters. Conversely, for spectroscopic surveys the bias is below one sigma for all the parameters. Our findings show the importance of including lensing in galaxy clustering analyses for testing General Relativity and for constraining the parameters which describe its modifications.

  10. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
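
    A rough sketch of the constrained-regression view: non-negative least squares yields the feature weights, and a textbook regression formula on the active (non-zero) set gives approximate theoretical standard errors. The coding matrix and data are invented, and the paper's exact estimator for the constrained case may differ.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Hypothetical feature coding: X[i, f] = 1 if feature f distinguishes
# object pair i; y holds the observed proximities.
X = rng.integers(0, 2, size=(45, 6)).astype(float)
true_w = np.array([0.9, 0.0, 0.4, 1.2, 0.0, 0.3])
y = X @ true_w + rng.normal(0.0, 0.05, 45)

# Positivity-restricted least squares estimates of the feature weights.
w_hat, rnorm = nnls(X, y)

# Approximate theoretical standard errors from the usual regression
# formula, restricted to the active (non-zero) parameters.
active = w_hat > 0
XtX_inv = np.linalg.inv(X[:, active].T @ X[:, active])
sigma2 = rnorm ** 2 / (len(y) - active.sum())
print(w_hat, np.sqrt(sigma2 * np.diag(XtX_inv)))
```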

  11. Supplementary data of “Impacts of mesic and xeric urban vegetation on outdoor thermal comfort and microclimate in Phoenix, AZ”

    PubMed Central

    Song, Jiyun; Wang, Zhi-Hua

    2015-01-01

    An advanced Markov chain Monte Carlo approach called Subset Simulation, described in Au and Beck (2001) [1], was used to quantify parameter uncertainty and model sensitivity of the urban land-atmosphere framework, viz. the coupled urban canopy model-single column model (UCM-SCM). The results show that the atmospheric dynamics are sensitive to land surface conditions. The most sensitive parameters are dimensional parameters, i.e. roof width, aspect ratio, and the roughness lengths for heat and momentum, since these parameters control the magnitude of the sensible heat flux. The relatively insensitive parameters are hydrological parameters, since the lawns and green roofs in urban areas are regularly irrigated, so that water availability for evaporation is never constrained. PMID:26702421

  12. Constraining sterile neutrinos with AMANDA and IceCube atmospheric neutrino data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Esmaili, Arman; Peres, O.L.G.; Halzen, Francis, E-mail: aesmaili@ifi.unicamp.br, E-mail: halzen@icecube.wisc.edu, E-mail: orlando@ifi.unicamp.br

    2012-11-01

    We demonstrate that atmospheric neutrino data accumulated with the AMANDA and the partially deployed IceCube experiments constrain the allowed parameter space for a hypothesized fourth, sterile neutrino beyond the reach of a combined analysis of all other experiments, for Δm²_41 ≲ 1 eV². Although the IceCube data dominate the statistics of the analysis, the advantage of a combined analysis of AMANDA and IceCube data is that it partially remedies as-yet-unknown instrumental systematic uncertainties. We also illustrate the sensitivity of the completed IceCube detector, which is now taking data, to the parameter space of the 3+1 model.

  13. Comparing Multiple-Group Multinomial Log-Linear Models for Multidimensional Skill Distributions in the General Diagnostic Model. Research Report. ETS RR-08-35

    ERIC Educational Resources Information Center

    Xu, Xueli; von Davier, Matthias

    2008-01-01

    The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…

  14. Validity of strong lensing statistics for constraints on the galaxy evolution model

    NASA Astrophysics Data System (ADS)

    Matsumoto, Akiko; Futamase, Toshifumi

    2008-02-01

    We examine the usefulness of strong lensing statistics for constraining the evolution of the number density of lensing galaxies, adopting the values of the cosmological parameters determined by recent Wilkinson Microwave Anisotropy Probe observations. For this purpose, we employ the lens-redshift test proposed by Kochanek and constrain the parameters in two evolution models: a simple power-law model characterized by the indexes ν_n and ν_v, and the evolution model of Mitchell et al. based on the cold dark matter structure formation scenario. We use the well-defined lens sample from the Sloan Digital Sky Survey (SDSS); this sample is similar in size to those used in previous studies. Furthermore, we adopt the velocity dispersion function of early-type galaxies based on SDSS DR1 and DR5. It turns out that the indexes of the power-law model are consistent with previous studies; thus our results indicate mild evolution in the number and velocity dispersion of early-type galaxies out to z = 1. However, we find that the values of p and q used by Mitchell et al. are inconsistent with the presently available observational data. A more complete sample is necessary to draw more realistic determinations of these parameters.

  15. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete-time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  16. Constraints on the dark matter and dark energy interactions from weak lensing bispectrum tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Rui; Feng, Chang; Wang, Bin, E-mail: an_rui@sjtu.edu.cn, E-mail: chang.feng@uci.edu, E-mail: wang_b@sjtu.edu.cn

    We estimate uncertainties of cosmological parameters for phenomenological interacting dark energy models using the weak lensing convergence power spectrum and bispectrum. We focus on bispectrum tomography and examine how well the weak lensing bispectrum with tomography can constrain the interactions between dark sectors, as well as other cosmological parameters. Employing the Fisher matrix analysis, we forecast parameter uncertainties derived from weak lensing bispectra with a two-bin tomography and place upper bounds on the strength of the interactions between the dark sectors. The cosmic shear will be measured from upcoming weak lensing surveys with high sensitivity, which enables us to use the higher-order correlation functions of weak lensing to constrain the interaction between dark sectors and will potentially provide more stringent results when combined with other observations.

  17. Nucleosynthesis of Iron-Peak Elements in Type-Ia Supernovae

    NASA Astrophysics Data System (ADS)

    Leung, Shing-Chi; Nomoto, Ken'ichi

    The observed features of typical Type Ia supernovae are well modeled as the explosions of carbon-oxygen white dwarfs both near the Chandrasekhar mass and below it. However, observations in the last decade have shown that Type Ia supernovae exhibit a wide diversity, which implies that models covering a wider range of parameters are necessary. Based on the hydrodynamics code we developed, we carry out a parameter study of Chandrasekhar-mass models for Type Ia supernovae. We conduct a series of two-dimensional hydrodynamics simulations of the explosion phase using the turbulent flame model with deflagration-detonation transition (DDT). To reconstruct the nucleosynthesis history, we use a particle tracer scheme. We examine the role of the model parameters by assessing their influence on the final nucleosynthesis products. The parameters include the initial density, metallicity, initial flame structure, detonation criteria and so on. We show that the observed chemical evolution of galaxies can help constrain these model parameters.

  18. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight into the data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure on overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.

  19. Model-independent cosmological constraints from growth and expansion

    NASA Astrophysics Data System (ADS)

    L'Huillier, Benjamin; Shafieloo, Arman; Kim, Hyungjin

    2018-05-01

    Reconstructing the expansion history of the Universe from Type Ia supernovae data, we fit the growth rate measurements and put model-independent constraints on some key cosmological parameters, namely, Ωm, γ, and σ8. The constraints are consistent with those from the concordance model within the framework of general relativity, but the current quality of the data is not sufficient to rule out modified gravity models. Adding the condition that dark energy density should be positive at all redshifts, independently of its equation of state, further constrains the parameters and interestingly supports the concordance model.

  20. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band-limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity (measured with surface electrodes on the extensor and flexor muscles of the ankle joint) are recorded. Autoregressive moving average (ARMA) models are developed. A parameter-constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
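
    Model order selection of this kind is commonly done by fitting candidate ARMA orders and comparing an information criterion; a minimal sketch on a synthetic series follows (the statsmodels-based grid search is our illustration, not the paper's procedure).

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)

# Synthetic stand-in for the torque-angle series: an ARMA(2,1) process.
n = 500
y, e = np.zeros(n), rng.normal(0.0, 1.0, n)
for t in range(2, n):
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + e[t] + 0.4 * e[t - 1]

# Select the model order over a small grid by minimizing AIC.
orders = [(p, q) for p in range(1, 4) for q in range(0, 3)]
best = min(orders, key=lambda pq: ARIMA(y, order=(pq[0], 0, pq[1])).fit().aic)
print("selected (p, q):", best)
```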

  1. Neutrino-two-Higgs-doublet model with the inverse seesaw mechanisms

    NASA Astrophysics Data System (ADS)

    Tang, Yi-Lei; Zhu, Shou-hua

    2017-09-01

    In this paper, we combine the ν-two-Higgs-doublet model with the inverse seesaw mechanisms. In this model, the Yukawa couplings involving the sterile neutrinos and the exotic Higgs bosons can be of order 1 in the case of a large tan β. We calculate the corrections to the Z-resonance parameters R_li, A_li, and N_ν, together with the l_1 → l_2 γ branching ratios and the muon anomalous g−2. Compared with the current bounds and plans for the future colliders, we find that the corrections to the electroweak parameters can be constrained or discovered in much of the parameter space.

  2. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for carbon cycle studies

    USGS Publications Warehouse

    He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model at each of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type (PFT) level, and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were used directly to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that the calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of the parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be representative of the performance of species-level simulations, while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.

  3. Lepton flavor violating B meson decays via a scalar leptoquark

    NASA Astrophysics Data System (ADS)

    Sahoo, Suchismita; Mohanta, Rukmani

    2016-06-01

    We study the effect of scalar leptoquarks in lepton flavor violating B meson decays induced by the flavor-changing transitions b → q l_i⁺ l_j⁻ with q = s, d. In the standard model, these transitions are extremely rare as they are either two-loop suppressed or proceed via box diagrams with tiny neutrino masses in the loop. However, in the leptoquark model, they can occur at tree level and are expected to have significantly large branching ratios. The leptoquark parameter space is constrained using the experimental limits on the branching ratios of B_q → l⁺l⁻ processes. Using this constrained parameter space, we predict the branching ratios of LFV semileptonic B meson decays, such as B⁺ → K⁺(π⁺) l_i⁺ l_j⁻, B⁺ → (K*⁺, ρ⁺) l_i⁺ l_j⁻, and B_s → φ l_i⁺ l_j⁻, which are found to be within the experimental reach of LHCb and the upcoming Belle II experiments. We also investigate the rare leptonic K_{L,S} → μ⁺μ⁻ (e⁺e⁻) and K_L → μ∓e± decays in the leptoquark model.

  4. Automated parameter tuning applied to sea ice in a global climate model

    NASA Astrophysics Data System (ADS)

    Roach, Lettie A.; Tett, Simon F. B.; Mineter, Michael J.; Yamazaki, Kuniko; Rae, Cameron D.

    2018-01-01

    This study investigates the hypothesis that a significant portion of spread in climate model projections of sea ice is due to poorly-constrained model parameters. New automated methods for optimization are applied to historical sea ice in a global coupled climate model (HadCM3) in order to calculate the combination of parameters required to reduce the difference between simulation and observations to within the range of model noise. The optimized parameters result in a simulated sea-ice time series which is more consistent with Arctic observations throughout the satellite record (1980-present), particularly in the September minimum, than the standard configuration of HadCM3. Divergence from observed Antarctic trends and mean regional sea ice distribution reflects broader structural uncertainty in the climate model. We also find that the optimized parameters do not cause adverse effects on the model climatology. This simple approach provides evidence for the contribution of parameter uncertainty to spread in sea ice extent trends and could be customized to investigate uncertainties in other climate variables.
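
    Schematically, this kind of automated tuning can be thought of as minimizing a noise-normalized misfit between simulated and observed time series. The sketch below uses a fabricated stand-in model with invented parameter names (albedo, diffusivity) and noise level; it is not the HadCM3 workflow itself:

```python
# Toy sketch of automated parameter tuning: find parameters that pull a
# simulated time series to within the assumed noise of the observations.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
years = np.arange(1980, 2021)

def sea_ice_model(albedo, diffusivity):
    """Placeholder model: trend plus parameter-dependent offset/slope."""
    return 8.0 - 0.05 * (years - 1980) * diffusivity + 4.0 * (albedo - 0.5)

obs = sea_ice_model(0.55, 1.2) + rng.normal(0.0, 0.3, years.size)
sigma_noise = 0.3  # assumed internal-variability noise

def cost(params):
    albedo, diffusivity = params
    resid = (sea_ice_model(albedo, diffusivity) - obs) / sigma_noise
    return np.mean(resid**2)  # ~1 when the simulation is within noise

result = differential_evolution(cost, bounds=[(0.3, 0.8), (0.5, 2.0)], seed=1)
print(result.x, result.fun)  # optimized parameters; cost near 1 is "good"
```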

  5. Experimental design approach to the process parameter optimization for laser welding of martensitic stainless steels in a constrained overlap configuration

    NASA Astrophysics Data System (ADS)

    Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.

    2011-02-01

    This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models relating the welding parameters to each of the weld characteristics was developed and validated both statistically and experimentally. The quality criteria set for the weld were the minimization of weld width and the maximization of weld penetration depth, resistance length, and shearing force. Laser power and welding speed in the ranges 855-930 W and 4.50-4.65 m/min, respectively, with a fiber diameter of 300 μm were identified as the optimal set of process parameters. However, the laser power can be reduced to 800-840 W and the welding speed increased to 4.75-5.37 m/min to obtain stronger and better welds.
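
    The experimental design approach amounts to response-surface modeling: fit low-order polynomial models of each weld characteristic as a function of the process parameters, then search them for the optimum. The sketch below uses fabricated design points and responses; only the structure follows the description above:

```python
# Sketch of a response-surface approach: fit a quadratic model of one weld
# characteristic vs. the process parameters, then grid-search the optimum.
import numpy as np

# Design matrix: (laser power [W], welding speed [m/min]); invented data
X = np.array([[855, 4.50], [855, 4.65], [930, 4.50], [930, 4.65],
              [892, 4.57], [892, 4.50], [892, 4.65], [855, 4.57], [930, 4.57]])
depth = np.array([0.61, 0.55, 0.78, 0.71, 0.66, 0.70, 0.60, 0.58, 0.74])  # mm

def quad_features(X):
    p, v = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(p), p, v, p * v, p**2, v**2])

coef, *_ = np.linalg.lstsq(quad_features(X), depth, rcond=None)

# Grid search for maximum predicted penetration depth within the design box
P, V = np.meshgrid(np.linspace(855, 930, 76), np.linspace(4.50, 4.65, 31))
grid = np.column_stack([P.ravel(), V.ravel()])
pred = quad_features(grid) @ coef
best = grid[np.argmax(pred)]
print("predicted-optimal power/speed:", best)
```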

  6. Constraints on Dark Energy from Baryon Acoustic Peak and Galaxy Cluster Gas Mass Measurements

    NASA Astrophysics Data System (ADS)

    Samushia, Lado; Ratra, Bharat

    2009-10-01

    We use baryon acoustic peak measurements by Eisenstein et al. and Percival et al., together with the Wilkinson Microwave Anisotropy Probe (WMAP) measurement of the apparent acoustic horizon angle, and galaxy cluster gas mass fraction measurements of Allen et al., to constrain a slowly rolling scalar field dark energy model, phiCDM, in which the dark energy density changes in time. We also compare our phiCDM results with those derived for two more common dark energy models: the time-independent cosmological constant model, ΛCDM, and the XCDM parameterization of dark energy's equation of state. For time-independent dark energy, the Percival et al. measurements effectively constrain spatial curvature and favor a model close to spatially flat, mostly due to the WMAP cosmic microwave background prior used in the analysis. In a spatially flat model the Percival et al. data less effectively constrain time-varying dark energy. The joint baryon acoustic peak and galaxy cluster gas mass constraints on the phiCDM model are consistent with, but tighter than, those derived from other data. A time-independent cosmological constant in a spatially flat model provides a good fit to the joint data, while the α parameter in the inverse power-law potential phiCDM model is constrained to be less than about 4 at the 3σ confidence level.

  7. Observational Role of Dark Matter in f(R) Models for Structure Formation

    NASA Astrophysics Data System (ADS)

    Verma, Murli Manohar; Yadav, Bal Krishna

    The fixed points of the dynamical system in the phase space have been calculated with dark matter in f(R) gravity models. The stability conditions of these fixed points are obtained in the ongoing accelerated phase of the universe, and the values of the Hubble parameter and the Ricci scalar are obtained for various evolutionary stages of the universe. We present a range of modifications of the general relativistic action consistent with the ΛCDM model. We elaborate upon the fact that upcoming cosmological observations would constrain the bounds on the possible forms of f(R) with greater precision, which could in turn constrain the search for dark matter in colliders.

  8. A constrained reconstruction technique of hyperelasticity parameters for breast cancer assessment

    NASA Astrophysics Data System (ADS)

    Mehrabian, Hatef; Campbell, Gordon; Samani, Abbas

    2010-12-01

    In breast elastography, breast tissue usually undergoes large compression resulting in significant geometric and structural changes. This implies that breast elastography is associated with tissue nonlinear behavior. In this study, an elastography technique is presented and an inverse problem formulation is proposed to reconstruct parameters characterizing tissue hyperelasticity. Such parameters can potentially be used for tumor classification. This technique can also have other important clinical applications such as measuring normal tissue hyperelastic parameters in vivo; such parameters are essential in planning and conducting computer-aided interventional procedures. Parameter reconstruction is formulated as an inverse problem and solved with a constrained iterative inversion; a nonlinear finite element model serves as the corresponding forward problem. In this research, we applied the Veronda-Westmann, Yeoh, and polynomial models to model tissue hyperelasticity. To validate the proposed technique, we conducted studies involving numerical and tissue-mimicking phantoms. The numerical phantom consisted of a hemisphere connected to a cylinder, while we constructed the tissue-mimicking phantom from polyvinyl alcohol subjected to freeze-thaw cycles, which exhibits nonlinear mechanical behavior. Both phantoms consisted of three soft-tissue types mimicking adipose tissue, fibroglandular tissue, and a tumor. The results of the simulations and experiments show the feasibility of accurate reconstruction of tumor tissue hyperelastic parameters using the proposed method. In the numerical phantom, all hyperelastic parameters corresponding to the three models were reconstructed with less than 2% error. With the tissue-mimicking phantom, we were able to reconstruct the ratios of the hyperelastic parameters reasonably accurately: compared to the uniaxial test results, the average error of the parameter ratios reconstructed for the inclusion relative to the middle and external layers was 13% and 9.6%, respectively. Given that the parameter ratios of abnormal to normal tissues range from three times to more than ten times, this accuracy is sufficient for tumor classification.
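
    The constrained iterative inversion can be sketched as a loop that runs the forward model, compares simulated with measured displacements, and updates the parameters subject to bounds. In the sketch below the nonlinear finite element solve is replaced by a fabricated stand-in response, and a simple Gauss-Newton update is used for illustration:

```python
# Hedged sketch of constrained iterative inversion for hyperelastic
# parameters. The forward model is a stand-in for the nonlinear FE solve.
import numpy as np

def forward_model(params):
    """Placeholder for the FE solve: parameters -> nodal displacements."""
    c10, c20 = params                      # illustrative coefficients
    x = np.linspace(0.05, 1.0, 20)
    return x / (c10 + c20 * x)             # fabricated nonlinear response

measured = forward_model(np.array([2.0, 5.0]))   # synthetic "observation"

params = np.array([1.0, 1.0])                    # initial guess
lo, hi = np.array([0.1, 0.1]), np.array([50.0, 50.0])  # constraint box

for it in range(50):
    resid = forward_model(params) - measured
    # Finite-difference Jacobian of displacements w.r.t. parameters
    J = np.column_stack([
        (forward_model(params + h * np.eye(2)[k]) - forward_model(params)) / h
        for k, h in enumerate(1e-6 * np.maximum(params, 1.0))
    ])
    step, *_ = np.linalg.lstsq(J, -resid, rcond=None)
    params = np.clip(params + step, lo, hi)  # constrained Gauss-Newton step
    if np.linalg.norm(step) < 1e-9:          # a damped update would be
        break                                # more robust in practice
print(params)  # recovers ~[2.0, 5.0]
```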

  9. Constraining models of f(R) gravity with Planck and WiggleZ power spectrum data

    NASA Astrophysics Data System (ADS)

    Dossett, Jason; Hu, Bin; Parkinson, David

    2014-03-01

    In order to explain cosmic acceleration without invoking ``dark'' physics, we consider f(R) modified gravity models, which replace the standard Einstein-Hilbert action in General Relativity with a higher derivative theory. We use data from the WiggleZ Dark Energy Survey to probe the formation of structure on large scales, which can place tight constraints on these models. We combine the large-scale structure data with measurements of the cosmic microwave background from the Planck surveyor. After parameterizing the modification of the action using the Compton wavelength parameter B0, we constrain this parameter using ISiTGR, assuming an initial non-informative log prior probability distribution of this cross-over scale. We find that the addition of the WiggleZ power spectrum provides the tightest constraints to date on B0, improving on previous bounds by an order of magnitude and giving log10(B0) < -4.07 at the 95% confidence level. Finally, we test whether adding the lensing amplitude A_Lens and the sum of the neutrino masses Σmν is able to reconcile current tensions in these parameters, but find f(R) gravity an inadequate explanation.

  10. Top ten models constrained by b → sγ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hewett, J.L.

    1994-12-01

    The radiative decay b → sγ is examined in the Standard Model and in nine classes of models which contain physics beyond the Standard Model. The constraints which may be placed on these models from the recent results of the CLEO Collaboration on both inclusive and exclusive radiative B decays are summarized. Reasonable bounds are found for the parameters in some cases.

  11. Finding viable models in SUSY parameter spaces with signal specific discovery potential

    NASA Astrophysics Data System (ADS)

    Burgess, Thomas; Lindroos, Jan Øye; Lipniacka, Anna; Sandaker, Heidi

    2013-08-01

    Recent results from ATLAS giving a Higgs mass of 125.5 GeV further constrain already highly constrained supersymmetric models such as the pMSSM or CMSSM/mSUGRA. As a consequence, finding potentially discoverable and non-excluded regions of model parameter space is becoming increasingly difficult. Several groups have invested large effort in studying the consequences of Higgs mass bounds, upper limits on rare B-meson decays, and limits on the relic dark matter density for constrained models, aiming at predicting superpartner masses and establishing the likelihood of SUSY models compared to that of the Standard Model vis-à-vis experimental data. In this paper a framework for efficient search for discoverable, non-excluded regions of different SUSY spaces giving a specific experimental signature of interest is presented. The method employs an improved Markov Chain Monte Carlo (MCMC) scheme exploiting an iteratively updated likelihood function to guide the search for viable models. Existing experimental and theoretical bounds as well as the LHC discovery potential are taken into account, including recent bounds on the relic dark matter density, the Higgs sector, and rare B-meson decays. A clustering algorithm is applied to classify selected models according to expected phenomenology, enabling automated choice of experimental benchmarks and regions to be used for optimizing searches. The aim is to provide experimentalists with a viable tool that helps target the experimental signatures to search for, once a class of models of interest is established. As an example, a search for viable CMSSM models with τ-lepton signatures observable with the 2012 LHC data set is presented. In the search 105,209 unique models were probed. From these, ten reference benchmark points covering different ranges of phenomenological observables at the LHC were selected.
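
    The guided MCMC search can be illustrated with a bare-bones Metropolis sampler over a toy two-dimensional "model space"; the likelihood below is a fabricated stand-in for the paper's iteratively updated likelihood combining viability and discoverability:

```python
# Minimal Metropolis-style scan of a toy 2-D "model space", loosely in the
# spirit of the guided search described above (not the authors' code).
import numpy as np

rng = np.random.default_rng(42)

def log_like(theta):
    m0, m12 = theta
    # Toy: favor a banana-shaped "viable" region in (m0, m12) space
    return (-0.5 * ((m12 - 0.002 * m0**2) / 50.0) ** 2
            - 0.5 * ((m0 - 800) / 300) ** 2)

theta = np.array([500.0, 400.0])
chain, accepted = [], 0
for step in range(20000):
    prop = theta + rng.normal(0.0, [40.0, 30.0])   # random-walk proposal
    if np.log(rng.uniform()) < log_like(prop) - log_like(theta):
        theta, accepted = prop, accepted + 1
    chain.append(theta.copy())

chain = np.array(chain)
print("acceptance:", accepted / len(chain))
print("viable-region mean:", chain[5000:].mean(axis=0))  # discard burn-in
```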

  12. Parameter Tuning and Calibration of RegCM3 with MIT-Emanuel Cumulus Parameterization Scheme over CORDEX East Asian Domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Liwei; Qian, Yun; Zhou, Tianjun

    2014-10-01

    In this study, we calibrated the performance of the regional climate model RegCM3 with the Massachusetts Institute of Technology (MIT)-Emanuel cumulus parameterization scheme over the CORDEX East Asia domain by tuning seven selected parameters through the multiple very fast simulated annealing (MVFSA) sampling method. The seven parameters were selected based on previous studies, which customized RegCM3 with the MIT-Emanuel scheme in three different ways using sensitivity experiments. The responses of the model results to the seven parameters were investigated. Because the calibration is constrained by monthly total rainfall, the simulated spatial pattern of rainfall and the probability density function (PDF) of daily rainfall rates are significantly improved in the optimal simulation. Sensitivity analysis suggests that the parameter "relative humidity criterion" (RH), which was not considered in the default simulation, has the largest effect on the model results. The responses of total rainfall over different regions to RH were examined. Positive responses of total rainfall to RH are found over the northern equatorial western Pacific, contributed by positive responses of explicit rainfall. Following an increase in RH, increased low-level convergence and the associated increase in cloud water favor an increase in explicit rainfall. The optimal parameters identified by constraining total rainfall have positive effects on the low-level circulation and the surface air temperature. Furthermore, the parameters optimized for the extreme case are also suitable for a normal case and for the model's new version with a mixed convection scheme.
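
    The MVFSA method belongs to the very fast simulated annealing (VFSA) family. The following sketch shows the characteristic heavy-tailed, temperature-dependent VFSA move combined with a Metropolis acceptance rule; the objective function, bounds, and annealing schedule are toy choices, not those of the study:

```python
# Sketch of very fast simulated annealing (VFSA)-style parameter sampling.
import numpy as np

rng = np.random.default_rng(7)
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 100.0])

def cost(p):
    rh, conv_time = p
    return (rh - 0.83) ** 2 + ((conv_time - 40.0) / 100.0) ** 2  # fabricated

def vfsa_step(p, T):
    """VFSA neighborhood: heavy-tailed perturbation that narrows as T drops."""
    u = rng.uniform(size=p.size)
    y = np.sign(u - 0.5) * T * ((1.0 + 1.0 / T) ** np.abs(2.0 * u - 1.0) - 1.0)
    return np.clip(p + y * (hi - lo), lo, hi)

p = rng.uniform(lo, hi)
best, best_cost = p.copy(), cost(p)
for k in range(1, 3000):
    T = np.exp(-0.05 * k ** 0.5)               # annealing schedule (toy)
    q = vfsa_step(p, T)
    if cost(q) < cost(p) or rng.uniform() < np.exp((cost(p) - cost(q)) / T):
        p = q                                   # Metropolis acceptance
    if cost(p) < best_cost:
        best, best_cost = p.copy(), cost(p)
print(best, best_cost)   # converges near (0.83, 40)
```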

  13. Constraining the models' response of tropical low clouds to SST forcings using CALIPSO observations

    NASA Astrophysics Data System (ADS)

    Cesana, G.; Del Genio, A. D.; Ackerman, A. S.; Brient, F.; Fridlind, A. M.; Kelley, M.; Elsaesser, G.

    2017-12-01

    Low-cloud response to a warmer climate is still regarded as the largest source of uncertainty in the latest generation of climate models. To date there is no consensus among the models on whether tropical low cloudiness would increase or decrease in a warmer climate. In addition, it has been shown that, depending on their climate sensitivity, the models predict either deeper or shallower low clouds. Recently, several relationships between inter-model characteristics of the present-day climate and future climate changes have been highlighted. These so-called emergent constraints aim to target relevant model improvements and to constrain models' projections based on current-climate observations. Here we propose to use, for the first time, 10 years of CALIPSO cloud statistics to assess the ability of the models to represent the vertical structure of tropical low clouds for abnormally warm SST. We use a simulator approach to compare observations and simulations and focus on low-layered clouds (z < 3.2 km) as well as a more detailed level-by-level perspective (40 levels from 0 to 19 km). Results show that in most models an increase of the SST leads to a decrease of the low-layer cloud fraction. Vertically, the clouds deepen, mainly by decreasing the cloud fraction in the lowest levels and increasing it around the top of the boundary layer. This feature coincides with an increase of the high-level cloud fraction (z > 6.5 km). Although the models' spread is large, the multi-model mean captures the observed variations, but with a smaller amplitude. We then employ the GISS model to investigate how changes in cloud parameterizations affect the response of low clouds to warmer SSTs on the one hand, and how they affect the variations of the model's cloud profiles with respect to environmental parameters on the other. Finally, we use CALIPSO observations to constrain the model by determining i) what set of parameters allows reproducing the observed relationships and ii) what the consequences are for the cloud feedbacks. These results point toward process-oriented constraints of low-cloud responses to surface warming and environmental parameters.

  14. Improved Parameter-Estimation With MRI-Constrained PET Kinetic Modeling: A Simulation Study

    NASA Astrophysics Data System (ADS)

    Erlandsson, Kjell; Liljeroth, Maria; Atkinson, David; Arridge, Simon; Ourselin, Sebastien; Hutton, Brian F.

    2016-10-01

    Kinetic analysis can be applied both to dynamic PET and dynamic contrast-enhanced (DCE) MRI data. We have investigated the potential of MRI-constrained PET kinetic modeling using simulated [18F]2-FDG data for skeletal muscle. The volume of distribution, Ve, for the extra-vascular extra-cellular space (EES) is the link between the two models: it can be estimated by DCE-MRI and then used to reduce the number of parameters to estimate in the PET model. We used a 3-tissue-compartment model with 5 rate constants (3TC5k) in order to distinguish between the EES and the intra-cellular space (ICS). Time-activity curves were generated by simulation using the 3TC5k model for 3 different Ve values under basal and insulin-stimulated conditions. Noise was added and the data were fitted with the 2TC3k model and with the 3TC5k model with and without the Ve constraint. One hundred noise realisations were generated at 4 different noise levels. The results showed reductions in bias and variance with the Ve constraint in the 3TC5k model. We calculated the parameter k3", representing the combined effect of glucose transport across the cellular membrane and phosphorylation, as an extra outcome measure. For k3", the average coefficient of variation was reduced from 52% to 9.7%, while for k3 in the standard 2TC3k model it was 3.4%. The accuracy of the parameters estimated with our new modeling approach depends on the accuracy of the assumed Ve value. In conclusion, we have shown that, by utilising information that could be obtained from DCE-MRI in the kinetic analysis of [18F]2-FDG-PET data, it is in principle possible to obtain better parameter estimates with a more complex model, which may provide additional information as compared to the standard model.
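
    The benefit of fixing Ve from a second modality can be illustrated with a much simpler surrogate than the 3TC5k compartment system: fit the same noisy curves with and without one parameter held at its "MRI-measured" value, then compare the spread of the remaining estimates. Everything below (model form, rate values, noise level) is fabricated:

```python
# Toy illustration of constraining one kinetic parameter from a second
# modality: fix ve and fit the rest, then compare coefficients of variation.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
t = np.linspace(0.5, 60.0, 40)            # minutes

def tac(t, k1, k3, ve):
    """Fabricated time-activity curve with an EES volume fraction ve."""
    return k1 * (ve * np.exp(-0.3 * t) + (1.0 - ve) * (1.0 - np.exp(-k3 * t)))

true = dict(k1=0.8, k3=0.05, ve=0.15)
ve_mri = 0.15                              # "measured" by DCE-MRI

k3_free, k3_fixed = [], []
for _ in range(100):                       # noise realisations
    y = tac(t, **true) + rng.normal(0.0, 0.01, t.size)
    p_free, _ = curve_fit(tac, t, y, p0=[0.5, 0.1, 0.3],
                          bounds=([0, 0, 0], [5, 1, 1]))
    f_fixed = lambda t, k1, k3: tac(t, k1, k3, ve_mri)   # ve constrained
    p_fix, _ = curve_fit(f_fixed, t, y, p0=[0.5, 0.1],
                         bounds=([0, 0], [5, 1]))
    k3_free.append(p_free[1]); k3_fixed.append(p_fix[1])

cv = lambda x: 100 * np.std(x) / np.mean(x)
print(f"CV(k3) free ve: {cv(k3_free):.1f}%  fixed ve: {cv(k3_fixed):.1f}%")
```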

  15. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of magnetic resonance image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non-Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with adaptive adjustment of the penalization parameter. We propose a fast iterative algorithm and prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Łojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and penalization parameters by means of a continuation technique allows us to obtain good-quality solutions, avoiding getting stuck in unwanted local minima. Numerical experiments performed on MRI sub-sampled data show the efficiency of the algorithm and the accuracy of the solution.
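
    The reweighting idea behind nonconvex gradient penalties can be sketched in one dimension: alternately solve a weighted quadratic problem and update the weights so that the effective penalty approximates an l_p (p < 1) norm of the gradient. This is a simplified denoising analogue, not the FNCR algorithm itself:

```python
# 1-D sketch of reweighting for a nonconvex TV-like penalty: a sequence of
# weighted-quadratic solves whose weights mimic an l_p (p < 1) gradient
# penalty. Signal and parameter values are toy choices.
import numpy as np

rng = np.random.default_rng(1)
n = 200
x_true = np.zeros(n); x_true[60:120] = 1.0; x_true[150:] = -0.5
y = x_true + rng.normal(0.0, 0.1, n)        # noisy piecewise-constant signal

D = np.diff(np.eye(n), axis=0)              # first-difference operator
lam, p, eps = 0.5, 0.5, 1e-3                # penalty weight, exponent, smoothing

x = y.copy()
for it in range(30):
    g = np.abs(D @ x) + eps
    W = np.diag(p / g ** (2.0 - p))         # reweighting step
    # Quadratic surrogate: minimize ||x - y||^2 + lam * (Dx)' W (Dx)
    x = np.linalg.solve(np.eye(n) + lam * D.T @ W @ D, y)

print("rmse:", np.sqrt(np.mean((x - x_true) ** 2)))
```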

  16. Understanding leachate flow in municipal solid waste landfills by combining time-lapse ERT and subsurface flow modelling - Part II: Constraint methodology of hydrodynamic models.

    PubMed

    Audebert, M; Oxarango, L; Duquennoi, C; Touze-Foltz, N; Forquet, N; Clément, R

    2016-09-01

    Leachate recirculation is a key process in the operation of municipal solid waste landfills as bioreactors. To ensure optimal water content distribution, bioreactor operators need tools to design leachate injection systems. Prediction of leachate flow by subsurface flow modelling could provide useful information for the design of such systems. However, hydrodynamic models require additional data to constrain them and to assess hydrodynamic parameters. Electrical resistivity tomography (ERT) is a suitable method to study leachate infiltration at the landfill scale. It can provide spatially distributed information which is useful for constraining hydrodynamic models. However, this geophysical method does not allow ERT users to directly measure water content in waste. The MICS (multiple inversions and clustering strategy) methodology was proposed to delineate the infiltration area precisely during time-lapse ERT surveys, in order to avoid the use of empirical petrophysical relationships, which are not adapted to a heterogeneous medium such as waste. The infiltration shapes and hydrodynamic information extracted with MICS were used to constrain hydrodynamic models in assessing parameters. The constraint methodology developed in this paper was tested on two hydrodynamic models: an equilibrium model, in which flow within the waste medium is estimated using a single-continuum approach, and a non-equilibrium model, in which flow is estimated using a dual-continuum approach; the latter represents leachate flow into fractures. Finally, this methodology provides insight into the advantages and limitations of the hydrodynamic models. Furthermore, we suggest an explanation for the large volume detected by MICS when a small volume of leachate is injected.

  17. Empirical models of Jupiter's interior from Juno data. Moment of inertia and tidal Love number k2

    NASA Astrophysics Data System (ADS)

    Ni, Dongdong

    2018-05-01

    Context. The Juno spacecraft has significantly improved the accuracy of the gravitational harmonic coefficients J4, J6, and J8 during its first two perijoves. However, there are still differences in the interior model predictions of core mass and envelope metallicity because of the uncertainties in the hydrogen-helium equations of state. New theoretical approaches or observational data are hence required in order to further constrain the interior models of Jupiter. A well-constrained interior model of Jupiter is helpful for understanding not only the dynamic flows in the interior, but also the formation history of giant planets. Aims: We present the radial density profiles of Jupiter fitted to the Juno gravity field observations. We also aim to investigate our ability to constrain the core properties of Jupiter using its moment of inertia and tidal Love number k2, which could be accessible to the Juno spacecraft. Methods: In this work, the radial density profile was constrained by the Juno gravity field data within the empirical two-layer model, in which the equations of state are not needed as an input model parameter. Different two-layer models are constructed in terms of core properties. The dependence of the calculated moment of inertia and tidal Love number k2 on the core properties was investigated in order to discern their abilities to further constrain the internal structure of Jupiter. Results: The calculated normalized moment of inertia (NMOI) ranges from 0.2749 to 0.2762, in reasonable agreement with other predictions. There is a good correlation between the NMOI value and the core properties, including core mass and radius. Therefore, measurements of the NMOI by Juno can be used to constrain both the core mass and size of Jupiter's two-layer interior models. For the tidal Love number k2, a degeneracy is found and analyzed within the two-layer interior model. In spite of this, measurements of k2 can still be used to further constrain the core mass and size of Jupiter's two-layer interior models.

  18. A hybrid model of cell cycle in mammals.

    PubMed

    Behaegel, Jonathan; Comet, Jean-Paul; Bernot, Gilles; Cornillon, Emilien; Delaunay, Franck

    2016-02-01

    Time plays an essential role in many biological systems, especially in the cell cycle. Many models of biological systems rely on differential equations, but parameter identification is an obstacle to using differential frameworks. In this paper, we present a new hybrid modeling framework that extends René Thomas' discrete modeling. The core idea is to associate "celerities" with each qualitative state, allowing us to compute the time spent in each state. This hybrid framework is illustrated by building a 5-variable model of the mammalian cell cycle. Its parameters are determined by applying formal methods to the underlying discrete model and by constraining parameters using timing observations on the cell cycle. This first hybrid model exhibits the most important known behaviors of the cell cycle, including the quiescent phase and endoreplication.

  19. Comparison of techniques for approximating ocean bottom topography in a wave-refraction computer model

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1975-01-01

    A study of the effects of using different methods for approximating bottom topography in a wave-refraction computer model was conducted. Approximation techniques involving quadratic least squares, cubic least squares, and constrained bicubic polynomial interpolation were compared for computed wave patterns and parameters in the region of Saco Bay, Maine. Although substantial local differences can be attributed to the use of the different approximation techniques, results indicated that overall computed wave patterns and parameter distributions were quite similar.
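
    For instance, the quadratic least-squares technique fits a second-order polynomial surface to scattered depth soundings, which the refraction model can then evaluate anywhere along a ray. A minimal sketch with fabricated bathymetry data:

```python
# Sketch of a quadratic least-squares surface fitted to depth soundings.
import numpy as np

rng = np.random.default_rng(5)
x, y = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
depth = (5.0 + 0.8 * x - 0.3 * y + 0.05 * x * y - 0.02 * x**2
         + rng.normal(0.0, 0.1, 200))       # fabricated bathymetry [m]

A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
coef, *_ = np.linalg.lstsq(A, depth, rcond=None)

def depth_at(xq, yq):
    """Evaluate the fitted quadratic surface (as a ray tracer would)."""
    return np.array([1.0, xq, yq, xq * yq, xq**2, yq**2]) @ coef

print(depth_at(3.0, 4.0))
```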

  20. Bayesian inference of Earth's radial seismic structure from body-wave traveltimes using neural networks

    NASA Astrophysics Data System (ADS)

    de Wit, Ralph W. L.; Valentine, Andrew P.; Trampert, Jeannot

    2013-10-01

    How do body-wave traveltimes constrain the Earth's radial (1-D) seismic structure? Existing 1-D seismological models underpin 3-D seismic tomography and earthquake location algorithms. It is therefore crucial to assess the quality of such 1-D models, yet quantifying uncertainties in seismological models is challenging and thus often ignored. Ideally, quality assessment should be an integral part of the inverse method. Our aim in this study is twofold: (i) we show how to solve a general Bayesian non-linear inverse problem and quantify model uncertainties, and (ii) we investigate the constraint on spherically symmetric P-wave velocity (VP) structure provided by body-wave traveltimes from the EHB bulletin (phases Pn, P, PP and PKP). Our approach is based on artificial neural networks, which are very common in pattern recognition problems and can be used to approximate an arbitrary function. We use a Mixture Density Network to obtain 1-D marginal posterior probability density functions (pdfs), which provide a quantitative description of our knowledge of the individual Earth parameters. No linearization or model damping is required, which allows us to infer a model that is constrained purely by the data. We present 1-D marginal posterior pdfs for the 22 VP parameters and seven discontinuity depths in our model. P-wave velocities in the inner core, outer core and lower mantle are resolved well, with standard deviations of ~0.2 to 1 per cent with respect to the mean of the posterior pdfs. The maximum likelihoods of VP are in general similar to the corresponding ak135 values, which lie within one or two standard deviations of the posterior means, thus providing an independent validation of ak135 in this part of the radial model. Conversely, the data contain little or no information on P-wave velocity in the D'' layer, the upper mantle and the homogeneous crustal layers. Further, the data do not constrain the depth of the discontinuities in our model. Using additional phases available in the ISC bulletin, such as PcP, PKKP and the converted phases SP and ScP, may enhance the resolvability of these parameters. Finally, we show how the method can be extended to obtain a posterior pdf for a multidimensional model space. This enables us to investigate correlations between model parameters.
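
    A Mixture Density Network outputs the parameters of a Gaussian mixture conditioned on the data, and that mixture is the 1-D marginal posterior pdf for a given Earth parameter. The sketch below shows the forward pass with random stand-in weights; in practice the weights are trained on synthetic traveltime data:

```python
# Minimal MDN-style forward pass: data vector -> Gaussian-mixture pdf.
# Network weights are random stand-ins, not a trained model.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_comp = 50, 30, 5            # traveltime summary -> 5 kernels

W1, b1 = rng.normal(0, 0.3, (n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(0, 0.3, (3 * n_comp, n_hidden)), np.zeros(3 * n_comp)

def mdn_posterior(d, grid):
    """Map data d to mixture parameters and evaluate the pdf on a grid."""
    h = np.tanh(W1 @ d + b1)
    out = W2 @ h + b2
    logits, mu, s = out[:n_comp], out[n_comp:2*n_comp], out[2*n_comp:]
    alpha = np.exp(logits) / np.sum(np.exp(logits))   # mixture weights
    sigma = np.log1p(np.exp(s)) + 1e-3                # positive widths
    return sum(a * np.exp(-0.5 * ((grid - m) / sg) ** 2)
               / (sg * np.sqrt(2 * np.pi))
               for a, m, sg in zip(alpha, mu, sigma))

grid = np.linspace(-3, 3, 301)                 # e.g., dVP around a reference
pdf = mdn_posterior(rng.normal(size=n_in), grid)
print(np.trapz(pdf, grid))                     # integrates to ~1
```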

  1. Limiting the effective mass and new physics parameters from 0νββ

    NASA Astrophysics Data System (ADS)

    Awasthi, Ram Lal; Dasgupta, Arnab; Mitra, Manimala

    2016-10-01

    In light of the recent results from KamLAND-Zen (KLZ) and GERDA Phase-II, we update the bounds on the effective mass and the new physics parameters relevant for neutrinoless double beta decay (0νββ). In addition to the light Majorana neutrino exchange, we analyze beyond-standard-model contributions that arise in left-right symmetry and R-parity-violating supersymmetry. The improved limit from KLZ constrains the effective mass for light neutrino exchange down to the sub-eV regime, 0.06 eV. Using the correlation between the 136Xe and 76Ge half-lives, we show that the KLZ limit by itself rules out the positive claim of observation of 0νββ for all nuclear matrix element compilations. For left-right symmetry and R-parity-violating supersymmetry, the KLZ bound implies a factor of 2 improvement in the limits on the effective mass and the new physics parameters. Future ton-scale experiments, such as nEXO, will further constrain these models and, in particular, will rule out the standard as well as the Type-II-dominated LRSM inverted hierarchy scenario.

  2. Probing Models of Dark Matter and the Early Universe

    NASA Astrophysics Data System (ADS)

    Orlofsky, Nicholas David

    This thesis discusses models for dark matter (DM) and their behavior in the early universe. An important question is how phenomenological probes can directly search for signals of DM today. Another topic of investigation is how the DM and other processes in the early universe must evolve. Then, astrophysical bounds on early universe dynamics can constrain DM. We will consider these questions in the context of three classes of DM models--weakly interacting massive particles (WIMPs), axions, and primordial black holes (PBHs). Starting with WIMPs, we consider models where the DM is charged under the electroweak gauge group of the Standard Model. Such WIMPs, if generated by a thermal cosmological history, are constrained by direct detection experiments. To avoid present or near-future bounds, the WIMP model or cosmological history must be altered in some way. This may be accomplished by the inclusion of new states that coannihilate with the WIMP or a period of non-thermal evolution in the early universe. Future experiments are likely to probe some of these altered scenarios, and a non-observation would require a high degree of tuning in some of the model parameters in these scenarios. Next, axions, as light pseudo-Nambu-Goldstone bosons, are susceptible to quantum fluctuations in the early universe that lead to isocurvature perturbations, which are constrained by observations of the cosmic microwave background (CMB). We ask what it would take to allow axion models in the face of these strong CMB bounds. We revisit models where inflationary dynamics modify the axion potential and discuss how isocurvature bounds can be relaxed, elucidating the difficulties in these constructions. Avoiding disruption of inflationary dynamics provides important limits on the parameter space. Finally, PBHs have received interest in part due to observations by LIGO of merging black hole binaries. We ask how these PBHs could arise through inflationary models and investigate the opportunity for corroboration through experimental probes of gravitational waves at pulsar timing arrays. We provide examples of theories that are already ruled out, theories that will soon be probed, and theories that will not be tested in the foreseeable future. The models that are most strongly constrained are those with relatively broad primordial power spectra.

  3. Extending a multi-scale parameter regionalization (MPR) method by introducing parameter constrained optimization and flexible transfer functions

    NASA Astrophysics Data System (ADS)

    Klotz, Daniel; Herrnegger, Mathew; Schulz, Karsten

    2015-04-01

    A multi-scale parameter-estimation method, as presented by Samaniego et al. (2010), is implemented and extended for the conceptual hydrological model COSERO. COSERO is an HBV-type model that is specialized for alpine environments, but has been applied over a wide range of basins all over the world (see Kling et al., 2014 for an overview). Within the methodology, available small-scale information (DEM, soil texture, land cover, etc.) is used to estimate the coarse-scale model parameters by applying a set of transfer functions (TFs) and subsequent averaging methods, whereby only TF hyper-parameters are optimized against available observations (e.g. runoff data). The parameter regionalisation approach was extended to allow a more meta-heuristic handling of the transfer functions. The two main novelties are: 1. An explicit introduction of constraints into the parameter estimation scheme: the constraint scheme replaces invalid parts of the transfer-function solution space with valid solutions. It is inspired by applications in evolutionary algorithms and related to the combination of learning and evolution. This allows the consideration of physical and numerical constraints as well as the incorporation of a priori modeller experience into the parameter estimation. 2. Spline-based transfer functions: spline-based functions enable arbitrary forms of transfer functions. This is of importance since in many cases the general relationship between sub-grid information and parameters is known, but not the form of the transfer function itself. The contribution presents the results of and experiences with the adopted method and the introduced extensions. Simulations are performed for the pre-alpine/alpine Traisen catchment in Lower Austria. References: Samaniego, L., Kumar, R., Attinger, S. (2010): Multiscale parameter regionalization of a grid-based hydrologic model at the mesoscale, Water Resour. Res., doi:10.1029/2008WR007327. Kling, H., Stanzel, P., Fuchs, M., and Nachtnebel, H. P. (2014): Performance of the COSERO precipitation-runoff model under non-stationary conditions in basins with different climates, Hydrolog. Sci. J., doi:10.1080/02626667.2014.959956.
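
    A spline-based transfer function of the kind introduced here can be sketched as follows: the spline knots play the role of the tuned hyper-parameters, the spline maps sub-grid soil information to a model parameter, and explicit bounds enforce the constraint scheme. All knot values and ranges below are hypothetical:

```python
# Sketch of a spline-based transfer function: sub-grid soil information ->
# model parameter, with constraint handling and upscaling. Values are toy.
import numpy as np
from scipy.interpolate import PchipInterpolator

# Hyper-parameters: spline knots (sand fraction -> hydraulic conductivity)
knots_x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # sand fraction
knots_y = np.array([0.1, 0.4, 1.5, 6.0, 20.0])     # Ks [mm/h], illustrative

tf = PchipInterpolator(knots_x, knots_y)           # shape-preserving spline

def regionalize(sand_fraction_fine, valid=(0.05, 25.0)):
    """Apply the TF at fine scale, enforce constraints, upscale by averaging."""
    ks_fine = np.clip(tf(sand_fraction_fine), *valid)  # constraint handling
    return ks_fine.mean()                              # upscaling operator

fine_cells = np.random.default_rng(2).uniform(0.2, 0.9, 100)
print(regionalize(fine_cells))   # one coarse-grid parameter value
```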

  4. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Treesearch

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  5. DATA-CONSTRAINED CORONAL MASS EJECTIONS IN A GLOBAL MAGNETOHYDRODYNAMICS MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, M.; Manchester, W. B.; Van der Holst, B.

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO/LASCO, and STEREO/COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using the Gibson-Low configuration, we present a method to derive Gibson-Low flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. A test result with CMEs launched with different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).

  6. Vectorlike fermions and Higgs effective field theory revisited

    DOE PAGES

    Chen, Chien-Yi; Dawson, S.; Furlan, Elisabetta

    2017-07-10

    Heavy vectorlike quarks (VLQs) appear in many models of beyond the Standard Model physics. Direct experimental searches require these new quarks to be heavy, ≳ 800-1000 GeV. Here, we perform a global fit of the parameters of simple VLQ models in minimal representations of SU(2)_L to precision data and Higgs rates. One interesting connection between anomalous $Zb\bar{b}$ interactions and Higgs physics in VLQ models is discussed. Finally, we present our analysis in an effective field theory (EFT) framework and show that the parameters of VLQ models are already highly constrained. Exact and approximate analytical formulas for the S and T parameters in the VLQ models we consider are available in the Supplemental Material as Mathematica files.

  7. Tests of chameleon gravity

    NASA Astrophysics Data System (ADS)

    Burrage, Clare; Sakstein, Jeremy

    2018-03-01

    Theories of modified gravity, where light scalars with non-trivial self-interactions and non-minimal couplings to matter—chameleon and symmetron theories—dynamically suppress deviations from general relativity in the solar system. On other scales, the environmental nature of the screening means that such scalars may be relevant. The highly-nonlinear nature of screening mechanisms means that they evade classical fifth-force searches, and there has been an intense effort towards designing new and novel tests to probe them, both in the laboratory and using astrophysical objects, and by reinterpreting existing datasets. The results of these searches are often presented using different parametrizations, which can make it difficult to compare constraints coming from different probes. The purpose of this review is to summarize the present state-of-the-art searches for screened scalars coupled to matter, and to translate the current bounds into a single parametrization to survey the state of the models. Presently, commonly studied chameleon models are well-constrained but less commonly studied models have large regions of parameter space that are still viable. Symmetron models are constrained well by astrophysical and laboratory tests, but there is a desert separating the two scales where the model is unconstrained. The coupling of chameleons to photons is tightly constrained but the symmetron coupling has yet to be explored. We also summarize the current bounds on f( R) models that exhibit the chameleon mechanism (Hu and Sawicki models). The simplest of these are well constrained by astrophysical probes, but there are currently few reported bounds for theories with higher powers of R. The review ends by discussing the future prospects for constraining screened modified gravity models further using upcoming and planned experiments.

  8. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
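
    The flavor of such network dynamics can be conveyed by an Euler-integrated projected subgradient flow with a hard-limiting (sign) activation, here applied to a constrained least-absolute-deviation problem. This is an illustrative discretization, not the authors' finite-time-convergent network:

```python
# Illustrative sketch: projected dynamics with a hard-limiting activation
# for min ||Ax - b||_1 subject to box constraints (toy data).
import numpy as np

rng = np.random.default_rng(9)
A = rng.normal(size=(30, 3))
x_true = np.array([0.5, -0.2, 0.8])
b = A @ x_true
lo, hi = -np.ones(3), np.ones(3)          # box constraints

gain, dt = 2.0, 0.002                      # gain parameter and step size
x = np.zeros(3)
for step in range(20000):
    grad = A.T @ np.sign(A @ x - b)        # hard-limiting (sign) activation
    x = np.clip(x - dt * gain * grad, lo, hi)  # projected Euler step
print(x)                                   # hovers near x_true
```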

  9. An iterative hyperelastic parameters reconstruction for breast cancer assessment

    NASA Astrophysics Data System (ADS)

    Mehrabian, Hatef; Samani, Abbas

    2008-03-01

    In breast elastography, breast tissues usually undergo large compressions resulting in significant geometric and structural changes and, consequently, nonlinear mechanical behavior. In this study, an elastography technique is presented in which parameters characterizing tissue nonlinear behavior are reconstructed. Such parameters can be used for tumor tissue classification. To model the nonlinear behavior, tissues are treated as hyperelastic materials. The proposed technique uses a constrained iterative inversion method to reconstruct the tissue hyperelastic parameters. The reconstruction technique uses a nonlinear finite element (FE) model for solving the forward problem. In this research, we applied Yeoh and polynomial models to model the tissue hyperelasticity. To mimic the breast geometry, we used a computational phantom, which comprises a hemisphere connected to a cylinder. This phantom consists of two types of soft tissue, mimicking adipose and fibroglandular tissues, and a tumor. Simulation results show the feasibility of the proposed method in reconstructing the hyperelastic parameters of the tumor tissue.

  10. Cosmology with Strong-lensing Systems

    NASA Astrophysics Data System (ADS)

    Cao, Shuo; Biesiada, Marek; Gavazzi, Raphaël; Piórkowska, Aleksandra; Zhu, Zong-Hong

    2015-06-01

    In this paper, we assemble a catalog of 118 strong gravitational lensing systems from the Sloan Lens ACS Survey, the BOSS emission-line lens survey, Lens Structure and Dynamics, and the Strong Lensing Legacy Survey, and use them to constrain the cosmic equation of state. In particular, we consider two cases of dark energy phenomenology: the XCDM model, where dark energy is modeled by a fluid with a constant equation-of-state parameter w, and the Chevallier-Polarski-Linder (CPL) parameterization, where w is allowed to evolve with redshift, w(z) = w0 + w1 z/(1+z). We assume a spherically symmetric mass distribution in the lensing galaxies, but we relax the rigid assumption of the SIS model in favor of a more general power-law index γ, also allowing it to evolve with redshift, γ(z). Our results for the XCDM cosmology show agreement with values (concerning both the w and γ parameters) obtained by other authors. We go further and constrain the CPL parameters jointly with γ(z). The resulting confidence regions for the parameters are much better than those obtained with a similar method in the past, and they show a trend of being complementary to the Type Ia supernova data. Our analysis demonstrates that strong gravitational lensing systems can be used to probe cosmological parameters like the cosmic equation of state for dark energy. Moreover, they have the potential to indicate whether the cosmic equation of state evolved with time or not.

  11. Inverting multiple suites of thermal indicator data to constrain the heat flow history: A case study from east Kalimantan, Indonesia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudford, B.S.

    1996-12-31

    The determination of an appropriate thermal history in an exploration area is of fundamental importance when attempting to understand the evolution of the petroleum system. In this talk we present the results of a single-well modelling study in which bottom hole temperature data, vitrinite reflectance data and three different biomarker ratio datasets were available to constrain the modelling. Previous modelling studies using biomarker ratios have been hampered by the wide variety of published kinetic parameters for biomarker evolution. Generally, these parameters have been determined either from measurements in the laboratory and extrapolation to the geological setting, or from downhole measurements where the heat flow history is assumed to be known. In the first case serious errors can arise because the heating rate is being extrapolated over many orders of magnitude, while in the second case errors can arise if the assumed heat flow history is incorrect. To circumvent these problems we carried out a parameter optimization in which the heat flow history was treated as an unknown in addition to the biomarker ratio kinetic parameters. This method enabled the heat flow history for the area to be determined together with appropriate kinetic parameters for the three measured biomarker ratios. Within the resolution of the data, the heat flow since the early Miocene has been relatively constant at levels required to yield good agreement between predicted and measured subsurface temperatures.

  12. Exploring theory space with Monte Carlo reweighting

    DOE PAGES

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.; ...

    2014-10-13

    Theories of new physics often involve a large number of unknown parameters which need to be scanned. Additionally, a putative signal in a particular channel may be due to a variety of distinct models of new physics. This makes experimental attempts to constrain the parameter space of motivated new physics models with a high degree of generality quite challenging. We describe how the reweighting of events may allow this challenge to be met, as fully simulated Monte Carlo samples generated for arbitrary benchmark models can be effectively re-used. Specifically, we suggest procedures that allow more efficient collaboration between theorists and experimentalists in exploring large theory parameter spaces in a rigorous way at the LHC.
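
    The core reweighting operation is an importance-weight ratio per event. In the toy sketch below, a Gaussian density stands in for the |M|^2-based event probability, and a benchmark sample is reused at a new parameter point:

```python
# Sketch of Monte Carlo reweighting: events generated at benchmark theta0
# are reused at theta_new by weighting with the ratio of (toy) densities.
import numpy as np

rng = np.random.default_rng(11)

def event_density(x, theta):
    """Toy per-event probability density (stand-in for |M(theta)|^2)."""
    return np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2 * np.pi)

theta0, theta_new = 0.0, 0.7
events = rng.normal(theta0, 1.0, 100000)       # fully "simulated" sample

weights = event_density(events, theta_new) / event_density(events, theta0)

# Any observable can now be estimated at theta_new without regenerating:
mean_obs = np.average(events, weights=weights)
eff_n = weights.sum() ** 2 / (weights**2).sum()  # effective sample size
print(mean_obs, eff_n)                           # ~0.7, with reduced stats
```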

  13. Constrained maximum likelihood modal parameter identification applied to structural dynamics

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Peeters, Bart; Guillaume, Patrick; De Troyer, Tim

    2016-05-01

    A new modal parameter estimation method to directly establish modal models of structural dynamic systems satisfying two physically motivated constraints will be presented. The constraints imposed in the identified modal model are the reciprocity of the frequency response functions (FRFs) and the estimation of normal (real) modes. The motivation behind the first constraint (i.e. reciprocity) comes from the fact that modal analysis theory shows that the FRF matrix, and therefore the residue matrices, are symmetric for non-gyroscopic, non-circulatory, and passive mechanical systems. In other words, such systems are expected to obey Maxwell-Betti's reciprocity principle. The second constraint (i.e. real mode shapes) is motivated by the fact that analytical models of structures are assumed to be either undamped or proportionally damped. Therefore, normal (real) modes are needed for comparison with these analytical models. The work done in this paper is a further development of a recently introduced modal parameter identification method called ML-MM, which enables us to establish a modal model that satisfies these motivated constraints. The proposed constrained ML-MM method is applied to two real experimental datasets measured on fully trimmed cars. This type of data is still considered a significant challenge in modal analysis. The results clearly demonstrate the applicability of the method to real structures with significant non-proportional damping and high modal densities.

  14. Spatially constrained incoherent motion method improves diffusion-weighted MRI signal decay analysis in the liver and spleen

    PubMed Central

    Taimouri, Vahid; Afacan, Onur; Perez-Rossello, Jeannette M.; Callahan, Michael J.; Mulkern, Robert V.; Warfield, Simon K.; Freiman, Moti

    2015-01-01

    Purpose: To evaluate the effect of the spatially constrained incoherent motion (SCIM) method on improving the precision and robustness of fast and slow diffusion parameter estimates from diffusion-weighted MRI in the liver and spleen, in comparison to the independent voxel-wise intravoxel incoherent motion (IVIM) model. Methods: We collected diffusion-weighted MRI (DW-MRI) data from 29 subjects (5 healthy subjects and 24 patients with Crohn's disease in the ileum). We evaluated the robustness of parameter estimates against different combinations of b-values (i.e., 4 b-values and 7 b-values) by comparing the variance of the estimates obtained with the SCIM and the independent voxel-wise IVIM model. We also evaluated the improvement in the precision of parameter estimates by comparing the coefficient of variation (CV) of the SCIM parameter estimates to that of the IVIM. Results: The SCIM method was more robust compared to IVIM (up to 70% in liver and spleen) for different combinations of b-values. Also, the CV values of the parameter estimates using the SCIM method were significantly lower compared to repeated acquisition and signal averaging estimated using IVIM, especially for the fast diffusion parameter in the liver (CV_IVIM = 46.61 ± 11.22, CV_SCIM = 16.85 ± 2.160, p < 0.001) and spleen (CV_IVIM = 95.15 ± 19.82, CV_SCIM = 52.55 ± 1.91, p < 0.001). Conclusions: The SCIM method characterizes fast and slow diffusion more precisely compared to independent voxel-wise IVIM model fitting in the liver and spleen. PMID:25832079
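
    The baseline voxel-wise IVIM fit referred to above uses the biexponential signal model S(b)/S0 = f exp(-b D*) + (1-f) exp(-b D). A minimal sketch with typical-order toy values (not data from the study):

```python
# Voxel-wise IVIM fit of the biexponential diffusion signal model.
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 50, 100, 200, 400, 600, 800.0])   # s/mm^2

def ivim(b, f, d_star, d):
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

rng = np.random.default_rng(4)
truth = (0.25, 0.05, 0.0012)                        # f, D* [mm^2/s], D
signal = ivim(b, *truth) * (1 + rng.normal(0, 0.02, b.size))

popt, pcov = curve_fit(ivim, b, signal, p0=(0.1, 0.01, 0.001),
                       bounds=([0, 0.003, 1e-4], [0.6, 0.5, 3e-3]))
print(popt)   # independent voxel-wise estimates; SCIM additionally
              # regularizes these estimates across neighboring voxels
```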

  15. Biological data assimilation for parameter estimation of a phytoplankton functional type model for the western North Pacific

    NASA Astrophysics Data System (ADS)

    Hoshiba, Yasuhiro; Hirata, Takafumi; Shigemitsu, Masahito; Nakano, Hideyuki; Hashioka, Taketo; Masuda, Yoshio; Yamanaka, Yasuhiro

    2018-06-01

    Ecosystem models are used to understand ecosystem dynamics and ocean biogeochemical cycles and require optimum physiological parameters to best represent biological behaviours. These physiological parameters are often tuned empirically, while ecosystem models have evolved to include increasing numbers of physiological parameters. We developed a three-dimensional (3-D) lower-trophic-level marine ecosystem model known as the Nitrogen, Silicon and Iron regulated Marine Ecosystem Model (NSI-MEM) and employed biological data assimilation using a micro-genetic algorithm to estimate 23 physiological parameters for two phytoplankton functional types in the western North Pacific. The estimation of the parameters was based on a one-dimensional simulation that referenced satellite data to constrain the physiological parameters. The 3-D NSI-MEM optimized by the data assimilation improved the timing of a modelled plankton bloom in the subarctic and subtropical regions compared to the model without data assimilation. Furthermore, the model was able to improve not only surface concentrations of phytoplankton but also their subsurface maximum concentrations. Our results showed that surface data assimilation of physiological parameters from two contrasting observatory stations benefits the representation of vertical plankton distribution in the western North Pacific.

  16. A Bayesian approach to earthquake source studies

    NASA Astrophysics Data System (ADS)

    Minson, Sarah

    Bayesian sampling has several advantages over conventional optimization approaches to solving inverse problems. It produces the distribution of all possible models sampled proportionally to how much each model is consistent with the data and the specified prior information, and thus images the entire solution space, revealing the uncertainties and trade-offs in the model. Bayesian sampling is applicable to both linear and non-linear modeling, and the values of the model parameters being sampled can be constrained based on the physics of the process being studied and do not have to be regularized. However, these methods are computationally challenging for high-dimensional problems. Until now the computational expense of Bayesian sampling has been too great for it to be practicable for most geophysical problems. I present a new parallel sampling algorithm called CATMIP for Cascading Adaptive Tempered Metropolis In Parallel. This technique, based on Transitional Markov chain Monte Carlo, makes it possible to sample distributions in many hundreds of dimensions, if the forward model is fast, or to sample computationally expensive forward models in smaller numbers of dimensions. The design of the algorithm is independent of the model being sampled, so CATMIP can be applied to many areas of research. I use CATMIP to produce a finite fault source model for the 2007 Mw 7.7 Tocopilla, Chile earthquake. Surface displacements from the earthquake were recorded by six interferograms and twelve local high-rate GPS stations. Because of the wealth of near-fault data, the source process is well-constrained. I find that the near-field high-rate GPS data have significant resolving power above and beyond the slip distribution determined from static displacements. The location and magnitude of the maximum displacement are resolved. The rupture almost certainly propagated at sub-shear velocities. The full posterior distribution can be used not only to calculate source parameters but also to determine their uncertainties. So while kinematic source modeling and the estimation of source parameters is not new, with CATMIP I am able to use Bayesian sampling to determine which parts of the source process are well-constrained and which are not.

  17. Multi-Objective vs. Single Objective Calibration of a Hydrologic Model using Either Different Hydrologic Signatures or Complementary Data Sources

    NASA Astrophysics Data System (ADS)

    Mai, J.; Cuntz, M.; Zink, M.; Schaefer, D.; Thober, S.; Samaniego, L. E.; Shafii, M.; Tolson, B.

    2015-12-01

    Hydrologic models are traditionally calibrated against discharge. Recent studies have shown, however, that only a few global model parameters are constrained using the integral discharge measurements. It is therefore advisable to use additional information to calibrate those models. Snow pack data, for example, could improve the parametrization of snow-related processes, which might be underrepresented when using only discharge. One common approach is to combine these multiple objectives into one single objective function, allowing the use of a single-objective algorithm. Another strategy is to consider the different objectives separately and apply a Pareto-optimizing algorithm. Both methods are challenging in the choice of appropriate multiple objectives with either conflicting interests or a focus on different model processes. A first aim of this study is to compare the two approaches employing the mesoscale Hydrologic Model mHM in several distinct river basins over Europe and North America. This comparison allows the identification of the single-objective solution on the Pareto front, and it is elucidated whether this position is determined by the weighting and scaling of the multiple objectives when combining them into the single objective. The second principal aim is to guide the selection of proper objectives employing sensitivity analyses. These analyses are used to determine whether additional information would help to constrain additional model parameters. The additional information is either multiple data sources or multiple signatures of one measurement. It is evaluated whether specific discharge signatures can inform different parts of the hydrologic model. The results show that an appropriate selection of discharge signatures increased the number of constrained parameters by more than 50% compared to using only the Nash-Sutcliffe efficiency (NSE) of the discharge time series. It is further assessed whether the use of these signatures imposes conflicting objectives on the hydrologic model. The usage of signatures is furthermore contrasted with the use of additional observations such as soil moisture or snow height. The gain of using an auxiliary dataset is determined using the parametric sensitivity of the respective modeled variable.
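
    The NSE referred to above is the Nash-Sutcliffe efficiency, the classic single-objective score for discharge calibration; signature-based calibration replaces it with several complementary objectives. A minimal sketch with invented discharge values:

```python
# Nash-Sutcliffe efficiency (NSE): 1 is a perfect fit, values below 0 mean
# the simulation is worse than simply using the observed mean.
import numpy as np

def nse(sim, obs):
    """NSE = 1 - SSE / variance of the observations."""
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.2, 1.0, 3.5, 7.9, 4.1, 2.0, 1.5])   # m^3/s, toy values
sim = np.array([1.0, 1.1, 3.0, 8.5, 4.5, 2.2, 1.4])
print(nse(sim, obs))

# A signature-based calibration adds complementary objectives, e.g. the
# NSE of log-transformed flows, which emphasizes low-flow behavior:
print(nse(np.log(sim), np.log(obs)))
```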

  19. Conjoined constraints on modified gravity from the expansion history and cosmic growth

    NASA Astrophysics Data System (ADS)

    Basilakos, Spyros; Nesseris, Savvas

    2017-09-01

    In this paper we present conjoined constraints on several cosmological models from the expansion history H(z) and cosmic growth fσ8. The models we study include the CPL w0wa parametrization, the holographic dark energy (HDE) model, the time-varying vacuum (ΛtCDM) model, the Dvali-Gabadadze-Porrati (DGP) and Finsler-Randers (FRDE) models, a power-law f(T) model, and finally the Hu-Sawicki f(R) model. In all cases we perform a simultaneous fit to the SnIa, CMB, BAO, H(z) and growth data, while also following the conjoined visualization of H(z) and fσ8 as in Linder (2017). Furthermore, we introduce the figure of merit (FoM) in the H(z)-fσ8 parameter space as a way to constrain models that jointly fit both probes well. We use not only the latest H(z) and fσ8 data but also LSST-like mocks with 1% measurements, and we find that the conjoined method of constraining the expansion history and cosmic growth simultaneously is able not only to place stringent constraints on these parameters, but also to provide an easy visual way to discriminate between cosmological models. Finally, we confirm the existence of a tension between the growth-rate and Planck CMB data, and we find that the FoM in the conjoined parameter space of H(z)-fσ8(z) can be used to discriminate between the ΛCDM model and certain classes of modified gravity models, namely the DGP and f(T).
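
    The abstract does not spell out the FoM definition; a common convention, assumed in the sketch below, takes it as the inverse area of the joint error ellipse of the two probes at a given redshift.

```python
import numpy as np

def conjoined_fom(h_samples, fs8_samples):
    """DETF-style figure of merit in the H(z)-fsigma8 plane (assumed form).

    Larger is better: 1 / sqrt(det C), where C is the 2x2 sample
    covariance of posterior draws of H(z) and fsigma8 at one redshift.
    """
    cov = np.cov(np.vstack([h_samples, fs8_samples]))
    return 1.0 / np.sqrt(np.linalg.det(cov))
```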

  20. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
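
    The truncation step is easy to picture in one dimension. Below is a minimal sketch using SciPy's truncated normal; the multi-state filter in the paper applies a sequence of transformations so that each scalar constraint can be handled in essentially this way.

```python
from scipy.stats import truncnorm

def truncate_estimate(x_hat, var, lo, hi):
    """Constrained estimate as the mean of the truncated Gaussian PDF.

    x_hat, var: unconstrained Kalman estimate and its variance.
    lo, hi:     known inequality constraints on the state.
    Returns the mean and variance of the Gaussian truncated to [lo, hi].
    """
    sd = var ** 0.5
    a, b = (lo - x_hat) / sd, (hi - x_hat) / sd  # standardized bounds
    dist = truncnorm(a, b, loc=x_hat, scale=sd)
    return dist.mean(), dist.var()

# Example: estimate 1.2 with variance 0.25, state known to lie in [0, 1].
m, v = truncate_estimate(1.2, 0.25, 0.0, 1.0)
```

    Pulling the estimate back inside the constraint set this way both enforces physical plausibility and, as the paper demonstrates, reduces estimation error relative to the unconstrained filter.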

  1. Distributed Soil Moisture Estimation in a Mountainous Semiarid Basin: Constraining Soil Parameter Uncertainty through Field Studies

    NASA Astrophysics Data System (ADS)

    Yatheendradas, S.; Vivoni, E.

    2007-12-01

    A common practice in distributed hydrological modeling is to assign soil hydraulic properties based on coarse textural datasets. For semiarid regions with poor soil information, the performance of a model can be severely constrained due to the high model sensitivity to near-surface soil characteristics. Neglecting the uncertainty in soil hydraulic properties, their spatial variation and their naturally-occurring horizonation can potentially affect the modeled hydrological response. In this study, we investigate such effects using the TIN-based Real-time Integrated Basin Simulator (tRIBS) applied to the mid-sized (100 km²) Sierra Los Locos watershed in northern Sonora, Mexico. The Sierra Los Locos basin is characterized by complex mountainous terrain leading to topographic organization of soil characteristics and ecosystem distributions. We focus on simulations during the 2004 North American Monsoon Experiment (NAME) when intensive soil moisture measurements and aircraft-based soil moisture retrievals are available in the basin. Our experiments focus on soil moisture comparisons at the point, topographic transect and basin scales using a range of different soil characterizations. We compare the distributed soil moisture estimates obtained using (1) a deterministic simulation based on soil texture from coarse soil maps, (2) a set of ensemble simulations that capture soil parameter uncertainty and their spatial distribution, and (3) a set of simulations that condition the ensemble on recent soil profile measurements. The uncertainties considered in near-surface soil characterization provide insights into their influence on the modeled uncertainty, into the value of soil profile observations, and into the effective use of on-going field observations for constraining the soil moisture response uncertainty.

  2. Constraining ecosystem processes from tower fluxes and atmospheric profiles.

    PubMed

    Hill, T C; Williams, M; Woodward, F I; Moncrieff, J B

    2011-07-01

    The planetary boundary layer (PBL) provides an important link between the scales and processes resolved by global atmospheric sampling/modeling and site-based flux measurements. The PBL is in direct contact with the land surface, both driving and responding to ecosystem processes. Measurements within the PBL (e.g., by radiosondes, aircraft profiles, and flask measurements) have a footprint, and thus an integrating scale, on the order of 1-100 km. We use the coupled atmosphere-biosphere model (CAB) and a Bayesian data assimilation framework to investigate the amount of biosphere process information that can be inferred from PBL measurements. We investigate the information content of PBL measurements in a two-stage study. First, we demonstrate consistency between the coupled model (CAB) and measurements, by comparing the model to eddy covariance flux tower measurements (i.e., water and carbon fluxes) and also PBL scalar profile measurements (i.e., water, carbon dioxide, and temperature) from a Canadian boreal forest. Second, we use the CAB model in a set of Bayesian inversion experiments using synthetic data for a single day. In the synthetic experiment, leaf area and respiration were relatively well constrained, whereas surface albedo and plant hydraulic conductance were only moderately constrained. Finally, the abilities of the PBL profiles and the eddy covariance data to constrain the parameters were largely similar and only slightly lower than the combination of both observations.

  3. Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?

    NASA Astrophysics Data System (ADS)

    Lin, Guangxing; Wan, Hui; Zhang, Kai; Qian, Yun; Ghan, Steven J.

    2016-09-01

    Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrating examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity of the constrained simulations depends on the detailed implementation of nudging and the mechanism through which the perturbed parameter affects precipitation and clouds. The relative computational costs of nudged and free-running simulations are determined by the magnitude of internal variability in the physical quantities of interest, as well as the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, nudging of temperature and/or winds with a 6 h relaxation time scale leads to nonnegligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while 1-year free-running simulations can satisfactorily capture the annual mean precipitation and cloud forcing sensitivities. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by natural noise, while nudging winds effectively reduces the noise and reasonably reproduces the sensitivities. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.

  4. Closed inflationary universe in patch cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campo, Sergio del; Herrera, Ramon; Saavedra, Joel

    2009-09-15

    In this paper, we study closed inflationary universe models using the Gauss-Bonnet brane. We determine and characterize the existence of a universe with Ω > 1, with an appropriate period of inflation. We have found that this model is less restrictive in comparison with the standard approach where a scalar field is considered. We use recent astronomical observations to constrain the parameters appearing in the model.

  5. Constraints on Lunar Structure from Combined Geochemical, Mineralogical, and Geophysical modeling

    NASA Astrophysics Data System (ADS)

    Bremner, P. M.; Fuqua, H.; Mallik, A.; Diamond, M. R.; Lock, S. J.; Panovska, S.; Nishikawa, Y.; Jiménez-Pérez, H.; Shahar, A.; Panero, W. R.; Lognonne, P. H.; Faul, U.

    2016-12-01

    The internal physical and geochemical structure of the Moon is still poorly constrained. Here, we take a multidisciplinary approach to attempt to constrain key parameters of the lunar structure. We use an ensemble of 1-D lunar compositional models with chemically and mineralogically distinct layers, and forward-calculated physical parameters, in order to constrain the internal structure. We consider both a chemically well-mixed model with uniform bulk composition, and a chemically stratified model that includes a mantle with preserved mineralogical stratigraphy from magma ocean crystallization. Additionally, we use four different lunar temperature profiles that span the range of proposed selenotherms, giving eight separate sets of lunar models. In each set, we employed a grid search and a differential evolution genetic search algorithm to extensively explore the model space, where the thickness of individual compositional layers was varied. In total, we forward calculated over one hundred thousand lunar models. It has been proposed that a dense, partially molten layer exists at the core-mantle boundary (CMB) to explain the lack of observed far-side deep moonquakes, the observation of reflected seismic phases from deep moonquakes, and enhanced tidal dissipation. However, subsequent models have proposed that these observables can be explained in other ways. In this study, using a variety of modeling techniques, we find that such a layer may have been formed by overturn of an ilmenite-rich layer formed after the crystallization of a magma ocean. We therefore include a denser layer (modeled as an ilmenite-rich layer) at both the top and bottom of the lunar mantle in our models. For each set of models, we find models that explain the observed lunar mass and moment of inertia. We find that only a narrow range of core radii is consistent with the mass and moment of inertia constraints. Furthermore, in the chemically well-mixed models, we find that a dense layer is required in the upper mantle to meet the moment of inertia requirement. In no set of models is the mass of the lower dense layer well constrained. For the models that fit the observed mass and moment of inertia, we calculated 1-D seismic velocity profiles, the majority of which compare well with those determined by inverting the Apollo seismic data (Garcia et al. 2011; Weber et al. 2011).
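
    To illustrate the forward-screening step, here is a minimal sketch of a grid search over a toy three-layer Moon tested against the observed mass and normalized moment of inertia. The density values, layer setup, and 1% tolerances are illustrative placeholders, not the study's actual models.

```python
import numpy as np

# Observed lunar mass (kg) and normalized moment of inertia; tolerances
# below are illustrative, not those used in the study.
M_OBS, MOI_OBS = 7.346e22, 0.3931
R_MOON = 1737.1e3  # m

def shell_mass_moi(radii, densities):
    """Mass and normalized moment of inertia of nested uniform shells.

    radii: outer radii of each layer (ascending, m); densities: kg/m^3.
    """
    r = np.concatenate(([0.0], radii))
    mass = np.sum(4.0 / 3.0 * np.pi * densities * (r[1:] ** 3 - r[:-1] ** 3))
    moi = np.sum(8.0 / 15.0 * np.pi * densities * (r[1:] ** 5 - r[:-1] ** 5))
    return mass, moi / (mass * radii[-1] ** 2)  # normalize by M R^2

# Grid search over core radius for a toy core/mantle/crust structure.
acceptable = []
for r_core in np.linspace(100e3, 500e3, 200):
    radii = np.array([r_core, R_MOON - 40e3, R_MOON])
    rho = np.array([7000.0, 3350.0, 2550.0])  # placeholder densities
    m, c = shell_mass_moi(radii, rho)
    if abs(m - M_OBS) / M_OBS < 0.01 and abs(c - MOI_OBS) / MOI_OBS < 0.01:
        acceptable.append(r_core)
```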

  6. Halo effective field theory constrains the solar 7Be + p → 8B + γ rate

    DOE PAGES

    Zhang, Xilin; Nollett, Kenneth M.; Phillips, D. R.

    2015-11-06

    In this study, we report an improved low-energy extrapolation of the cross section for the process 7Be(p,γ)8B, which determines the 8B neutrino flux from the Sun. Our extrapolant is derived from Halo Effective Field Theory (EFT) at next-to-leading order. We apply Bayesian methods to determine the EFT parameters and the low-energy S-factor, using measured cross sections and scattering lengths as inputs. Asymptotic normalization coefficients of 8B are tightly constrained by existing radiative capture data, and contributions to the cross section beyond external direct capture are detected in the data at E < 0.5 MeV. Most importantly, the S-factor at zero energy is constrained to be S(0) = 21.3 ± 0.7 eV b, an uncertainty smaller by a factor of two than that of the previous recommendation. That recommendation was based on the full range for S(0) obtained among a discrete set of models judged to be reasonable. In contrast, Halo EFT subsumes all models into a controlled low-energy approximant, where they are characterized by nine parameters at next-to-leading order. These are fit to data, and marginalized over via Monte Carlo integration to produce the improved prediction for S(E).

  7. Effect of soil property uncertainties on permafrost thaw projections: a calibration-constrained analysis

    NASA Astrophysics Data System (ADS)

    Harp, D. R.; Atchley, A. L.; Painter, S. L.; Coon, E. T.; Wilson, C. J.; Romanovsky, V. E.; Rowland, J. C.

    2016-02-01

    The effects of soil property uncertainties on permafrost thaw projections are studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The null-space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of predictive uncertainty (due to soil property (parametric) uncertainty) and the inter-annual climate variability due to year-to-year differences in CESM climate forcings. After calibrating to measured borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant predictive uncertainties in projected active layer thickness (ALT) and annual thaw depth-duration even with a specified future climate. Inter-annual climate variability in projected soil moisture content and Stefan number is small. A volume- and time-integrated Stefan number decreases significantly, indicating a shift in subsurface energy utilization in the future climate (latent heat of phase change becomes more important than heat conduction). Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we quantify the relative magnitude of soil property uncertainty against another source of permafrost projection uncertainty, structural climate model uncertainty. We show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.
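
    The projection stage of this workflow is simple to sketch: push every calibration-consistent parameter set through the forward model and summarize the spread. The `forward_model` wrapper and the 5-95% band below are assumed conventions for illustration, not the paper's exact post-processing.

```python
import numpy as np

def project_ensemble(param_sets, forward_model, years):
    """Propagate calibration-constrained parameter sets through a model.

    param_sets:    iterable of parameter vectors from the null-space
                   Monte Carlo calibration (assumed given).
    forward_model: hypothetical wrapper returning, e.g., active layer
                   thickness per year for one parameter set.
    Returns per-year median and a 5-95% predictive band.
    """
    runs = np.array([forward_model(p, years) for p in param_sets])
    return (np.median(runs, axis=0),
            np.percentile(runs, 5, axis=0),
            np.percentile(runs, 95, axis=0))
```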

  8. Application of advanced data assimilation techniques to the study of cloud and precipitation feedbacks in the tropical climate system

    NASA Astrophysics Data System (ADS)

    Posselt, Derek J.

    The research documented in this study centers on two topics: evaluation of the response of precipitating cloud systems to changes in the tropical climate system, and assimilation of cloud and precipitation information from remote-sensing platforms. The motivation for this work proceeds from the following outstanding problems: (1) Use of models to study the response of clouds to perturbations in the climate system is hampered by uncertainties in cloud microphysical parameterizations. (2) Though there is an ever-growing set of available observations, cloud and precipitation assimilation remains a difficult problem, particularly in the tropics. (3) Though it is widely acknowledged that cloud and precipitation processes play a key role in regulating the Earth's response to surface warming, the response of the tropical hydrologic cycle to climate perturbations remains largely unknown. The above issues are addressed in the following manner. First, Markov chain Monte Carlo (MCMC) methods are used to quantify the sensitivity of the NASA Goddard Cumulus Ensemble (GCE) cloud resolving model (CRM) to changes in its cloud microphysical parameters. TRMM retrievals of precipitation rate, cloud properties, and radiative fluxes and heating rates over the South China Sea are then assimilated into the GCE model to constrain cloud microphysical parameters to values characteristic of convection in the tropics, and the resulting observation-constrained model is used to assess the response of the tropical hydrologic cycle to surface warming. The major findings of this study are the following: (1) MCMC provides an effective tool with which to evaluate both model parameterizations and the assumption of Gaussian statistics used in optimal estimation procedures. (2) Statistics of the tropical radiation budget and hydrologic cycle can be used to effectively constrain CRM cloud microphysical parameters. (3) For 2D CRM simulations run with and without shear, the precipitation efficiency of cloud systems increases with increasing sea surface temperature, while the high cloud fraction and outgoing shortwave radiation decrease.

  9. Global Gross Primary Productivity for 2015 Inferred from OCO-2 SIF and a Carbon-Cycle Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Norton, A.; Rayner, P. J.; Scholze, M.; Koffi, E. N. D.

    2016-12-01

    The CMIP5 intercomparison study, among other studies (e.g. Bodman et al., 2013), has shown that the land carbon flux contributes significantly to the uncertainty in projections of future CO2 concentration and climate (Friedlingstein et al., 2014). The main challenge lies in disaggregating the relatively well-known net land carbon flux into its component fluxes, gross primary production (GPP) and respiration. Model simulations of these processes disagree considerably, and the lack of accurate observations of photosynthetic activity has been a hindrance. Here we build upon the Carbon Cycle Data Assimilation System (CCDAS) (Rayner et al., 2005) to constrain estimates of one of these uncertain fluxes, GPP, using satellite observations of Solar Induced Fluorescence (SIF). SIF has considerable benefits over other proxy observations as it tracks not just the presence of vegetation but actual photosynthetic activity (Walther et al., 2016; Yang et al., 2015). To combine these observations with process-based simulations of GPP we have coupled the model SCOPE with the CCDAS model BETHY. This provides a mechanistic relationship between SIF and GPP, and the means to constrain the processes relevant to SIF and GPP via model parameters in a data assimilation system. We ingest SIF observations from NASA's Orbiting Carbon Observatory 2 (OCO-2) for 2015 into the data assimilation system to constrain estimates of GPP in space and time, while allowing for explicit consideration of uncertainties in parameters and observations. Here we present first results of the assimilation with SIF. Preliminary results indicate a constraint on global annual GPP of at least 75% when using SIF observations, reducing the uncertainty to < 3 PgC yr⁻¹. A large portion of the constraint is propagated via parameters that describe leaf phenology. These results help to bring together state-of-the-art observations and models to improve understanding and predictive capability of GPP.

  10. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by the integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with a large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize the independence from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, the Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests, but the parameter optimization scheme successfully reduced these biases. Our analysis shows that terrestrial carbon and water cycle simulations in monsoon Asia were greatly improved, and that the use of multiple satellite observations within this framework is an effective way to improve terrestrial biosphere models.
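
    A minimal sketch of this tiered calibration idea follows: each sub-model's free parameters are optimized against its own constraint data while previously calibrated tiers stay frozen. The tier list, residual functions, and optimizer choice are illustrative assumptions, not Biome-BGC specifics.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_tiers(tiers, x0):
    """Sequentially calibrate sub-models against their own data.

    `tiers` is an ordered list of (residual_fn, free_idx) pairs, e.g. the
    snow sub-model against snow cover, then soil water against upscaled
    ET, then photosynthesis against GPP. Each residual_fn sees the full
    parameter vector, but only the entries in free_idx are optimized;
    earlier tiers' results are kept frozen.
    """
    x = np.array(x0, float)
    for residual_fn, free_idx in tiers:
        def cost(sub, x=x, idx=free_idx, f=residual_fn):
            full = x.copy()
            full[idx] = sub
            return np.sum(f(full) ** 2)  # sum of squared residuals
        res = minimize(cost, x[free_idx], method="Nelder-Mead")
        x[free_idx] = res.x  # freeze this tier before moving on
    return x
```

    Freezing earlier tiers keeps each optimization low-dimensional, at the cost of ignoring interactions between tiers; managing that trade-off is the point of the hierarchical analysis.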

  11. CONSTRAINTS ON HYBRID METRIC-PALATINI GRAVITY FROM BACKGROUND EVOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lima, N. A.; Barreto, V. S.

    2016-02-20

    In this work, we introduce two models of the hybrid metric-Palatini theory of gravitation. We explore their background evolution, showing explicitly that one recovers standard General Relativity with an effective cosmological constant at late times. This happens because the Palatini Ricci scalar evolves toward, and asymptotically settles at, the minimum of its effective potential during cosmological evolution. We then use a combination of cosmic microwave background, supernovae, and baryon acoustic oscillation background data to constrain the models' free parameters. For both models, we are able to constrain the maximum deviation from the gravitational constant G one can have at early times to be around 1%.

  12. Phenomenological Consequences of the Constrained Exceptional Supersymmetric Standard Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athron, Peter; King, S. F.; Miller, D. J.

    2010-02-10

    The Exceptional Supersymmetric Standard Model (E_6SSM) provides a low energy alternative to the MSSM, with an extra gauged U(1)_N symmetry, solving the mu-problem of the MSSM. Inspired by the possible embedding into an E_6 GUT, the matter content fills three generations of E_6 multiplets, thus predicting exciting exotic matter such as diquarks or leptoquarks. We present predictions from a constrained version of the model (cE_6SSM), with a universal scalar mass m_0, trilinear mass A and gaugino mass M_1/2. We reveal a large volume of the cE_6SSM parameter space where the correct breakdown of the gauge symmetry is achieved and all experimental constraints are satisfied. We predict a hierarchical particle spectrum with heavy scalars and light gauginos, while the new exotic matter can be light or heavy depending on parameters. We present representative cE_6SSM scenarios, demonstrating that there could be light exotic particles, like leptoquarks and a U(1)_N Z' boson, with spectacular signals at the LHC.

  13. Dark energy and equivalence principle constraints from astrophysical tests of the stability of the fine-structure constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martins, C.J.A.P.; Pinho, A.M.M.; Alves, R.F.C.

    2015-08-01

    Astrophysical tests of the stability of fundamental couplings, such as the fine-structure constant α, are becoming an increasingly powerful probe of new physics. Here we discuss how these measurements, combined with local atomic clock tests and Type Ia supernova and Hubble parameter data, constrain the simplest class of dynamical dark energy models where the same degree of freedom is assumed to provide both the dark energy and (through a dimensionless coupling, ζ, to the electromagnetic sector) the α variation. Specifically, current data tightly constrain a combination of ζ and the present dark energy equation of state w_0. Moreover, in these models the new degree of freedom inevitably couples to nucleons (through the α dependence of their masses) and leads to violations of the Weak Equivalence Principle. We obtain indirect bounds on the Eötvös parameter η that are typically stronger than the current direct ones. We discuss the model-dependence of our results and briefly comment on how the forthcoming generation of high-resolution ultra-stable spectrographs will enable significantly tighter constraints.

  14. A Four-parameter Budyko Equation for Mean Annual Water Balance

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Wang, D.

    2016-12-01

    In this study, a four-parameter Budyko equation for long-term water balance at the watershed scale is derived based on the proportionality relationships of the two-stage partitioning of precipitation. The four-parameter Budyko equation provides a practical solution to balance model simplicity against representation of the dominant hydrologic processes. Under the four-parameter Budyko framework, the key hydrologic processes related to the lower bound of the Budyko curve are determined: the lower bound corresponds to the situation where surface runoff and the initial evaporation not competing with base flow generation are zero. The derived model is applied to 166 MOPEX watersheds in the United States, and the dominant controlling factors on each parameter are determined. Then, four statistical models are proposed to predict the four model parameters based on the dominant controlling factors, e.g., saturated hydraulic conductivity, fraction of sand, time period between two storms, watershed slope, and the Normalized Difference Vegetation Index. This study shows a potential application of the four-parameter Budyko equation to constrain land-surface parameterizations in ungauged watersheds or general circulation models.
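
    The abstract does not reproduce the four-parameter equation itself. For orientation, Budyko-type equations express the mean-annual evaporation ratio as a function of the aridity index; the widely used one-parameter Fu curve shown below is the kind of relation that multi-parameter forms generalize.

```latex
% Budyko framework: mean-annual evaporation ratio E/P as a function of
% the aridity index E_p/P (potential evaporation over precipitation).
\frac{E}{P} = F\!\left(\frac{E_p}{P}\right)

% Fu's one-parameter curve, a widely used member of this family:
\frac{E}{P} = 1 + \frac{E_p}{P} - \left[ 1 + \left( \frac{E_p}{P} \right)^{\omega} \right]^{1/\omega}
```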

  15. Constraining viscous dark energy models with the latest cosmological data

    NASA Astrophysics Data System (ADS)

    Wang, Deng; Yan, Yang-Jie; Meng, Xin-He

    2017-10-01

    Based on the assumption that dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they deviate only very slightly from the standard cosmological model and can effectively alleviate the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and that the scale-invariant primordial power spectrum is strongly excluded, at least at the 5.5σ confidence level, in the three VDE models, in agreement with the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios.

  16. The bulk composition of Titan's atmosphere.

    NASA Technical Reports Server (NTRS)

    Trafton, L.

    1972-01-01

    Consideration of the physical constraints for Titan's atmosphere leads to a model which describes the bulk composition of the atmosphere in terms of observable parameters. Intermediate-resolution photometric scans of both Saturn and Titan, including scans of the Q branch of Titan's methane band, constrain these parameters in such a way that the model indicates the presence of another important atmospheric gas, namely, another bulk constituent or a significant thermal opacity. Further progress in determining the composition and state of Titan's atmosphere requires additional observations to eliminate present ambiguities. For this purpose, particular observational targets are suggested.

  17. 2D magnetotelluric inversion using reflection seismic images as constraints and application in the COSC project

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.

    2017-04-01

    We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods that are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project, using constraints from the envelope attribute of the COSC reflection seismic profile (CSP), helped to reduce the uncertainty in the interpretation of the main décollement. The new model thus supports the proposed location of the future borehole COSC-2, which is intended to penetrate the main décollement and the underlying Precambrian basement.
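
    The core idea is easy to sketch: relax the smoothness constraint wherever the seismic envelope changes rapidly, so resistivity contrasts are allowed to coincide with reflectors. The weight formula below is an assumed stand-in, not the published one.

```python
import numpy as np

def envelope_weighted_smoothness(envelope, s=1.0):
    """Directional weights for 2-D smoothness regularization.

    envelope: seismic envelope attribute on the model grid (ny, nx).
    s:        global stabilization parameter (described in the paper);
              larger s relaxes smoothing across reflectors more strongly.
    Returns horizontal and vertical weights in (0, 1] to multiply the
    corresponding rows of the roughness operator.
    """
    gy, gx = np.gradient(envelope)
    # Normalize gradients and down-weight smoothing across strong ones.
    wy = 1.0 / (1.0 + s * np.abs(gy) / (np.abs(gy).max() + 1e-12))
    wx = 1.0 / (1.0 + s * np.abs(gx) / (np.abs(gx).max() + 1e-12))
    return wx, wy
```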

  18. Stochastic reduced order models for inverse problems under uncertainty

    PubMed Central

    Warner, James E.; Aquino, Wilkins; Grigoriu, Mircea D.

    2014-01-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using an SROM, a low-dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well. PMID:25558115
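
    To give a feel for what an SROM is, here is a reduced sketch: for fixed sample locations, choose probabilities so the discrete model reproduces the target's moments. A full SROM also matches the CDF and optimizes the sample locations themselves; those refinements are omitted here.

```python
import numpy as np
from scipy.optimize import minimize

def fit_srom(samples, target_draws, n_moments=2):
    """Fit SROM probabilities for fixed sample locations (1-D case).

    samples:      the m chosen SROM sample points.
    target_draws: many Monte Carlo draws of the random variable being
                  approximated. Probabilities p_k >= 0 with sum p = 1
                  are chosen to reproduce the target's moments.
    """
    samples = np.asarray(samples, float)
    target_draws = np.asarray(target_draws, float)
    t_moms = [np.mean(target_draws ** q) for q in range(1, n_moments + 1)]

    def err(p):
        # Squared mismatch between SROM moments and target moments.
        return sum((np.sum(p * samples ** q) - t) ** 2
                   for q, t in zip(range(1, n_moments + 1), t_moms))

    m = len(samples)
    cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)
    res = minimize(err, np.full(m, 1.0 / m), bounds=[(0.0, 1.0)] * m,
                   constraints=cons, method="SLSQP")
    return res.x  # SROM probabilities
```

    Because the random quantity is now a small weighted sample set, expectations in the outer optimization reduce to finite sums over calls to the existing deterministic solver, which is what makes the approach non-intrusive.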

  19. An articulatorily constrained, maximum entropy approach to speech recognition and speech coding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, J.

    Hidden Markov models (HMMs) are among the most popular tools for performing computer speech recognition. One of the primary reasons that HMMs typically outperform other speech recognition techniques is that the parameters used for recognition are determined by the data, not by preconceived notions of what the parameters should be. This makes HMMs better able to deal with intra- and inter-speaker variability despite the limited knowledge of how speech signals vary and despite the often limited ability to correctly formulate rules describing variability and invariance in speech. In fact, it is often the case that when HMM parameter values are constrained using the limited knowledge of speech, recognition performance decreases. However, the structure of an HMM has little in common with the mechanisms underlying speech production. Here, the author argues that by using probabilistic models that more accurately embody the process of speech production, he can create models that have all the advantages of HMMs, but that should more accurately capture the statistical properties of real speech samples, presumably leading to more accurate speech recognition. The model he will discuss uses the fact that speech articulators move smoothly and continuously. Before discussing how to use articulatory constraints, he will give a brief description of HMMs. This will allow him to highlight the similarities and differences between HMMs and the proposed technique.

  20. Diagnostics of models and observations in the contexts of exoplanets, brown dwarfs, and very low-mass stars.

    NASA Astrophysics Data System (ADS)

    Kopytova, Taisiya

    2016-01-01

    When studying isolated brown dwarfs and directly imaged exoplanets with insignificant orbital motion, we have to rely on theoretical models to determine basic parameters such as mass, age, effective temperature, and surface gravity. While stellar and atmospheric models are rapidly evolving, we need a powerful tool to test and calibrate them. In my thesis, I focused on comparing interior and atmospheric models with observational data, in an effort to take into account various systematic effects that can significantly influence the data analysis. As a first step, about 460 candidate members of the Hyades were screened for companions using diffraction-limited imaging observations (both our own data and archival data). As a result, I could establish the single-star sequence for the Hyades, comprising about 250 stars (Kopytova et al. 2015, accepted to A&A). Open clusters contain many coeval objects of the same chemical composition and age, spanning a range of masses. We compare the obtained sequence with a set of theoretical isochrones, identifying systematic offsets and revealing probable issues in the models. However, there are many cases where it is impossible to test models before comparing them with observations. As a second step, we apply atmospheric models to constrain parameters of WISE 0855-07, the coolest known Y dwarf (Kopytova et al. 2014, ApJ 797, 3). We demonstrate the limits of constraining effective temperature and the presence/absence of water clouds. As a third step, we introduce a novel method to take into account the above-mentioned systematics. We construct a "systematics vector" that allows us to reveal problematic wavelength ranges when fitting atmospheric models to observed near-infrared spectra of brown dwarfs and exoplanets (Kopytova et al., in prep.). This approach plays a crucial role when retrieving abundances for these objects, in particular the C/O ratio. The latter parameter is an important key to formation scenarios of brown dwarfs and exoplanets. We show a way to constrain the C/O ratio while eliminating systematic effects, which significantly improves the reliability of the final result and our conclusions about the formation history of certain exoplanets and brown dwarfs.

  1. Effect of soil property uncertainties on permafrost thaw projections: a calibration-constrained analysis: Modeling Archive

    DOE Data Explorer

    J.C. Rowland; D.R. Harp; C.J. Wilson; A.L. Atchley; V.E. Romanovsky; E.T. Coon; S.L. Painter

    2016-02-02

    This Modeling Archive is in support of an NGEE Arctic publication available at doi:10.5194/tc-10-341-2016. This dataset contains an ensemble of thermal-hydro soil parameters including porosity, thermal conductivity, thermal conductivity shape parameters, and residual saturation of peat and mineral soil. The ensemble was generated using a Null-Space Monte Carlo analysis of parameter uncertainty based on a calibration to soil temperatures collected at the Barrow Environmental Observatory site by the NGEE team. The micro-topography of ice wedge polygons present at the site is included in the analysis using three 1D column models to represent polygon center, rim and trough features. The Arctic Terrestrial Simulator (ATS) was used in the calibration to model multiphase thermal and hydrological processes in the subsurface.

  2. Measuring tongue shapes and positions with ultrasound imaging: a validation experiment using an articulatory model.

    PubMed

    Ménard, Lucie; Aubin, Jérôme; Thibeault, Mélanie; Richard, Gabrielle

    2012-01-01

    The goal of this paper is to assess the validity of various metrics developed to characterize tongue shapes and positions collected through ultrasound imaging in experimental setups where the probe is not constrained relative to the subject's head. Midsagittal contours were generated using an articulatory-acoustic model of the vocal tract. Sections of the tongue were extracted to simulate ultrasound imaging. Various transformations were applied to the tongue contours in order to simulate ultrasound probe displacements: vertical displacement, horizontal displacement, and rotation. The proposed data analysis method reshapes tongue contours into triangles and then extracts measures of angles, x and y coordinates of the highest point of the tongue, curvature degree, and curvature position. Parameters related to the absolute tongue position (tongue height and front/back position) are more sensitive to horizontal and vertical displacements of the probe, whereas parameters related to tongue curvature are less sensitive to such displacements. Because of their robustness to probe displacements, parameters related to tongue shape (especially curvature) are particularly well suited to cases where the transducer is not constrained relative to the head (studies with clinical populations or children). Copyright © 2011 S. Karger AG, Basel.
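
    A minimal sketch of shape metrics of this kind follows, assuming the contour is given as x/y arrays ordered front to back; the triangle construction and Menger-curvature index below are simplified stand-ins for the paper's exact parameterization.

```python
import numpy as np

def tongue_shape_metrics(x, y):
    """Shape metrics for a midsagittal tongue contour.

    Builds the triangle formed by the two contour endpoints and the
    highest point, then reports the highest point, a Menger-curvature
    index, and the relative front-back position of the curvature peak.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    k = int(np.argmax(y))                  # highest point of the tongue
    a = np.array([x[0], y[0]])             # anterior endpoint
    b = np.array([x[k], y[k]])             # apex
    c = np.array([x[-1], y[-1]])           # posterior endpoint
    # Menger curvature of three points: 4 * triangle area / side product.
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                     - (b[1] - a[1]) * (c[0] - a[0]))
    sides = (np.linalg.norm(b - a) * np.linalg.norm(c - b)
             * np.linalg.norm(a - c))
    curvature = 4.0 * area / (sides + 1e-12)
    # Relative curvature position along the front-back axis (0 = front).
    position = (x[k] - x[0]) / (x[-1] - x[0] + 1e-12)
    return {"highest_x": x[k], "highest_y": y[k],
            "curvature": curvature, "curv_position": position}
```

    Note that the curvature and position values are ratios of distances, which is why metrics of this family tolerate probe translation better than raw tongue-height coordinates do.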

  3. Explaining postseismic and aseismic transient deformation in subduction zones with rate and state friction modeling constrained by lab and geodetic observations

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Dedontney, N. L.; Rice, J. R.

    2007-12-01

    Rate and state friction, as applied to modeling subduction earthquake sequences, routinely predicts postseismic slip. It also predicts spontaneous aseismic slip transients, at least when pore pressure p is highly elevated near and downdip from the stability transition [Liu and Rice, 2007]. Here we address how to make such postseismic and transient predictions more fully compatible with geophysical observations. For example, lab observations can determine the a, b parameters and state-evolution slip distance L of rate and state friction as functions of lithology and temperature and, with the aid of a structural and thermal model of the subduction zone, as functions of downdip distance. Geodetic observations constrain interseismic, postseismic and aseismic transient deformations, which are controlled in the modeling by the distributions of a σ̄ and b σ̄ (parameters which also partly control the seismic rupture phase), where σ̄ = σ − p is the effective normal stress. Elevated p, controlled by tectonic compression and dehydration, may be constrained by petrologic and seismic observations. The amount of deformation and downdip extent of the slipping zone associated with the spontaneous quasi-periodic transients, as thus far modeled [Liu and Rice, 2007], is generally smaller than that observed during episodes of slow slip events in the northern Cascadia and SW Japan subduction zones. However, the modeling was based on lab data for granite gouge under hydrothermal conditions because data are most complete for that case. We here report modeling based on lab data on dry granite gouge [Stesky, 1975; Lockner et al., 1986], involving no or lessened chemical interaction with water and hence being a possibly closer analog to dehydrated oceanic crust, and limited data on gabbro gouge [He et al., 2007], an expected lithology. Both data sets show a much less rapid increase of a-b with temperature above the stability transition (~350 °C) than does wet granite gouge; a-b increases to ~0.08 for wet granite at 600 °C, but to only ~0.01 in the dry granite and gabbro cases. We find that the lessened high-T a-b does, for the same σ̄, modestly extend the transient slip episodes further downdip, although a majority of slip is still contributed near and in the updip rate-weakening region. However, postseismic slip, for the same σ̄, propagates much further downdip into the rate-strengthening region. To better constrain the downdip distribution of (a-b)σ̄, and possibly aσ̄ and L, we focus on the geodetically constrained [Hutton et al., 2001] space-time distribution of postseismic slip for the 1995 Mw = 8.0 Colima-Jalisco earthquake. This is a similarly shallow-dipping subduction zone with a thermal profile [Currie et al., 2001] comparable to those that have thus far been shown to exhibit aseismic transients and non-volcanic tremor [Peacock et al., 2002]. We extrapolate the modeled 2-D postseismic slip, following a thrust earthquake with a coseismic slip similar to the 1995 event, to a spatial-temporal 3-D distribution. Surface deformation due to such slip on the thrust fault in an elastic half space is calculated and compared to that observed at western Mexico GPS stations, to constrain the above depth-variable model parameters.
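
    For reference, the standard Dieterich-Ruina rate-and-state formulation behind this kind of modeling, written in the abstract's notation (a, b, L, and effective normal stress σ̄ = σ − p); the aging form of the state evolution shown here is one common choice.

```latex
% Dieterich-Ruina rate-and-state friction with effective normal stress
% \bar{\sigma} = \sigma - p (\sigma: total normal stress, p: pore pressure):
\tau = \bar{\sigma} \left[ \mu_0 + a \ln\!\frac{V}{V_0} + b \ln\!\frac{V_0 \theta}{L} \right],
\qquad
\frac{\mathrm{d}\theta}{\mathrm{d}t} = 1 - \frac{V \theta}{L} \quad \text{(aging law)}

% At steady state the velocity dependence reduces to a - b:
% a - b > 0 is rate-strengthening (stable creep), a - b < 0 rate-weakening.
```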

  4. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    NASA Astrophysics Data System (ADS)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernova of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6_{-1.6}^{+2.1}% of the IMF explodes as core-collapse supernova (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10^3 M_⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar nucleosynthesis with far more complex abundance data sets and to refine the physical processes governing the chemical evolution of stellar systems.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Tong; Xue, Li; Zhao, Xiao-Hong

    Black holes (BHs) hide themselves behind various astronomical phenomena and their properties, i.e., mass and spin, are usually difficult to constrain. One leading candidate for the central engine model of gamma-ray bursts (GRBs) invokes a stellar mass BH and a neutrino-dominated accretion flow (NDAF), with the relativistic jet launched due to neutrino-anti-neutrino annihilations. Such a model gives rise to a matter-dominated fireball, and is suitable to interpret GRBs with a dominant thermal component with a photospheric origin. We propose a method to constrain BH mass and spin within the framework of this model and apply the method to the thermally dominant GRB 101219B, whose initial jet launching radius, r_0, is constrained from the data. Using our numerical model of NDAF jets, we estimate the following constraints on the central BH: mass M_BH ∼ 5-9 M_⊙, spin parameter a_* ≳ 0.6, and disk mass 3 M_⊙ ≲ M_disk ≲ 4 M_⊙. Our results also suggest that the NDAF model is a competitive candidate for the central engine of GRBs with a strong thermal component.

  6. Uncertainty in the fate of soil organic carbon: A comparison of three conceptually different soil decomposition models

    USGS Publications Warehouse

    He, Yujie; Yang, Jinyan; Zhuang, Qianlai; McGuire, A. David; Zhu, Qing; Liu, Yaling; Teskey, Robert O.

    2014-01-01

    Conventional Q10 soil organic matter decomposition models and more complex microbial models are available for making projections of future soil carbon dynamics. However, it is unclear (1) how well the conceptually different approaches can simulate observed decomposition and (2) to what extent the trajectories of long-term simulations differ when using the different approaches. In this study, we compared three structurally different soil carbon (C) decomposition models (one Q10 and two microbial models of different complexity), each with a one- and a two-horizon version. The models were calibrated and validated using 4 years of measurements of heterotrophic soil CO2 efflux from trenched plots in a Dahurian larch (Larix gmelinii Rupr.) plantation. All models reproduced the observed heterotrophic component of soil CO2 efflux, but the trajectories of soil carbon dynamics differed substantially in 100-year simulations with and without warming and increased litterfall input, with the microbial models producing better agreement with observed changes in soil organic C in long-term warming experiments. Our results also suggest that both constant and varying carbon use efficiency are plausible when modeling future decomposition dynamics and that the use of a short-term (e.g., a few years) period of measurement is insufficient to adequately constrain model parameters that represent long-term responses of microbial thermal adaptation. These results highlight the need to reframe the representation of decomposition models and to constrain parameters with long-term observations and multiple data streams. We urge caution in interpreting future soil carbon responses derived from existing decomposition models because both conceptual and parameter uncertainties are substantial.
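
    The conventional Q10 formulation referenced here is compact enough to show directly; the default Q10 value of 2.0 below is a common convention, not the paper's calibrated value.

```python
import numpy as np

def q10_respiration(temp_c, r_ref, q10=2.0, t_ref=10.0):
    """Conventional Q10 heterotrophic respiration model.

    R(T) = R_ref * Q10**((T - T_ref) / 10), where r_ref is the rate at
    the reference temperature t_ref (degrees C). The rate multiplies by
    q10 for every 10-degree warming.
    """
    return r_ref * q10 ** ((np.asarray(temp_c, float) - t_ref) / 10.0)
```

    Microbial models instead make decomposition depend on microbial biomass and carbon-use efficiency, which is why their century-scale trajectories can diverge from the Q10 form even when both match a few years of measured efflux.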

  7. Growth rate in the dynamical dark energy models.

    PubMed

    Avsajanishvili, Olga; Arkhipova, Natalia A; Samushia, Lado; Kahniashvili, Tina

    Dark energy models with a slowly rolling cosmological scalar field provide a popular alternative to the standard, time-independent cosmological constant model. We study the simultaneous evolution of background expansion and growth in the scalar field model with the Ratra-Peebles self-interaction potential. We use recent measurements of the linear growth rate and the baryon acoustic oscillation peak positions to constrain the model parameter α that describes the steepness of the scalar field potential.
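
    For reference, the Ratra-Peebles potential takes the inverse power-law form below; normalization conventions for the prefactor vary between papers, so κ here is schematic.

```latex
% Ratra-Peebles inverse power-law potential for the scalar field \phi;
% \alpha > 0 sets the steepness (\alpha \to 0 recovers a cosmological
% constant) and \kappa fixes the energy scale.
V(\phi) = \frac{\kappa}{\phi^{\alpha}}
```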

  8. Constraining Cosmological Models with Different Observations

    NASA Astrophysics Data System (ADS)

    Wei, J. J.

    2016-07-01

    With the observations of Type Ia supernovae (SNe Ia), scientists discovered that the Universe is experiencing an accelerated expansion, which revealed the existence of dark energy in 1998. Since that discovery, cosmology has become a hot topic in physical research. Cosmology is a subject that strongly depends on astronomical observations. Therefore, constraining different cosmological models with all kinds of observations is one of the most important research tasks in modern cosmology. The goal of this thesis is to investigate cosmology using the latest observations. The observations include SNe Ia, Type Ic superluminous supernovae (SLSNe Ic), gamma-ray bursts (GRBs), angular diameter distances of galaxy clusters, strong gravitational lensing, and age measurements of old passive galaxies, etc. In Chapter 1, we briefly review the research background of cosmology, and introduce some cosmological models. Then we summarize the progress on cosmology from all kinds of observations in more detail. In Chapter 2, we present the results of our studies on supernova cosmology. The main difficulty with the use of SNe Ia as standard candles is that one must optimize three or four nuisance parameters characterizing SN luminosities simultaneously with the parameters of an expansion model of the Universe. We have confirmed that one should optimize all of the parameters by carrying out the method of maximum likelihood estimation in any situation where the parameters include an unknown intrinsic dispersion. The commonly used method, which estimates the dispersion by requiring the reduced χ^{2} to equal unity, does not take into account all possible variances among the parameters. We carry out such a comparison of the standard ΛCDM cosmology and the R_{h}=ct Universe using the SN Legacy Survey sample of 252 SN events, and show that each model fits its individually reduced data very well. Moreover, it is quite evident that SLSNe Ic may be useful cosmological probes, perhaps even out to redshifts much greater (z≫2) than those accessible using SNe Ia. However, the currently available sample of SLSNe Ic is still quite small. Our simulations have shown that if SLSNe Ic can be commonly detected in the future, they have the potential of greatly refining the measurement of cosmological parameters, particularly the parameter w_{de} of the dark energy equation of state. In Chapter 3, we focus on GRB cosmology. We first use GRBs as standard candles in constructing the Hubble diagram at redshifts beyond the current reach of SNe Ia observations. Then we measure the high-z star formation rate (SFR) using GRBs. We confirm that the latest Swift sample of GRBs reveals an increasing evolution in the GRB rate relative to the SFR at high redshifts. The observed discrepancy between the GRB rate and the SFR may be eliminated by assuming a cosmic evolution in metallicity. Assuming that the SFR and GRB rate are related via an evolving metallicity, we find that the GRB data constrain the slope of the high-z SFR to be -2.41_{-2.09}^{+1.87}. In addition, first stars can only form in structures that are suitably dense, which can be parameterized by the minimum dark matter halo mass M_{min}; M_{min} must play an important role in star formation. We can constrain M_{min}<10^{12.5} M_{⊙} at the 68% confidence level from the GRB data. In Chapter 4, we assemble a catalog of 69 strong gravitational lensing systems, and carefully introduce how to constrain cosmological parameters using these important data. We find that both ΛCDM and the R_{h}=ct Universe account for the lens observations quite well, though the precision of these measurements does not appear to be good enough to favor one model over the other. In Chapters 5 and 6, we use measurements of galaxy-cluster angular diameter distances and 32 age measurements of passively evolving galaxies to test and compare the standard model (ΛCDM) and the R_{h}=ct Universe, respectively. We show that both models appear to account for these two data sets very well. However, because of the different numbers of free parameters in these models, we have to judge the goodness-of-fit of cosmological models with selection tools, such as the Akaike, Kullback, and Bayes Information Criteria, which favor R_{h}=ct over ΛCDM with likelihoods of about 70%, 75%, and 80%, respectively. Finally, some open questions and an outlook for the field of cosmology are summarized in Chapter 7.
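
    The maximum-likelihood treatment with an unknown intrinsic dispersion mentioned above has a standard form, assumed here since the abstract does not write it out: the dispersion enters the normalization as well as the χ² term, which is why tuning σ_int until the reduced χ² equals unity is not equivalent to maximizing the likelihood.

```latex
% Gaussian likelihood with an unknown intrinsic dispersion \sigma_{int};
% the second (normalization) term depends on \sigma_{int}, so it cannot
% be dropped when the dispersion is a fitted parameter.
-2 \ln \mathcal{L} = \sum_i \left[ \frac{\left( \mu_{\mathrm{obs},i} - \mu_{\mathrm{model},i} \right)^2}{\sigma_i^2 + \sigma_{\mathrm{int}}^2} + \ln\!\left( \sigma_i^2 + \sigma_{\mathrm{int}}^2 \right) \right] + \mathrm{const.}
```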

  9. Constraints on CDM cosmology from galaxy power spectrum, CMB and SNIa evolution

    NASA Astrophysics Data System (ADS)

    Ferramacho, L. D.; Blanchard, A.; Zolnierowski, Y.

    2009-05-01

    Aims: We examine the constraints that can be obtained on standard cold dark matter models from the most commonly used data sets: CMB anisotropies, type Ia supernovae and the SDSS luminous red galaxies. We also examine how these constraints are widened when the equation of state parameter w and the curvature parameter Ωk are left as free parameters. Finally, we investigate the impact on these constraints of a possible form of evolution in SNIa intrinsic luminosity. Methods: We obtained our results from MCMC analysis using the full likelihood of each data set. Results: For the ΛCDM model, our “vanilla” model, cosmological parameters are tightly constrained and consistent with current estimates from various methods. When the dark energy parameter w is free we find that the constraints remain mostly unchanged, i.e. changes are smaller than the 1 sigma uncertainties. Similarly, relaxing the assumption of a flat universe leads to nearly identical constraints on the dark energy density parameter of the universe Ω_Λ, the baryon density of the universe Ω_b, the optical depth τ, and the index of the power spectrum of primordial fluctuations n_S, with most one sigma uncertainties better than 5%. More significant changes appear on other parameters: while preferred values are almost unchanged, uncertainties for the physical dark matter density Ω_ch^2, the Hubble constant H0 and σ8 are typically twice as large. The constraint on the age of the Universe, which is very accurate for the vanilla model, is the most degraded. We found that different methodological approaches to large-scale structure estimates lead to appreciable differences in preferred values and uncertainty widths. We found that possible evolution in SNIa intrinsic luminosity does not alter these constraints by much, except for w, for which the uncertainty is twice as large. At the same time, this possible evolution is severely constrained. Conclusions: We conclude that systematic uncertainties for some estimated quantities are similar to or larger than statistical ones.

  10. Seismic structure of the European upper mantle based on adjoint tomography

    NASA Astrophysics Data System (ADS)

    Zhu, Hejun; Bozdağ, Ebru; Tromp, Jeroen

    2015-04-01

    We use adjoint tomography to iteratively determine seismic models of the crust and upper mantle beneath the European continent and the North Atlantic Ocean. Three-component seismograms from 190 earthquakes recorded by 745 seismographic stations are employed in the inversion. Crustal model EPcrust combined with mantle model S362ANI comprises the 3-D starting model, EU00. Before the structural inversion, earthquake source parameters, for example centroid moment tensors and locations, are reinverted based on global 3-D Green's functions and Fréchet derivatives. This study consists of three stages. In stage one, frequency-dependent phase differences between observed and simulated seismograms are used to constrain radially anisotropic wave speed variations. In stage two, frequency-dependent phase and amplitude measurements are combined to simultaneously constrain elastic wave speeds and anelastic attenuation. In these two stages, long-period surface waves and short-period body waves are combined to simultaneously constrain shallow and deep structures. In stage three, frequency-dependent phase and amplitude anomalies of three-component surface waves are used to simultaneously constrain radial and azimuthal anisotropy. After this three-stage inversion, we obtain a new seismic model of the European crust and upper mantle, named EU60. Improvements in misfit histograms for both phase and amplitude help us to validate this three-stage inversion strategy. Long-wavelength elastic wave speed variations in model EU60 compare favourably with previous body- and surface-wave tomographic models. Some hitherto unidentified features, such as the Adria microplate, naturally emerge from the smooth starting model. Subducting slabs, slab detachments, ancient suture zones, continental rifts and backarc basins are well resolved in model EU60. We find an anticorrelation between shear wave speed and anelastic attenuation at depths < 100 km. At greater depths, this anticorrelation becomes relatively weak, in agreement with previous global attenuation studies. Furthermore, enhanced attenuation is observed within the mantle transition zone beneath the North Atlantic Ocean. Consistent with typical radial anisotropy in 1-D reference models, the European continent is dominated by features with a radially anisotropic parameter ξ > 1, indicating predominantly horizontal flow within the upper mantle. In addition, subduction zones, such as the Apennines and Hellenic arcs, are characterized by vertical flow with ξ < 1 at depths greater than 150 km. We find that the direction of the fast anisotropic axis is closely tied to the tectonic evolution of the region. Averaged radial peak-to-peak anisotropic strength profiles identify distinct brittle-ductile deformation in lithospheric strength beneath oceans and continents. Finally, we use the 'point-spread function' to assess image quality and analyse trade-offs between different model parameters.
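
    The frequency-dependent phase differences that drive stage one of the inversion are, in essence, traveltime shifts measured between observed and simulated seismograms. The sketch below measures such a shift by cross-correlation on synthetic traces; it illustrates the type of measurement only, not the actual adjoint machinery.

      # Sketch: measure the time shift (phase anomaly) between an "observed" and
      # a "synthetic" seismogram by cross-correlation.
      import numpy as np

      dt = 0.05                                    # sample interval [s]
      t = np.arange(0.0, 60.0, dt)
      synthetic = np.exp(-((t - 30.0) / 3.0) ** 2) * np.sin(2 * np.pi * 0.2 * t)
      observed = np.exp(-((t - 31.2) / 3.0) ** 2) * np.sin(2 * np.pi * 0.2 * (t - 1.2))

      xc = np.correlate(observed, synthetic, mode="full")
      lag = (np.argmax(xc) - (len(t) - 1)) * dt    # positive: observed arrives late
      print(f"traveltime anomaly: {lag:+.2f} s")   # ~ +1.20 s, a slow path anomaly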

  11. How well can we measure supermassive black hole spin?

    NASA Astrophysics Data System (ADS)

    Bonson, K.; Gallo, L. C.

    2016-05-01

    Being one of only two fundamental properties black holes possess, the spin of supermassive black holes (SMBHs) is of great interest for understanding accretion processes and galaxy evolution. However, in these early days of spin measurements, consistency and reproducibility of spin constraints have been a challenge. Here, we focus on X-ray spectral modelling of active galactic nuclei (AGN), examining how well we can truly recover known reflection parameters such as spin under standard conditions. We have created and fit over 4000 simulated Seyfert 1 spectra, each with 375±1k counts. We assess fits with a reflection fraction of R = 1 as well as reflection-dominated AGN with R = 5. We also examine the consequence of permitting fits to search for retrograde spin. In general, we discover that most parameters are overestimated when spectroscopy is restricted to the 2.5-10.0 keV regime and that models are insensitive to the inner emissivity index and ionization. When the bandpass is extended out to 70 keV, parameters are more accurately estimated. Repeating the process for R = 5 reduces our ability to measure the photon index (~3 to 8 per cent error, overestimated), but increases precision in all other parameters - most notably ionization, which becomes better constrained (±45 erg cm s^{-1}) for low-ionization parameters (ξ < 200 erg cm s^{-1}). In all cases, we find the spin parameter is only well measured for the most rapidly rotating SMBHs (i.e. a > 0.8, to about ±0.10) and that the inner emissivity index is never well constrained. Allowing our model to search for retrograde spin did not improve the results.

  12. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2011-12-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced into an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using the optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
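
    The MVFSA sampler itself is more elaborate, but its core is a simulated-annealing walk through a bounded parameter space that accepts uphill moves with a temperature-dependent probability. The sketch below is a generic version under stated assumptions: the quadratic "skill error" stands in for a WRF-versus-observation score, and the bounds and target values of the five KF-like parameters are invented.

      # Sketch: simulated annealing over a bounded 5-parameter space, a
      # simplified stand-in for the MVFSA importance sampling described above.
      import numpy as np

      rng = np.random.default_rng(42)
      lo = np.array([0.0, 0.0, 0.0, 0.0, 0.0])      # parameter lower bounds
      hi = np.array([1.0, 1.0, 1.0, 2.0, 2.0])      # parameter upper bounds
      target = np.array([0.3, 0.7, 0.5, 1.2, 0.4])  # pretend-optimal values

      def skill_error(p):
          return np.sum((p - target) ** 2)          # proxy for model-vs-obs error

      p = rng.uniform(lo, hi)
      e = skill_error(p)
      best_p, best_e = p.copy(), e
      for k in range(5000):
          temp = 1.0 / np.sqrt(1.0 + k)             # cooling schedule
          prop = np.clip(p + rng.normal(0.0, 0.05 * (hi - lo)), lo, hi)
          e_prop = skill_error(prop)
          # Accept downhill moves always, uphill moves with annealed probability.
          if e_prop < e or rng.uniform() < np.exp((e - e_prop) / temp):
              p, e = prop, e_prop
              if e < best_e:
                  best_p, best_e = p.copy(), e
      print("best parameters:", np.round(best_p, 3))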

  13. Uncertainty Quantification and Parameter Tuning: A Case Study of Convective Parameterization Scheme in the WRF Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.

    2012-04-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced into an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using the optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.

  14. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2012-03-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced into an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using the optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.

  15. Understanding the Physical Structure of the Comet Shoemaker-Levy 9 Fragments

    NASA Astrophysics Data System (ADS)

    Rettig, Terrence

    2000-07-01

    Images of the fragmented comet Shoemaker-Levy 9 (SL9) as it approached Jupiter in 1994 provided a unique opportunity to (1) probe the comae, (2) understand the structure of the 20 cometary objects, and (3) provide limits on the Jovian impact parameters. The primary cometary questions were: how were the fragments formed, and what was their central structure? There still remains a diversity of opinion regarding the structure of the 21 comet-like fragments as well as the specifics of the disruption event itself. We have shown from Monte Carlo modeling of surface brightness profiles that the SL9 fragments had unusual dust size distributions and outflow velocities. Further preliminary work showed that some of the central reflecting area excesses derived from surface-brightness-profile fitting (with the PSF) appeared distributed rather than centrally concentrated, as would be expected for comet-like objects; some central excesses were negative, and the excesses could vary with time. With an improved coma subtraction technique, we propose to model each coma surface brightness profile and extract central reflecting areas or central brightness excesses for the non-star-contaminated WFPC-2 SL9 images, to determine the behavior and characteristics of the central excesses as the fragments approached Jupiter. A second phase of the proposal will use numerical techniques (in conjunction with D. Richardson) to investigate the various fragment models. This difficult modeling process will allow us to model the structure and physical characteristics of the fragments and thus constrain parameters for the Jovian impact events. The results will be used to constrain the structure of the central fragment cores of SL9 and how the observed dust comae were produced. The results will provide evidence to discriminate between the parent nucleus models (i.e., were the fragments solid objects or swarms of particles?) and provide better constraints on the atmospheric impact models. The physical characteristics of cometary nuclei are not well understood, and the SL9 data provide an important opportunity to constrain these parameters.
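
    The Monte Carlo surface-brightness modeling itself is well beyond a snippet, but the decomposition the proposal rests on can be sketched: a steady-state coma falls off roughly as 1/ρ, and a central excess appears as a PSF-shaped component on top of it. The Gaussian PSF, the mock profile, and the amplitudes below are illustrative assumptions.

      # Sketch: decompose an azimuthally averaged brightness profile into a
      # 1/rho coma plus a PSF-shaped central excess.
      import numpy as np
      from scipy.optimize import curve_fit

      def profile(rho, coma_amp, excess_amp, psf_sigma=1.5):
          return coma_amp / rho + excess_amp * np.exp(-rho ** 2 / (2 * psf_sigma ** 2))

      rho = np.linspace(0.5, 20.0, 80)              # radius [pixels]
      rng = np.random.default_rng(3)
      data = profile(rho, 50.0, 12.0) + rng.normal(0.0, 0.5, rho.size)

      (coma_amp, excess_amp), _ = curve_fit(profile, rho, data, p0=[30.0, 5.0])
      print(f"coma amplitude {coma_amp:.1f}, central excess {excess_amp:.1f}")
      # A negative fitted excess, reported for some SL9 fragments, would mean
      # less central light than a bare 1/rho coma predicts.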

  16. Transition disks: four candidates for ongoing giant planet formation in Ophiuchus

    NASA Astrophysics Data System (ADS)

    Orellana, M.; Cieza, L. A.; Schreiber, M. R.; Merín, B.; Brown, J. M.; Pellizza, L. J.; Romero, G. A.

    2012-03-01

    Among the large set of Spitzer-selected transitional disks that we have examined in the Ophiuchus molecular cloud, four disks have been identified as (giant) planet-forming candidates based on the morphology of their spectral energy distributions (SEDs), their apparent lack of stellar companions, and evidence of accretion. Here we characterize the structures of these disks by modeling their optical, infrared, and (sub)millimeter SEDs. We use the Monte Carlo radiative transfer package RADMC to construct a parametric model of the dust distribution in a flared disk with an inner cavity and calculate the temperature structure that is consistent with the density profile when the disk is in thermal equilibrium with the irradiating star. For each object, we conducted a Bayesian exploration of the parameter space, generating Markov chain Monte Carlo (MCMC) samples that allow us to identify the best-fit model parameters and to constrain their range of statistical confidence. Our calculations imply the presence of evacuated cavities with radii ~2-8 AU that appear to have been carved by embedded giant planets. We find parameter values that are consistent with those previously given in the literature, indicating a mild degree of grain growth and dust settling, which deserves to be investigated with further modeling and follow-up observations. Resolved images with (sub)millimeter interferometers would be required to break some of the degeneracies of the models and more tightly constrain the physical properties of these fascinating disks.

  17. Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.

    PubMed

    Medvigy, David; Moorcroft, Paul R

    2012-01-19

    Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer-term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography model, version 2 (ED2), a structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history, and species composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid-scale heterogeneity in the dynamics of biomass growth and mortality of trees of different sizes and types, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.

  18. Upper bounds on superpartner masses from upper bounds on the Higgs boson mass.

    PubMed

    Cabrera, M E; Casas, J A; Delgado, A

    2012-01-13

    The LHC is putting bounds on the Higgs boson mass. In this Letter we use those bounds to constrain the minimal supersymmetric standard model (MSSM) parameter space, using the fact that, in supersymmetry, the Higgs mass is a function of the masses of sparticles, so that an upper bound on the Higgs mass translates into upper bounds on the masses of the superpartners. We show that, although current bounds do not constrain the MSSM parameter space from above, once the Higgs mass bound improves, large regions of this parameter space will be excluded, putting upper bounds on supersymmetry (SUSY) masses. On the other hand, for the case of split-SUSY we show that, for moderate or large tanβ, the present bounds on the Higgs mass imply that the common mass for scalars cannot be greater than 10^{11} GeV. We show how these bounds will evolve as the LHC continues to improve the limits on the Higgs mass.

  19. A Morphological Analysis of Gamma-Ray Burst Early-optical Afterglows

    NASA Astrophysics Data System (ADS)

    Gao, He; Wang, Xiang-Gao; Mészáros, Peter; Zhang, Bing

    2015-09-01

    Within the framework of the external shock model of gamma-ray burst (GRB) afterglows, we perform a morphological analysis of the early-optical light curves to directly constrain model parameters. We define four morphological types, i.e., the reverse-shock-dominated cases with/without the emergence of the forward shock peak (Type I/Type II), and the forward-shock-dominated cases without/with ν_m crossing the band (Type III/Type IV). We systematically investigate all of the Swift GRBs that have optical detection earlier than 500 s and find 3/63 Type I bursts (4.8%), 12/63 Type II bursts (19.0%), 30/63 Type III bursts (47.6%), 8/63 Type IV bursts (12.7%), and 10/63 Type III/IV bursts (15.9%). We perform Monte Carlo simulations to constrain model parameters in order to reproduce the observations. We find that the favored value of the magnetic equipartition parameter in the forward shock (ɛ_B^{f}) ranges from 10^{-6} to 10^{-2}, and the reverse-to-forward ratio of ɛ_B (R_B) is about 100. The preferred electron equipartition parameter ɛ_e^{r,f} value is 0.01, which is smaller than the commonly assumed value, e.g., 0.1. This could mitigate the so-called “efficiency problem” for the internal shock model, if ɛ_e during the prompt emission phase (in the internal shocks) is large (say, ~0.1). The preferred R_B value is in agreement with the results in previous works that indicate a moderately magnetized baryonic jet for GRBs.

  20. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, Geoffrey C.

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
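
    The regularized inversion described here has a standard skeleton: minimize a data misfit plus a weighted penalty on deviations from a prior model, then sweep the weight to map the tradeoff. The sketch below does this for a made-up linear forward operator; G, the prior, and the noise level are illustrative stand-ins for the steady-shape hydraulic sensitivities and drawdown data.

      # Sketch: Tikhonov-regularized least squares with a tradeoff sweep.
      import numpy as np

      rng = np.random.default_rng(7)
      G = rng.normal(size=(40, 60))          # forward operator (data x parameters)
      m_true = np.sin(np.linspace(0.0, 3.0 * np.pi, 60))
      d = G @ m_true + rng.normal(0.0, 0.5, 40)
      m0 = np.zeros(60)                      # prior/background model

      for lam in [0.01, 0.1, 1.0, 10.0]:
          # minimize ||G m - d||^2 + lam^2 ||m - m0||^2 via the normal equations
          A = G.T @ G + lam ** 2 * np.eye(60)
          m = np.linalg.solve(A, G.T @ d + lam ** 2 * m0)
          misfit = np.linalg.norm(G @ m - d)
          norm = np.linalg.norm(m - m0)
          print(f"lambda={lam:5.2f}  data misfit={misfit:6.2f}  model norm={norm:6.2f}")
      # Plotting misfit against model norm over the sweep traces out the L-curve
      # used to choose the regularization weight.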

  1. Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF.

    PubMed

    Duan, Chong; Kallehauge, Jesper F; Pérez-Torres, Carlos J; Bretthorst, G Larry; Beeman, Scott C; Tanderup, Kari; Ackerman, Joseph J H; Garbow, Joel R

    2018-02-01

    This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. Bayesian probability theory-based parameter estimation and model selection were used to compare tracer kinetic modeling employing either the measured remote-AIF (R-AIF, i.e., the traditional approach) or an inferred cL-AIF against both in silico DCE-MRI data and clinical, cervical cancer DCE-MRI data. When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels of the 16 patients (35,602 voxels in total). Among those voxels, a tracer kinetic model that employed the voxel-specific cL-AIF was preferred (i.e., had a higher posterior probability) in 80% of the voxels compared to the direct use of a single R-AIF. Maps of spatial variation in voxel-specific AIF bolus amplitude and arrival time for heterogeneous tissues, such as cervical cancer, are accessible with the cL-AIF approach. The cL-AIF method, which estimates unique local-AIF amplitude and arrival time for each voxel within the tissue of interest, provides better modeling of DCE-MRI data than the use of a single, measured R-AIF. The Bayesian-based data analysis described herein affords estimates of uncertainties for each model parameter, via posterior probability density functions, and voxel-wise comparison across methods/models, via model selection in data modeling.
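
    The study's voxel-wise comparison uses full Bayesian posterior model probabilities; the sketch below captures the same kind of decision in a cheaper form, scoring a fixed-amplitude input model against one with a free local amplitude using the BIC. The exponential uptake curves and the noise level are illustrative assumptions, not the actual tracer kinetic models.

      # Sketch: per-voxel model comparison via the BIC, a cheap proxy for the
      # Bayesian model selection described above.
      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0.0, 5.0, 60)                 # time [min]
      rng = np.random.default_rng(11)

      def model_r(t, ktrans):                       # fixed input amplitude (R-AIF-like)
          return 1.0 * (1.0 - np.exp(-ktrans * t))

      def model_cl(t, ktrans, amp):                 # local amplitude left free
          return amp * (1.0 - np.exp(-ktrans * t))

      y = model_cl(t, 0.8, 0.7) + rng.normal(0.0, 0.03, t.size)  # weaker local bolus

      def bic(model, p0):
          popt, _ = curve_fit(model, t, y, p0=p0)
          rss = np.sum((y - model(t, *popt)) ** 2)
          return t.size * np.log(rss / t.size) + len(p0) * np.log(t.size)

      print("BIC, fixed amplitude:", round(bic(model_r, [0.5]), 1))
      print("BIC, local amplitude:", round(bic(model_cl, [0.5, 1.0]), 1))  # lower wins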

  2. Constrained optimization of image restoration filters

    NASA Technical Reports Server (NTRS)

    Riemer, T. E.; Mcgillem, C. D.

    1973-01-01

    A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.

  3. Common reflection point migration and velocity analysis for anisotropic media

    NASA Astrophysics Data System (ADS)

    Oropeza, Ernesto V.

    An efficient Kirchhoff-style prestack depth migration, called 'parsimonious' migration, was developed a decade ago for isotropic 2D and 3D media. Common-reflection-point (CRP) migration velocity analysis (MVA) was developed later for isotropic media. Isotropic parsimonious migration produces incorrect images when the medium is actually anisotropic; similarly, isotropic CRP MVA produces incorrect inversions when the medium is anisotropic. In this study, both parsimonious depth migration and common-reflection-point migration velocity analysis are extended to 2D tilted transversely isotropic (TTI) media and illustrated with synthetic P-wave data. While the framework of isotropic parsimonious migration may be retained, the extension to TTI media requires redevelopment of each of the numerical components, including calculation of the phase and group velocities for TTI media, development of a new two-point anisotropic ray tracer, and substitution of an initial-angle, anisotropic shooting ray-trace algorithm for the isotropic one. The 2D model parameterization consists of Thomsen's parameters (V_{p0}, ε, δ) and the tilt angle of the symmetry axis of the TI medium. The parsimonious anisotropic migration algorithm is successfully applied to synthetic data from a TTI version of the Marmousi-2 model. The quality of the image improves when the impulse response is weighted using the anisotropic Fresnel radius. The accuracy and speed of this migration make it useful for anisotropic velocity model building. The common-reflection-point migration velocity analysis for TTI media for P-waves includes (and inverts for) V_{p0}, ε, and δ. The orientation of the anisotropic symmetry axis has to be constrained; if it is constrained to be orthogonal to the layer bottom (as it conventionally is), it is estimated at each CRP and updated at each iteration without intermediate picking. The extension to TTI media requires development of a new inversion procedure to include V_{p0}, ε, and δ in the perturbations. The TTI CRP MVA is applied to a single layer to demonstrate its feasibility. Errors in the estimation of the orientation of the symmetry axis larger than 5 degrees affect the inversion of ε and δ, while V_{p0} is less sensitive to this parameter. The TTI CRP MVA is also applied to a version of the TTI BP model by layer stripping, so that groups of CRPs are used to do the inversion from top to bottom, constraining the model parameters after each previous group of CRPs converges. V_{p0}, δ, and the orientation of the anisotropic symmetry axis (constrained orthogonal to the local reflector orientation) are successfully inverted; ε is less well constrained because of the small acquisition aperture of the data.
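
    For readers unfamiliar with Thomsen's parameters, the weak-anisotropy approximation for the P-wave phase velocity shows how V_{p0}, ε, and δ enter the kinematics that the TTI migration and MVA depend on; the numbers below are illustrative, and for a tilted axis the angle is measured from that axis.

      # Sketch: P-wave phase velocity in a weakly anisotropic TI medium from
      # Thomsen's parameters (weak-anisotropy approximation).
      import numpy as np

      def vp_phase(theta, vp0, epsilon, delta):
          # theta in radians, measured from the symmetry axis
          s, c = np.sin(theta), np.cos(theta)
          return vp0 * (1.0 + delta * s ** 2 * c ** 2 + epsilon * s ** 4)

      theta = np.radians([0.0, 30.0, 60.0, 90.0])
      print(vp_phase(theta, vp0=3000.0, epsilon=0.2, delta=0.1))
      # [3000.   3093.75 3393.75 3600.  ] m/s: faster away from the symmetry axis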

  4. Cosmological constraints with weak-lensing peak counts and second-order statistics in a large-field survey

    NASA Astrophysics Data System (ADS)

    Peel, Austin; Lin, Chieh-An; Lanusse, François; Leonard, Adrienne; Starck, Jean-Luc; Kilbinger, Martin

    2017-03-01

    Peak statistics in weak-lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complementary probe to two-point and higher-order statistics to constrain our cosmological models. Next-generation galaxy surveys, with their advanced optics and large areas, will measure the cosmic weak-lensing signal with unprecedented precision. To prepare for these anticipated data sets, we assess the constraining power of peak counts in a simulated Euclid-like survey on the cosmological parameters Ω_m, σ_8, and w_0^{de}. In particular, we study how Camelus, a fast stochastic model for predicting peaks, can be applied to such large surveys. The algorithm avoids the need for time-costly N-body simulations, and its stochastic approach provides full PDF information of observables. Considering peaks with a signal-to-noise ratio ≥ 1, we measure the abundance histogram in a mock shear catalogue of approximately 5000 deg^{2} using a multiscale mass-map filtering technique. We constrain the parameters of the mock survey using Camelus combined with approximate Bayesian computation (ABC), a robust likelihood-free inference algorithm. Peak statistics yield a tight but significantly biased constraint in the σ_8-Ω_m plane, as measured by the width ΔΣ_8 of the 1σ contour. We find Σ_8 = σ_8(Ω_m/0.27)^α = 0.77^{+0.06}_{-0.05} with α = 0.75 for a flat ΛCDM model. The strong bias indicates the need to better understand and control the model systematics before applying it to a real survey of this size or larger. We perform a calibration of the model and compare results to those from the two-point correlation functions ξ± measured on the same field. We calibrate the ξ± result as well, since its contours are also biased, although not as severely as for peaks. In this case, we find for peaks Σ_8 = 0.76^{+0.02}_{-0.03} with α = 0.65, while for the combined ξ+ and ξ- statistics the values are Σ_8 = 0.76^{+0.02}_{-0.01} and α = 0.70. We conclude that the constraining power can therefore be comparable between the two weak-lensing observables in large-field surveys. Furthermore, the tilt in the σ_8-Ω_m degeneracy direction for peaks with respect to that of ξ± suggests that a combined analysis would yield tighter constraints than either measure alone. As expected, w_0^{de} cannot be well constrained without a tomographic analysis, but its degeneracy directions with the other two varied parameters are still clear for both peaks and ξ±.
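
    Approximate Bayesian computation replaces an intractable likelihood with simulation: draw parameters from the prior, simulate the observable, and keep the draws whose summary statistic lands within a tolerance of the data. The rejection-sampling sketch below illustrates the idea with a trivial stand-in simulator; the paper's ABC implementation is more sophisticated than plain rejection.

      # Sketch: ABC by rejection with a toy stochastic "peak count" simulator.
      import numpy as np

      rng = np.random.default_rng(5)

      def simulate_peaks(sigma8, omega_m):
          # Toy stochastic observable standing in for a peak-abundance histogram.
          signal = sigma8 * (omega_m / 0.27) ** 0.75   # Sigma_8-like combination
          return signal + rng.normal(0.0, 0.02)

      obs = 0.77                                       # "observed" summary statistic
      accepted = []
      for _ in range(50000):
          s8 = rng.uniform(0.4, 1.2)                   # draws from flat priors
          om = rng.uniform(0.1, 0.6)
          if abs(simulate_peaks(s8, om) - obs) < 0.01: # tolerance epsilon
              accepted.append((s8, om))
      post = np.array(accepted)
      sigma8_comb = post[:, 0] * (post[:, 1] / 0.27) ** 0.75
      print(f"{len(post)} accepted; Sigma_8-like mean = {sigma8_comb.mean():.3f}")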

  5. Effect of soil property uncertainties on permafrost thaw projections: A calibration-constrained analysis

    DOE PAGES

    Harp, Dylan R.; Atchley, Adam L.; Painter, Scott L.; ...

    2016-02-11

    Here, the effect of soil property uncertainties on permafrost thaw projections is studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year-to-year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness (ALT) and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume- and time-integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.
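
    The Null-Space Monte Carlo idea is compact: directions in parameter space to which the calibration data are insensitive span the null space of the sensitivity Jacobian, so random perturbations projected onto that space leave the calibrated fit essentially unchanged. The Jacobian and dimensions below are invented for illustration.

      # Sketch: Null-Space Monte Carlo: perturb calibrated parameters only
      # along directions the calibration data cannot "see".
      import numpy as np

      rng = np.random.default_rng(9)
      n_obs, n_par = 8, 10                       # more parameters than observations
      J = rng.normal(size=(n_obs, n_par))        # sensitivity of obs to parameters
      p_cal = rng.uniform(0.2, 0.8, n_par)       # calibrated parameter values

      # Right singular vectors beyond the data rank span the null space.
      _, s, Vt = np.linalg.svd(J)
      rank = int(np.sum(s > 1e-8))
      null_basis = Vt[rank:]                     # (n_par - rank) x n_par

      ensemble = []
      for _ in range(100):
          dp = rng.normal(0.0, 0.1, n_par)              # random perturbation
          dp_null = null_basis.T @ (null_basis @ dp)    # project onto null space
          ensemble.append(p_cal + dp_null)
      ensemble = np.array(ensemble)
      # Each ensemble member changes the simulated observations only negligibly:
      print("max change in simulated obs:", np.abs(J @ (ensemble - p_cal).T).max())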

  6. Effect of soil property uncertainties on permafrost thaw projections: A calibration-constrained analysis

    DOE PAGES

    Harp, D. R.; Atchley, A. L.; Painter, S. L.; ...

    2015-06-29

    The effect of soil property uncertainties on permafrost thaw projections is studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year-to-year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness (ALT) and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume- and time-integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. As a result, by comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.

  7. Effect of soil property uncertainties on permafrost thaw projections: a calibration-constrained analysis

    NASA Astrophysics Data System (ADS)

    Harp, D. R.; Atchley, A. L.; Painter, S. L.; Coon, E. T.; Wilson, C. J.; Romanovsky, V. E.; Rowland, J. C.

    2015-06-01

    The effect of soil property uncertainties on permafrost thaw projections is studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The Null-Space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (from calendar year 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) in the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. A 100-year projection allows for the evaluation of intra-annual uncertainty due to soil properties and the inter-annual variability due to year-to-year differences in CESM climate forcings. After calibrating to borehole temperature data at this well-characterized site, soil property uncertainties are still significant and result in significant intra-annual uncertainties in projected active layer thickness (ALT) and annual thaw depth-duration even with a specified future climate. Intra-annual uncertainties in projected soil moisture content and Stefan number are small. A volume- and time-integrated Stefan number decreases significantly in the future climate, indicating that latent heat of phase change becomes more important than heat conduction in future climates. Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on the mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics using different climate models, we show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.

  8. Extracting falsifiable predictions from sloppy models.

    PubMed

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.
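
    The sloppiness at issue is easy to reproduce: for a sum-of-decaying-exponentials model, a standard test problem in this literature, the eigenvalues of the Gauss-Newton Hessian J^T J span many decades. The sketch below computes that spectrum for illustrative decay rates.

      # Sketch: the "sloppy" eigenvalue spectrum of J^T J for a
      # sum-of-exponentials model y(t) = sum_i exp(-k_i t).
      import numpy as np

      t = np.linspace(0.1, 5.0, 50)
      rates = np.array([1.0, 1.3, 1.7, 2.2, 2.9])   # illustrative decay rates

      # Jacobian with respect to log k_i: d/d(log k) exp(-k t) = -k t exp(-k t)
      J = np.stack([-k * t * np.exp(-k * t) for k in rates], axis=1)
      eigvals = np.linalg.eigvalsh(J.T @ J)[::-1]   # descending order
      print("spectrum spans", np.log10(eigvals[0] / eigvals[-1]).round(1), "decades")
      print((eigvals / eigvals[0]).round(12))       # roughly even spread per decade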

  9. Testing a generalized cubic Galileon gravity model with the Coma Cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terukina, Ayumu; Yamamoto, Kazuhiro; Okabe, Nobuhiro

    2015-10-01

    We obtain constraints on the parameters of a generalized cubic Galileon gravity model exhibiting the Vainshtein mechanism by using multi-wavelength observations of the Coma Cluster. The generalized cubic Galileon model is characterized by three parameters: the turning scale associated with the Vainshtein mechanism, and the amplitudes of the modifications of the gravitational potential and the lensing potential. X-ray and Sunyaev-Zel'dovich (SZ) observations of the intra-cluster medium are sensitive to the gravitational potential, while the weak-lensing (WL) measurement is specified by the lensing potential. A joint fit of a complementary multi-wavelength dataset of X-ray, SZ and WL measurements enables us to simultaneously constrain these three parameters of the generalized cubic Galileon model for the first time. We also find a degeneracy between the cluster mass parameters and the gravitational modification parameters, which is influential in the limit of the weak screening of the fifth force.

  10. Search for supersymmetry in pp̄ collisions at √s = 1.96 TeV using the trilepton signature for chargino-neutralino production.

    PubMed

    Aaltonen, T; Adelman, J; Akimoto, T; Albrow, M G; Alvarez González, B; Amerio, S; Amidei, D; Anastassov, A; Annovi, A; Antos, J; Apollinari, G; Apresyan, A; Arisawa, T; Artikov, A; Ashmanskas, W; Attal, A; Aurisano, A; Azfar, F; Azzurri, P; Badgett, W; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Bartsch, V; Bauer, G; Beauchemin, P-H; Bedeschi, F; Bednar, P; Beecher, D; Behari, S; Bellettini, G; Bellinger, J; Benjamin, D; Beretvas, A; Beringer, J; Bhatti, A; Binkley, M; Bisello, D; Bizjak, I; Blair, R E; Blocker, C; Blumenfeld, B; Bocci, A; Bodek, A; Boisvert, V; Bolla, G; Bortoletto, D; Boudreau, J; Boveia, A; Brau, B; Bridgeman, A; Brigliadori, L; Bromberg, C; Brubaker, E; Budagov, J; Budd, H S; Budd, S; Burkett, K; Busetto, G; Bussey, P; Buzatu, A; Byrum, K L; Cabrera, S; Calancha, C; Campanelli, M; Campbell, M; Canelli, F; Canepa, A; Carlsmith, D; Carosi, R; Carrillo, S; Carron, S; Casal, B; Casarsa, M; Castro, A; Catastini, P; Cauz, D; Cavaliere, V; Cavalli-Sforza, M; Cerri, A; Cerrito, L; Chang, S H; Chen, Y C; Chertok, M; Chiarelli, G; Chlachidze, G; Chlebana, F; Cho, K; Chokheli, D; Chou, J P; Choudalakis, G; Chuang, S H; Chung, K; Chung, W H; Chung, Y S; Ciobanu, C I; Ciocci, M A; Clark, A; Clark, D; Compostella, G; Convery, M E; Conway, J; Copic, K; Cordelli, M; Cortiana, G; Cox, D J; Crescioli, F; Almenar, C Cuenca; Cuevas, J; Culbertson, R; Cully, J C; Dagenhart, D; Datta, M; Davies, T; de Barbaro, P; De Cecco, S; Deisher, A; De Lorenzo, G; Dell'orso, M; Deluca, C; Demortier, L; Deng, J; Deninno, M; Derwent, P F; Devlin, T; di Giovanni, G P; Dionisi, C; Di Ruzza, B; Dittmann, J R; D'Onofrio, M; Donati, S; Dong, P; Donini, J; Dorigo, T; Dube, S; Efron, J; Elagin, A; Erbacher, R; Errede, D; Errede, S; Eusebi, R; Fang, H C; Farrington, S; Fedorko, W T; Feild, R G; Feindt, M; Fernandez, J P; Ferrazza, C; Field, R; Flanagan, G; Forrest, R; Franklin, M; Freeman, J C; Furic, I; Gallinaro, M; Galyardt, J; Garberson, F; Garcia, J E; Garfinkel, A F; Genser, K; Gerberich, H; Gerdes, D; Gessler, A; Giagu, S; Giakoumopoulou, V; Giannetti, P; Gibson, K; Gimmell, J L; Ginsburg, C M; Giokaris, N; Giordani, M; Giromini, P; Giunta, M; Giurgiu, G; Glagolev, V; Glatzer, J; Glenzinski, D; Gold, M; Goldschmidt, N; Golossanov, A; Gomez, G; Gomez-Ceballos, G; Goncharov, M; González, O; Gorelov, I; Goshaw, A T; Goulianos, K; Gresele, A; Grinstein, S; Grosso-Pilcher, C; Grundler, U; Guimaraes da Costa, J; Gunay-Unalan, Z; Haber, C; Hahn, K; Hahn, S R; Halkiadakis, E; Han, B-Y; Han, J Y; Handler, R; Happacher, F; Hara, K; Hare, D; Hare, M; Harper, S; Harr, R F; Harris, R M; Hartz, M; Hatakeyama, K; Hauser, J; Hays, C; Heck, M; Heijboer, A; Heinemann, B; Heinrich, J; Henderson, C; Herndon, M; Heuser, J; Hewamanage, S; Hidas, D; Hill, C S; Hirschbuehl, D; Hocker, A; Hou, S; Houlden, M; Hsu, S-C; Huffman, B T; Hughes, R E; Husemann, U; Huston, J; Incandela, J; Introzzi, G; Iori, M; Ivanov, A; James, E; Jayatilaka, B; Jeon, E J; Jha, M K; Jindariani, S; Johnson, W; Jones, M; Joo, K K; Jun, S Y; Jung, J E; Junk, T R; Kamon, T; Kar, D; Karchin, P E; Kato, Y; Kephart, R; Keung, J; Khotilovich, V; Kilminster, B; Kim, D H; Kim, H S; Kim, J E; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kimura, N; Kirsch, L; Klimenko, S; Knuteson, B; Ko, B R; Koay, S A; Kondo, K; Kong, D J; Konigsberg, J; Korytov, A; Kotwal, A V; Kreps, M; Kroll, J; Krop, D; Krumnack, N; Kruse, M; Krutelyov, V; Kubo, T; Kuhr, T; Kulkarni, N P; Kurata, M; Kusakabe, Y; Kwang, S; Laasanen, A T; Lami, S; Lammel, S; Lancaster, M; Lander, R L; 
Lannon, K; Lath, A; Latino, G; Lazzizzera, I; Lecompte, T; Lee, E; Lee, H; Lee, S W; Leone, S; Lewis, J D; Lin, C S; Linacre, J; Lindgren, M; Lipeles, E; Lister, A; Litvintsev, D O; Liu, C; Liu, T; Lockyer, N S; Loginov, A; Loreti, M; Lovas, L; Lu, R-S; Lucchesi, D; Lueck, J; Luci, C; Lujan, P; Lukens, P; Lungu, G; Lyons, L; Lys, J; Lysak, R; Lytken, E; Mack, P; Macqueen, D; Madrak, R; Maeshima, K; Makhoul, K; Maki, T; Maksimovic, P; Malde, S; Malik, S; Manca, G; Manousakis-Katsikakis, A; Margaroli, F; Marino, C; Marino, C P; Martin, A; Martin, V; Martínez, M; Martínez-Ballarín, R; Maruyama, T; Mastrandrea, P; Masubuchi, T; Mattson, M E; Mazzanti, P; McFarland, K S; McIntyre, P; McNulty, R; Mehta, A; Mehtala, P; Menzione, A; Merkel, P; Mesropian, C; Miao, T; Miladinovic, N; Miller, R; Mills, C; Milnik, M; Mitra, A; Mitselmakher, G; Miyake, H; Moggi, N; Moon, C S; Moore, R; Morello, M J; Morlok, J; Movilla Fernandez, P; Mülmenstädt, J; Mukherjee, A; Muller, Th; Mumford, R; Murat, P; Mussini, M; Nachtman, J; Nagai, Y; Nagano, A; Naganoma, J; Nakamura, K; Nakano, I; Napier, A; Necula, V; Neu, C; Neubauer, M S; Nielsen, J; Nodulman, L; Norman, M; Norniella, O; Nurse, E; Oakes, L; Oh, S H; Oh, Y D; Oksuzian, I; Okusawa, T; Orava, R; Osterberg, K; Pagan Griso, S; Pagliarone, C; Palencia, E; Papadimitriou, V; Papaikonomou, A; Paramonov, A A; Parks, B; Pashapour, S; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pellett, D E; Penzo, A; Phillips, T J; Piacentino, G; Pianori, E; Pinera, L; Pitts, K; Plager, C; Pondrom, L; Poukhov, O; Pounder, N; Prakoshyn, F; Pronko, A; Proudfoot, J; Ptohos, F; Pueschel, E; Punzi, G; Pursley, J; Rademacker, J; Rahaman, A; Ramakrishnan, V; Ranjan, N; Redondo, I; Reisert, B; Rekovic, V; Renton, P; Rescigno, M; Richter, S; Rimondi, F; Ristori, L; Robson, A; Rodrigo, T; Rodriguez, T; Rogers, E; Rolli, S; Roser, R; Rossi, M; Rossin, R; Roy, P; Ruiz, A; Russ, J; Rusu, V; Saarikko, H; Safonov, A; Sakumoto, W K; Saltó, O; Santi, L; Sarkar, S; Sartori, L; Sato, K; Savoy-Navarro, A; Scheidle, T; Schlabach, P; Schmidt, A; Schmidt, E E; Schmidt, M A; Schmidt, M P; Schmitt, M; Schwarz, T; Scodellaro, L; Scott, A L; Scribano, A; Scuri, F; Sedov, A; Seidel, S; Seiya, Y; Semenov, A; Sexton-Kennedy, L; Sfyrla, A; Shalhout, S Z; Shears, T; Shepard, P F; Sherman, D; Shimojima, M; Shiraishi, S; Shochet, M; Shon, Y; Shreyber, I; Sidoti, A; Sinervo, P; Sisakyan, A; Slaughter, A J; Slaunwhite, J; Sliwa, K; Smith, J R; Snider, F D; Snihur, R; Soha, A; Somalwar, S; Sood, A; Sorin, V; Spalding, J; Spreitzer, T; Squillacioti, P; Stanitzki, M; St Denis, R; Stelzer, B; Stelzer-Chilton, O; Stentz, D; Strologas, J; Stuart, D; Suh, J S; Sukhanov, A; Suslov, I; Suzuki, T; Taffard, A; Takashima, R; Takeuchi, Y; Tanaka, R; Tecchio, M; Teng, P K; Terashi, K; Thom, J; Thompson, A S; Thompson, G A; Thomson, E; Tipton, P; Tiwari, V; Tkaczyk, S; Toback, D; Tokar, S; Tollefson, K; Tomura, T; Tonelli, D; Torre, S; Torretta, D; Totaro, P; Tourneur, S; Tu, Y; Turini, N; Ukegawa, F; Vallecorsa, S; van Remortel, N; Varganov, A; Vataga, E; Vázquez, F; Velev, G; Vellidis, C; Veszpremi, V; Vidal, M; Vidal, R; Vila, I; Vilar, R; Vine, T; Vogel, M; Volobouev, I; Volpi, G; Würthwein, F; Wagner, P; Wagner, R G; Wagner, R L; Wagner-Kuhr, J; Wagner, W; Wakisaka, T; Wallny, R; Wang, S M; Warburton, A; Waters, D; Weinberger, M; Wester, W C; Whitehouse, B; Whiteson, D; Wicklund, A B; Wicklund, E; Williams, G; Williams, H H; Wilson, P; Winer, B L; Wittich, P; Wolbers, S; Wolfe, C; Wright, T; Wu, X; Wynne, S M; Xie, S; Yagil, 
A; Yamamoto, K; Yamaoka, J; Yang, U K; Yang, Y C; Yao, W M; Yeh, G P; Yoh, J; Yorita, K; Yoshida, T; Yu, G B; Yu, I; Yu, S S; Yun, J C; Zanello, L; Zanetti, A; Zaw, I; Zhang, X; Zheng, Y; Zucchelli, S

    2008-12-19

    We use the three-lepton and missing-energy trilepton signature to search for chargino-neutralino production with 2.0 fb^{-1} of integrated luminosity collected by the CDF II experiment at the Tevatron pp̄ collider. We expect an excess of approximately 11 supersymmetric events for a choice of parameters of the mSUGRA model, but our observation of 7 events is consistent with the standard model expectation of 6.4 events. We constrain the mSUGRA model of supersymmetry and rule out chargino masses up to 145 GeV/c^{2} for a specific choice of parameters.

  11. Redshift-space distortions with the halo occupation distribution - II. Analytic model

    NASA Astrophysics Data System (ADS)

    Tinker, Jeremy L.

    2007-01-01

    We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ω_m, the power spectrum normalization σ_8, and the velocity bias of galaxies α_v, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle, and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ω_m, σ_8, and α_v in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.

  12. An opinion-driven behavioral dynamics model for addictive behaviors

    NASA Astrophysics Data System (ADS)

    Moore, Thomas W.; Finley, Patrick D.; Apelberg, Benjamin J.; Ambrose, Bridget K.; Brodsky, Nancy S.; Brown, Theresa J.; Husten, Corinne; Glass, Robert J.

    2015-04-01

    We present a model of behavioral dynamics that combines a social network-based opinion dynamics model with behavioral mapping. The behavioral component is discrete and history-dependent to represent situations in which an individual's behavior is initially driven by opinion and later constrained by physiological or psychological conditions that serve to maintain the behavior. Individuals are modeled as nodes in a social network connected by directed edges. Parameter sweeps illustrate model behavior and the effects of individual parameters and parameter interactions on model results. Mapping a continuous opinion variable into a discrete behavioral space induces clustering on directed networks. Clusters provide targets of opportunity for influencing the network state; however, the smaller the network the greater the stochasticity and potential variability in outcomes. This has implications both for behaviors that are influenced by close relationships versus those influenced by societal norms and for the effectiveness of strategies for influencing those behaviors.
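
    The construction can be sketched directly: a continuous opinion relaxes toward the opinions of network neighbours, while a discrete behaviour switches on and off at different opinion thresholds, which is what makes it history-dependent. The ring network, thresholds, and noise level below are illustrative choices, not the parameter sweeps reported in the paper.

      # Sketch: opinion dynamics on a directed ring with a hysteretic
      # opinion-to-behaviour mapping.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 20
      opinion = rng.uniform(-1.0, 1.0, n)
      behavior = (opinion > 0.0).astype(int)      # 1 = engaging in the behaviour
      adj = np.roll(np.eye(n), 1, axis=1)         # node i listens to node i+1

      for step in range(200):
          social = adj @ opinion                  # neighbour influence
          opinion += 0.1 * (social - opinion) + rng.normal(0.0, 0.02, n)
          opinion = np.clip(opinion, -1.0, 1.0)
          # History dependence: starting needs a high opinion, quitting a low one.
          keep = opinion > -0.5                   # threshold to continue
          start = opinion > 0.5                   # threshold to begin
          behavior = np.where(behavior == 1, keep, start).astype(int)

      print("final fraction engaging:", behavior.mean())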

  13. Frequentist and Bayesian Orbital Parameter Estimation from Radial Velocity Data Using RVLIN, BOOTTRAN, and RUN DMC

    NASA Astrophysics Data System (ADS)

    Nelson, Benjamin Earl; Wright, Jason Thomas; Wang, Sharon

    2015-08-01

    For this hack session, we will present three tools used in analyses of radial velocity exoplanet systems. RVLIN is a set of IDL routines used to quickly fit an arbitrary number of Keplerian curves to radial velocity data to find adequate parameter point estimates. BOOTTRAN is an IDL-based extension of RVLIN to provide orbital parameter uncertainties using bootstrap based on a Keplerian model. RUN DMC is a highly parallelized Markov chain Monte Carlo algorithm that employs an n-body model, primarily used for dynamically complex or poorly constrained exoplanet systems. We will compare the performance of these tools and their applications to various exoplanet systems.
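
    The workflow these tools implement, fit a Keplerian radial-velocity model and then bootstrap to get parameter uncertainties, can be miniaturized as below. A circular orbit replaces the full eccentric Keplerian to keep the sketch short, and the data and starting values are invented.

      # Sketch: circular-orbit radial-velocity fit with residual-resampling
      # bootstrap uncertainties, a miniature of the RVLIN + BOOTTRAN approach.
      import numpy as np
      from scipy.optimize import curve_fit

      def rv(t, K, period, phase, gamma):
          return K * np.sin(2.0 * np.pi * t / period + phase) + gamma

      rng = np.random.default_rng(8)
      t = np.sort(rng.uniform(0.0, 300.0, 40))          # observation epochs [d]
      y = rv(t, 55.0, 41.3, 0.7, -3.0) + rng.normal(0.0, 4.0, t.size)

      popt, _ = curve_fit(rv, t, y, p0=[50.0, 40.0, 0.5, 0.0])

      # Bootstrap: resample residuals, refit, read uncertainties off the spread.
      resid = y - rv(t, *popt)
      samples = []
      for _ in range(500):
          y_bs = rv(t, *popt) + rng.choice(resid, size=t.size, replace=True)
          p_bs, _ = curve_fit(rv, t, y_bs, p0=popt, maxfev=5000)
          samples.append(p_bs)
      err = np.std(samples, axis=0)
      print(f"K = {popt[0]:.1f} +/- {err[0]:.1f} m/s, "
            f"P = {popt[1]:.2f} +/- {err[1]:.2f} d")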

  14. Parameter identification in ODE models with oscillatory dynamics: a Fourier regularization approach

    NASA Astrophysics Data System (ADS)

    Chiara D'Autilia, Maria; Sgura, Ivonne; Bozzini, Benedetto

    2017-12-01

    In this paper we consider a parameter identification problem (PIP) for data oscillating in time that can be described in terms of the dynamics of some ordinary differential equation (ODE) model, resulting in an optimization problem constrained by the ODEs. In problems with this type of data structure, simple application of the direct method of control theory (discretize-then-optimize) yields a least-squares cost function exhibiting multiple ‘low’ minima. Since in this situation any optimization algorithm is liable to fail in the approximation of a good solution, here we propose a Fourier regularization approach that is able to identify an iso-frequency manifold S of codimension one in the parameter space …

  15. A global parallel model based design of experiments method to minimize model output uncertainty.

    PubMed

    Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E

    2012-03-01

    Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.

  16. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    NASA Astrophysics Data System (ADS)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
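
    As a toy illustration of the information-criterion side of this argument (the hydrologic-consistency constraints are omitted here), the sketch below scores regression surrogates of increasing complexity with AIC; all data and model structures are synthetic placeholders, not the paper's water balance models.

      import numpy as np

      def aic(residuals, n_params):
          """Akaike information criterion under i.i.d. Gaussian errors."""
          n = residuals.size
          return n * np.log(np.mean(residuals ** 2)) + 2 * n_params

      rng = np.random.default_rng(3)
      precip = rng.gamma(2.0, 30.0, 120)                  # monthly precipitation [mm]
      runoff = 0.4 * precip - 0.0002 * precip ** 2 + rng.normal(0, 5, 120)

      # candidate structures of increasing complexity (hypothetical surrogates)
      for deg in (1, 2, 3):
          coef = np.polyfit(precip, runoff, deg)
          resid = runoff - np.polyval(coef, precip)
          print(f"degree {deg}: AIC = {aic(resid, deg + 1):.1f}")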

  17. Inverse problem to constrain the controlling parameters of large-scale heat transport processes: The Tiberias Basin example

    NASA Astrophysics Data System (ADS)

    Goretzki, Nora; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Magri, Fabien

    2015-04-01

    Salty and thermal springs exist along the lakeshore of the Sea of Galilee, which covers most of the Tiberias Basin (TB) in the northern Jordan-Dead Sea Transform, Israel/Jordan. As it is the only freshwater reservoir of the entire area, it is important to study the salinisation processes that pollute the lake. Simulations of thermohaline flow along a 35 km NW-SE profile show that meteoric and relic brines are flushed by the regional flow from the surrounding heights and thermally induced groundwater flow within the faults (Magri et al., 2015). Several model runs with trial and error were necessary to calibrate the hydraulic conductivity of both faults and major aquifers in order to fit temperature logs and spring salinity. It turned out that the hydraulic conductivity of the faults ranges between 30 and 140 m/yr whereas the hydraulic conductivity of the Upper Cenomanian aquifer is as high as 200 m/yr. However, large-scale transport processes also depend on other physical parameters such as thermal conductivity, porosity and the fluid thermal expansion coefficient, which are hardly known. Here, inverse problems (IP) are solved along the NW-SE profile to better constrain the physical parameters (a) hydraulic conductivity, (b) thermal conductivity and (c) thermal expansion coefficient. The PEST code (Doherty, 2010) is applied via the graphical interface FePEST in FEFLOW (Diersch, 2014). The results show that both thermal and hydraulic conductivity are consistent with the values determined with the trial-and-error calibrations. Besides being an automatic approach that speeds up the calibration process, the IP allows us to cover a wide range of parameter values, providing additional solutions not found with the trial-and-error method. Our study shows that geothermal systems like the TB are more comprehensively understood when inverse models are applied to constrain coupled fluid flow processes over large spatial scales. References: Diersch, H.-J.G., 2014. FEFLOW Finite Element Modeling of Flow, Mass and Heat Transport in Porous and Fractured Media. Springer-Verlag, Berlin Heidelberg, 996 p. Doherty, J., 2010. PEST: Model-Independent Parameter Estimation, User Manual, 5th Edition. Watermark, Brisbane, Australia. Magri, F., Inbar, N., Siebert, C., Rosenthal, E., Guttman, J., Möller, P., 2015. Transient simulations of large-scale hydrogeological processes causing temperature and salinity anomalies in the Tiberias Basin. Journal of Hydrology 520, 342-355.

  18. An overview of a highly versatile forward and stable inverse algorithm for airborne, ground-based and borehole electromagnetic and electric data

    NASA Astrophysics Data System (ADS)

    Auken, Esben; Christiansen, Anders Vest; Kirkegaard, Casper; Fiandaca, Gianluca; Schamper, Cyril; Behroozmand, Ahmad Ali; Binley, Andrew; Nielsen, Emil; Effersø, Flemming; Christensen, Niels Bøie; Sørensen, Kurt; Foged, Nikolaj; Vignoli, Giulio

    2015-07-01

    We present an overview of a mature, robust and general algorithm providing a single framework for the inversion of most electromagnetic and electrical data types and instrument geometries. The implementation mainly uses a 1D earth formulation for electromagnetics and magnetic resonance sounding (MRS) responses, while the geoelectric responses are both 1D and 2D and the sheet's response models a 3D conductive sheet in a conductive host with an overburden of varying thickness and resistivity. In all cases, the focus is placed on delivering full system forward modelling across all supported types of data. Our implementation is modular, meaning that the bulk of the algorithm is independent of data type, making it easy to add support for new types. Having implemented forward response routines and file I/O for a given data type provides access to a robust and general inversion engine. This engine includes support for mixed data types, arbitrary model parameter constraints, integration of prior information and calculation of both model parameter sensitivity analysis and depth of investigation. We present a review of our implementation and methodology and show four different examples illustrating the versatility of the algorithm. The first example is a laterally constrained joint inversion (LCI) of surface time domain induced polarisation (TDIP) data and borehole TDIP data. The second example shows a spatially constrained inversion (SCI) of airborne transient electromagnetic (AEM) data. The third example is an inversion and sensitivity analysis of MRS data, where the electrical structure is constrained with AEM data. The fourth example is an inversion of AEM data, where the model is described by a 3D sheet in a layered conductive host.

  19. Estimation of contrast agent bolus arrival delays for improved reproducibility of liver DCE MRI

    NASA Astrophysics Data System (ADS)

    Chouhan, Manil D.; Bainbridge, Alan; Atkinson, David; Punwani, Shonit; Mookerjee, Rajeshwar P.; Lythgoe, Mark F.; Taylor, Stuart A.

    2016-10-01

    Delays between contrast agent (CA) arrival at the site of vascular input function (VIF) sampling and the tissue of interest affect dynamic contrast enhanced (DCE) MRI pharmacokinetic modelling. We investigate the effects of altering VIF CA bolus arrival delays on liver DCE MRI perfusion parameters, propose an alternative approach to estimating delays and evaluate reproducibility. Thirteen healthy volunteers (28.7 ± 1.9 years, seven males) underwent liver DCE MRI using dual-input single compartment modelling, with reproducibility (n = 9) measured at 7 days. Effects of VIF CA bolus arrival delays were assessed for arterial and portal venous input functions. Delays were pre-estimated using linear regression, with restricted free modelling around the pre-estimated delay. Perfusion parameters and 7-day reproducibility were compared using this method, freely modelled delays and no delays using one-way ANOVA. Reproducibility was assessed using Bland-Altman analysis of agreement. Maximum percent changes relative to parameters obtained using zero delays were -31% for portal venous (PV) perfusion, +43% for total liver blood flow (TLBF), +3247% for hepatic arterial (HA) fraction, +150% for mean transit time and -10% for distribution volume. Differences were demonstrated between the three methods for PV perfusion (p = 0.0085) and HA fraction (p < 0.0001), but not other parameters. Improved mean differences and Bland-Altman 95% limits of agreement for reproducibility of PV perfusion (9.3 ml/min/100 g, ±506.1 ml/min/100 g) and TLBF (43.8 ml/min/100 g, ±586.7 ml/min/100 g) were demonstrated using pre-estimated delays with constrained free modelling. CA bolus arrival delays cause profound differences in liver DCE MRI quantification. Pre-estimation of delays with constrained free modelling improved 7-day reproducibility of perfusion parameters in volunteers.
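
    A hedged sketch of the delay pre-estimation step follows. The paper pre-estimates delays by linear regression; this toy version instead locates the delay at the peak of the cross-correlation between a synthetic VIF and tissue curve before any constrained pharmacokinetic fit. All signals and values are fabricated for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      dt = 0.5                                   # sampling interval [s]
      t = np.arange(0, 120, dt)
      vif = np.exp(-((t - 20) / 6.0) ** 2)       # synthetic vascular input function
      true_delay = 4.5                           # seconds (hypothetical)
      tissue = 0.6 * np.interp(t - true_delay, t, vif, left=0) + rng.normal(0, 0.01, t.size)

      # pre-estimate the bolus arrival delay from the peak of the cross-correlation
      lags = np.arange(-20, 21)                  # candidate delays in samples
      cc = [np.corrcoef(np.roll(vif, k), tissue)[0, 1] for k in lags]
      delay0 = lags[int(np.argmax(cc))] * dt
      print(f"pre-estimated delay: {delay0:.1f} s (truth {true_delay} s)")
      # a pharmacokinetic fit would then search only a narrow window around delay0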

  20. The role of interior watershed processes in improving parameter estimation and performance of watershed models.

    PubMed

    Yen, Haw; Bailey, Ryan T; Arabi, Mazdak; Ahmadi, Mehdi; White, Michael J; Arnold, Jeffrey G

    2014-09-01

    Watershed models typically are evaluated solely through comparison of in-stream water and nutrient fluxes with measured data using established performance criteria, whereas processes and responses within the interior of the watershed that govern these global fluxes often are neglected. Due to the large number of parameters at the disposal of these models, circumstances may arise in which excellent global results are achieved using inaccurate magnitudes of these "intra-watershed" responses. When used for scenario analysis, a given model hence may inaccurately predict the global, in-stream effect of implementing land-use practices at the interior of the watershed. In this study, data regarding internal watershed behavior are used to constrain parameter estimation to maintain realistic intra-watershed responses while also matching available in-stream monitoring data. The methodology is demonstrated for the Eagle Creek Watershed in central Indiana. Streamflow and nitrate (NO3) loading are used as global in-stream comparisons, with two process responses, the annual mass of denitrification and the ratio of NO3 losses from subsurface and surface flow, used to constrain parameter estimation. Results show that imposing these constraints not only yields realistic internal watershed behavior but also provides good in-stream comparisons. Results further demonstrate that in the absence of incorporating intra-watershed constraints, evaluation of nutrient abatement strategies could be misleading, even though typical performance criteria are satisfied. Incorporating intra-watershed responses yields a watershed model that more accurately represents the observed behavior of the system and hence a tool that can be used with confidence in scenario evaluation. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.

  1. Tests of gravity with future space-based experiments

    NASA Astrophysics Data System (ADS)

    Sakstein, Jeremy

    2018-03-01

    Future space-based tests of relativistic gravitation—laser ranging to Phobos, accelerometers in orbit, and optical networks surrounding Earth—will constrain the theory of gravity with unprecedented precision by testing the inverse-square law, the strong and weak equivalence principles, and the deflection and time delay of light by massive bodies. In this paper, we estimate the bounds that could be obtained on alternative gravity theories that use screening mechanisms to suppress deviations from general relativity in the Solar System: chameleon, symmetron, and Galileon models. We find that space-based tests of the parametrized post-Newtonian parameter γ will constrain chameleon and symmetron theories to new levels, and that tests of the inverse-square law using laser ranging to Phobos will provide the most stringent constraints on Galileon theories to date. We end by discussing the potential for constraining these theories using upcoming tests of the weak equivalence principle, and conclude that further theoretical modeling is required in order to fully utilize the data.

  2. On the use of published radiobiological parameters and the evaluation of NTCP models regarding lung pneumonitis in clinical breast radiotherapy.

    PubMed

    Svolos, Patricia; Tsougos, Ioannis; Kyrgias, Georgios; Kappas, Constantine; Theodorou, Kiki

    2011-04-01

    In this study we sought to evaluate and accentuate the importance of radiobiological parameter selection and implementation in normal tissue complication probability (NTCP) models. The relative seriality (RS) and the Lyman-Kutcher-Burman (LKB) models were studied. For each model, minimum and maximum radiobiological parameter sets were selected from the sets published in the literature, and a theoretical mean parameter set was computed. In order to investigate potential model weaknesses in NTCP estimation and to point out the correct use of model parameters, these sets were used as input to the RS and LKB models, estimating radiation-induced complications for a group of 36 breast cancer patients treated with radiotherapy. The clinical endpoint examined was radiation pneumonitis. Each model was represented by a certain dose-response range when the selected parameter sets were applied. Comparing the models over their ranges, a large area of coincidence was revealed. If the parameter uncertainties (standard deviations) are included in the models, their area of coincidence might be enlarged, further limiting their predictive ability. The selection of the proper radiobiological parameter set for a given clinical endpoint is crucial. Published parameter values are not definite but should be accompanied by uncertainties, and one should be very careful when applying them to NTCP models. Correct selection and proper implementation of published parameters provide a quite accurate fit of the NTCP models to the considered endpoint.
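
    For concreteness, the LKB model referred to above can be written in a few lines: the dose-volume histogram is reduced to a generalized equivalent uniform dose (gEUD), which is pushed through a probit function. The sketch below uses an illustrative, not clinically endorsed, parameter set.

      import numpy as np
      from scipy.stats import norm

      def lkb_ntcp(doses, volumes, td50, m, n):
          """Lyman-Kutcher-Burman NTCP from a differential DVH.

          doses   : bin doses [Gy]; volumes : fractional volumes (sum to 1)
          td50, m, n : radiobiological parameters (values below are illustrative only)
          """
          geud = np.sum(volumes * doses ** (1.0 / n)) ** n     # generalized EUD
          t = (geud - td50) / (m * td50)
          return norm.cdf(t)

      # toy lung DVH and an illustrative parameter set, not for clinical use
      doses = np.array([5.0, 10.0, 20.0, 30.0])
      volumes = np.array([0.4, 0.3, 0.2, 0.1])
      print(f"NTCP = {lkb_ntcp(doses, volumes, td50=30.8, m=0.37, n=0.99):.3f}")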

  3. Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.

    PubMed

    Lv, Jie; Havlak, Paul; Putnam, Nicholas H

    2011-10-05

    Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]"), and is available as open source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to constrain genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.

  4. Supersymmetry searches in GUT models with non-universal scalar masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannoni, M.; Gómez, M.E.; Ellis, J.

    2016-03-01

    We study SO(10), SU(5) and flipped SU(5) GUT models with non-universal soft supersymmetry-breaking scalar masses, exploring how they are constrained by LHC supersymmetry searches and cold dark matter experiments, and how they can be probed and distinguished in future experiments. We find characteristic differences between the various GUT scenarios, particularly in the coannihilation region, which is very sensitive to changes of parameters. For example, the flipped SU(5) GUT predicts the possibility of t̃1-χ coannihilation, which is absent in the regions of the SO(10) and SU(5) GUT parameter spaces that we study. We use the relic density predictions in different models to determine upper bounds for the neutralino masses, and we find large differences between different GUT models in the sparticle spectra for the same LSP mass, leading to direct connections of distinctive possible experimental measurements with the structure of the GUT group. We find that future LHC searches for generic missing ET, charginos and stops will be able to constrain the different GUT models in complementary ways, as will the Xenon 1 ton and Darwin dark matter scattering experiments and future FERMI or CTA γ-ray searches.

  5. Experimental Studies of Nuclear Physics Input for γ-Process Nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Scholz, Philipp; Heim, Felix; Mayer, Jan; Netterdon, Lars; Zilges, Andreas

    The predictions of reaction rates for the γ process in the scope of the Hauser-Feshbach statistical model crucially depend on nuclear physics input parameters such as optical-model potentials (OMP) or γ-ray strength functions. Precise cross-section measurements at astrophysically relevant energies help to constrain the adopted models and, therefore, to reduce the uncertainties in the theoretically predicted reaction rates. In recent years, several cross sections of charged-particle induced reactions on heavy nuclei have been measured at the University of Cologne, either by means of the in-beam method at the HORUS γ-ray spectrometer or by the activation technique using the Cologne Clover Counting Setup. The resulting total and partial cross sections could be used to further constrain different models for nuclear physics input parameters. It could be shown that modifications of the α-OMP in the case of the 112Sn(α,γ) reaction also improve the description of the recently measured cross sections of the 108Cd(α,γ) and 108Cd(α,n) reactions, among others. Partial cross sections of the 92Mo(p,γ) reaction were used to improve the γ-strength function model in 93Tc, in the same way as was done for the 89Y(p,γ) reaction.

  6. Constraining the 2012-2014 growing season Alaskan methane budget using CARVE aircraft measurements

    NASA Astrophysics Data System (ADS)

    Hartery, S.; Chang, R. Y. W.; Commane, R.; Lindaas, J.; Miller, S. M.; Wofsy, S. C.; Karion, A.; Sweeney, C.; Miller, C. E.; Dinardo, S. J.; Steiner, N.; McDonald, K. C.; Watts, J. D.; Zona, D.; Oechel, W. C.; Kimball, J. S.; Henderson, J.; Mountain, M. E.

    2015-12-01

    Soil in northern latitudes contains rich carbon stores which have been historically preserved via permafrost within the soil bed; however, recent surface warming in these regions is allowing deeper soil layers to thaw, influencing the net carbon exchange from these areas. Due to the extreme nature of their climate, these eco-regions remain poorly understood by most global models. In this study we analyze methane fluxes from Alaska using in situ aircraft observations from the 2012-2014 Carbon in Arctic Reservoir Vulnerability Experiment (CARVE). These observations are coupled with an atmospheric particle transport model which quantitatively links surface emissions to atmospheric observations to make regional methane emission estimates. The results of this study are two-fold. First, the inter-annual variability of the methane emissions was found to be <1 Tg over the area of interest and is largely influenced by the length of time the deep soil remains unfrozen. Second, the resulting methane flux estimates and mean soil parameters were used to develop an empirical emissions model to help spatially and temporally constrain the methane exchange at the Alaskan soil surface. The empirical emissions model will provide a basis for exploring the sensitivity of methane emissions to subsurface soil temperature, soil moisture, organic carbon content, and other parameters commonly used in process-based models.

  7. Comprehensive, Process-based Identification of Hydrologic Models using Satellite and In-situ Water Storage Data: A Multi-objective calibration Approach

    NASA Astrophysics Data System (ADS)

    Abdo Yassin, Fuad; Wheater, Howard; Razavi, Saman; Sapriza, Gonzalo; Davison, Bruce; Pietroniro, Alain

    2015-04-01

    The credible identification of vertical and horizontal hydrological components and their associated parameters is very challenging (if not impossible) when the model is constrained only to streamflow data, especially in regions where the vertical processes significantly dominate the horizontal processes. The prairie areas of the Saskatchewan River basin, a major water system in Canada, demonstrate such behavior, where the hydrologic connectivity and vertical fluxes are mainly controlled by the amount of surface and sub-surface water storages. In this study, we develop a framework for distributed hydrologic model identification and calibration that jointly constrains the model response (i.e., streamflows) as well as a set of model state variables (i.e., water storages) to observations. This framework is set up in the form of multi-objective optimization, where multiple performance criteria are defined and used to simultaneously evaluate the fidelity of the model to streamflow observations and observed (estimated) changes of water storage in the gridded landscape over daily and monthly time scales. The time series of estimated changes in total water storage (including soil, canopy, snow and pond storages) used in this study were derived from an experimental study enhanced by the information obtained from the GRACE satellite. We test this framework on the calibration of a Land Surface Scheme-Hydrology model, called MESH (Modélisation Environmentale Communautaire - Surface and Hydrology), for the Saskatchewan River basin. Pareto Archived Dynamically Dimensioned Search (PA-DDS) is used as the multi-objective optimization engine. The significance of using the developed framework is demonstrated in comparison with the results obtained through a conventional calibration approach to streamflow observations. The approach of incorporating water storage data into the model identification process can potentially further constrain the posterior parameter space, more comprehensively evaluate model fidelity, and yield more credible predictions.

  8. Charting the parameter space of the global 21-cm signal

    NASA Astrophysics Data System (ADS)

    Cohen, Aviad; Fialkov, Anastasia; Barkana, Rennan; Lotem, Matan

    2017-12-01

    The early star-forming Universe is still poorly constrained, with the properties of high-redshift stars, the first heating sources and reionization highly uncertain. This leaves observers planning 21-cm experiments with little theoretical guidance. In this work, we explore the possible range of high-redshift parameters including the star formation efficiency and the minimal mass of star-forming haloes; the efficiency, spectral energy distribution and redshift evolution of the first X-ray sources; and the history of reionization. These parameters are only weakly constrained by available observations, mainly the optical depth to the cosmic microwave background. We use realistic semi-numerical simulations to produce the global 21-cm signal over the redshift range z = 6-40 for each of 193 different combinations of the astrophysical parameters spanning the allowed range. We show that the expected signal fills a large parameter space, but with a fixed general shape for the global 21-cm curve. Even with our wide selection of models, we still find clear correlations between the key features of the global 21-cm signal and underlying astrophysical properties of the high-redshift Universe, namely the Lyα intensity, the X-ray heating rate and the production rate of ionizing photons. These correlations can be used to directly link future measurements of the global 21-cm signal to astrophysical quantities in a mostly model-independent way. We identify additional correlations that can be used as consistency checks.

  9. Diagnostic Simulations of the Lunar Exosphere using Coma and Tail

    NASA Astrophysics Data System (ADS)

    Lee, Dong Wook; Kim, Sang J.

    2017-10-01

    The characteristics of the lunar exosphere can be constrained by comparing simulated models with observational data of the coma and tail (Lee et al., JGR, 2011); thus far, a few independent approaches to this issue have been presented in the literature. Since there are two different observational constraints for the lunar exosphere, it is interesting to find the best exospheric model that can account for the observed characteristics of the coma and tail. Considering various initial conditions of different sources and space weather, we present preliminary time-dependent simulations between the initial and final stages of the development of the lunar tail. Based on an updated 3-D model, we are planning to conduct numerous simulations to constrain the best model parameters from the coma images obtained from coronagraph observations supported by a NASA monitoring program (Morgan, Killen, and Potter, AGU, 2015) and future tail data.

  10. An Inequality Constrained Least-Squares Approach as an Alternative Estimation Procedure for Atmospheric Parameters from VLBI Observations

    NASA Astrophysics Data System (ADS)

    Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel

    2016-12-01

    On its way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques become more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content in the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and, of course, does not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative, allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
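
    A minimal sketch of the ICLS idea, using scipy's bounded linear least squares in place of the authors' full VLBI estimator: a non-negativity bound keeps the zenith-wet-delay-like parameters physically plausible. The observation matrix and noise levels are invented placeholders.

      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(5)
      # toy linear observation model: delays = A @ [zenith wet delays] + noise
      n_obs, n_zwd = 200, 8
      A = rng.uniform(1.0, 3.0, (n_obs, n_zwd)) * (rng.random((n_obs, n_zwd)) < 0.3)
      x_true = np.abs(rng.normal(0.05, 0.05, n_zwd))       # non-negative ZWDs [m]
      y = A @ x_true + rng.normal(0, 0.005, n_obs)

      ols = np.linalg.lstsq(A, y, rcond=None)[0]           # may go negative
      icls = lsq_linear(A, y, bounds=(0.0, np.inf)).x      # inequality-constrained
      print("OLS min:", ols.min(), " ICLS min:", icls.min())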

  11. Evolution of non-interacting entropic dark energy and its phantom nature

    NASA Astrophysics Data System (ADS)

    Mathew, Titus K.; Murali, Chinthak; Shejeelammal, J.

    2016-04-01

    Assuming the form of the entropic dark energy (EDE) as it arises from the surface term in the Einstein-Hilbert action, its evolution was analyzed in an expanding flat universe. The model parameters were evaluated by constraining the model using the Union data on Type Ia supernovae. We found that in the non-interacting case, the model predicts an early decelerated phase and a later accelerated phase at the background level. The evolutions of the Hubble parameter, dark energy (DE) density, equation of state parameter and deceleration parameter were obtained. The model hardly seems to support linear perturbation growth for structure formation. We also found that the EDE shows phantom nature for redshifts z < 0.257. During the phantom epoch, the model predicts a big rip, at which both the scale factor of expansion and the DE density become infinitely large; the big rip time is found to be around 36 Gyr from now.

  12. Test of the Chevallier-Polarski-Linder parametrization for rapid dark energy equation of state transitions

    NASA Astrophysics Data System (ADS)

    Linden, Sebastian; Virey, Jean-Marc

    2008-07-01

    We test the robustness and flexibility of the Chevallier-Polarski-Linder (CPL) parametrization of the dark energy equation of state, w(z) = w0 + wa z/(1+z), in recovering a four-parameter steplike fiducial model. We constrain the parameter space region of the underlying fiducial model where the CPL parametrization offers a reliable reconstruction. It turns out that non-negligible biases leak into the results for recent (z < 2.5) rapid transitions, but that CPL yields a good reconstruction in all other cases. The presented analysis is performed with supernova Ia data as forecasted for a space mission like SNAP/JDEM, combined with future expectations for the cosmic microwave background shift parameter R and the baryonic acoustic oscillation parameter A.
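
    For reference, the CPL parametrization and the dark energy density evolution it implies are compact enough to state directly. The sketch below encodes w(z) = w0 + wa z/(1+z) and the standard analytic integral for the density ratio; the parameter values in the example call are arbitrary.

      import numpy as np

      def w_cpl(z, w0=-1.0, wa=0.0):
          """Chevallier-Polarski-Linder equation of state w(z) = w0 + wa * z / (1 + z)."""
          z = np.asarray(z, dtype=float)
          return w0 + wa * z / (1.0 + z)

      def rho_de_ratio(z, w0=-1.0, wa=0.0):
          """Dark energy density relative to today implied by CPL (analytic integral):
          rho(z)/rho0 = (1+z)**(3*(1+w0+wa)) * exp(-3*wa*z/(1+z))."""
          z = np.asarray(z, dtype=float)
          return (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))

      print(w_cpl([0.0, 1.0, 3.0], w0=-0.9, wa=0.3))
      print(rho_de_ratio([0.0, 1.0, 3.0], w0=-0.9, wa=0.3))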

  13. Hamiltonian Effective Field Theory Study of the N^{*}(1535) Resonance in Lattice QCD.

    PubMed

    Liu, Zhan-Wei; Kamleh, Waseem; Leinweber, Derek B; Stokes, Finn M; Thomas, Anthony W; Wu, Jia-Jun

    2016-02-26

    Drawing on experimental data for baryon resonances, Hamiltonian effective field theory (HEFT) is used to predict the positions of the finite-volume energy levels to be observed in lattice QCD simulations of the lowest-lying J^{P}=1/2^{-} nucleon excitation. In the initial analysis, the phenomenological parameters of the Hamiltonian model are constrained by experiment and the finite-volume eigenstate energies are a prediction of the model. The agreement between HEFT predictions and lattice QCD results obtained on volumes with spatial lengths of 2 and 3 fm is excellent. These lattice results also admit a more conventional analysis where the low-energy coefficients are constrained by lattice QCD results, enabling a determination of resonance properties from lattice QCD itself. Finally, the role and importance of various components of the Hamiltonian model are examined.

  14. Structure of neutron star crusts from new Skyrme effective interactions constrained by chiral effective field theory

    NASA Astrophysics Data System (ADS)

    Lim, Yeunhwan; Holt, Jeremy W.

    2017-06-01

    We investigate the structure of neutron star crusts, including the crust-core boundary, based on new Skyrme mean field models constrained by the bulk-matter equation of state from chiral effective field theory and the ground-state energies of doubly-magic nuclei. Nuclear pasta phases are studied using both the liquid drop model and the Thomas-Fermi approximation. We compare the energy per nucleon for each geometry (spherical nuclei, cylindrical nuclei, nuclear slabs, cylindrical holes, and spherical holes) to obtain the ground state phase as a function of density. We find that the size of the Wigner-Seitz cell depends strongly on the model parameters, especially the coefficients of the density gradient interaction terms. We also employ the thermodynamic instability method to check the validity of the numerical solutions based on energy comparisons.

  15. Generic NICA-Donnan model parameters for metal-ion binding by humic substances.

    PubMed

    Milne, Christopher J; Kinniburgh, David G; van Riemsdijk, Willem H; Tipping, Edward

    2003-03-01

    A total of 171 datasets of literature and experimental data for metal-ion binding by fulvic and humic acids have been digitized and re-analyzed using the NICA-Donnan model. Generic parameter values have been derived that can be used for modeling in the absence of specific metal-ion binding measurements. These values complement the previously derived generic descriptions of proton binding. For ions where the ranges of pH, concentration, and ionic strength conditions are well covered by the available data, the generic parameters successfully describe the metal-ion binding behavior across a very wide range of conditions and for different humic and fulvic acids. Where published data for other metal ions are too sparse to constrain the model well, generic parameters have been estimated by interpolating trends observable in the parameter values of the well-defined data. Recommended generic NICA-Donnan model parameters are provided for 23 metal ions (Al, Am, Ba, Ca, Cd, Cm, Co, CrIII, Cu, Dy, Eu, FeII, FeIII, Hg, Mg, Mn, Ni, Pb, Sr, ThIV, UVIO2, VIIIO, and Zn) for both fulvic and humic acids. These parameters probably represent the best NICA-Donnan description of metal-ion binding that can be achieved using existing data.

  16. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior and simple in concept, and also has potential for field application.
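
    The penalty idea can be sketched as follows: a single parameter vector holds an infiltration index and the unit hydrograph ordinates, and constraint violations (ordinates summing away from unity, negativity) are added to the least-squares cost. Here scipy's differential evolution stands in for the paper's genetic algorithm, and the hyetograph, true UH and penalty weights are invented.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(6)
      rain = np.array([0.0, 12.0, 25.0, 8.0, 3.0, 0.0])      # hyetograph [mm/hr]
      uh_true = np.array([0.1, 0.4, 0.3, 0.15, 0.05])        # true UH ordinates
      phi_true = 4.0                                          # phi-index infiltration [mm/hr]
      excess = np.clip(rain - phi_true, 0, None)
      flow = np.convolve(excess, uh_true)[: rain.size] + rng.normal(0, 0.2, rain.size)

      def objective(p, penalty=1e3):
          phi, uh = p[0], np.asarray(p[1:])
          q = np.convolve(np.clip(rain - phi, 0, None), uh)[: rain.size]
          sse = np.sum((q - flow) ** 2)
          # penalty terms converting the constrained problem to an unconstrained one
          sse += penalty * (uh.sum() - 1.0) ** 2          # ordinates must sum to unity
          sse += penalty * np.sum(np.clip(-uh, 0, None))  # ordinates must be non-negative
          return sse

      bounds = [(0.0, 10.0)] + [(0.0, 1.0)] * 5
      res = differential_evolution(objective, bounds, seed=0)
      print("phi:", res.x[0], "UH:", np.round(res.x[1:], 3))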

  17. Dynamic characterization of high damping viscoelastic materials from vibration test data

    NASA Astrophysics Data System (ADS)

    Martinez-Agirre, Manex; Elejabarrieta, María Jesús

    2011-08-01

    The numerical analysis and design of structural systems involving viscoelastic damping materials require knowledge of material properties and proper mathematical models. A new inverse method for the dynamic characterization of high damping and strong frequency-dependent viscoelastic materials from vibration test data measured by forced vibration tests with resonance is presented. Classical material parameter extraction methods are reviewed; their accuracy for characterizing high damping materials is discussed; and the bases of the new analysis method are detailed. The proposed inverse method minimizes the residue between the experimental and theoretical dynamic response at certain discrete frequencies selected by the user in order to identify the parameters of the material constitutive model. Thus, the material properties are identified in the whole bandwidth under study and not just at resonances. Moreover, the use of control frequencies makes the method insensitive to experimental noise and the efficiency is notably enhanced. Therefore, the number of tests required is drastically reduced and the overall process is carried out faster and more accurately. The effectiveness of the proposed method is demonstrated with the characterization of a CLD (constrained layer damping) cantilever beam. First, the elastic properties of the constraining layers are identified from the dynamic response of a metallic cantilever beam. Then, the viscoelastic properties of the core, represented by a four-parameter fractional derivative model, are identified from the dynamic response of a CLD cantilever beam.
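
    For context, a four-parameter fractional derivative (fractional Zener) model of the core can be evaluated directly in the frequency domain, as in the hedged sketch below. The specific moduli, relaxation time and fractional order are illustrative only, and the exact model form used in the paper may differ in detail.

      import numpy as np

      def fractional_zener(omega, g0, ginf, tau, alpha):
          """Four-parameter fractional derivative model of a viscoelastic core.

          Returns the complex modulus G*(w) = (g0 + ginf*(1j*w*tau)**alpha) / (1 + (1j*w*tau)**alpha).
          Parameter values used below are purely illustrative.
          """
          s = (1j * omega * tau) ** alpha
          return (g0 + ginf * s) / (1.0 + s)

      omega = 2 * np.pi * np.logspace(1, 4, 5)          # 10 Hz to 10 kHz
      g = fractional_zener(omega, g0=0.5e6, ginf=50e6, tau=1e-5, alpha=0.6)
      for w, gi in zip(omega, g):
          print(f"f = {w/2/np.pi:8.1f} Hz   storage = {gi.real:.3e} Pa   loss factor = {gi.imag/gi.real:.3f}")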

  18. Constraining the phantom braneworld model from cosmic structure sizes

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sourav; Kousvos, Stefanos R.

    2017-11-01

    We consider the phantom braneworld model in the context of the maximum turnaround radius, RTA,max, of a stable, spherical cosmic structure with a given mass. The maximum turnaround radius is the point where the attraction due to the central inhomogeneity is balanced by the repulsion of the ambient dark energy, beyond which a structure cannot hold any mass, thereby giving the maximum upper bound on the size of a stable structure. In this work we derive an analytical expression for RTA,max in this model using cosmological scalar perturbation theory. Using this we numerically constrain the parameter space, including a bulk cosmological constant and the Weyl fluid, from the mass versus observed size data for some nearby, nonvirial cosmic structures. We use different values of the matter density parameter Ωm, both larger and smaller than that of the Λ cold dark matter model, as the input in our analysis. We show, in particular, that (a) with a vanishing bulk cosmological constant the predicted upper bound is always greater than what is actually observed, and a similar conclusion holds if the bulk cosmological constant is negative; (b) if it is positive, the predicted maximum size can go considerably below what is actually observed, and owing to the involved nature of the field equations, this leads to interesting constraints not only on the bulk cosmological constant itself but on the whole parameter space of the theory.

  19. Monte Carlo-based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2014-04-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures - for example, by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow for a more detailed analysis of the dynamic behaviour of the soil-plant interface. We coupled two of such high-process-oriented independent models and calibrated both models simultaneously. The catchment modelling framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem model of the soil hydraulic properties. CMF was coupled with the plant growth modelling framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo-based generalized likelihood uncertainty estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniform distribution. The model was applied to three sites with different management in Müncheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storages, stems and leaves. The shape parameter of the retention curve n was highly constrained, whereas other parameters of the retention curve showed a large equifinality. We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the plant growth model were the radiation-use efficiency and the base temperature. Cross validation helped to identify deficits in the model structure, pointing out the need for including agricultural management options in the coupled model.
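
    A stripped-down GLUE loop, with a two-parameter toy simulator standing in for the coupled CMF-PMF model: sample uniformly, score each run with an informal likelihood, and keep the behavioural runs to form parameter bounds. The threshold and likelihood measure are illustrative choices, as they are in any GLUE application.

      import numpy as np

      rng = np.random.default_rng(7)

      def model(theta, x):
          """Stand-in simulator (one output, two parameters a and b)."""
          a, b = theta
          return a * x / (b + x)

      x = np.linspace(0.1, 10, 30)
      obs = model((2.0, 1.5), x) + rng.normal(0, 0.05, x.size)

      # GLUE: sample uniformly, score with an informal likelihood, keep behavioural runs
      n_runs = 20000
      theta = rng.uniform([0.5, 0.1], [5.0, 5.0], size=(n_runs, 2))
      sse = np.array([np.sum((model(t, x) - obs) ** 2) for t in theta])
      like = np.exp(-sse / sse.min())                     # informal likelihood measure
      behavioural = theta[like > 0.05]

      lo, hi = np.percentile(behavioural, [5, 95], axis=0)
      print("90% GLUE bounds, parameter a:", lo[0], hi[0])
      print("90% GLUE bounds, parameter b:", lo[1], hi[1])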

  20. Feasibility of shutter-speed DCE-MRI for improved prostate cancer detection.

    PubMed

    Li, Xin; Priest, Ryan A; Woodward, William J; Tagge, Ian J; Siddiqui, Faisal; Huang, Wei; Rooney, William D; Beer, Tomasz M; Garzotto, Mark G; Springer, Charles S

    2013-01-01

    The feasibility of shutter-speed model dynamic-contrast-enhanced MRI pharmacokinetic analyses for prostate cancer detection was investigated in a prebiopsy patient cohort. Differences of results from the fast-exchange-regime-allowed (FXR-a) shutter-speed model version and the fast-exchange-limit-constrained (FXL-c) standard model are demonstrated. Although the spatial information is more limited, postdynamic-contrast-enhanced MRI biopsy specimens were also examined. The MRI results were correlated with the biopsy pathology findings. Of all the model parameters, region-of-interest-averaged K(trans) difference [ΔK(trans) ≡ K(trans)(FXR-a) - K(trans)(FXL-c)] or two-dimensional K(trans)(FXR-a) vs. k(ep)(FXR-a) values were found to provide the most useful biomarkers for malignant/benign prostate tissue discrimination (at 100% sensitivity for a population of 13, the specificity is 88%) and disease burden determination. (The best specificity for the fast-exchange-limit-constrained analysis is 63%, with the two-dimensional plot.) K(trans) and k(ep) are each measures of passive transcapillary contrast reagent transfer rate constants. Parameter value increases with shutter-speed model (relative to standard model) analysis are larger in malignant foci than in normal-appearing glandular tissue. Pathology analyses verify the shutter-speed model (FXR-a) promise for prostate cancer detection. Parametric mapping may further improve pharmacokinetic biomarker performance. Copyright © 2012 Wiley Periodicals, Inc.

  1. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1991-01-01

    A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
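
    The core linear-sensitivity update with an SVD-based solve can be sketched in a few lines; the sensitivity matrix, measurement noise and truncation threshold below are synthetic stand-ins for the airframe quantities.

      import numpy as np

      rng = np.random.default_rng(8)

      # linear sensitivity relation: changes in modal data ~= S @ changes in parameters
      n_modes, n_params = 12, 5
      S = rng.normal(size=(n_modes, n_params))            # sensitivity matrix (assumed known)
      dp_true = np.array([0.02, -0.05, 0.0, 0.1, -0.01])  # hypothetical stiffness/mass changes
      dy = S @ dp_true + rng.normal(0, 1e-3, n_modes)     # measured-minus-model residual

      # solve the update with a truncated SVD pseudoinverse to control ill-conditioning
      U, s, Vt = np.linalg.svd(S, full_matrices=False)
      keep = s > 1e-8 * s[0]                              # discard near-null directions
      dp = Vt.T[:, keep] @ ((U.T[keep] @ dy) / s[keep])
      print("recovered parameter changes:", np.round(dp, 3))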

  2. 1D-Var multilayer assimilation of X-band SAR data into a detailed snowpack model

    NASA Astrophysics Data System (ADS)

    Phan, X. V.; Ferro-Famil, L.; Gay, M.; Durand, Y.; Dumont, M.; Morin, S.; Allain, S.; D'Urso, G.; Girard, A.

    2014-10-01

    The structure and physical properties of a snowpack and their temporal evolution may be simulated using meteorological data and a snow metamorphism model. Such an approach may meet limitations related to potential divergences and accumulated errors, to a limited spatial resolution, to wind or topography-induced local modulations of the physical properties of a snow cover, etc. Exogenous data are then required in order to constrain the simulator and improve its performance over time. Synthetic-aperture radars (SARs) and, in particular, recent sensors provide reflectivity maps of snow-covered environments with high temporal and spatial resolutions. The radiometric properties of a snowpack measured at sufficiently high carrier frequencies are known to be tightly related to some of its main physical parameters, like its depth, snow grain size and density. SAR acquisitions may then be used, together with an electromagnetic backscattering model (EBM) able to simulate the reflectivity of a snowpack from a set of physical descriptors, in order to constrain a physical snowpack model. In this study, we introduce a variational data assimilation scheme coupling TerraSAR-X radiometric data into the snowpack evolution model Crocus. The physical properties of a snowpack, such as snow density and optical diameter of each layer, are simulated by Crocus, fed by the local reanalysis of meteorological data (SAFRAN) at a French Alpine location. These snowpack properties are used as inputs of an EBM based on dense media radiative transfer (DMRT) theory, which simulates the total backscattering coefficient of a dry snow medium at X and higher frequency bands. After evaluating the sensitivity of the EBM to snowpack parameters, a 1D-Var data assimilation scheme is implemented in order to minimize the discrepancies between EBM simulations and observations obtained from TerraSAR-X acquisitions by modifying the physical parameters of the Crocus-simulated snowpack. The algorithm then re-initializes Crocus with the modified snowpack physical parameters, allowing it to continue the simulation of snowpack evolution, with adjustments based on remote sensing information. This method is evaluated using multi-temporal TerraSAR-X images acquired over the specific site of the Argentière glacier (Mont-Blanc massif, French Alps) to constrain the evolution of Crocus. Results indicate that X-band SAR data can be taken into account to modify the evolution of snowpack simulated by Crocus.
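
    A minimal 1D-Var sketch follows: a toy observation operator stands in for the DMRT backscatter model, and the cost balances departure from a Crocus-like background state against the misfit to SAR-like observations. All states, covariances and observations are invented for illustration.

      import numpy as np
      from scipy.optimize import minimize

      # background snowpack state (e.g., per-layer density [kg/m^3]) and its uncertainty
      xb = np.array([150.0, 220.0, 310.0])
      B = np.diag([30.0, 30.0, 40.0]) ** 2                # background error covariance

      def H(x):
          """Toy observation operator standing in for the DMRT backscatter model."""
          return np.array([1e-4 * np.sum(x), 1e-3 * (x[0] - x[-1])])

      y = np.array([0.075, -0.18])                        # synthetic SAR-derived observations
      R = np.diag([0.005, 0.02]) ** 2                     # observation error covariance

      def j(x):
          """1D-Var cost: background term plus observation misfit."""
          dxb = x - xb
          dy = y - H(x)
          return dxb @ np.linalg.solve(B, dxb) + dy @ np.linalg.solve(R, dy)

      xa = minimize(j, xb, method="Nelder-Mead").x
      print("analysis state:", np.round(xa, 1))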

  3. A probabilistic assessment of calcium carbonate export and dissolution in the modern ocean

    NASA Astrophysics Data System (ADS)

    Battaglia, G.; Steinacher, M.; Joos, F.

    2015-12-01

    The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 fluxes and mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally-constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Latin-Hypercube scheme to construct a 1000-member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly and ocean-sediment fluxes are considered. For local dissolution rates, either a strong, a weak or no dependency on CaCO3 saturation is assumed. Median (68 % confidence interval) global CaCO3 export is 0.82 (0.67-0.98) Gt PIC yr^-1, within the lower half of previously published estimates (0.4-1.8 Gt PIC yr^-1). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo-Pacific, the northern Pacific and relatively small in the Atlantic. Dissolution within the 200 to 1500 m depth range (0.33; 0.26-0.40 Gt PIC yr^-1) is substantially lower than inferred from the TA*-CFC age method (1 ± 0.5 Gt PIC yr^-1). The latter estimate is likely biased high as the TA*-CFC method neglects transport. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. Modelled ocean circulation and transport time scales for the different setups were further evaluated with CFC11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements such that model realisations with and without saturation-dependent dissolution achieve skill. We suggest applying saturation-independent dissolution rates in Earth System Models to minimise computational costs.

  4. Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models

    NASA Astrophysics Data System (ADS)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2017-06-01

    The recent low value of Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that the reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Markov chain Monte Carlo approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how the future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.

  5. Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required

    NASA Astrophysics Data System (ADS)

    Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.

    2017-12-01

    A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements with only one measurement technique or phase type. However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely constraining the starting model. We also explore what types of datasets are needed to uniquely constrain the orientation(s) of anisotropic symmetry if the mechanism is assumed.

  6. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values follow a normal distribution. This could avoid irrational assumptions and oversimplifications in the process of parameter design and enrich the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which provides convenience for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.

  7. The Dynamic Atmospheres of Carbon Rich Giants: Constraining Models Via Interferometry

    NASA Astrophysics Data System (ADS)

    Rau, Gioia; Hron, Josef; Paladini, Claudia; Aringer, Bernard; Eriksson, Kjell; Marigo, Paola

    2016-07-01

    Dynamic models for the atmospheres of C-rich Asymptotic Giant Branch stars are quite advanced and have been overall successful in reproducing spectroscopic and photometric observations. Interferometry provides independent information and is thus an important technique to study the atmospheric stratification and to further constrain the dynamic models. We observed a sample of six C-rich AGBs with the mid infrared interferometer VLTI/MIDI. These observations, combined with photometric and spectroscopic data from the literature, are compared with synthetic observables derived from dynamic model atmospheres (DMA, Eriksson et al. 2014). The SEDs can be reasonably well modelled and the interferometry supports the extended and multi-component structure of the atmospheres, but some differences remain. We discuss the possible reasons for these differences and we compare the stellar parameters derived from this comparison with stellar evolution models. Finally, we point out the high potential of MATISSE, the second generation VLTI instrument allowing interferometric imaging in the L, M, and N bands, for further progress in this field.

  8. CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei

    2014-12-01

    We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimension, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment to learn the method and study the code with minimal overhead for setup. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge and spin gaps.

  9. Source Parameters for Moderate Earthquakes in the Zagros Mountains with Implications for the Depth Extent of Seismicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, A; Brazier, R; Nyblade, A

    2009-02-23

    Six earthquakes within the Zagros Mountains with magnitudes between 4.9 and 5.7 have been studied to determine their source parameters. These events were selected for study because they were reported in open catalogs to have lower crustal or upper mantle source depths and because they occurred within an area of the Zagros Mountains where crustal velocity structure has been constrained by previous studies. Moment tensor inversion of regional broadband waveforms has been combined with forward modeling of depth phases on short period teleseismic waveforms to constrain source depths and moment tensors. Our results show that all six events nucleated within the upper crust (<11 km depth) and have thrust mechanisms. This finding supports other studies that call into question the existence of lower crustal or mantle events beneath the Zagros Mountains.

  10. Empirical Model of Precipitating Ion Oval

    NASA Astrophysics Data System (ADS)

    Goldstein, Jerry

    2017-10-01

    In this brief technical report, published maps of ion integral flux are used to constrain an empirical model of the precipitating ion oval. The ion oval is modeled as a Gaussian function of ionospheric latitude that depends on local time and the Kp geomagnetic index. The three parameters defining this function are the centroid latitude, width, and amplitude. The local time dependences of these three parameters are approximated by Fourier series expansions whose coefficients are constrained by the published ion maps. The Kp dependence of each coefficient is modeled by a linear fit. Optimization of the number of terms in the expansion is achieved via minimization of the global standard deviation between the model and the published ion map at each Kp. The empirical model is valid near the peak flux of the auroral oval; inside its centroid region the model reproduces the published ion maps with standard deviations of less than 5% of the peak integral flux. On the subglobal scale, average local errors (measured as a fraction of the point-to-point integral flux) are below 30% in the centroid region. Outside its centroid region the model deviates significantly from the H89 integral flux maps. The model's performance is assessed by comparing it with both local and global data from a 17 April 2002 substorm event. The model can reproduce important features of the macroscale auroral region, but not its subglobal structure, nor the interval immediately following a substorm.
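
    The functional form described here is compact enough to sketch directly. In the snippet below, the oval is a Gaussian in latitude whose centroid, width, and amplitude vary with magnetic local time through truncated Fourier series and linearly with Kp; all coefficient values are hypothetical placeholders rather than the fitted values from the report.

    ```python
    import numpy as np

    def fourier_series(mlt, coeffs):
        """Evaluate a truncated Fourier series in magnetic local time (hours, 24-h period).

        coeffs = [a0, a1, b1, a2, b2, ...].
        """
        phase = 2.0 * np.pi * mlt / 24.0
        result = coeffs[0] * np.ones_like(phase)
        for k in range(1, (len(coeffs) + 1) // 2):
            result += coeffs[2 * k - 1] * np.cos(k * phase)
            result += coeffs[2 * k] * np.sin(k * phase)
        return result

    def ion_oval_flux(lat, mlt, kp, p0, p1):
        """Precipitating-ion integral flux modeled as a Gaussian in latitude.

        Each Fourier coefficient depends linearly on Kp: c = p0 + p1 * kp.
        All numbers used below are illustrative, not the published fit.
        """
        lat0 = fourier_series(mlt, p0["lat0"] + p1["lat0"] * kp)    # centroid (deg)
        width = fourier_series(mlt, p0["width"] + p1["width"] * kp) # width (deg)
        amp = fourier_series(mlt, p0["amp"] + p1["amp"] * kp)       # peak flux
        return amp * np.exp(-0.5 * ((lat - lat0) / width) ** 2)

    # Hypothetical coefficients: centroid near 67 deg, moving equatorward with Kp.
    p0 = {"lat0": np.array([67.0, 1.5, -0.5]),
          "width": np.array([3.0, 0.3, 0.1]),
          "amp": np.array([1.0e7, 2.0e6, -1.0e6])}
    p1 = {"lat0": np.array([-1.2, 0.1, 0.0]),
          "width": np.array([0.2, 0.0, 0.0]),
          "amp": np.array([3.0e6, 0.0, 0.0])}

    print(ion_oval_flux(lat=65.0, mlt=22.0, kp=3.0, p0=p0, p1=p1))
    ```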

  11. Tensor non-Gaussianity from axion-gauge-fields dynamics: parameter search

    NASA Astrophysics Data System (ADS)

    Agrawal, Aniket; Fujita, Tomohiro; Komatsu, Eiichiro

    2018-06-01

    We calculate the bispectrum of scale-invariant tensor modes sourced by spectator SU(2) gauge fields during inflation in a model containing a scalar inflaton, a pseudoscalar axion and SU(2) gauge fields. A large bispectrum is generated in this model at tree-level as the gauge fields contain a tensor degree of freedom, and its production is dominated by self-coupling of the gauge fields. This is a unique feature of non-Abelian gauge theory. The shape of the tensor bispectrum is approximately an equilateral shape for 3 ≲ m_Q ≲ 4, where m_Q is an effective dimensionless mass of the SU(2) field normalised by the Hubble expansion rate during inflation. The amplitude of non-Gaussianity of the tensor modes, characterised by the ratio B_h/P_h^2, is inversely proportional to the energy density fraction of the gauge field. This ratio can be much greater than unity, whereas the ratio from the vacuum fluctuation of the metric is of order unity. The bispectrum is effective at constraining large m_Q regions of the parameter space, whereas the power spectrum constrains small m_Q regions.

  12. The Efficiency of Split Panel Designs in an Analysis of Variance Model

    PubMed Central

    Wang, Wei-Guo; Liu, Hai-Jun

    2016-01-01

    We consider the efficiency of split panel designs in analysis of variance models, that is, the determination of the optimal proportion of cross-section series in the full sample, so as to minimize the variance of best linear unbiased estimators of linear parameter combinations. An orthogonal matrix is constructed to obtain a manageable expression for the variances. On this basis, we derive a theorem for analyzing split panel design efficiency irrespective of the interest and budget parameters. Additionally, the efficiency of an estimator based on the split panel relative to an estimator based on a pure panel or a pure cross-section is presented. The analysis shows that the gains from a split panel can be quite substantial. We further consider the efficiency of a split panel design under a given budget and transform the problem into a constrained nonlinear integer program. An efficient algorithm is designed to solve this constrained nonlinear integer program. Moreover, we combine one-at-a-time designs and factorial designs to illustrate the algorithm’s efficiency with an empirical example concerning monthly consumer expenditure on food in 1985 in the Netherlands, and the efficient ranges of the algorithm parameters are given to ensure a good solution. PMID:27163447

  13. Effective theory of flavor for Minimal Mirror Twin Higgs

    NASA Astrophysics Data System (ADS)

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2017-10-01

    We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ɛ^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ɛ'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ɛ'/ɛ, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ɛ'/ɛ, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. In each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.

  14. Constraining the optical depth of galaxies and velocity bias with cross-correlation between the kinetic Sunyaev-Zeldovich effect and the peculiar velocity field

    NASA Astrophysics Data System (ADS)

    Ma, Yin-Zhe; Gong, Guo-Dong; Sui, Ning; He, Ping

    2018-03-01

    We calculate the cross-correlation function ⟨(ΔT/T)(v·n̂/σ_v)⟩ between the kinetic Sunyaev-Zeldovich (kSZ) effect and the reconstructed peculiar velocity field using linear perturbation theory, with the aim of constraining the optical depth τ and peculiar velocity bias of central galaxies with Planck data. We vary the optical depth τ and the velocity bias function b_v(k) = 1 + b(k/k_0)^n, and fit the model to the data, with and without varying the calibration parameter y_0 that controls the vertical shift of the correlation function. By constructing a likelihood function and constraining the τ, b and n parameters, we find that the quadratic power-law model of velocity bias, b_v(k) = 1 + b(k/k_0)^2, provides the best fit to the data. The best-fit values are τ = (1.18 ± 0.24) × 10^{-4}, b = -0.84^{+0.16}_{-0.20} and y_0 = (12.39^{+3.65}_{-3.66}) × 10^{-9} (68 per cent confidence level). The probability of b > 0 is only 3.12 × 10^{-8}, which clearly suggests a detection of scale-dependent velocity bias. The fitting results indicate that the large-scale (k ≤ 0.1 h Mpc^{-1}) velocity bias is unity, while on small scales the bias tends to become negative. The value of τ is consistent with the stellar mass-halo mass and optical depth relationship proposed in the literature, and the negative velocity bias on small scales is consistent with the peak background split theory. Our method provides a direct tool for studying the gaseous and kinematic properties of galaxies.
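
    A schematic version of the likelihood analysis fits in a few lines. The correlation template and the way the bias enters below are invented stand-ins for the linear-theory integrals; only the fitting logic, a brute-force chi-square grid over τ and b, mirrors the approach described.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy stand-in for the kSZ-velocity cross-correlation: overall amplitude set
    # by the optical depth tau, small scales modulated by a velocity-bias term.
    r = np.linspace(10.0, 150.0, 15)     # separation (units illustrative)
    template = np.exp(-r / 60.0)         # placeholder correlation template

    def model(r, tau, b):
        # Schematic quadratic bias correction, echoing b_v(k) = 1 + b (k/k_0)^2.
        return tau * template * (1.0 + b * (30.0 / r) ** 2)

    truth = dict(tau=1.2e-4, b=-0.8)
    err = 2.0e-5
    data = model(r, **truth) + rng.normal(0.0, err, size=r.size)

    # Brute-force chi-square grid over (tau, b).
    taus = np.linspace(0.5e-4, 2.0e-4, 200)
    bs = np.linspace(-2.0, 1.0, 200)
    chi2 = np.array([[np.sum((data - model(r, t, b)) ** 2) / err**2
                      for b in bs] for t in taus])
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    print(f"best fit: tau = {taus[i]:.2e}, b = {bs[j]:.2f}")
    ```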

  15. Super-Eddington accreting massive black holes explore high-z cosmology: Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Cai, Rong-Gen; Guo, Zong-Kuan; Huang, Qing-Guo; Yang, Tao

    2018-06-01

    In this paper, we simulate Super-Eddington accreting massive black holes (SEAMBHs) as standard candles to probe cosmology for the first time. SEAMBHs have been demonstrated to provide a new tool for estimating cosmological distances. We therefore create a series of mock data sets of SEAMBHs, especially in the high-redshift region, to check their ability to probe cosmology. To fulfill the potential of the SEAMBHs for cosmology, we apply the simulated data to three projects. The first is the exploration of their ability to constrain the cosmological parameters, in which we combine different data sets of current observations such as the cosmic microwave background from Planck and type Ia supernovae from the Joint Light-curve Analysis (JLA). We find that the high-redshift SEAMBHs can help to break the degeneracies of the background cosmological parameters constrained by Planck and JLA, thus giving much tighter constraints on the cosmological parameters. The second uses the high-redshift SEAMBHs as complements to the low-redshift JLA to constrain the early expansion rate and the dark energy density evolution in the cold dark matter frame. Our results show that these high-redshift SEAMBHs are very powerful for constraining the early Hubble rate and the evolution of the dark energy density; thus they can give us more information about the expansion history of our Universe, which is also crucial for testing the ΛCDM model in the high-redshift region. Finally, we check the SEAMBH candles' ability to reconstruct the equation of state for dark energy at high redshift. In summary, our results show that the SEAMBHs, as rare candles in the high-redshift region, can provide a new and independent observation to probe cosmology in the future.

  16. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decrease in misfit. Identification of this cross-over is important because it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
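
    The sampling machinery can be illustrated without the QUESO library itself. Below is a minimal Metropolis-Hastings sketch on a toy two-parameter forward model; the parameter names and waveform function are invented for illustration, but the Bayesian logic (prior box, Gaussian likelihood, accept/reject) is the same in miniature.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy forward model standing in for the body-wave synthetics; the two
    # parameters loosely play the role of rupture velocity and rise time.
    def forward(theta, t):
        v, tau = theta
        return np.sin(v * t) * np.exp(-t / tau)

    t = np.linspace(0.0, 10.0, 100)
    truth = np.array([1.3, 4.0])
    sigma = 0.05
    data = forward(truth, t) + rng.normal(0.0, sigma, t.size)

    def log_posterior(theta):
        if not (0.1 < theta[0] < 5.0 and 0.5 < theta[1] < 20.0):  # uniform prior box
            return -np.inf
        resid = data - forward(theta, t)
        return -0.5 * np.sum(resid**2) / sigma**2

    # Plain Metropolis-Hastings with a fixed Gaussian proposal.
    theta = np.array([1.0, 5.0])
    logp = log_posterior(theta)
    samples = []
    for _ in range(20000):
        proposal = theta + rng.normal(0.0, [0.02, 0.2])
        logp_prop = log_posterior(proposal)
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        samples.append(theta.copy())

    samples = np.array(samples[5000:])   # discard burn-in
    print("posterior mean:", samples.mean(axis=0))
    print("posterior std :", samples.std(axis=0))
    ```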

  17. Information gains from cosmic microwave background experiments

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Amara, Adam; Refregier, Alexandre; Paranjape, Aseem; Akeret, Joël

    2014-07-01

    To shed light on the fundamental problems posed by dark energy and dark matter, a large number of experiments have been performed and combined to constrain cosmological models. We propose a novel way of quantifying the information gained by updates on the parameter constraints from a series of experiments which can either complement earlier measurements or replace them. For this purpose, we use the Kullback-Leibler divergence or relative entropy from information theory to measure differences in the posterior distributions in model parameter space from a pair of experiments. We apply this formalism to a historical series of cosmic microwave background experiments ranging from Boomerang to WMAP, SPT, and Planck. Considering different combinations of these experiments, we thus estimate the information gain in units of bits and distinguish contributions from the reduction of statistical errors and the "surprise" corresponding to a significant shift of the parameters' central values. For this experiment series, we find individual relative entropy gains ranging from about 1 to 30 bits. In some cases, e.g. when comparing WMAP and Planck results, we find that the gains are dominated by the surprise rather than by improvements in statistical precision. We discuss how this technique provides a useful tool for both quantifying the constraining power of data from cosmological probes and detecting the tensions between experiments.
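
    For Gaussian approximations of the posteriors, the relative entropy has a closed form, which makes the bookkeeping in bits concrete. The sketch below evaluates the Kullback-Leibler divergence between two multivariate normals and converts it from nats to bits; all means and covariances are invented for illustration.

    ```python
    import numpy as np

    def kl_divergence_bits(mu1, cov1, mu2, cov2):
        """KL divergence D(N1 || N2) between multivariate Gaussians, in bits."""
        k = len(mu1)
        cov2_inv = np.linalg.inv(cov2)
        diff = mu2 - mu1
        nats = 0.5 * (np.trace(cov2_inv @ cov1)
                      + diff @ cov2_inv @ diff
                      - k
                      + np.log(np.linalg.det(cov2) / np.linalg.det(cov1)))
        return nats / np.log(2.0)

    # Toy two-parameter posteriors: an "update" that shrinks the errors and
    # shifts one central value (a "surprise" contribution).
    mu_old = np.array([0.96, 0.27])
    cov_old = np.diag([0.02, 0.03]) ** 2
    mu_new = np.array([0.965, 0.31])
    cov_new = np.diag([0.004, 0.007]) ** 2

    print(f"information gain: {kl_divergence_bits(mu_new, cov_new, mu_old, cov_old):.1f} bits")
    ```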

  18. Cosmological implications of primordial black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luis Bernal, José; Bellomo, Nicola; Raccanelli, Alvise

    The possibility that a relevant fraction of the dark matter might be comprised of Primordial Black Holes (PBHs) has been seriously reconsidered after LIGO's detection of a ∼30 M_⊙ binary black hole merger. Despite the strong interest in the model, there is a lack of studies on possible cosmological implications and effects on cosmological parameter inference. We investigate correlations with the other standard cosmological parameters using cosmic microwave background observations, finding significant degeneracies, especially with the tilt of the primordial power spectrum and the sound horizon at radiation drag. However, these degeneracies can be greatly reduced with the inclusion of small scale polarization data. We also explore whether PBHs as dark matter in simple extensions of the standard ΛCDM cosmological model induce extra degeneracies, especially between the additional parameters and the PBH ones. Finally, we present cosmic microwave background constraints on the fraction of dark matter in PBHs, not only for monochromatic PBH mass distributions but also for popular extended mass distributions. Our results show that the constraints for extended mass distributions are tighter, but also that a considerable amount of constraining power comes from the high-ℓ polarization data. Moreover, we constrain the shape of such mass distributions in terms of the corresponding constraints on the PBH mass fraction.

  19. Fermionic dark matter with pseudo-scalar Yukawa interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghorbani, Karim, E-mail: k-ghorbani@araku.ac.ir

    2015-01-01

    We consider a renormalizable extension of the standard model whose fermionic dark matter (DM) candidate interacts with a real singlet pseudo-scalar via a pseudo-scalar Yukawa term, while we assume that the full Lagrangian is CP-conserving at the classical level. When the pseudo-scalar boson develops a non-zero vacuum expectation value, spontaneous CP-violation occurs, and this provides a CP-violating interaction of the dark sector with the SM particles through mixing between the Higgs-like boson and the SM-like Higgs boson. This scenario involves a minimal number of free parameters. Focusing mainly on the indirect detection observables, we calculate the dark matter annihilation cross section and then compute the DM relic density in the range up to m_DM = 300 GeV. We then find viable regions in the parameter space constrained by the observed DM relic abundance as well as the invisible Higgs decay width in the light of the 125 GeV Higgs discovery at the LHC. We find that within the constrained region of the parameter space, there exists a model with dark matter mass m_DM ∼ 38 GeV annihilating predominantly into b quarks, which can explain the Fermi-LAT galactic gamma-ray excess.

  20. Methodology for comparing worldwide performance of diverse weight-constrained high energy laser systems

    NASA Astrophysics Data System (ADS)

    Bartell, Richard J.; Perram, Glen P.; Fiorino, Steven T.; Long, Scott N.; Houle, Marken J.; Rice, Christopher A.; Manning, Zachary P.; Bunch, Dustin W.; Krizo, Matthew J.; Gravley, Liesebet E.

    2005-06-01

    The Air Force Institute of Technology's Center for Directed Energy has developed a software model, the High Energy Laser End-to-End Operational Simulation (HELEEOS), under the sponsorship of the High Energy Laser Joint Technology Office (JTO), to facilitate worldwide comparisons of the expected performance of a diverse range of weight-constrained high energy laser system types across a broad range of expected engagement scenarios. HELEEOS has been designed to meet JTO's goals of supporting a broad range of analyses applicable to the operational requirements of all the military services, constraining weapon effectiveness through accurate engineering performance assessments (allowing its use as an investment strategy tool), and establishing trust among military leaders. HELEEOS is anchored to respected wave optics codes, and all significant degradation effects, including thermal blooming and optical turbulence, are represented in the model. The model features operationally oriented performance metrics, e.g. dwell time required to achieve a prescribed probability of kill, and effective range. Key features of HELEEOS include estimation of the level of uncertainty in the calculated Pk and generation of interactive nomographs that allow the user to further explore a desired parameter space. Worldwide analyses are enabled at five wavelengths via recently available databases capturing climatological, seasonal, diurnal, and geographical spatial-temporal variability in atmospheric parameters, including molecular and aerosol absorption and scattering profiles and optical turbulence strength. Examples are provided of the impact of uncertainty in weight-power relationships, coupled with operating condition variability, on the results of performance comparisons between chemical and solid state lasers.

  1. Regional Wave Propagation in Southeastern United States

    NASA Astrophysics Data System (ADS)

    Jemberie, A. L.; Langston, C. A.

    2003-12-01

    Broadband seismograms from the April 29, 2003, M4.6 Fort Payne, Alabama earthquake are analyzed to infer mechanisms of crustal wave propagation, crust and upper mantle velocity structure in the southeastern United States, and source parameters of the event. In particular, we are interested in producing deterministic models of the distance attenuation of earthquake ground motions through computation of synthetic seismograms. The method first requires constraining the source parameters of an earthquake and then modeling the amplitude and times of broadband arrivals within the waveforms to infer appropriate layered earth models. A first look at seismograms recorded by stations outside the Mississippi Embayment (ME) shows clear body phases such as P, sP, Pnl, Sn and Lg. The ME signals are qualitatively different from the others because they have longer durations and large surface waves. A straightforward interpretation of P wave arrival times shows a typical upper mantle velocity of 8.18 km/s. However, there is evidence of significantly higher P phase velocities at epicentral distances between 400 and 600 km, which may be caused by a high velocity upper mantle anomaly; triplication of P-waves is seen in these seismograms. The arrival time differences between regional P and the depth phase sP at different stations are used to constrain the depth of the earthquake. The source depth lies between 9.5 km and 13 km, which is somewhat shallower than the network location that was constrained to 15 km depth. The Fort Payne earthquake is the largest earthquake to have occurred within the Eastern Tennessee Seismic Zone.

  2. Constraints on scalar-tensor theories of gravity from observations

    NASA Astrophysics Data System (ADS)

    Lee, Seokcheon

    2011-03-01

    Despite their distinct origins, both dark energy and modified theories of gravity can be parameterized by the effective equation of state (EOS) ω for the expansion history of the Universe. A useful model-independent approach to the EOS is the so-called Chevallier-Polarski-Linder (CPL) parametrization, whose two parameters (ω_0 and ω_a) can be constrained by geometrical observations, which suffer from degeneracies between models. The linear growth of large scale structure is usually used to remove these degeneracies. This growth can be described by the growth index parameter γ, which can in general be parameterized as γ_0 + γ_a(1 - a). We use the scalar-tensor theories of gravity (STG) and show that discernment between models is possible only when γ_a is not negligible. We show that the linear density perturbation of the matter component as a function of redshift severely constrains the viable subclasses of STG in terms of ω and γ. With this method, we can rule out or confirm viable STG models in future observations. When we use Z(φ) = 1, F shows a convex shape of evolution in a viable STG model. The viable STG models with Z(φ) = 1 are not distinguishable from dark energy models when the solar system constraint is strongly imposed.
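
    Both parametrizations quoted here are straightforward to evaluate. A minimal sketch, assuming a flat background, of the CPL equation of state entering the Hubble rate and of the growth rate written as Ω_m(a)^γ with γ = γ_0 + γ_a(1 - a):

    ```python
    import numpy as np

    def e_of_a(a, omega_m, w0, wa):
        """Dimensionless Hubble rate E(a) = H(a)/H0 for a flat CPL cosmology."""
        rho_de = a ** (-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
        return np.sqrt(omega_m * a**-3 + (1.0 - omega_m) * rho_de)

    def growth_rate(a, omega_m, w0, wa, gamma0, gamma_a):
        """f = dln(delta)/dln(a) ~ Omega_m(a)^gamma, gamma = gamma0 + gamma_a(1 - a)."""
        omega_m_a = omega_m * a**-3 / e_of_a(a, omega_m, w0, wa) ** 2
        return omega_m_a ** (gamma0 + gamma_a * (1.0 - a))

    # Illustrative LCDM-like numbers: w0 = -1, wa = 0, gamma ~ 0.55.
    for z in (0.0, 0.5, 1.0, 2.0):
        a = 1.0 / (1.0 + z)
        print(z, growth_rate(a, omega_m=0.3, w0=-1.0, wa=0.0, gamma0=0.55, gamma_a=0.0))
    ```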

  3. Utilitarian models of the solar nebula

    NASA Technical Reports Server (NTRS)

    Cassen, Patrick

    1994-01-01

    Models of the primitive solar nebula based on a combination of theory, observations of T Tauri stars, and global conservation laws are presented. The models describe the motions of nebular gas, mixing of interstellar material during the formation of the nebula, and evolution of thermal structure in terms of several characteristic parameters. The parameters describe key aspects of the protosolar cloud (its rotation rate and collapse rate) and the nebula (its mass relative to the Sun, decay time, and density distribution). For most applications, the models are heuristic rather than predictive. Their purpose is to provide a realistic context for the interpretation of solar system data, and to distinguish those nebular characteristics that can be specified with confidence, independently of the assumptions of particular models, from those that are poorly constrained. It is demonstrated that nebular gas typically experienced large radial excursions during the evolution of the nebula and that both inward and outward mean radial velocities on the order of meters per second occurred in the terrestrial planet region, with inward velocities predominant for most of the evolution. However, the time history of disk size, surface density, and radial velocities is sensitive to the total angular momentum of the protosolar cloud, which cannot be constrained by purely theoretical considerations. It is shown that a certain amount of 'formational' mixing of interstellar material was an inevitable consequence of nebular mass and angular momentum transport during protostellar collapse, regardless of the specific transport mechanisms involved. Even if the protosolar cloud was initially homogeneous, this mixing was important because it had the effect of mingling presolar material that had experienced different degrees of thermal processing during collapse and passage through the accretion shock. Nebular thermal structure is less sensitive to poorly constrained parameters than is dynamical history. A simple criterion is derived for the condition that silicate grains are evaporated at midplane, and it is argued that this condition was probably fulfilled early in nebular history. Cooling of a hot nebula due to coagulation of dust and consequent local reduction of optical depth is examined, and it is shown how such a process leads naturally to an enrichment of rock-forming elements in the gas phase.

  4. Constrained inference in mixed-effects models for longitudinal data with application to hearing loss.

    PubMed

    Davidov, Ori; Rosen, Sophia

    2011-04-01

    In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, in hearing loss studies, we expect hearing to deteriorate with time. This means that hearing thresholds, which reflect hearing acuity, will, on average, increase over time. Therefore, the regression coefficients associated with the mean effect of time on hearing ability will be constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation-conditional maximization either (ECME) algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean square error, on the unconstrained estimators. In some settings, the improvement may be substantial. Hypothesis testing procedures that incorporate the constraints are developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated. Their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests. These improvements may be substantial. The methodology is used to analyze a hearing loss study.
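
    The flavor of constrained estimation can be conveyed with a far simpler stand-in than the ECME machinery: maximum likelihood for a regression slope restricted to be non-negative (hearing thresholds rising with time), fitted with a bound-constrained optimizer. The data and model below are synthetic and ignore the random-effects structure of a true mixed-effects model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(6)

    # Synthetic longitudinal-style data: hearing threshold drifting upward with time.
    t = np.tile(np.arange(5.0), 30)                  # 30 subjects, 5 visits each
    y = 20.0 + 0.8 * t + rng.normal(0.0, 4.0, t.size)

    def neg_loglik(params):
        intercept, slope, log_sd = params
        resid = y - intercept - slope * t
        return 0.5 * np.sum(resid**2) / np.exp(2.0 * log_sd) + t.size * log_sd

    # Constrain the time slope to be >= 0, reflecting monotone hearing loss.
    fit = minimize(neg_loglik, x0=[15.0, 0.0, 1.0],
                   bounds=[(None, None), (0.0, None), (None, None)],
                   method="L-BFGS-B")
    print("intercept, slope, sigma:", fit.x[0], fit.x[1], np.exp(fit.x[2]))
    ```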

  5. Non-invasive water-table imaging with joint DC-resistivity/microgravity/hydrologic-model inversion

    NASA Astrophysics Data System (ADS)

    Kennedy, J.; Macy, J. P.

    2017-12-01

    The depth of the water table, and fluctuations thereof, is a primary concern in hydrology. In riparian areas, the water table controls when and where vegetation grows. Fluctuations in the water table depth indicate changes in aquifer storage and variation in ET, and may also be responsible for the transport and degradation of contaminants. In the latter case, installation of monitoring wells is problematic because of the potential to create preferential flow pathways. We present a novel method for non-invasive water table monitoring using combined DC resistivity and repeat microgravity data. Resistivity profiles provide spatial resolution, but a quantifiable relation between resistivity changes and aquifer-storage changes depends on a petrophysical relation (typically, Archie's Law), with additional parameters and therefore uncertainty. Conversely, repeat microgravity data provide a direct, quantifiable measurement of aquifer-storage change but lack depth resolution. We show how these two geophysical measurements, together with an unsaturated-zone flow model (Hydrogeosphere), effectively constrain the water table position and help identify groundwater-flow model parameters. A demonstration of the method is made using field data collected during the historic 2014 pulse flow in the Colorado River Delta, which shows that geophysical data can effectively constrain a coupled surface-water/groundwater model used to simulate the potential for riparian vegetation germination and recruitment.

  6. Dark energy equation of state parameter and its evolution at low redshift

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Ashutosh; Sangwan, Archana; Jassal, H.K., E-mail: ashutosh_tripathi@fudan.edu.cn, E-mail: archanakumari@iisermohali.ac.in, E-mail: hkjassal@iisermohali.ac.in

    In this paper, we constrain dark energy models using a compendium of observations at low redshifts. We consider dark energy as a barotropic fluid, with the equation of state a constant, as well as the case where the dark energy equation of state is a function of time. The observations considered here are Supernova Type Ia data, Baryon Acoustic Oscillation data and Hubble parameter measurements. We compare constraints obtained from these data and also perform a combined analysis. The combined observational constraints put strong limits on the variation of dark energy density with redshift. For varying dark energy models, the range of parameters preferred by the supernova type Ia data is in tension with the other low redshift distance measurements.

  7. Combining the modified Skyrme-like model and the local density approximation to determine the symmetry energy of nuclear matter

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ren, Zhongzhou; Xu, Chang

    2018-07-01

    Combining the modified Skyrme-like model and the local density approximation model, the slope parameter L of the symmetry energy is extracted from the properties of finite nuclei with an improved iterative method. The calculations of the iterative method are performed within the framework of spherical symmetry. By choosing 200 neutron-rich nuclei on 25 isotopic chains as candidates, the slope parameter is constrained to be 50 MeV < L < 62 MeV. The validity of this method is examined by the properties of finite nuclei. Results show that reasonable descriptions of the properties of finite nuclei and nuclear matter can be obtained together.

  8. Revisiting Studies of the Statistical Property of a Strong Gravitational Lens System and Model-Independent Constraint on the Curvature of the Universe

    NASA Astrophysics Data System (ADS)

    Xia, Jun-Qing; Yu, Hai; Wang, Guo-Jian; Tian, Shu-Xun; Li, Zheng-Xiang; Cao, Shuo; Zhu, Zong-Hong

    2017-01-01

    In this paper, we use a recently compiled data set, which comprises 118 galactic-scale strong gravitational lensing (SGL) systems, to constrain the statistical property of the SGL system as well as the curvature of the universe without assuming any fiducial cosmological model. Based on the singular isothermal ellipsoid (SIE) model of the SGL system, we obtain that the constrained curvature parameter Ω_k is close to zero from the SGL data, which is consistent with the latest result of the Planck measurement. More interestingly, we find that the parameter f in the SIE model is strongly correlated with the curvature Ω_k. Neglecting this correlation in the analysis will significantly overestimate the constraining power of SGL data on the curvature. Furthermore, the obtained constraint on f is different from previous results: f = 1.105 ± 0.030 (68% confidence level [C.L.]), which means that the standard singular isothermal sphere (SIS) model (f = 1) is disfavored by the current SGL data at more than a 3σ C.L. We also divide all of the SGL data into two parts according to the centric stellar velocity dispersion σ_c and find that the larger the value of σ_c for the subsample, the more favored the standard SIS model is. Finally, we extend the SIE model by assuming power-law density profiles for the total mass density, ρ = ρ_0 (r/r_0)^{-α}, and luminosity density, ν = ν_0 (r/r_0)^{-δ}, and obtain the constraints on the power-law indices: α = 1.95 ± 0.04 and δ = 2.40 ± 0.13 at a 68% C.L. When assuming the power-law index α = δ = γ, this scenario is totally disfavored by the current SGL data, with χ²_{min,γ} − χ²_{min,SIE} ≃ 53.

  9. Mapping the Solar Wind from its Source Region into the Outer Corona

    NASA Technical Reports Server (NTRS)

    Esser, Ruth

    1998-01-01

    Knowledge of the radial variation of the plasma conditions in the coronal source region of the solar wind is essential to exploring coronal heating and solar wind acceleration mechanisms. The goal of the present proposal is to determine as many plasma parameters in that region as possible by coordinating different observational techniques, such as Interplanetary Scintillation Observations, spectral line intensity observations, polarization brightness measurements and X-ray observations. The inferred plasma parameters are then used to constrain solar wind models.

  10. Helicopter Control Energy Reduction Using Moving Horizontal Tail

    PubMed Central

    Oktay, Tugrul; Sal, Firat

    2015-01-01

    The helicopter moving horizontal tail (MHT) strategy is applied in order to save helicopter flight control system (FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. Equations of the MHT are integrated into these models, and together they are linearized around the straight level flight condition. A specific variance-constrained control strategy, namely output variance constrained control (OVC), is utilized for the helicopter FCS. Control energy savings due to this MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely simultaneous perturbation stochastic approximation (SPSA). Closed-loop analyses are performed in order to observe the improvement over the behavior of classical controls. PMID:26180841

  11. Gamma-ray Burst Cosmology

    NASA Astrophysics Data System (ADS)

    Wang, F. Y.

    2011-07-01

    Gamma-ray bursts (GRBs) are brief flashes of gamma-rays occurring at cosmological distances. GRBs were discovered by the Vela satellites in 1967. The discovery of afterglows in 1997 made it possible to measure GRB redshifts and confirmed their cosmological origin. GRB cosmology includes utilizing long GRBs as standard candles to constrain the dark energy and cosmological parameters, measuring the high-redshift star formation rate (SFR), and probing the metal enrichment history of the universe, dust, quantum gravity, etc. Correlations between GRB observables in the prompt emission and afterglow phases have been discovered, so we can use these correlations as standard candles to constrain the cosmological parameters and dark energy, especially at high redshifts. Observations show that long GRBs may be associated with supernovae, so long GRBs are promising tools to measure the high-redshift SFR. GRB afterglows have a smooth continuum, so the extraction of IGM absorption features from the spectrum is straightforward. Information on the metal enrichment history and reionization can be obtained from the absorption lines. In this thesis, we investigate high-redshift cosmology using GRBs, called GRB cosmology. This is a new and fast developing field. The structure of this thesis is as follows. In the first chapter, we introduce the progress of GRB studies. First we introduce the progress of GRB studies in various satellite eras, mainly in the Swift and Fermi eras. The fireball model and standard afterglow model are also presented. In chapter 2, we introduce the standard cosmological model, astronomical observations and dark energy models. Then progress in GRB cosmology studies is introduced. Some of my works, including those to be submitted, are also introduced in this chapter. In chapter 3, we present our studies on constraining the cosmological parameters and dark energy using the latest observations. We use SNe Ia, GRBs, CMB, BAO, the X-ray gas mass fraction in clusters and the linear growth rate of perturbations, and find that ΛCDM is the best fitted model. The transition redshift z_{T} is from 0.40_{-0.08}^{+0.14} to 0.65_{-0.05}^{+0.10}. This is the first time GRBs have been combined with these other observations to constrain the cosmological parameters, dark energy and transition redshift. In chapter 4, we investigate the early dark energy model using GRBs, SNe Ia, CMB and BAO. Non-negligible dark energy at high redshift will influence the growth of cosmic structures and leave observable signatures that are different from the standard cosmology. We propose that GRBs are promising tools to study early dark energy. We find that the fractional dark energy density is less than 0.03 and the linear growth index of perturbations is 0.66. In chapter 5, we use a model-independent method to constrain the dark energy equation of state (EOS) w(z). Among the parameters describing the properties of dark energy, the EOS is the most important. Whether and how it evolves with time are crucial in distinguishing different cosmological models. In our analysis, we include high-redshift GRBs. We find that w(z) < 0 at z > 1.7, and the EOS deviates from the cosmological constant at z > 0.5 at the 95.4% confidence level. In chapter 6, we probe the cosmographic parameters to distinguish between the dark energy and modified gravity models. These two families of models can drive the universe to accelerate. We first derive the expressions of the deceleration, jerk and snap parameters in the dark energy and modified gravity models.
The snap parameters in these models are different, so they can be used to distinguish between the models. In chapter 7, we measure the high-redshift SFR using long GRBs. Swift observations reveal that the number of high-redshift GRBs is larger than the prediction from the SFR. We find that an evolving initial mass function can explain this discrepancy. We study the high-redshift SFR up to z ∼ 8.2, considering the Swift GRBs tracing the star formation history and the cosmic metallicity evolution in different background cosmological models. In chapter 8, we present the observational signatures of Pop III GRBs and study the pre-galactic metal enrichment with the metal absorption lines in the spectrum of a GRB from a first galaxy. We focus on the unusual circumburst environment inside the systems that hosted Pop III stars. The metals in the first galaxies produced by the first supernova explosions are likely to reside in low-ionization states (C II, O I, Si II and Fe II). When the GRB afterglow passes through the metal-polluted region, metal absorption lines may appear. The topology of metal enrichment could be highly inhomogeneous, so along different lines of sight, the metal absorption lines may show distinct signatures. A summary of the open questions in the GRB cosmology field is presented in chapter 9.

  12. Substorm Electric And Magnetic Fields In The Earth's Magnetotail: Observations Compared To The WINDMI Model

    NASA Astrophysics Data System (ADS)

    Srinivas, P. G.; Spencer, E. A.; Vadepu, S. K.; Horton, W., Jr.

    2017-12-01

    We compare satellite observations of substorm electric fields and magnetic fields to the output of a low dimensional nonlinear physics model of the nightside magnetosphere called WINDMI. The electric and magnetic field satellite data are used to calculate the E × B drift, which is one of the intermediate variables of the WINDMI model. The model uses solar wind and IMF measurements from the ACE spacecraft as input into a system of 8 nonlinear ordinary differential equations. The state variables of the differential equations represent the energy stored in the geomagnetic tail, central plasma sheet, ring current and field aligned currents. The output from the model is the ground based geomagnetic westward auroral electrojet (AL) index, and the Dst index. Using ACE solar wind data, IMF data and SuperMAG identification of substorm onset times up to December 2015, we constrain the WINDMI model to trigger substorm events, and compare the model intermediate variables to THEMIS and GEOTAIL satellite data in the magnetotail. By forcing the model to be consistent with satellite electric and magnetic field observations, we are able to track the magnetotail energy dynamics, the field aligned current contributions, and energy injections into the ring current, and ensure that they are within allowable limits. In addition, we are able to constrain the physical parameters of the model, in particular the lobe inductance, the plasma sheet capacitance, and the resistive and conductive parameters in the plasma sheet and ionosphere.
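
    The WINDMI equations themselves are not reproduced here, but the general pattern, a driven low-dimensional system of nonlinear ODEs integrated against a time-varying solar-wind input, is easy to sketch. The two-variable loading-unloading toy below is invented for illustration and is not the WINDMI system.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Invented toy: solar-wind driving charges a "tail energy" W, which unloads
    # into a "ring current" energy K once W exceeds a threshold.
    def rhs(t, y, drive, threshold, tau_w, tau_k):
        w, k = y
        unloading = max(w - threshold, 0.0) / tau_w
        dw = drive(t) - unloading
        dk = unloading - k / tau_k
        return [dw, dk]

    drive = lambda t: 1.0 + 0.5 * np.sin(2.0 * np.pi * t / 3.0)  # schematic input
    sol = solve_ivp(rhs, t_span=(0.0, 48.0), y0=[0.0, 0.0],
                    args=(drive, 5.0, 0.5, 6.0), dense_output=True, max_step=0.05)

    t = np.linspace(0.0, 48.0, 5)
    print(np.round(sol.sol(t), 2))   # the two state variables at sample times
    ```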

  13. Determining Crust and Upper Mantle Structure by Bayesian Joint Inversion of Receiver Functions and Surface Wave Dispersion at a Single Station: Preparation for Data from the InSight Mission

    NASA Astrophysics Data System (ADS)

    Jia, M.; Panning, M. P.; Lekic, V.; Gao, C.

    2017-12-01

    The InSight (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport) mission will deploy a geophysical station on Mars in 2018. Using seismology to explore the interior structure of Mars is one of the main targets of the mission, and as part of it, we will use 3-component seismic data to constrain the crust and upper mantle structure, including P and S wave velocities and densities underneath the station. We will apply a reversible jump Markov chain Monte Carlo algorithm in the transdimensional hierarchical Bayesian inversion framework, in which the number of parameters in the model space and the noise level of the observed data are also treated as unknowns in the inversion process. Bayesian methods produce an ensemble of models which can be analyzed to quantify uncertainties and trade-offs of the model parameters. In order to get better resolution, we will simultaneously invert three different types of seismic data: receiver functions, surface wave dispersion (SWD), and ZH ratios. Because the InSight mission will only deliver a single seismic station to Mars, and both the source location and the interior structure will be unknown, we will jointly invert for the ray parameter in our approach. In preparation for this work, we first verify our approach using a set of synthetic data. We find that SWD constrains the absolute value of velocities while receiver functions constrain the discontinuities. By joint inversion, the velocity structure in the crust and upper mantle is well recovered. Then, we apply our approach to real data from the Earth-based seismic station BFO, located at the Black Forest Observatory in Germany, as already used in a demonstration study for single station location methods. From the comparison of the results, our hierarchical treatment shows its advantage over the conventional method in which the noise level of the observed data is fixed a priori.

  14. Constraining screened fifth forces with the electron magnetic moment

    NASA Astrophysics Data System (ADS)

    Brax, Philippe; Davis, Anne-Christine; Elder, Benjamin; Wong, Leong Khim

    2018-04-01

    Chameleon and symmetron theories serve as archetypal models for how light scalar fields can couple to matter with gravitational strength or greater, yet evade the stringent constraints from classical tests of gravity on Earth and in the Solar System. They do so by employing screening mechanisms that dynamically alter the scalar's properties based on the local environment. Nevertheless, these do not hide the scalar completely, as screening leads to a distinct phenomenology that can be well constrained by looking for specific signatures. In this work, we investigate how a precision measurement of the electron magnetic moment places meaningful constraints on both chameleons and symmetrons. Two effects are identified: First, virtual chameleons and symmetrons run in loops to generate quantum corrections to the intrinsic value of the magnetic moment, a common process widely considered in the literature for many scenarios beyond the Standard Model. A second effect, however, is unique to scalar fields that exhibit screening. A scalar bubblelike profile forms inside the experimental vacuum chamber and exerts a fifth force on the electron, leading to a systematic shift in the experimental measurement. In quantifying this latter effect, we present a novel approach that combines analytic arguments and a small number of numerical simulations to solve for the bubblelike profile quickly for a large range of model parameters. Taken together, both effects yield interesting constraints in complementary regions of parameter space. While the constraints we obtain for the chameleon are largely uncompetitive with those in the existing literature, this still represents the tightest constraint achievable yet from an experiment not originally designed to search for fifth forces. We break more ground with the symmetron, for which our results exclude a large and previously unexplored region of parameter space. Central to this achievement are the quantum correction terms, which are able to constrain symmetrons with masses in the range μ ∈ [10^{-3.88}, 10^{8}] eV, whereas other experiments have hitherto only been sensitive to 1 or 2 orders of magnitude at a time.

  15. Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.

    PubMed

    Mulder, Joris

    2014-02-01

    Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
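
    A widely used device in this literature, the encompassing-prior approach (not one of the specific remedies proposed here), expresses the Bayes factor of an inequality-constrained hypothesis against the unconstrained model as the ratio of posterior to prior probability mass satisfying the constraint. A minimal Monte Carlo sketch, assuming a conjugate normal setup with invented data:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothesis H1: theta1 < theta2, tested against the unconstrained model.
    # Conjugate setup: independent N(0, 10^2) priors, known unit error variance.
    prior_mean, prior_sd = 0.0, 10.0
    n, sigma = 20, 1.0
    data = rng.normal([0.2, 0.5], sigma, size=(n, 2))   # synthetic observations

    # Normal-normal conjugate posterior for each group mean.
    post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
    post_mean = post_var * (prior_mean / prior_sd**2 + data.sum(axis=0) / sigma**2)

    draws = 200_000
    prior_draws = rng.normal(prior_mean, prior_sd, size=(draws, 2))
    post_draws = rng.normal(post_mean, np.sqrt(post_var), size=(draws, 2))

    prior_mass = np.mean(prior_draws[:, 0] < prior_draws[:, 1])   # ~0.5 by symmetry
    post_mass = np.mean(post_draws[:, 0] < post_draws[:, 1])

    print(f"BF(H1 vs unconstrained) ~ {post_mass / prior_mass:.2f}")
    ```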

  16. What can the CMB tell about the microphysics of cosmic reheating?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drewes, Marco, E-mail: marcodrewes@googlemail.com

    In inflationary cosmology, cosmic reheating after inflation sets the initial conditions for the hot big bang. We investigate how CMB data can be used to study the effective potential and couplings of the inflaton during reheating to constrain the underlying microphysics. If there is a phase of preheating that is driven by a parametric resonance or other instability, then the thermal history and expansion history during the reheating era depend on a large number of microphysical parameters in a complicated way. In this case the connection between CMB observables and microphysical parameters can only be established with intense numerical studies. Such studies can help to improve CMB constraints on the effective inflaton potential in specific models, but parameter degeneracies usually make it impossible to extract meaningful best-fit values for individual microphysical parameters. If, on the other hand, reheating is driven by perturbative processes, then it can be possible to constrain the inflaton couplings and the reheating temperature from CMB data. This provides an indirect probe of fundamental microphysical parameters that most likely can never be measured directly in the laboratory, but have an immense impact on the evolution of the cosmos by setting the stage for the hot big bang.

  17. Using natural laboratories and modeling to decipher lithospheric rheology

    NASA Astrophysics Data System (ADS)

    Sobolev, Stephan

    2013-04-01

    Rheology is obviously important for geodynamic modeling, but at the same time rheological parameters appear to be the least constrained. Laboratory experiments give rather large ranges for rheological parameters, and their scaling to nature is not entirely clear. Therefore, finding rheological proxies in nature is very important. One way to do this is to find appropriate values of rheological parameters by fitting models to the lithospheric structure in highly deformed regions where lithospheric structure and geologic evolution are well constrained. Here I will present two examples of such studies at plate boundaries. One case is the Dead Sea Transform (DST), which comprises a boundary between the African and Arabian plates. During the last 15-20 Myr, more than 100 km of left-lateral transform displacement has been accumulated on the DST, and the about 10 km thick Dead Sea Basin (DSB) was formed in the central part of the DST. The lithospheric structure and geological evolution of the DST and DSB are rather well constrained by a number of interdisciplinary projects, including the DESERT and DESIRE projects led by GFZ Potsdam. Detailed observations reveal an apparently contradictory picture. On the one hand, widespread igneous activity, especially in the last 5 Myr, a thin (60-80 km) lithosphere constrained from seismic data, and the absence of seismicity below the Moho seem quite natural for this tectonically active plate boundary. However, a surface heat flow of less than 50-60 mW/m2 and deep seismicity in the lower crust (deeper than 20 km) reported for this region are apparently inconsistent with the tectonic setting specific to an active continental plate boundary and with the crustal structure of the DSB. To address these inconsistencies, which comprise what I call the "DST heat-flow paradox", a 3D numerical thermo-mechanical model was developed, operating with a non-linear elasto-visco-plastic rheology of the lithosphere. Results of the numerical experiments show that the entire set of observations for the DSB can be explained within the classical pull-apart model, assuming that (1) the lithosphere was thermally eroded at about 20 Ma, just before the active faulting at the DST, and (2) the uppermost mantle in the region has a relatively weak rheology consistent with the experimental data for wet olivine or pyroxenite. Another example is modeling of the collision of India and Eurasia in Tibet. Our recent thermo-mechanical model (see abstract by Tympel et al.) reproduces well many important features of this orogeny, including the observed convergence and the distance of underthrusting of the Indian lithosphere beneath Tibet, if the long-term friction at the India-Eurasia interface is about 0.04-0.05, which is typical for oceanic subduction zones but unexpectedly low for a continental setting.

  18. Convergence in parameters and predictions using computational experimental design.

    PubMed

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
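
    The underlying idea can be shown with a toy greedy selection, which is not the authors' actual algorithm: from a candidate pool, repeatedly add the experiment whose parameter sensitivities most increase the determinant of the accumulated Fisher information, then read off the remaining parameter uncertainties.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical candidate experiments: each provides a Jacobian (sensitivities
    # of its observables with respect to the 4 model parameters).
    n_params, n_obs = 4, 6
    candidates = [rng.normal(size=(n_obs, n_params)) for _ in range(30)]
    noise_var = 0.1**2

    def fisher(jacobian):
        """Fisher information for independent Gaussian noise: J^T J / sigma^2."""
        return jacobian.T @ jacobian / noise_var

    # Greedy D-optimal selection: maximize the log-determinant of the total
    # information matrix at each step.
    info = 1e-6 * np.eye(n_params)   # weak prior keeps the matrix invertible
    chosen = []
    for _ in range(5):
        scores = [np.linalg.slogdet(info + fisher(J))[1] for J in candidates]
        best = int(np.argmax(scores))
        chosen.append(best)
        info = info + fisher(candidates[best])

    print("selected experiments:", chosen)
    print("approx. 1-sigma parameter errors:", np.sqrt(np.diag(np.linalg.inv(info))))
    ```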

  19. Constraining the interaction between dark sectors with future HI intensity mapping observations

    NASA Astrophysics Data System (ADS)

    Xu, Xiaodong; Ma, Yin-Zhe; Weltman, Amanda

    2018-04-01

    We study a model of interacting dark matter and dark energy, in which the two components are coupled. We calculate the predictions for the 21-cm intensity mapping power spectra, and forecast the detectability with future single-dish intensity mapping surveys (BINGO, FAST and SKA-I). Since dark energy is turned on at z ∼ 1, which falls into the sensitivity range of these radio surveys, the HI intensity mapping technique is an efficient tool for constraining the interaction. By comparing with current constraints on dark sector interactions, we find that future radio surveys will produce tight and reliable constraints on the coupling parameters.

  20. Using eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements, and PhenoCams to constrain a process-based biogeochemical model for carbon market-funded wetland restoration

    NASA Astrophysics Data System (ADS)

    Oikawa, P. Y.; Baldocchi, D. D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Dronova, I.; Jenerette, D.; Poindexter, C.; Huang, Y. W.

    2015-12-01

    We use multiple data streams in a model-data fusion approach to reduce uncertainty in predicting CO2 and CH4 exchange in drained and flooded peatlands. Drained peatlands in the Sacramento-San Joaquin River Delta, California are a strong source of CO2 to the atmosphere, and flooded peatlands or wetlands are a strong CO2 sink. However, wetlands are also large sources of CH4 that can offset the greenhouse gas mitigation potential of wetland restoration. Reducing uncertainty in model predictions of annual CO2 and CH4 budgets is critical for including wetland restoration in Cap-and-Trade programs. We have developed and parameterized the Peatland Ecosystem Photosynthesis, Respiration, and Methane Transport model (PEPRMT) in a drained agricultural peatland and a restored wetland. Both ecosystem respiration (Reco) and CH4 production are a function of two soil carbon (C) pools (i.e. recently fixed C and soil organic C), temperature, and water table height. Photosynthesis is predicted using a light use efficiency model. To estimate parameters we use a Markov Chain Monte Carlo approach with an adaptive Metropolis-Hastings algorithm. Multiple data streams are used to constrain model parameters, including eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements, and digital photography. Digital photography is used to estimate leaf area index, an important input variable for the photosynthesis model. Soil respiration and 13CO2 fluxes allow partitioning of eddy covariance data between Reco and photosynthesis. Partitioned fluxes of CO2 with associated uncertainty are used to parameterize the Reco and photosynthesis models within PEPRMT. Overall, PEPRMT model performance is high. For example, we observe high data-model agreement between modeled and observed partitioned Reco (r2 = 0.68; slope = 1; RMSE = 0.59 g C-CO2 m-2 d-1). Model validation demonstrated the model's ability to accurately predict annual budgets of CO2 and CH4 in a wetland system (within 14% and 1% of observed annual budgets of CO2 and CH4, respectively). The use of multiple data streams is critical for constraining parameters and reducing uncertainty in model predictions, thereby providing accurate simulation of greenhouse gas exchange in a wetland restoration project, with implications for C market-funded wetland restoration worldwide.
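
    The model structure summarized above, light-use-efficiency photosynthesis plus respiration from two carbon pools modulated by temperature and water table, can be written compactly. All functional forms and parameter values below are invented placeholders, not PEPRMT's calibrated relations.

    ```python
    import numpy as np

    def gpp(par, lai, eps=1.5):
        """Light-use-efficiency photosynthesis: GPP proportional to absorbed PAR."""
        fpar = 1.0 - np.exp(-0.5 * lai)    # Beer's-law light interception
        return eps * par * fpar / 100.0    # crude unit scaling, illustrative only

    def reco(c_labile, c_soil, t_soil, wt, k1=0.02, k2=0.0005, q10=2.0):
        """Respiration from two C pools, scaled by temperature and water table."""
        f_temp = q10 ** ((t_soil - 10.0) / 10.0)   # Q10 temperature response
        f_wt = 1.0 / (1.0 + np.exp(10.0 * wt))     # suppressed when flooded (wt > 0 m)
        return (k1 * c_labile + k2 * c_soil) * f_temp * f_wt

    # One illustrative day in a flooded wetland (water table 0.1 m above surface).
    print("GPP :", gpp(par=45.0, lai=3.0))
    print("Reco:", reco(c_labile=150.0, c_soil=8000.0, t_soil=18.0, wt=0.1))
    ```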

  1. The effect of crustal anisotropy on SKS splitting analysis—synthetic models and real-data observations

    NASA Astrophysics Data System (ADS)

    Latifi, Koorosh; Kaviani, Ayoub; Rümpker, Georg; Mahmoodabadi, Meysam; Ghassemi, Mohammad R.; Sadidkhouy, Ahmad

    2018-05-01

    The contribution of crustal anisotropy to the observation of SKS splitting parameters is often assumed to be negligible. Based on synthetic models, we show that the impact of crustal anisotropy on the SKS splitting parameters can be significant even in the case of moderate to weak anisotropy within the crust. In addition, real-data examples reveal that significant azimuthal variations in SKS splitting parameters can be caused by crustal anisotropy. Ps-splitting analysis of receiver functions (RF) can be used to infer the anisotropic parameters of the crust. These crustal splitting parameters may then be used to constrain the inversion of SKS apparent splitting parameters to infer the anisotropy of the mantle. The observation of SKS splitting for different azimuths is indispensable to verify the presence or absence of multiple layers of anisotropy beneath a seismic station. By combining SKS and RF observations in different azimuths at a station, we are able to uniquely decipher the anisotropic parameters of crust and upper mantle.

  2. Equivalence between the Lovelock-Cartan action and a constrained gauge theory

    NASA Astrophysics Data System (ADS)

    Junqueira, O. C.; Pereira, A. D.; Sadovski, G.; Santos, T. R. S.; Sobreiro, R. F.; Tomaz, A. A.

    2017-04-01

    We show that the four-dimensional Lovelock-Cartan action can be derived from a massless gauge theory for the SO(1, 3) group with an additional BRST trivial part. The model is originally composed of a topological sector and a BRST exact piece and has no explicit dependence on the metric, the vierbein or a mass parameter. The vierbein is introduced together with a mass parameter through some BRST trivial constraints. The effect of the constraints is to identify the vierbein with some of the additional fields, transforming the original action into the Lovelock-Cartan one. In this scenario, the mass parameter is identified with Newton's constant, while the gauge field is identified with the spin connection. The symmetries of the model are also explored. Moreover, the extension of the model to a quantum version is qualitatively discussed.

  3. Pseudoscalar portal dark matter and new signatures of vector-like fermions

    DOE PAGES

    Fan, JiJi; Koushiappas, Savvas M.; Landsberg, Greg

    2016-01-19

    Fermionic dark matter interacting with the Standard Model sector through a pseudoscalar portal could evade the direct detection constraints while preserving a WIMP miracle. Here, we study the LHC constraints on pseudoscalar production in simplified models with the pseudoscalar dominantly coupled to either b quarks or τ leptons and explore their implications for the GeV excesses in gamma-ray observations. We also investigate models with new vector-like fermions that could realize the simplified models of pseudoscalar portal dark matter. Furthermore, these models yield new decay channels and signatures of vector-like fermions, for instance, bbb, bττ, and τττ resonances. Some of the signatures have already been strongly constrained by existing LHC searches, and the parameter space fitting the gamma-ray excess is further restricted. Conversely, the pure τ-rich final state is only weakly constrained so far due to the small electroweak production rate.

  4. On the Utility (or Futility) of Using Stable Water Isotopes to Constrain the Bulk Properties of Tropical Convection

    NASA Astrophysics Data System (ADS)

    Duan, Suqin Q.; Wright, Jonathon S.; Romps, David M.

    2018-02-01

    Atmospheric water-vapor isotopes have been proposed as a potentially powerful constraint on convection, which plays a critical role in Earth's present and future climate. It is shown here, however, that the mean tropical profile of HDO in the free troposphere does not usefully constrain the mean convective entrainment rate or precipitation efficiency. This is demonstrated using a single-column analytical model of atmospheric water isotopes. The model has three parameters: the entrainment rate, the precipitation efficiency, and the distance that evaporating condensates fall. At a given relative humidity, the possible range of HDO is small: its range is comparable to both the measurement uncertainty in the mean tropical profile and the structural uncertainty of a single-column model. Therefore, the mean tropical HDO profile is unlikely to add information about convective processes in a bulk-plume framework that cannot already be learned from relative humidity alone.

  5. Probing primordial features with next-generation photometric and radio surveys

    NASA Astrophysics Data System (ADS)

    Ballardini, M.; Finelli, F.; Maartens, R.; Moscardini, L.

    2018-04-01

    We investigate the possibility of using future photometric and radio surveys to constrain the power spectrum of primordial fluctuations that is predicted by inflationary models with a violation of the slow-roll phase. We forecast constraints with a Fisher analysis on the amplitude of the parametrized features on ultra-large scales, in order to assess whether these could be distinguishable over the cosmic variance. We find that the next generation of photometric and radio surveys has the potential to test these models at a sensitivity better than current CMB experiments and that the synergy between galaxy and CMB observations is able to constrain models with many extra parameters. In particular, an SKA continuum survey with a huge sky coverage and a flux threshold of a few μJy could confirm the presence of a new phase in the early Universe at more than 3σ.

  6. Constraining chameleon field theories using the GammeV afterglow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, A.; Steffen, J. H.; Weltman, A.

    2010-01-01

    The GammeV experiment has constrained the couplings of chameleon scalar fields to matter and photons. Here, we present a detailed calculation of the chameleon afterglow rate underlying these constraints. The dependence of GammeV constraints on various assumptions in the calculation is studied. We discuss the GammeV-CHameleon Afterglow SEarch (CHASE), a second-generation GammeV experiment, which will improve upon GammeV in several major ways. Using our calculation of the chameleon afterglow rate, we forecast model-independent constraints achievable by GammeV-CHASE. We then apply these constraints to a variety of chameleon models, including quartic chameleons and chameleon dark energy models. The new experiment will be able to probe a large region of parameter space that is beyond the reach of current tests, such as fifth force searches, constraints on the dimming of distant astrophysical objects, and bounds on the variation of the fine structure constant.

  7. Constraining chameleon field theories using the GammeV afterglow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, A.; /Chicago U., EFI /KICP, Chicago; Steffen, J.H.

    2009-11-01

    The GammeV experiment has constrained the couplings of chameleon scalar fields to matter and photons. Here we present a detailed calculation of the chameleon afterglow rate underlying these constraints. The dependence of GammeV constraints on various assumptions in the calculation is studied. We discuss GammeV-CHASE, a second-generation GammeV experiment, which will improve upon GammeV in several major ways. Using our calculation of the chameleon afterglow rate, we forecast model-independent constraints achievable by GammeV-CHASE. We then apply these constraints to a variety of chameleon models, including quartic chameleons and chameleon dark energy models. The new experiment will be able to probe a large region of parameter space that is beyond the reach of current tests, such as fifth force searches, constraints on the dimming of distant astrophysical objects, and bounds on the variation of the fine structure constant.

  8. Assessing Shape Characteristics of Jupiter Trojans in the Kepler Campaign 6 Field

    NASA Astrophysics Data System (ADS)

    Sharkey, Benjamin; Ryan, Erin L.; Woodward, Charles E.

    2017-10-01

    We report estimates of spin pole orientations and body-centric axis ratios of nine Jupiter Trojan asteroids through convex shape models derived from Kepler K2 photometry. Our sample contains single-component as well as candidate binary systems (identified through lightcurve features). Photometric baselines on the targets covered 7 to 93 full rotation periods. By incorporating a bias against highly elongated physical shapes, spin vector orientations of single-component systems were constrained to several discrete regions. Single-component convex models failed to converge on two binary candidates while two others demonstrated pronounced tapering that may be consistent with concavities of contact binaries. Further work to create two-component models is likely necessary to constrain the candidate binary targets. We find that Kepler K2 photometry provides robust datasets capable of providing detailed information on physical shape parameters of Jupiter Trojans.

  9. Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation

    NASA Astrophysics Data System (ADS)

    Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill

    2012-06-01

    Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Video segmentation of WCE into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models the prior knowledge. This prior knowledge, together with inter-frame differences fitted by a Gaussian distribution, serves as the global constraint on the transition probabilities of a hidden Markov model. Experimental results demonstrated the effectiveness of the approach.

  10. Systematic Uncertainties in High-Energy Hadronic Interaction Models

    NASA Astrophysics Data System (ADS)

    Zha, M.; Knapp, J.; Ostapchenko, S.

    2003-07-01

    Hadronic interaction models for cosmic ray energies are uncertain since our knowledge of hadronic interactions is extrapolated from accelerator experiments at much lower energies. At present most high-energy models are based on Gribov-Regge theory of multi-Pomeron exchange, which provides a theoretical framework to evaluate cross-sections and particle production. While experimental data constrain some of the model parameters, others are not well determined and are therefore a source of systematic uncertainties. In this paper we evaluate the variation of results obtained with the QGSJET model when modifying parameters relating to three major sources of uncertainty: the form of the parton structure function, the role of diffractive interactions, and the string hadronisation. Results on inelastic cross sections, on secondary particle production and on the air shower development are discussed.

  11. Uncertainty quantification of Antarctic contribution to sea-level rise using the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model

    NASA Astrophysics Data System (ADS)

    Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel

    2017-04-01

    Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.

  12. Constraining the JULES land-surface model for different land-use types using citizen-science generated hydrological data

    NASA Astrophysics Data System (ADS)

    Chou, H. K.; Ochoa-Tocachi, B. F.; Buytaert, W.

    2017-12-01

    Community land surface models such as JULES are increasingly used for hydrological assessment because of their state-of-the-art representation of land-surface processes. However, a major weakness of JULES and other land surface models is the limited number of land surface parameterizations that is available. Therefore, this study explores the use of data from a network of catchments under homogeneous land-use to generate parameter "libraries" to extend the land surface parameterizations of JULES. The network (called iMHEA) is part of a grassroots initiative to characterise the hydrological response of different Andean ecosystems, and collects data on streamflow, precipitation, and several weather variables at a high temporal resolution. The tropical Andes are a useful case study because of the complexity of meteorological and geographical conditions combined with extremely heterogeneous land-use, which results in a wide range of hydrological responses. We then calibrated JULES for each land-use represented in the iMHEA dataset. For the individual land-use types, the results show improved simulations of streamflow when using the calibrated parameters with respect to default values. In particular, the partitioning between surface and subsurface flows can be improved. On a regional scale, hydrological modelling also benefited greatly from constraining parameters using such distributed citizen-science generated streamflow data. This study demonstrates regional hydrological modelling and prediction by integrating citizen science with a land surface model, offering a framework to address data scarcity in hydrological studies. Improved predictions of land-use impacts could be leveraged by catchment managers to guide watershed interventions, to evaluate their effectiveness, and to minimize risks.

  13. S stars in the Gaia era: stellar parameters and nucleosynthesis

    NASA Astrophysics Data System (ADS)

    van Eck, Sophie; Karinkuzhi, Drisya; Shetye, Shreeya; Jorissen, Alain; Goriely, Stéphane; Siess, Lionel; Merle, Thibault; Plez, Bertrand

    2018-04-01

    S stars are s-process and C-enriched (0.5

  14. Bayesian estimation of source parameters and associated Coulomb failure stress changes for the 2005 Fukuoka (Japan) Earthquake

    NASA Astrophysics Data System (ADS)

    Dutta, Rishabh; Jónsson, Sigurjón; Wang, Teng; Vasyura-Bathke, Hannes

    2018-04-01

    Several researchers have studied the source parameters of the 2005 Fukuoka (northwestern Kyushu Island, Japan) earthquake (Mw 6.6) using teleseismic, strong motion and geodetic data. However, in all previous studies, errors of the estimated fault solutions have been neglected, making it impossible to assess the reliability of the reported solutions. We use Bayesian inference to estimate the location, geometry and slip parameters of the fault and their uncertainties using Interferometric Synthetic Aperture Radar and Global Positioning System data. The offshore location of the earthquake makes the fault parameter estimation challenging, with geodetic data coverage mostly to the southeast of the earthquake. To constrain the fault parameters, we use a priori constraints on the magnitude of the earthquake and the location of the fault with respect to the aftershock distribution and find that the estimated fault slip ranges from 1.5 to 2.5 m with decreasing probability. The marginal distributions of the source parameters show that the location of the western end of the fault is poorly constrained by the data whereas that of the eastern end, located closer to the shore, is better resolved. We propagate the uncertainties of the fault model and calculate the variability of Coulomb failure stress changes for the nearby Kego fault, located directly below Fukuoka city, showing that the main shock increased stress on the fault and brought it closer to failure.

  15. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1993-01-01

    A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
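
    As a rough illustration of the update step described above, the sketch below solves a linearized sensitivity system for parameter changes using a truncated singular value decomposition; the matrix sizes, data, and function name are hypothetical.

    ```python
    import numpy as np

    def update_parameters(S, dm, tol=1e-6):
        """Solve S @ dp = dm for physical parameter changes dp via truncated SVD.

        S  : linear sensitivity matrix (changes in system matrices / modal data
             per unit change in selected physical parameters)
        dm : misfit between measured and model-predicted data
        Small singular values are truncated, suppressing poorly constrained
        parameter combinations (a regularized least-squares update).
        """
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        keep = s > tol * s[0]                 # discard near-null directions
        dp = Vt[keep].T @ ((U[:, keep].T @ dm) / s[keep])
        return dp

    # Hypothetical example: 6 modal observations, 4 stiffness parameters
    rng = np.random.default_rng(1)
    S = rng.normal(size=(6, 4))
    dm = rng.normal(size=6)
    print(update_parameters(S, dm))
    ```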

  16. Vibration control of beams using stand-off layer damping: finite element modeling and experiments

    NASA Astrophysics Data System (ADS)

    Chaudry, A.; Baz, A.

    2006-03-01

    Damping treatments with stand-off layer (SOL) have been widely accepted as an attractive alternative to conventional constrained layer damping (CLD) treatments. Such an acceptance stems from the fact that the SOL, which is simply a slotted spacer layer sandwiched between the viscoelastic layer and the base structure, acts as a strain magnifier that considerably amplifies the shear strain and hence the energy dissipation characteristics of the viscoelastic layer. Accordingly, more effective vibration suppression can be achieved by using SOL as compared to employing CLD. In this paper, a comprehensive finite element model of the stand-off layer constrained damping treatment is developed. The model accounts for the geometrical and physical parameters of the slotted SOL, the viscoelastic layer, the constraining layer, and the base structure. The predictions of the model are validated against the predictions of a distributed transfer function model and a model built using a commercial finite element code (ANSYS). Furthermore, the theoretical predictions are validated experimentally for passive SOL treatments of different configurations. The obtained results indicate a close agreement between theory and experiments. Furthermore, the obtained results demonstrate the effectiveness of the CLD with SOL in enhancing the energy dissipation as compared to the conventional CLD. Extending the proposed one-dimensional CLD with SOL to more complex structures is a natural continuation of the present study.

  17. Studying W′ boson contributions in $\bar{B} \to D^{(*)} \ell^{-} \bar{\nu}_{\ell}$ decays

    NASA Astrophysics Data System (ADS)

    Wang, Yi-Long; Wei, Bin; Sheng, Jin-Huan; Wang, Ru-Min; Yang, Ya-Dong

    2018-05-01

    Recently, the Belle collaboration reported the first measurement of the τ lepton polarization $P_{\tau}(D^{*})$ in $\bar{B} \to D^{*} \tau^{-} \bar{\nu}_{\tau}$ decay and a new measurement of the ratio of branching ratios $R(D^{*})$, which are consistent with the Standard Model (SM) predictions. These could be used to constrain New Physics (NP) beyond the SM. In this paper, we probe $\bar{B} \to D^{(*)} \ell^{-} \bar{\nu}_{\ell}$ (ℓ = e, μ, τ) decays in a model-independent way and in specific G(221) models with lepton flavour universality. Considering the theoretical uncertainties and the experimental errors at the 95% C.L., we obtain quite strong bounds on the model-independent parameters $C'_{LL}$, $C'_{LR}$, $C'_{RR}$, $C'_{RL}$, $g_{V}$, $g_{A}$, $g'_{V}$, $g'_{A}$ and on the parameters of the specific G(221) models. We find that the constrained NP couplings have no obvious effects on the (differential) branching ratios and their ratios; nevertheless, many NP couplings have very large effects on the lepton spin asymmetries of $\bar{B} \to D^{(*)} \ell^{-} \bar{\nu}_{\ell}$ decays and on the forward-backward asymmetries of $\bar{B} \to D^{*} \ell^{-} \bar{\nu}_{\ell}$. We therefore expect precision measurements of these observables from LHCb and Belle-II.

  18. Constraining Earthquake Source Parameters in Rupture Patches and Rupture Barriers on Gofar Transform Fault, East Pacific Rise from Ocean Bottom Seismic Data

    NASA Astrophysics Data System (ADS)

    Moyer, P. A.; Boettcher, M. S.; McGuire, J. J.; Collins, J. A.

    2015-12-01

    On Gofar transform fault on the East Pacific Rise (EPR), Mw ~6.0 earthquakes occur every ~5 years and repeatedly rupture the same asperity (rupture patch), while the intervening fault segments (rupture barriers to the largest events) only produce small earthquakes. In 2008, an ocean bottom seismometer (OBS) deployment successfully captured the end of a seismic cycle, including an extensive foreshock sequence localized within a 10 km rupture barrier, the Mw 6.0 mainshock and its aftershocks that occurred in a ~10 km rupture patch, and an earthquake swarm located in a second rupture barrier. Here we investigate whether the inferred variations in frictional behavior along strike affect the rupture processes of 3.0 < M < 4.5 earthquakes by determining source parameters for 100 earthquakes recorded during the OBS deployment. Using waveforms with a 50 Hz sample rate from OBS accelerometers, we calculate stress drop using an omega-squared source model, where the weighted average corner frequency is derived from an empirical Green's function (EGF) method. We obtain seismic moment by fitting the omega-squared source model to the low frequency amplitude of individual spectra and account for attenuation using Q obtained from a velocity model through the foreshock zone. To ensure well-constrained corner frequencies, we require that the Brune [1970] model provides a statistically better fit to each spectral ratio than a linear model and that the variance is low between the data and model. To further ensure that the fit to the corner frequency is not influenced by resonance of the OBSs, we require a low variance close to the modeled corner frequency. Error bars on corner frequency were obtained through a grid search method where variance is within 10% of the best-fit value. Without imposing restrictive selection criteria, slight variations in corner frequencies between rupture patches and rupture barriers are not discernible. Using well-constrained source parameters, we find an average stress drop of 5.7 MPa in the aftershock zone, compared to values of 2.4 and 2.9 MPa in the foreshock and swarm zones, respectively. The higher stress drops in the rupture patch compared to the rupture barriers reflect systematic differences in along-strike fault zone properties on Gofar transform fault.
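
    For readers unfamiliar with the spectral-fitting step, the sketch below fits an omega-squared source model to a synthetic displacement spectrum and converts the corner frequency to a Brune stress drop; the spectrum, shear-wave speed, and seismic moment are assumed illustrative values, not data from this study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def omega_squared(f, omega0, fc):
        """Omega-squared (Brune) displacement spectrum."""
        return omega0 / (1.0 + (f / fc) ** 2)

    # Synthetic spectrum standing in for an EGF-corrected observation
    rng = np.random.default_rng(2)
    f = np.logspace(-1, 1.5, 100)                      # Hz
    true_omega0, true_fc = 1.0e-5, 3.0                 # illustrative values
    spec = omega_squared(f, true_omega0, true_fc) * rng.lognormal(0, 0.1, f.size)

    (omega0, fc), _ = curve_fit(omega_squared, f, spec, p0=[1e-5, 1.0])

    # Brune (1970) source radius and stress drop, with assumed shear speed
    # and seismic moment
    beta = 3500.0                                      # m/s, assumed
    m0 = 4e14                                          # N*m, assumed moment
    r = 0.37 * beta / fc                               # source radius, m
    stress_drop = 7.0 * m0 / (16.0 * r ** 3)           # Pa
    print(f"fc = {fc:.2f} Hz, stress drop = {stress_drop / 1e6:.2f} MPa")
    ```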

  19. An opinion-driven behavioral dynamics model for addictive behaviors

    DOE PAGES

    Moore, Thomas W.; Finley, Patrick D.; Apelberg, Benjamin J.; ...

    2015-04-08

    We present a model of behavioral dynamics that combines a social network-based opinion dynamics model with behavioral mapping. The behavioral component is discrete and history-dependent to represent situations in which an individual's behavior is initially driven by opinion and later constrained by physiological or psychological conditions that serve to maintain the behavior. Additionally, individuals are modeled as nodes in a social network connected by directed edges. Parameter sweeps illustrate model behavior and the effects of individual parameters and parameter interactions on model results. Mapping a continuous opinion variable into a discrete behavioral space induces clustering on directed networks. Clusters provide targets of opportunity for influencing the network state; however, the smaller the network, the greater the stochasticity and potential variability in outcomes. Furthermore, this has implications both for behaviors that are influenced by close relationships versus those influenced by societal norms and for the effectiveness of strategies for influencing those behaviors.

  20. Z boson mediated dark matter beyond the effective theory

    DOE PAGES

    Kearney, John; Orlofsky, Nicholas; Pierce, Aaron

    2017-02-17

    Here, direct detection bounds are beginning to constrain a very simple model of weakly interacting dark matter: a Majorana fermion with a coupling to the Z boson. In a particularly straightforward gauge-invariant realization, this coupling is introduced via a higher-dimensional operator. While attractive in its simplicity, this model generically induces a large ρ parameter. An ultraviolet completion that avoids an overly large contribution to ρ is the singlet-doublet model. We revisit this model, focusing on the Higgs blind spot region of parameter space where spin-independent interactions are absent. This model successfully reproduces dark matter with direct detection mediated by the Z boson but whose cosmology may depend on additional couplings and states. Future direct detection experiments should effectively probe a significant portion of this parameter space, aside from a small coannihilating region. As such, Z-mediated thermal dark matter as realized in the singlet-doublet model represents an interesting target for future searches.

  1. Using dual-domain advective-transport simulation to reconcile multiple-tracer ages and estimate dual-porosity transport parameters

    NASA Astrophysics Data System (ADS)

    Sanford, Ward E.; Niel Plummer, L.; Casile, Gerolamo; Busenberg, Ed; Nelms, David L.; Schlosser, Peter

    2017-06-01

    Dual-domain transport is an alternative conceptual and mathematical paradigm to advection-dispersion for describing the movement of dissolved constituents in groundwater. Here we test the use of a dual-domain algorithm combined with advective pathline tracking to help reconcile environmental tracer concentrations measured in springs within the Shenandoah Valley, USA. The approach also allows for the estimation of the three dual-domain parameters: mobile porosity, immobile porosity, and a domain exchange rate constant. Concentrations of CFC-113, SF6, 3H, and 3He were measured at 28 springs emanating from carbonate rocks. The different tracers give three different mean composite piston-flow ages for all the springs that vary from 5 to 18 years. Here we compare four algorithms that interpret the tracer concentrations in terms of groundwater age: piston flow, old-fraction mixing, advective-flow path modeling, and dual-domain modeling. Whereas the second two algorithms made slight improvements over piston flow at reconciling the disparate piston-flow age estimates, the dual-domain algorithm gave a very marked improvement. Optimal values for the three transport parameters were also obtained, although the immobile porosity value was not well constrained. Parameter correlation and sensitivities were calculated to help quantify the uncertainty. Although some correlation exists between the three parameters being estimated, a watershed simulation of a pollutant breakthrough to a local stream illustrates that the estimated transport parameters can still substantially help to constrain and predict the nature and timing of solute transport. The combined use of multiple environmental tracers with this dual-domain approach could be applicable in a wide variety of fractured-rock settings.
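
    For reference, a common mobile-immobile (dual-domain) transport formulation involving the three estimated parameters is sketched below; the paper's exact implementation may differ in detail.

    ```latex
    % Standard mobile-immobile (dual-domain) transport; a generic statement,
    % not necessarily the exact form used in the study.
    \begin{align}
      \theta_m \frac{\partial C_m}{\partial t}
        + \theta_{im} \frac{\partial C_{im}}{\partial t}
        &= \nabla \cdot \left( \theta_m \mathbf{D} \nabla C_m \right)
           - \mathbf{q} \cdot \nabla C_m, \\
      \theta_{im} \frac{\partial C_{im}}{\partial t}
        &= \alpha \left( C_m - C_{im} \right).
    \end{align}
    % theta_m, theta_im : mobile and immobile porosities (two of the three
    %                     estimated parameters)
    % alpha             : first-order domain exchange rate constant (the third)
    % C_m, C_im         : solute concentrations in the two domains
    ```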

  2. A cosmological exclusion plot: towards model-independent constraints on modified gravity from current and future growth rate data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taddei, Laura; Amendola, Luca, E-mail: laura.taddei@fis.unipr.it, E-mail: l.amendola@thphys.uni-heidelberg.de

    Most cosmological constraints on modified gravity are obtained assuming that the cosmic evolution was standard ΛCDM in the past and that the present matter density and power spectrum normalization are the same as in a ΛCDM model. Here we examine how the constraints change when these assumptions are lifted. We focus in particular on the parameter Y (also called G_eff) that quantifies the deviation from the Poisson equation. This parameter can be estimated by comparison with the model-independent growth rate quantity fσ₈(z) obtained through redshift distortions. We reduce the model dependency in evaluating Y by marginalizing over σ₈ and over the initial conditions, and by absorbing the degenerate parameter Ω_{m,0} into Y. We use all currently available values of fσ₈(z). We find that the combination Ŷ = Y Ω_{m,0}, assumed constant in the observed redshift range, can be constrained only very weakly by current data, Ŷ = 0.28^{+0.35}_{−0.23} at 68% c.l. We also forecast the precision of a future estimation of Ŷ in a Euclid-like redshift survey. We find that the future constraints will reduce the uncertainty substantially, Ŷ = 0.30^{+0.08}_{−0.09} at 68% c.l., but the relative error on Ŷ around the fiducial remains quite high, of the order of 30%. The main reason for these weak constraints is that Ŷ is strongly degenerate with the initial conditions, so that large or small values of Ŷ are compensated by choosing non-standard initial values of the derivative of the matter density contrast. Finally, we produce a forecast of a cosmological exclusion plot on the Yukawa strength and range parameters, which complements similar plots on laboratory scales but explores scales and epochs reachable only with large-scale galaxy surveys. We find that future data can constrain the Yukawa strength to within 3% of the Newtonian one if the range is around a few Megaparsecs. In the particular case of f(R) models, we find that the Yukawa range will be constrained to be larger than 80 Mpc/h or smaller than 2 Mpc/h (95% c.l.), regardless of the specific f(R) model.

  3. Primordial perturbations generated by Higgs field and R2 operator

    NASA Astrophysics Data System (ADS)

    Wang, Yun-Chao; Wang, Tower

    2017-12-01

    If the very early Universe is dominated by the nonminimally coupled Higgs field and Starobinsky's curvature-squared term together, the potential diagram would mimic the landscape of a valley, serving as a cosmological attractor. The inflationary dynamics along this valley is studied, model parameters are constrained against observational data, and the effect of isocurvature perturbation is estimated.

  4. X-Ray Variability and the Secondary Star

    NASA Technical Reports Server (NTRS)

    Corcoran, M. F.; Ishibashi, K.

    2012-01-01

    We discuss the history of X-ray observations of the η Car system, concentrating on the periodic variability discovered in the 1990s. We discuss the interpretation of these variations, concentrating on a model of the system as a "colliding-wind" binary. This interpretation allows the physical and orbital parameters of η Car and its companion star to be constrained.

  5. Insight into glacier climate interaction: reconstruction of the mass balance field using ice extent data

    NASA Astrophysics Data System (ADS)

    Visnjevic, Vjeran; Herman, Frédéric; Licul, Aleksandar

    2016-04-01

    The Last Glacial Maximum (LGM), which ended about 20,000 years ago, was the most recent long-lasting cold phase in Earth's history. We recently developed a model that describes large-scale erosion and its response to climatic and dynamical changes, with an application to the Alps during the LGM. Here we present an inverse approach we have recently developed to infer the LGM mass balance from known ice extent data, focusing on a glacier or ice cap. The ice flow model is based on the shallow ice approximation, and the codes are accelerated using GPU capabilities. The mass balance field is the constrained variable, defined by the balance rate β, the equilibrium line altitude (ELA), and a cutoff value c:

    $b = \max\left(\beta \cdot (S(z) - \mathrm{ELA}),\ c\right)$

    We show that such a mass balance can be constrained from the observed past ice extent and ice thickness. We are also investigating several different geostatistical methods to constrain a spatially variable mass balance and to derive uncertainties on each of the mass balance parameters.
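
    A minimal numerical rendering of this parameterization, implemented exactly as the formula is written above, with illustrative values for β, the ELA, and the cutoff c:

    ```python
    import numpy as np

    def mass_balance(surface, beta, ela, cutoff):
        """b = max(beta * (S(z) - ELA), c), as written in the abstract:
        linear in elevation above/below the ELA, floored at the cutoff c."""
        return np.maximum(beta * (surface - ela), cutoff)

    # Illustrative values only
    surface = np.linspace(500.0, 3500.0, 7)    # m a.s.l.
    print(mass_balance(surface, beta=0.007, ela=2000.0, cutoff=-10.0))
    ```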

  6. A multi-frequency receiver function inversion approach for crustal velocity structure

    NASA Astrophysics Data System (ADS)

    Li, Xuelei; Li, Zhiwei; Hao, Tianyao; Wang, Sheng; Xing, Jian

    2017-05-01

    In order to constrain crustal velocity structures better, we developed a new nonlinear inversion approach based on multi-frequency receiver function waveforms. With the global optimization algorithm of Differential Evolution (DE), low-frequency receiver function waveforms primarily constrain large-scale velocity structures, while high-frequency receiver function waveforms show advantages in recovering small-scale velocity structures. Based on synthetic tests with multi-frequency receiver function waveforms, the proposed approach can constrain both long- and short-wavelength characteristics of the crustal velocity structures simultaneously. Inversions with real data are also conducted for the seismic stations KMNB in southeast China and HYB on the Indian continent, where crustal structures have been well studied by previous researchers. Comparisons of the velocity models inverted in previous studies and in ours show good consistency, but our approach achieves better waveform fits with fewer model parameters. Comprehensive tests with synthetic and real data suggest that the proposed multi-frequency receiver function inversion approach is effective and robust for inverting crustal velocity structures.
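
    To illustrate the global-optimization step, the sketch below minimizes a joint multi-frequency misfit with SciPy's differential_evolution; the forward model here is a toy stand-in rather than an actual receiver-function synthesizer, and all parameter names and bounds are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(3)

    def forward(params, t, freq_band):
        """Stand-in forward model: a damped oscillation whose shape depends
        on the trial 'velocity model' parameters. A real application would
        compute synthetic receiver functions filtered to each band."""
        v1, v2, h = params
        return np.exp(-t / (h / v1)) * np.cos(2 * np.pi * freq_band * t * v2 / v1)

    t = np.linspace(0, 10, 500)
    bands = [0.5, 1.0, 2.0]                      # low to high frequency (Hz)
    true = (3.0, 4.0, 30.0)
    observed = [forward(true, t, fb) + rng.normal(0, 0.02, t.size) for fb in bands]

    def misfit(params):
        # Joint misfit over all bands: low frequencies constrain the
        # long-wavelength structure, high frequencies the fine detail.
        return sum(np.sum((obs - forward(params, t, fb)) ** 2)
                   for obs, fb in zip(observed, bands))

    result = differential_evolution(misfit, bounds=[(2, 5), (3, 6), (10, 60)],
                                    seed=42, tol=1e-8)
    print("recovered parameters:", result.x)
    ```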

  7. Deep Unfolding for Topic Models.

    PubMed

    Chien, Jen-Tzung; Lee, Chao-Hsi

    2018-02-01

    Deep unfolding provides an approach to integrate probabilistic generative models with deterministic neural networks. Such an approach benefits from deep representation, easy interpretation, flexible learning and stochastic modeling. This study develops the unsupervised and supervised learning of deep unfolded topic models for document representation and classification. Conventionally, the unsupervised and supervised topic models are inferred via the variational inference algorithm, where the model parameters are estimated by maximizing the lower bound on the logarithm of the marginal likelihood using input documents without and with class labels, respectively. The representation capability or classification accuracy is constrained by the variational lower bound and by model parameters tied across the inference procedure. This paper aims to relax these constraints by directly maximizing the end performance criterion and continuously untying the parameters in the learning process via deep unfolding inference (DUI). The inference procedure is treated as layer-wise learning in a deep neural network. The end performance is iteratively improved by using the estimated topic parameters according to exponentiated updates. Deep learning of topic models is therefore implemented through a back-propagation procedure. Experimental results show the merits of DUI with an increasing number of layers compared with variational inference in unsupervised as well as supervised topic models.

  8. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by tuning the sensitive parameters that exert the greatest impact on simulated results using effective optimization methods. However, whether the optimal parameter values still work when the model simulation conditions vary is a scientific question that deserves study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three boundary datasets and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, or spatial resolutions. The optimal values of the nine parameters were determined from 127 parameter samples, which shows that the ASMO method is highly efficient for optimizing WRF model parameters.
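
    A minimal surrogate-assisted optimization loop in the spirit of ASMO is sketched below, with a Gaussian-process surrogate standing in for the adaptive surrogate model and a cheap analytic function standing in for an expensive WRF run; all names, settings, and the adaptive-sampling rule are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(4)

    def expensive_model(x):
        """Stand-in for an expensive simulation run (e.g. one WRF season);
        the true objective is unknown to the optimizer."""
        return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(10 * x).sum()

    dim, bounds = 2, (0.0, 1.0)
    X = rng.uniform(*bounds, size=(10, dim))           # initial design
    y = np.array([expensive_model(x) for x in X])

    for it in range(20):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X, y)
        # Adaptive sampling: score many cheap candidates with the surrogate
        # and run the expensive model only at the surrogate minimum.
        cand = rng.uniform(*bounds, size=(2000, dim))
        x_new = cand[np.argmin(gp.predict(cand))]
        X = np.vstack([X, x_new])
        y = np.append(y, expensive_model(x_new))

    print("best parameters:", X[np.argmin(y)], "objective:", y.min())
    ```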

  9. Self-constrained inversion of potential fields

    NASA Astrophysics Data System (ADS)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.

  10. Spectroscopic ellipsometry data inversion using constrained splines and application to characterization of ZnO with various morphologies

    NASA Astrophysics Data System (ADS)

    Gilliot, Mickaël; Hadjadj, Aomar; Stchakovsky, Michel

    2017-11-01

    An original method of ellipsometric data inversion is proposed based on the use of constrained splines. The imaginary part of the dielectric function is represented by a series of splines, constructed with particular constraints on slopes at the node boundaries to avoid the well-known oscillations of natural splines. The nodes are used as fit parameters. The real part is calculated using Kramers-Kronig relations. The inversion can be performed in successive steps with increasing resolution. This method is used to characterize thin zinc oxide layers obtained by a sol-gel and spin-coating process, with a particular recipe yielding very thin layers presenting nano-porosity. Such layers have particular optical properties correlated with their thickness and with morphological and structural properties. The constrained spline method is particularly efficient for such materials, which may not be easily represented by standard dielectric function models.
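
    The Kramers-Kronig step can be sketched as a discretized principal-value integral; the Lorentzian ε₂ below is an illustrative single oscillator, not a fitted ZnO dielectric function, and the crude singularity treatment is an assumption of this sketch.

    ```python
    import numpy as np

    def kk_real_part(energy, eps2):
        """Real part of the dielectric function from its imaginary part via a
        discretized Kramers-Kronig principal-value integral:
            eps1(E) = 1 + (2/pi) P int E' eps2(E') / (E'^2 - E^2) dE'
        The singular point is zeroed as a crude principal-value treatment;
        contributions on either side of it roughly cancel by symmetry."""
        de = energy[1] - energy[0]
        eps1 = np.ones_like(energy)
        for i, e in enumerate(energy):
            integrand = energy * eps2 / (energy ** 2 - e ** 2 + 1e-30)
            integrand[i] = 0.0                 # exclude the singularity
            eps1[i] += (2.0 / np.pi) * np.sum(integrand) * de
        return eps1

    # Illustrative eps2: a single Lorentzian oscillator (not a fitted model)
    E = np.linspace(0.01, 10.0, 2000)          # eV
    eps2 = 5.0 * 0.2 * E / ((E ** 2 - 3.3 ** 2) ** 2 + (0.2 * E) ** 2)
    print(kk_real_part(E, eps2)[:5])
    ```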

  11. Thunder-induced ground motions: 2. Site characterization

    NASA Astrophysics Data System (ADS)

    Lin, Ting-L.; Langston, Charles A.

    2009-04-01

    Thunder-induced ground motion, near-surface refraction, and Rayleigh wave dispersion measurements were used to constrain near-surface velocity structure at an unconsolidated sediment site. We employed near-surface seismic refraction measurements to first define ranges for site structure parameters. Air-coupled and hammer-generated Rayleigh wave dispersion curves were used to further constrain the site structure by a grid search technique. The acoustic-to-seismic coupling is modeled as an incident plane P wave in a fluid half-space impinging into a solid layered half-space. We found that the infrasound-induced ground motions constrained substrate velocities and the average thickness and velocities of the near-surface layer. The addition of higher-frequency near-surface Rayleigh waves produced tighter constraints on the near-surface velocities. This suggests that natural or controlled airborne pressure sources can be used to investigate the near-surface site structures for earthquake shaking hazard studies.

  12. Constraints from the CMB temperature and other common observational data sets on variable dark energy density models

    NASA Astrophysics Data System (ADS)

    Jetzer, Philippe; Tortora, Crescenzo

    2011-08-01

    The thermodynamic and dynamical properties of a variable dark energy model with density scaling as ρ_x ∝ (1+z)^m, z being the redshift, are discussed following the outline of Jetzer et al. [P. Jetzer, D. Puy, M. Signore, and C. Tortora, Gen. Relativ. Gravit. 43, 1083 (2011)]. These kinds of models are shown to lead to the creation/disruption of matter and radiation, which affects the cosmic evolution of both the matter and radiation components in the Universe. In particular, we have concentrated on the temperature-redshift relation of radiation, which has been constrained using a very recent collection of cosmic microwave background (CMB) temperature measurements up to z ∼ 3. For the first time, we have combined this observational probe with a set of independent measurements (Supernovae Ia distance moduli, CMB anisotropy, large-scale structure and observational data for the Hubble parameter), which are commonly adopted to constrain dark energy models. We find that, within the uncertainties, the model is indistinguishable from a cosmological constant that does not exchange any particles with other components. However, while temperature measurements and Supernovae Ia tend to predict slightly decaying models, the contrary happens if CMB data are included. Future observations, in particular measurements of the CMB temperature at large redshift, will allow firmer bounds to be placed on the effective equation of state parameter w_eff of this kind of dark energy model.

  13. The Supernovae Analysis Application (SNAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  14. The Supernovae Analysis Application (SNAP)

    DOE PAGES

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas; ...

    2017-09-06

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  15. The Supernovae Analysis Application (SNAP)

    NASA Astrophysics Data System (ADS)

    Bayless, Amanda J.; Fryer, Chris L.; Wollaeger, Ryan; Wiggins, Brandon; Even, Wesley; de la Rosa, Janie; Roming, Peter W. A.; Frey, Lucy; Young, Patrick A.; Thorpe, Rob; Powell, Luke; Landers, Rachel; Persson, Heather D.; Hay, Rebecca

    2017-09-01

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  16. Constraining the parameters of the EAP sea ice rheology from satellite observations and discrete element model

    NASA Astrophysics Data System (ADS)

    Tsamados, Michel; Heorton, Harry; Feltham, Daniel; Muir, Alan; Baker, Steven

    2016-04-01

    The new elastic-plastic anisotropic (EAP) rheology that explicitly accounts for the sub-continuum anisotropy of the sea ice cover has been implemented into the latest version of the Los Alamos sea ice model CICE. The EAP rheology is widely used in the climate modeling scientific community (e.g. the CPOM stand-alone model, the RASM high-resolution regional ice-ocean model, and the Met Office fully coupled model). Early results from sensitivity studies (Tsamados et al., 2013) have shown the potential for an improved representation of the observed main sea ice characteristics, with a substantial change of the spatial distribution of ice thickness and ice drift relative to model runs with the reference visco-plastic (VP) rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. Observations from high-resolution satellite SAR imagery, as well as numerical simulation results from a discrete element model (DEM; see Wilchinsky, 2010), have shown that individual ice floes can organize under external wind and thermal forcing to form an emergent isotropic sea ice state (via thermodynamic healing and thermal cracking) or an anisotropic sea ice state (via Coulombic failure lines due to shear rupture). In this work we use, for the first time in the context of sea ice research, a mathematical metric, the tensorial Minkowski functionals (Schroeder-Turk, 2010), to measure quantitatively the degree of anisotropy and alignment of the sea ice at different scales. We apply the methodology to the GlobICE Envisat satellite deformation product (www.globice.info), to a prototype modified version of GlobICE applied to Sentinel-1 Synthetic Aperture Radar (SAR) imagery, and to the DEM ice floe aggregates. By comparing these independent measurements of sea ice anisotropy, as well as its temporal evolution, against the EAP model, we are able to constrain the uncertain parameters and functions of the EAP model.

  17. Holographic dark energy with cosmological constant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Yazhou; Li, Nan; Zhang, Zhenhui

    2015-08-01

    Inspired by the multiverse scenario, we study a heterotic dark energy model with two parts, the first being the cosmological constant and the second being the holographic dark energy; thus this model is named the ΛHDE model. By studying the ΛHDE model theoretically, we find that the parameters d and Ω_hde are divided into a few domains in which the fate of the universe is quite different. We investigate the dynamical behaviors of this model, and especially the future evolution of the universe. We perform fitting analysis on the cosmological parameters in the ΛHDE model by using recent observational data. We find the model yields χ²_min = 426.27 when constrained by Planck+SNLS3+BAO+HST, comparable to the results of the HDE model (428.20) and the concordant ΛCDM model (431.35). At 68.3% CL, we obtain −0.07

  18. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun

    2012-01-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).

  19. Simultaneously constraining the astrophysics of reionization and the epoch of heating with 21CMMC

    NASA Astrophysics Data System (ADS)

    Greig, Bradley; Mesinger, Andrei

    2017-12-01

    The cosmic 21 cm signal is set to revolutionize our understanding of the early Universe, allowing us to probe the 3D temperature and ionization structure of the intergalactic medium (IGM). It will open a window on to the unseen first galaxies, showing us how their UV and X-ray photons drove the cosmic milestones of the epoch of reionization (EoR) and epoch of heating (EoH). To facilitate parameter inference from the 21 cm signal, we previously developed 21CMMC: a Monte Carlo Markov Chain sampler of 3D EoR simulations. Here, we extend 21CMMC to include simultaneous modelling of the EoH, resulting in a complete Bayesian inference framework for the astrophysics dominating the observable epochs of the cosmic 21 cm signal. We demonstrate that second-generation interferometers, the Hydrogen Epoch of Reionization Array and Square Kilometre Array will be able to constrain ionizing and X-ray source properties of the first galaxies with a fractional precision of the order of ∼1-10 per cent (1σ). The ionization history of the Universe can be constrained to within a few percent. Using our extended framework, we quantify the bias in EoR parameter recovery incurred by the common simplification of a saturated spin temperature in the IGM. Depending on the extent of overlap between the EoR and the EoH, the recovered astrophysical parameters can be biased by ∼3σ-10σ.

  20. Analysis Techniques to Measure Charged Current Inclusive Water Cross Section and to Constrain Neutrino Oscillation Parameters using the Near Detector (ND280) of the T2K Experiment

    NASA Astrophysics Data System (ADS)

    Das, Rajarshi

    2014-03-01

    The Tokai to Kamioka (T2K) Experiment is a long-baseline neutrino oscillation experiment located in Japan with the primary goal to precisely measure multiple neutrino flavor oscillation parameters. An off-axis muon neutrino beam with an energy that peaks at 600 MeV is generated at the J-PARC facility and directed towards the kiloton Super-Kamiokande (SK) water Cherenkov detector located 295 km away. The rates of electron neutrino and muon neutrino interactions are measured at SK and compared with expected model values. This yields a measurement of the neutrino oscillation parameters sin²2θ₁₃ and sin²θ₂₃. Measurements from a Near Detector that is 280 m downstream of the neutrino beam target are used to constrain uncertainties in the beam flux prediction and neutrino interaction rates. We present a measurement of inclusive charged current neutrino interactions on water. We used several sub-detectors in the ND280 complex, including a Pi-Zero detector (P0D) that has alternating planes of plastic scintillator and water bag layers, a time projection chamber (TPC) and a fine-grained detector (FGD), to detect and reconstruct muons from neutrino charged current events. Finally, we describe a "forward-fitting" technique that is used to constrain the beam flux and cross section as an input for the neutrino oscillation analysis and also to extract a flux-averaged inclusive charged current cross section on water.

  1. Using Parameter Constraints to Choose State Structures in Cost-Effectiveness Modelling.

    PubMed

    Thom, Howard; Jackson, Chris; Welton, Nicky; Sharples, Linda

    2017-09-01

    This article addresses the choice of state structure in a cost-effectiveness multi-state model. Key model outputs, such as treatment recommendations and prioritisation of future research, may be sensitive to state structure choice. For example, it may be uncertain whether to consider similar disease severities or similar clinical events as the same state or as separate states. Standard statistical methods for comparing models require a common reference dataset but merging states in a model aggregates the data, rendering these methods invalid. We propose a method that involves re-expressing a model with merged states as a model on the larger state space in which particular transition probabilities, costs and utilities are constrained to be equal between states. This produces a model that gives identical estimates of cost effectiveness to the model with merged states, while leaving the data unchanged. The comparison of state structures can be achieved by comparing maximised likelihoods or information criteria between constrained and unconstrained models. We can thus test whether the costs and/or health consequences for a patient in two states are the same, and hence if the states can be merged. We note that different structures can be used for rates, costs and utilities, as appropriate. We illustrate our method with applications to two recent models evaluating the cost effectiveness of prescribing anti-depressant medications by depression severity and the cost effectiveness of diagnostic tests for coronary artery disease. State structures in cost-effectiveness models can be compared using standard methods to compare constrained and unconstrained models.
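
    The proposed comparison boils down to fitting the same data with and without an equality constraint and then comparing information criteria; the sketch below tests whether the costs in two candidate states share a mean, using illustrative synthetic data rather than the paper's case studies.

    ```python
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(1)
    cost_a = rng.normal(1000.0, 150.0, 40)   # observed costs in state A
    cost_b = rng.normal(1100.0, 150.0, 35)   # observed costs in state B

    def neg_loglik(params, constrained):
        if constrained:                      # merged states: one shared mean
            mu_a = mu_b = params[0]
            sigma = params[1]
        else:                                # separate states: two free means
            mu_a, mu_b, sigma = params
        if sigma <= 0:
            return np.inf
        return -(stats.norm.logpdf(cost_a, mu_a, sigma).sum()
                 + stats.norm.logpdf(cost_b, mu_b, sigma).sum())

    fit_c = optimize.minimize(neg_loglik, [1050.0, 150.0], args=(True,),
                              method="Nelder-Mead")
    fit_u = optimize.minimize(neg_loglik, [1000.0, 1100.0, 150.0], args=(False,),
                              method="Nelder-Mead")
    aic_c = 2 * 2 + 2 * fit_c.fun            # AIC = 2k + 2 * neg-log-likelihood
    aic_u = 2 * 3 + 2 * fit_u.fun
    print(f"AIC merged: {aic_c:.1f}, AIC separate: {aic_u:.1f}")
    # The lower AIC indicates whether the two states can be merged.
    ```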

  2. On constraining pilot point calibration with regularization in PEST

    USGS Publications Warehouse

    Fienen, M.N.; Muffels, C.T.; Hunt, R.J.

    2009-01-01

    Ground water model calibration has made great advances in recent years with practical tools such as PEST being instrumental for making the latest techniques available to practitioners. As models and calibration tools get more sophisticated, however, the power of these tools can be misapplied, resulting in poor parameter estimates and/or nonoptimally calibrated models that do not suit their intended purpose. Here, we focus on an increasingly common technique for calibrating highly parameterized numerical models - pilot point parameterization with Tikhonov regularization. Pilot points are a popular method for spatially parameterizing complex hydrogeologic systems; however, the additional flexibility offered by pilot points can become problematic if not constrained by Tikhonov regularization. The objective of this work is to explain and illustrate the specific roles played by control variables in the PEST software for Tikhonov regularization applied to pilot points. A recent study encountered difficulties implementing this approach, but through examination of that analysis, insight into the underlying sources of potential misapplication can be gained and some guidelines for overcoming them developed.
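
    As a concrete illustration of why the extra flexibility needs constraining, the sketch below estimates more pilot-point multipliers than there are observations, with and without a Tikhonov "preferred homogeneity" penalty; the sensitivity matrix, noise level and regularization weight mu are all illustrative stand-ins, not PEST's internals.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_obs, n_pp = 8, 20                        # fewer observations than pilot points
    J = rng.normal(size=(n_obs, n_pp))         # sensitivity (Jacobian) matrix
    k_true = np.ones(n_pp)
    k_true[5:8] = 3.0                          # a local high-K anomaly
    d = J @ k_true + rng.normal(0, 0.05, n_obs)

    def tikhonov(J, d, mu, k_pref):
        """Minimize ||J k - d||^2 + mu * ||k - k_pref||^2 (normal equations)."""
        A = J.T @ J + mu * np.eye(J.shape[1])
        return np.linalg.solve(A, J.T @ d + mu * k_pref)

    for mu in [1e-8, 0.1, 10.0]:               # ~unregularized -> strongly damped
        k_hat = tikhonov(J, d, mu, np.ones(n_pp))
        print(f"mu = {mu:g}: k[0:8] =", np.round(k_hat[:8], 2))
    ```

    With mu near zero the under-determined problem produces wild, noise-driven estimates; an overly large mu smears out the real anomaly, which is exactly the trade-off the PEST control variables manage.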

  3. Improvements to Wire Bundle Thermal Modeling for Ampacity Determination

    NASA Technical Reports Server (NTRS)

    Rickman, Steve L.; Iannello, Christopher J.; Shariff, Khadijah

    2017-01-01

    Determining current carrying capacity (ampacity) of wire bundles in aerospace vehicles is critical not only to safety but also to efficient design. Published standards provide guidance on determining wire bundle ampacity but offer little flexibility for configurations where wire bundles of mixed gauges and currents are employed with varying external insulation jacket surface properties. Thermal modeling has been employed in an attempt to develop techniques to assist in ampacity determination for these complex configurations. Previous developments allowed analysis of wire bundle configurations but were constrained to configurations comprising fewer than 50 elements. Additionally, for vacuum analyses, configurations with very low emittance external jackets suffered from numerical instability in the solution. A new thermal modeler is presented that allows for larger configurations and is not subject to numerical instability in low bundle infrared emissivity calculations. The formulation of key internal radiation and interface conductance parameters is discussed, including the effects of temperature and air pressure on wire-to-wire thermal conductance. Test cases comparing model-predicted ampacity and that calculated from standards documents are presented.

  4. Sub-TeV quintuplet minimal dark matter with left-right symmetry

    NASA Astrophysics Data System (ADS)

    Agarwalla, Sanjib Kumar; Ghosh, Kirtiman; Patra, Ayon

    2018-05-01

    A detailed study of a fermionic quintuplet dark matter in a left-right symmetric scenario is performed in this article. The minimal quintuplet dark matter model is highly constrained by the WMAP dark matter relic density (RD) data. To alleviate this constraint, an extra singlet scalar is introduced. It opens up a host of new annihilation and co-annihilation channels for the dark matter, allowing even sub-TeV masses. The phenomenology of this singlet scalar is studied in detail in the context of the Large Hadron Collider (LHC) experiment. The production and decay of this singlet scalar at the LHC give rise to interesting resonant di-Higgs or diphoton final states. We also constrain the RD-allowed parameter space of this model in light of the ATLAS bounds on the resonant di-Higgs and diphoton cross-sections.

  5. Numerical modeling of Drangajökull Ice Cap, NW Iceland

    NASA Astrophysics Data System (ADS)

    Anderson, Leif S.; Jarosch, Alexander H.; Flowers, Gwenn E.; Aðalgeirsdóttir, Guðfinna; Magnússon, Eyjólfur; Pálsson, Finnur; Muñoz-Cobo Belart, Joaquín; Þorsteinsson, Þorsteinn; Jóhannesson, Tómas; Sigurðsson, Oddur; Harning, David; Miller, Gifford H.; Geirsdóttir, Áslaug

    2016-04-01

    Over the past century the Arctic has warmed twice as fast as the global average. This discrepancy is likely due to feedbacks inherent to the Arctic climate system. These Arctic climate feedbacks are currently poorly quantified, but are essential to future climate predictions based on global circulation modeling. Constraining the magnitude and timing of past Arctic climate changes allows us to test climate feedback parameterizations at different times with different boundary conditions. Because Holocene Arctic summer temperature changes have been largest in the North Atlantic (Kaufman et al., 2004), we focus on constraining the paleoclimate of Iceland. Glaciers are highly sensitive to changes in temperature and precipitation amount. This sensitivity allows for the estimation of paleoclimate using glacier models, modern glacier mass balance data, and past glacier extents. We apply our model to the Drangajökull ice cap (~150 sq. km) in NW Iceland. Our numerical model is resolved in two dimensions, conserves mass, and applies the shallow-ice approximation. The bed DEM used in the model runs was constructed from radio-echo data surveyed in spring 2014. We constrain the modern surface mass balance of Drangajökull using: 1) ablation and accumulation stakes; 2) ice surface digital elevation models (DEMs) from satellite, airborne LiDAR, and aerial photographs; and 3) full-Stokes model-derived vertical ice velocities. The modeled vertical ice velocities and ice surface DEMs are combined to estimate past surface mass balance. We constrain Holocene glacier geometries using moraines and trimlines (e.g., Brynjólfsson et al., 2014), proglacial-lake cores, and radiocarbon-dated dead vegetation emerging from under the modern glacier. We present a sensitivity analysis of the model to changes in parameters and show the effect of step changes of temperature and precipitation on glacier extent. Our results are placed in context with local lacustrine and marine climate proxies as well as with glacier extent and volume changes across the North Atlantic.

  6. Constraining the mass of the Local Group

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10¹² M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10¹² M⊙, with a strong dependence on the v_tan values used.

  7. Fermi-LAT upper limits on gamma-ray emission from colliding wind binaries

    DOE PAGES

    Werner, Michael; Reimer, O.; Reimer, A.; ...

    2013-07-09

    Here, colliding wind binaries (CWBs) are thought to give rise to a plethora of physical processes, including acceleration and interaction of relativistic particles. Observation of synchrotron radiation in the radio band confirms there is a relativistic electron population in CWBs. Accordingly, CWBs have been suspected sources of high-energy γ-ray emission since the COS-B era. Theoretical models exist that characterize the underlying physical processes leading to particle acceleration and quantitatively predict the non-thermal energy emission observable at Earth. We strive to find evidence of γ-ray emission from a sample of seven CWB systems: WR 11, WR 70, WR 125, WR 137, WR 140, WR 146, and WR 147. Theoretical modelling identified these systems as the most favourable candidates for emitting γ-rays. We make a comparison with existing γ-ray flux predictions and investigate possible constraints. We used 24 months of data from the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope to perform a dedicated likelihood analysis of CWBs in the LAT energy range. As a result, we find no evidence of γ-ray emission from any of the studied CWB systems and determine corresponding flux upper limits. For some CWBs the interplay of orbital and stellar parameters renders the Fermi-LAT data not sensitive enough to constrain the parameter space of the emission models. In the cases of WR 140 and WR 147, the Fermi-LAT upper limits appear to rule out some model predictions entirely and constrain theoretical models over a significant parameter space. A comparison of our findings to the CWB η Car is made.

  8. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid and exhibits good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
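
    The weighted-sum construction is simple to state in code: the sketch below combines two toy objective surfaces standing in for the hydraulic-loss and cavitation computations, and shows how the weight factors steer the optimum; the functions, starting point and bounds are all illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def hydraulic_loss(x):                    # toy stand-in; x: blade-geometry parameters
        return (x[0] - 1.0)**2 + 0.5 * (x[1] - 2.0)**2

    def cavitation_coeff(x):                  # toy stand-in for the second objective
        return 0.2 * (x[0] - 2.0)**2 + (x[1] - 1.0)**2

    def objective(x, w1, w2):
        """Comprehensive objective: weighted sum of the two criteria."""
        return w1 * hydraulic_loss(x) + w2 * cavitation_coeff(x)

    for w1, w2 in [(0.8, 0.2), (0.5, 0.5), (0.2, 0.8)]:
        res = minimize(objective, x0=[0.0, 0.0], args=(w1, w2),
                       bounds=[(-5, 5), (-5, 5)])
        print(w1, w2, np.round(res.x, 3))     # the weights steer the design trade-off
    ```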

  9. The importance of the initial water depth in basin modelling: the example of the Venetian foredeep (NE Italy)

    NASA Astrophysics Data System (ADS)

    Barbieri, C.; Mancin, N.

    2003-04-01

    The Tertiary evolution of the Venetian area (NE Italy) led to the superposition of three overlapping foreland systems, different in both age and polarity, as a consequence of the main orogenic phases of the Dinarides, to the north-east, the Southern Alps, to the north, and the Apennines, to the south-west, respectively. The aim of this work is to quantify the flexural effect produced by the Southalpine main orogenic phases (Serravallian-Early Pliocene) in the Venetian foredeep, and particularly to evaluate the importance of a well-constrained initial water depth for correctly evaluating the contribution of surface loads to flexure. To this end, a 2-D flexural modelling has been applied along a N-S trending industrial seismic line (courtesy of ENI-AGIP) extending from the Northern Alps to the Adriatic sea. Once interpreted and depth migrated, the geometries of the sedimentary bodies have been studied and the base of the foredeep wedge, Serravallian-Tortonian in age, related to the Southern Alps load, has been recognized. Water depth variations during Miocene time have been constrained on three wells located along this section. According to bathymetric reconstructions, based on the quantitative study of foraminiferal assemblages, an overall neritic environment (0-200 m), developed during Langhian time, was followed by a fast deepening to bathyal conditions (200-600 m) to the north, toward the Southern Alps, during Serravallian-Tortonian time, whereas neritic conditions still persisted to the south. According to these constraints, a best-fit model was obtained for an Effective Elastic Thickness value of about 20 km and a belt topography equal to the present-day one. The extremely good fit of the model to reality highlights that, in the studied region, flexure related to the Southern Alps is fully due to surface loads (topographic load and initial water depth), and no subloads are required to improve the fit, unlike a previously proposed model. Such a difference can be due to both the better constraining of the bathymetric parameter and the improvement of geophysical and geological data. A test was also performed to evaluate the actual influence of the bathymetric parameter on the flexural response of the crust by modelling conditions with maximum, minimum and zero initial water depth, respectively. Results show that this parameter can contribute up to 50% of the total flexure in the studied region.
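
    For scale, the quantities involved follow from textbook plate-flexure formulas; the sketch below computes the flexural rigidity for an effective elastic thickness of 20 km and the deflection profile of an unbroken elastic plate under a 2-D line load, with standard material constants and an illustrative load magnitude.

    ```python
    import numpy as np

    E, nu, g = 1.0e11, 0.25, 9.81          # Young's modulus (Pa), Poisson ratio, gravity
    rho_m, rho_infill = 3300.0, 2400.0     # mantle and basin-fill densities (kg/m^3)
    Te = 20e3                              # effective elastic thickness (m)

    D = E * Te**3 / (12 * (1 - nu**2))     # flexural rigidity (N m)
    alpha = (4 * D / ((rho_m - rho_infill) * g))**0.25   # flexural parameter (m)

    V = 5e12                               # line load (N per m along strike), illustrative
    w0 = V * alpha**3 / (8 * D)            # maximum deflection beneath the load
    x = np.linspace(0, 300e3, 7)           # distance from the load (m)
    w = w0 * np.exp(-x / alpha) * (np.cos(x / alpha) + np.sin(x / alpha))

    print(f"D = {D:.2e} N m, alpha = {alpha / 1e3:.0f} km")
    print(np.round(w, 1))                  # deflection (m) vs distance from load
    ```

    Replacing a water column (~1000 kg/m³) with sediment changes the effective infill density in this calculation, which is why a well-constrained initial water depth changes the computed contribution of the surface loads.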

  10. Patchy screening of the cosmic microwave background by inhomogeneous reionization

    NASA Astrophysics Data System (ADS)

    Gluscevic, Vera; Kamionkowski, Marc; Hanson, Duncan

    2013-02-01

    We derive a constraint on patchy screening of the cosmic microwave background from inhomogeneous reionization using off-diagonal TB and TT correlations in WMAP-7 temperature/polarization data. We interpret this as a constraint on the rms optical-depth fluctuation Δτ as a function of a coherence multipole L_C. We relate these parameters to a comoving coherence scale, or bubble size, R_C, in a phenomenological model where reionization is instantaneous but occurs on a crinkly surface, and also to the bubble size in a model of “Swiss cheese” reionization where bubbles of fixed size are spread over some range of redshifts. The current WMAP data are still too weak, by several orders of magnitude, to constrain reasonable models, but forthcoming Planck and future EPIC data should begin to approach interesting regimes of parameter space. We also present constraints on the parameter space imposed by the recent results from the EDGES experiment.

  11. Study of constrained minimal supersymmetry

    NASA Astrophysics Data System (ADS)

    Kane, G. L.; Kolda, Chris; Roszkowski, Leszek; Wells, James D.

    1994-06-01

    Taking seriously the phenomenological indications for supersymmetry we have made a detailed study of unified minimal SUSY, including many effects at the few percent level in a consistent fashion. We report here a general analysis of what can be studied without choosing a particular gauge group at the unification scale. Firstly, we find that the encouraging SUSY unification results of recent years do survive the challenge of a more complete and accurate analysis. Taking into account effects at the 5-10% level leads to several improvements of previous results and allows us to sharpen our predictions for SUSY in the light of unification. We perform a thorough study of the parameter space and look for patterns to indicate SUSY predictions, so that they do not depend on arbitrary choices of some parameters or untested assumptions. Our results can be viewed as a fully constrained minimal SUSY standard model. The resulting model forms a well-defined basis for comparing the physics potential of different facilities. Very little of the acceptable parameter space has been excluded by CERN LEP or Fermilab so far, but a significant fraction can be covered when these accelerators are upgraded. A number of initial applications to the understanding of the values of m_h and m_t, the SUSY spectrum, detectability of SUSY at LEP II or Fermilab, B(b → sγ), Γ(Z → bb̄), dark matter, etc., are included in a separate section that might be of more interest to some readers than the technical aspects of model building. We formulate an approach to extracting SUSY parameters from data when superpartners are detected. For small tanβ or large m_t both m_1/2 and m_0 are entirely bounded from above at ~1 TeV without having to use a fine-tuning constraint.

  12. Observational constraints on variable equation of state parameters of dark matter and dark energy after Planck

    NASA Astrophysics Data System (ADS)

    Kumar, Suresh; Xu, Lixin

    2014-10-01

    In this paper, we study a cosmological model in general relativity within the framework of spatially flat Friedmann-Robertson-Walker space-time filled with ordinary matter (baryonic), radiation, dark matter and dark energy, where the latter two components are described by Chevallier-Polarski-Linder equation of state parameters. We utilize the observational data sets from SNLS3, BAO and Planck + WMAP9 + WiggleZ measurements of the matter power spectrum to constrain the model parameters. We find that the current observational data offer tight constraints on the equation of state parameter of dark matter. We consider the perturbations and study the behavior of dark matter by observing its effects on the CMB and matter power spectra. We find that the current observational data favor the cold dark matter scenario with the cosmological constant type dark energy at the present epoch.

  13. Optimizing future imaging survey of galaxies to confront dark energy and modified gravity models

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kazuhiro; Parkinson, David; Hamana, Takashi; Nichol, Robert C.; Suto, Yasushi

    2007-07-01

    We consider the extent to which future imaging surveys of galaxies can distinguish between dark energy and modified gravity models for the origin of the cosmic acceleration. Dynamical dark energy models may have similar expansion rates as models of modified gravity, yet predict different growth of structure histories. We parametrize the cosmic expansion by the two parameters, w_0 and w_a, and the linear growth rate of density fluctuations by Linder's γ, independently. Dark energy models generically predict γ≈0.55, while the Dvali-Gabadadze-Porrati (DGP) model predicts γ≈0.68. To determine if future imaging surveys can constrain γ within 20% (or Δγ<0.1), we perform the Fisher matrix analysis for a weak-lensing survey such as the ongoing Hyper Suprime-Cam (HSC) project. Under the condition that the total observation time is fixed, we compute the figure of merit (FoM) as a function of the exposure time t_exp. We find that the tomography technique effectively improves the FoM, which has a broad peak around t_exp ≃ several tens of minutes; a shallow and wide survey is preferred to constrain the γ parameter. While Δγ<0.1 cannot be achieved by the HSC weak-lensing survey alone, one can improve the constraints by combining with a follow-up spectroscopic survey like the Wide-field Fiber-fed Multi-Object Spectrograph (WFMOS) and/or future cosmic microwave background (CMB) observations.
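
    A Fisher forecast of this kind takes only a handful of lines: given a model for the observable and assumed Gaussian errors, F = Jᵀ C⁻¹ J bounds the parameter covariance. The sketch below uses the growth rate f(z) = Ω_m(a)^γ as a toy observable in place of the lensing power spectra; the redshifts and error bars are illustrative assumptions.

    ```python
    import numpy as np

    def growth_rate(theta, z):
        """Toy observable: f(z) = Omega_m(a)^gamma with CPL dark energy (w0, wa)."""
        w0, wa, gamma = theta
        om0 = 0.3
        a = 1.0 / (1.0 + z)
        e2 = (om0 * a**-3
              + (1 - om0) * a**(-3 * (1 + w0 + wa)) * np.exp(-3 * wa * (1 - a)))
        return (om0 * a**-3 / e2)**gamma

    z = np.linspace(0.2, 1.5, 10)
    theta0 = np.array([-1.0, 0.0, 0.55])       # fiducial (w0, wa, gamma)
    sigma = 0.02 * np.ones_like(z)             # assumed measurement errors

    # Numerical Jacobian by central differences
    eps = 1e-5
    J = np.array([(growth_rate(theta0 + eps * np.eye(3)[i], z)
                   - growth_rate(theta0 - eps * np.eye(3)[i], z)) / (2 * eps)
                  for i in range(3)]).T

    F = J.T @ (J / sigma[:, None]**2)          # Fisher matrix
    cov = np.linalg.inv(F)
    print("1-sigma errors on (w0, wa, gamma):", np.sqrt(np.diag(cov)))
    print("FoM:", 1.0 / np.sqrt(np.linalg.det(cov)))
    ```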

  14. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    NASA Astrophysics Data System (ADS)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (the Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights about the representativeness of estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.

  15. Cosmic shear results from the deep lens survey. II. Full cosmological parameter constraints from tomography

    DOE PAGES

    Jee, M. James; Tyson, J. Anthony; Hilbert, Stefan; ...

    2016-06-15

    Here, we present a tomographic cosmic shear study from the Deep Lens Survey (DLS), which, providing a limiting magnitude r_lim ~ 27 (5σ), is designed as a precursor Large Synoptic Survey Telescope (LSST) survey with an emphasis on depth. Using five tomographic redshift bins, we study their auto- and cross-correlations to constrain cosmological parameters. We use a luminosity-dependent nonlinear model to account for the astrophysical systematics originating from intrinsic alignments of galaxy shapes. We find that the cosmological leverage of the DLS is among the highest among existing >10 deg² cosmic shear surveys. Combining the DLS tomography with the 9 yr results of the Wilkinson Microwave Anisotropy Probe (WMAP9) gives Ω_m = 0.293^{+0.012}_{-0.014}, σ_8 = 0.833^{+0.011}_{-0.018}, H_0 = 68.6^{+1.4}_{-1.2} km s⁻¹ Mpc⁻¹, and Ω_b = 0.0475 ± 0.0012 for ΛCDM, reducing the uncertainties of the WMAP9-only constraints by ~50%. When we do not assume flatness for ΛCDM, we obtain the curvature constraint Ω_k = -0.010^{+0.013}_{-0.015} from the DLS+WMAP9 combination, which, however, is not well constrained when WMAP9 is used alone. The dark energy equation-of-state parameter w is tightly constrained when baryonic acoustic oscillation (BAO) data are added, yielding w = -1.02^{+0.10}_{-0.09} with the DLS+WMAP9+BAO joint probe. The addition of supernova constraints further tightens the parameter to w = -1.03 ± 0.03. Our joint constraints are fully consistent with the final Planck results and also with the predictions of a ΛCDM universe.

  16. Mapping the Solar Wind from its Source Region into the Outer Corona

    NASA Technical Reports Server (NTRS)

    Esser, Ruth

    1997-01-01

    Knowledge of the radial variation of the plasma conditions in the coronal source region of the solar wind is essential to exploring coronal heating and solar wind acceleration mechanisms. The goal of the proposal was to determine as many plasma parameters in the solar wind acceleration region and beyond as possible by coordinating different observational techniques, such as Interplanetary Scintillation Observations, spectral line intensity observations, polarization brightness measurements and X-ray observations. The inferred plasma parameters were then used to constrain solar wind models.

  17. Suspension parameter estimation in the frequency domain using a matrix inversion approach

    NASA Astrophysics Data System (ADS)

    Thite, A. N.; Banvidi, S.; Ibicek, T.; Bennett, L.

    2011-12-01

    The dynamic lumped parameter models used to optimise the ride and handling of a vehicle require base values of the suspension parameters. These parameters are generally identified experimentally. The accuracy of the identified parameters can depend on the measurement noise and the validity of the model used. The existing publications on suspension parameter identification are generally based on the time domain and use a limited number of degrees of freedom. Further, the data used are either from a simulated 'experiment' or from a laboratory test on an idealised quarter- or half-car model. In this paper, a method is developed in the frequency domain which effectively accounts for the measurement noise. Additional dynamic constraining equations are incorporated and the proposed formulation results in a matrix inversion approach. The nonlinearities in damping are, however, estimated using a time-domain approach. Full-scale 4-post rig test data of a vehicle are used. The variations in the results are discussed using the modal resonant behaviour. Further, a method is implemented to show how the results can be improved when the matrix inverted is ill-conditioned. The case study shows a good agreement between the estimates based on the proposed frequency-domain approach and measurable physical parameters.
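
    When the stacked frequency-domain equations A p = b are ill-conditioned, one standard remedy is a truncated-SVD pseudo-inverse that discards weakly observable parameter combinations; the sketch below contrasts it with a direct least-squares solve on a deliberately near-singular system, with random stand-ins for the measured frequency-response quantities and illustrative stiffness/damping-like values.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(12, 6))                     # stacked equations of motion
    A[:, 5] = A[:, 4] + 1e-8 * rng.normal(size=12)   # two nearly dependent columns
    p_true = np.array([2.0e4, 1.5e3, 800.0, 120.0, 50.0, 50.0])
    b = A @ p_true + rng.normal(0, 1e-3, 12)         # measurements with noise

    def tsvd_solve(A, b, rcond=1e-6):
        """Pseudo-inverse solution that drops tiny singular values."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = s > rcond * s[0]
        return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

    print("condition number:", np.linalg.cond(A))
    print("direct lstsq:   ", np.round(np.linalg.lstsq(A, b, rcond=None)[0], 1))
    print("truncated SVD:  ", np.round(tsvd_solve(A, b), 1))
    ```

    The direct solution splits the two indistinguishable parameters wildly under noise, while the truncated solution returns the stable minimum-norm split, mirroring how conditioning is handled in the identification.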

  18. Extending semi-numeric reionization models to the first stars and galaxies

    NASA Astrophysics Data System (ADS)

    Koh, Daegene; Wise, John H.

    2018-03-01

    Semi-numeric methods have made it possible to efficiently model the epoch of reionization (EoR). While most implementations involve a reduction to a simple three-parameter model, we introduce a new mass-dependent ionizing efficiency parameter that folds in physical parameters that are constrained by the latest numerical simulations. This new parametrization enables the effective modelling of a broad range of host halo masses containing ionizing sources, extending from the smallest Population III host haloes with M ~ 10⁶ M⊙, which are often ignored, to the rarest cosmic peaks with M ~ 10¹² M⊙ during the EoR. We compare the resulting ionizing histories with a typical three-parameter model and also compare with the latest constraints from the Planck mission. Our model results in an optical depth due to Thomson scattering, τ_e = 0.057, that is consistent with Planck. The largest difference in our model is shown in the resulting bubble size distributions, which peak at lower characteristic sizes and are broadened. We also consider the uncertainties of the various physical parameters, and comparing the resulting ionizing histories broadly disfavours a small contribution from galaxies. The smallest haloes cease to contribute meaningfully to the ionizing photon budget after z = 10, implying that they play a role in determining the start of the EoR and little else.

  19. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from an equally distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R²), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape parameter of the retention curve n was highly constrained whilst other parameters of the retention curve showed a large equifinality. The root and storage dry matter observations were predicted with an NSE of 0.94, a low bias of 58.2 kg ha⁻¹ and a high R² of 0.98. Dry matter of stems and leaves was predicted with less, but still high, accuracy (NSE = 0.79, bias = 221.7 kg ha⁻¹, R² = 0.87). We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the plant growth model were the radiation use efficiency and the base temperature. Cross-validation helped to identify deficits in the model structure, pointing out the need to include agricultural management options in the coupled model.
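
    The GLUE recipe itself is short: sample the parameter space uniformly, score each run with an objective function such as NSE, keep runs above a behavioural threshold, and weight them by likelihood. The one-parameter decay "model", synthetic observations and acceptance threshold below are illustrative, not the CMF-PMF setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.linspace(0, 10, 50)
    obs = np.exp(-0.3 * t) + rng.normal(0, 0.02, t.size)   # synthetic "soil moisture"

    def model(k):
        return np.exp(-k * t)                              # one-parameter toy model

    def nse(sim, obs):
        """Nash-Sutcliffe efficiency."""
        return 1 - np.sum((sim - obs)**2) / np.sum((obs - obs.mean())**2)

    k_samples = rng.uniform(0.01, 1.0, 20000)              # equally distributed prior
    scores = np.array([nse(model(k), obs) for k in k_samples])

    behavioural = scores > 0.8                             # GLUE acceptance threshold
    k_b, w = k_samples[behavioural], scores[behavioural]
    w = w / w.sum()                                        # normalised likelihood weights

    print("behavioural runs:", behavioural.sum())
    print("weighted k estimate:", np.sum(w * k_b))
    sims = np.array([model(k) for k in k_b])
    lo, hi = np.percentile(sims, [5, 95], axis=0)          # 90% uncertainty band
    print(lo[:3], hi[:3])
    ```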

  20. Monte Carlo based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2013-12-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from an equally distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R²), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape parameter of the retention curve n was highly constrained whilst other parameters of the retention curve showed a large equifinality. The root and storage dry matter observations were predicted with an NSE of 0.94, a low bias of -58.2 kg ha⁻¹ and a high R² of 0.98. Dry matter of stems and leaves was predicted with less, but still high, accuracy (NSE = 0.79, bias = 221.7 kg ha⁻¹, R² = 0.87). We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the plant growth model were the radiation use efficiency and the base temperature. Cross-validation helped to identify deficits in the model structure, pointing out the need to include agricultural management options in the coupled model.

  1. High-Contrast Near-Infrared Imaging Polarimetry of the Protoplanetary Disk around RY Tau

    NASA Technical Reports Server (NTRS)

    Takami, Michihiro; Karr, Jennifer L.; Hashimoto, Jun; Kim, Hyosun; Wisniewski, John; Henning, Thomas; Grady, Carol; Kandori, Ryo; Hodapp, Klaus W.; Kudo, Tomoyuki

    2013-01-01

    We present near-infrared coronagraphic imaging polarimetry of RY Tau. The scattered light in the circumstellar environment was imaged in the H band at high resolution (approx. 0.05 arcsec) for the first time, using Subaru-HiCIAO. The observed polarized intensity (PI) distribution shows a butterfly-like distribution of bright emission with an angular scale similar to the disk observed at millimeter wavelengths. This distribution is offset toward the blueshifted jet, indicating the presence of a geometrically thick disk or a remnant envelope, and therefore the earliest stage of the Class II evolutionary phase. We perform comparisons between the observed PI distribution and disk models with (1) a full radiative transfer code, using the spectral energy distribution (SED) to constrain the disk parameters; and (2) monochromatic simulations of scattered light which explore a wide range of parameter space to constrain the disk and dust parameters. We show that these models cannot consistently explain the observed PI distribution, SED, and the viewing angle inferred by millimeter interferometry. We suggest that the scattered light in the near-infrared is associated with an optically thin and geometrically thick layer above the disk surface, with the surface responsible for the infrared SED. Half of the scattered light and thermal radiation in this layer illuminates the disk surface, and this process may significantly affect the thermal structure of the disk.

  2. Constraining f(R) gravity in solar system, cosmology and binary pulsar systems

    NASA Astrophysics Data System (ADS)

    Liu, Tan; Zhang, Xing; Zhao, Wen

    2018-02-01

    f(R) gravity can be cast into the form of a scalar-tensor theory, and its scalar degree of freedom can be suppressed in high-density regions by the chameleon mechanism. In this article, for general f(R) gravity, using a scalar-tensor representation with the chameleon mechanism, we calculate the parametrized post-Newtonian parameters γ and β, the effective gravitational constant G_eff, and the effective cosmological constant Λ_eff. In addition, for general f(R) gravity, we also calculate the rate of orbital period decay of a binary system due to gravitational radiation. We then apply these results to specific f(R) models (the Hu-Sawicki model, the Tsujikawa model and the Starobinsky model) and derive constraints on the model parameters by combining observations in the solar system, on cosmological scales and in binary pulsar systems.

  3. Retrieval of ammonia abundances and cloud opacities on Jupiter from Voyager IRIS spectra

    NASA Technical Reports Server (NTRS)

    Conrath, B. J.; Gierasch, P. J.

    1986-01-01

    Gaseous ammonia abundances and cloud opacities are retrieved from Voyager IRIS 5- and 45-micron data on the basis of a simplified atmospheric model and a two-stream radiative transfer approximation, assuming a single cloud layer with 680-mbar base pressure and 0.14 gas scale height. Brightness temperature measurements obtained as a function of emission angle from selected planetary locations are used to verify the model and constrain a number of its parameters.

  4. Constraining the inclination of the Low-Mass X-ray Binary Cen X-4

    NASA Astrophysics Data System (ADS)

    Hammerstein, Erica K.; Cackett, Edward M.; Reynolds, Mark T.; Miller, Jon M.

    2018-05-01

    We present the results of ellipsoidal light-curve modeling of the low-mass X-ray binary Cen X-4 in order to constrain the inclination of the system and the mass of the neutron star. Near-IR photometric monitoring was performed in May 2008 over a period of three nights at Magellan using PANIC. We obtain J, H and K light curves of Cen X-4 using differential photometry. An ellipsoidal modeling code was used to fit the phase-folded light curves. The light-curve fit that makes the fewest assumptions about the properties of the binary system yields an inclination of 34.9^{+4.9}_{-3.6} degrees (1σ), which is consistent with previous determinations of the system's inclination but with improved statistical uncertainties. When combined with the mass function and mass ratio, this inclination yields a neutron star mass of 1.51^{+0.40}_{-0.55} M⊙. This model allows accretion disk parameters to be free in the fitting process. Fits that do not allow for an accretion disk component in the near-IR flux give a systematically lower inclination between approximately 33 and 34 degrees, leading to a higher-mass neutron star between approximately 1.7 M⊙ and 1.8 M⊙. We discuss the implications of other assumptions made during the modeling process as well as the numerous free parameters and their effects on the resulting inclination.
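
    The final conversion is a one-line formula, M_NS = f (1 + q)² / sin³ i, where f is the binary mass function and q the mass ratio; the values of f and q below are illustrative numbers close to those reported in the literature for Cen X-4, used here only to show the steep sensitivity of the mass to inclination.

    ```python
    import numpy as np

    f_mass = 0.201      # binary mass function in solar masses (assumed value)
    q = 0.17            # companion-to-neutron-star mass ratio (assumed value)

    def ns_mass(i_deg):
        """Neutron-star mass from the mass function and orbital inclination."""
        return f_mass * (1 + q)**2 / np.sin(np.radians(i_deg))**3

    for i in (31.3, 34.9, 39.8):   # -1 sigma, best fit, +1 sigma inclination
        print(f"i = {i:4.1f} deg -> M_ns = {ns_mass(i):.2f} Msun")
    ```

    The sin³ i factor is why a few degrees of inclination uncertainty translates into several tenths of a solar mass.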

  5. Strong constraint on modelled global carbon uptake using solar-induced chlorophyll fluorescence data.

    PubMed

    MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias

    2018-01-31

    Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity - GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise GPP parameters of the ORCHIDEE TBM. The optimisation reduces GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgC yr⁻¹ to 166 ± 10 PgC yr⁻¹, bringing the model more in line with an up-scaled flux tower estimate of 133 PgC yr⁻¹. The strongest reductions in GPP are seen in boreal forests: the result is a shift in the global GPP distribution, with a ~50% increase in the tropical-to-boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO₂ fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.

  6. Measurements and Modeling of Stress in Precipitation-Hardened Aluminum Alloy AA2618 during Gleeble Interrupted Quenching and Constrained Cooling

    NASA Astrophysics Data System (ADS)

    Chobaut, Nicolas; Carron, Denis; Saelzle, Peter; Drezet, Jean-Marie

    2016-11-01

    Solutionizing and quenching are the key steps in the fabrication of heat-treatable aluminum parts such as AA2618 compressor impellers for turbochargers, as they strongly influence the mechanical characteristics of the product. In particular, quenching induces residual stresses that can cause unacceptable distortions during machining and unfavorable stresses in service. Predicting and controlling stress generation during quenching of large AA2618 forgings is therefore of particular interest. Since possible precipitation during quenching may affect the local yield strength of the material and thus impact the level of macroscale residual stresses, consideration of this phenomenon is required. A material model accounting for precipitation in a simple but realistic way is presented. Instead of modeling the precipitation that occurs during quenching, the model parameters are identified using a limited number of tensile tests performed after representative interrupted cooling paths in a Gleeble machine. This material model is presented, calibrated, and validated against constrained cooling experiments in a Gleeble blocked-jaws configuration. Applications of this model are FE computations of stress generation during quenching of large AA2618 forgings for compressor impellers.

  7. PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata

    We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of the output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramér-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
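
    The quantity being constrained is easy to compute: the sketch below builds an oversampled OFDM waveform from random QPSK subcarrier weights via an IFFT and evaluates its PAPR; the subcarrier count and oversampling factor are illustrative, not the paper's design.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_sub, oversample = 64, 4
    # Random QPSK weights on the subcarriers
    symbols = (rng.choice([1, -1], n_sub)
               + 1j * rng.choice([1, -1], n_sub)) / np.sqrt(2)

    # Zero-pad the spectrum to oversample the time-domain waveform
    spectrum = np.zeros(n_sub * oversample, dtype=complex)
    spectrum[:n_sub // 2] = symbols[:n_sub // 2]
    spectrum[-n_sub // 2:] = symbols[n_sub // 2:]
    x = np.fft.ifft(spectrum) * np.sqrt(n_sub * oversample)

    # PAPR = peak instantaneous power over mean power
    papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
    print(f"PAPR = {papr_db:.2f} dB")   # a PAPR-constrained design caps this value
    ```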

  8. Orbit determination of the Next-Generation Beidou satellites with Intersatellite link measurements and a priori orbit constraints

    NASA Astrophysics Data System (ADS)

    Ren, Xia; Yang, Yuanxi; Zhu, Jun; Xu, Tianhe

    2017-11-01

    Intersatellite Link (ISL) technology helps to realize autonomous updating of broadcast ephemeris and clock error parameters for a Global Navigation Satellite System (GNSS). ISLs constitute an important approach with which to both improve the observation geometry and extend the tracking coverage of China's Beidou Navigation Satellite System (BDS). However, ISL-only orbit determination can lead to constellation drift and rotation, and can even lead to divergence in the orbit determination. Fortunately, predicted orbits with good precision can be used as a priori information with which to constrain the estimated satellite orbit parameters. Therefore, the precision of satellite autonomous orbit determination can be improved by consideration of a priori orbit information, and vice versa. However, the errors of rotation and translation in the a priori orbit will remain in the ultimate result. This paper proposes a constrained precise orbit determination (POD) method for a sub-constellation of the new Beidou satellite constellation with only a few ISLs. The observation model of dual one-way measurements eliminating satellite clock errors is presented, and the orbit determination precision is analyzed with different data processing backgrounds. The conclusions are as follows. (1) With ISLs, the estimated parameters are strongly correlated, especially the positions and velocities of satellites. (2) The performance of the determined BDS orbits is improved by constraints with more precise a priori orbits. The POD precision is better than 45 m with an a priori orbit constraint of 100 m precision (e.g., orbits predicted by the telemetry, tracking and control system), and is better than 6 m with precise a priori orbit constraints of 10 m precision (e.g., orbits predicted by the international GNSS Monitoring and Assessment System (iGMAS)). (3) The POD precision is improved by additional ISLs. Constrained by a priori iGMAS orbits, the POD precision with two, three, and four ISLs is better than 6, 3, and 2 m, respectively. (4) The in-plane and out-of-plane links make different contributions to the observation configuration and system observability. POD with a weak observation configuration (e.g., one in-plane link and one out-of-plane link) should be tightly constrained with a priori orbits.

  9. Conformationally constrained farnesoid X receptor (FXR) agonists: Naphthoic acid-based analogs of GW 4064.

    PubMed

    Akwabi-Ameyaw, Adwoa; Bass, Jonathan Y; Caldwell, Richard D; Caravella, Justin A; Chen, Lihong; Creech, Katrina L; Deaton, David N; Jones, Stacey A; Kaldor, Istvan; Liu, Yaping; Madauss, Kevin P; Marr, Harry B; McFadyen, Robert B; Miller, Aaron B; Navas, Frank; Parks, Derek J; Spearing, Paul K; Todd, Dan; Williams, Shawn P; Wisely, G Bruce

    2008-08-01

    Starting from the known FXR agonist GW 4064 1a, a series of stilbene replacements were prepared. The 6-substituted 1-naphthoic acid 1b was an equipotent FXR agonist with improved developability parameters relative to 1a. Analog 1b also reduced the severity of cholestasis in the ANIT acute cholestatic rat model.

  10. A coupled stochastic inverse-management framework for dealing with nonpoint agriculture pollution under groundwater parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.

    2014-04-01

    In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty; thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, by those given by water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows providing the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process under an uncertainty environment. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.

  11. Effective theory of flavor for Minimal Mirror Twin Higgs

    DOE PAGES

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2017-10-03

    We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ϵ^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ϵ'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ϵ'/ϵ, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ϵ'/ϵ, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. Lastly, in each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.

  12. Accuracy of inference on the physics of binary evolution from gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya

    2018-04-01

    The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.

  13. Accuracy of inference on the physics of binary evolution from gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya

    2018-07-01

    The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.

  14. A pitfall of piecewise-polytropic equation of state inference

    NASA Astrophysics Data System (ADS)

    Raaijmakers, Geert; Riley, Thomas E.; Watts, Anna L.

    2018-05-01

    The only messenger radiation in the Universe which one can use to statistically probe the Equation of State (EOS) of cold dense matter is that originating from the near-field vicinities of compact stars. Constraining the gravitational masses and equatorial radii of rotating compact stars is a major goal for current and future telescope missions, with a primary purpose of constraining the EOS. From a Bayesian perspective it is necessary to carefully discuss prior definition; in this context a complicating issue is that in practice there exist pathologies in the general relativistic mapping between spaces of local (interior source matter) and global (exterior spacetime) parameters. In a companion paper, these issues were raised on a theoretical basis. In this study we reproduce a probability transformation procedure from the literature in order to map a joint posterior distribution of Schwarzschild gravitational masses and radii into a joint posterior distribution of EOS parameters. We demonstrate computationally that EOS parameter inferences are sensitive to the choice to define a prior on a joint space of these masses and radii, instead of on a joint space of interior source matter parameters. We focus on the piecewise-polytropic EOS model, which is currently standard in the field of astrophysical dense matter study. We discuss the implications of this issue for the field.

  15. Evaluation of deep moonquake source parameters: Implication for fault characteristics and thermal state

    NASA Astrophysics Data System (ADS)

    Kawamura, Taichi; Lognonné, Philippe; Nishikawa, Yasuhiro; Tanaka, Satoshi

    2017-07-01

    While deep moonquakes are seismic events commonly observed on the Moon, their source mechanism is still unexplained. The two main issues are poorly constrained source parameters and incompatibilities between the thermal profiles suggested by many studies and the apparent need for brittle properties at these depths. In this study, we reinvestigated the deep moonquake data to reestimate their source parameters and uncover the characteristics of deep moonquake faults that differ from those on Earth. We first improve the estimation of source parameters through spectral analysis using "new" broadband seismic records made by combining those of the Apollo long- and short-period seismometers. We use the broader frequency band of the combined spectra to estimate corner frequencies and DC values of spectra, which are important parameters for constraining the source parameters. We further use the spectral features to estimate seismic moments and stress drops for more than 100 deep moonquake events from three different source regions. This study revealed that deep moonquake faults are extremely smooth compared to terrestrial faults. Second, we reevaluate the brittle-ductile transition temperature that is consistent with the obtained source parameters. We show that the source parameters imply that tidal stress is the main source of the stress glut causing deep moonquakes and that the large strain rate from tides raises the brittle-ductile transition temperature. Higher transition temperatures open a new possibility to construct a thermal model that is consistent with deep moonquake occurrence and pressure conditions, and thereby improve our understanding of the deep moonquake source mechanism.

  16. Modelling Thermal Emission to Constrain Io's Largest Eruptions

    NASA Astrophysics Data System (ADS)

    Davies, A. G.; De Pater, I.; de Kleer, K.; Head, J. W., III; Wilson, L.

    2016-12-01

    Massive, voluminous, low-silica content basalt lava flows played a major role in shaping the surfaces of the terrestrial planets and the Moon [1], but the mechanisms of eruption, including effusion rate profiles and flow regime, are often obscure. However, eruptions of large volumes of lava and the emplacement of thick, areally extensive silicate lava flows are extant on the volcanic jovian moon Io [2], thus providing a template for understanding how these processes behaved elsewhere in the Solar System. We have modelled data from the largest of these eruptions to constrain eruption processes from the evolution of the wavelength variation of the resulting thermal emission [3]. We continue to refine our models to further constrain eruption parameters. We focus on large "outburst" eruptions, large lava fountains feeding lava flows [4], which have been directly observed on Io by the Galileo spacecraft [5, 6]. Outburst data continue to be collected by large ground-based telescopes [7, 8]. These data have been fitted with a sophisticated thermal emission model to derive eruption parameters such as areal coverage and effusion rates. We have created a number of tools for investigating and constraining effusion rate for Io's largest eruptions. It remains for all of the components to be integrated into a single model with rheological properties dependent on flow regime and the effects of heat loss. The crucial advance on previous estimates of lava flow emplacement on Io [e.g., 5] is that, by keeping track of the temperature distribution on the surface of the lava flows (a function of flow regime and varying effusion rate), the integrated thermal emission spectrum can be synthesized. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA. We thank the NASA OPR Program (NNN13D466T) and NSF (Grant AST-1313485) for support. Refs: [1] Wilson, L. and J. W. Head (2016), Icarus, doi:10.1016/j.icarus.2015.12.039. [2] Davies, A. (2007) Volcanism on Io, Cambridge. [3] Davies, A. et al. (2010) JGR, 194, 75-99. [4] Davies, A. (1996) Icarus, 124, 45-61. [5] Keszthelyi, L. et al., (2001) JGR, 106, 33025-33052. [6] Williams, D. et al. (2001) JGR, 106, 33105-33120. [7] dePater, I. et al. (2014) Icarus, 242, 365-378. [8] de Kleer, K. et al. (2014) Icarus, 242, 352-364.

  17. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, Nina; Jupp, Tim; Cox, Peter; Luke, Catherine

    2016-04-01

    Land-surface models (LSMs) are crucial components of the Earth System Models (ESMs) which are used to make coupled climate-carbon cycle projections for the 21st century. The Joint UK Land Environment Simulator (JULES) is the land-surface model used in the climate and weather forecast models of the UK Met Office. In this study, JULES is automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or adjoint, of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. We present an introduction to the adJULES system and demonstrate its ability to improve the model-data fit using eddy covariance measurements of gross primary production (GPP) and latent heat (LE) fluxes. adJULES also has the ability to calibrate over multiple sites simultaneously. This feature is used to define new optimised parameter values for the 5 Plant Functional Types (PFTs) in JULES. The optimised PFT-specific parameters improve the performance of JULES over 90% of the FLUXNET sites used in the study. These reductions in error are shown and compared to reductions found due to site-specific optimisations. Finally, we show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and to examine how knowledge of parameter values is constrained by observations.

  18. A LEAST ABSOLUTE SHRINKAGE AND SELECTION OPERATOR (LASSO) FOR NONLINEAR SYSTEM IDENTIFICATION

    NASA Technical Reports Server (NTRS)

    Kukreja, Sunil L.; Lofberg, Johan; Brenner, Martin J.

    2006-01-01

    Identification of parametric nonlinear models involves estimating unknown parameters and detecting the underlying structure. Structure computation is concerned with selecting a subset of parameters to give a parsimonious description of the system, which may afford greater insight into the functionality of the system or a simpler controller design. In this study, a least absolute shrinkage and selection operator (LASSO) technique is investigated for computing efficient model descriptions of nonlinear systems. The LASSO minimises the residual sum of squares by adding an ℓ1 penalty term on the parameter vector to the traditional ℓ2 minimisation problem. Its use for structure detection is a natural extension of this constrained minimisation approach to pseudolinear regression problems, which produces some model parameters that are exactly zero and, therefore, yields a parsimonious system description. The performance of this LASSO structure detection method was evaluated by using it to estimate the structure of a nonlinear polynomial model. Applicability of the method to more complex systems such as those encountered in aerospace applications was shown by identifying a parsimonious system description of the F/A-18 Active Aeroelastic Wing using flight test data.
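
    As a rough illustration of the approach (not the authors' implementation, which poses the problem as a constrained minimisation), the scikit-learn sketch below applies an ℓ1 penalty to a dictionary of polynomial regressors; the coefficients that survive shrinkage identify the model structure. The data, sparsity pattern, and penalty weight are all invented.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

# Simulate a sparse polynomial (pseudolinear) system: only a few of the
# candidate regressors actually enter the true model.
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, size=(500, 2))                 # two inputs
X = PolynomialFeatures(degree=3, include_bias=False).fit_transform(u)
true_w = np.zeros(X.shape[1])
true_w[[0, 4, 7]] = [1.5, -2.0, 0.8]                  # true sparse structure
y = X @ true_w + 0.05 * rng.standard_normal(500)

# The l1 penalty drives most coefficients exactly to zero, so the surviving
# terms give a parsimonious description of the system.
lasso = Lasso(alpha=0.01).fit(X, y)
print("selected regressor indices:", np.flatnonzero(lasso.coef_))
```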

  19. Testing general relativity and alternative theories of gravity with space-based atomic clocks and atom interferometers

    NASA Astrophysics Data System (ADS)

    Bondarescu, Ruxandra; Schärer, Andreas; Jetzer, Philippe; Angélil, Raymond; Saha, Prasenjit; Lundgren, Andrew

    2015-05-01

    The successful miniaturisation of extremely accurate atomic clocks and atom interferometers invites prospects for satellite missions to perform precision experiments. We discuss the effects predicted by general relativity and alternative theories of gravity that can be detected by a clock orbiting the Earth. Our experiment relies on the precise tracking of the spacecraft using its observed tick-rate. The spacecraft's reconstructed four-dimensional trajectory will reveal the nature of gravitational perturbations in Earth's gravitational field, potentially differentiating between different theories of gravity. This mission can measure multiple relativistic effects during the course of a single experiment, and constrain the parametrized post-Newtonian (PPN) parameters around the Earth. A satellite carrying a clock of fractional timing inaccuracy of Δf/f ~ 10⁻¹⁶ in an elliptic orbit around the Earth would constrain the PPN parameters |β − 1|, |γ − 1| ≲ 10⁻⁶. We also briefly review potential constraints by atom interferometers on scalar-tensor theories and in particular on Chameleon and dilaton models.
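
    A back-of-the-envelope sketch of the leading effect such a mission tracks: the clock's fractional tick rate varies around an elliptic orbit through the gravitational redshift and second-order Doppler terms. The orbit parameters below are assumed purely for illustration.

```python
# Leading-order fractional tick rate of an orbiting clock relative to a
# distant observer: df/f ~ -GM/(r c^2) - v^2/(2 c^2), with the Keplerian
# vis-viva relation v^2 = GM (2/r - 1/a).
G, M, c = 6.674e-11, 5.972e24, 2.998e8      # SI units, Earth

def tick_rate_shift(r, a):
    v2 = G * M * (2.0 / r - 1.0 / a)        # vis-viva
    return -G * M / (r * c**2) - v2 / (2.0 * c**2)

a = 7.0e6                                   # semi-major axis [m], assumed
e = 0.1                                     # eccentricity, assumed
r_peri, r_apo = a * (1 - e), a * (1 + e)
delta = tick_rate_shift(r_peri, a) - tick_rate_shift(r_apo, a)
# ~ -2.6e-10 for these values: many orders of magnitude above the quoted
# clock inaccuracy of ~1e-16, which is what makes the tracking feasible.
print(f"perigee-apogee tick-rate modulation: {delta:.2e}")
```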

  20. Cosmological constraints on Brans-Dicke theory.

    PubMed

    Avilez, A; Skordis, C

    2014-07-04

    We report strong cosmological constraints on the Brans-Dicke (BD) theory of gravity using cosmic microwave background data from Planck. We consider two types of models. First, the initial condition of the scalar field is fixed to give the same effective gravitational strength Geff today as the one measured on Earth, GN. In this case, the BD parameter ω is constrained to ω>692 at the 99% confidence level, an order of magnitude improvement over previous constraints. In the second type, the initial condition for the scalar is a free parameter leading to a somewhat stronger constraint of ω>890, while Geff is constrained to 0.981

  1. Cosmological structure formation in Decaying Dark Matter models

    NASA Astrophysics Data System (ADS)

    Cheng, Dalong; Chu, M.-C.; Tang, Jiayu

    2015-07-01

    The standard cold dark matter (CDM) model predicts too many and too dense small structures. We consider an alternative model in which the dark matter undergoes two-body decays with cosmological lifetime τ into only one type of massive daughters with non-relativistic recoil velocity Vk. This decaying dark matter model (DDM) can suppress structure formation below its free-streaming scale on time scales comparable to τ. Compared with warm dark matter (WDM), DDM can better reduce the small structures while remaining consistent with high-redshift observations. We study cosmological structure formation in DDM by performing self-consistent N-body simulations and point out that cosmological simulations are necessary to understand the DDM structures, especially on non-linear scales. We propose empirical fitting functions for the DDM suppression of the mass function and the concentration-mass relation, which depend on the decay parameters lifetime τ, recoil velocity Vk and redshift. The fitting functions lead to accurate reconstruction of the non-linear power transfer function of DDM to CDM in the framework of the halo model. Using these results, we set constraints on the DDM parameter space by demanding that DDM does not induce larger suppression than the Lyman-α constrained WDM models. We further generalize and constrain the DDM models to initial conditions with non-trivial mother fractions and show that the halo model predictions are still valid after considering a global decayed fraction. Finally, we point out that the DDM is unlikely to resolve the disagreement on cluster numbers between the Planck primary CMB prediction and the Sunyaev-Zeldovich (SZ) effect number count for τ ~ H₀⁻¹.

  2. An enhanced beam model for constrained layer damping and a parameter study of damping contribution

    NASA Astrophysics Data System (ADS)

    Xie, Zhengchao; Shepard, W. Steve, Jr.

    2009-01-01

    An enhanced analytical model is presented based on an extension of previous models for constrained layer damping (CLD) in beam-like structures. Most existing CLD models are based on the assumption that shear deformation in the core layer is the only source of damping in the structure. However, previous research has shown that other types of deformation in the core layer, such as deformations from longitudinal extension and transverse compression, can also be important. In the enhanced analytical model developed here, shear, extension, and compression deformations are all included. This model can be used to predict the natural frequencies and modal loss factors. The numerical study shows that, compared to other models, this enhanced model is accurate in predicting the dynamic characteristics. As a result, it can serve as a general computational model. With all three types of damping included and the formulation used here, it is possible to study the impact of the structure's geometry and boundary conditions on the relative contribution of each type of damping. To that end, the relative contributions in the frequency domain for a few sample cases are presented.

  3. Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization

    NASA Astrophysics Data System (ADS)

    Christensen, H. M.; Moroz, I.; Palmer, T.

    2015-12-01

    It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved subgrid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium-Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme is a fixed perturbed parameter scheme, where the values of uncertain parameters are changed between ensemble members but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches when considering forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.

  4. Constraining parameters of white-dwarf binaries using gravitational-wave and electromagnetic observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Sweta; Nelemans, Gijs, E-mail: s.shah@astro.ru.nl

    The space-based gravitational wave (GW) detector, evolved Laser Interferometer Space Antenna (eLISA), is expected to observe millions of compact Galactic binaries that populate our Milky Way. GW measurements obtained from the eLISA detector are in many cases complementary to possible electromagnetic (EM) data. In our previous papers, we have shown that the EM data can significantly enhance our knowledge of the astrophysically relevant GW parameters of Galactic binaries, such as the amplitude and inclination. This is possible due to the presence of some strong correlations between GW parameters that are measurable by both EM and GW observations, for example, the inclination and sky position. In this paper, we quantify the constraints in the physical parameters of the white-dwarf binaries, i.e., the individual masses, chirp mass, and the distance to the source, that can be obtained by combining the full set of EM measurements such as the inclination, radial velocities, distances, and/or individual masses with the GW measurements. We find the following 2σ fractional uncertainties in the parameters of interest. The EM observations of distance constrain the chirp mass to ∼15%-25%, whereas EM data of a single-lined spectroscopic binary constrain the secondary mass and the distance to within a factor of two to ∼40%. The single-line spectroscopic data complemented with distance constrain the secondary mass to ∼25%-30%. Finally, EM data on a double-lined spectroscopic binary constrain the distance to ∼30%. All of these constraints depend on the inclination and the signal strength of the binary systems. We also find that the EM information on distance and/or the radial velocity is the most useful in improving the estimate of the secondary mass, inclination, and/or distance.

  5. Multi-objective vs. single-objective calibration of a hydrologic model using single- and multi-objective screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Shafii, Mahyar; Zink, Matthias; Schäfer, David; Thober, Stephan; Samaniego, Luis; Tolson, Bryan

    2016-04-01

    Hydrologic models are traditionally calibrated against observed streamflow. Recent studies have shown, however, that only a few global model parameters are constrained using this kind of integral signal. They can be identified using prior screening techniques. Since different objectives might constrain different parameters, it is advisable to use multiple sources of information to calibrate those models. One common approach is to combine these multiple objectives (MO) into one single objective (SO) function, allowing the use of an SO optimization algorithm. Another strategy is to consider the different objectives separately and apply an MO Pareto optimization algorithm. In this study, two major research questions are addressed: 1) How do multi-objective calibrations compare with corresponding single-objective calibrations? 2) How much do calibration results deteriorate when the number of calibrated parameters is reduced by a prior screening technique? The hydrologic model employed in this study is a distributed hydrologic model (mHM) with 52 model parameters, i.e. transfer coefficients. The model uses grid cells as a primary hydrologic unit, and accounts for processes like snow accumulation and melting, soil moisture dynamics, infiltration, surface runoff, evapotranspiration, subsurface storage and discharge generation. The model is applied in three distinct catchments over Europe. The SO calibrations are performed using the Dynamically Dimensioned Search (DDS) algorithm with a fixed budget, while the MO calibrations are achieved using the Pareto Dynamically Dimensioned Search (PA-DDS) algorithm with the same budget. The two objectives used here are the Nash-Sutcliffe Efficiency (NSE) of the simulated streamflow and the NSE of its logarithmic transformation (see the sketch below). It is shown that the SO DDS results are located close to the edges of the Pareto fronts of the PA-DDS. The MO calibrations are hence preferable because they supply multiple equivalent solutions, from which the user can choose according to specific needs. The sequential single-objective parameter screening employed prior to the calibrations reduced the number of parameters by at least 50% in the different catchments and for the different single objectives. These screened single-objective calibrations led to a faster convergence of the objectives and are hence beneficial when using DDS on single objectives. The above-mentioned parameter screening technique is generalized for multiple objectives and applied before calibration using the PA-DDS algorithm. Two different alternatives of this MO screening are tested. The comparison of the calibration results using all parameters and using only screened parameters shows, for both alternatives, that the PA-DDS algorithm does not profit in terms of trade-off size and the number of function evaluations required to achieve converged Pareto fronts. This is because the PA-DDS algorithm automatically reduces the search space as the calibration run progresses. This automatic reduction may differ for other search algorithms. It is therefore hypothesized that prior screening can, but need not, be beneficial for parameter estimation, depending on the chosen optimization algorithm.
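
    For concreteness, a short sketch of the two objectives named above, plus one common way of collapsing them into a single objective; the weighted-sum form and the weight w are assumptions for illustration, not the study's prescription.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (1 is a perfect fit)."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def objectives(sim, obs, eps=1e-6):
    """The two objectives: NSE of streamflow (emphasizes high flows) and
    NSE of log-streamflow (emphasizes low flows)."""
    return nse(sim, obs), nse(np.log(sim + eps), np.log(obs + eps))

def single_objective(sim, obs, w=0.5):
    """One common MO-to-SO collapse: a weighted sum. The weight w is an
    assumed value, not taken from the paper."""
    o1, o2 = objectives(sim, obs)
    return w * o1 + (1.0 - w) * o2
```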

  6. Collective behaviour in vertebrates: a sensory perspective

    PubMed Central

    Collignon, Bertrand; Fernández-Juricic, Esteban

    2016-01-01

    Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions about the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models (see the sketch below), we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and unrealistically constrain their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models: in choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616
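
    The metric/topological distinction the authors vary can be stated compactly in code. The sketch below is a generic illustration, with the radius and k values assumed rather than taken from the paper (k of about 7 is the figure often quoted for starling flocks).

```python
import numpy as np

def metric_neighbours(pos, i, radius):
    """Metric model: interact with all conspecifics within a fixed radius
    (the perceptual range set by visual coverage and acuity)."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    return np.flatnonzero((d > 0) & (d <= radius))

def topological_neighbours(pos, i, k):
    """Topological model: interact with the k nearest conspecifics,
    regardless of distance."""
    d = np.linalg.norm(pos - pos[i], axis=1)
    return np.argsort(d)[1:k + 1]              # skip self at distance 0

pos = np.random.default_rng(1).uniform(0, 10, size=(30, 2))
print(metric_neighbours(pos, 0, radius=2.5))   # radius is an assumed value
print(topological_neighbours(pos, 0, k=7))     # k is an assumed value
```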

  7. Vertical structure and physical processes of the Madden-Julian Oscillation: Biases and uncertainties at short range

    NASA Astrophysics Data System (ADS)

    Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; Woolnough, Steve J.; Jiang, Xianan; Waliser, Duane E.; Caian, Mihaela; Cole, Jason; Hagos, Samson M.; Hannay, Cecile; Kim, Daehyun; Miyakawa, Tomoki; Pritchard, Michael S.; Roehrig, Romain; Shindo, Eiki; Vitart, Frederic; Wang, Hailan

    2015-05-01

    An analysis of diabatic heating and moistening processes from 12 to 36 h lead time forecasts from 12 Global Circulation Models is presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior shows that some models exhibit strong intermittency in rainfall, and that the relationship between precipitation and dynamics differs between models. The wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.

  8. Constrained parameterisation of photosynthetic capacity causes significant increase of modelled tropical vegetation surface temperature

    NASA Astrophysics Data System (ADS)

    Kattge, J.; Knorr, W.; Raddatz, T.; Wirth, C.

    2009-04-01

    Photosynthetic capacity is one of the most sensitive parameters of terrestrial biosphere models, and its representation in global scale simulations has been severely hampered by a lack of systematic analyses using a sufficiently broad database. Due to its coupling to stomatal conductance, changes in the parameterisation of photosynthetic capacity may influence transpiration rates and vegetation surface temperature. Here, we provide a constrained parameterisation of photosynthetic capacity for different plant functional types in the context of the photosynthesis model proposed by Farquhar et al. (1980), based on a comprehensive compilation of leaf photosynthesis rates and leaf nitrogen content. Mean values of photosynthetic capacity were implemented into the coupled climate-vegetation model ECHAM5/JSBACH, and modelled gross primary production (GPP) is compared to a compilation of independent observations on the stand scale. Compared to the current standard parameterisation, the root-mean-squared difference between modelled and observed GPP is substantially reduced for almost all PFTs by the new parameterisation of photosynthetic capacity. We find a systematic depression of NUE (photosynthetic capacity divided by leaf nitrogen content) on certain tropical soils that are known to be deficient in phosphorus. Photosynthetic capacity of tropical trees derived by this study is substantially lower than standard estimates currently used in terrestrial biosphere models. This causes a decrease of modelled GPP, while it significantly increases modelled tropical vegetation surface temperatures, by up to 0.8°C. These results emphasise the importance of a constrained parameterisation of photosynthetic capacity not only for the carbon cycle, but also for the climate system.

  9. Improving the Fit of a Land-Surface Model to Data Using its Adjoint

    NASA Astrophysics Data System (ADS)

    Raoult, N.; Jupp, T. E.; Cox, P. M.; Luke, C.

    2015-12-01

    Land-surface models (LSMs) are of growing importance in the world of climate prediction. They are crucial components of larger Earth system models that are aimed at understanding the effects of land surface processes on the global carbon cycle. The Joint UK Land Environment Simulator (JULES) is the land-surface model used by the UK Met Office. It has been automatically differentiated using commercial software from FastOpt, resulting in an analytical gradient, or 'adjoint', of the model. Using this adjoint, the adJULES parameter estimation system has been developed to search for locally optimum parameter sets by calibrating against observations. adJULES presents an opportunity to confront JULES with many different observations, and make improvements to the model parameterisation. In the newest version of adJULES, multiple sites can be used in the calibration, giving a generic set of parameters that can be generalised over plant functional types. We present an introduction to the adJULES system and its applications to data from a variety of flux tower sites. We show that calculation of the 2nd derivative of JULES allows us to produce posterior probability density functions of the parameters and to examine how knowledge of parameter values is constrained by observations.

  10. Killing the cMSSM softly

    DOE PAGES

    Bechtle, Philip; Camargo-Molina, José Eliel; Desch, Klaus; ...

    2016-02-24

    We investigate the constrained Minimal Supersymmetric Standard Model (cMSSM) in the light of constraining experimental and observational data from precision measurements, astrophysics, direct supersymmetry searches at the LHC and measurements of the properties of the Higgs boson, by means of a global fit using the program Fittino. As in previous studies, we find rather poor agreement of the best fit point with the global data. We also investigate the stability of the electroweak vacuum in the preferred region of parameter space around the best fit point. We find that the vacuum is metastable, with a lifetime significantly longer than the age of the Universe. For the first time in a global fit of supersymmetry, we employ a consistent methodology to evaluate the goodness-of-fit of the cMSSM in a frequentist approach by deriving p values from large sets of toy experiments. We analyse analytically and quantitatively the impact of the choice of the observable set on the p value, and in particular its dilution when confronting the model with a large number of barely constraining measurements. Lastly, for the preferred sets of observables, we obtain p values for the cMSSM below 10%, i.e. we exclude the cMSSM as a model at the 90% confidence level.

  11. Inferring Spatial Variations of Microstructural Properties from Macroscopic Mechanical Response

    PubMed Central

    Liu, Tengxiao; Hall, Timothy J.; Barbone, Paul E.; Oberai, Assad A.

    2016-01-01

    Disease alters tissue microstructure, which in turn affects the macroscopic mechanical properties of tissue. In elasticity imaging, the macroscopic response is measured and is used to infer the spatial distribution of the elastic constitutive parameters. When an empirical constitutive model is used these parameters cannot be linked to the microstructure. However, when the constitutive model is derived from a microstructural representation of the material, it allows for the possibility of inferring the local averages of the spatial distribution of the microstructural parameters. This idea forms the basis of this study. In particular, we first derive a constitutive model by homogenizing the mechanical response of a network of elastic, tortuous fibers. Thereafter, we use this model in an inverse problem to determine the spatial distribution of the microstructural parameters. We solve the inverse problem as a constrained minimization problem, and develop efficient methods for solving it. We apply these methods to displacement fields obtained by deforming gelatin-agar co-gels, and determine the spatial distribution of agar concentration and fiber tortuosity, thereby demonstrating that it is possible to image local averages of microstructural parameters from macroscopic measurements of deformation. PMID:27655420
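
    A minimal sketch of the constrained-minimization formulation of such an inverse problem, with a stand-in forward model in place of the authors' homogenized fiber-network model; the parameter names, bounds, and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def forward(params, x):
    """Stand-in forward model mapping microstructural parameters to a
    measured mechanical response; the real model homogenizes a network
    of elastic, tortuous fibers."""
    stiffness, tortuosity = params
    return stiffness * x / (1.0 + tortuosity * x)

def misfit(params, x, measured):
    """Least-squares mismatch between predicted and measured response."""
    return np.sum((forward(params, x) - measured) ** 2)

x = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(2)
measured = forward([2.0, 0.5], x) + 0.01 * rng.standard_normal(50)

# Bound constraints keep the recovered parameters physical.
res = minimize(misfit, x0=[1.0, 0.1], args=(x, measured),
               method="L-BFGS-B", bounds=[(1e-3, 10.0), (0.0, 5.0)])
print("recovered parameters:", res.x)
```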

  12. Constrained spectral clustering under a local proximity structure assumption

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Xu, Qianjun; des Jardins, Marie

    2005-01-01

    This work focuses on incorporating pairwise constraints into a spectral clustering algorithm. A new constrained spectral clustering method is proposed, as well as an active constraint acquisition technique and a heuristic for parameter selection. We demonstrate that our constrained spectral clustering method, CSC, works well when the data exhibits what we term local proximity structure.

  13. Clues on the Milky Way disc formation from population synthesis simulations

    NASA Astrophysics Data System (ADS)

    Robin, A. C.; Reylé, C.; Bienaymé, O.; Fernandez-Trincado, J. G.; Amôres, E. B.

    2016-09-01

    In recent years the stellar populations of the Milky Way have been investigated from large scale surveys in different ways, from pure star count analysis to detailed studies based on spectroscopic surveys. While in the former case the data can constrain the scale height and scale length thanks to completeness, they suffer from high correlation between these two values. On the other hand, spectroscopic surveys suffer from complex selection functions which make it hard to derive accurate density distributions. The scale length in particular has been difficult to constrain, resulting in discrepant values in the literature. Here, we investigate the thick disc characteristics by comparing model simulations with large scale data sets. The simulations are done with the Besançon population synthesis model. We explore the parameters of the thick disc (shape, local density, age, metallicity) using a Markov Chain Monte Carlo method to constrain the model free parameters (Robin et al. 2014). Correlations between parameters are limited due to the vast spatial coverage of the surveys used (SDSS + 2MASS). We show that the thick disc was created during a long phase of formation, starting about 12 Gyr ago and finishing about 10 Gyr ago, during which gravitational contraction occurred, both vertically and radially. Moreover, in its early phase the thick disc was flaring in the outskirts. We conclude that the thick disc was created prior to the thin disc during a gravitational collapse phase, slowed down by turbulence related to a high star formation rate, as explained for example in Bournaud et al. (2009) or Lehnert et al. (2009). Our result does not favor formation from an initial thin disc thickened later by merger events or by secular evolution of the thin disc. We then study the in-plane distribution of stars in the thin disc from 2MASS and show that the thin disc scale length varies as a function of age, indicating inside-out formation. Moreover, we investigate the warp and flare and demonstrate that the warp amplitude is changing with time and the node angle is slightly precessing. Finally, we show comparisons between the new model and spectroscopic surveys. The new model allows us to correctly simulate the kinematics, metallicity, and α-abundance distributions in the solar neighbourhood as well as in the bulge region.

  14. USING ForeCAT DEFLECTIONS AND ROTATIONS TO CONSTRAIN THE EARLY EVOLUTION OF CMEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kay, C.; Opher, M.; Colaninno, R. C.

    2016-08-10

    To accurately predict the space weather effects of coronal mass ejection (CME) impacts at Earth, one must know if and when a CME will impact Earth and the CME parameters upon impact. In 2015, Kay et al. presented Forecasting a CME's Altered Trajectory (ForeCAT), a model for CME deflections based on the magnetic forces from the background solar magnetic field. Knowing the deflection and rotation of a CME enables prediction of Earth impacts and the orientation of the CME upon impact. We first reconstruct the positions of the 2010 April 8 and the 2012 July 12 CMEs from the observations. The first of these CMEs exhibits significant deflection and rotation (34° deflection and 58° rotation), while the second shows almost no deflection or rotation (<3° each). Using ForeCAT, we explore a range of initial parameters, such as the CME's location and size, and find parameters that can successfully reproduce the behavior of each CME. Additionally, since the deflection depends strongly on the behavior of a CME in the low corona, we are able to constrain the expansion and propagation of these CMEs in the low corona.

  15. Constraining smoothness parameter and the DD relation of Dyer-Roeder equation with supernovae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xi; Yu, Hao-Ran; Zhang, Tong-Jie, E-mail: yangwds@mail.bnu.edu.cn, E-mail: yu@bnu.edu.cn, E-mail: tjzhang@bnu.edu.cn

    2013-06-01

    Our real universe is locally inhomogeneous. Dyer and Roeder introduced the smoothness parameter α to describe the influence of local inhomogeneity on the angular diameter distance, and they obtained an approximate angular diameter distance-redshift relation (the Dyer-Roeder equation) for a locally inhomogeneous universe. Furthermore, the distance-duality (DD) relation, D_L(z)(1+z)⁻²/D_A(z) = 1, should be valid for all cosmological models that are described by Riemannian geometry, where D_L and D_A are, respectively, the luminosity and angular diameter distances. Therefore, it is necessary to test whether the Dyer-Roeder approximate equation satisfies the distance-duality relation. In this paper, we use Union2.1 SNe Ia data to constrain the smoothness parameter α and to test whether the Dyer-Roeder equation meets the DD relation. By χ² minimization, we get α = 0.92 (+0.08/−0.32) at 1σ and α = 0.92 (+0.08/−0.65) at 2σ, and our results show that the Dyer-Roeder equation is in good consistency with the DD relation at 1σ.
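
    The DD relation itself is straightforward to check numerically for a homogeneous cosmology. The sketch below (astropy's built-in Planck18 cosmology is an assumed choice, not the paper's setup) confirms that η(z) = D_L/[(1+z)² D_A] is identically 1 in FLRW, the baseline against which Dyer-Roeder departures are measured.

```python
import numpy as np
from astropy.cosmology import Planck18

# Distance duality (Etherington): D_L(z) (1+z)^-2 / D_A(z) = 1 in any metric
# theory with photon-number conservation. For a homogeneous FLRW model it is
# exact by construction; the Dyer-Roeder distance for a clumpy universe
# (smoothness alpha < 1) would instead come from integrating its second-order
# ODE and could then be compared against this baseline.
z = np.array([0.1, 0.5, 1.0, 2.0])
eta = Planck18.luminosity_distance(z) / (
    (1 + z) ** 2 * Planck18.angular_diameter_distance(z))
print(eta)  # dimensionless, identically 1 for FLRW
```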

  16. Probing primordial features with future galaxy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballardini, M.; Fedeli, C.; Moscardini, L.

    2016-10-01

    We study the capability of future measurements of the galaxy clustering power spectrum to probe departures from a power-law spectrum for primordial fluctuations. Considering the information from the galaxy clustering power spectrum up to quasi-linear scales, i.e. k < 0.1 h Mpc⁻¹, we present forecasts for DESI, Euclid and SPHEREx in combination with CMB measurements. As examples of departures in the primordial power spectrum from a simple power law, we consider four Planck 2015 best fits motivated by inflationary models with different breaking of the slow-roll approximation. At present, these four representative models provide an improved fit to CMB temperature anisotropies, although not at a statistically significant level. As for other extensions in the matter content of the simplest ΛCDM model, the complementarity between the information in the resulting matter power spectrum expected from these galaxy surveys and in the primordial power spectrum from CMB anisotropies can be effective in constraining cosmological models. We find that the three galaxy surveys can add significant information to the CMB to better constrain the extra parameters of the four models considered.

  17. Cosmology from galaxy clusters as observed by Planck

    NASA Astrophysics Data System (ADS)

    Pierpaoli, Elena

    We propose to use current all-sky data on galaxy clusters in the radio/infrared bands in order to constrain cosmology. This will be achieved by performing parameter estimation with number counts and power spectra for galaxy clusters detected by Planck through their Sunyaev-Zeldovich signature. The ultimate goal of this proposal is to use clusters as tracers of matter density in order to provide information about fundamental properties of our Universe, such as the law of gravity on large scales, early Universe phenomena, structure formation and the nature of dark matter and dark energy. We will leverage the availability of a larger and deeper cluster catalog from the latest Planck data release in order to include, for the first time, the cluster power spectrum in the cosmological parameter determination analysis. Furthermore, we will extend the clusters' analysis to cosmological models not yet investigated by the Planck collaboration. These aims require a diverse set of activities, ranging from the characterization of the clusters' selection function, the choice of the cosmological cluster sample to be used for parameter estimation, and the construction of mock samples in the various cosmological models with correct correlation properties in order to produce reliable selection functions and noise covariance matrices, to the construction of the appropriate likelihood for number counts and power spectra. We plan to make the final code available to the community and compatible with the most widely used cosmological parameter estimation code. This research makes use of data from the NASA satellites Planck and, less directly, Chandra, in order to constrain cosmology, and therefore perfectly fits the NASA objectives and the specifications of this solicitation.

  18. Synthesizing trait correlations and functional relationships across multiple scales: A Hierarchical Bayes approach

    NASA Astrophysics Data System (ADS)

    Shiklomanov, A. N.; Cowdery, E.; Dietze, M.

    2016-12-01

    Recent syntheses of global trait databases have revealed that although the functional diversity among plant species is immense, this diversity is constrained by trade-offs between plant strategies. However, the use of among-trait and trait-environment correlations at the global scale for both qualitative ecological inference and land surface modeling has several important caveats. An alternative approach is to preserve the existing PFT-based model structure while using statistical analyses to account for uncertainty and variability in model parameters. In this study, we used a hierarchical Bayesian model of foliar traits in the TRY database to test the following hypotheses: (1) Leveraging the covariance between foliar traits will significantly constrain our uncertainty in their distributions; and (2) Among-trait covariance patterns are significantly different among and within PFTs, reflecting differences in trade-offs associated with biome-level evolution, site-level community assembly, and individual-level ecophysiological acclimation. We found that among-trait covariance significantly constrained estimates of trait means, and the additional information provided by across-PFT covariance led to more constraint still, especially for traits and PFTs with low sample sizes. We also found that among-trait correlations were highly variable among PFTs, and were generally inconsistent with correlations within PFTs. The hierarchical multivariate framework developed in our study can readily be enhanced with additional levels of hierarchy to account for geographic, species, and individual-level variability.

  19. DETERMINING AGES OF APOGEE GIANTS WITH KNOWN DISTANCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feuillet, Diane K.; Holtzman, Jon; Bovy, Jo

    2016-01-20

    We present a sample of 705 local giant stars observed using the New Mexico State University 1 m telescope with the Sloan Digital Sky Survey-III/Apache Point Observatory Galactic Evolution Experiment (APOGEE) spectrograph, for which we estimate stellar ages and the local star formation history (SFH). The high-resolution (R ∼ 22,500), near infrared (1.51-1.7 μm) APOGEE spectra provide measurements of stellar atmospheric parameters (temperature, surface gravity, [M/H], and [α/M]). Due to the smaller uncertainties in surface gravity possible with high-resolution spectra and accurate Hipparcos distance measurements, we are able to calculate the stellar masses to within 30%. For giants, the relatively rapid evolution up the red giant branch allows the age to be constrained by the mass. We examine methods of estimating age using both the mass-age relation directly and a Bayesian isochrone matching of measured parameters, assuming a constant SFH. To improve the SFH prior, we use a hierarchical modeling approach to constrain the parameters of the model SFH using the age probability distribution functions of the data. The results of an α-dependent Gaussian SFH model show a clear age-[α/M] relation at all ages. Using this SFH model as the prior for an empirical Bayesian analysis, we determine ages for individual stars. The resulting age-metallicity relation is flat, with a slight decrease in [M/H] at the oldest ages and a ∼0.5 dex spread in metallicity across most ages. For stars with ages ≲1 Gyr we find a smaller spread, consistent with radial migration having a smaller effect on these young stars than on the older stars.

  20. Section-constrained local geological interface dynamic updating method based on the HRBF surface

    NASA Astrophysics Data System (ADS)

    Guo, Jiateng; Wu, Lixin; Zhou, Wenhui; Li, Chaoling; Li, Fengdan

    2018-02-01

    Boundaries, attitudes and sections are the most common data acquired from regional field geological surveys, and they are used for three-dimensional (3D) geological modelling. However, constructing topologically consistent 3D geological models through rapid and automatic regional modelling with convenient local modifications remains an unresolved problem. In previous works, the Hermite radial basis function (HRBF) surface was introduced for the simulation of geological interfaces from geological boundaries and attitudes, which allows 3D geological models to be automatically extracted from the modelling area by the interfaces. However, the reasonableness and accuracy of unsupervised subsurface modelling are limited without further modifications generated through the explanations and analyses of geology experts. In this paper, we provide flexible and convenient manual interactive manipulation tools for geologists to sketch constraint lines, and these tools may help geologists transform and apply their expert knowledge to the models. In the modified modelling workflow, the geological sections are treated as auxiliary constraints to construct more reasonable 3D geological models. The geometric characteristics of section lines are abstracted to coordinates and normal vectors, and along with the transformed coordinates and vectors from boundaries and attitudes, these characteristics are adopted to co-calculate the implicit geological surface function parameters of the HRBF equations and to form constrained geological interfaces from topographic (boundaries and attitudes) and subsurface data (sketched sections). Based on this new modelling method, a prototype system was developed, in which the section lines can be imported from databases or interactively sketched, and the models can be immediately updated after new constraints are added. Experimental comparisons showed that all boundary, attitude and section data are well represented in the constrained models, which are consistent with expert explanations and help improve the quality of the models.
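
    A generic flavour of implicit-surface fitting (not the paper's Hermite RBF formulation) can be sketched with scipy: normal constraints are emulated with offset points carrying signed off-surface values, a standard workaround when the interpolator cannot take gradient constraints directly. All coordinates below are toy values.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Interface points (e.g. from boundaries/sections) and unit normals
# (e.g. from attitude measurements) -- invented for illustration.
pts = np.array([[0., 0., 0.], [1., 0., 0.1], [0., 1., -0.1], [1., 1., 0.]])
normals = np.tile([0., 0., 1.], (4, 1))

# Emulate Hermite (gradient) constraints with offset points: f = 0 on the
# interface, f = +/-d at points nudged along the normals.
d = 0.05
X = np.vstack([pts, pts + d * normals, pts - d * normals])
f = np.concatenate([np.zeros(4), np.full(4, d), np.full(4, -d)])

surface = RBFInterpolator(X, f, kernel="thin_plate_spline")
print(surface(np.array([[0.5, 0.5, 0.0]])))  # ~0 near the interface
```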

  1. A MAGNIFIED GLANCE INTO THE DARK SECTOR: PROBING COSMOLOGICAL MODELS WITH STRONG LENSING IN A1689

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Magaña, Juan; Motta, V.; Cárdenas, Victor H.

    2015-11-01

    In this paper we constrain four alternative models for the late cosmic acceleration of the universe: Chevallier–Polarski–Linder (CPL), interacting dark energy (IDE), Ricci holographic dark energy (HDE), and modified polytropic Cardassian (MPC). Strong lensing (SL) images of background galaxies produced by the galaxy cluster Abell 1689 are used to test these models. To perform this analysis we modify the LENSTOOL lens modeling code. The value added by this probe is compared with other complementary probes: Type Ia supernovae (SN Ia), baryon acoustic oscillations (BAO), and the cosmic microwave background (CMB). We found that the CPL constraints obtained from the SL data are consistent with those estimated using the other probes. The IDE constraints are consistent with the complementary bounds only if large errors in the SL measurements are considered. The Ricci HDE and MPC constraints are weak, but they are similar to the BAO, SN Ia, and CMB estimations. We also compute the figure of merit as a tool to quantify the goodness of fit of the data. Our results suggest that the SL method provides statistically significant constraints on the CPL parameters but is weak for those of the other models. Finally, we show that the use of SL measurements in galaxy clusters is a promising and powerful technique to constrain cosmological models. The advantage of this method is that cosmological parameters are estimated by modeling the SL features for each underlying cosmology. These estimations could be further improved by SL constraints coming from other galaxy clusters.

  2. Hierarchical Bayesian Model for Combining Geochemical and Geophysical Data for Environmental Applications Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jinsong

    2013-05-01

    Development of a hierarchical Bayesian model to estimate the spatiotemporal distribution of aqueous geochemical parameters associated with in-situ bioremediation, using surface spectral induced polarization (SIP) data and borehole geochemical measurements collected during a bioremediation experiment at a uranium-contaminated site near Rifle, Colorado. The SIP data are first inverted for Cole-Cole parameters, including chargeability, time constant, resistivity at the DC frequency and dependence factor, at each pixel of two-dimensional grids using a previously developed stochastic method. Correlations between the inverted Cole-Cole parameters and the wellbore-based groundwater chemistry measurements indicative of key metabolic processes within the aquifer (e.g. ferrous iron, sulfate, uranium) were established and used as a basis for petrophysical model development. The developed Bayesian model consists of three levels of statistical sub-models: 1) a data model, providing links between geochemical and geophysical attributes; 2) a process model, describing the spatial and temporal variability of geochemical properties in the subsurface system; and 3) a parameter model, describing prior distributions of various parameters and initial conditions. The unknown parameters are estimated using Markov chain Monte Carlo methods. By combining the temporally distributed geochemical data with the spatially distributed geophysical data, we obtain the spatio-temporal distribution of ferrous iron, sulfate and sulfide, together with the associated uncertainty information. The obtained results can be used to assess the efficacy of the bioremediation treatment over space and time and to constrain reactive transport models.

  3. Understanding and Mitigating Vortex-Dominated, Tip-Leakage and End-Wall Losses in a Transonic Splittered Rotor Stage

    DTIC Science & Technology

    2015-04-23

    [Fragmentary record excerpt: the blade-geometry design tool was initiated by running the MATLAB script Main_SpeedLine_Auto, with SolidWorks used for solid model generation of the blade shapes; the computational analysis of the gas-path air wedge used a blade tip-down design approach, with a 287 mm (11.3 in) dimension constrained by the existing TCR geometry and 12 passages.]

  4. Process-oriented Observational Metrics for CMIP6 Climate Model Assessments

    NASA Astrophysics Data System (ADS)

    Jiang, J. H.; Su, H.

    2016-12-01

    Observational metrics based on satellite observations have been developed and effectively applied during post-CMIP5 model evaluation and improvement projects. As new physics and parameterizations continue to be included in models for the upcoming CMIP6, it is important to continue objective comparisons between observations and model results. This talk will summarize the process-oriented observational metrics and methodologies for constraining climate models with A-Train satellite observations, in support of CMIP6 model assessments. We target parameters and processes related to atmospheric clouds and water vapor, which are critically important for Earth's radiative budget, climate feedbacks, and water and energy cycles, and whose evaluation can thus reduce uncertainties in climate models.

  5. Generalized framework for testing gravity with gravitational-wave propagation. I. Formulation

    NASA Astrophysics Data System (ADS)

    Nishizawa, Atsushi

    2018-05-01

    The direct detection of gravitational waves (GWs) from merging binary black holes and neutron stars marks the beginning of a new era in gravitational physics, and it brings forth new opportunities to test theories of gravity. To this end, it is crucial to search for anomalous deviations from general relativity in a model-independent way, irrespective of gravity theories, GW sources, and background spacetimes. In this paper, we propose a new universal framework for testing gravity with GWs, based on the generalized propagation of a GW in an effective field theory that describes the modification of gravity at cosmological scales. We then perform a parameter estimation study, showing how well future observations of GWs can constrain the model parameters in the generalized models of GW propagation.

  6. Constraining parameters of the neutron star in the supernova remnant HESS J1731-347

    NASA Astrophysics Data System (ADS)

    Klochkov, D.; Suleimanov, V.; Puehlhofer, G.; Werner, K.; Santangelo, A.

    2014-07-01

    The Central Compact Object (CCO) in HESS J1731-347, presumably a neutron star, is one of the brightest sources in this class. Like other CCOs, it potentially provides an "undisturbed" view of the thermal radiation generated at the neutron star surface. The shape and normalization of the corresponding X-ray spectrum depend on the emitting area, surface redshift, and gravitational acceleration. Thus, modeling the spectrum under certain assumptions allows the mass and radius of the neutron star to be constrained. In our analysis, we model the spectrum of the CCO accumulated with XMM-Newton over ~100 ks of exposure time in three observations. The exposure time has increased by a factor of five since our previous analysis of the source. For the spectral fitting, we use our hydrogen and carbon atmosphere models calculated assuming hydrostatic and radiative equilibria and taking into account pressure ionization and the presence of spectral lines (in the case of carbon). We present the resulting constraints on the mass, radius, distance, and temperature of the neutron star.

  7. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
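
    The continuous core of a particle swarm optimiser is compact. The sketch below is a plain global-best PSO, not the authors' hybrid combinatorial-priority-based variant, which layers mode assignment and priority-based activity scheduling on top of updates like these; all hyperparameter values are assumed defaults.

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimiser for continuous
    variables: velocities are pulled toward each particle's personal best
    and the swarm's global best."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)              # keep particles in bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()]
    return g, pbest_f.min()

best, val = pso(lambda p: np.sum(p ** 2), bounds=[(-5, 5)] * 3)
print(best, val)
```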

  8. RX J1856-3754: Evidence for a Stiff Equation of State

    NASA Astrophysics Data System (ADS)

    Braje, Timothy M.; Romani, Roger W.

    2002-12-01

    We have examined the soft X-ray plus optical/UV spectrum of the nearby isolated neutron star RX J1856-3754, comparing it with detailed models of a thermally emitting surface. Like previous investigators, we find that the spectrum is best fitted by a two-temperature blackbody model. In addition, our simulations constrain the allowed viewing geometry from the observed pulse-fraction upper limits. These simulations show that RX J1856-3754 is very likely to be a normal young pulsar, with the nonthermal radio beam missing Earth's line of sight. The spectral energy distribution limits on the model parameter space put a strong constraint on the star's M/R. At the measured parallax distance, the allowed range for M_NS = 1.5 M_⊙ is R_NS = 13.7 ± 0.6 km. Under this interpretation, the equation of state (EOS) is relatively stiff near nuclear density, and the quark-star EOS posited in some previous studies is strongly excluded. The data also constrain the surface temperature distribution over the polar cap.
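
    The M/R constraint enters through gravitational redshift and light bending: a star of coordinate radius R appears to a distant observer with radius R_∞ = R/√(1 − 2GM/Rc²). A quick check with the values quoted above:

```python
import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def apparent_radius(M, R):
    """Radiation radius R_inf = R / sqrt(1 - 2GM/(R c^2)) seen at infinity;
    thermal spectral fits constrain M/R through this factor."""
    return R / np.sqrt(1.0 - 2.0 * G * M / (R * c ** 2))

# Values quoted in the abstract: M_NS = 1.5 Msun, R_NS = 13.7 km.
print(apparent_radius(1.5 * Msun, 13.7e3) / 1e3, "km")   # ~16.7 km
```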

  9. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  10. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.

    2016-12-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis with respect to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two-thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities to standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities, so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. These sensitivities are, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
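
    A Sobol' analysis of the kind performed above can be sketched with the SALib package; the toy latent-heat model, parameter names, and bounds below are placeholders for a real Noah-MP run, not its actual parameters:

        # Sobol' global sensitivity analysis with SALib; the model and the
        # parameter names/bounds are stand-ins, not Noah-MP values.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 2,
            "names": ["soil_surface_resistance", "stomatal_conductance"],
            "bounds": [[0.1, 10.0], [0.001, 0.05]],
        }

        X = saltelli.sample(problem, 1024)        # N * (2D + 2) parameter sets

        def toy_latent_heat(x):                   # stand-in for one model run
            return np.exp(-x[0]) + 50.0 * x[1]

        Y = np.apply_along_axis(toy_latent_heat, 1, X)
        Si = sobol.analyze(problem, Y)
        print(Si["S1"], Si["ST"])                 # first-order and total indices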

  11. TH-E-BRF-06: Kinetic Modeling of Tumor Response to Fractionated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Gordon, J; Chetty, I

    2014-06-15

    Purpose: Accurate calibration of radiobiological parameters is crucial to predicting radiation treatment response. Modeling differences may have a significant impact on calibrated parameters. In this study, we have integrated two existing models with kinetic differential equations to formulate a new tumor regression model for calibrating radiobiological parameters for individual patients. Methods: A system of differential equations that characterizes the birth-and-death process of tumor cells in radiation treatment was analytically solved. The solution of this system was used to construct an iterative model (Z-model). The model consists of three parameters: the tumor doubling time Td, the half-life of dying cells Tr, and the cell survival fraction SFD under dose D. The Jacobian determinant of this model was proposed as a constraint to optimize the three parameters for six head and neck cancer patients. The derived parameters were compared with those generated from the two existing models, the Chvetsov model (C-model) and the Lim model (L-model). The C-model and L-model were optimized with the parameter Td fixed. Results: With the Jacobian-constrained Z-model, the mean of the optimized cell survival fractions is 0.43±0.08, and the half-life of dying cells averaged over the six patients is 17.5±3.2 days. The parameters Tr and SFD optimized with the Z-model differ by 1.2% and 20.3% from those optimized with the Td-fixed C-model, and by 32.1% and 112.3% from those optimized with the Td-fixed L-model, respectively. Conclusion: The Z-model was analytically constructed from the cell-population differential equations to describe changes in the number of different tumor cells during the course of fractionated radiation treatment. The Jacobian constraints were proposed to optimize the three radiobiological parameters. The developed modeling and optimization methods may help develop high-quality treatment regimens for individual patients.
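
    The cell-population equations themselves are not reproduced in the abstract; the sketch below integrates a generic two-compartment birth-and-death system of the kind such a Z-model is built from, reusing the Tr and SFD values quoted above together with an assumed doubling time:

        # Generic two-compartment tumor model: viable cells grow with doubling
        # time Td, each daily fraction moves (1 - SFD) of them to a dying
        # compartment, and dying cells clear with half-life Tr. Td is assumed.
        import numpy as np
        from scipy.integrate import solve_ivp

        Td, Tr, SFD = 40.0, 17.5, 0.43        # days, days, survival per fraction

        def rhs(t, y):
            v, d = y                          # viable and dying cell counts
            return [np.log(2) / Td * v, -np.log(2) / Tr * d]

        y = np.array([1e9, 0.0])
        for day in range(30):                 # one fraction per day (toy schedule)
            killed = (1.0 - SFD) * y[0]
            y[0] -= killed
            y[1] += killed
            y = solve_ivp(rhs, (0.0, 1.0), y).y[:, -1]   # evolve one day
        print("total cells after treatment:", y.sum())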

  12. Classical nucleation theory of homogeneous freezing of water: thermodynamic and kinetic parameters.

    PubMed

    Ickes, Luisa; Welti, André; Hoose, Corinna; Lohmann, Ulrike

    2015-02-28

    The probability of homogeneous ice nucleation under a set of ambient conditions can be described by nucleation rates using the theoretical framework of Classical Nucleation Theory (CNT). This framework consists of kinetic and thermodynamic parameters, of which three are not well-defined (namely the interfacial tension between ice and water, the activation energy and the prefactor), so that any CNT-based parameterization of homogeneous ice formation is less well-constrained than desired for modeling applications. Different approaches to estimate the thermodynamic and kinetic parameters of CNT are reviewed in this paper and the sensitivity of the calculated nucleation rate to the choice of parameters is investigated. We show that nucleation rates are very sensitive to this choice. The sensitivity is governed by one parameter - the interfacial tension between ice and water, which determines the energetic barrier of the nucleation process. The calculated nucleation rate can differ by more than 25 orders of magnitude depending on the choice of parameterization for this parameter. The second most important parameter is the activation energy of the nucleation process. It can lead to a variation of 16 orders of magnitude. By estimating the nucleation rate from a collection of droplet freezing experiments from the literature, the dependence of these two parameters on temperature is narrowed down. It can be seen that the temperature behavior of these two parameters assumed in the literature does not match the nucleation rates predicted from the fit in most cases. Moreover, a comparison of all possible combinations of theoretical parameterizations of the dominant two free parameters shows that one combination fits the empirically derived nucleation rates best: a description of the interfacial tension from a molecular model [Reinhardt and Doye, J. Chem. Phys., 2013, 139, 096102] combined with the activation energy derived from self-diffusion measurements [Zobrist et al., J. Phys. Chem. C, 2007, 111, 2149]. However, some fundamental understanding of the processes is still missing, and further research might help to tackle this problem. The most important questions, which need to be answered to constrain CNT, are raised in this study.
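
    For orientation, the CNT rate has the form J = A exp(-ΔG*/kT) with ΔG* ∝ σ³, which is why the interfacial tension dominates the result; a sketch with schematic numbers (the prefactor and driving force are invented) reproduces the quoted orders-of-magnitude spread:

        # Classical nucleation theory sketch: J = A * exp(-dG_star / (k*T)) with
        # dG_star = 16*pi*sigma**3 / (3 * dmu_v**2); all numbers are schematic.
        import numpy as np

        k = 1.380649e-23      # J/K, Boltzmann constant
        T = 235.0             # K, supercooled water
        A = 1e41              # m^-3 s^-1, schematic kinetic prefactor
        dmu_v = 3.2e7         # J/m^3, schematic volumetric driving force

        def nucleation_rate(sigma):
            dG_star = 16 * np.pi * sigma**3 / (3 * dmu_v**2)
            return A * np.exp(-dG_star / (k * T))

        for sigma in (0.020, 0.025, 0.030):   # J/m^2, plausible ice/water range
            print(sigma, nucleation_rate(sigma))
        # A ~50% change in sigma moves J by tens of orders of magnitude.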

  13. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration is reliant on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-qualitative information into robust quantitative constraints on model states and fluxes, and to combine these sources of information to reject models within an efficient calibration framework. Here we present the development of a framework to incorporate different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat and valley slopes within the catchment is used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constrain model states.
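
    A minimal sketch of the interval-based rejection step described above: each piece of perceptual understanding becomes an inequality on simulated states, and a parameter set is kept only if every constraint is honoured (the stand-in simulator and the constraints are hypothetical):

        # Reject non-behavioural models against inequality constraints derived
        # from perceptual understanding; simulator and constraints are stand-ins.
        import numpy as np

        def simulate(seed):
            """Stand-in for a distributed model run returning mean water table
            depths (m below surface) for three landscape units."""
            rng = np.random.default_rng(seed)
            return {"lowland_peat": rng.uniform(0, 2),
                    "upland_peat": rng.uniform(0, 3),
                    "valley_slope": rng.uniform(1, 8)}

        def behavioural(states):
            # Perceptual constraints: peat stays wetter (shallower water table)
            # than the slopes, and lowland peat is the wettest unit of all.
            return (states["lowland_peat"] <= states["upland_peat"]
                    <= states["valley_slope"])

        kept = [s for s in range(500) if behavioural(simulate(s))]
        print(f"{len(kept)} of 500 sampled parameter sets are behavioural")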

  14. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    NASA Astrophysics Data System (ADS)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution only for generic nonprecessing binaries, drawing inferences away from the set of NR simulations used via interpolation of a single scalar quantity (the marginalized log likelihood, ln L) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤ 2 and l ≤ 3 harmonic modes. Using the l ≤ 3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.

  15. The Atacama Cosmology Telescope: cosmological parameters from three seasons of data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sievers, Jonathan L.; Appel, John William; Hlozek, Renée A.

    2013-10-01

    We present constraints on cosmological and astrophysical parameters from high-resolution microwave background maps at 148 GHz and 218 GHz made by the Atacama Cosmology Telescope (ACT) in three seasons of observations from 2008 to 2010. A model of primary cosmological and secondary foreground parameters is fit to the map power spectra and lensing deflection power spectrum, including contributions from both the thermal Sunyaev-Zeldovich (tSZ) effect and the kinematic Sunyaev-Zeldovich (kSZ) effect, Poisson and correlated anisotropy from unresolved infrared sources, radio sources, and the correlation between the tSZ effect and infrared sources. The power ℓ{sup 2}C{sub ℓ}/2π of the thermal SZ power spectrum at 148 GHz is measured to be 3.4±1.4 μK{sup 2} at ℓ = 3000, while the corresponding amplitude of the kinematic SZ power spectrum has a 95% confidence level upper limit of 8.6 μK{sup 2}. Combining ACT power spectra with the WMAP 7-year temperature and polarization power spectra, we find excellent consistency with the LCDM model. We constrain the number of effective relativistic degrees of freedom in the early universe to be N{sub eff} = 2.79±0.56, in agreement with the canonical value of N{sub eff} = 3.046 for three massless neutrinos. We constrain the sum of the neutrino masses to be Σm{sub ν} < 0.39 eV at 95% confidence when combining ACT and WMAP 7-year data with BAO and Hubble constant measurements. We constrain the amount of primordial helium to be Y{sub p} = 0.225±0.034, and measure no variation in the fine structure constant α since recombination, with α/α{sub 0} = 1.004±0.005. We also find no evidence for any running of the scalar spectral index, dn{sub s}/dln k = −0.004±0.012.

  16. The Atacama Cosmology Telescope: Cosmological Parameters from Three Seasons of Data

    NASA Technical Reports Server (NTRS)

    Sievers, Jonathan L.; Hlozek, Renee A.; Nolta, Michael R.; Acquaviva, Viviana; Addison, Graeme E.; Ade, Peter A. R.; Aguirre, Paula; Amiri, Mandana; Appel, John W.; Barrientos, L. Felipe; et al.

    2013-01-01

    We present constraints on cosmological and astrophysical parameters from high-resolution microwave background maps at 148 GHz and 218 GHz made by the Atacama Cosmology Telescope (ACT) in three seasons of observations from 2008 to 2010. A model of primary cosmological and secondary foreground parameters is fit to the map power spectra and lensing deflection power spectrum, including contributions from both the thermal Sunyaev-Zeldovich (tSZ) effect and the kinematic Sunyaev-Zeldovich (kSZ) effect, Poisson and correlated anisotropy from unresolved infrared sources, radio sources, and the correlation between the tSZ effect and infrared sources. The power l(sup 2)C(sub l)/2pi of the thermal SZ power spectrum at 148 GHz is measured to be 3.4 +/- 1.4 micro-K(sup 2) at l = 3000, while the corresponding amplitude of the kinematic SZ power spectrum has a 95% confidence level upper limit of 8.6 micro-K(sup 2). Combining ACT power spectra with the WMAP 7-year temperature and polarization power spectra, we find excellent consistency with the LCDM model. We constrain the number of effective relativistic degrees of freedom in the early universe to be N(sub eff) = 2.79 +/- 0.56, in agreement with the canonical value of N(sub eff) = 3.046 for three massless neutrinos. We constrain the sum of the neutrino masses to be Sigma(m(sub nu)) is less than 0.39 eV at 95% confidence when combining ACT and WMAP 7-year data with BAO and Hubble constant measurements. We constrain the amount of primordial helium to be Y(sub p) = 0.225 +/- 0.034, and measure no variation in the fine structure constant alpha since recombination, with alpha/alpha(sub 0) = 1.004 +/- 0.005. We also find no evidence for any running of the scalar spectral index, dn(sub s)/d(ln k) = -0.004 +/- 0.012.

  17. Made-to-measure modelling of observed galaxy dynamics

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Kawata, Daisuke; Hunt, Jason A. S.

    2018-01-01

    Amongst dynamical modelling techniques, the made-to-measure (M2M) method for modelling steady-state systems is one of the most flexible, allowing non-parametric distribution functions in complex gravitational potentials to be modelled efficiently using N-body particles. Here, we propose and test various improvements to the standard M2M method for modelling observed data, illustrated using the simple set-up of a one-dimensional harmonic oscillator. We demonstrate that nuisance parameters describing the modelled system's orientation with respect to the observer - e.g. an external galaxy's inclination or the Sun's position in the Milky Way - as well as the parameters of an external gravitational field can be optimized simultaneously with the particle weights. We develop a method for sampling from the high-dimensional uncertainty distribution of the particle weights. We combine this in a Gibbs sampler with samplers for the nuisance and potential parameters to explore the uncertainty distribution of the full set of parameters. We illustrate our M2M improvements by modelling the vertical density and kinematics of F-type stars in Gaia DR1. The novel M2M method proposed here allows full probabilistic modelling of steady-state dynamical systems, allowing uncertainties on the non-parametric distribution function and on nuisance parameters to be taken into account when constraining the dark and baryonic masses of stellar systems.
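
    The core M2M ingredient referred to above is a "force-of-change" update that nudges particle weights until weighted model observables match the data; a sketch for a 1-D density target (the target profile, binning, and step size are illustrative, not the paper's exact scheme):

        # Made-to-measure weight adaptation: adjust particle weights so the
        # weighted density histogram matches a target; all settings illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.normal(0.0, 2.0, 5000)            # fixed particle positions
        w = np.ones_like(x) / x.size              # initial particle weights
        bins = np.linspace(-5, 5, 21)
        target, _ = np.histogram(rng.normal(0.0, 1.0, 200000), bins=bins,
                                 density=True)    # "observed" density profile

        eps = 0.05                                # force-of-change step size
        for it in range(2000):
            model, _ = np.histogram(x, bins=bins, weights=w, density=True)
            delta = model - target                # residual in each bin
            idx = np.clip(np.digitize(x, bins) - 1, 0, delta.size - 1)
            w *= 1.0 - eps * delta[idx]           # push weights down residuals
            w = np.clip(w, 1e-12, None)
            w /= w.sum()
        print("final max |residual|:", np.abs(model - target).max())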

  18. MODOPTIM: A general optimization program for ground-water flow model calibration and ground-water management with MODFLOW

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
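
    Two of the safeguards described above can be sketched in a few lines: the quasi-Newton step is clipped to a user-defined maximum parameter change per iteration, and parameters less sensitive than a threshold are frozen (the parameter values and thresholds are invented, not MODOPTIM's internals):

        # Safeguarded parameter update: clip the proposed step to a maximum
        # relative change and freeze insensitive parameters. Values invented.
        import numpy as np

        def safeguarded_step(p, step, sensitivity, max_change=0.5, sens_min=1e-3):
            step = np.where(np.abs(sensitivity) < sens_min, 0.0, step)  # freeze
            limit = max_change * np.abs(p)      # per-iteration change cap
            return p + np.clip(step, -limit, limit)

        p = np.array([10.0, 2.0e-4, 55.0])      # e.g. K, storativity, recharge
        step = np.array([12.0, -1.0e-4, 0.5])   # raw quasi-Newton step
        sens = np.array([0.8, 0.05, 1e-5])      # scaled sensitivities
        print(safeguarded_step(p, step, sens))  # -> [15.0, 1.0e-4, 55.0]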

  19. Search for Muonic Dark Forces at BABAR

    NASA Astrophysics Data System (ADS)

    Godang, Romulus

    2017-04-01

    Many models of physics beyond the Standard Model predict the existence of light Higgs states, dark photons, and new gauge bosons mediating interactions between dark sectors and the Standard Model. Using the full data sample collected with the BABAR detector at the PEP-II e+e- collider, we report searches for a light non-Standard-Model Higgs boson, a dark photon, and a new muonic dark force mediated by a gauge boson (Z') coupling only to the second and third lepton families. Our results significantly improve upon the current bounds and further constrain the remaining allowed region of parameter space.

  20. Using whole disease modeling to inform resource allocation decisions: economic evaluation of a clinical guideline for colorectal cancer using a single model.

    PubMed

    Tappenden, Paul; Chilcott, Jim; Brennan, Alan; Squires, Hazel; Glynne-Jones, Rob; Tappenden, Janine

    2013-06-01

    To assess the feasibility and value of simulating whole disease and treatment pathways within a single model to provide a common economic basis for informing resource allocation decisions. A patient-level simulation model was developed with the intention of being capable of evaluating multiple topics within National Institute for Health and Clinical Excellence's colorectal cancer clinical guideline. The model simulates disease and treatment pathways from preclinical disease through to detection, diagnosis, adjuvant/neoadjuvant treatments, follow-up, curative/palliative treatments for metastases, supportive care, and eventual death. The model parameters were informed by meta-analyses, randomized trials, observational studies, health utility studies, audit data, costing sources, and expert opinion. Unobservable natural history parameters were calibrated against external data using Bayesian Markov chain Monte Carlo methods. Economic analysis was undertaken using conventional cost-utility decision rules within each guideline topic and constrained maximization rules across multiple topics. Under usual processes for guideline development, piecewise economic modeling would have been used to evaluate between one and three topics. The Whole Disease Model was capable of evaluating 11 of 15 guideline topics, ranging from alternative diagnostic technologies through to treatments for metastatic disease. The constrained maximization analysis identified a configuration of colorectal services that is expected to maximize quality-adjusted life-year gains without exceeding current expenditure levels. This study indicates that Whole Disease Model development is feasible and can allow for the economic analysis of most interventions across a disease service within a consistent conceptual and mathematical infrastructure. This disease-level modeling approach may be of particular value in providing an economic basis to support other clinical guidelines. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
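
    The cross-topic decision rule can be illustrated as a toy budget-constrained selection: choose one option per topic to maximize total QALY gain without exceeding current expenditure (the topics, options, costs, and QALY gains below are invented for illustration):

        # Toy constrained maximization across topics: pick one option per topic
        # to maximize QALYs within a budget. Brute force; all data invented.
        from itertools import product

        topics = {                   # option: (extra cost, QALY gain) per patient
            "screening": {"FOBT": (50, 0.010), "colonoscopy": (300, 0.018)},
            "follow_up": {"minimal": (0, 0.000), "intensive": (120, 0.006)},
            "metastatic": {"doublet": (8000, 0.050), "triplet": (15000, 0.070)},
        }
        budget = 8400

        best = None
        for combo in product(*(opts.items() for opts in topics.values())):
            cost = sum(c for _, (c, _) in combo)
            qaly = sum(q for _, (_, q) in combo)
            if cost <= budget and (best is None or qaly > best[0]):
                best = (qaly, cost, [name for name, _ in combo])
        print("best configuration:", best)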

  1. exocartographer: Constraining surface maps and orbital parameters of exoplanets

    NASA Astrophysics Data System (ADS)

    Farr, Ben; Farr, Will M.; Cowan, Nicolas B.; Haggard, Hal M.; Robinson, Tyler

    2018-05-01

    exocartographer solves the exo-cartography inverse problem. This flexible forward-modeling framework, written in Python, retrieves the albedo map and spin geometry of a planet based on time-resolved photometry; it uses a Markov chain Monte Carlo method to extract albedo maps and planet spin and their uncertainties. Gaussian processes fit the characteristic length scale of the map from the data and enforce smooth maps.
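
    The retrieval pattern the abstract describes can be sketched with the emcee sampler on a toy two-parameter reflected-light model; this illustrates the general approach, not exocartographer's actual internals:

        # MCMC retrieval of (albedo, phase offset) from a toy rotational light
        # curve with emcee; the forward model is a stand-in.
        import numpy as np
        import emcee

        rng = np.random.default_rng(11)
        t = np.linspace(0, 1, 60)                 # one rotation, arbitrary units

        def flux(theta, t):
            albedo, phi = theta
            return albedo * (1.0 + 0.5 * np.cos(2 * np.pi * (t - phi)))

        y = flux([0.3, 0.2], t) + rng.normal(0, 0.005, t.size)

        def log_prob(theta):
            if not (0 < theta[0] < 1 and 0 < theta[1] < 1):
                return -np.inf                    # flat priors on both parameters
            return -0.5 * np.sum((y - flux(theta, t))**2 / 0.005**2)

        sampler = emcee.EnsembleSampler(32, 2, log_prob)
        p0 = rng.uniform(0.1, 0.9, (32, 2))
        sampler.run_mcmc(p0, 2000)
        flat = sampler.get_chain(discard=500, flat=True)
        print("posterior medians:", np.median(flat, axis=0))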

  2. Plan View Pattern Control for Steel Plates through Constrained Locally Weighted Regression

    NASA Astrophysics Data System (ADS)

    Shigemori, Hiroyasu; Nambu, Koji; Nagao, Ryo; Araki, Tadashi; Mizushima, Narihito; Kano, Manabu; Hasebe, Shinji

    A technique for performing parameter identification in a locally weighted regression model using foresight information on the physical properties of the object of interest as constraints was proposed. This method was applied to plan view pattern control of steel plates, and a reduction of shape nonconformity (crop) at the plate head end was confirmed by computer simulation based on real operation data.

  3. Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Huang, Q.

    2017-12-01

    Conventional magnetotelluric (MT) inversion methods rarely recover underground resistivity distributions with clear boundaries, even when the subsurface contains distinctly different blocks. To address this problem, we develop a Bayesian framework for inverting 2D MT data with sharp boundaries, using the boundary locations and the interior resistivities as the random variables. First, we use the results of other MT inversions, such as ModEM, to characterize the resistivity distribution roughly. Then we select suitable random variables and convert them to traditional staggered-grid parameters, which are used in the finite-difference forward modeling. Finally, we construct the posterior probability density (PPD), which combines all prior information with the model-data correlation, by Markov Chain Monte Carlo (MCMC) sampling from the prior distribution. The depth, the resistivity, and their uncertainties can then be evaluated, and the approach also supports sensitivity estimation. We applied the method to a synthetic case comprising two large anomalous blocks in a uniform background. When we impose boundary-smoothness and near-true-model weighting constraints that mimic joint or constrained inversion, the model yields a more precise and focused depth distribution. An inversion without constraints can also recover the boundaries, though not as well, and both inversions estimate the resistivity accurately. The constrained result has a lower root mean square misfit than the ModEM inversion result. The sensitivities obtained via the PPD show that the resistivity is the best resolved parameter, the center depth comes second, and the block edges are the worst resolved.
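
    The sampling loop such an inversion rests on can be sketched with a Metropolis update over boundary depth and block resistivities; the toy forward operator below stands in for a real 2D MT finite-difference solver, and all numbers are hypothetical:

        # Metropolis sampling over (boundary depth, two resistivities); the
        # forward operator is a smooth stand-in for an MT FD solver.
        import numpy as np

        rng = np.random.default_rng(42)

        def forward(m):
            depth, rho1, rho2 = m
            return np.array([np.log10(rho1), np.log10(rho2), depth / 10.0])

        d_obs = forward([5.0, 100.0, 10.0]) + rng.normal(0, 0.05, 3)
        sigma = 0.05

        def log_post(m):
            if not (1 < m[0] < 20 and 1 < m[1] < 1e4 and 1 < m[2] < 1e4):
                return -np.inf                    # uniform prior bounds
            r = (forward(m) - d_obs) / sigma
            return -0.5 * r @ r

        m = np.array([10.0, 50.0, 50.0])
        lp = log_post(m)
        chain = []
        for i in range(20000):
            prop = m + rng.normal(0, [0.5, 5.0, 5.0])
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
                m, lp = prop, lp_prop
            chain.append(m.copy())
        print("posterior mean:", np.array(chain[5000:]).mean(axis=0))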

  4. Explorations in dark energy

    NASA Astrophysics Data System (ADS)

    Bozek, Brandon

    This dissertation describes three research projects on the topic of dark energy. The first project is an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their "w0-wa" parameterization of the dark energy. We find that the respective increases in constraining power from one stage to the next produced by our analysis are consistent with the DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF Stage 2, and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3 sigma. The second project repeats this analysis for an Inverse Power Law (IPL), or "Ratra-Peebles" (RP), model. This model is a member of a popular subset of scalar field quintessence models that exhibit "tracking" behavior, which makes it particularly interesting theoretically. We find that the relative increase in constraining power on the parameter space of this model is consistent with what was found in the first project and the DETF report. We also show, using a background cosmology based on an IPL scalar field model that is consistent with a cosmological constant for Stage 2 data, that good DETF Stage 4 data would exclude a cosmological constant by better than 3 sigma. The third project extends the Causal Entropic Principle to predict the preferred curvature within the "multiverse". The Causal Entropic Principle (Bousso et al.) provides an alternative approach to anthropic attempts to predict our observed value of the cosmological constant by calculating the entropy created within a causal diamond. We find that values larger than rho_k = 40 rho_m are disfavored by more than 99.99%, with a peak value at rho_Lambda = 7.9 x 10^-123 and rho_k = 4.3 rho_m for open universes. For universes that allow only positive curvature or both positive and negative curvature, we find a correlation between curvature and dark energy that leads to an extended region of preferred values. Our universe is found to be disfavored to an extent depending on the priors on curvature. We also provide a comparison to previous anthropic constraints on open universes and discuss future directions for this work.

  5. A framework for streamflow prediction in the world's most severely data-limited regions: Test of applicability and performance in a poorly-gauged region of China

    NASA Astrophysics Data System (ADS)

    Alipour, M. H.; Kibler, Kelly M.

    2018-02-01

    A framework methodology is proposed for streamflow prediction in poorly-gauged rivers located within large-scale regions of sparse hydrometeorologic observation. A multi-criteria model evaluation is developed to select models that balance runoff efficiency with selection of accurate parameter values. Sparse observed data are supplemented by uncertain or low-resolution information, incorporated as 'soft' data, to estimate parameter values a priori. Model performance is tested in two catchments within a data-poor region of southwestern China, and results are compared to models selected using alternative calibration methods. While all models perform consistently with respect to runoff efficiency (NSE range of 0.67-0.78), models selected using the proposed multi-objective method may incorporate more representative parameter values than those selected by traditional calibration. Notably, parameter values estimated by the proposed method resonate with direct estimates of catchment subsurface storage capacity (parameter residuals of 20 and 61 mm for maximum soil moisture capacity (Cmax), and 0.91 and 0.48 for soil moisture distribution shape factor (B); where a parameter residual is equal to the centroid of a soft parameter value minus the calibrated parameter value). A model more traditionally calibrated to observed data only (single-objective model) estimates a much lower soil moisture capacity (residuals of Cmax = 475 and 518 mm and B = 1.24 and 0.7). A constrained single-objective model also underestimates maximum soil moisture capacity relative to a priori estimates (residuals of Cmax = 246 and 289 mm). The proposed method may allow managers to more confidently transfer calibrated models to ungauged catchments for streamflow predictions, even in the world's most data-limited regions.

  6. Inverting permittivity and conductivity with a structural constraint in GPR FWI based on the truncated Newton method

    NASA Astrophysics Data System (ADS)

    Ren, Qianci

    2018-04-01

    Full waveform inversion (FWI) of ground penetrating radar (GPR) data is a promising technique to quantitatively evaluate the permittivity and conductivity of the near subsurface. However, these two parameters are inverted simultaneously in GPR FWI, which makes it difficult to obtain accurate inversion results for both. In this study, I present a structurally constrained GPR FWI procedure to jointly invert the two parameters, aiming to enforce a structural relationship between permittivity and conductivity during model reconstruction. The structural constraint is enforced by a cross-gradient function. In this procedure, the permittivity and conductivity models are inverted alternately at each iteration and updated with hierarchical frequency components in the frequency domain. The joint inverse problem is solved by the truncated Newton method, which accounts for the effect of the Hessian operator and uses the approximate solution of the Newton equation as the perturbation model in the updating process. The joint inversion procedure is tested on three synthetic examples. The results show that jointly inverting permittivity and conductivity in GPR FWI effectively increases the structural similarity between the two parameters, corrects the structures of the parameter models, and significantly improves the accuracy of the conductivity model, resulting in a better inversion result than individual inversion.
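
    The structural coupling mentioned above is conventionally measured by the cross-gradient function t = ∇m1 × ∇m2, which vanishes wherever the two parameter fields change in the same directions; a sketch of its discrete evaluation on synthetic models:

        # Discrete cross-gradient between permittivity and conductivity fields
        # on a 2D grid; t -> 0 where structures align. Fields are synthetic.
        import numpy as np

        ny, nx = 64, 64
        _, xg = np.mgrid[0:ny, 0:nx]
        perm = 4.0 + 2.0 * (xg > 32)             # shared vertical boundary
        cond = 0.001 + 0.01 * (xg > 32)

        def cross_gradient(m1, m2, dx=1.0, dz=1.0):
            g1z, g1x = np.gradient(m1, dz, dx)   # d/dz (rows), d/dx (columns)
            g2z, g2x = np.gradient(m2, dz, dx)
            return g1x * g2z - g1z * g2x         # out-of-plane cross product

        t = cross_gradient(perm, cond)
        print("max |t| for structurally aligned models:", np.abs(t).max())  # ~0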

  7. Joint Far-field and Near-field GPS Observations to Modify the Fault Slip Models of the 2011 Tohoku-Oki Earthquake (Mw 9.0)

    NASA Astrophysics Data System (ADS)

    Yang, J.; Yi, S.; Sun, W.

    2016-12-01

    Significant displacements caused by the 2011 Tohoku-Oki earthquake (Mw 9.0) were detected by GPS stations in the north and northeast of the Asian continent belonging to the Crustal Movement Observation Network of China (CMONOC). Horizontal displacements of up to about 3 cm and 2 cm were recorded at many GPS stations, most of them directed eastward toward the epicenter of the earthquake. These data can be acquired from CMONOC rapidly after an earthquake. Here we discuss how to estimate the seismic moment from such far-field GPS observations. The far-field displacements can constrain the pattern of the finite slip model and the seismic moment using a spherically stratified Earth model (PREM). We give a general rule of thumb for how far-field GPS observations are affected by the earthquake parameters. Worldwide, 27 large earthquakes (magnitude greater than Mw 8.0) have occurred since 1990, most of them subduction-type events with low rake angles. Their far-field GPS observations are mainly controlled by the Y22 component, so far-field GPS observations can potentially constrain one or two components of the focal mechanism. By jointly inverting far-field and near-field GPS data for the 2011 Tohoku-Oki earthquake, we obtain a more accurate finite slip model. This work thus demonstrates a new method for using far-field GPS data to constrain fault slip models.

  8. Application of a data assimilation method via an ensemble Kalman filter to reactive urea hydrolysis transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juxiu Tong; Bill X. Hu; Hai Huang

    2014-03-01

    With the growing importance of water resources in the world, remediation of anthropogenic contamination involving reactive solute transport becomes ever more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters due to the complexity of chemical reactions and the limited available data. To obtain the reactive rate parameters for modeling reactive urea hydrolysis transport and to improve predictions of the chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the model predictions. We applied a constrained EnKF method to impose constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we conclude that the data assimilation method via the EnKF can efficiently improve the chemical reactive rate parameters and, at the same time, the solute concentration predictions. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also addressed in this study.
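
    The constrained analysis step described above can be sketched as a standard ensemble Kalman update of an augmented state (concentration plus rate parameter) followed by clipping to physical bounds; the ensemble size, bounds, and observation operator below are illustrative:

        # Constrained EnKF analysis step for an augmented state [conc, k]:
        # standard update, then members clipped to physical bounds. Illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        ne = 100                                  # ensemble size
        X = np.vstack([rng.normal(1.0, 0.3, ne),  # urea concentration (mol/L)
                       rng.normal(0.5, 0.2, ne)]) # hydrolysis rate parameter
        H = np.array([[1.0, 0.0]])                # only concentration is observed
        R = np.array([[0.01]])                    # observation error variance
        y_obs = 0.8

        A = X - X.mean(axis=1, keepdims=True)
        P = A @ A.T / (ne - 1)                    # ensemble covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain

        Y = y_obs + rng.normal(0.0, np.sqrt(R[0, 0]), ne)  # perturbed obs
        X = X + K @ (Y[None, :] - H @ X)          # analysis update

        X[0] = np.clip(X[0], 0.0, None)           # concentration must be >= 0
        X[1] = np.clip(X[1], 1e-6, 5.0)           # rate parameter kept positive
        print("posterior means:", X.mean(axis=1))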

  9. Use of inequality constrained least squares estimation in small area estimation

    NASA Astrophysics Data System (ADS)

    Abeygunawardana, R. A. B.; Wickremasinghe, W. N.

    2017-05-01

    Traditional surveys provide estimates that are based only on the sample observations collected for the population characteristic of interest. However, these estimates may have unacceptably large variance for certain domains. Small Area Estimation (SAE) deals with determining precise and accurate estimates for population characteristics of interest for such domains. SAE usually uses least squares or maximum likelihood procedures incorporating prior information and current survey data. Many available methods in SAE use constraints in equality form. However, there are practical situations where certain inequality restrictions on model parameters are more realistic. If the estimation method is least squares, this leads to Inequality Constrained Least Squares (ICLS) estimates. In this study, the ICLS estimation procedure is applied to many proposed small area estimates.
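
    An inequality-constrained least-squares fit of the kind referred to above can be sketched with scipy's bounded linear least squares; the regression problem and the bounds are toy values:

        # Least squares with inequality (bound) constraints via lsq_linear;
        # the design matrix, data, and bounds are illustrative.
        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(3)
        A = rng.normal(size=(50, 2))
        b = A @ np.array([0.8, 1.5]) + rng.normal(0, 0.1, 50)

        # Require both coefficients non-negative and the first at most 1.
        res = lsq_linear(A, b, bounds=([0.0, 0.0], [1.0, np.inf]))
        print("ICLS estimate:", res.x)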

  10. Maximum entropy modeling of metabolic networks by constraining growth-rate moments predicts coexistence of phenotypes

    NASA Astrophysics Data System (ADS)

    De Martino, Daniele

    2017-12-01

    In this work, maximum entropy distributions in the space of steady states of metabolic networks are considered upon constraining the first and second moments of the growth rate. Coexistence of fast and slow phenotypes, with bimodal flux distributions, emerges upon considering control on the average growth (optimization) and on its fluctuations (heterogeneity). This is applied to the carbon catabolic core of Escherichia coli, where it quantifies the metabolic activity of slow-growing phenotypes and provides a quantitative map with metabolic fluxes, opening the possibility of detecting coexistence from flux data. A preliminary analysis of data for E. coli cultures in standard conditions shows a degeneracy of the inferred parameters that extends into the coexistence region.
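
    The construction named above has an explicit form: with the first two moments of the growth rate g constrained, the maximum entropy distribution is p(g) ∝ exp(λ1 g + λ2 g²), and the multipliers can be found numerically (the support and target moments below are arbitrary):

        # Solve for Lagrange multipliers of p(g) ~ exp(l1*g + l2*g**2) on a grid
        # so the first two moments match targets; targets are arbitrary examples.
        import numpy as np
        from scipy.optimize import fsolve

        g = np.linspace(0.0, 2.0, 2001)           # growth-rate support (1/h)
        dg = g[1] - g[0]
        target = np.array([1.0, 1.0**2 + 0.3**2]) # <g> and <g^2>

        def moment_residuals(lmbda):
            e = lmbda[0] * g + lmbda[1] * g**2
            w = np.exp(e - e.max())               # avoid overflow
            w /= w.sum() * dg                     # normalize the density
            m1 = (w * g).sum() * dg
            m2 = (w * g**2).sum() * dg
            return [m1 - target[0], m2 - target[1]]

        lmbda = fsolve(moment_residuals, x0=[1.0, -1.0])
        print("multipliers (l1, l2):", lmbda)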

  11. MAGIC observations of the February 2014 flare of 1ES 1011+496 and ensuing constraint of the EBL density

    NASA Astrophysics Data System (ADS)

    Ahnen, M. L.; Ansoldi, S.; Antonelli, L. A.; Antoranz, P.; Babic, A.; Banerjee, B.; Bangale, P.; Barres de Almeida, U.; Barrio, J. A.; Becerra González, J.; Bednarek, W.; Bernardini, E.; Biasuzzi, B.; Biland, A.; Blanch, O.; Bonnefoy, S.; Bonnoli, G.; Borracci, F.; Bretz, T.; Carmona, E.; Carosi, A.; Chatterjee, A.; Clavero, R.; Colin, P.; Colombo, E.; Contreras, J. L.; Cortina, J.; Covino, S.; Da Vela, P.; Dazzi, F.; De Angelis, A.; De Lotto, B.; de Oña Wilhelmi, E.; Delgado Mendez, C.; Di Pierro, F.; Dominis Prester, D.; Dorner, D.; Doro, M.; Einecke, S.; Eisenacher Glawion, D.; Elsaesser, D.; Fernández-Barral, A.; Fidalgo, D.; Fonseca, M. V.; Font, L.; Frantzen, K.; Fruck, C.; Galindo, D.; García López, R. J.; Garczarczyk, M.; Garrido Terrats, D.; Gaug, M.; Giammaria, P.; Godinović, N.; González Muñoz, A.; Guberman, D.; Hahn, A.; Hanabata, Y.; Hayashida, M.; Herrera, J.; Hose, J.; Hrupec, D.; Hughes, G.; Idec, W.; Kodani, K.; Konno, Y.; Kubo, H.; Kushida, J.; La Barbera, A.; Lelas, D.; Lindfors, E.; Lombardi, S.; Longo, F.; López, M.; López-Coto, R.; López-Oramas, A.; Lorenz, E.; Majumdar, P.; Makariev, M.; Mallot, K.; Maneva, G.; Manganaro, M.; Mannheim, K.; Maraschi, L.; Marcote, B.; Mariotti, M.; Martínez, M.; Mazin, D.; Menzel, U.; Miranda, J. M.; Mirzoyan, R.; Moralejo, A.; Moretti, E.; Nakajima, D.; Neustroev, V.; Niedzwiecki, A.; Nievas Rosillo, M.; Nilsson, K.; Nishijima, K.; Noda, K.; Orito, R.; Overkemping, A.; Paiano, S.; Palacio, J.; Palatiello, M.; Paneque, D.; Paoletti, R.; Paredes, J. M.; Paredes-Fortuny, X.; Persic, M.; Poutanen, J.; Prada Moroni, P. G.; Prandini, E.; Puljak, I.; Rhode, W.; Ribó, M.; Rico, J.; Rodriguez Garcia, J.; Saito, T.; Satalecka, K.; Schultz, C.; Schweizer, T.; Shore, S. N.; Sillanpää, A.; Sitarek, J.; Snidaric, I.; Sobczynska, D.; Stamerra, A.; Steinbring, T.; Strzys, M.; Takalo, L.; Takami, H.; Tavecchio, F.; Temnikov, P.; Terzić, T.; Tescaro, D.; Teshima, M.; Thaele, J.; Torres, D. F.; Toyama, T.; Treves, A.; Verguilov, V.; Vovk, I.; Ward, J. E.; Will, M.; Wu, M. H.; Zanin, R.

    2016-05-01

    Context. During February-March 2014, the MAGIC telescopes observed the high-frequency peaked BL Lac 1ES 1011+496 (z = 0.212) in a flaring state at very high energies (VHE, E > 100 GeV). The flux reached a level more than ten times higher than in any previously recorded flaring state of the source. Aims: To describe the characteristics of the flare by presenting the light curve, the spectral parameters of the night-wise spectra, and the average spectrum of the whole period. From these data we aim to detect the imprint of the extragalactic background light (EBL) in the VHE spectrum of the source, and to constrain its intensity in the optical band. Methods: We analyzed the gamma-ray data from the MAGIC telescopes using the standard MAGIC software for the production of the light curve and the spectra. To constrain the EBL, we implement the method developed by the H.E.S.S. collaboration, in which the intrinsic energy spectrum of the source is modeled with a simple function (≤4 parameters), and the EBL-induced optical depth is calculated using a template EBL model. The likelihood of the observed spectrum is then maximized, including a normalization factor for the EBL opacity among the free parameters. Results: The collected data allowed us to describe the night-wise flux changes and also to produce differential energy spectra for all nights in the observed period. The estimated intrinsic spectra of all the nights could be fitted by power-law functions. Evaluating the changes in the fit parameters, we conclude that the spectral shapes for most of the nights were compatible, regardless of the flux level, which enabled us to produce an average spectrum from which the EBL imprint could be constrained. The likelihood ratio test shows that the model with an EBL density 1.07 (-0.20, +0.24)stat+sys, relative to the one in the tested EBL template, is preferred at the 4.6σ level over the no-EBL hypothesis, under the assumption that the intrinsic source spectrum can be modeled as a log-parabola. This translates into a constraint on the EBL density in the wavelength range [0.24 μm, 4.25 μm], with a peak value at 1.4 μm of λFλ = 12.27-2.29+2.75 nW m-2 sr-1, including systematics.

  12. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining the ensemble Kalman filter with a kernel smoothing technique. The SEnKF has the following characteristics: (1) it estimates model states and parameters simultaneously by concatenating unknown parameters and state variables into a joint state vector; (2) it mitigates dramatic, sudden changes of parameter values in the parameter sampling and evolution process, and controls the narrowing of parameter variance (which results in filter divergence) by adjusting the smoothing factor in the kernel smoothing algorithm; (3) it recursively assimilates data into the model and thus detects possible time variation of parameters; and (4) it properly addresses the various sources of uncertainty stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. © 2008 Elsevier B.V.
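
    The kernel-smoothing step that distinguishes the SEnKF from a plain EnKF can be sketched with the standard shrinkage rule: each parameter member is pulled toward the ensemble mean and re-jittered so the spread is preserved rather than collapsing (the smoothing factor and parameter values are illustrative):

        # Kernel smoothing of ensemble parameter members (shrinkage plus
        # jitter): damps abrupt jumps while preserving ensemble variance.
        import numpy as np

        rng = np.random.default_rng(7)
        theta = rng.normal(1.2, 0.5, 200)        # e.g. light-use-efficiency members

        delta = 0.95                             # smoothing factor in (0, 1]
        a = (3.0 * delta - 1.0) / (2.0 * delta)  # shrinkage coefficient
        h2 = 1.0 - a**2                          # fraction of variance re-added
        var = theta.var()

        theta_s = a * theta + (1.0 - a) * theta.mean()
        theta_s += rng.normal(0.0, np.sqrt(h2 * var), theta.size)
        print("variance before/after:", var, theta_s.var())  # nearly equal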

  13. The California-Kepler Survey. II. Precise Physical Properties of 2025 Kepler Planets and Their Host Stars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, John Asher; Cargile, Phillip A.; Sinukoff, Evan

    We present stellar and planetary properties for 1305 Kepler Objects of Interest hosting 2025 planet candidates observed as part of the California-Kepler Survey. We combine spectroscopic constraints, presented in Paper I, with stellar interior modeling to estimate stellar masses, radii, and ages. Stellar radii are typically constrained to 11%, compared to 40% when only photometric constraints are used. Stellar masses are constrained to 4%, and ages are constrained to 30%. We verify the integrity of the stellar parameters through comparisons with asteroseismic studies and Gaia parallaxes. We also recompute planetary radii for 2025 planet candidates. Because knowledge of planetary radii is often limited by uncertainties in stellar size, we improve the uncertainties in planet radii from typically 42% to 12%. We also leverage improved knowledge of stellar effective temperature to recompute incident stellar fluxes for the planets, now precise to 21%, compared to a factor of two when derived from photometry.

  14. Determining Size Distribution at the Phoenix Landing Site

    NASA Astrophysics Data System (ADS)

    Mason, E. L.; Lemmon, M. T.

    2016-12-01

    Dust aerosols play a crucial role in determining atmospheric radiative heating on Mars through absorption and scattering of sunlight. How dust scatters and absorbs light depends on its size, shape, composition, and quantity. Optical properties of the dust have been well constrained in the visible and near-infrared wavelengths using various methods [Wolff et al. 2009, Lemmon et al. 2004]. In addition, the dust is nonspherical, and irregular shapes have been shown to work well in determining effective particle size [Pollack et al. 1977]. The variance of the size distribution is less constrained but constitutes an important parameter in fully describing the dust. The Phoenix Lander's Surface Stereo Imager performed several cross-sky brightness surveys to determine the size distribution and scattering properties of dust in the wavelength range of 400 to 1000 nm. In combination with a single-layer radiative transfer model, these surveys can be used to help constrain the variance of the size distribution. We will present a discussion of the seasonal size distribution as it pertains to the Phoenix landing site.

  15. Rayleigh Wave Ellipticity Modeling and Inversion for Shallow Structure at the Proposed InSight Landing Site in Elysium Planitia, Mars

    NASA Astrophysics Data System (ADS)

    Knapmeyer-Endrun, Brigitte; Golombek, Matthew P.; Ohrnberger, Matthias

    2017-10-01

    The SEIS (Seismic Experiment for Interior Structure) instrument onboard the InSight mission will be the first seismometer directly deployed on the surface of Mars. From studies on the Earth and the Moon, it is well known that site amplification in low-velocity sediments on top of more competent rocks has a strong influence on seismic signals, but can also be used to constrain the subsurface structure. Here we simulate ambient vibration wavefields in a model of the shallow subsurface at the InSight landing site in Elysium Planitia and demonstrate how the high-frequency Rayleigh wave ellipticity can be extracted from these data and inverted for shallow structure. We find that, depending on model parameters, higher-mode ellipticity information can be extracted from single-station data, which significantly reduces uncertainties in the inversion. Though the data are most sensitive to properties of the uppermost layer and show a strong trade-off between layer depth and velocity, it is possible to estimate the velocity and thickness of the sub-regolith layer by using reasonable constraints on regolith properties. Model parameters are best constrained if either higher-mode data can be used or additional constraints on regolith properties from seismic analysis of the hammer strokes of InSight's heat flow probe HP3 are available. In addition, the Rayleigh wave ellipticity can distinguish between models with a constant regolith velocity and models with a velocity increase in the regolith, information which is difficult to obtain otherwise.

  16. Subsidence Modeling of the Over-exploited Granular Aquifer System in Aguascalientes, Mexico

    NASA Astrophysics Data System (ADS)

    Solano Rojas, D. E.; Wdowinski, S.; Minderhoud, P. P. S.; Pacheco, J.; Cabral, E.

    2016-12-01

    The valley of Aguascalientes in central Mexico experiences subsidence rates of up to 100 [mm/yr] due to overexploitation of its aquifer system, as revealed by satellite-based geodetic observations. The spatial pattern of the subsidence over the valley is inhomogeneous and affected by shallow faulting, and the understanding of the subsoil mechanics is still limited. A better understanding of the subsidence process in Aguascalientes is needed to provide insights into future subsidence in the valley. We present here a displacement-constrained finite-element subsidence model using Deltares iMOD (interactive MODeling), based on the USGS MODFLOW software. The construction of our model relies on three main inputs: (1) groundwater level time series obtained from extraction wells' hydrographs, (2) subsurface lithostratigraphy interpreted from well drilling logs, and (3) hydrogeological parameters obtained from field pumping tests. The groundwater level measurements were converted to pore pressure in our model's layers and used in Terzaghi's equation for calculating effective stress. We then used the effective stress, along with the displacement obtained from geodetic observations, to constrain and optimize five geo-mechanical parameters: compression ratio, reloading ratio, secondary compression index, over-consolidation ratio, and consolidation coefficient. Finally, we use the NEN-Bjerrum linear stress model formulation for settlements to determine elastic and visco-plastic strain, accounting for the aging effect of the aquifer system units. Preliminary results show a higher compaction response in clay-saturated intervals (i.e. aquitards) of the aquifer system, as reflected in the spatial pattern of the surface deformation. The forecasted subsidence for our proposed scenarios shows much more pronounced deformation under higher groundwater extraction regimes.
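
    The stress bookkeeping mentioned above follows Terzaghi's relation σ' = σ - u: a decline in head lowers pore pressure u, raises effective stress σ', and compacts compressible layers. A one-layer sketch with hypothetical numbers:

        # One-layer Terzaghi sketch: head decline -> lower pore pressure ->
        # higher effective stress -> consolidation settlement. Numbers invented.
        import numpy as np

        gamma_w = 9.81e3              # N/m^3, unit weight of water
        sigma_total = 2.0e6           # Pa, total stress at layer mid-depth
        head_0, head_1 = 80.0, 60.0   # m, pre- and post-pumping heads

        u0, u1 = gamma_w * head_0, gamma_w * head_1
        sigma_eff_0 = sigma_total - u0
        sigma_eff_1 = sigma_total - u1

        Cc, e0, H = 0.25, 0.9, 30.0   # compression index, void ratio, thickness
        settlement = H * Cc / (1 + e0) * np.log10(sigma_eff_1 / sigma_eff_0)
        print(f"effective stress: {sigma_eff_0:.3g} -> {sigma_eff_1:.3g} Pa")
        print(f"consolidation settlement: {settlement:.2f} m")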

  17. Seismic velocity and crustal thickness inversions: Moon and Mars

    NASA Astrophysics Data System (ADS)

    Drilleau, Melanie; Blanchette-Guertin, Jean-François; Kawamura, Taichi; Lognonné, Philippe; Wieczorek, Mark

    2017-04-01

    We present results from new inversions of seismic data arrival times acquired by the Apollo active and passive experiments. Markov chain Monte Carlo inversions are used to constrain (i) 1-D lunar crustal and upper mantle velocity models and (ii) 3-D lateral crustal thickness models under the Apollo stations and the artificial and natural impact sites. A full 3-D model of the lunar crustal thickness is then obtained using the GRAIL gravimetric data, anchored by the crustal thicknesses under each Apollo station and impact site. To avoid the use of any seismic reference model, a Bayesian inversion technique is implemented. The advantage of such an approach is to obtain robust probability density functions of interior structure parameters governed by uncertainties on the seismic data arrival times. 1-D seismic velocities are parameterized using C1-Bézier curves, which allow the exploration of both smoothly varying models and first-order discontinuities. The parameters of the inversion include the seismic velocities of P and S waves as a function of depth and the thickness of the crust under each Apollo station and impact epicenter. The forward problem consists of a ray tracing method enabling both the relocation of the natural impact epicenters and the computation of time corrections associated with the surface topography and the crustal thickness variations under the stations and impact sites. The results show geology-related differences between the different sites, which are due to contrasts in megaregolith thickness and to shallow subsurface composition and structure. Some of the finer structural elements might be difficult to constrain and might fall within the uncertainties of the dataset. However, we use the more precise LROC-located epicentral locations for the lunar modules and Saturn-IV upper stage artificial impacts, reducing some of the uncertainties observed in past studies. In the framework of the NASA InSight/SEIS mission to Mars, the method developed in this study will be used to constrain the Martian crustal thickness as soon as the first data become available (late 2018). For InSight, impacts will be located by differential analysis of MRO data, which provides a known location enabling the direct inversion of all differential travel times with respect to the P arrival time. We have performed resolution tests to investigate to what extent impact events might help us constrain the Martian crustal thickness. Due to the high flexibility of the Bayesian algorithm, the interior model will be refined each time a new event is detected.
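
    The Bézier parameterization mentioned above can be sketched directly: a handful of control points defines a smooth velocity-depth curve, and stacking two segments with different end values produces a first-order discontinuity (all control points below are arbitrary, not the study's values):

        # Cubic Bezier velocity-depth profile from four (depth, Vp) control
        # points; two segments meeting at different values give a discontinuity.
        import numpy as np

        def bezier(p0, p1, p2, p3, n=100):
            t = np.linspace(0.0, 1.0, n)[:, None]
            return ((1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1
                    + 3 * (1 - t) * t**2 * p2 + t**3 * p3)

        # control points as (depth km, Vp km/s); values are arbitrary
        crust = bezier(np.array([0.0, 1.0]), np.array([10.0, 4.0]),
                       np.array([25.0, 6.5]), np.array([40.0, 7.0]))
        mantle = bezier(np.array([40.0, 7.9]), np.array([60.0, 8.0]),
                        np.array([100.0, 8.1]), np.array([150.0, 8.2]))
        profile = np.vstack([crust, mantle])
        print(profile[98:102])        # velocity jumps across the 40 km boundary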

  18. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled-data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.
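
    The constrained optimization pattern described above can be sketched with scipy: minimize a quadratic (LQG-like) cost over control gains subject to an upper bound on an rms response (the cost, the response function, and the bound are stand-ins for the actual control-law synthesis):

        # Minimize an LQG-like cost over two gains subject to an rms response
        # constraint; cost and response functions are surrogates.
        import numpy as np
        from scipy.optimize import minimize

        def cost(k):                   # surrogate quadratic-plus-penalty cost
            return k[0]**2 + 2.0 * k[1]**2 + 1.0 / (0.1 + k[0] * k[1])

        def rms_response(k):           # surrogate steady-state rms response
            return 1.0 / (1.0 + k[0]) + 0.5 / (1.0 + k[1])

        res = minimize(cost, x0=[1.0, 1.0], method="SLSQP",
                       bounds=[(0.0, 10.0)] * 2,
                       constraints=[{"type": "ineq",
                                     "fun": lambda k: 0.9 - rms_response(k)}])
        print("optimized gains:", res.x, "cost:", res.fun)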

  19. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based upon this likelihood function, we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters, we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
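
    A profile likelihood of the kind introduced above is computed by fixing the parameter of interest on a grid, re-optimizing the remaining parameters at each grid point, and reading confidence bounds off the likelihood-ratio threshold; the toy decay model below stands in for the PDE problem:

        # Profile likelihood for one parameter of a toy exponential-decay model
        # with log-normal noise; the model stands in for the PDE problem.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        t = np.linspace(0, 4, 40)
        y = 2.0 * np.exp(-0.7 * t) * np.exp(rng.normal(0, 0.1, t.size))

        def nll(theta):                # negative log-likelihood (log-normal)
            a, k = theta
            r = np.log(y) - np.log(a) + k * t
            return 0.5 * np.sum(r**2 / 0.1**2)

        best = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")

        ks = np.linspace(0.5, 0.9, 41)
        profile = [minimize(lambda a, kk=k: nll([a[0], kk]), x0=[best.x[0]],
                            method="Nelder-Mead").fun for k in ks]
        inside = ks[np.array(profile) - best.fun < 1.92]   # chi2_1 95% / 2
        print(f"k 95% profile interval: [{inside.min():.2f}, {inside.max():.2f}]")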

  20. Fundamental Parameters of Nearby Young Stars

    NASA Astrophysics Data System (ADS)

    McCarthy, Kyle; Wilhelm, R. J.

    2013-06-01

    We present high resolution (R ~ 60,000) spectroscopic data of F and G members of the nearby, young associations AB Doradus and β Pictoris obtained with the Cross-Dispersed Echelle Spectrograph on the 2.7 meter telescope at the McDonald Observatory. Effective temperatures, log(g), [Fe/H], and microturbulent velocities are first estimated using the TGVIT code, then finely tuned using MOOG. Equivalent width (EW) measurements were made using TAME alongside a self-produced IDL routine to constrain EW accuracy and improve the computed fundamental parameters. MOOG is also used to derive the chemical abundances of several elements, including Mn, which is known to be overabundant in planet-hosting stars. v sin(i) values are also computed using a χ2 comparison of our observed data to Atlas9 model atmospheres passed through the SPECTRUM spectral synthesis code, restricted to lines which do not depend strongly on surface gravity. Because of the limited number of Fe II lines which govern the surface gravity fit in both TGVIT and MOOG, we implement another χ2 analysis of strongly log(g)-dependent lines to ensure the values are correct. Coupling the surface gravities and temperatures derived in this study with the luminosities found in the Tycho-2 catalog, we estimate masses for each star and compare these masses to several evolutionary models to begin the process of constraining pre-main-sequence evolutionary models.

  1. PyLDTk: Python toolkit for calculating stellar limb darkening profiles and model-specific coefficients for arbitrary filters

    NASA Astrophysics Data System (ADS)

    Parviainen, Hannu

    2015-10-01

    PyLDTk automates the calculation of custom stellar limb darkening (LD) profiles and model-specific limb darkening coefficients (LDC) using the library of PHOENIX-generated specific intensity spectra by Husser et al. (2013). It facilitates exoplanet transit light curve modeling, especially transmission spectroscopy, where the modeling is carried out for custom narrow passbands. PyLDTk constructs model-specific priors on the limb darkening coefficients prior to transit light curve modeling. It can also be integrated directly into the log-posterior computation of any pre-existing transit modeling code, with minimal modifications, to constrain the LD model parameter space directly by the LD profile; this allows marginalization over the whole parameter space that can explain the profile, without the need to approximate the constraint by a prior distribution. This is useful when using a high-order limb darkening model, where the coefficients are often correlated and priors estimated from tabulated values usually fail to include these correlations.
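
    An illustrative usage sketch following LDTk's documented interface; the stellar parameters and passbands are placeholders, and the class and method names (LDPSetCreator, BoxcarFilter, create_profiles, coeffs_qd) should be verified against the installed release.

    from ldtk import LDPSetCreator, BoxcarFilter

    # Narrow custom passbands, as used in transmission spectroscopy (invented ranges)
    filters = [BoxcarFilter('blue', 550, 600), BoxcarFilter('red', 600, 650)]

    sc = LDPSetCreator(teff=(5500, 100),   # Teff and uncertainty [K] (placeholder star)
                       logg=(4.5, 0.10),
                       z=(0.0, 0.05),
                       filters=filters)

    ps = sc.create_profiles()              # LD profiles from the PHOENIX library
    qc, qe = ps.coeffs_qd(do_mc=True)      # quadratic-law coefficients and uncertainties
    print(qc, qe)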

  2. Initialising reservoir models for history matching using pre-production 3D seismic data: constraining methods and uncertainties

    NASA Astrophysics Data System (ADS)

    Niri, Mohammad Emami; Lumley, David E.

    2017-10-01

    Integration of 3D and time-lapse 4D seismic data into reservoir modelling and history matching processes poses a significant challenge due to the frequent mismatch between the initial reservoir model, the true reservoir geology, and the pre-production (baseline) seismic data. A fundamental step of a reservoir characterisation and performance study is the preconditioning of the initial reservoir model to equally honour both the geological knowledge and the seismic data. In this paper we analyse the issues that have a significant impact on the (mis)match of the initial reservoir model with well logs and inverted 3D seismic data. These issues include the constraining methods for reservoir lithofacies modelling, the sensitivity of the results to the presence of realistic resolution and noise in the seismic data, the geostatistical modelling parameters, and the uncertainties associated with quantitative incorporation of inverted seismic data in reservoir lithofacies modelling. We demonstrate that in a geostatistical lithofacies simulation process, seismic constraining methods based on seismic litho-probability curves and seismic litho-probability cubes yield the best match to the reference model, even when realistic resolution and noise are included in the dataset. In addition, our analyses show that quantitative incorporation of inverted 3D seismic data in static reservoir modelling carries a range of uncertainties and should be applied cautiously in order to minimise the risk of misinterpretation. These uncertainties are due to the limited vertical resolution of the seismic data compared to the scale of the geological heterogeneities, the fundamental instability of the inverse problem, and the non-unique elastic properties of different lithofacies types.

  3. Application and Evaluation of a Snowmelt Runoff Model in the Tamor River Basin, Eastern Himalaya Using a Markov Chain Monte Carlo (MCMC) Data Assimilation Approach

    NASA Technical Reports Server (NTRS)

    Panday, Prajjwal K.; Williams, Christopher A.; Frey, Karen E.; Brown, Molly E.

    2013-01-01

    Previous studies have drawn attention to substantial hydrological changes taking place in mountainous watersheds where hydrology is dominated by cryospheric processes. Modelling is an important tool for understanding these changes but is particularly challenging in mountainous terrain owing to scarcity of ground observations and uncertainty of model parameters across space and time. This study utilizes a Markov Chain Monte Carlo data assimilation approach to examine and evaluate the performance of a conceptual, degree-day snowmelt runoff model applied in the Tamor River basin in the eastern Nepalese Himalaya. The snowmelt runoff model is calibrated using daily streamflow from 2002 to 2006 with fairly high accuracy (average Nash-Sutcliffe metric approx. 0.84, annual volume bias <3%). The Markov Chain Monte Carlo approach constrains the parameters to which the model is most sensitive (e.g. lapse rate and recession coefficient) and maximizes model fit and performance. The average snowmelt contribution to total runoff in the Tamor River basin for the 2002-2006 period is estimated to be 29.7+/-2.9% (which includes 4.2+/-0.9% from snowfall that promptly melts), whereas 70.3+/-2.6% is attributed to contributions from rainfall. On average, the elevation zone in the 4000-5500 m range contributes the most to basin runoff, averaging 56.9+/-3.6% of all snowmelt input and 28.9+/-1.1% of all rainfall input to runoff. Model-simulated streamflow using an interpolated precipitation data set decreases the fractional contribution from rainfall versus snowmelt compared with simulations using observed station precipitation. Model experiments indicate that the hydrograph itself does not constrain estimates of snowmelt versus rainfall contributions to total outflow; this partitioning derives instead from the degree-day melting model. Lastly, we demonstrate that the data assimilation approach is useful for quantifying and reducing uncertainty related to model parameters and thus provides uncertainty bounds on snowmelt and rainfall contributions in such mountainous watersheds.
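
    The degree-day melt step at the core of such a model can be sketched in a few lines; the lapse rate, degree-day factor and temperatures below are illustrative stand-ins, not the study's calibrated values.

    import numpy as np

    def degree_day_melt(T_station, elev_zone, elev_station,
                        lapse=0.0065, ddf=4.0):
        """Melt depth [mm/day]: lapse in degC/m, degree-day factor in mm/(degC day)."""
        T_zone = T_station - lapse * (elev_zone - elev_station)
        return ddf * np.maximum(T_zone, 0.0)

    def nse(q_obs, q_sim):
        """Nash-Sutcliffe efficiency, the fit metric quoted in the abstract."""
        return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

    T = np.array([2.0, 5.0, 8.0, 4.0])     # station temperatures [degC]
    print(degree_day_melt(T, elev_zone=2500.0, elev_station=1800.0))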

  4. Limited ability driven phase transitions in the coevolution process in Axelrod's model

    NASA Astrophysics Data System (ADS)

    Wang, Bing; Han, Yuexing; Chen, Luonan; Aihara, Kazuyuki

    2009-04-01

    We study the coevolution process in Axelrod's model by taking into account agents' abilities to access information, described by a parameter α that controls the geographical range of communication. We observe two kinds of phase transitions in both cultural domains and network fragments, depending on the parameter α. By simulation, we find that rewiring does not always promote the dissemination of culture: a very limited ability to access information constrains cultural dissemination, while an exceptional ability to access information aids it. Furthermore, by analyzing the network characteristics at the frozen states, we find that there exists a stage at which the network develops into a small-world network with community structures.
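
    A minimal sketch of Axelrod-style dynamics on a ring, with an interaction range r standing in for the information-access parameter α; the geometry, sizes and values are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    N, F, Q = 100, 5, 10                   # agents, cultural features, traits per feature
    culture = rng.integers(0, Q, size=(N, F))

    def step(r):
        """One Axelrod interaction: copy a differing trait with prob = cultural overlap."""
        i = rng.integers(N)
        j = (i + rng.integers(1, r + 1) * rng.choice([-1, 1])) % N  # partner within range r
        overlap = np.mean(culture[i] == culture[j])
        if 0 < overlap < 1 and rng.uniform() < overlap:
            diff = np.flatnonzero(culture[i] != culture[j])
            f = rng.choice(diff)
            culture[i, f] = culture[j, f]

    for _ in range(200000):
        step(r=3)                          # small r mimics limited access to information
    print("distinct cultures:", len({tuple(c) for c in culture}))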

  5. Testing theoretical models of subdwarf B stars using multicolor photometry

    NASA Astrophysics Data System (ADS)

    Reed, Mike; Baran, Andrzej; Ostensen, Roy; O'Toole, Simon

    2012-08-01

    Pulsating stars allow a direct investigation of their structure and evolutionary history through the evaluation of pulsation modes. However, the observed pulsation frequencies must first be identified with spherical harmonics (modes). For subdwarf B (sdB) stars, such identifications using white-light photometry currently have significant limitations. We intend to use multicolor photometry to identify pulsation modes and constrain structure models. We propose to observe the pulsating sdB star PG0154+182 (BI Ari) with our multicolor instrument GT Cam. Our observations will be compared with perturbative atmospheric models (BRUCE/KYLIE) to identify the pulsation modes. This is part of our NSF grant to obtain seismic tools to test structure and evolution models, constraining stellar parameters including total mass, envelope mass, internal composition discontinuities and internal rotation. During winter/spring 2012, we were allocated three runs on the 2.1 m to collect multicolor data on other promising pulsating subdwarf B stars as part of this work. Those runs were very successful, prompting our continued proposals. In addition, we will obtain 3-color data using MAIA on the Mercator Telescope (using guaranteed institutional time).

  6. Optimal Model-Based Fault Estimation and Correction for Particle Accelerators and Industrial Plants Using Combined Support Vector Machines and First Principles Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayyar-Rodsari, Bijan; Schweiger, Carl; /SLAC /Pavilion Technologies, Inc., Austin, TX

    2010-08-25

    Timely estimation of deviations from optimal performance in complex systems and the ability to identify corrective measures in response to the estimated parameter deviations have been the subject of extensive research over the past four decades. The implications in terms of lost revenue from costly industrial processes and the operation of large-scale public works projects, together with the volume of the published literature on this topic, clearly indicate the significance of the problem. Applications range from manufacturing industries (integrated circuits, automotive, etc.), to large-scale chemical plants, pharmaceutical production, power distribution grids, and avionics. In this project we investigated a new framework for building parsimonious models that are suited for diagnosis and fault estimation of complex technical systems. We used Support Vector Machines (SVMs) to model potentially time-varying parameters of a First-Principles (FP) description of the process. The combined SVM & FP model was built (i.e. model parameters were trained) using constrained optimization techniques. We used the trained models to estimate faults affecting simulated beam lifetime. In the case where a large number of process inputs are required for model-based fault estimation, the proposed framework performs an optimal nonlinear principal component analysis of the large-scale input space, and creates a lower-dimension feature space in which fault estimation results can be effectively presented to the operations personnel. To fulfill the main technical objectives of the Phase I research, our Phase I efforts focused on: (1) SVM Training in a Combined Model Structure - We developed the software for the constrained training of the SVMs in a combined model structure, and successfully modeled the parameters of a first-principles model for beam lifetime with support vectors. (2) Higher-order Fidelity of the Combined Model - We used constrained training to ensure that the outputs of the SVM (i.e. the parameters of the beam lifetime model) are physically meaningful. (3) Numerical Efficiency of the Training - We investigated the numerical efficiency of the SVM training. More specifically, for the primal formulation of the training, we developed a problem formulation that avoids the linear increase in the number of constraints as a function of the number of data points. (4) Flexibility of Software Architecture - The software framework for the training of the support vector machines was designed to enable experimentation with different solvers. We experimented with two commonly used nonlinear solvers in our simulations. The primary application of interest for this project has been the sustained optimal operation of particle accelerators at the Stanford Linear Accelerator Center (SLAC). Particle storage rings are used for a variety of applications ranging from 'colliding beam' systems for high-energy physics research to highly collimated x-ray generators for synchrotron radiation science. Linear accelerators are also used for collider research such as the International Linear Collider (ILC), as well as for free electron lasers, such as the Linear Coherent Light Source (LCLS) at SLAC. One common theme in the operation of storage rings and linear accelerators is the need to precisely control the particle beams over long periods of time with minimum beam loss and stable, yet challenging, beam parameters.
We strongly believe that, beyond applications in particle accelerators, the high fidelity and cost benefits of a combined model-based fault estimation/correction system will attract customers from a wide variety of commercial and scientific industries. Even though the acquisition of Pavilion Technologies, Inc. by Rockwell Automation Inc. in 2007 altered Pavilion's small-business status, so that it no longer qualifies for Phase II funding, our findings in the course of the Phase I research have convinced us that further research will render a workable model-based fault estimation and correction system for particle accelerators and industrial plants feasible.

  7. Crustal wavespeed structure of North Texas and Oklahoma based on ambient noise cross-correlation functions and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Zhu, H.

    2017-12-01

    Recently, seismologists observed increasing seismicity in North Texas and Oklahoma. Based on seismic observations and other geophysical measurements, some studies suggested possible links between the increasing seismicity and wastewater injection during unconventional oil and gas exploration. To better monitor seismic events and investigate their mechanisms, we need an accurate 3D crustal wavespeed model for North Texas and Oklahoma. Considering the uneven distribution of earthquakes in this region, seismic tomography with local earthquake records has difficulty achieving good illumination. To overcome this limitation, in this study, ambient noise cross-correlation functions are used to constrain subsurface variations in wavespeeds. I use adjoint tomography to iteratively fit frequency-dependent phase differences between observed and predicted band-limited Green's functions. The spectral-element method is used to numerically calculate the band-limited Green's functions, and the adjoint method is used to calculate misfit gradients with respect to wavespeeds. 25 preconditioned conjugate gradient iterations are used to update model parameters and minimize data misfits. Features in the new crustal model M25 correlate with geological units in the study region, including the Llano uplift, the Anadarko basin and the Ouachita orogenic front. In addition, these seismic anomalies correlate with gravity and magnetic observations. This new model can be used to better constrain earthquake source parameters in North Texas and Oklahoma, such as epicenter locations and moment tensor solutions, which are important for investigating potential relations between seismicity and unconventional oil and gas exploration.
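
    The core ambient-noise operation can be sketched as follows: window-wise cross-correlation of two stations' recordings, stacked so that the peak of the stack approximates the inter-station Green's function arrival. The synthetic traces and the delay are invented.

    import numpy as np

    rng = np.random.default_rng(7)
    nt, nwin, lag_true = 4096, 50, 50      # samples per window, windows, true delay

    stack = np.zeros(nt)
    for _ in range(nwin):
        src = rng.normal(size=nt + lag_true)        # common ambient-noise wavefield
        a = src[lag_true:]                          # station a sees the field first
        b = src[:nt] + 0.5 * rng.normal(size=nt)    # station b: delayed copy + local noise
        # circular cross-correlation via FFT; the peak lag ~ inter-station travel time
        X = np.conj(np.fft.rfft(a)) * np.fft.rfft(b)
        stack += np.fft.irfft(X, n=nt)
    print("recovered delay [samples]:", np.argmax(stack))   # ~= lag_true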

  8. GRB 110715A: the peculiar multiwavelength evolution of the first afterglow detected by ALMA

    NASA Astrophysics Data System (ADS)

    Sánchez-Ramírez, R.; Hancock, P. J.; Jóhannesson, G.; Murphy, Tara; de Ugarte Postigo, A.; Gorosabel, J.; Kann, D. A.; Krühler, T.; Oates, S. R.; Japelj, J.; Thöne, C. C.; Lundgren, A.; Perley, D. A.; Malesani, D.; de Gregorio Monsalvo, I.; Castro-Tirado, A. J.; D'Elia, V.; Fynbo, J. P. U.; Garcia-Appadoo, D.; Goldoni, P.; Greiner, J.; Hu, Y.-D.; Jelínek, M.; Jeong, S.; Kamble, A.; Klose, S.; Kuin, N. P. M.; Llorente, A.; Martín, S.; Nicuesa Guelbenzu, A.; Rossi, A.; Schady, P.; Sparre, M.; Sudilovsky, V.; Tello, J. C.; Updike, A.; Wiersema, K.; Zhang, B.-B.

    2017-02-01

    We present the extensive follow-up campaign on the afterglow of GRB 110715A at 17 different wavelengths, from X-ray to radio bands, starting 81 s after the burst and extending up to 74 d later. We performed for the first time a GRB afterglow observation with the ALMA observatory. We find that the afterglow of GRB 110715A is very bright at optical and radio wavelengths. We use optical and near-infrared spectroscopy to provide further information about the progenitor's environment and its host galaxy. The spectrum shows weak absorption features at a redshift z = 0.8225, which reveal a host-galaxy environment with low ionization, column density, and dynamical activity. Late deep imaging shows a very faint galaxy, consistent with the spectroscopic results. The broad-band afterglow emission is modelled with synchrotron radiation using a numerical algorithm, and we determine the best-fitting parameters using Bayesian inference in order to constrain the physical parameters of the jet and the medium in which the relativistic shock propagates. We fitted our data with a variety of models, including different density profiles and energy injections. Although the general behaviour can be roughly described by these models, none of them is able to fully explain all data points simultaneously. GRB 110715A shows the complexity of reproducing extensive multiwavelength broad-band afterglow observations, and the need for good sampling in wavelength and time and for more complex models to accurately constrain the physics of GRB afterglows.

  9. Constraining the GENIE model of neutrino-induced single pion production using reanalyzed bubble chamber data

    DOE PAGES

    Rodrigues, Philip; Wilkinson, Callum; McFarland, Kevin

    2016-08-24

    The longstanding discrepancy between bubble chamber measurements of νμ-induced single pion production channels has led to large uncertainties in pion production cross section parameters for many years. We extend the reanalysis of pion production data in deuterium bubble chambers, where this discrepancy is resolved, to include the νμn → μ⁻pπ⁰ and νμn → μ⁻nπ⁺ channels, and use the resulting data to fit the parameters of the GENIE pion production model. We find a set of parameters that describes the bubble chamber data better than the GENIE default parameters, and provide updated central values and reduced uncertainties for use in neutrino oscillation and cross section analyses which use the GENIE model. Here, we find that GENIE's non-resonant background prediction has to be significantly reduced to fit the data, which may help to explain the recent discrepancies between simulation and data observed by the MINERνA coherent pion and NOνA oscillation analyses.

  10. The seasonal CO2 cycle on Mars - An application of an energy balance climate model

    NASA Technical Reports Server (NTRS)

    James, P. B.; North, G. R.

    1982-01-01

    Energy balance climate models of the Budyko-Sellers variety are applied to the carbon-dioxide cycle on Mars. Recent data available from the Viking mission, in particular the seasonal pressure variations measured by Viking landers, are used to constrain the models. No set of parameters was found for which a one-dimensional model parameterized in terms of ground temperature gave an adequate fit to the observed pressure variations. A modified, two-dimensional model including the effects of dust storms and the polar hood reasonably reproduces the pressure curve, however. The implications of these results for Martian climate changes are discussed.

  11. Preliminary gravity inversion model of Frenchman Flat Basin, Nevada Test Site, Nevada

    USGS Publications Warehouse

    Phelps, Geoffrey A.; Graham, Scott E.

    2002-01-01

    The depth of the basin beneath Frenchman Flat is estimated using a gravity inversion method. Gamma-gamma density logs from two wells in Frenchman Flat constrained the density profiles used to create the gravity inversion model. Three initial models were considered using data from one well; a final model was then proposed based on new information from the second well. The preferred model indicates that a northeast-trending, oval-shaped basin at least 2,100 m deep underlies Frenchman Flat, with a maximum depth of 2,400 m at its northeast end. No major horst-and-graben structures are predicted. Sensitivity analysis of the model indicates that each parameter contributes a change of the same magnitude to the model, up to 30 meters of change in depth for a 1% change in density, but some parameters affect a broader area of the basin. The horizontal resolution of the model was determined by examining the spacing between data stations, and was set to 500 square meters.

  12. Kinetics of heavy metal adsorption and desorption in soil: Developing a unified model based on chemical speciation

    NASA Astrophysics Data System (ADS)

    Peng, Lanfang; Liu, Paiyu; Feng, Xionghan; Wang, Zimeng; Cheng, Tao; Liang, Yuzhen; Lin, Zhang; Shi, Zhenqing

    2018-03-01

    Predicting the kinetics of heavy metal adsorption and desorption in soil requires consideration of multiple heterogeneous soil binding sites and variations of reaction chemistry conditions. Although chemical speciation models have been developed for predicting the equilibrium of metal adsorption on soil organic matter (SOM) and important mineral phases (e.g. Fe and Al (hydr)oxides), there is still a lack of modeling tools for predicting the kinetics of metal adsorption and desorption reactions in soil. In this study, we developed a unified model for the kinetics of heavy metal adsorption and desorption in soil based on the equilibrium models WHAM 7 and CD-MUSIC, which specifically consider metal kinetic reactions with multiple binding sites of SOM and soil minerals simultaneously. For each specific binding site, metal adsorption and desorption rate coefficients were constrained by the local equilibrium partition coefficients predicted by WHAM 7 or CD-MUSIC, and, for each metal, the desorption rate coefficients of various binding sites were constrained by their metal binding constants with those sites. The model had only one fitting parameter for each soil binding phase, and all other parameters were derived from WHAM 7 and CD-MUSIC. A stirred-flow method was used to study the kinetics of Cd, Cu, Ni, Pb, and Zn adsorption and desorption in multiple soils under various pH and metal concentrations, and the model successfully reproduced most of the kinetic data. We quantitatively elucidated the significance of different soil components and important soil binding sites during the adsorption and desorption kinetic processes. Our model has provided a theoretical framework to predict metal adsorption and desorption kinetics, which can be further used to predict the dynamic behavior of heavy metals in soil under various natural conditions by coupling other important soil processes.
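
    A sketch of the equilibrium-constrained rate idea for a single binding site: the desorption rate coefficient is tied to the adsorption rate through the local equilibrium partition coefficient instead of being fitted independently. The numbers are invented; in the study the equilibrium constants come from WHAM 7 and CD-MUSIC.

    import numpy as np
    from scipy.integrate import solve_ivp

    k_ads = 0.5           # adsorption rate coefficient [1/(uM h)], the fitted parameter
    K_eq = 20.0           # local equilibrium partition coefficient (from speciation model)
    k_des = k_ads / K_eq  # desorption rate constrained by equilibrium, not fitted

    def rhs(t, y, c_metal=1.0, s_total=5.0):
        """Site-level kinetics: adsorption onto free sites minus first-order desorption."""
        q = y[0]                                   # adsorbed metal [umol/g]
        return [k_ads * c_metal * (s_total - q) - k_des * q]

    sol = solve_ivp(rhs, (0.0, 24.0), [0.0])
    print("q(24 h) =", sol.y[0, -1])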

  13. Vertical structure and physical processes of the Madden-Julian Oscillation: Biases and uncertainties at short range

    DOE PAGES

    Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; ...

    2015-05-26

    We present an analysis of diabatic heating and moistening processes from 12 to 36 h lead time forecasts from 12 Global Circulation Models as part of the “Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)” project. A lead time of 12–36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations, while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior shows that some models show strong intermittency in rainfall and differences in the precipitation and dynamics relationship between models. In conclusion, the wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. Additionally, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.

  14. Constraints of beyond Standard Model parameters from the study of neutrinoless double beta decay

    NASA Astrophysics Data System (ADS)

    Stoica, Sabin

    2017-12-01

    Neutrinoless double beta (0νββ) decay is a beyond Standard Model (BSM) process whose discovery would clarify whether lepton number is conserved, decide the neutrinos' character (are they Dirac or Majorana particles?) and give a hint about the scale of their absolute masses. Also, from the study of 0νββ one can constrain other BSM parameters related to different scenarios by which this process can occur. In this paper I first give a short review of the current challenges in precisely calculating the phase space factors and nuclear matrix elements entering the 0νββ decay lifetimes, and I report our group's results for these quantities. Then, taking advantage of the most recent experimental limits on 0νββ lifetimes, I present new constraints on the neutrino mass parameters associated with different mechanisms of occurrence of the 0νββ decay mode.

  15. Constraints on modified gravity from Planck 2015: when the health of your theory makes the difference

    NASA Astrophysics Data System (ADS)

    Salvatelli, Valentina; Piazza, Federico; Marinoni, Christian

    2016-09-01

    We use the effective field theory of dark energy (EFT of DE) formalism to constrain dark energy models belonging to the Horndeski class with the recent Planck 2015 CMB data. The space of theories is spanned by a certain number of parameters determining the linear cosmological perturbations, while the expansion history is set to that of a standard ΛCDM model. We always demand that the theories be free of fatal instabilities. Additionally, we consider two optional conditions, namely that scalar and tensor perturbations propagate with subluminal speed. Such criteria severely restrict the allowed parameter space and are thus very effective in shaping the posteriors. As a result, we confirm that no theory performs better than ΛCDM when CMB data alone are analysed. Indeed, the healthy dark energy models considered here are not able to reproduce those phenomenological behaviours of the effective Newton constant and gravitational slip parameters that, according to previous studies, best fit the data.

  16. Constraints on modified gravity from Planck 2015: when the health of your theory makes the difference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvatelli, Valentina; Piazza, Federico; Marinoni, Christian, E-mail: Valentina.Salvatelli@cpt.univ-mrs.fr, E-mail: Federico.Piazza@cpt.univ-mrs.fr, E-mail: Christian.Marinoni@cpt.univ-mrs.fr

    We use the effective field theory of dark energy (EFT of DE) formalism to constrain dark energy models belonging to the Horndeski class with the recent Planck 2015 CMB data. The space of theories is spanned by a certain number of parameters determining the linear cosmological perturbations, while the expansion history is set to that of a standard ΛCDM model. We always demand that the theories be free of fatal instabilities. Additionally, we consider two optional conditions, namely that scalar and tensor perturbations propagate with subluminal speed. Such criteria severely restrict the allowed parameter space and are thus very effective in shaping the posteriors. As a result, we confirm that no theory performs better than ΛCDM when CMB data alone are analysed. Indeed, the healthy dark energy models considered here are not able to reproduce those phenomenological behaviours of the effective Newton constant and gravitational slip parameters that, according to previous studies, best fit the data.

  17. Constraining the symmetry energy with heavy-ion collisions and Bayesian analysis

    NASA Astrophysics Data System (ADS)

    Tsang, C. Y.; Jhang, G.; Morfouace, P.; Lynch, W. G.; Tsang, M. B.; HiRA Collaboration

    2017-09-01

    To extract constraints on symmetry energy terms in the nuclear equation of state (EoS), data from heavy-ion reactions are often compared to calculations from transport models. As multiple input parameters are needed in a transport model, multi-parameter analysis is necessary to understand the relationships, especially if strong correlations exist among the parameters. In this talk, I will discuss how four symmetry energy parameters, S0 (the symmetry energy) and L (its slope) at saturation density, as well as the nucleon scalar effective mass (ms*) and the nucleon effective mass splitting (FI), are obtained by comparing transport model results with experimental data such as isospin diffusion and n/p spectral ratios using the MADAI Bayesian analysis software. The probability of each parameter having a certain value given the experimental data can be calculated with Bayes' theorem via Markov Chain Monte Carlo integration. Results using single and double ratios of neutron and proton spectra from 124Sn+124Sn and 112Sn+112Sn collisions at 120 MeV/u, as well as isospin diffusion from Sn+Sn isotopes at 50 and 35 MeV/u, will be presented. This research is supported by the National Science Foundation under Grant No. PHY-1565546.

  18. Fragmentation uncertainties in hadronic observables for top-quark mass measurements

    NASA Astrophysics Data System (ADS)

    Corcella, Gennaro; Franceschini, Roberto; Kim, Doojin

    2018-04-01

    We study the Monte Carlo uncertainties due to modeling of hadronization and showering in the extraction of the top-quark mass from observables that use exclusive hadronic final states in top decays, such as t → anything + J/ψ or t → anything + (B → charged tracks), where B is a B-hadron. To this end, we investigate the sensitivity of the top-quark mass, determined by means of a few observables already proposed in the literature as well as some new proposals, to the relevant parameters of event generators such as HERWIG 6 and PYTHIA 8. We find that constraining those parameters at the O(1%-10%) level is required to avoid a Monte Carlo uncertainty on mt greater than 500 MeV. To achieve the needed accuracy on such parameters, we examine the sensitivity of the top-quark mass measured from spectral features, such as peaks, endpoints and distributions of EB, mBℓ, and some mT2-like variables. We find that restricting oneself to regions sufficiently close to the endpoints substantially decreases the dependence on the Monte Carlo parameters, but at the price of significantly inflating the statistical uncertainties. To ameliorate this situation we study how well data on top-quark production and decay at the LHC can be used to constrain the showering and hadronization variables. We find that a global exploration of several calibration observables, sensitive to the Monte Carlo parameters but only very mildly to mt, can offer useful constraints on the parameters, as long as such quantities are measured with 1% precision.

  19. Instant preheating in quintessential inflation with α-attractors

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Konstantinos; Wood, Leonora Donaldson; Owen, Charlotte

    2018-03-01

    We investigate a compelling model of quintessential inflation in the context of α-attractors, which naturally result in a scalar potential featuring two flat regions, the inflationary plateau and the quintessential tail. The "asymptotic freedom" of α-attractors near the kinetic poles suppresses radiative corrections and interactions, which would otherwise threaten to lift the flatness of the quintessential tail and to cause a fifth-force problem, respectively. Since this is a nonoscillatory inflation model, we reheat the Universe through instant preheating. The parameter space is constrained by both inflation and dark energy requirements. We find an excellent correlation between the inflationary observables and model predictions, in agreement with the α-attractors setup. We also obtain successful quintessence for natural values of the parameters. Our model predicts potentially sizeable tensor perturbations (at the level of 1%) and a slightly varying equation of state for dark energy, to be probed in the near future.

  20. On the nature of the TeV emission from the supernova remnant SN 1006

    NASA Astrophysics Data System (ADS)

    Araya, Miguel; Frutos, Francisco

    2012-10-01

    We present a model for the non-thermal emission from the historical supernova remnant SN 1006. We constrain the synchrotron parameters of the model with archival radio and hard X-ray data. Our stationary emission model includes two populations of electrons, which is justified by multifrequency images of the object. From the set of parameters that predict the correct synchrotron flux we select those which are able to account, either partly or entirely, for the gamma-ray emission of the source as seen by HESS. We use the results from this model as well as the latest constraints imposed by the Fermi observatory and conclude that the TeV emission cannot be accounted for by π0 decay from high-energy ions with a single power-law distribution of the form dN_p/dE_p ∝ E_p^(-s) with s ≳ 2.

  1. Mergers of Black-Hole Binaries with Aligned Spins: Waveform Characteristics

    NASA Technical Reports Server (NTRS)

    Kelly, Bernard J.; Baker, John G.; vanMeter, James R.; Boggs, William D.; McWilliams, Sean T.; Centrella, Joan

    2011-01-01

    "We apply our gravitational-waveform analysis techniques, first presented in the context of nonspinning black holes of varying mass ratio [1], to the complementary case of equal-mass spinning black-hole binary systems. We find that, as with the nonspinning mergers, the dominant waveform modes phases evolve together in lock-step through inspiral and merger, supporting the previous model of the binary system as an adiabatically rigid rotator driving gravitational-wave emission - an implicit rotating source (IRS). We further apply the late-merger model for the rotational frequency introduced in [1], along with a new mode amplitude model appropriate for the dominant (2, plus or minus 2) modes. We demonstrate that this seven-parameter model performs well in matches with the original numerical waveform for system masses above - 150 solar mass, both when the parameters are freely fit, and when they are almost completely constrained by physical considerations."

  2. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

    Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
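
    The hierarchical reduction of behavioral sets can be sketched as successive filters over a Monte Carlo sample. The metrics and thresholds below are invented stand-ins for the study's regional signatures, hydrograph fit, snow-water-equivalent checks, and expert-knowledge pattern test.

    import numpy as np

    rng = np.random.default_rng(4)
    n = 10000                                      # size of the Monte Carlo sample
    runoff_ratio = rng.uniform(0.1, 0.9, n)        # stand-in regional signature
    nse_flow = rng.uniform(-1.0, 1.0, n)           # stand-in hydrograph fit
    swe_rmse = rng.uniform(0.0, 200.0, n)          # stand-in SWE misfit [mm]
    wt_pattern_ok = rng.uniform(0, 1, n) < 0.3     # stand-in expert pattern check

    behavioral = np.ones(n, dtype=bool)
    for name, keep in [("regional signature", (runoff_ratio > 0.3) & (runoff_ratio < 0.6)),
                       ("hydrograph fit",     nse_flow > 0.7),
                       ("SWE observations",   swe_rmse < 50.0),
                       ("expert knowledge",   wt_pattern_ok)]:
        behavioral &= keep                          # apply constraints in sequence
        print(f"after {name}: {behavioral.sum()} sets remain")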

  3. Conformationally constrained farnesoid X receptor (FXR) agonists: heteroaryl replacements of the naphthalene.

    PubMed

    Bass, Jonathan Y; Caravella, Justin A; Chen, Lihong; Creech, Katrina L; Deaton, David N; Madauss, Kevin P; Marr, Harry B; McFadyen, Robert B; Miller, Aaron B; Mills, Wendy Y; Navas, Frank; Parks, Derek J; Smalley, Terrence L; Spearing, Paul K; Todd, Dan; Williams, Shawn P; Wisely, G Bruce

    2011-02-15

    To improve on the drug properties of GSK8062 1b, a series of heteroaryl bicyclic naphthalene replacements were prepared. The quinoline 1c was an equipotent FXR agonist with improved drug developability parameters relative to 1b. In addition, analog 1c lowered body weight gain and serum glucose in a DIO mouse model of diabetes.

  4. Asteroseismic inversions in the Kepler era: application to the Kepler Legacy sample

    NASA Astrophysics Data System (ADS)

    Buldgen, Gaël; Reese, Daniel; Dupret, Marc-Antoine

    2017-10-01

    In the past few years, the CoRoT and Kepler missions have carried out what is now called the space photometry revolution. This revolution is still ongoing thanks to K2 and will be continued by the TESS and PLATO 2.0 missions. However, the photometry revolution must also be followed by progress in stellar modelling, in order to lead to more precise and accurate determinations of fundamental stellar parameters such as masses, radii and ages. In this context, the long-standing problems related to mixing processes in stellar interiors are the main obstacle to further improvements of stellar modelling. In this contribution, we will apply structural asteroseismic inversion techniques to targets from the Kepler Legacy sample and analyse how these can help us constrain the fundamental parameters and mixing processes in these stars. Our approach is based on previous studies using the SOLA inversion technique [1] to determine integrated quantities such as the mean density [2], the acoustic radius, and core-condition indicators [3], and has already been successfully applied to the 16Cyg binary system [4]. We will show how this technique can be applied to the Kepler Legacy sample and how new indicators can help us further constrain the chemical composition profiles of stars as well as provide stringent constraints on stellar ages.

  5. Modeling nonstructural carbohydrate reserve dynamics in forest trees

    NASA Astrophysics Data System (ADS)

    Richardson, Andrew; Keenan, Trevor; Carbone, Mariah; Pederson, Neil

    2013-04-01

    Understanding the factors influencing the availability of nonstructural carbohydrate (NSC) reserves is essential for predicting the resilience of forests to climate change and environmental stress. However, carbon allocation processes remain poorly understood and many models either ignore NSC reserves, or use simple and untested representations of NSC allocation and pool dynamics. Using model-data fusion techniques, we combined a parsimonious model of forest ecosystem carbon cycling with novel field sampling and laboratory analyses of NSCs. Simulations were conducted for an evergreen conifer forest and a deciduous broadleaf forest in New England. We used radiocarbon methods based on the 14C "bomb spike" to estimate the age of NSC reserves, and used this to constrain the mean residence time of modeled NSCs. We used additional data, including tower-measured fluxes of CO2, soil and biomass carbon stocks, woody biomass increment, and leaf area index and litterfall, to further constrain the model's parameters and initial conditions. Incorporation of fast- and slow-cycling NSC pools improved the ability of the model to reproduce the measured interannual variability in woody biomass increment. We show how model performance varies according to model structure and total pool size, and we use novel diagnostic criteria, based on autocorrelation statistics of annual biomass growth, to evaluate the model's ability to correctly represent lags and memory effects.

  6. The cost of uniqueness in groundwater model calibration

    NASA Astrophysics Data System (ADS)

    Moore, Catherine; Doherty, John

    2006-04-01

    Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The "cost of uniqueness" is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly "well calibrated". Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insight to be gained into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization.
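
    The "weighted average" statement has a compact linear-algebra form: with noise-free data and Tikhonov regularization, the estimated parameters equal the resolution matrix applied to the true parameters, so each estimate is a spatial average of the true field. A toy numerical check follows (matrices invented).

    import numpy as np

    rng = np.random.default_rng(5)
    G = rng.normal(size=(8, 20))           # 8 data, 20 parameters: underdetermined
    lam = 1.0                              # Tikhonov regularization weight
    GtG = G.T @ G
    R = np.linalg.solve(GtG + lam * np.eye(20), GtG)   # resolution matrix

    m_true = rng.normal(size=20)
    m_hat = np.linalg.solve(GtG + lam * np.eye(20), G.T @ (G @ m_true))

    print(np.allclose(m_hat, R @ m_true))  # True: estimates are averages of the truth
    print(R[0].round(2))                   # averaging weights for the first parameter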

  7. Confronting the Uncertainty in Aerosol Forcing Using Comprehensive Observational Data

    NASA Astrophysics Data System (ADS)

    Johnson, J. S.; Regayre, L. A.; Yoshioka, M.; Pringle, K.; Sexton, D.; Lee, L.; Carslaw, K. S.

    2017-12-01

    The effect of aerosols on cloud droplet concentrations and radiative properties is the largest uncertainty in the overall radiative forcing of climate over the industrial period. In this study, we take advantage of a large perturbed-parameter ensemble of simulations from the UK Met Office HadGEM-UKCA model (the aerosol component of the UK Earth System Model) to comprehensively sample uncertainty in aerosol forcing. Uncertain aerosol and atmospheric parameters cause substantial aerosol forcing uncertainty in climatically important regions. As the aerosol radiative forcing itself is unobservable, we investigate the potential for observations of aerosol and radiative properties to act as constraints on the large forcing uncertainty. We test how eight different theoretically perfect aerosol and radiation observations can constrain the forcing uncertainty over Europe. We find that the achievable constraint is weak unless many diverse observations are used simultaneously. This is due to the complex relationships between model output responses and the multiple interacting parameter uncertainties: compensating model errors mean there are many ways to produce the same model output (known as model equifinality), which limits the achievable constraint. However, using all eight observable quantities together, we show that the aerosol forcing uncertainty can potentially be reduced by around 50%. This reduction occurs as we narrow a large sample of model variants (over 1 million) covering the full parametric uncertainty down to the roughly 1% that are observationally plausible. Constraining the forcing uncertainty using real observations is a more complex undertaking, in which we must account for multiple further uncertainties, including measurement uncertainties, structural model uncertainties and the model's discrepancy from reality. Here, we make a first attempt to determine the true potential constraint on the forcing uncertainty from our model that is achievable using a comprehensive set of real aerosol and radiation observations taken from ground stations, flight campaigns and satellites. This research has been supported by the UK-China Research & Innovation Partnership Fund through the Met Office Climate Science for Service Partnership (CSSP) China as part of the Newton Fund, and by the NERC-funded GASSP project.

  8. Environmental Conditions Constrain the Distribution and Diversity of Archaeal merA in Yellowstone National Park, Wyoming, U.S.A.

    USGS Publications Warehouse

    Wang, Y.; Boyd, E.; Crane, S.; Lu-Irving, P.; Krabbenhoft, D.; King, S.; Dighton, J.; Geesey, G.; Barkay, T.

    2011-01-01

    The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin, and which constrained the evolution, of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or that the bacterial primer sets were designed to target too broad a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggest that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient.

  9. Environmental conditions constrain the distribution and diversity of archaeal merA in Yellowstone National Park, Wyoming, U.S.A.

    PubMed

    Wang, Yanping; Boyd, Eric; Crane, Sharron; Lu-Irving, Patricia; Krabbenhoft, David; King, Susan; Dighton, John; Geesey, Gill; Barkay, Tamar

    2011-11-01

    The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin, and which constrained the evolution, of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or that the bacterial primer sets were designed to target too broad a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggest that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient.

  10. High-redshift post-reionization cosmology with 21cm intensity mapping

    NASA Astrophysics Data System (ADS)

    Obuljen, Andrej; Castorina, Emanuele; Villaescusa-Navarro, Francisco; Viel, Matteo

    2018-05-01

    We investigate the possibility of performing cosmological studies in the redshift range 2.5

  11. Subsidence Modeling of the Over-exploited Granular Aquifer System in Aguascalientes, Mexico

    NASA Astrophysics Data System (ADS)

    Solano Rojas, D. E.; Pacheco, J.; Wdowinski, S.; Minderhoud, P. S. J.; Cabral-Cano, E.; Albino, F.

    2017-12-01

    The valley of Aguascalientes in central Mexico experiences subsidence rates of up to 100 mm/yr due to overexploitation of its aquifer system, as revealed by satellite-based geodetic observations. The spatial pattern of subsidence over the valley is inhomogeneous and affected by shallow faulting, and the understanding of the subsoil mechanics is still limited. A better understanding of the subsidence process in Aguascalientes is needed to provide insights into future subsidence in the valley. We present here a displacement-constrained finite-element subsidence model based on the USGS MODFLOW software. The construction of our model relies on three main inputs: (1) groundwater-level time series obtained from extraction-well hydrographs, (2) subsurface lithostratigraphy interpreted from well drilling logs, and (3) hydrogeological parameters obtained from field pumping tests. The groundwater-level measurements were converted to pore pressure in our model's layers and used in Terzaghi's equation to calculate effective stress. We then used the effective stress, along with the displacement obtained from geodetic observations, to constrain and optimize five geomechanical parameters: compression ratio, reloading ratio, secondary compression index, overconsolidation ratio, and consolidation coefficient. Finally, we use the NEN-Bjerrum linear stress model formulation for settlements to determine elastic and visco-plastic strain, accounting for the aging effect of the aquifer system units. Preliminary results show a higher compaction response in clay-saturated intervals (i.e. aquitards) of the aquifer system, as reflected in the spatial pattern of the surface deformation. The forecasted subsidence for our proposed scenarios shows much more pronounced deformation when we consider higher groundwater extraction regimes.
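
    The pore-pressure-to-compaction step can be sketched with Terzaghi's relation (effective stress equals total stress minus pore pressure) driving a log-linear virgin-compression model. All stresses, thicknesses and coefficients below are invented, not the Aguascalientes values.

    import numpy as np

    gamma_w = 9.81               # unit weight of water [kN/m^3]
    sigma_total = 2000.0         # total vertical stress at layer mid-depth [kPa] (invented)

    def effective_stress(head_m):
        """Terzaghi: effective stress = total stress - pore pressure (hydrostatic head)."""
        return sigma_total - gamma_w * head_m

    def settlement(head_0, head_1, thickness=50.0, CR=0.2):
        """Virgin-compression settlement [m] for a head drop, log-linear in stress."""
        s0, s1 = effective_stress(head_0), effective_stress(head_1)
        return thickness * CR * np.log10(s1 / s0)

    print(settlement(head_0=100.0, head_1=80.0))   # compaction from a 20 m head decline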

  12. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show that the steepest-descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework the dynamics, rather than converging to a fixed point, reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameters' posterior, avoids both under- and overfitting along all directions of parameter space. Through the learning of pairwise Ising models from recordings of a large population of retina neurons, we show how our algorithm outperforms the steepest-descent method.
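
    For intuition, here is plain moment-matching gradient ascent on a pairwise Ising model small enough to enumerate exactly; this is the baseline steepest-ascent dynamics whose curvature and sampling-noise problems the paper's rectified algorithm addresses. System size, couplings and learning rate are arbitrary.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(6)
    n = 5
    states = np.array(list(product([-1, 1], repeat=n)))   # all 2^n spin configurations

    def model_moments(h, J):
        """Exact first and second moments of p(s) ~ exp(h.s + s.J.s/2)."""
        E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(E - E.max()); p /= p.sum()
        return p @ states, (states.T * p) @ states

    # "Data" moments generated from a hidden target model
    h_t = rng.normal(0, 0.3, n)
    J_t = rng.normal(0, 0.3, (n, n)); J_t = (J_t + J_t.T) / 2; np.fill_diagonal(J_t, 0)
    m_data, C_data = model_moments(h_t, J_t)

    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(2000):                # gradient ascent on the log-likelihood
        m, C = model_moments(h, J)
        h += 0.1 * (m_data - m)          # gradient in h: data minus model means
        J += 0.1 * (C_data - C)          # gradient in J: data minus model correlations
        np.fill_diagonal(J, 0)
    m, C = model_moments(h, J)
    print("max moment error:", max(np.abs(m - m_data).max(), np.abs(C - C_data).max()))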

  13. Planck data versus large scale structure: Methods to quantify discordance

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Battye, Richard A.; Moss, Adam

    2017-06-01

    Discordance in the Λ cold dark matter cosmological model can be seen by comparing parameters constrained by cosmic microwave background (CMB) measurements to those inferred by probes of large scale structure. Recent improvements in observations, including final data releases from both Planck and SDSS-III BOSS, as well as improved astrophysical uncertainty analysis of CFHTLenS, allow for an update in the quantification of any tension between large and small scales. This paper is intended primarily as a discussion of methods to quantify discordance when comparing the parameter constraints of a model given two different data sets. We consider the Kullback-Leibler divergence, comparison of Bayesian evidences, and other statistics that are sensitive to the mean, variance and shape of the distributions. As a byproduct, we present an update to the similar analysis in [R. A. Battye, T. Charnock, and A. Moss, Phys. Rev. D 91, 103508 (2015)], where we find that, considering new data and treatment of priors, the constraints from the CMB and from a combination of large scale structure (LSS) probes are in greater agreement, and any tension persists only to a minor degree. In particular, we find that the parameter constraints from the combination of LSS probes most discrepant with the Planck 2015+Pol+BAO parameter distributions can be quantified as a ~2.55σ tension using the method introduced in that paper. If instead we use the distributions constrained by the combination of LSS probes in greatest agreement with those from Planck 2015+Pol+BAO, this tension is only 0.76σ.
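
    As a pointer to what such statistics measure: the Kullback-Leibler divergence between two one-dimensional Gaussian posteriors has a closed form, evaluated below for invented CMB-like and LSS-like constraints on a single parameter.

    import numpy as np

    def kl_gauss(mu0, s0, mu1, s1):
        """KL(N0 || N1) for one-dimensional Gaussians with means mu and sigmas s."""
        return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

    # e.g. a parameter constrained by CMB versus LSS (illustrative values only)
    print(kl_gauss(0.83, 0.01, 0.76, 0.03))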

  14. Model‐based analysis of the influence of catchment properties on hydrologic partitioning across five mountain headwater subcatchments

    PubMed Central

    Wagener, Thorsten; McGlynn, Brian

    2015-01-01

    Ungauged headwater basins are an abundant part of the river network, but dominant influences on headwater hydrologic response remain difficult to predict. To address this gap, we investigated the ability of a physically based watershed model (the Distributed Hydrology-Soil-Vegetation Model) to represent controls on metrics of hydrologic partitioning across five adjacent headwater subcatchments. The five study subcatchments, located in Tenderfoot Creek Experimental Forest in central Montana, have similar climate but variable topography and vegetation distribution. This facilitated a comparative hydrology approach to interpret how parameters that influence partitioning, detected via global sensitivity analysis, differ across catchments. Model parameters were constrained a priori using existing regional information and expert knowledge. Influential parameters were compared to perceptions of catchment functioning and its variability across subcatchments. Despite between-catchment differences in topography and vegetation, hydrologic partitioning across all metrics and all subcatchments was sensitive to a similar subset of snow, vegetation, and soil parameters. Results also highlighted one subcatchment with low certainty in parameter sensitivity, indicating that the model poorly represented some complexities there, likely because an important process is missing or poorly characterized in the mechanistic model. For use in other basins, this method can assess parameter sensitivities as a function of the specific ungauged system to which it is applied. Overall, this approach can be employed to identify dominant modeled controls on catchment response and their agreement with system understanding. PMID:27642197
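
    For context on the global sensitivity analysis mentioned above, the sketch below estimates a first-order, variance-based sensitivity index, S_i = Var(E[Y|X_i]) / Var(Y), by binning Monte Carlo samples. The "model" is a stand-in toy function, not the Distributed Hydrology-Soil-Vegetation Model, and the parameter ranges are arbitrary.

```python
# Variance-based first-order sensitivity indices via conditional-mean binning.
import numpy as np

rng = np.random.default_rng(1)

def toy_model(x):
    # stand-in for a hydrologic partitioning metric, e.g. a runoff ratio
    return np.sin(x[:, 0]) + 0.3 * x[:, 1]**2 + 0.01 * x[:, 2]

n, d = 100_000, 3
x = rng.uniform(-np.pi, np.pi, size=(n, d))   # a priori parameter ranges
y = toy_model(x)

def first_order_index(xi, y, bins=50):
    """S_i = Var(E[Y|X_i]) / Var(Y), estimated with equal-count bins."""
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    return np.average((cond_means - y.mean())**2, weights=counts) / y.var()

for i in range(d):
    print(f"parameter {i}: S_1 ~ {first_order_index(x[:, i], y):.2f}")
```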

  15. GRAVITATIONAL-WAVE OBSERVATIONS MAY CONSTRAIN GAMMA-RAY BURST MODELS: THE CASE OF GW150914–GBM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veres, P.; Preece, R. D.; Goldstein, A.

    The possible short gamma-ray burst (GRB) observed by Fermi/GBM in coincidence with the first gravitational-wave (GW) detection offers new ways to test GRB prompt emission models. GW observations provide previously inaccessible physical parameters for the black hole central engine, such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitively ruled out. Dissipative photosphere models, however, have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density (∼10^−3 cm^−3) and a high Lorentz factor (∼2000). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Blandford–Znajek model. If future joint observations confirm the GW–short-GRB association, we can provide similar but more detailed tests for prompt emission models.

  16. Model based systems engineering (MBSE) applied to Radio Aurora Explorer (RAX) CubeSat mission operational scenarios

    NASA Astrophysics Data System (ADS)

    Spangelo, S. C.; Cutler, J.; Anderson, L.; Fosse, E.; Cheng, L.; Yntema, R.; Bajaj, M.; Delp, C.; Cole, B.; Soremekum, G.; Kaslow, D.

    Small satellites are more tightly resource-constrained in mass, power, volume, delivery timelines, and financial cost than their larger counterparts. Small satellites are operationally challenging because subsystem functions are coupled and constrained by the limited available commodities (e.g. data, energy, and access times to ground resources). Furthermore, additional operational complexities arise because small satellite components are physically integrated, which may yield thermal or radio frequency interference. In this paper, we extend our initial Model Based Systems Engineering (MBSE) framework developed for a small satellite mission by demonstrating the ability to model different behaviors and scenarios. We integrate several simulation tools to execute SysML-based behavior models, including subsystem functions and internal states of the spacecraft, and demonstrate the utility of this approach to drive the system analysis and design process. We demonstrate the applicability of the simulation environment to capture realistic satellite operational scenarios, which include energy collection, data acquisition, and downloading to ground stations. The integrated modeling environment enables users to extract feasibility, performance, and robustness metrics, and to visualize both the physical states (e.g. position, attitude) and functional states (e.g. operating points of various subsystems) of the satellite for representative mission scenarios. The modeling approach presented in this paper offers satellite designers and operators the opportunity to assess the feasibility of vehicle and network parameters, as well as the feasibility of operational schedules. This will enable future missions to benefit from using these models throughout the full design, test, and fly cycle. In particular, vehicle and network parameters and schedules can be verified prior to being implemented, during mission operations, and can also be updated in near real-time with operational performance feedback.
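
    To make the operational-scenario idea concrete, the toy Python loop below steps a satellite through one orbit of energy collection, data acquisition, and a ground-station pass. It is not the authors' SysML machinery; all capacities, rates, and the pass schedule are invented.

```python
# Toy discrete-time sketch of a commodity-constrained operational scenario:
# battery state of charge and onboard data buffer over one ~90-minute orbit.
battery_wh, battery_cap = 20.0, 30.0       # state of charge / capacity (Wh)
buffer_mb, buffer_cap = 0.0, 512.0         # onboard data buffer (MB)

for minute in range(90):
    in_sun = minute < 60                   # assumed eclipse in last 30 min
    in_pass = 40 <= minute < 50            # assumed 10-minute ground contact
    battery_wh += 0.08 if in_sun else 0.0  # solar input per minute
    battery_wh -= 0.05                     # bus load per minute
    buffer_mb = min(buffer_cap, buffer_mb + 1.5)   # payload data rate
    if in_pass and battery_wh > 5.0:       # downlink only if power allows
        buffer_mb -= min(buffer_mb, 20.0)  # link rate, MB/min
        battery_wh -= 0.10                 # radio power draw
    battery_wh = min(battery_wh, battery_cap)
    assert battery_wh > 0, "infeasible schedule: battery depleted"

print(f"end of orbit: {battery_wh:.1f} Wh, {buffer_mb:.0f} MB buffered")
```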

  17. Test of the FLRW Metric and Curvature with Strong Lens Time Delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Kai; Li, Zhengxiang; Wang, Guo-Jian

    We present a new model-independent strategy for testing the Friedmann–Lemaître–Robertson–Walker (FLRW) metric and constraining cosmic curvature, based on future time-delay measurements of strongly lensed quasar-elliptical galaxy systems from the Large Synoptic Survey Telescope and supernova observations from the Dark Energy Survey. The test relies only on geometric optics; it is independent of the energy contents of the universe and the validity of the Einstein equation on cosmological scales. The study comprises two levels: testing the FLRW metric through the distance sum rule (DSR) and determining/constraining cosmic curvature. We propose an effective and efficient (redshift) evolution model for performing the former test, which allows us to concretely specify the violation criterion for the FLRW DSR. If the FLRW metric is consistent with the observations, then on the second level the cosmic curvature parameter will be constrained to a 1σ precision of ∼0.057 or ∼0.041, depending on the availability of high-redshift supernovae, which is much more stringent than current model-independent techniques. We also show that the bias in the time-delay method might be well controlled, leading to robust results. The proposed method is a new independent tool both for testing the fundamental assumptions of homogeneity and isotropy in cosmology and for determining cosmic curvature. It is complementary to cosmic microwave background plus baryon acoustic oscillation analyses, which normally assume a cosmological model with dark energy domination in the late-time universe.
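
    For reference, the distance sum rule used on the first level of the test is commonly written as below (a sketch of the standard form, with dimensionless comoving distances d_l = d(0, z_l), d_s = d(0, z_s), d_ls = d(z_l, z_s) and curvature parameter Ω_k); a measured violation of this relation would signal a departure from the FLRW metric:

```latex
d_{ls} \;=\; \sqrt{1+\Omega_k\, d_l^{2}}\; d_s \;-\; \sqrt{1+\Omega_k\, d_s^{2}}\; d_l
```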

  18. Pareto-optimal estimates that constrain mean California precipitation change

    NASA Astrophysics Data System (ADS)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
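
    The core of the Pareto-optimal search is a non-domination test across the three skill measures. The sketch below applies such a test to random placeholder scores (lower = better); the evolutionary search used by the authors is replaced here by brute force over a small set.

```python
# Identify the Pareto front of candidate subensembles under three objectives.
import numpy as np

rng = np.random.default_rng(2)
scores = rng.random((200, 3))   # 200 hypothetical subensembles, 3 skill scores

def pareto_mask(scores):
    """True for rows not dominated by any other row (minimization)."""
    n = len(scores)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # row j dominates row i if it is <= everywhere and < somewhere
        dominated = (np.all(scores <= scores[i], axis=1)
                     & np.any(scores < scores[i], axis=1))
        if dominated.any():
            mask[i] = False
    return mask

front = scores[pareto_mask(scores)]
print(f"{len(front)} Pareto-optimal subensembles out of {len(scores)}")
```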

  19. The massive halos of spiral galaxies

    NASA Technical Reports Server (NTRS)

    Zaritsky, Dennis; White, Simon D. M.

    1994-01-01

    We use a sample of satellite galaxies to demonstrate the existence of extended massive dark halos around spiral galaxies. Isolated spirals with rotation velocities near 250 km/s have a typical halo mass within 200 kpc of 1.5-2.6 x 10^12 solar masses (90% confidence range for H_0 = 75 km/s/Mpc). This result is most easily derived using standard mass estimator techniques, but such techniques do not account for the strong observational selection effects in the sample, nor for the extended mass distributions that the data imply. These complications can be addressed using scale-free models similar to those previously employed to study binary galaxies. When satellite velocities are assumed isotropic, both methods imply massive and extended halos. However, the derived masses depend sensitively on the assumed shape of satellite orbits. Furthermore, both methods ignore the fact that many of the satellites in the sample have orbital periods comparable to the Hubble time. The orbital phases of such satellites cannot be random, and their distribution in radius cannot be freely adjusted; rather, these properties reflect ongoing infall onto the outer halos of their primaries. We use detailed dynamical models for halo formation to evaluate these problems, and we devise a maximum likelihood technique for estimating the parameters of such models from the data. The most strongly constrained parameter is the mass within 200-300 kpc, giving the confidence limits quoted above. The eccentricity, e, of satellite orbits is also strongly constrained, 0.50 < e < 0.88 at 90% confidence, implying a near-isotropic distribution of satellite velocities. The cosmic density parameter in the vicinity of our isolated halos exceeds 0.13 at 90% confidence, with preferred values exceeding 0.3.

  20. Primordial features and Planck polarization

    NASA Astrophysics Data System (ADS)

    Hazra, Dhiraj Kumar; Shafieloo, Arman; Smoot, George F.; Starobinsky, Alexei A.

    2016-09-01

    With the Planck 2015 Cosmic Microwave Background (CMB) temperature and polarization data, we search for possible features in the primordial power spectrum (PPS). We revisit the Wiggly Whipped Inflation (WWI) framework and demonstrate how the generation of some particular primordial features can improve the fit to Planck data. The WWI potential allows the scalar field to transit from a steeper potential to a nearly flat potential through a discontinuity either in the potential or in its derivatives. WWI offers inflaton potential parametrizations that generate a wide variety of features in the primordial power spectra, incorporating most of the localized and non-local inflationary features that are obtained upon reconstruction from the temperature and polarization angular power spectra. At the same time, in a single framework it allows us to perform a background parameter estimation with a nearly free-form primordial spectrum. Using Planck 2015 data, we constrain the primordial features in the context of Wiggly Whipped Inflation and present the features that are supported both by temperature and polarization. The WWI model provides an improvement of more than 13 in χ² with respect to the best-fit power-law model, considering combined temperature and polarization data from Planck and B-mode polarization data from BICEP and the Planck dust map. We use 2-4 extra parameters in the WWI model compared to the featureless strict slow-roll inflaton potential. We find that the differences between the temperature and polarization data in constraining background cosmological parameters, such as the baryon density and cold dark matter density, are reduced to a good extent if we use primordial power spectra from WWI. We also discuss the extent of the bispectra obtained from the best potentials in arbitrary triangular configurations using the BI-spectra and Non-Gaussianity Operator (BINGO).

  1. Primordial features and Planck polarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hazra, Dhiraj Kumar; Smoot, George F.; Shafieloo, Arman

    2016-09-01

    With the Planck 2015 Cosmic Microwave Background (CMB) temperature and polarization data, we search for possible features in the primordial power spectrum (PPS). We revisit the Wiggly Whipped Inflation (WWI) framework and demonstrate how the generation of some particular primordial features can improve the fit to Planck data. The WWI potential allows the scalar field to transit from a steeper potential to a nearly flat potential through a discontinuity either in the potential or in its derivatives. WWI offers inflaton potential parametrizations that generate a wide variety of features in the primordial power spectra, incorporating most of the localized and non-local inflationary features that are obtained upon reconstruction from the temperature and polarization angular power spectra. At the same time, in a single framework it allows us to perform a background parameter estimation with a nearly free-form primordial spectrum. Using Planck 2015 data, we constrain the primordial features in the context of Wiggly Whipped Inflation and present the features that are supported both by temperature and polarization. The WWI model provides an improvement of more than 13 in χ² with respect to the best-fit power-law model, considering combined temperature and polarization data from Planck and B-mode polarization data from BICEP and the Planck dust map. We use 2-4 extra parameters in the WWI model compared to the featureless strict slow-roll inflaton potential. We find that the differences between the temperature and polarization data in constraining background cosmological parameters, such as the baryon density and cold dark matter density, are reduced to a good extent if we use primordial power spectra from WWI. We also discuss the extent of the bispectra obtained from the best potentials in arbitrary triangular configurations using the BI-spectra and Non-Gaussianity Operator (BINGO).

  2. The Trend Odds Model for Ordinal Data‡

    PubMed Central

    Capuano, Ana W.; Dawson, Jeffrey D.

    2013-01-01

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520

  3. The trend odds model for ordinal data.

    PubMed

    Capuano, Ana W; Dawson, Jeffrey D

    2013-06-15

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values. We consider a trend odds version of this constrained model, wherein the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical data set is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example wherein the proportional odds assumption appears to be violated. Copyright © 2012 John Wiley & Sons, Ltd.
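
    A generic sketch of the kind of likelihood involved (not the authors' SAS Proc NLMIXED code): a cumulative-logit model whose slope is allowed a linear trend across cut-points, so that γ = 0 recovers proportional odds. The parameterization details and the simulated data are assumptions.

```python
# Cumulative-logit likelihood with a linear trend in the slope across
# cut-points, in the spirit of the trend odds model described above.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(3)
J = 4                                   # ordinal levels 0..3 -> 3 cut-points
x = rng.normal(size=300)
y = np.clip((x + rng.logistic(size=300) + 2).astype(int), 0, J - 1)

def nll(params):
    alpha = np.sort(params[:J - 1])     # ordered cut-points
    beta, gamma = params[J - 1:]        # baseline slope and trend
    j = np.arange(J - 1)
    # cumulative P(Y <= j | x); slope (beta + gamma*j) trends across cut-points
    cum = expit(alpha[None, :] - (beta + gamma * j)[None, :] * x[:, None])
    cum = np.hstack([cum, np.ones((len(x), 1))])
    cell = np.diff(np.hstack([np.zeros((len(x), 1)), cum]), axis=1)
    return -np.sum(np.log(np.clip(cell[np.arange(len(x)), y], 1e-12, None)))

fit = minimize(nll, x0=np.array([-1.0, 0.0, 1.0, 0.0, 0.0]),
               method="Nelder-Mead")
print(fit.x)    # cut-points, beta, gamma; gamma ~ 0 means proportional odds
```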

  4. Estimates of atmospheric O2 in the Paleoproterozoic from paleosols

    NASA Astrophysics Data System (ADS)

    Kanzaki, Yoshiki; Murakami, Takashi

    2016-02-01

    A weathering model was developed to constrain the partial pressure of atmospheric O2 (PO2) in the Paleoproterozoic from the Fe records in paleosols. The model describes the Fe behavior in a weathering profile by dissolution/precipitation of Fe-bearing minerals, oxidation of dissolved Fe(II) to Fe(III) by oxygen, and transport of dissolved Fe by water flow, in steady state. The model calculates ϕ, the ratio of Fe(III)-(oxyhydr)oxides precipitated from dissolved Fe(II) to the dissolved Fe(II) during weathering, as a function of PO2. An advanced kinetic expression for Fe(II) oxidation by O2 was introduced into the model from the literature to calculate accurate ϕ-PO2 relationships. The model's validity is supported by the consistency of the calculated ϕ-PO2 relationships with those in the literature. The model can calculate PO2 for a given paleosol once a ϕ value and values of the other parameters relevant to weathering, namely pH of porewater, partial pressure of carbon dioxide (PCO2), water flow, temperature, and O2 diffusion into soil, are obtained for the paleosol. These weathering-relevant parameters were scrutinized for individual Paleoproterozoic paleosols. The values of ϕ, temperature, pH, and PCO2 were obtained from the literature on the Paleoproterozoic paleosols. The water flow parameter was constrained for each paleosol from the mass balance of Si between water and rock phases and the relationships between water saturation ratio and hydraulic conductivity. The O2-diffusion parameter was calculated for each paleosol based on the equation for soil O2 concentration with the O2 transport parameters in the literature. We then conducted comprehensive PO2 calculations for individual Paleoproterozoic paleosols which reflect all uncertainties in the weathering-relevant parameters. Consequently, robust estimates of PO2 in the Paleoproterozoic were obtained: 10^−7.1 to 10^−5.4 atm at ∼2.46 Ga, 10^−5.0 to 10^−2.5 atm at ∼2.15 Ga, 10^−5.2 to 10^−1.7 atm at ∼2.08 Ga, and more than 10^−4.6 to 10^−2.0 atm at ∼1.85 Ga. Comparison of the present PO2 estimates to those in the literature suggests that a drastic rise of oxygen would not have occurred at ∼2.4 Ga, instead supporting a moderately rapid rise of oxygen at ∼2.4 Ga followed by a gradual long-term rise through the Paleoproterozoic.
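
    As a back-of-envelope illustration of why ϕ is sensitive to PO2, the sketch below uses the classic homogeneous Fe(II) oxygenation rate law, rate = k [Fe(II)] [OH⁻]² PO2 (Stumm-Lee type kinetics), with an assumed pH and porewater residence time. The numbers are illustrative placeholders, not the paper's calibrated values, and the paper's own kinetic expression is more advanced than this one.

```python
# Back-of-envelope mapping from PO2 to an oxidized Fe(II) fraction, phi.
import numpy as np

k = 8e13            # min^-1 M^-2 atm^-1, classic homogeneous rate constant
pH = 6.0            # assumed porewater pH
OH = 10**(pH - 14)  # [OH-] in mol/L
tau = 1e5           # assumed porewater residence time, minutes

for log_po2 in (-6, -4, -2):
    po2 = 10.0**log_po2                 # atm
    k_ox = k * OH**2 * po2              # pseudo-first-order rate, min^-1
    phi = 1 - np.exp(-k_ox * tau)       # fraction of Fe(II) oxidized
    print(f"PO2 = 1e{log_po2} atm -> phi ~ {phi:.3f}")
```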

  5. Constraining Dark Matter Interactions with Pseudoscalar and Scalar Mediators Using Collider Searches for Multijets plus Missing Transverse Energy.

    PubMed

    Buchmueller, Oliver; Malik, Sarah A; McCabe, Christopher; Penning, Bjoern

    2015-10-30

    The monojet search, looking for events involving missing transverse energy (E_T) plus one or two jets, is the most prominent collider dark matter search. We show that multijet searches, which look for E_T plus two or more jets, are significantly more sensitive than the monojet search for pseudoscalar- and scalar-mediated interactions. We demonstrate this in the context of a simplified model with a pseudoscalar interaction that explains the excess in GeV-energy gamma rays observed by the Fermi Large Area Telescope. We show that multijet searches already constrain a pseudoscalar interpretation of the excess in much of the parameter space where the mass of the mediator M_A is more than twice the dark matter mass m_DM. With the forthcoming run of the Large Hadron Collider at higher energies, the remaining regions of the parameter space where M_A > 2m_DM will be fully explored. Furthermore, we highlight the importance of complementing the monojet final state with multijet final states to maximize the sensitivity of the search for the production of dark matter at colliders.

  6. Dark energy and the BOOMERANG data.

    PubMed

    Amendola, L

    2001-01-08

    The recent high-quality BOOMERANG data allow the testing of many competing cosmological models. Here I present a seven-parameter likelihood analysis of dark energy models with exponential potential and explicit coupling to dark matter. The BOOMERANG data constrain the dimensionless coupling beta to be smaller than 0.1, an order of magnitude better than previous limits. In terms of the constant xi of nonminimally coupled theories, this amounts to xi<0.01. On the other hand, BOOMERANG does not have enough sensitivity to put constraints on the potential slope.

  7. Tunnelling in Dante's Inferno

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furuuchi, Kazuyuki; Sperling, Marcus, E-mail: kazuyuki.furuuchi@manipal.edu, E-mail: marcus.sperling@univie.ac.at

    2017-05-01

    We study quantum tunnelling in Dante's Inferno model of large field inflation. Such a tunnelling process, which will terminate inflation, becomes problematic if the tunnelling rate is rapid compared to the Hubble time scale at the time of inflation. Consequently, we constrain the parameter space of Dante's Inferno model by demanding a suppressed tunnelling rate during inflation. The constraints are derived and explicit numerical bounds are provided for representative examples. Our considerations are at the level of an effective field theory; hence, the presented constraints have to hold regardless of any UV completion.

  8. Long-term visibility data in the UK - how does visibility vary with meteorological and pollutant parameters?

    NASA Astrophysics Data System (ADS)

    Singh, Ajit; Bloss, William J.; Pope, Francis D.

    2016-04-01

    Poor visibility can be an indicator of poor air quality, and degradation in visibility can be hazardous to human safety; for example, low visibility can lead to accidents, particularly during winter when fogs are prevalent. The present quantitative analysis attempts to explain the influence of aerosol concentration and composition, and of meteorology, on long-term UK visibility. We use visibility data from eight UK meteorological stations which have been running since the 1950s; the site locations include urban, rural, and marine environments. Overall, most stations show a long-term trend of increasing visibility, which is indicative of reductions in aerosol pollution, especially in urban areas. Additionally, results at all sites show a very clear dependence on relative humidity, indicating the importance of aerosol hygroscopicity for the ability of aerosols to scatter radiation and hence affect visibility. The dependence of visibility on other meteorological parameters (e.g. air temperature, wind speed, and wind direction) is also investigated. To explain the long-term visibility trends and their dependence on meteorological conditions, a light extinction model was constructed incorporating historic aerosol concentrations and composition. The lack of historic aerosol size distribution and composition data, which determine hygroscopicity and refractive index, leads to an under-constrained model. Aerosol measurements from the last 10 years are used to constrain these model parameters so that their historical variation can be estimated; sensitivity analyses are used to estimate errors for the time period before regular aerosol measurements are available. Good agreement is observed between modelled and measured visibility. This work has generated a unique 60-year data set with which to understand how aerosol concentration and composition have varied over the UK. The model is applicable and easily transferable to other data sets worldwide; hence, different clean air legislation can be assessed for its effectiveness in reducing aerosol pollution. The implications for the UK will be discussed.
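
    The link from aerosol extinction to visibility in such a model typically runs through the Koschmieder relation, V = 3.912 / b_ext, combined with a hygroscopic growth factor f(RH). The sketch below assumes a simple power-law f(RH) and a generic dry mass-extinction efficiency; neither is a fitted UK value from this study.

```python
# Koschmieder visual range from aerosol loading and relative humidity.
def visibility_km(dry_mass_ugm3, rh_percent,
                  mass_ext_m2g=3.0,      # assumed dry mass-extinction, m^2/g
                  gamma=0.6,             # assumed hygroscopic growth exponent
                  b_rayleigh=1.3e-5):    # molecular scattering, m^-1
    """V = 3.912 / b_ext, with f(RH) = (1 - RH/100)^-gamma."""
    f_rh = (1 - min(rh_percent, 99.0) / 100.0) ** (-gamma)
    b_aer = mass_ext_m2g * 1e-6 * dry_mass_ugm3 * f_rh   # m^-1
    return 3.912 / (b_aer + b_rayleigh) / 1000.0

print(visibility_km(20.0, 50.0))   # moderate pollution, moderate RH
print(visibility_km(20.0, 95.0))   # same aerosol load, humid air -> shorter V
```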

  9. Ion Yields in the Coupled Chemical and Physical Dynamics Model of Matrix-Assisted Laser Desorption/Ionization

    NASA Astrophysics Data System (ADS)

    Knochenmuss, Richard

    2015-08-01

    The Coupled Chemical and Physical Dynamics (CPCD) model of matrix-assisted laser desorption/ionization has been restricted to relative rather than absolute yield comparisons because the rate constant for one step in the model was not accurately known. Recent measurements are used to constrain this constant, leading to good agreement with experimental yield versus fluence data for 2,5-dihydroxybenzoic acid. Parameters for alpha-cyano-4-hydroxycinnamic acid are also estimated, including contributions from a possible triplet state. The results are compared with the polar fluid model; the CPCD is found to give better agreement with the data.

  10. Modelling baryonic effects on galaxy cluster mass profiles

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Lau, Erwin T.; Nagai, Daisuke

    2018-06-01

    Gravitational lensing is a powerful probe of the mass distribution of galaxy clusters and cosmology. However, accurate measurements of the cluster mass profiles are limited by uncertainties in cluster astrophysics. In this work, we present a physically motivated model of baryonic effects on the cluster mass profiles, which self-consistently takes into account the impact of baryons on the concentration as well as mass accretion histories of galaxy clusters. We calibrate this model using the Omega500 hydrodynamical cosmological simulations of galaxy clusters with varying baryonic physics. Our model will enable us to simultaneously constrain cluster mass, concentration, and cosmological parameters using stacked weak lensing measurements from upcoming optical cluster surveys.

  11. A chemical model for generating the sources of mare basalts - Combined equilibrium and fractional crystallization of the lunar magmasphere

    NASA Technical Reports Server (NTRS)

    Snyder, Gregory A.; Taylor, Lawrence A.; Neal, Clive R.

    1992-01-01

    A chemical model for simulating the sources of the lunar mare basalts was developed by considering a modified mafic cumulate source formed during the combined equilibrium and fractional crystallization of a lunar magma ocean (LMO). The parameters which influence the initial LMO and its subsequent crystallization are examined, and both trace and major elements are modeled. It is shown that major elements tightly constrain the composition of mare basalt sources and the pathways to their creation. The ability of this LMO model to generate viable mare basalt source regions was tested through a case study involving the high-Ti basalts.

  12. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    NASA Astrophysics Data System (ADS)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity, and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM), varying parameters that affect climate sensitivity, vertical ocean mixing, and the effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting, and a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions and by the scaling parameter for the aerosols. The estimation method is computationally fast and can be used with more complex models where climate sensitivity is diagnosed rather than prescribed. The parameter estimates can be used to create probabilistic climate projections using the UVic ESCM model in future studies.
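
    A heavily simplified sketch of the emulator-plus-MCMC loop described above: a one-dimensional parameter, a toy stand-in for the UVic ESCM, a Gaussian-process posterior mean as the emulator, and a Metropolis sampler. Every numeric choice here (kernel scale, noise, observation) is invented.

```python
# Emulator-based Bayesian calibration of a single "climate" parameter.
import numpy as np

rng = np.random.default_rng(4)

def model(theta):                        # stand-in for an expensive EMIC run
    return np.sin(theta) + 0.2 * theta

# Emulator training: a handful of "model runs" across the prior range
theta_train = np.linspace(0.0, 5.0, 8)
y_train = model(theta_train)

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

K = rbf(theta_train, theta_train) + 1e-8 * np.eye(len(theta_train))
alpha = np.linalg.solve(K, y_train)

def emulate(theta):                      # GP posterior mean at theta
    return rbf(np.atleast_1d(theta), theta_train) @ alpha

# Metropolis sampling of p(theta | obs) with a Gaussian likelihood
obs, obs_sigma = 1.1, 0.1

def log_post(theta):
    if not 0.0 <= theta <= 5.0:          # uniform prior bounds
        return -np.inf
    return -0.5 * ((emulate(theta)[0] - obs) / obs_sigma)**2

chain, theta = [], 2.5
lp = log_post(theta)
for _ in range(20_000):
    prop = theta + 0.3 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)

print(np.percentile(chain[5000:], [5, 50, 95]))   # posterior summary
```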

  13. Mass, Radius, and Composition of the Transiting Planet 55 Cnc e: Using Interferometry and Correlations

    NASA Astrophysics Data System (ADS)

    Crida, Aurélien; Ligi, Roxanne; Dorn, Caroline; Lebreton, Yveline

    2018-06-01

    The characterization of exoplanets relies on that of their host star. However, stellar evolution models cannot always be used to derive the mass and radius of individual stars, because many stellar internal parameters are poorly constrained. Here, we use the probability density functions (PDFs) of directly measured parameters to derive the joint PDF of the stellar and planetary mass and radius. Because combining the density and radius of the star is our most reliable way of determining its mass, we find that the stellar (respectively planetary) mass and radius are strongly (respectively moderately) correlated. We then use a generalized Bayesian inference analysis to characterize the possible interiors of 55 Cnc e. We quantify how our ability to constrain the interior improves by accounting for correlation. The information content of the mass–radius correlation is also compared with refractory element abundance constraints. We provide posterior distributions for all interior parameters of interest. Given all available data, we find that the radius of the gaseous envelope is 0.08 ± 0.05 R_p. A stronger correlation between the planetary mass and radius (potentially provided by a better estimate of the transit depth) would significantly improve interior characterization and reduce drastically the uncertainty on the gas envelope properties.
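
    The correlation bookkeeping at the heart of the method can be illustrated in a few lines: draw the directly measured stellar density and radius from their PDFs, derive the mass, and the mass-radius pair comes out correlated. The means and widths below are placeholders, not the measured 55 Cnc values.

```python
# Monte Carlo propagation showing how a derived stellar mass becomes
# correlated with the stellar radius it was computed from.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

# Directly measured quantities (e.g. interferometric radius + density)
rho_star = rng.normal(1.08, 0.04, n)    # stellar density, solar units
r_star = rng.normal(0.94, 0.01, n)      # stellar radius, solar radii

m_star = rho_star * r_star**3           # derived stellar mass, solar masses
corr = np.corrcoef(m_star, r_star)[0, 1]
print(f"induced M*-R* correlation: {corr:.2f}")
# The planetary mass and radius inherit this correlation through the
# transit depth and radial-velocity amplitude, both scaled by the star.
```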

  14. Constraining a land-surface model with multiple observations by application of the MPI-Carbon Cycle Data Assimilation System V1.0

    NASA Astrophysics Data System (ADS)

    Schürmann, Gregor J.; Kaminski, Thomas; Köstler, Christoph; Carvalhais, Nuno; Voßbeck, Michael; Kattge, Jens; Giering, Ralf; Rödenbeck, Christian; Heimann, Martin; Zaehle, Sönke

    2016-09-01

    We describe the Max Planck Institute Carbon Cycle Data Assimilation System (MPI-CCDAS) built around the tangent-linear version of the JSBACH land-surface scheme, which is part of the MPI-Earth System Model v1. The simulated phenology and net land carbon balance were constrained by globally distributed observations of the fraction of absorbed photosynthetically active radiation (FAPAR, using the TIP-FAPAR product) and atmospheric CO2 at a global set of monitoring stations for the years 2005 to 2009. When constrained by FAPAR observations alone, the system successfully, and computationally efficiently, improved simulated growing-season average FAPAR, as well as its seasonality in the northern extra-tropics. When constrained by atmospheric CO2 observations alone, global net and gross carbon fluxes were improved, despite a tendency of the system to underestimate tropical productivity. Assimilating both data streams jointly allowed the MPI-CCDAS to match both observations (TIP-FAPAR and atmospheric CO2) as well as in the single-data-stream assimilation cases, thereby increasing the overall appropriateness of the simulated biosphere dynamics and underlying parameter values. Our study thus demonstrates the value of multiple-data-stream assimilation for the simulation of terrestrial biosphere dynamics. It further highlights the potential role of remote sensing data, here the TIP-FAPAR product, in stabilising the strongly underdetermined atmospheric inversion problem posed by atmospheric transport and CO2 observations alone. Notwithstanding these advances, the constraint of the observations on regional gross and net CO2 flux patterns in the MPI-CCDAS is limited by the coarse-scale parametrisation of the biosphere model. We expect improvement through a refined initialisation strategy and inclusion of further biosphere observations as constraints.
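
    Schematically, a variational system of this kind minimizes one joint cost function summing the misfit of each data stream plus a prior term. The sketch below is generic, not the MPI-CCDAS implementation; the observation operators and error covariances are placeholders.

```python
# Generic multi-data-stream variational cost function.
import numpy as np

def cost(params, obs_streams, prior_mean, prior_icov):
    """J(p) = sum over data streams of weighted misfits + prior penalty."""
    j = 0.0
    for h, y, inv_cov in obs_streams:    # h: observation operator (model map)
        r = h(params) - y                # residual vs e.g. FAPAR or CO2 data
        j += 0.5 * r @ inv_cov @ r
    dp = params - prior_mean
    return j + 0.5 * dp @ prior_icov @ dp

# Toy usage: two "streams" observing different slices of a 3-parameter vector
streams = [(lambda p: p[:2], np.array([1.0, 2.0]), np.eye(2)),
           (lambda p: p[1:], np.array([2.1, 0.9]), np.eye(2))]
print(cost(np.array([1.0, 2.0, 1.0]), streams, np.zeros(3), 0.1 * np.eye(3)))
```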

  15. Simulation of aerobic and anaerobic biodegradation processes at a crude oil spill site

    USGS Publications Warehouse

    Essaid, Hedeff I.; Bekins, Barbara A.; Godsy, E. Michael; Warren, Ean; Baedecker, Mary Jo; Cozzarelli, Isabelle M.

    1995-01-01

    A two-dimensional, multispecies reactive solute transport model with sequential aerobic and anaerobic degradation processes was developed and tested. The model was used to study the field-scale solute transport and degradation processes at the Bemidji, Minnesota, crude oil spill site. The simulations included the biodegradation of volatile and nonvolatile fractions of dissolved organic carbon by aerobic processes, manganese and iron reduction, and methanogenesis. Model parameter estimates were constrained by published Monod kinetic parameters, theoretical yield estimates, and field biomass measurements. Despite the considerable uncertainty in the model parameter estimates, results of simulations reproduced the general features of the observed groundwater plume and the measured bacterial concentrations. In the simulation, 46% of the total dissolved organic carbon (TDOC) introduced into the aquifer was degraded. Aerobic degradation accounted for 40% of the TDOC degraded. Anaerobic processes accounted for the remaining 60% of degradation of TDOC: 5% by Mn reduction, 19% by Fe reduction, and 36% by methanogenesis. Thus anaerobic processes account for more than half of the removal of DOC at this site.
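
    The degradation terms in such a model are built on Monod kinetics. The sketch below integrates a single substrate-biomass pair with textbook placeholder constants, not the calibrated Bemidji values, and with a crude explicit Euler step rather than the transport model's solver.

```python
# One-substrate, one-biomass Monod growth with first-order biomass decay.
def monod_step(S, X, dt, mu_max=1.0, Ks=0.5, Y=0.4, decay=0.05):
    """One explicit Euler step; S, X in mg/L, rates per day."""
    mu = mu_max * S / (Ks + S)          # Monod specific growth rate
    dX = (mu - decay) * X               # biomass growth minus decay
    dS = -(mu / Y) * X                  # substrate consumed per unit growth
    return max(S + dS * dt, 0.0), max(X + dX * dt, 0.0)

S, X = 10.0, 0.1
for _ in range(60):                     # 60 steps of 0.1 day = 6 days
    S, X = monod_step(S, X, dt=0.1)
print(f"after 6 days: S = {S:.2f} mg/L, X = {X:.2f} mg/L")
```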

  16. Optimal HRF and smoothing parameters for fMRI time series within an autoregressive modeling framework.

    PubMed

    Galka, Andreas; Siniatchkin, Michael; Stephani, Ulrich; Groening, Kristina; Wolff, Stephan; Bosch-Bayard, Jorge; Ozaki, Tohru

    2010-12-01

    The analysis of time series obtained by functional magnetic resonance imaging (fMRI) may be approached by fitting predictive parametric models, such as nearest-neighbor autoregressive models with exogenous input (NNARX). As a part of the modeling procedure, it is possible to apply instantaneous linear transformations to the data. Spatial smoothing, a common preprocessing step, may be interpreted as such a transformation. The autoregressive parameters may be constrained, such that they provide a response behavior that corresponds to the canonical haemodynamic response function (HRF). We present an algorithm for estimating the parameters of the linear transformations and of the HRF within a rigorous maximum-likelihood framework. Using this approach, an optimal amount of both the spatial smoothing and the HRF can be estimated simultaneously for a given fMRI data set. An example from a motor-task experiment is discussed. It is found that, for this data set, weak, but non-zero, spatial smoothing is optimal. Furthermore, it is demonstrated that activated regions can be estimated within the maximum-likelihood framework.
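
    For reference, the canonical HRF that the constrained autoregressive parameters are meant to reproduce is commonly modeled as a difference of two gamma densities. The sketch below uses widely quoted SPM-style defaults, which are an assumption here rather than this paper's exact parameterization.

```python
# Canonical double-gamma haemodynamic response function.
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, peak=6.0, under=16.0, ratio=1.0 / 6.0):
    """Difference of two gamma densities; t in seconds, unit peak height."""
    h = gamma.pdf(t, peak) - ratio * gamma.pdf(t, under)
    return h / h.max()

t = np.arange(0, 30, 0.5)
hrf = canonical_hrf(t)
print(f"peak at t = {t[hrf.argmax()]:.1f} s")   # ~5 s with these defaults
```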

  17. Hypercat - Hypercube of AGN tori

    NASA Astrophysics Data System (ADS)

    Nikutta, Robert; Lopez-Rodriguez, Enrique; Ichikawa, Kohei; Levenson, Nancy A.; Packham, Christopher C.

    2018-06-01

    AGN unification and observations hold that a dusty torus obscures the central accretion engine along some lines of sight. SEDs of dust tori have been modeled for a long time, but resolved emission morphologies have not been studied in much detail, because resolved observations have become possible only recently (VLTI, ALMA) or in the near future (TMT, ELT, GMT). Some observations challenge a simple torus model, because in several objects most of the MIR emission appears to emanate from polar regions high above the equatorial plane, i.e. not where the dust supposedly resides. We introduce our software framework and hypercube of AGN tori (Hypercat) made with CLUMPY (www.clumpy.org), a large set of images (6 model parameters + wavelength) to facilitate studies of emission and dust morphologies. We make use of Hypercat to study the morphological properties of the emission and dust distributions as a function of model parameters. We find that a simple clumpy torus can indeed produce 10-micron emission patterns extended in polar directions, with extension ratios compatible with those found in observations, and we are able to constrain the range of parameters that produce such morphologies.

  18. Status of GRMHD simulations and radiative models of Sgr A*

    NASA Astrophysics Data System (ADS)

    Mościbrodzka, Monika

    2017-01-01

    The Galactic center is a perfect laboratory for testing various theoretical models of accretion flows onto a supermassive black hole. Here, I review general relativistic magnetohydrodynamic simulations that were used to model emission from the central object, Sgr A*. These models predict the dynamical and radiative properties of hot, magnetized, thick accretion disks with jets around a Kerr black hole. Models are compared to radio-VLBI, mm-VLBI, NIR, and X-ray observations of Sgr A*. I present the recent constraints on the free parameters of the model, such as the accretion rate onto the black hole, the black hole angular momentum, and the orientation of the system with respect to our line of sight.

  19. Understanding tectonic stress and rock strength in the Nankai Trough accretionary prism, offshore SW Japan

    NASA Astrophysics Data System (ADS)

    Huffman, Katelyn A.

    Understanding the orientation and magnitude of tectonic stress in active tectonic margins like subduction zones is important for understanding fault mechanics. In the Nankai Trough subduction zone, faults in the accretionary prism are thought to have historically slipped during or immediately following deep plate boundary earthquakes, often generating devastating tsunamis. I focus on quantifying stress at two locations of interest in the Nankai Trough accretionary prism, offshore Southwest Japan. I employ a method to constrain stress magnitude that combines observations of compressional borehole failure from logging-while-drilling resistivity-at-the-bit (RAB) images with estimates of rock strength and the relationship between tectonic stress and stress at the wall of a borehole. I use the method to constrain stress at Ocean Drilling Program (ODP) Site 808 and Integrated Ocean Drilling Program (IODP) Site C0002. At Site 808, I consider a range of parameters (assumed rock strength, friction coefficient, breakout width, and fluid pressure) to explore uncertainty in stress magnitudes, and I discuss the stress results in terms of the seismic cycle. I find that a combination of increased fluid pressure and decreased friction along the frontal thrust or other weak faults could produce thrust-style failure without the entire prism being at critical-state failure, as other kinematic models of accretionary prism behavior during earthquakes imply. Rock strength is typically inferred using a failure criterion and unconfined compressive strength from empirical relations with P-wave velocity. I minimize uncertainty in rock strength by measuring it in triaxial tests on Nankai core, and I find that the strength of Nankai core is significantly less than empirical relations predict. I create a new empirical fit to these experiments and explore its implications for stress magnitude estimates; using the new fit can decrease the stress predicted by the method by as much as 4 MPa at Site C0002. I constrain stress at Site C0002 using geophysical logging data from two adjacent boreholes, drilled into the same sedimentary sequence under different drilling conditions, in a forward model that predicts breakout width over a range of horizontal stresses (where SHmax is bounded by the ratio of stresses that would produce active faulting and Shmin is constrained from leak-off tests) and rock strengths. I then compare predicted breakout widths to observed breakout widths from RAB images to determine the combination of stresses in the model that best matches real-world observations. This is the first published method to constrain both stress and strength simultaneously. Finally, I explore uncertainty in rock behavior during compressional breakout formation using a finite element model (FEM) that predicts Biot poroelastic changes in fluid pressure in rock adjacent to the borehole upon its excavation, and I explore the effect this has on rock failure, testing a range of permeabilities and rock stiffnesses. I find that when rock stiffness and permeability are in the range of what exists at Nankai, pore fluid pressure increases within ±45° of the Shmin azimuth and can weaken the wall rock, producing a wider compressional failure zone than would exist at equilibrium conditions. In a case example, I find this can lead to an overestimate of tectonic stress from compressional failures of ~2 MPa in the area of the borehole where fluid pressure increases.
In areas around the borehole where pore fluid pressure decreases (within ±45° of the SHmax azimuth), the wall rock can strengthen, which suppresses tensile failure. The implication of this research is that there are many potential pitfalls in the method used to constrain stress from borehole breakouts in Nankai Trough mudstone, mostly due to uncertainty in parameters such as strength and in underlying assumptions regarding constitutive rock behavior. More laboratory measurements and/or models of rock properties and rock constitutive behavior are needed to ensure the method accurately constrains stress magnitude. (Abstract shortened by ProQuest.)
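
    The forward model linking far-field stresses to breakout width rests on the Kirsch solution for hoop stress at a borehole wall. The sketch below uses a deliberately simple strength criterion (effective hoop stress versus a single compressive strength C0, ignoring the axial stress and intermediate-stress effects); all stress and strength values are hypothetical, not Site 808 or C0002 results.

```python
# Kirsch-solution forward model: breakout width from far-field stresses.
import numpy as np

def breakout_width_deg(shmax, shmin, pp, c0, dP=0.0):
    """Angular width (deg) of one breakout, where hoop stress exceeds C0.

    theta is measured from the SHmax azimuth; breakouts grow at the
    Shmin azimuth (theta = 90 deg), where hoop stress is greatest.
    """
    theta = np.radians(np.arange(0.0, 360.0, 0.5))
    s_hoop = (shmax + shmin
              - 2.0 * (shmax - shmin) * np.cos(2.0 * theta)
              - 2.0 * pp - dP)           # effective hoop stress at the wall
    failed = s_hoop >= c0
    return failed.sum() * 0.5 / 2.0      # degrees per breakout (two lobes)

# Hypothetical stresses in MPa: stronger rock -> narrower breakout, so an
# overestimated C0 maps to an overestimated SHmax in the inversion.
for c0 in (40.0, 50.0, 60.0):
    w = breakout_width_deg(shmax=45.0, shmin=30.0, pp=10.0, c0=c0)
    print(f"C0 = {c0:.0f} MPa -> breakout width ~ {w:.0f} deg")
```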

  20. Analysis of the Wtb Vertex from the Measurement of Triple Differential Angular Decay Rates of Single Top Quarks Produced in the t-Channel at √s = 8 TeV with the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Su, Jun

    The electroweak production and subsequent decay of single top quarks is determined by the properties of the Wtb vertex, which can be described by the complex parameters of an effective Lagrangian. An analysis of angular distributions of the decay products of single top quarks produced in the t-channel constrains these parameters simultaneously. The thesis presents an analysis using 20.2 fb⁻¹ of proton-proton collision data at a centre-of-mass energy of 8 TeV collected with the ATLAS detector at the LHC. The fraction f_1 of decays containing transversely polarised W bosons is measured to be f_1 = 0.296 +0.048 −0.051 (stat. + syst.). The phase δ− between amplitudes for transversely and longitudinally polarised W bosons recoiling against left-handed b quarks is measured to be δ− = 0.002π +0.016π −0.017π (stat. + syst.), giving no indication of CP violation. The fractions of transversely and longitudinally polarised W bosons accompanied by right-handed b quarks are also constrained at 95% C.L. to f_1^+ < 0.118 and f_0^+ < 0.085. Based on these measurements, limits are placed at 95% C.L. on the ratio of the complex coupling parameters g_R and V_L, such that Re[g_R/V_L] ∈ [−0.122, 0.168] and Im[g_R/V_L] ∈ [−0.066, 0.059]. Constraints are also placed on the magnitudes of the ratios |V_R/V_L| and |g_L/V_L|. Finally, the polarisation of single top quarks in the t-channel is constrained to be P > 0.718 (95% C.L.). None of the above measurements makes assumptions about the values of any of the other parameters or couplings, and all of them are in agreement with the Standard Model.
