Sample records for constrained model predictive

  1. A Multidimensional Item Response Model: Constrained Latent Class Analysis Using the Gibbs Sampler and Posterior Predictive Checks.

    ERIC Educational Resources Information Center

    Hoijtink, Herbert; Molenaar, Ivo W.

    1997-01-01

    This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)
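
    A hedged illustration of the posterior predictive check ingredient mentioned above (not the authors' constrained latent class model): replicated datasets are simulated from posterior draws and a discrepancy statistic is compared with its observed value. The beta-binomial setup, data, and statistic below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 200 examinees answering one dichotomous item (1 = correct).
y_obs = rng.binomial(1, 0.62, size=200)

# A conjugate Beta(1, 1) prior on the success probability gives a Beta posterior,
# so direct posterior draws stand in here for draws produced by a Gibbs sampler.
post_a, post_b = 1 + y_obs.sum(), 1 + (len(y_obs) - y_obs.sum())

def discrepancy(y):
    """Test statistic compared between observed and replicated data."""
    return y.mean()

# Posterior predictive check: simulate replicated datasets under posterior draws.
t_obs = discrepancy(y_obs)
t_rep = []
for _ in range(2000):
    theta = rng.beta(post_a, post_b)                 # posterior draw
    y_rep = rng.binomial(1, theta, size=len(y_obs))  # replicated dataset
    t_rep.append(discrepancy(y_rep))

ppp = np.mean(np.array(t_rep) >= t_obs)  # posterior predictive p-value
print(f"posterior predictive p-value: {ppp:.2f}")  # values near 0 or 1 flag misfit
```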

  2. Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.

    PubMed

    Medvigy, David; Moorcroft, Paul R

    2012-01-19

    Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2) structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.

  3. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    PubMed

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  4. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    NASA Astrophysics Data System (ADS)

    Volk, Brent L.; Lagoudas, Dimitris C.; Maitland, Duncan J.

    2011-09-01

    In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5-4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data.

  5. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    PubMed Central

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling are performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigate the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles are performed during each test. The material is observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5 MPa to 4.2 MPa is observed for the constrained displacement recovery experiments. After performing the experiments, the Chen and Lagoudas model is used to simulate and predict the experimental results. The material properties used in the constitutive model – namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction – are calibrated from a single 10% extension free recovery experiment. The model is then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data. PMID:22003272

  6. Inverse and Predictive Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syracuse, Ellen Marie

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  7. Universally Sloppy Parameter Sensitivities in Systems Biology Models

    PubMed Central

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-01-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568
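
    As a hedged sketch of the sensitivity spectrum described above, the following snippet computes the Fisher-information eigenvalues of an assumed sum-of-exponentials toy model via finite differences and reports how many decades they span; a large spread is the "sloppiness" in question. The model, parameters, and time grid are illustrative assumptions, not the paper's growth-factor-signaling model.

```python
import numpy as np

def model(params, t):
    """Toy sum-of-exponentials model standing in for a signaling model."""
    a1, a2, k1, k2 = params
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(0, 5, 50)
p0 = np.array([1.0, 0.5, 1.0, 0.3])   # illustrative best-fit parameters

# Finite-difference Jacobian of the model output with respect to log-parameters
# (log-parameters make the sensitivity spectrum scale-free).
eps = 1e-6
J = np.empty((t.size, p0.size))
for j in range(p0.size):
    dp = np.zeros_like(p0)
    dp[j] = eps * p0[j]
    # central difference ~= p0[j] * dy/dp_j = dy/d(log p_j)
    J[:, j] = (model(p0 + dp, t) - model(p0 - dp, t)) / (2 * eps)

# Approximate Fisher information / Hessian of the least-squares cost.
H = J.T @ J
eigvals = np.linalg.eigvalsh(H)[::-1]
print("eigenvalues:", eigvals)
print("decades spanned:", np.log10(eigvals[0] / eigvals[-1]))
```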

  8. Universally sloppy parameter sensitivities in systems biology models.

    PubMed

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.

  9. Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.

    PubMed

    Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei

    2015-08-01

    In this brief, the utilization of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed by incorporating the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.
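
    The record above turns a linear MPC problem into a constrained quadratic program. The sketch below shows the standard condensed-QP construction on an assumed double-integrator model and solves it with a plain projected-gradient loop in place of the paper's primal-dual neural network; dynamics, horizon, weights, and bounds are all illustrative assumptions.

```python
import numpy as np

# Illustrative double-integrator dynamics x+ = A x + B u (not the paper's missile model).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 20                      # prediction horizon
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])
u_max = 2.0                 # input (actuator) constraint

# Condense the MPC problem: stacked predicted states are linear in the stacked inputs U,
#   X = F x0 + G U, so the cost becomes 0.5 U'HU + g'U with H = G'Qbar G + Rbar.
n, m = A.shape[0], B.shape[1]
F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
H = G.T @ Qbar @ G + Rbar

def mpc_input(x0, iters=500, step=None):
    """Solve min 0.5 U'HU + g'U  s.t. |U| <= u_max by projected gradient descent."""
    g = G.T @ Qbar @ F @ x0
    step = step or 1.0 / np.linalg.eigvalsh(H).max()
    U = np.zeros(N * m)
    for _ in range(iters):
        U = np.clip(U - step * (H @ U + g), -u_max, u_max)  # gradient step + projection
    return U[:m]            # apply only the first move (receding horizon)

x = np.array([5.0, 0.0])    # initial state
for k in range(50):
    u = mpc_input(x)
    x = A @ x + B @ u
print("final state:", x)
```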

  10. Improved design of constrained model predictive tracking control for batch processes against unknown uncertainties.

    PubMed

    Wu, Sheng; Jin, Qibing; Zhang, Ridong; Zhang, Junfeng; Gao, Furong

    2017-07-01

    In this paper, an improved constrained tracking control design is proposed for batch processes under uncertainty. A new process model that facilitates process state and tracking error augmentation, with further additional tuning, is first proposed. A subsequent controller design is then formulated using robust stable constrained MPC optimization. Unlike conventional robust model predictive control (MPC), the proposed method gives the controller design more degrees of tuning freedom so that improved tracking control can be achieved, which is important because uncertainties inevitably exist in practice and cause model/plant mismatch. An injection molding process is introduced to illustrate the effectiveness of the proposed MPC approach in comparison with conventional robust MPC. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
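
    A minimal sketch of the state and tracking-error augmentation that tracking MPC designs of this kind build on, assuming a generic linear process model rather than the paper's batch-process formulation; the matrices and setpoint are invented, and the snippet only verifies that the augmented model reproduces the tracking error.

```python
import numpy as np

# Nominal process model x+ = A x + B u, output y = C x (illustrative values).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.05], [0.10]])
C = np.array([[1.0, 0.0]])
n, m, p = 2, 1, 1

# Incremental/augmented form used for tracking MPC with z = [dx; e]:
#   dx_{k+1} = A dx_k + B du_k,   e_{k+1} = e_k + C A dx_k + C B du_k
A_aug = np.block([[A,     np.zeros((n, p))],
                  [C @ A, np.eye(p)       ]])
B_aug = np.vstack([B, C @ B])

# Quick check: simulate the augmented model alongside the original one.
rng = np.random.default_rng(1)
r = 1.0                                   # setpoint
x, x_prev, u_prev = np.zeros(n), np.zeros(n), np.zeros(m)
z = np.concatenate([x - x_prev, C @ x - r])
for k in range(30):
    du = rng.normal(scale=0.1, size=m)    # arbitrary input increments for the check
    u = u_prev + du
    x_next = A @ x + B @ u
    z = A_aug @ z + B_aug @ du
    x_prev, x, u_prev = x, x_next, u
print("error carried in augmented state vs. direct error:", z[n:], C @ x - r)
```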

  11. Periodic Forced Response of Structures Having Three-Dimensional Frictional Constraints

    NASA Astrophysics Data System (ADS)

    CHEN, J. J.; YANG, B. D.; MENQ, C. H.

    2000-01-01

    Many mechanical systems have moving components that are mutually constrained through frictional contacts. When subjected to cyclic excitations, a contact interface may undergo constant changes among sticks, slips and separations, which leads to very complex contact kinematics. In this paper, a 3-D friction contact model is employed to predict the periodic forced response of structures having 3-D frictional constraints. Analytical criteria based on this friction contact model are used to determine the transitions among sticks, slips and separations of the friction contact, and subsequently the constrained force which consists of the induced stick-slip friction force on the contact plane and the contact normal load. The resulting constrained force is often a periodic function and can be considered as a feedback force that influences the response of the constrained structures. By using the Multi-Harmonic Balance Method along with Fast Fourier Transform, the constrained force can be integrated with the receptance of the structures so as to calculate the forced response of the constrained structures. It results in a set of non-linear algebraic equations that can be solved iteratively to yield the relative motion as well as the constrained force at the friction contact. This method is used to predict the periodic response of a frictionally constrained 3-d.o.f. oscillator. The predicted results are compared with those of the direct time integration method so as to validate the proposed method. In addition, the effect of super-harmonic components on the resonant response and jump phenomenon is examined.

  12. Configuration of the thermal landscape determines thermoregulatory performance of ectotherms

    PubMed Central

    Sears, Michael W.; Angilletta, Michael J.; Schuler, Matthew S.; Borchert, Jason; Dilliplane, Katherine F.; Stegman, Monica; Rusch, Travis W.; Mitchell, William A.

    2016-01-01

    Although most organisms thermoregulate behaviorally, biologists still cannot easily predict whether mobile animals will thermoregulate in natural environments. Current models fail because they ignore how the spatial distribution of thermal resources constrains thermoregulatory performance over space and time. To overcome this limitation, we modeled the spatially explicit movements of animals constrained by access to thermal resources. Our models predict that ectotherms thermoregulate more accurately when thermal resources are dispersed throughout space than when these resources are clumped. This prediction was supported by thermoregulatory behaviors of lizards in outdoor arenas with known distributions of environmental temperatures. Further, simulations showed how the spatial structure of the landscape qualitatively affects responses of animals to climate. Biologists will need spatially explicit models to predict impacts of climate change on local scales. PMID:27601639

  13. Nonlinear model predictive control of a wave energy converter based on differential flatness parameterisation

    NASA Astrophysics Data System (ADS)

    Li, Guang

    2017-01-01

    This paper presents a fast constrained optimization approach, which is tailored for nonlinear model predictive control of wave energy converters (WEC). The advantage of this approach lies in its exploitation of the differential flatness of the WEC model. This can reduce the dimension of the resulting nonlinear programming problem (NLP) derived from the continuous constrained optimal control of WEC using the pseudospectral method. The alleviation of computational burden using this approach helps to promote an economic implementation of the nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of this approach.

  14. Phase-field model of domain structures in ferroelectric thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y. L.; Hu, S. Y.; Liu, Z. K.

    A phase-field model for predicting the coherent microstructure evolution in constrained thin films is developed. It employs an analytical elastic solution derived for a constrained film with arbitrary eigenstrain distributions. The domain structure evolution during a cubic → tetragonal proper ferroelectric phase transition is studied. It is shown that the model is able to simultaneously predict the effects of substrate constraint and temperature on the volume fractions of domain variants, domain-wall orientations, domain shapes, and their temporal evolution. © 2001 American Institute of Physics.

  15. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights in different hierarchy levels and assessing the relative importance of models in each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can therefore provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, the HBMA-CC approach was applied to a remediation design in a synthetic aquifer. The design used a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to significant error in evaluating prediction variances, for two reasons. First, if only the single best model is considered, variances that stem from uncertainty in the model structure are ignored. Second, relying on a best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow a remediation design to be developed with a desired reliability. However, if only the single best model is considered, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels of the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate changed. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. The prediction variance also changed with the extraction rate: a very high extraction rate causes the prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA model is used.
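
    A hedged sketch of the Bayesian model averaging bookkeeping such a framework rests on: approximate posterior model weights (here from BIC) and the law of total variance splitting prediction variance into within-model and between-model parts, the term that is lost when only the single best model is kept. The three candidate models and their numbers are invented, not the study's groundwater models.

```python
import numpy as np

# Illustrative predictions of chloride concentration at a production well from
# three competing model propositions (e.g., alternative structures/parameters).
pred_mean = np.array([310.0, 355.0, 290.0])     # mg/L, per model
pred_var  = np.array([400.0, 250.0, 900.0])     # within-model prediction variance
log_like  = np.array([-120.3, -118.9, -123.5])  # data log-likelihood of each model
n_params  = np.array([6, 8, 5])
n_data    = 48

# Approximate posterior model weights with BIC (equal prior model probabilities assumed).
bic = -2 * log_like + n_params * np.log(n_data)
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()

# BMA prediction: law of total variance = within-model + between-model variance.
mean_bma = np.sum(w * pred_mean)
var_within = np.sum(w * pred_var)
var_between = np.sum(w * (pred_mean - mean_bma) ** 2)
print("model weights:", np.round(w, 3))
print(f"BMA mean = {mean_bma:.1f}, variance = {var_within + var_between:.1f} "
      f"(within {var_within:.1f} + between {var_between:.1f})")
# Keeping only the single best-weighted model drops the between-model term,
# which is the bias the abstract above warns about.
```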

  16. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    NASA Astrophysics Data System (ADS)

    Quinn Thomas, R.; Brooks, Evan B.; Jersild, Annika L.; Ward, Eric J.; Wynne, Randolph H.; Albaugh, Timothy J.; Dinon-Aldridge, Heather; Burkhart, Harold E.; Domec, Jean-Christophe; Fox, Thomas R.; Gonzalez-Benecke, Carlos A.; Martin, Timothy A.; Noormets, Asko; Sampson, David A.; Teskey, Robert O.

    2017-07-01

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10⁵ km² region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.
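
    A drastically reduced, hedged stand-in for the data assimilation described above: a random-walk Metropolis sampler constrains one growth-rate parameter of an assumed logistic stand-growth curve against synthetic plot observations and then propagates the posterior to a prediction. The model, prior, and data are illustrative assumptions, not DAPPER or 3-PG.

```python
import numpy as np

rng = np.random.default_rng(2)

def biomass(r, t, K=250.0, b0=20.0):
    """Toy logistic stand-growth curve standing in for a forest growth model (Mg/ha)."""
    return K / (1 + (K / b0 - 1) * np.exp(-r * t))

# Synthetic "inventory plot" observations at ages 5..25 years.
t_obs = np.array([5., 10., 15., 20., 25.])
y_obs = biomass(0.18, t_obs) + rng.normal(scale=8.0, size=t_obs.size)
sigma = 8.0

def log_post(r):
    if r <= 0 or r > 1:                  # flat prior on (0, 1]
        return -np.inf
    resid = y_obs - biomass(r, t_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior of the growth-rate parameter.
samples, r = [], 0.1
lp = log_post(r)
for _ in range(20000):
    r_new = r + rng.normal(scale=0.02)
    lp_new = log_post(r_new)
    if np.log(rng.uniform()) < lp_new - lp:
        r, lp = r_new, lp_new
    samples.append(r)
post = np.array(samples[5000:])          # drop burn-in
print(f"posterior r: {post.mean():.3f} +/- {post.std():.3f}")
print(f"predicted biomass at age 30: {biomass(post.mean(), 30.0):.1f} Mg/ha")
```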

  17. OCO-2 Column Carbon Dioxide and Biometric Data Jointly Constrain Parameterization and Projection of a Global Land Model

    NASA Astrophysics Data System (ADS)

    Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III

    2015-12-01

    Uncertainty in the predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging, for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of estimating model parameters is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of dry air mole fraction XCO2 and solar-induced fluorescence (SIF) to independently constrain estimation of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high fidelity emulator of CLM4.5 according to the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be effectively used for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters for the carbon cycle (e.g., maximum carboxylation rate, turnover time and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration). The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high fidelity emulator to constrain the model parameters, which will then be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how the sampling frequency and length could affect the model constraints and predictions.
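
    The matrix representation mentioned above can be written as dC/dt = B·u(t) + ξ(t)·A·K·C, with allocation vector B, transfer matrix A, and turnover-rate matrix K. The sketch below integrates an assumed three-pool version of that form and checks it against the closed-form steady state; pool structure, rates, and inputs are invented, not CLM4.5 values.

```python
import numpy as np

# Three-pool matrix carbon model (leaf/litter/soil), the structure behind matrix
# emulators of land models:  dC/dt = B*u(t) + xi(t) * A @ K @ C
u = 2.0e-3                      # carbon input, kgC m-2 day-1 (assumed)
B = np.array([1.0, 0.0, 0.0])   # allocation of the input to the pools
K = np.diag([1/365., 1/1095., 1/9125.])   # turnover rates = 1 / (turnover time, days)
A = np.array([[-1.0,  0.0,  0.0],         # -1 on the diagonal: loss from each pool
              [ 0.6, -1.0,  0.0],         # 60% of leaf turnover enters litter
              [ 0.0,  0.3, -1.0]])        # 30% of litter turnover enters soil
xi = 1.0                        # environmental scalar (temperature/moisture response)

# Explicit Euler integration of the pools for 50 years.
C = np.zeros(3)
dt = 1.0                        # days
for day in range(50 * 365):
    C = C + dt * (B * u + xi * A @ K @ C)

# The matrix form also gives the steady state in closed form: 0 = B u + xi A K C.
C_ss = np.linalg.solve(-xi * A @ K, B * u)
print("pools after 50 yr:", np.round(C, 3))
print("analytical steady state:", np.round(C_ss, 3))
```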

  18. Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network

    PubMed Central

    2012-01-01

    Background Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3. Conclusions Yeast 5 expands and refines the computational reconstruction of yeast metabolism and improves the predictive accuracy of a stoichiometrically constrained yeast metabolic model. It differs from previous reconstructions and models by emphasizing the distinction between the yeast metabolic reconstruction and the stoichiometrically constrained model, and makes both available as Additional file 4 and Additional file 5 and at http://yeast.sf.net/ as separate systems biology markup language (SBML) files. Through this separation, we intend to make the modeling process more accessible, explicit, transparent, and reproducible. PMID:22663945
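
    The record above distributes MATLAB scripts for flux balance analysis of the stoichiometrically constrained model. Purely as a hedged illustration of the same idea, the Python sketch below runs flux balance analysis on a tiny invented network (not Yeast 5) as a linear program with scipy.optimize.linprog: maximize a growth flux subject to steady-state mass balance and flux bounds.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny toy network: A_ext -> A -> B -> biomass, with a capped leak A -> C_ext.
# Columns = reactions [uptake, v1: A->B, v2: A->C_ext, growth: B->biomass]
S = np.array([
    [1, -1, -1,  0],   # metabolite A
    [0,  1,  0, -1],   # metabolite B
])
lb = np.array([0.0, 0.0, 0.0, 0.0])
ub = np.array([10.0, np.inf, 2.0, np.inf])   # uptake capped at 10, leak capped at 2

# FBA: maximize the growth flux subject to steady state S v = 0 and the bounds.
c = np.array([0.0, 0.0, 0.0, -1.0])          # linprog minimizes, so negate growth
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
              bounds=list(zip(lb, ub)), method="highs")
print("optimal growth flux:", -res.fun)
print("flux distribution:", np.round(res.x, 3))
```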

  19. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10⁵ km² region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.

  20. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    DOE PAGES

    Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.; ...

    2017-07-26

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10⁵ km² region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.

  21. The in situ transverse lamina strength of composite laminates

    NASA Technical Reports Server (NTRS)

    Flaggs, D. L.

    1983-01-01

    The objective of the work reported in this presentation is to determine the in situ transverse strength of a lamina within a composite laminate. From a fracture mechanics standpoint, in situ strength may be viewed as constrained cracking that has been shown to be a function of both lamina thickness and the stiffness of adjacent plies that serve to constrain the cracking process. From an engineering point of view, however, constrained cracking can be perceived as an apparent increase in lamina strength. With the growing need to design more highly loaded composite structures, the concept of in situ strength may prove to be a viable means of increasing the design allowables of current and future composite material systems. A simplified one dimensional analytical model is presented that is used to predict the strain at onset of transverse cracking. While it is accurate only for the most constrained cases, the model is important in that the predicted failure strain is seen to be a function of a lamina's thickness d and of the extensional stiffness bE theta of the adjacent laminae that constrain crack propagation in the 90 deg laminae.

  22. SITE CHARACTERIZATION TO SUPPORT MODEL DEVELOPMENT FOR CONTAMINANTS IN GROUND WATER

    EPA Science Inventory

    The development of conceptual and predictive models is an important tool to guide site characterization in support of monitoring contaminants in ground water. The accuracy of predictive models is limited by the adequacy of the input data and the assumptions made to constrain mod...

  23. Extracting falsifiable predictions from sloppy models.

    PubMed

    Gutenkunst, Ryan N; Casey, Fergal P; Waterfall, Joshua J; Myers, Christopher R; Sethna, James P

    2007-12-01

    Successful predictions are among the most compelling validations of any model. Extracting falsifiable predictions from nonlinear multiparameter models is complicated by the fact that such models are commonly sloppy, possessing sensitivities to different parameter combinations that range over many decades. Here we discuss how sloppiness affects the sorts of data that best constrain model predictions, makes linear uncertainty approximations dangerous, and introduces computational difficulties in Monte-Carlo uncertainty analysis. We also present a useful test problem and suggest refinements to the standards by which models are communicated.

  24. Stock management in hospital pharmacy using chance-constrained model predictive control.

    PubMed

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with limited work labor while minimizing the use of economic resources. The complexity of the problem resides in the random nature of the drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows taking into account explicitly the different objectives and constraints involved in the problem while the use of chance constraints provides a trade-off between conservativeness and efficiency. The solution proposed is assessed to study its implementation in two Spanish hospitals. Copyright © 2015 Elsevier Ltd. All rights reserved.
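
    A minimal, hedged sketch of how a chance constraint on stock-outs is commonly reformulated deterministically when demand is assumed Gaussian: P(stock ≥ 0) ≥ β becomes a safety-margin constraint built from the demand quantile. The demand statistics, horizon, and greedy ordering rule below are invented and much simpler than the hospital study's formulation.

```python
import numpy as np
from scipy.stats import norm

# Single-drug stock balance s_{k+1} = s_k + o_k - d_k with Gaussian daily demand.
mu_d, sigma_d = 40.0, 12.0      # assumed demand mean / std per day
beta = 0.95                     # required service level: P(no stock-out) >= beta
N = 7                           # planning horizon in days
order_max = 120.0               # daily ordering capacity (assumed)

# With Gaussian demand, P(s_k >= 0) >= beta reduces to the deterministic constraint
#   s0 + (orders placed through day k) - k*mu_d >= z_beta * sigma_d * sqrt(k)
z = norm.ppf(beta)

def plan_orders(s0):
    """Greedy feasible plan: order just enough to meet each tightened constraint."""
    orders, cum = [], 0.0
    for k in range(1, N + 1):
        required = k * mu_d + z * sigma_d * np.sqrt(k) - s0  # cumulative orders needed
        o_k = min(max(required - cum, 0.0), order_max)
        orders.append(o_k)
        cum += o_k
    return np.array(orders)

orders = plan_orders(s0=60.0)
print("order plan for the horizon:", np.round(orders, 1))
# In a receding-horizon (MPC) scheme only orders[0] is applied; the stock level is
# then measured and the whole plan is recomputed the next day.
```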

  25. SITE CHARACTERIZATION TO SUPPORT DEVELOPMENT OF CONCEPTUAL SITE MODELS AND TRANSPORT MODELS FOR MONITORING CONTAMINANTS IN GROUND WATER

    EPA Science Inventory

    The development of conceptual and predictive models is an important tool to guide site characterization in support of monitoring contaminants in ground water. The accuracy of predictive models is limited by the adequacy of the input data and the assumptions made to constrain mod...

  26. Using seismically constrained magnetotelluric inversion to recover velocity structure in the shallow lithosphere

    NASA Astrophysics Data System (ADS)

    Moorkamp, M.; Fishwick, S.; Jones, A. G.

    2015-12-01

    Typical surface wave tomography can recover well the velocity structure of the upper mantle in the depth range between 70 and 200 km. For a successful inversion, we have to constrain the crustal structure and assess the impact on the resulting models. In addition, we often observe potentially interesting features in the uppermost lithosphere which are poorly resolved and whose interpretation therefore has to be approached with great care. We are currently developing a seismically constrained magnetotelluric (MT) inversion approach with the aim of better recovering the lithospheric properties (and thus seismic velocities) in these problematic areas. We perform a 3D MT inversion constrained by a fixed seismic velocity model from surface wave tomography. In order to avoid strong bias, we only utilize information on structural boundaries to combine these two methods. Within the region that is well resolved by both methods, we can then extract a velocity-conductivity relationship. By translating the conductivities retrieved from MT into velocities in areas where the velocity model is poorly resolved, we can generate an updated velocity model and test what impact the updated velocities have on the predicted data. We test this new approach using an MT dataset acquired in central Botswana over the Okwa terrane and the adjacent Kaapvaal and Zimbabwe Cratons together with tomographic models for the region. Here, both datasets have previously been used to constrain lithospheric structure and show some similarities. We carefully assess the validity of our results by comparing with observations and petrophysical predictions for the conductivity-velocity relationship.

  27. Robust model predictive control for constrained continuous-time nonlinear systems

    NASA Astrophysics Data System (ADS)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees the actual trajectory being contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.
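
    A hedged, scalar, discrete-time sketch of the tube idea the paper develops for continuous-time nonlinear systems: an ancillary feedback u = v + K(x − z) keeps the true state in a tube around the nominal trajectory, so the nominal MPC can be solved with constraints tightened by the tube radius. The gains, bounds, and dynamics are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar illustration: true system x+ = x + u + w with |w| <= w_max,
# nominal system z+ = z + v, ancillary feedback u = v + K (x - z).
a, b = 1.0, 1.0
K = -0.5                      # makes the error dynamics e+ = (a + b*K) e + w contractive
w_max = 0.05
x_limit = 2.0                 # state constraint |x| <= x_limit

# Tube radius: the error obeys |e+| <= |a + b*K| |e| + w_max, whose fixed point is
e_bound = w_max / (1 - abs(a + b * K))
print("tube half-width:", e_bound)
print("tightened constraint for the nominal MPC: |z| <=", x_limit - e_bound)

# Closed-loop check that the true trajectory stays inside the tube around the nominal one.
z = x = 1.5
for k in range(60):
    v = -0.3 * z                       # stand-in for the nominal dual-mode MPC law
    u = v + K * (x - z)
    x = a * x + b * u + rng.uniform(-w_max, w_max)
    z = a * z + b * v
    assert abs(x - z) <= e_bound + 1e-9
print("final nominal / true state:", round(z, 4), round(x, 4))
```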

  28. Empirical models of Jupiter's interior from Juno data. Moment of inertia and tidal Love number k2

    NASA Astrophysics Data System (ADS)

    Ni, Dongdong

    2018-05-01

    Context. The Juno spacecraft has significantly improved the accuracy of gravitational harmonic coefficients J4, J6 and J8 during its first two perijoves. However, there are still differences in the interior model predictions of core mass and envelope metallicity because of the uncertainties in the hydrogen-helium equations of state. New theoretical approaches or observational data are hence required in order to further constrain the interior models of Jupiter. A well constrained interior model of Jupiter is helpful for understanding not only the dynamic flows in the interior, but also the formation history of giant planets. Aims: We present the radial density profiles of Jupiter fitted to the Juno gravity field observations. Also, we aim to investigate our ability to constrain the core properties of Jupiter using its moment of inertia and tidal Love number k2 which could be accessible by the Juno spacecraft. Methods: In this work, the radial density profile was constrained by the Juno gravity field data within the empirical two-layer model in which the equations of state are not needed as an input model parameter. Different two-layer models are constructed in terms of core properties. The dependence of the calculated moment of inertia and tidal Love number k2 on the core properties was investigated in order to discern their abilities to further constrain the internal structure of Jupiter. Results: The calculated normalized moment of inertia (NMOI) ranges from 0.2749 to 0.2762, in reasonable agreement with the other predictions. There is a good correlation between the NMOI value and the core properties including masses and radii. Therefore, measurements of NMOI by Juno can be used to constrain both the core mass and size of Jupiter's two-layer interior models. For the tidal Love number k2, the degeneracy of k2 is found and analyzed within the two-layer interior model. In spite of this, measurements of k2 can still be used to further constrain the core mass and size of Jupiter's two-layer interior models.
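
    A hedged illustration of how a normalized moment of inertia follows from a two-layer radial density profile, using round invented numbers rather than the paper's fitted Jupiter profiles.

```python
import numpy as np

# Two-layer toy planet: a constant-density core inside a constant-density envelope.
R = 6.9911e7                 # outer radius in metres (Jupiter's mean radius)
r_core = 0.15 * R            # assumed core radius
rho_core = 10000.0           # assumed core density, kg m^-3
rho_env = 1200.0             # assumed envelope density, kg m^-3

r = np.linspace(0.0, R, 200001)
dr = r[1] - r[0]
rho = np.where(r < r_core, rho_core, rho_env)

# For a spherically symmetric body:
#   M = 4*pi * integral(rho * r^2 dr),  I = (8*pi/3) * integral(rho * r^4 dr)
M = 4.0 * np.pi * np.sum(rho * r**2) * dr
I = (8.0 * np.pi / 3.0) * np.sum(rho * r**4) * dr
nmoi = I / (M * R**2)        # normalized moment of inertia; 0.4 for a uniform sphere
print(f"mass = {M:.3e} kg, NMOI = {nmoi:.4f}")
```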

  29. Exploring stellar evolution with gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Dvorkin, Irina; Uzan, Jean-Philippe; Vangioni, Elisabeth; Silk, Joseph

    2018-05-01

    Recent detections of gravitational waves from merging binary black holes opened new possibilities to study the evolution of massive stars and black hole formation. In particular, stellar evolution models may be constrained on the basis of the differences in the predicted distribution of black hole masses and redshifts. In this work we propose a framework that combines galaxy and stellar evolution models and use it to predict the detection rates of merging binary black holes for various stellar evolution models. We discuss the prospects of constraining the shape of the time delay distribution of merging binaries using just the observed distribution of chirp masses. Finally, we consider a generic model of primordial black hole formation and discuss the possibility of distinguishing it from stellar-origin black holes.

  30. Hamiltonian Effective Field Theory Study of the N^{*}(1535) Resonance in Lattice QCD.

    PubMed

    Liu, Zhan-Wei; Kamleh, Waseem; Leinweber, Derek B; Stokes, Finn M; Thomas, Anthony W; Wu, Jia-Jun

    2016-02-26

    Drawing on experimental data for baryon resonances, Hamiltonian effective field theory (HEFT) is used to predict the positions of the finite-volume energy levels to be observed in lattice QCD simulations of the lowest-lying J^{P}=1/2^{-} nucleon excitation. In the initial analysis, the phenomenological parameters of the Hamiltonian model are constrained by experiment and the finite-volume eigenstate energies are a prediction of the model. The agreement between HEFT predictions and lattice QCD results obtained on volumes with spatial lengths of 2 and 3 fm is excellent. These lattice results also admit a more conventional analysis where the low-energy coefficients are constrained by lattice QCD results, enabling a determination of resonance properties from lattice QCD itself. Finally, the role and importance of various components of the Hamiltonian model are examined.

  31. Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.

    PubMed

    O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao

    2017-07-01

    Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.
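
    A hedged numerical sketch of the MaxEnt recipe described above: maximize entropy over abundances subject to a single state-variable constraint (here a fixed mean abundance), which yields an exponential form whose Lagrange multiplier is tuned until the constraint is met. The abundance range and mean are invented.

```python
import numpy as np

# MaxEnt over species abundances n = 1..N subject to one state-variable constraint,
# a fixed mean abundance. The solution has the form p_n proportional to exp(-lam*n);
# the Lagrange multiplier lam is found here by bisection.
N = 1000
n = np.arange(1, N + 1)
target_mean = 12.0                      # assumed state variable (mean abundance)

def mean_of(lam):
    w = np.exp(-lam * n)
    return np.sum(w * n) / np.sum(w)

lo, hi = 1e-6, 5.0                      # mean_of is decreasing in lam on this bracket
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if mean_of(mid) > target_mean:
        lo = mid                        # lam too small: distribution too spread out
    else:
        hi = mid
lam = 0.5 * (lo + hi)
p = np.exp(-lam * n)
p /= p.sum()
entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
print(f"lambda = {lam:.4f}, achieved mean = {np.sum(p * n):.2f}, entropy = {entropy:.3f} nats")
# A mechanistic model's predicted abundance distribution can then be compared against
# this null distribution, which uses only the constrained state variable.
```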

  32. Vibration control of beams using stand-off layer damping: finite element modeling and experiments

    NASA Astrophysics Data System (ADS)

    Chaudry, A.; Baz, A.

    2006-03-01

    Damping treatments with stand-off layer (SOL) have been widely accepted as an attractive alternative to conventional constrained layer damping (CLD) treatments. Such an acceptance stems from the fact that the SOL, which is simply a slotted spacer layer sandwiched between the viscoelastic layer and the base structure, acts as a strain magnifier that considerably amplifies the shear strain and hence the energy dissipation characteristics of the viscoelastic layer. Accordingly, more effective vibration suppression can be achieved by using SOL as compared to employing CLD. In this paper, a comprehensive finite element model of the stand-off layer constrained damping treatment is developed. The model accounts for the geometrical and physical parameters of the slotted SOL, the viscoelastic layer, the constraining layer, and the base structure. The predictions of the model are validated against the predictions of a distributed transfer function model and a model built using a commercial finite element code (ANSYS). Furthermore, the theoretical predictions are validated experimentally for passive SOL treatments of different configurations. The obtained results indicate a close agreement between theory and experiments. Furthermore, the obtained results demonstrate the effectiveness of the CLD with SOL in enhancing the energy dissipation as compared to the conventional CLD. Extension of the proposed one-dimensional CLD with SOL to more complex structures is a natural extension to the present study.

  33. Experimental Validation of a Thermoelastic Model for SMA Hybrid Composites

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2001-01-01

    This study presents results from experimental validation of a recently developed model for predicting the thermomechanical behavior of shape memory alloy hybrid composite (SMAHC) structures, composite structures with an embedded SMA constituent. The model captures the material nonlinearity of the material system with temperature and is capable of modeling constrained, restrained, or free recovery behavior from experimental measurement of fundamental engineering properties. A brief description of the model and analysis procedures is given, followed by an overview of a parallel effort to fabricate and characterize the material system of SMAHC specimens. Static and dynamic experimental configurations for the SMAHC specimens are described and experimental results for thermal post-buckling and random response are presented. Excellent agreement is achieved between the measured and predicted results, fully validating the theoretical model for constrained recovery behavior of SMAHC structures.

  34. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    PubMed Central

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; Tao, Yujie; Egolfopoulos, Fokion N.; Wang, Hai

    2016-01-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could reduce notably the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel. PMID:27890938
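
    The published MUM-PCE method works with polynomial chaos expansions of a full kinetic model; as a loose, hedged stand-in for its core effect, the sketch below applies a linearized Bayesian update to two normalized rate parameters using three assumed flame-speed sensitivities and measurement uncertainties, and shows how the constrained covariance shrinks the uncertainty of a further prediction. All numbers are invented.

```python
import numpy as np

# Linearized toy: flame-speed predictions respond to two normalized log-rate
# parameters via a sensitivity (Jacobian) matrix J, i.e. y ~ y0 + J x, where x is
# expressed in prior standard deviations. All numbers below are invented.
J = np.array([[4.0, 0.5],        # cm/s change per 1-sigma change of each rate constant
              [3.5, 0.8],
              [1.0, 2.5]])
prior_cov = np.eye(2)            # parameters normalized to unit prior variance
meas_sigma = np.array([1.0, 1.0, 1.5])    # flame-speed measurement uncertainties, cm/s
R_inv = np.diag(1.0 / meas_sigma**2)

# Constrained (posterior) parameter covariance after assimilating the measurements:
#   C_post = (C_prior^-1 + J^T R^-1 J)^-1
post_cov = np.linalg.inv(np.linalg.inv(prior_cov) + J.T @ R_inv @ J)

# Propagate prior and constrained covariances to an unmeasured prediction target.
j_new = np.array([2.0, 1.5])     # sensitivity of the new target to the two parameters
sd_prior = np.sqrt(j_new @ prior_cov @ j_new)
sd_post = np.sqrt(j_new @ post_cov @ j_new)
print("parameter sigmas, prior -> constrained:",
      np.sqrt(np.diag(prior_cov)), "->", np.round(np.sqrt(np.diag(post_cov)), 3))
print(f"prediction uncertainty (1-sigma): {sd_prior:.2f} -> {sd_post:.2f} cm/s")
```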

  35. Chemical kinetic model uncertainty minimization through laminar flame speed measurements.

    PubMed

    Park, Okjoo; Veloo, Peter S; Sheen, David A; Tao, Yujie; Egolfopoulos, Fokion N; Wang, Hai

    2016-10-01

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with C1-2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358-2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-4 hydrocarbons remains significant without considering fuel-specific laminar flame speeds in the constraining target data set, because the secondary rate controlling reaction steps are different from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels could reduce notably the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel.

  36. Combined constraints on the structure and physical properties of the East Antarctic lithosphere from geology and geophysics.

    NASA Astrophysics Data System (ADS)

    Reading, A. M.; Staal, T.; Halpin, J.; Whittaker, J. M.; Morse, P. E.

    2017-12-01

    The lithosphere of East Antarctica is one of the least explored regions of the planet, yet it is gaining in importance in global scientific research. Continental heat flux density and 3D glacial isostatic adjustment studies, for example, rely on a good knowledge of the deep structure in constraining model inputs. In this contribution, we use a multidisciplinary approach to constrain lithospheric domains. To seismic tomography models, we add constraints from magnetic studies and also new geological constraints. Geological knowledge exists around the periphery of East Antarctica and is reinforced in the knowledge of plate tectonic reconstructions. The subglacial geology of the Antarctic hinterland is largely unknown but the plate reconstructions allow the well-posed extrapolation of major terranes into the interior of the continent, guided by the seismic tomography and magnetic images. We find that the northern boundary of the lithospheric domain centred on the Gamburtsev Subglacial Mountains has a possible trend that runs south of the Lambert Glacier region, turning coastward through Wilkes Land. Other periphery-to-interior connections are less well constrained and the possibility of lithospheric domains that are entirely sub-glacial is high. We develop this framework to include a probabilistic method of handling alternate models and quantifiable uncertainties. We also show first results in using a Bayesian approach to predicting lithospheric boundaries from multivariate data. Within the newly constrained domains, we constrain heat flux (density) as the sum of basal heat flux and upper crustal heat flux. The basal heat flux is constrained by geophysical methods while the upper crustal heat flux is constrained by geology or predicted geology. In addition to heat flux constraints, we also consider the variations in friction experienced by moving ice sheets due to varying geology.

  17. Testing a Constrained MPC Controller in a Process Control Laboratory

    ERIC Educational Resources Information Center

    Ricardez-Sandoval, Luis A.; Blankespoor, Wesley; Budman, Hector M.

    2010-01-01

    This paper describes an experiment performed by fourth-year chemical engineering students in the process control laboratory at the University of Waterloo. The objective of this experiment is to test the capabilities of a constrained Model Predictive Controller (MPC) to control the operation of a Double Pipe Heat Exchanger (DPHE) in real time.…

  18. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamic subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
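
    As an illustration of the modelling idea only (not the authors' implementation), the sketch below couples a one-state ODE to a two-component Gaussian mixture likelihood: each subpopulation has its own kinetic rate, the ODE supplies the subpopulation mean response, and the mixture weight captures the subpopulation fractions. All names, rates and data are hypothetical placeholders.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def response(t, k):
    """Mean response of a subpopulation with rate k: dx/dt = k*(1 - x), x(0) = 0."""
    return odeint(lambda x, t: k * (1.0 - x), 0.0, t)[:, 0]

def neg_log_likelihood(params, t, y):
    """ODE-constrained two-component mixture: subpopulation means come from the ODE."""
    k1, k2, w, sigma = params
    m1, m2 = response(t, k1), response(t, k2)
    # y has shape (n_cells, n_times); each cell belongs to one (unknown) subpopulation
    def comp_ll(m):
        return (-0.5 * np.sum(((y - m) / sigma) ** 2, axis=1)
                - y.shape[1] * np.log(sigma * np.sqrt(2 * np.pi)))
    ll = np.logaddexp(np.log(w) + comp_ll(m1), np.log(1 - w) + comp_ll(m2))
    return -np.sum(ll)

# Synthetic single-cell time courses from two subpopulations with different kinetics
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 20)
y = np.vstack([response(t, 0.3) + 0.05 * rng.standard_normal(t.size) for _ in range(60)]
              + [response(t, 1.2) + 0.05 * rng.standard_normal(t.size) for _ in range(40)])

fit = minimize(neg_log_likelihood, x0=[0.2, 1.0, 0.5, 0.1], args=(t, y),
               bounds=[(1e-3, 5), (1e-3, 5), (1e-3, 1 - 1e-3), (1e-3, 1)])
print(fit.x)   # estimated rates, subpopulation fraction, and noise level
```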

  19. Sensitivity to gaze-contingent contrast increments in naturalistic movies: An exploratory report and model comparison

    PubMed Central

    Wallis, Thomas S. A.; Dorr, Michael; Bex, Peter J.

    2015-01-01

    Sensitivity to luminance contrast is a prerequisite for all but the simplest visual systems. To examine contrast increment detection performance in a way that approximates the natural environmental input of the human visual system, we presented contrast increments gaze-contingently within naturalistic video freely viewed by observers. A band-limited contrast increment was applied to a local region of the video relative to the observer's current gaze point, and the observer made a forced-choice response to the location of the target (≈25,000 trials across five observers). We present exploratory analyses showing that performance improved as a function of the magnitude of the increment and depended on the direction of eye movements relative to the target location, the timing of eye movements relative to target presentation, and the spatiotemporal image structure at the target location. Contrast discrimination performance can be modeled by assuming that the underlying contrast response is an accelerating nonlinearity (arising from a nonlinear transducer or gain control). We implemented one such model and examined the posterior over model parameters, estimated using Markov-chain Monte Carlo methods. The parameters were poorly constrained by our data; parameters constrained using strong priors taken from previous research showed poor cross-validated prediction performance. Atheoretical logistic regression models were better constrained and provided similar prediction performance to the nonlinear transducer model. Finally, we explored the properties of an extended logistic regression that incorporates both eye movement and image content features. Models of contrast transduction may be better constrained by incorporating data from both artificial and natural contrast perception settings. PMID:26057546

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cembranos, Jose A. R.; Diaz-Cruz, J. Lorenzo; Prado, Lilian

    Dark Matter direct detection experiments are able to exclude interesting parameter space regions of particle models which predict a significant amount of thermal relics. We use recent data to constrain the branon model and to compute the region that is favored by CDMS measurements. Within this work, we also update present collider constraints with new studies coming from the LHC. Despite the presently low luminosity, it is remarkable that for heavy branons, CMS and ATLAS measurements are already more constraining than previous analyses performed with Tevatron and LEP data.

  1. A Two-stage Approach for Water Demand Prediction under Constrained total water use and Water Environmental Capacity

    NASA Astrophysics Data System (ADS)

    He, Y.; Xiaohong, C.; Lin, K.; Wang, Z.

    2016-12-01

    Water demand (WD) is the basis for water allocation (WA) because it can fully reflect the pressure on water resources from population and socioeconomic development. To deal with the great uncertainties and the absence of consideration of water environmental capacity (WEC) in traditional water demand prediction methods, e.g. statistical models, system dynamics and quota methods, this study develops a two-stage approach to predict WD under constrained total water use from the perspective of ecological constraints. Regional total water demand (RTWD) is constrained by WEC, the available water resources amount and the total water use quota. Based on RTWD, WD is allocated in two stages according to game theory, including predicting sub-regional total water demand (SRWD) by calculating the sub-region weights based on the selected indicators of socioeconomic development and predicting industrial water demand (IWD) according to game theory. Taking the Dongjiang River basin in South China as an example of WD prediction, according to its constrained total water use quota and WEC, RTWD in 2020 is 9.83 billion m3, and IWD for agriculture, industry, services, ecology (off-stream), and domestic use are 2.32 billion m3, 3.79 billion m3, 0.75 billion m3, 0.18 billion m3 and 1.79 billion m3, respectively. The results from this study provide useful insights for effective water allocation under climate change and the strict policy of water resources management.

  2. Uncertainty assessment and implications for data acquisition in support of integrated hydrologic models

    NASA Astrophysics Data System (ADS)

    Brunner, Philip; Doherty, J.; Simmons, Craig T.

    2012-07-01

    The data set used for calibration of regional numerical models which simulate groundwater flow and vadose zone processes is often dominated by head observations. It is therefore to be expected that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are: (1) Linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types. Its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels, vadose zone ET or soil moisture poorly constrain regional groundwater system forcing functions.

  3. A Method to Constrain Genome-Scale Models with 13C Labeling Data

    PubMed Central

    García Martín, Héctor; Kumar, Vinay Satish; Weaver, Daniel; Ghosh, Amit; Chubukov, Victor; Mukhopadhyay, Aindrila; Arkin, Adam; Keasling, Jay D.

    2015-01-01

    Current limitations in quantitatively predicting biological behavior hinder our efforts to engineer biological systems to produce biofuels and other desired chemicals. Here, we present a new method for calculating metabolic fluxes, key targets in metabolic engineering, that incorporates data from 13C labeling experiments and genome-scale models. The data from 13C labeling experiments provide strong flux constraints that eliminate the need to assume an evolutionary optimization principle such as the growth rate optimization assumption used in Flux Balance Analysis (FBA). This effective constraining is achieved by making the simple but biologically relevant assumption that flux flows from core to peripheral metabolism and does not flow back. The new method is significantly more robust than FBA with respect to errors in genome-scale model reconstruction. Furthermore, it can provide a comprehensive picture of metabolite balancing and predictions for unmeasured extracellular fluxes as constrained by 13C labeling data. A comparison shows that the results of this new method are similar to those found through 13C Metabolic Flux Analysis (13C MFA) for central carbon metabolism but, additionally, it provides flux estimates for peripheral metabolism. The extra validation gained by matching 48 relative labeling measurements is used to identify where and why several existing COnstraint Based Reconstruction and Analysis (COBRA) flux prediction algorithms fail. We demonstrate how to use this knowledge to refine these methods and improve their predictive capabilities. This method provides a reliable base upon which to improve the design of biological systems. PMID:26379153
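
    A highly simplified sketch of the constraint structure described above (not the paper's algorithm) is given below: stoichiometric balance is imposed as equality constraints, fluxes fixed by hypothetical 13C measurements receive tight bounds, irreversibility prevents flux flowing back from peripheral to core metabolism, and a representative flux distribution is then selected, here by minimizing total flux, which is only an illustrative choice. The toy network and all numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v0 uptake -> A, v1: A -> B (core), v2: B -> biomass,
# v3: B -> P (peripheral), v4: P export.  Steady state requires S @ v = 0.
S = np.array([[ 1, -1,  0,  0,  0],   # metabolite A
              [ 0,  1, -1, -1,  0],   # metabolite B
              [ 0,  0,  0,  1, -1]])  # metabolite P

n = S.shape[1]
lb = np.zeros(n)          # irreversibility: no flow back from peripheral to core
ub = np.full(n, 10.0)

# Fluxes constrained by hypothetical 13C measurements, with +/- 5% bands
lb[0], ub[0] = 0.95 * 5.0, 1.05 * 5.0   # measured uptake
lb[1], ub[1] = 0.95 * 5.0, 1.05 * 5.0   # measured core flux

# Pick one representative point of the constrained space: minimize total flux
c = np.ones(n)
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
              bounds=list(zip(lb, ub)), method="highs")
print(res.x)   # flux distribution consistent with stoichiometry and the 13C bounds
```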

  4. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    DOE PAGES

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.; ...

    2016-07-25

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-C4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with the C1-C2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-C4 hydrocarbons remains significant unless fuel-specific laminar flame speeds are included in the constraining target data set, because the secondary rate-controlling reaction steps differ from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels can notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel.
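
    To make the flavor of the MUM-PCE constraining step concrete, here is a minimal sketch under strong simplifications: each flame speed prediction is approximated by a first-order response surface in normalized rate-parameter factors with a unit-normal prior, the factors are conditioned on the measurements by weighted least squares, and the posterior covariance quantifies the reduced prediction uncertainty. The sensitivities, nominal values and data below are hypothetical placeholders rather than values from the study, and the actual method uses second-order polynomial chaos expansions and the full target set.

```python
import numpy as np

# Hypothetical first-order response surfaces: S_i(z) ~ S0_i + J_i . z,
# where z are normalized rate-parameter factors with prior z ~ N(0, I).
S0 = np.array([38.0, 41.0, 36.0])            # nominal flame-speed predictions (cm/s)
J = np.array([[2.0, 0.5, 0.1],               # sensitivity of each target to each factor
              [1.8, 0.7, 0.2],
              [1.5, 0.3, 0.9]])
S_obs = np.array([36.5, 40.0, 37.0])         # measured flame speeds (cm/s)
sigma = np.array([1.0, 1.0, 1.0])            # 1-sigma measurement uncertainties

# Posterior for z: minimize ||(J z - (S_obs - S0)) / sigma||^2 + ||z||^2
A = J / sigma[:, None]
b = (S_obs - S0) / sigma
post_cov = np.linalg.inv(A.T @ A + np.eye(J.shape[1]))
z_map = post_cov @ A.T @ b

# Uncertainty of a new prediction with sensitivity vector j_new, before and after
j_new = np.array([1.7, 0.4, 0.3])
prior_sd = np.sqrt(j_new @ j_new)            # under the prior z ~ N(0, I)
post_sd = np.sqrt(j_new @ post_cov @ j_new)  # constrained by the flame-speed data
print(z_map, prior_sd, post_sd)
```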

  5. Chemical kinetic model uncertainty minimization through laminar flame speed measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Okjoo; Veloo, Peter S.; Sheen, David A.

    Laminar flame speed measurements were carried out for mixtures of air with eight C3-C4 hydrocarbons (propene, propane, 1,3-butadiene, 1-butene, 2-butene, iso-butene, n-butane, and iso-butane) at room temperature and ambient pressure. Along with the C1-C2 hydrocarbon data reported in a recent study, the entire dataset was used to demonstrate how laminar flame speed data can be utilized to explore and minimize the uncertainties in a reaction model for foundation fuels. The USC Mech II kinetic model was chosen as a case study. The method of uncertainty minimization using polynomial chaos expansions (MUM-PCE) (D.A. Sheen and H. Wang, Combust. Flame 2011, 158, 2358–2374) was employed to constrain the model uncertainty for laminar flame speed predictions. Results demonstrate that a reaction model constrained only by the laminar flame speed values of methane/air flames notably reduces the uncertainty in the predictions of the laminar flame speeds of C3 and C4 alkanes, because the key chemical pathways of all of these flames are similar to each other. The uncertainty in model predictions for flames of unsaturated C3-C4 hydrocarbons remains significant unless fuel-specific laminar flame speeds are included in the constraining target data set, because the secondary rate-controlling reaction steps differ from those in the saturated alkanes. It is shown that the constraints provided by the laminar flame speeds of the foundation fuels can notably reduce the uncertainties in the predictions of laminar flame speeds of C4 alcohol/air mixtures. Furthermore, it is demonstrated that an accurate prediction of the laminar flame speed of a particular C4 alcohol/air mixture is better achieved through measurements for key molecular intermediates formed during the pyrolysis and oxidation of the parent fuel.

  6. ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics

    PubMed Central

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.

    2014-01-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamic subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156

  7. PREDICTING CME EJECTA AND SHEATH FRONT ARRIVAL AT L1 WITH A DATA-CONSTRAINED PHYSICAL MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hess, Phillip; Zhang, Jie, E-mail: phess4@gmu.edu

    2015-10-20

    We present a method for predicting the arrival of a coronal mass ejection (CME) flux rope in situ, as well as the sheath of solar wind plasma accumulated ahead of the driver. For faster CMEs, the front of this sheath will be a shock. The method is based upon separate geometrical measurements of the CME ejecta and sheath. These measurements are used to constrain a drag-based model, improved by including both a height dependence and accurate de-projected velocities. We also constrain the geometry of the model to determine the error introduced as a function of the deviation of the CME nose from the Sun–Earth line. The CME standoff distance in the heliosphere is also calculated, fit, and combined with the ejecta model to determine the sheath arrival. Combining these factors allows us to create predictions for both fronts at the L1 point and compare them against observations. We demonstrate an ability to predict the sheath arrival with an average error of under 3.5 hr, with an rms error of about 1.58 hr. For the ejecta the error is less than 1.5 hr, with an rms error within 0.76 hr. We also discuss the physical implications of our model for CME expansion and density evolution. We show the power of our method with ideal data and demonstrate the practical implications of having a permanent L5 observer with space weather forecasting capabilities, while also discussing the limitations of the method that will have to be addressed in order to create a real-time forecasting tool.

  8. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model - Part 1: Assessing the influence of constrained multi-generational ageing

    NASA Astrophysics Data System (ADS)

    Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.

    2015-09-01

    Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but not fragmentation reactions, (2) have not been constrained to experimental data, and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the Statistical Oxidation Model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional UCD/CIT air quality model and applied to air quality episodes in California and the eastern US. The mass, composition and properties of SOA predicted using SOM are compared to SOA predictions generated by a traditional "two-product" model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under the conditions tested. Consequently, the choice between low- and high-NOx yields perturbs SOA concentrations by a factor of two and is probably a much stronger determinant in 3-D models than constrained multi-generational oxidation. While total predicted SOA mass is similar for the SOM and two-product models, the SOM model predicts increased SOA contributions from anthropogenic precursors (alkanes, aromatics) and sesquiterpenes and decreased SOA contributions from isoprene and monoterpenes relative to the two-product model calculations. The SOA predicted by SOM has a much lower volatility than that predicted by the traditional model, resulting in better qualitative agreement with volatility measurements of ambient OA. On account of its lower volatility, the SOA mass produced by SOM does not appear to be as strongly influenced by the inclusion of oligomerization reactions, whereas the two-product model relies heavily on oligomerization to form low-volatility SOA products. Finally, we consider an unconstrained contemporary hybrid scheme that models multi-generational oxidation within the framework of a two-product model by adding "ageing" reactions on top of the existing two-product parameterization. This hybrid scheme formed at least three times more SOA than the SOM during regional simulations as a result of excessive transformation of semi-volatile vapors into lower volatility material that strongly partitions to the particle phase. This finding suggests that these "hybrid" multi-generational schemes should be used with great caution in regional models.

  9. Concurrent prediction of muscle and tibiofemoral contact forces during treadmill gait.

    PubMed

    Guess, Trent M; Stylianou, Antonis P; Kia, Mohammad

    2014-02-01

    Detailed knowledge of knee kinematics and dynamic loading is essential for improving the design and outcomes of surgical procedures, tissue engineering applications, prosthetics design, and rehabilitation. This study used publicly available data provided by the "Grand Challenge Competition to Predict in-vivo Knee Loads" for the 2013 American Society of Mechanical Engineers Summer Bioengineering Conference (Fregly et al., 2012, "Grand Challenge Competition to Predict in vivo Knee Loads," J. Orthop. Res., 30, pp. 503-513) to develop a full-body musculoskeletal model with subject-specific right leg geometries that can concurrently predict muscle forces, ligament forces, and knee and ground contact forces. The model includes representation of foot/floor interactions, and predicted tibiofemoral joint loads were compared to measured tibial loads for two different cycles of treadmill gait. The model used anthropometric data (height and weight) to scale the joint center locations and mass properties of a generic model and then used subject bone geometries to more accurately position the hip and ankle. The musculoskeletal model included 44 muscles on the right leg, and subject-specific geometries were used to create a 12 degrees-of-freedom anatomical right knee that included both patellofemoral and tibiofemoral articulations. Tibiofemoral motion was constrained by deformable contacts defined between the tibial insert and femoral component geometries and by ligaments. Patellofemoral motion was constrained by contact between the patellar button and femoral component geometries and the patellar tendon. Shoe geometries were added to the feet, and shoe motion was constrained by contact between three shoe segments per foot and the treadmill surface. Six-axis springs constrained motion between the feet and shoe segments. Experimental motion capture data provided input to an inverse kinematics stage, and the final forward dynamics simulations tracked joint angle errors for the left leg and upper body and tracked muscle length errors for the right leg. The one-cycle RMS errors between the predicted and measured tibia contact were 178 N and 168 N for the medial and lateral sides for the first gait cycle and 209 N and 228 N for the medial and lateral sides for the faster second gait cycle. One-cycle RMS errors between predicted and measured ground reaction forces were 12 N, 13 N, and 65 N in the anterior-posterior, medial-lateral, and vertical directions for the first gait cycle and 43 N, 15 N, and 96 N in the anterior-posterior, medial-lateral, and vertical directions for the second gait cycle.

  10. Parameter and prediction uncertainty in an optimized terrestrial carbon cycle model: Effects of constraining variables and data record length

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.

    2011-03-01

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when the constraining flux records are shorter than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.

  11. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    NASA Astrophysics Data System (ADS)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
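
    A minimal sketch of the rate-law idea, following the general form of a thermodynamically revised Monod expression (a kinetic factor multiplied by a thermodynamic factor that vanishes as the catabolic reaction approaches equilibrium), is shown below. All parameter values are hypothetical, and the returned rate is what would be passed to the FBA step as a substrate-uptake bound.

```python
import numpy as np

R = 8.314e-3   # gas constant, kJ mol^-1 K^-1

def revised_monod_rate(k_max, S, K_S, dG_cat, m, dG_ATP, chi, T=298.15):
    """Respiration rate = kinetic (Monod) factor x thermodynamic factor.

    k_max     : maximum specific rate
    S, K_S    : substrate concentration and half-saturation constant
    dG_cat    : Gibbs energy of the catabolic reaction (kJ per mol substrate)
    m, dG_ATP : ATP yield and phosphorylation energy (kJ per mol ATP)
    chi       : average stoichiometric number of the rate-limiting step
    """
    kinetic = S / (K_S + S)
    thermo = 1.0 - np.exp((dG_cat + m * dG_ATP) / (chi * R * T))
    return k_max * kinetic * max(thermo, 0.0)   # no reverse rate in this sketch

# Hypothetical values loosely in the spirit of acetotrophic methanogenesis
rate = revised_monod_rate(k_max=5.0, S=1e-3, K_S=5e-4, dG_cat=-36.0,
                          m=0.5, dG_ATP=45.0, chi=2.0)
print(rate)   # substrate uptake rate handed to FBA as a constraint
```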

  12. Predictability of Subsurface Temperature and the AMOC

    NASA Astrophysics Data System (ADS)

    Chang, Y.; Schubert, S. D.

    2013-12-01

    The GEOS-5 coupled model is extensively used for experimental decadal climate prediction. Understanding the limits of decadal ocean predictability is critical for making progress in these efforts. Using this model, we study the initial-value predictability of subsurface temperature, the variability of the Atlantic meridional overturning circulation (AMOC) and its impacts on the global climate. Our approach is to utilize the idealized data assimilation technology developed at the GMAO. The 'replay' technique allows us to assess, for example, the impact of the surface wind stresses and/or precipitation on the ocean in a very well controlled environment. By running the coupled model in replay mode we can in fact constrain the model using any existing reanalysis data set. We replay the model, constraining (nudging) it to the MERRA reanalysis in various fields from 1948-2012. The fields u, v, T, q and ps are adjusted towards the 6-hourly analyzed fields in the atmosphere. The simulated AMOC variability is studied with a 400-year-long segment of the replay integration. The 84 cases of 10-year hindcasts are initialized from 4 different replay cycles. Here, the variability and predictability are examined further by a measure that quantifies how much the subsurface temperature and AMOC variability have been influenced by atmospheric forcing and by ocean internal variability. The simulated impact of the AMOC on the multi-decadal variability of the SST, sea surface height (SSH) and sea ice extent is also studied.

  13. Multiplexed Predictive Control of a Large Commercial Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Richter, Hanz; Singaraju, Anil; Litt, Jonathan S.

    2008-01-01

    Model predictive control is a strategy well-suited to handle the highly complex, nonlinear, uncertain, and constrained dynamics involved in aircraft engine control problems. However, it has thus far been infeasible to implement model predictive control in engine control applications, because of the combination of model complexity and the time allotted for the control update calculation. In this paper, a multiplexed implementation is proposed that dramatically reduces the computational burden of the quadratic programming optimization that must be solved online as part of the model-predictive-control algorithm. Actuator updates are calculated sequentially and cyclically in a multiplexed implementation, as opposed to the simultaneous optimization taking place in conventional model predictive control. Theoretical aspects are discussed based on a nominal model, and actual computational savings are demonstrated using a realistic commercial engine model.
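
    The multiplexing idea can be illustrated on a small linear example (a generic two-input plant, not the engine model of the paper): at each control update only one actuator's future moves are re-optimized, as a box-constrained least-squares problem, while the other actuator's previously planned moves are held fixed; the optimized channel then cycles. The dynamics, horizon and bounds below are hypothetical.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical discrete-time plant x+ = A x + B u with two actuators
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.5, 0.0], [0.0, 0.7]])
N, m = 10, 2                      # prediction horizon, number of actuators
u_plan = np.zeros((N, m))         # currently planned input sequence
x = np.array([2.0, -1.0])         # initial state
x_ref = np.zeros(2)               # regulate to the origin

def predict(x0, U):
    """Stacked predicted states over the horizon for an input sequence U (N x m)."""
    xs, xk = [], x0.copy()
    for k in range(N):
        xk = A @ xk + B @ U[k]
        xs.append(xk)
    return np.concatenate(xs)

for step in range(20):
    j = step % m                               # the one actuator updated this cycle
    base = u_plan.copy(); base[:, j] = 0.0     # other channel's plan stays frozen
    r0 = predict(x, base) - np.tile(x_ref, N)
    # Jacobian columns: effect of each future move of actuator j (prediction is affine)
    Jm = np.empty((2 * N, N))
    for k in range(N):
        e = base.copy(); e[k, j] = 1.0
        Jm[:, k] = predict(x, e) - predict(x, base)
    sol = lsq_linear(Jm, -r0, bounds=(-1.0, 1.0))   # box-constrained QP, one channel only
    u_plan[:, j] = sol.x
    x = A @ x + B @ u_plan[0]                  # apply the first move, then shift the plan
    u_plan = np.vstack([u_plan[1:], u_plan[-1:]])
print(x)   # state regulated toward the origin with one channel optimized per update
```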

  14. Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.

    2018-03-01

    Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimum 2D seismic reflection data processing flow focused on pre-stack deghosting filters and velocity model building and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching (BHM), to estimate the uncertainties of the depths of key horizons near the borehole DSDP-258 located in the Mentelle Basin, south west of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from the DSDP-258 core was in accordance with the ±2σ posterior credibility intervals, and predictions for depths to key horizons were made for the two new drill sites adjacent to the existing borehole of the area. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth-migrated images, which can be directly used to better constrain interpretation and identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program (IODP), leg 369.

  15. Uncertainty analysis of depth predictions from seismic reflection data using Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Michelioudakis, Dimitrios G.; Hobbs, Richard W.; Caiado, Camila C. S.

    2018-06-01

    Estimating the depths of target horizons from seismic reflection data is an important task in exploration geophysics. To constrain these depths we need a reliable and accurate velocity model. Here, we build an optimum 2-D seismic reflection data processing flow focused on pre-stack deghosting filters and velocity model building and apply Bayesian methods, including Gaussian process emulation and Bayesian History Matching, to estimate the uncertainties of the depths of key horizons near the Deep Sea Drilling Project (DSDP) borehole 258 (DSDP-258) located in the Mentelle Basin, southwest of Australia, and compare the results with the drilled core from that well. Following this strategy, the tie between the modelled and observed depths from the DSDP-258 core was in accordance with the ±2σ posterior credibility intervals, and predictions for depths to key horizons were made for the two new drill sites, adjacent to the existing borehole of the area. The probabilistic analysis allowed us to generate multiple realizations of pre-stack depth-migrated images, which can be directly used to better constrain interpretation and identify potential risk at drill sites. The method will be applied to constrain the drilling targets for the upcoming International Ocean Discovery Program, leg 369.

  16. Constraining the Sensitivity of Amazonian Rainfall with Observations of Surface Temperature

    NASA Astrophysics Data System (ADS)

    Dolman, A. J.; von Randow, C.; de Oliveira, G. S.; Martins, G.; Nobre, C. A.

    2016-12-01

    Earth System models generally do a poor job of predicting Amazonian rainfall, necessitating a search for observational constraints on their predictability. We use observed surface temperature and precipitation of the Amazon and a set of 21 CMIP5 models to derive an observational constraint on the sensitivity of rainfall to surface temperature (dP/dT). From first principles, such a relation between the surface temperature of the earth and the amount of precipitation should exist through the surface energy balance, particularly in the tropics. When de-trended anomalies in surface temperature and precipitation from a set of datasets are plotted, a clear linear relation between surface temperature and precipitation appears. CMIP5 models show a similar relation, with relatively cool models having a larger sensitivity, producing more rainfall. Using the ensemble of models and the observed surface temperature we were able to derive an emergent constraint, shifting the dP/dT sensitivity of the CMIP5 ensemble from -0.75 mm day-1 °C-1 (+/- 0.54 SD) to -0.77 mm day-1 °C-1 with the uncertainty reduced by about a factor of 5. dP/dT from the observations is -0.89 mm day-1 °C-1. We applied the method to the wet and dry seasons separately, noting that in the wet season we shifted the mean and reduced the uncertainty, while in the dry season we only reduced the uncertainty. The method can be applied to other model simulations, such as specific deforestation scenarios, to constrain the sensitivity of rainfall to surface temperature. We discuss the implications of the constrained sensitivity for future Amazonian predictions.
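
    The de-trending and slope-fitting step behind this kind of emergent constraint can be sketched with synthetic series (the numbers below are placeholders, not CMIP5 or observational data): temperature and precipitation anomalies are de-trended, the linear dP/dT slope is estimated for the observations and for each ensemble member, and the ensemble spread is compared with the observed value. The full constraint calculation in the paper goes further than this sketch.

```python
import numpy as np

def dp_dt_slope(temp, precip):
    """Slope of de-trended precipitation anomalies vs de-trended temperature anomalies."""
    t = np.arange(temp.size)
    temp_d = temp - np.polyval(np.polyfit(t, temp, 1), t)
    prec_d = precip - np.polyval(np.polyfit(t, precip, 1), t)
    return np.polyfit(temp_d, prec_d, 1)[0]          # mm day^-1 per deg C

rng = np.random.default_rng(1)
years = 30

# Synthetic "observations"
T_obs = 0.02 * np.arange(years) + 0.3 * rng.standard_normal(years)
P_obs = 5.0 - 0.9 * T_obs + 0.2 * rng.standard_normal(years)

# Synthetic 21-member "ensemble", each with its own underlying sensitivity
slopes = []
for true_slope in rng.normal(-0.75, 0.5, size=21):
    T = 0.02 * np.arange(years) + 0.3 * rng.standard_normal(years)
    P = 5.0 + true_slope * T + 0.2 * rng.standard_normal(years)
    slopes.append(dp_dt_slope(T, P))
slopes = np.array(slopes)

print("observed dP/dT:", dp_dt_slope(T_obs, P_obs))
print("ensemble dP/dT: %.2f +/- %.2f" % (slopes.mean(), slopes.std()))
```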

  17. Aperiodic Robust Model Predictive Control for Constrained Continuous-Time Nonlinear Systems: An Event-Triggered Approach.

    PubMed

    Liu, Changxin; Gao, Jian; Li, Huiping; Xu, Demin

    2018-05-01

    The event-triggered control is a promising solution to cyber-physical systems, such as networked control systems, multiagent systems, and large-scale intelligent systems. In this paper, we propose an event-triggered model predictive control (MPC) scheme for constrained continuous-time nonlinear systems with bounded disturbances. First, a time-varying tightened state constraint is computed to achieve robust constraint satisfaction, and an event-triggered scheduling strategy is designed in the framework of dual-mode MPC. Second, the sufficient conditions for ensuring feasibility and closed-loop robust stability are developed, respectively. We show that robust stability can be ensured and communication load can be reduced with the proposed MPC algorithm. Finally, numerical simulations and comparison studies are performed to verify the theoretical results.

  18. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.

  19. A proof for loop-law constraints in stoichiometric metabolic networks

    PubMed Central

    2012-01-01

    Background Constraint-based modeling is increasingly employed for metabolic network analysis. Its underlying assumption is that natural metabolic phenotypes can be predicted by adding physicochemical constraints to remove unrealistic metabolic flux solutions. The loopless-COBRA approach provides an additional constraint that eliminates thermodynamically infeasible internal cycles (or loops) from the space of solutions. This allows the prediction of flux solutions that are more consistent with experimental data. However, it is not clear if this approach over-constrains the models by removing non-loop solutions as well. Results Here we apply Gordan’s theorem from linear algebra to prove for the first time that the constraints added in loopless-COBRA do not over-constrain the problem beyond the elimination of the loops themselves. Conclusions The loopless-COBRA constraints can be reliably applied. Furthermore, this proof may be adapted to evaluate the theoretical soundness for other methods in constraint-based modeling. PMID:23146116

  20. Interoceptive predictions in the brain

    PubMed Central

    Barrett, Lisa Feldman; Simmons, W. Kyle

    2016-01-01

    Intuition suggests that perception follows sensation and therefore bodily feelings originate in the body. However, recent evidence goes against this logic: interoceptive experience may largely reflect limbic predictions about the expected state of the body that are constrained by ascending visceral sensations. In this Opinion article, we introduce the Embodied Predictive Interoception Coding model, which integrates an anatomical model of corticocortical connections with Bayesian active inference principles, to propose that agranular visceromotor cortices contribute to interoception by issuing interoceptive predictions. We then discuss how disruptions in interoceptive predictions could function as a common vulnerability for mental and physical illness. PMID:26016744

  1. Constraining 3-PG with a new δ13C submodel: a test using the δ13C of tree rings.

    PubMed

    Wei, Liang; Marshall, John D; Link, Timothy E; Kavanagh, Kathleen L; DU, Enhao; Pangle, Robert E; Gag, Peter J; Ubierna, Nerea

    2014-01-01

    A semi-mechanistic forest growth model, 3-PG (Physiological Principles Predicting Growth), was extended to calculate δ(13)C in tree rings. The δ(13)C estimates were based on the model's existing description of carbon assimilation and canopy conductance. The model was tested in two ~80-year-old natural stands of Abies grandis (grand fir) in northern Idaho. We used as many independent measurements as possible to parameterize the model. Measured parameters included quantum yield, specific leaf area, soil water content and litterfall rate. Predictions were compared with measurements of transpiration by sap flux, stem biomass, tree diameter growth, leaf area index and δ(13)C. Sensitivity analysis showed that the model's predictions of δ(13)C were sensitive to key parameters controlling carbon assimilation and canopy conductance, which would have allowed it to fail had the model been parameterized or programmed incorrectly. Instead, the simulated δ(13)C of tree rings was no different from measurements (P > 0.05). The δ(13)C submodel provides a convenient means of constraining parameter space and avoiding model artefacts. This δ(13)C test may be applied to any forest growth model that includes realistic simulations of carbon assimilation and transpiration. © 2013 John Wiley & Sons Ltd.
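
    A minimal sketch of how a δ13C value can be derived from modelled assimilation and canopy conductance is given below. It uses the standard simple (Farquhar-type) discrimination expression rather than the exact 3-PG submodel of the paper, and every number is illustrative.

```python
def delta13c_from_gas_exchange(A, g_c, c_a=400.0, d13c_air=-8.0, a=4.4, b=27.0):
    """Plant delta13C (permil) from assimilation A (umol m-2 s-1) and canopy
    conductance to CO2 g_c (mol m-2 s-1), via ci/ca and simple discrimination."""
    c_i = c_a - A / g_c                      # Fick's law: A = g_c * (c_a - c_i), ppm units
    big_delta = a + (b - a) * (c_i / c_a)    # discrimination against 13C (permil)
    return (d13c_air - big_delta) / (1.0 + big_delta / 1000.0)

# Illustrative values for a single time step of a growth model
print(delta13c_from_gas_exchange(A=8.0, g_c=0.05))   # roughly -25 to -26 permil
```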

  2. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    USDA-ARS?s Scientific Manuscript database

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  3. CCTOP: a Consensus Constrained TOPology prediction web server.

    PubMed

    Dobson, László; Reményi, István; Tusnády, Gábor E

    2015-07-01

    The Consensus Constrained TOPology prediction (CCTOP; http://cctop.enzim.ttk.mta.hu) server is a web-based application providing transmembrane topology prediction. In addition to utilizing 10 different state-of-the-art topology prediction methods, the CCTOP server incorporates topology information from existing experimental and computational sources available in the PDBTM, TOPDB and TOPDOM databases using the probabilistic framework of a hidden Markov model. The server provides the option to precede the topology prediction with signal peptide prediction and transmembrane-globular protein discrimination. The initial result can be recalculated by (de)selecting any of the prediction methods or mapped experiments or by adding user-specified constraints. CCTOP showed superior performance to existing approaches. The reliability of each prediction is also calculated, which correlates with the accuracy of the per-protein topology prediction. The prediction results and the collected experimental information are visualized on the CCTOP home page and can be downloaded in XML format. Programmable access to the CCTOP server is also available, and an example of a client-side script is provided. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  4. A hierarchical spatial model for well yield in complex aquifers

    NASA Astrophysics Data System (ADS)

    Montgomery, J.; O'Sullivan, F.

    2017-12-01

    Efficiently siting and managing groundwater wells requires reliable estimates of the amount of water that can be produced, or the well yield. This can be challenging to predict in highly complex, heterogeneous fractured aquifers due to the uncertainty around local hydraulic properties. Promising statistical approaches have been advanced in recent years. For instance, kriging and multivariate regression analysis have been applied to well test data with limited but encouraging levels of prediction accuracy. Additionally, some analytical solutions to diffusion in homogeneous porous media have been used to infer "effective" properties consistent with observed flow rates or drawdown. However, this is an under-specified inverse problem with substantial and irreducible uncertainty. We describe a flexible machine learning approach capable of combining diverse datasets with constraining physical and geostatistical models for improved well yield prediction accuracy and uncertainty quantification. Our approach can be implemented within a hierarchical Bayesian framework using Markov Chain Monte Carlo, which allows for additional sources of information to be incorporated in priors to further constrain and improve predictions and reduce the model order. We demonstrate the usefulness of this approach using data from over 7,000 wells in a fractured bedrock aquifer.
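
    A stripped-down version of the hierarchical idea, with entirely hypothetical data, is sketched below: log well yields are modelled as drawn from region-level means, the region means are drawn from an aquifer-wide distribution, and the posterior is explored with a basic random-walk Metropolis sampler. A full implementation along the lines described above would add spatial correlation, physical constraints and further covariates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical log10 well yields grouped by geological region
data = {0: rng.normal(0.5, 0.4, 60),
        1: rng.normal(1.1, 0.4, 45),
        2: rng.normal(0.8, 0.4, 30)}

def log_post(theta):
    """theta = [global mean, log of between-region sd, three region means]."""
    mu_g, log_tau = theta[0], theta[1]
    tau, sigma = np.exp(log_tau), 0.4                          # within-region sd fixed
    lp = -0.5 * (mu_g / 2.0) ** 2 - 0.5 * log_tau ** 2         # weak priors
    for r, y in data.items():
        mu_r = theta[2 + r]
        lp += -0.5 * ((mu_r - mu_g) / tau) ** 2 - np.log(tau)  # hierarchical layer
        lp += np.sum(-0.5 * ((y - mu_r) / sigma) ** 2)         # likelihood
    return lp

theta = np.zeros(5)
lp = log_post(theta)
samples = []
for it in range(20000):
    prop = theta + 0.05 * rng.standard_normal(5)               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:                    # Metropolis acceptance
        theta, lp = prop, lp_prop
    if it > 5000 and it % 10 == 0:
        samples.append(theta.copy())

samples = np.array(samples)
print(samples.mean(axis=0))   # posterior means: global mean, log tau, region means
```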

  5. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    Treesearch

    R. Quinn Thomas; Evan B. Brooks; Annika L. Jersild; Eric J. Ward; Randolph H. Wynne; Timothy J. Albaugh; Heather Dinon-Aldridge; Harold E. Burkhart; Jean-Christophe Domec; Timothy R. Fox; Carlos A. Gonzalez-Benecke; Timothy A. Martin; Asko Noormets; David A. Sampson; Robert O. Teskey

    2017-01-01

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of...

  6. The added value of remote sensing products in constraining hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus

    2017-04-01

    The calibration of a hydrological model still depends on the availability of streamflow data, even though additional sources of information (i.e. remotely sensed data products) have become more widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow with modelled snow states), after which new a posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis to assess a model's ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will enhance our understanding and recommendations in the use of remotely sensed products for constraining conceptual hydrological modelling and improving predictive capability, especially for data-sparse regions.
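
    The weighting procedure can be sketched as follows, with a toy storage model standing in for the conceptual hydrological models and a synthetic series standing in for the remote-sensing product: each Monte Carlo parameter set is run, the simulated state is regressed against the product, and the coefficient of determination becomes the weight used to build the a posteriori parameter distribution, without any discharge data. All names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def run_model(params, forcing):
    """Toy stand-in for a conceptual model, returning a storage-like state series."""
    k, smax = params
    s, out = 0.0, []
    for p in forcing:
        s = min(smax, s + p - k * s)       # simple storage with loss coefficient k
        out.append(s)
    return np.array(out)

def r_squared(sim, obs):
    resid = obs - np.polyval(np.polyfit(sim, obs, 1), sim)
    return 1.0 - resid.var() / obs.var()

forcing = rng.gamma(2.0, 1.0, 200)                                      # synthetic forcing
rs_product = run_model((0.12, 8.0), forcing) + rng.normal(0, 0.5, 200)  # "satellite" series

# Monte Carlo parameter sets weighted by R^2 against the remote-sensing product
param_sets = np.column_stack([rng.uniform(0.01, 0.5, 500), rng.uniform(2.0, 15.0, 500)])
weights = np.array([max(r_squared(run_model(p, forcing), rs_product), 0.0)
                    for p in param_sets])
weights /= weights.sum()

# A posteriori parameter distribution via weighted resampling (no discharge used)
posterior = param_sets[rng.choice(len(param_sets), size=500, p=weights)]
print(posterior.mean(axis=0))
```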

  7. The electrochemistry of carbon steel in simulated concrete pore water in boom clay repository environments

    NASA Astrophysics Data System (ADS)

    MacDonald, D. D.; Saleh, A.; Lee, S. K.; Azizi, O.; Rosas-Camacho, O.; Al-Marzooqi, A.; Taylor, M.

    2011-04-01

    The prediction of corrosion damage of canisters to experimentally inaccessible times is vitally important in assessing various concepts for the disposal of High Level Nuclear Waste. Such prediction can only be made using deterministic models, whose predictions are constrained by the time-invariant natural laws. In this paper, we describe the measurement of experimental electrochemical data that will allow the prediction of damage to the carbon steel overpack of the super container in Belgium's proposed Boom Clay repository by using the Point Defect Model (PDM). PDM parameter values are obtained by optimizing the model on experimental, wide-band electrochemical impedance spectroscopy data.

  8. On the comparison of stochastic model predictive control strategies applied to a hydrogen-based microgrid

    NASA Astrophysics Data System (ADS)

    Velarde, P.; Valverde, L.; Maestre, J. M.; Ocampo-Martinez, C.; Bordons, C.

    2017-03-01

    In this paper, a performance comparison among three well-known stochastic model predictive control approaches, namely multi-scenario, tree-based, and chance-constrained model predictive control, is presented. To this end, three predictive controllers have been designed and implemented in a real renewable-hydrogen-based microgrid. The experimental set-up includes a PEM electrolyzer, lead-acid batteries, and a PEM fuel cell as the main equipment. The experimental results show significant differences among the implemented techniques in how the plant components are operated, mainly in terms of energy use. Effectiveness, performance, advantages, and disadvantages of these techniques are extensively discussed and analyzed to give some valid criteria when selecting an appropriate stochastic predictive controller.

  9. Kinetics of heavy metal adsorption and desorption in soil: Developing a unified model based on chemical speciation

    NASA Astrophysics Data System (ADS)

    Peng, Lanfang; Liu, Paiyu; Feng, Xionghan; Wang, Zimeng; Cheng, Tao; Liang, Yuzhen; Lin, Zhang; Shi, Zhenqing

    2018-03-01

    Predicting the kinetics of heavy metal adsorption and desorption in soil requires consideration of multiple heterogeneous soil binding sites and variations of reaction chemistry conditions. Although chemical speciation models have been developed for predicting the equilibrium of metal adsorption on soil organic matter (SOM) and important mineral phases (e.g. Fe and Al (hydr)oxides), there is still a lack of modeling tools for predicting the kinetics of metal adsorption and desorption reactions in soil. In this study, we developed a unified model for the kinetics of heavy metal adsorption and desorption in soil based on the equilibrium models WHAM 7 and CD-MUSIC, which specifically consider metal kinetic reactions with multiple binding sites of SOM and soil minerals simultaneously. For each specific binding site, metal adsorption and desorption rate coefficients were constrained by the local equilibrium partition coefficients predicted by WHAM 7 or CD-MUSIC, and, for each metal, the desorption rate coefficients of various binding sites were constrained by their metal binding constants with those sites. The model had only one fitting parameter for each soil binding phase, and all other parameters were derived from WHAM 7 and CD-MUSIC. A stirred-flow method was used to study the kinetics of Cd, Cu, Ni, Pb, and Zn adsorption and desorption in multiple soils under various pH and metal concentrations, and the model successfully reproduced most of the kinetic data. We quantitatively elucidated the significance of different soil components and important soil binding sites during the adsorption and desorption kinetic processes. Our model has provided a theoretical framework to predict metal adsorption and desorption kinetics, which can be further used to predict the dynamic behavior of heavy metals in soil under various natural conditions by coupling other important soil processes.
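
    The constraint structure described here, in which the desorption rate coefficient of each binding phase is tied to its equilibrium partition coefficient, can be sketched with a closed two-site batch system. The constants below are hypothetical and do not come from WHAM 7 or CD-MUSIC.

```python
import numpy as np
from scipy.integrate import odeint

# Hypothetical site-specific equilibrium partition coefficients from a speciation model
K_eq = np.array([800.0, 50.0])       # strong (organic-like) and weak (oxide-like) sites
site_tot = np.array([2e-5, 2e-4])    # total site concentrations (mol/L)
k_ads = np.array([5e2, 5e2])         # fitted adsorption rate coefficients, one per phase
k_des = k_ads / K_eq                 # desorption rates constrained by local equilibrium

def rates(y, t):
    """y = [dissolved metal, metal bound to site 1, metal bound to site 2]."""
    c, q1, q2 = y
    q = np.array([q1, q2])
    dq = k_ads * c * (site_tot - q) - k_des * q   # adsorption minus desorption
    dc = -np.sum(dq)                              # closed-system mass balance
    return [dc, dq[0], dq[1]]

t = np.linspace(0.0, 2.0, 200)                    # illustrative time axis (hours)
sol = odeint(rates, [1e-6, 0.0, 0.0], t)
print(sol[-1])   # dissolved and site-bound metal at the end of the adsorption phase
```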

  10. Phenomenological Consequences of the Constrained Exceptional Supersymmetric Standard Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athron, Peter; King, S. F.; Miller, D. J.

    2010-02-10

    The Exceptional Supersymmetric Standard Model (E6SSM) provides a low energy alternative to the MSSM, with an extra gauged U(1)_N symmetry, solving the mu-problem of the MSSM. Inspired by the possible embedding into an E6 GUT, the matter content fills three generations of E6 multiplets, thus predicting exciting exotic matter such as diquarks or leptoquarks. We present predictions from a constrained version of the model (cE6SSM), with a universal scalar mass m_0, trilinear mass A and gaugino mass M_1/2. We reveal a large volume of the cE6SSM parameter space where the correct breakdown of the gauge symmetry is achieved and all experimental constraints satisfied. We predict a hierarchical particle spectrum with heavy scalars and light gauginos, while the new exotic matter can be light or heavy depending on parameters. We present representative cE6SSM scenarios, demonstrating that there could be light exotic particles, like leptoquarks and a U(1)_N Z' boson, with spectacular signals at the LHC.

  11. Sequential Probability Ratio Test for Collision Avoidance Maneuver Decisions Based on a Bank of Norm-Inequality-Constrained Epoch-State Filters

    NASA Technical Reports Server (NTRS)

    Carpenter, J. R.; Markley, F. L.; Alfriend, K. T.; Wright, C.; Arcido, J.

    2011-01-01

    Sequential probability ratio tests explicitly allow decision makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming highly-elliptical orbit formation flying mission.
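
    The decision logic referred to here, Wald's sequential probability ratio test with explicit false-alarm and missed-detection risks, can be sketched generically as below; in the paper the per-epoch likelihoods come from the two constrained epoch-state filters, whereas here they are stand-in random values.

```python
import numpy as np

def sprt(log_lr_stream, alpha=0.01, beta=0.01):
    """Wald SPRT: accumulate log-likelihood ratios until a threshold is crossed.

    alpha: acceptable false-alarm probability; beta: missed-detection probability.
    Returns the decision ('H1', 'H0' or 'continue') and the number of epochs used.
    """
    upper = np.log((1.0 - beta) / alpha)     # decide H1 (e.g., maneuver needed)
    lower = np.log(beta / (1.0 - alpha))     # decide H0 (no maneuver)
    s, n_used = 0.0, 0
    for llr in log_lr_stream:
        n_used += 1
        s += llr
        if s >= upper:
            return "H1", n_used
        if s <= lower:
            return "H0", n_used
    return "continue", n_used

# Stand-in per-epoch log-likelihood ratios (would come from the two filters)
rng = np.random.default_rng(3)
stream = rng.normal(0.4, 1.0, size=50)       # drifts toward H1 in this synthetic case
print(sprt(stream))
```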

  12. Statistical Issues in Galaxy Cluster Cosmology

    NASA Technical Reports Server (NTRS)

    Mantz, Adam

    2013-01-01

    The number and growth of massive galaxy clusters are sensitive probes of cosmological structure formation. Surveys at various wavelengths can detect clusters to high redshift, but the fact that cluster mass is not directly observable complicates matters, requiring us to simultaneously constrain scaling relations of observable signals with mass. The problem can be cast as one of regression, in which the data set is truncated, the (cosmology-dependent) underlying population must be modeled, and strong, complex correlations between measurements often exist. Simulations of cosmological structure formation provide a robust prediction for the number of clusters in the Universe as a function of mass and redshift (the mass function), but they cannot reliably predict the observables used to detect clusters in sky surveys (e.g. X-ray luminosity). Consequently, observers must constrain observable-mass scaling relations using additional data, and use the scaling relation model in conjunction with the mass function to predict the number of clusters as a function of redshift and luminosity.

  13. Constraints on the South Atlantic Anomaly from Réunion Island

    NASA Astrophysics Data System (ADS)

    Béguin, A.; de Groot, L. V.

    2017-12-01

    The South Atlantic Anomaly (SAA) is a region where the geomagnetic field intensity is about half as strong as would be expected from the current geomagnetic dipole moment that arises from geomagnetic field models. Those field models predict a westward movement of the SAA and predict its origin east of Africa around 1500 AD. The onset and evolution of the SAA, however, are poorly constrained due to a lack of full-vector paleomagnetic data from Africa and the Indian Ocean for the past centuries. Here we present a full-vector paleosecular variation (PSV) curve for Réunion Island (21°S, 55°E), located east of the African continent in the region that currently shows the fastest increase in geomagnetic field strength, in contrast to the average global decay. We sampled 27 sites covering the last 700 years and subjected them to a directional and multi-method paleointensity study. The obtained directional records reveal shallower inclinations and less variation in the declination compared to current geomagnetic field model predictions. Scrutinizing the IZZI-Thellier, Multispecimen, and calibrated pseudo-Thellier results produces a coherent paleointensity record. The predicted intensity trend from the geomagnetic field models generally agrees with the trend in our data; however, the high paleointensities are higher than the models predict, and the low paleointensities are lower. This illustrates the inevitable smoothing inherent to geomagnetic field modelling. We will discuss the constraints on the onset of the SAA that arise from the new full-vector PSV curve for Réunion presented here, and the implications for the past and future evolution of this geomagnetic phenomenon.

  14. Experimental evaluation of model predictive control and inverse dynamics control for spacecraft proximity and docking maneuvers

    NASA Astrophysics Data System (ADS)

    Virgili-Llop, Josep; Zagaris, Costantinos; Park, Hyeongjun; Zappulla, Richard; Romano, Marcello

    2018-03-01

    An experimental campaign has been conducted to evaluate the performance of two different guidance and control algorithms on a multi-constrained docking maneuver. The evaluated algorithms are model predictive control (MPC) and inverse dynamics in the virtual domain (IDVD). A linear-quadratic approach with a quadratic programming solver is used for the MPC approach. A nonconvex optimization problem results from the IDVD approach, and a nonlinear programming solver is used. The docking scenario is constrained by the presence of a keep-out zone, an entry cone, and by the chaser's maximum actuation level. The performance metrics for the experiments and numerical simulations include the required control effort and time to dock. The experiments have been conducted in a ground-based air-bearing test bed, using spacecraft simulators that float over a granite table.
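
    As a generic illustration of the linear-quadratic MPC formulation mentioned above, the sketch below solves a single MPC step as a quadratic program (using cvxpy as an assumed dependency). The double-integrator dynamics, actuation bound, and approach-speed limit are illustrative stand-ins, not the paper's docking constraints (keep-out zone and entry cone).

```python
# Hedged sketch: one linear-quadratic MPC step posed as a QP over a finite horizon.
import numpy as np
import cvxpy as cp

dt, N = 1.0, 20
A = np.array([[1.0, dt], [0.0, 1.0]])      # 1-D double integrator (position, velocity)
B = np.array([[0.5 * dt**2], [dt]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = np.array([10.0, 0.0])                 # start 10 m from the target, at rest

cost, constr = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
               cp.abs(u[0, k]) <= 0.5,     # actuation limit (stand-in)
               x[1, k] <= 1.0]             # approach-speed limit (stand-in constraint)
cp.Problem(cp.Minimize(cost), constr).solve()
print("first commanded acceleration:", float(u.value[0, 0]))
```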

  15. Reducing usage of the computational resources by event driven approach to model predictive control

    NASA Astrophysics Data System (ADS)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time, optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.

  16. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

    In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bio-remediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on predictions of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model calibration was conducted by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bio-remediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant of the NSMC implementation is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which may otherwise lead to poor quantification of predictive uncertainty. Application of the proposed approach to managing groundwater bioremediation at a real site shows that it is effective in supporting management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for utilizing model predictive uncertainty methods in environmental management.
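
    The core null-space Monte Carlo step can be sketched as follows: random parameter perturbations are projected onto the (approximate) null space of the linearized observation sensitivity matrix, so that the perturbed parameter sets remain nearly calibration-constrained. The sensitivity matrix, truncation threshold, and calibrated parameter vector below are synthetic stand-ins, not quantities from the study.

```python
# Hedged sketch of the null-space Monte Carlo projection step (synthetic stand-in data).
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_par = 48, 12
# Stand-in sensitivity matrix of simulated observations w.r.t. parameters:
# effectively low rank, mimicking parameters the data barely constrain.
J = rng.normal(size=(n_obs, 4)) @ rng.normal(size=(4, n_par)) \
    + 0.01 * rng.normal(size=(n_obs, n_par))
p_cal = rng.normal(size=n_par)                 # calibrated parameter set (stand-in)

_, s, Vt = np.linalg.svd(J)
n_sol = int(np.sum(s > 0.05 * s[0]))           # truncation: directions the data actually constrain
V_null = Vt[n_sol:].T                          # directions that barely change the calibrated fit

ensemble = []
for _ in range(1000):
    dp = rng.normal(size=n_par)
    ensemble.append(p_cal + V_null @ (V_null.T @ dp))   # project perturbation onto the null space
print(np.array(ensemble).shape)                # 1000 approximately calibration-constrained sets
```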

  17. Enhancing model prediction reliability through improved soil representation and constrained model auto calibration - A paired watershed study

    USDA-ARS?s Scientific Manuscript database

    Process-based and distributed watershed models possess a large number of parameters that are not directly measured in the field and need to be calibrated by matching modeled in-stream fluxes with monitored data. Recently, there have been waves of concern about the reliability of this common practic...

  18. Mechanistic variables can enhance predictive models of endotherm distributions: the American pika under current, past, and future climates.

    PubMed

    Mathewson, Paul D; Moyer-Horner, Lucas; Beever, Erik A; Briscoe, Natalie J; Kearney, Michael; Yahn, Jeremiah M; Porter, Warren P

    2017-03-01

    How climate constrains species' distributions through time and space is an important question in the context of conservation planning for climate change. Despite increasing awareness of the need to incorporate mechanism into species distribution models (SDMs), mechanistic modeling of endotherm distributions remains limited in this literature. Using the American pika (Ochotona princeps) as an example, we present a framework whereby mechanism can be incorporated into endotherm SDMs. Pika distribution has repeatedly been found to be constrained by warm temperatures, so we used Niche Mapper, a mechanistic heat-balance model, to convert macroclimate data to pika-specific surface activity time in summer across the western United States. We then explored the difference between using a macroclimate predictor (summer temperature) and using a mechanistic predictor (predicted surface activity time) in SDMs. Both approaches accurately predicted pika presences in current and past climate regimes. However, the activity models predicted 8-19% less habitat loss in response to annual temperature increases of ~3-5 °C predicted in the region by 2070, suggesting that pikas may be able to buffer some climate change effects through behavioral thermoregulation that can be captured by mechanistic modeling. Incorporating mechanism added value to the modeling by providing increased confidence in areas where different modeling approaches agreed and providing a range of outcomes in areas of disagreement. It also provided a more proximate variable relating animal distribution to climate, allowing investigations into how unique habitat characteristics and intraspecific phenotypic variation may allow pikas to exist in areas outside those predicted by generic SDMs. Only a small number of easily obtainable data are required to parameterize this mechanistic model for any endotherm, and its use can improve SDM predictions by explicitly modeling a widely applicable direct physiological effect: climate-imposed restrictions on activity. This more complete understanding is necessary to inform climate adaptation actions, management strategies, and conservation plans. © 2016 John Wiley & Sons Ltd.

  19. Mechanistic variables can enhance predictive models of endotherm distributions: The American pika under current, past, and future climates

    USGS Publications Warehouse

    Mathewson, Paul; Moyer-Horner, Lucas; Beever, Erik; Briscoe, Natalie; Kearney, Michael T.; Yahn, Jeremiah; Porter, Warren P.

    2017-01-01

    How climate constrains species’ distributions through time and space is an important question in the context of conservation planning for climate change. Despite increasing awareness of the need to incorporate mechanism into species distribution models (SDMs), mechanistic modeling of endotherm distributions remains limited in this literature. Using the American pika (Ochotona princeps) as an example, we present a framework whereby mechanism can be incorporated into endotherm SDMs. Pika distribution has repeatedly been found to be constrained by warm temperatures, so we used Niche Mapper, a mechanistic heat-balance model, to convert macroclimate data to pika-specific surface activity time in summer across the western United States. We then explored the difference between using a macroclimate predictor (summer temperature) and using a mechanistic predictor (predicted surface activity time) in SDMs. Both approaches accurately predicted pika presences in current and past climate regimes. However, the activity models predicted 8–19% less habitat loss in response to annual temperature increases of ~3–5 °C predicted in the region by 2070, suggesting that pikas may be able to buffer some climate change effects through behavioral thermoregulation that can be captured by mechanistic modeling. Incorporating mechanism added value to the modeling by providing increased confidence in areas where different modeling approaches agreed and providing a range of outcomes in areas of disagreement. It also provided a more proximate variable relating animal distribution to climate, allowing investigations into how unique habitat characteristics and intraspecific phenotypic variation may allow pikas to exist in areas outside those predicted by generic SDMs. Only a small number of easily obtainable data are required to parameterize this mechanistic model for any endotherm, and its use can improve SDM predictions by explicitly modeling a widely applicable direct physiological effect: climate-imposed restrictions on activity. This more complete understanding is necessary to inform climate adaptation actions, management strategies, and conservation plans.

  20. Understanding tectonic stress and rock strength in the Nankai Trough accretionary prism, offshore SW Japan

    NASA Astrophysics Data System (ADS)

    Huffman, Katelyn A.

    Understanding the orientation and magnitude of tectonic stress in active tectonic margins like subduction zones is important for understanding fault mechanics. In the Nankai Trough subduction zone, faults in the accretionary prism are thought to have historically slipped during or immediately following deep plate boundary earthquakes, often generating devastating tsunamis. I focus on quantifying stress at two locations of interest in the Nankai Trough accretionary prism, offshore Southwest Japan. I employ a method to constrain stress magnitude that combines observations of compressional borehole failure from logging-while-drilling resistivity-at-the-bit (RAB) images with estimates of rock strength and the relationship between tectonic stress and stress at the wall of a borehole. I use the method to constrain stress at Ocean Drilling Program (ODP) Site 808 and Integrated Ocean Drilling Program (IODP) Site C0002. At Site 808, I consider a range of parameters (assumed rock strength, friction coefficient, breakout width, and fluid pressure) in the method to constrain stress, to explore uncertainty in stress magnitudes, and discuss stress results in terms of the seismic cycle. I find a combination of increased fluid pressure and decreased friction along the frontal thrust or other weak faults could produce thrust-style failure, without the entire prism being at critical state failure, as other kinematic models of accretionary prism behavior during earthquakes imply. Rock strength is typically inferred using a failure criterion and unconfined compressive strength from empirical relations with P-wave velocity. I minimize uncertainty in rock strength by measuring rock strength in triaxial tests on Nankai core. I find the strength of Nankai core is significantly less than empirical relations predict. I create a new empirical fit to these experiments and explore the implications of this for stress magnitude estimates. I find that using the new empirical fit can decrease the stress predicted by the method by as much as 4 MPa at Site C0002. I constrain stress at Site C0002 using geophysical logging data from two adjacent boreholes drilled into the same sedimentary sequence under different drilling conditions, in a forward model that predicts breakout width over a range of horizontal stresses (where SHmax is constrained by the ratio of stresses that would produce active faulting and Shmin is constrained from leak-off tests) and rock strength. I then compare predicted breakout widths to observations of breakout widths from RAB images to determine the combination of stresses in the model that best matches real-world observations. This is the first published method to constrain both stress and strength simultaneously. Finally, I explore uncertainty in rock behavior during compressional breakout formation using a finite element model (FEM) that predicts Biot poroelastic changes in fluid pressure in rock adjacent to the borehole upon its excavation, and explore the effect this has on rock failure. I test a range of permeability and rock stiffness. I find that when rock stiffness and permeability are in the range of what exists at Nankai, pore fluid pressure increases within ±45° of Shmin and can lead to weakening of the wall rock and a wider compressional failure zone than would exist at equilibrium conditions. In a case example, we find this can lead to an overestimate of tectonic stress from compressional failures of ~2 MPa in the area of the borehole where fluid pressure increases. In areas around the borehole where pore fluid pressure decreases (within ±45° of SHmax), the wall rock can strengthen, which suppresses tensile failure. The implication of this research is that there are many potential pitfalls in the method to constrain stress using borehole breakouts in Nankai Trough mudstone, mostly due to uncertainty in parameters such as strength and in underlying assumptions regarding constitutive rock behavior. More laboratory measurements and/or models of rock properties and rock constitutive behavior are needed to ensure the method accurately provides constraints on stress magnitude. (Abstract shortened by ProQuest.)

  1. An enhanced beam model for constrained layer damping and a parameter study of damping contribution

    NASA Astrophysics Data System (ADS)

    Xie, Zhengchao; Shepard, W. Steve, Jr.

    2009-01-01

    An enhanced analytical model is presented based on an extension of previous models for constrained layer damping (CLD) in beam-like structures. Most existing CLD models are based on the assumption that shear deformation in the core layer is the only source of damping in the structure. However, previous research has shown that other types of deformation in the core layer, such as deformations from longitudinal extension and transverse compression, can also be important. In the enhanced analytical model developed here, shear, extension, and compression deformations are all included. This model can be used to predict the natural frequencies and modal loss factors. The numerical study shows that compared to other models, this enhanced model is accurate in predicting the dynamic characteristics. As a result, the model can be accepted as a general computation model. With all three types of damping included and the formulation used here, it is possible to study the impact of the structure's geometry and boundary conditions on the relative contribution of each type of damping. To that end, the relative contributions in the frequency domain for a few sample cases are presented.

  2. Constraining geostatistical models with hydrological data to improve prediction realism

    NASA Astrophysics Data System (ADS)

    Demyanov, V.; Rojas, T.; Christie, M.; Arnold, D.

    2012-04-01

    Geostatistical models reproduce spatial correlation based on the available on-site data and on more general concepts about the modelled patterns, e.g. training images. One of the problems in modelling natural systems with geostatistics is maintaining realistic spatial features so that they agree with the physical processes in nature. Tuning the model parameters to the data may lead to geostatistical realisations with unrealistic spatial patterns, which would still honour the data. Such a model would result in poor predictions, even though it fits the available data well. Conditioning the model to a wider range of relevant data provides a remedy that avoids producing unrealistic features in spatial models. For instance, there are vast amounts of information about the geometries of river channels that can be used in describing fluvial environments. Relations between the geometrical channel characteristics (width, depth, wave length, amplitude, etc.) are complex and non-parametric and exhibit a great deal of uncertainty, which it is important to propagate rigorously into the predictive model. These relations can be described within a Bayesian approach as multi-dimensional prior probability distributions. We propose a way to constrain multi-point statistics models with intelligent priors obtained from analysing a vast collection of contemporary river patterns based on previously published works. We applied machine learning techniques, namely neural networks and support vector machines, to extract multivariate non-parametric relations between geometrical characteristics of fluvial channels from the available data. An example demonstrates how ensuring geological realism helps to deliver a more reliable prediction of a subsurface oil reservoir in a fluvial depositional environment.

  3. A new algorithm for stand table projection models.

    Treesearch

    Quang V. Cao; V. Clark Baldwin

    1999-01-01

    The constrained least squares method is proposed as an algorithm for projecting stand tables through time. This method consists of three steps: (1) predict survival in each diameter class, (2) predict diameter growth, and (3) use the least squares approach to adjust the stand table to satisfy the constraints of future survival, average diameter, and stand basal area....
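
    Step (3) of the algorithm, adjusting the projected stand table to satisfy stand-level constraints, can be illustrated with an equality-constrained least squares solve: find the table closest to the projected one that exactly matches the target totals. The diameter-class midpoints and targets below are made up, and the sketch ignores the non-negativity handling a full implementation would need.

```python
# Hedged sketch: adjust a projected stand table (least squares) to hit stand-level targets.
import numpy as np

mid = np.array([10., 15., 20., 25., 30.])        # diameter-class midpoints (cm), hypothetical
n_pred = np.array([120., 90., 60., 30., 10.])    # projected trees/ha per class, hypothetical

# Constraint rows: total stems, sum of diameters, and basal area (pi/40000 * d^2 per tree)
A = np.vstack([np.ones_like(mid), mid, np.pi / 40000.0 * mid**2])
b = np.array([300.0, 4900.0, 7.0])               # target stems/ha, sum of d (cm), basal area (m^2/ha)

# Minimize ||n - n_pred||^2 subject to A n = b, via the KKT system
m = A.shape[0]
KKT = np.block([[np.eye(len(mid)), A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([n_pred, b])
n_adj = np.linalg.solve(KKT, rhs)[:len(mid)]
print("adjusted table:", np.round(n_adj, 1), " constraints:", A @ n_adj)
```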

  4. Economic Analysis of Biological Invasions in Forests

    Treesearch

    Tomas P. Holmes; Julian Aukema; Jeffrey Englin; Robert G. Haight; Kent Kovacs; Brian Leung

    2014-01-01

    Biological invasions of native forests by nonnative pests result from complex stochastic processes that are difficult to predict. Although economic optimization models describe efficient controls across the stages of an invasion, the ability to calibrate such models is constrained by lack of information on pest population dynamics and consequent economic damages. Here...

  5. A predictive parameter estimation approach for the thermodynamically constrained averaging theory applied to diffusion in porous media

    NASA Astrophysics Data System (ADS)

    Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.

    2017-12-01

    Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.

  6. Constrained model predictive control, state estimation and coordination

    NASA Astrophysics Data System (ADS)

    Yan, Jun

    In this dissertation, we study the interaction between the control performance and the quality of the state estimation in a constrained Model Predictive Control (MPC) framework for systems with stochastic disturbances. This consists of three parts: (i) the development of a constrained MPC formulation that adapts to the quality of the state estimation via constraints; (ii) the application of such a control law in a multi-vehicle formation coordinated control problem in which each vehicle operates subject to a no-collision constraint posed by others' imperfect prediction computed from finite bit-rate, communicated data; (iii) the design of the predictors and the communication resource assignment problem that satisfy the performance requirement from Part (ii). Model Predictive Control (MPC) is of interest because it is one of the few control design methods which preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion with best estimates of the states being used in place of the exact state. However, if the state constraints were handled in the same certainty-equivalence fashion, the resulting control law could drive the real state to violate the constraints frequently. Part (i) focuses on exploring the inclusion of state estimates into the constraints. It does this by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. In Part (ii), we consider applying constrained MPC as a local control law in a coordinated control problem of a group of distributed autonomous systems. Interactions between the systems are captured via constraints. First, we inspect the application of constrained MPC to a completely deterministic case. Formation stability theorems are derived for the subsystems and conditions on the local constraint set are derived in order to guarantee local stability or convergence to a target state. If these conditions are met for all subsystems, then this stability is inherited by the overall system. For the case when each subsystem suffers from disturbances in the dynamics, own self-measurement noises, and quantization errors on neighbors' information due to the finite-bit-rate channels, the constrained MPC strategy developed in Part (i) is appropriate to apply. In Part (iii), we discuss the local predictor design and bandwidth assignment problem in a coordinated vehicle formation context. The MPC controller used in Part (ii) relates the formation control performance and the information quality in the way that large standoff implies conservative performance. We first develop an LMI (Linear Matrix Inequality) formulation for cross-estimator design in a simple two-vehicle scenario with non-standard information: one vehicle does not have access to the other's exact control value applied at each sampling time, but to its known, pre-computed, coupling linear feedback control law. Then a similar LMI problem is formulated for the bandwidth assignment problem that minimizes the total number of bits by adjusting the prediction gain matrices and the number of bits assigned to each variable. (Abstract shortened by UMI.)
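
    One simple way to picture the constraint adaptation described in part (i) is constraint tightening: a limit that nominally applies to the true state is re-posed probabilistically and enforced on the state estimate after shrinking it by a margin that grows with the estimation-error standard deviation. The sketch below (Gaussian estimation error and illustrative numbers assumed) conveys only the general idea, not the dissertation's formulation.

```python
# Hedged sketch: tighten a state constraint according to the quality of the state estimate.
from scipy.stats import norm

def tightened_limit(nominal_limit, est_std, confidence=0.99):
    """Limit imposed on the state *estimate* so the true state respects the nominal
    limit with the requested probability (Gaussian estimation error assumed)."""
    return nominal_limit - norm.ppf(confidence) * est_std

print(tightened_limit(10.0, est_std=0.5))   # ~8.84: the constraint adapts to estimate quality
```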

  7. An open-source model and solution method to predict co-contraction in the finger.

    PubMed

    MacIntosh, Alexander R; Keir, Peter J

    2017-10-01

    A novel open-source biomechanical model of the index finger and an electromyography (EMG)-constrained static optimization solution method are developed with the goal of improving co-contraction estimates and providing a means to assess tendon tension distribution through the finger. The Intrinsic model has four degrees of freedom and seven muscles (with a 14-component extensor mechanism). A novel plugin developed for the OpenSim modelling software applied the EMG-constrained static optimization solution method. Ten participants performed static pressing in three finger postures and five dynamic free-motion tasks. Index finger 3D kinematics, force (5, 15, 30 N), and EMG (4 extrinsic muscles and the first dorsal interosseous) were used in the analysis. The Intrinsic model predicted 29% greater co-contraction during static pressing than the existing model. Further, tendon tension distribution patterns and forces, known to be essential to produce finger action, were determined by the model across all postures. The Intrinsic model and custom solution method improved co-contraction estimates to facilitate force propagation through the finger. These tools improve our interpretation of loads in the finger to develop better rehabilitation and workplace injury risk reduction strategies.
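
    The EMG-constrained static optimization idea can be sketched as a small nonlinear program: minimize summed squared muscle activations subject to a joint-moment balance, with measured EMG imposing lower bounds on the corresponding activations. The moment arms, maximum forces, EMG levels, and target moment below are hypothetical and far simpler than the actual finger model.

```python
# Hedged sketch: static optimization with EMG-derived lower bounds on activations.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.010, 0.008, -0.006, 0.012])   # moment arms (m) of 4 muscles about one joint
F_max = np.array([250., 180., 300., 120.])    # maximum isometric forces (N)
M_target = 1.5                                 # required net joint moment (N*m)
emg_lb = np.array([0.15, 0.0, 0.10, 0.0])      # normalized EMG lower bounds on activation

res = minimize(lambda a: np.sum(a**2), x0=np.full(4, 0.3),
               constraints=[{"type": "eq",
                             "fun": lambda a: r @ (F_max * a) - M_target}],  # moment balance
               bounds=[(lb, 1.0) for lb in emg_lb], method="SLSQP")
print("activations:", np.round(res.x, 3))
```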

  8. Using eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements, and PhenoCams to constrain a process-based biogeochemical model for carbon market-funded wetland restoration

    NASA Astrophysics Data System (ADS)

    Oikawa, P. Y.; Baldocchi, D. D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Dronova, I.; Jenerette, D.; Poindexter, C.; Huang, Y. W.

    2015-12-01

    We use multiple data streams in a model-data fusion approach to reduce uncertainty in predicting CO2 and CH4 exchange in drained and flooded peatlands. Drained peatlands in the Sacramento-San Joaquin River Delta, California are a strong source of CO2 to the atmosphere and flooded peatlands or wetlands are a strong CO2 sink. However, wetlands are also large sources of CH4 that can offset the greenhouse gas mitigation potential of wetland restoration. Reducing uncertainty in model predictions of annual CO2 and CH4 budgets is critical for including wetland restoration in Cap-and-Trade programs. We have developed and parameterized the Peatland Ecosystem Photosynthesis, Respiration, and Methane Transport model (PEPRMT) in a drained agricultural peatland and a restored wetland. Both ecosystem respiration (Reco) and CH4 production are a function of 2 soil carbon (C) pools (i.e. recently-fixed C and soil organic C), temperature, and water table height. Photosynthesis is predicted using a light use efficiency model. To estimate parameters we use a Markov Chain Monte Carlo approach with an adaptive Metropolis-Hastings algorithm. Multiple data streams are used to constrain model parameters including eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements and digital photography. Digital photography is used to estimate leaf area index, an important input variable for the photosynthesis model. Soil respiration and 13CO2 fluxes allow partitioning of eddy covariance data between Reco and photosynthesis. Partitioned fluxes of CO2 with associated uncertainty are used to parametrize the Reco and photosynthesis models within PEPRMT. Overall, PEPRMT model performance is high. For example, we observe high data-model agreement between modeled and observed partitioned Reco (r2 = 0.68; slope = 1; RMSE = 0.59 g C-CO2 m-2 d-1). Model validation demonstrated the model's ability to accurately predict annual budgets of CO2 and CH4 in a wetland system (within 14% and 1% of observed annual budgets of CO2 and CH4, respectively). The use of multiple data streams is critical for constraining parameters and reducing uncertainty in model predictions, thereby providing accurate simulation of greenhouse gas exchange in a wetland restoration project with implications for C market-funded wetland restoration worldwide.
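
    The parameter-estimation machinery referenced above (Markov Chain Monte Carlo with an adaptive Metropolis-style proposal) can be sketched on a toy problem. The two-parameter linear "model" and synthetic data below stand in for PEPRMT and the flux observations; only the sampling logic is the point.

```python
# Hedged sketch: MCMC with an adaptive Metropolis proposal on a toy two-parameter model.
import numpy as np

rng = np.random.default_rng(1)
obs_t = np.linspace(0, 1, 30)
true = np.array([2.0, -1.0])
obs = true[0] * obs_t + true[1] + rng.normal(0, 0.1, obs_t.size)   # synthetic data

def log_post(theta):
    resid = obs - (theta[0] * obs_t + theta[1])
    return -0.5 * np.sum((resid / 0.1) ** 2)          # Gaussian likelihood, flat prior

theta = np.zeros(2)
chain, cov = [theta], 0.1 * np.eye(2)
for i in range(5000):
    if i > 500 and i % 100 == 0:                      # adapt proposal covariance from chain history
        cov = np.cov(np.array(chain).T) * 2.38**2 / 2 + 1e-8 * np.eye(2)
    prop = rng.multivariate_normal(theta, cov)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):   # Metropolis accept/reject
        theta = prop
    chain.append(theta)
print("posterior mean:", np.array(chain)[1000:].mean(axis=0))
```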

  9. Where does the carbon go? A model–data intercomparison of vegetation carbon allocation and turnover processes at two temperate forest free-air CO2 enrichment sites

    PubMed Central

    De Kauwe, Martin G; Medlyn, Belinda E; Zaehle, Sönke; Walker, Anthony P; Dietze, Michael C; Wang, Ying-Ping; Luo, Yiqi; Jain, Atul K; El-Masri, Bassil; Hickler, Thomas; Wårlind, David; Weng, Ensheng; Parton, William J; Thornton, Peter E; Wang, Shusen; Prentice, I Colin; Asao, Shinichi; Smith, Benjamin; McCarthy, Heather R; Iversen, Colleen M; Hanson, Paul J; Warren, Jeffrey M; Oren, Ram; Norby, Richard J

    2014-01-01

    Elevated atmospheric CO2 concentration (eCO2) has the potential to increase vegetation carbon storage if increased net primary production causes increased long-lived biomass. Model predictions of eCO2 effects on vegetation carbon storage depend on how allocation and turnover processes are represented. We used data from two temperate forest free-air CO2 enrichment (FACE) experiments to evaluate representations of allocation and turnover in 11 ecosystem models. Observed eCO2 effects on allocation were dynamic. Allocation schemes based on functional relationships among biomass fractions that vary with resource availability were best able to capture the general features of the observations. Allocation schemes based on constant fractions or resource limitations performed less well, with some models having unintended outcomes. Few models represent turnover processes mechanistically and there was wide variation in predictions of tissue lifespan. Consequently, models did not perform well at predicting eCO2 effects on vegetation carbon storage. Our recommendations to reduce uncertainty include: use of allocation schemes constrained by biomass fractions; careful testing of allocation schemes; and synthesis of allocation and turnover data in terms of model parameters. Data from intensively studied ecosystem manipulation experiments are invaluable for constraining models and we recommend that such experiments should attempt to fully quantify carbon, water and nutrient budgets. PMID:24844873

  10. Climate change in fish: effects of respiratory constraints on optimal life history and behaviour.

    PubMed

    Holt, Rebecca E; Jørgensen, Christian

    2015-02-01

    The difference between maximum metabolic rate and standard metabolic rate is referred to as aerobic scope, and because it constrains performance it is suggested to constitute a key limiting process prescribing how fish may cope with or adapt to climate warming. We use an evolutionary bioenergetics model for Atlantic cod (Gadus morhua) to predict optimal life histories and behaviours at different temperatures. The model assumes common trade-offs and predicts that optimal temperatures for growth and fitness lie below that for aerobic scope; aerobic scope is thus a poor predictor of fitness at high temperatures. Initially, warming expands aerobic scope, allowing for faster growth and increased reproduction. Beyond the optimal temperature for fitness, increased metabolic requirements intensify foraging and reduce survival; oxygen budgeting conflicts thus constrain successful completion of the life cycle. The model illustrates how physiological adaptations are part of a suite of traits that have coevolved. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  11. Effects of life-history requirements on the distribution of a threatened reptile.

    PubMed

    Thompson, Denise M; Ligon, Day B; Patton, Jason C; Papeş, Monica

    2017-04-01

    Survival and reproduction are the two primary life-history traits essential for species' persistence; however, the environmental conditions that support each of these traits may not be the same. Despite this, reproductive requirements are seldom considered when estimating species' potential distributions. We sought to examine potentially limiting environmental factors influencing the distribution of an oviparous reptile of conservation concern with respect to the species' survival and reproduction and to assess the implications of the species' predicted climatic constraints on current conservation practices. We used ecological niche modeling to predict the probability of environmental suitability for the alligator snapping turtle (Macrochelys temminckii). We built an annual climate model to examine survival and a nesting climate model to examine reproduction. We combined incubation temperature requirements, products of modeled soil temperature data, and our estimated distributions to determine whether embryonic development constrained the northern distribution of the species. Low annual precipitation constrained the western distribution of alligator snapping turtles, whereas the northern distribution was constrained by thermal requirements during embryonic development. Only a portion of the geographic range predicted to have a high probability of suitability for alligator snapping turtle survival was estimated to be capable of supporting successful embryonic development. Historic occurrence records suggest adult alligator snapping turtles can survive in regions with colder climes than those associated with consistent and successful production of offspring. Estimated egg-incubation requirements indicated that current reintroductions at the northern edge of the species' range are within reproductively viable environmental conditions. Our results highlight the importance of considering survival and reproduction when estimating species' ecological niches, implicating conservation plans, and benefits of incorporating physiological data when evaluating species' distributions. © 2016 Society for Conservation Biology.

  12. Developing Tools to Test the Thermo-Mechanical Models, Examples at Crustal and Upper Mantle Scale

    NASA Astrophysics Data System (ADS)

    Le Pourhiet, L.; Yamato, P.; Burov, E.; Gurnis, M.

    2005-12-01

    Testing geodynamical models is never an easy task. Depending on the spatio-temporal scale of the model, different testable predictions are needed and no magic recipe exists. This contribution first presents different methods that have been used to test thermo-mechanical modeling results at upper crustal, lithospheric and upper mantle scales, using three geodynamical examples: the Gulf of Corinth (Greece), the Western Alps, and the Sierra Nevada. At short spatio-temporal scales (e.g. the Gulf of Corinth), the resolution of the numerical models is usually sufficient to capture the timing and kinematics of the faults precisely enough to be tested by tectono-stratigraphic arguments. In actively deforming areas, microseismicity can be compared to the effective rheology, and the P and T axes of focal mechanisms can be compared with the local orientation of the major component of the stress tensor. At the lithospheric scale, the resolution of the models no longer permits constraining them by direct observations (i.e. structural data from the field or seismic reflection). Instead, synthetic P-T-t paths may be computed and compared to natural ones in terms of exhumation rates for ancient orogens. Topography may also help, but on continents it mainly depends on erosion laws that are complicated to constrain. Deeper in the mantle, the only available constraints are long-wavelength topographic data and tomographic "data". The major problem to overcome now, at lithospheric and upper mantle scales, is that the so-called "data" actually result from inverse models of the real data, and those inverse models are based on synthetic models. Post-processing P- and S-wave velocities is not sufficient to make testable predictions at the upper mantle scale. Instead, direct wave propagation models must be computed. This allows checking whether the differences between two models constitute a testable prediction or not. In the longer term, we may be able to use those synthetic models to reduce the residual in the inversion of elastic wave arrival times.

  13. Constrained range expansion and climate change assessments

    Treesearch

    Yohay Carmel; Curtis H. Flather

    2006-01-01

    Modeling the future distribution of keystone species has proved to be an important approach to assessing the potential ecological consequences of climate change (Loehle and LeBlanc 1996; Hansen et al. 2001). Predictions of range shifts are typically based on empirical models derived from simple correlative relationships between climatic characteristics of occupied and...

  14. Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator

    NASA Astrophysics Data System (ADS)

    Rehmatullah, Faizan

    In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to formulate a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform, and by using the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of simulator workspace for every maneuver while simulating the vehicle perception. With the ability to handle nonlinear constraints, the model predictive control allows us to incorporate workspace limitations.

  15. Prediction uncertainty and optimal experimental design for learning dynamical systems.

    PubMed

    Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
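
    The prediction-deviation search can be sketched as a small constrained optimization: find two parameter vectors that each fit the observed data to within a tolerance while differing as much as possible in a prediction of interest. The exponential-decay model, data, and tolerance below are toy stand-ins for the paper's dynamical-systems setting.

```python
# Hedged sketch: search for two well-fitting parameter sets with maximally different predictions.
import numpy as np
from scipy.optimize import minimize

t_obs = np.array([0.0, 0.5, 1.0, 1.5])
y_obs = np.array([1.0, 0.62, 0.36, 0.24])
t_new = 4.0                                        # prediction time of interest

model = lambda p, t: p[0] * np.exp(-p[1] * t)
sse = lambda p: np.sum((model(p, t_obs) - y_obs) ** 2)
tol = 2.0 * sse(np.array([1.0, 1.0]))              # "good fit" threshold (illustrative)

def neg_deviation(q):                               # q packs both parameter sets
    pa, pb = q[:2], q[2:]
    return -(model(pa, t_new) - model(pb, t_new))   # maximize the prediction gap

cons = [{"type": "ineq", "fun": lambda q: tol - sse(q[:2])},   # both sets must fit the data
        {"type": "ineq", "fun": lambda q: tol - sse(q[2:])}]
res = minimize(neg_deviation, x0=[1.0, 0.98, 1.0, 1.02], constraints=cons, method="SLSQP")
print("prediction deviation at t=4:", -res.fun)
```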

  16. Towards spatially constrained gust models

    NASA Astrophysics Data System (ADS)

    Bos, René; Bierbooms, Wim; van Bussel, Gerard

    2014-06-01

    With the trend of moving towards 10-20 MW turbines, rotor diameters are growing beyond the size of the largest turbulent structures in the atmospheric boundary layer. As a consequence, the fully uniform transients that are commonly used to predict extreme gust loads are losing their connection to reality and may lead to gross overdimensioning. More suitable would be to represent gusts by advecting air parcels and posing certain physical constraints on size and position. However, this would introduce several new degrees of freedom that significantly increase the computational burden of extreme load prediction. In an attempt to elaborate on the costs and benefits of such an approach, load calculations were done on the DTU 10 MW reference turbine, where a single uniform gust shape was given various spatial dimensions with the transverse wavelength ranging up to twice the rotor diameter (357 m). The resulting loads displayed a very high spread, but remained well under the level of a uniform gust. Moving towards spatially constrained gust models would therefore yield far less conservative, though more realistic, predictions at the cost of higher computation time.

  17. PPSITE - A New Method of Site Evaluation for Longleaf Pine: Model Development and User's Guide

    Treesearch

    Constance A. Harrington

    1990-01-01

    A model was developed to predict site index (base age 50 years) for longleaf pine (Pinus palustris Mill.). The model, named PPSITE, was based on soil characteristics, site location on the landscape, and land history. The model was constrained so that the relationship between site index and each soil-site variable was consistent with what was known...

  18. Pretest predictions for the response of a 1:8-scale steel LWR containment building model to static overpressurization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clauss, D.B.

    The analyses used to predict the behavior of a 1:8-scale model of a steel LWR containment building to static overpressurization are described and results are presented. Finite strain, large displacement, and nonlinear material properties were accounted for using finite element methods. Three-dimensional models were needed to analyze the penetrations, which included operable equipment hatches, personnel lock representations, and a constrained pipe. It was concluded that the scale model would fail due to leakage caused by large deformations of the equipment hatch sleeves. 13 refs., 34 figs., 1 tab.

  19. Postglacial Rebound and Current Ice Loss Estimates from Space Geodesy: The New ICE-6G (VM5a) Global Model

    NASA Astrophysics Data System (ADS)

    Peltier, W. R.; Argus, D.; Drummond, R.; Moore, A. W.

    2012-12-01

    We compare, on a global basis, estimates of site velocity against predictions of the newly constructed postglacial rebound model ICE-6G (VM5a). This model is fit to observations of North American postglacial rebound, thereby demonstrating that the ice sheet at last glacial maximum must have been, relative to ICE-5G, thinner in southern Manitoba, thinner near Yellowknife (Northwest Territories), thicker in eastern and southern Quebec, and thicker along the British Columbia-Alberta border. The GPS-based estimates of site velocity that we employ are more accurate than were previously available because they are based on GPS estimates of position as a function of time determined by incorporating satellite phase center variations [Desai et al. 2011]. These GPS estimates are constraining postglacial rebound in North America and Europe more tightly than ever before. In particular, given the high density of GPS sites in North America, and the fact that the velocity of the mass center (CM) of Earth is also more tightly constrained, the new model much more strongly constrains both the lateral extent of the proglacial forebulge and the rate at which this peripheral bulge (that was emplaced peripheral to the late Pleistocene Laurentia ice sheet) is presently collapsing. This fact proves to be important to the more accurate inference of the current rate of ice loss from both Greenland and Alaska based upon the time dependent gravity observations being provided by the GRACE satellite system. In West Antarctica we have also been able to significantly revise the previously prevalent ICE-5G deglaciation history so as to enable its predictions to be optimally consistent with GPS site velocities determined by connecting campaign WAGN measurements to those provided by observations from the permanent ANET sites. Ellsworth Land (south of the Antarctic Peninsula) is observed to be rising at 6 ±3 mm/yr according to our latest analyses; the Ellsworth mountains themselves are observed to be rising at 5 ±4 mm/yr; Palmer Land is observed to be rising at 3 ±3 mm/yr. The ICE-5G (VM2) model and the postglacial rebound component of the model of Simons, Ivins, and James [2010] had predicted uplift to be significantly faster than observed in this region, as previously documented in Argus et al. [2011]. From a global perspective, the new ICE-6G (VM5a) model is also a further significant improvement on the previous ICE-5G (VM2) model in that the degree two and order one components of its predicted time dependence of geoid height are tightly constrained by the recent inferences of Roy and Peltier [2011] of the post-GRACE-launch values of the speed and direction of true polar wander and the non-tidal acceleration of the length of day (lod).

  20. Programmable logic controller implementation of an auto-tuned predictive control based on minimal plant information.

    PubMed

    Valencia-Palomo, G; Rossiter, J A

    2011-01-01

    This paper makes two key contributions. First, it tackles the issue of the availability of constrained predictive control for low-level control loops. Hence, it describes how the constrained control algorithm is embedded in an industrial programmable logic controller (PLC) using the IEC 61131-3 programming standard. Second, there is a definition and implementation of a novel auto-tuned predictive controller; the key novelty is that the modelling is based on relatively crude but pragmatic plant information. Laboratory experiment tests were carried out in two bench-scale laboratory systems to prove the effectiveness of the combined algorithm and hardware solution. For completeness, the results are compared with a commercial proportional-integral-derivative (PID) controller (also embedded in the PLC) using the most up to date auto-tuning rules. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Training Signaling Pathway Maps to Biochemical Data with Constrained Fuzzy Logic: Quantitative Analysis of Liver Cell Responses to Inflammatory Stimuli

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Clarke, David C.; Sorger, Peter K.; Lauffenburger, Douglas A.

    2011-01-01

    Predictive understanding of cell signaling network operation based on general prior knowledge but consistent with empirical data in a specific environmental context is a current challenge in computational biology. Recent work has demonstrated that Boolean logic can be used to create context-specific network models by training proteomic pathway maps to dedicated biochemical data; however, the Boolean formalism is restricted to characterizing protein species as either fully active or inactive. To advance beyond this limitation, we propose a novel form of fuzzy logic sufficiently flexible to model quantitative data but also sufficiently simple to efficiently construct models by training pathway maps on dedicated experimental measurements. Our new approach, termed constrained fuzzy logic (cFL), converts a prior knowledge network (obtained from literature or interactome databases) into a computable model that describes graded values of protein activation across multiple pathways. We train a cFL-converted network to experimental data describing hepatocytic protein activation by inflammatory cytokines and demonstrate the application of the resultant trained models for three important purposes: (a) generating experimentally testable biological hypotheses concerning pathway crosstalk, (b) establishing capability for quantitative prediction of protein activity, and (c) prediction and understanding of the cytokine release phenotypic response. Our methodology systematically and quantitatively trains a protein pathway map summarizing curated literature to context-specific biochemical data. This process generates a computable model yielding successful prediction of new test data and offering biological insight into complex datasets that are difficult to fully analyze by intuition alone. PMID:21408212
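
    As a rough illustration of the graded-logic idea (not the exact cFL formalism), the sketch below combines a normalized Hill-type transfer function, which maps a continuous input in [0, 1] to a graded activation, with min/max fuzzy AND/OR gates. The parameter values, node names, and choice of operators are assumptions for illustration only; the functional form actually used by cFL should be taken from the original publication.

```python
# Hedged sketch: graded logic with a normalized Hill-type transfer function and fuzzy gates.
def hill(x, k=0.5, n=3.0):
    """Graded activation in [0, 1], normalized so that hill(1) == 1."""
    return (x**n / (k**n + x**n)) * (k**n + 1.0)

def and_gate(a, b):           # fuzzy AND
    return min(a, b)

def or_gate(a, b):            # fuzzy OR
    return max(a, b)

tnf, il1 = 0.8, 0.3                        # normalized stimulus levels (hypothetical)
nfkb = or_gate(hill(tnf), hill(il1))       # graded activation of a downstream node
print(round(nfkb, 3))
```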

  2. A mathematical model for predicting the life of polymer electrolyte fuel cell membranes subjected to hydration cycling

    NASA Astrophysics Data System (ADS)

    Burlatsky, S. F.; Gummalla, M.; O'Neill, J.; Atrazhev, V. V.; Varyukhin, A. N.; Dmitriev, D. V.; Erikhman, N. S.

    2012-10-01

    Under typical Polymer Electrolyte Membrane Fuel Cell (PEMFC) fuel cell operating conditions, part of the membrane electrode assembly is subjected to humidity cycling due to variation of inlet gas RH and/or flow rate. Cyclic membrane hydration/dehydration would cause cyclic swelling/shrinking of the unconstrained membrane. In a constrained membrane, it causes cyclic stress resulting in mechanical failure in the area adjacent to the gas inlet. A mathematical modeling framework for prediction of the lifetime of a PEMFC membrane subjected to hydration cycling is developed in this paper. The model predicts membrane lifetime as a function of RH cycling amplitude and membrane mechanical properties. The modeling framework consists of three model components: a fuel cell RH distribution model, a hydration/dehydration induced stress model that predicts stress distribution in the membrane, and a damage accrual model that predicts membrane lifetime. Short descriptions of the model components along with overall framework are presented in the paper. The model was used for lifetime prediction of a GORE-SELECT membrane.

  3. Constraining the physical properties of compositionally distinctive surfaces on Mars from overlapping THEMIS observations

    NASA Astrophysics Data System (ADS)

    Ahern, A.; Rogers, D.

    2017-12-01

    Better constraints on the physical properties (e.g. grain size, rock abundance, cohesion, porosity and amount of induration) of Martian surface materials can lead to greater understanding of outcrop origin (e.g. via sedimentary, effusive volcanic, pyroclastic processes). Many outcrop surfaces on Mars likely contain near-surface (<3 cm) vertical heterogeneity in physical properties due to thin sediment cover, induration, and physical weathering, that can obscure measurement of the bulk thermal conductivity of the outcrop materials just below. Fortunately, vertical heterogeneity within near-surface materials can result in unique, and possibly predictable, diurnal and seasonal temperature patterns. The KRC thermal model has been utilized in a number of previous studies to predict thermal inertia of surface materials on Mars. Here we use KRC to model surface temperatures from overlapping Mars Odyssey THEMIS surface temperature observations that span multiple seasons and local times, in order to constrain both the nature of vertical heterogeneity and the underlying outcrop thermal inertia for various spectrally distinctive outcrops on Mars. We utilize spectral observations from TES and CRISM to constrain the particle size of the uppermost surface. For this presentation, we will focus specifically on chloride-bearing units in Terra Sirenum and Meridiani Planum, as well as mafic and feldspathic bedrock locations with distinct spectral properties, yet uncertain origins, in Noachis Terra and Nili Fossae. We find that many of these surfaces exhibit variations in apparent thermal inertia with season and local time that are consistent with low thermal inertia materials overlying higher thermal inertia substrates. Work is ongoing to compare surface temperature measurements with modeled two-layer scenarios in order to constrain the top layer thickness and bottom layer thermal inertia. The information will be used to better interpret the origins of these distinctive outcrops.

  4. Modeling evapotranspiration based on plant hydraulic theory can predict spatial variability across an elevation gradient and link to biogeochemical fluxes

    NASA Astrophysics Data System (ADS)

    Mackay, D. S.; Frank, J.; Reed, D.; Whitehouse, F.; Ewers, B. E.; Pendall, E.; Massman, W. J.; Sperry, J. S.

    2012-04-01

    In woody plant systems transpiration is often the dominant component of total evapotranspiration, and so it is key to understanding water and energy cycles. Moreover, transpiration is tightly coupled to carbon and nutrient fluxes, and so it is also vital to understanding spatial variability of biogeochemical fluxes. However, the spatial variability of transpiration and its links to biogeochemical fluxes, within- and among-ecosystems, has been a challenge to constrain because of complex feedbacks between physical and biological controls. Plant hydraulics provides an emerging theory with the rigor needed to develop testable hypotheses and build useful models for scaling these coupled fluxes from individual plants to regional scales. This theory predicts that vegetative controls over water, energy, carbon, and nutrient fluxes can be determined from the limitation of plant water transport through the soil-xylem-stomata pathway. Limits to plant water transport can be predicted from measurable plant structure and function (e.g., vulnerability to cavitation). We present a next-generation coupled transpiration-biogeochemistry model based on this emerging theory. The model, TREEScav, is capable of predicting transpiration, along with carbon and nutrient flows, constrained by plant structure and function. The model incorporates tightly coupled mechanisms of the demand and supply of water through the soil-xylem-stomata system, with the feedbacks to photosynthesis and utilizable carbohydrates. The model is evaluated by testing it against transpiration and carbon flux data along an elevation gradient of woody plants comprising sagebrush steppe, mid-elevation lodgepole pine forests, and subalpine spruce/fir forests in the Rocky Mountains. The model accurately predicts transpiration and carbon fluxes as measured from gas exchange, sap flux, and eddy covariance towers. The results of this work demonstrate that credible spatial predictions of transpiration and related biogeochemical fluxes will be possible at regional scales using relatively easily obtained vegetation structural and functional information.

  5. Plant physiological models of heat, water and photoinhibition stress for climate change modelling and agricultural prediction

    NASA Astrophysics Data System (ADS)

    Nicolas, B.; Gilbert, M. E.; Paw U, K. T.

    2015-12-01

    Soil-Vegetation-Atmosphere Transfer (SVAT) models are based upon well-understood steady-state photosynthetic physiology - the Farquhar-von Caemmerer-Berry (FvCB) model. However, representations of physiological stress and damage have not been successfully integrated into SVAT models. Generally, it has been assumed that plants will strive to conserve water at higher temperatures by reducing stomatal conductance or adjusting osmotic balance, until potentially damaging temperatures and the need for evaporative cooling become more important than water conservation. A key point is that damage is the result of combined stresses: drought leads to stomatal closure, less evaporative cooling, high leaf temperature, and less photosynthetic dissipation of absorbed energy, all coupled with high light (photosynthetic photon flux density; PPFD). This leads to excess energy absorbed by Photosystem II (PSII) and results in photoinhibition and damage, neither of which is included in SVAT models. Current representations treat photoinhibition as a function of PPFD alone, not as a function of photosynthesis constrained by heat or water stress. Thus, it seems unlikely that current models can predict responses of vegetation to climate variability and change. We propose a dynamic model of damage to Rubisco and RuBP-regeneration that accounts, mechanistically, for the interactions between high temperature, light, and constrained photosynthesis under drought. Further, these predictions are illustrated by key experiments allowing model validation. We also integrated this new framework within the Advanced Canopy-Atmosphere-Soil Algorithm (ACASA). Preliminary results show that our approach can be used to predict reasonable photosynthetic dynamics. For instance, a leaf undergoing one day of drought stress will quickly decrease its maximum quantum yield of PSII (Fv/Fm), but it will not recover to unstressed levels for several days. Consequently, the cumulative effect of photoinhibition on photosynthesis can reduce CO2 uptake by about 35%. As a result, the incorporation of stress and damage into SVAT models could considerably improve our ability to predict global responses to climate change.
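
    A minimal sketch of the dynamic-damage idea (first-order photoinactivation proportional to absorbed light plus first-order repair); the rate constants and light function are illustrative assumptions and this is not the ACASA/FvCB coupling used in the study.

      import numpy as np

      def psii_damage_repair(ppfd, hours=48.0, dt=0.01, k_damage=2e-4, k_repair=0.05):
          # Fraction of functional PSII (a crude proxy for Fv/Fm). Damage scales
          # with incident light; repair is first order. Rates are per hour.
          n = int(hours / dt)
          f, out = 1.0, np.empty(n)
          for i in range(n):
              t = i * dt
              f += dt * (-k_damage * ppfd(t) * f + k_repair * (1.0 - f))
              out[i] = f
          return out

      # A high-light "drought" day (little photochemical dissipation) followed by recovery.
      light = lambda t: 1500.0 if 6.0 < (t % 24.0) < 18.0 else 0.0
      fv_fm_proxy = psii_damage_repair(light)

    With these placeholder rates the proxy drops quickly during the illuminated period and takes several days to relax back, qualitatively matching the slow recovery of Fv/Fm described above.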

  6. Network-constrained group lasso for high-dimensional multinomial classification with application to cancer subtype prediction.

    PubMed

    Tian, Xinyu; Wang, Xuefeng; Chen, Jun

    2014-01-01

    The classic multinomial logit model, commonly used in multiclass regression problems, is restricted to a small number of predictors and does not take into account the relationships among variables. It has limited use for genomic data, where the number of genomic features far exceeds the sample size. Genomic features such as gene expressions are usually related by an underlying biological network. Efficient use of the network information is important to improve classification performance as well as the biological interpretability. We proposed a multinomial logit model that is capable of addressing both the high dimensionality of predictors and the underlying network information. Group lasso was used to induce model sparsity, and a network constraint was imposed to induce smoothness of the coefficients with respect to the underlying network structure. To deal with the non-smoothness of the objective function in optimization, we developed a proximal gradient algorithm for efficient computation. The proposed model was compared to models with no prior structure information in both simulations and a problem of cancer subtype prediction with real TCGA (The Cancer Genome Atlas) gene expression data. The network-constrained model outperformed the traditional ones in both cases.
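
    A compact sketch of the proximal gradient (ISTA-style) iteration for this type of penalized multinomial logit, assuming the network is encoded by a graph Laplacian L and that all class coefficients of a feature form one group; the step size, penalty weights and names are illustrative, not the authors' implementation.

      import numpy as np

      def softmax(Z):
          Z = Z - Z.max(axis=1, keepdims=True)
          E = np.exp(Z)
          return E / E.sum(axis=1, keepdims=True)

      def group_soft_threshold(B, thresh):
          # Proximal operator of the group lasso: shrink each feature's row of
          # class coefficients toward zero as a block.
          norms = np.linalg.norm(B, axis=1, keepdims=True)
          return B * np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))

      def fit_network_group_lasso(X, y, L, lam_group=0.1, lam_net=0.1, step=1e-3, n_iter=500):
          # X: n x p features, y: integer class labels 0..K-1, L: p x p graph Laplacian.
          n, p = X.shape
          K = int(y.max()) + 1
          Y = np.eye(K)[y]                       # one-hot labels
          B = np.zeros((p, K))
          for _ in range(n_iter):
              P = softmax(X @ B)                 # class probabilities
              grad = X.T @ (P - Y) / n           # gradient of the multinomial NLL
              grad += 2.0 * lam_net * (L @ B)    # gradient of the network smoothness penalty
              B = group_soft_threshold(B - step * grad, step * lam_group)
          return B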

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saide, Pablo E.; Peterson, David A.; de Silva, Arlindo

    We couple airborne, ground-based, and satellite observations; conduct regional simulations; and develop and apply an inversion technique to constrain hourly smoke emissions from the Rim Fire, the third largest observed in California, USA. Emissions constrained with multiplatform data show notable nocturnal enhancements (sometimes over a factor of 20), correlate better with daily burned area data, and are a factor of 2–4 higher than a priori estimates, highlighting the need for improved characterization of diurnal profiles and day-to-day variability when modeling extreme fires. Constraining only with satellite data results in smaller enhancements mainly due to missing retrievals near the emissions source, suggesting that top-down emission estimates for these events could be underestimated and a multiplatform approach is required to resolve them. Predictions driven by emissions constrained with multiplatform data present significant variations in downwind air quality and in aerosol feedback on meteorology, emphasizing the need for improved emissions estimates during exceptional events.

  8. Active/Passive Control of Sound Radiation from Panels using Constrained Layer Damping

    NASA Technical Reports Server (NTRS)

    Gibbs, Gary P.; Cabell, Randolph H.

    2003-01-01

    A hybrid passive/active noise control system utilizing constrained layer damping and model predictive feedback control is presented. This system is used to control the sound radiation of panels due to broadband disturbances. To facilitate the hybrid system design, a methodology for placement of constrained layer damping which targets selected modes based on their relative radiated sound power is developed. The placement methodology is utilized to determine two constrained layer damping configurations for experimental evaluation of a hybrid system. The first configuration targets the (4,1) panel mode which is not controllable by the piezoelectric control actuator, and the (2,3) and (5,2) panel modes. The second configuration targets the (1,1) and (3,1) modes. The experimental results demonstrate the improved reduction of radiated sound power using the hybrid passive/active control system as compared to the active control system alone.

  9. Does Nudging Squelch the Extremes in Regional Climate Modeling?

    EPA Science Inventory

    An important question in regional climate downscaling is whether to constrain (nudge) the interior of the limited-area domain toward the larger-scale driving fields. Prior research has demonstrated that interior nudging can increase the skill of regional climate predictions origin...

  10. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model - Part 1: Assessing the influence of constrained multi-generational ageing

    NASA Astrophysics Data System (ADS)

    Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.

    2016-02-01

    Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the statistical oxidation model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional University of California at Davis / California Institute of Technology (UCD/CIT) air quality model and applied to air quality episodes in California and the eastern USA. The mass, composition and properties of SOA predicted using SOM were compared to SOA predictions generated by a traditional two-product model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under the conditions tested. Consequently, the choice between low- and high-NOx yields perturbs SOA concentrations by a factor of two and is probably a much stronger determinant in 3-D models than multi-generational oxidation. While total predicted SOA mass is similar for the SOM and two-product models, the SOM model predicts increased SOA contributions from anthropogenic precursors (alkanes, aromatics) and sesquiterpenes and decreased SOA contributions from isoprene and monoterpenes relative to the two-product model calculations. The SOA predicted by SOM has a much lower volatility than that predicted by the traditional model, resulting in better qualitative agreement with volatility measurements of ambient OA. On account of its lower volatility, the SOA mass produced by SOM does not appear to be as strongly influenced by the inclusion of oligomerization reactions, whereas the two-product model relies heavily on oligomerization to form low-volatility SOA products. Finally, an unconstrained contemporary hybrid scheme to model multi-generational oxidation within the framework of a two-product model in which ageing reactions are added on top of the existing two-product parameterization is considered. This hybrid scheme formed at least 3 times more SOA than the SOM during regional simulations as a result of excessive transformation of semi-volatile vapors into lower volatility material that strongly partitions to the particle phase. This finding suggests that these hybrid multi-generational schemes should be used with great caution in regional models.
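
    For reference, the "traditional two-product model" used as the benchmark above reduces to an absorptive-partitioning calculation; a minimal fixed-point version is sketched below with placeholder yields and partitioning coefficients (not values from the study).

      import numpy as np

      def two_product_soa(delta_hc, alphas, K, seed=0.0, n_iter=200):
          # Classic two-product absorptive partitioning: iterate the organic
          # aerosol mass M_o until the gas-particle split of both semivolatile
          # products is self-consistent. Concentrations in ug/m3, K in m3/ug.
          C = np.asarray(alphas, dtype=float) * delta_hc
          K = np.asarray(K, dtype=float)
          M_o = seed + 0.5 * C.sum()
          for _ in range(n_iter):
              m = max(M_o, 1e-12)
              F = (K * m) / (1.0 + K * m)        # particle-phase fraction of each product
              M_o = seed + np.sum(C * F)
          return M_o

      # Example: 10 ug/m3 of reacted VOC with illustrative yields and K values.
      soa_mass = two_product_soa(10.0, alphas=[0.2, 0.1], K=[0.5, 0.03], seed=1.0)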

  11. Parameter estimation uncertainty: Comparing apples and apples?

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2012-12-01

    Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  12. Thermomechanical Fatigue of Ductile Cast Iron and Its Life Prediction

    NASA Astrophysics Data System (ADS)

    Wu, Xijia; Quan, Guangchun; MacNeil, Ryan; Zhang, Zhong; Liu, Xiaoyang; Sloss, Clayton

    2015-06-01

    Thermomechanical fatigue (TMF) behaviors of ductile cast iron (DCI) were investigated under out-of-phase (OP), in-phase (IP), and constrained strain-control conditions with temperature hold in various temperature ranges: 573 K to 1073 K, 723 K to 1073 K, and 433 K to 873 K (300 °C to 800 °C, 450 °C to 800 °C, and 160 °C to 600 °C). The integrated creep-fatigue theory (ICFT) model was incorporated into the finite element method to simulate the hysteresis behavior and predict the TMF life of DCI under those test conditions. With the consideration of four deformation/damage mechanisms: (i) plasticity-induced fatigue, (ii) intergranular embrittlement, (iii) creep, and (iv) oxidation, as revealed from the previous study on low cycle fatigue of the material, the model delineates the contributions of these physical mechanisms in the asymmetrical hysteresis behavior and the damage accumulation process leading to final TMF failure. This study shows that the ICFT model can simulate the stress-strain response and life of DCI under complex TMF loading profiles (OP and IP, and constrained with temperature hold).

  13. Whole Atmosphere Modeling and Data Analysis: Success Stories, Challenges and Perspectives

    NASA Astrophysics Data System (ADS)

    Yudin, V. A.; Akmaev, R. A.; Goncharenko, L. P.; Fuller-Rowell, T. J.; Matsuo, T.; Ortland, D. A.; Maute, A. I.; Solomon, S. C.; Smith, A. K.; Liu, H.; Wu, Q.

    2015-12-01

    At the end of the 20th century, Raymond Roble suggested an ambitious target of developing an atmospheric general circulation model (GCM) that spans from the surface to the thermosphere for modeling the coupled atmosphere-ionosphere with drivers from terrestrial meteorology and solar-geomagnetic inputs. He pointed out several areas of research and applications that would benefit greatly from the development and improvement of whole atmosphere modeling. At present, several research groups using middle and whole atmosphere models have attempted to perform coupled ionosphere-thermosphere predictions to interpret the "unexpected" anomalies in the electron content, ions and plasma drifts observed during recent stratospheric warming events. The recent whole atmosphere inter-comparison case studies also displayed striking differences in simulations of prevailing flows, planetary waves and dominant tidal modes even when the lower atmosphere domains of those models were constrained by similar meteorological analyses. We will present the possible reasons for such differences between data-constrained whole atmosphere simulations when analyses with 6-hour time resolution are used and discuss the potential model-data and model-model differences above the stratopause. The possible shortcomings of the whole atmosphere simulations associated with model physics, dynamical cores and resolutions will be discussed. With the increased confidence in the space-borne temperature, winds and ozone observations and extensive collections of ground-based upper atmosphere observational facilities, the whole atmosphere modelers will be able to quantify annual and year-to-year variability of the zonal mean flows, planetary waves and tides. We will demonstrate the value of tidal and planetary wave variability deduced from the space-borne data and ground-based systems for evaluation and tune-up of whole atmosphere simulations including corrections of systematic model errors. Several success stories on the middle and whole atmosphere simulations coupled with the ionosphere models will be highlighted, and future perspectives for links of the space and terrestrial weather predictions constrained by current and scheduled ionosphere-thermosphere-mesosphere satellite missions will be presented.

  14. Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction

    NASA Technical Reports Server (NTRS)

    Li, Zhijin; Chao, Yi; Li, P. Peggy

    2012-01-01

    A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated and the associated software system has been developed for improving high-resolution coastal ocean prediction. This system helps improve coastal ocean prediction skill, and has been used in support of operational coastal ocean forecasting systems and field experiments. The system has been developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme then allows the large scales and model bias to be constrained effectively through assimilation of sparse vertical profiles, and the small scales through assimilation of high-resolution surface measurements. This MS-3DVAR enhances the capability of the traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and in particular maximizes the extraction of information from limited numbers of vertical profile observations.
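
    A toy illustration of the large-to-small-scale sequence with scale-dependent background covariances; the one-dimensional grid, covariances and observation operators below are synthetic stand-ins, not the MS-3DVAR code.

      import numpy as np

      def threedvar_update(xb, B, y, H, R):
          # One 3DVAR analysis step (the minimizer of the quadratic
          # background-plus-observation cost function).
          K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
          return xb + K @ (y - H @ xb)

      def gaussian_cov(n, length_scale, variance):
          idx = np.arange(n)
          return variance * np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / length_scale) ** 2)

      n = 50
      xb = np.zeros(n)                                   # background state
      # Pass 1: two sparse "profile" observations, broad correlations (large scales).
      H1 = np.zeros((2, n)); H1[0, 10] = H1[1, 40] = 1.0
      x1 = threedvar_update(xb, gaussian_cov(n, 15.0, 1.0),
                            np.array([1.0, -0.5]), H1, 0.1 * np.eye(2))
      # Pass 2: dense "surface" observations, short correlations, centred on the pass-1 analysis.
      y2 = x1 + 0.3 * np.sin(np.arange(n) / 3.0)
      x2 = threedvar_update(x1, gaussian_cov(n, 3.0, 0.5), y2, np.eye(n), 0.2 * np.eye(n))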

  15. EMG prediction from Motor Cortical Recordings via a Non-Negative Point Process Filter

    PubMed Central

    Nazarpour, Kianoush; Ethier, Christian; Paninski, Liam; Rebesco, James M.; Miall, R. Chris; Miller, Lee E.

    2012-01-01

    A constrained point process filtering mechanism for prediction of electromyogram (EMG) signals from multi-channel neural spike recordings is proposed here. Filters from the Kalman family are inherently sub-optimal in dealing with non-Gaussian observations, or a state evolution that deviates from the Gaussianity assumption. To address these limitations, we modeled the non-Gaussian neural spike train observations by using a generalized linear model (GLM) that encapsulates covariates of neural activity, including the neurons’ own spiking history, concurrent ensemble activity, and extrinsic covariates (EMG signals). In order to predict the envelopes of EMGs, we reformulated the Kalman filter (KF) in an optimization framework and utilized a non-negativity constraint. This structure characterizes the non-linear correspondence between neural activity and EMG signals reasonably. The EMGs were recorded from twelve forearm and hand muscles of a behaving monkey during a grip-force task. For the case of limited training data, the constrained point process filter improved the prediction accuracy when compared to a conventional Wiener cascade filter (a linear causal filter followed by a static non-linearity) for different bin sizes and delays between input spikes and EMG output. For longer training data sets, results of the proposed filter and that of the Wiener cascade filter were comparable. PMID:21659018
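
    The reformulated update can be written as a bound-constrained quadratic problem; the sketch below uses a generic SciPy solver and placeholder matrices rather than the authors' point-process filter, but shows how the non-negativity of the predicted EMG envelope enters.

      import numpy as np
      from scipy.optimize import minimize

      def constrained_kalman_update(x_pred, P_pred, y, H, R):
          # Kalman measurement update recast as minimization of the usual
          # quadratic cost, with bounds keeping the state (EMG envelope) >= 0.
          Pinv, Rinv = np.linalg.inv(P_pred), np.linalg.inv(R)
          def cost(x):
              dx, dy = x - x_pred, y - H @ x
              return dx @ Pinv @ dx + dy @ Rinv @ dy
          res = minimize(cost, np.maximum(x_pred, 0.0),
                         bounds=[(0.0, None)] * x_pred.size)
          return res.x

      # Placeholder example: three muscles, two decoded observations.
      x_pred = np.array([0.2, -0.1, 0.5]); P_pred = 0.1 * np.eye(3)
      H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]); R = 0.05 * np.eye(2)
      x_post = constrained_kalman_update(x_pred, P_pred, np.array([0.3, 0.6]), H, R)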

  16. Explaining postseismic and aseismic transient deformation in subduction zones with rate and state friction modeling constrained by lab and geodetic observations

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Dedontney, N. L.; Rice, J. R.

    2007-12-01

    Rate and state friction, as applied to modeling subduction earthquake sequences, routinely predicts postseismic slip. It also predicts spontaneous aseismic slip transients, at least when pore pressure p is highly elevated near and downdip from the stability transition [Liu and Rice, 2007]. Here we address how to make such postseismic and transient predictions more fully compatible with geophysical observations. For example, lab observations can determine the a, b parameters and state evolution slip L of rate and state friction as functions of lithology and temperature and, with the aid of a structural and thermal model of the subduction zone, as functions of downdip distance. Geodetic observations constrain interseismic, postseismic and aseismic transient deformations, which are controlled in the modeling by the distributions of aσ̄ and bσ̄ (parameters which also partly control the seismic rupture phase), where σ̄ = σ - p. Elevated p, controlled by tectonic compression and dehydration, may be constrained by petrologic and seismic observations. The amount of deformation and downdip extent of the slipping zone associated with the spontaneous quasi-periodic transients, as thus far modeled [Liu and Rice, 2007], is generally smaller than that observed during episodes of slow slip events in northern Cascadia and SW Japan subduction zones. However, the modeling was based on lab data for granite gouge under hydrothermal conditions because data is most complete for that case. We here report modeling based on lab data on dry granite gouge [Stesky, 1975; Lockner et al., 1986], involving no or lessened chemical interaction with water and hence being a possibly closer analog to dehydrated oceanic crust, and limited data on gabbro gouge [He et al., 2007], an expected lithology. Both data sets show a much less rapid increase of a-b with temperature above the stability transition (~ 350 °C) than does wet granite gouge; a-b increases to ~ 0.08 for wet granite at 600 °C, but to only ~ 0.01 in the dry granite and gabbro cases. We find that the lessened high-T a - b does, for the same σ̄, modestly extend the transient slip episodes further downdip, although a majority of slip is still contributed near and in the updip rate-weakening region. However, postseismic slip, for the same σ̄, propagates much further downdip into the rate-strengthening region. To better constrain the downdip distribution of (a - b)σ̄, and possibly aσ̄ and L, we focus on the geodetically constrained [Hutton et al., 2001] space-time distribution of postseismic slip for the 1995 Mw = 8.0 Colima-Jalisco earthquake. This is a similarly shallow dipping subduction zone with a thermal profile [Currie et al., 2001] comparable to those that have thus far been shown to exhibit aseismic transients and non-volcanic tremor [Peacock et al., 2002]. We extrapolate the modeled 2-D postseismic slip, following a thrust earthquake with a coseismic slip similar to the 1995 event, to a spatial-temporal 3-D distribution. Surface deformation due to such slips on the thrust fault in an elastic half space is calculated and compared to that observed at western Mexico GPS stations, to constrain the above depth-variable model parameters.

  17. Alpha and theta band dynamics related to sentential constraint and word expectancy.

    PubMed

    Rommers, Joost; Dickson, Danielle S; Norton, James J S; Wlotko, Edward W; Federmeier, Kara D

    2017-01-01

    Despite strong evidence for prediction during language comprehension, the underlying mechanisms, and the extent to which they are specific to language, remain unclear. Re-analyzing an ERP study, we examined responses in the time-frequency domain to expected and unexpected (but plausible) words in strongly and weakly constraining sentences, and found results similar to those reported in nonverbal domains. Relative to expected words, unexpected words elicited an increase in the theta band (4-7 Hz) in strongly constraining contexts, suggesting the involvement of control processes to deal with the consequences of having a prediction disconfirmed. Prior to critical word onset, strongly constraining sentences exhibited a decrease in the alpha band (8-12 Hz) relative to weakly constraining sentences, suggesting that comprehenders can take advantage of predictive sentence contexts to prepare for the input. The results suggest that the brain recruits domain-general preparation and control mechanisms when making and assessing predictions during sentence comprehension.

  18. Potential New Lidar Observations for Cloud Studies

    NASA Technical Reports Server (NTRS)

    Winker, Dave; Hu, Yong; Narir, Amin; Cai, Xia

    2015-01-01

    The response of clouds to global warming represents a major uncertainty in estimating climate sensitivity. These uncertainties have been traced to shallow marine clouds in the tropics and subtropics. CALIOP observations have already been used extensively to evaluate model predictions of shallow cloud fraction and top height (Leahy et al. 2013; Nam et al. 2012). Tools are needed to probe the lowest levels of the troposphere. The large footprint of satellite lidars produces strong multiple scattering from clouds, which presents new possibilities for cloud retrievals to constrain model predictions.

  19. Zee-Babu type model with U (1 )Lμ-Lτ gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-05-01

    We extend the Zee-Babu model, introducing local U (1 )Lμ-Lτ symmetry with several singly charged bosons. We find a predictive neutrino mass texture in a simple hypothesis in which mixings among singly charged bosons are negligible. Also, lepton-flavor violations are less constrained compared with the original model. Then, we explore the testability of the model, focusing on doubly charged boson physics at the LHC and the International Linear Collider.

  20. Cosmological structure formation in Decaying Dark Matter models

    NASA Astrophysics Data System (ADS)

    Cheng, Dalong; Chu, M.-C.; Tang, Jiayu

    2015-07-01

    The standard cold dark matter (CDM) model predicts small structures that are too numerous and too dense. We consider an alternative model in which the dark matter undergoes two-body decays with cosmological lifetime τ into only one type of massive daughter particle with non-relativistic recoil velocity Vk. This decaying dark matter model (DDM) can suppress structure formation below its free-streaming scale on a time scale comparable to τ. Compared with warm dark matter (WDM), DDM can better reduce the small structures while being consistent with high redshift observations. We study the cosmological structure formation in DDM by performing self-consistent N-body simulations and point out that cosmological simulations are necessary to understand the DDM structures especially on non-linear scales. We propose empirical fitting functions for the DDM suppression of the mass function and the concentration-mass relation, which depend on the decay parameters lifetime τ, recoil velocity Vk and redshift. The fitting functions lead to accurate reconstruction of the non-linear power transfer function of DDM to CDM in the framework of the halo model. Using these results, we set constraints on the DDM parameter space by demanding that DDM does not induce larger suppression than the Lyman-α constrained WDM models. We further generalize and constrain the DDM models to initial conditions with non-trivial mother fractions and show that the halo model predictions are still valid after considering a global decayed fraction. Finally, we point out that the DDM is unlikely to resolve the disagreement on cluster numbers between the Planck primary CMB prediction and the Sunyaev-Zeldovich (SZ) effect number count for τ ~ H0^-1.

  1. Model-data assimilation of multiple phenological observations to constrain and predict leaf area index.

    PubMed

    Viskari, Toni; Hardiman, Brady; Desai, Ankur R; Dietze, Michael C

    2015-03-01

    Our limited ability to accurately simulate leaf phenology is a leading source of uncertainty in models of ecosystem carbon cycling. We evaluate whether continuously updating canopy state variables with observations is beneficial for predicting phenological events. We employed an ensemble adjustment Kalman filter (EAKF) to update predictions of leaf area index (LAI) and leaf extension using tower-based photosynthetically active radiation (PAR) and Moderate Resolution Imaging Spectroradiometer (MODIS) data for 2002-2005 at Willow Creek, Wisconsin, USA, a mature, even-aged, northern hardwood, deciduous forest. The ecosystem demography model version 2 (ED2) was used as the prediction model, forced by offline climate data. EAKF successfully incorporated information from both the observations and model predictions weighted by their respective uncertainties. The resulting estimate reproduced the observed leaf phenological cycle in the spring and the fall better than a parametric model prediction. These results indicate that during spring the observations contribute most in determining the correct bud-burst date, after which the model performs well, but accurately modeling fall leaf senescence requires continuous model updating from observations. While the predicted net ecosystem exchange (NEE) of CO2 precedes tower observations and unassimilated model predictions in the spring, overall the prediction follows observed NEE better than the model alone. Our results show state data assimilation successfully simulates the evolution of plant leaf phenology and improves model predictions of forest NEE.
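
    For a single observed variable the EAKF update reduces to a shift-and-contract of the prior ensemble (Anderson 2001); a minimal sketch with illustrative LAI numbers follows.

      import numpy as np

      def eakf_scalar_update(ensemble, obs, obs_var):
          # Ensemble adjustment Kalman filter for one scalar state variable:
          # move the ensemble mean toward the observation and contract the
          # spread, weighting model and observation by their uncertainties.
          prior_mean = ensemble.mean()
          prior_var = ensemble.var(ddof=1)
          post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
          post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
          return (ensemble - prior_mean) * np.sqrt(post_var / prior_var) + post_mean

      # Example: nudge an ensemble of modeled LAI values toward a MODIS-like estimate.
      lai_ensemble = np.array([2.8, 3.1, 3.4, 2.9, 3.2])
      updated = eakf_scalar_update(lai_ensemble, obs=3.6, obs_var=0.2)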

  2. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    NASA Astrophysics Data System (ADS)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.
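
    As a concrete reference for the complexity-versus-fit trade-off invoked above, the helper below evaluates the two most common information criteria, assuming the maximized log-likelihood of each candidate structure has already been computed; the example numbers are placeholders.

      import numpy as np

      def aic_bic(log_likelihood, n_params, n_obs):
          # Akaike and Bayesian information criteria: penalize the maximized
          # log-likelihood by the number of calibrated parameters.
          aic = 2.0 * n_params - 2.0 * log_likelihood
          bic = n_params * np.log(n_obs) - 2.0 * log_likelihood
          return aic, bic

      # Example: compare two water-balance structures on the same monthly record.
      print(aic_bic(log_likelihood=-152.3, n_params=4, n_obs=360))
      print(aic_bic(log_likelihood=-150.9, n_params=7, n_obs=360))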

  3. Gamma Prime Precipitate Evolution During Aging of a Model Nickel-Based Superalloy

    NASA Astrophysics Data System (ADS)

    Goodfellow, A. J.; Galindo-Nava, E. I.; Christofidou, K. A.; Jones, N. G.; Martin, T.; Bagot, P. A. J.; Boyer, C. D.; Hardy, M. C.; Stone, H. J.

    2018-03-01

    The microstructural stability of nickel-based superalloys is critical for maintaining alloy performance during service in gas turbine engines. In this study, the precipitate evolution in a model polycrystalline Ni-based superalloy during aging to 1000 hours has been studied via transmission electron microscopy, atom probe tomography, and neutron diffraction. Variations in phase composition and precipitate morphology, size, and volume fraction were observed during aging, while the constrained lattice misfit remained constant at approximately zero. The experimental composition of the γ matrix phase was consistent with thermodynamic equilibrium predictions, while significant differences were identified between the experimental and predicted results from the γ' phase. These results have implications for the evolution of mechanical properties in service and their prediction using modeling methods.

  4. Why Bother to Calibrate? Model Consistency and the Value of Prior Information

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal

    2015-04-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  5. Why Bother and Calibrate? Model Consistency and the Value of Prior Information.

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.

    2014-12-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  6. Process consistency in models: The importance of system signatures, expert knowledge, and process complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.

    2014-09-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.

  7. Order-constrained linear optimization.

    PubMed

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
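
    A simplified two-stage stand-in for the order-constrained idea (maximize the ordinal fit via Kendall's tau first, then recover the metric scale by least squares); this is not the published OCLO algorithm, and the optimizer, starting point and normalization are assumptions.

      import numpy as np
      from scipy.stats import kendalltau
      from scipy.optimize import minimize

      def fit_ordinal_then_metric(X, y):
          # Stage 1: search for a weight direction maximizing Kendall's tau
          # between the linear score Xw and y (the ordinal criterion).
          def neg_tau(w):
              tau, _ = kendalltau(X @ w, y)
              return -tau if np.isfinite(tau) else 1.0
          w = minimize(neg_tau, np.ones(X.shape[1]), method="Nelder-Mead").x
          w = w / np.linalg.norm(w)               # only the direction matters so far
          # Stage 2: ordinary least squares on the score fixes scale and intercept.
          score = X @ w
          A = np.column_stack([score, np.ones_like(score)])
          slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
          return w * slope, intercept

      # Usage sketch: coefs, b0 = fit_ordinal_then_metric(X, y) for an n x p matrix X.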

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dierickx, Marion I. P.; Loeb, Abraham, E-mail: mdierickx@cfa.harvard.edu, E-mail: aloeb@cfa.harvard.edu

    The extensive span of the Sagittarius (Sgr) stream makes it a promising tool for studying the gravitational potential of the Milky Way (MW). Characterizing its stellar kinematics can constrain halo properties and provide a benchmark for the paradigm of galaxy formation from cold dark matter. Accurate models of the disruption dynamics of the Sgr progenitor are necessary to employ this tool. Using a combination of analytic modeling and N-body simulations, we build a new model of the Sgr orbit and resulting stellar stream. In contrast to previous models, we simulate the full infall trajectory of the Sgr progenitor from the time it first crossed the MW virial radius 8 Gyr ago. An exploration of the parameter space of initial phase-space conditions yields tight constraints on the angular momentum of the Sgr progenitor. Our best-fit model is the first to accurately reproduce existing data on the 3D positions and radial velocities of the debris detected 100 kpc away in the MW halo. In addition to replicating the mapped stream, the simulation also predicts the existence of several arms of the Sgr stream extending to hundreds of kiloparsecs. The two most distant stars known in the MW halo coincide with the predicted structure. Additional stars in the newly predicted arms can be found with future data from the Large Synoptic Survey Telescope. Detecting a statistical sample of stars in the most distant Sgr arms would provide an opportunity to constrain the MW potential out to unprecedented Galactocentric radii.

  9. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
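
    A generic random-walk Metropolis sketch of the Bayesian constraint step, with a stand-in power-law "process rate" model; none of the names or numbers come from BOSS itself.

      import numpy as np

      def metropolis(log_post, theta0, n_steps=5000, step=0.1, seed=0):
          # Random-walk Metropolis: accept proposals with probability
          # min(1, posterior ratio); returns the chain of parameter samples.
          rng = np.random.default_rng(seed)
          theta = np.array(theta0, dtype=float)
          lp = log_post(theta)
          chain = np.empty((n_steps, theta.size))
          for i in range(n_steps):
              prop = theta + step * rng.standard_normal(theta.size)
              lp_prop = log_post(prop)
              if np.log(rng.uniform()) < lp_prop - lp:
                  theta, lp = prop, lp_prop
              chain[i] = theta
          return chain

      # Stand-in observations of a process rate that scales as a * moment**b.
      moments = np.linspace(0.5, 5.0, 20)
      obs = 2.0 * moments**1.5 + 0.2 * np.random.default_rng(1).standard_normal(20)

      def log_post(theta):
          a, b = theta
          pred = a * moments**b
          return -0.5 * np.sum(((obs - pred) / 0.2) ** 2) - 0.5 * np.sum((theta / 10.0) ** 2)

      chain = metropolis(log_post, theta0=[1.0, 1.0])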

  10. The highest-frequency kHz QPOs in neutron star low mass X-ray binaries

    NASA Astrophysics Data System (ADS)

    van Doesburgh, Marieke; van der Klis, Michiel; Morsink, Sharon M.

    2018-05-01

    We investigate the detections with RXTE of the highest-frequency kHz QPOs previously reported in six neutron star (NS) low mass X-ray binaries. We find that the highest-frequency kHz QPO detected in 4U 0614+09 has a 1267 Hz 3σ confidence lower limit on its centroid frequency. This is the highest such limit reported to date, and of direct physical interest as it can be used to constrain QPO models and the supranuclear density equation of state (EoS). We compare our measured frequencies to maximum orbital frequencies predicted in full GR using models of rotating neutron stars with a number of different modern EoS and show that these can accommodate the observed QPO frequencies. Orbital motion constrained by NS and ISCO radii is therefore a viable explanation of these QPOs. In the most constraining case of 4U 0614+09 we find the NS mass must be M<2.1 M⊙. From our measured QPO frequencies we can constrain the NS radii for five of the six sources we studied to narrow ranges (±0.1-0.7 km) different for each source and each EoS.
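
    The mass bound follows from requiring that the QPO frequency not exceed the orbital frequency at the innermost stable circular orbit (ISCO); the sketch below uses the non-rotating (Schwarzschild) formula, which is more restrictive than the rotating-star limit of 2.1 M⊙ quoted above, so it serves only as an order-of-magnitude check.

      import numpy as np

      G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

      def isco_frequency(mass_kg):
          # Orbital frequency at the ISCO of a non-rotating compact object.
          return c**3 / (2.0 * np.pi * 6.0**1.5 * G * mass_kg)

      # If a QPO at >= 1267 Hz is orbital motion at or outside the ISCO, then
      # f_qpo <= f_isco(M) gives an upper mass limit (stellar rotation relaxes it).
      f_qpo = 1267.0
      m_max = c**3 / (2.0 * np.pi * 6.0**1.5 * G * f_qpo) / M_sun
      print(f"Schwarzschild upper mass limit: {m_max:.2f} M_sun")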

  11. Interactions of timing and prediction error learning.

    PubMed

    Kirkpatrick, Kimberly

    2014-01-01

    Timing and prediction error learning have historically been treated as independent processes, but growing evidence has indicated that they are not orthogonal. Timing emerges at the earliest time point when conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value alter timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Monitoring is not enough: on the need for a model-based approach to migratory bird management

    USGS Publications Warehouse

    Nichols, J.D.; Bonney, Rick; Pashley, David N.; Cooper, Robert; Niles, Larry

    2000-01-01

    Informed management requires information about system state and about effects of potential management actions on system state. Population monitoring can provide the needed information about system state, as well as information that can be used to investigate effects of management actions. Three methods for investigating effects of management on bird populations are (1) retrospective analysis, (2) formal experimentation and constrained-design studies, and (3) adaptive management. Retrospective analyses provide weak inferences, regardless of the quality of the monitoring data. The active use of monitoring data in experimental or constrained-design studies or in adaptive management is recommended. Under both approaches, learning occurs via the comparison of estimates from the monitoring program with predictions from competing management models.

  13. Derivation of the Energy and Flux Morphology in an Aurora Observed at Midlatitude Using Multispectral Imaging

    NASA Astrophysics Data System (ADS)

    Aryal, Saurav; Finn, Susanna C.; Hewawasam, Kuravi; Maguire, Ryan; Geddes, George; Cook, Timothy; Martel, Jason; Baumgardner, Jeffrey L.; Chakrabarti, Supriya

    2018-05-01

    Energies and fluxes of precipitating electrons in an aurora over Lowell, MA on 22-23 June 2015 were derived based on simultaneous, high-resolution (≈ 0.02 nm) brightness measurements of N2+ (427.8 nm, blue line), OI (557.7 nm, green line), and OI (630.0 nm, red line) emissions. The electron energies and energy fluxes as a function of time and look direction were derived by nonlinear minimization of the misfit between model predictions and the measurements. Three different methods were compared; in the first two methods, we constrained the modeled brightnesses and brightness ratios, respectively, with measurements to simultaneously derive energies and fluxes. Then we used a hybrid method where we constrained the individual modeled brightness ratios with measurements to derive energies and then constrained modeled brightnesses with measurements to derive fluxes. The derived energies, assuming a Maxwellian distribution, ranged from 109 to 262 eV during this storm, and the total energy flux ranged from 0.8 to 2.2 erg·cm^-2·s^-1. This approach provides a way to estimate energies and energy fluxes of the precipitating electrons using simultaneous multispectral measurements.

  14. Thalamic functional connectivity predicts seizure laterality in individual TLE patients: application of a biomarker development strategy.

    PubMed

    Barron, Daniel S; Fox, Peter T; Pardoe, Heath; Lancaster, Jack; Price, Larry R; Blackmon, Karen; Berry, Kristen; Cavazos, Jose E; Kuzniecky, Ruben; Devinsky, Orrin; Thesen, Thomas

    2015-01-01

    Noninvasive markers of brain function could yield biomarkers in many neurological disorders. Disease models constrained by coordinate-based meta-analysis are likely to increase this yield. Here, we evaluate a thalamic model of temporal lobe epilepsy that we proposed in a coordinate-based meta-analysis and extended in a diffusion tractography study of an independent patient population. Specifically, we evaluated whether thalamic functional connectivity (resting-state fMRI-BOLD) with temporal lobe areas can predict seizure onset laterality, as established with intracranial EEG. Twenty-four lesional and non-lesional temporal lobe epilepsy patients were studied. No significant differences in functional connection strength in patient and control groups were observed with Mann-Whitney Tests (corrected for multiple comparisons). Notwithstanding the lack of group differences, individual patient difference scores (from control mean connection strength) successfully predicted seizure onset zone as shown in ROC curves: discriminant analysis (two-dimensional) predicted seizure onset zone with 85% sensitivity and 91% specificity; logistic regression (four-dimensional) achieved 86% sensitivity and 100% specificity. The strongest markers in both analyses were left thalamo-hippocampal and right thalamo-entorhinal cortex functional connection strength. Thus, this study shows that thalamic functional connections are sensitive and specific markers of seizure onset laterality in individual temporal lobe epilepsy patients. This study also advances an overall strategy for the programmatic development of neuroimaging biomarkers in clinical and genetic populations: a disease model informed by coordinate-based meta-analysis was used to anatomically constrain individual patient analyses.
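
    A minimal sketch of the classification-and-ROC step on connection-strength difference scores; the features below are synthetic placeholders, not patient data, and the in-sample AUC is only for illustration.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_curve, roc_auc_score

      # Placeholder features: per-patient deviations from the control-group mean
      # for four thalamo-temporal functional connections; labels are the
      # intracranial-EEG seizure-onset laterality.
      rng = np.random.default_rng(0)
      y = np.array([0] * 12 + [1] * 12)
      X = rng.normal(size=(24, 4))
      X[y == 1, 0] += 1.0                    # inject a weak lateralized signal

      clf = LogisticRegression().fit(X, y)
      scores = clf.predict_proba(X)[:, 1]
      fpr, tpr, _ = roc_curve(y, scores)
      print("in-sample AUC:", roc_auc_score(y, scores))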

  15. A Lattice-Misfit-Dependent Damage Model for Non-linear Damage Accumulations Under Monotonous Creep in Single Crystal Superalloys

    NASA Astrophysics Data System (ADS)

    le Graverend, J.-B.

    2018-05-01

    A lattice-misfit-dependent damage density function is developed to predict the non-linear accumulation of damage when a thermal jump from 1050 °C to 1200 °C is introduced somewhere in the creep life. Furthermore, a phenomenological model aimed at describing the evolution of the constrained lattice misfit during monotonous creep load is also formulated. The response of the lattice-misfit-dependent plasticity-coupled damage model is compared with the experimental results obtained at 140 and 160 MPa on the first generation Ni-based single crystal superalloy MC2. The comparison reveals that the damage model is well suited at 160 MPa but less so at 140 MPa, because the transfer of stress to the γ' phase occurs for stresses above 150 MPa, which leads to larger variations and, therefore, larger effects of the constrained lattice misfit on the lifetime during thermo-mechanical loading.

  16. Advance in prediction of soil slope instabilities

    NASA Astrophysics Data System (ADS)

    Sigarán-Loría, C.; Hack, R.; Nieuwenhuis, J. D.

    2012-04-01

    Six generic soils (clays and sands) were systematically modeled with plane-strain finite elements (FE) at varying heights and inclinations. A dataset was generated in order to develop, with linear multiple regression, predictive relations for soil slope instabilities under strong motions, expressed in terms of co-seismic displacements (u). For simplicity, the seismic loads are monochromatic artificial sinusoidal functions at four frequencies: 1, 2, 4, and 6 Hz, and the slope failure criterion used corresponds to near 10% Cartesian shear strains along a continuous region comparable to a slip surface. The generated dataset comprises variables from the slope geometry and site conditions: height, H, inclination, i, shear wave velocity from the upper 30 m, vs30, site period, Ts; as well as the input strong motion: yield acceleration, ay (equal to peak ground acceleration, PGA in this research), frequency, f; and in some cases moment magnitude, M, and Arias intensity, Ia, assumed from empirical correlations. Different datasets or scenarios were created: "Magnitude-independent", "Magnitude-dependent", and "Soil-dependent", and the data was statistically explored and analyzed with varying mathematical forms. Qualitative relations show that the permanent deformations are highly related to the soil class for the clay slopes, but not for the sand slopes. Furthermore, the slope height does not constrain the variability in the co-seismic displacements. The input frequency decreases the variability of the co-seismic displacements for the "Magnitude-dependent" and "Soil-dependent" datasets. The empirical models were developed with two and three predictors. For the sands this was not possible because they could not satisfy the constraints of the statistical method. For the clays, the best models with the smallest errors coincided with the simple general form of multiple regression with three predictors (e.g. standard errors, S.E., near 0.16 and 0.21 and R2 of 0.75 and 0.55 for the "M-independent" and "M-dependent" datasets, respectively). From the models with two predictors, a second-order polynomial gave the best performance, but with a non-significant parameter. The best models with both predictors significant have slightly larger error and smaller R2, e.g. 0.15 S.E., 44% R2 with ay and i. The predictive models obtained with the three scenarios from the clay slopes provide well-constrained predictions but low R2, suggesting the predictors are "not complete", most likely in relation to the simplicity used in the strong motion characterization. Nevertheless, the findings from this work demonstrate the potential of analytical methods for developing more precise predictions, as well as the importance of treating different ground types.
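
    The empirical models described above are ordinary multiple regressions; a minimal least-squares fit of log(u) on three predictors is sketched below using placeholder data (the coefficients and noise level are not from the study).

      import numpy as np

      # Placeholder dataset: yield acceleration ay, slope inclination i (degrees),
      # input frequency f (Hz), and co-seismic displacement u (m) from FE runs.
      rng = np.random.default_rng(0)
      ay = rng.uniform(0.1, 0.6, 80)
      inc = rng.uniform(10.0, 40.0, 80)
      f = rng.choice([1.0, 2.0, 4.0, 6.0], 80)
      u = np.exp(-1.0 - 2.5 * ay + 0.04 * inc - 0.1 * f + 0.15 * rng.standard_normal(80))

      # Fit log(u) = b0 + b1*ay + b2*i + b3*f by ordinary least squares.
      A = np.column_stack([np.ones_like(ay), ay, inc, f])
      coef, *_ = np.linalg.lstsq(A, np.log(u), rcond=None)
      residuals = np.log(u) - A @ coef
      print("coefficients:", coef, "S.E.:", residuals.std(ddof=4))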

  17. A Model-Data Fusion Approach for Constraining Modeled GPP at Global Scales Using GOME2 SIF Data

    NASA Astrophysics Data System (ADS)

    MacBean, N.; Maignan, F.; Lewis, P.; Guanter, L.; Koehler, P.; Bacour, C.; Peylin, P.; Gomez-Dans, J.; Disney, M.; Chevallier, F.

    2015-12-01

    Predicting the fate of the ecosystem carbon, C, stocks and their sensitivity to climate change relies heavily on our ability to accurately model the gross carbon fluxes, i.e. photosynthesis and respiration. However, there are large differences in the Gross Primary Productivity (GPP) simulated by different land surface models (LSMs), not only in terms of mean value, but also in terms of phase and amplitude when compared to independent data-based estimates. This strongly limits our ability to provide accurate predictions of carbon-climate feedbacks. One possible source of this uncertainty is from inaccurate parameter values resulting from incomplete model calibration. Solar Induced Fluorescence (SIF) has been shown to have a linear relationship with GPP at the typical spatio-temporal scales used in LSMs (Guanter et al., 2011). New satellite-derived SIF datasets have the potential to constrain LSM parameters related to C uptake at global scales due to their coverage. Here we use SIF data derived from the GOME2 instrument (Köhler et al., 2014) to optimize parameters related to photosynthesis and leaf phenology of the ORCHIDEE LSM, as well as the linear relationship between SIF and GPP. We use a multi-site approach that combines many model grid cells covering a wide spatial distribution within the same optimization (e.g. Kuppel et al., 2014). The parameters are constrained per Plant Functional type as the linear relationship described above varies depending on vegetation structural properties. The relative skill of the optimization is compared to a case where only satellite-derived vegetation index data are used to constrain the model, and to a case where both data streams are used. We evaluate the results using an independent data-driven estimate derived from FLUXNET data (Jung et al., 2011) and with a new atmospheric tracer, Carbonyl sulphide (OCS) following the approach of Launois et al. (ACPD, in review). We show that the optimization reduces the strong positive bias of the ORCHIDEE model and increases the correlation compared to independent estimates. Differences in spatial patterns and gradients between simulated GPP and observed SIF remain largely unchanged however, suggesting that the underlying representation of vegetation type and/or structure and functioning in the model requires further investigation.

  18. Supersymmetry searches in GUT models with non-universal scalar masses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannoni, M.; Gómez, M.E.; Ellis, J.

    2016-03-01

    We study SO(10), SU(5) and flipped SU(5) GUT models with non-universal soft supersymmetry-breaking scalar masses, exploring how they are constrained by LHC supersymmetry searches and cold dark matter experiments, and how they can be probed and distinguished in future experiments. We find characteristic differences between the various GUT scenarios, particularly in the coannihilation region, which is very sensitive to changes of parameters. For example, the flipped SU(5) GUT predicts the possibility of t̃1–χ coannihilation, which is absent in the regions of the SO(10) and SU(5) GUT parameter spaces that we study. We use the relic density predictions in different models to determine upper bounds for the neutralino masses, and we find large differences between different GUT models in the sparticle spectra for the same LSP mass, leading to direct connections of distinctive possible experimental measurements with the structure of the GUT group. We find that future LHC searches for generic missing E_T, charginos and stops will be able to constrain the different GUT models in complementary ways, as will the Xenon 1 ton and Darwin dark matter scattering experiments and future FERMI or CTA γ-ray searches.

  19. Experimental Studies of Nuclear Physics Input for γ -Process Nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Scholz, Philipp; Heim, Felix; Mayer, Jan; Netterdon, Lars; Zilges, Andreas

    The predictions of reaction rates for the γ process in the scope of the Hauser-Feshbach statistical model depend crucially on nuclear physics input parameters such as optical-model potentials (OMP) or γ-ray strength functions. Precise cross-section measurements at astrophysically relevant energies help to constrain the adopted models and, therefore, to reduce the uncertainties in the theoretically predicted reaction rates. In recent years, several cross sections of charged-particle induced reactions on heavy nuclei have been measured at the University of Cologne, either by means of the in-beam method at the HORUS γ-ray spectrometer or by the activation technique using the Cologne Clover Counting Setup; the resulting total and partial cross sections could be used to further constrain different models for nuclear physics input parameters. It could be shown that modifications of the α-OMP in the case of the 112Sn(α,γ) reaction also improve the description of the recently measured cross sections of the 108Cd(α,γ) and 108Cd(α,n) reactions, as well as of other reactions. Partial cross sections of the 92Mo(p,γ) reaction were used to improve the γ-ray strength function model in 93Tc, in the same way as was done for the 89Y(p,γ) reaction.

  20. Zones of life in the subsurface of hydrothermal vents: A synthesis

    NASA Astrophysics Data System (ADS)

    Larson, B. I.; Houghton, J.; Meile, C. D.

    2011-12-01

    Subsurface microbial communities in Mid-Ocean Ridge (MOR) hydrothermal systems host a wide array of unique metabolic strategies, but the spatial distribution of biogeochemical transformations is poorly constrained. Here we present an approach that reexamines chemical measurements from diffuse fluids with models of convective transport to delineate likely reaction zones. Chemical data have been compiled from bare basalt surfaces at a wide array of mid-ocean ridge systems, including 9°N, East Pacific Rise, Axial Seamount, Juan de Fuca, and Lucky Strike, Mid-Atlantic Ridge. Co-sampled end-member fluid from Ty (EPR) was used to constrain reaction path models that define diffuse fluid compositions as a function of temperature. The degree of mixing between hot vent fluid (350 deg. C) and seawater (2 deg. C) governs fluid temperature; in the models, Fe-oxide mineral precipitation is suppressed and aqueous redox reactions are prevented from equilibrating, consistent with sluggish kinetics. Quartz and pyrite are predicted to precipitate, consistent with field observations. Most reported samples of diffuse fluids from EPR and Axial Seamount fall along the same predicted mixing line only when pyrite precipitation is suppressed, but Lucky Strike fluids do not follow the same trend. The predicted fluid composition as a function of temperature is then used to calculate the free energy available to autotrophic microorganisms for a variety of catabolic strategies in the subsurface. Finally, the relationship between temperature and free energy is combined with modeled temperature fields (Lowell et al., 2007, Geochem. Geophys. Geosyst.) over a 500 m x 500 m region extending downward from the seafloor and outward from the high temperature focused hydrothermal flow to define areas that are energetically most favorable for a given metabolic process as well as below the upper temperature limit for life (~120 deg. C). In this way, we can expand the relevance of geochemical model predictions of bioenergetics by predicting functionally-defined 'Zones of Life' and placing them spatially within the boundary of the 120 deg. C isotherm, estimating the extent of the subsurface biosphere beneath mid-ocean ridge hydrothermal systems. Preliminary results indicate that methanogenesis yields the most energy per kg of vent fluid, consistent with the elevated CH4(aq) seen at all three sites, but may be limited by temperatures too hot for microbial life, whereas the available energy from the oxidation of Fe(II) peaks near regions of the crust that are more hospitable.
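
    The temperature structure underlying these calculations follows from conservative mixing of the two endmember fluids. The sketch below shows only that mixing step and the ~120 deg. C habitability cutoff mentioned above, not the reaction-path or bioenergetic calculations themselves.

```python
# Minimal sketch (conservative two-endmember mixing only): fluid temperature
# as a function of the seawater fraction, and whether it falls below the
# assumed upper temperature limit for life.
import numpy as np

t_vent, t_seawater = 350.0, 2.0          # endmember temperatures [deg C]
f_seawater = np.linspace(0.0, 1.0, 11)   # mixing fraction of seawater
t_mix = f_seawater * t_seawater + (1 - f_seawater) * t_vent
habitable = t_mix <= 120.0               # upper temperature limit for life
for f, t, ok in zip(f_seawater, t_mix, habitable):
    print(f"seawater fraction {f:.1f}: {t:6.1f} deg C  {'habitable' if ok else 'too hot'}")
```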

  1. Galaxy Formation At Extreme Redshifts: Semi-Analytic Model Predictions And Challenges For Observations

    NASA Astrophysics Data System (ADS)

    Yung, L. Y. Aaron; Somerville, Rachel S.

    2017-06-01

    The well-established Santa Cruz semi-analytic galaxy formation framework has been shown to be quite successful at explaining observations in the local Universe, as well as making predictions for low-redshift observations. Recently, metallicity-based gas partitioning and H2-based star formation recipes have been implemented in our model, replacing the legacy cold-gas-based recipe. We then use the revised model to explore the high-redshift Universe and make predictions up to z = 15. Although the model is calibrated only to observations of the local Universe, its predictions match remarkably well with the mid- to high-redshift observational constraints available to date, including rest-frame UV luminosity functions and the reionization history as constrained by CMB and IGM observations. We provide predictions for individual and statistical galaxy properties over a wide range of redshifts (z = 4 - 15), including objects that are too far or too faint to be detected with current facilities. Using our model predictions, we also provide forecast luminosity functions and other observables for upcoming studies with JWST.

  2. Starobinsky-like inflation, supercosmology and neutrino masses in no-scale flipped SU(5)

    NASA Astrophysics Data System (ADS)

    Ellis, John; Garcia, Marcos A. G.; Nagata, Natsumi; Nanopoulos, Dimitri V.; Olive, Keith A.

    2017-07-01

    We embed a flipped SU(5) × U(1) GUT model in a no-scale supergravity framework, and discuss its predictions for cosmic microwave background observables, which are similar to those of the Starobinsky model of inflation. Measurements of the tilt in the spectrum of scalar perturbations in the cosmic microwave background, ns, constrain significantly the model parameters. We also discuss the model's predictions for neutrino masses, and pay particular attention to the behaviours of scalar fields during and after inflation, reheating and the GUT phase transition. We argue in favor of strong reheating in order to avoid excessive entropy production which could dilute the generated baryon asymmetry.

  3. Performance Prediction of Constrained Waveform Design for Adaptive Radar

    DTIC Science & Technology

    2016-11-01

    Kullback-Leibler divergence. χ2 Goodness-of-Fit Test: we compute the estimated CDF for both models with 10000 MC trials. For Model 1 we observed a p-value of ... was clearly similar in its physical attributes, but the measures used (Kullback-Leibler, Chi-Square Test, and the trace of the covariance) showed ... models' goodness-of-fit we look at three measures: (1) χ2 Test, (2) Trace of the inverse

  4. The making of the minibody: an engineered beta-protein for the display of conformationally constrained peptides.

    PubMed

    Tramontano, A; Bianchi, E; Venturini, S; Martin, F; Pessi, A; Sollazzo, M

    1994-03-01

    Conformationally constraining selectable peptides onto a suitable scaffold that enables their conformation to be predicted or readily determined by experimental techniques would considerably boost the drug discovery process by reducing the gap between the discovery of a peptide lead and the design of a peptidomimetic with a more desirable pharmacological profile. With this in mind, we designed the minibody, a 61-residue beta-protein aimed at retaining some desirable features of immunoglobulin variable domains, such as tolerance to sequence variability in selected regions of the protein and predictability of the main chain conformation of the same regions, based on the 'canonical structures' model. To test the ability of the minibody scaffold to support functional sites we also designed a metal binding version of the protein by suitably choosing the sequences of its loops. The minibody was produced both by chemical synthesis and expression in E. coli and characterized by size exclusion chromatography, UV CD (circular dichroism) spectroscopy and metal binding activity. All our data supported the model, but a more detailed structural characterization of the molecule was impaired by its low solubility. We were able to overcome this problem both by further mutagenesis of the framework and by addition of a solubilizing motif. The minibody is being used to select constrained human IL-6 peptidic ligands from a library displayed on the surface of the f1 bacteriophage.

  5. Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required

    NASA Astrophysics Data System (ADS)

    Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.

    2017-12-01

    A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements using only one measurement technique or phase type. However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely constraining the starting model. We also explore what types of datasets are needed to uniquely constrain the orientation(s) of anisotropic symmetry if the mechanism is assumed.

  6. Imposing constraints on parameter values of a conceptual hydrological model using baseflow response

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.

    Calibration of conceptual hydrological models is frequently limited by a lack of data about the area that is being studied. The result is that a broad range of parameter values can be identified that will give an equally good calibration to the available observations, usually of stream flow. The use of total stream flow can bias analyses towards interpretation of rapid runoff, whereas water quality issues are more frequently associated with low flow conditions. This paper demonstrates how model distinctions between surface and sub-surface runoff can be used to define a likelihood measure based on the sub-surface (or baseflow) response. This helps to provide more information about the model behaviour, constrain the acceptable parameter sets and reduce uncertainty in streamflow prediction. A conceptual model, DIY, is applied to two contrasting catchments in Scotland, the Ythan and the Carron Valley. Parameter ranges and envelopes of prediction are identified using criteria based on total flow efficiency, baseflow efficiency and combined efficiencies. The individual parameter ranges derived using the combined efficiency measures still cover relatively wide bands, but are better constrained for the Carron than the Ythan. This reflects the fact that hydrological behaviour in the Carron is dominated by a much flashier surface response than in the Ythan. Hence, the total flow efficiency is more strongly controlled by surface runoff in the Carron and there is a greater contrast with the baseflow efficiency. Comparisons of the predictions using different efficiency measures for the Ythan also suggest that there is a danger of confusing parameter uncertainties with data and model error if inadequate likelihood measures are defined.
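
    A minimal sketch of the kind of likelihood measure described above, assuming Nash-Sutcliffe-style efficiencies for total flow and baseflow combined with a simple weight; the function names and weighting scheme are illustrative, not the paper's exact formulation.

```python
# Minimal sketch (assumed form): efficiencies computed on total flow and on
# the baseflow component, then combined into a single likelihood measure for
# screening behavioural parameter sets.
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def combined_efficiency(sim_total, obs_total, sim_base, obs_base, w=0.5):
    """Weighted combination of total-flow and baseflow efficiencies."""
    return w * nse(sim_total, obs_total) + (1.0 - w) * nse(sim_base, obs_base)

# Toy usage with hypothetical flows; observed baseflow would come from a
# baseflow-separation procedure in practice.
q_obs = np.array([5.0, 9.0, 14.0, 8.0, 4.0])
q_sim = np.array([4.5, 10.0, 13.0, 8.5, 4.2])
qb_obs = np.array([3.0, 3.2, 3.5, 3.4, 3.1])
qb_sim = np.array([2.8, 3.1, 3.6, 3.3, 3.0])
print(combined_efficiency(q_sim, q_obs, qb_sim, qb_obs))
```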

  7. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

    This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing a similarly or more accurate analysis; hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to the runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by Earth Mover's Distance (EMD), than spatially heterogeneous error modeling, by up to a factor of ∼10. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produces a smaller mean absolute difference as well as higher correlation between the a priori and the updated states than the SC approach, while producing similar or smaller root mean square error of streamflow analysis and prediction. Large differences were found in both lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both the WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.
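
    A minimal sketch of a weakly-constrained variational cost function of the kind described above, in which an additive error term on runoff is penalized together with the observation misfit and a background term. The toy linear "model", variable names, and error variances are assumptions, not the SAC-SMA/VAR implementation.

```python
# Minimal sketch (not the operational code): weakly-constrained variational
# cost with an additive structural-error term on runoff, minimized jointly
# with the adjusted model states.
import numpy as np
from scipy.optimize import minimize

def wc_cost(control, model, q_obs, x_bg, sigma_q, sigma_x, sigma_eps, n_state):
    """control = [adjusted states, additive runoff errors]."""
    x = control[:n_state]          # adjusted model states
    eps = control[n_state:]        # additive structural-error term on runoff
    q_sim = model(x, eps)          # routed streamflow given states and error
    j_obs = np.sum(((q_sim - q_obs) / sigma_q) ** 2)   # data misfit
    j_bg = np.sum(((x - x_bg) / sigma_x) ** 2)         # background (a priori) term
    j_eps = np.sum((eps / sigma_eps) ** 2)             # weak-constraint penalty
    return 0.5 * (j_obs + j_bg + j_eps)

# Toy usage with a linear "model": q = H x + eps (purely illustrative).
H = np.array([[1.0, 0.5], [0.3, 1.2], [0.8, 0.1]])
toy_model = lambda x, eps: H @ x + eps
q_obs = np.array([2.0, 1.5, 1.0])
x_bg = np.array([1.0, 1.0])
res = minimize(wc_cost, x0=np.zeros(2 + 3),
               args=(toy_model, q_obs, x_bg, 0.1, 0.5, 0.2, 2))
print("updated states and runoff errors:", res.x)
```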

  8. Hydrograph Predictions of Glacial Lake Outburst Floods From an Ice-Dammed Lake

    NASA Astrophysics Data System (ADS)

    McCoy, S. W.; Jacquet, J.; McGrath, D.; Koschitzki, R.; Okuinghttons, J.

    2017-12-01

    Understanding the time evolution of glacial lake outburst floods (GLOFs), and ultimately predicting peak discharge, is crucial to mitigating the impacts of GLOFs on downstream communities and understanding concomitant surface change. The dearth of in situ measurements taken during GLOFs has left many GLOF models currently in use untested. Here we present a dataset of 13 GLOFs from Lago Cachet Dos, Aysen Region, Chile, in which we detail measurements of key environmental variables (total volume drained, lake temperature, and lake inflow rate) and high-temporal-resolution discharge measurements at the source lake, in addition to well-constrained ice thickness and bedrock topography. Using this dataset we test two common empirical equations as well as the physically-based model of Spring-Hutter-Clarke. We find that the commonly used empirical relationships, based solely on the lake volume drained, fail to predict the large variability in observed peak discharges from Lago Cachet Dos. This disagreement likely arises because these equations do not consider additional environmental variables that we show also control peak discharge, primarily lake water temperature and the rate of meltwater inflow to the source lake. We find that the Spring-Hutter-Clarke model can accurately simulate the exponentially rising hydrographs that are characteristic of ice-dammed GLOFs, as well as the order-of-magnitude variation in peak discharge between events, if the hydraulic roughness parameter is allowed to be a free fitting parameter. However, the Spring-Hutter-Clarke model overpredicts peak discharge in all cases by 10 to 35%. The systematic overprediction of peak discharge by the model is related to its abrupt flood termination, which misses the observed steep falling limb of the flood hydrograph. Although satisfactory model fits are produced, the range in hydraulic roughness required to obtain these fits across all events was large, which suggests that current models do not completely capture the physics of these systems, thus limiting their ability to truly predict peak discharges using only independently constrained parameters. We suggest what some of these missing physics might be.
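
    For concreteness, the sketch below fits a volume-only empirical relation of the form Qp = a·V^b in log-log space, the kind of relationship the study tests against its observations; the numbers are hypothetical.

```python
# Minimal sketch (hypothetical numbers, not the Lago Cachet Dos data): fit the
# volume-only relation Qp = a * V**b by ordinary least squares in log space.
import numpy as np

V = np.array([1.2e7, 2.0e7, 3.5e7, 1.0e7, 2.8e7])      # volume drained [m3] (hypothetical)
Qp = np.array([900.0, 1600.0, 2400.0, 700.0, 2100.0])  # peak discharge [m3/s] (hypothetical)

A = np.column_stack([np.ones_like(V), np.log(V)])
(loga, b), *_ = np.linalg.lstsq(A, np.log(Qp), rcond=None)
a = np.exp(loga)
print(f"Qp ~ {a:.3g} * V^{b:.2f}")
# A relation in V alone cannot reproduce the spread driven by lake temperature
# and inflow rate, which is the point made in the abstract above.
```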

  9. The role of bias in simulation of the Indian monsoon and its relationship to predictability

    NASA Astrophysics Data System (ADS)

    Kelly, P.

    2016-12-01

    Confidence in future projections of how climate change will affect the Indian monsoon is currently limited by, among other things, model biases, that is, the systematic error in simulating the mean present-day climate. An important priority question in seamless prediction involves the role of the mean state: how much of the prediction error in imperfect models stems from a biased mean state (itself a result of many interacting process errors), and how much stems from the flow dependence of processes during an oscillation or variation we are trying to predict? Using simple but effective nudging techniques, we are able to address this question in a clean and incisive framework that teases apart the roles of the mean state vs. transient flow dependence in constraining predictability. The role of bias in model fidelity of simulations of the Indian monsoon is investigated in CAM5, and the relationship to predictability in remote regions in the "free" (non-nudged) domain is explored.

  10. Qualitative simulation for process modeling and control

    NASA Technical Reports Server (NTRS)

    Dalle Molle, D. T.; Edgar, T. F.

    1989-01-01

    A qualitative model is developed for a first-order system with a proportional-integral controller without precise knowledge of the process or controller parameters. Simulation of the qualitative model yields all of the solutions to the system equations. In developing the qualitative model, a necessary condition for the occurrence of oscillatory behavior is identified. Initializations that cannot exhibit oscillatory behavior produce a finite set of behaviors. When the phase-space behavior of the oscillatory behavior is properly constrained, these initializations produce an infinite but comprehensible set of asymptotically stable behaviors. While the predictions include all possible behaviors of the real system, a class of spurious behaviors has been identified. When limited numerical information is included in the model, the number of predictions is significantly reduced.

  11. Improving Models for Coseismic And Postseismic Deformation from the 2002 Denali, Alaska Earthquake

    NASA Astrophysics Data System (ADS)

    Harper, H.; Freymueller, J. T.

    2016-12-01

    Given the multi-decadal temporal scale of postseismic deformation, predictions of previous models for postseismic deformation resulting from the 2002 Denali Fault earthquake (M 7.9) do not agree with longer-term observations. In revising the past postseismic models with what is now over a decade of data, the first step is revisiting the coseismic displacements and slip distribution of the earthquake. Advances in processing allow us to better constrain coseismic displacement estimates, which affect slip distribution predictions in modeling. Additionally, updating the slip model structure from a homogeneous to a layered model rectifies previous inconsistencies between the coseismic and postseismic models. Previous studies have shown that two primary processes contribute to postseismic deformation: afterslip, which decays with a short time constant, and viscoelastic relaxation, which decays with a longer time constant. We fit continuous postseismic GPS time series with three different relaxation models: 1) logarithmic decay + exponential decay, 2) log + exp + exp, and 3) log + log + exp. A grid search is used to minimize the total model WRSS, and we find optimal relaxation times of: 1) 0.125 years (log) and 21.67 years (exp); 2) 0.14 years (log), 0.68 years (exp), and 28.33 years (exp); 3) 0.055 years (log), 14.44 years (log), and 22.22 years (exp). While there is not a one-to-one correspondence between a particular decay constant and a mechanism, the optimization of these constants allows us to model the future time series and constrain the contribution of different postseismic processes.
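
    A minimal sketch of the grid-search fitting described above for the simplest (log + exp) relaxation model: relaxation times are scanned on a grid, amplitudes are solved linearly at each node, and the pair with the lowest WRSS is kept. The synthetic data and grid ranges are illustrative, not the Denali GPS series.

```python
# Minimal sketch (synthetic data): grid search over relaxation times for a
# log + exp postseismic model, with amplitudes solved by linear least squares
# at each grid node and WRSS used to select the best pair.
import numpy as np

t = np.linspace(0.01, 14.0, 400)       # years since earthquake
sigma = 0.002                          # observation sigma [m] (hypothetical)
true = 0.05 * np.log(1 + t / 0.125) + 0.08 * (1 - np.exp(-t / 21.67))
d = true + np.random.default_rng(2).normal(0, sigma, t.size)

best = (np.inf, None)
for tau_log in np.linspace(0.05, 0.5, 20):
    for tau_exp in np.linspace(5.0, 40.0, 20):
        G = np.column_stack([np.log(1 + t / tau_log), 1 - np.exp(-t / tau_exp)])
        m, *_ = np.linalg.lstsq(G, d, rcond=None)
        wrss = np.sum(((d - G @ m) / sigma) ** 2)
        if wrss < best[0]:
            best = (wrss, (tau_log, tau_exp, m))
print("best WRSS:", best[0], "tau_log, tau_exp, amplitudes:", best[1])
```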

  12. STICK-SLIP-SEPARATION Analysis and Non-Linear Stiffness and Damping Characterization of Friction Contacts Having Variable Normal Load

    NASA Astrophysics Data System (ADS)

    Yang, B. D.; Chu, M. L.; Menq, C. H.

    1998-03-01

    Mechanical systems in which moving components are mutually constrained through contacts often lead to complex contact kinematics involving tangential and normal relative motions. A friction contact model is proposed to characterize this type of contact kinematics, which imposes both friction non-linearity and intermittent separation non-linearity on the system. The stick-slip friction phenomenon is analyzed by establishing analytical criteria that predict the transition between stick, slip, and separation of the interface. The established analytical transition criteria are particularly important to the proposed friction contact model, since the transition conditions of the contact kinematics are complicated by the effect of normal load variation and possible interface separation. With these transition criteria, the induced friction force on the contact plane and the variable normal load perpendicular to the contact plane can be predicted for any given cyclic relative motion at the contact interface, and hysteresis loops can be produced so as to characterize the equivalent damping and stiffness of the friction contact. These non-linear damping and stiffness methods, along with the harmonic balance method, are then used to predict the resonant response of a frictionally constrained two-degree-of-freedom oscillator. The predicted results are compared with those of the time integration method, and the damping effect, the resonant frequency shift, and the jump phenomenon are examined.

  13. The use of atmospheric measurements to constrain model predictions of ozone change from chlorine perturbations

    NASA Technical Reports Server (NTRS)

    Douglass, Anne R.; Stolarski, Richard S.

    1987-01-01

    Atmospheric photochemistry models have been used to predict the sensitivity of the ozone layer to various perturbations. These same models also predict concentrations of chemical species in the present day atmosphere which can be compared to observations. Model results for both present day values and sensitivity to perturbation depend upon input data for reaction rates, photodissociation rates, and boundary conditions. A method of combining the results of a Monte Carlo uncertainty analysis with the existing set of present atmospheric species measurements is developed. The method is used to examine the range of values for the sensitivity of ozone to chlorine perturbations that is possible within the currently accepted ranges for input data. It is found that model runs which predict ozone column losses much greater than 10 percent as a result of present fluorocarbon fluxes produce concentrations and column amounts in the present atmosphere which are inconsistent with the measurements for ClO, HCl, NO, NO2, and HNO3.
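
    A minimal sketch of the general idea, under loose assumptions: Monte Carlo runs are kept only if their simulated present-day species abundances fall within observed ranges, and the surviving runs bound the predicted ozone sensitivity. The numbers and the assumed relation between ClO and ozone loss are purely illustrative.

```python
# Minimal sketch (hypothetical numbers): measurement-consistency filtering of
# Monte Carlo model runs to narrow the predicted ozone-column response.
import numpy as np

rng = np.random.default_rng(3)
n_runs = 5000
# Each hypothetical run yields a present-day ClO column (relative units) and
# an ozone-column change (percent) for a fixed chlorine perturbation.
clo_column = rng.lognormal(mean=0.0, sigma=0.4, size=n_runs)
ozone_loss = -5.0 - 8.0 * (clo_column - 1.0) + rng.normal(0, 1.5, n_runs)

obs_lo, obs_hi = 0.7, 1.3            # "measured" range for ClO (hypothetical)
consistent = (clo_column > obs_lo) & (clo_column < obs_hi)

print("unconstrained ozone change: %.1f to %.1f %%"
      % (ozone_loss.min(), ozone_loss.max()))
print("measurement-constrained:    %.1f to %.1f %%"
      % (ozone_loss[consistent].min(), ozone_loss[consistent].max()))
```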

  14. Ten years of multiple data stream assimilation with the ORCHIDEE land surface model to improve regional to global simulated carbon budgets: synthesis and perspectives on directions for the future

    NASA Astrophysics Data System (ADS)

    Peylin, P. P.; Bacour, C.; MacBean, N.; Maignan, F.; Bastrikov, V.; Chevallier, F.

    2017-12-01

    Predicting the fate of carbon stocks and their sensitivity to climate change and land use/management strongly relies on our ability to accurately model net and gross carbon fluxes. However, simulated carbon and water fluxes remain subject to large uncertainties, partly because of unknown or poorly calibrated parameters. Over the past ten years, the carbon cycle data assimilation system at the Laboratoire des Sciences du Climat et de l'Environnement has investigated the benefit of assimilating multiple carbon cycle data streams into the ORCHIDEE LSM, the land surface component of the Institut Pierre Simon Laplace Earth System Model. These datasets have included FLUXNET eddy covariance data (net CO2 flux and latent heat flux) to constrain hourly to seasonal time-scale carbon cycle processes, remote sensing of the vegetation activity (MODIS NDVI) to constrain the leaf phenology, biomass data to constrain "slow" (yearly to decadal) processes of carbon allocation, and atmospheric CO2 concentrations to provide overall large scale constraints on the land carbon sink. Furthermore, we have investigated technical issues related to multiple data stream assimilation and choice of optimization algorithm. This has provided a wide-ranging perspective on the challenges we face in constraining model parameters and thus better quantifying, and reducing, model uncertainty in projections of the future global carbon sink. We review our past studies in terms of the impact of the optimization on key characteristics of the carbon cycle, e.g. the partition of the northern latitudes vs tropical land carbon sink, and compare to the classic atmospheric flux inversion approach. Throughout, we discuss our work in context of the abovementioned challenges, and propose solutions for the community going forward, including the potential of new observations such as atmospheric COS concentrations and satellite-derived Solar Induced Fluorescence to constrain the gross carbon fluxes of the ORCHIDEE model.

  15. Implications of Binary Black Hole Detections on the Merger Rates of Double Neutron Stars and Neutron Star–Black Holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gupta, Anuradha; Arun, K. G.; Sathyaprakash, B. S., E-mail: axg645@psu.edu, E-mail: kgarun@cmi.ac.in, E-mail: bss25@psu.edu

    We show that the inferred merger rate and chirp masses of binary black holes (BBHs) detected by advanced LIGO (aLIGO) can be used to constrain the rate of double neutron star (DNS) and neutron star–black hole (NSBH) mergers in the universe. We explicitly demonstrate this by considering a set of publicly available population synthesis models of Dominik et al. and show that if all the BBH mergers, GW150914, LVT151012, GW151226, and GW170104, observed by aLIGO arise from isolated binary evolution, the predicted DNS merger rate may be constrained to be 2.3–471.0 Gpc⁻³ yr⁻¹ and that of NSBH mergers will be constrained to 0.2–48.5 Gpc⁻³ yr⁻¹. The DNS merger rates are not constrained much, but the NSBH rates are tightened by a factor of ∼4 as compared to their previous rates. Note that these constrained DNS and NSBH rates are extremely model-dependent and are compared to the unconstrained values 2.3–472.5 Gpc⁻³ yr⁻¹ and 0.2–218 Gpc⁻³ yr⁻¹, respectively, using the same models of Dominik et al. (2012a). These rate estimates may have implications for short Gamma Ray Burst progenitor models assuming they are powered (solely) by DNS or NSBH mergers. While these results are based on a set of open access population synthesis models, which may not necessarily be the representative ones, the proposed method is very general and can be applied to any number of models, thereby yielding more realistic constraints on the DNS and NSBH merger rates from the inferred BBH merger rate and chirp mass.

  16. Constrained off-line synthesis approach of model predictive control for networked control systems with network-induced delays.

    PubMed

    Tang, Xiaoming; Qu, Hongchun; Wang, Ping; Zhao, Meng

    2015-03-01

    This paper investigates an off-line synthesis approach of model predictive control (MPC) for a class of networked control systems (NCSs) with network-induced delays. A new augmented model, which can readily be applied to a time-varying control law, is proposed to describe the NCS, in which bounded deterministic network-induced delays may occur in both the sensor-to-controller (S-C) and controller-to-actuator (C-A) links. Based on this augmented model, a sufficient condition for closed-loop stability is derived by applying the Lyapunov method. The off-line synthesis approach of model predictive control is addressed using the stability results of the system, and explicitly considers the satisfaction of input and state constraints. A numerical example is given to illustrate the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
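
    For orientation, the sketch below shows a generic on-line constrained MPC step for a linear model with input and state limits, posed as a QP with cvxpy. It is not the paper's off-line synthesis (which is LMI-based and handles network-induced delays); the system matrices, horizon, and limits are assumptions.

```python
# Minimal sketch (generic constrained MPC, not the paper's method): one
# finite-horizon QP for x+ = A x + B u with input and state constraints.
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 20                                     # prediction horizon
Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = np.array([1.0, 0.0])

cost, cons = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.abs(u[:, k]) <= 0.5,          # input constraint
             cp.abs(x[0, k + 1]) <= 2.0]      # state constraint
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()
print("first control move:", u.value[:, 0])  # applied, then the horizon recedes
```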

  17. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    NASA Astrophysics Data System (ADS)

    Xavier, Marcelo A.; Trimboli, M. Scott

    2015-07-01

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest significant performance improvements might be achieved by extending the result to electrochemical models.
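
    A minimal sketch of a 2nd-order RC equivalent-circuit model in discrete-time state-space form, with the series resistance giving the direct feed-through term mentioned above; the cell parameters and open-circuit-voltage curve are illustrative, not the paper's identified values.

```python
# Minimal sketch (illustrative parameters): 2nd-order RC equivalent-circuit
# model; the R0*i term in the output is the direct feed-through (D) term.
import numpy as np

dt = 1.0                                   # sample period [s]
R0 = 0.010                                 # ohmic resistance [ohm] (hypothetical)
R1, C1 = 0.015, 2500.0                     # first RC pair (hypothetical)
R2, C2 = 0.020, 60000.0                    # second RC pair (hypothetical)
Q = 2.5 * 3600.0                           # capacity [As] (hypothetical)

# States: [SOC, v1, v2]; input: current i (A, positive = discharge).
A = np.diag([1.0, np.exp(-dt / (R1 * C1)), np.exp(-dt / (R2 * C2))])
B = np.array([-dt / Q,
              R1 * (1 - np.exp(-dt / (R1 * C1))),
              R2 * (1 - np.exp(-dt / (R2 * C2)))])

def ocv(soc):                              # hypothetical open-circuit-voltage curve
    return 3.0 + 1.2 * soc

def step(x, i):
    x_next = A @ x + B * i
    v_term = ocv(x[0]) - x[1] - x[2] - R0 * i   # feed-through: instantaneous R0*i drop
    return x_next, v_term

x = np.array([0.2, 0.0, 0.0])              # start at 20% SOC
for _ in range(10):
    x, v = step(x, -2.5)                   # constant 1C charge (negative current)
    print(f"SOC={x[0]:.4f}  V={v:.3f}")
```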

  18. A toolkit for determining historical eco-hydrological interactions

    NASA Astrophysics Data System (ADS)

    Singer, M. B.; Sargeant, C. I.; Evans, C. M.; Vallet-Coulomb, C.

    2016-12-01

    Contemporary climate change is predicted to result in perturbations to hydroclimatic regimes across the globe, with some regions forecast to become warmer and drier. Given that water is a primary determinant of vegetative health and productivity, we can expect shifts in the availability of this critical resource to have significant impacts on forested ecosystems. The subject is particularly complex in environments where multiple sources of water are potentially available to vegetation and which may also exhibit spatial and temporal variability. To anticipate how subsurface hydrological partitioning may evolve in the future and impact overlying vegetation, we require well constrained, historical data and a modelling framework for assessing the dynamics of subsurface hydrology. We outline a toolkit to retrospectively investigate dynamic water use by trees. We describe a synergistic approach, which combines isotope dendrochronology of tree ring cellulose with a biomechanical model, detailed climatic and isotopic data in endmember waters to assess the mean isotopic composition of source water used in annual tree rings. We identify the data requirements and suggest three versions of the toolkit based on data availability. We present sensitivity analyses in order to identify the key variables required to constrain model predictions and then develop empirical relationships for constraining these parameters based on climate records. We demonstrate our methodology within a Mediterranean riparian forest site and show how it can be used along with subsurface hydrological modelling to validate source water determinations, which are fundamental to understanding climatic fluctuations and trends in subsurface hydrology. We suggest that the utility of our toolkit is applicable in riparian zones and in a range of forest environments where distinct isotopic endmembers are present.
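
    A minimal sketch of the endmember-mixing idea at the core of such a toolkit: with two isotopically distinct endmember waters, the inferred source-water composition yields a mixing fraction. The δ18O values below are hypothetical.

```python
# Minimal sketch (hypothetical values): two-endmember mixing to estimate the
# fraction of stream water vs. deeper groundwater in the source water whose
# isotopic signal is recorded in tree-ring cellulose.
import numpy as np

d18o_stream, d18o_ground = -9.0, -6.0        # endmember δ18O [permil] (hypothetical)
d18o_source = np.array([-8.2, -7.5, -6.8])   # inferred source water for three rings

f_stream = (d18o_source - d18o_ground) / (d18o_stream - d18o_ground)
print("stream-water fraction per ring:", np.round(f_stream, 2))
```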

  19. Stability analysis in tachyonic potential chameleon cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farajollahi, H.; Salehi, A.; Tayebi, F.

    2011-05-01

    We study general properties of attractors for a tachyonic potential chameleon scalar-field model that possesses cosmological scaling solutions. An analytic formulation is given to obtain fixed points, with a discussion of their stability. The model predicts a dynamical equation of state parameter with phantom-crossing behavior for an accelerating universe. We constrain the parameters of the model by best-fitting to recent supernova data sets and to simulated data points for a redshift-drift experiment generated by Monte Carlo simulations.

  20. Effective theory of flavor for Minimal Mirror Twin Higgs

    NASA Astrophysics Data System (ADS)

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2017-10-01

    We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ε^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ε'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ε'/ε, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5)-compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ε'/ε, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. In each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.

  1. Physics of Inference

    NASA Astrophysics Data System (ADS)

    Toroczkai, Zoltan

    Jaynes's maximum entropy method provides a family of principled models that allow the prediction of a system's properties as constrained by empirical data (observables). However, their use is often hindered by the degeneracy problem characterized by spontaneous symmetry breaking, where predictions fail. Here we show that degeneracy appears when the corresponding density of states function is not log-concave, which is typically the consequence of nonlinear relationships between the constraining observables. We illustrate this phenomenon on several examples, including from complex networks, combinatorics and classical spin systems (e.g., Blume-Emery-Griffiths lattice-spin models). Exploiting these nonlinear relationships we then propose a solution to the degeneracy problem for a large class of systems via transformations that render the density of states function log-concave. The effectiveness of the method is demonstrated on real-world network data. Finally, we discuss the implications of these findings on the relationship between the geometrical properties of the density of states function and phase transitions in spin systems. Supported in part by Grant No. FA9550-12-1-0405 from AFOSR/DARPA and by Grant No. HDTRA 1-09-1-0039 from DTRA.

  2. Inductive reasoning about causally transmitted properties.

    PubMed

    Shafto, Patrick; Kemp, Charles; Bonawitz, Elizabeth Baraff; Coley, John D; Tenenbaum, Joshua B

    2008-11-01

    Different intuitive theories constrain and guide inferences in different contexts. Formalizing simple intuitive theories as probabilistic processes operating over structured representations, we present a new computational model of category-based induction about causally transmitted properties. A first experiment demonstrates undergraduates' context-sensitive use of taxonomic and food web knowledge to guide reasoning about causal transmission and shows good qualitative agreement between model predictions and human inferences. A second experiment demonstrates strong quantitative and qualitative fits to inferences about a more complex artificial food web. A third experiment investigates human reasoning about complex novel food webs where species have known taxonomic relations. Results demonstrate a double-dissociation between the predictions of our causal model and a related taxonomic model [Kemp, C., & Tenenbaum, J. B. (2003). Learning domain structures. In Proceedings of the 25th annual conference of the cognitive science society]: the causal model predicts human inferences about diseases but not genes, while the taxonomic model predicts human inferences about genes but not diseases. We contrast our framework with previous models of category-based induction and previous formal instantiations of intuitive theories, and outline challenges in developing a complete model of context-sensitive reasoning.

  3. Infrared Emission from Kilonovae: The Case of the Nearby Short Hard Burst GRB 160821B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kasliwal, Mansi M.; Lau, Ryan M.; Korobkin, Oleg

    We present constraints on Ks-band emission from one of the nearest short hard gamma-ray bursts, GRB 160821B, at z = 0.16, at three epochs. We detect a red relativistic afterglow from the jetted emission in the first epoch but do not detect any excess kilonova emission in the second two epochs. We compare upper limits obtained with Keck I/MOSFIRE to multi-dimensional radiative transfer models of kilonovae that employ composition-dependent nuclear heating and LTE opacities of heavy elements. We discuss eight models that combine toroidal dynamical ejecta and two types of wind, and one model with dynamical ejecta only. We also discuss simple, empirical scaling laws of predicted emission as a function of ejecta mass and ejecta velocity. Our limits for GRB 160821B constrain the ejecta mass to be lower than 0.03 M⊙ for velocities greater than 0.1 c. At the distance sensitivity range of advanced LIGO, similar ground-based observations would be sufficiently sensitive to the full range of predicted model emission, including models with only dynamical ejecta. The color evolution of these models shows that I–K color spans 7–16 mag, which suggests that even relatively shallow infrared searches for kilonovae could be as constraining as optical searches.

  6. Maximizing the information learned from finite data selects a simple model

    NASA Astrophysics Data System (ADS)

    Mattingly, Henry H.; Transtrum, Mark K.; Abbott, Michael C.; Machta, Benjamin B.

    2018-02-01

    We use the language of uninformative Bayesian prior choice to study the selection of appropriately simple effective models. We advocate for the prior which maximizes the mutual information between parameters and predictions, learning as much as possible from limited data. When many parameters are poorly constrained by the available data, we find that this prior puts weight only on boundaries of the parameter space. Thus, it selects a lower-dimensional effective theory in a principled way, ignoring irrelevant parameter directions. In the limit where there are sufficient data to tightly constrain any number of parameters, this reduces to the Jeffreys prior. However, we argue that this limit is pathological when applied to the hyperribbon parameter manifolds generic in science, because it leads to dramatic dependence on effects invisible to experiment.

  7. SMA Hybrid Composites for Dynamic Response Abatement Applications

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.

    2000-01-01

    A recently developed constitutive model and a finite element formulation for predicting the thermomechanical response of Shape Memory Alloy (SMA) hybrid composite (SMAHC) structures is briefly described. Attention is focused on constrained recovery behavior in this study, but the constitutive formulation is also capable of modeling restrained or free recovery. Numerical results are shown for glass/epoxy panel specimens with embedded Nitinol actuators subjected to thermal and acoustic loads. Control of thermal buckling, random response, sonic fatigue, and transmission loss are demonstrated and compared to conventional approaches including addition of conventional composite layers and a constrained layer damping treatment. Embedded SMA actuators are shown to be significantly more effective in dynamic response abatement applications than the conventional approaches and are attractive for combination with other passive and/or active approaches.

  8. Multiple network-constrained regressions expand insights into influenza vaccination responses.

    PubMed

    Avey, Stefan; Mohanty, Subhasis; Wilson, Jean; Zapata, Heidi; Joshi, Samit R; Siconolfi, Barbara; Tsang, Sui; Shaw, Albert C; Kleinstein, Steven H

    2017-07-15

    Systems immunology leverages recent technological advancements that enable broad profiling of the immune system to better understand the response to infection and vaccination, as well as the dysregulation that occurs in disease. An increasingly common approach to gain insights from these large-scale profiling experiments involves the application of statistical learning methods to predict disease states or the immune response to perturbations. However, the goal of many systems studies is not to maximize accuracy, but rather to gain biological insights. The predictors identified using current approaches can be biologically uninterpretable or present only one of many equally predictive models, leading to a narrow understanding of the underlying biology. Here we show that incorporating prior biological knowledge within a logistic modeling framework by using network-level constraints on transcriptional profiling data significantly improves interpretability. Moreover, incorporating different types of biological knowledge produces models that highlight distinct aspects of the underlying biology, while maintaining predictive accuracy. We propose a new framework, Logistic Multiple Network-constrained Regression (LogMiNeR), and apply it to understand the mechanisms underlying differential responses to influenza vaccination. Although standard logistic regression approaches were predictive, they were minimally interpretable. Incorporating prior knowledge using LogMiNeR led to models that were equally predictive yet highly interpretable. In this context, B cell-specific genes and mTOR signaling were associated with an effective vaccination response in young adults. Overall, our results demonstrate a new paradigm for analyzing high-dimensional immune profiling data in which multiple networks encoding prior knowledge are incorporated to improve model interpretability. The R source code described in this article is publicly available at https://bitbucket.org/kleinstein/logminer . steven.kleinstein@yale.edu or stefan.avey@yale.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
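
    A minimal sketch of a network-constrained logistic regression in the spirit described above (not the LogMiNeR package itself): a graph-Laplacian penalty built from a prior gene network encourages connected genes to receive similar coefficients. The toy network, data, and penalty weights are assumptions.

```python
# Minimal sketch (not the LogMiNeR implementation): logistic regression with a
# graph-Laplacian smoothness penalty over a prior network, plus a small L2 term.
import numpy as np
from scipy.optimize import minimize

def fit_network_logistic(X, y, L, lam_net=1.0, lam_l2=0.1):
    """X: samples x genes, y: 0/1 response, L: graph Laplacian of prior network."""
    n, p = X.shape

    def objective(w):
        z = X @ w
        nll = np.sum(np.logaddexp(0.0, z) - y * z)   # stable logistic negative log-likelihood
        return nll + lam_net * w @ L @ w + lam_l2 * w @ w

    res = minimize(objective, np.zeros(p), method="L-BFGS-B")
    return res.x

# Toy usage: 3 genes, genes 0 and 1 connected in the prior network.
adj = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)
L = np.diag(adj.sum(1)) - adj
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 100) > 0).astype(float)
print("coefficients:", np.round(fit_network_logistic(X, y, L), 3))
```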

  9. Consumer Search, Rationing Rules, and the Consequence for Competition

    NASA Astrophysics Data System (ADS)

    Ruebeck, Christopher S.

    Firms' conjectures about demand are consequential in oligopoly games. Through agent-based modeling of consumers' search for products, we can study the rationing of demand between capacity-constrained firms offering homogeneous products and explore the robustness of analytically solvable models' results. After algorithmically formalizing short-run search behavior rather than assuming a long-run average, this study predicts stronger competition in a two-stage capacity-price game.

  10. Constraints on Cosmology and Gravity from the Growth of X-ray Luminous Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Mantz, Adam; Allen, S. W.; Rapetti, D.; Ebeling, H.; Drlica-Wagner, A.

    2010-03-01

    I will present simultaneous constraints on galaxy cluster X-ray scaling relations and models of cosmology and gravity obtained from observations of the growth of massive clusters. The data set consists of 238 flux-selected clusters at redshifts z ≤ 0.5 drawn from the ROSAT All-Sky Survey, and incorporates extensive Chandra follow-up observations. Our results on the scaling relations are consistent with excess heating of the intracluster medium, although the evolution of the relations remains consistent with the predictions of simple gravitational collapse models. For spatially flat, constant-w cosmological models, the cluster data yield Ωm = 0.23 ± 0.04, σ8 = 0.82 ± 0.05, and w = -1.01 ± 0.20, including conservative allowances for systematic uncertainties. Our results are consistent and competitive with a variety of independent cosmological data. In evolving-w models, marginalizing over transition redshifts in the range 0.05-1, the combination of the growth of structure data with the cosmic microwave background, supernovae, cluster gas mass fractions and baryon acoustic oscillations constrains the dark energy equation of state at late and early times to be w0 = -0.88 ± 0.21 and w_et = -1.05 (+0.20, -0.36), respectively. Applying this combination of data to the problem of determining fundamental neutrino properties, we place an upper limit on the species-summed neutrino mass of 0.33 eV (95% CL) and constrain the effective number of relativistic species to 3.4 ± 0.6. In addition to dark energy and related problems, such data can be used to test the predictions of General Relativity. Introducing the standard Peebles/Linder parametrization of the linear growth rate, we use the cluster data to constrain the growth of structure, independent of the expansion of the Universe. Our analysis provides a tight constraint on the combination γ(σ8/0.8)^6.8 = 0.55 (+0.13, -0.10), and is simultaneously consistent with the predictions of relativity (γ = 0.55) and the cosmological constant expansion model. This work was funded by NASA, the U.S. Department of Energy, and Stanford University.

  11. 2016 International Land Model Benchmarking (ILAMB) Workshop Report

    NASA Technical Reports Server (NTRS)

    Hoffman, Forrest M.; Koven, Charles D.; Keppel-Aleks, Gretchen; Lawrence, David M.; Riley, William J.; Randerson, James T.; Ahlstrom, Anders; Abramowitz, Gabriel; Baldocchi, Dennis D.; Best, Martin J.

    2016-01-01

    As earth system models (ESMs) become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of terrestrial biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry-climate feedbacks and ecosystem processes in these models are essential for reducing the acknowledged substantial uncertainties in 21st century climate change projections.

  12. 2016 International Land Model Benchmarking (ILAMB) Workshop Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forrest M.; Koven, Charles D.; Keppel-Aleks, Gretchen

    As Earth system models become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing uncertainties associated with projections of climate change during the remainder of the 21st century.

  13. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.
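
    A minimal sketch of the uncertainty-partitioning idea described above: a parameter's contribution to predictive variance combines its (posterior) uncertainty with the model's sensitivity to it. Parameter names, values, and sensitivities below are hypothetical, and the real PEcAn workflow derives them from trait meta-analysis and ensemble model runs.

```python
# Minimal sketch (hypothetical numbers): first-order partitioning of predictive
# variance among parameters from their posterior SDs and model sensitivities.
import numpy as np

params = ["SLA", "Vcmax", "leaf_respiration", "root_turnover"]
param_sd = np.array([8.0, 12.0, 0.4, 0.15])     # posterior SDs (hypothetical units)
sensitivity = np.array([0.9, 1.5, 6.0, 20.0])   # d(output)/d(parameter), from model runs

contrib = (param_sd * sensitivity) ** 2          # first-order variance contribution
frac = contrib / contrib.sum()
for name, f in zip(params, frac):
    print(f"{name:18s} {100 * f:5.1f} % of predictive variance")
```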

  14. H-, He-like recombination spectra - II. l-changing collisions for He Rydberg states

    NASA Astrophysics Data System (ADS)

    Guzmán, F.; Badnell, N. R.; Williams, R. J. R.; van Hoof, P. A. M.; Chatzikos, M.; Ferland, G. J.

    2017-01-01

    Cosmological models can be constrained by determining primordial abundances. Accurate predictions of the He I spectrum are needed to determine the primordial helium abundance to a precision of <1 per cent in order to constrain big bang nucleosynthesis models. Theoretical line emissivities at least this accurate are needed if this precision is to be achieved. In the first paper of this series, which focused on H I, we showed that differences in l-changing collisional rate coefficients predicted by three different theories can translate into 10 per cent changes in predictions for H I spectra. Here, we consider the more complicated case of He atoms, where low-l subshells are not energy degenerate. A criterion for deciding when the energy separation between l subshells is small enough to apply energy-degenerate collisional theories is given. Moreover, for certain conditions, the Bethe approximation originally proposed by Pengelly & Seaton is not sufficiently accurate. We introduce a simple modification of this theory which leads to rate coefficients which agree well with those obtained from pure quantal calculations using the approach of Vrinceanu et al. We show that the l-changing rate coefficients from the different theoretical approaches lead to differences of ˜10 per cent in He I emissivities in simulations of H II regions using spectral code CLOUDY.

  15. A constrained maximization formulation to analyze deformation of fiber reinforced elastomeric actuators

    NASA Astrophysics Data System (ADS)

    Singh, Gaurav; Krishnan, Girish

    2017-06-01

    Fiber reinforced elastomeric enclosures (FREEs) are soft and smart pneumatic actuators that deform in a predetermined fashion upon inflation. This paper analyzes the deformation behavior of FREEs by formulating a simple calculus of variations problem that involves constrained maximization of the enclosed volume. The model accurately captures the deformed shape for FREEs with any general fiber angle orientation, and its relation with actuation pressure, material properties and applied load. First, the accuracy of the model is verified with existing literature and experiments for the popular McKibben pneumatic artificial muscle actuator with two equal and opposite families of helically wrapped fibers. Then, the model is used to predict and experimentally validate the deformation behavior of novel rotating-contracting FREEs, for which no prior literature exists. The generality of the model enables conceptualization of novel FREEs whose fiber orientations vary arbitrarily along the geometry. Furthermore, the model is deemed to be useful in the design synthesis of fiber reinforced elastomeric actuators for general axisymmetric desired motion and output force requirements.

  16. Constraining the interaction between dark sectors with future HI intensity mapping observations

    NASA Astrophysics Data System (ADS)

    Xu, Xiaodong; Ma, Yin-Zhe; Weltman, Amanda

    2018-04-01

    We study a model of interacting dark matter and dark energy, in which the two components are coupled. We calculate the predictions for the 21-cm intensity mapping power spectra, and forecast the detectability with future single-dish intensity mapping surveys (BINGO, FAST and SKA-I). Since dark energy is turned on at z ~ 1, which falls into the sensitivity range of these radio surveys, the HI intensity mapping technique is an efficient tool to constrain the interaction. By comparing with current constraints on dark sector interactions, we find that future radio surveys will produce tight and reliable constraints on the coupling parameters.

  17. Probing primordial features with next-generation photometric and radio surveys

    NASA Astrophysics Data System (ADS)

    Ballardini, M.; Finelli, F.; Maartens, R.; Moscardini, L.

    2018-04-01

    We investigate the possibility of using future photometric and radio surveys to constrain the power spectrum of primordial fluctuations that is predicted by inflationary models with a violation of the slow-roll phase. We forecast constraints with a Fisher analysis on the amplitude of the parametrized features on ultra-large scales, in order to assess whether these could be distinguished above the cosmic variance. We find that the next generation of photometric and radio surveys has the potential to test these models with better sensitivity than current CMB experiments, and that the synergy between galaxy and CMB observations is able to constrain models with many extra parameters. In particular, an SKA continuum survey with a huge sky coverage and a flux threshold of a few μJy could confirm the presence of a new phase in the early Universe at more than 3σ.
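    The kind of Fisher forecast used here can be illustrated with a toy calculation; the smooth spectrum, the oscillatory feature template, the band errors, and the wavenumber range below are invented for illustration and are not the paper's parametrization.

      import numpy as np

      def model_power(k, A, ns):
          base = k ** (ns - 1.0)                      # smooth spectrum (arbitrary normalization)
          feature = A * np.sin(10.0 * np.log(k))      # assumed oscillatory feature template
          return base * (1.0 + feature)

      k = np.logspace(-3, -1, 40)                     # band centres (assumed range, 1/Mpc)
      sigma = 0.02 * model_power(k, 0.0, 0.965)       # assumed 2% Gaussian errors per band
      theta0 = np.array([0.0, 0.965])                 # fiducial (A, ns)

      # Numerical derivatives of the model with respect to each parameter.
      eps = 1e-4
      derivs = []
      for i in range(len(theta0)):
          dtheta = np.zeros_like(theta0); dtheta[i] = eps
          derivs.append((model_power(k, *(theta0 + dtheta)) - model_power(k, *(theta0 - dtheta))) / (2 * eps))

      F = np.array([[np.sum(di * dj / sigma ** 2) for dj in derivs] for di in derivs])
      cov = np.linalg.inv(F)
      print("forecast 1-sigma constraint on the feature amplitude A:", np.sqrt(cov[0, 0]))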

  18. The Hydrological Sensitivity to Global Warming and Solar Geoengineering Derived from Thermodynamic Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleidon, Alex; Kravitz, Benjamin S.; Renner, Maik

    2015-01-16

    We derive analytic expressions of the transient response of the hydrological cycle to surface warming from an extremely simple energy balance model in which turbulent heat fluxes are constrained by the thermodynamic limit of maximum power. For a given magnitude of steady-state temperature change, this approach predicts the transient response as well as the steady-state change in surface energy partitioning and the hydrologic cycle. We show that the transient behavior of the simple model as well as the steady state hydrological sensitivities to greenhouse warming and solar geoengineering are comparable to results from simulations using highly complex models. Many of the global-scale hydrological cycle changes can be understood from a surface energy balance perspective, and our thermodynamically-constrained approach provides a physically robust way of estimating global hydrological changes in response to altered radiative forcing.

  19. Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Markley, F. Landis

    2013-01-01

    A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypothesis that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
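    For illustration only, a bare-bones Wald sequential probability ratio test is sketched below with simple Gaussian measurement likelihoods; the hypothesized miss distances, noise level, and risk values are assumptions, and the sketch does not implement the constrained Kalman filter bank described above.

      import numpy as np
      from scipy.stats import norm

      alpha, beta = 1e-3, 1e-2                 # allowed false-alarm and missed-detection risks (assumed)
      A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))   # Wald decision thresholds

      mu_null, mu_alt, meas_sd = 0.2, 2.0, 0.5 # km; hypothesized miss distances and noise (assumed)

      rng = np.random.default_rng(1)
      true_miss = 2.0
      llr, decision = 0.0, "continue tracking"
      for k in range(50):                      # sequential tracking updates
          z = true_miss + rng.normal(0.0, meas_sd)
          llr += norm.logpdf(z, mu_alt, meas_sd) - norm.logpdf(z, mu_null, meas_sd)
          if llr >= A:
              decision = "accept H1 (safe miss): no maneuver"; break
          if llr <= B:
              decision = "accept H0 (close approach): plan maneuver"; break
      print(decision, "after", k + 1, "observations")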

  20. The role of ecosystem memory in predicting inter-annual variations of the tropical carbon balance.

    NASA Astrophysics Data System (ADS)

    Bloom, A. A.; Liu, J.; Bowman, K. W.; Konings, A. G.; Saatchi, S.; Worden, J. R.; Worden, H. M.; Jiang, Z.; Parazoo, N.; Williams, M. D.; Schimel, D.

    2017-12-01

    Understanding the trajectory of the tropical carbon balance remains challenging, in part due to large uncertainties in the integrated response of carbon cycle processes to climate variability. Satellite observations of atmospheric CO2 from GOSAT and OCO-2, together with ancillary satellite measurements, provide crucial constraints on continental-scale terrestrial carbon fluxes. However, an integrated understanding of both climate forcings and legacy effects (or "ecosystem memory") on the terrestrial carbon balance is ultimately needed to reduce uncertainty on its future trajectory. Here we use the CARbon DAta-MOdel fraMework (CARDAMOM) diagnostic model-data fusion approach - constrained by an array of C cycle satellite surface observations, including MODIS leaf area, biomass, GOSAT solar-induced fluorescence, as well as "top-down" atmospheric inversion estimates of CO2 and CO surface fluxes from the NASA Carbon Monitoring System Flux (CMS-Flux) - to constrain and predict spatially explicit tropical carbon state variables during 2010-2015. We find that the combined assimilation of land surface and atmospheric datasets places key constraints on the temperature sensitivity and first-order carbon-water feedbacks throughout the tropics and on combustion factors within biomass burning regions. By varying the duration of the assimilation period, we find that the prediction skill for inter-annual net biospheric exchange is primarily limited by record length rather than by model structure and process representation. We show that across all tropical biomes, quantitative knowledge of memory effects - which account for 30-50% of inter-annual variations across the tropics - is critical for understanding and ultimately predicting the inter-annual tropical carbon balance.

  1. Starobinsky-like inflation, supercosmology and neutrino masses in no-scale flipped SU(5)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, John; Garcia, Marcos A.G.; Nagata, Natsumi

    2017-07-01

    We embed a flipped SU(5) × U(1) GUT model in a no-scale supergravity framework, and discuss its predictions for cosmic microwave background observables, which are similar to those of the Starobinsky model of inflation. Measurements of the tilt in the spectrum of scalar perturbations in the cosmic microwave background, n_s, constrain significantly the model parameters. We also discuss the model's predictions for neutrino masses, and pay particular attention to the behaviours of scalar fields during and after inflation, reheating and the GUT phase transition. We argue in favor of strong reheating in order to avoid excessive entropy production which could dilute the generated baryon asymmetry.

  2. Hemispheric processing of predictive inferences during reading: The influence of negatively emotional valenced stimuli.

    PubMed

    Virtue, Sandra; Schutzenhofer, Michael; Tomkins, Blaine

    2017-07-01

    Although a left hemisphere advantage is usually evident during language processing, the right hemisphere is highly involved during the processing of weakly constrained inferences. However, currently little is known about how the emotional valence of environmental stimuli influences the hemispheric processing of these inferences. In the current study, participants read texts promoting either strongly or weakly constrained predictive inferences and performed a lexical decision task to inference-related targets presented to the left visual field-right hemisphere or the right visual field-left hemisphere. While reading these texts, participants either listened to dissonant music (i.e., the music condition) or did not listen to music (i.e., the no music condition). In the no music condition, the left hemisphere showed an advantage for strongly constrained inferences compared to weakly constrained inferences, whereas the right hemisphere showed high facilitation for both strongly and weakly constrained inferences. In the music condition, both hemispheres showed greater facilitation for strongly constrained inferences than for weakly constrained inferences. These results suggest that negatively valenced stimuli (such as dissonant music) selectively influence the right hemisphere's processing of weakly constrained inferences during reading.

  3. Internal mechanisms underlying anticipatory language processing: Evidence from event-related-potentials and neural oscillations.

    PubMed

    Li, Xiaoqing; Zhang, Yuping; Xia, Jinyan; Swaab, Tamara Y

    2017-07-28

    Although numerous studies have demonstrated that the language processing system can predict upcoming content during comprehension, there is still no clear picture of the anticipatory stage of predictive processing. This electroencephalograph study examined the cognitive and neural oscillatory mechanisms underlying anticipatory processing during language comprehension, and the consequences of this prediction for bottom-up processing of predicted/unpredicted content. Participants read Mandarin Chinese sentences that were either strongly or weakly constraining and that contained critical nouns that were congruent or incongruent with the sentence contexts. We examined the effects of semantic predictability on anticipatory processing prior to the onset of the critical nouns and on integration of the critical nouns. The results revealed that, at the integration stage, the strong-constraint condition (compared to the weak-constraint condition) elicited a reduced N400 and reduced theta activity (4-7Hz) for the congruent nouns, but induced beta (13-18Hz) and theta (4-7Hz) power decreases for the incongruent nouns, indicating benefits of confirmed predictions and potential costs of disconfirmed predictions. More importantly, at the anticipatory stage, the strongly constraining context elicited an enhanced sustained anterior negativity and beta power decrease (19-25Hz), which indicates that strong prediction places a higher processing load on the anticipatory stage of processing. The differences (in the ease of processing and the underlying neural oscillatory activities) between anticipatory and integration stages of lexical processing were discussed with regard to predictive processing models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. A probabilistic framework to infer brain functional connectivity from anatomical connections.

    PubMed

    Deligianni, Fani; Varoquaux, Gael; Thirion, Bertrand; Robinson, Emma; Sharp, David J; Edwards, A David; Rueckert, Daniel

    2011-01-01

    We present a novel probabilistic framework to learn across several subjects a mapping from brain anatomical connectivity to functional connectivity, i.e. the covariance structure of brain activity. This prediction problem must be formulated as a structured-output learning task, as the predicted parameters are strongly correlated. We introduce a model selection framework based on cross-validation with a parametrization-independent loss function suitable to the manifold of covariance matrices. Our model is based on constraining the conditional independence structure of functional activity by the anatomical connectivity. Subsequently, we learn a linear predictor of a stationary multivariate autoregressive model. This natural parameterization of functional connectivity also enforces the positive-definiteness of the predicted covariance and thus matches the structure of the output space. Our results show that functional connectivity can be explained by anatomical connectivity on a rigorous statistical basis, and that a proper model of functional connectivity is essential to assess this link.

  5. A Model Connecting Galaxy Masses, Star Formation Rates, and Dust Temperatures across Cosmic Time

    NASA Astrophysics Data System (ADS)

    Imara, Nia; Loeb, Abraham; Johnson, Benjamin D.; Conroy, Charlie; Behroozi, Peter

    2018-02-01

    We investigate the evolution of dust content in galaxies from redshifts z = 0 to z = 9.5. Using empirically motivated prescriptions, we model galactic-scale properties—including halo mass, stellar mass, star formation rate, gas mass, and metallicity—to make predictions for the galactic evolution of dust mass and dust temperature in main-sequence galaxies. Our simple analytic model, which predicts that galaxies in the early universe had greater quantities of dust than their low-redshift counterparts, does a good job of reproducing observed trends between galaxy dust and stellar mass out to z ≈ 6. We find that for fixed galaxy stellar mass, the dust temperature increases from z = 0 to z = 6. Our model forecasts a population of low-mass, high-redshift galaxies with interstellar dust as hot as, or hotter than, their more massive counterparts; but this prediction needs to be constrained by observations. Finally, we make predictions for observing 1.1 mm flux density arising from interstellar dust emission with the Atacama Large Millimeter Array.

  6. A dynamic eco-evolutionary model predicts slow response of alpine plants to climate warming.

    PubMed

    Cotto, Olivier; Wessely, Johannes; Georges, Damien; Klonner, Günther; Schmid, Max; Dullinger, Stefan; Thuiller, Wilfried; Guillaume, Frédéric

    2017-05-05

    Withstanding extinction while facing rapid climate change depends on a species' ability to track its ecological niche or to evolve a new one. Current methods that predict climate-driven species' range shifts use ecological modelling without eco-evolutionary dynamics. Here we present an eco-evolutionary forecasting framework that combines niche modelling with individual-based demographic and genetic simulations. Applying our approach to four endemic perennial plant species of the Austrian Alps, we show that accounting for eco-evolutionary dynamics when predicting species' responses to climate change is crucial. Perennial species persist in unsuitable habitats longer than predicted by niche modelling, causing delayed range losses; however, their evolutionary responses are constrained because long-lived adults produce increasingly maladapted offspring. Decreasing population size due to maladaptation occurs faster than the contraction of the species range, especially for the most abundant species. Monitoring of species' local abundance rather than their range may likely better inform on species' extinction risks under climate change.

  7. Evaluating scaling models in biology using hierarchical Bayesian approaches

    PubMed Central

    Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S

    2009-01-01

    Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621
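    As a point of reference for the quantity being compared across models, the sketch below estimates a single allometric exponent by ordinary log-log regression on synthetic data; it is not the paper's hierarchical Bayesian analysis, which fits many such relationships simultaneously and allows species-level variability.

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.exp(rng.uniform(0, 4, 200))                        # e.g. stem diameter (arbitrary units)
      y = 2.0 * x ** 0.67 * np.exp(rng.normal(0, 0.1, 200))     # synthetic data with true exponent 0.67

      # Fit y = a * x**b by least squares in log-log space.
      slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
      print(f"estimated exponent b = {slope:.3f}, prefactor a = {np.exp(intercept):.3f}")
      # A universal model (e.g. elastic similarity) predicts one fixed b for all species;
      # comparing many per-species estimates of b against that value tests the prediction.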

  8. Impact of DNA twist accumulation on progressive helical wrapping of torsionally constrained DNA.

    PubMed

    Li, Wei; Wang, Peng-Ye; Yan, Jie; Li, Ming

    2012-11-21

    DNA wrapping is an important mechanism for chromosomal DNA packaging in cells and viruses. Previous studies of DNA wrapping have been performed mostly on torsionally unconstrained DNA, while in vivo DNA is often under torsional constraint. In this study, we extend a previously proposed theoretical model for wrapping of torsionally unconstrained DNA to a new model including the contribution of DNA twist energy, which influences DNA wrapping drastically. In particular, due to accumulation of twist energy during DNA wrapping, it predicts a finite amount of DNA that can be wrapped on a helical spool. The predictions of the new model are tested by single-molecule study of DNA wrapping under torsional constraint using magnetic tweezers. The theoretical predictions and the experimental results are consistent with each other and their implications are discussed.

  9. Tectonothermal modeling of hydrocarbon maturation, Central Maracaibo Basin, Venezuela

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manske, M.C.

    1996-08-01

    The petroliferous Maracaibo Basin of northwestern Venezuela and extreme eastern Colombia has evolved through a complex geologic history. Deciphering the tectonic and thermal evolution is essential in the prediction of hydrocarbon maturation (timing) within the basin. Individual wells in two areas of the central basin, Blocks III and V, have been modeled to predict timing of hydrocarbon generation within the source Upper Cretaceous La Luna Formation, as well as within interbedded shales of the Lower-Middle Eocene Misoa Formation reservoir sandstones. Tectonic evolution, including burial and uplift (erosional) history, has been constrained with available well data. The initial extensional thermal regime of the basin has been approximated with a Mackenzie-type thermal model, and the following compressional stage of basin development by applying a foreland basin model. Corrected Bottom Hole Temperature (BHT) measurements from wells in the central basin, along with thermal conductivity measurements of rock samples from the entire sedimentary sequence, resulted in the estimation of present-day heat flow. An understanding of the basin's heat flow, then, allowed extrapolation of geothermal gradients through time. The relation of geothermal gradients and overpressure within the Upper Cretaceous hydrocarbon-generating La Luna Formation and thick Colon Formation shales was also taken into account. Maturation modeling by both the conventional Time-Temperature Index (TTI) and kinetic Transformation Ratio (TR) methods predicts the timing of hydrocarbon maturation in the potential source units of these two wells. These modeling results are constrained by vitrinite reflectance and illite/smectite clay dehydration data, and show general agreement. These results also have importance regarding the timing of structural formation and hydrocarbon migration into Misoa reservoirs.

  10. Evaluation of the land surface water budget in NCEP/NCAR and NCEP/DOE reanalyses using an off-line hydrologic model

    NASA Astrophysics Data System (ADS)

    Maurer, Edwin P.; O'Donnell, Greg M.; Lettenmaier, Dennis P.; Roads, John O.

    2001-08-01

    The ability of the National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis (NRA1) and the follow-up NCEP/Department of Energy (DOE) reanalysis (NRA2), to reproduce the hydrologic budgets over the Mississippi River basin is evaluated using a macroscale hydrology model. This diagnosis is aided by a relatively unconstrained global climate simulation using the NCEP global spectral model, and a more highly constrained regional climate simulation using the NCEP regional spectral model, both employing the same land surface parameterization (LSP) as the reanalyses. The hydrology model is the variable infiltration capacity (VIC) model, which is forced by gridded observed precipitation and temperature. It reproduces observed streamflow, and by closure is constrained to balance other terms in the surface water and energy budgets. The VIC-simulated surface fluxes therefore provide a benchmark for evaluating the predictions from the reanalyses and the climate models. The comparisons, conducted for the 10-year period 1988-1997, show the well-known overestimation of summer precipitation in the southeastern Mississippi River basin, a consistent overestimation of evapotranspiration, and an underprediction of snow in NRA1. These biases are generally lower in NRA2, though a large overprediction of snow water equivalent exists. NRA1 is subject to errors in the surface water budget due to nudging of modeled soil moisture to an assumed climatology. The nudging and precipitation bias alone do not explain the consistent overprediction of evapotranspiration throughout the basin. Another source of error is the gravitational drainage term in the NCEP LSP, which produces the majority of the model's reported runoff. This may contribute to an overprediction of persistence of surface water anomalies in much of the basin. Residual evapotranspiration inferred from an atmospheric balance of NRA1, which is more directly related to observed atmospheric variables, matches the VIC prediction much more closely than the coupled models. However, the persistence of the residual evapotranspiration is much less than is predicted by the hydrological model or the climate models.

  11. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xavier, MA; Trimboli, MS

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest significant performance improvements might be achieved by extending the result to electrochemical models. (C) 2015 Elsevier B.V. All rights reserved.
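    A conceptual sketch of this kind of constrained charging problem is given below using a generic second-order RC equivalent-circuit model and the cvxpy convex-optimization package; all circuit parameter values, limits, and the linear open-circuit-voltage approximation are assumptions for illustration, not the paper's cell model or algorithm.

      import numpy as np
      import cvxpy as cp

      dt = 1.0                                   # s
      R0, R1, C1, R2, C2, Q = 0.01, 0.015, 2500.0, 0.02, 60000.0, 3600.0 * 2.5  # ohm, F, coulomb (assumed)
      # States: [SOC, v_RC1, v_RC2]; input: charging current (A, positive = charging).
      A = np.diag([1.0, np.exp(-dt / (R1 * C1)), np.exp(-dt / (R2 * C2))])
      B = np.array([dt / Q, R1 * (1 - np.exp(-dt / (R1 * C1))), R2 * (1 - np.exp(-dt / (R2 * C2)))])

      def ocv(soc):
          return 3.5 + 0.7 * soc                 # crude linear OCV(SOC) approximation (assumed)

      N = 60                                     # prediction horizon (steps)
      x = cp.Variable((3, N + 1))
      u = cp.Variable((1, N))
      x0 = np.array([0.2, 0.0, 0.0])             # start at 20% SOC with relaxed RC states

      constraints = [x[:, 0] == x0]
      cost = 0
      for k in range(N):
          constraints += [x[:, k + 1] == A @ x[:, k] + B * u[0, k]]
          v_term = ocv(x[0, k]) + x[1, k] + x[2, k] + R0 * u[0, k]   # terminal voltage; R0*u is the feed-through term
          constraints += [u[0, k] >= 0, u[0, k] <= 5.0, v_term <= 4.2, x[0, k + 1] <= 1.0]
          cost += cp.square(1.0 - x[0, k + 1])   # drive SOC toward 100% as fast as constraints allow
      prob = cp.Problem(cp.Minimize(cost), constraints)
      prob.solve()
      print("first planned charging current (A):", float(u.value[0, 0]))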

  12. Numerical modeling of Drangajökull Ice Cap, NW Iceland

    NASA Astrophysics Data System (ADS)

    Anderson, Leif S.; Jarosch, Alexander H.; Flowers, Gwenn E.; Aðalgeirsdóttir, Guðfinna; Magnússon, Eyjólfur; Pálsson, Finnur; Muñoz-Cobo Belart, Joaquín; Þorsteinsson, Þorsteinn; Jóhannesson, Tómas; Sigurðsson, Oddur; Harning, David; Miller, Gifford H.; Geirsdóttir, Áslaug

    2016-04-01

    Over the past century the Arctic has warmed twice as fast as the global average. This discrepancy is likely due to feedbacks inherent to the Arctic climate system. These Arctic climate feedbacks are currently poorly quantified, but are essential to future climate predictions based on global circulation modeling. Constraining the magnitude and timing of past Arctic climate changes allows us to test climate feedback parameterizations at different times with different boundary conditions. Because Holocene Arctic summer temperature changes have been largest in the North Atlantic (Kaufman et al., 2004), we focus on constraining the paleoclimate of Iceland. Glaciers are highly sensitive to changes in temperature and precipitation amount. This sensitivity allows for the estimation of paleoclimate using glacier models, modern glacier mass balance data, and past glacier extents. We apply our model to the Drangajökull ice cap (~150 sq. km) in NW Iceland. Our numerical model is resolved in two dimensions, conserves mass, and applies the shallow-ice approximation. The bed DEM used in the model runs was constructed from radio echo data surveyed in spring 2014. We constrain the modern surface mass balance of Drangajökull using: 1) ablation and accumulation stakes; 2) ice surface digital elevation models (DEMs) from satellite, airborne LiDAR, and aerial photographs; and 3) full-Stokes model-derived vertical ice velocities. The modeled vertical ice velocities and ice surface DEMs are combined to estimate past surface mass balance. We constrain Holocene glacier geometries using moraines and trimlines (e.g., Brynjolfsson et al., 2014), proglacial-lake cores, and radiocarbon-dated dead vegetation emerging from under the modern glacier. We present a sensitivity analysis of the model to changes in parameters and show the effect of step changes of temperature and precipitation on glacier extent. Our results are placed in context with local lacustrine and marine climate proxies as well as with glacier extent and volume changes across the North Atlantic.

  13. Thermo-hydraulics of the Peruvian accretionary complex at 12°S

    USGS Publications Warehouse

    Kukowski, Nina; Pecher, Ingo

    1999-01-01

    The models were constrained by the thermal gradient obtained from the depth of bottom-simulating reflectors (BSRs) at the lower slope and some conventional measurements. We found that significant frictional heating is required to explain the observed strong landward increase of heat flux. This is consistent with results from sandbox modelling, which predict strong basal friction at this margin. A significantly higher heat source is needed to match the observed thermal gradient in the southern line.

  14. Constraining climatic controls on hillslope dynamics using a coupled model for the transport of soil and tracers: Application to loess-mantled hillslopes, Charwell River, South Island, New Zealand

    Treesearch

    J.J. Roering; P. Almond; P. Tonkin; J. McKean

    2004-01-01

    Landscapes reflect a legacy of tectonic and climatic forcing as modulated by surface processes. Because the morphologic characteristics of landscapes often do not allow us to uniquely define the relative roles of tectonic deformation and climate, additional constraints are required to interpret and predict landscape dynamics. Here we describe a coupled model for the...

  15. A Test of Carbon and Oxygen Stable Isotope Ratio Process Models in Tree Rings.

    NASA Astrophysics Data System (ADS)

    Roden, J. S.; Farquhar, G. D.

    2008-12-01

    Stable isotope ratios of carbon and oxygen in tree ring cellulose have been used to infer environmental change. Process-based models have been developed to clarify the potential of historic tree ring records for meaningful paleoclimatic reconstructions. However, isotopic variation can be influenced by multiple environmental factors, making simplistic interpretations problematic. Recently, the dual isotope approach, where the variation in one stable isotope ratio (e.g. oxygen) is used to constrain the interpretation of variation in another (e.g. carbon), has been shown to have the potential to de-convolute isotopic analysis. However, this approach requires further testing to determine its applicability for paleo-reconstructions using tree-ring time series. We present a study where the information needed to parameterize mechanistic models for both carbon and oxygen stable isotope ratios was collected in controlled environment chambers for two species (Pinus radiata and Eucalyptus globulus). The seedlings were exposed to treatments designed to modify leaf temperature, transpiration rates, stomatal conductance and photosynthetic capacity. Both species were grown for over 100 days under two humidity regimes that differed by 20%. Stomatal conductance was significantly different between species and for seedlings under drought conditions, but not between other treatments or humidity regimes. The treatments produced large differences in transpiration rate and photosynthesis. Treatments that affected photosynthetic rates but not stomatal conductance influenced carbon isotope discrimination more than those that influenced primarily conductance. The various treatments produced a range in oxygen isotope ratios of 7 ‰. Process models predicted greater oxygen isotope enrichment in tree ring cellulose than observed. The oxygen isotope ratios of bulk leaf water were reasonably well predicted by current steady-state models. However, the fractional difference between models that predict bulk leaf water versus the site of evaporation did not increase with transpiration rates. In conclusion, although the dual isotope approach may better constrain interpretation of isotopic variation, more work is required before its predictive power can be applied to tree-ring archives.

  16. Ethical Considerations in the Practical Application of the Unisa Socio-Critical Model of Student Success

    ERIC Educational Resources Information Center

    Fynn, Angelo

    2016-01-01

    The prediction and classification of student performance has always been a central concern within higher education institutions. It is therefore natural for higher education institutions to harvest and analyse student data to inform decisions on education provision in resource-constrained South African environments. One of the drivers for the use…

  17. The Postindustrial University: Fiscal Crisis and the Changing Structure of Academic Labour.

    ERIC Educational Resources Information Center

    Barrow, Clyde W.

    This paper, in reflecting on socioeconomic trends that will affect higher education in the 1990s, argues for a "postindustrial" university model. The paper predicts that the current fiscal crisis in American higher education will persist throughout the 1990s as a result of: (1) slowly rising state appropriations, (2) market constraints on…

  18. Predictors of Numeracy Performance in National Testing Programs: Insights from the Longitudinal Study of Australian Children

    ERIC Educational Resources Information Center

    Carmichael, Colin; MacDonald, Amy; McFarland-Piazza, Laura

    2014-01-01

    This article is based on an exploratory study that examines factors which predict children's performance on the numeracy component of the Australian National Assessment Program--Literacy and Numeracy (NAPLAN). Utilizing an ecological theoretical model, this study examines child, home and school variables which may enable or constrain NAPLAN…

  19. Top-down estimate of dust emissions through integration of MODIS and MISR aerosol retrievals with the GEOS-Chem adjoint model

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping

    2012-04-01

    Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint constrained by aerosol optical depth (AOD) that are derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance with the aerosol single scattering properties consistent with GEOS-chem. The adjoint inverse modeling yields an overall 51% decrease in prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD and MODIS Deep Blue AOD over the dust source region and surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data for constraining dust emissions at various temporal and spatial scales, and hence improving the quantification of dust effects on climate, air quality, and human health.

  20. Top-down Estimate of Dust Emissions Through Integration of MODIS and MISR Aerosol Retrievals With the Geos-chem Adjoint Model

    NASA Technical Reports Server (NTRS)

    Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping

    2012-01-01

    Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint constrained by aerosol optical depth (AOD) that are derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance with the aerosol single scattering properties consistent with GEOS-Chem. The adjoint inverse modeling yields an overall 51% decrease in prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD and MODIS Deep Blue AOD over the dust source region and surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data for constraining dust emissions at various temporal and spatial scales, and hence improving the quantification of dust effects on climate, air quality, and human health.

  1. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    USGS Publications Warehouse

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of generating calibration-constrained parameter fields approximately doubled. Predictive uncertainty variance computed through the NSMC method was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method. Copyright 2011 by the American Geophysical Union.
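    The core null-space projection step of NSMC can be illustrated with a small synthetic linear example; the Jacobian, prior draws, and dimensions below are stand-ins, and the real workflow (a calibrated variable-density flow model) also quickly re-calibrates each realization when the solution space is truncated.

      import numpy as np

      rng = np.random.default_rng(42)
      n_obs, n_par = 30, 100                      # observations and parameters (stand-in sizes)
      J = rng.normal(size=(n_obs, n_par))         # stand-in calibration Jacobian d(obs)/d(parameters)
      p_cal = rng.normal(size=n_par)              # stand-in calibrated parameter field

      # Solution space = leading right singular vectors of J; the remaining directions form the
      # null space, in which parameters can vary without (to first order) degrading calibration.
      _, _, Vt = np.linalg.svd(J, full_matrices=True)
      n_sol = n_obs                               # here the full rank; in practice a truncated choice
      V_null = Vt[n_sol:].T                       # shape (n_par, n_par - n_sol)

      realizations = []
      for _ in range(1000):
          delta = rng.normal(size=n_par)          # random parameter field drawn from the prior (stand-in)
          realizations.append(p_cal + V_null @ (V_null.T @ delta))

      # Each realization leaves the simulated observations essentially unchanged; with a truncated
      # solution space, NSMC instead removes the residual misfit with a fast re-calibration.
      worst = max(np.abs(J @ (p - p_cal)).max() for p in realizations)
      print("largest first-order change in simulated observations:", worst)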

  2. Improvements to Wire Bundle Thermal Modeling for Ampacity Determination

    NASA Technical Reports Server (NTRS)

    Rickman, Steve L.; Iannello, Christopher J.; Shariff, Khadijah

    2017-01-01

    Determining current carrying capacity (ampacity) of wire bundles in aerospace vehicles is critical not only to safety but also to efficient design. Published standards provide guidance on determining wire bundle ampacity but offer little flexibility for configurations where wire bundles of mixed gauges and currents are employed with varying external insulation jacket surface properties. Thermal modeling has been employed in an attempt to develop techniques to assist in ampacity determination for these complex configurations. Previous developments allowed analysis of wire bundle configurations but were constrained to configurations comprising fewer than 50 elements. Additionally, for vacuum analyses, configurations with very low emittance external jackets suffered from numerical instability in the solution. A new thermal modeler is presented that accommodates larger configurations and remains numerically stable for low bundle infrared emissivity calculations. Formulation of key internal radiation and interface conductance parameters is discussed, including the effects of temperature and air pressure on wire-to-wire thermal conductance. Test cases comparing model-predicted ampacity and that calculated from standards documents are presented.

  3. Constraining heat-transport models by comparison to experimental data in a NIF hohlraum

    NASA Astrophysics Data System (ADS)

    Farmer, W. A.; Jones, O. S.; Barrios Garcia, M. A.; Koning, J. M.; Kerbel, G. D.; Strozzi, D. J.; Hinkel, D. E.; Moody, J. D.; Suter, L. J.; Liedahl, D. A.; Moore, A. S.; Landen, O. L.

    2017-10-01

    The accurate simulation of hohlraum plasma conditions is important for predicting the partition of energy and the symmetry of the x-ray field within a hohlraum. Electron heat transport within the hohlraum plasma is difficult to model due to the complex interaction of kinetic plasma effects, magnetic fields, laser-plasma interactions, and microturbulence. Here, we report simulation results using the radiation-hydrodynamic code, HYDRA, utilizing various physics packages (e.g., nonlocal Schurtz model, MHD, flux limiters) and compare to data from hohlraum plasma experiments which contain a Mn-Co tracer dot. In these experiments, the dot is placed in various positions in the hohlraum in order to assess the spatial variation of plasma conditions. Simulated data is compared to a variety of experimental diagnostics. Conclusions are given concerning how the experimental data does and does not constrain the physics models examined. This work was supported by the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  4. Integrated detection of fractures and caves in carbonate fractured-vuggy reservoirs based on seismic data and well data

    NASA Astrophysics Data System (ADS)

    Cao, Zhanning; Li, Xiangyang; Sun, Shaohan; Liu, Qun; Deng, Guangxiao

    2018-04-01

    Aiming at the prediction of carbonate fractured-vuggy reservoirs, we put forward an integrated approach based on seismic and well data. We divide a carbonate fracture-cave system into four scales for study: micro-scale fractures, meso-scale fractures, macro-scale fractures and caves. Firstly, we analyze anisotropic attributes of prestack azimuth gathers based on multi-scale rock physics forward modeling. We select the frequency attenuation gradient attribute to calculate azimuth anisotropy intensity, and we constrain the result with Formation MicroScanner image data and trial production data to predict the distribution of both micro-scale and meso-scale fracture sets. Then, poststack seismic attributes, variance, curvature and ant algorithms are used to predict the distribution of macro-scale fractures. We also constrain these results with trial production data for accuracy. Next, the distribution of caves is predicted by the amplitude corresponding to the instantaneous peak frequency of the seismic imaging data. Finally, the meso-scale fracture sets, macro-scale fractures and caves are combined to obtain an integrated result. This integrated approach is applied to a real field in the Tarim Basin in western China for the prediction of fracture-cave reservoirs. The results indicate that the approach explains the spatial distribution of carbonate reservoirs well, reduces the non-uniqueness of the interpretation, and improves fracture prediction accuracy.

  5. Probing Planckian Corrections at the Horizon Scale with LISA Binaries

    NASA Astrophysics Data System (ADS)

    Maselli, Andrea; Pani, Paolo; Cardoso, Vitor; Abdelsalhin, Tiziano; Gualtieri, Leonardo; Ferrari, Valeria

    2018-02-01

    Several quantum-gravity models of compact objects predict microscopic or even Planckian corrections at the horizon scale. We explore the possibility of measuring two model-independent, smoking-gun effects of these corrections in the gravitational waveform of a compact binary, namely, the absence of tidal heating and the presence of tidal deformability. For events detectable by the future space-based interferometer LISA, we show that the effect of tidal heating dominates and allows one to constrain putative corrections down to the Planck scale. The measurement of the tidal Love numbers with LISA is more challenging but, in optimistic scenarios, it allows us to constrain the compactness of a supermassive exotic compact object down to the Planck scale. Our analysis suggests that highly spinning, supermassive binaries at 1-20 Gpc provide unparalleled tests of quantum-gravity effects at the horizon scale.

  6. KOI-3278: a self-lensing binary star system.

    PubMed

    Kruse, Ethan; Agol, Eric

    2014-04-18

    Over 40% of Sun-like stars are bound in binary or multistar systems. Stellar remnants in edge-on binary systems can gravitationally magnify their companions, as predicted 40 years ago. By using data from the Kepler spacecraft, we report the detection of such a "self-lensing" system, in which a 5-hour pulse of 0.1% amplitude occurs every orbital period. The white dwarf stellar remnant and its Sun-like companion orbit one another every 88.18 days, a long period for a white dwarf-eclipsing binary. By modeling the pulse as gravitational magnification (microlensing) along with Kepler's laws and stellar models, we constrain the mass of the white dwarf to be ~63% of the mass of our Sun. Further study of this system, and any others discovered like it, will help to constrain the physics of white dwarfs and binary star evolution.

  7. Constraining estimates of methane emissions from Arctic permafrost regions with CARVE

    NASA Astrophysics Data System (ADS)

    Chang, R. Y.; Karion, A.; Sweeney, C.; Henderson, J.; Mountain, M.; Eluszkiewicz, J.; Luus, K. A.; Lin, J. C.; Dinardo, S.; Miller, C. E.; Wofsy, S. C.

    2013-12-01

    Permafrost in the Arctic contains large carbon pools that are currently non-labile, but can be released to the atmosphere as polar regions warm. In order to predict future climate scenarios, we need to understand the emissions of these greenhouse gases under varying environmental conditions. This study presents in-situ measurements of methane made on board an aircraft during the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE), which sampled over the permafrost regions of Alaska. Using measurements from May to September 2012, seasonal emission rate estimates of methane from tundra are constrained using the Stochastic Time-Inverted Lagrangian Transport model, a Lagrangian particle dispersion model driven by custom polar-WRF fields. Preliminary results suggest that methane emission rates have not greatly increased since the Arctic Boundary Layer Experiment conducted in southwest Alaska in 1988.

  8. Probing Planckian Corrections at the Horizon Scale with LISA Binaries.

    PubMed

    Maselli, Andrea; Pani, Paolo; Cardoso, Vitor; Abdelsalhin, Tiziano; Gualtieri, Leonardo; Ferrari, Valeria

    2018-02-23

    Several quantum-gravity models of compact objects predict microscopic or even Planckian corrections at the horizon scale. We explore the possibility of measuring two model-independent, smoking-gun effects of these corrections in the gravitational waveform of a compact binary, namely, the absence of tidal heating and the presence of tidal deformability. For events detectable by the future space-based interferometer LISA, we show that the effect of tidal heating dominates and allows one to constrain putative corrections down to the Planck scale. The measurement of the tidal Love numbers with LISA is more challenging but, in optimistic scenarios, it allows us to constrain the compactness of a supermassive exotic compact object down to the Planck scale. Our analysis suggests that highly spinning, supermassive binaries at 1-20 Gpc provide unparalleled tests of quantum-gravity effects at the horizon scale.

  9. A novel one-class SVM based negative data sampling method for reconstructing proteome-wide HTLV-human protein interaction networks.

    PubMed

    Mei, Suyu; Zhu, Hao

    2015-01-26

    Protein-protein interaction (PPI) prediction is generally treated as a problem of binary classification wherein negative data sampling is still an open problem to be addressed. The commonly used random sampling is prone to yield less representative negative data with considerable false negatives. Meanwhile rational constraints are seldom exerted on model selection to reduce the risk of false positive predictions for most of the existing computational methods. In this work, we propose a novel negative data sampling method based on one-class SVM (support vector machine, SVM) to predict proteome-wide protein interactions between HTLV retrovirus and Homo sapiens, wherein one-class SVM is used to choose reliable and representative negative data, and two-class SVM is used to yield proteome-wide outcomes as predictive feedback for rational model selection. Computational results suggest that one-class SVM is more suited to be used as negative data sampling method than two-class PPI predictor, and the predictive feedback constrained model selection helps to yield a rational predictive model that reduces the risk of false positive predictions. Some predictions have been validated by the recent literature. Lastly, gene ontology based clustering of the predicted PPI networks is conducted to provide valuable cues for the pathogenesis of HTLV retrovirus.
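    A minimal two-stage sketch of the sampling idea is given below using scikit-learn and random placeholder feature vectors; the real method builds features from protein sequence data and uses proteome-wide predictive feedback for model selection, which this sketch omits.

      import numpy as np
      from sklearn.svm import OneClassSVM, SVC

      rng = np.random.default_rng(0)
      X_pos = rng.normal(loc=1.0, size=(200, 50))        # known virus-host PPI pairs (placeholder features)
      X_unlabeled = rng.normal(loc=0.0, size=(5000, 50)) # candidate pairs with unknown labels (placeholder)

      # Stage 1: one-class SVM models the positive class; low decision scores = far from positives.
      occ = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(X_pos)
      scores = occ.decision_function(X_unlabeled)
      neg_idx = np.argsort(scores)[:200]                 # most "non-positive-like" pairs taken as negatives
      X_neg = X_unlabeled[neg_idx]

      # Stage 2: conventional two-class SVM trained on positives vs. the sampled negatives.
      X = np.vstack([X_pos, X_neg])
      y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
      clf = SVC(kernel="rbf", probability=True).fit(X, y)
      print("predicted interaction probability for one candidate:", clf.predict_proba(X_unlabeled[:1])[0, 1])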

  10. Constrained Active Learning for Anchor Link Prediction Across Multiple Heterogeneous Social Networks

    PubMed Central

    Zhu, Junxing; Zhang, Jiawei; Wu, Quanyuan; Jia, Yan; Zhou, Bin; Wei, Xiaokai; Yu, Philip S.

    2017-01-01

    Nowadays, people are usually involved in multiple heterogeneous social networks simultaneously. Discovering the anchor links between the accounts owned by the same users across different social networks is crucial for many important inter-network applications, e.g., cross-network link transfer and cross-network recommendation. Many different supervised models have been proposed to predict anchor links so far, but they are effective only when the labeled anchor links are abundant. However, in real scenarios, such a requirement can hardly be met and most anchor links are unlabeled, since manually labeling the inter-network anchor links is quite costly and tedious. To overcome such a problem and utilize the numerous unlabeled anchor links in model building, in this paper, we introduce the active learning based anchor link prediction problem. Different from the traditional active learning problems, due to the one-to-one constraint on anchor links, if an unlabeled anchor link a=(u,v) is identified as positive (i.e., existing), all the other unlabeled anchor links incident to account u or account v will be negative (i.e., non-existing) automatically. Viewed in such a perspective, asking for the labels of potential positive anchor links in the unlabeled set will be rewarding in the active anchor link prediction problem. Various novel anchor link information gain measures are defined in this paper, based on which several constrained active anchor link prediction methods are introduced. Extensive experiments have been done on real-world social network datasets to compare the performance of these methods with state-of-the-art anchor link prediction methods. The experimental results show that the proposed Mean-entropy-based Constrained Active Learning (MC) method can outperform other methods with significant advantages. PMID:28771201
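    The one-to-one constraint that drives the method can be illustrated with a toy uncertainty-sampling loop; the link names and predicted probabilities below are placeholders, and the selection criterion here is plain binary entropy rather than the paper's information gain measures.

      import numpy as np

      def binary_entropy(p):
          p = np.clip(p, 1e-12, 1 - 1e-12)
          return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

      # Candidate anchor links between accounts of network 1 (u) and network 2 (v).
      links = [("u1", "v1"), ("u1", "v2"), ("u2", "v1"), ("u2", "v3"), ("u3", "v2")]
      prob = {l: p for l, p in zip(links, [0.55, 0.30, 0.20, 0.80, 0.45])}   # assumed model outputs
      labels = {}

      # Query the link whose predicted probability is most uncertain.
      query = max(prob, key=lambda l: binary_entropy(prob[l]))
      print("query the oracle about:", query)

      oracle_says_positive = True          # pretend the annotator confirms the anchor link
      labels[query] = 1 if oracle_says_positive else 0
      if oracle_says_positive:
          u_q, v_q = query
          for l in links:
              if l != query and (l[0] == u_q or l[1] == v_q):
                  labels[l] = 0            # negatives propagated by the one-to-one constraint
      print("labels after one query:", labels)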

  11. Constrained Active Learning for Anchor Link Prediction Across Multiple Heterogeneous Social Networks.

    PubMed

    Zhu, Junxing; Zhang, Jiawei; Wu, Quanyuan; Jia, Yan; Zhou, Bin; Wei, Xiaokai; Yu, Philip S

    2017-08-03

    Nowadays, people are usually involved in multiple heterogeneous social networks simultaneously. Discovering the anchor links between the accounts owned by the same users across different social networks is crucial for many important inter-network applications, e.g., cross-network link transfer and cross-network recommendation. Many different supervised models have been proposed to predict anchor links so far, but they are effective only when the labeled anchor links are abundant. However, in real scenarios, such a requirement can hardly be met and most anchor links are unlabeled, since manually labeling the inter-network anchor links is quite costly and tedious. To overcome such a problem and utilize the numerous unlabeled anchor links in model building, in this paper, we introduce the active learning based anchor link prediction problem. Different from the traditional active learning problems, due to the one-to-one constraint on anchor links, if an unlabeled anchor link a = (u, v) is identified as positive (i.e., existing), all the other unlabeled anchor links incident to account u or account v will be negative (i.e., non-existing) automatically. Viewed in such a perspective, asking for the labels of potential positive anchor links in the unlabeled set will be rewarding in the active anchor link prediction problem. Various novel anchor link information gain measures are defined in this paper, based on which several constrained active anchor link prediction methods are introduced. Extensive experiments have been done on real-world social network datasets to compare the performance of these methods with state-of-the-art anchor link prediction methods. The experimental results show that the proposed Mean-entropy-based Constrained Active Learning (MC) method can outperform other methods with significant advantages.

  12. Spatially explicit modeling of particulate nutrient flux in Large global rivers

    NASA Astrophysics Data System (ADS)

    Cohen, S.; Kettner, A.; Mayorga, E.; Harrison, J. A.

    2017-12-01

    Water, sediment, nutrient and carbon fluxes along river networks have undergone considerable alterations in response to anthropogenic and climatic changes, with significant consequences to infrastructure, agriculture, water security, ecology and geomorphology worldwide. However, in a global setting, these changes in fluvial fluxes and their spatial and temporal characteristics are poorly constrained, due to the limited availability of continuous and long-term observations. We present results from a new global-scale particulate modeling framework (WBMsedNEWS) that combines the Global NEWS watershed nutrient export model with the spatially distributed WBMsed water and sediment model. We compare the model predictions against multiple observational datasets. The results indicate that the model is able to accurately predict particulate nutrient (Nitrogen, Phosphorus and Organic Carbon) fluxes on an annual time scale. Analysis of intra-basin nutrient dynamics and fluxes to global oceans is presented.

  13. Soil thermal dynamics, snow cover, and frozen depth under five temperature treatments in an ombrotrophic bog: Constrained forecast with data assimilation: Forecast With Data Assimilation

    DOE PAGES

    Huang, Yuanyuan; Jiang, Jiang; Ma, Shuang; ...

    2017-08-18

    We report that accurate simulation of soil thermal dynamics is essential for realistic prediction of soil biogeochemical responses to climate change. To facilitate ecological forecasting at the Spruce and Peatland Responses Under Climatic and Environmental change site, we incorporated a soil temperature module into a Terrestrial ECOsystem (TECO) model by accounting for surface energy budget, snow dynamics, and heat transfer among soil layers and during freeze-thaw events. We conditioned TECO with detailed soil temperature and snow depth observations through data assimilation before the model was used for forecasting. The constrained model reproduced variations in observed temperature from different soil layers, the magnitude of snow depth, the timing of snowfall and snowmelt, and the range of frozen depth. The conditioned TECO forecasted probabilistic distributions of soil temperature dynamics in six soil layers, snow, and frozen depths under temperature treatments of +0.0, +2.25, +4.5, +6.75, and +9.0°C. Air warming caused stronger elevation in soil temperature during summer than winter due to winter snow and ice. And soil temperature increased more in shallow soil layers in summer in response to air warming. Whole ecosystem warming (peat + air warmings) generally reduced snow and frozen depths. The accuracy of forecasted snow and frozen depths relied on the precision of weather forcing. Uncertainty is smaller for forecasting soil temperature but large for snow and frozen depths. Lastly, timely and effective soil thermal forecast, constrained through data assimilation that combines process-based understanding and detailed observations, provides boundary conditions for better predictions of future biogeochemical cycles.
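    The heat-transfer-among-soil-layers component of such a module can be sketched with an explicit finite-difference scheme; the layer geometry, thermal properties, and boundary forcings below are assumed values, and snow, the surface energy budget, and freeze-thaw latent heat are deliberately omitted.

      import numpy as np

      n_layers, dz, dt = 10, 0.1, 600.0         # 10 layers of 0.1 m, 10-minute time step (assumed)
      k = 1.2                                    # thermal conductivity, W m-1 K-1 (assumed)
      C = 2.5e6                                  # volumetric heat capacity, J m-3 K-1 (assumed)
      alpha = k / C                              # thermal diffusivity
      assert alpha * dt / dz**2 < 0.5            # explicit-scheme stability criterion

      T = np.full(n_layers, -2.0)                # initial soil temperature profile (deg C)
      T_bottom = -2.0                            # fixed deep boundary condition

      n_steps = 30 * 24 * 6                      # one month of 10-minute steps
      for step in range(n_steps):
          T_surface = 5.0                        # warmed surface forcing (deg C, assumed constant)
          Tp = np.concatenate(([T_surface], T, [T_bottom]))
          T = T + alpha * dt / dz**2 * (Tp[2:] - 2 * Tp[1:-1] + Tp[:-2])
      print("temperature by layer after one month (deg C):", np.round(T, 2))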

  14. Soil thermal dynamics, snow cover, and frozen depth under five temperature treatments in an ombrotrophic bog: Constrained forecast with data assimilation: Forecast With Data Assimilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Yuanyuan; Jiang, Jiang; Ma, Shuang

    Accurate simulation of soil thermal dynamics is essential for realistic prediction of soil biogeochemical responses to climate change. To facilitate ecological forecasting at the Spruce and Peatland Responses Under Climatic and Environmental change (SPRUCE) site, we incorporated a soil temperature module into the Terrestrial ECOsystem (TECO) model by accounting for the surface energy budget, snow dynamics, and heat transfer among soil layers and during freeze-thaw events. We conditioned TECO with detailed soil temperature and snow depth observations through data assimilation before the model was used for forecasting. The constrained model reproduced variations in observed temperature from different soil layers, the magnitude of snow depth, the timing of snowfall and snowmelt, and the range of frozen depth. The conditioned TECO forecasted probabilistic distributions of soil temperature dynamics in six soil layers, snow depth, and frozen depth under temperature treatments of +0.0, +2.25, +4.5, +6.75, and +9.0 °C. Air warming caused stronger elevation of soil temperature in summer than in winter because of winter snow and ice cover, and soil temperature increased more in shallow soil layers in summer in response to air warming. Whole-ecosystem warming (peat + air warming) generally reduced snow and frozen depths. The accuracy of forecasted snow and frozen depths relied on the precision of the weather forcing: uncertainty was smaller for forecasted soil temperature but larger for snow and frozen depths. Timely and effective soil thermal forecasts, constrained through data assimilation that combines process-based understanding with detailed observations, provide boundary conditions for better predictions of future biogeochemical cycles.

  15. Effect of soil property uncertainties on permafrost thaw projections: a calibration-constrained analysis

    NASA Astrophysics Data System (ADS)

    Harp, D. R.; Atchley, A. L.; Painter, S. L.; Coon, E. T.; Wilson, C. J.; Romanovsky, V. E.; Rowland, J. C.

    2016-02-01

    The effects of soil property uncertainties on permafrost thaw projections are studied using a three-phase subsurface thermal hydrology model and calibration-constrained uncertainty analysis. The null-space Monte Carlo method is used to identify soil hydrothermal parameter combinations that are consistent with borehole temperature measurements at the study site, the Barrow Environmental Observatory. Each parameter combination is then used in a forward projection of permafrost conditions for the 21st century (calendar years 2006 to 2100) using atmospheric forcings from the Community Earth System Model (CESM) under the Representative Concentration Pathway (RCP) 8.5 greenhouse gas concentration trajectory. The 100-year projection allows evaluation of the predictive uncertainty due to soil property (parametric) uncertainty and of the inter-annual climate variability due to year-to-year differences in the CESM climate forcings. Even after calibration to measured borehole temperature data at this well-characterized site, soil property uncertainties remain significant and produce significant predictive uncertainties in projected active layer thickness (ALT) and annual thaw depth-duration, even with a specified future climate. Inter-annual climate variability in projected soil moisture content and Stefan number is small. A volume- and time-integrated Stefan number decreases significantly, indicating a shift in subsurface energy utilization in the future climate (latent heat of phase change becomes more important than heat conduction). Out of 10 soil parameters, ALT, annual thaw depth-duration, and Stefan number are highly dependent on mineral soil porosity, while annual mean liquid saturation of the active layer is highly dependent on mineral soil residual saturation and moderately dependent on peat residual saturation. By comparing the ensemble statistics to the spread of projected permafrost metrics obtained with different climate models, we quantify the relative magnitude of soil property uncertainty against another source of permafrost projection uncertainty, structural climate model uncertainty. We show that the effect of calibration-constrained uncertainty in soil properties, although significant, is less than that produced by structural climate model uncertainty for this location.
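
    For reference, the Stefan number used as a permafrost metric above is conventionally defined as the ratio of sensible heat to the latent heat of fusion (a standard definition, not a formula quoted from the paper):

```latex
\begin{equation}
  \mathrm{Ste} \;=\; \frac{c_p\,\Delta T}{L_f}
\end{equation}
```

    With this definition, a decrease in the volume- and time-integrated Stefan number indicates that the latent heat of phase change absorbs a growing share of the subsurface energy budget relative to sensible heating.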

  16. Measurements and Modeling of Stress in Precipitation-Hardened Aluminum Alloy AA2618 during Gleeble Interrupted Quenching and Constrained Cooling

    NASA Astrophysics Data System (ADS)

    Chobaut, Nicolas; Carron, Denis; Saelzle, Peter; Drezet, Jean-Marie

    2016-11-01

    Solutionizing and quenching are key steps in the fabrication of heat-treatable aluminum parts such as AA2618 compressor impellers for turbochargers, as they strongly affect the mechanical characteristics of the product. In particular, quenching induces residual stresses that can cause unacceptable distortions during machining and unfavorable stresses in service. Predicting and controlling stress generation during quenching of large AA2618 forgings is therefore of particular interest. Since precipitation during quenching may affect the local yield strength of the material and thus the level of macroscale residual stresses, this phenomenon must be taken into account. A material model accounting for precipitation in a simple but realistic way is presented. Instead of modeling the precipitation that occurs during quenching, the model parameters are identified using a limited number of tensile tests performed after representative interrupted cooling paths in a Gleeble machine. The material model is presented, calibrated, and validated against constrained cooling experiments in a Gleeble blocked-jaws configuration. The model is applied in finite element computations of stress generation during quenching of large AA2618 forgings for compressor impellers.

  17. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. In particular, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function but do not require the computation of its inverse. In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
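
    To make the prediction-correction idea concrete, here is a minimal projected-gradient sketch of tracking a time-varying constrained problem. It is a first-order illustration under generic assumptions (user-supplied `grad` and `project` callables, a fixed constraint set, a uniform sampling grid, finite-difference drift estimation), not the algorithm analyzed in the paper.

```python
import numpy as np

def prediction_correction(x0, grad, project, t_grid, alpha=0.1):
    """Minimal prediction-correction tracking sketch (projected-gradient flavour).

    grad(x, t) : gradient of the time-varying cost at point x and time t
    project(x) : Euclidean projection onto the (fixed) constraint set
    The temporal drift of the gradient is estimated by finite differences from
    the previous sampling instant (uniform grid assumed).
    """
    x = np.asarray(x0, dtype=float)
    trajectory, g_prev = [], None
    for k in range(1, len(t_grid)):
        t_old, t_new = t_grid[k - 1], t_grid[k]
        h = t_new - t_old
        g_old = grad(x, t_old)
        # Prediction: extrapolate the gradient forward using its estimated time drift.
        drift = 0.0 if g_prev is None else (g_old - g_prev) / h
        x = project(x - alpha * (g_old + h * drift))
        # Correction: once the new cost is revealed, take a projected-gradient step at t_new.
        x = project(x - alpha * grad(x, t_new))
        g_prev = g_old
        trajectory.append(x.copy())
    return trajectory
```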

  18. International Land Model Benchmarking (ILAMB) Workshop Report, Technical Report DOE/SC-0186

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forrest M.; Koven, Charles D.; Kappel-Aleks, Gretchen

    2016-11-01

    As Earth system models become increasingly complex, there is a growing need for comprehensive and multi-faceted evaluation of model projections. To advance understanding of biogeochemical processes and their interactions with hydrology and climate under conditions of increasing atmospheric carbon dioxide, new analysis methods are required that use observations to constrain model predictions, inform model development, and identify needed measurements and field experiments. Better representations of biogeochemistry–climate feedbacks and ecosystem processes in these models are essential for reducing uncertainties associated with projections of climate change during the remainder of the 21st century.

  19. Constraining a complex biogeochemical model for CO2 and N2O emission simulations from various land uses by model-data fusion

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz

    2017-07-01

    This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha-1 a-1, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha-1 a-1 for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function, and shows reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete (e.g. respiration rates after harvest) model processes. It also helps to identify missing model inputs (e.g. nitrogen uptake through shallow groundwater at the grassland site during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
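
    The Latin-hypercube screening step can be pictured with the generic GLUE-style sketch below: sample parameter sets, run the model, and keep the runs whose RMSE against observations falls below a behavioural threshold. The `model` callable, bounds, and threshold are placeholders; this is not the LandscapeDNDC calibration code.

```python
import numpy as np
from scipy.stats import qmc

def behavioural_runs(model, bounds, observations, n_samples=1000, rmse_threshold=1.0):
    """Latin-hypercube screening of parameter sets, keeping 'behavioural' runs.

    model(params) -> simulated series aligned with `observations`
    bounds        -> list of (low, high) tuples, one per parameter
    """
    low, high = np.array(bounds).T
    sampler = qmc.LatinHypercube(d=len(bounds), seed=42)
    params = qmc.scale(sampler.random(n_samples), low, high)
    keep = []
    for p in params:
        sim = np.asarray(model(p))
        rmse = np.sqrt(np.mean((sim - observations) ** 2))
        if rmse <= rmse_threshold:          # behavioural run: retained for analysis
            keep.append((rmse, p))
    return sorted(keep, key=lambda t: t[0])
```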

  20. Kinesin Steps Do Not Alternate in Size☆

    PubMed Central

    Fehr, Adrian N.; Asbury, Charles L.; Block, Steven M.

    2008-01-01

    Kinesin is a two-headed motor protein that transports cargo inside cells by moving stepwise on microtubules. Its exact trajectory along the microtubule is unknown: alternative pathway models predict either uniform 8-nm steps or alternating 7- and 9-nm steps. By analyzing single-molecule stepping traces from “limping” kinesin molecules, we were able to distinguish alternate fast- and slow-phase steps and thereby to calculate the step sizes associated with the motions of each of the two heads. We also compiled step distances from nonlimping kinesin molecules and compared these distributions against models predicting uniform or alternating step sizes. In both cases, we find that kinesin takes uniform 8-nm steps, a result that strongly constrains the allowed models. PMID:18083906

  1. Fermi-LAT upper limits on gamma-ray emission from colliding wind binaries

    DOE PAGES

    Werner, Michael; Reimer, O.; Reimer, A.; ...

    2013-07-09

    Colliding wind binaries (CWBs) are thought to give rise to a plethora of physical processes, including the acceleration and interaction of relativistic particles. Observation of synchrotron radiation in the radio band confirms that a relativistic electron population exists in CWBs. Accordingly, CWBs have been suspected sources of high-energy γ-ray emission since the COS-B era. Theoretical models exist that characterize the underlying physical processes leading to particle acceleration and quantitatively predict the non-thermal emission observable at Earth. Here, we search for evidence of γ-ray emission from a sample of seven CWB systems: WR 11, WR 70, WR 125, WR 137, WR 140, WR 146, and WR 147, which theoretical modelling has identified as the most favourable candidates for emitting γ-rays. We make a comparison with existing γ-ray flux predictions and investigate possible constraints. We used 24 months of data from the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope to perform a dedicated likelihood analysis of CWBs in the LAT energy range. We find no evidence of γ-ray emission from any of the studied CWB systems and determine corresponding flux upper limits. For some CWBs, the interplay of orbital and stellar parameters renders the Fermi-LAT data insufficiently sensitive to constrain the parameter space of the emission models. In the cases of WR 140 and WR 147, the Fermi-LAT upper limits appear to rule out some model predictions entirely and constrain theoretical models over a significant parameter space. A comparison of our findings to the CWB η Car is made.

  2. Optimizing future imaging survey of galaxies to confront dark energy and modified gravity models

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kazuhiro; Parkinson, David; Hamana, Takashi; Nichol, Robert C.; Suto, Yasushi

    2007-07-01

    We consider the extent to which future imaging surveys of galaxies can distinguish between dark energy and modified gravity models for the origin of the cosmic acceleration. Dynamical dark energy models may have expansion histories similar to those of modified gravity models, yet predict different growth-of-structure histories. We parametrize the cosmic expansion by the two parameters w0 and wa, and the linear growth rate of density fluctuations independently by Linder's γ. Dark energy models generically predict γ≈0.55, while the Dvali-Gabadadze-Porrati (DGP) model predicts γ≈0.68. To determine whether future imaging surveys can constrain γ to within 20% (or Δγ<0.1), we perform a Fisher matrix analysis for a weak-lensing survey such as the ongoing Hyper Suprime-Cam (HSC) project. Under the condition that the total observation time is fixed, we compute the figure of merit (FoM) as a function of the exposure time t_exp. We find that the tomography technique effectively improves the FoM, which has a broad peak around t_exp ≃ several × 10 min; a shallow and wide survey is preferred to constrain the γ parameter. While Δγ<0.1 cannot be achieved by the HSC weak-lensing survey alone, the constraints can be improved by combining with a follow-up spectroscopic survey such as the Wide-field Fiber-fed Multi-Object Spectrograph (WFMOS) and/or future cosmic microwave background (CMB) observations.
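
    The two parametrizations referred to above are the standard ones (reproduced here from general usage, not verbatim from the paper): the dark-energy equation of state in terms of w0 and wa, and the linear growth rate of density fluctuations in terms of Linder's γ.

```latex
\begin{align}
  w(a) &= w_0 + w_a\,(1 - a), \\
  f(a) &\equiv \frac{d\ln D}{d\ln a} \simeq \Omega_m(a)^{\gamma},
  \qquad \gamma \approx 0.55 \ \text{(dark energy)}, \quad
  \gamma \approx 0.68 \ \text{(DGP)}.
\end{align}
```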

  3. Ab initio theory and modeling of water.

    PubMed

    Chen, Mohan; Ko, Hsin-Yu; Remsing, Richard C; Calegari Andrade, Marcos F; Santra, Biswajit; Sun, Zhaoru; Selloni, Annabella; Car, Roberto; Klein, Michael L; Perdew, John P; Wu, Xifan

    2017-10-10

    Water is of the utmost importance for life and technology. However, a genuinely predictive ab initio model of water has eluded scientists. We demonstrate that a fully ab initio approach, relying on the strongly constrained and appropriately normed (SCAN) density functional, provides such a description of water. SCAN accurately describes the balance among covalent bonds, hydrogen bonds, and van der Waals interactions that dictates the structure and dynamics of liquid water. Notably, SCAN captures the density difference between water and ice Ih at ambient conditions, as well as many important structural, electronic, and dynamic properties of liquid water. These successful predictions of the versatile SCAN functional open the gates to studying complex processes in aqueous phase chemistry and the interactions of water with other materials in an efficient, accurate, and predictive ab initio manner.

  4. Ab initio theory and modeling of water

    PubMed Central

    Chen, Mohan; Ko, Hsin-Yu; Remsing, Richard C.; Calegari Andrade, Marcos F.; Santra, Biswajit; Sun, Zhaoru; Selloni, Annabella; Car, Roberto; Klein, Michael L.; Perdew, John P.; Wu, Xifan

    2017-01-01

    Water is of the utmost importance for life and technology. However, a genuinely predictive ab initio model of water has eluded scientists. We demonstrate that a fully ab initio approach, relying on the strongly constrained and appropriately normed (SCAN) density functional, provides such a description of water. SCAN accurately describes the balance among covalent bonds, hydrogen bonds, and van der Waals interactions that dictates the structure and dynamics of liquid water. Notably, SCAN captures the density difference between water and ice Ih at ambient conditions, as well as many important structural, electronic, and dynamic properties of liquid water. These successful predictions of the versatile SCAN functional open the gates to studying complex processes in aqueous phase chemistry and the interactions of water with other materials in an efficient, accurate, and predictive ab initio manner. PMID:28973868

  5. A method to identify and analyze biological programs through automated reasoning

    PubMed Central

    Yordanov, Boyan; Dunn, Sara-Jane; Kugler, Hillel; Smith, Austin; Martello, Graziano; Emmott, Stephen

    2016-01-01

    Predictive biology is elusive because rigorous, data-constrained, mechanistic models of complex biological systems are difficult to derive and validate. Current approaches tend to construct and examine either static interaction network models, which are descriptively rich but often lack explanatory and predictive power, or dynamic models that can be simulated to reproduce known behavior. However, such approaches introduce implicit assumptions, since typically only one mechanism is considered, and exhaustively investigating all scenarios by simulation is impractical. To address these limitations, we present a methodology based on automated formal reasoning, which permits the synthesis and analysis of the complete set of logical models consistent with experimental observations. We test hypotheses against all candidate models, and remove the need for simulation by characterizing and simultaneously analyzing all mechanistic explanations of observed behavior. Our methodology transforms knowledge of complex biological processes from sets of possible interactions and experimental observations into precise, predictive biological programs governing cell function. PMID:27668090

  6. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data: an initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis of the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed, and the incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration, where predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow initial experiment designs to be augmented for collecting new physical data. A standard framework for data assimilation is presented, and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
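
    As a concrete illustration of the ensemble Kalman filter update mentioned above, here is a generic stochastic-EnKF sketch with perturbed observations (textbook form; the function and variable names are illustrative and not taken from the presentation).

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_cov, rng=np.random.default_rng(0)):
    """Stochastic ensemble Kalman filter analysis step (illustrative sketch only).

    ensemble     : (n_members, n_state) array of prior state samples
    obs          : (n_obs,) observation vector
    obs_operator : maps a state vector to observation space
    obs_cov      : (n_obs, n_obs) observation-error covariance
    """
    X = np.asarray(ensemble, dtype=float)
    HX = np.array([obs_operator(x) for x in X])        # prior ensemble in obs space
    Xa, HXa = X - X.mean(0), HX - HX.mean(0)            # ensemble anomalies
    n = X.shape[0]
    P_xy = Xa.T @ HXa / (n - 1)                         # state-obs cross-covariance
    P_yy = HXa.T @ HXa / (n - 1) + obs_cov              # innovation covariance
    K = P_xy @ np.linalg.inv(P_yy)                      # Kalman gain
    # Perturbed observations keep the analysis spread statistically consistent.
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, size=n)
    return X + (perturbed - HX) @ K.T
```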

  7. CONSTRAINING HIGH-SPEED WINDS IN EXOPLANET ATMOSPHERES THROUGH OBSERVATIONS OF ANOMALOUS DOPPLER SHIFTS DURING TRANSIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller-Ricci Kempton, Eliza; Rauscher, Emily, E-mail: ekempton@ucolick.org

    2012-06-01

    Three-dimensional (3D) dynamical models of hot Jupiter atmospheres predict very strong wind speeds. For tidally locked hot Jupiters, winds at high altitude in the planet's atmosphere advect heat from the day side to the cooler night side of the planet. Net wind speeds on the order of 1-10 km s^-1 directed towards the night side of the planet are predicted at mbar pressures, which is the approximate pressure level probed by transmission spectroscopy. These winds should result in an observed blueshift of spectral lines in transmission on the order of the wind speed. Indeed, Snellen et al. recently observed a 2 ± 1 km s^-1 blueshift of CO transmission features for HD 209458b, which has been interpreted as a detection of the day-to-night (substellar to anti-stellar) winds that have been predicted by 3D atmospheric dynamics modeling. Here, we present the results of a coupled 3D atmospheric dynamics and transmission spectrum model, which predicts the Doppler-shifted spectrum of a hot Jupiter during transit resulting from winds in the planet's atmosphere. We explore four different models for the hot Jupiter atmosphere using different prescriptions for atmospheric drag via interaction with planetary magnetic fields. We find that models with no magnetic drag produce net Doppler blueshifts in the transmission spectrum of ~2 km s^-1 and that lower Doppler shifts of ~1 km s^-1 are found for the higher drag cases, results consistent with, but not yet strongly constrained by, the Snellen et al. measurement. We additionally explore the possibility of recovering the average terminator wind speed as a function of altitude by measuring Doppler shifts of individual spectral lines and spatially resolving wind speeds across the leading and trailing terminators during ingress and egress.
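
    The size of the expected line shift follows from the ordinary non-relativistic Doppler relation (standard physics, not a model-specific result): a net line-of-sight wind of about 2 km s^-1 toward the observer corresponds to a fractional blueshift of roughly 7 × 10^-6.

```latex
\begin{equation}
  \frac{\Delta\lambda}{\lambda_0} = \frac{v_{\mathrm{los}}}{c},
  \qquad
  v_{\mathrm{los}} \approx -2\ \mathrm{km\,s^{-1}}
  \;\Rightarrow\;
  \frac{\Delta\lambda}{\lambda_0} \approx -7\times10^{-6}.
\end{equation}
```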

  8. Emergent Constraints for Cloud Feedbacks and Climate Sensitivity

    DOE PAGES

    Klein, Stephen A.; Hall, Alex

    2015-10-26

    Emergent constraints are physically explainable empirical relationships between characteristics of the current climate and long-term climate prediction that emerge in collections of climate model simulations. With the prospect of constraining long-term climate prediction, scientists have recently uncovered several emergent constraints related to long-term cloud feedbacks. We review these proposed emergent constraints, many of which involve the behavior of low-level clouds, and discuss criteria to assess their credibility. With further research, some of the cases we review may eventually become confirmed emergent constraints, provided they are accompanied by credible physical explanations. Because confirmed emergent constraints identify a source of model error that projects onto climate predictions, they deserve extra attention from those developing climate models and climate observations. While a systematic bias cannot be ruled out, it is noteworthy that the promising emergent constraints suggest larger cloud feedback and hence climate sensitivity.

  9. Improved Geothermometry Through Multivariate Reaction-path Modeling and Evaluation of Geomicrobiological Influences on Geochemical Temperature Indicators: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mattson, Earl; Smith, Robert; Fujita, Yoshiko

    2015-03-01

    The project aimed to demonstrate that geothermometric predictions can be improved through the application of multi-element reaction path modeling that accounts for lithologic and tectonic settings, while also accounting for biological influences on geochemical temperature indicators. The limited use of chemical signatures by individual traditional geothermometers in developing reservoir temperature estimates may have constrained their reliability for evaluating potential geothermal resources. This project was therefore intended to build a geothermometry tool that integrates multi-component reaction path modeling with process-optimization capability and that can be applied to dilute, low-temperature water samples to consistently predict reservoir temperature within ±30 °C. The project also evaluated the extent to which microbiological processes can modulate the geochemical signals in some thermal waters and influence geothermometric predictions.

  10. DART: New Research Using Ensemble Data Assimilation in Geophysical Models

    NASA Astrophysics Data System (ADS)

    Hoar, T. J.; Raeder, K.

    2015-12-01

    The Data Assimilation Research Testbed (DART) is a community facility for ensemble data assimilation developed and supported by the National Center for Atmospheric Research. DART provides a comprehensive suite of software, documentation, and tutorials that can be used for ensemble data assimilation research, operations, and education. Scientists and software engineers at NCAR are available to support DART users who want to use existing DART products or develop their own applications. Current DART users range from university professors teaching data assimilation, to individual graduate students working with simple models, through national laboratories doing operational prediction with large state-of-the-art models. DART runs efficiently on many computational platforms, from laptops through thousands of cores on the newest supercomputers. This poster focuses on several recent research activities using DART with geophysical models: using CAM/DART to understand whether OCO-2 total precipitable water observations can be useful in numerical weather prediction; impacts of the synergistic use of infrared CO retrievals (MOPITT, IASI) in CAM-CHEM/DART assimilations; assimilation and analysis of observations of Amazonian biomass-burning emissions by MOPITT (carbon monoxide), MODIS (aerosol optical depth), and MISR (plume height); long-term evaluation of the chemical response of MOPITT CO assimilation in CAM-CHEM/DART OSSEs for satellite planning and emission-inversion capabilities; improved forward observation operators for land models that have multiple land use/land cover segments in a single grid cell; simulating mesoscale convective systems (MCSs) using a variable-resolution, unstructured grid in the Model for Prediction Across Scales (MPAS) and DART; an ensemble of year-long, real-time initializations of the convection-allowing mesoscale WRF+DART system over the United States; and constraining WACCM with observations in the tropical band (30S-30N) using DART, which also constrains the polar stratosphere during the same winter, together with assimilation of MOPITT carbon monoxide Compact Phase Space Retrievals (CPSR) in WRF-Chem/DART. Future work includes a DART interface to the CICE (CESM) sea ice model and fully coupled assimilations in CESM.

  11. Orbit determination of the Next-Generation Beidou satellites with Intersatellite link measurements and a priori orbit constraints

    NASA Astrophysics Data System (ADS)

    Ren, Xia; Yang, Yuanxi; Zhu, Jun; Xu, Tianhe

    2017-11-01

    Intersatellite Link (ISL) technology helps to realize the automatic updating of broadcast ephemeris and clock error parameters for Global Navigation Satellite Systems (GNSS). ISLs constitute an important approach with which to both improve the observation geometry and extend the tracking coverage of China's Beidou Navigation Satellite System (BDS). However, ISL-only orbit determination might lead to constellation drift and rotation, and can even cause the orbit determination to diverge. Fortunately, predicted orbits with good precision can be used as a priori information with which to constrain the estimated satellite orbit parameters. Therefore, the precision of autonomous satellite orbit determination can be improved by incorporating a priori orbit information, and vice versa. However, rotation and translation errors in the a priori orbit will remain in the final result. This paper proposes a constrained precise orbit determination (POD) method for a sub-constellation of the new Beidou satellite constellation with only a few ISLs. The observation model of dual one-way measurements that eliminates satellite clock errors is presented, and the orbit determination precision is analyzed under different data processing scenarios. The conclusions are as follows. (1) With ISLs only, the estimated parameters are strongly correlated, especially the positions and velocities of satellites. (2) The performance of the determined BDS orbits improves with more precise a priori orbit constraints: the POD precision is better than 45 m with an a priori orbit constraint of 100 m precision (e.g., orbits predicted by the telemetry, tracking and control system), and better than 6 m with a precise a priori orbit constraint of 10 m precision (e.g., orbits predicted by the international GNSS Monitoring and Assessment System (iGMAS)). (3) The POD precision improves with additional ISLs. Constrained by a priori iGMAS orbits, the POD precision with two, three, and four ISLs is better than 6, 3, and 2 m, respectively. (4) In-plane and out-of-plane links contribute differently to the observation configuration and system observability; POD with a weak observation configuration (e.g., one in-plane link and one out-of-plane link) should be tightly constrained with a priori orbits.
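
    The role of the a priori orbit can be pictured with a generic constrained weighted least-squares adjustment of the orbit parameters. The notation below is generic (design matrix A, ISL observations y with weight matrix W, a priori orbit x̄ with covariance P_x̄) and is meant only to illustrate the idea of constrained POD, not to reproduce the paper's estimator.

```latex
\begin{equation}
  \hat{x} = \left(A^{\mathsf T} W A + P_{\bar{x}}^{-1}\right)^{-1}
            \left(A^{\mathsf T} W y + P_{\bar{x}}^{-1}\,\bar{x}\right)
\end{equation}
```

    A tighter a priori covariance P_x̄ pulls the solution toward the predicted orbit, which is why more precise a priori orbits and additional ISLs both improve the POD precision.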

  12. The surprisingly transparent sQGP at LHC

    NASA Astrophysics Data System (ADS)

    Horowitz, W. A.; Gyulassy, Miklos

    2011-12-01

    We present parameter-free predictions of the nuclear modification factor, R_AA^π(p_T, √s), of high-p_T pions produced in Pb+Pb collisions at √s_NN = 2.76 and 5.5 ATeV, based on the WHDG/DGLV (radiative + elastic + geometric fluctuation) jet energy loss model. The initial quark-gluon plasma (QGP) density at the LHC is constrained from a rigorous statistical analysis of PHENIX/RHIC π quenching data at √s_NN = 0.2 ATeV and the charged-particle multiplicity measured by ALICE at the LHC at 2.76 ATeV. Our perturbative QCD tomographic theory predicts significant differences between jet quenching at RHIC and LHC energies, which are qualitatively consistent with the p_T dependence and normalization, within the large systematic uncertainty, of the first charged-hadron nuclear modification factor, R_AA^ch, data measured by ALICE. However, our constrained prediction of the central-to-peripheral pion modification, R_cp^π(p_T), for which the large systematic uncertainties associated with unmeasured p+p reference data cancel, is found to be over-quenched relative to the charged-hadron ALICE R_cp^ch data in the range 5
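
    For reference, the nuclear modification factors quoted above are conventionally defined as follows (standard definitions, not expressions taken from the paper), where ⟨N_coll⟩ is the average number of binary nucleon-nucleon collisions:

```latex
\begin{align}
  R_{AA}(p_T) &= \frac{dN^{AA}/dp_T}{\langle N_{\mathrm{coll}}\rangle \; dN^{pp}/dp_T}, &
  R_{cp}(p_T) &= \frac{\langle N_{\mathrm{coll}}^{\mathrm{periph}}\rangle \; dN^{\mathrm{central}}/dp_T}
                     {\langle N_{\mathrm{coll}}^{\mathrm{central}}\rangle \; dN^{\mathrm{periph}}/dp_T}.
\end{align}
```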

  13. Effective theory of flavor for Minimal Mirror Twin Higgs

    DOE PAGES

    Barbieri, Riccardo; Hall, Lawrence J.; Harigaya, Keisuke

    2017-10-03

    We consider two copies of the Standard Model, interchanged by an exact parity symmetry, P. The observed fermion mass hierarchy is described by suppression factors ϵ^{n_i} for charged fermion i, as can arise in Froggatt-Nielsen and extra-dimensional theories of flavor. The corresponding flavor factors in the mirror sector are ϵ'^{n_i}, so that spontaneous breaking of the parity P arises from a single parameter ϵ'/ϵ, yielding a tightly constrained version of Minimal Mirror Twin Higgs, introduced in our previous paper. Models are studied for simple values of n_i, including in particular one with SU(5) compatibility, that describe the observed fermion mass hierarchy. The entire mirror quark and charged lepton spectrum is broadly predicted in terms of ϵ'/ϵ, as are the mirror QCD scale and the decoupling temperature between the two sectors. Helium-, hydrogen- and neutron-like mirror dark matter candidates are constrained by self-scattering and relic ionization. In each case, the allowed parameter space can be fully probed by proposed direct detection experiments. Correlated predictions are made as well for the Higgs signal strength and the amount of dark radiation.

  14. Exploring the Relationship Between Planet Mass and Atmospheric Metallicity for Cool Giant Planets

    NASA Astrophysics Data System (ADS)

    Thomas, Nancy H.; Wong, Ian; Knutson, Heather; Deming, Drake; Desert, Jean-Michel; Fortney, Jonathan J.; Morley, Caroline; Kammer, Joshua A.; Line, Michael R.

    2016-10-01

    Measurements of the average densities of exoplanets have begun to help constrain their bulk compositions and to provide insight into their formation locations and accretionary histories. Current mass and radius measurements suggest an inverse relationship between a planet's bulk metallicity and its mass, a relationship also seen in the gas and ice giant planets of our own solar system. We expect atmospheric metallicity to similarly increase with decreasing planet mass, but there are currently few constraints on the atmospheric metallicities of extrasolar giant planets. For hydrogen-dominated atmospheres, equilibrium chemistry models predict a transition from CO to CH4 below ~1200 K. However, with increased atmospheric metallicity the relative abundance of CH4 is depleted and CO is enhanced. In this study we present new secondary eclipse observations of a set of cool (<1200 K) giant exoplanets at 3.6 and 4.5 microns using the Spitzer Space Telescope, which allow us to constrain their relative abundances of CH4 and CO and corresponding atmospheric metallicities. We discuss the implications of our results for the proposed correlation between planet mass and atmospheric metallicity as predicted by the core accretion models and observed in our solar system.

  15. Finite Element Modeling of In-Situ Stresses near Salt Bodies

    NASA Astrophysics Data System (ADS)

    Sanz, P.; Gray, G.; Albertz, M.

    2011-12-01

    The in-situ stress field is modified around salt bodies because salt rock has no ability to sustain shear stresses. A reliable prediction of stresses near salt is important for planning safe and economic drilling programs, and a better understanding of in-situ stresses before drilling can be achieved using finite element models that account for the creeping behavior of salt and the elastoplastic response of the surrounding sediments. Two different geomechanical modeling techniques can be distinguished: "dynamic" modeling and "static" modeling. "Dynamic" models, also known as forward models, simulate the development of structural processes over geologic time. This technique provides the evolution of stresses and so is used to simulate the initiation and development of structural features such as faults, folds, fractures, and salt diapirs. The original or initial configuration and the unknown final configuration of forward models are usually significantly different; therefore, geometric non-linearities need to be considered. These models may be difficult to constrain when different tectonic, depositional, and erosional events, and the timing among them, need to be accounted for. While dynamic models provide insight into the stress evolution, in many cases it is very challenging, if not impossible, to forward model a configuration to its known present-day geometry, particularly for salt layers that evolve into highly irregular and complex geometries. Alternatively, "static" models use the present-day geometry and present-day far-field stresses to estimate the present-day in-situ stress field inside a domain. In this case, it is appropriate to use a small-deformation approach because the initial and final configurations should be very similar and, more importantly, because the equilibrium of stresses should be stated in the present-day initial configuration. The initial stresses and the applied boundary conditions are constrained by the geologic setting and available data. This modeling technique does not predict the evolution of structural elements or stresses with time; therefore it does not provide insight into the formation of fractures that developed under a different stress condition or into the development of overpressure generated by a high sedimentation rate. This work provides a validation for predicting in-situ stresses near salt using "static" models. We compare synthetic examples using both modeling techniques and show that stresses near salt predicted with "static" models are comparable to those generated by "dynamic" models.

  16. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
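
    A minimal way to reproduce the kind of workflow described above is to fit candidate ARMA(p, q) models over a small order grid and keep the one with the lowest information criterion. The sketch below uses the statsmodels ARIMA class and is a generic illustration, not the parameter-constraining technique of the paper; the data file name in the usage comment is hypothetical.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_arma_by_aic(y, max_p=4, max_q=4):
    """Fit ARMA(p, q) models over a small grid and keep the AIC-best order.

    Assumes the sampling interval has already been fixed; returns
    (aic, (p, q), fitted_results) for the best model, or None if none converge.
    """
    best = None
    for p in range(1, max_p + 1):
        for q in range(0, max_q + 1):
            try:
                res = ARIMA(y, order=(p, 0, q)).fit()
            except Exception:
                continue  # some orders fail to converge; skip them
            if best is None or res.aic < best[0]:
                best = (res.aic, (p, q), res)
    return best

# Usage (hypothetical data file of ankle-joint angular rotation samples):
# aic, order, result = fit_arma_by_aic(np.loadtxt("ankle_angle.txt"))
```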

  17. Lexical mediation of phonotactic frequency effects on spoken word recognition: A Granger causality analysis of MRI-constrained MEG/EEG data.

    PubMed

    Gow, David W; Olson, Bruna B

    2015-07-01

    Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.

  18. Lexical mediation of phonotactic frequency effects on spoken word recognition: A Granger causality analysis of MRI-constrained MEG/EEG data

    PubMed Central

    Gow, David W.; Olson, Bruna B.

    2015-01-01

    Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical “gang effects” in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account. PMID:25883413

  19. Pan-Arctic modelling of net ecosystem exchange of CO2

    PubMed Central

    Shaver, G. R.; Rastetter, E. B.; Salmon, V.; Street, L. E.; van de Weg, M. J.; Rocha, A.; van Wijk, M. T.; Williams, M.

    2013-01-01

    Net ecosystem exchange (NEE) of C varies greatly among Arctic ecosystems. Here, we show that approximately 75 per cent of this variation can be accounted for in a single regression model that predicts NEE as a function of leaf area index (LAI), air temperature and photosynthetically active radiation (PAR). The model was developed in concert with a survey of the light response of NEE in Arctic and subarctic tundras in Alaska, Greenland, Svalbard and Sweden. Model parametrizations based on data collected in one part of the Arctic can be used to predict NEE in other parts of the Arctic with accuracy similar to that of predictions based on data collected in the same site where NEE is predicted. The principal requirement for the dataset is that it should contain a sufficiently wide range of measurements of NEE at both high and low values of LAI, air temperature and PAR, to properly constrain the estimates of model parameters. Canopy N content can also be substituted for leaf area in predicting NEE, with equal or greater accuracy, but substitution of soil temperature for air temperature does not improve predictions. Overall, the results suggest a remarkable convergence in regulation of NEE in diverse ecosystem types throughout the Arctic. PMID:23836790
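
    One commonly used functional form for such a model, shown only as a hedged illustration of how LAI, air temperature and PAR can enter a single NEE regression (the paper's exact parametrization may differ), combines a canopy-integrated light response with temperature-dependent ecosystem respiration:

```latex
\begin{align}
  \mathrm{GPP} &= \frac{P_{\max}}{k}\,
    \ln\!\left(\frac{P_{\max} + E_0\,\mathrm{PAR}}
                    {P_{\max} + E_0\,\mathrm{PAR}\,e^{-k\,\mathrm{LAI}}}\right), \\
  \mathrm{NEE} &= R_0\,e^{\beta\,T_{\mathrm{air}}} - \mathrm{GPP},
\end{align}
```

    where P_max is the light-saturated photosynthetic rate per unit leaf area, E_0 the initial slope of the light response, k a canopy extinction coefficient, and R_0 and β respiration parameters.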

  20. X-1 to X-Wings: Developing a Parametric Cost Model

    NASA Technical Reports Server (NTRS)

    Sterk, Steve; McAtee, Aaron

    2015-01-01

    In today's cost-constrained environment, NASA needs an X-Plane database and parametric cost model that can quickly provide rough order of magnitude predictions of cost from initial concept to first flight of potential X-Plane aircraft. This paper describes the steps taken in developing such a model and reports the results. The challenges encountered in the collection of historical data and recommendations for future database management are discussed. A step-by-step discussion of the development of Cost Estimating Relationships (CERs) is then covered.
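
    CERs are often simple power laws fit to historical cost drivers. The sketch below fits cost = a · weight^b in log-log space; the data points, driver choice, and units are made up for illustration and are not taken from the NASA database described above.

```python
import numpy as np

def fit_power_law_cer(weights, costs):
    """Fit a simple power-law cost-estimating relationship, cost = a * weight**b.

    A linear least-squares fit in log-log space returns the exponent b and
    the scale factor a.
    """
    b, log_a = np.polyfit(np.log(weights), np.log(costs), 1)
    return np.exp(log_a), b

# Usage with made-up data points (empty weight in lb, cost in $M):
a, b = fit_power_law_cer([5000, 12000, 30000], [40.0, 95.0, 260.0])
estimate = a * 18000 ** b   # rough-order-of-magnitude cost for an 18,000 lb concept
```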

  1. Recent Results From MINERvA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patrick, Cheryl

    The MINERvA detector is situated in Fermilab's NuMI beam, which provides neutrinos and antineutrinos in the 1-20 GeV range. It is designed to make precision cross-section measurements for scattering processes on various nuclei. These proceedings summarize the differential cross-section distributions measured for several different processes. Comparison of these with various models hints at additional nuclear effects not included in common simulations. These results will help constrain generators' nuclear models and reduce systematic uncertainties on their predictions. An accurate cross-section model, with minimal uncertainties, is vital to oscillation experiments.

  2. Modeling a space-variant cortical representation for apparent motion.

    PubMed

    Wurbs, Jeremy; Mingolla, Ennio; Yazdanbakhsh, Arash

    2013-08-06

    Receptive field sizes of neurons in early primate visual areas increase with eccentricity, as does temporal processing speed. The fovea is evidently specialized for slow, fine movements while the periphery is suited for fast, coarse movements. In either the fovea or periphery discrete flashes can produce motion percepts. Grossberg and Rudd (1989) used traveling Gaussian activity profiles to model long-range apparent motion percepts. We propose a neural model constrained by physiological data to explain how signals from retinal ganglion cells to V1 affect the perception of motion as a function of eccentricity. Our model incorporates cortical magnification, receptive field overlap and scatter, and spatial and temporal response characteristics of retinal ganglion cells for cortical processing of motion. Consistent with the finding of Baker and Braddick (1985), in our model the maximum flash distance that is perceived as an apparent motion (Dmax) increases linearly as a function of eccentricity. Baker and Braddick (1985) made qualitative predictions about the functional significance of both stimulus and visual system parameters that constrain motion perception, such as an increase in the range of detectable motions as a function of eccentricity and the likely role of higher visual processes in determining Dmax. We generate corresponding quantitative predictions for those functional dependencies for individual aspects of motion processing. Simulation results indicate that the early visual pathway can explain the qualitative linear increase of Dmax data without reliance on extrastriate areas, but that those higher visual areas may serve as a modulatory influence on the exact Dmax increase.

  3. Lepton flavor violating B meson decays via a scalar leptoquark

    NASA Astrophysics Data System (ADS)

    Sahoo, Suchismita; Mohanta, Rukmani

    2016-06-01

    We study the effect of scalar leptoquarks in lepton flavor violating B meson decays induced by the flavor-changing transitions b → q l_i^+ l_j^- with q = s, d. In the standard model, these transitions are extremely rare, as they are either two-loop suppressed or proceed via box diagrams with tiny neutrino masses in the loop. However, in the leptoquark model they can occur at tree level and are expected to have significantly larger branching ratios. The leptoquark parameter space is constrained using the experimental limits on the branching ratios of B_q → l^+ l^- processes. Using this constrained parameter space, we predict the branching ratios of LFV semileptonic B meson decays, such as B^+ → K^+(π^+) l_i^+ l_j^-, B^+ → (K^{*+}, ρ^+) l_i^+ l_j^-, and B_s → φ l_i^+ l_j^-, which are found to be within the experimental reach of LHCb and the upcoming Belle II experiment. We also investigate the rare leptonic K_{L,S} → μ^+ μ^- (e^+ e^-) and K_L → μ^∓ e^± decays in the leptoquark model.

  4. Resource Management in Constrained Dynamic Situations

    NASA Astrophysics Data System (ADS)

    Seok, Jinwoo

    Resource management is considered in this dissertation for systems with limited resources, possibly combined with other system constraints, in unpredictably dynamic environments. Resources may represent fuel, power, capabilities, energy, and so on. Resource management is important for many practical systems; usually, resources are limited, and their use must be optimized. Furthermore, systems are often constrained, and constraints must be satisfied for safe operation. Simplistic resource management can result in poor use of resources and failure of the system. Many real-world situations also involve dynamic environments. Many traditional problems are formulated based on the assumption of given probabilities or perfect knowledge of future events. However, in many cases the future is completely unknown, and information on or probabilities about future events are not available; in other words, we operate in unpredictably dynamic situations. A method is therefore needed to handle dynamic situations without knowledge of the future, but few formal methods have been developed to address them. Thus, the goal is to design resource management methods for constrained systems, with limited resources, in unpredictably dynamic environments. To this end, resource management is organized hierarchically into two levels: 1) planning, and 2) control. In the planning level, the set of tasks to be performed is scheduled based on limited resources to maximize resource usage in unpredictably dynamic environments. In the control level, the system controller is designed to follow the schedule while considering all the system constraints for safe and efficient operation. Consequently, this dissertation is mainly divided into two parts: 1) planning level design, based on finite state machines, and 2) control level methods, based on model predictive control. We define a recomposable restricted finite state machine to handle limited-resource situations and unpredictably dynamic environments at the planning level. To obtain a policy, dynamic programming is applied, and to obtain a solution, limited breadth-first search is applied to the recomposable restricted finite state machine. A multi-function phased array radar resource management problem and an unmanned aerial vehicle patrolling problem are treated using recomposable restricted finite state machines. We then use model predictive control for the control level, because it allows constraint handling and setpoint tracking for the schedule. An aircraft power system management problem is treated that aims to develop an integrated control system for an aircraft gas turbine engine and electrical power system using rate-based model predictive control. Our results indicate that, at the planning level, limited breadth-first search for recomposable restricted finite state machines generates good scheduling solutions in limited-resource situations and unpredictably dynamic environments; the importance of cooperation at the planning level is also verified. At the control level, a rate-based model predictive controller allows good schedule tracking and safe operation, and the importance of considering the system constraints and the interactions between the subsystems is indicated. For the best resource management in constrained dynamic situations, the planning level and the control level need to be considered together.
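
    To illustrate the control-level idea, here is a minimal rate-constrained MPC step written with cvxpy. The linear model (A, B), horizon, weights, and rate limit are placeholder assumptions; it shows only the generic "track the schedule subject to rate constraints, apply the first input, re-solve" pattern, not the dissertation's aircraft power system formulation.

```python
import numpy as np
import cvxpy as cp

def rate_based_mpc_step(A, B, x0, x_ref, horizon=10, du_max=0.1, u_prev=None):
    """One step of a rate-constrained MPC sketch for x_{k+1} = A x_k + B u_k.

    Tracks a scheduled setpoint x_ref while limiting the input rate |u_k - u_{k-1}|.
    Returns the first input of the optimized sequence (receding-horizon use).
    """
    n, m = B.shape
    u_prev = np.zeros(m) if u_prev is None else u_prev
    x = cp.Variable((n, horizon + 1))
    u = cp.Variable((m, horizon))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(horizon):
        cost += cp.sum_squares(x[:, k + 1] - x_ref) + 0.01 * cp.sum_squares(u[:, k])
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k]]      # model dynamics
        prev = u_prev if k == 0 else u[:, k - 1]
        cons += [cp.abs(u[:, k] - prev) <= du_max]              # input rate limit
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u.value[:, 0]  # apply only the first input, then re-solve at the next step
```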

  5. Shear wave prediction using committee fuzzy model constrained by lithofacies, Zagros basin, SW Iran

    NASA Astrophysics Data System (ADS)

    Shiroodi, Sadjad Kazem; Ghafoori, Mohammad; Ansari, Hamid Reza; Lashkaripour, Golamreza; Ghanadian, Mostafa

    2017-02-01

    The main purpose of this study is to introduce geological controlling factors to improve an intelligence-based model for estimating shear wave velocity from seismic attributes. The proposed method includes three main steps, framed by the geological setting of a complex sedimentary succession located in the Persian Gulf. First, the best attributes were selected from the extracted seismic data. Second, these attributes were transformed into shear wave velocity using fuzzy inference systems (FIS), namely Sugeno's fuzzy inference (SFIS), adaptive neuro-fuzzy inference (ANFIS) and optimized fuzzy inference (OFIS). Finally, a committee fuzzy machine (CFM) based on bat-inspired algorithm (BA) optimization was applied to combine the previous predictions into an enhanced solution. To show the effect of geology on improving the prediction, the main classes of predominant lithofacies in the reservoir of interest, including shale, sand, and carbonate, were selected, and the proposed algorithm was performed with and without the lithofacies constraint. The results showed better agreement between real and predicted shear wave velocity in the lithofacies-based model than in the model without lithofacies, especially in sand and carbonate.

  6. Lens models under the microscope: comparison of Hubble Frontier Field cluster magnification maps

    NASA Astrophysics Data System (ADS)

    Priewe, Jett; Williams, Liliya L. R.; Liesenborgs, Jori; Coe, Dan; Rodney, Steven A.

    2017-02-01

    Using the power of gravitational lensing magnification by massive galaxy clusters, the Hubble Frontier Fields provide deep views of six patches of the high-redshift Universe. The combination of deep Hubble imaging and exceptional lensing strength has revealed the greatest numbers of multiply-imaged galaxies available to constrain models of cluster mass distributions. However, even with O(100) images per cluster, the uncertainties associated with the reconstructions are not negligible. The goal of this paper is to show the diversity of model magnification predictions. We examine seven and nine mass models of Abell 2744 and MACS J0416, respectively, submitted to the Mikulski Archive for Space Telescopes for public distribution in 2015 September. The dispersion between model predictions increases from 30 per cent at common low magnifications (μ ˜ 2) to 70 per cent at rare high magnifications (μ ˜ 40). MACS J0416 exhibits smaller dispersions than Abell 2744 for 2 < μ < 10. We show that magnification maps based on different lens inversion techniques typically differ from each other by more than their quoted statistical errors. This suggests that some models underestimate the true uncertainties, which are primarily due to various lensing degeneracies. Though the exact mass sheet degeneracy is broken, its generalized counterpart is not broken at least in Abell 2744. Other local degeneracies are also present in both clusters. Our comparison of models is complementary to the comparison of reconstructions of known synthetic mass distributions. By focusing on observed clusters, we can identify those that are best constrained, and therefore provide the clearest view of the distant Universe.

  7. Collective behaviour in vertebrates: a sensory perspective

    PubMed Central

    Collignon, Bertrand; Fernández-Juricic, Esteban

    2016-01-01

    Collective behaviour models can predict behaviours of schools, flocks, and herds. However, in many cases, these models make biologically unrealistic assumptions in terms of the sensory capabilities of the organism, which are applied across different species. We explored how sensitive collective behaviour models are to these sensory assumptions. Specifically, we used parameters reflecting the visual coverage and visual acuity that determine the spatial range over which an individual can detect and interact with conspecifics. Using metric and topological collective behaviour models, we compared the classic sensory parameters, typically used to model birds and fish, with a set of realistic sensory parameters obtained through physiological measurements. Compared with the classic sensory assumptions, the realistic assumptions increased perceptual ranges, which led to fewer groups and larger group sizes in all species, and higher polarity values and slightly shorter neighbour distances in the fish species. Overall, classic visual sensory assumptions are not representative of many species showing collective behaviour and constrain unrealistically their perceptual ranges. More importantly, caution must be exercised when empirically testing the predictions of these models in terms of choosing the model species, making realistic predictions, and interpreting the results. PMID:28018616

  8. Finding viable models in SUSY parameter spaces with signal specific discovery potential

    NASA Astrophysics Data System (ADS)

    Burgess, Thomas; Lindroos, Jan Øye; Lipniacka, Anna; Sandaker, Heidi

    2013-08-01

    Recent results from ATLAS giving a Higgs mass of 125.5 GeV further constrain already highly constrained supersymmetric models such as the pMSSM or CMSSM/mSUGRA. As a consequence, finding potentially discoverable and non-excluded regions of model parameter space is becoming increasingly difficult. Several groups have invested a large effort in studying the consequences of Higgs mass bounds, upper limits on rare B-meson decays, and limits on the relic dark matter density for constrained models, aiming at predicting superpartner masses and establishing the likelihood of SUSY models compared to that of the Standard Model vis-à-vis experimental data. In this paper, a framework is presented for efficient searches for discoverable, non-excluded regions of different SUSY parameter spaces that give a specific experimental signature of interest. The method employs an improved Markov Chain Monte Carlo (MCMC) scheme exploiting an iteratively updated likelihood function to guide the search for viable models. Existing experimental and theoretical bounds as well as the LHC discovery potential are taken into account, including recent bounds on the relic dark matter density, the Higgs sector, and rare B-meson decays. A clustering algorithm is applied to classify selected models according to expected phenomenology, enabling automated choice of experimental benchmarks and regions to be used for optimizing searches. The aim is to provide experimentalists with a viable tool that helps target the experimental signatures to search for once a class of models of interest is established. As an example, a search for viable CMSSM models with τ-lepton signatures observable with the 2012 LHC data set is presented. In the search, 105,209 unique models were probed. From these, ten reference benchmark points covering different ranges of phenomenological observables at the LHC were selected.
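
    The MCMC backbone of such a scan can be pictured with the minimal random-walk Metropolis sampler below. It is a generic sketch (fixed step size, user-supplied log-likelihood over the model parameters) and does not include the paper's iteratively updated likelihood or clustering steps.

```python
import numpy as np

def metropolis_hastings(log_like, x0, n_steps=10000, step=0.1,
                        rng=np.random.default_rng(1)):
    """Minimal random-walk Metropolis sampler over a parameter space.

    log_like(x) -> log-likelihood of parameter vector x (combining experimental
    constraints); returns the chain of visited parameter points.
    """
    x = np.asarray(x0, dtype=float)
    ll = log_like(x)
    chain = []
    for _ in range(n_steps):
        proposal = x + step * rng.standard_normal(x.shape)
        ll_prop = log_like(proposal)
        # Accept with probability min(1, exp(ll_prop - ll)).
        if np.log(rng.random()) < ll_prop - ll:
            x, ll = proposal, ll_prop
        chain.append(x.copy())
    return np.array(chain)
```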

  9. Uncertainties on exclusive diffractive Higgs boson and jet production at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechambre, A.; CEA/IRFU/Service de physique des particules, CEA/Saclay; Kepka, O.

    2011-03-01

    Two theoretical descriptions of exclusive diffractive jet and Higgs production at the LHC were implemented into the FPMC generator: the Khoze, Martin, Ryskin model and the Cudell, Hernandez, Ivanov, Dechambre exclusive model. We then study their uncertainties. We compare their predictions to the CDF measurement and discuss the possibility of constraining exclusive Higgs production at the LHC with early measurements of exclusive jets. We show that the present theoretical uncertainties can be reduced by a factor of 5 with such data.

  10. Explorations in dark energy

    NASA Astrophysics Data System (ADS)

    Bozek, Brandon

    This dissertation describes three research projects on the topic of dark energy. The first project is an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and on a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their "w0-wa" parameterization of the dark energy. We find that the respective increases in constraining power from one stage to the next produced by our analysis are consistent with the DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF Stage 2, and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ. The second project applies this analysis to an Inverse Power Law (IPL), or "Ratra-Peebles" (RP), model. This model belongs to a popular subset of scalar field quintessence models that exhibit "tracking" behavior, which makes it particularly interesting theoretically. We find that the relative increase in constraining power on the parameter space of this model is consistent with what was found in the first project and in the DETF report. We also show, using a background cosmology based on an IPL scalar field model that is consistent with a cosmological constant at Stage 2, that good DETF Stage 4 data would exclude a cosmological constant by better than 3σ. The third project extends the Causal Entropic Principle to predict the preferred curvature within the "multiverse". The Causal Entropic Principle (Bousso et al.) provides an alternative to anthropic attempts to predict our observed value of the cosmological constant by calculating the entropy created within a causal diamond. We find that curvature densities larger than ρ_k = 40 ρ_m are disfavored at more than the 99.99% level, with the distribution peaking at ρ_Λ = 7.9 × 10^-123 and ρ_k = 4.3 ρ_m for open universes. For universes that allow only positive curvature, or both positive and negative curvature, we find a correlation between curvature and dark energy that leads to an extended region of preferred values. Our universe is found to be disfavored to an extent that depends on the priors on curvature. We also provide a comparison to previous anthropic constraints on open universes and discuss future directions for this work.

  11. Reply to "Comments on 'Why Hasn't Earth Warmed as much as Expected?'"

    NASA Technical Reports Server (NTRS)

    Schwartz, Stephen E.; Charlson, Robert J.; Kahn, Ralph A.; Ogren, John A.; Rodhe, Henning

    2012-01-01

    In response to our article "Why Hasn't Earth Warmed as Much as Expected?" (2010), Knutti and Plattner (2012) wrote a rebuttal. The term climate sensitivity is usually defined as the change in global mean surface temperature that is produced by a specified change in forcing, such as a change in solar heating or greenhouse gas concentrations. We had argued in the 2010 paper that although climate models can reproduce the global mean surface temperature history over the past century, the uncertainties in these models, due primarily to the uncertainty in climate forcing by airborne particles, mean that the models cannot constrain the climate sensitivity within limits useful for climate prediction. Knutti and Plattner are climate modelers, and they argued essentially that because the models could reproduce the surface temperature history, the issue we raised was moot. Our response amounts to straightening out this confusion: for the models to be constraining, they must reproduce the surface temperature history with sufficient confidence not just to match the measurements, but to exclude alternative histories. As before, we concluded that if we can actually make the aerosol measurements using currently available, state-of-the-art techniques, we can determine the aerosol climate forcing to the degree required to constrain that aspect of model climate sensitivity. A technical issue relating to the timescale over which a change in CO2 emissions would be equilibrated in the environmental energy balance was also discussed; this, again, was largely a matter of differences in terminology.

  12. Predicting Great Lakes fish yields: tools and constraints

    USGS Publications Warehouse

    Lewis, C.A.; Schupp, D.H.; Taylor, W.W.; Collins, J.J.; Hatch, Richard W.

    1987-01-01

    Prediction of yield is a critical component of fisheries management. The development of sound yield prediction methodology and the application of the results of yield prediction are central to the evolution of strategies to achieve stated goals for Great Lakes fisheries and to the measurement of progress toward those goals. Despite general availability of species yield models, yield prediction for many Great Lakes fisheries has been poor due to the instability of the fish communities and the inadequacy of available data. A host of biological, institutional, and societal factors constrain both the development of sound predictions and their application to management. Improved predictive capability requires increased stability of Great Lakes fisheries through rehabilitation of well-integrated communities, improvement of data collection, data standardization and information-sharing mechanisms, and further development of the methodology for yield prediction. Most important is the creation of a better-informed public that will in turn establish the political will to do what is required.

  13. Pushing the Frontier of Data-Oriented Geodynamic Modeling: from Qualitative to Quantitative to Predictive

    NASA Astrophysics Data System (ADS)

    Liu, L.; Hu, J.; Zhou, Q.

    2016-12-01

    The rapid accumulation of geophysical and geological data sets poses an increasing demand for the development of geodynamic models to better understand the evolution of the solid Earth. Consequently, the earlier qualitative physical models are no longer satisfactory. Recent efforts focus on more quantitative simulations and more efficient numerical algorithms. Among these, a particular line of research is the implementation of data-oriented geodynamic modeling, with the purpose of building an observationally consistent and physically correct geodynamic framework. Such models can often catalyze new insights into the functioning mechanisms of the various aspects of plate tectonics, and their predictive nature can also guide future research in a deterministic fashion. Over the years, we have been working on constructing large-scale geodynamic models with both sequential and variational data assimilation techniques. These models act as a bridge between different observational records, and the superposition of the constraining power from different data sets helps reveal unknown processes and mechanisms of the dynamics of the mantle and lithosphere. We simulate the post-Cretaceous subduction history in South America using a forward (sequential) approach. The model is constrained using past subduction history, seafloor age evolution, the tectonic architecture of continents, and present-day geophysical observations. Our results quantify the various driving forces shaping the present South American flat slabs, which we found are all internally torn. The 3-D geometry of these torn slabs further explains the abnormal seismicity pattern and enigmatic volcanic history. An inverse (variational) model simulating the late Cenozoic western U.S. mantle dynamics with similar constraints reveals a mechanism for the formation of Yellowstone-related volcanism that differs from the traditional understanding. Furthermore, important insights into the mantle density and viscosity structures also emerge from these models.

  14. Search for Muonic Dark Forces at BABAR

    NASA Astrophysics Data System (ADS)

    Godang, Romulus

    2017-04-01

    Many models of physics beyond the Standard Model predict the existence of light Higgs states, dark photons, and new gauge bosons mediating interactions between dark sectors and the Standard Model. Using the full data sample collected with the BABAR detector at the PEP-II e+e- collider, we report searches for a light non-Standard-Model Higgs boson, a dark photon, and a new muonic dark force mediated by a gauge boson (Z') coupling only to the second and third lepton families. Our results significantly improve upon the current bounds and further constrain the remaining region of the allowed parameter space.

  15. Rear wheel torque vectoring model predictive control with velocity regulation for electric vehicles

    NASA Astrophysics Data System (ADS)

    Siampis, Efstathios; Velenis, Efstathios; Longo, Stefano

    2015-11-01

    In this paper we propose a constrained optimal control architecture for combined velocity, yaw and sideslip regulation to stabilise the vehicle near the limit of lateral acceleration, using the rear-axle electric torque vectoring configuration of an electric vehicle. A nonlinear vehicle and tyre model is used to find reference steady-state cornering conditions and to design two model predictive control (MPC) strategies of different levels of fidelity: one that uses a linearised version of the full vehicle model with the rear wheels' torques as the input, and another that neglects the wheel dynamics and uses the rear wheels' slips as the input instead. After analysing the relative trade-offs between performance and computational effort, we compare the two MPC strategies against each other and against an unconstrained optimal control strategy in a Simulink and Carsim environment.
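    To make the receding-horizon idea concrete, here is a minimal linear MPC sketch in Python using the cvxpy modelling library: a quadratic tracking cost over a short horizon, linear dynamics, and a box constraint on the input, re-solved at every step with only the first input applied. The state-space matrices, weights, limits, and reference values are illustrative placeholders, not the paper's vehicle or tyre model.

```python
import numpy as np
import cvxpy as cp

# Illustrative discrete-time model: x = [sideslip, yaw rate], u = rear torque command
A = np.array([[0.95, 0.10],
              [-0.05, 0.90]])
B = np.array([[0.01],
              [0.08]])
N = 10                        # prediction horizon
Q = np.diag([1.0, 10.0])      # state tracking weights
R = np.array([[0.01]])        # input effort weight
x_ref = np.array([0.0, 0.3])  # target sideslip and yaw rate (arbitrary)
u_max = 5.0                   # actuator limit (arbitrary units)

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
x0 = cp.Parameter(2)

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k] - x_ref, Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]
problem = cp.Problem(cp.Minimize(cost), constraints)

# Receding-horizon loop: solve, apply the first input, re-measure, repeat
state = np.array([0.05, 0.0])
for step in range(5):
    x0.value = state
    problem.solve()
    u_apply = u.value[:, 0]
    state = A @ state + B @ u_apply      # plant update (here the plant equals the model)
    print(step, state.round(4), u_apply.round(4))
```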

  16. Historical precipitation predictably alters the shape and magnitude of microbial functional response to soil moisture.

    PubMed

    Averill, Colin; Waring, Bonnie G; Hawkes, Christine V

    2016-05-01

    Soil moisture constrains the activity of decomposer soil microorganisms, and in turn the rate at which soil carbon returns to the atmosphere. While increases in soil moisture are generally associated with increased microbial activity, historical climate may constrain current microbial responses to moisture. However, it is not known if variation in the shape and magnitude of microbial functional responses to soil moisture can be predicted from historical climate at regional scales. To address this problem, we measured soil enzyme activity at 12 sites across a broad climate gradient spanning 442-887 mm mean annual precipitation. Measurements were made eight times over 21 months to maximize sampling during different moisture conditions. We then fit saturating functions of enzyme activity to soil moisture and extracted half saturation and maximum activity parameter values from model fits. We found that 50% of the variation in maximum activity parameters across sites could be predicted by 30-year mean annual precipitation, an indicator of historical climate, and that the effect is independent of variation in temperature, soil texture, or soil carbon concentration. Based on this finding, we suggest that variation in the shape and magnitude of soil microbial response to soil moisture due to historical climate may be remarkably predictable at regional scales, and this approach may extend to other systems. If historical contingencies on microbial activities prove to be persistent in the face of environmental change, this approach also provides a framework for incorporating historical climate effects into biogeochemical models simulating future global change scenarios. © 2016 John Wiley & Sons Ltd.
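    The fitting step described above, estimating a maximum-activity and a half-saturation parameter from activity-moisture data, can be sketched with a Michaelis-Menten-type saturating function and scipy's curve_fit. The data below are synthetic and the parameter values arbitrary; in the study, the fitted maximum-activity values would then be related to 30-year mean annual precipitation across sites.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating(moisture, v_max, k_half):
    """Michaelis-Menten-type response: activity rises and saturates with moisture."""
    return v_max * moisture / (k_half + moisture)

# Synthetic site data: volumetric soil moisture (fraction) and enzyme activity
rng = np.random.default_rng(1)
moisture = np.linspace(0.05, 0.45, 25)
true_vmax, true_khalf = 12.0, 0.12
activity = saturating(moisture, true_vmax, true_khalf) + rng.normal(0, 0.4, moisture.size)

(v_max, k_half), cov = curve_fit(saturating, moisture, activity, p0=[10.0, 0.1])
print(f"maximum activity = {v_max:.2f}, half-saturation moisture = {k_half:.3f}")
```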

  17. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization.

    PubMed

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

    Fixtures play an important role in constraining excessive deformation of sheet metal parts at the machining, assembly, and measuring stages of the manufacturing process. However, designing and optimizing the fixture locating layout remains a difficult and nontrivial task, because there is no direct, explicit expression relating the locating layout to the resulting deformation. To address this, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed from a training data set selected by uniform sampling and generated by finite element simulation. Finally, a case study is conducted to verify the proposed method.

  18. Development of a Prediction Model Based on RBF Neural Network for Sheet Metal Fixture Locating Layout Design and Optimization

    PubMed Central

    Wang, Zhongqi; Yang, Bo; Kang, Yonggang; Yang, Yuan

    2016-01-01

    Fixtures play an important role in constraining excessive deformation of sheet metal parts at the machining, assembly, and measuring stages of the manufacturing process. However, designing and optimizing the fixture locating layout remains a difficult and nontrivial task, because there is no direct, explicit expression relating the locating layout to the resulting deformation. To address this, an RBF neural network prediction model is proposed in this paper to assist the design and optimization of sheet metal fixture locating layouts. The RBF neural network model is constructed from a training data set selected by uniform sampling and generated by finite element simulation. Finally, a case study is conducted to verify the proposed method. PMID:27127499
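    As a rough illustration of the surrogate-modelling idea in the two records above, the sketch below builds a small Gaussian radial basis function interpolator on a uniformly sampled design table (layout parameters mapped to a simulated response). The training data are synthetic stand-ins for finite element results, and the class name, kernel width, and sample sizes are assumptions made for this example.

```python
import numpy as np

class GaussianRBF:
    """Minimal Gaussian RBF surrogate: exact interpolation of the training samples."""

    def __init__(self, sigma=1.0):
        self.sigma = sigma

    def _kernel(self, A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.sigma ** 2))

    def fit(self, X, y):
        self.X = X
        K = self._kernel(X, X) + 1e-8 * np.eye(len(X))   # small ridge for stability
        self.w = np.linalg.solve(K, y)
        return self

    def predict(self, Xq):
        return self._kernel(Xq, self.X) @ self.w

# Toy training set: 2-D locator-layout parameters -> peak deformation (synthetic)
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(40, 2))                 # uniform sampling of layouts
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2        # stand-in for FE simulation output

model = GaussianRBF(sigma=0.3).fit(X, y)
X_test = rng.uniform(0, 1, size=(5, 2))
print(model.predict(X_test).round(3))
```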

  19. Understanding leachate flow in municipal solid waste landfills by combining time-lapse ERT and subsurface flow modelling - Part II: Constraint methodology of hydrodynamic models.

    PubMed

    Audebert, M; Oxarango, L; Duquennoi, C; Touze-Foltz, N; Forquet, N; Clément, R

    2016-09-01

    Leachate recirculation is a key process in the operation of municipal solid waste landfills as bioreactors. To ensure optimal water content distribution, bioreactor operators need tools to design leachate injection systems. Prediction of leachate flow by subsurface flow modelling could provide useful information for the design of such systems. However, hydrodynamic models require additional data to constrain them and to assess hydrodynamic parameters. Electrical resistivity tomography (ERT) is a suitable method to study leachate infiltration at the landfill scale. It can provide spatially distributed information, which is useful for constraining hydrodynamic models. However, this geophysical method does not allow ERT users to directly measure water content in waste. The MICS (multiple inversions and clustering strategy) methodology was proposed to delineate the infiltration area precisely during time-lapse ERT surveys, in order to avoid the use of empirical petrophysical relationships, which are not adapted to a medium as heterogeneous as waste. The infiltration shapes and hydrodynamic information extracted with MICS were used to constrain hydrodynamic models when assessing parameters. The constraint methodology developed in this paper was tested on two hydrodynamic models: an equilibrium model, where flow within the waste medium is estimated using a single-continuum approach, and a non-equilibrium model, where flow is estimated using a dual-continuum approach; the latter represents leachate flow in fractures. Finally, this methodology provides insight into the advantages and limitations of the hydrodynamic models. Furthermore, we suggest an explanation for the large volume detected by MICS when a small volume of leachate is injected. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Constraining proposed combinations of ice history and Earth rheology using VLBI determined baseline length rates in North America

    NASA Technical Reports Server (NTRS)

    Mitrovica, J. X.; Davis, J. L.; Shapiro, I. I.

    1993-01-01

    We predict the present-day rates of change of the lengths of 19 North American baselines due to the glacial isostatic adjustment process. Contrary to previously published research, we find that the three-dimensional motion of each of the sites defining a baseline, rather than only the radial motions of these sites, needs to be considered to obtain an accurate estimate of the rate of change of the baseline length. Predictions are generated using a suite of Earth models and late Pleistocene ice histories; these include specific combinations of the two that have been proposed in the literature as satisfying a variety of rebound-related geophysical observations from the North American region. A number of these published models are shown to predict rates which differ significantly from the VLBI observations.
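    The point that full three-dimensional site motions enter the baseline length rate follows directly from differentiating the baseline length: dL/dt = (r1 − r2)·(v1 − v2)/|r1 − r2|, so any relative velocity component along the chord between the sites contributes, not just radial uplift. The short sketch below evaluates this expression for illustrative (not actual VLBI) positions and velocities.

```python
import numpy as np

def baseline_length_rate(r1, v1, r2, v2):
    """Rate of change of baseline length from full 3-D positions and velocities.

    dL/dt = (r1 - r2) . (v1 - v2) / |r1 - r2|
    Only the component of relative velocity along the baseline matters, so
    horizontal (tangential) motions contribute whenever the chord is not radial.
    """
    dr = np.asarray(r1, dtype=float) - np.asarray(r2, dtype=float)
    dv = np.asarray(v1, dtype=float) - np.asarray(v2, dtype=float)
    return dr @ dv / np.linalg.norm(dr)

# Illustrative Earth-centred coordinates (m) and velocities (m/yr), not actual sites
r_a = [1.13e6, -4.83e6, 3.99e6]
r_b = [-2.11e6, -3.58e6, 4.88e6]
v_a = [0.002, -0.001, 0.004]   # includes horizontal rebound motion
v_b = [0.000, 0.000, 0.006]    # dominated by radial uplift
rate = baseline_length_rate(r_a, v_a, r_b, v_b)
print(f"baseline length rate = {rate * 1e3:.2f} mm/yr")
```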

  1. Quantified Objectives for Assessing the Contribution of Low Clouds to Climate Sensitivity and Variability

    NASA Astrophysics Data System (ADS)

    Del Genio, A. D.; Platnick, S. E.; Bennartz, R.; Klein, S. A.; Marchand, R.; Oreopoulos, L.; Pincus, R.; Wood, R.

    2016-12-01

    Low clouds are central to leading-order questions in climate and subseasonal weather predictability, and are key to the NRC panel report's goals "to understand the signals of the Earth system under a changing climate" and "for improved models and model projections." To achieve both goals requires a mix of continuity observations to document the components of the changing climate and improvements in retrievals of low cloud and boundary layer dynamical/thermodynamic properties to ensure process-oriented observations that constrain the parameterized physics of the models. We discuss four climate/weather objectives that depend sensitively on understanding the behavior of low clouds: 1. Reduce uncertainty in GCM-inferred climate sensitivity by 50% by constraining subtropical low cloud feedbacks. 2. Eliminate the GCM Southern Ocean shortwave flux bias and its effect on cloud feedback and the position of the midlatitude storm track. 3. Eliminate the double Intertropical Convergence Zone bias in GCMs and its potential effects on tropical precipitation over land and the simulation and prediction of El Niño. 4. Increase the subseasonal predictability of tropical warm pool precipitation from 20 to 30 days. We envision advances in three categories of observations that would be highly beneficial for reaching these goals: 1. More accurate observations will facilitate more thorough evaluation of clouds in GCMs. 2. Better observations of the links between cloud properties and the environmental state will be used as the foundation for parameterization improvements. 3. Sufficiently long and higher quality records of cloud properties and environmental state will constrain low cloud feedback purely observationally. To accomplish this, the greatest need is to replace A-Train instruments, which are nearing end-of-life, with enhanced versions. The requirements are sufficient horizontal and vertical resolution to capture boundary layer cloud and thermodynamic spatial structure; more accurate determination of cloud condensate profiles and optical properties; near-coincident observations to permit multi-instrument retrievals and association with dynamic and thermodynamic structure; global coverage; and, for long-term monitoring, measurement and orbit stability and sufficient mission duration.

  2. Maximum entropy modeling of metabolic networks by constraining growth-rate moments predicts coexistence of phenotypes

    NASA Astrophysics Data System (ADS)

    De Martino, Daniele

    2017-12-01

    In this work, maximum entropy distributions in the space of steady states of metabolic networks are considered upon constraining the first and second moments of the growth rate. Coexistence of fast and slow phenotypes, with bimodal flux distributions, emerges upon considering control of the average growth (optimization) and of its fluctuations (heterogeneity). This is applied to the carbon catabolic core of Escherichia coli, where it quantifies the metabolic activity of slow-growing phenotypes and provides a quantitative map with metabolic fluxes, opening the possibility of detecting coexistence from flux data. A preliminary analysis of data for E. coli cultures in standard conditions shows degeneracy of the inferred parameters, extending into the coexistence region.
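    The maximum-entropy distribution obtained by constraining the first two moments of the growth rate has the tilted form p(v) ∝ exp(aλ(v) + bλ(v)²) over the feasible flux space, with a and b the Lagrange multipliers of the two constraints. The toy sketch below applies that tilt to an assumed one-dimensional growth-rate marginal to show how suitable (a, b) values can split the distribution into coexisting slow and fast modes; the base density and parameter values are illustrative, not derived from the E. coli network.

```python
import numpy as np

# Toy marginal of the steady-state polytope over growth rate lambda in [0, 1]:
# many more flux configurations at intermediate growth than at the extremes.
lam = np.linspace(0.0, 1.0, 1001)
base = lam * (1.0 - lam)          # stand-in for polytope volume at each growth rate

def maxent_distribution(a, b):
    """Maximum-entropy tilt exp(a*lambda + b*lambda^2) of the base density,
    i.e. the form obtained when the first two growth-rate moments are constrained."""
    w = base * np.exp(a * lam + b * lam ** 2)
    return w / w.sum()

for a, b in [(0.0, 0.0), (2.0, 0.0), (-12.0, 12.0)]:
    p = maxent_distribution(a, b)
    mean = (p * lam).sum()
    var = (p * lam ** 2).sum() - mean ** 2
    modes = np.sum((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:]))   # count local maxima
    print(f"a={a:+5.1f} b={b:+5.1f}  <growth>={mean:.2f}  var={var:.3f}  modes={modes}")
```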

  3. Multi-timescale Modeling of Activity-Dependent Metabolic Coupling in the Neuron-Glia-Vasculature Ensemble

    PubMed Central

    Jolivet, Renaud; Coggan, Jay S.; Allaman, Igor; Magistretti, Pierre J.

    2015-01-01

    Glucose is the main energy substrate in the adult brain under normal conditions. Accumulating evidence, however, indicates that lactate produced in astrocytes (a type of glial cell) can also fuel neuronal activity. The quantitative aspects of this so-called astrocyte-neuron lactate shuttle (ANLS) are still debated. To address this question, we developed a detailed biophysical model of the brain’s metabolic interactions. Our model integrates three modeling approaches, the Buxton-Wang model of vascular dynamics, the Hodgkin-Huxley formulation of neuronal membrane excitability and a biophysical model of metabolic pathways. This approach provides a template for large-scale simulations of the neuron-glia-vasculature (NGV) ensemble, and for the first time integrates the respective timescales at which energy metabolism and neuronal excitability occur. The model is constrained by relative neuronal and astrocytic oxygen and glucose utilization, by the concentration of metabolites at rest and by the temporal dynamics of NADH upon activation. These constraints produced four observations. First, a transfer of lactate from astrocytes to neurons emerged in response to activity. Second, constrained by activity-dependent NADH transients, neuronal oxidative metabolism increased first upon activation with a subsequent delayed astrocytic glycolysis increase. Third, the model correctly predicted the dynamics of extracellular lactate and oxygen as observed in vivo in rats. Fourth, the model correctly predicted the temporal dynamics of tissue lactate, of tissue glucose and oxygen consumption, and of the BOLD signal as reported in human studies. These findings not only support the ANLS hypothesis but also provide a quantitative mathematical description of the metabolic activation in neurons and glial cells, as well as of the macroscopic measurements obtained during brain imaging. PMID:25719367

  4. Multi-timescale modeling of activity-dependent metabolic coupling in the neuron-glia-vasculature ensemble.

    PubMed

    Jolivet, Renaud; Coggan, Jay S; Allaman, Igor; Magistretti, Pierre J

    2015-02-01

    Glucose is the main energy substrate in the adult brain under normal conditions. Accumulating evidence, however, indicates that lactate produced in astrocytes (a type of glial cell) can also fuel neuronal activity. The quantitative aspects of this so-called astrocyte-neuron lactate shuttle (ANLS) are still debated. To address this question, we developed a detailed biophysical model of the brain's metabolic interactions. Our model integrates three modeling approaches, the Buxton-Wang model of vascular dynamics, the Hodgkin-Huxley formulation of neuronal membrane excitability and a biophysical model of metabolic pathways. This approach provides a template for large-scale simulations of the neuron-glia-vasculature (NGV) ensemble, and for the first time integrates the respective timescales at which energy metabolism and neuronal excitability occur. The model is constrained by relative neuronal and astrocytic oxygen and glucose utilization, by the concentration of metabolites at rest and by the temporal dynamics of NADH upon activation. These constraints produced four observations. First, a transfer of lactate from astrocytes to neurons emerged in response to activity. Second, constrained by activity-dependent NADH transients, neuronal oxidative metabolism increased first upon activation with a subsequent delayed astrocytic glycolysis increase. Third, the model correctly predicted the dynamics of extracellular lactate and oxygen as observed in vivo in rats. Fourth, the model correctly predicted the temporal dynamics of tissue lactate, of tissue glucose and oxygen consumption, and of the BOLD signal as reported in human studies. These findings not only support the ANLS hypothesis but also provide a quantitative mathematical description of the metabolic activation in neurons and glial cells, as well as of the macroscopic measurements obtained during brain imaging.

  5. Modeling and Simulation of the Gonghe geothermal field (Qinghai, China) Constrained by Geophysical Data

    NASA Astrophysics Data System (ADS)

    Zeng, Z.; Wang, K.; Zhao, X.; Huai, N.; He, R.

    2017-12-01

    The Gonghe geothermal field in Qinghai is important because of its variety of geothermal resource types, and it has become a demonstration area for geothermal development and utilization in China. It has been the topic of numerous geophysical investigations conducted to determine the depth to, and the nature of, the heat source and to image the channel of heat flow. This work focuses on the origin of the geothermal field using numerical simulation constrained by geophysical data. First, by analyzing and inverting a magnetotelluric (MT) profile across the area, we obtain the deep resistivity distribution. Using a gravity anomaly inversion constrained by the resistivity profile, the densities of the basin sediments and the underlying rocks are calculated. Combined with measured rock thermal conductivities, a 2D conceptual geothermal model of the Gonghe area is constructed. An unstructured finite element method is then used to solve the heat conduction equation and simulate the geothermal field. Results of this model were calibrated with temperature data from an observation well, and a good match was achieved between the measured values and the model's predictions. Finally, the geothermal gradient and heat flow distribution of the model are calculated. According to the geophysical results, there is a low-resistivity, low-density region (d5) below the geothermal field. We interpret this anomaly as generated by tectonic motion, which created an upstream channel for mantle-derived heat, so that basement heat flow values there are higher than in other regions. The values predicted using this boundary condition match the measurements well. The simulated heat flow shows that mantle-derived heat migrates through the boundary of the low-resistivity, low-density anomaly to the Gonghe geothermal field, with only a small fraction moving to other regions. Therefore, mantle-derived heat flowing through this tectonic channel provides a continuous heat supply to the Gonghe geothermal field and is the main cause of its abundant geothermal resources.
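    As a simplified illustration of how an assumed basal heat flow and measured conductivities translate into a temperature profile and geothermal gradient, the sketch below integrates steady one-dimensional conduction, dT/dz = q/k(z), through a two-layer column. The conductivities, basal heat flow, and depths are illustrative values, not the Gonghe model's calibrated parameters, and the actual study used a 2D unstructured finite element solution.

```python
import numpy as np

# Illustrative layered column: depth-dependent thermal conductivity (W/m/K)
z = np.linspace(0.0, 3000.0, 301)            # depth in metres
k = np.where(z < 1200.0, 1.8, 2.8)           # sediments over basement (assumed values)

q_basal = 0.102      # basal heat flow in W/m^2 (illustrative "anomalous" value)
T_surface = 10.0     # surface temperature in deg C

# Steady 1-D conduction with no internal heat production: dT/dz = q / k(z)
dz = np.diff(z)
gradient = q_basal / k                        # local geothermal gradient (K/m)
T = T_surface + np.concatenate(
    ([0.0], np.cumsum(0.5 * (gradient[:-1] + gradient[1:]) * dz)))

print(f"gradient in sediments: {gradient[0] * 1000:.1f} K/km, "
      f"in basement: {gradient[-1] * 1000:.1f} K/km")
print(f"temperature at 3 km depth: {T[-1]:.1f} deg C")
```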

  6. Search for standard model production of four top quarks with same-sign and multilepton final states in proton-proton collisions at √s = 13 TeV

    NASA Astrophysics Data System (ADS)

    Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Ambrogi, F.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Escalante Del Valle, A.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Grossmann, J.; Hrubec, J.; Jeitler, M.; König, A.; Krammer, N.; Krätschmer, I.; Liko, D.; Madlener, T.; Mikulec, I.; Pree, E.; Rad, N.; Rohringer, H.; Schieck, J.; Schöfbeck, R.; Spanring, M.; Spitzbart, D.; Taurok, A.; Waltenberger, W.; Wittmann, J.; Wulz, C.-E.; Zarucki, M.; Chekhovsky, V.; Mossolov, V.; Suarez Gonzalez, J.; De Wolf, E. A.; Di Croce, D.; Janssen, X.; Lauwers, J.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; De Bruyn, I.; De Clercq, J.; Deroover, K.; Flouris, G.; Lontkovskyi, D.; Lowette, S.; Marchesini, I.; Moortgat, S.; Moreels, L.; Python, Q.; Skovpen, K.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Parijs, I.; Beghin, D.; Bilin, B.; Brun, H.; Clerbaux, B.; De Lentdecker, G.; Delannoy, H.; Dorney, B.; Fasanella, G.; Favart, L.; Goldouzian, R.; Grebenyuk, A.; Kalsi, A. K.; Lenzi, T.; Luetic, J.; Maerschalk, T.; Marinov, A.; Seva, T.; Starling, E.; Vander Velde, C.; Vanlaer, P.; Vannerom, D.; Yonamine, R.; Zenoni, F.; Cornelis, T.; Dobur, D.; Fagot, A.; Gul, M.; Khvastunov, I.; Poyraz, D.; Roskas, C.; Salva, S.; Trocino, D.; Tytgat, M.; Verbeke, W.; Zaganidis, N.; Bakhshiansohi, H.; Bondu, O.; Brochet, S.; Bruno, G.; Caputo, C.; Caudron, A.; David, P.; De Visscher, S.; Delaere, C.; Delcourt, M.; Francois, B.; Giammanco, A.; Komm, M.; Krintiras, G.; Lemaitre, V.; Magitteri, A.; Mertens, A.; Musich, M.; Piotrzkowski, K.; Quertenmont, L.; Saggio, A.; Vidal Marono, M.; Wertz, S.; Zobec, J.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. A.; Brito, L.; Correia Silva, G.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Coelho, E.; Da Costa, E. M.; Da Silveira, G. G.; De Jesus Damiao, D.; Fonseca De Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Melo De Almeida, M.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Sanchez Rosas, L. J.; Santoro, A.; Sznajder, A.; Thiel, M.; Tonelli Manganote, E. J.; Torres Da Silva De Araujo, F.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Misheva, M.; Rodozov, M.; Shopova, M.; Sultanov, G.; Dimitrov, A.; Litov, L.; Pavlov, B.; Petkov, P.; Fang, W.; Gao, X.; Yuan, L.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Chen, Y.; Jiang, C. H.; Leggat, D.; Liao, H.; Liu, Z.; Romeo, F.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Yazgan, E.; Yu, T.; Zhang, H.; Zhao, J.; Ban, Y.; Chen, G.; Li, J.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Zhang, F.; Wang, Y.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; González Hernández, C. F.; Ruiz Alvarez, J. D.; Segura Delgado, M. A.; Courbon, B.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Sculac, T.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Ferencek, D.; Kadija, K.; Mesic, B.; Starodumov, A.; Susa, T.; Ather, M. W.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Rykaczewski, H.; Finger, M.; Finger, M.; Carrera Jarrin, E.; Assran, Y.; Elgammal, S.; Mahrous, A.; Bhowmik, S.; Dewanjee, R. 
K.; Kadastik, M.; Perrini, L.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Kirschenmann, H.; Pekkanen, J.; Voutilainen, M.; Havukainen, J.; Heikkilä, J. K.; Järvinen, T.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Laurila, S.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Siikonen, H.; Tuominen, E.; Tuominiemi, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Faure, J. L.; Ferri, F.; Ganjour, S.; Ghosh, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Kucher, I.; Leloup, C.; Locci, E.; Machet, M.; Malcles, J.; Negro, G.; Rander, J.; Rosowsky, A.; Sahin, M. Ö.; Titov, M.; Abdulsalam, A.; Amendola, C.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Charlot, C.; Granier de Cassagnac, R.; Jo, M.; Lisniak, S.; Lobanov, A.; Martin Blanco, J.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Stahl Leiton, A. G.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Zghiche, A.; Agram, J.-L.; Andrea, J.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Drouhin, F.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Jansová, M.; Juillot, P.; Le Bihan, A.-C.; Tonon, N.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Chierici, R.; Contardo, D.; Depasse, P.; El Mamouni, H.; Fay, J.; Finco, L.; Gascon, S.; Gouzevitch, M.; Grenier, G.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Popov, A.; Sordini, V.; Vander Donckt, M.; Viret, S.; Zhang, S.; Khvedelidze, A.; Bagaturia, I.; Autermann, C.; Feld, L.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Preuten, M.; Schomakers, C.; Schulz, J.; Teroerde, M.; Wittmer, B.; Zhukov, V.; Albert, A.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Knutzen, S.; Merschmeyer, M.; Meyer, A.; Millet, P.; Mukherjee, S.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Teyssier, D.; Thüer, S.; Flügge, G.; Kargoll, B.; Kress, T.; Künsken, A.; Müller, T.; Nehrkorn, A.; Nowack, A.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Arndt, T.; Asawatangtrakuldee, C.; Beernaert, K.; Behnke, O.; Behrens, U.; Bermúdez Martínez, A.; Bin Anuar, A. A.; Borras, K.; Botta, V.; Campbell, A.; Connor, P.; Contreras-Campana, C.; Costanza, F.; Diez Pardos, C.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Eren, E.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Grados Luyando, J. M.; Grohsjean, A.; Gunnellini, P.; Guthoff, M.; Harb, A.; Hauk, J.; Hempel, M.; Jung, H.; Kasemann, M.; Keaveney, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Lelek, A.; Lenz, T.; Leonard, J.; Lipka, K.; Lohmann, W.; Mankel, R.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Missiroli, M.; Mittag, G.; Mnich, J.; Mussgiller, A.; Ntomari, E.; Pitzl, D.; Raspereza, A.; Savitskyi, M.; Saxena, P.; Shevchenko, R.; Stefaniuk, N.; Van Onsem, G. P.; Walsh, R.; Wen, Y.; Wichmann, K.; Wissing, C.; Zenaiev, O.; Aggleton, R.; Bein, S.; Blobel, V.; Centis Vignali, M.; Dreyer, T.; Garutti, E.; Gonzalez, D.; Haller, J.; Hinzmann, A.; Hoffmann, M.; Karavdina, A.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Kurz, S.; Lapsien, T.; Marconi, D.; Meyer, M.; Niedziela, M.; Nowatschin, D.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Scharf, C.; Schleper, P.; Schmidt, A.; Schumann, S.; Schwandt, J.; Sonneveld, J.; Stadie, H.; Steinbrück, G.; Stober, F. 
M.; Stöver, M.; Tholen, H.; Troendle, D.; Usai, E.; Vanhoefer, A.; Vormwald, B.; Akbiyik, M.; Barth, C.; Baselga, M.; Baur, S.; Butz, E.; Caspart, R.; Chwalek, T.; Colombo, F.; De Boer, W.; Dierlamm, A.; Faltermann, N.; Freund, B.; Friese, R.; Giffels, M.; Harrendorf, M. A.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Kassel, F.; Kudella, S.; Mildner, H.; Mozer, M. U.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Schröder, M.; Shvetsov, I.; Sieber, G.; Simonis, H. J.; Ulrich, R.; Wayand, S.; Weber, M.; Weiler, T.; Williamson, S.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Kyriakis, A.; Loukas, D.; Topsis-Giotis, I.; Karathanasis, G.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Tziaferi, E.; Kousouris, K.; Evangelou, I.; Foudas, C.; Gianneios, P.; Katsoulis, P.; Kokkas, P.; Mallios, S.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Strologas, J.; Triantis, F. A.; Tsitsonis, D.; Csanad, M.; Filipovic, N.; Pasztor, G.; Surányi, O.; Veres, G. I.; Bencze, G.; Hajdu, C.; Horvath, D.; Hunyadi, Á.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Beni, N.; Czellar, S.; Karancsi, J.; Makovec, A.; Molnar, J.; Szillasi, Z.; Bartók, M.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Choudhury, S.; Komaragiri, J. R.; Bahinipati, S.; Mal, P.; Mandal, K.; Nayak, A.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Dhingra, N.; Kaur, A.; Kaur, M.; Kaur, S.; Kumar, R.; Kumari, P.; Mehta, A.; Singh, J. B.; Walia, G.; Kumar, Ashok; Shah, Aashaq; Bhardwaj, A.; Chauhan, S.; Choudhary, B. C.; Garg, R. B.; Keshri, S.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Sharma, R.; Bhardwaj, R.; Bhattacharya, R.; Bhattacharya, S.; Bhawandeep, U.; Dey, S.; Dutt, S.; Dutta, S.; Ghosh, S.; Majumdar, N.; Modak, A.; Mondal, K.; Mukhopadhyay, S.; Nandan, S.; Purohit, A.; Roy, A.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Thakur, S.; Behera, P. K.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Netrakanti, P. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Dugad, S.; Mahakud, B.; Mitra, S.; Mohanty, G. B.; Sur, N.; Sutar, B.; Banerjee, S.; Bhattacharya, S.; Chatterjee, S.; Das, P.; Guchait, M.; Jain, Sa.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Sarkar, T.; Wickramage, N.; Chauhan, S.; Dube, S.; Hegde, V.; Kapoor, A.; Kothekar, K.; Pandey, S.; Rane, A.; Sharma, S.; Chenarani, S.; Eskandari Tadavani, E.; Etesami, S. M.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Errico, F.; Fiore, L.; Iaselli, G.; Lezki, S.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Sharma, A.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Battilana, C.; Bonacorsi, D.; Borgonovi, L.; Braibant-Giacomelli, S.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. 
P.; Tosi, N.; Albergo, S.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Chatterjee, K.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Lenzi, P.; Meschini, M.; Paoletti, S.; Russo, L.; Sguazzoni, G.; Strom, D.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Ravera, F.; Robutti, E.; Tosi, S.; Benaglia, A.; Beschi, A.; Brianza, L.; Brivio, F.; Ciriolo, V.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Ghezzi, A.; Govoni, P.; Malberti, M.; Malvezzi, S.; Manzoni, R. A.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Pigazzini, S.; Ragazzi, S.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; Di Guida, S.; Fabozzi, F.; Fienga, F.; Iorio, A. O. M.; Khan, W. A.; Lista, L.; Meola, S.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Benato, L.; Boletti, A.; Carlin, R.; Carvalho Antunes De Oliveira, A.; Checchia, P.; Dall'Osso, M.; De Castro Manzano, P.; Dorigo, T.; Dosselli, U.; Gasparini, F.; Gasparini, U.; Gozzelino, A.; Lacaprara, S.; Lujan, P.; Margoni, M.; Meneguzzo, A. T.; Pozzobon, N.; Ronchese, P.; Rossin, R.; Simonetto, F.; Torassa, E.; Zanetti, M.; Zotto, P.; Zumerle, G.; Braghieri, A.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Ressegotti, M.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Biasini, M.; Bilei, G. M.; Cecchi, C.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Leonardi, R.; Manoni, E.; Mantovani, G.; Mariani, V.; Menichelli, M.; Rossi, A.; Santocchia, A.; Spiga, D.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Boccali, T.; Borrello, L.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Fedi, G.; Giannini, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Manca, E.; Mandorli, G.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; Cipriani, M.; Daci, N.; Del Re, D.; Di Marco, E.; Diemoz, M.; Gelli, S.; Longo, E.; Margaroli, F.; Marzocchi, B.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bartosik, N.; Bellan, R.; Biino, C.; Cartiglia, N.; Cenna, F.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Kiani, B.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Monteil, E.; Monteno, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Romero, A.; Ruspa, M.; Sacchi, R.; Shchelina, K.; Sola, V.; Solano, A.; Staiano, A.; Traczyk, P.; Belforte, S.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Zanetti, A.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Lee, J.; Lee, S.; Lee, S. W.; Moon, C. S.; Oh, Y. D.; Sekmen, S.; Son, D. C.; Yang, Y. C.; Kim, H.; Moon, D. H.; Oh, G.; Brochero Cifuentes, J. A.; Goh, J.; Kim, T. J.; Cho, S.; Choi, S.; Go, Y.; Gyun, D.; Ha, S.; Hong, B.; Jo, Y.; Kim, Y.; Lee, K.; Lee, K. S.; Lee, S.; Lim, J.; Park, S. K.; Roh, Y.; Almond, J.; Kim, J.; Kim, J. S.; Lee, H.; Lee, K.; Nam, K.; Oh, S. B.; Radburn-Smith, B. C.; Seo, S. H.; Yang, U. K.; Yoo, H. D.; Yu, G. B.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Choi, Y.; Hwang, C.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Md Ali, M. A. B.; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. N.; Zolkapli, Z.; Reyes-Almanza, R.; Ramirez-Sanchez, G.; Duran-Osuna, M. C.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Rabadan-Trejo, R. 
I.; Lopez-Fernandez, R.; Mejia Guisao, J.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Oropeza Barrera, C.; Vazquez Valencia, F.; Eysermans, J.; Pedraza, I.; Salazar Ibarguen, H. A.; Uribe Estrada, C.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Saddique, A.; Shah, M. A.; Shoaib, M.; Waqas, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Szleper, M.; Zalewski, P.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Pyskir, A.; Walczak, M.; Bargassa, P.; Beirão Da Cruz E. Silva, C.; Di Francesco, A.; Faccioli, P.; Galinhas, B.; Gallinaro, M.; Hollar, J.; Leonardo, N.; Lloret Iglesias, L.; Nemallapudi, M. V.; Seixas, J.; Strong, G.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Baginyan, A.; Golunov, A.; Golutvin, I.; Karjavin, V.; Korenkov, V.; Kozlov, G.; Lanev, A.; Malakhov, A.; Matveev, V.; Mitsyn, V. V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Shmatov, S.; Smirnov, V.; Voytishin, N.; Yuldashev, B. S.; Zarubin, A.; Zhiltsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sosnov, D.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Stepennov, A.; Stolin, V.; Toms, M.; Vlasov, E.; Zhokin, A.; Aushev, T.; Bylinkin, A.; Chistov, R.; Danilov, M.; Parygin, P.; Philippov, D.; Polikarpov, S.; Tarkovskii, E.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Rusakov, S. V.; Terkulov, A.; Baskakov, A.; Belyaev, A.; Boos, E.; Bunichev, V.; Dubinin, M.; Dudko, L.; Gribushin, A.; Klyukhin, V.; Korneeva, N.; Lokhtin, I.; Miagkov, I.; Obraztsov, S.; Perfilov, M.; Savrin, V.; Volkov, P.; Blinov, V.; Shtol, D.; Skovpen, Y.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Elumakhov, D.; Godizov, A.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Mandrik, P.; Petrov, V.; Ryutin, R.; Sobol, A.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Cirkovic, P.; Devetak, D.; Dordevic, M.; Milosevic, J.; Alcaraz Maestre, J.; Bachiller, I.; Barrio Luna, M.; Cerrada, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Moran, D.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Redondo, I.; Romero, L.; Soares, M. S.; Triossi, A.; Álvarez Fernández, A.; Albajar, C.; de Trocóniz, J. F.; Cuevas, J.; Erice, C.; Fernandez Menendez, J.; Gonzalez Caballero, I.; González Fernández, J. R.; Palencia Cortezon, E.; Sanchez Cruz, S.; Vischia, P.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Chazin Quero, B.; Curras, E.; Duarte Campderros, J.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Martinez Rivero, C.; Martinez Ruiz del Arbol, P.; Matorras, F.; Piedra Gomez, J.; Rodrigo, T.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Akgun, B.; Auffray, E.; Baillon, P.; Ball, A. 
H.; Barney, D.; Bendavid, J.; Bianco, M.; Bocci, A.; Botta, C.; Camporesi, T.; Castello, R.; Cepeda, M.; Cerminara, G.; Chapon, E.; Chen, Y.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Roeck, A.; Deelen, N.; Dobson, M.; du Pree, T.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Everaerts, P.; Fallavollita, F.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gilbert, A.; Gill, K.; Glege, F.; Gulhan, D.; Harris, P.; Hegeman, J.; Innocente, V.; Jafari, A.; Janot, P.; Karacheban, O.; Kieseler, J.; Knünz, V.; Kornmayer, A.; Kortelainen, M. J.; Krammer, M.; Lange, C.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Malgeri, L.; Mannelli, M.; Martelli, A.; Meijers, F.; Merlin, J. A.; Mersi, S.; Meschi, E.; Milenovic, P.; Moortgat, F.; Mulders, M.; Neugebauer, H.; Ngadiuba, J.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Rabady, D.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Seidel, M.; Selvaggi, M.; Sharma, A.; Silva, P.; Sphicas, P.; Stakia, A.; Steggemann, J.; Stoye, M.; Tosi, M.; Treille, D.; Tsirou, A.; Veckalns, V.; Verweij, M.; Zeuner, W. D.; Bertl, W.; Caminada, L.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Rohe, T.; Wiederkehr, S. A.; Backhaus, M.; Bäni, L.; Berger, P.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Dorfer, C.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Klijnsma, T.; Lustermann, W.; Mangano, B.; Marionneau, M.; Meinhard, M. T.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrin, G.; Perrozzi, L.; Quittnat, M.; Reichmann, M.; Sanz Becerra, D. A.; Schönenberger, M.; Shchutska, L.; Tavolaro, V. R.; Theofilatos, K.; Vesterbacka Olsson, M. L.; Wallny, R.; Zhu, D. H.; Aarrestad, T. K.; Amsler, C.; Canelli, M. F.; De Cosa, A.; Del Burgo, R.; Donato, S.; Galloni, C.; Hreus, T.; Kilminster, B.; Pinna, D.; Rauco, G.; Robmann, P.; Salerno, D.; Schweiger, K.; Seitz, C.; Takahashi, Y.; Zucchetta, A.; Candelise, V.; Chang, Y. H.; Cheng, K. y.; Doan, T. H.; Jain, Sh.; Khurana, R.; Kuo, C. M.; Lin, W.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Chang, P.; Chao, Y.; Chen, K. F.; Chen, P. H.; Fiori, F.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Paganis, E.; Psallidas, A.; Steen, A.; Tsai, J. F.; Asavapibhop, B.; Kovitanggoon, K.; Singh, G.; Srimanobhas, N.; Bat, A.; Boran, F.; Damarseckin, S.; Demiroglu, Z. S.; Dozen, C.; Eskut, E.; Girgis, S.; Gokbulut, G.; Guler, Y.; Hos, I.; Kangal, E. E.; Kara, O.; Kayis Topaksu, A.; Kiminsu, U.; Oglakci, M.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Tok, U. G.; Topakli, H.; Tali, B.; Turkcapar, S.; Zorbakir, I. S.; Zorbilmez, C.; Karapinar, G.; Ocalan, K.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Tekten, S.; Yetkin, E. A.; Agaras, M. N.; Atay, S.; Cakir, A.; Cankocak, K.; Komurcu, Y.; Grynyov, B.; Levchuk, L.; Ball, F.; Beck, L.; Brooke, J. J.; Burns, D.; Clement, E.; Cussans, D.; Davignon, O.; Flacher, H.; Goldstein, J.; Heath, G. P.; Heath, H. F.; Kreczko, L.; Newbold, D. M.; Paramesvaran, S.; Sakuma, T.; Seif El Nasr-storey, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Linacre, J.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Williams, T.; Womersley, W. 
J.; Auzinger, G.; Bainbridge, R.; Bloch, P.; Borg, J.; Breeze, S.; Buchmuller, O.; Bundock, A.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Di Maria, R.; Elwood, A.; Haddad, Y.; Hall, G.; Iles, G.; James, T.; Lane, R.; Laner, C.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mastrolorenzo, L.; Matsushita, T.; Nash, J.; Nikitenko, A.; Palladino, V.; Pesaresi, M.; Raymond, D. M.; Richards, A.; Rose, A.; Scott, E.; Seez, C.; Shtipliyski, A.; Summers, S.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Wardle, N.; Winterbottom, D.; Wright, J.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Reid, I. D.; Teodorescu, L.; Zahid, S.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Smith, C.; Bartek, R.; Dominguez, A.; Buccilli, A.; Cooper, S. I.; Henderson, C.; Rumerio, P.; West, C.; Arcaro, D.; Avetisyan, A.; Bose, T.; Gastler, D.; Rankin, D.; Richardson, C.; Rohlf, J.; Sulak, L.; Zou, D.; Benelli, G.; Cutts, D.; Hadley, M.; Hakala, J.; Heintz, U.; Hogan, J. M.; Kwok, K. H. M.; Laird, E.; Landsberg, G.; Lee, J.; Mao, Z.; Narain, M.; Pazzini, J.; Piperov, S.; Sagir, S.; Syarif, R.; Yu, D.; Band, R.; Brainerd, C.; Breedon, R.; Burns, D.; Calderon De La Barca Sanchez, M.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Flores, C.; Funk, G.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Shalhout, S.; Shi, M.; Smith, J.; Stolp, D.; Tos, K.; Tripathi, M.; Wang, Z.; Bachtis, M.; Bravo, C.; Cousins, R.; Dasgupta, A.; Florent, A.; Hauser, J.; Ignatenko, M.; Mccoll, N.; Regnard, S.; Saltzberg, D.; Schnaible, C.; Valuev, V.; Bouvier, E.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Ghiasi Shirazi, S. M. A.; Hanson, G.; Heilman, J.; Karapostoli, G.; Kennedy, E.; Lacroix, F.; Long, O. R.; Olmedo Negrete, M.; Paneva, M. I.; Si, W.; Wang, L.; Wei, H.; Wimpenny, S.; Yates, B. R.; Branson, J. G.; Cittolin, S.; Derdzinski, M.; Gerosa, R.; Gilbert, D.; Hashemi, B.; Holzner, A.; Klein, D.; Kole, G.; Krutelyov, V.; Letts, J.; Masciovecchio, M.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Wood, J.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Amin, N.; Bhandari, R.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Franco Sevilla, M.; Gouskos, L.; Heller, R.; Incandela, J.; Ovcharova, A.; Qu, H.; Richman, J.; Stuart, D.; Suarez, I.; Yoo, J.; Anderson, D.; Bornheim, A.; Bunn, J.; Lawhorn, J. M.; Newman, H. B.; Nguyen, T. Q.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Wilkinson, R.; Xie, S.; Zhang, Z.; Zhu, R. Y.; Andrews, M. B.; Ferguson, T.; Mudholkar, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Weinberg, M.; Cumalat, J. P.; Ford, W. T.; Jensen, F.; Johnson, A.; Krohn, M.; Leontsinis, S.; Mulholland, T.; Stenson, K.; Ulmer, K. A.; Wagner, S. R.; Alexander, J.; Chaves, J.; Chu, J.; Dittmer, S.; Mcdermott, K.; Mirman, N.; Patterson, J. R.; Quach, D.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Tan, S. M.; Tao, Z.; Thom, J.; Tucker, J.; Wittich, P.; Zientek, M.; Abdullin, S.; Albrow, M.; Alyari, M.; Apollinari, G.; Apresyan, A.; Apyan, A.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Canepa, A.; Cerati, G. B.; Cheung, H. W. K.; Chlebana, F.; Cremonesi, M.; Duarte, J.; Elvira, V. D.; Freeman, J.; Gecse, Z.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hanlon, J.; Harris, R. 
M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Lincoln, D.; Lipton, R.; Liu, M.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Magini, N.; Marraffino, J. M.; Mason, D.; McBride, P.; Merkel, P.; Mrenna, S.; Nahn, S.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Ristori, L.; Schneider, B.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Stoynev, S.; Strait, J.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Wang, M.; Weber, H. A.; Whitbeck, A.; Wu, W.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Brinkerhoff, A.; Carnes, A.; Carver, M.; Curry, D.; Field, R. D.; Furic, I. K.; Gleyzer, S. V.; Joshi, B. M.; Konigsberg, J.; Korytov, A.; Kotov, K.; Ma, P.; Matchev, K.; Mei, H.; Mitselmakher, G.; Shi, K.; Sperka, D.; Terentyev, N.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Joshi, Y. R.; Linn, S.; Markowitz, P.; Rodriguez, J. L.; Ackert, A.; Adams, T.; Askew, A.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Kolberg, T.; Martinez, G.; Perry, T.; Prosper, H.; Saha, A.; Santra, A.; Sharma, V.; Yohay, R.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Cavanaugh, R.; Chen, X.; Evdokimov, O.; Gerber, C. E.; Hangal, D. A.; Hofman, D. J.; Jung, K.; Kamin, J.; Sandoval Gonzalez, I. D.; Tonjes, M. B.; Trauger, H.; Varelas, N.; Wang, H.; Wu, Z.; Zhang, J.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Blumenfeld, B.; Cocoros, A.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Roskes, J.; Sarica, U.; Swartz, M.; Xiao, M.; You, C.; Al-bataineh, A.; Baringer, P.; Bean, A.; Boren, S.; Bowen, J.; Castle, J.; Khalil, S.; Kropivnitskaya, A.; Majumder, D.; Mcbrayer, W.; Murray, M.; Rogan, C.; Royon, C.; Sanders, S.; Schmitz, E.; Tapia Takaki, J. D.; Wang, Q.; Ivanov, A.; Kaadze, K.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Rebassoo, F.; Wright, D.; Baden, A.; Baron, O.; Belloni, A.; Eno, S. C.; Feng, Y.; Ferraioli, C.; Hadley, N. J.; Jabeen, S.; Jeng, G. Y.; Kellogg, R. G.; Kunkle, J.; Mignerey, A. C.; Ricci-Tam, F.; Shin, Y. H.; Skuja, A.; Tonwar, S. C.; Abercrombie, D.; Allen, B.; Azzolini, V.; Barbieri, R.; Baty, A.; Bauer, G.; Bi, R.; Brandt, S.; Busza, W.; Cali, I. A.; D'Alfonso, M.; Demiragli, Z.; Gomez Ceballos, G.; Goncharov, M.; Hsu, D.; Hu, M.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Maier, B.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Sumorok, K.; Tatar, K.; Velicanu, D.; Wang, J.; Wang, T. W.; Wyslouch, B.; Benvenuti, A. C.; Chatterjee, R. M.; Evans, A.; Hansen, P.; Hiltbrand, J.; Kalafut, S.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Turkewitz, J.; Wadud, M. A.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Claes, D. R.; Fangmeier, C.; Golf, F.; Gonzalez Suarez, R.; Kamalieddin, R.; Kravchenko, I.; Monroy, J.; Siado, J. E.; Snow, G. 
R.; Stieger, B.; Dolen, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Nguyen, D.; Parker, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Freer, C.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Orimoto, T.; Teixeira De Lima, R.; Wamorkar, T.; Wang, B.; Wisecarver, A.; Wood, D.; Bhattacharya, S.; Charaf, O.; Hahn, K. A.; Mucia, N.; Odell, N.; Schmitt, M. H.; Sung, K.; Trovato, M.; Velasco, M.; Bucci, R.; Dev, N.; Hildreth, M.; Hurtado Anampa, K.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Li, W.; Loukas, N.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Siddireddy, P.; Smith, G.; Taroni, S.; Wayne, M.; Wightman, A.; Wolf, M.; Woodard, A.; Alimena, J.; Antonelli, L.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Francis, B.; Hart, A.; Hill, C.; Ji, W.; Ling, T. Y.; Liu, B.; Luo, W.; Winer, B. L.; Wulsin, H. W.; Cooperstein, S.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Higginbotham, S.; Kalogeropoulos, A.; Lange, D.; Luo, J.; Marlow, D.; Mei, K.; Ojalvo, I.; Olsen, J.; Palmer, C.; Piroué, P.; Stickland, D.; Tully, C.; Malik, S.; Norberg, S.; Barker, A.; Barnes, V. E.; Das, S.; Folgueras, S.; Gutay, L.; Jones, M.; Jung, A. W.; Khatiwada, A.; Miller, D. H.; Neumeister, N.; Peng, C. C.; Qiu, H.; Schulte, J. F.; Sun, J.; Wang, F.; Xiao, R.; Xie, W.; Cheng, T.; Parashar, N.; Stupak, J.; Chen, Z.; Ecklund, K. M.; Freed, S.; Geurts, F. J. M.; Guilbaud, M.; Kilpatrick, M.; Li, W.; Michlin, B.; Padley, B. P.; Roberts, J.; Rorie, J.; Shi, W.; Tu, Z.; Zabel, J.; Zhang, A.; Bodek, A.; de Barbaro, P.; Demina, R.; Duh, Y. T.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Hindrichs, O.; Khukhunaishvili, A.; Lo, K. H.; Tan, P.; Verzetti, M.; Ciesielski, R.; Goulianos, K.; Mesropian, C.; Agapitos, A.; Chou, J. P.; Gershtein, Y.; Gómez Espinosa, T. A.; Halkiadakis, E.; Heindl, M.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Kyriacou, S.; Lath, A.; Montalvo, R.; Nash, K.; Osherson, M.; Saka, H.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Delannoy, A. G.; Heideman, J.; Riley, G.; Rose, K.; Spanier, S.; Thapa, K.; Bouhali, O.; Castaneda Hernandez, A.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Kamon, T.; Mueller, R.; Pakhotin, Y.; Patel, R.; Perloff, A.; Perniè, L.; Rathjens, D.; Safonov, A.; Tatarinov, A.; Akchurin, N.; Damgov, J.; De Guio, F.; Dudero, P. R.; Faulkner, J.; Gurpinar, E.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Mengke, T.; Muthumuni, S.; Peltola, T.; Undleeb, S.; Volobouev, I.; Wang, Z.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Melo, A.; Ni, H.; Padeken, K.; Sheldon, P.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Barria, P.; Cox, B.; Hirosky, R.; Joyce, M.; Ledovskoy, A.; Li, H.; Neu, C.; Sinthuprasith, T.; Wang, Y.; Wolfe, E.; Xia, F.; Harr, R.; Karchin, P. E.; Poudyal, N.; Sturdy, J.; Thapa, P.; Zaleski, S.; Brodski, M.; Buchanan, J.; Caillol, C.; Carlsmith, D.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Herndon, M.; Hervé, A.; Hussain, U.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Rekovic, V.; Ruggles, T.; Savin, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.

    2018-02-01

    A search for standard model production of four top quarks (t\\overline{t} t\\overline{t} ) is reported using events containing at least three leptons (e, μ) or a same-sign lepton pair. The events are produced in proton-proton collisions at a center-of-mass energy of 13 {TeV} at the LHC, and the data sample, recorded in 2016, corresponds to an integrated luminosity of 35.9 {fb}^{-1}. Jet multiplicity and flavor are used to enhance signal sensitivity, and dedicated control regions are used to constrain the dominant backgrounds. The observed and expected signal significances are, respectively, 1.6 and 1.0 standard deviations, and the t\\overline{t} t\\overline{t} cross section is measured to be 16.9^{+13.8}_{-11.4} {fb}, in agreement with next-to-leading-order standard model predictions. These results are also used to constrain the Yukawa coupling between the top quark and the Higgs boson to be less than 2.1 times its expected standard model value at 95% confidence level.

  7. Constrained positive matrix factorization: Elemental ratios, spatial distinction, and chemical transport model source contributions

    NASA Astrophysics Data System (ADS)

    Sturtz, Timothy M.

    Source apportionment models attempt to untangle the relationship between pollution sources and the impacts at downwind receptors. Two frameworks of source apportionment models exist: source-oriented and receptor-oriented. Source-based apportionment models use presumed emissions and atmospheric processes to estimate the downwind source contributions. Conversely, receptor-based models leverage speciated concentration data from downwind receptors and apply statistical methods to predict source contributions. Integration of both source-oriented and receptor-oriented models could lead to a better understanding of the implications sources have for the environment and society. The research presented here investigated three different types of constraints applied to the Positive Matrix Factorization (PMF) receptor model within the framework of the Multilinear Engine (ME-2): element ratio constraints, spatial separation constraints, and chemical transport model (CTM) source attribution constraints. PM10-2.5 mass and trace element concentrations were measured in Winston-Salem, Chicago, and St. Paul at up to 60 sites per city during two different seasons in 2010. PMF was used to explore the underlying sources of variability. Information on previously reported PM10-2.5 tire and brake wear profiles was used to constrain these features in PMF by prior specification of selected species ratios. We also modified PMF to allow for combining the measurements from all three cities into a single model while preserving city-specific soil features. Relatively minor differences were observed between model predictions with and without the prior ratio constraints, increasing confidence in our ability to identify separate brake wear and tire wear features. Using separate data, source contributions to total fine particle carbon predicted by a CTM were incorporated into the PMF receptor model to form a receptor-oriented hybrid model. The level of influence of the CTM versus traditional PMF was varied using a weighting parameter applied to an objective function as implemented in ME-2. The resulting hybrid model was used to quantify the contributions of total carbon from both wildfires and biogenic sources at two Interagency Monitoring of Protected Visual Environments (IMPROVE) monitoring sites, Monture and Sula Peak, Montana, from 2006 through 2008.
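
    At its core, PMF approximates the sample-by-species concentration matrix as a product of nonnegative source-contribution and source-profile matrices. The sketch below, on made-up data, uses plain (unweighted) Lee-Seung multiplicative updates rather than the uncertainty-weighted objective and ratio constraints of EPA PMF/ME-2; the function name pmf_factorize and all values are hypothetical and only illustrate the factorization step.

```python
import numpy as np

def pmf_factorize(X, n_factors, n_iter=2000, eps=1e-9, seed=0):
    """Plain nonnegative factorization X ~ G @ F via Lee-Seung multiplicative
    updates. Real PMF/ME-2 minimizes an uncertainty-weighted objective and can
    penalize deviations from prescribed profile ratios; both are omitted here."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, n_factors))   # source contributions (samples x factors)
    F = rng.random((n_factors, m))   # source profiles (factors x species)
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

# toy usage: 100 samples of 12 species generated from 3 hypothetical sources
rng = np.random.default_rng(1)
X_true = rng.random((100, 3)) @ rng.random((3, 12))
G, F = pmf_factorize(X_true, n_factors=3)
print("relative reconstruction error:",
      np.linalg.norm(X_true - G @ F) / np.linalg.norm(X_true))
```

    In the constrained variants described above, additional penalty terms on selected profile ratios (for example brake and tire wear species) would be added to the objective before the updates are derived.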

  8. Testing constrained sequential dominance models of neutrinos

    NASA Astrophysics Data System (ADS)

    Björkeroth, Fredrik; King, Stephen F.

    2015-12-01

    Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We analyze a class of CSD(n) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the ‘atmospheric’ and ‘solar’ neutrino masses with Yukawa couplings to (ν_e, ν_μ, ν_τ) proportional to (0,1,1) and (1,n,n-2), respectively, where n is a positive integer. These coupling patterns may arise in indirect family symmetry models based on A_4. With two right-handed neutrinos, using a χ² test, we find a good agreement with data for CSD(3) and CSD(4) where the entire Pontecorvo-Maki-Nakagawa-Sakata mixing matrix is controlled by a single phase η, which takes simple values, leading to accurate predictions for mixing angles and the magnitude of the oscillation phase |δ_CP|. We carefully study the perturbing effect of a third ‘decoupled’ right-handed neutrino, leading to a bound on the lightest physical neutrino mass m_1 ≲ 1 meV for the viable cases, corresponding to a normal neutrino mass hierarchy. We also discuss a direct link between the oscillation phase δ_CP and leptogenesis in CSD(n) due to the same see-saw phase η appearing in both the neutrino mass matrix and leptogenesis.
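
    The structure described above can be made concrete with a few lines of linear algebra: build the two-right-handed-neutrino mass matrix from the (0,1,1) and (1,n,n-2) alignments and a single phase η, then read the mixing angles off the diagonalizing matrix. The sketch below is a minimal illustration of that construction; the input values of m_a, m_b and η are placeholders rather than the paper's best-fit values, and Majorana phases are ignored.

```python
import numpy as np

def csd_mass_matrix(n, ma, mb, eta):
    """Light-neutrino mass matrix for CSD(n) with two right-handed neutrinos:
    m_nu = ma * a a^T + mb * exp(i*eta) * b b^T, with a = (0,1,1) and
    b = (1,n,n-2) in the (nu_e, nu_mu, nu_tau) flavour basis."""
    a = np.array([0.0, 1.0, 1.0])
    b = np.array([1.0, float(n), float(n) - 2.0])
    return ma * np.outer(a, a) + mb * np.exp(1j * eta) * np.outer(b, b)

def masses_and_angles(m_nu):
    """Masses and mixing angles from |U|, obtained by diagonalizing m m^dagger;
    Majorana phases are not recovered in this sketch."""
    m2, U = np.linalg.eigh(m_nu @ m_nu.conj().T)   # ascending -> normal ordering
    absU = np.abs(U)
    s13 = absU[0, 2]
    s12 = absU[0, 1] / np.sqrt(1.0 - s13**2)
    s23 = absU[1, 2] / np.sqrt(1.0 - s13**2)
    to_deg = lambda s: float(np.degrees(np.arcsin(s)))
    return np.sqrt(np.maximum(m2, 0.0)), to_deg(s12), to_deg(s13), to_deg(s23)

# illustrative CSD(3) point; ma, mb (eV) and eta are placeholders, not the paper's fit
m_nu = csd_mass_matrix(3, ma=0.027, mb=0.0027, eta=2.0 * np.pi / 3.0)
masses, th12, th13, th23 = masses_and_angles(m_nu)
print("masses (eV):", masses, " angles (deg):", th12, th13, th23)
```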

  9. Constraining the phantom braneworld model from cosmic structure sizes

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sourav; Kousvos, Stefanos R.

    2017-11-01

    We consider the phantom braneworld model in the context of the maximum turnaround radius, R_TA,max, of a stable, spherical cosmic structure with a given mass. The maximum turnaround radius is the point where the attraction due to the central inhomogeneity is balanced by the repulsion of the ambient dark energy, beyond which a structure cannot hold any mass, thereby giving an upper bound on the size of a stable structure. In this work we derive an analytical expression for R_TA,max for this model using cosmological scalar perturbation theory. Using this we numerically constrain the parameter space, including a bulk cosmological constant and the Weyl fluid, from the mass versus observed size data for some nearby, nonvirial cosmic structures. We use different values of the matter density parameter Ωm, both larger and smaller than that of the Λ cold dark matter model, as the input in our analysis. We show, in particular, that (a) with a vanishing bulk cosmological constant the predicted upper bound is always greater than what is actually observed, and a similar conclusion holds if the bulk cosmological constant is negative; (b) if it is positive, the predicted maximum size can fall considerably below what is actually observed, and owing to the involved nature of the field equations, this leads to interesting constraints not only on the bulk cosmological constant itself but on the whole parameter space of the theory.
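
    For orientation, the general-relativistic ΛCDM benchmark against which such bounds are usually quoted is R_TA,max = (3GM/Λc²)^(1/3); the braneworld expression derived in the paper modifies this through the bulk cosmological constant and the Weyl fluid. The snippet below evaluates only the ΛCDM baseline for a few structure masses, with assumed values of H0 and Ω_Λ.

```python
import numpy as np

# Physical constants and an assumed flat-LCDM background (SI units)
G = 6.674e-11                       # m^3 kg^-1 s^-2
c = 2.998e8                         # m s^-1
Msun = 1.989e30                     # kg
Mpc = 3.086e22                      # m
H0 = 67.0e3 / Mpc                   # s^-1, assumed Hubble constant
Omega_L = 0.69                      # assumed dark-energy density parameter
Lam = 3.0 * Omega_L * H0**2 / c**2  # cosmological constant, m^-2

def r_ta_max_lcdm(mass_kg):
    """LCDM maximum turnaround radius R = (3 G M / (Lambda c^2))**(1/3);
    the braneworld model of the paper modifies this baseline."""
    return (3.0 * G * mass_kg / (Lam * c**2)) ** (1.0 / 3.0)

for m in (1e13, 1e14, 1e15):        # group- to cluster-scale masses in solar units
    print(f"M = {m:.0e} Msun -> R_TA,max ~ {r_ta_max_lcdm(m * Msun) / Mpc:.1f} Mpc")
```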

  10. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
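
    The core of any residual VQ scheme is the multistage structure: each stage quantizes the residual error left by the previous stages, and the per-stage indices are what the entropy coder is later conditioned on. The sketch below trains and applies a two-stage residual quantizer on random vectors; it omits the conditional entropy coding and joint optimization that are the paper's actual contribution, and the function names are hypothetical.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def train_residual_vq(data, codebook_sizes, seed=0):
    """Train a multistage residual VQ: each stage quantizes the residual left by
    the previous stages. The conditional entropy coding of the paper would be
    layered on top of the stage indices and is not implemented here."""
    np.random.seed(seed)
    codebooks, residual = [], data.copy()
    for k in codebook_sizes:
        centroids, labels = kmeans2(residual, k, minit='points')
        codebooks.append(centroids)
        residual = residual - centroids[labels]
    return codebooks

def encode_decode(data, codebooks):
    """Greedy stage-by-stage encoding followed by reconstruction."""
    recon = np.zeros_like(data)
    for cb in codebooks:
        labels, _ = vq(data - recon, cb)
        recon += cb[labels]
    return recon

# toy usage on random 4-D "image block" vectors
rng = np.random.default_rng(0)
blocks = rng.normal(size=(5000, 4))
cbs = train_residual_vq(blocks, codebook_sizes=(16, 16))
recon = encode_decode(blocks, cbs)
print("mean squared error per dimension:", np.mean((blocks - recon) ** 2))
```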

  11. DATA-CONSTRAINED CORONAL MASS EJECTIONS IN A GLOBAL MAGNETOHYDRODYNAMICS MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, M.; Manchester, W. B.; Van der Holst, B.

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO/LASCO, and STEREO/COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using Gibson–Low configuration, we present a method to derive Gibson–Low flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. A test result with CMEs launched with different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).

  12. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    NASA Astrophysics Data System (ADS)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. We also seek to identify the data types that best help reduce this uncertainty. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as `virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). We then conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that uncertainty in HETT is relatively small for early times and increases with transit time; that uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; and that introducing more data to a poor model structure may reduce predictive variance but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT; an estimate of hyporheic exchange flux, however, proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model (`virtual reality') is then developed based on that conceptual model. This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.

  13. Estimating rates of local extinction and colonization in colonial species and an extension to the metapopulation and community levels

    USGS Publications Warehouse

    Barbraud, C.; Nichols, J.D.; Hines, J.E.; Hafner, H.

    2003-01-01

    Coloniality has mainly been studied from an evolutionary perspective, but relatively few studies have developed methods for modelling colony dynamics. Changes in number of colonies over time provide a useful tool for predicting and evaluating the responses of colonial species to management and to environmental disturbance. Probabilistic Markov process models have been recently used to estimate colony site dynamics using presence-absence data when all colonies are detected in sampling efforts. Here, we define and develop two general approaches for the modelling and analysis of colony dynamics for sampling situations in which all colonies are, and are not, detected. For both approaches, we develop a general probabilistic model for the data and then constrain model parameters based on various hypotheses about colony dynamics. We use Akaike's Information Criterion (AIC) to assess the adequacy of the constrained models. The models are parameterised with conditional probabilities of local colony site extinction and colonization. Presence-absence data arising from Pollock's robust capture-recapture design provide the basis for obtaining unbiased estimates of extinction, colonization, and detection probabilities when not all colonies are detected. This second approach should be particularly useful in situations where detection probabilities are heterogeneous among colony sites. The general methodology is illustrated using presence-absence data on two species of herons (Purple Heron, Ardea purpurea and Grey Heron, Ardea cinerea). Estimates of the extinction and colonization rates showed interspecific differences and strong temporal and spatial variations. We were also able to test specific predictions about colony dynamics based on ideas about habitat change and metapopulation dynamics. We recommend estimators based on probabilistic modelling for future work on colony dynamics. We also believe that this methodological framework has wide application to problems in animal ecology concerning metapopulation and community dynamics.
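
    The first of the two approaches, with perfect detection, reduces to maximum-likelihood estimation of site-level extinction and colonization probabilities from presence-absence transitions; constrained variants (for example forcing the two probabilities equal, or constant over time) can then be compared with AIC. The sketch below fits the simplest such Markov model to simulated colony histories; the function names, parameter values, and simulated data are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, histories):
    """Negative log-likelihood of 0/1 occupancy histories under a first-order
    Markov model with extinction probability eps and colonization probability
    gam; detection is assumed perfect."""
    eps, gam = params
    ll = 0.0
    for h in histories:
        for prev, cur in zip(h[:-1], h[1:]):
            if prev == 1:
                ll += np.log(eps if cur == 0 else 1.0 - eps)
            else:
                ll += np.log(gam if cur == 1 else 1.0 - gam)
    return -ll

def fit(histories, n_params=2):
    res = minimize(neg_log_lik, x0=[0.3, 0.3], args=(histories,),
                   bounds=[(1e-6, 1.0 - 1e-6)] * 2, method="L-BFGS-B")
    return res.x, 2 * n_params + 2 * res.fun       # estimates and AIC

# simulate 30 colony sites over 10 seasons with known rates, then refit them
rng = np.random.default_rng(2)
true_eps, true_gam = 0.2, 0.4
histories = []
for _ in range(30):
    h = [int(rng.integers(0, 2))]
    for _ in range(9):
        h.append(int(rng.random() > true_eps) if h[-1] else int(rng.random() < true_gam))
    histories.append(h)
(eps_hat, gam_hat), aic = fit(histories)
print("extinction, colonization, AIC:", eps_hat, gam_hat, aic)
```

    A constrained model would simply re-use neg_log_lik with fewer free parameters (for example a single shared probability) and be compared through its AIC value, mirroring the model-selection step described above.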

  14. Modeling nonstructural carbohydrate reserve dynamics in forest trees

    NASA Astrophysics Data System (ADS)

    Richardson, Andrew; Keenan, Trevor; Carbone, Mariah; Pederson, Neil

    2013-04-01

    Understanding the factors influencing the availability of nonstructural carbohydrate (NSC) reserves is essential for predicting the resilience of forests to climate change and environmental stress. However, carbon allocation processes remain poorly understood and many models either ignore NSC reserves, or use simple and untested representations of NSC allocation and pool dynamics. Using model-data fusion techniques, we combined a parsimonious model of forest ecosystem carbon cycling with novel field sampling and laboratory analyses of NSCs. Simulations were conducted for an evergreen conifer forest and a deciduous broadleaf forest in New England. We used radiocarbon methods based on the 14C "bomb spike" to estimate the age of NSC reserves, and used this to constrain the mean residence time of modeled NSCs. We used additional data, including tower-measured fluxes of CO2, soil and biomass carbon stocks, woody biomass increment, and leaf area index and litterfall, to further constrain the model's parameters and initial conditions. Incorporation of fast- and slow-cycling NSC pools improved the ability of the model to reproduce the measured interannual variability in woody biomass increment. We show how model performance varies according to model structure and total pool size, and we use novel diagnostic criteria, based on autocorrelation statistics of annual biomass growth, to evaluate the model's ability to correctly represent lags and memory effects.

  15. Prediction for the transverse momentum distribution of Drell-Yan dileptons at GSI PANDA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linnyk, O.; Gallmeister, K.; Leupold, S.

    2006-02-01

    We predict the triple differential cross section of the Drell-Yan process pp → l⁺l⁻ X in the kinematical regimes relevant for the upcoming PANDA experiment, using a model that accounts for quark virtuality as well as primordial transverse momentum. We find a cross section magnitude of up to 10 nb in the low mass region. A measurement with 10% accuracy is desirable in order to constrain the partonic transverse momentum dispersion and the spectral function width within ±50 MeV and to study their evolution with M and √s.

  16. Deducing the multi-trader population driving a financial market

    NASA Astrophysics Data System (ADS)

    Gupta, Nachi; Hauser, Raphael; Johnson, Neil

    2005-12-01

    We have previously laid out a basic framework for predicting financial movements and pockets of predictability by tracking the distribution of a multi-trader population playing on an artificial financial market model. This work explores extensions to this basic framework. We allow for more intelligent agents with a richer strategy set, and we no longer constrain the distribution over these agents to a probability space. We then introduce a fusion scheme which accounts for multiple runs of randomly chosen sets of possible agent types. We also discuss a mechanism for bias removal on the estimates.

  17. Reduced mate availability leads to evolution of self-fertilization and purging of inbreeding depression in a hermaphrodite.

    PubMed

    Noël, Elsa; Chemtob, Yohann; Janicke, Tim; Sarda, Violette; Pélissié, Benjamin; Jarne, Philippe; David, Patrice

    2016-03-01

    Basic models of mating-system evolution predict that hermaphroditic organisms should mostly either cross-fertilize, or self-fertilize, due to self-reinforcing coevolution of inbreeding depression and outcrossing rates. However transitions between mating systems occur. A plausible scenario for such transitions assumes that a decrease in pollinator or mate availability temporarily constrains outcrossing populations to self-fertilize as a reproductive assurance strategy. This should trigger a purge of inbreeding depression, which in turn encourages individuals to self-fertilize more often and finally to reduce male allocation. We tested the predictions of this scenario using the freshwater snail Physa acuta, a self-compatible hermaphrodite that preferentially outcrosses and exhibits high inbreeding depression in natural populations. From an outbred population, we built two types of experimental evolution lines, controls (outcrossing every generation) and constrained lines (in which mates were often unavailable, forcing individuals to self-fertilize). After ca. 20 generations, individuals from constrained lines initiated self-fertilization earlier in life and had purged most of their inbreeding depression compared to controls. However, their male allocation remained unchanged. Our study suggests that the mating system can rapidly evolve as a response to reduced mating opportunities, supporting the reproductive assurance scenario of transitions from outcrossing to selfing. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  18. A Study of Interactions between Mixing and Chemical Reaction Using the Rate-Controlled Constrained-Equilibrium Method

    NASA Astrophysics Data System (ADS)

    Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed

    2016-10-01

    The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for constraint potentials, density and temperature, which allows taking account of mixing alongside chemical reaction without splitting. The RCCE is a dimension reduction method for chemical kinetics based on thermodynamics laws. It describes the time evolution of reacting systems using a series of constrained-equilibrium states determined by RCCE constraints. The full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. The RCCE is applied to a spatially homogeneous constant pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprised of 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed detailed kinetics model (DKM). The RCCE shows accurate prediction of combustion in PaSR with different mixing intensities. The method also demonstrates reduced numerical stiffness and overall computational cost compared to DKM.
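
    The constrained-equilibrium state at the heart of RCCE is obtained by maximizing entropy subject to the instantaneous values of a small set of constraints, exactly as described above. The toy below reproduces only that single building block for a made-up four-species mixture with linear constraints; it is not the RCCE rate-equation machinery, and the matrix A, vector b, and ideal-mixing entropy are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained-equilibrium calculation: find the composition x that maximizes
# an ideal-mixing entropy subject to linear constraints A x = b. In RCCE the
# constraints are slowly varying quantities (elements, total moles, selected
# groups); A and b below are made-up numbers chosen only to be feasible.
A = np.array([[1.0, 1.0, 1.0, 1.0],     # normalization: mole fractions sum to 1
              [1.0, 2.0, 0.0, 1.0],     # a conserved "element" count
              [0.0, 1.0, 1.0, 2.0]])    # a second conserved quantity
b = np.array([1.0, 1.2, 1.1])

def neg_entropy(x):
    x = np.clip(x, 1e-12, None)
    return float(np.sum(x * np.log(x)))  # minimize -S (k_B = 1)

res = minimize(neg_entropy, x0=np.full(4, 0.25),
               constraints=[{"type": "eq", "fun": lambda x: A @ x - b}],
               bounds=[(0.0, 1.0)] * 4, method="SLSQP")

print("constrained-equilibrium composition:", res.x)
print("constraints satisfied:", np.allclose(A @ res.x, b, atol=1e-6))
```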

  19. Interferometric tests of Planckian quantum geometry models

    DOE PAGES

    Kwon, Ohkyung; Hogan, Craig J.

    2016-04-19

    The effect of Planck scale quantum geometrical effects on measurements with interferometers is estimated with standard physics, and with a variety of proposed extensions. It is shown that effects are negligible in standard field theory with canonically quantized gravity. Statistical noise levels are estimated in a variety of proposals for nonstandard metric fluctuations, and these alternatives are constrained using upper bounds on stochastic metric fluctuations from LIGO. Idealized models of several interferometer system architectures are used to predict signal noise spectra in a quantum geometry that cannot be described by a fluctuating metric, in which position noise arises from holographic bounds on directional information. Lastly, predictions in this case are shown to be close to current and projected experimental bounds.

  20. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  1. Microbial models with data-driven parameters predict stronger soil carbon responses to climate change.

    PubMed

    Hararuk, Oleksandra; Smith, Matthew J; Luo, Yiqi

    2015-06-01

    Long-term carbon (C) cycle feedbacks to climate depend on the future dynamics of soil organic carbon (SOC). Current models show low predictive accuracy at simulating contemporary SOC pools, which can be improved through parameter estimation. However, major uncertainty remains in global soil responses to climate change, particularly uncertainty in how the activity of soil microbial communities will respond. To date, the role of microbes in SOC dynamics has been implicitly described by decay rate constants in most conventional global carbon cycle models. Explicitly including microbial biomass dynamics into C cycle model formulations has shown potential to improve model predictive performance when assessed against global SOC databases. This study aimed to constrain the parameters of two soil microbial models with data, to evaluate the resulting improvements in the ability of the calibrated models to predict contemporary carbon stocks, and to compare the SOC responses to climate change, and their uncertainties, between microbial and conventional models. Microbial models with calibrated parameters explained 51% of variability in the observed total SOC, whereas a calibrated conventional model explained 41%. The microbial models, when forced with climate and soil carbon input predictions from the 5th Coupled Model Intercomparison Project (CMIP5), produced stronger soil C responses to 95 years of climate change than any of the 11 CMIP5 models. The calibrated microbial models predicted between 8% (2-pool model) and 11% (4-pool model) soil C losses, compared with CMIP5 model projections which ranged from a 7% loss to a 22.6% gain. Lastly, we observed unrealistic oscillatory SOC dynamics in the 2-pool microbial model. The 4-pool model also produced oscillations, but they were less prominent and could be avoided, depending on the parameter values. © 2014 John Wiley & Sons Ltd.
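
    The oscillatory behaviour mentioned at the end of the abstract is a well-known property of two-pool microbial models, in which decomposition depends on microbial biomass through Michaelis-Menten kinetics. The toy integration below, with made-up parameter values rather than the study's calibrated ones, shows the structure of such a model; damped cycles in both pools typically appear when the pools are started away from equilibrium.

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_pool_microbial(t, y, I=5e-4, Vmax=0.02, Km=250.0, eps=0.4, kB=1e-3):
    """Toy two-pool microbial model: SOC pool S and microbial biomass B
    (mg C / g soil, time in hours). All parameter values are illustrative,
    not the calibrated values of the study."""
    S, B = y
    uptake = Vmax * B * S / (Km + S)      # Michaelis-Menten decomposition
    dS = I - uptake + kB * B              # litter input, decomposition, dead microbes
    dB = eps * uptake - kB * B            # growth with efficiency eps, and death
    return [dS, dB]

t_end = 24.0 * 365.0 * 200.0              # 200 years expressed in hours
sol = solve_ivp(two_pool_microbial, (0.0, t_end), y0=[100.0, 2.0],
                t_eval=np.linspace(0.0, t_end, 2000), method="LSODA")
S, B = sol.y
print("final SOC and biomass:", S[-1], B[-1])
# Started away from equilibrium, both pools typically approach it through
# damped cycles -- the kind of oscillatory behaviour flagged in the abstract.
```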

  2. Geomorphically based predictive mapping of soil thickness in upland watersheds

    NASA Astrophysics Data System (ADS)

    Pelletier, Jon D.; Rasmussen, Craig

    2009-09-01

    The hydrologic response of upland watersheds is strongly controlled by soil (regolith) thickness. Despite the need to quantify soil thickness for input into hydrologic models, there is currently no widely used, geomorphically based method for doing so. In this paper we describe and illustrate a new method for predictive mapping of soil thicknesses using high-resolution topographic data, numerical modeling, and field-based calibration. The model framework works directly with input digital elevation model data to predict soil thicknesses assuming a long-term balance between soil production and erosion. Erosion rates in the model are quantified using one of three geomorphically based sediment transport models: nonlinear slope-dependent transport, nonlinear area- and slope-dependent transport, and nonlinear depth- and slope-dependent transport. The model balances soil production and erosion locally to predict a family of solutions corresponding to a range of values of two unconstrained model parameters. A small number of field-based soil thickness measurements can then be used to calibrate the local value of those unconstrained parameters, thereby constraining which solution is applicable at a particular study site. As an illustration, the model is used to predictively map soil thicknesses in two small, ˜0.1 km2, drainage basins in the Marshall Gulch watershed, a semiarid drainage basin in the Santa Catalina Mountains of Pima County, Arizona. Field observations and calibration data indicate that the nonlinear depth- and slope-dependent sediment transport model is the most appropriate transport model for this site. The resulting framework provides a generally applicable, geomorphically based tool for predictive mapping of soil thickness using high-resolution topographic data sets.
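
    The heart of the method is a balance between a soil production function that decays exponentially with thickness and a slope- and depth-dependent erosion rate; once the two free parameters are calibrated from a handful of field measurements, the balance can be solved across the landscape. The sketch below solves a deliberately simplified pointwise version of that balance (the full model uses the divergence of a nonlinear sediment flux on a DEM); P0, h0 and K are illustrative stand-ins for the calibrated parameters.

```python
import numpy as np
from scipy.optimize import brentq

def steady_soil_thickness(slope, P0=8e-5, h0=0.5, K=2e-4):
    """Solve the local steady-state balance P0*exp(-h/h0) = K*slope*h for soil
    thickness h (m). This is a pointwise caricature of the approach: the full
    model balances soil production against the divergence of a nonlinear depth-
    and slope-dependent sediment flux over a DEM. P0 (m/yr), h0 (m) and K (1/yr)
    stand in for the field-calibrated parameters."""
    balance = lambda h: P0 * np.exp(-h / h0) - K * slope * h
    return brentq(balance, 1e-6, 50.0)

# predictive "map" over a toy grid of slopes (tangent of the slope angle)
slopes = np.array([[0.05, 0.10, 0.20],
                   [0.10, 0.30, 0.50],
                   [0.20, 0.50, 0.80]])
thickness = np.vectorize(steady_soil_thickness)(slopes)
print(np.round(thickness, 2))   # thicker soils on gentler slopes, as expected
```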

  3. Observations of Circumstellar Thermochemical Equilibrium: The Case of Phosphorus

    NASA Technical Reports Server (NTRS)

    Milam, Stefanie N.; Charnley, Steven B.

    2011-01-01

    We will present observations of phosphorus-bearing species in circumstellar envelopes, including carbon- and oxygen-rich shells. New models of thermochemical equilibrium chemistry have been developed to interpret, and are constrained by, these data. These calculations will also be presented and compared to the numerous P-bearing species already observed in evolved stars. Predictions for other viable species will be made for observations with Herschel and ALMA.

  4. Water in Massive protostellar objects: first detection of THz water maser and water inner abundance.

    NASA Astrophysics Data System (ADS)

    Herpin, Fabrice

    2014-10-01

    The formation of massive stars is still not well understood. Despite numerous water line observations with the Herschel telescope, over a broad range of energies, in most of the observed sources the WISH-KP (Water In Star-forming regions with Herschel, Co-PI: F. Herpin) observations were not able to trace the emission from the hot core. Moreover, water maser models predict that several THz water masers should be detectable in these objects. We aim to detect for the first time the THz maser lines o-H2O 8(2,7)- 7(3,4) at 1296.41106 GHz and p-H2O 7(2,6)- 6(3,3) at 1440.78167 GHz as predicted by the model. We propose two sources for a northern flight as first priority and two other sources for a possible southern flight. This will 1) constrain the maser theory, and 2) constrain the physical conditions and water abundance in the inner layers of the protostellar environment. In addition, we will use the p-H2O 3(3,1)- 4(0,4) thermal line at 1893.68651 GHz (L2 channel) in order to probe the physical conditions and water abundance in the inner layers of the protostellar objects where HIFI-Herschel has partially failed.

  5. Time-Ordered Networks Reveal Limitations to Information Flow in Ant Colonies

    PubMed Central

    Blonder, Benjamin; Dornhaus, Anna

    2011-01-01

    Background An important function of many complex networks is to inhibit or promote the transmission of disease, resources, or information between individuals. However, little is known about how the temporal dynamics of individual-level interactions affect these networks and constrain their function. Ant colonies are a model comparative system for understanding general principles linking individual-level interactions to network-level functions because interactions among individuals enable integration of multiple sources of information to collectively make decisions, and allocate tasks and resources. Methodology/Findings Here we show how the temporal and spatial dynamics of such individual interactions provide upper bounds to rates of colony-level information flow in the ant Temnothorax rugatulus. We develop a general framework for analyzing dynamic networks and a mathematical model that predicts how information flow scales with individual mobility and group size. Conclusions/Significance Using thousands of time-stamped interactions between uniquely marked ants in four colonies of a range of sizes, we demonstrate that observed maximum rates of information flow are always slower than predicted, and are constrained by regulation of individual mobility and contact rate. By accounting for the ordering and timing of interactions, we can resolve important difficulties with network sampling frequency and duration, enabling a broader understanding of interaction network functioning across systems and scales. PMID:21625450

  6. The optical, ultraviolet, and X-ray structure of the quasar HE 0435–1223

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blackburne, Jeffrey A.; Kochanek, Christopher S.; Chen, Bin

    2014-07-10

    Microlensing has proved an effective probe of the structure of the innermost regions of quasars and an important test of accretion disk models. We present light curves of the lensed quasar HE 0435–1223 in the R band and in the ultraviolet (UV), and consider them together with X-ray light curves in two energy bands that are presented in a companion paper. Using a Bayesian Monte Carlo method, we constrain the size of the accretion disk in the rest-frame near- and far-UV, and constrain for the first time the size of the X-ray emission regions in two X-ray energy bands. The R-band scale size of the accretion disk is about 10^15.23 cm (∼23 r_g), slightly smaller than previous estimates, but larger than would be predicted from the quasar flux. In the UV, the source size is weakly constrained, with a strong prior dependence. The UV to R-band size ratio is consistent with the thin disk model prediction, with large error bars. In soft and hard X-rays, the source size is smaller than ∼10^14.8 cm (∼10 r_g) at 95% confidence. We do not find evidence of structure in the X-ray emission region, as the most likely value for the ratio of the hard X-ray size to the soft X-ray size is unity. Finally, we find that the most likely value for the mean mass of stars in the lens galaxy is ∼0.3 M_☉, consistent with other studies.

  7. Erratum to: Constraining couplings of top quarks to the Z boson in $$ t\\overline{t} $$ + Z production at the LHC

    DOE PAGES

    Röntsch, Raoul; Schulze, Markus

    2015-09-21

    We study top quark pair production in association with a Z boson at the Large Hadron Collider (LHC) and investigate the prospects of measuring the couplings of top quarks to the Z boson. To date these couplings have not been constrained in direct measurements. Such a determination will be possible for the first time at the LHC. Our calculation improves previous coupling studies through the inclusion of next-to-leading order (NLO) QCD corrections in production and decays of all unstable particles. We treat top quarks in the narrow-width approximation and retain all NLO spin correlations. To determine the sensitivity of a coupling measurement we perform a binned log-likelihood ratio test based on normalization and shape information of the angle between the leptons from the Z boson decay. The obtained limits account for statistical uncertainties as well as leading theoretical systematics from residual scale dependence and parton distribution functions. We use current CMS data to place the first direct constraints on the ttbZ couplings. We also consider the upcoming high-energy LHC run and find that with 300 inverse fb of data at an energy of 13 TeV the vector and axial ttbZ couplings can be constrained at the 95% confidence level to C_V=0.24^{+0.39}_{-0.85} and C_A=-0.60^{+0.14}_{-0.18}, where the central values are the Standard Model predictions. This is a reduction of uncertainties by 25% and 42%, respectively, compared to an analysis based on leading-order predictions. We also translate these results into limits on dimension-six operators contributing to the ttbZ interactions beyond the Standard Model.
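
    The binned log-likelihood ratio test mentioned above compares the observed counts in bins of the dilepton angle against templates generated under different coupling hypotheses. The sketch below shows the bare-bones Poisson version of such a test on synthetic templates; the bin contents and the "modified coupling" shape distortion are invented for illustration, and nuisance parameters for the scale and PDF systematics are omitted.

```python
import numpy as np
from scipy.stats import poisson

def binned_llr(observed, expected_null, expected_alt):
    """Binned log-likelihood ratio q = -2 ln[L(null)/L(alt)] for Poisson counts,
    using both normalization and shape information. Nuisance parameters for
    scale and PDF systematics are omitted in this sketch."""
    ll_null = poisson.logpmf(observed, expected_null).sum()
    ll_alt = poisson.logpmf(observed, expected_alt).sum()
    return -2.0 * (ll_null - ll_alt)

# invented templates for 10 bins of the dilepton angle: "sm" = Standard Model
# couplings, "mod" = an arbitrarily distorted shape mimicking modified couplings
rng = np.random.default_rng(3)
sm = np.array([30, 34, 38, 41, 44, 44, 41, 38, 34, 30], dtype=float)
mod = sm * np.linspace(0.85, 1.15, 10)
data = rng.poisson(sm)                 # pseudo-data drawn from the SM template
print("q =", binned_llr(data, sm, mod))
```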

  8. Instant preheating in quintessential inflation with α -attractors

    NASA Astrophysics Data System (ADS)

    Dimopoulos, Konstantinos; Wood, Leonora Donaldson; Owen, Charlotte

    2018-03-01

    We investigate a compelling model of quintessential inflation in the context of α -attractors, which naturally result in a scalar potential featuring two flat regions; the inflationary plateau and the quintessential tail. The "asymptotic freedom" of α -attractors, near the kinetic poles, suppresses radiative corrections and interactions, which would otherwise threaten to lift the flatness of the quintessential tail and cause a 5th-force problem respectively. Since this is a nonoscillatory inflation model, we reheat the Universe through instant preheating. The parameter space is constrained by both inflation and dark energy requirements. We find an excellent correlation between the inflationary observables and model predictions, in agreement with the α -attractors setup. We also obtain successful quintessence for natural values of the parameters. Our model predicts potentially sizeable tensor perturbations (at the level of 1%) and a slightly varying equation of state for dark energy, to be probed in the near future.

  9. Effects of long-term representations on free recall of unrelated words

    PubMed Central

    Katkov, Mikhail; Romani, Sandro

    2015-01-01

    Human memory stores vast amounts of information. Yet recalling this information is often challenging when specific cues are lacking. Here we consider an associative model of retrieval where each recalled item triggers the recall of the next item based on the similarity between their long-term neuronal representations. The model predicts that different items stored in memory have different probabilities of being recalled, depending on the size of their representation. Moreover, items with high recall probability tend to be recalled earlier and suppress other items. We performed an analysis of a large data set on free recall and found a highly specific pattern of statistical dependencies predicted by the model, in particular negative correlations between the number of words recalled and their average recall probability. Taken together, experimental and modeling results presented here reveal complex interactions between memory items during recall that severely constrain recall capacity. PMID:25593296
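
    A minimal simulation of the similarity-driven retrieval rule described above: each item is a random sparse pattern, and recall jumps from the current item to the most similar other item until the deterministic walk enters a cycle. The pattern statistics and the suppression of immediate bounce-backs are modelling assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def recall_trajectory(patterns, start=0, max_steps=200):
    """Deterministic associative recall: from the current item, jump to the item
    whose stored pattern overlaps it most (suppressing an immediate bounce-back).
    Recall effectively ends once the walk repeats a transition, i.e. cycles."""
    overlap = patterns @ patterns.T
    np.fill_diagonal(overlap, -np.inf)
    recalled, seen = [start], set()
    prev, cur = -1, start
    for _ in range(max_steps):
        scores = overlap[cur].copy()
        if prev >= 0:
            scores[prev] = -np.inf
        nxt = int(np.argmax(scores))
        if (cur, nxt) in seen:
            break                       # the deterministic walk has entered a cycle
        seen.add((cur, nxt))
        if nxt not in recalled:
            recalled.append(nxt)
        prev, cur = cur, nxt
    return recalled

# toy long-term representations: sparse binary patterns of varying size
rng = np.random.default_rng(4)
n_items, n_neurons = 16, 500
patterns = (rng.random((n_items, n_neurons)) < 0.05).astype(float)
print("items recalled out of", n_items, ":", len(recall_trajectory(patterns)))
```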

  10. Models of volcanic eruption hazards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohletz, K.H.

    1992-01-01

    Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.

  11. Models of volcanic eruption hazards

    NASA Astrophysics Data System (ADS)

    Wohletz, K. H.

    Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.

  12. The impacts of data constraints on the predictive performance of a general process-based crop model (PeakN-crop v1.0)

    NASA Astrophysics Data System (ADS)

    Caldararu, Silvia; Purves, Drew W.; Smith, Matthew J.

    2017-04-01

    Improving international food security under a changing climate and increasing human population will be greatly aided by improving our ability to modify, understand and predict crop growth. What we predominantly have at our disposal are either process-based models of crop physiology or statistical analyses of yield datasets, both of which suffer from various sources of error. In this paper, we present a generic process-based crop model (PeakN-crop v1.0) which we parametrise using a Bayesian model-fitting algorithm to three different data sources: space-based vegetation indices, eddy covariance productivity measurements and regional crop yields. We show that the model parametrised without data, based on prior knowledge of the parameters, can largely capture the observed behaviour, but the data-constrained model both greatly improves the model fit and reduces prediction uncertainty. We investigate the extent to which each dataset contributes to the model performance and show that while all data improve on the prior model fit, the satellite-based data and crop yield estimates are particularly important for reducing model error and uncertainty. Despite these improvements, we conclude that there are still significant knowledge gaps, in terms of available data for model parametrisation, but our study can help indicate the necessary data collection to improve our predictions of crop yields and crop responses to environmental changes.

  13. Constraining the inferred paleohydrologic evolution of a deep unsaturated zone in the Amargosa Desert

    USGS Publications Warehouse

    Walvoord, Michelle Ann; Stonestrom, David A.; Andraski, Brian J.; Striegl, Robert G.

    2004-01-01

    Natural flow regimes in deep unsaturated zones of arid interfluvial environments are rarely in hydraulic equilibrium with near-surface boundary conditions imposed by present-day plant–soil–atmosphere dynamics. Nevertheless, assessments of water resources and contaminant transport require realistic estimates of gas, water, and solute fluxes under past, present, and projected conditions. Multimillennial transients that are captured in current hydraulic, chemical, and isotopic profiles can be interpreted to constrain alternative scenarios of paleohydrologic evolution following climatic and vegetational shifts from pluvial to arid conditions. However, interpreting profile data with numerical models presents formidable challenges in that boundary conditions must be prescribed throughout the entire Holocene, when we have at most a few decades of actual records. Models of profile development at the Amargosa Desert Research Site include substantial uncertainties from imperfectly known initial and boundary conditions when simulating flow and solute transport over millennial timescales. We show how multiple types of profile data, including matric potentials and porewater concentrations of Cl−, δD, δ18O, can be used in multiphase heat, flow, and transport models to expose and reduce uncertainty in paleohydrologic reconstructions. Results indicate that a dramatic shift in the near-surface water balance occurred approximately 16000 yr ago, but that transitions in precipitation, temperature, and vegetation were not necessarily synchronous. The timing of the hydraulic transition imparts the largest uncertainty to model-predicted contemporary fluxes. In contrast, the uncertainties associated with initial (late Pleistocene) conditions and boundary conditions during the Holocene impart only small uncertainties to model-predicted contemporaneous fluxes.

  14. Slab stagnation and detachment under northeast China

    NASA Astrophysics Data System (ADS)

    Honda, Satoru

    2016-03-01

    Results of tomography models around the Japanese Islands show the existence of a gap between the horizontally lying (stagnant) slab extending under northeastern China and the fast seismic velocity anomaly in the lower mantle. A simple conversion from the fast velocity anomaly to the low-temperature anomaly shows a similar feature. This feature appears to be inconsistent with the results of numerical simulations on the interaction between the slab and phase transitions with temperature-dependent viscosity. Such numerical models predict a continuous slab throughout the mantle. I extend previous analyses of the tomography model and model calculations to infer the origins of the gap beneath northeastern China. Results of numerical simulations that take the geologic history of the subduction zone into account suggest two possible origins for the gap: (1) the opening of the Japan Sea led to a breaking off of the otherwise continuous subducting slab, or (2) the western edge of the stagnant slab is the previously subducted ridge, which was the plate boundary between the extinct Izanagi and the Pacific plates. Origin (2), which suggests that the present horizontally lying slab has accumulated since the ridge subduction, is preferable for explaining the present length of the horizontally lying slab in the upper mantle. Numerical models of origin (1) predict a stagnant slab in the upper mantle that is too short, and a narrow or non-existent gap. Preferred models require rather stronger flow resistance of the 660-km phase change than expected from current estimates of the phase transition property. Future detailed estimates of the amount of the subducted Izanagi plate and the present stagnant slab would be useful to constrain models. A systematic along-arc variation of the slab morphology from the northeast Japan to Kurile arcs is also recognized, and its understanding may constrain the 3D mantle flow there.

  15. Constraining proposed combinations of ice history and earth rheology using VLBI determined baseline length rates in North America

    NASA Technical Reports Server (NTRS)

    Mitrovica, J. X.; Davis, J. L.; Shapiro, I. I.

    1993-01-01

    We predict the present-day rates of change of the lengths of 19 North American baselines due to the glacial isostatic adjustment process. Contrary to previously published research, we find that the three-dimensional motion of each of the sites defining a baseline, rather than only the radial motions of these sites, needs to be considered to obtain an accurate estimate of the rate of change of the baseline length. Predictions are generated using a suite of Earth models and late Pleistocene ice histories; these include specific combinations of the two which have been proposed in the literature as satisfying a variety of rebound related geophysical observations from the North American region. A number of these published models are shown to predict rates which differ significantly from the Very Long Base Interferometry (VLBI) observations.
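
    The geometric point made above, that baseline-length rates depend on the full three-dimensional site motions rather than on the vertical components alone, follows directly from dL/dt = (r2 - r1)·(v2 - v1)/|r2 - r1|. The snippet below evaluates this for two fictitious sites with made-up GIA velocities and contrasts it with a radial-only approximation; the coordinates and velocities are illustrative, not taken from any published model.

```python
import numpy as np

def baseline_length_rate(r1, v1, r2, v2):
    """Rate of change of baseline length from full 3-D positions (m) and
    velocities (m/yr): dL/dt = (r2 - r1) . (v2 - v1) / |r2 - r1|."""
    d = np.asarray(r2, float) - np.asarray(r1, float)
    return d @ (np.asarray(v2, float) - np.asarray(v1, float)) / np.linalg.norm(d)

def radial_only_rate(r1, v1, r2, v2):
    """Approximation that keeps only each site's radial (vertical) motion."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    u1, u2 = r1 / np.linalg.norm(r1), r2 / np.linalg.norm(r2)
    v1r = (np.asarray(v1, float) @ u1) * u1
    v2r = (np.asarray(v2, float) @ u2) * u2
    d = r2 - r1
    return d @ (v2r - v1r) / np.linalg.norm(d)

# two fictitious sites (Earth-centred coordinates, m) with made-up GIA velocities
# (a few mm/yr of uplift plus horizontal motion, in m/yr)
R = 6.371e6
r1 = [R * np.cos(np.radians(45.0)), 0.0, R * np.sin(np.radians(45.0))]
r2 = [R * np.cos(np.radians(55.0)) * np.cos(np.radians(10.0)),
      R * np.cos(np.radians(55.0)) * np.sin(np.radians(10.0)),
      R * np.sin(np.radians(55.0))]
v1, v2 = [0.001, 0.0005, 0.004], [-0.0005, 0.0002, 0.008]
print("full 3-D rate (mm/yr):", 1e3 * baseline_length_rate(r1, v1, r2, v2))
print("radial-only rate (mm/yr):", 1e3 * radial_only_rate(r1, v1, r2, v2))
```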

  16. Nonlinear Recurrent Neural Network Predictive Control for Energy Distribution of a Fuel Cell Powered Robot

    PubMed Central

    Chen, Qihong; Long, Rong; Quan, Shuhai

    2014-01-01

    This paper presents a neural network predictive control strategy to optimize power distribution for a fuel cell/ultracapacitor hybrid power system of a robot. We model the nonlinear power system by employing a time-variant auto-regressive moving average model with exogenous inputs (ARMAX), using a recurrent neural network to represent the complicated coefficients of the ARMAX model. Because the dynamics of the system are viewed in this framework as operating-state-dependent, time-varying, locally linear behavior, a linear constrained model predictive control algorithm is developed to optimize the power splitting between the fuel cell and ultracapacitor. The proposed algorithm significantly simplifies implementation of the controller and can handle multiple constraints, such as limiting substantial fluctuation of the fuel cell current. Experiment and simulation results demonstrate that the control strategy can optimally split power between the fuel cell and ultracapacitor, limit the rate of change of the fuel cell current, and thereby extend the lifetime of the fuel cell. PMID:24707206
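
    Stripped to its essentials, such a controller solves, at every step, a finite-horizon optimization over future inputs subject to bounds and rate limits, applies the first input, and repeats. The sketch below does this for a one-state linear surrogate of the plant, standing in for the identified ARMAX/recurrent-network model; all model coefficients, bounds and weights are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def constrained_mpc_step(x0, ref, u_prev, a=0.9, b=0.1, N=10,
                         u_bounds=(0.0, 60.0), du_max=2.0, lam=0.01):
    """One receding-horizon step for a one-state surrogate plant
    x[k+1] = a*x[k] + b*u[k]: minimize tracking error plus an input-rate penalty
    over N future inputs, with bounds on u and |u[k]-u[k-1]| <= du_max."""
    def predict(u):
        x, traj = x0, []
        for uk in u:
            x = a * x + b * uk
            traj.append(x)
        return np.array(traj)

    def cost(u):
        du = np.diff(np.concatenate(([u_prev], u)))
        return np.sum((predict(u) - ref) ** 2) + lam * np.sum(du ** 2)

    def rate_limits(u):                 # every entry must be >= 0 at the optimum
        du = np.diff(np.concatenate(([u_prev], u)))
        return np.concatenate([du_max - du, du_max + du])

    res = minimize(cost, x0=np.full(N, u_prev), bounds=[u_bounds] * N,
                   constraints=[{"type": "ineq", "fun": rate_limits}],
                   method="SLSQP")
    return res.x[0]                     # apply only the first optimized input

# closed loop: drive the surrogate plant toward a setpoint of 5 (arbitrary units)
x, u_prev = 0.0, 0.0
for _ in range(30):
    u = constrained_mpc_step(x, ref=5.0, u_prev=u_prev)
    x, u_prev = 0.9 * x + 0.1 * u, u
print("state after 30 steps:", x, " last input:", u_prev)
```

    In the paper's setting the prediction-model coefficients would be refreshed at each step by the recurrent network, but the receding-horizon structure and the constraint handling are the same.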

  17. The ShakeOut earthquake source and ground motion simulations

    USGS Publications Warehouse

    Graves, R.W.; Houston, Douglas B.; Hudnut, K.W.

    2011-01-01

    The ShakeOut Scenario is premised upon the detailed description of a hypothetical Mw 7.8 earthquake on the southern San Andreas Fault and the associated simulated ground motions. The main features of the scenario, such as its endpoints, magnitude, and gross slip distribution, were defined through expert opinion and incorporated information from many previous studies. Slip at smaller length scales, rupture speed, and rise time were constrained using empirical relationships and experience gained from previous strong-motion modeling. Using this rupture description and a 3-D model of the crust, broadband ground motions were computed over a large region of Southern California. The largest simulated peak ground acceleration (PGA) and peak ground velocity (PGV) generally range from 0.5 to 1.0 g and 100 to 250 cm/s, respectively, with the waveforms exhibiting strong directivity and basin effects. Use of a slip-predictable model results in a high static stress drop event and produces ground motions somewhat higher than median level predictions from NGA ground motion prediction equations (GMPEs).

  18. Empirical estimates to reduce modeling uncertainties of soil organic carbon in permafrost regions: a review of recent progress and remaining challenges

    USGS Publications Warehouse

    Mishra, U.; Jastrow, J.D.; Matamala, R.; Hugelius, G.; Koven, C.D.; Harden, Jennifer W.; Ping, S.L.; Michaelson, G.J.; Fan, Z.; Miller, R.M.; McGuire, A.D.; Tarnocai, C.; Kuhry, P.; Riley, W.J.; Schaefer, K.; Schuur, E.A.G.; Jorgenson, M.T.; Hinzman, L.D.

    2013-01-01

    The vast amount of organic carbon (OC) stored in soils of the northern circumpolar permafrost region is a potentially vulnerable component of the global carbon cycle. However, estimates of the quantity, decomposability, and combustibility of OC contained in permafrost-region soils remain highly uncertain, thereby limiting our ability to predict the release of greenhouse gases due to permafrost thawing. Substantial differences exist between empirical and modeling estimates of the quantity and distribution of permafrost-region soil OC, which contribute to large uncertainties in predictions of carbon–climate feedbacks under future warming. Here, we identify research challenges that constrain current assessments of the distribution and potential decomposability of soil OC stocks in the northern permafrost region and suggest priorities for future empirical and modeling studies to address these challenges.

  19. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    PubMed Central

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion brings more alternative routes, it is attractive to integrate the impacts of the route set and of the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated as three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and the error component, following a multivariate normal distribution, has a covariance structured into three parts, representing the correlation among routes, the transfer variance of routes, and the unobserved variance, respectively. Because of the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in a hierarchical Bayes form, and a Markov chain Monte Carlo approach based on Metropolis-Hastings (M-H) sampling is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model also shows good forecasting performance for the calculation of route choice probabilities and good application performance for transfer flow volume prediction.
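
    Estimation hinges on Markov chain Monte Carlo with Metropolis-Hastings steps, because the choice probabilities involve multivariate normal integrals with no closed form. The sketch below shows only the M-H ingredient, on a deliberately simplified binary probit choice model with a flat prior; the data, attributes and step size are invented, and the hierarchical covariance structure of the full CMNP model is not reproduced.

```python
import numpy as np
from scipy.stats import norm

def metropolis_probit(X, y, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis sampler for the coefficients of a binary probit
    choice model P(y=1|x) = Phi(x . beta), with a flat prior -- a stripped-down
    stand-in for the hierarchical-Bayes M-H machinery of the full CMNP model."""
    rng = np.random.default_rng(seed)

    def log_post(beta):
        p = np.clip(norm.cdf(X @ beta), 1e-12, 1.0 - 1e-12)
        return np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

    beta = np.zeros(X.shape[1])
    lp = log_post(beta)
    draws = []
    for _ in range(n_iter):
        proposal = beta + step * rng.standard_normal(beta.shape)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:     # accept/reject step
            beta, lp = proposal, lp_prop
        draws.append(beta.copy())
    return np.array(draws)

# toy data: binary route choices driven by two attribute differences
rng = np.random.default_rng(6)
X = rng.standard_normal((400, 2))
beta_true = np.array([-1.0, 0.5])
y = (X @ beta_true + rng.standard_normal(400) > 0).astype(float)
draws = metropolis_probit(X, y)
print("posterior mean of beta:", draws[2000:].mean(axis=0))
```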

  20. Modeling Atmospheric CO2 Processes to Constrain the Missing Sink

    NASA Technical Reports Server (NTRS)

    Kawa, S. R.; Denning, A. S.; Erickson, D. J.; Collatz, J. C.; Pawson, S.

    2005-01-01

    We report on a NASA supported modeling effort to reduce uncertainty in carbon cycle processes that create the so-called missing sink of atmospheric CO2. Our overall objective is to improve characterization of CO2 source/sink processes globally with improved formulations for atmospheric transport, terrestrial uptake and release, biomass and fossil fuel burning, and observational data analysis. The motivation for this study follows from the perspective that progress in determining CO2 sources and sinks beyond the current state of the art will rely on utilization of more extensive and intensive CO2 and related observations including those from satellite remote sensing. The major components of this effort are: 1) Continued development of the chemistry and transport model using analyzed meteorological fields from the Goddard Global Modeling and Assimilation Office, with comparison to real time data in both forward and inverse modes; 2) An advanced biosphere model, constrained by remote sensing data, coupled to the global transport model to produce distributions of CO2 fluxes and concentrations that are consistent with actual meteorological variability; 3) Improved remote sensing estimates for biomass burning emission fluxes to better characterize interannual variability in the atmospheric CO2 budget and to better constrain the land use change source; 4) Evaluating the impact of temporally resolved fossil fuel emission distributions on atmospheric CO2 gradients and variability; and 5) Testing the impact of existing and planned remote sensing data sources (e.g., AIRS, MODIS, OCO) on inference of CO2 sources and sinks, and using the model to help establish measurement requirements for future remote sensing instruments. The results will help to prepare for the use of OCO and other satellite data in a multi-disciplinary carbon data assimilation system for analysis and prediction of carbon cycle changes and carbon-climate interactions.

  1. Using palaeoclimate data to improve models of the Antarctic Ice Sheet

    NASA Astrophysics Data System (ADS)

    Phipps, Steven; King, Matt; Roberts, Jason; White, Duanne

    2017-04-01

    Ice sheet models are the most descriptive tools available to simulate the future evolution of the Antarctic Ice Sheet (AIS), including its contribution towards changes in global sea level. However, our knowledge of the dynamics of the coupled ice-ocean-lithosphere system is inevitably limited, in part due to a lack of observations. Furthermore, to build computationally efficient models that can be run for multiple millennia, it is necessary to use simplified descriptions of ice dynamics. Ice sheet modelling is therefore an inherently uncertain exercise. The past evolution of the AIS provides an opportunity to constrain the description of physical processes within ice sheet models and, therefore, to constrain our understanding of the role of the AIS in driving changes in global sea level. We use the Parallel Ice Sheet Model (PISM) to demonstrate how palaeoclimate data can improve our ability to predict the future evolution of the AIS. A 50-member perturbed-physics ensemble is generated, spanning uncertainty in the parameterisations of three key physical processes within the model: (i) the stress balance within the ice sheet, (ii) basal sliding and (iii) calving of ice shelves. A Latin hypercube approach is used to optimally sample the range of uncertainty in parameter values. This perturbed-physics ensemble is used to simulate the evolution of the AIS from the Last Glacial Maximum (approximately 21,000 years ago) to present. Palaeoclimate records are then used to determine which ensemble members are the most realistic. This allows us to use data on past climates to directly constrain our understanding of the past contribution of the AIS towards changes in global sea level. Critically, it also allows us to determine which ensemble members are likely to generate the most realistic projections of the future evolution of the AIS.
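
    The Latin hypercube step mentioned above is easy to state concretely: each parameter range is divided into as many equal-probability strata as there are ensemble members, and one value is drawn from each stratum, with strata randomly paired across parameters. The sketch below generates a 50-member design for three parameters whose names and ranges are invented placeholders for the PISM stress-balance, sliding and calving settings.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Permutation-based Latin hypercube design: one draw from each of n_samples
    equal-probability strata per parameter, strata randomly paired across
    parameters. `bounds` is a list of (low, high) tuples."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    u = np.empty((n_samples, d))
    for j in range(d):
        u[:, j] = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    return lo + u * (hi - lo)

# invented names and ranges standing in for the perturbed ice-sheet parameters
bounds = [(0.1, 1.0),      # e.g. a stress-balance enhancement factor
          (0.25, 0.75),    # e.g. a basal sliding exponent
          (1e16, 1e18)]    # e.g. a calving proportionality constant
ensemble = latin_hypercube(50, bounds)
print(ensemble.shape)      # (50, 3): one parameter vector per ensemble member
```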

  2. Using paleoclimate data to improve models of the Antarctic Ice Sheet

    NASA Astrophysics Data System (ADS)

    King, M. A.; Phipps, S. J.; Roberts, J. L.; White, D.

    2016-12-01

    Ice sheet models are the most descriptive tools available to simulate the future evolution of the Antarctic Ice Sheet (AIS), including its contribution towards changes in global sea level. However, our knowledge of the dynamics of the coupled ice-ocean-lithosphere system is inevitably limited, in part due to a lack of observations. Furthermore, to build computationally efficient models that can be run for multiple millennia, it is necessary to use simplified descriptions of ice dynamics. Ice sheet modeling is therefore an inherently uncertain exercise. The past evolution of the AIS provides an opportunity to constrain the description of physical processes within ice sheet models and, therefore, to constrain our understanding of the role of the AIS in driving changes in global sea level. We use the Parallel Ice Sheet Model (PISM) to demonstrate how paleoclimate data can improve our ability to predict the future evolution of the AIS. A large, perturbed-physics ensemble is generated, spanning uncertainty in the parameterizations of four key physical processes within ice sheet models: ice rheology, ice shelf calving, and the stress balances within ice sheets and ice shelves. A Latin hypercube approach is used to optimally sample the range of uncertainty in parameter values. This perturbed-physics ensemble is used to simulate the evolution of the AIS from the Last Glacial Maximum (approximately 21,000 years ago) to present. Paleoclimate records are then used to determine which ensemble members are the most realistic. This allows us to use data on past climates to directly constrain our understanding of the past contribution of the AIS towards changes in global sea level. Critically, it also allows us to determine which ensemble members are likely to generate the most realistic projections of the future evolution of the AIS.

  3. Enhanced Constrained Predictive Control for Applications to Autonomous Vehicles and Missions

    DTIC Science & Technology

    2016-10-18

    Report AFRL-RV-PS-TR-2016-0122, Air Force Research Laboratory (AFRL/RVSV), Kirtland AFB, NM 87117-5776. Only report-documentation front matter (distribution and sponsor information) is available for this record; no abstract text was recovered.

  4. A novel phenomenological multi-physics model of Li-ion battery cells

    NASA Astrophysics Data System (ADS)

    Oh, Ki-Yong; Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.; Stefanopoulou, Anna G.; Epureanu, Bogdan I.

    2016-09-01

    A novel phenomenological multi-physics model of Lithium-ion battery cells is developed for control and state estimation purposes. The model can capture electrical, thermal, and mechanical behaviors of battery cells under constrained conditions, e.g., battery pack conditions. Specifically, the proposed model predicts the core and surface temperatures and the reaction force induced by the volume change of battery cells because of electrochemically- and thermally-induced swelling. Moreover, the model incorporates the influence of changes in preload and ambient temperature on the reaction force, reflecting the severe environmental conditions that electrified vehicles face. Intensive experimental validation demonstrates that the proposed multi-physics model accurately predicts the surface temperature and reaction force for a wide operational range of preload and ambient temperature. This high-fidelity model can be useful for more accurate and robust state of charge estimation considering the complex dynamic behaviors of the battery cell. Furthermore, the inherent simplicity of the mechanical measurements offers distinct advantages to improve the existing power and thermal management strategies for battery management.
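
    A minimal sketch of the thermal part of such a model is given below: a lumped two-state (core/surface) cell, forward-Euler integrated. All parameter values are assumed for illustration, and the electrical and mechanical (swelling/reaction-force) sub-models are omitted.

      import numpy as np

      Cc, Cs = 62.7, 4.5       # core / surface heat capacities (J/K), assumed
      Rc, Ru = 1.94, 3.19      # conduction / convection resistances (K/W), assumed
      R_int = 0.01             # internal resistance (ohm), assumed
      T_amb = 25.0             # ambient temperature (C)
      I = 20.0                 # constant discharge current (A), assumed

      dt, t_end = 1.0, 3600.0
      n = int(t_end / dt)
      Tc = np.empty(n); Ts = np.empty(n)
      Tc[0] = Ts[0] = T_amb

      for k in range(n - 1):
          q_gen = I**2 * R_int                        # ohmic heat generation (W)
          dTc = (q_gen + (Ts[k] - Tc[k]) / Rc) / Cc   # core energy balance
          dTs = ((Tc[k] - Ts[k]) / Rc + (T_amb - Ts[k]) / Ru) / Cs
          Tc[k + 1] = Tc[k] + dt * dTc
          Ts[k + 1] = Ts[k] + dt * dTs

      print(f"core {Tc[-1]:.1f} C, surface {Ts[-1]:.1f} C after 1 h")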

  5. Assessment of Glacial Isostatic Adjustment in Greenland using GPS

    NASA Astrophysics Data System (ADS)

    Khan, S. A.; Bevis, M. G.; Sasgen, I.; van Dam, T. M.; Wahr, J. M.; Wouters, B.; Bamber, J. L.; Willis, M. J.; Knudsen, P.; Helm, V.; Kuipers Munneke, P.; Muresan, I. S.

    2015-12-01

    The Greenland GPS network (GNET) was constructed to provide a new means to assess viscoelastic and elastic adjustments driven by past and present-day changes in ice mass. Here we assess existing glacial isostatic adjustment (GIA) predictions by analysing 1995-2015 data from 61 continuous GPS receivers located along the margin of the Greenland ice sheet. Since GPS receivers measure both the GIA and elastic signals, we isolate GIA by removing the elastic adjustments of the lithosphere due to present-day mass changes using high-resolution fields of ice surface elevation change derived from satellite and airborne altimetry measurements (ERS1/2, ICESat, ATM, ENVISAT, and CryoSat-2). For most GPS stations, our observed GIA rates contradict GIA predictions; in particular, we find large uplift rates of up to 14 mm/yr in southeast Greenland, while models predict rates of 0-2 mm/yr. Our results suggest possible improvements of GIA predictions, and hence of the poorly constrained ice load history and Earth structure models for Greenland.

  6. Cosmic shear as a probe of galaxy formation physics

    DOE PAGES

    Foreman, Simon; Becker, Matthew R.; Wechsler, Risa H.

    2016-09-01

    Here, we evaluate the potential for current and future cosmic shear measurements from large galaxy surveys to constrain the impact of baryonic physics on the matter power spectrum. We do so using a model-independent parametrization that describes deviations of the matter power spectrum from the dark-matter-only case as a set of principal components that are localized in wavenumber and redshift. We perform forecasts for a variety of current and future data sets, and find that at least ~90 per cent of the constraining power of these data sets is contained in no more than nine principal components. The constraining power of different surveys can be quantified using a figure of merit defined relative to currently available surveys. With this metric, we find that the final Dark Energy Survey data set (DES Y5) and the Hyper Suprime-Cam Survey will be roughly an order of magnitude more powerful than existing data in constraining baryonic effects. Upcoming Stage IV surveys (Large Synoptic Survey Telescope, Euclid, and Wide Field Infrared Survey Telescope) will improve upon this by a further factor of a few. We show that this conclusion is robust to marginalization over several key systematics. The ultimate power of cosmic shear to constrain galaxy formation is dependent on understanding systematics in the shear measurements at small (sub-arcminute) scales. Lastly, if these systematics can be sufficiently controlled, cosmic shear measurements from DES Y5 and other future surveys have the potential to provide a very clean probe of galaxy formation and to strongly constrain a wide range of predictions from modern hydrodynamical simulations.
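
    The principal-component step can be sketched as follows: given deviations of the matter power spectrum from the dark-matter-only case on a (redshift, wavenumber) grid for a set of simulations, an SVD yields components localized in k and z. The array below is synthetic; real inputs would come from hydrodynamical simulations.

      import numpy as np

      rng = np.random.default_rng(1)
      n_sims, n_z, n_k = 12, 4, 30
      k = np.logspace(-1, 1, n_k)                          # wavenumber grid (assumed)
      shape_k = np.exp(-0.5 * (np.log10(k) - 0.5) ** 2)    # bump at small scales
      # P_hydro / P_dmo - 1 for each simulation on the (z, k) grid (synthetic values)
      deviation = rng.normal(0.0, 0.1, (n_sims, n_z, 1)) * shape_k

      X = deviation.reshape(n_sims, n_z * n_k)             # one row per simulation
      X = X - X.mean(axis=0)                               # centre across simulations
      U, S, Vt = np.linalg.svd(X, full_matrices=False)

      explained = S ** 2 / np.sum(S ** 2)
      pcs = Vt[:3].reshape(3, n_z, n_k)                    # leading PCs, localized in (z, k)
      print("variance in first 3 PCs:", round(float(explained[:3].sum()), 3))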

  7. Predicting the High Redshift Galaxy Population for JWST

    NASA Astrophysics Data System (ADS)

    Flynn, Zoey; Benson, Andrew

    2017-01-01

    The James Webb Space Telescope will be launched in Oct 2018 with the goal of observing galaxies in the redshift range of z = 10 - 15. As redshift increases, the age of the Universe decreases, allowing us to study objects formed only a few hundred million years after the Big Bang. This will provide a valuable opportunity to test and improve current galaxy formation theory by comparing predictions for mass, luminosity, and number density to the observed data. We have made testable predictions with the semi-analytical galaxy formation model Galacticus. The code uses Markov Chain Monte Carlo methods to determine viable sets of model parameters that match current astronomical data. The resulting constrained model was then set to match the specifications of the JWST Ultra Deep Field Imaging Survey. Predictions utilizing up to 100 viable parameter sets were calculated, allowing us to assess the uncertainty in current theoretical expectations. We predict that the planned UDF will be able to observe a significant number of objects at redshifts z > 9 but nothing at z > 11. In order to detect these faint objects at redshifts z = 11-15, we need to increase exposure time by at least a factor of 1.66.

  8. Intraplate deformation, stress in the lithosphere and the driving mechanism for plate motions

    NASA Technical Reports Server (NTRS)

    Albee, Arden L.

    1993-01-01

    The initial research proposed was to use the predictions of geodynamical models of mantle flow, combined with geodetic observations of intraplate strain and stress, to better constrain mantle convection and the driving mechanism for plate motions and deformation. It is only now that geodetic observations of intraplate strain are becoming sufficiently well resolved to be useful for substantial geodynamical inference. A model of flow in the mantle that explains almost 90 percent of the variance in the observed long-wavelength nonhydrostatic geoid was developed.

  9. Status of GRMHD simulations and radiative models of Sgr A*

    NASA Astrophysics Data System (ADS)

    Mościbrodzka, Monika

    2017-01-01

    The Galactic center is a perfect laboratory for testing various theoretical models of accretion flows onto a supermassive black hole. Here, I review general relativistic magnetohydrodynamic simulations that were used to model emission from the central object - Sgr A*. These models predict dynamical and radiative properties of hot, magnetized, thick accretion disks with jets around a Kerr black hole. Models are compared to radio-VLBI, mm-VLBI, NIR, and X-ray observations of Sgr A*. I present the recent constraints on the free parameters of the model, such as the accretion rate onto the black hole, the black hole angular momentum, and the orientation of the system with respect to our line of sight.

  10. Halo effective field theory constrains the solar 7Be + p → 8B + γ rate

    DOE PAGES

    Zhang, Xilin; Nollett, Kenneth M.; Phillips, D. R.

    2015-11-06

    In this study, we report an improved low-energy extrapolation of the cross section for the process 7Be(p,γ) 8B, which determines the 8B neutrino flux from the Sun. Our extrapolant is derived from Halo Effective Field Theory (EFT) at next-to-leading order. We apply Bayesian methods to determine the EFT parameters and the low-energy S-factor, using measured cross sections and scattering lengths as inputs. Asymptotic normalization coefficients of 8B are tightly constrained by existing radiative capture data, and contributions to the cross section beyond external direct capture are detected in the data at E < 0.5 MeV. Most importantly, the S-factor at zero energy is constrained to be S(0) = 21.3 ± 0.7 eV b, which is an uncertainty smaller by a factor of two than previously recommended. That recommendation was based on the full range for S(0) obtained among a discrete set of models judged to be reasonable. In contrast, Halo EFT subsumes all models into a controlled low-energy approximant, where they are characterized by nine parameters at next-to-leading order. These are fit to data, and marginalized over via Monte Carlo integration to produce the improved prediction for S(E).

  11. Potential for an Arctic-breeding migratory bird to adjust spring migration phenology to Arctic amplification.

    PubMed

    Lameris, Thomas K; Scholten, Ilse; Bauer, Silke; Cobben, Marleen M P; Ens, Bruno J; Nolet, Bart A

    2017-10-01

    Arctic amplification, the accelerated climate warming in the polar regions, is causing a more rapid advancement of the onset of spring in the Arctic than in temperate regions. Consequently, the arrival of many migratory birds in the Arctic is thought to become increasingly mismatched with the onset of local spring, thereby reducing individual fitness and potentially even population levels. We used a dynamic state variable model to study whether Arctic long-distance migrants can advance their migratory schedules under climate warming scenarios which include Arctic amplification, and whether such an advancement is constrained by fuel accumulation or the ability to anticipate climatic changes. Our model predicts that barnacle geese Branta leucopsis suffer from considerably reduced reproductive success with increasing Arctic amplification through mistimed arrival, when they cannot anticipate a more rapid progress of Arctic spring from their wintering grounds. When geese are able to anticipate a more rapid progress of Arctic spring, they are predicted to advance their spring arrival under Arctic amplification by up to 44 days without any reproductive costs in terms of optimal condition or timing of breeding. Negative effects of mistimed arrival on reproduction are predicted to be somewhat mitigated by increasing summer length under warming in the Arctic, as late arriving geese can still breed successfully. We conclude that adaptation to Arctic amplification may rather be constrained by the (un)predictability of changes in the Arctic spring than by the time available for fuel accumulation. Social migrants like geese tend to have a high behavioural plasticity regarding stopover site choice and migration schedule, giving them the potential to adapt to future climate changes on their flyway. © 2017 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  12. Basin geometry and cumulative offsets in the Eastern Transverse Ranges, southern California: Implications for transrotational deformation along the San Andreas fault system

    USGS Publications Warehouse

    Langenheim, V.E.; Powell, R.E.

    2009-01-01

    The Eastern Transverse Ranges, adjacent to and southeast of the big left bend of the San Andreas fault, southern California, form a crustal block that has rotated clockwise in response to dextral shear within the San Andreas system. Previous studies have indicated a discrepancy between the measured magnitudes of left slip on through-going east-striking fault zones of the Eastern Transverse Ranges and those predicted by simple geometric models using paleomagnetically determined clockwise rotations of basalts distributed along the faults. To assess the magnitude and source of this discrepancy, we apply new gravity and magnetic data in combination with geologic data to better constrain cumulative fault offsets and to define basin structure for the block between the Pinto Mountain and Chiriaco fault zones. Estimates of offset using the length of pull-apart basins developed within left-stepping strands of the sinistral faults are consistent with those derived by matching offset magnetic anomalies and bedrock patterns, indicating a cumulative offset of at most ~40 km. The upper limit of displacements constrained by the geophysical and geologic data overlaps with the lower limit of those predicted at the 95% confidence level by models of conservative slip located on margins of rigid rotating blocks and the clockwise rotation of the paleomagnetic vectors. Any discrepancy is likely resolved by internal deformation within the blocks, such as intense deformation adjacent to the San Andreas fault (that can account for the absence of basins there as predicted by rigid-block models) and linkage via subsidiary faults between the main faults. © 2009 Geological Society of America.

  13. Improving our fundamental understanding of the role of aerosol-cloud interactions in the climate system.

    PubMed

    Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert

    2016-05-24

    The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.

  14. Improving Our Fundamental Understanding of the Role of Aerosol Cloud Interactions in the Climate System

    NASA Technical Reports Server (NTRS)

    Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kahn, Ralph

    2016-01-01

    The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.

  15. Convergence in parameters and predictions using computational experimental design.

    PubMed

    Hagen, David R; White, Jacob K; Tidor, Bruce

    2013-08-06

    Typically, biological models fitted to experimental data suffer from significant parameter uncertainty, which can lead to inaccurate or uncertain predictions. One school of thought holds that accurate estimation of the true parameters of a biological system is inherently problematic. Recent work, however, suggests that optimal experimental design techniques can select sets of experiments whose members probe complementary aspects of a biochemical network that together can account for its full behaviour. Here, we implemented an experimental design approach for selecting sets of experiments that constrain parameter uncertainty. We demonstrated with a model of the epidermal growth factor-nerve growth factor pathway that, after synthetically performing a handful of optimal experiments, the uncertainty in all 48 parameters converged below 10 per cent. Furthermore, the fitted parameters converged to their true values with a small error consistent with the residual uncertainty. When untested experimental conditions were simulated with the fitted models, the predicted species concentrations converged to their true values with errors that were consistent with the residual uncertainty. This paper suggests that accurate parameter estimation is achievable with complementary experiments specifically designed for the task, and that the resulting parametrized models are capable of accurate predictions.
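
    The idea of choosing experiments that jointly constrain parameters can be sketched with a generic greedy D-optimal selection for a linear model; this is an assumption-laden stand-in, not the specific design criterion used in the study.

      import numpy as np

      rng = np.random.default_rng(2)
      n_candidates, n_params = 200, 5
      candidates = rng.standard_normal((n_candidates, n_params))  # candidate design rows

      chosen = []
      info = 1e-6 * np.eye(n_params)          # prior Fisher information (regularizer)
      for _ in range(10):                     # greedily pick 10 experiments
          best, best_logdet = None, -np.inf
          for i in range(n_candidates):
              if i in chosen:
                  continue
              x = candidates[i:i + 1]
              _, logdet = np.linalg.slogdet(info + x.T @ x)
              if logdet > best_logdet:
                  best, best_logdet = i, logdet
          chosen.append(best)
          x = candidates[best:best + 1]
          info += x.T @ x                     # accumulate information from the chosen experiment

      cov = np.linalg.inv(info)               # resulting parameter covariance
      print("selected experiments:", chosen)
      print("largest parameter std:", np.sqrt(np.diag(cov)).max())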

  16. Improving our fundamental understanding of the role of aerosol-cloud interactions in the climate system

    DOE PAGES

    Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; ...

    2016-05-24

    The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from pre-industrial time. General Circulation Models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. Lastly, we suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.

  17. Improving our fundamental understanding of the role of aerosol−cloud interactions in the climate system

    PubMed Central

    Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kraucunas, Ian; Molina, Mario J.; Nenes, Athanasios; Penner, Joyce E.; Prather, Kimberly A.; Ramanathan, V.; Ramaswamy, Venkatachalam; Rasch, Philip J.; Ravishankara, A. R.; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert

    2016-01-01

    The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol−cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol−cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol−cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty. PMID:27222566

  18. Prediction in a visual language: real-time sentence processing in American Sign Language across development.

    PubMed

    Lieberman, Amy M; Borovsky, Arielle; Mayberry, Rachel I

    2018-01-01

    Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eyetracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimizing visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.

  19. Balancing computation and communication power in power constrained clusters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piga, Leonardo; Paul, Indrani; Huang, Wei

    Systems, apparatuses, and methods for balancing computation and communication power in power constrained environments. A data processing cluster with a plurality of compute nodes may perform parallel processing of a workload in a power constrained environment. Nodes that finish tasks early may be power-gated based on one or more conditions. In some scenarios, a node may predict a wait duration and go into a reduced power consumption state if the wait duration is predicted to be greater than a threshold. The power saved by power-gating one or more nodes may be reassigned for use by other nodes. A cluster agent may be configured to reassign the unused power to the active nodes to expedite workload processing.
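
    A minimal sketch of the wait-duration test described above: a node predicted to idle longer than a threshold is power-gated and its budget is redistributed to active nodes. All names and numbers are illustrative.

      from dataclasses import dataclass

      @dataclass
      class Node:
          name: str
          predicted_wait_s: float
          power_budget_w: float
          gated: bool = False

      WAIT_THRESHOLD_S = 5.0   # assumed threshold for entering a low-power state

      def rebalance(nodes):
          reclaimed = 0.0
          for n in nodes:
              if n.predicted_wait_s > WAIT_THRESHOLD_S and not n.gated:
                  n.gated = True                 # power-gate the idle node
                  reclaimed += n.power_budget_w
                  n.power_budget_w = 0.0
          active = [n for n in nodes if not n.gated]
          for n in active:                       # share the reclaimed power evenly
              n.power_budget_w += reclaimed / len(active)
          return nodes

      nodes = [Node("n0", 0.5, 100.0), Node("n1", 12.0, 100.0), Node("n2", 1.0, 100.0)]
      for n in rebalance(nodes):
          print(n)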

  20. Reducing the Uncertainties in Direct Aerosol Radiative Forcing

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph A.

    2011-01-01

    Airborne particles, which include desert and soil dust, wildfire smoke, sea salt, volcanic ash, black carbon, natural and anthropogenic sulfate, nitrate, and organic aerosol, affect Earth's climate, in part by reflecting and absorbing sunlight. This paper reviews current status, and evaluates future prospects for reducing the uncertainty aerosols contribute to the energy budget of Earth, which at present represents a leading factor limiting the quality of climate predictions. Information from satellites is critical for this work, because they provide frequent, global coverage of the diverse and variable atmospheric aerosol load. Both aerosol amount and type must be determined. Satellites are very close to measuring aerosol amount at the level-of-accuracy needed, but aerosol type, especially how bright the airborne particles are, cannot be constrained adequately by current techniques. However, satellite instruments can map out aerosol air mass type, which is a qualitative classification rather than a quantitative measurement, and targeted suborbital measurements can provide the required particle property detail. So combining satellite and suborbital measurements, and then using this combination to constrain climate models, will produce a major advance in climate prediction.

  1. Reflected stochastic differential equation models for constrained animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
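
    A minimal sketch of the underlying construction, under simplifying assumptions: a one-dimensional Euler-Maruyama path with reflection at the boundaries of an interval, standing in for movement confined by a barrier. The drift, diffusion, and reflection rule are illustrative, not the authors' inference machinery.

      import numpy as np

      rng = np.random.default_rng(3)
      L = 10.0                 # width of the allowed interval [0, L] (assumed)
      mu, sigma = 0.05, 0.8    # drift and diffusion coefficients (assumed)
      dt, n_steps = 0.1, 5000

      x = np.empty(n_steps)
      x[0] = 5.0
      for t in range(n_steps - 1):
          step = x[t] + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
          # reflect at the boundaries instead of letting the path leave [0, L]
          if step < 0.0:
              step = -step
          elif step > L:
              step = 2 * L - step
          x[t + 1] = step

      print("path stays in [0, L]:", bool(x.min() >= 0.0 and x.max() <= L))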

  2. The sensitivity of soil respiration to soil temperature, moisture, and carbon supply at the global scale.

    PubMed

    Hursh, Andrew; Ballantyne, Ashley; Cooper, Leila; Maneta, Marco; Kimball, John; Watts, Jennifer

    2017-05-01

    Soil respiration (Rs) is a major pathway by which fixed carbon in the biosphere is returned to the atmosphere, yet there are limits to our ability to predict respiration rates using environmental drivers at the global scale. While temperature, moisture, carbon supply, and other site characteristics are known to regulate soil respiration rates at plot scales within certain biomes, quantitative frameworks for evaluating the relative importance of these factors across different biomes and at the global scale require tests of the relationships between field estimates and global climatic data. This study evaluates the factors driving Rs at the global scale by linking global datasets of soil moisture, soil temperature, primary productivity, and soil carbon estimates with observations of annual Rs from the Global Soil Respiration Database (SRDB). We find that calibrating models with parabolic soil moisture functions can improve predictive power over similar models with asymptotic functions of mean annual precipitation. Soil temperature is comparable with previously reported air temperature observations used in predicting Rs and is the dominant driver of Rs in global models; however, within certain biomes soil moisture and soil carbon emerge as dominant predictors of Rs. We identify regions where typical temperature-driven responses are further mediated by soil moisture, precipitation, and carbon supply and regions in which environmental controls on high Rs values are difficult to ascertain due to limited field data. Because soil moisture integrates temperature and precipitation dynamics, it can more directly constrain the heterotrophic component of Rs, but global-scale models tend to smooth its spatial heterogeneity by aggregating factors that increase moisture variability within and across biomes. We compare statistical and mechanistic models that provide independent estimates of global Rs ranging from 83 to 108 Pg yr⁻¹, but also highlight regions of uncertainty where more observations are required or environmental controls are hard to constrain. © 2016 John Wiley & Sons Ltd.
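
    The contrast between a parabolic soil-moisture response and an asymptotic one can be sketched as below, fitting both forms to synthetic data with least squares; the functional forms and data are illustrative assumptions, not the SRDB calibration itself.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(4)
      sm = rng.uniform(0.05, 0.6, 300)                   # volumetric soil moisture
      temp = rng.uniform(0.0, 25.0, 300)                 # soil temperature (C)
      rs_obs = (1.2 * 2.0 ** (temp / 10.0)               # Q10-type temperature term
                * 4 * sm * (0.6 - sm) / 0.36             # parabolic moisture optimum
                + rng.normal(0, 0.2, 300))

      def rs_parabolic(X, r0, q10, sm_opt):
          t, m = X
          return r0 * q10 ** (t / 10.0) * np.clip(1 - ((m - sm_opt) / sm_opt) ** 2, 0, None)

      def rs_asymptotic(X, r0, q10, k):
          t, m = X
          return r0 * q10 ** (t / 10.0) * m / (k + m)

      p_par, _ = curve_fit(rs_parabolic, (temp, sm), rs_obs, p0=[1.0, 2.0, 0.3], bounds=(0, np.inf))
      p_asy, _ = curve_fit(rs_asymptotic, (temp, sm), rs_obs, p0=[1.0, 2.0, 0.1], bounds=(0, np.inf))

      for name, f, p in [("parabolic", rs_parabolic, p_par), ("asymptotic", rs_asymptotic, p_asy)]:
          rmse = np.sqrt(np.mean((f((temp, sm), *p) - rs_obs) ** 2))
          print(f"{name} moisture function RMSE: {rmse:.3f}")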

  3. Constraining estimates of global soil respiration by quantifying sources of variability.

    PubMed

    Jian, Jinshi; Steele, Meredith K; Thomas, R Quinn; Day, Susan D; Hodges, Steven C

    2018-05-10

    Quantifying global soil respiration (RSG) and its response to temperature change are critical for predicting the turnover of terrestrial carbon stocks and their feedbacks to climate change. Currently, estimates of RSG range from 68 to 98 Pg C year⁻¹, causing considerable uncertainty in the global carbon budget. We argue the source of this variability lies in the upscaling assumptions regarding the model format, data timescales, and precipitation component. To quantify the variability and constrain RSG, we developed RSG models using Random Forest and exponential models, and used different timescales (daily, monthly, and annual) of soil respiration (RS) and climate data to predict RSG. From the resulting RSG estimates (range = 66.62-100.72 Pg), we calculated variability associated with each assumption. Among model formats, using monthly RS data rather than annual data decreased RSG by 7.43-9.46 Pg; however, RSG calculated from daily RS data was only 1.83 Pg lower than the RSG from monthly data. Using mean annual precipitation and temperature data instead of monthly data caused +4.84 and -4.36 Pg C differences, respectively. If the timescale of RS data is constant, RSG estimated by the first-order exponential (93.2 Pg) was greater than the Random Forest (78.76 Pg) or second-order exponential (76.18 Pg) estimates. These results highlight the importance of variation at subannual timescales for upscaling to RSG. The results indicated RSG is lower than in recent papers and the current benchmark for land models (98 Pg C year⁻¹), and thus may change the predicted rates of terrestrial carbon turnover and the carbon to climate feedback as global temperatures rise. © 2018 John Wiley & Sons Ltd.

  4. A plastic flow model for the Acquara - Vadoncello landslide in Senerchia, Southern Italy

    USGS Publications Warehouse

    Savage, W.; Wasowski, J.

    2006-01-01

    A previously developed model for stress and velocity fields in two-dimensional Coulomb plastic materials under self-weight and pore pressure predicts that long, shallow landslides develop slip surfaces that manifest themselves as normal faults and normal fault scarps at the surface in areas of extending flow and as thrust faults and thrust fault scarps at the surface in areas of compressive flow. We have applied this model to describe the geometry of slip surfaces and ground stresses developed during the 1995 reactivation of the Acquara - Vadoncello landslide in Senerchia, southern Italy. This landslide is a long and shallow slide in which regions of compressive and extending flow are clearly identified. Slip surfaces in the main scarp region of the landslide have been reconstructed using surface surveys and subsurface borehole logging and inclinometer observations made during retrogression of the main scarp. Two of the four inferred main scarp slip surfaces are best constrained by field data. Slip surfaces in the toe region are reconstructed in the same way and three of the five inferred slip surfaces are similarly constrained. The location of the basal shear surface of the landslide is inferred from borehole logging and borehole inclinometry. Extensive data on material properties, landslide geometries, and pore pressures collected for the Acquara - Vadoncello landslide give values for cohesion, friction angle, and unit weight, plus average basal shear-surface slopes, and pore-pressures required for modelling slip surfaces and stress fields. Results obtained from the landslide-flow model and the field data show that predicted slip surface shapes are consistent with inferred slip surface shapes in both the extending flow main scarp region and in the compressive flow toe region of the Acquara - Vadoncello landslide. Also, predicted stress distributions are found to explain deformation features seen in the toe and main scarp regions of the landslide. © 2005 Elsevier B.V. All rights reserved.

  5. Determinants of Antibiotic Consumption - Development of a Model using Partial Least Squares Regression based on Data from India.

    PubMed

    Tamhankar, Ashok J; Karnik, Shreyasee S; Stålsby Lundborg, Cecilia

    2018-04-23

    Antibiotic resistance, a consequence of antibiotic use, is a threat to health, with severe consequences for resource-constrained settings. If the determinants of human antibiotic use in India, a lower-middle-income country with one of the highest levels of antibiotic consumption in the world, could be understood, interventions could be developed, with implications for similar settings. Year-wise data for India on potential determinants and antibiotic consumption were sourced from publicly available databases for the years 2000-2010. Data were analyzed using Partial Least Squares regression, and the correlation between determinants and antibiotic consumption was evaluated to formulate 'Predictors' and 'Prediction models'. The prediction model with the statistically most significant predictors (root mean square errors of prediction: training set 377.0, test set 297.0), formulated from a combination of health infrastructure and surface transport infrastructure (HISTI), predicted antibiotic consumption within the 95% confidence interval and estimated a consumption of 11.6 standard units/person (14.37 billion standard units in total; standard units = number of doses sold in the country, a dose being a pill, capsule, or ampoule) for India in 2014. The HISTI model may become useful in predicting antibiotic consumption for countries/regions with circumstances and data similar to India's but without the resources to measure antibiotic consumption directly.
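
    A minimal sketch of the regression step, using scikit-learn's PLSRegression on synthetic indicator series standing in for the 2000-2010 determinants; the variables and values are illustrative assumptions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(5)
      n_years, n_determinants = 11, 6                # e.g. health and transport indicators
      X = rng.standard_normal((n_years, n_determinants))
      true_w = np.array([1.5, 0.0, 0.8, 0.0, 0.0, -0.5])
      y = X @ true_w + rng.normal(0, 0.1, n_years)   # antibiotic-consumption proxy (synthetic)

      train, test = slice(0, 8), slice(8, 11)        # hold out the last three years
      pls = PLSRegression(n_components=2)
      pls.fit(X[train], y[train])

      pred = pls.predict(X[test]).ravel()
      rmse = np.sqrt(np.mean((y[test] - pred) ** 2))
      print(f"held-out RMSE: {rmse:.3f}")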

  6. Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise

    NASA Astrophysics Data System (ADS)

    Kocheemoolayil, Joseph; Lele, Sanjiva

    2014-11-01

    Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.

  7. Surfactant enhanced recovery of tetrachloroethylene from a porous medium containing low permeability lenses. 2. Numerical simulation.

    PubMed

    Rathfelder, K M; Abriola, L M; Taylor, T P; Pennell, K D

    2001-04-01

    A numerical model of surfactant enhanced solubilization was developed and applied to the simulation of nonaqueous phase liquid recovery in two-dimensional heterogeneous laboratory sand tank systems. Model parameters were derived from independent, small-scale, batch and column experiments. These parameters included viscosity, density, solubilization capacity, surfactant sorption, interfacial tension, permeability, capillary retention functions, and interphase mass transfer correlations. Model predictive capability was assessed for the evaluation of the micellar solubilization of tetrachloroethylene (PCE) in the two-dimensional systems. Predicted effluent concentrations and mass recovery agreed reasonably well with measured values. Accurate prediction of enhanced solubilization behavior in the sand tanks was found to require the incorporation of pore-scale, system-dependent, interphase mass transfer limitations, including an explicit representation of specific interfacial contact area. Predicted effluent concentrations and mass recovery were also found to depend strongly upon the initial NAPL entrapment configuration. Numerical results collectively indicate that enhanced solubilization processes in heterogeneous, laboratory sand tank systems can be successfully simulated using independently measured soil parameters and column-measured mass transfer coefficients, provided that permeability and NAPL distributions are accurately known. This implies that the accuracy of model predictions at the field scale will be constrained by our ability to quantify soil heterogeneity and NAPL distribution.

  8. Isoprene emissions over Asia 1979-2012 : impact of climate and land use changes

    NASA Astrophysics Data System (ADS)

    Stavrakou, Trissevgeni; Müller, Jean-Francois; Bauwens, Maite; Guenther, Alex; De Smedt, Isabelle; Van Roozendael, Michel

    2014-05-01

    Due to the scarcity of observational constraints and the rapidly changing environment in East and Southeast Asia, isoprene emissions predicted by models are expected to bear substantial uncertainties. This study aims to improve upon current bottom-up estimates and to investigate the temporal evolution of isoprene fluxes in Asia over 1979-2012. To that end, we use the MEGAN model and incorporate (i) changes in land use, including the rapid expansion of oil palms, (ii) meteorological variability, (iii) long-term changes in solar radiation constrained by surface network measurements, and (iv) recent experimental evidence that South Asian forests are much weaker isoprene emitters than previously assumed. These effects lead to a significant reduction of the total isoprene fluxes over the studied domain compared to the standard simulation. The bottom-up emissions are evaluated using satellite-based emission estimates derived from inverse modelling constrained by GOME-2/MetOp-A formaldehyde columns through 2007-2012. The top-down estimates support our assumptions and confirm the lower isoprene emission rate in tropical forests of Indonesia and Malaysia.

  9. Constraints on the symmetry energy from neutron star observations

    NASA Astrophysics Data System (ADS)

    Newton, W. G.; Gearheart, M.; Wen, De-Hua; Li, Bao-An

    2013-03-01

    The modeling of many neutron star observables incorporates the microphysics of both the stellar crust and core, which is tied intimately to the properties of the nuclear matter equation of state (EoS). We explore the predictions of such models over the range of experimentally constrained nuclear matter parameters, focusing on the slope of the symmetry energy at nuclear saturation density L. We use a consistent model of the composition and EoS of neutron star crust and core matter to model the binding energy of pulsar B of the double pulsar system J0737-3039, the frequencies of torsional oscillations of the neutron star crust and the instability region for r-modes in the neutron star core damped by electron-electron viscosity at the crust-core interface. By confronting these models with observations, we illustrate the potential of astrophysical observables to offer constraints on poorly known nuclear matter parameters complementary to terrestrial experiments, and demonstrate that our models consistently predict L < 70 MeV.

  10. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks.

    PubMed

    Yan, Zheng; Wang, Jun

    2014-03-01

    This paper presents a neural network approach to robust model predictive control (MPC) for constrained discrete-time nonlinear systems with unmodeled dynamics affected by bounded uncertainties. The exact nonlinear model of the underlying process is not precisely known, but a partially known nominal model is available. This partially known nonlinear model is first decomposed into an affine term plus an unknown high-order term via Jacobian linearization. The linearization residue combined with unmodeled dynamics is then modeled using an extreme learning machine via supervised learning. The minimax methodology is exploited to deal with bounded uncertainties. The minimax optimization problem is reformulated as a convex minimization problem and is iteratively solved by a two-layer recurrent neural network. The proposed neurodynamic approach to nonlinear MPC improves the computational efficiency and sheds light on the real-time implementability of MPC technology. Simulation results are provided to substantiate the effectiveness and characteristics of the proposed approach.
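
    For orientation, the receding-horizon optimization at the core of MPC can be sketched as below for a simple linear system with input bounds; this omits the paper's extreme learning machine, minimax treatment of uncertainty, and recurrent-network solver.

      import numpy as np
      from scipy.optimize import minimize

      # double-integrator-like system x_{k+1} = A x_k + B u_k (assumed example)
      A = np.array([[1.0, 0.1], [0.0, 1.0]])
      B = np.array([[0.005], [0.1]])
      Q, R = np.diag([1.0, 0.1]), np.array([[0.01]])
      N = 10                                  # prediction horizon
      u_max = 1.0                             # input constraint |u| <= u_max

      def cost(u_seq, x0):
          x, J = x0.copy(), 0.0
          for u in u_seq:
              J += x @ Q @ x + u * R[0, 0] * u
              x = A @ x + B[:, 0] * u
          return J + x @ Q @ x                # terminal cost

      x0 = np.array([1.0, 0.0])
      res = minimize(cost, np.zeros(N), args=(x0,),
                     bounds=[(-u_max, u_max)] * N, method="L-BFGS-B")
      u_apply = res.x[0]                      # apply only the first move (receding horizon)
      print(f"first control move: {u_apply:.4f}")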

  11. Toward Process-resolving Synthesis and Prediction of Arctic Climate Change Using the Regional Arctic System Model

    NASA Astrophysics Data System (ADS)

    Maslowski, W.

    2017-12-01

    The Regional Arctic System Model (RASM) has been developed to better understand the operation of the Arctic System at the process scale and to improve prediction of its change at a spectrum of time scales. RASM is a pan-Arctic, fully coupled ice-ocean-atmosphere-land model with a marine biogeochemistry extension to the ocean and sea ice models. The main goal of our research is to advance a system-level understanding of critical processes and feedbacks in the Arctic and their links with the Earth System. A secondary, equally important objective is to identify model needs for new or additional observations to better understand such processes and to help constrain models. Finally, RASM has been used to produce sea ice forecasts for September 2016 and 2017, in contribution to the Sea Ice Outlook of the Sea Ice Prediction Network. Future RASM forecasts are likely to include increased resolution for model components and ecosystem predictions. Such research is in direct support of US environmental assessment and prediction needs, including those of the U.S. Navy, Department of Defense, and the recent IARPC Arctic Research Plan 2017-2021. In addition to an overview of RASM technical details, selected model results are presented from a hierarchy of climate models together with available observations in the region to better understand potential oceanic contributions to polar amplification. RASM simulations are analyzed to evaluate model skill in representing seasonal climatology as well as interannual and multi-decadal climate variability and predictions. Selected physical processes and resulting feedbacks are discussed to emphasize the need for fully coupled climate model simulations, high model resolution and sensitivity of simulated sea ice states to scale dependent model parameterizations controlling ice dynamics, thermodynamics and coupling with the atmosphere and ocean.

  12. The Image-Optimized Corona; Progress on Using Coronagraph Images to Constrain Coronal Magnetic Field Models

    NASA Astrophysics Data System (ADS)

    Jones, S. I.; Uritsky, V. M.; Davila, J. M.

    2017-12-01

    In the absence of reliable coronal magnetic field measurements, solar physicists have worked for several decades to develop techniques for extrapolating photospheric magnetic field measurements into the solar corona and/or heliosphere. The products of these efforts tend to be very sensitive to variation in the photospheric measurements, such that the uncertainty in the photospheric measurements introduces significant uncertainty into the coronal and heliospheric models needed to predict such things as solar wind speed, IMF polarity at Earth, and CME propagation. Ultimately, the reason for the sensitivity of the model to the boundary conditions is that the model is trying to extract a great deal of information from a relatively small amount of data. In recent years we have published on a new method we are developing that uses morphological information gleaned from coronagraph images to constrain models of the global coronal magnetic field. In our approach, we treat the photospheric measurements as approximations and use an optimization algorithm to iteratively find a global coronal model that best matches both the photospheric measurements and quasi-linear features observed in polarization brightness coronagraph images. Here we will summarize the approach we have developed and present recent progress in optimizing PFSS models based on GONG magnetograms and MLSO K-Cor images.

  13. Constraining the JULES land-surface model for different land-use types using citizen-science generated hydrological data

    NASA Astrophysics Data System (ADS)

    Chou, H. K.; Ochoa-Tocachi, B. F.; Buytaert, W.

    2017-12-01

    Community land surface models such as JULES are increasingly used for hydrological assessment because of their state-of-the-art representation of land-surface processes. However, a major weakness of JULES and other land surface models is the limited number of land surface parameterizations that are available. Therefore, this study explores the use of data from a network of catchments under homogeneous land-use to generate parameter "libraries" to extend the land surface parameterizations of JULES. The network (called iMHEA) is part of a grassroots initiative to characterise the hydrological response of different Andean ecosystems, and collects data on streamflow, precipitation, and several weather variables at a high temporal resolution. The tropical Andes are a useful case study because of the complexity of meteorological and geographical conditions combined with extremely heterogeneous land-use that result in a wide range of hydrological responses. We then calibrated JULES for each land-use represented in the iMHEA dataset. For the individual land-use types, the results show improved simulations of streamflow when using the calibrated parameters with respect to default values. In particular, the partitioning between surface and subsurface flows can be improved. On a regional scale, hydrological modelling also benefited greatly from constraining parameters with such distributed, citizen-science-generated streamflow data. This study demonstrates regional hydrological modelling and prediction by integrating citizen science with a land surface model. In the context of hydrological studies, this framework can indeed help overcome the limitation of data scarcity. Improved predictions of such impacts could be leveraged by catchment managers to guide watershed interventions, to evaluate their effectiveness, and to minimize risks.

  14. Genome Informed Trait-Based Models

    NASA Astrophysics Data System (ADS)

    Karaoz, U.; Cheng, Y.; Bouskill, N.; Tang, J.; Beller, H. R.; Brodie, E.; Riley, W. J.

    2013-12-01

    Trait-based approaches are powerful tools for representing microbial communities across both spatial and temporal scales within ecosystem models. Trait-based models (TBMs) represent the diversity of microbial taxa as stochastic assemblages with a distribution of traits constrained by trade-offs between these traits. Such representation with its built-in stochasticity allows the elucidation of the interactions between the microbes and their environment by reducing the complexity of microbial community diversity into a limited number of functional 'guilds' and letting them emerge across spatio-temporal scales. From the biogeochemical/ecosystem modeling perspective, the emergent properties of the microbial community could be directly translated into predictions of biogeochemical reaction rates and microbial biomass. The accuracy of TBMs depends on the identification of key traits of the microbial community members and on the parameterization of these traits. Current approaches to inform TBM parameterization are empirical (i.e., based on literature surveys). Advances in omic technologies (such as genomics, metagenomics, metatranscriptomics, and metaproteomics) pave the way to better initialize models that can be constrained in a generic or site-specific fashion. Here we describe the coupling of metagenomic data to the development of a TBM representing the dynamics of metabolic guilds from an organic carbon stimulated groundwater microbial community. Illumina paired-end metagenomic data were collected from the community as it transitioned successively through electron-accepting conditions (nitrate-, sulfate-, and Fe(III)-reducing), and used to inform estimates of growth rates and the distribution of metabolic pathways (i.e., aerobic and anaerobic oxidation, fermentation) across a spatially resolved TBM. We use this model to evaluate the emergence of different metabolisms and predict rates of biogeochemical processes over time. We compare our results to observational outputs.

  15. Gravitational Wave Signals from the First Massive Black Hole Seeds

    NASA Astrophysics Data System (ADS)

    Hartwig, Tilman; Agarwal, Bhaskar; Regan, John A.

    2018-05-01

    Recent numerical simulations reveal that the isothermal collapse of pristine gas in atomic cooling haloes may result in stellar binaries of supermassive stars with M* ≳ 10⁴ M⊙. For the first time, we compute the in-situ merger rate for such massive black hole remnants by combining their abundance and multiplicity estimates. For black holes with initial masses in the range 10⁴-10⁶ M⊙ merging at redshifts z ≳ 15, our optimistic model predicts that LISA should be able to detect 0.6 mergers per year. This rate of detection can be attributed, without confusion, to the in-situ mergers of seeds from the collapse of very massive stars. Equally, in the case where LISA observes no mergers from heavy seeds at z ≳ 15, we can constrain the combined number density, multiplicity, and coalescence times of these high-redshift systems. This letter proposes gravitational wave signatures as a means to constrain theoretical models and processes that govern the abundance of massive black hole seeds in the early Universe.

  16. Constrained Source Apportionment of Coarse Particulate Matter and Selected Trace Elements in Three Cities from the Multi-Ethnic Study of Atherosclerosis

    PubMed Central

    Sturtz, Timothy M.; Adar, Sara D.; Gould, Timothy; Larson, Timothy V.

    2016-01-01

    PM10-2.5 mass and trace element concentrations were measured in Winston-Salem, Chicago, and St. Paul at up to 60 sites per city during two different seasons in 2010. Positive Matrix Factorization (PMF) was used to explore the underlying sources of variability. Information on previously reported PM10-2.5 tire and brake wear profiles was used to constrain these features in PMF by prior specification of selected species ratios. We also modified PMF to allow for combining the measurements from all three cities into a single model while preserving city-specific soil features. Relatively minor differences were observed between model predictions with and without the prior ratio constraints, increasing confidence in our ability to identify separate brake wear and tire wear features. Brake wear, tire wear, fertilized soil, and re-suspended soil were found to be important sources of copper, zinc, phosphorus, and silicon respectively across all three urban areas. PMID:27468256

  17. Constrained source apportionment of coarse particulate matter and selected trace elements in three cities from the multi-ethnic study of atherosclerosis

    NASA Astrophysics Data System (ADS)

    Sturtz, Timothy M.; Adar, Sara D.; Gould, Timothy; Larson, Timothy V.

    2014-02-01

    PM10-2.5 mass and trace element concentrations were measured in Winston-Salem, Chicago, and St. Paul at up to 60 sites per city during two different seasons in 2010. Positive Matrix Factorization (PMF) was used to explore the underlying sources of variability. Information on previously reported PM10-2.5 tire and brake wear profiles was used to constrain these features in PMF by prior specification of selected species ratios. We also modified PMF to allow for combining the measurements from all three cities into a single model while preserving city-specific soil features. Relatively minor differences were observed between model predictions with and without the prior ratio constraints, increasing confidence in our ability to identify separate brake wear and tire wear features. Brake wear, tire wear, fertilized soil, and resuspended soil were found to be important sources of copper, zinc, phosphorus, and silicon, respectively, across all three urban areas.
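
    As a rough stand-in for the factorization step, the sketch below applies an unconstrained non-negative matrix factorization to a synthetic sites-by-species matrix; the prior ratio constraints on brake- and tire-wear profiles and the city-specific soil features described above are not represented.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(6)
      n_samples, n_species, n_factors = 120, 15, 4
      true_profiles = rng.random((n_factors, n_species))        # synthetic source profiles
      true_contrib = rng.random((n_samples, n_factors))          # synthetic contributions
      X = true_contrib @ true_profiles + 0.01 * rng.random((n_samples, n_species))

      model = NMF(n_components=n_factors, init="nndsvda", max_iter=500, random_state=0)
      contributions = model.fit_transform(X)     # factor contributions per sample
      profiles = model.components_               # factor chemical profiles
      print("reconstruction error:", model.reconstruction_err_)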

  18. Adaptive adjustment of interval predictive control based on combined model and application in shell brand petroleum distillation tower

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin

    2017-10-01

    Constraints on the optimization objective often cannot be met when predictive control is applied to industrial production processes; the online predictive controller then fails to find a feasible, or globally optimal, solution. To address this problem, a nonlinear programming method based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model is used to analyse the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a solution method using soft-constraint slack variables is given for the case in which the optimization objective is infeasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, which adaptively regulates the optimization objective, automatically adjusts the infeasible interval range, expands the feasible region, and ensures the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.
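
    The soft-constraint idea can be sketched with a tiny linear program: when the hard output constraint is infeasible, a penalized slack variable absorbs the minimum necessary violation. The model and numbers are illustrative, not the BP-ARX formulation.

      import numpy as np
      from scipy.optimize import linprog

      # decision variables: [u, s], with s >= 0 the slack on the output constraint
      # minimize  u + 1000 * s   (large penalty keeps s at the minimum violation)
      # subject to  y = 2*u + 3 <= y_max + s,   0 <= u <= 1
      y_max = 2.0
      c = np.array([1.0, 1000.0])
      A_ub = np.array([[2.0, -1.0]])          # 2u - s <= y_max - 3
      b_ub = np.array([y_max - 3.0])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, None)])
      u, slack = res.x
      print(f"u = {u:.3f}, slack = {slack:.3f}")  # slack > 0 flags an infeasible hard constraint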

  19. Physics Constrained Stochastic-Statistical Models for Extended Range Environmental Prediction

    DTIC Science & Technology

    2014-09-30

    Only fragmentary report text is available for this record. The recoverable content concerns North Pacific patterns of sea surface temperature and sea level pressure (SLP) and a figure caption describing the reconstruction of Arctic sea ice concentration, SST, and SLP anomalies using NLSA reemergence modes.

  20. Solar ultraviolet radiation induced variations in the stratosphere and mesosphere

    NASA Technical Reports Server (NTRS)

    Hood, L. L.

    1987-01-01

    The detectability and interpretation of short-term solar UV induced responses of middle atmospheric ozone, temperature, and dynamics are reviewed. The detectability of solar UV induced perturbations in the middle atmosphere is studied in terms of seasonal and endogenic dynamical variations. The interpretation of low-latitude ozone and possible temperature responses on the solar rotation time scale is examined. The use of these data to constrain or test photochemical model predictions is discussed.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Post, Wilfred M; King, Anthony Wayne; Dragoni, Danilo

    Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
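
    The record-length effect can be sketched with a toy model-data fusion: fit a simple seasonal flux model to synthetic observations of increasing length and track the parameter standard errors from the fit covariance. The model form and noise level are assumptions, not LoTEC.

      import numpy as np
      from scipy.optimize import curve_fit

      def nee_model(doy, amplitude, baseline):
          # toy seasonal net-ecosystem-exchange model (not LoTEC)
          return baseline - amplitude * np.sin(2 * np.pi * doy / 365.0)

      rng = np.random.default_rng(7)
      true_amp, true_base = 3.0, 0.5
      for years in (1, 3, 5, 10, 20):
          doy = np.arange(365 * years)
          obs = nee_model(doy, true_amp, true_base) + rng.normal(0, 1.5, doy.size)
          p, cov = curve_fit(nee_model, doy, obs, p0=[1.0, 0.0])
          print(f"{years:2d} yr record: parameter std = {np.sqrt(np.diag(cov))}")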

  2. Gravitational baryogenesis in running vacuum models

    NASA Astrophysics Data System (ADS)

    Oikonomou, V. K.; Pan, Supriya; Nunes, Rafael C.

    2017-08-01

    We study the gravitational baryogenesis mechanism for generating baryon asymmetry in the context of running vacuum models. Regardless of whether these models can produce a viable cosmological evolution, we demonstrate that they produce a nonzero baryon-to-entropy ratio even if the universe is filled with conformal matter. This is a sound difference between the running vacuum gravitational baryogenesis and the Einstein-Hilbert one, since in the latter case, the predicted baryon-to-entropy ratio is zero. We consider two well known and most used running vacuum models and show that the resulting baryon-to-entropy ratio is compatible with the observational data. Moreover, we also show that the mechanism of gravitational baryogenesis may constrain the running vacuum models.

  3. Prioritizing CD4 Count Monitoring in Response to ART in Resource-Constrained Settings: A Retrospective Application of Prediction-Based Classification

    PubMed Central

    Liu, Yan; Li, Xiaohong; Johnson, Margaret; Smith, Collette; Kamarulzaman, Adeeba bte; Montaner, Julio; Mounzer, Karam; Saag, Michael; Cahn, Pedro; Cesar, Carina; Krolewiecki, Alejandro; Sanne, Ian; Montaner, Luis J.

    2012-01-01

    Background Global programs of anti-HIV treatment depend on sustained laboratory capacity to assess treatment initiation thresholds and treatment response over time. Currently, there is no valid alternative to CD4 count testing for monitoring immunologic responses to treatment, but laboratory cost and capacity limit access to CD4 testing in resource-constrained settings. Thus, methods to prioritize patients for CD4 count testing could improve treatment monitoring by optimizing resource allocation. Methods and Findings Using a prospective cohort of HIV-infected patients (n = 1,956) monitored upon antiretroviral therapy initiation in seven clinical sites with distinct geographical and socio-economic settings, we retrospectively apply a novel prediction-based classification (PBC) modeling method. The model uses repeatedly measured biomarkers (white blood cell count and lymphocyte percent) to predict CD4+ T cell outcome through first-stage modeling and subsequent classification based on clinically relevant thresholds (CD4+ T cell count of 200 or 350 cells/µl). The algorithm correctly classified 90% (cross-validation estimate = 91.5%, standard deviation [SD] = 4.5%) of CD4 count measurements <200 cells/µl in the first year of follow-up; if laboratory testing is applied only to patients predicted to be below the 200-cells/µl threshold, we estimate a potential savings of 54.3% (SD = 4.2%) in CD4 testing capacity. A capacity savings of 34% (SD = 3.9%) is predicted using a CD4 threshold of 350 cells/µl. Similar results were obtained over the 3 y of follow-up available (n = 619). Limitations include a need for future economic healthcare outcome analysis, a need for assessment of extensibility beyond the 3-y observation time, and the need to assign a false positive threshold. Conclusions Our results support the use of PBC modeling as a triage point at the laboratory, lessening the need for laboratory-based CD4+ T cell count testing; implementation of this tool could help optimize the use of laboratory resources, directing CD4 testing towards higher-risk patients. However, further prospective studies and economic analyses are needed to demonstrate that the PBC model can be effectively applied in clinical settings. PMID:22529752
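    As a rough, hypothetical illustration of the prediction-based triage idea (not the authors' two-stage longitudinal model), one can train a classifier on routinely available markers to flag patients whose CD4 count is likely below a threshold and reserve laboratory CD4 testing for them. The synthetic data, the single-visit setup, and the threshold handling below are all assumptions.

```python
# Hypothetical sketch of prediction-based triage for CD4 testing.
# Synthetic data; the real PBC method uses repeated measures and a
# first-stage longitudinal model, which is not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
wbc = rng.normal(5.5, 1.5, n)            # white blood cell count (10^3/uL), synthetic
lymph_pct = rng.normal(30, 8, n)         # lymphocyte percent, synthetic
# synthetic CD4 loosely tied to WBC and lymphocyte percent
cd4 = 15 * wbc * lymph_pct / 10 + rng.normal(0, 60, n)
below_200 = (cd4 < 200).astype(int)

X = np.column_stack([wbc, lymph_pct])
X_tr, X_te, y_tr, y_te = train_test_split(X, below_200, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
flagged = clf.predict(X_te).astype(bool)  # send these patients for a lab CD4 test

sensitivity = (flagged & (y_te == 1)).sum() / max((y_te == 1).sum(), 1)
tests_saved = 1 - flagged.mean()          # fraction of CD4 tests avoided
print(f"sensitivity for CD4<200: {sensitivity:.2f}, tests avoided: {tests_saved:.2%}")
```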

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
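    The prediction-correction template can be sketched in a few lines for a toy time-varying problem: at each sampling instant, a prediction step extrapolates the previous solution using its observed drift, and a correction step runs a projected-gradient iteration on the newly revealed cost. This sketch uses a box-constrained quadratic tracking problem and a simple finite-difference predictor, not the Hessian-based first-order predictor of the cited paper.

```python
# Toy prediction-correction tracking of x*(t) = argmin_{0<=x<=1} 0.5*(x - r(t))^2,
# where the target r(t) drifts over time.  The predictor extrapolates the last
# solution increment; the corrector applies a projected gradient step.
import numpy as np

def project(x):                       # projection onto the box [0, 1]
    return np.clip(x, 0.0, 1.0)

def r(t):                             # time-varying target (assumed known online)
    return 0.5 + 0.4 * np.sin(0.5 * t)

dt, steps, alpha = 0.1, 100, 0.5
x, x_prev = 0.0, 0.0
errors = []
for k in range(steps):
    t = k * dt
    # prediction: extrapolate using the previous solution increment
    x_pred = project(x + (x - x_prev))
    x_prev = x
    # correction: one projected gradient step on the cost revealed at time t
    grad = x_pred - r(t)
    x = project(x_pred - alpha * grad)
    errors.append(abs(x - project(r(t))))

print(f"mean tracking error: {np.mean(errors):.4f}")
```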

  5. Constraining the optical potential in the search for η-mesic 4He

    NASA Astrophysics Data System (ADS)

    Skurzok, M.; Moskal, P.; Kelkar, N. G.; Hirenzaki, S.; Nagahiro, H.; Ikeno, N.

    2018-07-01

    A consistent description of the dd → 4Heη and dd → (4Heη)bound → X cross sections was recently proposed with a broad range of real (V0) and imaginary (W0) η-4He optical potential parameters, leading to good agreement with the dd → 4Heη data. Here we compare the predictions of the model below the η production threshold with the WASA-at-COSY excitation functions for the dd → 3HeNπ reactions to put stronger constraints on (V0, W0). The allowed parameter space (with |V0| ≲ 60 MeV and |W0| ≲ 7 MeV, estimated at 90% CL) excludes most optical model predictions of η-4He nuclei except for some loosely bound narrow states.

  6. Estimating current and future streamflow characteristics at ungaged sites, central and eastern Montana, with application to evaluating effects of climate change on fish populations

    USGS Publications Warehouse

    Sando, Roy; Chase, Katherine J.

    2017-03-23

    A common statistical procedure for estimating streamflow statistics at ungaged locations is to develop a relational model between streamflow and drainage basin characteristics at gaged locations using least squares regression analysis; however, least squares regression methods are parametric and make constraining assumptions about the data distribution. The random forest regression method provides an alternative nonparametric method for estimating streamflow characteristics at ungaged sites and requires that the data meet fewer statistical conditions than least squares regression methods. Random forest regression analysis was used to develop predictive models for 89 streamflow characteristics using Precipitation-Runoff Modeling System simulated streamflow data and drainage basin characteristics at 179 sites in central and eastern Montana. The predictive models were developed from streamflow data simulated for current (baseline, water years 1982–99) conditions and three future periods (water years 2021–38, 2046–63, and 2071–88) under three different climate-change scenarios. These predictive models were then used to predict streamflow characteristics for baseline conditions and three future periods at 1,707 fish sampling sites in central and eastern Montana. The average root mean square error for all predictive models was about 50 percent. When streamflow predictions at 23 fish sampling sites were compared to nearby locations with simulated data, the mean relative percent difference was about 43 percent. When predictions were compared to streamflow data recorded at 21 U.S. Geological Survey streamflow-gaging stations outside of the calibration basins, the average mean absolute percent error was about 73 percent.
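    A minimal version of the workflow described above, with synthetic stand-ins for the basin characteristics and the simulated streamflow statistic, might look like the following; the predictor names and data are invented, and the real study used Precipitation-Runoff Modeling System output for 89 streamflow characteristics.

```python
# Sketch: random forest regression of one streamflow statistic on drainage-basin
# characteristics.  All data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
n = 179
drainage_area = rng.lognormal(5, 1, n)        # km^2, synthetic
mean_precip = rng.normal(400, 80, n)          # mm/yr, synthetic
mean_elev = rng.normal(1000, 250, n)          # m, synthetic
# synthetic "mean annual flow" loosely tied to area and precipitation
flow = 1e-3 * drainage_area * mean_precip * rng.lognormal(0, 0.3, n)

X = np.column_stack([drainage_area, mean_precip, mean_elev])
X_tr, X_te, y_tr, y_te = train_test_split(X, flow, test_size=0.25, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)
rmse_pct = 100 * np.sqrt(mean_squared_error(y_te, pred)) / y_te.mean()
print(f"RMSE as percent of mean flow: {rmse_pct:.1f}%")
```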

  7. Thermal and energetic constraints on ectotherm abundance: A global test using lizards

    USGS Publications Warehouse

    Buckley, L.B.; Rodda, G.H.; Jetz, W.

    2008-01-01

    Population densities of birds and mammals have been shown to decrease with body mass at approximately the same rate as metabolic rates increase, indicating that energetic needs constrain endotherm population densities. In ectotherms, the exponential increase of metabolic rate with body temperature suggests that environmental temperature may additionally constrain population densities. Here we test simple bioenergetic models for an ecologically important group of ectothermic vertebrates by examining 483 lizard populations. We find that lizard population densities decrease as a power law of body mass with a slope approximately inverse to the slope of the relationship between metabolic rates and body mass. Energy availability should limit population densities. As predicted, environmental productivity has a positive effect on lizard density, strengthening the relationship between lizard density and body mass. In contrast, the effect of environmental temperature is at most weak due to behavioral thermoregulation, thermal evolution, or the temperature dependence of ectotherm performance. Our results provide initial insights into how energy needs and availability differentially constrain ectotherm and endotherm density across broad spatial scales. © 2008 by the Ecological Society of America.

  8. Thermal and energetic constraints on ectotherm abundance: a global test using lizards.

    PubMed

    Buckley, Lauren B; Rodda, Gordon H; Jetz, Walter

    2008-01-01

    Population densities of birds and mammals have been shown to decrease with body mass at approximately the same rate as metabolic rates increase, indicating that energetic needs constrain endotherm population densities. In ectotherms, the exponential increase of metabolic rate with body temperature suggests that environmental temperature may additionally constrain population densities. Here we test simple bioenergetic models for an ecologically important group of ectothermic vertebrates by examining 483 lizard populations. We find that lizard population densities decrease as a power law of body mass with a slope approximately inverse to the slope of the relationship between metabolic rates and body mass. Energy availability should limit population densities. As predicted, environmental productivity has a positive effect on lizard density, strengthening the relationship between lizard density and body mass. In contrast, the effect of environmental temperature is at most weak due to behavioral thermoregulation, thermal evolution, or the temperature dependence of ectotherm performance. Our results provide initial insights into how energy needs and availability differentially constrain ectotherm and endotherm density across broad spatial scales.
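    The scaling claim in these two records (density decreasing as a power law of body mass, with a slope roughly the inverse of the metabolic-rate slope) can be checked on any density-mass table with an ordinary log-log regression; the synthetic data below assume a true exponent of -0.75 purely for illustration.

```python
# Sketch: estimating the power-law exponent of population density vs. body mass
# from a log-log linear fit.  Data are synthetic with a true slope of -0.75.
import numpy as np

rng = np.random.default_rng(1)
mass = rng.lognormal(mean=3.0, sigma=1.5, size=483)        # body mass (g), synthetic
density = 1e4 * mass**-0.75 * rng.lognormal(0, 0.5, 483)   # individuals/ha, synthetic

slope, intercept = np.polyfit(np.log10(mass), np.log10(density), 1)
print(f"fitted power-law exponent: {slope:.2f}")           # expect roughly -0.75
```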

  9. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization in which decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. It then reviews the different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  10. Seismic-geodynamic constraints on three-dimensional structure, vertical flow, and heat transfer in the mantle

    USGS Publications Warehouse

    Forte, A.M.; Woodward, R.L.

    1997-01-01

    Joint inversions of seismic and geodynamic data are carried out in which we simultaneously constrain global-scale seismic heterogeneity in the mantle as well as the amplitude of vertical mantle flow across the 670 km seismic discontinuity. These inversions reveal the existence of a family of three-dimensional (3-D) mantle models that satisfy the data while at the same time yielding predictions of layered mantle flow. The new 3-D mantle models we obtain demonstrate that the buoyancy forces due to the undulations of the 670 km phase-change boundary strongly inhibit the vertical flow between the upper and lower mantle. The strong stabilizing effect of the 670 km topography also has an important impact on the predicted dynamic topography of the Earth's solid surface and on the surface gravity anomalies. The new 3-D models that predict strongly or partially layered mantle flow provide essentially identical fits to the global seismic data as previous models that have, until now, predicted only whole-mantle flow. The convective vertical transport of heat across the mantle predicted on the basis of the new 3-D models shows that the heat flow is a minimum at 1000 km depth. This suggests the presence at this depth of a globally defined horizon across which the pattern of lateral heterogeneity changes rapidly. Copyright 1997 by the American Geophysical Union.

  11. Predicting fundamental and realized distributions based on thermal niche: A case study of a freshwater turtle

    NASA Astrophysics Data System (ADS)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco; Ribeiro, Bruno R.

    2018-04-01

    Species distribution models (SDMs) have been broadly used in ecology to address theoretical and practical problems. Currently, there are two main approaches to generating SDMs: (i) correlative models, which are based on species occurrences and environmental predictor layers, and (ii) process-based models, which are constructed from species' functional traits and physiological tolerances. The distributions estimated by each approach are based on different components of the species niche. Predictions of correlative models approximate species' realized niches, while predictions of process-based models are more akin to the species' fundamental niche. Here, we integrated predictions of the fundamental and realized distributions of the freshwater turtle Trachemys dorbigni. The fundamental distribution was estimated using data on T. dorbigni's egg incubation temperature, and the realized distribution was estimated using species occurrence records. Both types of distributions were estimated using the same regression approaches (logistic regression and support vector machines), in both cases considering macroclimatic and microclimatic temperatures. The realized distribution of T. dorbigni was generally nested in its fundamental distribution, reinforcing the theoretical expectation that a species' realized niche is a subset of its fundamental niche. Both modelling algorithms produced similar results, but microtemperature generated better results than macrotemperature for the incubation model. Finally, our results reinforce the conclusion that species' realized distributions are constrained by factors other than just thermal tolerances.
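    A hedged sketch of the correlative side of this comparison: fit logistic regression and a support-vector machine to presence/absence records against a temperature predictor and compare the predicted suitabilities. The data are synthetic and the single-predictor setup is far simpler than the study's macroclimatic and microclimatic layers.

```python
# Sketch: correlative species distribution modelling with two algorithms
# (logistic regression and SVM), as in the record above, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n = 500
temp = rng.uniform(5, 35, n)                       # mean temperature (deg C), synthetic
p = np.exp(-0.5 * ((temp - 24) / 4) ** 2)          # unimodal thermal suitability
presence = rng.binomial(1, p)

X = np.column_stack([temp, temp**2])               # quadratic term lets logistic fit a peak
logit = LogisticRegression(max_iter=1000).fit(X, presence)
svm = SVC(probability=True).fit(X, presence)

grid_t = np.linspace(5, 35, 7)
grid = np.column_stack([grid_t, grid_t**2])
print("temperature:      ", np.round(grid_t, 1))
print("logit suitability:", np.round(logit.predict_proba(grid)[:, 1], 2))
print("svm suitability:  ", np.round(svm.predict_proba(grid)[:, 1], 2))
```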

  12. Search for standard model production of four top quarks with same-sign and multilepton final states in proton–proton collisions at $$\\sqrt{s} = 13\\,\\text {TeV} $$

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sirunyan, A. M.; Tumasyan, A.; Adam, W.

    A search for standard model production of four top quarks ($$\\mathrm{t}\\overline{\\mathrm{t}}\\mathrm{t}\\overline{\\mathrm{t}}$$) is reported using events containing at least three leptons (e, $$\\mu$$) or a same-sign lepton pair. The events are produced in proton-proton collisions at a center-of-mass energy of 13 TeV at the LHC, and the data sample, recorded in 2016, corresponds to an integrated luminosity of 35.9 fb$$^{-1}$$. Jet multiplicity and flavor are used to enhance signal sensitivity, and dedicated control regions are used to constrain the dominant backgrounds. The observed and expected signal significances are, respectively, 1.6 and 1.0 standard deviations, and the $$\\mathrm{t}\\overline{\\mathrm{t}}\\mathrm{t}\\overline{\\mathrm{t}}$$ cross section is measured to be 16.9 $$^{+13.8}_{-11.4}$$ fb, in agreement with next-to-leading-order standard model predictions. These results are also used to constrain the Yukawa coupling between the top quark and the Higgs boson to be less than 2.1 times its expected standard model value at 95% confidence level.

  13. Search for standard model production of four top quarks with same-sign and multilepton final states in proton–proton collisions at $$\\sqrt{s} = 13\\,\\text {TeV} $$

    DOE PAGES

    Sirunyan, A. M.; Tumasyan, A.; Adam, W.; ...

    2018-02-19

    A search for standard model production of four top quarks ($$\\mathrm{t}\\overline{\\mathrm{t}}\\mathrm{t}\\overline{\\mathrm{t}}$$) is reported using events containing at least three leptons (e, $$\\mu$$) or a same-sign lepton pair. The events are produced in proton-proton collisions at a center-of-mass energy of 13 TeV at the LHC, and the data sample, recorded in 2016, corresponds to an integrated luminosity of 35.9 fb$$^{-1}$$. Jet multiplicity and flavor are used to enhance signal sensitivity, and dedicated control regions are used to constrain the dominant backgrounds. The observed and expected signal significances are, respectively, 1.6 and 1.0 standard deviations, and the $$\\mathrm{t}\\overline{\\mathrm{t}}\\mathrm{t}\\overline{\\mathrm{t}}$$ cross section is measured to be 16.9 $$^{+13.8}_{-11.4}$$ fb, in agreement with next-to-leading-order standard model predictions. These results are also used to constrain the Yukawa coupling between the top quark and the Higgs boson to be less than 2.1 times its expected standard model value at 95% confidence level.

  14. Intelligent modelling of bioprocesses: a comparison of structured and unstructured approaches.

    PubMed

    Hodgson, Benjamin J; Taylor, Christopher N; Ushio, Misti; Leigh, J R; Kalganova, Tatiana; Baganz, Frank

    2004-12-01

    This contribution moves in the direction of answering some general questions about the most effective and useful ways of modelling bioprocesses. We investigate the characteristics of models that are good at extrapolating. We trained three fully predictive models with different representational structures (differential equations, differential equations with inheritance of rates and a network of reactions) on Saccharopolyspora erythraea shake flask fermentation data using genetic programming. The models were then tested on unseen data outside the range of the training data and the resulting performances were compared. It was found that constrained models with mathematical forms analogous to internal mass balancing and stoichiometric relations were superior to flexible unconstrained models, even though no a priori knowledge of this fermentation was used.

  15. Automated antibody structure prediction using Accelrys tools: Results and best practices

    PubMed Central

    Fasnacht, Marc; Butenhof, Ken; Goupil-Lamy, Anne; Hernandez-Guzman, Francisco; Huang, Hongwei; Yan, Lisa

    2014-01-01

    We describe the methodology and results from our participation in the second Antibody Modeling Assessment experiment. During the experiment we predicted the structure of eleven unpublished antibody Fv fragments. Our prediction methods centered on template-based modeling; potential templates were selected from an antibody database based on their sequence similarity to the target in the framework regions. Depending on the quality of the templates, we constructed models of the antibody framework regions using either a single, chimeric, or multiple template approach. The hypervariable loop regions in the initial models were rebuilt by grafting the corresponding regions from suitable templates onto the model. For the H3 loop region, we further refined models using ab initio methods. The final models were subjected to constrained energy minimization to resolve severe local structural problems. The analysis of the submitted models shows that Accelrys tools allow for the construction of quite accurate models for the framework and the canonical CDR regions, with RMSDs to the X-ray structure on average below 1 Å for most of these regions. The results show that accurate prediction of the H3 hypervariable loops remains a challenge. Furthermore, model quality assessment of the submitted models shows that the models are of quite high quality, with local geometry assessment scores similar to those of the target X-ray structures. Proteins 2014; 82:1583–1598. © 2014 The Authors. Proteins published by Wiley Periodicals, Inc. PMID:24833271

  16. Non Linear Programming (NLP) Formulation for Quantitative Modeling of Protein Signal Transduction Pathways

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to the lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239

  17. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    PubMed

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to the lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.

  18. The Impact of Ocean Observations in Seasonal Climate Prediction

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele; Keppenne, Christian; Kovach, Robin; Marshak, Jelena

    2010-01-01

    The ocean provides the most significant memory for the climate system. Hence, a critical element in climate forecasting with coupled models is the initialization of the ocean with states from an ocean data assimilation system. Remotely-sensed ocean surface fields (e.g., sea surface topography, SST, winds) are now available for extensive periods and have been used to constrain ocean models to provide a record of climate variations. Since the ocean is virtually opaque to electromagnetic radiation, the assimilation of these satellite data is essential to extracting the maximum information content. More recently, the Argo drifters have provided unprecedented sampling of the subsurface temperature and salinity. Although the duration of this observation set has been too short to provide solid statistical evidence of its impact, there are indications that Argo improves the forecast skill of coupled systems. This presentation will address the impact these different observations have had on seasonal climate predictions with the GMAO's coupled model.

  19. Search for gluinos and scalar quarks in pp collisions at √s = 1.8 TeV using the missing energy plus multijets signature.

    PubMed

    Affolder, T; Akimoto, H; Akopian, A; Albrow, M G; Amaral, P; Amidei, D; Anikeev, K; Antos, J; Apollinari, G; Arisawa, T; Artikov, A; Asakawa, T; Ashmanskas, W; Azfar, F; Azzi-Bacchetta, P; Bacchetta, N; Bachacou, H; Bailey, S; de Barbaro, P; Barbaro-Galtieri, A; Barnes, V E; Barnett, B A; Baroiant, S; Barone, M; Bauer, G; Bedeschi, F; Belforte, S; Bell, W H; Bellettini, G; Bellinger, J; Benjamin, D; Bensinger, J; Beretvas, A; Berge, J P; Berryhill, J; Bhatti, A; Binkley, M; Bisello, D; Bishai, M; Blair, R E; Blocker, C; Bloom, K; Blumenfeld, B; Blusk, S R; Bocci, A; Bodek, A; Bokhari, W; Bolla, G; Bonushkin, Y; Bortoletto, D; Boudreau, J; Brandl, A; van den Brink, S; Bromberg, C; Brozovic, M; Brubaker, E; Bruner, N; Buckley-Geer, E; Budagov, J; Budd, H S; Burkett, K; Busetto, G; Byon-Wagner, A; Byrum, K L; Cabrera, S; Calafiura, P; Campbell, M; Carithers, W; Carlson, J; Carlsmith, D; Caskey, W; Castro, A; Cauz, D; Cerri, A; Chan, A W; Chang, P S; Chang, P T; Chapman, J; Chen, C; Chen, Y C; Cheng, M-T; Chertok, M; Chiarelli, G; Chirikov-Zorin, I; Chlachidze, G; Chlebana, F; Christofek, L; Chu, M L; Chung, Y S; Ciobanu, C I; Clark, A G; Connolly, A; Conway, J; Cordelli, M; Cranshaw, J; Cropp, R; Culbertson, R; Dagenhart, D; D'Auria, S; DeJongh, F; Dell'Agnello, S; Dell'Orso, M; Demortier, L; Deninno, M; Derwent, P F; Devlin, T; Dittmann, J R; Dominguez, A; Donati, S; Done, J; D'Onofrio, M; Dorigo, T; Eddy, N; Einsweiler, K; Elias, J E; Engels, E; Erbacher, R; Errede, D; Errede, S; Fan, Q; Feild, R G; Fernandez, J P; Ferretti, C; Field, R D; Fiori, I; Flaugher, B; Foster, G W; Franklin, M; Freeman, J; Friedman, J; Frisch, H J; Fukui, Y; Furic, I; Galeotti, S; Gallas, A; Gallinaro, M; Gao, T; Garcia-Sciveres, M; Garfinkel, A F; Gatti, P; Gay, C; Gerdes, D W; Giannetti, P; Giromini, P; Glagolev, V; Glenzinski, D; Gold, M; Goldstein, J; Gorelov, I; Goshaw, A T; Gotra, Y; Goulianos, K; Green, C; Grim, G; Gris, P; Groer, L; Grosso-Pilcher, C; Guenther, M; Guillian, G; Guimaraes da Costa, J; Haas, R M; Haber, C; Hahn, S R; Hall, C; Handa, T; Handler, R; Hao, W; Happacher, F; Hara, K; Hardman, A D; Harris, R M; Hartmann, F; Hatakeyama, K; Hauser, J; Heinrich, J; Heiss, A; Herndon, M; Hill, C; Hoffman, K D; Holck, C; Hollebeek, R; Holloway, L; Hughes, R; Huston, J; Huth, J; Ikeda, H; Incandela, J; Introzzi, G; Iwai, J; Iwata, Y; James, E; Jones, M; Joshi, U; Kambara, H; Kamon, T; Kaneko, T; Karr, K; Kasha, H; Kato, Y; Keaffaber, T A; Kelley, K; Kelly, M; Kennedy, R D; Kephart, R; Khazins, D; Kikuchi, T; Kilminster, B; Kim, B J; Kim, D H; Kim, H S; Kim, M J; Kim, S B; Kim, S H; Kim, Y K; Kirby, M; Kirk, M; Kirsch, L; Klimenko, S; Koehn, P; Kondo, K; Konigsberg, J; Korn, A; Korytov, A; Kovacs, E; Kroll, J; Kruse, M; Kuhlmann, S E; Kurino, K; Kuwabara, T; Laasanen, A T; Lai, N; Lami, S; Lammel, S; Lancaster, J; Lancaster, M; Lander, R; Lath, A; Latino, G; LeCompte, T; Lee, A M; Lee, K; Leone, S; Lewis, J D; Lindgren, M; Liss, T M; Liu, J B; Liu, Y C; Litvintsev, D O; Lobban, O; Lockyer, N; Loken, J; Loreti, M; Lucchesi, D; Lukens, P; Lusin, S; Lyons, L; Lys, J; Madrak, R; Maeshima, K; Maksimovic, P; Malferrari, L; Mangano, M; Mariotti, M; Martignon, G; Martin, A; Matthews, J A J; Mayer, J; Mazzanti, P; McFarland, K S; McIntyre, P; McKigney, E; Menguzzato, M; Menzione, A; Mesropian, C; Meyer, A; Miao, T; Miller, R; Miller, J S; Minato, H; Miscetti, S; Mishina, M; Mitselmakher, G; Moggi, N; Moore, E; Moore, R; Morita, Y; Moulik, T; Mulhearn, M; Mukherjee, A; Muller, T; Munar, A; Murat, P; Murgia, S; 
Nachtman, J; Nagaslaev, V; Nahn, S; Nakada, H; Nakano, I; Nelson, C; Nelson, T; Neu, C; Neuberger, D; Newman-Holmes, C; Ngan, C-Y P; Niu, H; Nodulman, L; Nomerotski, A; Oh, S H; Oh, Y D; Ohmoto, T; Ohsugi, T; Oishi, R; Okusawa, T; Olsen, J; Orejudos, W; Pagliarone, C; Palmonari, F; Paoletti, R; Papadimitriou, V; Partos, D; Patrick, J; Pauletta, G; Paulini, M; Paus, C; Pescara, L; Phillips, T J; Piacentino, G; Pitts, K T; Pompos, A; Pondrom, L; Pope, G; Popovic, M; Prokoshin, F; Proudfoot, J; Ptohos, F; Pukhov, O; Punzi, G; Rakitine, A; Ratnikov, F; Reher, D; Reichold, A; Ribon, A; Riegler, W; Rimondi, F; Ristori, L; Riveline, M; Robertson, W J; Robinson, A; Rodrigo, T; Rolli, S; Rosenson, L; Roser, R; Rossin, R; Roy, A; Ruiz, A; Safonov, A; St Denis, R; Sakumoto, W K; Saltzberg, D; Sanchez, C; Sansoni, A; Santi, L; Sato, H; Savard, P; Schlabach, P; Schmidt, E E; Schmidt, M P; Schmitt, M; Scodellaro, L; Scott, A; Scribano, A; Segler, S; Seidel, S; Seiya, Y; Semenov, A; Semeria, F; Shah, T; Shapiro, M D; Shepard, P F; Shibayama, T; Shimojima, M; Shochet, M; Sidoti, A; Siegrist, J; Sill, A; Sinervo, P; Singh, P; Slaughter, A J; Sliwa, K; Smith, C; Snider, F D; Solodsky, A; Spalding, J; Speer, T; Sphicas, P; Spinella, F; Spiropulu, M; Spiegel, L; Steele, J; Stefanini, A; Strologas, J; Strumia, F; Stuart, D; Sumorok, K; Suzuki, T; Takano, T; Takashima, R; Takikawa, K; Tamburello, P; Tanaka, M; Tannenbaum, B; Tecchio, M; Tesarek, R; Teng, P K; Terashi, K; Tether, S; Thompson, A S; Thurman-Keup, R; Tipton, P; Tkaczyk, S; Toback, D; Tollefson, K; Tollestrup, A; Tonelli, D; Toyoda, H; Trischuk, W; de Troconiz, J F; Tseng, J; Turini, N; Ukegawa, F; Vaiciulis, T; Valls, J; Vejcik, S; Velev, G; Veramendi, G; Vidal, R; Vila, I; Vilar, R; Volobouev, I; von der Mey, M; Vucinic, D; Wagner, R G; Wagner, R L; Wallace, N B; Wan, Z; Wang, C; Wang, M J; Ward, B; Waschke, S; Watanabe, T; Waters, D; Watts, T; Webb, R; Wenzel, H; Wester, W C; Wicklund, A B; Wicklund, E; Wilkes, T; Williams, H H; Wilson, P; Winer, B L; Winn, D; Wolbers, S; Wolinski, D; Wolinski, J; Wolinski, S; Worm, S; Wu, X; Wyss, J; Yao, W; Yagil, A; Yeh, G P; Yoh, J; Yosef, C; Yoshida, T; Yu, I; Yu, S; Yu, Z; Zanetti, A; Zetti, F; Zucchelli, S

    2002-01-28

    We have performed a search for gluinos (g) and scalar quarks (q) in a data sample of 84 pb⁻¹ of pp collisions at √s = 1.8 TeV, recorded by the Collider Detector at Fermilab. We investigate the final state of large missing transverse energy and three or more jets, a characteristic signature in R-parity-conserving supersymmetric models. The analysis has been performed "blind," in that the inspection of the signal region is made only after the predictions from standard model backgrounds have been calculated. Comparing the data with predictions of constrained supersymmetric models, we exclude gluino masses below 195 GeV/c² (95% C.L.), independent of the squark mass. For the case m(q) ≈ m(g), gluino masses below 300 GeV/c² are excluded.

  20. On the frequency of close binary systems among very low-mass stars and brown dwarfs

    NASA Astrophysics Data System (ADS)

    Maxted, P. F. L.; Jeffries, R. D.

    2005-09-01

    We have used Monte Carlo simulation techniques and published radial velocity surveys to constrain the frequency of very low-mass star (VLMS) and brown dwarf (BD) binary systems and their separation (a) distribution. Gaussian models for the separation distribution with a peak at a = 4 au and 0.6 ≤ σ_log(a/au) ≤ 1.0 correctly predict the number of observed binaries, yielding a close (a < 2.6 au) binary frequency of 17-30 per cent and an overall VLMS/BD binary frequency of 32-45 per cent. We find that the available N-body models of VLMS/BD formation from dynamically decaying protostellar multiple systems are excluded at >99 per cent confidence because they predict too few close binary VLMS/BDs. The large number of close binaries and high overall binary frequency are also very inconsistent with recent smoothed particle hydrodynamical modelling and argue against a dynamical origin for VLMS/BDs.
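    A minimal Monte Carlo along the lines described above: draw companion separations from a log-normal distribution peaking at 4 au with a given width, and count the fraction of systems closer than 2.6 au. The width and the overall binary fraction used below are assumptions chosen within the quoted ranges.

```python
# Sketch: Monte Carlo estimate of the close-binary fraction for a log-normal
# separation distribution peaking at a = 4 au.  Width and overall binary
# fraction are assumed values within the ranges quoted in the record.
import numpy as np

rng = np.random.default_rng(3)
n_binaries = 1_000_000
peak_au = 4.0
sigma_log_a = 0.8                         # dex, assumed within 0.6-1.0
overall_binary_frac = 0.38                # assumed within 32-45 per cent

log_a = rng.normal(np.log10(peak_au), sigma_log_a, n_binaries)
close = (10**log_a < 2.6).mean()          # fraction of binaries with a < 2.6 au

print(f"P(a < 2.6 au | binary) = {close:.2f}")
print(f"close-binary frequency  = {overall_binary_frac * close:.2f}")
```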

  1. Underwater noise modelling for environmental impact assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farcas, Adrian; Thompson, Paul M.; Merchant, Nathan D., E-mail: nathan.merchant@cefas.co.uk

    Assessment of underwater noise is increasingly required by regulators of development projects in marine and freshwater habitats, and noise pollution can be a constraining factor in the consenting process. Noise levels arising from the proposed activity are modelled and the potential impact on species of interest within the affected area is then evaluated. Although there is considerable uncertainty in the relationship between noise levels and impacts on aquatic species, the science underlying noise modelling is well understood. Nevertheless, many environmental impact assessments (EIAs) do not reflect best practice, and stakeholders and decision makers in the EIA process are often unfamiliar with the concepts and terminology that are integral to interpreting noise exposure predictions. In this paper, we review the process of underwater noise modelling and explore the factors affecting predictions of noise exposure. Finally, we illustrate the consequences of errors and uncertainties in noise modelling, and discuss future research needs to reduce uncertainty in noise assessments.

  2. A frictionally and hydraulically constrained model of the convectively driven mean flow in partially enclosed seas

    NASA Astrophysics Data System (ADS)

    Maxworthy, T.

    1997-08-01

    A simple three-layer model of the dynamics of partially enclosed seas, driven by a surface buoyancy flux, is presented. It contains two major elements, a hydraulic constraint at the exit contraction and friction in the interior of the main body of the sea; both together determine the vertical structure and magnitudes of the interior flow variables, i.e. velocity and density. Application of the model to the large-scale dynamics of the Red Sea gives results that are not in disagreement with observation once the model is applied, also, to predict the dense outflow from the Gulf of Suez. The latter appears to be the agent responsible for the formation of dense bottom water in this system. Also, the model is reasonably successful in predicting the density of the outflow from the Persian Gulf, and can be applied to any number of other examples of convectively driven flow in long, narrow channels, with or without sills and constrictions at their exits.

  3. Evolution of non-interacting entropic dark energy and its phantom nature

    NASA Astrophysics Data System (ADS)

    Mathew, Titus K.; Murali, Chinthak; Shejeelammal, J.

    2016-04-01

    Assuming the form of the entropic dark energy (EDE) as it arises from the surface term in the Einstein-Hilbert action, its evolution was analyzed in an expanding flat universe. The model parameters were evaluated by constraining the model using the Union data on Type Ia supernovae. We found that in the non-interacting case, the model predicts an early decelerated phase and a later accelerated phase at the background level. The evolution of the Hubble parameter, dark energy (DE) density, equation of state parameter and deceleration parameter was obtained. The model hardly seems to support linear perturbation growth for structure formation. We also found that the EDE shows a phantom nature for redshifts z < 0.257. During the phantom epoch, the model predicts a big rip, at which both the scale factor of expansion and the DE density become infinitely large; the big rip time is found to be around 36 gigayears from now.

  4. Determination of the Ce142(γ,n) cross section using quasi-monoenergetic Compton backscattered γ rays

    NASA Astrophysics Data System (ADS)

    Sauerwein, A.; Sonnabend, K.; Fritzsche, M.; Glorius, J.; Kwan, E.; Pietralla, N.; Romig, C.; Rusev, G.; Savran, D.; Schnorrenberger, L.; Tonchev, A. P.; Tornow, W.; Weller, H. R.

    2014-03-01

    Background: Knowing the energy dependence of the (γ,n) cross section is mandatory to predict the abundances of heavy elements using astrophysical models. The data can be applied directly or used to constrain the cross section of the inverse (n,γ) reaction. Purpose: The measurement of the reaction Ce142(γ,n)141Ce just above the reaction threshold amends the existing experimental database in that mass region for p-process nucleosynthesis and helps to understand the s-process branching at the isotope Ce141. Method: The quasi-monoenergetic photon beam of the High Intensity γ-ray Source (HIγS), TUNL, USA, is used to irradiate naturally composed Ce targets. The reaction yield is determined afterwards with high-resolution γ-ray spectroscopy. Results: The experimental data are in agreement with previous measurements at higher energies. Since the cross-section prediction of the Ce142(γ,n) reaction is exclusively sensitive to the γ-ray strength function, the resulting cross-section values were compared to Hauser-Feshbach calculations using different γ-ray strength functions. A microscopic description within the framework of the Hartree-Fock-BCS model describes the experimental values well within the measured energy range. Conclusions: The measured data show that the predicted (γ,n) reaction rate is correct within a factor of 2 even though the closed neutron shell N =82 is approached. This agreement allows us to constrain the (n,γ) cross section and to improve the understanding of the s-process branching at Ce141.

  5. Using Imaging Spectrometry measurements of Ecosystem Composition to constrain Regional Predictions of Carbon, Water and Energy Fluxes

    NASA Astrophysics Data System (ADS)

    Anderson, C.; Bond-Lamberty, B. P.; Huang, M.; Xu, Y.; Stegen, J.

    2016-12-01

    Ecosystem composition is a key attribute of terrestrial ecosystems, influencing the fluxes of carbon, water, and energy between the land surface and the atmosphere. The description of current ecosystem composition has traditionally come from relatively few ground-based inventories of the plant canopy, but these are spatially limited and do not provide a comprehensive picture of ecosystem composition at regional or global scales. In this analysis, imaging spectrometry measurements, collected as part of the HyspIRI Preparatory Mission, are used to provide spatially-resolved estimates of plant functional type composition, providing an important constraint on terrestrial biosphere model predictions of carbon, water and energy fluxes across the heterogeneous landscapes of the Californian Sierras. These landscapes include oak savannas, mid-elevation mixed pines, fir-cedar forests, and high elevation pines. Our results show that imaging spectrometry measurements can be successfully used to estimate regional-scale variation in ecosystem composition and the resulting spatial heterogeneity in patterns of carbon, water and energy fluxes and ecosystem dynamics. Simulations at four flux tower sites within the study region yield patterns of seasonal and inter-annual variation in carbon and water fluxes that have comparable accuracy to simulations initialized from ground-based inventory measurements. Finally, results indicate that during the 2012-2015 Californian drought, regional net carbon fluxes fell by 84%, evaporation and transpiration fluxes fell by 53% and 33% respectively, and sensible heat increased by 51%. This study provides a framework for assimilating near-future global satellite imagery estimates of ecosystem composition with terrestrial biosphere models, constraining and improving their predictions of large-scale ecosystem dynamics and functioning.

  6. Using Imaging Spectrometry measurements of Ecosystem Composition to constrain Regional Predictions of Carbon, Water and Energy Fluxes

    NASA Astrophysics Data System (ADS)

    Antonarakis, A. S.; Bogan, S.; Moorcroft, P. R.

    2017-12-01

    Ecosystem composition is a key attribute of terrestrial ecosystems, influencing the fluxes of carbon, water, and energy between the land surface and the atmosphere. The description of current ecosystem composition has traditionally come from relatively few ground-based inventories of the plant canopy, but these are spatially limited and do not provide a comprehensive picture of ecosystem composition at regional or global scales. In this analysis, imaging spectrometry measurements, collected as part of the HyspIRI Preparatory Mission, are used to provide spatially-resolved estimates of plant functional type composition, providing an important constraint on terrestrial biosphere model predictions of carbon, water and energy fluxes across the heterogeneous landscapes of the Californian Sierras. These landscapes include oak savannas, mid-elevation mixed pines, fir-cedar forests, and high elevation pines. Our results show that imaging spectrometry measurements can be successfully used to estimate regional-scale variation in ecosystem composition and the resulting spatial heterogeneity in patterns of carbon, water and energy fluxes and ecosystem dynamics. Simulations at four flux tower sites within the study region yield patterns of seasonal and inter-annual variation in carbon and water fluxes that have comparable accuracy to simulations initialized from ground-based inventory measurements. Finally, results indicate that during the 2012-2015 Californian drought, regional net carbon fluxes fell by 84%, evaporation and transpiration fluxes fell by 53% and 33% respectively, and sensible heat increased by 51%. This study provides a framework for assimilating near-future global satellite imagery estimates of ecosystem composition with terrestrial biosphere models, constraining and improving their predictions of large-scale ecosystem dynamics and functioning.

  7. A multidisciplinary approach to constrain incoming plate hydration in the Central American Margin

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Guild, M. R.; Naif, S.; Eimer, M. O.; Evans, O.; Fornash, K.; Plank, T. A.; Shillington, D. J.; Vervelidou, F.; Warren, J. M.; Wiens, D.

    2017-12-01

    The oceanic crust and mantle of the incoming plate are potentially the greatest source of water to the subduction zone, but their extent of hydration is poorly constrained. Hydrothermal alteration of the oceanic crust is an important source of mineral-bound water that ultimately dehydrates during subduction. Bend faults at the trench-outer rise provide another viable mechanism to further hydrate the down-going plate. Here, we take a multidisciplinary approach to constrain the fluid budget of the subducting plate at the Northern Central American margin; this site was chosen since it has an unusually wet subducting slab at the Nicaragua segment. Abundant geophysical and geochemical datasets are available for this region and this work is an analysis of these data. Controlled-source electromagnetic (CSEM) and wide-angle seismic (WAS) observations show significant resistivity and velocity reductions in the incoming oceanic crust associated with bend faults, which suggests seawater infiltration and hydrous alteration. We used the CSEM porosity constraints to predict P-wave velocity and find that the WAS data require an additional reduction of up to 0.3 km/s in the lower crust at the trench, equivalent to 2 wt% H2O. We implemented the porosity structure together with constraints on fluid flow and reaction kinetics into two-phase flow numerical models to quantify the degree of serpentinization possible relative to WAS estimates. Thermodynamic modeling of basalt and peridotite bulk compositions were used to predict the alteration assemblages and associated water contents in the bend faulting region as well as the dehydration fluxes during subduction. In Nicaragua, the major fluid pulse at sub-arc depths results from chlorite and antigorite breakdown in the upper 10 km of the slab mantle, whereas in Costa Rica, the slab mantle is not predicted to dehydrate at sub-arc depths. In addition, comparisons between observed and predicted magnetic anomalies and geochemical variations along strike and across arc provide insights into the relative contribution of fluids from the subducted crust and mantle. Our findings suggest that, in addition to mantle serpentinization, the incoming oceanic crust also experiences a high degree of bending-induced hydration and transports a substantial flux of H2O to the mantle wedge.

  8. Application of a data assimilation method via an ensemble Kalman filter to reactive urea hydrolysis transport modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juxiu Tong; Bill X. Hu; Hai Huang

    2014-03-01

    With the growing importance of water resources worldwide, remediation of anthropogenic contamination involving reactive solute transport becomes even more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. In modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters because of the complexity of the chemical reactions and the limited available data. To obtain the reactive rate parameters for reactive urea hydrolysis transport modeling and more accurate predictions of the chemical concentrations, we developed a data assimilation method based on an ensemble Kalman filter (EnKF) to calibrate reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the model predictions. We applied a constrained EnKF method that imposes constraints on the updated reactive rate parameters and the predicted solute concentrations, based on their physical meanings, after the data assimilation calibration. From the study results we concluded that the data assimilation method via the EnKF could efficiently improve the chemical reactive rate parameters and, at the same time, the solute concentration predictions. The more data we assimilated, the more accurate the reactive rate parameters and concentration predictions became. The filter divergence problem was also addressed in this study.
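    A bare-bones ensemble Kalman filter analysis step of the kind used for the parameter calibration above might look like this; the forward model (a first-order decay proxy), the observation error, and the positivity constraint are all placeholder assumptions, not the cited urea-hydrolysis column experiment.

```python
# Sketch: one EnKF analysis step updating an ensemble of reaction-rate
# parameters against an observed concentration.  Forward model and numbers
# are placeholders, not the urea-hydrolysis column model of the record.
import numpy as np

rng = np.random.default_rng(11)
n_ens = 100
k_ens = rng.lognormal(mean=np.log(0.3), sigma=0.5, size=n_ens)  # rate prior (1/h)

def forward(k, c0=10.0, t=2.0):
    """Predicted concentration after time t for first-order decay."""
    return c0 * np.exp(-k * t)

y_obs, obs_std = 4.5, 0.3                      # observed concentration and its error
y_ens = forward(k_ens)                         # predicted observations per member

# EnKF update: K = cov(k, y) / (var(y) + R); perturb the observation per member
cov_ky = np.cov(k_ens, y_ens)[0, 1]
var_y = np.var(y_ens, ddof=1)
K = cov_ky / (var_y + obs_std**2)
y_pert = y_obs + rng.normal(0, obs_std, n_ens)
k_post = k_ens + K * (y_pert - y_ens)
k_post = np.maximum(k_post, 1e-6)              # constrained EnKF: keep rates positive

print(f"prior mean k = {k_ens.mean():.3f}, posterior mean k = {k_post.mean():.3f}")
print(f"rate implied by the observation: {np.log(10.0 / y_obs) / 2.0:.3f}")
```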

  9. Population Dynamics and Flight Phenology Model of Codling Moth Differ between Commercial and Abandoned Apple Orchard Ecosystems.

    PubMed

    Joshi, Neelendra K; Rajotte, Edwin G; Naithani, Kusum J; Krawczyk, Greg; Hull, Larry A

    2016-01-01

    Apple orchard management practices may affect development and phenology of arthropod pests, such as the codling moth (CM), Cydia pomonella (L.) (Lepidoptera: Tortricidae), which is a serious internal fruit-feeding pest of apples worldwide. Estimating population dynamics and accurately predicting the timing of CM development and phenology events (for instance, adult flight and egg-hatch) allows growers to understand and control local populations of CM. Studies were conducted to compare the CM flight phenology in commercial and abandoned apple orchard ecosystems using a logistic function model based on degree-day accumulation. The flight models for these orchards were derived from the cumulative percent moth capture using two types of commercially available CM lure baited traps. Models from both types of orchards were also compared to another model known as PETE (prediction extension timing estimator) that was developed in the 1970s to predict life cycle events for many fruit pests, including CM, across different fruit growing regions of the United States. We found that the flight phenology of CM was significantly different in commercial and abandoned orchards. CM male flight patterns for the first and second generations as predicted by the constrained and unconstrained PCM (Pennsylvania Codling Moth) models in commercial and abandoned orchards were different from the flight patterns predicted by the currently used CM model (i.e., the PETE model). In commercial orchards, during the first and second generations, the PCM unconstrained model predicted delays in moth emergence compared to the current model. In addition, the flight patterns of females were different between commercial and abandoned orchards. Such differences in CM flight phenology between commercial and abandoned orchard ecosystems suggest a potential impact of orchard environment and crop management practices on CM biology.

  10. Population Dynamics and Flight Phenology Model of Codling Moth Differ between Commercial and Abandoned Apple Orchard Ecosystems

    PubMed Central

    Joshi, Neelendra K.; Rajotte, Edwin G.; Naithani, Kusum J.; Krawczyk, Greg; Hull, Larry A.

    2016-01-01

    Apple orchard management practices may affect development and phenology of arthropod pests, such as the codling moth (CM), Cydia pomonella (L.) (Lepidoptera: Tortricidae), which is a serious internal fruit-feeding pest of apples worldwide. Estimating population dynamics and accurately predicting the timing of CM development and phenology events (for instance, adult flight and egg-hatch) allows growers to understand and control local populations of CM. Studies were conducted to compare the CM flight phenology in commercial and abandoned apple orchard ecosystems using a logistic function model based on degree-day accumulation. The flight models for these orchards were derived from the cumulative percent moth capture using two types of commercially available CM lure baited traps. Models from both types of orchards were also compared to another model known as PETE (prediction extension timing estimator) that was developed in the 1970s to predict life cycle events for many fruit pests, including CM, across different fruit growing regions of the United States. We found that the flight phenology of CM was significantly different in commercial and abandoned orchards. CM male flight patterns for the first and second generations as predicted by the constrained and unconstrained PCM (Pennsylvania Codling Moth) models in commercial and abandoned orchards were different from the flight patterns predicted by the currently used CM model (i.e., the PETE model). In commercial orchards, during the first and second generations, the PCM unconstrained model predicted delays in moth emergence compared to the current model. In addition, the flight patterns of females were different between commercial and abandoned orchards. Such differences in CM flight phenology between commercial and abandoned orchard ecosystems suggest a potential impact of orchard environment and crop management practices on CM biology. PMID:27713702
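    The degree-day logistic model referenced in these two records can be sketched by fitting a logistic curve to cumulative percent trap capture against accumulated degree-days; the synthetic capture data and parameter values below are illustrative only, not the PCM or PETE parameterizations.

```python
# Sketch: fitting a logistic flight-phenology curve (cumulative % moth capture
# vs. accumulated degree-days) to synthetic trap data.
import numpy as np
from scipy.optimize import curve_fit

def logistic(dd, dd50, slope):
    """Cumulative proportion of flight at accumulated degree-days dd."""
    return 1.0 / (1.0 + np.exp(-(dd - dd50) / slope))

rng = np.random.default_rng(5)
degree_days = np.linspace(50, 600, 25)
true_curve = logistic(degree_days, dd50=280.0, slope=45.0)
observed = np.clip(true_curve + rng.normal(0, 0.03, degree_days.size), 0, 1)

(dd50_fit, slope_fit), _ = curve_fit(logistic, degree_days, observed,
                                     p0=[300.0, 50.0])
print(f"fitted DD for 50% flight: {dd50_fit:.0f}, slope: {slope_fit:.0f}")
# e.g. predict the degree-day total at which 10% of flight has occurred
dd10 = dd50_fit - slope_fit * np.log(9)
print(f"predicted DD for 10% flight: {dd10:.0f}")
```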

  11. Powering a burnt bridges Brownian ratchet: a model for an extracellular motor driven by proteolysis of collagen.

    PubMed

    Saffarian, Saveez; Qian, Hong; Collier, Ivan; Elson, Elliot; Goldberg, Gregory

    2006-04-01

    Biased diffusion of collagenase on collagen fibrils may represent the first observed adenosine triphosphate-independent extracellular molecular motor. The magnitude of force generated by the enzyme remains unclear. We propose a propulsion mechanism based on a burnt bridges Brownian ratchet model with a varying degree of coupling of the free energy from collagen proteolysis to the enzyme motion. When constrained by experimental observations, our model predicts 0.1 pN stall force for individual collagenase molecules. A dimer, surprisingly, can generate a force in the range of 5 pN, suggesting that the motor can be of biological significance.

  12. Thermodynamic Constraints Improve Metabolic Networks.

    PubMed

    Krumholz, Elias W; Libourel, Igor G L

    2017-08-08

    In pursuit of establishing a realistic metabolic phenotypic space, the reversibility of reactions is thermodynamically constrained in modern metabolic networks. The reversibility constraints follow from heuristic thermodynamic poise approximations that take anticipated cellular metabolite concentration ranges into account. Because constraints reduce the feasible space, draft metabolic network reconstructions may need more extensive reconciliation, and a larger number of genes may become essential. Notwithstanding ubiquitous application, the effect of reversibility constraints on the predictive capabilities of metabolic networks has not been investigated in detail. Instead, work has focused on the implementation and validation of the thermodynamic poise calculation itself. With the advance of fast linear programming-based network reconciliation, studying the effects of reversibility constraints on network reconciliation and gene essentiality predictions has become feasible; these effects are the subject of this study. Networks with thermodynamically informed reversibility constraints outperformed networks constrained with randomly shuffled constraints in gene essentiality prediction. Unconstrained networks predicted gene essentiality as accurately as thermodynamically constrained networks, but predicted substantially fewer essential genes. Networks that were reconciled with sequence similarity data and strongly enforced reversibility constraints outperformed all other networks. We conclude that metabolic network analysis confirmed the validity of the thermodynamic constraints, and that thermodynamic poise information is actionable during network reconciliation. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
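    To make the reversibility-constraint idea concrete, here is a tiny flux-balance analysis sketch: a toy stoichiometric matrix is optimized for a "biomass" flux subject to Sv = 0, with thermodynamically irreversible reactions given a zero lower bound and reversible ones allowed to run backwards. The network is invented and far smaller than any genome-scale reconstruction.

```python
# Sketch: flux balance analysis on a toy 3-metabolite, 4-reaction network,
# with reversibility (thermodynamic) constraints expressed as flux bounds.
# maximize v_biomass  subject to  S @ v = 0,  lb <= v <= ub
import numpy as np
from scipy.optimize import linprog

# reactions: R1 uptake (-> A), R2 (A -> B), R3 (B <-> C), R4 biomass (B + C ->)
S = np.array([
    [ 1, -1,  0,  0],   # metabolite A
    [ 0,  1, -1, -1],   # metabolite B
    [ 0,  0,  1, -1],   # metabolite C
])

irreversible = [True, True, False, True]       # R3 is thermodynamically reversible
lb = [0.0 if irr else -10.0 for irr in irreversible]
ub = [10.0, 10.0, 10.0, 10.0]

c = [0, 0, 0, -1]                              # maximize v4 (linprog minimizes)
res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=list(zip(lb, ub)),
              method="highs")
print("optimal fluxes:", np.round(res.x, 3), " biomass flux:", round(-res.fun, 3))
```

    Shuffling which reactions are marked irreversible (as in the randomized control described above) changes the feasible flux space and hence which reactions, and by extension genes, appear essential.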

  13. Uncertainty prediction for PUB

    NASA Astrophysics Data System (ADS)

    Mendiondo, E. M.; Tucci, C. M.; Clarke, R. T.; Castro, N. M.; Goldenfum, J. A.; Chevallier, P.

    2003-04-01

    The IAHS initiative of Prediction in Ungaged Basins (PUB) attempts to integrate monitoring needs and uncertainty prediction for river basins. This paper outlines alternative approaches to uncertainty prediction that could be linked with new blueprints for PUB, thereby showing how equifinality-based models can be handled using practical gauging strategies such as the Nested Catchment Experiment (NCE). Uncertainty prediction is discussed using observations from the Potiribu Project, an NCE layout in representative basins of a subtropical biome of 300,000 km² in South America. Uncertainty prediction is assessed at the microscale (1 m² plots), at the hillslope scale (0.125 km²) and at the mesoscale (0.125-560 km²). At the microscale, uncertainty-based models are constrained by temporal variations of state variables with changing likelihood surfaces of experiments using the Green-Ampt model. Two new blueprints emerged from this NCE for PUB: (1) the Scale Transferability Scheme (STS) at the hillslope scale and (2) the Integrating Process Hypothesis (IPH) at the mesoscale. The STS integrates multi-dimensional scaling with similarity thresholds, as a generalization of the Representative Elementary Area (REA), using spatial correlation from point (distributed) to area (lumped) processes. In this way, the STS addresses the uncertainty bounds of model parameters within an upscaling process at the hillslope scale. On the other hand, the IPH approach regionalizes synthetic hydrographs, thereby interpreting the uncertainty bounds of streamflow variables. Multiscale evidence from the Potiribu NCE layout shows novel pathways for uncertainty prediction under a PUB perspective in representative basins of the world's biomes.
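    Since the record mentions likelihood surfaces for Green-Ampt infiltration experiments, a small sketch of the Green-Ampt cumulative-infiltration equation, solved by fixed-point iteration, may help; the soil parameters below are arbitrary placeholders, and the uncertainty analysis itself (likelihood weighting of many parameter sets) is not shown.

```python
# Sketch: Green-Ampt cumulative infiltration F(t), solved from the implicit
# relation  F = Ks*t + psi*dtheta*ln(1 + F/(psi*dtheta))  by fixed-point iteration.
# Soil parameters are placeholders.
import numpy as np

Ks = 1.0          # saturated hydraulic conductivity (cm/h), assumed
psi = 11.0        # wetting-front suction head (cm), assumed
dtheta = 0.3      # change in volumetric moisture content (-), assumed

def green_ampt_F(t_hours, tol=1e-8, max_iter=200):
    """Cumulative infiltration (cm) after t_hours of ponded infiltration."""
    F = Ks * t_hours                           # initial guess
    for _ in range(max_iter):
        F_new = Ks * t_hours + psi * dtheta * np.log(1.0 + F / (psi * dtheta))
        if abs(F_new - F) < tol:
            break
        F = F_new
    return F

for t in (0.5, 1.0, 2.0, 4.0):
    F = green_ampt_F(t)
    f = Ks * (1.0 + psi * dtheta / F)          # infiltration rate (cm/h)
    print(f"t = {t:4.1f} h   F = {F:6.3f} cm   f = {f:5.3f} cm/h")
```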

  14. All-atom 3D structure prediction of transmembrane β-barrel proteins from sequences.

    PubMed

    Hayat, Sikander; Sander, Chris; Marks, Debora S; Elofsson, Arne

    2015-04-28

    Transmembrane β-barrels (TMBs) carry out major functions in substrate transport and protein biogenesis but experimental determination of their 3D structure is challenging. Encouraged by successful de novo 3D structure prediction of globular and α-helical membrane proteins from sequence alignments alone, we developed an approach to predict the 3D structure of TMBs. The approach combines the maximum-entropy evolutionary coupling method for predicting residue contacts (EVfold) with a machine-learning approach (boctopus2) for predicting β-strands in the barrel. In a blinded test for 19 TMB proteins of known structure that have a sufficient number of diverse homologous sequences available, this combined method (EVfold_bb) predicts hydrogen-bonded residue pairs between adjacent β-strands at an accuracy of ∼70%. This accuracy is sufficient for the generation of all-atom 3D models. In the transmembrane barrel region, the average 3D structure accuracy [template-modeling (TM) score] of top-ranked models is 0.54 (ranging from 0.36 to 0.85), with a higher (44%) number of residue pairs in correct strand-strand registration than in earlier methods (18%). Although the nonbarrel regions are predicted less accurately overall, the evolutionary couplings identify some highly constrained loop residues and, for FecA protein, the barrel including the structure of a plug domain can be accurately modeled (TM score = 0.68). Lower prediction accuracy tends to be associated with insufficient sequence information and we therefore expect increasing numbers of β-barrel families to become accessible to accurate 3D structure prediction as the number of available sequences increases.

  15. Measuring Alignments between Galaxies and the Cosmic Web at z ~ 2-3 Using IGM Tomography

    NASA Astrophysics Data System (ADS)

    Krolewski, Alex; Lee, Khee-Gan; Lukić, Zarija; White, Martin

    2017-03-01

    Many galaxy formation models predict alignments between galaxy spin and the cosmic web (i.e., directions of filaments and sheets), leading to an intrinsic alignment between galaxies that creates a systematic error in weak-lensing measurements. These effects are often predicted to be stronger at high redshifts (z ≳ 1) that are inaccessible to massive galaxy surveys on foreseeable instrumentation, but IGM tomography of the Lyα forest from closely spaced quasars and galaxies is starting to measure the z ~ 2-3 cosmic web with requisite fidelity. Using mock surveys from hydrodynamical simulations, we examine the utility of this technique, in conjunction with coeval galaxy samples, to measure alignment between galaxies and the cosmic web at z ~ 2.5. We show that IGM tomography surveys with ≲5 h^-1 Mpc sightline spacing can accurately recover the eigenvectors of the tidal tensor, which we use to define the directions of the cosmic web. For galaxy spins and shapes, we use a model parameterized by the alignment strength, Δ⟨cos θ⟩, with respect to the tidal tensor eigenvectors from the underlying density field, and also consider observational effects such as errors in the galaxy position angle, inclination, and redshift. Measurements using the upcoming ~1 deg^2 CLAMATO tomographic survey and 600 coeval zCOSMOS-Deep galaxies should place 3σ limits on extreme alignment models with Δ⟨cos θ⟩ ~ 0.1, but much larger surveys encompassing >10,000 galaxies, such as Subaru PFS, will be required to constrain models with Δ⟨cos θ⟩ ~ 0.03. These measurements will constrain models of galaxy-cosmic web alignment and test tidal torque theory at z ~ 2, improving our understanding of the physics of intrinsic alignments.
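
    The cosmic-web directions used in this record are defined by the eigenvectors of the tidal tensor of the (smoothed) density field. As a minimal illustration of that step only, and not of the authors' tomographic pipeline, the sketch below computes T_ij = ∂_i ∂_j φ with ∇²φ = δ on a periodic grid via FFTs and diagonalizes it cell by cell; the grid size, smoothing scale, and the random mock density field are arbitrary assumptions.

```python
import numpy as np

def tidal_tensor_eigenvectors(delta, box_size, smoothing=2.0):
    """Eigen-decompose T_ij = d_i d_j phi with lap(phi) = delta on a periodic grid.

    delta     : 3D overdensity field (mock/illustrative)
    box_size  : box side length in the same units as `smoothing` (e.g. Mpc/h)
    smoothing : Gaussian smoothing scale applied in Fourier space
    """
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2

    delta_k = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smoothing**2)
    k2[0, 0, 0] = 1.0          # avoid division by zero; the k = 0 mode is zeroed below
    kvec = (kx, ky, kz)

    # T_ij(k) = (k_i k_j / k^2) delta(k); back to real space by inverse FFT
    T = np.empty(delta.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            Tij_k = kvec[i] * kvec[j] / k2 * delta_k
            Tij_k[0, 0, 0] = 0.0
            T[..., i, j] = np.fft.ifftn(Tij_k).real

    # Eigenvalues in ascending order; eigenvectors give the local web directions
    eigvals, eigvecs = np.linalg.eigh(T)
    return eigvals, eigvecs

# Illustrative use on a random Gaussian field
rng = np.random.default_rng(0)
delta = rng.normal(size=(64, 64, 64))
vals, vecs = tidal_tensor_eigenvectors(delta, box_size=100.0, smoothing=2.0)
```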

  16. D-term contributions and CEDM constraints in E6 × SU(2)F × U(1)A SUSY GUT model

    NASA Astrophysics Data System (ADS)

    Shigekami, Yoshihiro

    2017-11-01

    We focus on the E6 × SU(2)F × U(1)A supersymmetric (SUSY) grand unified theory (GUT) model. In this model, realistic Yukawa hierarchies and mixings are realized by introducing all allowed interactions with 𝓞(1) coefficients. Moreover, we can take the stop mass to be smaller than the other sfermion masses. This type of spectrum, called a natural SUSY sfermion mass spectrum, can suppress the SUSY contributions to flavor changing neutral currents (FCNC) and stabilize the weak scale at the same time. However, a light stop predicts a large up quark chromo-electric dipole moment (CEDM), and the stop contributions do not decouple. Since there is a Kobayashi-Maskawa phase, stop contributions to the up quark CEDM are severely constrained even if all SUSY breaking parameters and the Higgsino mass parameter μ are real. In this model, real up Yukawa couplings are realized at the GUT scale because of spontaneous CP violation. Therefore the CEDM bounds are satisfied, although the up Yukawa couplings become complex at the SUSY scale through renormalization group equation effects. We calculated the CEDMs and found that the EDM constraints can be satisfied even if the stop mass is 𝓞(1) TeV. In addition, we investigate the size of the D-terms in this model. Since these D-term contributions are flavor dependent, the degeneracy of the sfermion mass spectrum is broken and the size of the D-term is strongly constrained by FCNCs when the SUSY breaking scale is at the weak scale. However, the SUSY breaking scale must be larger than 1 TeV in order to obtain the 125 GeV Higgs mass, and therefore a sizable D-term contribution is allowed. Furthermore, we obtain a non-trivial prediction for the differences of squared sfermion masses.

  17. Measuring Alignments between Galaxies and the Cosmic Web at z ~ 2–3 Using IGM Tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krolewski, Alex; Lee, Khee-Gan; Lukić, Zarija

    Many galaxy formation models predict alignments between galaxy spin and the cosmic web (i.e., directions of filaments and sheets), leading to an intrinsic alignment between galaxies that creates a systematic error in weak-lensing measurements. These effects are often predicted to be stronger at high redshifts (z ≳ 1) that are inaccessible to massive galaxy surveys on foreseeable instrumentation, but IGM tomography of the Lyα forest from closely spaced quasars and galaxies is starting to measure the z ~ 2-3 cosmic web with requisite fidelity. Using mock surveys from hydrodynamical simulations, we examine the utility of this technique, in conjunction with coeval galaxy samples, to measure alignment between galaxies and the cosmic web at z ~ 2.5. We show that IGM tomography surveys with ≲5 h^-1 Mpc sightline spacing can accurately recover the eigenvectors of the tidal tensor, which we use to define the directions of the cosmic web. For galaxy spins and shapes, we use a model parameterized by the alignment strength, Δ⟨cos θ⟩, with respect to the tidal tensor eigenvectors from the underlying density field, and also consider observational effects such as errors in the galaxy position angle, inclination, and redshift. Measurements using the upcoming ~1 deg^2 CLAMATO tomographic survey and 600 coeval zCOSMOS-Deep galaxies should place 3σ limits on extreme alignment models with Δ⟨cos θ⟩ ~ 0.1, but much larger surveys encompassing >10,000 galaxies, such as Subaru PFS, will be required to constrain models with Δ⟨cos θ⟩ ~ 0.03. These measurements will constrain models of galaxy-cosmic web alignment and test tidal torque theory at z ~ 2, improving our understanding of the physics of intrinsic alignments.

  18. Measuring Alignments between Galaxies and the Cosmic Web at z ~ 2–3 Using IGM Tomography

    DOE PAGES

    Krolewski, Alex; Lee, Khee-Gan; Lukić, Zarija; ...

    2017-02-28

    Many galaxy formation models predict alignments between galaxy spin and the cosmic web (i.e., directions of filaments and sheets), leading to an intrinsic alignment between galaxies that creates a systematic error in weak-lensing measurements. These effects are often predicted to be stronger at high redshifts (z ≳ 1) that are inaccessible to massive galaxy surveys on foreseeable instrumentation, but IGM tomography of the Lyα forest from closely spaced quasars and galaxies is starting to measure the z ~ 2-3 cosmic web with requisite fidelity. Using mock surveys from hydrodynamical simulations, we examine the utility of this technique, in conjunction with coeval galaxy samples, to measure alignment between galaxies and the cosmic web at z ~ 2.5. We show that IGM tomography surveys with ≲5 h^-1 Mpc sightline spacing can accurately recover the eigenvectors of the tidal tensor, which we use to define the directions of the cosmic web. For galaxy spins and shapes, we use a model parameterized by the alignment strength, Δ⟨cos θ⟩, with respect to the tidal tensor eigenvectors from the underlying density field, and also consider observational effects such as errors in the galaxy position angle, inclination, and redshift. Measurements using the upcoming ~1 deg^2 CLAMATO tomographic survey and 600 coeval zCOSMOS-Deep galaxies should place 3σ limits on extreme alignment models with Δ⟨cos θ⟩ ~ 0.1, but much larger surveys encompassing >10,000 galaxies, such as Subaru PFS, will be required to constrain models with Δ⟨cos θ⟩ ~ 0.03. These measurements will constrain models of galaxy-cosmic web alignment and test tidal torque theory at z ~ 2, improving our understanding of the physics of intrinsic alignments.

  19. Probability-based constrained MPC for structured uncertain systems with state and random input delays

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Li, Dewei; Xi, Yugeng

    2013-07-01

    This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The input delay process is governed by a discrete-time finite-state Markov chain. By introducing an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of the performance objective. The proposed design is proved to stabilise the closed-loop system in the mean-square sense and to guarantee the constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.
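
    For concreteness, the record's assumption that the input delay is governed by a discrete-time finite-state Markov chain can be simulated as below; the two-state transition matrix and the delay values are illustrative placeholders, not quantities from the paper.

```python
import numpy as np

def simulate_delay_chain(P, delays, steps, rng=None):
    """Sample a trajectory of the random input delay.

    P      : row-stochastic transition matrix, P[i, j] = Pr(next state j | current i)
    delays : delay value (in samples) associated with each Markov state
    steps  : number of time steps to simulate
    """
    rng = np.random.default_rng() if rng is None else rng
    state = 0
    trajectory = []
    for _ in range(steps):
        trajectory.append(delays[state])
        state = rng.choice(len(delays), p=P[state])
    return np.array(trajectory)

# Illustrative 2-state chain: delay of 0 or 1 sample
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
tau = simulate_delay_chain(P, delays=[0, 1], steps=20)
print(tau)
```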

  20. Constraining friction, dilatancy and effective stress with earthquake rates in the deep crust

    NASA Astrophysics Data System (ADS)

    Beeler, N. M.; Thomas, A.; Burgmann, R.; Shelly, D. R.

    2015-12-01

    Similar to their behavior on the deep extent of some subduction zones, families of recurring low-frequency earthquakes (LFE) within zones of non-volcanic tremor on the San Andreas fault in central California show strong sensitivity to stresses induced by the tides. Taking all of the LFE families collectively, LFEs occur at all levels of the daily tidal stress and are in phase with the very small, ~200 Pa, shear stress amplitudes, while being uncorrelated with the ~2 kPa tidal normal stresses. Following previous work we assume LFE sources are small, persistent regions that repeatedly fail during shear within a much larger scale, otherwise aseismically creeping fault zone, and that the correlation of LFE occurrence reflects modulation of the fault creep rate by the tidal stresses. We examine the predictions of laboratory-observed rate-dependent dilatancy associated with frictional slip. The effect of dilatancy hardening is to damp the slip rate, so high dilatancy under undrained pore pressure reduces modulation of the slip rate by the tides. The undrained end-member model produces: (1) no sensitivity to the tidal normal stress, as first suggested in this context by Hawthorne and Rubin [2010], and (2) fault creep rate in phase with the tidal shear stress. Room-temperature laboratory values of the dilatancy and friction coefficients for talc, an extremely weak and weakly dilatant material, under-predict the observed San Andreas modulation by at least an order of magnitude owing to too much dilatancy. This may reflect a temperature dependence of the dilatancy and friction coefficients, both of which are expected to be zero at the brittle-ductile transition. The observed tidal modulation constrains the product of the friction and dilatancy coefficients to be at most 5 × 10^-7 in the LFE source region, an order of magnitude smaller than observed at room temperature for talc. Alternatively, considering the predictions of a purely rate-dependent talc friction would constrain the ambient effective normal stress to be no more than 40 kPa. In summary, for friction models that have both rate-dependent strength and dilatancy, the observations require intrinsic weakness, low dilatancy, and lithostatic pore fluid pressures.

  1. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. the coefficient of determination (R2), bias and the Nash-Sutcliffe model efficiency (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matters of roots, storages, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape parameter of the retention curve n was highly constrained whilst other parameters of the retention curve showed a large equifinality. The root and storage dry matter observations were predicted with an NSE of 0.94, a low bias of 58.2 kg ha^-1 and a high R2 of 0.98. Dry matters of stem and leaves were predicted with lower, but still high, accuracy (NSE = 0.79, bias = 221.7 kg ha^-1, R2 = 0.87). We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the plant growth model were the radiation use efficiency and the base temperature. Cross-validation helped to identify deficits in the model structure, pointing out the need to include agricultural management options in the coupled model.
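
    The GLUE procedure described above (uniform Monte Carlo sampling of the parameter space followed by likelihood-based filtering) can be sketched generically as follows; the toy exponential model, the parameter bounds, and the behavioural NSE threshold are placeholders and do not correspond to the actual CMF-PMF setup.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(model, bounds, obs, n_runs=10000, threshold=0.5, rng=None):
    """Uniform Monte Carlo sampling; keep 'behavioural' parameter sets with NSE >= threshold."""
    rng = np.random.default_rng() if rng is None else rng
    lower, upper = np.array(bounds).T
    behavioural = []
    for _ in range(n_runs):
        theta = rng.uniform(lower, upper)
        score = nse(model(theta), obs)
        if score >= threshold:
            behavioural.append((score, theta))
    return behavioural

# Toy recession-style model standing in for the coupled CMF-PMF simulator
t = np.arange(100.0)
def toy_model(theta):
    a, b = theta
    return a * np.exp(-b * t)

obs = toy_model([2.0, 0.05]) + np.random.default_rng(1).normal(0, 0.05, t.size)
kept = glue(toy_model, bounds=[(0.1, 5.0), (0.01, 0.2)], obs=obs, n_runs=5000)
print(len(kept), "behavioural parameter sets")
```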

  2. Monte Carlo based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2013-12-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. the coefficient of determination (R2), bias and the Nash-Sutcliffe model efficiency (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matters of roots, storages, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape parameter of the retention curve n was highly constrained whilst other parameters of the retention curve showed a large equifinality. The root and storage dry matter observations were predicted with an NSE of 0.94, a low bias of -58.2 kg ha^-1 and a high R2 of 0.98. Dry matters of stem and leaves were predicted with lower, but still high, accuracy (NSE = 0.79, bias = 221.7 kg ha^-1, R2 = 0.87). We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the plant growth model were the radiation use efficiency and the base temperature. Cross-validation helped to identify deficits in the model structure, pointing out the need to include agricultural management options in the coupled model.

  3. On the use of through-fall exclusion experiments to filter model hypotheses.

    NASA Astrophysics Data System (ADS)

    Fisher, R.

    2015-12-01

    One key threat to the continued existence of large tropical forest carbon reservoirs is the increasing severity of drought across Amazonian forests, observed in climate model predictions, in recent extreme drought events, and in the more chronic lengthening of the dry season of southeastern Amazonia. Model comprehension of these systems is in its infancy, particularly with regard to the sensitivities of model output to the representation of hydraulic strategies in tropical forest systems. Here we use data from the ongoing 14-year-old Caxiuana through-fall exclusion experiment in eastern Brazil to filter a set of representations of the costs and benefits of alternative hydraulic strategies. In representations where there is a high resource cost to hydraulic resilience, the trait-filtering CLM4.5(ED) model selects vegetation types that are sensitive to drought. Conversely, where drought tolerance is inexpensive, a more robust ecosystem emerges from the vegetation dynamics prediction. Thus, trait trade-off relationships have an impact on rainforest drought tolerance. It is possible to constrain the more realistic scenarios using outputs from the drought experiments. Better prediction would likely result from a more comprehensive understanding of the costs and benefits of alternative plant strategies.

  4. Prediction of Phase Separation of Immiscible Ga-Tl Alloys

    NASA Astrophysics Data System (ADS)

    Kim, Yunkyum; Kim, Han Gyeol; Kang, Youn-Bae; Kaptay, George; Lee, Joonho

    2017-06-01

    Phase separation temperature of Ga-Tl liquid alloys was investigated using the constrained drop method. With this method, density and surface tension were investigated together. Despite strong repulsive interactions, molar volume showed ideal mixing behavior, whereas surface tension of the alloy was close to that of pure Tl due to preferential adsorption of Tl. Phase separation temperatures and surface tension values obtained with this method were close to the theoretically calculated values using three different thermodynamic models.

  5. High-frequency predictions for number counts and spectral properties of extragalactic radio sources. New evidence of a break at mm wavelengths in spectra of bright blazar sources

    NASA Astrophysics Data System (ADS)

    Tucci, M.; Toffolatti, L.; de Zotti, G.; Martínez-González, E.

    2011-09-01

    We present models to predict high-frequency counts of extragalactic radio sources using physically grounded recipes to describe the complex spectral behaviour of blazars that dominate the mm-wave counts at bright flux densities. We show that simple power-law spectra are ruled out by high-frequency (ν ≥ 100 GHz) data. These data also strongly constrain models featuring the spectral breaks predicted by classical physical models for the synchrotron emission produced in jets of blazars. A model dealing with blazars as a single population is, at best, only marginally consistent with data coming from current surveys at high radio frequencies. Our most successful model assumes different distributions of break frequencies, νM, for BL Lacs and flat-spectrum radio quasars (FSRQs). The former objects have substantially higher values of νM, implying that the synchrotron emission comes from more compact regions; therefore, a substantial increase of the BL Lac fraction at high radio frequencies and at bright flux densities is predicted. Remarkably, our best model is able to give a very good fit to all the observed data on number counts and on distributions of spectral indices of extragalactic radio sources at frequencies above 5 and up to 220 GHz. Predictions for the forthcoming sub-mm blazar counts from Planck, at the highest HFI frequencies, and from Herschel surveys are also presented. Appendices are available in electronic form at http://www.aanda.org

  6. Downscaling ocean conditions with application to the Gulf of Maine, Scotian Shelf and adjacent deep ocean

    NASA Astrophysics Data System (ADS)

    Katavouta, Anna; Thompson, Keith R.

    2016-08-01

    The overall goal is to downscale ocean conditions predicted by an existing global prediction system and evaluate the results using observations from the Gulf of Maine, Scotian Shelf and adjacent deep ocean. The first step is to develop a one-way nested regional model and evaluate its predictions using observations from multiple sources including satellite-borne sensors of surface temperature and sea level, CTDs, Argo floats and moored current meters. It is shown that the regional model predicts more realistic fields than the global system on the shelf because it has higher resolution and includes tides that are absent from the global system. However, in deep water the regional model misplaces deep ocean eddies and meanders associated with the Gulf Stream. This is not because the regional model's dynamics are flawed but rather is the result of internally generated variability in deep water that leads to decoupling of the regional model from the global system. To overcome this problem, the next step is to spectrally nudge the regional model to the large scales (length scales > 90 km) of the global system. It is shown this leads to more realistic predictions off the shelf. Wavenumber spectra show that even though spectral nudging constrains the large scales, it does not suppress the variability on small scales; on the contrary, it favours the formation of eddies with length scales below the cutoff wavelength of the spectral nudging.
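
    Spectral nudging as described here relaxes only the large scales (wavelengths above roughly 90 km) of the regional fields toward the driving global system, leaving the small scales free to evolve. A one-dimensional illustration is sketched below; the grid, cutoff handling, and relaxation coefficient are assumptions for the sketch, not the configuration of the regional model.

```python
import numpy as np

def spectral_nudge(regional, global_field, dx, cutoff_km=90.0, alpha=0.1):
    """Relax the large scales (wavelengths > cutoff) of a 1D periodic field toward a driving field.

    regional, global_field : 1D arrays on the same periodic grid
    dx                     : grid spacing in km
    alpha                  : nudging strength per call (0..1), applied only to the kept scales
    """
    n = regional.size
    k = np.fft.fftfreq(n, d=dx)                 # cycles per km
    large_scale = np.abs(k) < 1.0 / cutoff_km   # wavelengths longer than the cutoff

    r_hat = np.fft.fft(regional)
    g_hat = np.fft.fft(global_field)
    r_hat[large_scale] += alpha * (g_hat[large_scale] - r_hat[large_scale])
    return np.fft.ifft(r_hat).real

# Illustrative use: the small-scale eddies in the regional field are untouched
x = np.linspace(0, 2000, 512, endpoint=False)
global_field = np.sin(2 * np.pi * x / 500.0)
regional = global_field + 0.3 * np.sin(2 * np.pi * x / 30.0) + 0.5  # offset + small-scale eddies
nudged = spectral_nudge(regional, global_field, dx=x[1] - x[0])
```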

  7. Topographic asymmetry of the South Atlantic from global models of mantle flow and lithospheric stretching

    NASA Astrophysics Data System (ADS)

    Flament, Nicolas; Gurnis, Michael; Williams, Simon; Seton, Maria; Skogseid, Jakob; Heine, Christian; Dietmar Müller, R.

    2014-02-01

    The relief of the South Atlantic is characterized by elevated passive continental margins along southern Africa and eastern Brazil, and by the bathymetric asymmetry of the southern oceanic basin where the western flank is much deeper than the eastern flank. We investigate the origin of these topographic features in the present and over time since the Jurassic with a model of global mantle flow and lithospheric deformation. The model progressively assimilates plate kinematics, plate boundaries and lithospheric age derived from global tectonic reconstructions with deforming plates, and predicts the evolution of mantle temperature, continental crustal thickness, long-wavelength dynamic topography, and isostatic topography. Mantle viscosity and the kinematics of the opening of the South Atlantic are adjustable parameters in thirteen model cases. Model predictions are compared to observables both for the present-day and in the past. Present-day predictions are compared to topography, mantle tomography, and an estimate of residual topography. Predictions for the past are compared to tectonic subsidence from backstripped borehole data along the South American passive margin, and to dynamic uplift as constrained by thermochronology in southern Africa. Comparison between model predictions and observations suggests that the first-order features of the topography of the South Atlantic are due to long-wavelength dynamic topography, rather than to asthenospheric processes. The uplift of southern Africa is best reproduced with a lower mantle that is at least 40 times more viscous than the upper mantle.

  8. Topographic asymmetry of the South Atlantic from global models of mantle flow and lithospheric stretching

    NASA Astrophysics Data System (ADS)

    Flament, Nicolas; Gurnis, Michael; Williams, Simon; Seton, Maria; Skogseid, Jakob; Heine, Christian; Müller, Dietmar

    2014-05-01

    The relief of the South Atlantic is characterized by elevated passive continental margins along southern Africa and eastern Brazil, and by the bathymetric asymmetry of the southern oceanic basin where the western flank is much deeper than the eastern flank. We investigate the origin of these topographic features in the present and over time since the Jurassic with a model of global mantle flow and lithospheric deformation. The model progressively assimilates plate kinematics, plate boundaries and lithospheric age derived from global tectonic reconstructions with deforming plates, and predicts the evolution of mantle temperature, continental crustal thickness, long-wavelength dynamic topography, and isostatic topography. Mantle viscosity and the kinematics of the opening of the South Atlantic are adjustable parameters in multiple model cases. Model predictions are compared to observables both for the present-day and in the past. Present-day predictions are compared to topography, mantle tomography, and an estimate of residual topography. Predictions for the past are compared to tectonic subsidence from backstripped borehole data along the South American passive margin, and to dynamic uplift as constrained by thermochronology in southern Africa. Comparison between model predictions and observations suggests that the first-order features of the topography of the South Atlantic are due to long-wavelength dynamic topography, rather than to asthenospheric processes. We find the uplift of southern Africa to be best reproduced with a lower mantle that is at least 40 times more viscous than the upper mantle.

  9. Comprehensive, Process-based Identification of Hydrologic Models using Satellite and In-situ Water Storage Data: A Multi-objective calibration Approach

    NASA Astrophysics Data System (ADS)

    Abdo Yassin, Fuad; Wheater, Howard; Razavi, Saman; Sapriza, Gonzalo; Davison, Bruce; Pietroniro, Alain

    2015-04-01

    The credible identification of vertical and horizontal hydrological components and their associated parameters is very challenging (if not impossible) when the model is constrained only to streamflow data, especially in regions where the vertical processes significantly dominate the horizontal processes. The prairie areas of the Saskatchewan River basin, a major water system in Canada, demonstrate such behavior, where the hydrologic connectivity and vertical fluxes are mainly controlled by the amount of surface and sub-surface water storage. In this study, we develop a framework for distributed hydrologic model identification and calibration that jointly constrains the model response (i.e., streamflows) as well as a set of model state variables (i.e., water storages) to observations. This framework is set up in the form of multi-objective optimization, where multiple performance criteria are defined and used to simultaneously evaluate the fidelity of the model to streamflow observations and to observed (estimated) changes of water storage in the gridded landscape over daily and monthly time scales. The time series of estimated changes in total water storage (including soil, canopy, snow and pond storages) used in this study were derived from an experimental study enhanced by information obtained from the GRACE satellite. We test this framework on the calibration of a Land Surface Scheme-Hydrology model, called MESH (Modélisation Environnementale Communautaire - Surface and Hydrology), for the Saskatchewan River basin. Pareto Archived Dynamically Dimensioned Search (PA-DDS) is used as the multi-objective optimization engine. The significance of the developed framework is demonstrated in comparison with the results obtained through a conventional calibration approach against streamflow observations only. Incorporating water storage data into the model identification process can further constrain the posterior parameter space, more comprehensively evaluate model fidelity, and yield more credible predictions.
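
    The multi-objective calibration described above amounts to retaining parameter sets that are not dominated on the streamflow and water-storage criteria. A generic Pareto filter is sketched below; it is not the PA-DDS algorithm itself, and the objective values are invented for illustration.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated rows, assuming all objectives are to be maximised
    (e.g. an NSE for streamflow and an NSE for gridded water-storage change)."""
    obj = np.asarray(objectives)
    keep = []
    for i in range(obj.shape[0]):
        # Row j dominates row i if it is >= everywhere and > in at least one objective
        dominated = np.any(np.all(obj >= obj[i], axis=1) & np.any(obj > obj[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

# Illustrative candidate parameter sets scored on two objectives
scores = [(0.62, 0.40), (0.50, 0.45), (0.61, 0.57), (0.30, 0.65)]
print(pareto_front(scores))   # -> indices of the non-dominated sets
```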

  10. Common Crime and Domestic Violence Victimization of Older Chinese in Urban China: The Prevalence and Its Impact on Mental Health and Constrained Behavior.

    PubMed

    Qin, Nan; Yan, Elsie

    2018-03-01

    This article examines the prevalence of victimization among older Chinese living in urban China and its psychological and behavioral impacts. A representative sample of 453 older adults aged 60 or above was recruited from Kunming, the People's Republic of China, using a multistage sampling method. Participants were individually interviewed about their demographic characteristics, experience of common crime and domestic violence victimization, fear of common crime and domestic violence, mental health, and constrained behavior. Results showed that 254 participants (56.1%) reported one or more types of common crime and 21 (4.6%) reported experiencing domestic violence in the past. Seventeen participants (3.8%) reportedly experienced both common crime and domestic violence victimization. There was no gender difference in the overall incidence of victimization, although differences existed in some subtypes. Regression analyses indicated that past experience of common crime victimization was significantly associated with greater fear of common crime (β = .136, p = .004), poorer mental health (β = .136, p = .003), and more constrained behavior (β = .108, p = .025). Fear of common crime predicted increased constrained behavior (β = .240, p < .001) independent of gender, age, education, household finances, living arrangement, and physical health. Domestic violence victimization was not significant in predicting poor mental health and constrained behavior but was significant in predicting fear of domestic violence (β = .266, p < .001), which was in turn related to poorer mental health (β = .102, p = .039). The study suggests the importance of taking older people's risk and experience of victimization into consideration in gerontological research, practice, and policymaking.

  11. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J.

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.

  12. Exploring extended scalar sectors with di-Higgs signals: a Higgs EFT perspective

    NASA Astrophysics Data System (ADS)

    Corbett, Tyler; Joglekar, Aniket; Li, Hao-Lin; Yu, Jiang-Hao

    2018-05-01

    We consider extended scalar sectors of the Standard Model as ultraviolet complete motivations for studying the effective Higgs self-interaction operators of the Standard Model effective field theory. We investigate all motivated heavy scalar models which generate the dimension-six effective operator |H|^6 at tree level and proceed to identify the full set of tree-level dimension-six operators by integrating out the heavy scalars. Of seven models which generate |H|^6 at tree level only two, quadruplets of hypercharge Y = 3Y_H and Y = Y_H, generate only this operator. Next we perform global fits to constrain relevant Wilson coefficients from the LHC single Higgs measurements as well as the electroweak oblique parameters S and T. We find that the T parameter puts very strong constraints on the Wilson coefficient of the |H|^6 operator in the triplet and quadruplet models, while the singlet and doublet models could still have Higgs self-couplings which deviate significantly from the Standard Model prediction. To determine the extent to which the |H|^6 operator could be constrained, we study the di-Higgs signatures at the future 100 TeV collider and explore the future sensitivity of this operator. Projected onto the Higgs potential parameters of the extended scalar sectors, with 30 ab^-1 of luminosity data we will be able to explore the Higgs potential parameters in all seven models.

  13. Modeling Lake Storage Dynamics to support Arctic Boreal Vulnerability Experiment (ABoVE)

    NASA Astrophysics Data System (ADS)

    Vimal, S.; Lettenmaier, D. P.; Smith, L. C.; Smith, S.; Bowling, L. C.; Pavelsky, T.

    2017-12-01

    The Arctic and Boreal Zone (ABZ) of Canada and Alaska includes vast areas of permafrost, lakes, and wetlands. Permafrost thawing in this area is expected to increase due to the projected rise of temperature caused by climate change. Over the long term, this may reduce overall surface water area, but in the near term the opposite is being observed, with rising paludification (lake/wetland expansion). One element of NASA's ABoVE field experiment is observations of lake and wetland extent and water surface elevation (WSE) using NASA's AirSWOT airborne interferometric radar, accompanied by a high-resolution camera. One use of the WSE retrievals will be to constrain model estimates of lake storage dynamics. Here, we compare predictions using the lake dynamics algorithm within the Variable Infiltration Capacity (VIC) land surface scheme. The VIC lake algorithm includes representation of sub-grid topography, where the depth and area of seasonally flooded areas are modeled as a function of topographic wetness index, basin area, and slope. The topography data used are from a new global digital elevation model, MERIT-DEM. We initially set up VIC at sites with varying permafrost conditions (i.e., no permafrost, discontinuous, continuous) in Saskatoon and Yellowknife, Canada, and Toolik Lake, Alaska. We constrained the uncalibrated model with the WSE at the time of the first ABoVE flight, and quantified the model's ability to predict WSE and ΔWSE at the time of the second flight. Finally, we evaluated the sensitivity of the VIC lake model and compared the three permafrost conditions. Our results quantify the sensitivity of surface water to permafrost state across the target sites. Furthermore, our evaluation of the lake modeling framework contributes to the modeling and mapping framework for lake and reservoir storage change evaluation globally as part of the SWOT mission, planned for launch in 2021.
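
    The VIC lake scheme summarized here distributes sub-grid flooded area as a function of topographic wetness index, basin area, and slope. A minimal TWI calculation from DEM-derived quantities is sketched below; the input values are illustrative and do not reflect the MERIT-DEM processing used in the study.

```python
import numpy as np

def topographic_wetness_index(upslope_area, slope_rad, cell_size):
    """TWI = ln(a / tan(beta)), with a the specific catchment area
    (upslope contributing area per unit contour width) and beta the local slope."""
    specific_area = upslope_area / cell_size            # m^2 per m of contour width
    tan_beta = np.tan(np.clip(slope_rad, 1e-3, None))   # avoid division by zero on flats
    return np.log(specific_area / tan_beta)

# Illustrative values: 2500 m^2 contributing area, 30 m cells, 2 degree slope
twi = topographic_wetness_index(np.array([2500.0]), np.radians([2.0]), cell_size=30.0)
print(twi)
```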

  14. Dust models post-Planck: constraining the far-infrared opacity of dust in the diffuse interstellar medium

    NASA Astrophysics Data System (ADS)

    Fanciullo, L.; Guillet, V.; Aniano, G.; Jones, A. P.; Ysard, N.; Miville-Deschênes, M.-A.; Boulanger, F.; Köhler, M.

    2015-08-01

    Aims: We compare the performance of several dust models in reproducing the dust spectral energy distribution (SED) per unit extinction in the diffuse interstellar medium (ISM). We use our results to constrain the variability of the optical properties of big grains in the diffuse ISM, as published by the Planck collaboration. Methods: We use two different techniques to compare the predictions of dust models to data from the Planck HFI, IRAS, and SDSS surveys. First, we fit the far-infrared emission spectrum to recover the dust extinction and the intensity of the interstellar radiation field (ISRF). Second, we infer the ISRF intensity from the total power emitted by dust per unit extinction, and then predict the emission spectrum. In both cases, we test the ability of the models to reproduce dust emission and extinction at the same time. Results: We identify two issues. Not all models can reproduce the average dust emission per unit extinction: there are differences of up to a factor ~2 between models, and the best accord between model and observation is obtained with the more emissive grains derived from recent laboratory data on silicates and amorphous carbons. All models fail to reproduce the variations in the emission per unit extinction if the only variable parameter is the ISRF intensity: this confirms that the optical properties of dust are indeed variable in the diffuse ISM. Conclusions: Diffuse ISM observations are consistent with a scenario where both ISRF intensity and dust optical properties vary. The ratio of the far-infrared opacity to the V band extinction cross-section presents variations of the order of ~20% (40-50% in extreme cases), while ISRF intensity varies by ~30% (~60% in extreme cases). This must be accounted for in future modelling. Appendices are available in electronic form at http://www.aanda.org
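
    As a much-simplified stand-in for the dust models compared in this record, fitting a single-temperature modified blackbody to far-infrared fluxes illustrates how an emission spectrum can be used to recover a dust optical depth and an effective temperature (a rough proxy for ISRF intensity); the frequencies, spectral index, and mock fluxes below are assumptions for the sketch, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

H = 6.626e-34   # Planck constant [J s]
KB = 1.381e-23  # Boltzmann constant [J/K]
C = 2.998e8     # speed of light [m/s]

def modified_blackbody(nu, tau0, T, beta=1.8, nu0=353e9):
    """Optically thin modified blackbody: I_nu = tau0 * (nu/nu0)^beta * B_nu(T)."""
    b_nu = 2 * H * nu**3 / C**2 / (np.exp(H * nu / (KB * T)) - 1.0)
    return tau0 * (nu / nu0) ** beta * b_nu

# Illustrative fit to mock fluxes at a few far-infrared frequencies (Hz)
nu = np.array([353e9, 545e9, 857e9, 3000e9])
mock = modified_blackbody(nu, 1e-6, 19.0) * (1 + 0.02 * np.random.default_rng(0).normal(size=4))
popt, _ = curve_fit(modified_blackbody, nu, mock, p0=[1e-6, 18.0])
print("tau_353 = %.2e, T = %.1f K" % tuple(popt))
```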

  15. Crustal tracers in the atmosphere and ocean: Relating their concentrations, fluxes, and ages

    NASA Astrophysics Data System (ADS)

    Han, Qin

    Crustal tracers are important sources of key limiting nutrients (e.g., iron) in remote ocean regions where they have a large impact on global biogeochemical cycles. However, the atmospheric delivery of bio-available iron to oceans via mineral dust aerosol deposition is poorly constrained. This dissertation aims to improve understanding and model representation of oceanic dust deposition and to provide soluble iron flux maps by testing observations of crustal tracer concentrations and solubilities against predictions from two conceptual solubility models. First, we assemble a database of ocean surface dissolved Al and incorporate Al cycling into the global Biogeochemical Elemental Cycling (BEC) model. The observed Al concentrations show clear basin-scale differences that are useful for constraining dust deposition. The dynamic mixed layer depth and Al residence time in the BEC model significantly improve the simulated dissolved Al field. Some of the remaining model-data discrepancies appear related to the neglect of aerosol size, age, and air mass characteristics in estimating tracer solubility. Next, we develop the Mass-Age Tracking method (MAT) to efficiently and accurately estimate the mass-weighted age of tracers. We apply MAT to four sizes of desert dust aerosol and simulate, for the first time, global distributions of aerosol age in the atmosphere and at deposition. These dust size and age distributions at deposition, together with independent information on air mass acidity, allow us to test two simple yet plausible models for predicting the dissolution of mineral dust iron and aluminum during atmospheric transport. These models represent aerosol solubility as controlled (1) by a diffusive process leaching nutrients from the dust into equilibrium with the liquid water coating or (2) by a process that continually dissolves nutrients in proportion to the particle surface area. The surface-controlled model better captures the spatial pattern of observed solubility in the Atlantic. Neither model improves previous estimates of the solubility in the Pacific, nor do they significantly improve the global BEC simulation of dissolved iron or aluminum.

  16. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

    Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on average 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522
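
    Multivariate metamodelling, as used in this record, means a statistical approximation of the deterministic model's input-output relationship. A minimal response-surface surrogate (ordinary least squares on quadratic features, not the specific metamodelling machinery of the paper) is sketched below with an invented stand-in simulator.

```python
import numpy as np

def quadratic_features(theta):
    """Design matrix for a quadratic response-surface metamodel in the parameters."""
    theta = np.atleast_2d(theta)
    cols = [np.ones(len(theta))]
    n = theta.shape[1]
    for i in range(n):
        cols.append(theta[:, i])
    for i in range(n):
        for j in range(i, n):
            cols.append(theta[:, i] * theta[:, j])
    return np.column_stack(cols)

def fit_metamodel(thetas, outputs):
    """Least-squares fit of the surrogate mapping parameters -> model output."""
    X = quadratic_features(thetas)
    coef, *_ = np.linalg.lstsq(X, outputs, rcond=None)
    return coef

def predict(coef, thetas):
    return quadratic_features(thetas) @ coef

# Illustrative: emulate an "expensive" deterministic model of two parameters
rng = np.random.default_rng(0)
thetas = rng.uniform(0, 1, size=(200, 2))
outputs = np.sin(3 * thetas[:, 0]) + thetas[:, 1] ** 2          # stand-in simulator
coef = fit_metamodel(thetas, outputs)
print(predict(coef, [[0.5, 0.5]]))
```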

  17. Constrained reduced-order models based on proper orthogonal decomposition

    DOE PAGES

    Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...

    2017-04-09

    A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
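
    POD-based reduced-order models of the kind constrained in this record start from a singular value decomposition of a snapshot matrix. The sketch below builds a POD basis and reconstructs a field from a few modes; the bound enforcement shown is a simple clip that merely stands in for, and does not implement, the KKT-based constraint of the C-ROM.

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """Snapshots: (n_dof, n_snapshots). Returns the first n_modes POD modes."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]

def rom_reconstruct(phi, field, lower=None, upper=None):
    """Project a field onto the POD basis and reconstruct; optionally clip to bounds
    (a crude stand-in for the KKT-constrained solution of the C-ROM)."""
    coeffs = phi.T @ field
    recon = phi @ coeffs
    if lower is not None or upper is not None:
        recon = np.clip(recon, lower, upper)
    return recon

# Illustrative data: travelling-wave snapshots
x = np.linspace(0, 1, 200)
snaps = np.column_stack([np.sin(2 * np.pi * (x - 0.02 * k)) for k in range(50)])
phi = pod_basis(snaps, n_modes=3)
recon = rom_reconstruct(phi, snaps[:, 10], lower=-1.0, upper=1.0)
```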

  18. The wing pattern of Moerarchis Durrant, 1914 (Lepidoptera: Tineidae) clarifies transitions between predictive models

    PubMed Central

    2017-01-01

    The evolution of wing pattern in Lepidoptera is a popular area of inquiry but few studies have examined microlepidoptera, with fewer still focusing on intraspecific variation. The tineid genus Moerarchis Durrant, 1914 includes two species with high intraspecific variation of wing pattern. A subset of the specimens examined here provide, to my knowledge, the first examples of wing patterns that follow both the ‘alternating wing-margin’ and ‘uniform wing-margin’ models in different regions along the costa. These models can also be evaluated along the dorsum of Moerarchis, where a similar transition between the two models can be seen. Fusion of veins is shown not to effect wing pattern, in agreement with previous inferences that the plesiomorphic location of wing veins constrains the development of colour pattern. The significant correlation between wing length and number of wing pattern elements in Moerarchis australasiella shows that wing size can act as a major determinant of wing pattern complexity. Lastly, some M. australasiella specimens have wing patterns that conform entirely to the ‘uniform wing-margin’ model and contain more than six bands, providing new empirical insight into the century-old question of how wing venation constrains wing patterns with seven or more bands. PMID:28405390

  19. Distribution drivers and physiological responses in geothermal bryophyte communities.

    PubMed

    García, Estefanía Llaneza; Rosenstiel, Todd N; Graves, Camille; Shortlidge, Erin E; Eppley, Sarah M

    2016-04-01

    Our ability to explain community structure rests on our ability to define the importance of ecological niches, including realized ecological niches, in shaping communities, but few studies of plant distributions have combined predictive models with physiological measures. Using field surveys and statistical modeling, we predicted distribution drivers in geothermal bryophyte (moss) communities of Lassen Volcanic National Park (California, USA). In the laboratory, we used drying and rewetting experiments to test whether the strong species-specific effects of relative humidity on distributions predicted by the models were correlated with physiological characters. We found that the three most common bryophytes in geothermal communities were significantly affected by three distinct distribution drivers: temperature, light, and relative humidity. Aulacomnium palustre, whose distribution is significantly affected by relative humidity according to our model, and which occurs in high-humidity sites, showed extreme signs of stress after drying and never recovered optimal values of PSII efficiency after rewetting. Campylopus introflexus, whose distribution is not affected by humidity according to our model, was able to maintain optimal values of PSII efficiency for 48 hr at 50% water loss and recovered optimal values of PSII efficiency after rewetting. Our results suggest that species-specific environmental stressors tightly constrain the ecological niches of geothermal bryophytes. Tests of tolerance to drying in two bryophyte species corresponded with model predictions of the comparative importance of relative humidity as distribution drivers for these species. © 2016 Botanical Society of America.

  20. Evaluation of tropical Pacific observing systems using NCEP and GFDL ocean data assimilation systems

    NASA Astrophysics Data System (ADS)

    Xue, Yan; Wen, Caihong; Yang, Xiaosong; Behringer, David; Kumar, Arun; Vecchi, Gabriel; Rosati, Anthony; Gudgel, Rich

    2017-08-01

    The TAO/TRITON array is the cornerstone of the tropical Pacific and ENSO observing system. Motivated by the recent rapid decline of the TAO/TRITON array, the potential utility of TAO/TRITON was assessed for ENSO monitoring and prediction. The analysis focused on the period when observations from Argo floats were also available. We coordinated observing system experiments (OSEs) using the global ocean data assimilation system (GODAS) from the National Centers for Environmental Prediction and the ensemble coupled data assimilation (ECDA) from the Geophysical Fluid Dynamics Laboratory for the period 2004-2011. Four OSE simulations were conducted with inclusion of different subsets of in situ profiles: all profiles (XBT, moorings, Argo), all except the moorings, all except the Argo and no profiles. For evaluation of the OSE simulations, we examined the mean bias, standard deviation difference, root-mean-square difference (RMSD) and anomaly correlation against observations and objective analyses. Without assimilation of in situ observations, both GODAS and ECDA had large mean biases and RMSD in all variables. Assimilation of all in situ data significantly reduced mean biases and RMSD in all variables except zonal current at the equator. For GODAS, the mooring data is critical in constraining temperature in the eastern and northwestern tropical Pacific, while for ECDA both the mooring and Argo data is needed in constraining temperature in the western tropical Pacific. The Argo data is critical in constraining temperature in off-equatorial regions for both GODAS and ECDA. For constraining salinity, sea surface height and surface current analysis, the influence of Argo data was more pronounced. In addition, the salinity data from the TRITON buoys played an important role in constraining salinity in the western Pacific. GODAS was more sensitive to withholding Argo data in off-equatorial regions than ECDA because it relied on local observations to correct model biases and there were few XBT profiles in those regions. The results suggest that multiple ocean data assimilation systems should be used to assess sensitivity of ocean analyses to changes in the distribution of ocean observations to get more robust results that can guide the design of future tropical Pacific observing systems.
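
    The evaluation metrics used in these observing system experiments (mean bias, RMSD, and anomaly correlation against observations) are straightforward to compute; a generic sketch follows, with anomalies defined relative to a supplied climatology, which is an assumption about the exact convention used.

```python
import numpy as np

def evaluation_metrics(analysis, obs, climatology):
    """Mean bias, RMSD and anomaly correlation of an ocean analysis against observations."""
    bias = np.mean(analysis - obs)
    rmsd = np.sqrt(np.mean((analysis - obs) ** 2))
    a_anom = analysis - climatology
    o_anom = obs - climatology
    acc = np.corrcoef(a_anom, o_anom)[0, 1]
    return bias, rmsd, acc

# Illustrative use with synthetic temperature series
rng = np.random.default_rng(0)
clim = 20.0 + np.zeros(365)
obs = clim + rng.normal(0, 0.5, 365)
analysis = obs + rng.normal(0.1, 0.3, 365)
print(evaluation_metrics(analysis, obs, clim))
```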

  1. Digital Image Restoration Under a Regression Model - The Unconstrained, Linear Equality and Inequality Constrained Approaches

    DTIC Science & Technology

    1974-01-01

    Report 520, January 1974, by Nelson Delfino d'Avila Mascarenhas. Digital image restoration under a regression model: the unconstrained, linear equality and inequality constrained approaches. A two-dimensional form adequately describes the linear model; a discretization is performed by using quadrature methods.

  2. Stress history of the Tharsis Region, Mars

    NASA Technical Reports Server (NTRS)

    Francis, Robert A.

    1987-01-01

    The Tharsis topographic rise of Mars is roughly 5000 km wide and 10 km high and is believed to have originated more than 3.5 BY ago. Within its boundaries lie the four largest volcanoes on the planet. It is also the locus of a series of fracture traces which extend over approximately a hemisphere. The events leading to the formation of the Tharsis region continue to generate debate. Three geophysical models of the formation of Tharsis are now in general contention and each of these models has been used to predict a characteristic stress-field. These models are: the volcanic construct model, the isostatic compensation model, and the lithospheric uplift model. Each has been used by its proponents to predict some of the features observed in the Tharsis region but none accurately accounts for all of the fracture features observed. This is due, in part, to the use of fractures too young to be directly related to the origin of Tharsis. To constrain the origin of Tharsis, as opposed to its later history, one should look for the oldest fractures related to Tharsis and compare these to the predictions made by the models. Mapping of old terrains in and around the Tharsis rise has revealed 175 hitherto unknown old fracture features.

  3. A physically-based method for predicting peak discharge of floods caused by failure of natural and constructed earthen dams

    USGS Publications Warehouse

    Walder, J.S.; O'Connor, J. E.; Costa, J.E.; ,

    1997-01-01

    We analyse a simple, physically-based model of breach formation in natural and constructed earthen dams to elucidate the principal factors controlling the flood hydrograph at the breach. Formation of the breach, which is assumed trapezoidal in cross-section, is parameterized by the mean rate of downcutting, k, the value of which is constrained by observations. A dimensionless formulation of the model leads to the prediction that the breach hydrograph depends upon lake shape, the ratio r of breach width to depth, the side slope θ of the breach, and the parameter η = (V/D^3)(k/√(gD)), where V = lake volume, D = lake depth, and g is the acceleration due to gravity. Calculations show that peak discharge Qp depends weakly on lake shape, r and θ, but strongly on η, which is the product of a dimensionless lake volume and a dimensionless erosion rate. Qp(η) takes asymptotically distinct forms depending on whether η ≪ 1 or η ≫ 1. Theoretical predictions agree well with data from dam failures for which k could be reasonably estimated. The analysis provides a rapid and in many cases graphical way to estimate plausible values of Qp at the breach.
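
    With the notation reconstructed above, where η is the product of a dimensionless lake volume and a dimensionless erosion rate, the controlling parameter can be evaluated directly; the lake geometry and downcutting rate below are purely illustrative values.

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def eta(lake_volume, lake_depth, downcutting_rate):
    """Dimensionless parameter eta = (V / D^3) * (k / sqrt(g * D)).

    lake_volume      : V [m^3]
    lake_depth       : D [m]
    downcutting_rate : k, mean breach downcutting rate [m/s]
    """
    return (lake_volume / lake_depth**3) * (downcutting_rate / np.sqrt(G * lake_depth))

# Illustrative example: 1e7 m^3 lake, 20 m deep, breach cutting down at 10 m/h
print(eta(1e7, 20.0, 10.0 / 3600.0))
```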

  4. Explaining dark matter and B decay anomalies with an L μ - L τ model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altmannshofer, Wolfgang; Gori, Stefania; Profumo, Stefano

    We present a dark sector model based on gauging the L μ - L τ symmetry that addresses anomalies in b → s μ+ μ- decays and that features a particle dark matter candidate. The dark matter particle candidate is a vector-like Dirac fermion coupled to the Z' gauge boson of the L μ - L τ symmetry. We compute the dark matter thermal relic density, its pair-annihilation cross section, and the loop-suppressed dark matter-nucleon scattering cross section, and compare our predictions with current and future experimental results. We demonstrate that after taking into account bounds from Bs meson oscillations, dark matter direct detection, and the CMB, the model is highly predictive: B physics anomalies and a viable particle dark matter candidate, with a mass of ~(5-23) GeV, can be accommodated only in a tightly-constrained region of parameter space, with sharp predictions for future experimental tests. The viable region of parameter space expands if the dark matter is allowed to have L μ - L τ charges that are smaller than those of the SM leptons.

  5. Explaining dark matter and B decay anomalies with an L μ - L τ model

    DOE PAGES

    Altmannshofer, Wolfgang; Gori, Stefania; Profumo, Stefano; ...

    2016-12-20

    We present a dark sector model based on gauging the L μ - L τ symmetry that addresses anomalies in b → s μ+ μ- decays and that features a particle dark matter candidate. The dark matter particle candidate is a vector-like Dirac fermion coupled to the Z' gauge boson of the L μ - L τ symmetry. We compute the dark matter thermal relic density, its pair-annihilation cross section, and the loop-suppressed dark matter-nucleon scattering cross section, and compare our predictions with current and future experimental results. We demonstrate that after taking into account bounds from Bs meson oscillations, dark matter direct detection, and the CMB, the model is highly predictive: B physics anomalies and a viable particle dark matter candidate, with a mass of ~(5-23) GeV, can be accommodated only in a tightly-constrained region of parameter space, with sharp predictions for future experimental tests. The viable region of parameter space expands if the dark matter is allowed to have L μ - L τ charges that are smaller than those of the SM leptons.

  6. The mid-cretaceous water bearer: Isotope mass balance quantification of the Albian hydrologic cycle

    USGS Publications Warehouse

    Ufnar, David F.; Gonzalez, Luis A.; Ludvigson, Greg A.; Brenner, Richard L.; Witzke, B.J.

    2002-01-01

    A latitudinal gradient in meteoric δ18O compositions compiled from paleosol sphaerosiderites throughout the Cretaceous Western Interior Basin (KWIB) (34–75°N paleolatitude) exhibits a steeper, more depleted trend than modern (predicted) values (3.0‰ [34°N latitude] to 9.7‰ [75°N] lighter). Furthermore, the sphaerosiderite meteoric δ18O latitudinal gradient is significantly steeper and more depleted (5.8‰ [34°N] to 13.8‰ [75°N] lighter) than a predicted gradient for the warm mid-Cretaceous using modern empirical temperature–δ18O precipitation relationships. We have suggested that the steeper and more depleted (relative to the modern theoretical gradient) meteoric sphaerosiderite δ18O latitudinal gradient resulted from increased air mass rainout effects in coastal areas of the KWIB during the mid-Cretaceous. The sphaerosiderite isotopic data have been used to constrain a mass balance model of the hydrologic cycle in the northern hemisphere and to quantify precipitation rates of the equable 'greenhouse' Albian Stage in the KWIB. The mass balance model tracks the evolving isotopic composition of an air mass and its precipitation, and is driven by latitudinal temperature gradients. Our simulations indicate that significant increases in Albian precipitation (34-52%) and evaporation fluxes (76-96%) are required to reproduce the difference between modern and Albian meteoric siderite δ18O latitudinal gradients. Calculations of precipitation rates from model outputs suggest mid-high latitude precipitation rates greatly exceeded modern rates (156-220% greater in mid latitudes [2600-3300 mm/yr], 99% greater at high latitudes [550 mm/yr]). The calculated precipitation rates are significantly different from the precipitation rates predicted by some recent general circulation models (GCMs) for the warm Cretaceous, particularly in the mid to high latitudes. Our mass balance model by no means replaces GCMs. However, it is a simple and effective means of obtaining quantitative data regarding the mid-Cretaceous hydrologic cycle in the KWIB. Our goal is to encourage the incorporation of isotopic tracers into GCM simulations of the mid-Cretaceous, and to show how our empirical data and mass balance model estimates help constrain the boundary conditions. © 2002 Elsevier Science B.V. All rights reserved.
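    As an illustration of the rainout effect invoked above, the sketch below implements a generic Rayleigh distillation calculation rather than the authors' full isotope mass-balance model: as an air mass progressively loses vapour to precipitation, both the residual vapour and the precipitation derived from it become more depleted in 18O. The fractionation factor and starting composition are assumed values chosen only for illustration.

```python
import numpy as np

# Generic Rayleigh distillation sketch (not the authors' full mass-balance
# model): as a fraction f of the initial vapour remains after progressive
# rainout, the vapour isotope ratio evolves as R_v = R_v0 * f**(alpha - 1),
# and the instantaneous precipitation is enriched by the factor alpha.
def rayleigh_delta(delta0_vapour, f, alpha=1.0094):
    """Return (delta_vapour, delta_precip) in permil after rainout to
    remaining vapour fraction f. alpha ~ 1.0094 is an assumed
    liquid-vapour 18O fractionation factor near 25 C."""
    r0 = delta0_vapour / 1000.0 + 1.0          # ratio relative to standard
    r_v = r0 * f ** (alpha - 1.0)              # residual vapour
    r_p = alpha * r_v                          # condensate in equilibrium
    return (r_v - 1.0) * 1000.0, (r_p - 1.0) * 1000.0

for f in (1.0, 0.8, 0.5, 0.2):
    dv, dp = rayleigh_delta(-12.0, f)
    print(f"f = {f:.1f}: vapour {dv:6.1f} permil, precipitation {dp:6.1f} permil")
```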

  7. A contact stress model for multifingered grasps of rough objects

    NASA Technical Reports Server (NTRS)

    Sinha, Pramath Raj; Abel, Jacob M.

    1990-01-01

    The model developed utilizes a contact-stress analysis of an arbitrarily shaped object in a multifingered grasp. The fingers and the object are all treated as elastic bodies, and the region of contact is modeled as a deformable surface patch. The relationship between the friction and normal forces is nonlocal and nonlinear in nature and departs from the Coulomb approximation. The nature of the constraints arising out of conditions for compatibility and static equilibrium motivated the formulation of the model as a nonlinear constrained minimization problem. The model is able to predict the magnitude of the inwardly directed normal forces and both the magnitude and direction of the tangential (friction) forces at each finger-object interface for grasped objects in static equilibrium.

  8. Dynamical Evolution of Planetary Embryos

    NASA Technical Reports Server (NTRS)

    Wetherill, George W.

    2002-01-01

    During the past decade, progress has been made by relating the 'standard model' for the formation of planetary systems to computational and observational advances. A significant contribution to this has been provided by this grant. The consequence of this is that the rigor of the physical modeling has improved considerably. This has identified discrepancies between the predictions of the standard model and recent observations of extrasolar planets. In some cases, the discrepancies can be resolved by recognition of the stochastic nature of the planetary formation process, leading to variations in the final state of a planetary system. In other cases, it seems more likely that there are major deficiencies in the standard model, requiring us to identify variations of the model that are not so strongly constrained by our Solar System.

  9. Variations of leaf longevity in tropical moist forests predicted by a trait-driven carbon optimality model

    DOE PAGES

    Xu, Xiangtao; Medvigy, David; Wright, Stuart Joseph; ...

    2017-07-04

    Leaf longevity (LL) varies more than 20-fold in tropical evergreen forests, but it remains unclear how to capture these variations using predictive models. Current theories of LL that are based on carbon optimisation principles are challenging to quantitatively assess because of uncertainty across species in the 'ageing rate': the rate at which leaf photosynthetic capacity declines with age. Here we present a meta-analysis of 49 species across temperate and tropical biomes, demonstrating that the ageing rate of photosynthetic capacity is positively correlated with the mass-based carboxylation rate of mature leaves. We assess an improved trait-driven carbon optimality model with in situ LL data for 105 species in two Panamanian forests. Additionally, we show that our model explains over 40% of the cross-species variation in LL under contrasting light environments. Collectively, our results reveal how variation in LL emerges from carbon optimisation constrained by both leaf structural traits and abiotic environment.

  10. Hints for new sources of flavour violation in meson mixing

    NASA Astrophysics Data System (ADS)

    Blanke, M.

    2017-07-01

    The recent results by the Fermilab-Lattice and MILC collaborations on the hadronic matrix elements entering B_d,s–B̄_d,s mixing show a significant tension of the measured values of the mass differences ΔM_d,s with their SM predictions. We review the implications of these results in the context of Constrained Minimal Flavour Violation models. In these models, the CKM elements γ and |V_ub|/|V_cb| can be determined from B_d,s–B̄_d,s mixing observables, yielding a prediction for γ below its tree-level value. Determining subsequently |V_cb| from the measured value of either ΔM_s or ε_K gives inconsistent results, with the tension being smallest in the Standard Model limit. This tension can be resolved if the flavour universality of new contributions to ΔF = 2 observables is broken. We briefly discuss the case of U(2)³ flavour models as an illustrative example.

  11. Variations of leaf longevity in tropical moist forests predicted by a trait-driven carbon optimality model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xiangtao; Medvigy, David; Wright, Stuart Joseph

    Leaf longevity (LL) varies more than 20-fold in tropical evergreen forests, but it remains unclear how to capture these variations using predictive models. Current theories of LL that are based on carbon optimisation principles are challenging to quantitatively assess because of uncertainty across species in the 'ageing rate': the rate at which leaf photosynthetic capacity declines with age. Here we present a meta-analysis of 49 species across temperate and tropical biomes, demonstrating that the ageing rate of photosynthetic capacity is positively correlated with the mass-based carboxylation rate of mature leaves. We assess an improved trait-driven carbon optimality model with in situ LL data for 105 species in two Panamanian forests. Additionally, we show that our model explains over 40% of the cross-species variation in LL under contrasting light environments. Collectively, our results reveal how variation in LL emerges from carbon optimisation constrained by both leaf structural traits and abiotic environment.

  12. Dispersal and extrapolation on the accuracy of temporal predictions from distribution models for the Darwin's frog.

    PubMed

    Uribe-Rivera, David E; Soto-Azat, Claudio; Valenzuela-Sánchez, Andrés; Bizama, Gustavo; Simonetti, Javier A; Pliscoff, Patricio

    2017-07-01

    Climate change is a major threat to biodiversity; the development of models that reliably predict its effects on species distributions is a priority for conservation biogeography. Two of the main issues for accurate temporal predictions from Species Distribution Models (SDM) are model extrapolation and unrealistic dispersal scenarios. We assessed the consequences of these issues on the accuracy of climate-driven SDM predictions for the dispersal-limited Darwin's frog Rhinoderma darwinii in South America. We calibrated models using historical data (1950-1975) and projected them across 40 yr to predict distribution under current climatic conditions, assessing predictive accuracy through the area under the ROC curve (AUC) and the True Skill Statistic (TSS), contrasting binary model predictions against a temporally independent validation data set (i.e., current presences/absences). To assess the effects of incorporating dispersal processes, we compared the predictive accuracy of dispersal-constrained models with that of SDMs without dispersal limits; and to assess the effects of model extrapolation, we compared predictive accuracy between extrapolated and non-extrapolated areas. The incorporation of dispersal processes enhanced predictive accuracy, mainly due to a decrease in the false presence rate of model predictions, which is consistent with discrimination of suitable but inaccessible habitat. This also had consequences for range size changes over time, which is the most widely used proxy for extinction risk from climate change. The area of current climatic conditions that was absent in the baseline conditions (i.e., extrapolated areas) represents 39% of the study area, leading to a significant decrease in the predictive accuracy of model predictions for those areas. Our results highlight that (1) incorporating dispersal processes can improve the predictive accuracy of temporal transfers of SDMs and reduce uncertainties in extinction risk assessments under global change; and (2) as geographical areas subject to novel climates are expected to arise, they must be reported, as they show less accurate predictions under future climate scenarios. Consequently, environmental extrapolation and dispersal processes should be explicitly incorporated to report and reduce uncertainties in temporal predictions of SDMs, respectively. In doing so, we expect to improve the reliability of the information we provide for conservation decision makers under future climate change scenarios. © 2017 by the Ecological Society of America.
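    For readers unfamiliar with the accuracy metrics used here, the sketch below (with hypothetical presence/absence data) shows how AUC is computed from continuous suitability scores and how the True Skill Statistic, TSS = sensitivity + specificity − 1, is computed from binarised predictions; the 0.5 threshold is an assumption for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical validation data: 1 = observed presence, 0 = observed absence,
# plus the model's continuous habitat-suitability scores.
y_true  = np.array([1, 1, 1, 0, 0, 0, 1, 0, 0, 1])
y_score = np.array([0.9, 0.7, 0.4, 0.3, 0.2, 0.6, 0.8, 0.1, 0.35, 0.55])

auc = roc_auc_score(y_true, y_score)

# Binarise at an assumed threshold of 0.5 and compute the True Skill Statistic.
y_pred = (y_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
tss = sensitivity + specificity - 1.0

print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.2f}, "
      f"specificity = {specificity:.2f}, TSS = {tss:.2f}")
```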

  13. PREDICTING EVAPORATION RATES AND TIMES FOR SPILLS OF CHEMICAL MIXTURES

    EPA Science Inventory


    Spreadsheet and short-cut methods have been developed for predicting evaporation rates and evaporation times for spills (and constrained baths) of chemical mixtures. Steady-state and time-varying predictions of evaporation rates can be made for six-component mixtures, includ...

  14. Constant-roll (quasi-)linear inflation

    NASA Astrophysics Data System (ADS)

    Karam, A.; Marzola, L.; Pappas, T.; Racioppi, A.; Tamvakis, K.

    2018-05-01

    In constant-roll inflation, the scalar field that drives the accelerated expansion of the Universe is rolling down its potential at a constant rate. Within this framework, we highlight the relations between the Hubble slow-roll parameters and the potential ones, studying in detail the case of a single-field Coleman-Weinberg model characterised by a non-minimal coupling of the inflaton to gravity. With respect to the exact constant-roll predictions, we find that assuming an approximate slow-roll behaviour yields a difference of Δr = 0.001 in the tensor-to-scalar ratio prediction. Such a discrepancy is in principle testable by future satellite missions. As for the scalar spectral index n_s, we find that the existing 2σ bound constrains the value of the non-minimal coupling to ξ_φ ≈ 0.29–0.31 in the model under consideration.

  15. Anomalous transport theory for the reversed field pinch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, P.W.; Hegna, C.C.; Sovinec, C.R.

    1996-09-01

    Physically motivated transport models with predictive capabilities and significance beyond the reversed field pinch (RFP) are presented. It is shown that the ambipolar constrained electron heat loss observed in MST can be quantitatively modeled by taking account of the clumping in parallel streaming electrons and the resultant self-consistent interaction with collective modes; that the discrete dynamo process is a relaxation oscillation whose dependence on the tearing instability and profile relaxation physics leads to amplitude and period scaling predictions consistent with experiment; that the Lundquist number scaling in relaxed plasmas driven by magnetic turbulence has a weak S^(−1/4) scaling; and that radial E×B shear flow can lead to large reductions in the edge particle flux with little change in the heat flux, as observed in the RFP and tokamak. 24 refs.

  16. Theoretical Modeling of Interstellar Chemistry

    NASA Technical Reports Server (NTRS)

    Charnley, Steven

    2009-01-01

    The chemistry of complex interstellar organic molecules will be described. Gas phase processes that may build large carbon-chain species in cold molecular clouds will be summarized. Catalytic reactions on grain surfaces can lead to a large variety of organic species, and models of molecule formation by atom additions to multiply-bonded molecules will be presented. The subsequent desorption of these mixed molecular ices can initiate a distinctive organic chemistry in hot molecular cores. The general ion-molecule pathways leading to even larger organics will be outlined. The predictions of this theory will be compared with observations to show how possible organic formation pathways in the interstellar medium may be constrained. In particular, the success of the theory in explaining trends in the known interstellar organics, in predicting recently-detected interstellar molecules, and, just as importantly, non-detections, will be discussed.

  17. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    PubMed

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is distributed among the local controllers. The communication load between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. Each local controller adopts the quasi-infinite horizon MPC algorithm, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
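    The sketch below illustrates the general flavour of such a scheme on a toy problem: two dynamically decoupled scalar subsystems whose only interaction is a coupling term in the overall cost, each solving a local finite-horizon optimization while holding the neighbour's last communicated state trajectory fixed. It deliberately omits the paper's quasi-infinite horizon terminal ingredients and compatibility constraint, and all dynamics, weights, and horizons are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Toy distributed MPC sketch for two dynamically decoupled scalar subsystems
#   x_i(t+1) = a_i * x_i(t) + b_i * u_i(t),   |u_i| <= U_MAX,
# whose only interaction is a coupling term (x_1 - x_2)^2 in the overall cost.
A = [0.9, 1.1]; B = [1.0, 0.5]; U_MAX = 1.0; N = 8     # horizon length

def rollout(a, b, x0, u_seq):
    xs = [x0]
    for u in u_seq:
        xs.append(a * xs[-1] + b * u)
    return np.array(xs[1:])                  # predicted states x(1..N)

def local_cost(u_seq, i, x0, neighbour_traj):
    xs = rollout(A[i], B[i], x0, u_seq)
    return (np.sum(xs**2) + 0.1 * np.sum(u_seq**2)
            + 0.5 * np.sum((xs - neighbour_traj)**2))  # coupling term

def local_mpc(i, x0, neighbour_traj, u_guess):
    res = minimize(local_cost, u_guess, args=(i, x0, neighbour_traj),
                   bounds=[(-U_MAX, U_MAX)] * N, method="L-BFGS-B")
    return res.x

x = np.array([2.0, -1.5])                    # current states
plans = [np.zeros(N), np.zeros(N)]           # warm starts for each controller

for t in range(10):                          # closed-loop simulation
    trajs = [rollout(A[i], B[i], x[i], plans[i]) for i in range(2)]
    for _ in range(2):                       # a couple of communication rounds
        for i in range(2):
            plans[i] = local_mpc(i, x[i], trajs[1 - i], plans[i])
            trajs[i] = rollout(A[i], B[i], x[i], plans[i])
    u_applied = [plans[i][0] for i in range(2)]
    x = np.array([A[i] * x[i] + B[i] * u_applied[i] for i in range(2)])
    plans = [np.roll(p, -1) for p in plans]  # shift plans as next warm start
    print(f"t={t}: x = {x.round(3)}, u = {np.round(u_applied, 3)}")
```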

  18. Extracting electron transfer coupling elements from constrained density functional theory

    NASA Astrophysics Data System (ADS)

    Wu, Qin; Van Voorhis, Troy

    2006-10-01

    Constrained density functional theory (DFT) is a useful tool for studying electron transfer (ET) reactions. It can straightforwardly construct the charge-localized diabatic states and give a direct measure of the inner-sphere reorganization energy. In this work, a method is presented for calculating the electronic coupling matrix element (H_ab) based on constrained DFT. This method completely avoids the use of ground-state DFT energies because they are known to irrationally predict fractional electron transfer in many cases. Instead it makes use of the constrained DFT energies and the Kohn-Sham wave functions for the diabatic states in a careful way. Test calculations on the Zn2+ and the benzene-Cl atom systems show that the new prescription yields reasonable agreement with the standard generalized Mulliken-Hush method. We then proceed to produce the diabatic and adiabatic potential energy curves along the reaction pathway for intervalence ET in the tetrathiafulvalene-diquinone (Q-TTF-Q) anion. While the unconstrained DFT curve has no reaction barrier and gives H_ab ≈ 17 kcal/mol, which qualitatively disagrees with experimental results, the H_ab calculated from constrained DFT is about 3 kcal/mol and the generated ground state has a barrier height of 1.70 kcal/mol, successfully predicting (Q-TTF-Q)⁻ to be a class II mixed-valence compound.
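    The generalized Mulliken-Hush benchmark mentioned above estimates the coupling from adiabatic quantities only. A minimal sketch of the standard two-state GMH expression is given below; the input numbers are hypothetical and merely show the order-of-magnitude arithmetic.

```python
import math

def gmh_coupling(dE12, mu12, dmu12):
    """Two-state generalized Mulliken-Hush estimate of the coupling:
        H_ab = mu12 * dE12 / sqrt(dmu12**2 + 4 * mu12**2)
    dE12  : adiabatic vertical energy gap
    mu12  : adiabatic transition dipole moment (along the ET direction)
    dmu12 : difference of the adiabatic state dipole moments
    All quantities must be in consistent (e.g. atomic) units."""
    return mu12 * dE12 / math.sqrt(dmu12**2 + 4.0 * mu12**2)

# Hypothetical numbers in atomic units, purely for illustration.
h_ab = gmh_coupling(dE12=0.05, mu12=0.4, dmu12=6.0)
print(f"H_ab = {h_ab:.4f} hartree = {h_ab * 627.5:.2f} kcal/mol")
```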

  19. Synergism and Antagonism of Proximate Mechanisms Enable and Constrain the Response to Simultaneous Selection on Body Size and Development Time: An Empirical Test Using Experimental Evolution.

    PubMed

    Davidowitz, Goggy; Roff, Derek; Nijhout, H Frederik

    2016-11-01

    Natural selection acts on multiple traits simultaneously. How mechanisms underlying such traits enable or constrain their response to simultaneous selection is poorly understood. We show how antagonism and synergism among three traits at the developmental level enable or constrain evolutionary change in response to simultaneous selection on two focal traits at the phenotypic level. After 10 generations of 25% simultaneous directional selection on all four combinations of body size and development time in Manduca sexta (Sphingidae), the changes in the three developmental traits predict 93% of the response of development time and 100% of the response of body size. When the two focal traits were under synergistic selection, the response to simultaneous selection was enabled by juvenile hormone and ecdysteroids and constrained by growth rate. When the two focal traits were under antagonistic selection, the response to selection was due primarily to change in growth rate and constrained by the two hormonal traits. The approach used here reduces the complexity of the developmental and endocrine mechanisms to three proxy traits. This generates explicit predictions for the evolutionary response to selection that are based on biologically informed mechanisms. This approach has broad applicability to a diverse range of taxa, including algae, plants, amphibians, mammals, and insects.

  20. Monte Carlo-based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2014-04-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures - for example, by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow for a more detailed analysis of the dynamic behaviour of the soil-plant interface. We coupled two such highly process-oriented, independent models and calibrated both models simultaneously. The catchment modelling framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem model of the soil hydraulic properties. CMF was coupled with the plant growth modelling framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo-based generalized likelihood uncertainty estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniform distribution. The model was applied to three sites with different management in Müncheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storages, stems and leaves. The shape parameter of the retention curve n was highly constrained, whereas other parameters of the retention curve showed a large equifinality. We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the plant growth model were the radiation-use efficiency and the base temperature. Cross validation helped to identify deficits in the model structure, pointing out the need for including agricultural management options in the coupled model.
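    A minimal GLUE-style sketch is given below to make the workflow concrete: parameters are sampled from uniform priors, each run is scored with a likelihood measure (here Nash-Sutcliffe efficiency), runs above a behavioural threshold are retained, and the retained runs define parameter ranges and prediction bounds. The toy exponential model, priors, and threshold are stand-ins, not the CMF-PMF setup of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(theta, t):
    a, b = theta
    return a * np.exp(-b * t)                            # stand-in simulator

t_obs = np.linspace(0, 10, 21)
y_obs = toy_model((2.0, 0.3), t_obs) + rng.normal(0, 0.05, t_obs.size)

n_runs = 20000
samples = np.column_stack([rng.uniform(0.5, 4.0, n_runs),    # prior for a
                           rng.uniform(0.05, 1.0, n_runs)])  # prior for b

sims = np.array([toy_model(th, t_obs) for th in samples])
nse = 1 - np.sum((sims - y_obs)**2, axis=1) / np.sum((y_obs - y_obs.mean())**2)

behavioural = nse > 0.7                                  # acceptance threshold
post = samples[behavioural]
lo, hi = np.percentile(sims[behavioural], [5, 95], axis=0)

print(f"{behavioural.sum()} behavioural runs out of {n_runs}")
print("parameter ranges:", post.min(axis=0).round(2), post.max(axis=0).round(2))
print("5-95% prediction band at t=5:", lo[10].round(3), hi[10].round(3))
```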

  1. A Test of the Sophisticated Guessing Theory of Word Perception

    ERIC Educational Resources Information Center

    Johnston, James C.

    1978-01-01

    Experiments tested the predictions that words are perceived more accurately in strongly constraining word contexts than in weakly constraining word contexts, and that a strong perceptual advantage would be present for letters in words vs. letters alone or in unrelated-letter strings. Several alternative theories of word perception are discussed.…

  2. Reference governors for controlled belt restraint systems

    NASA Astrophysics Data System (ADS)

    van der Laan, E. P.; Heemels, W. P. M. H.; Luijten, H.; Veldpaus, F. E.; Steinbuch, M.

    2010-07-01

    Today's restraint systems typically include a number of airbags, and a three-point seat belt with load limiter and pretensioner. For the class of real-time controlled restraint systems, the restraint actuator settings are continuously manipulated during the crash. This paper presents a novel control strategy for these systems. The control strategy developed here is based on a combination of model predictive control and reference management, in which a non-linear device - a reference governor (RG) - is added to a primal closed-loop controlled system. This RG determines an optimal setpoint in terms of injury reduction and constraint satisfaction by solving a constrained optimisation problem. Prediction of the vehicle motion, required to predict future constraint violation, is included in the design and is based on past crash data, using linear regression techniques. Simulation results with MADYMO models show that, with ideal sensors and actuators, a significant reduction (45%) of the peak chest acceleration can be achieved, without prior knowledge of the crash. Furthermore, it is shown that the algorithms are sufficiently fast to be implemented online.

  3. Parse, simulation, and prediction of NOx emission across the Midwestern United States

    NASA Astrophysics Data System (ADS)

    Fang, H.; Michalski, G. M.; Spak, S.

    2017-12-01

    Accurately constraining N emissions in space and time has been a challenge for atmospheric scientists. It has been suggested that 15N isotopes may be a way of tracking N emission sources across various spatial and temporal scales. However, the complexity of multiple N sources that can quickly change in intensity has made this a difficult problem. We have used the SMOKE emission model to parse NOx emissions across the Midwestern United States for a one-year simulation. An isotope mass balance method was used to assign δ15N values to road, non-road, point, and area sources. The SMOKE emissions and isotope mass balance were then combined to predict the δ15N of NOx emissions (Figure 1). This δ15N of NOx emissions model was then incorporated into CMAQ to assess how transport and chemistry would impact the δ15N value of NOx due to mixing and removal processes. The predicted δ15N value of NOx was compared to recent measurements of NOx and atmospheric nitrate.
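    The isotope mass-balance step amounts to an emission-weighted mean of source signatures. The sketch below makes that explicit with hypothetical emission rates and δ15N values for the four source categories; none of the numbers come from the study.

```python
# Minimal isotope mass-balance sketch for assigning a delta-15N value to total
# NOx emissions: the mixture's delta value is the emission-weighted mean of
# the source categories. Emissions and source signatures are hypothetical.
sources = {            # (NOx emission [arbitrary units], delta-15N [permil])
    "on-road":  (40.0,  -4.0),
    "non-road": (15.0, -15.0),
    "point":    (30.0,  10.0),
    "area":     (15.0,  -9.0),
}

total_emission = sum(e for e, _ in sources.values())
delta_mix = sum(e * d for e, d in sources.values()) / total_emission
print(f"delta-15N of total NOx emissions: {delta_mix:.1f} permil")
```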

  4. Short-Term Retrospective Land Data Assimilation Schemes

    NASA Technical Reports Server (NTRS)

    Houser, P. R.; Cosgrove, B. A.; Entin, J. K.; Lettenmaier, D.; ODonnell, G.; Mitchell, K.; Marshall, C.; Lohmann, D.; Schaake, J. C.; Duan, Q.; hide

    2000-01-01

    Subsurface moisture, temperature, and snow/ice stores exhibit persistence on various time scales that has important implications for the extended prediction of climatic and hydrologic extremes. Hence, to improve their specification of the land surface, many numerical weather prediction (NWP) centers have incorporated complex land surface schemes in their forecast models. However, because land storages are integrated states, errors in NWP forcing accumulate in these stores, which leads to incorrect surface water and energy partitioning. This has motivated the development of Land Data Assimilation Schemes (LDAS) that can be used to constrain NWP surface storages. An LDAS is an uncoupled land surface scheme that is forced primarily by observations, and is therefore less affected by NWP forcing biases. The implementation of an LDAS also provides the opportunity to correct the model's trajectory using remotely-sensed observations of soil temperature, soil moisture, and snow using data assimilation methods. The inclusion of data assimilation in LDAS will greatly increase its predictive capacity, as well as provide high-quality land surface assimilated data.

  5. Inverse Modeling of Tropospheric Methane Constrained by 13C Isotope in Methane

    NASA Astrophysics Data System (ADS)

    Mikaloff Fletcher, S. E.; Tans, P. P.; Bruhwiler, L. M.

    2001-12-01

    Understanding the budget of methane is crucial to predicting climate change and managing Earth's carbon reservoirs. Methane is responsible for approximately 15% of the anthropogenic greenhouse forcing and has a large impact on the oxidative capacity of Earth's atmosphere due to its reaction with hydroxyl radical. At present, many of the sources and sinks of methane are poorly understood, due in part to the large spatial and temporal variability of the methane flux. Model calculations of methane mixing ratios using most process-based source estimates typically over-predict the inter-hemispheric gradient of atmospheric methane. Inverse models, which estimate trace gas budgets by using observations of atmospheric mixing ratios and transport models to estimate sources and sinks, have been used to incorporate features of the atmospheric observations into methane budgets. While inverse models of methane generally tend to find a decrease in northern hemisphere sources and an increase in southern hemisphere sources relative to process-based estimates, no inverse study has definitively associated the inter-hemispheric gradient difference with a specific source process or group of processes. In this presentation, observations of isotopic ratios of 13C in methane and isotopic signatures of methane source processes are used in conjunction with an inverse model of methane to further constrain the source estimates of methane. In order to investigate the advantages of incorporating 13C, the TM3 three-dimensional transport model was used. The methane and carbon dioxide measurements used are from a cooperative international effort, the Cooperative Air Sampling Network, led by the Climate Monitoring Diagnostics Laboratory (CMDL) at the National Oceanic and Atmospheric Administration (NOAA). Experiments using model calculations based on process-based source estimates show that the inter-hemispheric gradient of δ13CH4 is not reproduced by these source estimates, showing that the addition of observations of δ13CH4 should provide unique insight into the methane problem.

  6. The anatomy of language: contributions from functional neuroimaging

    PubMed Central

    PRICE, CATHY J.

    2000-01-01

    This article illustrates how functional neuroimaging can be used to test the validity of neurological and cognitive models of language. Three models of language are described: the 19th Century neurological model which describes both the anatomy and cognitive components of auditory and visual word processing, and 2 20th Century cognitive models that are not constrained by anatomy but emphasise 2 different routes to reading that are not present in the neurological model. A series of functional imaging studies are then presented which show that, as predicted by the 19th Century neurologists, auditory and visual word repetition engage the left posterior superior temporal and posterior inferior frontal cortices. More specifically, the roles Wernicke and Broca assigned to these regions lie respectively in the posterior superior temporal sulcus and the anterior insula. In addition, a region in the left posterior inferior temporal cortex is activated for word retrieval, thereby providing a second route to reading, as predicted by the 20th Century cognitive models. This region and its function may have been missed by the 19th Century neurologists because selective damage is rare. The angular gyrus, previously linked to the visual word form system, is shown to be part of a distributed semantic system that can be accessed by objects and faces as well as speech. Other components of the semantic system include several regions in the inferior and middle temporal lobes. From these functional imaging results, a new anatomically constrained model of word processing is proposed which reconciles the anatomical ambitions of the 19th Century neurologists and the cognitive finesse of the 20th Century cognitive models. The review focuses on single word processing and does not attempt to discuss how words are combined to generate sentences or how several languages are learned and interchanged. Progress in unravelling these and other related issues will depend on the integration of behavioural, computational and neurophysiological approaches, including neuroimaging. PMID:11117622

  7. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and creditable estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions; especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
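    To make the calibration-and-inversion idea concrete, the sketch below uses isotonic regression as a simple stand-in for the paper's penalized constrained spline: a monotone standard curve of intensity versus log concentration is fitted, inverted to predict the concentration of a new sample, and Monte Carlo perturbation of the measured intensity yields an asymmetric prediction interval. The data, noise level, and use of scikit-learn's IsotonicRegression are all assumptions for illustration.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(1)

# Simulated standards: assay intensity as a monotone function of log concentration.
log_conc = np.linspace(-2, 3, 12)                         # log10 ng/mL
intensity = 1000 / (1 + np.exp(-(log_conc - 0.5))) + rng.normal(0, 25, 12)

iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
fitted = iso.fit_transform(log_conc, intensity)           # monotone standard curve

def invert(intens):
    """Map intensity back to log concentration via the monotone curve."""
    return np.interp(intens, fitted, log_conc)

sample_intensity, sigma = 620.0, 25.0                     # new sample + noise sd
mc = invert(sample_intensity + rng.normal(0, sigma, 5000))
lo, med, hi = np.percentile(mc, [2.5, 50, 97.5])
print(f"predicted log10 concentration: {med:.2f} (95% interval {lo:.2f}..{hi:.2f})")
```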

  8. Wavelength Dependent Luminosity Functions for Super Star Clusters

    NASA Astrophysics Data System (ADS)

    Garmany, Catharine

    1997-07-01

    Starburst galaxies, considered to exhibit enhanced star formation on a galaxy-wide scale, have now been found with HST to contain very intense knots of star formation, referred to as "super star clusters", or SSCs. A steepening of the luminosity function with increasing wavelength for young burst populations, such as SSCs, has recently been predicted by Hogg & Phinney (1997). This prediction, not previously addressed in the literature, is straightforward to test with multi-wavelength photometry. Using the colors of the SSCs in a galaxy in combination with the difference in slopes of the luminosity functions derived from different wavelength bands and applying population synthesis models, we can also constrain the high mass stellar initial mass function (IMF). Recent work has suggested that the slope of the IMF is roughly constant in a variety of local environments, from galactic OB associations to the closest analog of a super star cluster, R136 in the LMC. This investigation will allow us to compare the IMFs in the extreme environments of SSCs in starburst galaxies to IMFs found locally in the Galaxy, LMC, and SMC. Archival imaging data in both the UV and optical bands is available for about 10 young starburst systems. These data will allow us to test the predictions of Hogg & Phinney, as well as constrain the IMF for environments not found in the nearby universe.

  9. Cultural and Environmental Predictors of Pre-European Deforestation on Pacific Islands

    PubMed Central

    Coomber, Ties; Passmore, Sam; Greenhill, Simon J.; Kushnick, Geoff

    2016-01-01

    The varied islands of the Pacific provide an ideal natural experiment for studying the factors shaping human impact on the environment. Previous research into pre-European deforestation across the Pacific indicated a major effect of environment but did not account for cultural variation or control for dependencies in the data due to shared cultural ancestry and geographic proximity. The relative importance of environment and culture on Pacific deforestation and forest replacement and the extent to which environmental impact is constrained by cultural ancestry therefore remain unexplored. Here we use comparative phylogenetic methods to model the effect of nine ecological and two cultural variables on pre-European Pacific forest outcomes at 80 locations across 67 islands. We show that some but not all ecological features remain important predictors of forest outcomes after accounting for cultural covariates and non-independence in the data. Controlling for ecology, cultural variation in agricultural intensification predicts deforestation and forest replacement, and there is some evidence that land tenure norms predict forest replacement. These findings indicate that, alongside ecology, cultural factors also predict pre-European Pacific forest outcomes. Although forest outcomes covary with cultural ancestry, this effect disappears after controlling for geographic proximity and ecology. This suggests that forest outcomes were not tightly constrained by colonists’ cultural ancestry, but instead reflect a combination of ecological constraints and the short-term responses of each culture in the face of those constraints. PMID:27232713

  10. Constraining hot plasma in a non-flaring solar active region with FOXSI hard X-ray observations

    NASA Astrophysics Data System (ADS)

    Ishikawa, Shin-nosuke; Glesener, Lindsay; Christe, Steven; Ishibashi, Kazunori; Brooks, David H.; Williams, David R.; Shimojo, Masumi; Sako, Nobuharu; Krucker, Säm

    2014-12-01

    We present new constraints on the high-temperature emission measure of a non-flaring solar active region using observations from the recently flown Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket payload. FOXSI has performed the first focused hard X-ray (HXR) observation of the Sun in its first successful flight on 2012 November 2. Focusing optics, combined with small strip detectors, enable high-sensitivity observations with respect to previous indirect imagers. This capability, along with the sensitivity of the HXR regime to high-temperature emission, offers the potential to better characterize high-temperature plasma in the corona as predicted by nanoflare heating models. We present a joint analysis of the differential emission measure (DEM) of active region 11602 using coordinated observations by FOXSI, Hinode/XRT, and Hinode/EIS. The Hinode-derived DEM predicts significant emission measure between 1 MK and 3 MK, with a peak in the DEM predicted at 2.0-2.5 MK. The combined XRT and EIS DEM also shows emission from a smaller population of plasma above 8 MK. This is contradicted by FOXSI observations that significantly constrain emission above 8 MK. This suggests that the Hinode DEM analysis has larger uncertainties at higher temperatures and that > 8 MK plasma above an emission measure of 3 × 10⁴⁴ cm⁻³ is excluded in this active region.

  11. Cultural and Environmental Predictors of Pre-European Deforestation on Pacific Islands.

    PubMed

    Atkinson, Quentin D; Coomber, Ties; Passmore, Sam; Greenhill, Simon J; Kushnick, Geoff

    2016-01-01

    The varied islands of the Pacific provide an ideal natural experiment for studying the factors shaping human impact on the environment. Previous research into pre-European deforestation across the Pacific indicated a major effect of environment but did not account for cultural variation or control for dependencies in the data due to shared cultural ancestry and geographic proximity. The relative importance of environment and culture on Pacific deforestation and forest replacement and the extent to which environmental impact is constrained by cultural ancestry therefore remain unexplored. Here we use comparative phylogenetic methods to model the effect of nine ecological and two cultural variables on pre-European Pacific forest outcomes at 80 locations across 67 islands. We show that some but not all ecological features remain important predictors of forest outcomes after accounting for cultural covariates and non-independence in the data. Controlling for ecology, cultural variation in agricultural intensification predicts deforestation and forest replacement, and there is some evidence that land tenure norms predict forest replacement. These findings indicate that, alongside ecology, cultural factors also predict pre-European Pacific forest outcomes. Although forest outcomes covary with cultural ancestry, this effect disappears after controlling for geographic proximity and ecology. This suggests that forest outcomes were not tightly constrained by colonists' cultural ancestry, but instead reflect a combination of ecological constraints and the short-term responses of each culture in the face of those constraints.

  12. USING ForeCAT DEFLECTIONS AND ROTATIONS TO CONSTRAIN THE EARLY EVOLUTION OF CMEs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kay, C.; Opher, M.; Colaninno, R. C.

    2016-08-10

    To accurately predict the space weather effects of coronal mass ejection (CME) impacts at Earth, one must know if and when a CME will impact Earth and the CME parameters upon impact. In 2015 Kay et al. presented Forecasting a CME's Altered Trajectory (ForeCAT), a model for CME deflections based on the magnetic forces from the background solar magnetic field. Knowing the deflection and rotation of a CME enables prediction of Earth impacts and the orientation of the CME upon impact. We first reconstruct the positions of the 2010 April 8 and the 2012 July 12 CMEs from the observations. The first of these CMEs exhibits significant deflection and rotation (34° deflection and 58° rotation), while the second shows almost no deflection or rotation (<3° each). Using ForeCAT, we explore a range of initial parameters, such as the CME's location and size, and find parameters that can successfully reproduce the behavior for each CME. Additionally, since the deflection depends strongly on the behavior of a CME in the low corona, we are able to constrain the expansion and propagation of these CMEs in the low corona.

  13. Predicting dynamic metabolic demands in the photosynthetic eukaryote Chlorella vulgaris

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuniga, Cristal; Levering, Jennifer; Antoniewicz, Maciek R.

    Phototrophic organisms exhibit a highly dynamic proteome, adapting their biomass composition in response to diurnal light/dark cycles and nutrient availability. We used experimentally determined biomass compositions over the course of growth to determine and constrain the biomass objective function (BOF) in a genome-scale metabolic model of Chlorella vulgaris UTEX 395 over time. Changes in the BOF, which encompasses all metabolites necessary to produce biomass, influence the state of the metabolic network thus directly affecting predictions. Simulations using dynamic BOFs predicted distinct proteome demands during heterotrophic or photoautotrophic growth. Model-driven analysis of extracellular nitrogen concentrations and predicted nitrogen uptake rates revealed an intracellular nitrogen pool, which contains 38% of the total nitrogen provided in the medium for photoautotrophic and 13% for heterotrophic growth. Agreement between flux and gene expression trends was determined by statistical comparison. Accordance between predicted flux trends and gene expression trends was found for 65% of multi-subunit enzymes and 75% of allosteric reactions. Reactions with the highest agreement between simulations and experimental data were associated with energy metabolism, terpenoid biosynthesis, fatty acids, nucleotides, and amino acids metabolism. Moreover, predicted flux distributions at each time point were compared with gene expression data to gain new insights into intracellular compartmentalization, specifically for transporters. A total of 103 genes related to internal transport reactions were identified and added to the updated model of C. vulgaris, iCZ946, thus increasing our knowledgebase by 10% for this model green alga.

  14. Predicting dynamic metabolic demands in the photosynthetic eukaryote Chlorella vulgaris

    DOE PAGES

    Zuniga, Cristal; Levering, Jennifer; Antoniewicz, Maciek R.; ...

    2017-09-26

    Phototrophic organisms exhibit a highly dynamic proteome, adapting their biomass composition in response to diurnal light/dark cycles and nutrient availability. We used experimentally determined biomass compositions over the course of growth to determine and constrain the biomass objective function (BOF) in a genome-scale metabolic model of Chlorella vulgaris UTEX 395 over time. Changes in the BOF, which encompasses all metabolites necessary to produce biomass, influence the state of the metabolic network thus directly affecting predictions. Simulations using dynamic BOFs predicted distinct proteome demands during heterotrophic or photoautotrophic growth. Model-driven analysis of extracellular nitrogen concentrations and predicted nitrogen uptake rates revealed an intracellular nitrogen pool, which contains 38% of the total nitrogen provided in the medium for photoautotrophic and 13% for heterotrophic growth. Agreement between flux and gene expression trends was determined by statistical comparison. Accordance between predicted flux trends and gene expression trends was found for 65% of multi-subunit enzymes and 75% of allosteric reactions. Reactions with the highest agreement between simulations and experimental data were associated with energy metabolism, terpenoid biosynthesis, fatty acids, nucleotides, and amino acids metabolism. Moreover, predicted flux distributions at each time point were compared with gene expression data to gain new insights into intracellular compartmentalization, specifically for transporters. A total of 103 genes related to internal transport reactions were identified and added to the updated model of C. vulgaris, iCZ946, thus increasing our knowledgebase by 10% for this model green alga.

  15. Stimulus Dependence of Correlated Variability across Cortical Areas

    PubMed Central

    Cohen, Marlene R.

    2016-01-01

    The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons across visual areas respond to any visual stimulus or contribute to any perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model where MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important. We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. PMID:27413163
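    The normalization model invoked above can be written down in a few lines. The sketch below is a generic divisive-normalization toy, not the fitted model from the study: an MT-like unit's drive is a weighted sum of V1-like responses, divided by the pooled V1 activity plus a semisaturation constant, so adding a second, non-preferred stimulus suppresses the response. All tuning curves, weights, and constants are hypothetical.

```python
import numpy as np

n_v1 = 50
pref = np.linspace(0.0, 180.0, n_v1, endpoint=False)      # V1 preferred directions
weights = np.exp(-((pref - 90.0) / 30.0) ** 2)             # MT unit tuned to 90 deg

def v1_population(direction, gain=30.0):
    """Hypothetical V1 population response to a single motion direction."""
    return gain * np.exp(-((pref - direction) / 25.0) ** 2)

def mt_response(v1_rates, sigma=10.0, n=2.0):
    """Divisive normalization: weighted drive over pooled population activity."""
    drive = float(np.dot(weights, v1_rates))
    return drive ** n / (sigma ** n + np.sum(v1_rates ** n))

single = v1_population(90.0)                               # preferred stimulus alone
paired = v1_population(90.0) + v1_population(0.0)          # add a non-preferred stimulus
print(f"single stimulus: {mt_response(single):.2f}")
print(f"two stimuli:     {mt_response(paired):.2f}")       # cross-stimulus suppression
```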

  16. Automatic Hazard Detection for Landers

    NASA Technical Reports Server (NTRS)

    Huertas, Andres; Cheng, Yang; Matthies, Larry H.

    2008-01-01

    Unmanned planetary landers to date have landed 'blind'; that is, without the benefit of onboard landing hazard detection and avoidance systems. This constrains landing site selection to very benign terrain, which in turn constrains the scientific agenda of missions. The state of the art Entry, Descent, and Landing (EDL) technology can land a spacecraft on Mars somewhere within a 20-100 km landing ellipse. Landing ellipses are very likely to contain hazards such as craters, discontinuities, steep slopes, and large rocks, which can cause mission-fatal damage. We briefly review sensor options for landing hazard detection and identify a perception approach based on stereo vision and shadow analysis that addresses the broadest set of missions. Our approach fuses stereo vision and monocular shadow-based rock detection to maximize spacecraft safety. We summarize performance models for slope estimation and rock detection within this approach and validate those models experimentally. Instantiating our model of rock detection reliability for Mars predicts that this approach can reduce the probability of failed landing by at least a factor of 4 in any given terrain. We also describe a rock detector/mapper applied to large, high-resolution images from the Mars Reconnaissance Orbiter (MRO) for landing site characterization and selection for Mars missions.

  17. A multidimensional stability model for predicting shallow landslide size and shape across landscapes.

    PubMed

    Milledge, David G; Bellugi, Dino; McKean, Jim A; Densmore, Alexander L; Dietrich, William E

    2014-11-01

    The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but simple enough to be applied over entire watersheds. It accounts for lateral resistance by representing the forces acting on each margin of potential landslides using earth pressure theory and by representing root reinforcement as an exponential function of soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are well constrained by field data. The model predicts failure for the observed scar geometry and finds that larger or smaller conformal shapes are more stable. Numerical experiments demonstrate that friction on the boundaries of a potential landslide increases considerably the magnitude of lateral reinforcement, relative to that due to root cohesion alone. We find that there is a critical depth in both cohesive and cohesionless soils, resulting in a minimum size for failure, which is consistent with observed size-frequency distributions. Furthermore, the differential resistance on the boundaries of a potential landslide is responsible for a critical landslide shape which is longer than it is wide, consistent with observed aspect ratios. Finally, our results show that minimum size increases as approximately the square of failure surface depth, consistent with observed landslide depth-area data.
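    The exponential root-reinforcement term and the resulting critical depth can be illustrated with a much simpler one-dimensional (infinite-slope) factor-of-safety calculation than the multidimensional model derived in the paper. In the sketch below, all soil, slope, and root parameters are hypothetical; the point is only that root cohesion decaying as exp(−z/z*) stabilises shallow failure surfaces while friction dominates at depth.

```python
import numpy as np

def factor_of_safety(z, slope_deg=35.0, phi_deg=33.0, soil_cohesion=500.0,
                     root_cohesion_surface=5000.0, root_decay_depth=0.4,
                     unit_weight=16000.0):
    """Infinite-slope factor of safety with exponentially decaying root cohesion.

    z: failure-surface depth [m]; cohesions in Pa; unit weight in N/m^3.
    """
    beta, phi = np.radians(slope_deg), np.radians(phi_deg)
    root_cohesion = root_cohesion_surface * np.exp(-z / root_decay_depth)
    resisting = (soil_cohesion + root_cohesion
                 + unit_weight * z * np.cos(beta)**2 * np.tan(phi))
    driving = unit_weight * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

for z in (0.2, 0.5, 1.0, 2.0, 4.0):
    print(f"depth {z:.1f} m: FS = {factor_of_safety(z):.2f}")
# FS stays above 1 at shallow depths (roots dominate) and drops below 1 only
# beyond a critical depth, echoing the minimum failure depth discussed above.
```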

  18. A multidimensional stability model for predicting shallow landslide size and shape across landscapes

    PubMed Central

    Milledge, David G; Bellugi, Dino; McKean, Jim A; Densmore, Alexander L; Dietrich, William E

    2014-01-01

    The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but simple enough to be applied over entire watersheds. It accounts for lateral resistance by representing the forces acting on each margin of potential landslides using earth pressure theory and by representing root reinforcement as an exponential function of soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are well constrained by field data. The model predicts failure for the observed scar geometry and finds that larger or smaller conformal shapes are more stable. Numerical experiments demonstrate that friction on the boundaries of a potential landslide increases considerably the magnitude of lateral reinforcement, relative to that due to root cohesion alone. We find that there is a critical depth in both cohesive and cohesionless soils, resulting in a minimum size for failure, which is consistent with observed size-frequency distributions. Furthermore, the differential resistance on the boundaries of a potential landslide is responsible for a critical landslide shape which is longer than it is wide, consistent with observed aspect ratios. Finally, our results show that minimum size increases as approximately the square of failure surface depth, consistent with observed landslide depth-area data. PMID:26213663

  19. Stabilizing l1-norm prediction models by supervised feature grouping.

    PubMed

    Kamkar, Iman; Gupta, Sunil Kumar; Phung, Dinh; Venkatesh, Svetha

    2016-02-01

    Emerging Electronic Medical Records (EMRs) have reformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since a lot of information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm based feature selection methods have shown promising results. But, in the presence of correlated features, these methods select features that change considerably with small changes in data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, so it can assist clinicians toward accurate decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
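    The instability that motivates the paper is easy to reproduce. The sketch below (synthetic data, plain scikit-learn Lasso, not the proposed supervised-grouping method) bootstraps a data set containing two nearly identical predictors and records how often each feature is selected; with strongly correlated features the selection frequencies can differ markedly across resamples.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.utils import resample

rng = np.random.default_rng(3)

# Synthetic data: features 0 and 1 are highly correlated copies of the true
# driver z; feature 2 is irrelevant noise.
n = 200
z = rng.normal(size=n)
X = np.column_stack([z + 0.05 * rng.normal(size=n),
                     z + 0.05 * rng.normal(size=n),
                     rng.normal(size=n)])
y = 2.0 * z + rng.normal(scale=0.5, size=n)

selection_counts = np.zeros(X.shape[1])
for seed in range(100):
    Xb, yb = resample(X, y, random_state=seed)           # bootstrap resample
    coefs = Lasso(alpha=0.1).fit(Xb, yb).coef_
    selection_counts += (np.abs(coefs) > 1e-8)

print("selection frequency per feature over 100 bootstraps:",
      selection_counts / 100)
```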

  20. Application of all relevant feature selection for failure analysis of parameter-induced simulation crashes in climate models

    NASA Astrophysics Data System (ADS)

    Paja, W.; Wrzesień, M.; Niemiec, R.; Rudnicki, W. R.

    2015-07-01

    Climate models are extremely complex pieces of software. They reflect the best knowledge of the physical components of the climate; nevertheless, they contain several parameters that are only weakly constrained by observations and can potentially lead to a crash of the simulation. Recently a study by Lucas et al. (2013) has shown that machine learning methods can be used for predicting which combinations of parameters can lead to a crash of the simulation, and hence which processes described by these parameters need refined analyses. In the current study we reanalyse the dataset used in this research using a different methodology. We confirm the main conclusion of the original study concerning the suitability of machine learning for the prediction of crashes. We show that only three of the eight parameters indicated in the original study as relevant for prediction of the crash are indeed strongly relevant, three others are relevant but redundant, and two are not relevant at all. We also show that the variance due to the split of data between training and validation sets has a large influence both on the accuracy of predictions and on the relative importance of variables; hence only a cross-validated approach can deliver a robust estimate of performance and of the relevance of variables.
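    The cross-validated workflow argued for above can be sketched in a few lines. The example below uses synthetic stand-in data and a random forest with permutation importance rather than the original study's dataset and all-relevant feature selection; the point is that both accuracy and variable relevance are reported with their spread across train/validation splits.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in data: a few informative features, several redundant or
# irrelevant ones, and a binary "crash / no crash" label.
X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           n_redundant=3, random_state=0)

accs, imps = [], []
for train, valid in StratifiedKFold(n_splits=5, shuffle=True,
                                    random_state=0).split(X, y):
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(X[train], y[train])
    accs.append(clf.score(X[valid], y[valid]))
    pi = permutation_importance(clf, X[valid], y[valid],
                                n_repeats=20, random_state=0)
    imps.append(pi.importances_mean)

imps = np.array(imps)
print(f"accuracy: {np.mean(accs):.2f} +/- {np.std(accs):.2f}")
for j in range(X.shape[1]):
    print(f"feature {j}: importance {imps[:, j].mean():.3f} "
          f"+/- {imps[:, j].std():.3f}")
```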

  1. Higher-order QCD predictions for dark matter production at the LHC in simplified models with s-channel mediators.

    PubMed

    Backović, Mihailo; Krämer, Michael; Maltoni, Fabio; Martini, Antony; Mawatari, Kentarou; Pellen, Mathieu

    Weakly interacting dark matter particles can be pair-produced at colliders and detected through signatures featuring missing energy in association with either QCD/EW radiation or heavy quarks. In order to constrain the mass and the couplings to standard model particles, accurate and precise predictions for production cross sections and distributions are of prime importance. In this work, we consider various simplified models with s-channel mediators. We implement such models in the FeynRules/MadGraph5_aMC@NLO framework, which allows one to include higher-order QCD corrections in realistic simulations and to study their effect systematically. As a first phenomenological application, we present predictions for dark matter production in association with jets and with a top-quark pair at the LHC, at next-to-leading order accuracy in QCD, including matching/merging to parton showers. Our study shows that higher-order QCD corrections to dark matter production via s-channel mediators have a significant impact not only on total production rates, but also on shapes of distributions. We also show that the inclusion of next-to-leading order effects results in a sizeable reduction of the theoretical uncertainties.

  2. A preliminary 1-D model investigation of tidal variations of temperature and chlorinity at the Grotto mound, Endeavour Segment, Juan de Fuca Ridge

    NASA Astrophysics Data System (ADS)

    Xu, G.; Larson, B. I.; Bemis, K. G.; Lilley, Marvin D.

    2017-01-01

    Tidal oscillations of venting temperature and chlorinity have been observed in the long-term time series data recorded by the Benthic and Resistivity Sensors (BARS) at the Grotto mound on the Juan de Fuca Ridge. In this study, we use a one-dimensional two-layer poroelastic model to conduct a preliminary investigation of three hypothetical scenarios in which seafloor tidal loading can modulate the venting temperature and chlorinity at Grotto through the mechanisms of subsurface tidal mixing and/or subsurface tidal pumping. For the first scenario, our results demonstrate that it is unlikely for subsurface tidal mixing to cause coupled tidal oscillations in venting temperature and chlorinity of the observed amplitudes. For the second scenario, the model results suggest that it is plausible that the tidal oscillations in venting temperature and chlorinity are decoupled with the former caused by subsurface tidal pumping and the latter caused by subsurface tidal mixing, although the mixing depth is not well constrained. For the third scenario, our results suggest that it is plausible for subsurface tidal pumping to cause coupled tidal oscillations in venting temperature and chlorinity. In this case, the observed tidal phase lag between venting temperature and chlorinity is close to the poroelastic model prediction if brine storage occurs throughout the upflow zone under the premise that layers 2A and 2B have similar crustal permeabilities. However, the predicted phase lag is poorly constrained if brine storage is limited to layer 2B as would be expected when its crustal permeability is much smaller than that of layer 2A.

  3. Sensitivity to Factors Underlying the Hiatus

    NASA Technical Reports Server (NTRS)

    Marvel, Kate; Schmidt, Gavin A.; Tsigaridis, Kostas; Cook, Benjamin I.

    2015-01-01

    Recent trends in global mean surface air temperature fall outside the 90% range predicted by models using the CMIP5 forcings and scenarios; this recent period of muted warming is dubbed the hiatus. The hiatus has attracted broad attention in both the popular press and the scientific literature, primarily because of its perceived implications for understanding long-term trends. Many hypotheses have been offered to explain the warming slowdown during the hiatus, and comprehensive studies of this period across multiple variables and spatial scales will likely improve our understanding of the physical mechanisms driving global temperature change and variability. We argue, however, that decadal temperature trends by themselves are unlikely to constrain future trajectories of global mean temperature and that the hiatus does not significantly revise our understanding of overall climate sensitivity. Instead, we demonstrate that, because of the poorly constrained nature of the hiatus, model-observation disagreements over this period may be resolvable via uncertainties in the observations, modeled internal variability, forcing estimates, or (more likely) some combination of all three factors. We define the hiatus interval as 1998-2012, endpoints judiciously chosen to minimize observed warming by including the large 1998 El Niño event and excluding 2014, an exceptionally warm year. Such choices are fundamentally subjective and cannot be considered random, so any probabilistic statements regarding the likelihood of this occurring need to be made carefully. Using this definition, the observed global temperature trend estimates from four datasets fall outside the 5-95% interval predicted by the CMIP5 models. Here we explore some of the plausible explanations for this discrepancy, and show that no unique explanation is likely to fully account for the hiatus.

  4. Probing nuclear symmetry energy at high densities using pion, kaon, eta and photon productions in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Xiao, Zhi-Gang; Yong, Gao-Chan; Chen, Lie-Wen; Li, Bao-An; Zhang, Ming; Xiao, Guo-Qing; Xu, Nu

    2014-02-01

    The high-density behavior of nuclear symmetry energy is among the most uncertain properties of dense neutron-rich matter. Its accurate determination has significant ramifications in understanding not only the reaction dynamics of heavy-ion reactions, especially those induced by radioactive beams, but also many interesting phenomena in astrophysics, such as the explosion mechanism of supernovae and the properties of neutron stars. The heavy-ion physics community has devoted much effort during the last few years to constrain the high-density symmetry energy using various probes. In particular, the π-/π+ ratio has been most extensively studied both theoretically and experimentally. All models have consistently predicted qualitatively that the π-/π+ ratio is a sensitive probe of the high-density symmetry energy especially with beam energies near the pion production threshold. However, the predicted values of the π-/π+ ratio are still quite model dependent mostly because of the complexity of modeling pion production and reabsorption dynamics in heavy-ion collisions, leading to currently still controversial conclusions regarding the high-density behavior of nuclear symmetry energy from comparing various model calculations with available experimental data. As more π-/π+ data become available and a deeper understanding about the pion dynamics in heavy-ion reactions is obtained, more penetrating probes, such as the K+/K0 ratio, the η meson and high-energy photons, are also being investigated or planned at several facilities. Here, we review some of our recent contributions to the community effort of constraining the high-density behavior of nuclear symmetry energy in heavy-ion collisions. In addition, the status of some worldwide experiments for studying the high-density symmetry energy, including the HIRFL-CSR external target experiment (CEE), is briefly introduced.

  5. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    NASA Astrophysics Data System (ADS)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterizations and solution predictions are. These issues are not addressed by "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based on (a) a forward-modeling scheme that computes synthetic body waves for a given kinematic rupture model, and (b) the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over beyond which further model discretization (spatial and temporal refinement of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
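
    As a minimal sketch of the Bayesian sampling idea (not the QUESO implementation, and with a toy forward model rather than a realistic waveform calculation), the following runs a random-walk Metropolis sampler over two hypothetical kinematic parameters of a synthetic "waveform":

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: a synthetic "waveform" controlled by two kinematic
# parameters (a rise time and a rupture velocity); purely illustrative.
t = np.linspace(0, 30, 300)
def forward(rise, vr):
    return np.exp(-t / rise) * np.sin(2 * np.pi * t / (10.0 / vr))

true = (4.0, 2.5)
sigma = 0.05
data = forward(*true) + sigma * rng.normal(size=t.size)

def log_post(theta):
    rise, vr = theta
    if not (1.0 < rise < 10.0 and 1.0 < vr < 4.0):   # uniform prior bounds
        return -np.inf
    resid = data - forward(rise, vr)
    return -0.5 * np.sum((resid / sigma) ** 2)       # Gaussian likelihood

# Random-walk Metropolis sampling of the posterior.
chain = np.empty((20000, 2))
theta = np.array([5.0, 2.0])
lp = log_post(theta)
for i in range(chain.shape[0]):
    prop = theta + 0.1 * rng.normal(size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain[i] = theta

burn = chain[5000:]
print("posterior mean:", burn.mean(axis=0), " std:", burn.std(axis=0))
```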

  6. Characterization and predictability of basin scale SWE distributions using ASO snow depth and SWE retrievals

    NASA Astrophysics Data System (ADS)

    Bormann, K.; Hedrick, A. R.; Marks, D. G.; Painter, T. H.

    2017-12-01

    The spatial and temporal distribution of snow water resources (SWE) in the mountains has been examined extensively through the use of models, in-situ networks and remote sensing techniques. However, until the Airborne Snow Observatory (http://aso.jpl.nasa.gov), our understanding of SWE dynamics has been limited due to a lack of well-constrained spatial distributions of SWE in complex terrain, particularly at high elevations and at regional scales (100km+). ASO produces comprehensive snow depth measurements and well-constrained SWE products providing the opportunity to re-examine our current understanding of SWE distributions with a robust and rich data source. We collected spatially-distributed snow depth and SWE data from over 150 individual ASO acquisitions spanning seven basins in California during the five-year operational period of 2013 - 2017. For each of these acquisitions, we characterized the spatial distribution of snow depth and SWE and examined how these distributions changed with time during snowmelt. We compared these distribution patterns between each of the seven basins and finally, examined the predictability of the SWE distributions using statistical extrapolations through both space and time. We compare and contrast these observationally-based characteristics with those from a physically-based snow model to highlight the strengths and weaknesses of the implementation of our understanding of SWE processes in the model environment. In practice, these results may be used to support or challenge our current understanding of mountain SWE dynamics and provide techniques for enhanced evaluation of high-resolution snow models that go beyond in-situ point comparisons. In application, this work may provide guidance on the potential of ASO to guide backfilling of sparse spaceborne measurements of snow depth and snow water equivalent.

  7. Does Terrestrial Carbon Explain Lake Superior Model-Data pCO2 Discrepancy?

    NASA Astrophysics Data System (ADS)

    Bennington, V.; McKinley, G. A.; Atilla, N.; Kimura, N.; Urban, N.; Wu, C.; Desai, A.

    2008-12-01

    As part of the CyCLeS project, a three-dimensional hydrodynamic model (MITgcm) was coupled to a medium-complexity ecosystem model and applied to Lake Superior in order to constrain the seasonal cycle of lake pCO2 and air-lake fluxes of CO2. Previous estimates of CO2 emissions from the lake, while very large, were based on field measurements of very limited spatial and temporal extent. The model allows a more realistic extrapolation from the limited data by incorporation of lake-wide circulation and food web dynamics. A large discrepancy (200 µatm) between observations and model-predicted pCO2 during spring suggests a significant input of terrestrial carbon into the lake. The physical model has 10-km horizontal resolution with 29 vertical layers, ten of which are in the top 50 m of the water column. The model is forced by interpolated meteorological data obtained from land-based weather stations, buoys, and other measurements. Modeled surface temperatures compare well to satellite-based surface water temperature images derived from NOAA AVHRR (Advanced Very High Resolution Radiometer), though there are regional patterns of bias that suggest errors in the heat flux forcing. Growth of two classes of phytoplankton is modeled as a function of temperature, light, and nutrients. One grazer preys upon all phytoplankton. The cycles of carbon and phosphorus are explicitly modeled throughout the water column. The model is able to replicate the observed seasonal cycle of lake chlorophyll and the deep chlorophyll maximum. The model is unable to capture the magnitude of observed CO2 super-saturation during spring without considering external carbon inputs to the lake. Simple box model results suggest that the estimated pool of terrestrial carbon in the lake (17 TgC) must remineralize with a timescale of months during spring in order to account for the model/data pCO2 difference. River inputs and enhanced remineralization in spring due to photo-oxidation are other mechanisms considered to explain the discrepancy between model predictions and observations of pCO2. Model results suggest that year-round and lake-wide direct measurements of pCO2 would help to better constrain the lake carbon cycle.

  8. Can we go From Tomographically Determined Seismic Velocities to Composition? Amplitude Resolution Issues in Local Earthquake Tomography

    NASA Astrophysics Data System (ADS)

    Wagner, L.

    2007-12-01

    There have been a number of recent papers (i.e. Lee (2003), James et al. (2004), Hacker and Abers (2004), Schutt and Lesher (2006)) which calculate predicted velocities for xenolith compositions at mantle pressures and temperatures. It is tempting, therefore, to attempt to go the other way ... to use tomographically determined absolute velocities to constrain mantle composition. However, in order to do this, it is vital that one is able to accurately constrain not only the polarity of the determined velocity deviations (i.e. fast vs slow) but also how much faster, how much slower relative to the starting model, if absolute velocities are to be so closely analyzed. While much attention has been given to issues concerning spatial resolution in seismic tomography (i.e. what areas are fast, what areas are slow), little attention has been directed at the issue of amplitude resolution (how fast, how slow). Velocity deviation amplitudes in seismic tomography are heavily influenced by the amount of regularization used and the number of iterations performed. Determining these two parameters is a difficult and little discussed problem. I explore the effect of these two parameters on the amplitudes obtained from the tomographic inversion of the Chile Argentina Geophysical Experiment (CHARGE) dataset, and attempt to determine a reasonable solution space for the low Vp, high Vs, low Vp/Vs anomaly found above the flat slab in central Chile. I then compare this solution space to the range in experimentally determined velocities for peridotite end-members to evaluate our ability to constrain composition using tomographically determined seismic velocities. I find that in general, it will be difficult to constrain the compositions of normal mantle peridotites using tomographically determined velocities, but that in the unusual case of the anomaly above the flat slab, the observed velocity structure still has an anomalously high S wave velocity and low Vp/Vs ratio that is most consistent with enstatite, but inconsistent with the predicted velocities of known mantle xenoliths.

  9. Galaxy Zoo: constraining the origin of spiral arms

    NASA Astrophysics Data System (ADS)

    Hart, Ross E.; Bamford, Steven P.; Keel, William C.; Kruk, Sandor J.; Masters, Karen L.; Simmons, Brooke D.; Smethurst, Rebecca J.

    2018-07-01

    Since the discovery that the majority of low-redshift galaxies exhibit some level of spiral structure, a number of theories have been proposed as to why these patterns exist. A popular explanation is a process known as swing amplification, yet there is no observational evidence to prove that such a mechanism is at play. By using a number of measured properties of galaxies, and scaling relations where there are no direct measurements, we model samples of SDSS and S4G spiral galaxies in terms of their relative halo, bulge, and disc mass and size. Using these models, we test predictions of swing amplification theory with respect to directly measured spiral arm numbers from Galaxy Zoo 2. We find that neither a universal cored nor cuspy inner dark matter profile can correctly predict observed numbers of arms in galaxies. However, by invoking a halo contraction/expansion model, a clear bimodality in the spiral galaxy population emerges. Approximately 40 per cent of unbarred spiral galaxies at z ≲ 0.1 and M* ≳ 10^10 M⊙ have spiral arms that can be modelled by swing amplification. This population displays a significant correlation between predicted and observed spiral arm numbers, evidence that they are swing amplified modes. The remainder are dominated by two-arm systems for which the model predicts significantly higher arm numbers. These are likely driven by tidal interactions or other mechanisms.

  10. Galaxy Zoo: constraining the origin of spiral arms

    NASA Astrophysics Data System (ADS)

    Hart, Ross E.; Bamford, Steven P.; Keel, William C.; Kruk, Sandor J.; Masters, Karen L.; Simmons, Brooke D.; Smethurst, Rebecca J.

    2018-05-01

    Since the discovery that the majority of low-redshift galaxies exhibit some level of spiral structure, a number of theories have been proposed as to why these patterns exist. A popular explanation is a process known as swing amplification, yet there is no observational evidence to prove that such a mechanism is at play. By using a number of measured properties of galaxies, and scaling relations where there are no direct measurements, we model samples of SDSS and S4G spiral galaxies in terms of their relative halo, bulge and disc mass and size. Using these models, we test predictions of swing amplification theory with respect to directly measured spiral arm numbers from Galaxy Zoo 2. We find that neither a universal cored nor cuspy inner dark matter profile can correctly predict observed numbers of arms in galaxies. However, by invoking a halo contraction/expansion model, a clear bimodality in the spiral galaxy population emerges. Approximately 40 per cent of unbarred spiral galaxies at z ≲ 0.1 and M* ≳ 10^10 M⊙ have spiral arms that can be modelled by swing amplification. This population displays a significant correlation between predicted and observed spiral arm numbers, evidence that they are swing amplified modes. The remainder are dominated by two-arm systems for which the model predicts significantly higher arm numbers. These are likely driven by tidal interactions or other mechanisms.

  11. State-of-stress in magmatic rift zones: Predicting the role of surface and subsurface topography

    NASA Astrophysics Data System (ADS)

    Oliva, S. J. C.; Ebinger, C.; Rivalta, E.; Williams, C. A.

    2017-12-01

    Continental rift zones are segmented along their length by large fault systems that form in response to extensional stresses. Volcanoes and crustal magma chambers cause fundamental changes to the density structure, load the plates, and alter the state-of-stress within the crust, which then dictates fracture orientation. In this study, we develop geodynamic models scaled to a < 7 My rift sector in the Eastern rift, East Africa where geophysical imaging provides tight constraints on subsurface structure, petrologic and thermodynamic studies constrain material densities, and seismicity and structural analyses constrain active and time-averaged kinematics. This area is an ideal test area because a 60º stress rotation is observed in time-averaged fault and magma intrusion, and in local seismicity, and because this was the site of a large volume dike intrusion and seismic sequence in 2007. We use physics-based 2D and 3D models (analytical and finite elements) constrained by data from active rift zones to quantify the effects of loading on state-of-stress. By modeling varying geometric arrangements, and density contrasts of topographic and subsurface loads, and with reasonable regional extensional forces, the resulting state-of-stress reveals the favored orientation for new intrusions. Although our models are generalized, they allow us to evaluate whether a magmatic system (surface and subsurface) can explain the observed stress rotation, and enable new intrusions, new faults, or fault reactivation with orientations oblique to the main border faults. Our results will improve our understanding of the different factors at play in these extensional regimes, as well as contribute to a better assessment of the hazards in the area.

  12. Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across the utility and customers.
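
    A minimal sketch of the distribution-free chance-constraint tightening mentioned above, assuming a single linearized voltage constraint and a toy three-unit dispatch problem (all numbers hypothetical, and without the MPC wrapper or distributed solver): Cantelli's inequality turns P(a'x + w <= v_max) >= 1 - eps into the deterministic constraint a'x <= v_max - sigma_w*sqrt((1 - eps)/eps), valid for any zero-mean forecast error with standard deviation sigma_w.

```python
import numpy as np
from scipy.optimize import linprog

# Minimize generation cost c'x subject to a tightened voltage-rise limit and a
# minimum-supply requirement. All sensitivities, limits and costs are illustrative.
c = np.array([1.0, 1.3, 0.8])            # unit costs of three controllable injections
a = np.array([4e-4, 5e-4, 3e-4])         # linearized voltage sensitivities (p.u./kW)
v_max, sigma_w, eps = 0.05, 0.004, 0.05  # voltage-rise limit, error std, violation prob.

margin = sigma_w * np.sqrt((1 - eps) / eps)       # Cantelli (distribution-free) back-off
A_ub = np.vstack([a, -np.ones((1, 3))])           # [tightened voltage limit; serve >= 30 kW]
b_ub = np.array([v_max - margin, -30.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 20)] * 3)
print("dispatch (kW):", np.round(res.x, 2), " cost:", round(res.fun, 2))
```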

  13. Thermochemical Constraints of the Old Faithful Model for Radiation-Driven Cryovolcanism on Enceladus

    NASA Astrophysics Data System (ADS)

    Cooper, Paul; Franzel, C. J.; Cooper, J. F.

    2010-10-01

    We have used a combination of thermochemical data, plume composition, and the estimated surface power flux to constrain the Old Faithful model for radiation-driven cryovolcanism on Enceladus (1). This model proposes episodic cryovolcanic activity brought about by the chemical reaction between reductants that are primordially present within Enceladus's ice, and oxidants produced by energetic particles impacting the icy surface. Assuming no limit on accumulation of oxidants in the ice crust in the billions of years since formation and subsequent magnetospheric irradiation of Enceladus, this new work extends (1) by examining limits on activity from reductant abundances. Our calculations show that an almost negligible amount of methane or ammonia, compared with the mass of Enceladus, would potentially be needed to account for the surface power flux of the gas plume over 10 million years of activity, consistent with geologic models for episodic overturn of the ice crust and heat flow (2). Limiting the permanently ejected fluid mass during this time by the volume of the topographical depression in the SPT of Enceladus, we have constrained the number ratio of reductant-to-water. Results are in support of our model. In addition, using the measured abundances of CO2 and N2 (products of CH4 and NH3 oxidation) in the plume, we have further constrained the amounts of CH4 and NH3 that could be present and these are also in line with our predictions. These calculations fully support the Old Faithful model (1). 1) Cooper, J. F., Cooper, P. D., Sittler, E. C., Sturner, S. J., Rymer, A. M., "Old Faithful Model for Radiolytic Gas-Driven Cryovolcanism at Enceladus", Planet. Space Sci., 57, 1607-1620, 2009. 2) O'Neill, C., F. Nimmo, "The Role of Episodic Overturn in Generating the Surface Geology and Heat Flow on Enceladus", Nature Geosci., 3, 88-91, 2010.

  14. Network models provide insights into how oriens–lacunosum-moleculare and bistratified cell interactions influence the power of local hippocampal CA1 theta oscillations

    PubMed Central

    Ferguson, Katie A.; Huh, Carey Y. L.; Amilhon, Bénédicte; Manseau, Frédéric; Williams, Sylvain; Skinner, Frances K.

    2015-01-01

    Hippocampal theta is a 4–12 Hz rhythm associated with episodic memory, and although it has been studied extensively, the cellular mechanisms underlying its generation are unclear. The complex interactions between different interneuron types, such as those between oriens–lacunosum-moleculare (OLM) interneurons and bistratified cells (BiCs), make their contribution to network rhythms difficult to determine experimentally. We created network models that are tied to experimental work at both cellular and network levels to explore how these interneuron interactions affect the power of local oscillations. Our cellular models were constrained with properties from patch clamp recordings in the CA1 region of an intact hippocampus preparation in vitro. Our network models are composed of three different types of interneurons: parvalbumin-positive (PV+) basket and axo-axonic cells (BC/AACs), PV+ BiCs, and somatostatin-positive OLM cells. Also included is a spatially extended pyramidal cell model to allow for a simplified local field potential representation, as well as experimentally-constrained, theta frequency synaptic inputs to the interneurons. The network size, connectivity, and synaptic properties were constrained with experimental data. To determine how the interactions between OLM cells and BiCs could affect local theta power, we explored how the number of OLM-BiC connections and connection strength affected local theta power. We found that our models operate in regimes that could be distinguished by whether OLM cells minimally or strongly affected the power of network theta oscillations due to balances that, respectively, allow compensatory effects or not. Inactivation of OLM cells could result in no change or even an increase in theta power. We predict that the dis-inhibitory effect of OLM cells to BiCs to pyramidal cell interactions plays a critical role in the resulting power of network theta oscillations. Overall, our network models reveal a dynamic interplay between different classes of interneurons in influencing local theta power. PMID:26300744

  15. Constraining ammonia dairy emissions during NASA DISCOVER-AQ California: surface and airborne observation comparisons with CMAQ simulations

    NASA Astrophysics Data System (ADS)

    Miller, D. J.; Liu, Z.; Sun, K.; Tao, L.; Nowak, J. B.; Bambha, R.; Michelsen, H. A.; Zondlo, M. A.

    2014-12-01

    Agricultural ammonia (NH3) emissions are highly uncertain in current bottom-up inventories. Ammonium nitrate is a dominant component of fine aerosols in agricultural regions such as the Central Valley of California, especially during winter. Recent high resolution regional modeling efforts in this region have found significant ammonium nitrate and gas-phase NH3 biases during summer. We compare spatially-resolved surface and boundary layer gas-phase NH3 observations during NASA DISCOVER-AQ California with Community Multi-Scale Air Quality (CMAQ) regional model simulations driven by the EPA NEI 2008 inventory to constrain wintertime NH3 model biases. We evaluate model performance with respect to aerosol partitioning, mixing and deposition to constrain contributions to modeled NH3 concentration biases in the Central Valley Tulare dairy region. Ammonia measurements performed with an open-path mobile platform on a vehicle are gridded to 4 km resolution hourly background concentrations. A peak detection algorithm is applied to remove local feedlot emission peaks. Aircraft NH3, NH4+ and NO3- observations are also compared with simulations extracted along the flight tracks. We find NH3 background concentrations in the dairy region are underestimated by three to five times during winter and NH3 simulations are moderately correlated with observations (r = 0.36). Although model simulations capture NH3 enhancements in the dairy region, these simulations are biased low by 30-60 ppbv NH3. Aerosol NH4+ and NO3- are also biased low in CMAQ by three and four times respectively. Unlike gas-phase NH3, CMAQ simulations do not capture typical NH4+ or NO3- enhancements observed in the dairy region. In contrast, boundary layer height simulations agree well with observations within 13%. We also address observational constraints on simulated NH3 deposition fluxes. These comparisons suggest that NEI 2008 wintertime dairy emissions are underestimated by a factor of three to five. We test sensitivity to emissions by increasing the NEI 2008 NH3 emissions uniformly across the dairy region and evaluate the impact on modeled concentrations. These results are applicable to improving predictions of ammoniated aerosol loading and highlight the value of mobile platform spatial NH3 measurements to constrain emission inventories.
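
    The study's exact peak-detection algorithm is not described here; as one simple way to implement the background-versus-plume separation and 4 km gridding on synthetic data, a rolling low-percentile baseline can be used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic mobile transect: a smooth regional background plus sharp local plumes.
n = 5000
x_km = np.linspace(0, 100, n)                       # along-track distance
background = 20 + 10 * np.sin(x_km / 15)            # ppbv
plumes = np.zeros(n)
for center in rng.uniform(0, 100, 25):              # 25 local plume encounters
    plumes += 400 * np.exp(-0.5 * ((x_km - center) / 0.05) ** 2)
nh3 = background + plumes + rng.normal(0, 1, n)

# "Peak removal": a rolling low percentile approximates the background envelope.
win = 200                                            # ~4 km of samples on each side
baseline = np.array([np.percentile(nh3[max(0, i - win):i + win], 10) for i in range(n)])

# Grid the de-peaked signal to 4 km cells.
edges = np.arange(0, 104, 4)
cell = np.digitize(x_km, edges) - 1
gridded = np.array([baseline[cell == k].mean() for k in range(len(edges) - 1)])
print(np.round(gridded, 1))
```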

  16. Constraining the models' response of tropical low clouds to SST forcings using CALIPSO observations

    NASA Astrophysics Data System (ADS)

    Cesana, G.; Del Genio, A. D.; Ackerman, A. S.; Brient, F.; Fridlind, A. M.; Kelley, M.; Elsaesser, G.

    2017-12-01

    Low-cloud response to a warmer climate is still pointed out as being the largest source of uncertainty in the last generation of climate models. To date there is no consensus among the models on whether the tropical low cloudiness would increase or decrease in a warmer climate. In addition, it has been shown that - depending on their climate sensitivity - the models either predict deeper or shallower low clouds. Recently, several relationships between inter-model characteristics of the present-day climate and future climate changes have been highlighted. These so-called emergent constraints aim to target relevant model improvements and to constrain models' projections based on current climate observations. Here we propose to use - for the first time - 10 years of CALIPSO cloud statistics to assess the ability of the models to represent the vertical structure of tropical low clouds for abnormally warm SST. We use a simulator approach to compare observations and simulations and focus on the low-layered clouds (i.e. z < 3.2km) as well the more detailed level perspective of clouds (40 levels from 0 to 19km). Results show that in most models an increase of the SST leads to a decrease of the low-layer cloud fraction. Vertically, the clouds deepen namely by decreasing the cloud fraction in the lowest levels and increasing it around the top of the boundary-layer. This feature is coincident with an increase of the high-level cloud fraction (z > 6.5km). Although the models' spread is large, the multi-model mean captures the observed variations but with a smaller amplitude. We then employ the GISS model to investigate how changes in cloud parameterizations affect the response of low clouds to warmer SSTs on the one hand; and how they affect the variations of the model's cloud profiles with respect to environmental parameters on the other hand. Finally, we use CALIPSO observations to constrain the model by determining i) what set of parameters allows reproducing the observed relationships and ii) what are the consequences on the cloud feedbacks. These results point toward process-oriented constraints of low-cloud responses to surface warming and environmental parameters.

  17. Probing 6D operators at future e⁻e⁺ colliders

    NASA Astrophysics Data System (ADS)

    Chiu, Wen Han; Leung, Sze Ching; Liu, Tao; Lyu, Kun-Feng; Wang, Lian-Tao

    2018-05-01

    We explore the sensitivities at future e⁻e⁺ colliders to probe a set of six-dimensional operators which can modify the SM predictions on Higgs physics and electroweak precision measurements. We consider the case in which the operators are turned on simultaneously. Such an analysis yields a "conservative" interpretation of the collider sensitivities, complementary to the "optimistic" scenario where the operators are individually probed. After a detailed analysis for CEPC in both the "conservative" and "optimistic" scenarios, we also consider the sensitivities for FCC-ee and ILC. As an illustration of the potential for constraining new physics models, we apply the sensitivity analysis to two benchmarks: the holographic composite Higgs model and the littlest Higgs model.

  18. Assessment of the interactions between economic growth and industrial wastewater discharges using co-integration analysis: a case study for China's Hunan Province.

    PubMed

    Xiao, Qiang; Gao, Yang; Hu, Dan; Tan, Hong; Wang, Tianxiang

    2011-07-01

    We have investigated the interactions between economic growth and industrial wastewater discharge from 1978 to 2007 in China's Hunan Province using co-integration theory and an error-correction model. Two main economic growth indicators and four representative industrial wastewater pollutants were selected to demonstrate the interaction mechanism. We found a long-term equilibrium relationship between economic growth and the discharge of industrial pollutants in wastewater between 1978 and 2007 in Hunan Province. The error-correction mechanism prevented the variables from expanding away from the long-term relationship in quantity and scale, and the size of the error-correction parameters reflected short-term adjustments that deviate from the long-term equilibrium. When economic growth changes within a short term, the discharge of pollutants will constrain growth because the values of the parameters in the short-term equation are smaller than those in the long-term co-integrated regression equation, indicating a remarkable long-term influence of economic growth on the discharge of industrial wastewater pollutants and showing that increasing pollutant discharge constrained economic growth. Economic growth is the main driving factor that affects the discharge of industrial wastewater pollutants in Hunan Province. On the other hand, the discharge constrains economic growth by producing external pressure on growth, although this feedback mechanism has a lag effect. Economic growth plays an important role in explaining the forecast variance decomposition of the discharge of industrial wastewater pollutants, but this discharge contributes less to predictions of the variations in economic growth.

  19. Assessment of the Interactions between Economic Growth and Industrial Wastewater Discharges Using Co-integration Analysis: A Case Study for China’s Hunan Province

    PubMed Central

    Xiao, Qiang; Gao, Yang; Hu, Dan; Tan, Hong; Wang, Tianxiang

    2011-01-01

    We have investigated the interactions between economic growth and industrial wastewater discharge from 1978 to 2007 in China’s Hunan Province using co-integration theory and an error-correction model. Two main economic growth indicators and four representative industrial wastewater pollutants were selected to demonstrate the interaction mechanism. We found a long-term equilibrium relationship between economic growth and the discharge of industrial pollutants in wastewater between 1978 and 2007 in Hunan Province. The error-correction mechanism prevented the variables from expanding away from the long-term relationship in quantity and scale, and the size of the error-correction parameters reflected short-term adjustments that deviate from the long-term equilibrium. When economic growth changes within a short term, the discharge of pollutants will constrain growth because the values of the parameters in the short-term equation are smaller than those in the long-term co-integrated regression equation, indicating a remarkable long-term influence of economic growth on the discharge of industrial wastewater pollutants and showing that increasing pollutant discharge constrained economic growth. Economic growth is the main driving factor that affects the discharge of industrial wastewater pollutants in Hunan Province. On the other hand, the discharge constrains economic growth by producing external pressure on growth, although this feedback mechanism has a lag effect. Economic growth plays an important role in explaining the forecast variance decomposition of the discharge of industrial wastewater pollutants, but this discharge contributes less to predictions of the variations in economic growth. PMID:21845167
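
    A minimal sketch of the Engle-Granger style workflow implied by these two records (co-integration test, long-run regression, then an error-correction model on first differences), run on synthetic stand-in series rather than the Hunan data:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
n = 30                                                      # ~ annual data, 1978-2007
gdp = np.cumsum(0.08 + 0.02 * rng.normal(size=n))           # log GDP (random walk with drift)
discharge = 0.6 * gdp + 0.05 * rng.normal(size=n)           # co-integrated log discharge

# Engle-Granger co-integration test (null hypothesis: no co-integration).
t_stat, p_value, _ = coint(discharge, gdp)
print("Engle-Granger p-value:", round(p_value, 3))

# Long-run relation and error-correction model (ECM) on first differences.
long_run = sm.OLS(discharge, sm.add_constant(gdp)).fit()
ect = long_run.resid                                        # error-correction term
d_y, d_x = np.diff(discharge), np.diff(gdp)
X_ecm = sm.add_constant(np.column_stack([d_x, ect[:-1]]))
ecm = sm.OLS(d_y, X_ecm).fit()
print(ecm.params)   # [constant, short-run elasticity, error-correction coefficient]
```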

  20. A new approach for monthly updates of anthropogenic sulfur dioxide emissions from space: Application to China and implications for air quality forecasts

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Wang, Yuxuan; Qu, Zhen

    2016-09-01

    SO2 emissions, the largest source of anthropogenic aerosols, can respond rapidly to economic and policy driven changes. However, bottom-up SO2 inventories have inherent limitations owing to 24-48 months latency and lack of month-to-month variation in emissions (especially in developing countries). This study develops a new approach that integrates Ozone Monitoring Instrument (OMI) SO2 satellite measurements and GEOS-Chem adjoint model simulations to constrain monthly anthropogenic SO2 emissions. The approach's effectiveness is demonstrated for 14 months in East Asia; resultant posterior emissions not only capture a 20% SO2 emission reduction in Beijing during the 2008 Olympic Games but also improve agreement between modeled and in situ surface measurements. Further analysis reveals that posterior emissions estimates, compared to the prior, lead to significant improvements in forecasting monthly surface and columnar SO2. With the pending availability of geostationary measurements of tropospheric composition, we show that it may soon be possible to rapidly constrain SO2 emissions and associated air quality predictions at fine spatiotemporal scales.

  1. The role of residence time in diagnostic models of global carbon storage capacity: model decomposition based on a traceable scheme.

    PubMed

    Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang

    2015-11-06

    As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ'E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ'E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models.

  2. The role of residence time in diagnostic models of global carbon storage capacity: model decomposition based on a traceable scheme

    PubMed Central

    Yizhao, Chen; Jianyang, Xia; Zhengguo, Sun; Jianlong, Li; Yiqi, Luo; Chengcheng, Gang; Zhaoqi, Wang

    2015-01-01

    As a key factor that determines carbon storage capacity, residence time (τE) is not well constrained in terrestrial biosphere models. This factor is recognized as an important source of model uncertainty. In this study, to understand how τE influences terrestrial carbon storage prediction in diagnostic models, we introduced a model decomposition scheme in the Boreal Ecosystem Productivity Simulator (BEPS) and then compared it with a prognostic model. The result showed that τE ranged from 32.7 to 158.2 years. The baseline residence time (τ′E) was stable for each biome, ranging from 12 to 53.7 years for forest biomes and 4.2 to 5.3 years for non-forest biomes. The spatiotemporal variations in τE were mainly determined by the environmental scalar (ξ). By comparing models, we found that the BEPS uses a more detailed pool construction but rougher parameterization for carbon allocation and decomposition. With respect to ξ comparison, the global difference in the temperature scalar (ξt) averaged 0.045, whereas the moisture scalar (ξw) had a much larger variation, with an average of 0.312. We propose that further evaluations and improvements in τ′E and ξw predictions are essential to reduce the uncertainties in predicting carbon storage by the BEPS and similar diagnostic models. PMID:26541245
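
    The decomposition implied by both records can be summarized as storage capacity = NPP x tau_E with tau_E = tau'_E / xi and xi = xi_t x xi_w; the sketch below assumes this simple one-pool form (BEPS itself tracks multiple pools), and the numbers are hypothetical:

```python
# Traceability-style decomposition of ecosystem residence time (a sketch; the
# pool structure and scalars in BEPS are more detailed than this):
#   carbon storage capacity  X_c   = NPP * tau_E
#   actual residence time    tau_E = tau_E_baseline / xi,  with  xi = xi_t * xi_w
def residence_time(tau_baseline, xi_t, xi_w):
    """Actual residence time (yr) from the baseline residence time and the
    temperature / moisture scalars (both assumed to lie in (0, 1])."""
    return tau_baseline / (xi_t * xi_w)

def storage_capacity(npp, tau_baseline, xi_t, xi_w):
    """Carbon storage capacity (kg C m-2) as NPP (kg C m-2 yr-1) times tau_E."""
    return npp * residence_time(tau_baseline, xi_t, xi_w)

# Example: a forest biome with a 20-yr baseline residence time (hypothetical values).
print(residence_time(20.0, xi_t=0.8, xi_w=0.6))      # ~41.7 yr
print(storage_capacity(0.6, 20.0, 0.8, 0.6))         # ~25 kg C m-2
```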

  3. Uncertainty of climate change impact on groundwater reserves - Application to a chalk aquifer

    NASA Astrophysics Data System (ADS)

    Goderniaux, Pascal; Brouyère, Serge; Wildemeersch, Samuel; Therrien, René; Dassargues, Alain

    2015-09-01

    Recent studies have evaluated the impact of climate change on groundwater resources for different geographical and climatic contexts. However, most studies have either not estimated the uncertainty around projected impacts or have limited the analysis to the uncertainty related to climate models. In this study, the uncertainties around impact projections from several sources (climate models, natural variability of the weather, hydrological model calibration) are calculated and compared for the Geer catchment (465 km2) in Belgium. We use a surface-subsurface integrated model implemented using the finite element code HydroGeoSphere, coupled with climate change scenarios (2010-2085) and the UCODE_2005 inverse model, to assess the uncertainty related to the calibration of the hydrological model. This integrated model provides a more realistic representation of the water exchanges between surface and subsurface domains and constrains more the calibration with the use of both surface and subsurface observed data. Sensitivity and uncertainty analyses were performed on predictions. The linear uncertainty analysis is approximate for this nonlinear system, but it provides some measure of uncertainty for computationally demanding models. Results show that, for the Geer catchment, the most important uncertainty is related to calibration of the hydrological model. The total uncertainty associated with the prediction of groundwater levels remains large. By the end of the century, however, the uncertainty becomes smaller than the predicted decline in groundwater levels.
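
    The linear (first-order) uncertainty analysis mentioned above can be sketched in generic form, rather than as the UCODE_2005/HydroGeoSphere workflow itself, as propagation of the calibrated parameter covariance through prediction sensitivities; the Jacobian and covariance values below are hypothetical:

```python
import numpy as np

# First-order (linear) propagation of parameter uncertainty to a model prediction:
#   var(pred) ~= J C_p J^T, with J the sensitivity of the prediction to the
#   parameters and C_p the posterior parameter covariance from calibration.
def linear_prediction_variance(jacobian, param_cov):
    jacobian = np.atleast_2d(jacobian)
    return jacobian @ param_cov @ jacobian.T

# Toy example: a groundwater-level prediction sensitive to recharge and conductivity.
J = np.array([[0.8, -1.5]])             # d(head)/d(recharge), d(head)/d(logK)
C_p = np.array([[0.04, 0.01],
                [0.01, 0.09]])           # calibrated parameter covariance
var = linear_prediction_variance(J, C_p)
print("prediction std (m):", float(np.sqrt(var)))
```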

  4. Constraining the break of spatial diffeomorphism invariance with Planck data

    NASA Astrophysics Data System (ADS)

    Graef, L. L.; Benetti, M.; Alcaniz, J. S.

    2017-07-01

    The currently most accepted paradigm for early universe cosmology, the inflationary scenario, shows good agreement with recent Cosmic Microwave Background (CMB) and polarization data. However, when the inflation consistency relation is relaxed, these observational data exclude a larger range of red tensor tilt values, favoring blue values, which are not predicted by minimal inflationary models. Recently, it has been shown that the assumption of spatial diffeomorphism invariance breaking (SDB) in the context of an effective field theory of inflation leads to interesting observational consequences. Among them is the possibility of generating a blue tensor spectrum, which, for a certain choice of parameters, can recover the specific consistency relation of String Gas Cosmology. We use the most recent CMB data to constrain the SDB model and test its observational viability through a Bayesian analysis assuming as reference an extended ΛCDM+tensor perturbation model, which considers a power-law tensor spectrum parametrized in terms of the tensor-to-scalar ratio, r, and the tensor spectral index, nt. If the inflation consistency relation r = -8nt is imposed, we obtain strong evidence in favor of the reference model, whereas if this relation is relaxed, weak evidence in favor of the model with diffeomorphism breaking is found. We also use the same CMB data set to make an observational comparison between the SDB model, standard inflation and String Gas Cosmology.

  5. Communication: CDFT-CI couplings can be unreliable when there is fractional charge transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu

    2015-12-21

    Constrained density functional theory with configuration interaction (CDFT-CI) is a useful, low-cost tool for the computational prediction of electronic couplings between pseudo-diabatic constrained electronic states. Such couplings are of paramount importance in electron transfer theory and transition state theory, among other areas of chemistry. Unfortunately, CDFT-CI occasionally fails significantly, predicting a coupling that does not decay exponentially with distance and/or overestimating the expected coupling by an order of magnitude or more. In this communication, we show that the eigenvalues of the difference density matrix between the two constrained states can be used as an a priori metric to determine when CDFT-CI are likely to be reliable: when the eigenvalues are near 0 or ±1, transfer of a whole electron is occurring, and CDFT-CI can be trusted. We demonstrate the utility of this metric with several illustrative examples.

  6. Communication: CDFT-CI couplings can be unreliable when there is fractional charge transfer

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Van Voorhis, Troy

    2015-12-01

    Constrained density functional theory with configuration interaction (CDFT-CI) is a useful, low-cost tool for the computational prediction of electronic couplings between pseudo-diabatic constrained electronic states. Such couplings are of paramount importance in electron transfer theory and transition state theory, among other areas of chemistry. Unfortunately, CDFT-CI occasionally fails significantly, predicting a coupling that does not decay exponentially with distance and/or overestimating the expected coupling by an order of magnitude or more. In this communication, we show that the eigenvalues of the difference density matrix between the two constrained states can be used as an a priori metric to determine when CDFT-CI are likely to be reliable: when the eigenvalues are near 0 or ±1, transfer of a whole electron is occurring, and CDFT-CI can be trusted. We demonstrate the utility of this metric with several illustrative examples.
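
    The reliability metric described in both records can be sketched directly: diagonalize the difference of the one-particle density matrices of the two constrained states and check whether the eigenvalues cluster near 0 or ±1. The tolerance and the toy diagonal density matrices below are illustrative assumptions:

```python
import numpy as np

def charge_transfer_character(P_initial, P_final, tol=0.15):
    """Eigenvalues of the difference one-particle density matrix between two
    constrained states. Values near 0 or +/-1 indicate whole-electron transfer
    (CDFT-CI couplings expected to be reliable); intermediate values indicate
    fractional charge transfer."""
    dP = P_final - P_initial
    evals = np.linalg.eigvalsh(dP)           # dP is symmetric (real orbitals assumed)
    fractional = evals[(np.abs(evals) > tol) & (np.abs(np.abs(evals) - 1) > tol)]
    return evals, fractional.size == 0

# Toy 4-orbital example: one electron moved cleanly from orbital 0 to orbital 3.
P_a = np.diag([1.0, 1.0, 1.0, 0.0])
P_b = np.diag([0.0, 1.0, 1.0, 1.0])
print(charge_transfer_character(P_a, P_b))   # eigenvalues {-1, 0, 0, +1}: reliable
```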

  7. Beyond equilibrium climate sensitivity

    NASA Astrophysics Data System (ADS)

    Knutti, Reto; Rugenstein, Maria A. A.; Hegerl, Gabriele C.

    2017-10-01

    Equilibrium climate sensitivity characterizes the Earth's long-term global temperature response to increased atmospheric CO2 concentration. It has reached almost iconic status as the single number that describes how severe climate change will be. The consensus on the 'likely' range for climate sensitivity of 1.5 °C to 4.5 °C today is the same as given by Jule Charney in 1979, but now it is based on quantitative evidence from across the climate system and throughout climate history. The quest to constrain climate sensitivity has revealed important insights into the timescales of the climate system response, natural variability and limitations in observations and climate models, but also concerns about the simple concepts underlying climate sensitivity and radiative forcing, which opens avenues to better understand and constrain the climate response to forcing. Estimates of the transient climate response are better constrained by observed warming and are more relevant for predicting warming over the next decades. Newer metrics relating global warming directly to the total emitted CO2 show that in order to keep warming to within 2 °C, future CO2 emissions have to remain strongly limited, irrespective of climate sensitivity being at the high or low end.

  8. Phenology of two interdependent traits in migratory birds in response to climate change.

    PubMed

    Kristensen, Nadiah Pardede; Johansson, Jacob; Ripa, Jörgen; Jonzén, Niclas

    2015-05-22

    In migratory birds, arrival date and hatching date are two key phenological markers that have responded to global warming. A body of knowledge exists relating these traits to evolutionary pressures. In this study, we formalize this knowledge into general mathematical assumptions, and use them in an ecoevolutionary model. In contrast to previous models, this study is novel in accounting for both traits, arrival date and hatching date, and the interdependence between them, revealing when one, the other or both will respond to climate. For all models sharing the assumptions, the following phenological responses will occur. First, if the nestling-prey peak is late enough, hatching is synchronous with, and arrival date evolves independently of, prey phenology. Second, when resource availability constrains the length of the pre-laying period, hatching is adaptively asynchronous with prey phenology. Predictions for both traits compare well with empirical observations. In response to advancing prey phenology, arrival date may advance, remain unchanged, or even become delayed; the latter occurring when egg-laying resources are only available relatively late in the season. The model shows that asynchronous hatching and unresponsive arrival date are not sufficient evidence that phenological adaptation is constrained. The work provides a framework for exploring microevolution of interdependent phenological traits.

  9. Multidisciplinary Optimization Approach for Design and Operation of Constrained and Complex-shaped Space Systems

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young

    The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraint effects, designers adopt deployable configurations on the spacecraft that result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, there is a lack of integration of the design optimization systems into operational optimization and the utility maximization of spacecraft in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees the achievement of the on-ground design behavior in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique, called the "Attitude Sphere", is extended and merged with additional engineering tools such as OpenGL. OpenGL's graphic acceleration facilitates the accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS) and the design result shows that the amount of photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of this effort is extended from Single Discipline Optimization to Multidisciplinary Optimization, which includes both the design and the operation of the EPS, the Attitude Determination and Control System (ADCS), and the communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances. To address this issue, for the ADCS operations, controllers based on Model Predictive Control that are effective for constraint handling were developed and implemented. All the suggested design and operation methodologies are applied to the mission "CADRE", a space weather mission scheduled for operation in 2016. This application demonstrates the usefulness and capability of the methodology to enhance CADRE's capabilities, and its ability to be applied to a variety of missions.

  10. Linking erosion history and mantle processes in southern Africa

    NASA Astrophysics Data System (ADS)

    Stanley, J. R.; Braun, J.; Flowers, R. M.; Baby, G.; Wildman, M.; Guillocheau, F.; Robin, C.; Beucher, R.; Brown, R. W.

    2017-12-01

    The large, low relief, high elevation plateau of southern Africa has been the focus of many studies, but there is still considerable debate about how it formed. Lack of tectonic convergence and crustal thickening suggests mantle dynamics play an important role in the evolution of topography there, but the time and specific mechanisms of topographic development are still contested. Many mantle mechanisms of topographic support have been suggested including dynamic topography associated with either deep or shallow mantle thermal anomalies, thermochemical modification of the lithosphere, and plume tails related to Mesozoic magmatic activity. These mechanisms predict different timing and patterns of surface uplift such that better constraints on the uplift history have the potential to constrain the nature of the source of topographic support. Here we test several of these geodynamic hypotheses using a landscape evolution model that is used to predict the erosional response to surface uplift. Several recent studies have provided a clearer picture of the erosion history of the plateau surface and margins using low temperature thermochronology and the geometries of the surrounding offshore depositional systems. Model results are directly compared with these data. We use an inversion method (the Neighborhood Algorithm) to constrain the range in erosional and uplift parameters that can best reproduce the observed data. The combination of different types of geologic information including sedimentary flux, landscape shape, and thermochronology is valuable for constraining many of these parameters. We show that both the characteristics of the geodynamic forcing as well as the physical characteristics of the eroding plateau have significant control on the plateau erosion patterns. Models that match the erosion history data well suggest uplift of the eastern margin in the Cretaceous (~100 Ma) followed by uplift of the western margin 20 Myr later. The amplitude of this uplift is on the order of 1000 m. The data cannot resolve whether there was a smaller amplitude phase of uplift in the Cenozoic. These results suggest that the scenario proposed by Braun et al. (2014) of uplift caused by the continent moving over the African superswell is viable. We are currently investigating the compatibility of other uplift geometries.

  11. Probabilistic Modeling of Aircraft Trajectories for Dynamic Separation Volumes

    NASA Technical Reports Server (NTRS)

    Lewis, Timothy A.

    2016-01-01

    With a proliferation of new and unconventional vehicles and operations expected in the future, the ab initio airspace design will require new approaches to trajectory prediction for separation assurance and other air traffic management functions. This paper presents an approach to probabilistic modeling of the trajectory of an aircraft when its intent is unknown. The approach uses a set of feature functions to constrain a maximum entropy probability distribution based on a set of observed aircraft trajectories. This model can be used to sample new aircraft trajectories to form an ensemble reflecting the variability in an aircraft's intent. The model learning process ensures that the variability in this ensemble reflects the behavior observed in the original data set. Computational examples are presented.
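
    A minimal sketch of a discrete maximum-entropy fit of this kind, with hypothetical feature functions and synthetic tracks standing in for observed trajectories: the weights of an exponential-family distribution over candidate trajectories are adjusted until the model's expected features match the observed means, and an ensemble is then sampled from the fitted distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature functions summarising a discretized track (hypothetical choices).
def features(traj):
    return np.array([np.abs(np.diff(traj)).sum(),   # total heading change
                     np.abs(traj).mean()])          # mean deviation from nominal heading

# Candidate trajectory set spanning a range of behaviors.
candidates = [np.cumsum(rng.uniform(0.02, 0.2) * rng.normal(size=50)) for _ in range(500)]
F = np.array([features(tr) for tr in candidates])                  # (500, 2)

# "Observed" tracks define the feature expectations to match.
observed = [np.cumsum(0.05 * rng.normal(size=50)) for _ in range(100)]
target = np.array([features(tr) for tr in observed]).mean(axis=0)

# Maximum-entropy weights: p_i proportional to exp(lam . f_i), fitted so that the
# model's expected features match the observed means (gradient = target - E_p[f]).
lam = np.zeros(2)
for _ in range(5000):
    logits = F @ lam
    w = np.exp(logits - logits.max())
    p = w / w.sum()
    lam += 0.01 * (target - p @ F)

# Sample an ensemble of plausible trajectories reflecting the learned variability.
ensemble = [candidates[i] for i in rng.choice(len(candidates), size=10, p=p)]
print("sampled", len(ensemble), "trajectories")
print("model feature means:", np.round(p @ F, 3), " target:", np.round(target, 3))
```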

  12. Using experimental data to test an n -body dynamical model coupled with an energy-based clusterization algorithm at low incident energies

    NASA Astrophysics Data System (ADS)

    Kumar, Rohit; Puri, Rajeev K.

    2018-03-01

    Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of the energy-based clusterization algorithm, i.e., the simulated annealing clusterization algorithm (SACA), to describe the experimental data of charge distribution and various event-by-event correlations among fragments. The calculations are constrained to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of colliding partners and shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with the other available calculations, which use one-body models, statistical models, and/or hybrid models.

  13. Constraining terrestrial ecosystem CO2 fluxes by integrating models of biogeochemistry and atmospheric transport and data of surface carbon fluxes and atmospheric CO2 concentrations

    NASA Astrophysics Data System (ADS)

    Zhu, Q.; Zhuang, Q.; Henze, D.; Bowman, K.; Chen, M.; Liu, Y.; He, Y.; Matsueda, H.; Machida, T.; Sawa, Y.; Oechel, W.

    2014-09-01

    Regional net carbon fluxes of terrestrial ecosystems could be estimated with either biogeochemistry models by assimilating surface carbon flux measurements or atmospheric CO2 inversions by assimilating observations of atmospheric CO2 concentrations. Here we combine ecosystem biogeochemistry modeling and atmospheric CO2 inverse modeling to investigate the magnitude and spatial distribution of the terrestrial ecosystem CO2 sources and sinks. First, we constrain a terrestrial ecosystem model (TEM) at the site level by assimilating the observed net ecosystem production (NEP) for various plant functional types. We find that the uncertainties of model parameters are reduced by up to 90% and model predictability is greatly improved for all the plant functional types (coefficients of determination are enhanced up to 0.73). We then extrapolate the model to a global scale at a 0.5° × 0.5° resolution to estimate the large-scale terrestrial ecosystem CO2 fluxes, which serve as the prior for the atmospheric CO2 inversion. Second, we constrain the large-scale terrestrial CO2 fluxes by assimilating the GLOBALVIEW-CO2 and mid-tropospheric CO2 retrievals from the Atmospheric Infrared Sounder (AIRS) into an atmospheric transport model (GEOS-Chem). The transport inversion estimates that: (1) the annual terrestrial ecosystem carbon sink in 2003 is -2.47 Pg C yr-1, which agrees reasonably well with the most recent inter-comparison studies of CO2 inversions (-2.82 Pg C yr-1); (2) North America temperate, Europe and Eurasia temperate regions act as major terrestrial carbon sinks; and (3) the posterior transport model is able to reasonably reproduce the atmospheric CO2 concentrations, which are validated against the Comprehensive Observation Network for TRace gases by AIrLiner (CONTRAIL) CO2 concentration data. This study indicates that biogeochemistry modeling or atmospheric transport and inverse modeling alone might not be able to quantify regional terrestrial carbon fluxes well. However, combining the two modeling approaches and assimilating data of surface carbon flux as well as atmospheric CO2 mixing ratios might significantly improve the quantification of terrestrial carbon fluxes.
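
    The second (transport-inversion) step can be illustrated with a schematic Bayesian synthesis inversion. The dimensions and numbers below are hypothetical, and the linear operator H stands in for the transport model's sampling of the observation network; this is not the study's actual GEOS-Chem setup.

    ```python
    # Schematic Bayesian synthesis inversion (toy dimensions, hypothetical numbers):
    # x_post = x_prior + K (y - H x_prior), with K = B H^T (H B H^T + R)^-1.
    import numpy as np

    np.random.seed(0)
    n_regions, n_obs = 3, 5
    x_prior = np.array([-1.0, 0.5, -0.8])           # prior regional fluxes (Pg C/yr)
    B = np.diag([0.5, 0.5, 0.5]) ** 2               # prior error covariance
    H = np.random.rand(n_obs, n_regions)            # linearized transport/sampling operator
    R = np.eye(n_obs) * 0.2 ** 2                    # observation error covariance
    y = H @ np.array([-1.5, 0.7, -1.0]) + 0.1 * np.random.randn(n_obs)  # synthetic obs

    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # gain matrix
    x_post = x_prior + K @ (y - H @ x_prior)        # posterior fluxes
    A_post = (np.eye(n_regions) - K @ H) @ B        # posterior error covariance
    print("posterior fluxes:", x_post)
    print("uncertainty reduction (%):", 100 * (1 - np.sqrt(np.diag(A_post) / np.diag(B))))
    ```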

  14. Hydrogen and Oxygen Isotope Ratios in Body Water and Hair: Modeling Isotope Dynamics in Nonhuman Primates

    PubMed Central

    O’Grady, Shannon P.; Valenzuela, Luciano O.; Remien, Christopher H.; Enright, Lindsey E.; Jorgensen, Matthew J.; Kaplan, Jay R.; Wagner, Janice D.; Cerling, Thure E.; Ehleringer, James R.

    2012-01-01

    The stable isotopic composition of drinking water, diet, and atmospheric oxygen influence the isotopic composition of body water (2H/1H, 18O/16O expressed as δ2H and δ18O). In turn, body water influences the isotopic composition of organic matter in tissues, such as hair and teeth, which are often used to reconstruct historical dietary and movement patterns of animals and humans. Here, we used a nonhuman primate system (Macaca fascicularis) to test the robustness of two different mechanistic stable isotope models: a model to predict the δ2H and δ18O values of body water and a second model to predict the δ2H and δ18O values of hair. In contrast to previous human-based studies, use of nonhuman primates fed controlled diets allowed us to further constrain model parameter values and evaluate model predictions. Both models reliably predicted the δ2H and δ18O values of body water and of hair. Moreover, the isotope data allowed us to better quantify values for two critical variables in the models: the δ2H and δ18O values of gut water and the 18O isotope fractionation associated with a carbonyl oxygen-water interaction in the gut (αow). Our modeling efforts indicated that better predictions for body water and hair isotope values were achieved by making the isotopic composition of gut water approach that of body water. Additionally, the value of αow was 1.0164, in close agreement with the only other previously measured observation (microbial spore cell walls), suggesting robustness of this fractionation factor across different biological systems. PMID:22553163

  15. Hydrogen and oxygen isotope ratios in body water and hair: modeling isotope dynamics in nonhuman primates.

    PubMed

    O'Grady, Shannon P; Valenzuela, Luciano O; Remien, Christopher H; Enright, Lindsey E; Jorgensen, Matthew J; Kaplan, Jay R; Wagner, Janice D; Cerling, Thure E; Ehleringer, James R

    2012-07-01

    The stable isotopic composition of drinking water, diet, and atmospheric oxygen influence the isotopic composition of body water ((2)H/(1)H, (18)O/(16)O expressed as δ(2)H and δ(18)O). In turn, body water influences the isotopic composition of organic matter in tissues, such as hair and teeth, which are often used to reconstruct historical dietary and movement patterns of animals and humans. Here, we used a nonhuman primate system (Macaca fascicularis) to test the robustness of two different mechanistic stable isotope models: a model to predict the δ(2)H and δ(18)O values of body water and a second model to predict the δ(2)H and δ(18)O values of hair. In contrast to previous human-based studies, use of nonhuman primates fed controlled diets allowed us to further constrain model parameter values and evaluate model predictions. Both models reliably predicted the δ(2)H and δ(18)O values of body water and of hair. Moreover, the isotope data allowed us to better quantify values for two critical variables in the models: the δ(2)H and δ(18)O values of gut water and the (18)O isotope fractionation associated with a carbonyl oxygen-water interaction in the gut (α(ow)). Our modeling efforts indicated that better predictions for body water and hair isotope values were achieved by making the isotopic composition of gut water approach that of body water. Additionally, the value of α(ow) was 1.0164, in close agreement with the only other previously measured observation (microbial spore cell walls), suggesting robustness of this fractionation factor across different biological systems. © 2012 Wiley Periodicals, Inc.

  16. Vibration control of multiferroic fibrous composite plates using active constrained layer damping

    NASA Astrophysics Data System (ADS)

    Kattimani, S. C.; Ray, M. C.

    2018-06-01

    Geometrically nonlinear vibration control of fiber reinforced magneto-electro-elastic or multiferroic fibrous composite plates using active constrained layer damping treatment has been investigated. The piezoelectric (BaTiO3) fibers are embedded in the magnetostrictive (CoFe2O4) matrix, forming a magneto-electro-elastic or multiferroic smart composite. A three-dimensional finite element model of such fiber reinforced magneto-electro-elastic plates integrated with the active constrained layer damping patches is developed. Influence of electro-elastic, magneto-elastic and electromagnetic coupled fields on the vibration has been studied. The Golla-Hughes-McTavish method in the time domain is employed for modeling a constrained viscoelastic layer of the active constrained layer damping treatment. The von Kármán type nonlinear strain-displacement relations are incorporated for developing a three-dimensional finite element model. Effect of fiber volume fraction, fiber orientation and boundary conditions on the control of geometrically nonlinear vibration of the fiber reinforced magneto-electro-elastic plates is investigated. The effect of the piezoelectric fiber orientation angle in the 1-3 piezoelectric constraining layer on the performance of the active constrained layer damping treatment has also been examined.

  17. Predicting Lg Coda Using Synthetic Seismograms and Media With Stochastic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Tibuleac, I. M.; Stroujkova, A.; Bonner, J. L.; Mayeda, K.

    2005-12-01

    Recent examinations of the characteristics of coda-derived Sn and Lg spectra for yield estimation have shown that the spectral peak of Nevada Test Site (NTS) explosion spectra is depth-of-burial dependent, and that this peak is shifted to higher frequencies for Lop Nor explosions at the same depths. To confidently use coda-based yield formulas, we need to understand and predict coda spectral shape variations with depth, source media, velocity structure, topography, and geological heterogeneity. We present results of a coda modeling study to predict Lg coda. During the initial stages of this research, we have acquired and parameterized a deterministic 6 deg. x 6 deg. velocity and attenuation model centered on the Nevada Test Site. Near-source data are used to constrain density and attenuation profiles for the upper five km. The upper crust velocity profiles are quilted into a background velocity profile at depths greater than five km. The model is parameterized for use in a modified version of the Generalized Fourier Method in two dimensions (GFM2D). We modify this model to include stochastic heterogeneities of varying correlation lengths within the crust. Correlation length, Hurst number and fractional velocity perturbation of the heterogeneities are used to construct different realizations of the random media. We use nuclear explosion and earthquake cluster waveform analysis, as well as well log and geological information to constrain the stochastic parameters for a path between the NTS and the seismic stations near Mina, Nevada. Using multiple runs, we quantify the effects of variations in the stochastic parameters, of heterogeneity location in the crust and attenuation on coda amplitude and spectral characteristics. We calibrate these parameters by matching synthetic earthquake Lg coda envelopes to coda envelopes of local earthquakes with well-defined moments and mechanisms. We generate explosion synthetics for these calibrated deterministic and stochastic models. Secondary effects, including a compensated linear vector dipole source, are superposed on the synthetics in order to adequately characterize the Lg generation. We use this technique to characterize the effects of depth of burial on the coda spectral shapes.
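
    The stochastic-media ingredient can be sketched as follows: a random velocity-perturbation field is generated by spectrally filtering white noise with a von Kármán power spectrum controlled by the correlation length, Hurst number, and rms fractional perturbation (the three parameters named above). The grid sizes and parameter values are illustrative, and this is not the GFM2D implementation.

    ```python
    # Sketch: build a 2-D random velocity-perturbation field with a von Karman
    # spectrum, parameterized by correlation length a, Hurst number, and rms
    # fractional perturbation. (Illustrative only; not the GFM2D code.)
    import numpy as np

    def von_karman_field(nx, nz, dx, a=2000.0, hurst=0.3, rms=0.05, seed=1):
        rng = np.random.default_rng(seed)
        kx = np.fft.fftfreq(nx, dx) * 2 * np.pi
        kz = np.fft.fftfreq(nz, dx) * 2 * np.pi
        KX, KZ = np.meshgrid(kx, kz, indexing="ij")
        k2 = KX**2 + KZ**2
        # 2-D von Karman power spectrum ~ a^2 / (1 + k^2 a^2)^(H + 1)
        psd = a**2 / (1.0 + k2 * a**2) ** (hurst + 1.0)
        noise = rng.standard_normal((nx, nz))
        field = np.real(np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(psd)))
        field *= rms / field.std()          # scale to the requested rms perturbation
        return field

    dv = von_karman_field(nx=256, nz=128, dx=100.0)   # 25.6 km x 12.8 km grid
    print(dv.shape, dv.std())
    ```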

  18. Thermal structure, magmatism, and evolution of fast-spreading mid-ocean ridges

    NASA Astrophysics Data System (ADS)

    Shah, Anjana K.

    2001-07-01

    We use thin-plate flexural models and high-resolution magnetic field data to constrain magmatic and tectonic processes at fast-spreading mid-ocean ridges, and how these processes evolve over time. Models are constructed to predict axial high topography and gravity for a given thermal structure of the crust and mechanical structure of the lithosphere. Whereas previous models predicted that the high is due to a narrow column of buoyant material extending to tens of kilometers depth in the mantle, we find that the high can also be produced by a narrow zone of crustal melt and a lithosphere that thickens rapidly with distance from the axis. We consider the effects of plastic weakening using a yield strength envelope to map bending stresses associated with deflections. Near-surface stresses are extensional at distances which closely resemble regions of normal fault growth at certain axial highs, suggesting that bending stresses play a significant role in normal faulting at fast-spreading ridges. We further develop the model to simulate ridge jumps. We fit topography and gravity data of a plume-influenced region which has recently experienced a ridge jump. Steep sides of the new high are best modeled as constructional features. An abandoned ridge remains at the old axis due to plate strengthening associated with crustal cooling. By fitting more than one profile along-axis, we constrain the accretion history at the new ridge. We also predict that an inconsistency between bull's eye mantle Bouguer anomaly lows and a nearly constant along-axis depth can be resolved by assuming that a low-density zone below the axis widens near the bull's eye center. Finally, we study high-resolution magnetic field data at two regions of the East Pacific Rise with different eruptive histories. The anomalies are used to map relatively fresh pillow mounds, void space created by lava tubes and lobate flows, and dike complexes which extend along the length of recent fissure eruptions. The dikes suggest episodic eruptive histories in these regions, and have implications regarding the migration history of the area.

  19. Escape of asteroids from the main belt

    NASA Astrophysics Data System (ADS)

    Granvik, Mikael; Morbidelli, Alessandro; Vokrouhlický, David; Bottke, William F.; Nesvorný, David; Jedicke, Robert

    2017-02-01

    Aims: We locate escape routes from the main asteroid belt, particularly into the near-Earth-object (NEO) region, and estimate the relative fluxes for different escape routes as a function of object size under the influence of the Yarkovsky semimajor-axis drift. Methods: We integrated the orbits of 78 355 known and 14 094 cloned main-belt objects and Cybele and Hilda asteroids (hereafter collectively called MBOs) for 100 Myr and recorded the characteristics of the escaping objects. The selected sample of MBOs with perihelion distance q > 1.3 au and semimajor axis a < 4.1 au is essentially complete, with an absolute magnitude limit ranging from HV < 15.9 in the inner belt (a < 2.5 au) to HV < 14.4 in the outer belt (2.5 au < a < 4.1 au). We modeled the semimajor-axis drift caused by the Yarkovsky force and assigned four different sizes (diameters of 0.1, 0.3, 1.0, and 3.0 km) and random spin obliquities (either 0 deg or 180 deg) for each test asteroid. Results: We find more than ten obvious escape routes from the asteroid belt to the NEO region, and they typically coincide with low-order mean-motion resonances with Jupiter and secular resonances. The locations of the escape routes are independent of the semimajor-axis drift rate and thus are also independent of the asteroid diameter. The locations of the escape routes are likewise unaffected when we added a model for Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) cycles coupled with secular evolution of the rotation pole as a result of the solar gravitational torque. A Yarkovsky-only model predicts a flux of asteroids entering the NEO region that is too high compared to the observationally constrained flux, and the discrepancy grows larger for smaller asteroids. A combined Yarkovsky and YORP model predicts a flux of small NEOs that is approximately a factor of 5 too low compared to an observationally constrained estimate. This suggests that the characteristic timescale of the YORP cycle is longer than our canonical YORP model predicts.
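
    The size dependence of the semimajor-axis drift can be illustrated with a simple scaling: the Yarkovsky drift rate varies roughly inversely with diameter and with the cosine of the spin obliquity. The calibration constant below is a typical order-of-magnitude value, not the paper's adopted thermal model.

    ```python
    # Illustrative drift-rate assignment (assumed typical calibration, not the
    # paper's exact values): Yarkovsky da/dt scales roughly as 1/D and with
    # cos(obliquity), positive for prograde (0 deg) and negative for retrograde
    # (180 deg) rotators.
    import numpy as np

    DADT_1KM = 2.0e-4   # au/Myr for a D = 1 km asteroid (order-of-magnitude value)

    def yarkovsky_drift(diameter_km, obliquity_deg):
        return DADT_1KM * (1.0 / diameter_km) * np.cos(np.radians(obliquity_deg))

    for d in (0.1, 0.3, 1.0, 3.0):
        for obl in (0.0, 180.0):
            print(f"D = {d:4.1f} km, obliquity = {obl:5.1f} deg -> "
                  f"da/dt = {yarkovsky_drift(d, obl):+.2e} au/Myr")
    ```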

  20. Proposals for enhanced health risk assessment and stratification in an integrated care scenario

    PubMed Central

    Dueñas-Espín, Ivan; Vela, Emili; Pauws, Steffen; Bescos, Cristina; Cano, Isaac; Cleries, Montserrat; Contel, Joan Carles; de Manuel Keenoy, Esteban; Garcia-Aymerich, Judith; Gomez-Cabrero, David; Kaye, Rachelle; Lahr, Maarten M H; Lluch-Ariet, Magí; Moharra, Montserrat; Monterde, David; Mora, Joana; Nalin, Marco; Pavlickova, Andrea; Piera, Jordi; Ponce, Sara; Santaeugenia, Sebastià; Schonenberg, Helen; Störk, Stefan; Tegner, Jesper; Velickovski, Filip; Westerteicher, Christoph; Roca, Josep

    2016-01-01

    Objectives: Population-based health risk assessment and stratification are considered highly relevant for large-scale implementation of integrated care by facilitating services design and case identification. The principal objective of the study was to analyse five health-risk assessment strategies and health indicators used in the five regions participating in the Advancing Care Coordination and Telehealth Deployment (ACT) programme (http://www.act-programme.eu). The second purpose was to elaborate on strategies toward enhanced health risk predictive modelling in the clinical scenario. Settings: The five ACT regions: Scotland (UK), Basque Country (ES), Catalonia (ES), Lombardy (I) and Groningen (NL). Participants: Responsible teams for regional data management in the five ACT regions. Primary and secondary outcome measures: We characterised and compared risk assessment strategies among ACT regions by analysing operational health risk predictive modelling tools for population-based stratification, as well as available health indicators at regional level. The analysis of the risk assessment tool deployed in Catalonia in 2015 (GMAs, Adjusted Morbidity Groups) was used as a basis to propose how population-based analytics could contribute to clinical risk prediction. Results: There was consensus on the need for a population health approach to generate health risk predictive modelling. However, this strategy was fully in place only in two ACT regions: Basque Country and Catalonia. We found marked differences among regions in health risk predictive modelling tools and health indicators, and identified key factors constraining their comparability. The research proposes means to overcome current limitations and the use of population-based health risk prediction for enhanced clinical risk assessment. Conclusions: The results indicate the need for further efforts to improve both comparability and flexibility of current population-based health risk predictive modelling approaches. Applicability and impact of the proposals for enhanced clinical risk assessment require prospective evaluation. PMID:27084274

  1. Challenging terrestrial biosphere models with data from the long-term multifactor Prairie Heating and CO2 Enrichment experiment

    NASA Astrophysics Data System (ADS)

    De Kauwe, M. G.; Medlyn, B.; Walker, A.; Zaehle, S.; Pendall, E.; Norby, R. J.

    2017-12-01

    Multifactor experiments are often advocated as important for advancing models, yet, to date, such models have only been tested against single-factor experiments. We applied 10 models to the multifactor Prairie Heating and CO2 Enrichment (PHACE) experiment in Wyoming, USA. Our goals were to investigate how multifactor experiments can be used to constrain models and to identify a road map for model improvement. We found that models performed poorly in ambient conditions: comparison with data highlighted model failures, particularly with respect to carbon allocation, phenology, and the impact of water stress on phenology. Performance against the observations from single-factor treatments was also relatively poor. In addition, similar responses were predicted for different reasons across models: there were large differences among models in sensitivity to water stress and, among the nitrogen cycle models, nitrogen availability during the experiment. Models were also unable to capture observed treatment effects on phenology: they overestimated the effect of warming on leaf onset and did not allow CO2-induced water savings to extend the growing season length. Observed interactive (CO2 × warming) treatment effects were subtle and contingent on water stress, phenology, and species composition. As the models did not correctly represent these processes under ambient and single-factor conditions, little extra information was gained by comparing model predictions against interactive responses. We outline a series of key areas in which this and future experiments could be used to improve model predictions of grassland responses to global change.

  2. 3D RNA and functional interactions from evolutionary couplings

    PubMed Central

    Weinreb, Caleb; Riesselman, Adam; Ingraham, John B.; Gross, Torsten; Sander, Chris; Marks, Debora S.

    2016-01-01

    Summary: Non-coding RNAs are ubiquitous, but the discovery of new RNA gene sequences far outpaces research on their structure and functional interactions. We mine the evolutionary sequence record to derive precise information about function and structure of RNAs and RNA-protein complexes. As in protein structure prediction, we use maximum entropy global probability models of sequence co-variation to infer evolutionarily constrained nucleotide-nucleotide interactions within RNA molecules, and nucleotide-amino acid interactions in RNA-protein complexes. The predicted contacts allow all-atom blinded 3D structure prediction at good accuracy for several known RNA structures and RNA-protein complexes. For unknown structures, we predict contacts in 160 non-coding RNA families. Beyond 3D structure prediction, evolutionary couplings help identify important functional interactions, e.g., at switch points in riboswitches and at a complex nucleation site in HIV. Aided by accelerating sequence accumulation, evolutionary coupling analysis can accelerate the discovery of functional interactions and 3D structures involving RNA. PMID:27087444
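
    A minimal sketch of the coupling analysis under a mean-field approximation (toy alignment, illustrative regularization; not the paper's full pipeline): one-hot encode the alignment, invert a regularized covariance matrix, and rank nucleotide-nucleotide pairs by the average-product-corrected Frobenius norm of the coupling blocks.

    ```python
    # Mean-field sketch of evolutionary couplings on a toy RNA alignment
    # (illustrative only; not the method or data used in the paper).
    import numpy as np

    alphabet = "ACGU-"
    msa = ["ACGGUA", "ACGCUA", "AUGGUA", "ACGGCA", "ACGGUA"]   # toy alignment
    L, q = len(msa[0]), len(alphabet)

    X = np.zeros((len(msa), L * q))            # one-hot encoding of the alignment
    for s, seq in enumerate(msa):
        for i, ch in enumerate(seq):
            X[s, i * q + alphabet.index(ch)] = 1.0

    C = np.cov(X, rowvar=False) + 0.5 * np.eye(L * q)   # regularized covariance
    J = -np.linalg.inv(C)                               # mean-field couplings

    F = np.zeros((L, L))                                # Frobenius-norm scores per pair
    for i in range(L):
        for j in range(L):
            if i != j:
                F[i, j] = np.linalg.norm(J[i*q:(i+1)*q, j*q:(j+1)*q])
    apc = np.outer(F.mean(axis=1), F.mean(axis=0)) / F.mean()
    scores = F - apc                                    # average-product correction
    np.fill_diagonal(scores, -np.inf)
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    print(f"top predicted contact: positions {i} and {j}")
    ```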

  3. Comparison of full field and anomaly initialisation for decadal climate prediction: towards an optimal consistency between the ocean and sea-ice anomaly initialisation state

    NASA Astrophysics Data System (ADS)

    Volpi, Danila; Guemas, Virginie; Doblas-Reyes, Francisco J.

    2017-08-01

    Decadal prediction exploits sources of predictability from both the internal variability through the initialisation of the climate model from observational estimates, and the external radiative forcings. When a model is initialised with the observed state at the initial time step (Full Field Initialisation—FFI), the forecast run drifts towards the biased model climate. Distinguishing between the climate signal to be predicted and the model drift is a challenging task, because the application of a-posteriori bias correction has the risk of removing part of the variability signal. The anomaly initialisation (AI) technique aims at addressing the drift issue by answering the following question: if the model is allowed to start close to its own attractor (i.e. its biased world), but the phase of the simulated variability is constrained toward the contemporaneous observed one at the initialisation time, does the prediction skill improve? The relative merits of the FFI and AI techniques applied respectively to the ocean component and the ocean and sea ice components simultaneously in the EC-Earth global coupled model are assessed. For both strategies the initialised hindcasts show better skill than historical simulations for the ocean heat content and AMOC along the first two forecast years, for sea ice and PDO along the first forecast year, while for AMO the improvements are statistically significant for the first two forecast years. The AI in the ocean and sea ice components significantly improves the skill of the Arctic sea surface temperature over the FFI.

  4. Using Lidar and Radar measurements to constrain predictions of forest ecosystem structure and function.

    PubMed

    Antonarakis, Alexander S; Saatchi, Sassan S; Chazdon, Robin L; Moorcroft, Paul R

    2011-06-01

    Insights into vegetation and aboveground biomass dynamics within terrestrial ecosystems have come almost exclusively from ground-based forest inventories that are limited in their spatial extent. Lidar and synthetic-aperture Radar are promising remote-sensing-based techniques for obtaining comprehensive measurements of forest structure at regional to global scales. In this study we investigate how Lidar-derived forest heights and Radar-derived aboveground biomass can be used to constrain the dynamics of the ED2 terrestrial biosphere model. Four-year simulations initialized with Lidar and Radar structure variables were compared against simulations initialized from forest-inventory data and output from a long-term potential-vegetation simulation. Both height and biomass initializations from Lidar and Radar measurements significantly improved the representation of forest structure within the model, eliminating the bias of too many large trees that arose in the potential-vegetation-initialized simulation. The Lidar and Radar initializations decreased the proportion of larger trees estimated by the potential vegetation by approximately 20-30%, matching the forest inventory. This resulted in improved predictions of ecosystem-scale carbon fluxes and structural dynamics compared to predictions from the potential-vegetation simulation. The Radar initialization produced biomass values that were 75% closer to the forest inventory, with Lidar initializations producing canopy height values closest to the forest inventory. Net primary production values for the Radar and Lidar initializations were around 6-8% closer to the forest inventory. Correcting the Lidar and Radar initializations for forest composition resulted in improved biomass and basal-area dynamics as well as leaf-area index. Correcting the Lidar and Radar initializations for forest composition and fine-scale structure by combining the remote-sensing measurements with ground-based inventory data further improved predictions, suggesting that further improvements of structural and carbon-flux metrics will also depend on obtaining reliable estimates of forest composition and accurate representation of the fine-scale vertical and horizontal structure of plant canopies.

  5. Whole-lake invasive crayfish removal and qualitative modeling reveal habitat-specific food web topology

    DOE PAGES

    Hansen, Gretchen J. A.; Tunney, Tyler D.; Winslow, Luke A.; ...

    2017-02-10

    Patterning of the presence/absence of food web linkages (hereafter topology) is a fundamental characteristic of ecosystems that can influence species responses to perturbations. However, the insight from food web topology into dynamic effects of perturbations on species is potentially hindered because most described topologies represent data integrated across spatial and temporal scales. We conducted a 10-year, whole-lake experiment in which we removed invasive rusty crayfish (Orconectes rusticus) from a 64-ha north-temperate lake and monitored responses of multiple trophic levels. We compared species responses observed in two sub-habitats to the responses predicted from all topologies of an integrated, literature-informed base food web model of 32 potential links. Out of 4.3 billion possible topologies, only 308,833 (0.0072%) predicted responses that qualitatively matched observed species responses in cobble habitat, and only 12,673 (0.0003%) matched observed responses in sand habitat. Furthermore, when constrained to predictions that both matched observed responses and were highly reliable (i.e., predictions were robust to link strength values), only 5040 (0.0001%) and 140 (0.000003%) topologies were identified for cobble and sand habitats, respectively. A small number of linkages were nearly always present in these valid, reliable networks in sand, while a greater variety of possible network configurations were possible in cobble. Direct links involving invasive rusty crayfish were more important in cobble, while indirect effects involving Lepomis spp. were more important in sand. Notably, the importance of individual species linkages differed dramatically between cobble and sand sub-habitats within a single lake, even though species composition was identical. Furthermore, because the true topology of food webs is difficult to determine, constraining topologies to include spatial resolution that matches observed experimental outcomes may reduce the possibilities to a small number of plausible alternatives.
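
    A minimal sketch of the screening logic (hypothetical three-species web and observed response signs, not the paper's 32-link model): toggle optional links on and off, draw random interaction strengths for each candidate topology, and keep topologies whose predicted press-perturbation responses to the removal match the observed signs of change.

    ```python
    # Hypothetical 3-species sketch of the topology-screening idea (not the
    # paper's 32-link model): enumerate topologies by toggling optional links,
    # draw random interaction strengths, and keep topologies whose predicted
    # responses to removing species 0 match the observed signs of change.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    observed_signs = np.array([0, +1, -1])      # species 0 removed; 1 increased; 2 decreased
    optional_links = [(1, 0), (2, 0), (2, 1)]   # (row, col): effect of column species on row species

    valid = []
    for included in itertools.product([False, True], repeat=len(optional_links)):
        matches = 0
        for _ in range(200):                     # sample random strengths for this topology
            A = -np.eye(3)                       # self-limitation on the diagonal
            for (i, j), on in zip(optional_links, included):
                if on:
                    A[i, j] = rng.uniform(-1, 1)
            if np.max(np.linalg.eigvals(A).real) >= 0:
                continue                         # keep only stable communities
            # Response to a sustained negative press (removal) of species 0:
            # dN* = -A^{-1} dI with dI = -e_0, i.e. the first column of A^{-1}.
            response = np.linalg.inv(A)[:, 0]
            pred = np.sign(np.round(response, 6))
            if np.all((observed_signs == 0) | (pred == observed_signs)):
                matches += 1
        if matches > 0:
            valid.append((included, matches / 200))

    print(f"{len(valid)} of {2**len(optional_links)} candidate topologies reproduce the observed signs")
    ```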

  6. Whole-lake invasive crayfish removal and qualitative modeling reveal habitat-specific food web topology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Gretchen J. A.; Tunney, Tyler D.; Winslow, Luke A.

    Patterning of the presence/absence of food web linkages (hereafter topology) is a fundamental characteristic of ecosystems that can influence species responses to perturbations. However, the insight from food web topology into dynamic effects of perturbations on species is potentially hindered because most described topologies represent data integrated across spatial and temporal scales. We conducted a 10-year, whole-lake experiment in which we removed invasive rusty crayfish (Orconectes rusticus) from a 64-ha north-temperate lake and monitored responses of multiple trophic levels. We compared species responses observed in two sub-habitats to the responses predicted from all topologies of an integrated, literature-informed base food web model of 32 potential links. Out of 4.3 billion possible topologies, only 308,833 (0.0072%) predicted responses that qualitatively matched observed species responses in cobble habitat, and only 12,673 (0.0003%) matched observed responses in sand habitat. Furthermore, when constrained to predictions that both matched observed responses and were highly reliable (i.e., predictions were robust to link strength values), only 5040 (0.0001%) and 140 (0.000003%) topologies were identified for cobble and sand habitats, respectively. A small number of linkages were nearly always present in these valid, reliable networks in sand, while a greater variety of possible network configurations were possible in cobble. Direct links involving invasive rusty crayfish were more important in cobble, while indirect effects involving Lepomis spp. were more important in sand. Notably, the importance of individual species linkages differed dramatically between cobble and sand sub-habitats within a single lake, even though species composition was identical. Furthermore, because the true topology of food webs is difficult to determine, constraining topologies to include spatial resolution that matches observed experimental outcomes may reduce the possibilities to a small number of plausible alternatives.

  7. Extensional fault geometry and its flexural isostatic response during the formation of the Iberia - Newfoundland conjugate rifted margins

    NASA Astrophysics Data System (ADS)

    Gómez-Romeu, Júlia; Kusznir, Nick; Manatschal, Gianreto; Roberts, Alan

    2017-04-01

    Despite magma-poor rifted margins having been extensively studied for the last 20 years, the evolution of extensional fault geometry and the flexural isostatic response to faulting remain debated topics. We investigate how the flexural isostatic response to faulting controls the structural development of the distal part of rifted margins in the hyper-extended domain and the resulting sedimentary record. In particular, we address an important question concerning the geometry and evolution of extensional faults within distal hyper-extended continental crust: are the seismically observed extensional fault blocks in this region allochthons from the upper plate or are they autochthons of the lower plate? To achieve our aim, we focus on the west Iberian rifted continental margin along the TGS and LG12 seismic profiles. Our strategy is to use a kinematic forward model (RIFTER) to model the tectonic and stratigraphic development of the west Iberia margin along TGS-LG12 and quantitatively test and calibrate the model against breakup paleo-bathymetry, crustal basement thickness and well data. RIFTER incorporates the flexural isostatic response to extensional faulting, crustal thinning, lithosphere thermal loads, sedimentation and erosion. The model predicts the structural and stratigraphic consequences of recursive sequential faulting and sedimentation. The target data used to constrain model predictions consist of two components: (i) gravity anomaly inversion is used to determine Moho depth, crustal basement thickness and continental lithosphere thinning and (ii) reverse post-rift subsidence modelling consisting of flexural backstripping, decompaction and reverse post-rift thermal subsidence modelling is used to give paleo-bathymetry at breakup time. We show that successful modelling of the structural and stratigraphic development of the TGS-LG12 Iberian margin transect also requires the simultaneous modelling of the Newfoundland conjugate margin, which we constrain using target data from the SCREECH 2 seismic profile. We also show that for the successful modelling and quantitative validation of the lithosphere hyper-extension stage it is necessary to first have a well-calibrated model of the necking phase. Not surprisingly, the evolution of a rifted continental margin cannot be modelled without modelling and calibration of its conjugate margin.

  8. Construction of ground-state preserving sparse lattice models for predictive materials simulations

    NASA Astrophysics Data System (ADS)

    Huang, Wenxuan; Urban, Alexander; Rong, Ziqin; Ding, Zhiwei; Luo, Chuan; Ceder, Gerbrand

    2017-08-01

    First-principles-based cluster expansion models are the dominant approach in ab initio thermodynamics of crystalline mixtures, enabling the prediction of phase diagrams and novel ground states. However, despite recent advances, the construction of accurate models still requires a careful and time-consuming manual parameter tuning process for ground-state preservation, since this property is not guaranteed by default. In this paper, we present a systematic and mathematically sound method to obtain cluster expansion models that are guaranteed to preserve the ground states of their reference data. The method builds on the recently introduced compressive sensing paradigm for cluster expansion and employs quadratic programming to impose constraints on the model parameters. The robustness of our methodology is illustrated for two lithium transition metal oxides with relevance for Li-ion battery cathodes, i.e., Li2xFe2(1-x)O2 and Li2xTi2(1-x)O2, for which the construction of cluster expansion models with compressive sensing alone has proven to be challenging. We demonstrate that our method not only guarantees ground-state preservation on the set of reference structures used for the model construction, but also that out-of-sample ground-state preservation up to relatively large supercell size is achievable through a rapidly converging iterative refinement. This method provides a general tool for building robust, compressed and constrained physical models with predictive power.
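
    A simplified sketch of the constrained fit (synthetic correlations and energies; the inequality constraints are reduced to "each reference ground state lies below every competing structure", which stands in for the full convex-hull conditions, and SLSQP stands in for the paper's quadratic program):

    ```python
    # Sketch of a ground-state-preserving fit (hypothetical data): minimize the
    # least-squares error of a cluster expansion E(s) = X(s) . J subject to linear
    # inequality constraints that keep each reference ground state below its
    # competing structures.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_struct, n_clusters = 20, 6
    X = rng.standard_normal((n_struct, n_clusters))             # correlation matrix (one row per structure)
    E_dft = X @ rng.standard_normal(n_clusters) + 0.05 * rng.standard_normal(n_struct)  # "DFT" energies

    ground_states = [0, 1]                                      # indices of reference ground states
    constraints = []
    for g in ground_states:
        for s in range(n_struct):
            if s in ground_states:
                continue
            # require E_model(s) - E_model(g) >= 0 so the ground state stays lowest
            constraints.append({"type": "ineq",
                                "fun": lambda J, s=s, g=g: X[s] @ J - X[g] @ J})

    def rmse(J):
        return np.sqrt(np.mean((X @ J - E_dft) ** 2))

    res = minimize(rmse, x0=np.zeros(n_clusters), method="SLSQP", constraints=constraints)
    print("constrained-fit RMSE:", rmse(res.x))
    ```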

  9. Assessing spatiotemporal changes in forest carbon turnover times in observational data and models

    NASA Astrophysics Data System (ADS)

    Yu, K.; Smith, W. K.; Trugman, A. T.; van Mantgem, P.; Peng, C.; Condit, R.; Anderegg, W.

    2017-12-01

    Forests influence global carbon and water cycles, biophysical land-atmosphere feedbacks, and atmospheric composition. The capacity of forests to sequester atmospheric CO2 in a changing climate depends not only on the response of carbon uptake (i.e., gross primary productivity) but also on the simultaneous change in carbon residence time. However, changes in carbon residence time with climate change are uncertain, impacting the accuracy of predictions of future terrestrial carbon cycle dynamics. Here, we use long-term forest inventory data representative of tropical, temperate, and boreal forests; satellite-based estimates of net primary productivity and vegetation carbon stock; and six models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to investigate spatiotemporal trends in carbon residence time and its relation to climate. Forest inventory and satellite-based estimates of carbon residence time show a pervasive decreasing trend across global forests. In contrast, the CMIP5 models diverge in predicting historical and future trends in carbon residence time. Divergence across the CMIP5 models indicates that carbon turnover times are not well constrained by observations, which likely contributes to large variability in future carbon cycle projections.
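
    The diagnostic underlying such comparisons is the simple ratio of carbon stock to carbon flux; a one-line worked example with hypothetical values:

    ```python
    # Worked example of the turnover-time diagnostic (hypothetical numbers):
    # carbon residence/turnover time ~ vegetation carbon stock / net primary productivity.
    c_veg = 12.0      # kg C m^-2, vegetation carbon stock
    npp = 0.6         # kg C m^-2 yr^-1, net primary productivity
    tau = c_veg / npp
    print(f"turnover time = {tau:.0f} years")   # -> 20 years
    ```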

  10. Improved GIA Correction and Antarctic Contribution to Sea-level Rise Observed by GRACE

    NASA Astrophysics Data System (ADS)

    Ivins, Erik; James, Thomas; Wahr, John; Schrama, Ernst; Landerer, Felix; Simon, Karen

    2013-04-01

    Measurement of continent-wide glacial isostatic adjustment (GIA) is needed to interpret satellite-based trends for the grounded ice mass change of the Antarctic ice sheet (AIS). This is especially true for trends determined from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Three data sets have matured to the point where they can be used to shrink the range of possible GIA models for Antarctica: the glacial geological record has expanded to include exposure ages using 10Be and 26Al measurements that constrain past thickness of the ice sheet, modelled ice core records now better constrain the temporal variation in past rates of snow accumulation, and Global Positioning System (GPS) vertical rate trends from across the continent are now available. The volume changes associated with Antarctic ice loading and unloading during the past 21 thousand years (21 ka) are smaller than previously thought, generating model present-day uplift rates that are consistent with GPS observations. We construct an ice sheet history that is designed to predict maximum volume changes, and in particular, maximum Holocene change. This ice sheet model drives a forward model prediction of the GIA gravity signal that, in turn, should give maximum GIA response predictions. The apparent surface mass change component of GIA is re-evaluated to be +55 ± 13 Gt/yr by considering a revised ice history model and a parameter search for vertical motion predictions that best fit the GPS observations at 18 high-quality stations. Although the GIA model spans a wide range of possible earth rheological structure values, the data are not yet sufficient for solving for a preferred value of upper and lower mantle viscosity, nor for a preferred lithospheric thickness. GRACE monthly solutions from CSR-RL04 release time series from Jan. 2003 through the beginning of Jan. 2012, uncorrected for GIA, yield an ice mass rate of +2.9 ± 34 Gt/yr. A new rough upper bound to the GIA correction is about 60-65 Gt/yr. The new correction increases the solved-for ice mass imbalance of Antarctica to -57 ± 34 Gt/yr. The revised GIA correction is smaller than past GRACE estimates by about 50 to 90 Gt/yr. The new upper bound to sea-level rise from AIS mass loss averaged over the time span 2003.0 - 2012.0 is about 0.16 ± 0.09 mm/yr. We discuss the differences in spatio-temporal character of the gain-loss regimes of Antarctica over the observing period.
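
    The arithmetic linking the quoted numbers is simple: subtracting the GIA mass correction from the apparent GRACE trend gives the ice-mass imbalance, and dividing by roughly 362 Gt per millimetre of global-mean sea level (a standard conversion factor, not stated in the abstract) gives the sea-level contribution.

    ```python
    # Arithmetic behind the quoted numbers (362 Gt per mm of global-mean sea level
    # is a standard approximate conversion, not taken from the abstract).
    grace_uncorrected = 2.9      # Gt/yr, apparent ice-mass rate before GIA correction
    gia_correction = 60.0        # Gt/yr, rough upper bound on the GIA mass signal
    ice_mass_rate = grace_uncorrected - gia_correction
    print(f"ice-mass imbalance ~ {ice_mass_rate:.0f} Gt/yr")             # ~ -57 Gt/yr
    print(f"sea-level contribution ~ {-ice_mass_rate / 362:.2f} mm/yr")  # ~ 0.16 mm/yr
    ```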

  11. Single neutral pion production by charged-current $\bar{\nu}_\mu$ interactions

    DOE PAGES

    Le, T.; Paomino, J. L.; Aliaga, L.; ...

    2015-10-07

    We studied single neutral pion production via muon antineutrino charged-current interactions in plastic scintillator (CH) using the MINERvA detector exposed to the NuMI low-energy, wideband antineutrino beam at Fermilab. Measurement of this process constrains models of neutral pion production in nuclei, which is important because the neutral-current analog is a background for appearance oscillation experiments. Furthermore, the differential cross sections for π0 momentum and production angle, for events with a single observed π0 and no charged pions, are presented and compared to model predictions. These results comprise the first measurement of the π0 kinematics for this process.

  12. ICHEP 2014 Summary: Theory Status after the First LHC Run

    NASA Astrophysics Data System (ADS)

    Pich, Antonio

    2016-04-01

    A brief overview of the main highlights discussed at ICHEP 2014 is presented. The experimental data confirm that the scalar boson discovered at the LHC couples to other particles as predicted in the Standard Model. This constitutes a great success of the present theoretical paradigm, which has been confirmed as the correct description at the electroweak scale. At the same time, the negative searches for signals of new phenomena tightly constrain many new-physics scenarios, challenging previous theoretical wisdom and opening new perspectives in fundamental physics. Fresh ideas are needed to face the many pending questions unanswered within the Standard Model framework.

  13. Single neutral pion production by charged-current $\bar{\nu}_\mu$ interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, T.; Paomino, J. L.; Aliaga, L.

    We studied single neutral pion production via muon antineutrino charged-current interactions in plastic scintillator (CH) using the MINERvA detector exposed to the NuMI low-energy, wideband antineutrino beam at Fermilab. Measurement of this process constrains models of neutral pion production in nuclei, which is important because the neutral-current analog is a background for appearance oscillation experiments. Furthermore, the differential cross sections for π0 momentum and production angle, for events with a single observed π0 and no charged pions, are presented and compared to model predictions. These results comprise the first measurement of the π0 kinematics for this process.

  14. Interplay of I-TASSER and QUARK for template-based and ab initio protein structure prediction in CASP10

    PubMed Central

    Zhang, Yang

    2014-01-01

    We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. PMID:23760925

  15. Interplay of I-TASSER and QUARK for template-based and ab initio protein structure prediction in CASP10.

    PubMed

    Zhang, Yang

    2014-02-01

    We develop and test a new pipeline in CASP10 to predict protein structures based on an interplay of I-TASSER and QUARK for both free-modeling (FM) and template-based modeling (TBM) targets. The most noteworthy observation is that sorting through the threading template pool using the QUARK-based ab initio models as probes allows the detection of distant-homology templates which might be ignored by the traditional sequence profile-based threading alignment algorithms. Further template assembly refinement by I-TASSER resulted in successful folding of two medium-sized FM targets with >150 residues. For TBM, the multiple threading alignments from LOMETS are, for the first time, incorporated into the ab initio QUARK simulations, which were further refined by I-TASSER assembly refinement. Compared with the traditional threading assembly refinement procedures, the inclusion of the threading-constrained ab initio folding models can consistently improve the quality of the full-length models as assessed by the GDT-HA and hydrogen-bonding scores. Despite the success, significant challenges still exist in domain boundary prediction and consistent folding of medium-size proteins (especially beta-proteins) for nonhomologous targets. Further developments of sensitive fold-recognition and ab initio folding methods are critical for solving these problems. Copyright © 2013 Wiley Periodicals, Inc.

  16. Comparing models for IMF variation across cosmological time in Milky Way-like galaxies

    NASA Astrophysics Data System (ADS)

    Guszejnov, Dávid; Hopkins, Philip F.; Ma, Xiangcheng

    2017-12-01

    One of the key observations regarding the stellar initial mass function (IMF) is its near-universality in the Milky Way (MW), which provides a powerful way to constrain different star formation models that predict the IMF. However, those models are almost universally 'cloud-scale' or smaller - they take as input or simulate single giant molecular clouds (GMCs), clumps, or cores, and predict the resulting IMF as a function of the cloud properties. Without a model for the progenitor properties of all clouds that formed the stars at different locations in the MW (including ancient stellar populations formed in high-redshift, likely gas-rich dwarf progenitor galaxies that looked little like the Galaxy today), the predictions cannot be fully explored nor safely applied to 'live' cosmological calculations of the IMF in different galaxies at different cosmological times. We therefore combine a suite of high-resolution cosmological simulations (from the Feedback In Realistic Environments project), which form MW-like galaxies with reasonable star formation properties and explicitly resolve massive GMCs, with various proposed cloud-scale IMF models. We apply the models independently to every star particle formed in the simulations to synthesize the predicted IMF in the present-day galaxy. We explore models where the IMF depends on Jeans mass, sonic or 'turbulent Bonnor-Ebert' mass, fragmentation with a polytropic equation of state, or where it is self-regulated by protostellar feedback. We show that all of these models, except the feedback-regulated ones, predict far more variation (∼0.6-1 dex 1σ scatter in the IMF turnover mass) in the simulations than is observed in the MW.
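
    To make the Jeans-mass dependence concrete, a small worked example evaluates the standard thermal Jeans mass for typical molecular-cloud conditions; the formula is textbook and the temperature and density are illustrative, not values taken from the paper.

    ```python
    # Thermal Jeans mass for typical molecular-cloud conditions (standard formula;
    # the temperature and density below are illustrative, not from the paper).
    import numpy as np

    k_B = 1.380649e-16   # erg/K
    G = 6.674e-8         # cm^3 g^-1 s^-2
    m_H = 1.6726e-24     # g
    M_sun = 1.989e33     # g

    def jeans_mass(T, n_H2, mu=2.33):
        rho = mu * m_H * n_H2                       # mass density in g/cm^3
        return (5 * k_B * T / (G * mu * m_H)) ** 1.5 * (3 / (4 * np.pi * rho)) ** 0.5

    print(f"M_J ~ {jeans_mass(10.0, 1e4) / M_sun:.1f} solar masses")   # ~ a few M_sun
    ```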

  17. The Systematics of Strong Lens Modeling Quantified: The Effects of Constraint Selection and Redshift Information on Magnification, Mass, and Multiple Image Predictability

    NASA Astrophysics Data System (ADS)

    Johnson, Traci L.; Sharon, Keren

    2016-11-01

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  18. Extracting electron transfer coupling elements from constrained density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu Qin; Van Voorhis, Troy

    2006-10-28

    Constrained density functional theory (DFT) is a useful tool for studying electron transfer (ET) reactions. It can straightforwardly construct the charge-localized diabatic states and give a direct measure of the inner-sphere reorganization energy. In this work, a method is presented for calculating the electronic coupling matrix element (Hab) based on constrained DFT. This method completely avoids the use of ground-state DFT energies because they are known to irrationally predict fractional electron transfer in many cases. Instead it makes use of the constrained DFT energies and the Kohn-Sham wave functions for the diabatic states in a careful way. Test calculations on the Zn2+ and the benzene-Cl atom systems show that the new prescription yields reasonable agreement with the standard generalized Mulliken-Hush method. We then proceed to produce the diabatic and adiabatic potential energy curves along the reaction pathway for intervalence ET in the tetrathiafulvalene-diquinone (Q-TTF-Q) anion. While the unconstrained DFT curve has no reaction barrier and gives Hab ≈ 17 kcal/mol, which qualitatively disagrees with experimental results, the Hab calculated from constrained DFT is about 3 kcal/mol and the generated ground state has a barrier height of 1.70 kcal/mol, successfully predicting (Q-TTF-Q)- to be a class II mixed-valence compound.
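
    For reference, the standard two-state generalized Mulliken-Hush estimate that the constrained-DFT coupling is compared against can be written as follows (textbook form, not quoted in the abstract):

    ```latex
    H_{ab} = \frac{\mu_{12}\,\Delta E_{12}}{\sqrt{(\Delta \mu_{12})^{2} + 4\,\mu_{12}^{2}}}
    ```

    where \Delta E_{12} is the adiabatic energy gap, \mu_{12} the adiabatic transition dipole moment, and \Delta \mu_{12} the difference of the adiabatic dipole moments along the charge-transfer direction.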

  19. Nongrowing season methane emissions-a significant component of annual emissions across northern ecosystems.

    PubMed

    Treat, Claire C; Bloom, A Anthony; Marushchak, Maija E

    2018-03-22

    Wetlands are the single largest natural source of atmospheric methane (CH4), a greenhouse gas, and occur extensively in the northern hemisphere. Large discrepancies remain between "bottom-up" and "top-down" estimates of northern CH4 emissions. To explore whether these discrepancies are due to poor representation of nongrowing season CH4 emissions, we synthesized nongrowing season and annual CH4 flux measurements from temperate, boreal, and tundra wetlands and uplands. Median nongrowing season wetland emissions ranged from 0.9 g/m2 in bogs to 5.2 g/m2 in marshes and were dependent on moisture, vegetation, and permafrost. Annual wetland emissions ranged from 0.9 g m-2 year-1 in tundra bogs to 78 g m-2 year-1 in temperate marshes. Uplands varied from CH4 sinks to CH4 sources with a median annual flux of 0.0 ± 0.2 g m-2 year-1. The measured fraction of annual CH4 emissions during the nongrowing season (observed: 13% to 47%) was significantly larger than that predicted by two process-based model ensembles, especially between 40° and 60°N (modeled: 4% to 17%). Constraining the model ensembles with the measured nongrowing fraction increased total nongrowing season and annual CH4 emissions. Using this constraint, the modeled nongrowing season wetland CH4 flux from >40° north was 6.1 ± 1.5 Tg/year, three times greater than the nongrowing season emissions of the unconstrained model ensemble. The annual wetland CH4 flux was 37 ± 7 Tg/year from the data-constrained model ensemble, 25% larger than the unconstrained ensemble. Considering nongrowing season processes is critical for accurately estimating CH4 emissions from high-latitude ecosystems, and necessary for constraining the role of wetland emissions in a warming climate. © 2018 John Wiley & Sons Ltd.

  20. Constraints on geomagnetic secular variation modeling from electromagnetism and fluid dynamics of the Earth's core

    NASA Technical Reports Server (NTRS)

    Benton, E. R.

    1986-01-01

    A spherical harmonic representation of the geomagnetic field and its secular variation for epoch 1980, designated GSFC(9/84), is derived and evaluated. At three epochs (1977.5, 1980.0, 1982.5) this model incorporates conservation of magnetic flux through five selected patches of area on the core/mantle boundary bounded by the zero contours of vertical magnetic field. These fifteen nonlinear constraints are included, like data, in an iterative least-squares parameter estimation procedure that starts with the recently derived unconstrained field model GSFC(12/83). Convergence is approached within three iterations. The constrained model is evaluated by comparing its predictive capability outside the time span of its data, in terms of residuals at magnetic observatories, with that for the unconstrained model.
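
    The strategy of including constraints "like data" can be sketched as an augmented least-squares problem: weighted constraint residuals are appended to the data residuals and the combined system is solved iteratively. The example below is a toy Gauss-Newton illustration with a made-up forward operator and a single nonlinear constraint, not the GSFC modelling code.

    ```python
    # Sketch of including constraints "like data" in iterative least squares
    # (toy problem): append weighted constraint residuals c(m) ~ 0 to the data
    # residuals and take Gauss-Newton steps.
    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.standard_normal((30, 4))                 # linear forward operator
    m_true = np.array([1.0, -2.0, 0.5, 3.0])
    d = G @ m_true + 0.01 * rng.standard_normal(30)  # observations

    def constraint(m):          # nonlinear constraint, e.g. a conserved quantity
        return np.array([m[0] * m[1] + 2.0])         # want m0 * m1 = -2

    def constraint_jac(m):
        return np.array([[m[1], m[0], 0.0, 0.0]])

    m = np.zeros(4)
    w = 100.0                    # weight given to the constraint "data"
    for _ in range(10):          # Gauss-Newton iterations
        r = np.concatenate([d - G @ m, -w * constraint(m)])
        J = np.vstack([G, w * constraint_jac(m)])
        m = m + np.linalg.lstsq(J, r, rcond=None)[0]

    print("estimate:", m, " constraint residual:", constraint(m))
    ```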
