Sample records for model simulations constrained

  1. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

We consider the problem of constrained optimization in which decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete-event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
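The problem class this record describes can be sketched in a few lines: estimate each design's primary and secondary measures by simulation replication, then optimize the primary measure over the feasible set. This is a minimal illustrative sketch, not the paper's method; the designs, distributions, and constraint limit are all invented.

```python
import random

random.seed(0)

# Hypothetical example: three candidate product designs. Each replication of
# the simulation model returns a noisy (profit, defect_rate) pair. We want
# the design with the highest mean profit subject to a constraint on the
# mean defect rate (the secondary performance measure).
DESIGNS = {
    "A": (10.0, 0.08),  # (true mean profit, true mean defect rate)
    "B": (12.0, 0.12),
    "C": (11.0, 0.04),
}
DEFECT_LIMIT = 0.10
N_REPS = 2000

def simulate(design):
    """One stochastic replication of the simulation model."""
    mu_profit, mu_defect = DESIGNS[design]
    return random.gauss(mu_profit, 1.0), random.gauss(mu_defect, 0.02)

def best_feasible():
    """Estimate means by replication, then optimize subject to the constraint."""
    means = {}
    for d in DESIGNS:
        reps = [simulate(d) for _ in range(N_REPS)]
        means[d] = (sum(r[0] for r in reps) / N_REPS,
                    sum(r[1] for r in reps) / N_REPS)
    feasible = {d: m for d, m in means.items() if m[1] <= DEFECT_LIMIT}
    return max(feasible, key=lambda d: feasible[d][0])

# Design "B" has the best profit but violates the defect-rate constraint,
# so the procedure should return a different design.
print(best_feasible())
```

Note the trade-off the abstract mentions: the unconstrained optimum ("B") is infeasible, so the constraint changes the answer.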

  2. Reflected stochastic differential equation models for constrained animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path, and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
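A reflected SDE can be simulated with an Euler-Maruyama step followed by mirror reflection at the barriers. The sketch below is a generic one-dimensional illustration of that idea with invented parameters, not the authors' inference scheme:

```python
import random

random.seed(1)

def reflect(x, lo, hi):
    """Fold a proposed position back into [lo, hi] by mirror reflection."""
    while x < lo or x > hi:
        if x < lo:
            x = 2 * lo - x
        else:
            x = 2 * hi - x
    return x

def simulate_reflected(x0=0.5, mu=0.2, sigma=0.4, dt=0.01, n=1000,
                       lo=0.0, hi=1.0):
    """Euler-Maruyama for dX = mu dt + sigma dW, reflected at the barriers."""
    x = x0
    path = [x]
    for _ in range(n):
        x = x + mu * dt + sigma * (dt ** 0.5) * random.gauss(0, 1)
        x = reflect(x, lo, hi)  # keep the path inside the habitat
        path.append(x)
    return path

path = simulate_reflected()
```

Without the reflection step this would be an ordinary (unconstrained) diffusion, which is the sense in which the reflected model generalizes unconstrained movement models.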

  3. Neurolinguistically constrained simulation of sentence comprehension: integrating artificial intelligence and brain theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gigley, H.M.

    1982-01-01

An artificial intelligence approach to the simulation of neurolinguistically constrained processes in sentence comprehension is developed using control strategies for simulation of cooperative computation in associative networks. The desirability of this control strategy in contrast to ATN and production system strategies is explained. A first-pass implementation of HOPE, an artificial intelligence simulation model of sentence comprehension constrained by studies of aphasic performance, psycholinguistics, neurolinguistics, and linguistic theory, is described. Claims that the model could serve as a basis for sentence production simulation and for a model of language acquisition as associative learning are discussed. HOPE is a model that performs in a normal state and includes a lesion simulation facility. HOPE is also a research tool: its modifiability and use as a tool to investigate hypothesized causes of degradation in comprehension performance by aphasic patients are described. Issues of using behavioral constraints in modelling and obtaining appropriate data for simulated process modelling are discussed. Finally, problems of validation of the simulation results are raised, and issues of how to interpret clinical results to define the evolution of the model are discussed. Conclusions with respect to the feasibility of artificial intelligence simulation process modelling are drawn based on the current state of research.

  4. Communication: phase transitions, criticality, and three-phase coexistence in constrained cell models.

    PubMed

    Nayhouse, Michael; Kwon, Joseph Sang-Il; Orkoulas, G

    2012-05-28

    In simulation studies of fluid-solid transitions, the solid phase is usually modeled as a constrained system in which each particle is confined to move in a single Wigner-Seitz cell. The constrained cell model has been used in the determination of fluid-solid coexistence via thermodynamic integration and other techniques. In the present work, the phase diagram of such a constrained system of Lennard-Jones particles is determined from constant-pressure simulations. The pressure-density isotherms exhibit inflection points which are interpreted as the mechanical stability limit of the solid phase. The phase diagram of the constrained system contains a critical and a triple point. The temperature and pressure at the critical and the triple point are both higher than those of the unconstrained system due to the reduction in the entropy caused by the single occupancy constraint.
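The single-occupancy constraint described in this record is easy to sketch in a toy Metropolis Monte Carlo simulation: any trial move that would take a particle out of its cell is rejected outright. The code below uses a one-dimensional Lennard-Jones chain with invented parameters purely to illustrate the constraint; it is nothing like the paper's full 3-D constant-pressure model.

```python
import math
import random

random.seed(2)

# Toy 1-D "constrained cell" Monte Carlo: each particle is confined to its
# own cell of width A centred at i*A, mimicking single occupancy of a
# Wigner-Seitz cell.
N, A, T = 10, 1.1, 0.5  # particles, cell width, reduced temperature

def lj(r):
    """Lennard-Jones pair potential (reduced units)."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def energy(x):
    """Nearest-neighbour interaction energy of the chain."""
    return sum(lj(x[i + 1] - x[i]) for i in range(N - 1))

x = [i * A for i in range(N)]  # one particle per cell
E = energy(x)
for step in range(20000):
    i = random.randrange(N)
    trial = x[i] + random.uniform(-0.05, 0.05)
    if abs(trial - i * A) > A / 2:  # single-occupancy constraint
        continue                    # reject: particle would leave its cell
    old = x[i]
    x[i] = trial
    dE = energy(x) - E
    if dE <= 0 or random.random() < math.exp(-dE / T):
        E += dE       # Metropolis accept
    else:
        x[i] = old    # Metropolis reject
```

The constraint check before the Metropolis test is exactly what removes the fluid-like configurations and, as the abstract notes, reduces the entropy relative to the unconstrained system.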

  5. Freezing Transition Studies Through Constrained Cell Model Simulation

    NASA Astrophysics Data System (ADS)

    Nayhouse, Michael; Kwon, Joseph Sang-Il; Heng, Vincent R.; Amlani, Ankur M.; Orkoulas, G.

    2014-10-01

In the present work, a simulation method based on cell models is used to deduce the fluid-solid transition of a system of particles that interact via a pair potential. The simulations are implemented under constant-pressure conditions on a generalized version of the constrained cell model. The constrained cell model is constructed by dividing the volume into Wigner-Seitz cells and confining each particle to a single cell. This model is a special case of a more general cell model, which is formed by introducing an additional field variable that controls the number of particles per cell and, thus, the relative stability of the solid against the fluid phase. High field values force configurations with one particle per cell and thus favor the solid phase. Fluid-solid coexistence on the isotherm that corresponds to a reduced temperature of 2 is determined from constant-pressure simulations of the generalized cell model using tempering and histogram reweighting techniques. The entire fluid-solid phase boundary is determined through a thermodynamic integration technique based on histogram reweighting, using the previous coexistence point as a reference point. The vapor-liquid phase diagram is obtained from constant-pressure simulations of the unconstrained system using tempering and histogram reweighting. The phase diagram of the system is found to contain a stable critical point and a triple point. The phase diagram of the corresponding constrained cell model is also found to contain both a stable critical point and a triple point.

  6. Modeling and simulating networks of interdependent protein interactions.

    PubMed

    Stöcker, Bianca K; Köster, Johannes; Zamir, Eli; Rahmann, Sven

    2018-05-21

    Protein interactions are fundamental building blocks of biochemical reaction systems underlying cellular functions. The complexity and functionality of these systems emerge not only from the protein interactions themselves but also from the dependencies between these interactions, as generated by allosteric effects or mutual exclusion due to steric hindrance. Therefore, formal models for integrating and utilizing information about interaction dependencies are of high interest. Here, we describe an approach for endowing protein networks with interaction dependencies using propositional logic, thereby obtaining constrained protein interaction networks ("constrained networks"). The construction of these networks is based on public interaction databases as well as text-mined information about interaction dependencies. We present an efficient data structure and algorithm to simulate protein complex formation in constrained networks. The efficiency of the model allows fast simulation and facilitates the analysis of many proteins in large networks. In addition, this approach enables the simulation of perturbation effects, such as knockout of single or multiple proteins and changes of protein concentrations. We illustrate how our model can be used to analyze a constrained human adhesome protein network, which is responsible for the formation of diverse and dynamic cell-matrix adhesion sites. By comparing protein complex formation under known interaction dependencies versus without dependencies, we investigate how these dependencies shape the resulting repertoire of protein complexes. Furthermore, our model enables investigating how the interplay of network topology with interaction dependencies influences the propagation of perturbation effects across a large biochemical system. 
Our simulation software CPINSim (for Constrained Protein Interaction Network Simulator) is available under the MIT license at http://github.com/BiancaStoecker/cpinsim and as a Bioconda package (https://bioconda.github.io).
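The dependency idea can be illustrated with a toy mutual-exclusion rule: an interaction is only realised if none of the interactions it excludes is already present, so the random encounter order decides which competing binding wins. The network, the constraint table, and the greedy procedure below are all invented for illustration; this is not CPINSim's actual data structure or algorithm.

```python
import random

random.seed(3)

# Hypothetical network: protein A can bind B or C, but the two bindings
# compete for the same interface (mutual exclusion, e.g. steric hindrance).
INTERACTIONS = [("A", "B"), ("A", "C"), ("B", "D")]
MUTUALLY_EXCLUSIVE = {
    ("A", "B"): [("A", "C")],
    ("A", "C"): [("A", "B")],
}

def simulate_complexes(order):
    """Realise interactions in encounter order, honouring exclusions."""
    realised = []
    for edge in order:
        excluded = MUTUALLY_EXCLUSIVE.get(edge, [])
        if not any(e in realised for e in excluded):
            realised.append(edge)
    return realised

# A random encounter order decides which of the competing bindings wins.
result = simulate_complexes(random.sample(INTERACTIONS, len(INTERACTIONS)))
```

Running many such randomized simulations and collecting the realised edge sets is one simple way to see how dependencies shrink the repertoire of possible complexes relative to an unconstrained network.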

  7. Tropospheric transport differences between models using the same large-scale meteorological fields

    NASA Astrophysics Data System (ADS)

    Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.

    2017-01-01

The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from the Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.

  8. Constrained Local UniversE Simulations: a Local Group factory

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan; Yepes, Gustavo; Libeskind, Noam I.; Pilipenko, Sergey V.; Knebe, Alexander; Courtois, Hélène; Tully, R. Brent; Steinmetz, Matthias

    2016-05-01

Near-field cosmology is practised by studying the Local Group (LG) and its neighbourhood. This paper describes a framework for simulating the `near field' on the computer. Assuming the Λ cold dark matter (ΛCDM) model as a prior and applying the Bayesian tools of the Wiener filter and constrained realizations of Gaussian fields to the Cosmicflows-2 (CF2) survey of peculiar velocities, constrained simulations of our cosmic environment are performed. The aim of these simulations is to reproduce the LG and its local environment. Our main result is that the LG is likely a robust outcome of the ΛCDM scenario when subjected to the constraint derived from CF2 data, emerging in an environment akin to the observed one. Three levels of criteria are used to define the simulated LGs. At the base level, pairs of haloes must obey specific isolation, mass and separation criteria. At the second level, the orbital angular momentum and energy are constrained, and on the third one the phase of the orbit is constrained. Out of the 300 constrained simulations, 146 LGs obey the first set of criteria, 51 the second and 6 the third. The robustness of our LG `factory' enables the construction of a large ensemble of simulated LGs. Suitable candidates for high-resolution hydrodynamical simulations of the LG can be drawn from this ensemble, which can be used to perform comprehensive studies of the formation of the LG.
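The base-level selection described above (mass, separation, and isolation cuts on halo pairs) can be sketched as a simple filter over a halo catalogue. The halo data and all thresholds below are made up for illustration and are not the paper's actual criteria:

```python
# Toy halo catalogue: masses in solar masses, positions in Mpc (invented).
HALOS = [
    {"id": 1, "mass": 1.2e12, "pos": (0.0, 0.0, 0.0)},
    {"id": 2, "mass": 0.9e12, "pos": (0.7, 0.0, 0.0)},
    {"id": 3, "mass": 5.0e12, "pos": (4.0, 0.0, 0.0)},
]

def dist(a, b):
    """Euclidean separation of two haloes."""
    return sum((x - y) ** 2 for x, y in zip(a["pos"], b["pos"])) ** 0.5

def lg_candidates(halos, m_lo=5e11, m_hi=3e12, sep_max=1.0, iso=2.0):
    """Pairs passing hypothetical mass, separation, and isolation cuts."""
    pairs = []
    for i, a in enumerate(halos):
        for b in halos[i + 1:]:
            if not (m_lo < a["mass"] < m_hi and m_lo < b["mass"] < m_hi):
                continue  # both members must be MW/M31-like in mass
            if dist(a, b) > sep_max:
                continue  # members must be close together
            # isolation: no overly massive third halo near either member
            others = [h for h in halos if h not in (a, b)]
            if any(min(dist(h, a), dist(h, b)) < iso
                   for h in others if h["mass"] > m_hi):
                continue
            pairs.append((a["id"], b["id"]))
    return pairs

print(lg_candidates(HALOS))  # → [(1, 2)]
```

The second- and third-level criteria (orbital angular momentum, energy, orbital phase) would be further filters applied to the pairs this function returns.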

  9. Technical Note: On the use of nudging for aerosol–climate model intercomparison studies

    DOE PAGES

    Zhang, K.; Wan, H.; Liu, X.; ...

    2014-08-26

Nudging as an assimilation technique has seen increased use in recent years in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variabilities of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In this paper we show that for the Community Atmosphere Model version 5 (CAM5), due to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and the impact of anthropogenic aerosols on long-wave cloud forcing. In order to reduce discrepancies between the nudged and unconstrained simulations, and meanwhile retain the advantages of nudging, two alternative experimentation methods are evaluated. The first one constrains only the horizontal winds. The second method nudges both winds and temperature, but replaces the long-term climatology of the reanalysis by that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. Results from both CAM5 and a second aerosol–climate model ECHAM6-HAM2 also indicate that compared to the wind-and-temperature nudging, constraining only winds leads to better agreement with the free-running model in terms of the estimated shortwave cloud forcing and the simulated convective activities.
This suggests that nudging the horizontal winds but not the temperature is a good strategy for the investigation of aerosol indirect effects, since it provides well-constrained meteorology without strongly perturbing the model's mean climate.
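Nudging is Newtonian relaxation: at each time step the model state gains an extra tendency proportional to its departure from the reference (reanalysis) state, with a chosen relaxation timescale. The one-variable sketch below illustrates only this relaxation term; the variable names, time step, and timescale are illustrative, not CAM5's actual configuration.

```python
# Minimal sketch of nudging (Newtonian relaxation) toward a reference state.
DT, TAU = 1800.0, 6 * 3600.0  # model time step and relaxation timescale (s)

def nudge(state, reference):
    """One nudged step: model tendency plus relaxation toward the reference."""
    model_tendency = 0.0  # stand-in for the free-running model physics
    return state + DT * (model_tendency + (reference - state) / TAU)

u = 10.0       # model wind (m/s)
u_ref = 14.0   # reanalysis wind (m/s)
for _ in range(48):  # one day of half-hour steps
    u = nudge(u, u_ref)
# After a day, u has relaxed most of the way to u_ref.
```

The strength of the constraint is set by TAU: a short timescale pins the model to the reanalysis (and can drag its climate along, as the abstract warns), while a long one leaves the model nearly free-running.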

  10. Technical Note: On the use of nudging for aerosol-climate model intercomparison studies

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Wan, H.; Liu, X.; Ghan, S. J.; Kooperman, G. J.; Ma, P.-L.; Rasch, P. J.; Neubauer, D.; Lohmann, U.

    2014-08-01

Nudging as an assimilation technique has seen increased use in recent years in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variabilities of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In this paper we show that for the Community Atmosphere Model version 5 (CAM5), due to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and the impact of anthropogenic aerosols on long-wave cloud forcing. In order to reduce discrepancies between the nudged and unconstrained simulations, and meanwhile retain the advantages of nudging, two alternative experimentation methods are evaluated. The first one constrains only the horizontal winds. The second method nudges both winds and temperature, but replaces the long-term climatology of the reanalysis by that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. Results from both CAM5 and a second aerosol-climate model ECHAM6-HAM2 also indicate that compared to the wind-and-temperature nudging, constraining only winds leads to better agreement with the free-running model in terms of the estimated shortwave cloud forcing and the simulated convective activities.
This suggests that nudging the horizontal winds but not the temperature is a good strategy for the investigation of aerosol indirect effects, since it provides well-constrained meteorology without strongly perturbing the model's mean climate.

  11. Constraining the mass of the Local Group

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging: its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the masses of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed: the Λ cold dark matter model, which is used to set up the simulations, and an LG model, which encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity (v_tan) of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  12. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for

  13. Diagnostic Simulations of the Lunar Exosphere using Coma and Tail

    NASA Astrophysics Data System (ADS)

    Lee, Dong Wook; Kim, Sang J.

    2017-10-01

The characteristics of the lunar exosphere can be constrained by comparing simulated models with observational data of the coma and tail (Lee et al., JGR, 2011); thus far, a few independent approaches to this issue have been developed and presented in the literature. Since there are two different observational constraints for the lunar exosphere, it is interesting to find the best exospheric model that can account for the observed characteristics of both the coma and the tail. Considering various initial conditions of different sources and space weather, we present preliminary time-dependent simulations between the initial and final stages of the development of the lunar tail. Based on an updated 3-D model, we plan to conduct numerous simulations to constrain the best model parameters from the coma images obtained from coronagraph observations supported by a NASA monitoring program (Morgan, Killen, and Potter, AGU, 2015) and from future tail data.

  14. Constraining ammonia dairy emissions during NASA DISCOVER-AQ California: surface and airborne observation comparisons with CMAQ simulations

    NASA Astrophysics Data System (ADS)

    Miller, D. J.; Liu, Z.; Sun, K.; Tao, L.; Nowak, J. B.; Bambha, R.; Michelsen, H. A.; Zondlo, M. A.

    2014-12-01

Agricultural ammonia (NH3) emissions are highly uncertain in current bottom-up inventories. Ammonium nitrate is a dominant component of fine aerosols in agricultural regions such as the Central Valley of California, especially during winter. Recent high-resolution regional modeling efforts in this region have found significant ammonium nitrate and gas-phase NH3 biases during summer. We compare spatially resolved surface and boundary layer gas-phase NH3 observations during NASA DISCOVER-AQ California with Community Multi-Scale Air Quality (CMAQ) regional model simulations driven by the EPA NEI 2008 inventory to constrain wintertime NH3 model biases. We evaluate model performance with respect to aerosol partitioning, mixing, and deposition to constrain contributions to modeled NH3 concentration biases in the Central Valley Tulare dairy region. Ammonia measurements from an open-path mobile platform on a vehicle are gridded to hourly background concentrations at 4 km resolution. A peak detection algorithm is applied to remove local feedlot emission peaks. Aircraft NH3, NH4+, and NO3- observations are also compared with simulations extracted along the flight tracks. We find that NH3 background concentrations in the dairy region are underestimated by a factor of three to five during winter and that NH3 simulations are moderately correlated with observations (r = 0.36). Although model simulations capture NH3 enhancements in the dairy region, they are biased low by 30-60 ppbv NH3. Aerosol NH4+ and NO3- are also biased low in CMAQ, by factors of three and four, respectively. Unlike gas-phase NH3, CMAQ simulations do not capture the typical NH4+ or NO3- enhancements observed in the dairy region. In contrast, boundary layer height simulations agree with observations to within 13%. We also address observational constraints on simulated NH3 deposition fluxes. These comparisons suggest that NEI 2008 wintertime dairy emissions are underestimated by a factor of three to five.
We test sensitivity to emissions by increasing the NEI 2008 NH3 emissions uniformly across the dairy region and evaluate the impact on modeled concentrations. These results are applicable to improving predictions of ammoniated aerosol loading and highlight the value of mobile platform spatial NH3 measurements to constrain emission inventories.
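The scaling argument behind the "factor of three to five" estimate can be illustrated in a few lines: if modeled concentrations respond roughly linearly to emissions, the mean observed-to-modeled concentration ratio suggests the needed emission scaling. All numbers below are invented for illustration and are not the study's data:

```python
# Hypothetical observed vs. modeled background NH3 at three sites (ppbv).
obs_nh3 = [48.0, 60.0, 52.0]  # observed background NH3
mod_nh3 = [12.0, 15.0, 13.0]  # simulated NH3 (emissions biased low)

ratios = [o / m for o, m in zip(obs_nh3, mod_nh3)]
scale = sum(ratios) / len(ratios)  # mean underestimation factor
print(round(scale, 1))  # → 4.0
```

In practice the response is not exactly linear (gas-aerosol partitioning, deposition, and mixing all respond to the perturbation), which is why the study tests the scaling by rerunning the model with uniformly increased emissions rather than relying on the ratio alone.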

  15. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for ecosystem carbon cycle studies

    Treesearch

    Y. He; Q. Zhuang; A.D. McGuire; Y. Liu; M. Chen

    2013-01-01

Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the...

  16. Tribology studies of the natural knee using an animal model in a new whole joint natural knee simulator.

    PubMed

    Liu, Aiqin; Jennings, Louise M; Ingham, Eileen; Fisher, John

    2015-09-18

The successful development of early-stage cartilage and meniscus repair interventions in the knee requires biomechanical and biotribological understanding of the design of the therapeutic interventions and their tribological function in the natural joint. The aim of this study was to develop and validate a porcine knee model using a whole joint knee simulator for investigation of the tribological function and biomechanical properties of the natural knee, which could then be used to pre-clinically assess the tribological performance of cartilage and meniscal repair interventions prior to in vivo studies. The tribological performance of standard artificial bearings in terms of anterior-posterior (A/P) shear force was determined in a newly developed six-degrees-of-freedom tribological joint simulator. The porcine knee model was then developed, and the tribological properties in terms of shear force measurements were determined for the first time under three levels of biomechanical constraint: A/P constrained, spring-force semi-constrained, and A/P unconstrained conditions. The shear force measurements showed higher values under the A/P constrained condition (predominantly sliding motion) than under the A/P unconstrained condition (predominantly rolling motion). This indicated that the shear force simulation model was able to differentiate between tribological behaviours when the femoral and tibial bearings were constrained to slide and/or roll. This porcine knee model therefore shows the potential to investigate the effects of knee structural, biomechanical, and kinematic changes, as well as different cartilage substitution therapies, on the tribological function of natural knee joints.

  17. Technical Note: On the use of nudging for aerosol-climate model intercomparison studies

    DOE PAGES

    Zhang, K.; Wan, H.; Liu, X.; ...

    2014-04-24

Nudging is an assimilation technique widely used in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variabilities of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In this paper we show that for the Community Atmosphere Model version 5, due to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and the impact of anthropogenic aerosols on longwave cloud forcing. In order to reduce discrepancies between the nudged and unconstrained simulations and meanwhile retain the advantages of nudging, two alternative experimentation methods are evaluated. The first one constrains only the horizontal winds. The second method nudges both winds and temperature, but replaces the long-term climatology of the reanalysis by that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. This suggests that nudging the horizontal winds but not the temperature is a good strategy for the investigation of aerosol indirect effects through ice clouds, since it provides well-constrained meteorology without strongly perturbing the model's mean climate.

  18. Technical Note: On the use of nudging for aerosol-climate model intercomparison studies

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Wan, H.; Liu, X.; Ghan, S. J.; Kooperman, G. J.; Ma, P.-L.; Rasch, P. J.

    2014-04-01

Nudging is an assimilation technique widely used in the development and evaluation of climate models. Constraining the simulated wind and temperature fields using global weather reanalysis facilitates more straightforward comparison between simulation and observation, and reduces uncertainties associated with natural variabilities of the large-scale circulation. On the other hand, the forcing introduced by nudging can be strong enough to change the basic characteristics of the model climate. In this paper we show that for the Community Atmosphere Model version 5, due to the systematic temperature bias in the standard model and the sensitivity of simulated ice formation to anthropogenic aerosol concentration, nudging towards reanalysis results in substantial reductions in the ice cloud amount and the impact of anthropogenic aerosols on longwave cloud forcing. In order to reduce discrepancies between the nudged and unconstrained simulations and meanwhile retain the advantages of nudging, two alternative experimentation methods are evaluated. The first one constrains only the horizontal winds. The second method nudges both winds and temperature, but replaces the long-term climatology of the reanalysis by that of the model. Results show that both methods lead to substantially improved agreement with the free-running model in terms of the top-of-atmosphere radiation budget and cloud ice amount. The wind-only nudging is more convenient to apply, and provides higher correlations of the wind fields, geopotential height and specific humidity between simulation and reanalysis. This suggests that nudging the horizontal winds but not the temperature is a good strategy for the investigation of aerosol indirect effects through ice clouds, since it provides well-constrained meteorology without strongly perturbing the model's mean climate.

  19. Using Real and Simulated TNOs to Constrain the Outer Solar System

    NASA Astrophysics Data System (ADS)

    Kaib, Nathan

    2018-04-01

    Over the past 2-3 decades our understanding of the outer solar system’s history and current state has evolved dramatically. An explosion in the number of detected trans-Neptunian objects (TNOs) coupled with simultaneous advances in numerical models of orbital dynamics has driven this rapid evolution. However, successfully constraining the orbital architecture and evolution of the outer solar system requires accurately comparing simulation results with observational datasets. This process is challenging because observed datasets are influenced by orbital discovery biases as well as TNO size and albedo distributions. Meanwhile, such influences are generally absent from numerical results. Here I will review recent work I and others have undertaken using numerical simulations in concert with catalogs of observed TNOs to constrain the outer solar system’s current orbital architecture and past evolution.

  20. Describing litho-constrained layout by a high-resolution model filter

    NASA Astrophysics Data System (ADS)

    Tsai, Min-Chun

    2008-05-01

A novel high-resolution model (HRM) filtering technique is proposed to describe litho-constrained layouts. Litho-constrained layouts are layouts that are difficult to pattern, or that are highly sensitive to process fluctuations, under current lithography technologies. HRM applies a short-wavelength (or high-NA) model simulation directly to the pre-OPC, original design layout to filter out low spatial-frequency regions and retain the high spatial-frequency components, which are litho-constrained. Since neither OPC nor mask-synthesis steps are involved, this new technique is highly efficient in run time and can be used at the design stage to detect and fix litho-constrained patterns. The method successfully captured all the hot spots, with less than 15% overshoot, on a realistic 80 mm² full-chip M1 layout at the 65 nm technology node. A step-by-step derivation of the HRM technique is presented in this paper.
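The core idea, keeping high spatial-frequency content while suppressing low, can be sketched with a simple discrete Laplacian high-pass filter on a toy binary layout. This is only an illustration of high-pass filtering on a layout grid, not the paper's calibrated short-wavelength lithography model; the layout, kernel, and threshold are all invented:

```python
# Toy binary layout: 1 = drawn feature, 0 = empty.
LAYOUT = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 0, 1, 0, 0],  # dense alternating pattern: high spatial frequency
    [0, 1, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],  # wide solid feature: low spatial frequency
    [0, 1, 1, 1, 1, 0],
]
KERNEL = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]  # discrete Laplacian

def high_pass(img):
    """Convolve the layout with the Laplacian kernel (interior cells only)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(KERNEL[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

response = high_pass(LAYOUT)
# Cells with a strong high-pass response are flagged as potential hot spots.
hot_spots = [(y, x) for y, row in enumerate(response)
             for x, v in enumerate(row) if abs(v) >= 3]
```

On this toy grid the dense alternating lines light up while the interior of the wide feature does not, which mirrors the abstract's claim that the filter retains litho-constrained (high-frequency) regions and discards easy (low-frequency) ones.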

  1. APEX Model Simulation for Row Crop Watersheds with Agroforestry and Grass Buffers

    USDA-ARS?s Scientific Manuscript database

Watershed model simulation has become an important tool in studying ways and means to reduce the transport of agricultural pollutants. Conducting field experiments to assess buffer influences on water quality is constrained by the large-scale nature of watersheds, high experimental costs, private owner...

  2. Distributed Soil Moisture Estimation in a Mountainous Semiarid Basin: Constraining Soil Parameter Uncertainty through Field Studies

    NASA Astrophysics Data System (ADS)

    Yatheendradas, S.; Vivoni, E.

    2007-12-01

    A common practice in distributed hydrological modeling is to assign soil hydraulic properties based on coarse textural datasets. For semiarid regions with poor soil information, the performance of a model can be severely constrained due to the high model sensitivity to near-surface soil characteristics. Neglecting the uncertainty in soil hydraulic properties, their spatial variation and their naturally-occurring horizonation can potentially affect the modeled hydrological response. In this study, we investigate such effects using the TIN-based Real-time Integrated Basin Simulator (tRIBS) applied to the mid-sized (100 km2) Sierra Los Locos watershed in northern Sonora, Mexico. The Sierra Los Locos basin is characterized by complex mountainous terrain leading to topographic organization of soil characteristics and ecosystem distributions. We focus on simulations during the 2004 North American Monsoon Experiment (NAME) when intensive soil moisture measurements and aircraft-based soil moisture retrievals are available in the basin. Our experiments focus on soil moisture comparisons at the point, topographic transect and basin scales using a range of different soil characterizations. We compare the distributed soil moisture estimates obtained using (1) a deterministic simulation based on soil texture from coarse soil maps, (2) a set of ensemble simulations that capture soil parameter uncertainty and their spatial distribution, and (3) a set of simulations that conditions the ensemble on recent soil profile measurements. Uncertainties considered in near-surface soil characterization provide insights into their influence on the modeled uncertainty, into the value of soil profile observations, and into effective use of on-going field observations for constraining the soil moisture response uncertainty.

  3. Fixman compensating potential for general branched molecules

    NASA Astrophysics Data System (ADS)

    Jain, Abhinandan; Kandel, Saugat; Wagner, Jeffrey; Larsen, Adrien; Vaidehi, Nagarajan

    2013-12-01

    The technique of constraining high frequency modes of molecular motion is an effective way to increase simulation time scale and improve conformational sampling in molecular dynamics simulations. However, it has been shown that constraints on higher frequency modes such as bond lengths and bond angles stiffen the molecular model, thereby introducing systematic biases in the statistical behavior of the simulations. Fixman proposed a compensating potential to remove such biases in the thermodynamic and kinetic properties calculated from dynamics simulations. Previous implementations of the Fixman potential have been limited to only short serial chain systems. In this paper, we present a spatial operator algebra based algorithm to calculate the Fixman potential and its gradient within constrained dynamics simulations for branched topology molecules of any size. Our numerical studies on molecules of increasing complexity validate our algorithm by demonstrating recovery of the dihedral angle probability distribution function for systems that range in complexity from serial chains to protein molecules. We observe that the Fixman compensating potential recovers the free energy surface of a serial chain polymer, thus annulling the biases caused by constraining the bond lengths and bond angles. The inclusion of Fixman potential entails only a modest increase in the computational cost in these simulations. We believe that this work represents the first instance where the Fixman potential has been used for general branched systems, and establishes the viability for its use in constrained dynamics simulations of proteins and other macromolecules.
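    For reference, the compensating potential Fixman proposed has the standard form below; the notation follows the general literature rather than this paper's spatial-operator derivation:

```latex
U_{\mathrm{Fix}}(q) \;=\; \tfrac{1}{2}\, k_B T \,\ln \det \mathbf{G}_c(q)
```

    where G_c(q) is the block of the mass-metric tensor associated with the constrained (hard) coordinates. Adding the gradient of U_Fix to the forces during constrained dynamics removes the statistical bias introduced by rigidly fixing bond lengths and angles, which is the recovery of the unconstrained dihedral distributions that the abstract reports.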

  4. Whole Atmosphere Modeling and Data Analysis: Success Stories, Challenges and Perspectives

    NASA Astrophysics Data System (ADS)

    Yudin, V. A.; Akmaev, R. A.; Goncharenko, L. P.; Fuller-Rowell, T. J.; Matsuo, T.; Ortland, D. A.; Maute, A. I.; Solomon, S. C.; Smith, A. K.; Liu, H.; Wu, Q.

    2015-12-01

    At the end of the 20th century, Raymond Roble suggested an ambitious target: developing an atmospheric general circulation model (GCM) that spans from the surface to the thermosphere for modeling the coupled atmosphere-ionosphere with drivers from terrestrial meteorology and solar-geomagnetic inputs. He pointed out several areas of research and applications that would benefit greatly from the development and improvement of whole atmosphere modeling. At present, several research groups using middle and whole atmosphere models have attempted to perform coupled ionosphere-thermosphere predictions to interpret the "unexpected" anomalies in the electron content, ions and plasma drifts observed during recent stratospheric warming events. Recent whole atmosphere inter-comparison case studies have also displayed striking differences in simulations of prevailing flows, planetary waves and dominant tidal modes, even when the lower atmosphere domains of those models were constrained by similar meteorological analyses. We will present possible reasons for such differences between data-constrained whole atmosphere simulations when analyses with 6-hour time resolution are used, and discuss the potential model-data and model-model differences above the stratopause. Possible shortcomings of the whole atmosphere simulations associated with model physics, dynamical cores and resolutions will be discussed. With increased confidence in space-borne temperature, wind and ozone observations and extensive collections of ground-based upper atmosphere observational facilities, whole atmosphere modelers will be able to quantify annual and year-to-year variability of the zonal mean flows, planetary waves and tides. We will demonstrate the value of tidal and planetary wave variability deduced from space-borne data and ground-based systems for evaluating and tuning whole atmosphere simulations, including corrections of systematic model errors. Several success stories on middle and whole atmosphere simulations coupled with ionosphere models will be highlighted, and future perspectives for linking space and terrestrial weather predictions constrained by current and scheduled ionosphere-thermosphere-mesosphere satellite missions will be presented.

  5. Evaluating the accuracy of VEMAP daily weather data for application in crop simulations on a regional scale

    USDA-ARS?s Scientific Manuscript database

    Weather plays a critical role in eco-environmental and agricultural systems. Limited availability of meteorological records often constrains the applications of simulation models and related decision support tools. The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) provides daily weather...

  6. Multiscale Data Assimilation for Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Li, Z.; Cheng, X.; Gustafson, W. I., Jr.; Xiao, H.; Vogelmann, A. M.; Endo, S.; Toto, T.

    2017-12-01

    Large-eddy simulation (LES) is a powerful tool for understanding atmospheric turbulence, boundary layer physics and cloud development, and there is a great need for developing data assimilation methodologies that can constrain LES models. The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) User Facility has been developing the capability to routinely generate ensembles of LES. The LES ARM Symbiotic Simulation and Observation (LASSO) project (https://www.arm.gov/capabilities/modeling/lasso) is generating simulations for shallow convection days at the ARM Southern Great Plains site in Oklahoma. One of the major objectives of LASSO is to develop the capability to observationally constrain LES using a hierarchy of ARM observations. We have implemented a multiscale data assimilation (MSDA) scheme, which allows data assimilation to be implemented separately for distinct spatial scales, so that localized observations can be effectively assimilated to constrain the mesoscale fields in the LES area of about 15 km in width. The MSDA analysis is used to produce forcing data that drive LES. With this LES workflow we have examined 13 days with shallow convection selected from the period May-August 2016. We will describe the implementation of MSDA, present LES results, and address challenges and opportunities for applying data assimilation to LES studies.
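    The scale separation at the heart of an MSDA-style scheme can be illustrated with a toy one-dimensional analysis step: split the background and the observation increment into a smoothed (mesoscale) part and a residual (small-scale) part, and update each with its own gain. The function names, moving-average filter, and scalar gains below are illustrative stand-ins for the actual scale-dependent error covariances, not the LASSO implementation:

```python
import numpy as np

def smooth(x, width=8):
    """Moving-average low-pass filter with periodic padding."""
    k = np.ones(width) / width
    padded = np.r_[x[-width:], x, x[:width]]
    return np.convolve(padded, k, mode="same")[width:-width]

def multiscale_analysis(background, obs, obs_idx,
                        gain_large=0.5, gain_small=0.2):
    """Toy 1-D multiscale analysis step: assimilate the smoothed and
    residual parts of the observation increment separately."""
    innov = np.zeros_like(background)
    innov[obs_idx] = obs - background[obs_idx]   # obs-minus-background
    bg_l, in_l = smooth(background), smooth(innov)
    bg_s, in_s = background - bg_l, innov - in_l
    # Scalar gains stand in for scale-dependent error covariances.
    return (bg_l + gain_large * in_l) + (bg_s + gain_small * in_s)
```

    The point of the split is that a localized observation can be given broad influence on the mesoscale component while only adjusting the small-scale field near the observation point.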

  7. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for the conflicting data sets while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems, without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
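    The combination of Pareto ranking with a simulated-annealing acceptance rule can be sketched in a few lines. This is shown in Python rather than Julia, and the rank-based Metropolis rule is a simplification of the published algorithm, not JuPOETs' actual code:

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_rank(candidate, archive):
    """Rank = number of archive members that dominate the candidate;
    rank 0 means the candidate lies on the current tradeoff front."""
    return sum(1 for a in archive if dominates(a, candidate))

def accept(candidate, archive, temperature):
    """SA-style acceptance on Pareto rank instead of a scalar energy:
    non-dominated candidates are always kept, dominated ones are kept
    with a probability that shrinks as the temperature cools."""
    r = pareto_rank(candidate, archive)
    return r == 0 or random.random() < math.exp(-r / max(temperature, 1e-12))
```

    Accepted candidates are appended to the archive, so the archive itself approximates the optimal tradeoff surface between the competing training objectives as the annealing proceeds.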

  8. Trajectory optimization and guidance law development for national aerospace plane applications

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1988-01-01

    The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic-pressure-constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic-pressure-constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic-pressure-constrained case.

  9. Analytical Dynamics and Nonrigid Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1974-01-01

    Applications to the simulation of idealized spacecraft are considered, both for multiple-rigid-body models and for models consisting of combinations of rigid bodies and elastic bodies, with the elastic bodies defined either as continua, as finite-element systems, or as collections of given modal data. Several specific examples are developed in detail by alternative methods of analytical mechanics, and the results are compared to a Newton-Euler formulation. The following methods are developed from d'Alembert's principle in vector form: (1) Lagrange's form of d'Alembert's principle for independent generalized coordinates; (2) Lagrange's form of d'Alembert's principle for simply constrained systems; (3) Kane's quasi-coordinate formulation of d'Alembert's principle; (4) Lagrange's equations for independent generalized coordinates; (5) Lagrange's equations for simply constrained systems; (6) Lagrangian quasi-coordinate equations (or the Boltzmann-Hamel equations); (7) Hamilton's equations for simply constrained systems; and (8) Hamilton's equations for independent generalized coordinates.
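    As a reminder of the notation, item (5), Lagrange's equations for simply constrained systems, takes the familiar multiplier form (standard textbook notation, not copied from the report):

```latex
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_j}
  - \frac{\partial L}{\partial q_j}
  = Q_j + \sum_{k} \lambda_k \frac{\partial \phi_k}{\partial q_j},
\qquad \phi_k(q, t) = 0
```

    with L the Lagrangian, Q_j the nonconservative generalized forces, and the multipliers λ_k enforcing the holonomic constraints φ_k; the other seven formulations listed above are alternative routes to the same constrained dynamics.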

  10. Constraining the Timing of Lobate Debris Apron Emplacement at Martian Mid-Latitudes Using a Numerical Model of Ice Flow

    NASA Astrophysics Data System (ADS)

    Parsons, R. A.; Nimmo, F.

    2010-03-01

    SHARAD observations constrain the thickness and dust content of lobate debris aprons (LDAs). Simulations of dust-free ice-sheet flow over a flat surface at 205 K for 10-100 m.y. give LDA lengths and thicknesses that are consistent with observations.

  11. Numerical Analysis of Constrained Dynamical Systems, with Applications to Dynamic Contact of Solids, Nonlinear Elastodynamics and Fluid-Structure Interactions

    DTIC Science & Technology

    2000-12-01

    [Abstract not available in this record; the excerpt consists of table-of-contents fragments covering representative numerical simulations (impact of a rod on a rigid wall, impact of two rods, forging), dissipative properties of the proposed scheme, and a simplified model of thin beams (Model Problem II).]

  12. Weighting climate model projections using observational constraints.

    PubMed

    Gillett, Nathan P

    2015-11-13

    Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5. © 2015 The Authors.
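    A minimal version of such weighting, scoring each model by the likelihood of its Transient Climate Response (TCR) under the observational constraint and reading off weighted percentiles of projected warming, might look like the sketch below. The observed TCR value, its uncertainty, and the Gaussian form are illustrative assumptions, not Gillett's detection-and-attribution numbers:

```python
import numpy as np

def weighted_warming_range(tcr_models, warming_models,
                           tcr_obs=1.6, tcr_sigma=0.3):
    """Weight each model's projected warming by the Gaussian
    likelihood of its TCR under an observational estimate, then
    return the weighted 5-95% range of projected warming."""
    tcr = np.asarray(tcr_models, float)
    warming = np.asarray(warming_models, float)
    w = np.exp(-0.5 * ((tcr - tcr_obs) / tcr_sigma) ** 2)
    w /= w.sum()                       # normalize the model weights
    order = np.argsort(warming)
    cdf = np.cumsum(w[order])          # weighted empirical CDF
    i_lo = np.searchsorted(cdf, 0.05)
    i_hi = min(np.searchsorted(cdf, 0.95), len(cdf) - 1)
    return warming[order][i_lo], warming[order][i_hi]
```

    Because the weights act directly on the projection distribution, no linear relationship between past and future warming is assumed, which is the advantage the abstract claims over simple scaling.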

  13. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Treesearch

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  14. Constraining gross primary production and ecosystem respiration estimates for North America using atmospheric observations of carbonyl sulfide (OCS) and CO2

    NASA Astrophysics Data System (ADS)

    He, W.; Ju, W.; Chen, H.; Peters, W.; van der Velde, I.; Baker, I. T.; Andrews, A. E.; Zhang, Y.; Launois, T.; Campbell, J. E.; Suntharalingam, P.; Montzka, S. A.

    2016-12-01

    Carbonyl sulfide (OCS) is a promising novel atmospheric tracer for studying carbon cycle processes. OCS shares a similar uptake pathway with CO2 during photosynthesis but is not released through a respiration-like process, and thus can be used to partition Gross Primary Production (GPP) from Net Ecosystem-atmosphere CO2 Exchange (NEE). This study uses joint atmospheric observations of OCS and CO2 to constrain GPP and ecosystem respiration (Re). Flask data from tower and aircraft sites over North America are collected. We employ our recently developed CarbonTracker (CT)-Lagrange carbon assimilation system, which is based on the CT framework and the Weather Research and Forecasting - Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) model, and the Simple Biosphere model with simulated OCS (SiB3-OCS), which provides prior GPP, Re and plant-uptake fluxes of OCS. Plant OCS fluxes derived from both a process model and a GPP-scaled model are tested in our inversion. To investigate the ability of OCS to constrain GPP, and to understand the uncertainty propagated from OCS modeling errors to the constrained fluxes in a dual-tracer system including OCS and CO2, two inversion schemes are implemented and compared: (1) a two-step scheme, which first optimizes GPP using OCS observations and then simultaneously optimizes GPP and Re using CO2 observations, with the OCS-constrained GPP from the first step as prior; and (2) a joint scheme, which simultaneously optimizes GPP and Re using OCS and CO2 observations. We evaluate the results using GPP estimated from space-borne solar-induced fluorescence observations and a data-driven GPP upscaled from FLUXNET data with a statistical model (Jung et al., 2011). Preliminary results for the year 2010 show that the joint inversion makes simulated mole fractions more consistent with observations for both OCS and CO2. However, the uncertainty of the OCS simulation is larger than that of CO2. The two-step and joint schemes perform similarly in improving consistency with observations for OCS, indicating that OCS provides an independent constraint in the joint inversion. Optimization yields lower total GPP and Re but higher NEE when tested with prior CO2 fluxes from two biosphere models. This study gives an in-depth insight into the role of joint atmospheric OCS and CO2 observations in constraining CO2 fluxes.

  15. Constraining a land-surface model with multiple observations by application of the MPI-Carbon Cycle Data Assimilation System V1.0

    NASA Astrophysics Data System (ADS)

    Schürmann, Gregor J.; Kaminski, Thomas; Köstler, Christoph; Carvalhais, Nuno; Voßbeck, Michael; Kattge, Jens; Giering, Ralf; Rödenbeck, Christian; Heimann, Martin; Zaehle, Sönke

    2016-09-01

    We describe the Max Planck Institute Carbon Cycle Data Assimilation System (MPI-CCDAS) built around the tangent-linear version of the JSBACH land-surface scheme, which is part of the MPI-Earth System Model v1. The simulated phenology and net land carbon balance were constrained by globally distributed observations of the fraction of absorbed photosynthetically active radiation (FAPAR, using the TIP-FAPAR product) and atmospheric CO2 at a global set of monitoring stations for the years 2005 to 2009. When constrained by FAPAR observations alone, the system successfully, and computationally efficiently, improved simulated growing-season average FAPAR, as well as its seasonality in the northern extra-tropics. When constrained by atmospheric CO2 observations alone, global net and gross carbon fluxes were improved, despite a tendency of the system to underestimate tropical productivity. Assimilating both data streams jointly allowed the MPI-CCDAS to match both observations (TIP-FAPAR and atmospheric CO2) equally well as the single data stream assimilation cases, thereby increasing the overall appropriateness of the simulated biosphere dynamics and underlying parameter values. Our study thus demonstrates the value of multiple-data-stream assimilation for the simulation of terrestrial biosphere dynamics. It further highlights the potential role of remote sensing data, here the TIP-FAPAR product, in stabilising the strongly underdetermined atmospheric inversion problem posed by atmospheric transport and CO2 observations alone. Notwithstanding these advances, the constraint of the observations on regional gross and net CO2 flux patterns on the MPI-CCDAS is limited through the coarse-scale parametrisation of the biosphere model. We expect improvement through a refined initialisation strategy and inclusion of further biosphere observations as constraints.

  16. Constraining heat-transport models by comparison to experimental data in a NIF hohlraum

    NASA Astrophysics Data System (ADS)

    Farmer, W. A.; Jones, O. S.; Barrios Garcia, M. A.; Koning, J. M.; Kerbel, G. D.; Strozzi, D. J.; Hinkel, D. E.; Moody, J. D.; Suter, L. J.; Liedahl, D. A.; Moore, A. S.; Landen, O. L.

    2017-10-01

    The accurate simulation of hohlraum plasma conditions is important for predicting the partition of energy and the symmetry of the x-ray field within a hohlraum. Electron heat transport within the hohlraum plasma is difficult to model due to the complex interaction of kinetic plasma effects, magnetic fields, laser-plasma interactions, and microturbulence. Here, we report simulation results using the radiation-hydrodynamic code, HYDRA, utilizing various physics packages (e.g., nonlocal Schurtz model, MHD, flux limiters) and compare to data from hohlraum plasma experiments which contain a Mn-Co tracer dot. In these experiments, the dot is placed in various positions in the hohlraum in order to assess the spatial variation of plasma conditions. Simulated data is compared to a variety of experimental diagnostics. Conclusions are given concerning how the experimental data does and does not constrain the physics models examined. This work was supported by the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  17. [3-D finite element modeling of internal fixation of mandibular mental fracture and the design of boundary constraints].

    PubMed

    Luo, Xiaohui; Wang, Hang; Fan, Yubo

    2007-04-01

    This study aimed to develop a 3-D finite element (3-D FE) model of the mental fractured mandible and to design the boundary constraints. CT images from a healthy volunteer were used as the original information and imported into the ANSYS program to build a 3-D FE model. The model of the miniplate and screws used for the internal fixation was established in Pro/E. Boundary constraints corresponding to different muscle loadings were used to simulate the 3 functional conditions of the mandible. A 3-D FE model of the mental fractured mandible under the miniplate-screw internal fixation system was constructed. With these boundary constraints, the 3 biting conditions were simulated, and the model can serve as a foundation on which to analyze the biomechanical behavior of the fractured mandible.

  19. Disentangling Redshift-Space Distortions and Nonlinear Bias using the 2D Power Spectrum

    DOE PAGES

    Jennings, Elise; Wechsler, Risa H.

    2015-08-07

    We present the nonlinear 2D galaxy power spectrum, P(k, µ), in redshift space, measured from the Dark Sky simulations, using galaxy catalogs constructed with both halo occupation distribution and subhalo abundance matching methods, chosen to represent an intermediate-redshift sample of luminous red galaxies. We find that the information content in individual µ (cosine of the angle to the line of sight) bins is substantially richer than in multipole moments, and show that this can be used to isolate the impact of nonlinear growth and redshift space distortion (RSD) effects. Using the µ < 0.2 simulation data, which we show is not impacted by RSD effects, we can successfully measure the nonlinear bias to an accuracy of ~5% at k < 0.6 h Mpc^-1. This use of individual µ bins to extract the nonlinear bias successfully removes a large parameter degeneracy when constraining the linear growth rate of structure. We carry out a joint parameter estimation, using the low-µ simulation data to constrain the nonlinear bias and µ > 0.2 to constrain the growth rate, and show that f can be constrained to ~26(22)% to kmax < 0.4(0.6) h Mpc^-1 from clustering alone using a simple dispersion model, for a range of galaxy models. Our analysis of individual µ bins also reveals interesting physical effects which arise simply from different methods of populating halos with galaxies. We also find a prominent turnaround scale, at which RSD damping effects are greater than the nonlinear growth, which differs not only for each µ bin but also for each galaxy model. These features may provide unique signatures which could be used to shed light on the galaxy–dark matter connection. Furthermore, separating nonlinear growth and RSD effects by making use of the full information in the 2D galaxy power spectrum yields significant improvements in constraining cosmological parameters and may be a promising probe of galaxy formation models.
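    Binning the power spectrum in µ rather than compressing to multipoles is mechanically simple. A plane-parallel sketch for a gridded density field (unit cell spacing, no shot-noise or window corrections, illustrative names, not the Dark Sky pipeline) is:

```python
import numpy as np

def pk_mu_grid(delta, k_edges, mu_edges):
    """Average |delta_k|^2 in (k, mu) bins for a cubic grid with
    unit cell spacing; the z axis is the line of sight.
    Normalization and shot-noise subtraction are omitted."""
    n = delta.shape[0]
    kf = 2 * np.pi * np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    # mu = k_z / |k|, with mu = 0 assigned to the k = 0 mode.
    mu = np.divide(np.abs(kz), kmag,
                   out=np.zeros_like(kmag), where=kmag > 0)
    p = np.abs(np.fft.fftn(delta)) ** 2
    H, _, _ = np.histogram2d(kmag.ravel(), mu.ravel(),
                             bins=[k_edges, mu_edges], weights=p.ravel())
    N, _, _ = np.histogram2d(kmag.ravel(), mu.ravel(),
                             bins=[k_edges, mu_edges])
    return np.divide(H, N, out=np.zeros_like(H), where=N > 0)
```

    Rows of the returned array are k bins and columns are µ bins, so the low-µ columns used above to isolate the nonlinear bias are simply the first few columns of this table.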

  20. Dynamical computation of constrained flexible systems using a modal Udwadia-Kalaba formulation: Application to musical instruments.

    PubMed

    Antunes, J; Debut, V

    2017-02-01

    Most musical instruments consist of dynamical subsystems connected at a number of constraining points through which energy flows. For physical sound synthesis, one important difficulty is enforcing these coupling constraints. While standard techniques include the use of Lagrange multipliers or penalty methods, in this paper a different approach is explored, the Udwadia-Kalaba (U-K) formulation, which is rooted in analytical dynamics but avoids the use of Lagrange multipliers. This general and elegant formulation has been used nearly exclusively for conceptual systems of discrete masses or articulated rigid bodies, namely in robotics; however, its natural extension to continuous flexible systems is surprisingly absent from the literature. Here, such a modeling strategy is developed, and the potential of combining the U-K equation for constrained systems with the modal description is shown, in particular to simulate musical instruments. The objectives are twofold: (1) develop the U-K equation for constrained flexible systems with subsystems modelled through unconstrained modes; and (2) apply this framework to compute string/body coupled dynamics. This example complements previous work [Debut, Antunes, Marques, and Carvalho, Appl. Acoust. 108, 3-18 (2016)] on guitar modeling using penalty methods. Simulations show that the proposed technique provides similar results with a significant improvement in computational efficiency.
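    For context, the U-K equation gives the constrained accelerations in closed form (standard form from the analytical-dynamics literature; the superscript + denotes the Moore-Penrose pseudoinverse):

```latex
\ddot{q} \;=\; a \;+\; M^{-1/2}\left(A\,M^{-1/2}\right)^{+}\left(b - A\,a\right),
\qquad a = M^{-1} Q(q,\dot{q},t)
```

    for unconstrained dynamics M q̈ = Q subject to constraints of the form A(q, q̇, t) q̈ = b(q, q̇, t). Because no multipliers appear, the constraint forces are obtained directly at each time step, which is what makes the formulation attractive for the modal synthesis described above.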

  1. T-COMP—A suite of programs for extracting transmissivity from MODFLOW models

    USGS Publications Warehouse

    Halford, Keith J.

    2016-02-12

    Simulated transmissivities are constrained poorly by assigning permissible ranges of hydraulic conductivities from aquifer-test results to hydrogeologic units in groundwater-flow models. These wide ranges are derived from interpretations of many aquifer tests that are categorized by hydrogeologic unit. Uncertainty is added where contributing thicknesses differ between field estimates and numerical models. Wide ranges of hydraulic conductivities and discordant thicknesses result in simulated transmissivities that frequently are much greater than aquifer-test results. Multiple orders of magnitude differences frequently occur between simulated and observed transmissivities where observed transmissivities are less than 1,000 feet squared per day.
    Transmissivity observations from individual aquifer tests can constrain model calibration as head and flow observations do. This approach is superior to diluting aquifer-test results into generalized ranges of hydraulic conductivities. Observed and simulated transmissivities can be compared directly with T-COMP, a suite of three FORTRAN programs. Transmissivity observations require that simulated hydraulic conductivities and thicknesses in the volume investigated by an aquifer test be extracted and integrated into a simulated transmissivity. Transmissivities of MODFLOW model cells are sampled within the volume affected by an aquifer test as defined by a well-specific, radial-flow model of each aquifer test. Sampled transmissivities of model cells are averaged within a layer and summed across layers. Accuracy of the approach was tested with hypothetical, multiple-aquifer models where specified transmissivities ranged between 250 and 20,000 feet squared per day. More than 90 percent of simulated transmissivities were within a factor of 2 of specified transmissivities.
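    The average-within-a-layer, sum-across-layers rule described above is easy to state in code. A sketch for gridded MODFLOW-style arrays follows; the array names and shapes are illustrative, not T-COMP's actual FORTRAN file interface:

```python
import numpy as np

def simulated_transmissivity(K, dz, mask):
    """Integrate MODFLOW cell properties into one simulated
    transmissivity: average cell transmissivity within each layer
    over the sampled volume, then sum across layers.

    K    : hydraulic conductivity, shape (nlay, nrow, ncol), ft/d
    dz   : saturated thickness per cell, same shape, ft
    mask : boolean, cells inside the aquifer-test volume of influence
    """
    T_cell = K * dz                        # cell transmissivity, ft^2/d
    per_layer = [T_cell[k][mask[k]].mean() if mask[k].any() else 0.0
                 for k in range(K.shape[0])]
    return float(sum(per_layer))           # sum across layers
```

    In T-COMP the mask comes from the well-specific radial-flow model of each aquifer test; here it is simply supplied by the caller.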

  2. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    PubMed

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess high sensitivity.
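
    A minimal numerical sketch of the idea (a hypothetical one-state ODE with a closed-form solution and synthetic data, not the authors' actual Erk1/2 pathway model): each mixture component's mean is an ODE solution, so the mixture log-likelihood couples kinetic rates to subpopulation structure.

```python
import numpy as np

# hypothetical one-state ODE dx/dt = k*(1 - x), x(0) = 0, with the
# closed-form solution used in place of a numerical ODE solver
def ode_response(k, t):
    return 1.0 - np.exp(-k * t)

def mixture_loglik(data, t, weights, rates, sigma):
    """Log-likelihood of single-cell readouts under an ODE constrained
    mixture: each component mean is an ODE solution, and measurement
    noise is Gaussian with standard deviation sigma."""
    comps = [w * np.exp(-(data - ode_response(k, t))**2 / (2 * sigma**2))
             / np.sqrt(2 * np.pi * sigma**2)
             for w, k in zip(weights, rates)]
    return float(np.log(np.sum(comps, axis=0)).sum())

rng = np.random.default_rng(0)
t, n, sigma = 2.0, 200, 0.05
# synthetic population: 60 % strong responders (k = 1.5), 40 % weak (k = 0.2)
strong = rng.random(n) < 0.6
data = ode_response(np.where(strong, 1.5, 0.2), t) + rng.normal(0.0, sigma, n)

# the two-subpopulation model should outscore a homogeneous one
ll_mix = mixture_loglik(data, t, [0.6, 0.4], [1.5, 0.2], sigma)
ll_one = mixture_loglik(data, t, [1.0], [0.8], sigma)
```

    Maximizing such a likelihood over the weights and rates simultaneously recovers subpopulation sizes and kinetic parameters.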

  3. Constrained evolution in numerical relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  4. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
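
    A toy version of the constrained random walk can be written down directly. The sketch below runs Wang-Landau on a 1D Ising ring restricted to the fixed-magnetization (M = 0) sector, using Kawasaki exchange moves so the walk never leaves the constrained subspace. The model, system size, and update schedule are illustrative only, and the histogram-flatness check is omitted for brevity.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
N = 10
spins = np.array([1] * 5 + [-1] * 5)   # constrained sector: magnetization M = 0
rng.shuffle(spins)

def energy(s):
    # 1D Ising ring, E = -sum_i s_i * s_{i+1}
    return int(-np.sum(s * np.roll(s, 1)))

ln_g = defaultdict(float)   # running estimate of ln g(E) within this sector
hist = defaultdict(int)
f = 1.0                     # Wang-Landau modification factor
e = energy(spins)

for stage in range(8):      # halve f a few times (no flatness check here)
    for _ in range(5000):
        i, j = rng.integers(N, size=2)
        if spins[i] != spins[j]:
            # Kawasaki exchange move: preserves the magnetization constraint
            spins[i], spins[j] = spins[j], spins[i]
            e_new = energy(spins)
            if rng.random() < np.exp(min(0.0, ln_g[e] - ln_g[e_new])):
                e = e_new                                # accept
            else:
                spins[i], spins[j] = spins[j], spins[i]  # reject: undo
        ln_g[e] += f        # update the currently visited level either way
        hist[e] += 1
    f *= 0.5

magnetization = int(spins.sum())
levels_visited = sorted(hist)
```

    In the full method, one such walk is run per set of order-parameter values, and the sector-wise densities of states are joined afterward.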

  5. Large historical growth in global terrestrial gross primary production

    DOE PAGES

    Campbell, J. E.; Berry, J. A.; Seibt, U.; ...

    2017-04-05

    Growth in terrestrial gross primary production (GPP) may provide a negative feedback for climate change. It remains uncertain, however, to what extent biogeochemical processes can suppress global GPP growth. In consequence, model estimates of terrestrial carbon storage and carbon cycle-climate feedbacks remain poorly constrained. Here we present a global, measurement-based estimate of GPP growth during the twentieth century based on long-term atmospheric carbonyl sulphide (COS) records derived from ice core, firn, and ambient air samples. We interpret these records using a model that simulates changes in COS concentration due to changes in its sources and sinks, including a large sink that is related to GPP. We find that the COS record is most consistent with climate-carbon cycle model simulations that assume large GPP growth during the twentieth century (31% ± 5%; mean ± 95% confidence interval). Finally, while this COS analysis does not directly constrain estimates of future GPP growth, it provides a global-scale benchmark for historical carbon cycle simulations.

  7. An ensemble constrained variational analysis of atmospheric forcing data and its application to evaluate clouds in CAM5: Ensemble 3DCVA and Its Application

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2016-01-05

    Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) site of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to deficiencies of the physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.

  9. Power System Simulation Toolbox (psst)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamurthy, Dheepak

    This paper is an overview of the Power System Simulation Toolbox (psst). psst is an open-source Python application for the simulation and analysis of power system models. psst simulates wholesale market operation by solving a DC Optimal Power Flow (DCOPF), a Security Constrained Unit Commitment (SCUC) and a Security Constrained Economic Dispatch (SCED). psst also includes models for the various entities in a power system, such as Generator Companies (GenCos), Load Serving Entities (LSEs) and an Independent System Operator (ISO). psst features an open, modular, object-oriented architecture that makes it useful for researchers to customize, expand, and experiment beyond solving traditional problems. psst also includes a web-based Graphical User Interface (GUI) that allows for user-friendly interaction and for implementation on remote High Performance Computing (HPC) clusters for parallelized operations. This paper also provides an illustrative application of psst and benchmarks with standard IEEE test cases to show the advanced features and performance of the toolbox.
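
    To make the market-clearing step concrete, here is a toy economic dispatch of the kind a DCOPF/SCED solver handles, posed as a linear program. This is not psst's actual API; the two-bus network, costs, and limits are invented for illustration, and scipy's linprog stands in for a production solver.

```python
from scipy.optimize import linprog

# toy two-bus system: cheap generator g1 at bus 1, expensive g2 at bus 2,
# a 100 MW load at bus 2, and a 60 MW thermal limit on the connecting line
gen_cost = [10.0, 30.0]                   # $/MWh for g1, g2
A_eq = [[1.0, 1.0]]                       # power balance: g1 + g2 = load
b_eq = [100.0]
A_ub = [[1.0, 0.0], [-1.0, 0.0]]          # line flow bus1 -> bus2 equals g1
b_ub = [60.0, 60.0]                       # |flow| <= 60 MW
bounds = [(0.0, 120.0), (0.0, 100.0)]     # generator capacity limits

res = linprog(gen_cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds)
g1, g2 = res.x   # cheap unit dispatched up to the line limit, rest from g2
```

    The line constraint binds, so the cheap unit serves 60 MW and the expensive unit covers the remaining 40 MW; congestion, not cost, sets the dispatch.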

  10. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    NASA Astrophysics Data System (ADS)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  11. Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation

    NASA Astrophysics Data System (ADS)

    Du, Jiaoman; Yu, Lean; Li, Xiang

    2016-04-01

    Hazardous materials transportation is an important and hot issue of public safety. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
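
    The chance constraint can be illustrated without the genetic algorithm. The sketch below models arc travel times as triangular fuzzy numbers, uses the standard credibility formula for a triangular fuzzy number in place of fuzzy simulation, and keeps only routes whose credibility of meeting a deadline reaches a confidence level. The routes, times, deadline, and confidence level are all hypothetical.

```python
def cr_leq(tri, t):
    """Credibility Cr{xi <= t} for a triangular fuzzy number tri = (a, b, c),
    i.e. the average of the possibility and necessity measures."""
    a, b, c = tri
    if t <= a:
        return 0.0
    if t <= b:
        return (t - a) / (2.0 * (b - a))
    if t <= c:
        return (t - 2.0 * b + c) / (2.0 * (c - b))
    return 1.0

def path_time(arcs):
    # the sum of triangular fuzzy numbers is triangular, componentwise
    return tuple(sum(v) for v in zip(*arcs))

# two hypothetical routes, arc times as (optimistic, modal, pessimistic)
route_a = [(5, 10, 12), (8, 10, 14), (7, 10, 14)]   # sums to (20, 30, 40)
route_b = [(5, 12, 25), (4, 10, 15), (6, 10, 15)]   # sums to (15, 32, 55)
deadline, alpha = 35, 0.7

feasible = [r for r in (route_a, route_b)
            if cr_leq(path_time(r), deadline) >= alpha]
```

    In the full model this feasibility test sits inside the optimizer: the genetic algorithm searches over paths, and fuzzy simulation evaluates the credibility when no closed form is available.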

  12. Alternative ways of using field-based estimates to calibrate ecosystem models and their implications for carbon cycle studies

    USGS Publications Warehouse

    He, Yujie; Zhuang, Qianlai; McGuire, David; Liu, Yaling; Chen, Min

    2013-01-01

    Model-data fusion is a process in which field observations are used to constrain model parameters. How observations are used to constrain parameters has a direct impact on the carbon cycle dynamics simulated by ecosystem models. In this study, we present an evaluation of several options for the use of observations in modeling regional carbon dynamics and explore the implications of those options. We calibrated the Terrestrial Ecosystem Model on a hierarchy of three vegetation classification levels for the Alaskan boreal forest: species level, plant-functional-type level (PFT level), and biome level, and we examined the differences in simulated carbon dynamics. Species-specific field-based estimates were directly used to parameterize the model for species-level simulations, while weighted averages based on species percent cover were used to generate estimates for PFT- and biome-level model parameterization. We found that calibrated key ecosystem process parameters differed substantially among species and overlapped for species that are categorized into different PFTs. Our analysis of parameter sets suggests that the PFT-level parameterizations primarily reflected the dominant species and that functional information of some species was lost from the PFT-level parameterizations. The biome-level parameterization was primarily representative of the needleleaf PFT and lost information on broadleaf species or PFT function. Our results indicate that PFT-level simulations may be representative of the performance of species-level simulations while biome-level simulations may result in biased estimates. Improved theoretical and empirical justifications for grouping species into PFTs or biomes are needed to adequately represent the dynamics of ecosystem functioning and structure.
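
    The cover-weighted aggregation step is simple enough to spell out. In this sketch the species names, parameter values, and percent covers are invented; the point is only how species-level estimates collapse into PFT- and biome-level parameters, and why the aggregate tracks the dominant species.

```python
import numpy as np

# hypothetical calibrated parameter (e.g., a maximum photosynthesis rate)
# per species, with percent cover used as aggregation weights
species_param = {"white spruce": 42.0, "black spruce": 30.0, "paper birch": 65.0}
percent_cover = {"white spruce": 50.0, "black spruce": 40.0, "paper birch": 10.0}

def cover_weighted(params, cover, members):
    """Percent-cover-weighted average of a parameter over member species."""
    w = np.array([cover[s] for s in members])
    p = np.array([params[s] for s in members])
    return float((w * p).sum() / w.sum())

# the needleleaf PFT lumps the two spruces; the biome lumps everything
pft_needleleaf = cover_weighted(species_param, percent_cover,
                                ["white spruce", "black spruce"])
biome = cover_weighted(species_param, percent_cover, list(species_param))
```

    With 90 % needleleaf cover, the biome-level value sits close to the needleleaf value and carries little of the birch signal, mirroring the information loss described above.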

  13. Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?

    NASA Astrophysics Data System (ADS)

    Lin, Guangxing; Wan, Hui; Zhang, Kai; Qian, Yun; Ghan, Steven J.

    2016-09-01

    Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrative examples: the convection relaxation time scale (TAU), and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity of the constrained simulations depends on the detailed implementation of nudging and the mechanism through which the perturbed parameter affects precipitation and cloud. The relative computational costs of nudged and free-running simulations are determined by the magnitude of internal variability in the physical quantities of interest, as well as the magnitude of the parameter perturbation. In the case of a strong perturbation in convection, nudging temperature and/or winds with a 6 h relaxation time scale leads to nonnegligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while 1 year free-running simulations can satisfactorily capture the annual mean precipitation and cloud forcing sensitivities. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1 year free-running simulations are strongly affected by natural noise, while nudging winds effectively reduces the noise and reasonably reproduces the sensitivities. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.
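
    Nudging itself is just a linear relaxation term added to the model tendencies. The scalar toy model below (invented dynamics, noise level, and relaxation time; not CAM5) shows the mechanism: the nudged run is pulled away from the model's own attractor toward the reference state, which is exactly the side effect the abstract warns about when the relaxation competes with a perturbed parameterization.

```python
import numpy as np

# toy "model": relaxes toward its own attractor x_model with internal noise;
# nudging adds (x_ref - x) / tau_nudge, pulling the state toward a reference
def integrate(nudge, tau_nudge=6.0, dt=0.1, nsteps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x, x_model, x_ref = 0.0, 2.0, 1.0
    traj = np.empty(nsteps)
    for n in range(nsteps):
        dxdt = -(x - x_model) + rng.normal(0.0, 0.5)   # model physics + noise
        if nudge:
            dxdt += (x_ref - x) / tau_nudge            # nudging term
        x += dt * dxdt
        traj[n] = x
    return traj

free = integrate(nudge=False)     # settles near the model attractor (2.0)
nudged = integrate(nudge=True)    # dragged toward the reference (here 13/7)
```

    The compromise equilibrium depends on tau_nudge: a short relaxation time suppresses internal noise effectively but distorts the model's own balance more strongly.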

  14. Effect of land model ensemble versus coupled model ensemble on the simulation of precipitation climatology and variability

    NASA Astrophysics Data System (ADS)

    Wei, Jiangfeng; Dirmeyer, Paul A.; Yang, Zong-Liang; Chen, Haishan

    2017-10-01

    Through a series of model simulations with an atmospheric general circulation model coupled to three different land surface models, this study investigates the impacts of land model ensembles and of the coupled model ensemble on precipitation simulation. It is found that coupling an ensemble of land models to an atmospheric model has a very minor impact on the improvement of precipitation climatology and variability, but a simple ensemble average of the precipitation from the three individually coupled land-atmosphere models produces better results, especially for precipitation variability. The generally weak impact of land processes on precipitation is likely the main reason that the land model ensembles do not improve precipitation simulation. However, if there are large biases in the land surface model or land surface data set, correcting them could improve the simulated climate, especially for well-constrained regional climate simulations.

  15. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    NASA Astrophysics Data System (ADS)

    Volk, Brent L.; Lagoudas, Dimitris C.; Maitland, Duncan J.

    2011-09-01

    In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5-4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data.

  16. Projected strengthening of Amazonian dry season by constrained climate model simulations

    NASA Astrophysics Data System (ADS)

    Boisier, Juan P.; Ciais, Philippe; Ducharne, Agnès; Guimberteau, Matthieu

    2015-07-01

    The vulnerability of Amazonian rainforest, and the ecological services it provides, depends on an adequate supply of dry-season water, either as precipitation or stored soil moisture. How the rain-bearing South American monsoon will evolve across the twenty-first century is thus a question of major interest. Extensive savanization, with its loss of forest carbon stock and uptake capacity, is an extreme although very uncertain scenario. We show that the contrasting rainfall projections simulated for Amazonia by 36 global climate models (GCMs) can be reproduced with empirical precipitation models, calibrated with historical GCM data as functions of the large-scale circulation. A set of these simple models was therefore calibrated with observations and used to constrain the GCM simulations. In agreement with the current hydrologic trends, the resulting projection towards the end of the twenty-first century is for a strengthening of the monsoon seasonal cycle, and a dry-season lengthening in southern Amazonia. With this approach, the increase in the area subjected to lengthy, savannah-prone dry seasons is substantially larger than the GCM-simulated one. Our results confirm the dominant picture shown by the state-of-the-art GCMs, but suggest that the 'model democracy' view significantly underestimates these impacts.
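
    The emulation-and-constraint idea reduces to regression. The sketch below uses synthetic data in place of GCM output: an empirical precipitation model (here simply linear in two circulation predictors) is calibrated on a model's own history, then driven with "observed" circulation to produce an observation-constrained estimate. The predictors, coefficients, and noise level are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic stand-ins: two large-scale circulation predictors and the
# dry-season rainfall they control in a hypothetical GCM
n = 40                                   # "years" of historical GCM output
circ = rng.normal(size=(n, 2))
precip_gcm = circ @ np.array([1.2, -0.8]) + 5.0 + rng.normal(0.0, 0.1, n)

# calibrate the empirical model on the GCM's own history:
# precip ~ intercept + linear function of the circulation
X = np.column_stack([np.ones(n), circ])
coef, *_ = np.linalg.lstsq(X, precip_gcm, rcond=None)

# drive the calibrated model with "observed" circulation to obtain an
# observation-constrained precipitation estimate
circ_obs = np.array([[0.5, -1.0]])
precip_constrained = (np.column_stack([np.ones(1), circ_obs]) @ coef)[0]
```

    In the study this is done per GCM, and calibrating the same functional form against observations is what moves the projection away from the raw multi-model ('model democracy') average.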

  18. Water Budget Estimation by Assimilating Multiple Observations and Hydrological Modeling Using Constrained Ensemble Kalman Filtering

    NASA Astrophysics Data System (ADS)

    Pan, M.; Wood, E. F.

    2004-05-01

    This study explores a method to estimate the various components of the water cycle (ET, runoff, land storage, etc.) from a number of different information sources, including both observations and observation-enhanced model simulations. Unlike existing data assimilation schemes, this constrained Kalman filtering approach keeps the water budget perfectly closed while optimally updating the states of the underlying model (the VIC model) using observations. Assimilating different data sources in this way has several advantages: (1) a physical model is included, making the estimated time series smooth, gap-free, and more physically consistent; (2) uncertainties in the model and observations are properly addressed; (3) the model is constrained by observations, which reduces model biases; (4) the water balance is preserved throughout the assimilation. Experiments are carried out in the Southern Great Plains region, where the necessary observations have been collected. The method may also be implemented in other applications with physical constraints (e.g., energy cycles) and at different scales.
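
    The budget-closure step can be sketched as a projection. After an unconstrained Kalman update, the analysis state is projected onto the hyperplane where precipitation minus evapotranspiration, runoff, and storage change is zero. The numbers below are hypothetical, and a minimum-norm projection is used for simplicity; a covariance-weighted projection would distribute the correction according to the analysis error covariance.

```python
import numpy as np

# water-budget closure: P - ET - R - dS = 0, written as a @ x = 0
a = np.array([1.0, -1.0, -1.0, -1.0])

def constrain(x):
    """Project a state estimate onto the closed-budget hyperplane
    (minimum-norm correction)."""
    return x - a * (a @ x) / (a @ a)

# hypothetical analysis after an unconstrained Kalman update (mm/day):
# precipitation, evapotranspiration, runoff, storage change
x_analysis = np.array([3.2, 1.4, 0.9, 0.6])   # budget residual = 0.3 mm/day
x_closed = constrain(x_analysis)              # residual spread over components
```

    The 0.3 mm/day imbalance is split evenly across the four components, after which the budget closes exactly; the filter applies such a correction at every update.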

  19. Constraining Earth System Models in the Tropics with Multiple Satellite Observations

    NASA Astrophysics Data System (ADS)

    Shi, M.; Liu, J.; Saatchi, S. S.; Chan, S.; Yu, Y.; Zhao, M.

    2016-12-01

    Because of the impacts of cloud and atmospheric aerosol on spectral observations and the saturation of spectral observations over dense forests, the current spectral observations (e.g., Moderate Resolution Imaging Spectroradiometer) have large uncertainties in the tropics. Nevertheless, the backscatter observations from the SeaWinds Scatterometer onboard QuikSCAT (QSCAT) are sensitive to the variations of canopy water content and structure of forest canopy, and are not affected by clouds and atmospheric aerosols. In addition, the lack of sensitivity of the Soil Moisture Active Passive (SMAP) Level 1C brightness temperature (TB) to soil moisture under dense forest canopies (e.g., forests in tropics) makes the SMAP TB data a direct indicator of canopy properties. In this study, we use a variety of new satellite observations, including the QSCAT backscatter observations, the Gravity Recovery and Climate Experiment (GRACE) satellite's observed temporal gravity field variations, and the SMAP Level 1C TB, to constrain the carbon (C) cycle simulated by the Community Land Model version 4.5 BGC (CLM4.5) for the 2005 Amazonia drought and 2015 El Nino. Our results show that the leaf C pool size simulated by CLM4.5 decreases dramatically in southwest Amazonia in the 2005 drought, and recovers slowly afterward (after about 3 years). This result is consistent with the long-term C-recovery after the 2005 Amazonia drought observed by QSCAT. The slow C pool recovery is associated with large fire disturbance and the slow water storage recovery simulated by CLM4.5 and observed by GRACE. We will also discuss the impact of the 2015 El Nino on the tropical C dynamics constrained by SMAP Level 1C data. This study represents an innovative way of using satellite microwave observations to constrain C cycle in an Earth system model.

  20. Ten years of multiple data stream assimilation with the ORCHIDEE land surface model to improve regional to global simulated carbon budgets: synthesis and perspectives on directions for the future

    NASA Astrophysics Data System (ADS)

    Peylin, P. P.; Bacour, C.; MacBean, N.; Maignan, F.; Bastrikov, V.; Chevallier, F.

    2017-12-01

    Predicting the fate of carbon stocks and their sensitivity to climate change and land use/management strongly relies on our ability to accurately model net and gross carbon fluxes. However, simulated carbon and water fluxes remain subject to large uncertainties, partly because of unknown or poorly calibrated parameters. Over the past ten years, the carbon cycle data assimilation system at the Laboratoire des Sciences du Climat et de l'Environnement has investigated the benefit of assimilating multiple carbon cycle data streams into the ORCHIDEE LSM, the land surface component of the Institut Pierre Simon Laplace Earth System Model. These datasets have included FLUXNET eddy covariance data (net CO2 flux and latent heat flux) to constrain hourly to seasonal time-scale carbon cycle processes, remote sensing of the vegetation activity (MODIS NDVI) to constrain the leaf phenology, biomass data to constrain "slow" (yearly to decadal) processes of carbon allocation, and atmospheric CO2 concentrations to provide overall large scale constraints on the land carbon sink. Furthermore, we have investigated technical issues related to multiple data stream assimilation and choice of optimization algorithm. This has provided a wide-ranging perspective on the challenges we face in constraining model parameters and thus better quantifying, and reducing, model uncertainty in projections of the future global carbon sink. We review our past studies in terms of the impact of the optimization on key characteristics of the carbon cycle, e.g., the partitioning of the land carbon sink between the northern latitudes and the tropics, and compare to the classic atmospheric flux inversion approach.
Throughout, we discuss our work in context of the abovementioned challenges, and propose solutions for the community going forward, including the potential of new observations such as atmospheric COS concentrations and satellite-derived Solar Induced Fluorescence to constrain the gross carbon fluxes of the ORCHIDEE model.

  1. 1D-Var multilayer assimilation of X-band SAR data into a detailed snowpack model

    NASA Astrophysics Data System (ADS)

    Phan, X. V.; Ferro-Famil, L.; Gay, M.; Durand, Y.; Dumont, M.; Morin, S.; Allain, S.; D'Urso, G.; Girard, A.

    2014-10-01

    The structure and physical properties of a snowpack and their temporal evolution may be simulated using meteorological data and a snow metamorphism model. Such an approach may meet limitations related to potential divergences and accumulated errors, to a limited spatial resolution, to wind or topography-induced local modulations of the physical properties of a snow cover, etc. Exogenous data are then required in order to constrain the simulator and improve its performance over time. Synthetic-aperture radars (SARs) and, in particular, recent sensors provide reflectivity maps of snow-covered environments with high temporal and spatial resolutions. The radiometric properties of a snowpack measured at sufficiently high carrier frequencies are known to be tightly related to some of its main physical parameters, like its depth, snow grain size and density. SAR acquisitions may then be used, together with an electromagnetic backscattering model (EBM) able to simulate the reflectivity of a snowpack from a set of physical descriptors, in order to constrain a physical snowpack model. In this study, we introduce a variational data assimilation scheme coupling TerraSAR-X radiometric data into the snowpack evolution model Crocus. The physical properties of a snowpack, such as snow density and optical diameter of each layer, are simulated by Crocus, fed by the local reanalysis of meteorological data (SAFRAN) at a French Alpine location. These snowpack properties are used as inputs of an EBM based on dense media radiative transfer (DMRT) theory, which simulates the total backscattering coefficient of a dry snow medium at X and higher frequency bands. After evaluating the sensitivity of the EBM to snowpack parameters, a 1D-Var data assimilation scheme is implemented in order to minimize the discrepancies between EBM simulations and observations obtained from TerraSAR-X acquisitions by modifying the physical parameters of the Crocus-simulated snowpack. 
The algorithm then re-initializes Crocus with the modified snowpack physical parameters, allowing it to continue the simulation of snowpack evolution, with adjustments based on remote sensing information. This method is evaluated using multi-temporal TerraSAR-X images acquired over the specific site of the Argentière glacier (Mont-Blanc massif, French Alps) to constrain the evolution of Crocus. Results indicate that X-band SAR data can be taken into account to modify the evolution of snowpack simulated by Crocus.
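The 1D-Var step described above can be sketched compactly. The following is a minimal illustration only: a hypothetical two-component snowpack state (mean density, optical diameter), a toy linear stand-in for the DMRT backscatter model, and invented covariances — none of these numbers come from the paper.

```python
import numpy as np

# Minimal 1D-Var sketch: minimize the usual variational cost
#   J(x) = 1/2 (x-xb)' B^-1 (x-xb) + 1/2 (y-Hx)' R^-1 (y-Hx)
# by gradient descent, pulling the Crocus background xb toward the
# TerraSAR-X observation y through a toy linear "backscatter" operator.
def one_d_var(xb, B, y, R, Hmat, iters=5000, lr=0.01):
    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
    x = xb.astype(float).copy()
    for _ in range(iters):
        grad = Binv @ (x - xb) - Hmat.T @ Rinv @ (y - Hmat @ x)
        x -= lr * grad
    return x

Hmat = np.array([[0.6, 0.4]])        # hypothetical linear observation operator
xb = np.array([300.0, 0.5])          # background: density (kg/m3), diameter (mm)
B = np.diag([50.0**2, 0.1**2])       # background error covariance
y = np.array([180.5])                # "observed" backscatter proxy
R = np.array([[1.0]])                # observation error variance
xa = one_d_var(xb, B, y, R, Hmat)    # analysis state handed back to Crocus
```

For a linear operator this converges to the same analysis as the closed-form best linear unbiased estimate, which makes the sketch easy to check.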

  2. Use of remote-sensing reflectance to constrain a data assimilating marine biogeochemical model of the Great Barrier Reef

    NASA Astrophysics Data System (ADS)

    Jones, Emlyn M.; Baird, Mark E.; Mongin, Mathieu; Parslow, John; Skerratt, Jenny; Lovell, Jenny; Margvelashvili, Nugzar; Matear, Richard J.; Wild-Allen, Karen; Robson, Barbara; Rizwi, Farhan; Oke, Peter; King, Edward; Schroeder, Thomas; Steven, Andy; Taylor, John

    2016-12-01

    Skillful marine biogeochemical (BGC) models are required to understand a range of coastal and global phenomena such as changes in nitrogen and carbon cycles. The refinement of BGC models through the assimilation of variables calculated from observed in-water inherent optical properties (IOPs), such as phytoplankton absorption, is problematic. Empirically derived relationships between IOPs and variables such as chlorophyll-a concentration (Chl a), total suspended solids (TSS) and coloured dissolved organic matter (CDOM) have been shown to have errors that can exceed 100 % of the observed quantity. These errors are greatest in shallow coastal regions, such as the Great Barrier Reef (GBR), due to the additional signal from bottom reflectance. Rather than assimilate quantities calculated using IOP algorithms, this study demonstrates the advantages of assimilating quantities calculated directly from the less error-prone satellite remote-sensing reflectance (RSR). To assimilate the observed RSR, we use an in-water optical model to produce an equivalent simulated RSR and calculate the mismatch between the observed and simulated quantities to constrain the BGC model with a deterministic ensemble Kalman filter (DEnKF). The traditional assumption that simulated surface Chl a is equivalent to the remotely sensed OC3M estimate of Chl a resulted in a forecast error of approximately 75 %. We show this error can be halved by instead using simulated RSR to constrain the model via the assimilation system. When the analysis and forecast fields from the RSR-based assimilation system are compared with the non-assimilating model, a comparison against independent in situ observations of Chl a, TSS and dissolved inorganic nutrients (NO3, NH4 and DIP) showed that errors are reduced by up to 90 %. In all cases, the assimilation system improves the simulation compared to the non-assimilating model. 
Our approach allows for the incorporation of vast quantities of remote-sensing observations that have in the past been discarded due to shallow water and/or artefacts introduced by terrestrially derived TSS and CDOM or the lack of a calibrated regional IOP algorithm.
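The deterministic ensemble Kalman filter used above has a compact generic form; the toy sketch below follows the usual DEnKF square-root recipe (full Kalman correction of the ensemble mean, half-gain damping of the anomalies, no observation perturbations) with invented state and observation sizes rather than the BGC model's.

```python
import numpy as np

# Generic DEnKF analysis step (Sakov & Oke style), on a toy 2-variable state.
def denkf_update(E, y, H, R):
    n = E.shape[1]
    xm = E.mean(axis=1, keepdims=True)
    A = E - xm                                   # ensemble anomalies
    Pf = A @ A.T / (n - 1)                       # forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    xa = xm + K @ (y - H @ xm)                   # analysed mean: full gain
    Aa = A - 0.5 * K @ H @ A                     # analysed anomalies: half gain
    return xa + Aa

rng = np.random.default_rng(0)
E = 1.0 + 0.1 * rng.standard_normal((2, 20))     # 20-member forecast ensemble
H = np.array([[1.0, 0.0]])                       # observe first variable only
y = np.array([[1.2]])
R = np.array([[0.05**2]])
Ea = denkf_update(E, y, H, R)                    # analysed ensemble
```

In the study itself the observation operator is the in-water optical model that maps BGC state to remote-sensing reflectance; here it is just a selection matrix.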

  3. Forward Modeling of Atmospheric Carbon Dioxide in GEOS-5: Uncertainties Related to Surface Fluxes and Sub-Grid Transport

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Ott, Lesley E.; Zhu, Zhengxin; Bowman, Kevin; Brix, Holger; Collatz, G. James; Dutkiewicz, Stephanie; Fisher, Joshua B.; Gregg, Watson W.; Hill, Chris; hide

    2011-01-01

Forward GEOS-5 AGCM simulations of CO2, with transport constrained by analyzed meteorology for 2009-2010, are examined. The CO2 distributions are evaluated using AIRS upper tropospheric CO2 and ACOS-GOSAT total column CO2 observations. Different combinations of surface CO2 fluxes are used to generate ensembles of runs that span some uncertainty in surface emissions and uptake. The fluxes are specified in GEOS-5 from different inventories (fossil and biofuel), different data-constrained estimates of land biological emissions, and different data-constrained ocean-biology estimates. One set of fluxes is based on the established "Transcom" database and others are constructed using contemporary satellite observations to constrain land and ocean process models. Likewise, different approximations to sub-grid transport are employed to construct an ensemble of CO2 distributions related to transport variability. This work is part of NASA's "Carbon Monitoring System Flux Pilot Project".

  4. Explicit Simulation of Networks of Outlet Glaciers to Constrain Greenland's Sea Level Contribution

    NASA Astrophysics Data System (ADS)

    Ultee, E.; Bassis, J. N.

    2017-12-01

    Ice from the Greenland Ice Sheet drains to the ocean through hundreds of outlet glaciers, many of which are too small to be accurately resolved in continental-scale ice sheet models. Moreover, despite the fact that dynamic changes in Greenland outlet glaciers are currently responsible for about half of the ice sheet's contribution to global sea level, all but the largest are often excluded from major sea level assessments. We have previously developed and validated a simple model that simulates advance and retreat of networks of marine-terminating glaciers based on the perfect plastic approximation. Here we apply this model to a selection of forcing scenarios, representing both climate persistence and extreme scenarios, to constrain changes in calving flux from the most significant Greenland outlet glaciers. Our model can be implemented in standalone mode or as the calving module in a more sophisticated large-scale model, providing constraints on Greenland's future contribution to global sea level rise under a range of scenarios.

  5. Longitudinal train dynamics model for a rail transit simulation system

    DOE PAGES

    Wang, Jinghui; Rakha, Hesham A.

    2018-01-01

The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.
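The shape of such a constrained calibration can be illustrated in miniature. The sketch below fits coefficients of a hypothetical acceleration model a(s) = c0 - c1*s - c2*s^2 (s = speed normalised by its maximum) to synthetic speed/acceleration pairs, enforcing non-negativity by projected gradient descent; the paper's actual constrained non-linear programme is richer than this.

```python
import numpy as np

# Projected gradient descent for a least-squares calibration with c >= 0.
def calibrate(s, a_obs, steps=10000, lr=1.0):
    X = np.column_stack([np.ones_like(s), -s, -s**2])  # model design matrix
    c = np.zeros(3)
    for _ in range(steps):
        grad = X.T @ (X @ c - a_obs) / len(s)
        c = np.maximum(c - lr * grad, 0.0)   # project onto the constraint set
    return c, X

s = np.linspace(0.0, 1.0, 50)                # normalised speeds
a_obs = 1.2 - 0.4 * s - 0.2 * s**2           # synthetic "observed" accelerations
c_hat, X = calibrate(s, a_obs)
```

Projection after each gradient step is the simplest way to keep iterates feasible; a production calibration would typically use a dedicated constrained solver instead.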

  6. Longitudinal train dynamics model for a rail transit simulation system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jinghui; Rakha, Hesham A.

The paper develops a longitudinal train dynamics model in support of microscopic railway transportation simulation. The model can be calibrated without any mechanical data, making it ideal for implementation in transportation simulators. The calibration and validation work is based on data collected from the Portland light rail train fleet. The calibration procedure is mathematically formulated as a constrained non-linear optimization problem. The validity of the model is assessed by comparing instantaneous model predictions against field observations, and also evaluated in the domains of acceleration/deceleration versus speed and acceleration/deceleration versus distance. A test is conducted to investigate the adequacy of the model in simulation implementation. The results demonstrate that the proposed model can adequately capture instantaneous train dynamics, and provides good performance in the simulation test. Thus, the model provides a simple theoretical foundation for microscopic simulators and will significantly support the planning, management and control of railway transportation systems.

  7. Thermodynamically Constrained Averaging Theory (TCAT) Two-Phase Flow Model: Derivation, Closure, and Simulation Results

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Miller, C. T.; Dye, A. L.; Gray, W. G.; McClure, J. E.; Rybak, I.

    2015-12-01

The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for two-fluid-phase flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as interfacial areas, contact angles, interfacial tension, and curvatures; and dynamics of interface movement and relaxation to an equilibrium state. In order to render the TCAT model solvable, certain closure relations are needed to relate fluid pressure, interfacial areas, curvatures, and relaxation rates. In this work, we formulate and solve a TCAT-based two-fluid-phase flow model. We detail the formulation of the model, which is a specific instance from a hierarchy of two-fluid-phase flow models that emerge from the theory. We show the closure problem that must be solved. Using recent results from high-resolution microscale simulations, we advance a set of closure relations that produce a closed model. Lastly, we solve the model using a locally conservative numerical scheme and compare the TCAT model to the traditional model.

  8. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    PubMed Central

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling are performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigate the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles are performed during each test. The material is observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5 MPa to 4.2 MPa is observed for the constrained displacement recovery experiments. After performing the experiments, the Chen and Lagoudas model is used to simulate and predict the experimental results. The material properties used in the constitutive model – namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction – are calibrated from a single 10% extension free recovery experiment. The model is then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data. PMID:22003272

  9. Stability analysis in tachyonic potential chameleon cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farajollahi, H.; Salehi, A.; Tayebi, F.

    2011-05-01

    We study general properties of attractors for tachyonic potential chameleon scalar-field model which possess cosmological scaling solutions. An analytic formulation is given to obtain fixed points with a discussion on their stability. The model predicts a dynamical equation of state parameter with phantom crossing behavior for an accelerating universe. We constrain the parameters of the model by best fitting with the recent data-sets from supernovae and simulated data points for redshift drift experiment generated by Monte Carlo simulations.
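The fixed-point machinery behind such stability analyses is generic: locate x* with f(x*) = 0 and classify it via the eigenvalues of the Jacobian. The sketch below applies it to an illustrative 2D toy system, not to the tachyonic chameleon equations themselves.

```python
import numpy as np

# Numerically estimate the Jacobian of an autonomous system at a point.
def jacobian(f, x, eps=1e-6):
    J = np.zeros((len(x), len(x)))
    fx = f(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (f(xp) - fx) / eps   # forward-difference column
    return J

def f(x):
    # toy system: x' = x(1 - x), y' = -y; it has a fixed point at (1, 0)
    return np.array([x[0] * (1.0 - x[0]), -x[1]])

xstar = np.array([1.0, 0.0])
eigs = np.linalg.eigvals(jacobian(f, xstar))
stable = bool(np.all(eigs.real < 0))   # attractor if all real parts negative
```

For the cosmological system the state variables and f come from the field equations, but the classification step (sign of the eigenvalue real parts) is the same.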

  10. An Observationally Constrained Evaluation of the Oxidative Capacity in the Tropical Western Pacific Troposphere

    NASA Technical Reports Server (NTRS)

    Nicely, Julie M.; Anderson, Daniel C.; Canty, Timothy P.; Salawitch, Ross J.; Wolfe, Glenn M.; Apel, Eric C.; Arnold, Steve R.; Atlas, Elliot L.; Blake, Nicola J.; Bresch, James F.; hide

    2016-01-01

Hydroxyl radical (OH) is the main daytime oxidant in the troposphere and determines the atmospheric lifetimes of many compounds. We use aircraft measurements of O3, H2O, NO, and other species from the Convective Transport of Active Species in the Tropics (CONTRAST) field campaign, which occurred in the tropical western Pacific (TWP) during January-February 2014, to constrain a photochemical box model and estimate concentrations of OH throughout the troposphere. We find that tropospheric column OH (OHCOL) inferred from CONTRAST observations is 12 to 40% higher than found in chemical transport models (CTMs), including CAM-chem-SD run with 2014 meteorology as well as eight models that participated in POLMIP (2008 meteorology). Part of this discrepancy is due to a clear-sky sampling bias that affects CONTRAST observations; accounting for this bias and also for a small difference in chemical mechanism results in our empirically based value of OHCOL being 0 to 20% larger than found within global models. While these global models simulate observed O3 reasonably well, they underestimate NOx (NO + NO2) by a factor of 2, resulting in OHCOL approx. 30% lower than box model simulations constrained by observed NO. Underestimations by CTMs of observed CH3CHO throughout the troposphere and of HCHO in the upper troposphere further contribute to differences between our constrained estimates of OH and those calculated by CTMs. Finally, our calculations do not support the prior suggestion of the existence of a tropospheric OH minimum in the TWP, because during January-February 2014 observed levels of O3 and NO were considerably larger than previously reported values in the TWP.

  11. Model Predictive Control Based Motion Drive Algorithm for a Driving Simulator

    NASA Astrophysics Data System (ADS)

    Rehmatullah, Faizan

In this research, we develop a model predictive control based motion drive algorithm for the driving simulator at the Toronto Rehabilitation Institute. Motion drive algorithms exploit the limitations of the human vestibular system to create a perception of motion within the constrained workspace of a simulator. In the absence of visual cues, the human perception system is unable to distinguish between acceleration and the force of gravity. The motion drive algorithm determines control inputs to displace the simulator platform and, through the resulting inertial forces and angular rates, creates the perception of motion. By using model predictive control, we can optimize the use of the simulator workspace for every maneuver while reproducing the perceived motion of the vehicle. With its ability to handle nonlinear constraints, model predictive control allows us to incorporate workspace limitations directly.
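The receding-horizon idea can be shown on a deliberately stripped-down example: a 1-D platform modelled as a double integrator tracks a commanded specific force while a quadratic penalty discourages leaving the workspace. A real MPC formulation would use hard constraints, vestibular filters, and a proper QP solver; this penalty-plus-gradient-descent sketch only illustrates the structure.

```python
import numpy as np

# One receding-horizon step: choose inputs over a horizon N, apply only u[0].
def mpc_first_input(p0, v0, a_cmd, N=20, dt=0.05, p_max=0.5, w=100.0,
                    iters=3000, lr=1e-3):
    k = np.arange(1, N + 1)
    base = p0 + v0 * k * dt                   # predicted position, zero input
    L = np.tril(np.ones((N, N)))
    S = (L @ L) * dt * dt                     # linear map from inputs to positions
    u = np.zeros(N)
    for _ in range(iters):
        p = base + S @ u
        viol = np.clip(np.abs(p) - p_max, 0.0, None) * np.sign(p)
        # cost: track a_cmd with each input + soft workspace penalty
        grad = 2.0 * (u - a_cmd) + 2.0 * w * (S.T @ viol)
        u -= lr * grad
    return u[0]                               # receding horizon: first input only

# Platform near the workspace edge and drifting outward: the controller
# must back off from the commanded 1 m/s^2 and brake.
u0 = mpc_first_input(p0=0.4, v0=0.5, a_cmd=1.0)
```

Applying only the first input and re-solving at the next sample is what distinguishes MPC from a one-shot trajectory optimization.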

  12. The Detection and Attribution Model Intercomparison Project (DAMIP v1.0)contribution to CMIP6

    DOE PAGES

    Gillett, Nathan P.; Shiogama, Hideo; Funke, Bernd; ...

    2016-10-18

Detection and attribution (D&A) simulations were important components of CMIP5 and underpinned the climate change detection and attribution assessments of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. The primary goals of the Detection and Attribution Model Intercomparison Project (DAMIP) are to facilitate improved estimation of the contributions of anthropogenic and natural forcing changes to observed global warming as well as to observed global and regional changes in other climate variables; to contribute to the estimation of how historical emissions have altered and are altering contemporary climate risk; and to facilitate improved observationally constrained projections of future climate change. D&A studies typically require unforced control simulations and historical simulations including all major anthropogenic and natural forcings. Such simulations will be carried out as part of the DECK and the CMIP6 historical simulation. In addition D&A studies require simulations covering the historical period driven by individual forcings or subsets of forcings only: such simulations are proposed here. Key novel features of the experimental design presented here include firstly new historical simulations with aerosols-only, stratospheric-ozone-only, CO2-only, solar-only, and volcanic-only forcing, facilitating an improved estimation of the climate response to individual forcing, secondly future single forcing experiments, allowing observationally constrained projections of future climate change, and thirdly an experimental design which allows models with and without coupled atmospheric chemistry to be compared on an equal footing.

  13. The Detection and Attribution Model Intercomparison Project (DAMIP v1.0) contribution to CMIP6

    NASA Astrophysics Data System (ADS)

    Gillett, Nathan P.; Shiogama, Hideo; Funke, Bernd; Hegerl, Gabriele; Knutti, Reto; Matthes, Katja; Santer, Benjamin D.; Stone, Daithi; Tebaldi, Claudia

    2016-10-01

    Detection and attribution (D&A) simulations were important components of CMIP5 and underpinned the climate change detection and attribution assessments of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. The primary goals of the Detection and Attribution Model Intercomparison Project (DAMIP) are to facilitate improved estimation of the contributions of anthropogenic and natural forcing changes to observed global warming as well as to observed global and regional changes in other climate variables; to contribute to the estimation of how historical emissions have altered and are altering contemporary climate risk; and to facilitate improved observationally constrained projections of future climate change. D&A studies typically require unforced control simulations and historical simulations including all major anthropogenic and natural forcings. Such simulations will be carried out as part of the DECK and the CMIP6 historical simulation. In addition D&A studies require simulations covering the historical period driven by individual forcings or subsets of forcings only: such simulations are proposed here. Key novel features of the experimental design presented here include firstly new historical simulations with aerosols-only, stratospheric-ozone-only, CO2-only, solar-only, and volcanic-only forcing, facilitating an improved estimation of the climate response to individual forcing, secondly future single forcing experiments, allowing observationally constrained projections of future climate change, and thirdly an experimental design which allows models with and without coupled atmospheric chemistry to be compared on an equal footing.

  14. OCO-2 Column Carbon Dioxide and Biometric Data Jointly Constrain Parameterization and Projection of a Global Land Model

    NASA Astrophysics Data System (ADS)

    Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III

    2015-12-01

Uncertainty in predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of parameter estimation is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of dry air mole fraction XCO2 and solar induced fluorescence (SIF) to independently constrain estimates of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 according to the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be effectively used for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters for the carbon cycle (e.g., maximum carboxylation rate, turnover times and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration). The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will then be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how sampling frequency and record length could affect the model constraint and prediction.
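The MCMC calibration step can be sketched on a deliberately tiny stand-in: a single-pool carbon model dC/dt = u - C/tau with one unknown turnover time tau, fitted to synthetic noisy observations by a Metropolis random walk. The model, prior, and data below are all invented for illustration.

```python
import numpy as np

# Log-posterior: flat prior on tau in (1, 100), Gaussian likelihood around
# the analytic one-pool solution C(t) = u*tau*(1 - exp(-t/tau)), C(0) = 0.
def log_post(tau, t, obs, u=1.0, sigma=0.05):
    if not (1.0 < tau < 100.0):
        return -np.inf
    C = u * tau * (1.0 - np.exp(-t / tau))
    return -0.5 * np.sum((obs - C) ** 2) / sigma**2

rng = np.random.default_rng(42)
t = np.linspace(0.0, 50.0, 40)
true_tau = 10.0
obs = true_tau * (1.0 - np.exp(-t / true_tau)) + 0.05 * rng.standard_normal(40)

# Metropolis random walk over tau.
tau, chain = 5.0, []
lp = log_post(tau, t, obs)
for _ in range(5000):
    prop = tau + 0.5 * rng.standard_normal()
    lp_prop = log_post(prop, t, obs)
    if np.log(rng.random()) < lp_prop - lp:     # accept/reject step
        tau, lp = prop, lp_prop
    chain.append(tau)
post_mean = np.mean(chain[1000:])               # discard burn-in
```

In the project itself the role of `log_post` is played by the matrix-based CLM4.5 emulator with many parameters; the accept/reject mechanics are unchanged.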

  15. Analysis and Simulation of Far-Field Seismic Data from the Source Physics Experiment

    DTIC Science & Technology

    2012-09-01

    ANALYSIS AND SIMULATION OF FAR-FIELD SEISMIC DATA FROM THE SOURCE PHYSICS EXPERIMENT Arben Pitarka, Robert J. Mellors, Arthur J. Rodgers, Sean...Security Site (NNSS) provides new data for investigating the excitation and propagation of seismic waves generated by buried explosions. A particular... seismic model. The 3D seismic model includes surface topography. It is based on regional geological data, with material properties constrained by shallow

  16. Elastic Model Transitions: a Hybrid Approach Utilizing Quadratic Inequality Constrained Least Squares (LSQI) and Direct Shape Mapping (DSM)

    NASA Technical Reports Server (NTRS)

    Jurenko, Robert J.; Bush, T. Jason; Ottander, John A.

    2014-01-01

    A method for transitioning linear time invariant (LTI) models in time varying simulation is proposed that utilizes both quadratically constrained least squares (LSQI) and Direct Shape Mapping (DSM) algorithms to determine physical displacements. This approach is applicable to the simulation of the elastic behavior of launch vehicles and other structures that utilize multiple LTI finite element model (FEM) derived mode sets that are propagated throughout time. The time invariant nature of the elastic data for discrete segments of the launch vehicle trajectory presents a problem of how to properly transition between models while preserving motion across the transition. In addition, energy may vary between flex models when using a truncated mode set. The LSQI-DSM algorithm can accommodate significant changes in energy between FEM models and carries elastic motion across FEM model transitions. Compared with previous approaches, the LSQI-DSM algorithm shows improvements ranging from a significant reduction to a complete removal of transients across FEM model transitions as well as maintaining elastic motion from the prior state.

  17. Modeling the Land Use/Cover Change in an Arid Region Oasis City Constrained by Water Resource and Environmental Policy Change using Cellular Automata Model

    NASA Astrophysics Data System (ADS)

    Hu, X.; Li, X.; Lu, L.

    2017-12-01

Land use/cover change (LUCC) is an important subject in research on global environmental change and sustainable development, while spatial simulation of land use/cover change is one of the key topics of LUCC and is also difficult due to the complexity of the system. The cellular automata (CA) model plays an irreplaceable role in simulating land use/cover change processes because of its powerful spatial computing capability. However, the majority of current CA land use/cover models are binary-state models that cannot provide more general information about the overall spatial pattern of land use/cover change. Here, a multi-state logistic-regression-based Markov cellular automata (MLRMCA) model and a multi-state artificial-neural-network-based Markov cellular automata (MANNMCA) model were developed and used to simulate the complex land use/cover evolutionary process in an arid region oasis city constrained by water resources and environmental policy change, the Zhangye city, during the period 1990-2010. The results indicated that the MANNMCA model was superior to the MLRMCA model in simulation accuracy, suggesting that combining an artificial neural network with CA can more effectively capture the complex relationships between land use/cover change and a set of spatial variables. Although the MLRMCA model also has some advantages, the MANNMCA model is more appropriate for simulating complex land use/cover dynamics. The two proposed models are effective and reliable, and can reflect the spatial evolution of regional land use/cover changes. These results also have potential implications for the impact assessment of water resources, ecological restoration, and sustainable urban development in arid areas.
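A schematic CA transition step in this spirit combines a per-cell suitability score, a neighbourhood density term, and a stochastic draw. The suitability surface, weights, and threshold below are invented for illustration; a real MLRMCA would derive the suitability from a fitted logistic regression over spatial drivers.

```python
import numpy as np

# One CA step: non-target cells convert to the target state where the
# combined transition potential, modulated by a random draw, is high enough.
def ca_step(grid, suit, target_state=1, w_nb=0.5, thresh=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # 3x3 neighbourhood density of the target state (wrap-around edges)
    tgt = (grid == target_state).astype(float)
    nb = sum(np.roll(np.roll(tgt, di, 0), dj, 1)
             for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9.0
    p = (1 - w_nb) * suit + w_nb * nb          # combined transition potential
    new = grid.copy()
    grow = (grid != target_state) & (p * rng.random(grid.shape) > thresh)
    new[grow] = target_state
    return new

rng = np.random.default_rng(1)
grid = (rng.random((50, 50)) < 0.1).astype(int)   # sparse initial "urban" cells
suit = rng.random((50, 50))                        # static suitability surface
nxt = ca_step(grid, suit)
```

Constraints such as water availability or protection policy would enter by zeroing the suitability of restricted cells.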

  18. The Molecular Structure of a Phosphatidylserine Bilayer Determined by Scattering and Molecular Dynamics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Jianjun; Cheng, Xiaolin; Monticelli, Luca

    2014-01-01

Phosphatidylserine (PS) lipids play essential roles in biological processes, including enzyme activation and apoptosis. We report on the molecular structure and atomic scale interactions of a fluid bilayer composed of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphatidylserine (POPS). A scattering density profile model, aided by molecular dynamics (MD) simulations, was developed to jointly refine different contrast small-angle neutron and X-ray scattering data, which yielded a lipid area of 62.7 Å² at 25 °C. MD simulations with POPS lipid area constrained at different values were also performed using all-atom and aliphatic united-atom models. The optimal simulated bilayer was obtained using a model-free comparison approach. Examination of the simulated bilayer, which agrees best with the experimental scattering data, reveals a preferential interaction between Na+ ions and the terminal serine and phosphate moieties. Long-range inter-lipid interactions were identified, primarily between the positively charged ammonium, and the negatively charged carboxylic and phosphate oxygens. The area compressibility modulus KA of the POPS bilayer was derived by quantifying lipid area as a function of surface tension from area-constrained MD simulations. It was found that POPS bilayers possess a much larger KA than that of neutral phosphatidylcholine lipid bilayers. We propose that the unique molecular features of POPS bilayers may play an important role in certain physiological functions.
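The compressibility-modulus estimate described above reduces to a linear fit: for small strains the surface tension obeys gamma ≈ KA·(A - A0)/A0, so KA is the slope of tension against area strain. The tensions and areas below are fabricated to show only the arithmetic, not the paper's values.

```python
import numpy as np

A0 = 62.7                                        # zero-tension lipid area, Å^2
A = np.array([61.5, 62.1, 62.7, 63.3, 63.9])     # constrained simulation areas
gamma = np.array([-4.2, -2.1, 0.0, 2.1, 4.2])    # toy surface tensions, mN/m

strain = (A - A0) / A0                           # dimensionless area strain
K_A, intercept = np.polyfit(strain, gamma, 1)    # slope = K_A, in mN/m
```

With exactly linear toy data the fit recovers the slope 2.1/(0.6/62.7) = 219.45 mN/m; real simulation data would scatter around the line and the fit would be restricted to the small-strain regime.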

  19. Observed and Simulated Eddy Diffusivity Upstream of the Drake Passage

    NASA Astrophysics Data System (ADS)

    Tulloch, R.; Ferrari, R. M.; Marshall, J.

    2012-12-01

Estimates of eddy diffusivity in the Southern Ocean are poorly constrained due to a lack of observations. We compare the first direct estimate of isopycnal eddy diffusivity upstream of the Drake Passage (from Ledwell et al. 2011) with a numerical simulation. The estimate is computed from a point tracer release as part of the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES). We find that the observational diffusivity estimate of about 500 m^2/s at 1500 m depth is close to that computed in a data-constrained, 1/20th of a degree simulation of the Drake Passage region. This tracer estimate also agrees with Lagrangian float calculations in the model. The role of mean-flow suppression of eddy diffusivity at shallower depths will also be discussed.

  20. A flexible open-source toolkit for lava flow simulations

    NASA Astrophysics Data System (ADS)

    Mossoux, Sophie; Feltz, Adelin; Poppe, Sam; Canters, Frank; Kervyn, Matthieu

    2014-05-01

Lava flow hazard modeling is a useful tool for scientists and stakeholders confronted with imminent or long-term hazard from basaltic volcanoes. It can improve their understanding of the spatial distribution of volcanic hazard, inform land use decisions and improve city evacuation planning during a volcanic crisis. Although a range of empirical, stochastic and physically based lava flow models exists, these models are rarely available or require a large number of physical constraints. We present a GIS toolkit which models lava flow propagation from one or multiple eruptive vents, defined interactively on a Digital Elevation Model (DEM). It combines existing probabilistic (VORIS) and deterministic (FLOWGO) models in order to improve the simulation of lava flow spatial spread and terminal length. Not only is this toolkit open-source, running in Python, which allows users to adapt the code to their needs, but it also allows users to combine the included models in different ways. The lava flow paths are determined based on the probabilistic steepest slope (VORIS model - Felpeto et al., 2001), which can be constrained in order to favour concentrated or dispersed flow fields. Moreover, the toolkit allows including a corrective factor so that the lava can overcome small topographical obstacles or pits. The lava flow terminal length can be constrained using a fixed length value, a Gaussian probability density function, or can be calculated based on the thermo-rheological properties of the open-channel lava flow (FLOWGO model - Harris and Rowland, 2001). These slope-constrained properties allow estimating the velocity of the flow and its heat losses. The lava flow stops when its velocity is zero or the lava temperature reaches the solidus. Recent lava flows of Karthala volcano (Comoros islands) are here used to demonstrate the quality of lava flow simulations with the toolkit, using a quantitative assessment of the match of the simulation with the real lava flows. 
The influence of the different input parameters on the quality of the simulations is discussed. REFERENCES: Felpeto et al. (2001), Assessment and modelling of lava flow hazard on Lanzarote (Canary islands), Nat. Hazards, 23, 247-257. Harris and Rowland (2001), FLOWGO: a kinematic thermo-rheological model for lava flowing in a channel, Bull. Volcanol., 63, 20-44.
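The deterministic core of such a path model can be sketched in a few lines: from the vent, repeatedly step to the lowest of the eight neighbours until a pit or the grid edge is reached. (VORIS instead draws the next cell with probability weighted by slope, and a fixed length or FLOWGO-style cooling terminates the flow; the DEM below is synthetic.)

```python
import numpy as np

# Steepest-descent lava path on a DEM grid, stopping at a local pit.
def steepest_path(dem, start, max_len=1000):
    path, (i, j) = [start], start
    for _ in range(max_len):
        nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and 0 <= i + di < dem.shape[0] and 0 <= j + dj < dem.shape[1]]
        nxt = min(nbrs, key=lambda c: dem[c])   # lowest neighbouring cell
        if dem[nxt] >= dem[i, j]:               # local pit: flow stops
            break
        (i, j) = nxt
        path.append(nxt)
    return path

# Synthetic DEM: a tilted plane with a shallow channel along column 20.
y, x = np.mgrid[0:40, 0:40]
dem = y.astype(float)[::-1] + 0.1 * np.abs(x - 20)
path = steepest_path(dem, (0, 5))               # vent near the top of the slope
```

On this surface the flow runs downslope while drifting into the channel, ending at the channel's lowest cell; making the neighbour choice probabilistic and repeating many runs yields the dispersed flow fields the toolkit maps.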

  1. Modelling spatial and temporal vegetation variability with the Climate Constrained Vegetation Index: evidence of CO2 fertilisation and of water stress in continental interiors

    NASA Astrophysics Data System (ADS)

    Los, S. O.

    2015-06-01

A model was developed to simulate spatial, seasonal and interannual variations in vegetation in response to temperature, precipitation and atmospheric CO2 concentrations; the model addresses shortcomings in current implementations. The model uses the minimum of 12 temperature and precipitation constraint functions to simulate NDVI. Functions vary based on the Köppen-Trewartha climate classification to take adaptations of vegetation to climate into account. The simulated NDVI, referred to as the climate constrained vegetation index (CCVI), captured the spatial variability (0.82 < r < 0.87), seasonal variability (median r = 0.83) and interannual variability (median global r = 0.24) in NDVI. The CCVI simulated the effects of adverse climate on vegetation during the 1984 drought in the Sahel and during the dust bowls of the 1930s and 1950s in the Great Plains of North America. A global CO2 fertilisation effect was found in NDVI data, similar in magnitude to earlier estimates (8 % for the 20th century). This effect increased linearly with simple ratio, a transformation of the NDVI. Three CCVI scenarios, based on climate simulations using the representative concentration pathway RCP4.5, showed a greater sensitivity of vegetation towards precipitation in Northern Hemisphere mid-latitudes than is currently implemented in climate models. This higher sensitivity is of importance for assessing the impact of climate variability on vegetation, in particular on agricultural productivity.
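The minimum-of-constraints construction is easy to illustrate. The sketch below uses just two invented ramp-shaped constraint functions with hypothetical thresholds; the paper uses 12 functions whose forms vary with the Köppen-Trewartha climate class.

```python
import numpy as np

# Linear ramp from 0 to 1 between a lower and an upper threshold.
def ramp(x, lo, hi):
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# Toy CCVI: simulated NDVI is the most limiting of the two constraints.
def ccvi(temp_c, precip_mm):
    f_t = ramp(temp_c, 0.0, 20.0)        # illustrative temperature constraint
    f_p = ramp(precip_mm, 10.0, 80.0)    # illustrative precipitation constraint
    return np.minimum(f_t, f_p)          # Liebig-style minimum

temp = np.array([-5.0, 10.0, 25.0, 25.0])
prec = np.array([100.0, 100.0, 45.0, 5.0])
vi = ccvi(temp, prec)                    # -> [0.0, 0.5, 0.5, 0.0]
```

The four cases show the limiting-factor logic: cold kills vegetation regardless of rain, and warm-but-dry cells are capped by the precipitation constraint.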

  2. Diffusion of a Concentrated Lattice Gas in a Regular Comb Structure

    NASA Astrophysics Data System (ADS)

    Garcia, Paul; Wentworth, Christopher

    2008-10-01

    Understanding diffusion in constrained geometries is of interest in contexts as varied as mass transport in disordered solids, such as percolation clusters, and intercellular transport of water molecules in biological tissue. In this investigation we explore diffusion in a very simple constrained geometry: a comb-like structure consisting of a one-dimensional backbone of lattice sites with regularly spaced teeth of fixed length. The model assumes a fixed concentration of diffusing particles that hop to nearest-neighbor sites only and do not interact with each other, except that double occupancy is forbidden. The system is simulated with a Monte Carlo procedure, and the mean-square displacement of a tagged particle is calculated from the simulation as a function of time. The simulation shows normal diffusive behavior after a period of anomalous diffusion whose duration increases with tooth length.
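
    A single-tracer simplification of this setup can be simulated in a few lines. The sketch below puts a tooth at every backbone site and omits the hard-core exclusion of the concentrated gas, so it illustrates only the geometric trapping effect of the teeth, not the full lattice-gas model:

```python
import random

def simulate_comb(n_steps, tooth_len, seed=1):
    """Random walk on a comb lattice: a backbone (y = 0) with a tooth of
    length tooth_len attached at every backbone site.  Returns the squared
    backbone displacement x(t)^2 after each step."""
    rng = random.Random(seed)
    x, y = 0, 0
    sq_disp = []
    for _ in range(n_steps):
        if y == 0:                       # backbone site: left, right, or up a tooth
            move = rng.randrange(3)
            if move == 0:
                x -= 1
            elif move == 1:
                x += 1
            else:
                y += 1
        elif y == tooth_len:             # tooth tip: forced back down
            y -= 1
        else:                            # inside a tooth: up or down
            y += rng.choice((-1, 1))
        sq_disp.append(x * x)
    return sq_disp

def mean_sq_disp(n_walkers, n_steps, tooth_len):
    """Mean-square backbone displacement averaged over independent walkers."""
    acc = [0.0] * n_steps
    for w in range(n_walkers):
        for t, d in enumerate(simulate_comb(n_steps, tooth_len, seed=w)):
            acc[t] += d
    return [a / n_walkers for a in acc]

# Longer teeth trap the walker off the backbone for longer, slowing the
# spread along x and lengthening the anomalous-diffusion transient.
print(mean_sq_disp(100, 1500, tooth_len=8)[-1],
      mean_sq_disp(100, 1500, tooth_len=1)[-1])
```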

  3. Evaluation of NASA satellite- and assimilation model-derived long-term daily temperature data over the continental US

    USDA-ARS?s Scientific Manuscript database

    Agricultural research increasingly is expected to provide precise, quantitative information with an explicit geographic coverage. Limited availability of continuous daily meteorological records often constrains efforts to provide such information through integrated use of simulation models, spatial ...

  4. Methodologies for simulating impacts of climate change on crop production

    USDA-ARS?s Scientific Manuscript database

    Ecophysiological models of crop growth have seen wide use in IPCC and related assessments. However, the diversity of modeling approaches constrains cross-study syntheses and increases potential for bias. We reviewed 139 peer-reviewed papers dealing with climate change and agriculture, considering si...

  5. On the nullspace of TLS multi-station adjustment

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen

    2018-07-01

    In this article we present an analytic treatment of TLS multi-station least-squares adjustment, with the main focus on the datum problem. In contrast to previously published research, the datum problem is analyzed and solved theoretically, with the solution based on the derivation of the nullspace of the mathematical model. The importance of the datum problem solution lies in a complete description of TLS multi-station adjustment solutions as the set of all minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are described and the geometric interpretation of all minimally constrained least-squares solutions is presented. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.
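
    The role of the nullspace can be illustrated on a toy rank-deficient adjustment. In this hypothetical 1-D levelling-style example (not the TLS model of the paper), only height differences are observed, so the datum — a common shift of all heights — spans the nullspace, and the minimum-norm solution returned by the pseudoinverse is the inner-constrained one:

```python
import numpy as np

# Toy 1-D "network": observed height differences between 3 points.
# Rows: h2-h1, h3-h2, h3-h1.  The absolute height is unobservable -> rank defect 1.
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0,-1.0, 1.0],
              [-1.0, 0.0, 1.0]])
l = np.array([1.0, 2.0, 3.1])     # slightly inconsistent observations

# Nullspace from the SVD: right singular vectors with ~zero singular value.
U, s, Vt = np.linalg.svd(A)
null = Vt[s < 1e-10]              # here: the constant vector (a datum shift)

# Inner-constrained (minimum-norm) least-squares solution.
x_min = np.linalg.pinv(A) @ l

# Any datum shift along the nullspace leaves the residuals unchanged:
# all such vectors are minimally constrained solutions of the same problem.
x_shifted = x_min + 5.0 * null[0]
r1 = A @ x_min - l
r2 = A @ x_shifted - l
print(null[0], x_min, np.allclose(r1, r2))
```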

  6. Coupling Poisson rectangular pulse and multiplicative microcanonical random cascade models to generate sub-daily precipitation timeseries

    NASA Astrophysics Data System (ADS)

    Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph

    2018-07-01

    To simulate the impacts of within-storm rainfall variability on fast hydrological processes, long precipitation time series with high temporal resolution are required. Because of the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve the rainfall event statistics that are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model: (a) a modified cascade model ("constrained cascade") that preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last interval of a precipitation event to contain precipitation, and (b) continuous sigmoid functions of the multiplicative weights to account for scale dependency when disaggregating precipitation events of different durations. The constrained cascade model was evaluated against existing MRC models in its ability to disaggregate observed precipitation events, using a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed markedly better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuations within events). 
The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
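
    A bare-bones microcanonical cascade is easy to sketch. The branching probabilities below are arbitrary illustration values, and the paper's "constrained cascade" refinements (keeping the first and last intervals of an event wet, and scale-dependent sigmoid weight functions) are omitted; the point is that mass is conserved exactly at every cascade level:

```python
import random

def mrc_disaggregate(total, levels, p0=0.3, rng=None):
    """Multiplicative microcanonical random cascade: split each interval's
    depth into two sub-intervals with weights (w, 1-w), so mass is conserved
    exactly at every level.  With probability p0 all rain goes to one half
    (intermittency).  Toy weight model -- real MRCs fit these probabilities
    (and their scale dependence) to observations."""
    rng = rng or random.Random(0)
    series = [total]
    for _ in range(levels):
        nxt = []
        for depth in series:
            if depth == 0.0:
                nxt += [0.0, 0.0]      # dry intervals stay dry
                continue
            u = rng.random()
            if u < p0 / 2:
                w = 0.0                # all mass to the right half
            elif u < p0:
                w = 1.0                # all mass to the left half
            else:
                w = rng.uniform(0.2, 0.8)
            nxt += [depth * w, depth * (1.0 - w)]
        series = nxt
    return series

hourly = mrc_disaggregate(24.0, levels=3)   # 24 mm/day -> 8 x 3 h intervals
print(hourly, sum(hourly))
```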

  7. Modeling and Simulation of the Gonghe Geothermal Field (Qinghai, China) Constrained by Geophysical Data

    NASA Astrophysics Data System (ADS)

    Zeng, Z.; Wang, K.; Zhao, X.; Huai, N.; He, R.

    2017-12-01

    The Gonghe geothermal field in Qinghai is important because of its variety of geothermal resource types, and it now serves as a demonstration area for geothermal development and utilization in China. It has been the topic of numerous geophysical investigations conducted to determine the depth and nature of the heat source and to image the channel of heat flow. This work investigates the origin of the geothermal field using numerical simulation constrained by geophysical data. First, by analyzing and inverting a magnetotelluric (MT) profile across the area, we obtain the deep resistivity distribution. Using gravity anomaly inversion constrained by the resistivity profile, the densities of the basins and the underlying rocks can be calculated. Combined with measured rock thermal conductivities, a 2D geothermal conceptual model of the Gonghe area is constructed. Then, an unstructured finite element method is used to solve the heat conduction equation and simulate the geothermal field. Results of this model were calibrated against temperature data from an observation well, and a good match was achieved between the measured values and the model's predicted values. Finally, the geothermal gradient and heat flow distribution of the model are calculated (Fig. 1). According to the geophysical results, there is a low-resistivity, low-density region (d5) below the geothermal field. We interpret this anomaly as generated by tectonic motion, with the tectonic movement creating an upstream channel for mantle-derived heat, so that basement heat flow values there are anomalously high compared with other regions; the model values simulated using this boundary condition match the measured values well. 
    The simulated heat flow values show that the mantle-derived heat flow migrates through the boundary of the low-resistivity, low-density anomaly to the Gonghe geothermal field, with only a small fraction moving to other regions. Mantle-derived heat supplied continuously through this tectonic channel is therefore the main cause of the abundant geothermal resources of the Gonghe geothermal field.
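
    The link between conductivity structure, geothermal gradient and heat flow can be illustrated with a 1-D conductive steady state — a drastic simplification of the paper's 2-D finite element model, with made-up layer depths, conductivities and basal heat flow:

```python
def steady_temperature(z, k, q_base, t_surface=10.0):
    """1-D steady conductive temperature profile from Fourier's law
    q = -k dT/dz with a prescribed basal heat flow and layered conductivity.
    z: layer-bottom depths (m), k: conductivity per layer (W/m/K),
    q_base: basal heat flow (W/m^2).  Returns temperatures at the surface
    and at each layer bottom."""
    t = [t_surface]
    z0 = 0.0
    for zi, ki in zip(z, k):
        dz = zi - z0
        t.append(t[-1] + q_base / ki * dz)   # gradient = q / k in each layer
        z0 = zi
    return t

# Toy column: 1 km of low-conductivity sediments over 2 km of basement,
# with an (anomalously high) 80 mW/m^2 basal heat flow.
temps = steady_temperature([1000.0, 3000.0], [1.5, 3.0], 0.080)
print(temps)   # steeper gradient in the low-conductivity sediments
```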

  8. Experimental evaluation of model predictive control and inverse dynamics control for spacecraft proximity and docking maneuvers

    NASA Astrophysics Data System (ADS)

    Virgili-Llop, Josep; Zagaris, Costantinos; Park, Hyeongjun; Zappulla, Richard; Romano, Marcello

    2018-03-01

    An experimental campaign has been conducted to evaluate the performance of two different guidance and control algorithms on a multi-constrained docking maneuver: model predictive control (MPC) and inverse dynamics in the virtual domain (IDVD). A linear-quadratic formulation with a quadratic programming solver is used for MPC, while the IDVD approach results in a nonconvex optimization problem handled by a nonlinear programming solver. The docking scenario is constrained by the presence of a keep-out zone, an entry cone, and the chaser's maximum actuation level. The performance metrics for the experiments and numerical simulations include the required control effort and the time to dock. The experiments were conducted on a ground-based air-bearing test bed, using spacecraft simulators that float over a granite table.
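
    For linear dynamics, the MPC side of such a comparison reduces to a quadratic program over the control sequence with actuation bounds. The sketch below uses a toy 1-D double-integrator chaser rather than the testbed dynamics, and solves the box-constrained QP by projected gradient descent instead of a dedicated QP solver:

```python
import numpy as np

dt, N, umax = 1.0, 20, 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])     # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])
x0 = np.array([5.0, 0.0])                 # 5 m out, at rest
x_goal = np.array([0.0, 0.0])

# Condense the dynamics: x_N = A^N x0 + G u, with G[:, k] = A^(N-1-k) B
G = np.zeros((2, N))
Ak = np.eye(2)
for k in range(N - 1, -1, -1):
    G[:, k] = (Ak @ B).ravel()
    Ak = Ak @ A                            # after the loop, Ak = A^N
xN_free = Ak @ x0

# QP: minimize ||x_N - x_goal||^2 + ||u||^2 subject to |u_k| <= umax
H = 2.0 * (G.T @ G + np.eye(N))
g = 2.0 * G.T @ (xN_free - x_goal)

# Projected gradient descent: the clip enforces the actuation box constraint
u = np.zeros(N)
lr = 1.0 / np.linalg.norm(H, 2)
for _ in range(40000):
    u = np.clip(u - lr * (H @ u + g), -umax, umax)

xN = xN_free + G @ u
print(xN)    # close to the goal state, within the actuation limits
```

    A real MPC implementation would re-solve this QP at every sampling instant from the newly measured state and apply only the first control; the keep-out-zone and entry-cone constraints of the paper add further (state) constraints omitted here.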

  9. The added value of remote sensing products in constraining hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus

    2017-04-01

    The calibration of a hydrological model still depends largely on the availability of streamflow data, even though additional sources of information (e.g. remotely sensed data products) have become widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied to 27 catchments across Europe covering a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow with modelled snow states), after which new a posteriori parameter distributions were determined by a weighting procedure based on conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that model performance with regard to streamflow simulations improves when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis of the models' ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will sharpen our understanding of, and recommendations for, the use of remotely sensed products in constraining conceptual hydrological models and improving predictive capability, especially in data-sparse regions.
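
    The weighting step can be sketched with a toy one-parameter model and a synthetic "product". The model, its timing parameter, and the signal below are all hypothetical; the point is only the mechanics of turning regression r² values into a posteriori weights without any discharge data:

```python
import numpy as np

rng = np.random.default_rng(42)

t = np.arange(100)
# Toy stand-in for a remote-sensing product: a seasonal signal peaking
# at a known time (true timing parameter = 20)
product = np.sin(2 * np.pi * (t - 20) / 100.0)

def model_state(shift):
    """Hypothetical model state controlled by a single timing parameter."""
    return np.sin(2 * np.pi * (t - shift) / 100.0)

# Monte Carlo sample of the prior parameter distribution
prior = rng.uniform(0.0, 50.0, size=2000)

# Weight each sample by the coefficient of determination (r^2) of the
# regression between modelled state and product; anti-correlated runs get 0
r = np.array([np.corrcoef(model_state(s), product)[0, 1] for s in prior])
w = np.where(r > 0.0, r, 0.0) ** 2
w /= w.sum()

# A posteriori parameter estimate obtained without any discharge data
post_mean = float(np.sum(w * prior))
print(post_mean)    # concentrates near the true timing parameter (20)
```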

  10. Cloud-radiative effects on implied oceanic energy transport as simulated by atmospheric general circulation models

    NASA Technical Reports Server (NTRS)

    Gleckler, P. J.; Randall, D. A.; Boer, G.; Colman, R.; Dix, M.; Galin, V.; Helfand, M.; Kiehl, J.; Kitoh, A.; Lau, W.

    1995-01-01

    This paper summarizes the ocean surface net energy flux simulated by fifteen atmospheric general circulation models constrained by realistically-varying sea surface temperatures and sea ice as part of the Atmospheric Model Intercomparison Project. In general, the simulated energy fluxes are within the very large observational uncertainties. However, the annual mean oceanic meridional heat transport that would be required to balance the simulated surface fluxes is shown to be critically sensitive to the radiative effects of clouds, to the extent that even the sign of the Southern Hemisphere ocean heat transport can be affected by the errors in simulated cloud-radiation interactions. It is suggested that improved treatment of cloud radiative effects should help in the development of coupled atmosphere-ocean general circulation models.

  11. SCM-Forcing Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Shaocheng; Tang, Shuaiqi; Zhang, Yunyan

    2016-07-01

    Single-Column Model (SCM) Forcing Data are derived from the ARM facility observational data using the constrained variational analysis approach (Zhang and Lin 1997 and Zhang et al., 2001). The resulting products include both the large-scale forcing terms and the evaluation fields, which can be used for driving the SCMs and Cloud Resolving Models (CRMs) and validating model simulations.

  12. Western Lake Erie Basin: Soft-data-constrained, NHDPlus resolution watershed modeling and exploration of applicable conservation scenarios

    USDA-ARS?s Scientific Manuscript database

    Complex watershed simulation models are powerful tools that can help scientists and policy-makers address challenging topics, such as land use management and water security. In the Western Lake Erie Basin (WLEB), complex hydrological models have been applied at various scales to help describe relat...

  13. Supporting New Missions by Observing Simulation Experiments in WACCM-X/GEOS-5 and TIME-GCM: Initial Design, Challenges and Perspectives

    NASA Astrophysics Data System (ADS)

    Yudin, V. A.; England, S.; Liu, H.; Solomon, S. C.; Immel, T. J.; Maute, A. I.; Burns, A. G.; Foster, B.; Wu, Q.; Goncharenko, L. P.

    2013-12-01

    We examine the capability of novel configurations of the community models WACCM-X and TIME-GCM to support current and forthcoming space-borne missions that monitor the dynamics and composition of the Mesosphere-Thermosphere-Ionosphere (MTI) system. In these configurations the lower atmosphere of WACCM-X is constrained by operational analyses and/or short-term forecasts provided by the Goddard Earth Observing System (GEOS-5) of the Global Modeling and Assimilation Office at NASA/GSFC. With the terrestrial weather of GEOS-5 and updated model physics, the MTI simulations are capable of reproducing observed signatures of the perturbed wave dynamics and ion-neutral coupling during recent stratospheric warming events, as well as short-term, annual and year-to-year variability of prevailing flows, planetary waves, tides, and composition. These terrestrial-weather-driven simulations with day-to-day variable solar and geomagnetic inputs can provide the background state (first guess) and error statistics for the inverse algorithms of the new NASA missions, ICON and GOLD, at the locations and times of observations in the MTI region. With two different viewing geometries (sun-synchronous and geostationary), the ICON and GOLD instruments will provide complementary global observations of temperature, winds and constituents to constrain first-principles forecast models. This paper discusses the initial design of Observing Simulation Experiments (OSEs) in WACCM-X/GEOS-5 and TIME-GCM. OSEs represent an excellent learning tool for designing and evaluating the observing capabilities of novel sensors, and they can guide how to integrate and combine information from different instruments. The choice of assimilation schemes and of forecast and observational errors is discussed, along with the challenges and prospects of constraining fast-varying tidal dynamics and their effects in models by combining the sun-synchronous and geostationary observations of ICON and GOLD. 
We will also discuss how correlative space-borne and ground-based observations can verify OSE results in the observable and non-observable regions of the MTI.

  14. Nonlinear model predictive control of a wave energy converter based on differential flatness parameterisation

    NASA Astrophysics Data System (ADS)

    Li, Guang

    2017-01-01

    This paper presents a fast constrained optimization approach tailored for nonlinear model predictive control of wave energy converters (WECs). The advantage of the approach derives from its exploitation of the differential flatness of the WEC model, which reduces the dimension of the nonlinear programming problem (NLP) obtained when the continuous constrained optimal control problem is transcribed with a pseudospectral method. The resulting reduction in computational burden helps make a nonlinear model predictive control strategy economical to implement for WEC control. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, all of which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of the approach.
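
    The dimension reduction that flatness buys can be shown on a toy double integrator (a stand-in for the WEC dynamics, not the paper's model): states and control are algebraic functions of a flat output, so an optimizer only needs to search over an output parameterisation — here a quintic polynomial fixed entirely by boundary conditions:

```python
import numpy as np

# Toy double integrator x'' = u: the position is a flat output, so state
# and control are algebraic in the output and its derivatives.
T = 5.0

# Quintic y(t) = sum_k c_k t^k with rest-to-rest boundary conditions
# y(0)=0, y'(0)=0, y''(0)=0 and y(T)=1, y'(T)=0, y''(T)=0
rows, b = [], [0.0, 0.0, 0.0, 1.0, 0.0, 0.0]
for tb in (0.0, T):
    rows.append([tb**k for k in range(6)])                                   # y
    rows.append([k * tb**(k - 1) if k >= 1 else 0.0 for k in range(6)])      # y'
    rows.append([k * (k - 1) * tb**(k - 2) if k >= 2 else 0.0
                 for k in range(6)])                                         # y''
c = np.linalg.solve(np.array(rows), np.array(b))

# Recover trajectory and control algebraically -- no dynamics integration,
# and no state/control decision variables left for an optimizer.
ts = np.linspace(0.0, T, 51)
y  = sum(c[k] * ts**k for k in range(6))                         # position
yd = sum(k * c[k] * ts**(k - 1) for k in range(1, 6))            # velocity
u  = sum(k * (k - 1) * c[k] * ts**(k - 2) for k in range(2, 6))  # control u = y''

print(y[0], y[-1], u[0])   # boundary conditions hold by construction
```

    In an NLP the polynomial coefficients (or pseudospectral output values) would be the only decision variables, with constraints on state and control evaluated through these algebraic maps.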

  15. Simulation of Two-Phase Flow Based on a Thermodynamically Constrained Averaging Theory Flow Model

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Dye, A. L.; McClure, J. E.; Farthing, M. W.; Gray, W. G.; Miller, C. T.

    2014-12-01

    The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for two-fluid-phase flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as interfacial areas, contact angles, interfacial tension, and curvatures; and dynamics of interface movement and relaxation to an equilibrium state. In order to render the TCAT model solvable, certain closure relations are needed to relate fluid pressure, interfacial areas, curvatures, and relaxation rates. In this work, we formulate and solve a TCAT-based two-fluid-phase flow model. We detail the formulation of the model, which is a specific instance from a hierarchy of two-fluid-phase flow models that emerge from the theory. We show the closure problem that must be solved. Using recent results from high-resolution microscale simulations, we advance a set of closure relations that produce a closed model. Lastly, we use locally conservative spatial discretization and higher order temporal discretization methods to approximate the solution to this new model and compare the solution to the traditional model.

  16. Using High Resolution Simulations with WRF/SSiB Regional Climate Model Constrained by In Situ Observations to Assess the Impacts of Dust in Snow in the Upper Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Oaida, C. M.; Skiles, M.; Painter, T. H.; Xue, Y.

    2015-12-01

    The mountain snowpack is an essential resource for both the environment and society. Observational and energy-balance modeling work has shown that dust on snow (DOS) in the western U.S. (WUS) is a major contributor to snow processes, including snowmelt timing and runoff amount, in regions like the Upper Colorado River Basin (UCRB). To estimate accurately the impact of DOS on the hydrologic cycle and water resources, now and under a changing climate, we need to (1) adequately simulate the snowpack (accumulation) and (2) realistically represent DOS processes in models. Energy-balance models do not capture the impact at broader local or regional scales, nor the land-atmosphere feedbacks, while GCM studies cannot resolve orographic precipitation processes, and therefore snowpack accumulation, owing to coarse spatial resolution and smoothed terrain. This implies that the impacts of dust on snow on the mountain snowpack and other hydrologic processes are likely not well captured in current modeling studies. Recent increases in computing power allow RCMs to be run at higher spatial resolutions, while recent in situ observations of dust-in-snow properties can help constrain model simulations. In the work presented here, we take advantage of these resources to address some of the challenges outlined above. We employ the newly enhanced WRF/SSiB regional climate model at 4 km horizontal resolution, a scale shown by others to be adequate for capturing orographic processes over the WUS. We also constrain the magnitude of dust deposition provided by a global chemistry and transport model with in situ measurements taken at sites in the UCRB, and we adjust the dust absorptive properties based on values observed at these sites rather than generic global ones. This study aims to improve simulation of the impact of dust in snow on the hydrologic cycle and related water resources.

  17. A Strategy for Autogeneration of Space Shuttle Ground Processing Simulation Models for Project Makespan Estimations

    NASA Technical Reports Server (NTRS)

    Madden, Michael G.; Wyrick, Roberta; O'Neill, Dale E.

    2005-01-01

    Space Shuttle processing is a complicated and highly variable project. The planning and scheduling problem, categorized as a Resource-Constrained Stochastic Project Scheduling Problem (RC-SPSP), has a great deal of variability in the Orbiter Processing Facility (OPF) process flow from one flight to the next. Simulation modeling is a useful tool for estimating the makespan of the overall process. However, simulation requires a model to be developed, which is itself a labor- and time-consuming effort. With such a dynamic process, the model can easily fall out of synchronization with the actual process, limiting the applicability of the simulation results to the actual estimation problem. Integration of TEAMS model-enabling software with our existing scheduling software is the basis of our solution. This paper explains the approach used to auto-generate a simulation model from planning and scheduling efforts and available data.

  18. Evaluation of satellite-based, model-derived daily solar radiation data for the continental U.S.

    USDA-ARS?s Scientific Manuscript database

    Many applications of simulation models and related decision support tools for agriculture and natural resource management require daily meteorological data as inputs. Availability and quality of such data, however, often constrain research and decision support activities that require use of these to...

  19. Satellite retrievals of leaf chlorophyll and photosynthetic capacity for improved modeling of GPP

    USDA-ARS?s Scientific Manuscript database

    This study investigates the utility of in-situ and satellite-based leaf chlorophyll (Chl) estimates for quantifying leaf photosynthetic capacity and for constraining model simulations of Gross Primary Productivity (GPP) over a corn field in Maryland, U.S.A. The maximum rate of carboxylation (Vmax) r...

  20. Simulation of Constrained Musculoskeletal Systems in Task Space.

    PubMed

    Stanev, Dimitar; Moustakas, Konstantinos

    2018-02-01

    This paper proposes an operational task space formalization of constrained musculoskeletal systems, motivated by its promising results in the field of robotics. The change of representation requires different algorithms for solving the inverse and forward dynamics simulation in the task space domain. We propose an extension to direct marker control and an adaptation of the computed muscle control algorithm for solving the inverse kinematics and muscle redundancy problems, respectively. Experimental evaluation demonstrates that this framework not only deals successfully with the inverse dynamics problem, but also provides an intuitive way of studying and designing simulations, facilitating assessment prior to any experimental data collection. The incorporation of constraints in the derivation unveils an important extension of this framework toward addressing systems that use absolute coordinates and topologies that contain closed kinematic chains. Task space projection yields a more intuitive encoding of the motion planning problem, allows better correspondence between observed and estimated variables, provides the means to study effectively the role of kinematic redundancy, and, most importantly, offers an abstract point of view and control that can be advantageous for further integration with high-level models of the precommand level. Task-based approaches could be adopted in the design of simulations related to the study of constrained musculoskeletal systems.
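
    The task-space quantities involved can be sketched with the standard operational-space formulas (task-space inertia, dynamically consistent inverse, nullspace projector). The inertia matrix and Jacobian below are arbitrary illustration values, and velocity-dependent and gravity terms are ignored:

```python
import numpy as np

# Toy 3-DoF system: joint-space inertia M(q) and task Jacobian J(q)
M = np.array([[2.5, 0.3, 0.1],
              [0.3, 1.8, 0.2],
              [0.1, 0.2, 0.9]])     # symmetric positive definite
J = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5]])     # 2-D task, 3 joints -> kinematic redundancy

Minv = np.linalg.inv(M)

# Task-space (operational-space) inertia: Lambda = (J M^-1 J^T)^-1
Lam = np.linalg.inv(J @ Minv @ J.T)

# Dynamically consistent generalized inverse and nullspace projector
Jbar = Minv @ J.T @ Lam
Np = np.eye(3) - J.T @ Jbar.T       # projects torques into the task nullspace

# A desired task force maps to joint torques; nullspace torques (e.g. for
# posture control) can be added without disturbing the task.
F = np.array([1.0, -0.5])
tau_posture = np.array([0.1, 0.1, 0.1])
tau = J.T @ F + Np @ tau_posture

# Check: the task acceleration produced by tau equals that from J^T F alone
acc_task = J @ Minv @ tau
print(acc_task, J @ Minv @ (J.T @ F))
```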

  1. RX J1856-3754: Evidence for a Stiff Equation of State

    NASA Astrophysics Data System (ADS)

    Braje, Timothy M.; Romani, Roger W.

    2002-12-01

    We have examined the soft X-ray plus optical/UV spectrum of the nearby isolated neutron star RX J1856-3754, comparing it with detailed models of a thermally emitting surface. Like previous investigators, we find that the spectrum is best fitted by a two-temperature blackbody model. In addition, our simulations constrain the allowed viewing geometry from the observed pulse-fraction upper limits. These simulations show that RX J1856-3754 is very likely to be a normal young pulsar, with the nonthermal radio beam missing Earth's line of sight. The spectral energy distribution limits on the model parameter space put a strong constraint on the star's M/R. At the measured parallax distance, the allowed range for M_NS = 1.5 M_sun is R_NS = 13.7 +/- 0.6 km. Under this interpretation, the equation of state (EOS) is relatively stiff near nuclear density, and the quark-star EOS posited in some previous studies is strongly excluded. The data also constrain the surface temperature distribution over the polar cap.

  2. How well can future CMB missions constrain cosmic inflation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Jérôme; Vennin, Vincent; Ringeval, Christophe, E-mail: jmartin@iap.fr, E-mail: christophe.ringeval@uclouvain.be, E-mail: vennin@iap.fr

    2014-10-01

    We study how the next generation of Cosmic Microwave Background (CMB) measurement missions (such as EPIC, LiteBIRD, PRISM and COrE) will be able to constrain the inflationary landscape in the hardest-to-disambiguate situation, in which inflation is simply described by single-field slow-roll scenarios. Considering the proposed PRISM and LiteBIRD satellite designs, we simulate mock data corresponding to five different fiducial models having values of the tensor-to-scalar ratio ranging from 10^(-1) down to 10^(-7). We then compute the Bayesian evidences and complexities of all Encyclopædia Inflationaris models in order to assess the constraining power of PRISM alone and of LiteBIRD complemented with the Planck 2013 data. Within slow-roll inflation, both designs have comparable constraining power and can rule out about three quarters of the inflationary scenarios, compared to one third for the Planck 2013 data alone. However, we also show that PRISM can constrain the scalar running and has the capability to detect a violation of slow roll at second order. Finally, our results suggest that describing an inflationary model by its potential shape only, without specifying a reheating temperature, will no longer be possible given the accuracy level reached by the future CMB missions.

  3. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    PubMed

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, a key step toward winning the initiative in fierce market competition is to improve their R&D ability so as to meet the varied demands of customers more promptly and at lower cost. This paper discusses the features of multi-project R&D environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are demonstrated in the experiment.
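
    A minimal global-best particle swarm optimizer illustrates the algorithm family involved. The objective below is a made-up makespan-like function with a soft budget penalty, not the paper's R&D scheduling model, and the optimizer is the plain PSO, not the authors' improved variant:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, iters=200, seed=7):
    """Minimal global-best particle swarm optimizer with inertia,
    cognitive, and social terms."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    pbest_f = [f(x) for x in xs]
    g_f = min(pbest_f)
    g = list(pbest[pbest_f.index(g_f)])
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (g[d] - xs[i][d]))
                xs[i][d] = min(max(xs[i][d] + vs[i][d], lo), hi)
            fi = f(xs[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fi
                if fi < g_f:
                    g, g_f = list(xs[i]), fi
    return g, g_f

# Toy stand-in objective: makespan-like quadratic plus a soft budget penalty
def objective(x):
    makespan = sum((xi - 1.0) ** 2 for xi in x)
    budget_violation = max(0.0, sum(x) - 4.0)    # soft resource constraint
    return makespan + 10.0 * budget_violation

best_x, best_f = pso_minimize(objective, dim=3, bounds=(0.0, 5.0))
print(best_x, best_f)   # near the feasible optimum x = (1, 1, 1)
```

    In the scheduling application, a particle position would instead encode activity priorities or start times, decoded into a feasible schedule before evaluating the makespan.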

  4. Lidar Penetration Depth Observations for Constraining Cloud Longwave Feedbacks

    NASA Astrophysics Data System (ADS)

    Vaillant de Guelis, T.; Chepfer, H.; Noel, V.; Guzman, R.; Winker, D. M.; Kay, J. E.; Bonazzola, M.

    2017-12-01

    The satellite-borne active sensors CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) [Winker et al., 2010] and CloudSat [Stephens et al., 2002] provide direct measurements of the cloud vertical distribution at very high vertical resolution. The penetration depth of the lidar laser, Z_Opaque, is directly linked to the LongWave (LW) Cloud Radiative Effect (CRE) at the Top Of the Atmosphere (TOA) [Vaillant de Guélis et al., in review]. In addition, this measurement is extremely stable in time, making it an excellent observational candidate for verifying and constraining the cloud LW feedback mechanism [Chepfer et al., 2014]. In this work, we present a method to decompose the variations of the LW CRE at TOA using cloud properties observed by lidar [GOCCP v3.0; Guzman et al., 2017]. We decompose these variations into contributions from changes in five cloud properties: opaque cloud cover, opaque cloud altitude, thin cloud cover, thin cloud altitude, and thin cloud emissivity [Vaillant de Guélis et al., in review]. We apply this method, in the real world, to the CRE variations of the CALIPSO 2008-2015 record and, in climate models, to LMDZ6 and CESM simulations of the CRE variations over the 2008-2015 period and of the CRE difference between a warm climate and the current climate. In the climate model simulations, the same cloud properties as those observed by CALIOP are extracted with the lidar simulator [Chepfer et al., 2008] of the CFMIP Observation Simulator Package (COSP) [Bodas-Salcedo et al., 2011], which mimics the observations that would be performed by the lidar on board the CALIPSO satellite. This method, when applied to multi-model simulations of the current and future climate, could establish the altitude of the cloud opacity level observed by lidar as a strong constraint on the cloud LW feedback, since the altitude feedback mechanism is physically explainable and the altitude of cloud opacity is accurately observed by lidar.

  5. Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.

    PubMed

    Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei

    2015-08-01

    In this brief, the utilization of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, novel guidance law using model predictive control is developed by incorporating missile inside constraints. The combined model predictive approach could be transformed as a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.

  6. Specifications of a Simulation Model for a Local Area Network Design in Support of a Stock Point Logistics Integrated Communication Environment (SPLICE).

    DTIC Science & Technology

    1983-06-01

    constrained at each step. Use of discrete simulation can be a powerful tool in this process if its role is carefully planned. The gross behavior of the...by projecting: - the arrival of units of work at SPLICE processing facilities (workload analysis) - the amount of processing resources consumed in

  7. Protein simulation using coarse-grained two-bead multipole force field with polarizable water models.

    PubMed

    Li, Min; Zhang, John Z H

    2017-02-14

    A recently developed two-bead multipole force field (TMFF) is employed in coarse-grained (CG) molecular dynamics (MD) simulation of proteins in combination with polarizable CG water models: the Martini polarizable water model and the modified big multipole water model. Significant improvement in the simulated structures and dynamics of proteins is observed, in terms of both the root-mean-square deviations (RMSDs) of the structures and the residue root-mean-square fluctuations (RMSFs) from the native ones, compared with the simulation result using Martini's non-polarizable water model. Our result shows that TMFF simulation using CG water models gives much more stable secondary structures of proteins without the need for adding extra interaction potentials to constrain the secondary structures. Our result also shows that by increasing the MD time step from 2 fs to 6 fs, the RMSD and RMSF results are still in excellent agreement with those from all-atom simulations. The current study demonstrates clearly that the application of TMFF together with a polarizable CG water model significantly improves the accuracy and efficiency of CG simulation of proteins.

  8. Protein simulation using coarse-grained two-bead multipole force field with polarizable water models

    NASA Astrophysics Data System (ADS)

    Li, Min; Zhang, John Z. H.

    2017-02-01

    A recently developed two-bead multipole force field (TMFF) is employed in coarse-grained (CG) molecular dynamics (MD) simulation of proteins in combination with polarizable CG water models: the Martini polarizable water model and the modified big multipole water model. Significant improvement in the simulated structures and dynamics of proteins is observed, in terms of both the root-mean-square deviations (RMSDs) of the structures and the residue root-mean-square fluctuations (RMSFs) from the native ones, compared with the simulation result using Martini's non-polarizable water model. Our result shows that TMFF simulation using CG water models gives much more stable secondary structures of proteins without the need for adding extra interaction potentials to constrain the secondary structures. Our result also shows that by increasing the MD time step from 2 fs to 6 fs, the RMSD and RMSF results are still in excellent agreement with those from all-atom simulations. The current study demonstrates clearly that the application of TMFF together with a polarizable CG water model significantly improves the accuracy and efficiency of CG simulation of proteins.
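    The RMSD metric cited in both records can be sketched in a few lines. This toy version compares bead coordinates directly and omits the optimal superposition (alignment) step that a real protein RMSD calculation would include; the coordinates are invented 3-D bead positions, not protein data.

```python
import math

# Sketch of the RMSD metric used to compare simulated and native structures.
# Toy 3-D bead positions; no superposition/alignment step is performed.
def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two equal-length coordinate sets."""
    n = len(coords_a)
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / n)

native = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
simulated = [(0.1, 0.0, 0.0), (1.0, 0.2, 0.0), (1.0, 1.0, 0.1)]
print(round(rmsd(native, simulated), 4))  # -> 0.1414
```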

  9. Evaluation of Diagnostic CO2 Flux and Transport Modeling in NU-WRF and GEOS-5

    NASA Astrophysics Data System (ADS)

    Kawa, S. R.; Collatz, G. J.; Tao, Z.; Wang, J. S.; Ott, L. E.; Liu, Y.; Andrews, A. E.; Sweeney, C.

    2015-12-01

    We report on recent diagnostic (constrained by observations) model simulations of atmospheric CO2 flux and transport using a newly developed facility in the NASA Unified-Weather Research and Forecast (NU-WRF) model. The results are compared to CO2 data (ground-based, airborne, and GOSAT) and to corresponding simulations from a global model that uses meteorology from the NASA GEOS-5 Modern Era Retrospective analysis for Research and Applications (MERRA). The objective of these intercomparisons is to assess the relative strengths and weaknesses of the respective models in pursuit of an overall carbon process improvement at both regional and global scales. Our guiding hypothesis is that the finer resolution and improved land surface representation in NU-WRF will lead to better comparisons with CO2 data than those using global MERRA, which will, in turn, inform process model development in global prognostic models. Initial intercomparison results, however, have generally been mixed: NU-WRF is better at some sites and times but not uniformly. We are examining the model transport processes in detail to diagnose differences in the CO2 behavior. These comparisons are done in the context of a long history of simulations from the Parameterized Chemistry and Transport Model, based on GEOS-5 meteorology and Carnegie Ames-Stanford Approach-Global Fire Emissions Database (CASA-GFED) fluxes, that capture much of the CO2 variation from synoptic to seasonal to global scales. We have run the NU-WRF model using unconstrained, internally generated meteorology within the North American domain, and with meteorological 'nudging' from Global Forecast System and North American Regional Reanalysis (NARR) in an effort to optimize the CO2 simulations. Output results constrained by NARR show the best comparisons to data. Discrepancies, of course, may arise either from flux or transport errors and compensating errors are possible. 
Resolving their interplay is also important for using the data in inverse models. Recent analysis is focused on planetary boundary layer depth, which can be significantly different between MERRA and NU-WRF, along with subgrid transport differences. Characterization of transport differences between the models will allow us to better constrain the CO2 fluxes, which is the major objective of this work.

  10. ENSO-Related Precipitation and Its Statistical Relationship with the Walker Circulation Trend in CMIP5 AMIP Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, Bo; Yeh, Sang -Wook; Sohn, Byung -Ju

    Observational evidence shows that the Walker circulation (WC) in the tropical Pacific has strengthened in recent decades. In this study, we examine the WC trend for 1979–2005 and its relationship with the precipitation associated with the El Niño Southern Oscillation (ENSO) using the sea surface temperature (SST)-constrained Atmospheric Model Intercomparison Project (AMIP) simulations of the Coupled Model Intercomparison Project Phase 5 (CMIP5) climate models. All of the 29 models show a strengthening of the WC trend in response to an increase in the SST zonal gradient along the equator. Despite the same SST-constrained AMIP simulations, however, a large diversity is found among the CMIP5 climate models in the magnitude of the WC trend. The relationship between the WC trend and precipitation anomalies (PRCPAs) associated with ENSO (ENSO-related PRCPAs) shows that the longitudinal position of the ENSO-related PRCPAs in the western tropical Pacific is closely related to the magnitude of the WC trend. Specifically, it is found that the strengthening of the WC trend is large (small) in the CMIP5 AMIP simulations in which the ENSO-related PRCPAs are located relatively westward (eastward) in the western tropical Pacific. Furthermore, the zonal shift of the ENSO-related precipitation in the western tropical Pacific, which is associated with the climatological mean precipitation in the tropical Pacific, could play an important role in modifying the WC trend in the CMIP5 climate models.

  11. Cosmicflows Constrained Local UniversE Simulations

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h-1 Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s-1, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h-1 Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h-1 Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.
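    The cosmic-variance metric defined above, the mean one-sigma scatter of a cell-to-cell comparison between two fields, can be sketched as follows. The four-cell fields are toy values, not CLUES velocity grids.

```python
import math

# Sketch of the "cosmic variance" metric described above: the one-sigma
# scatter of the cell-wise differences between two gridded fields.
# Toy values; real CLUES comparisons use 3-D velocity grids.
def cell_scatter(field_a, field_b):
    """One-sigma scatter of the cell-to-cell differences between two fields."""
    diffs = [a - b for a, b in zip(field_a, field_b)]
    mean = sum(diffs) / len(diffs)
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / len(diffs))

constrained = [105.0, 98.0, 110.0, 101.0]   # hypothetical field, km/s
reference   = [100.0, 100.0, 100.0, 100.0]  # hypothetical reconstruction
print(round(cell_scatter(constrained, reference), 3))  # -> 4.5
```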

  12. Developing a particle tracking surrogate model to improve inversion of ground water - Surface water models

    NASA Astrophysics Data System (ADS)

    Cousquer, Yohann; Pryet, Alexandre; Atteia, Olivier; Ferré, Ty P. A.; Delbart, Célestine; Valois, Rémi; Dupuy, Alain

    2018-03-01

    The inverse problem of groundwater models is often ill-posed and model parameters are likely to be poorly constrained. Identifiability is improved if diverse data types are used for parameter estimation. However, some models, including detailed solute transport models, are further limited by prohibitive computation times. This often precludes the use of concentration data for parameter estimation, even if those data are available. In the case of surface water-groundwater (SW-GW) models, concentration data can provide SW-GW mixing ratios, which efficiently constrain the estimate of exchange flow, but are rarely used. We propose to reduce computational limits by simulating SW-GW exchange at a sink (well or drain) based on particle tracking under steady state flow conditions. Particle tracking is used to simulate advective transport. A comparison between the particle tracking surrogate model and an advective-dispersive model shows that dispersion can often be neglected when the mixing ratio is computed for a sink, allowing for use of the particle tracking surrogate model. The surrogate model was implemented to solve the inverse problem for a real SW-GW transport problem with heads and concentrations combined in a weighted hybrid objective function. The resulting inversion showed markedly reduced uncertainty in the transmissivity field compared to calibration on head data alone.
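    The particle-tracking surrogate can be sketched with a hypothetical 1-D steady flow field: particles released at the sink are advected backward until they terminate at the surface-water or the aquifer boundary, and the mixing ratio follows from the boundary inflows. Everything numeric here is invented for illustration, not taken from the authors' model.

```python
# Sketch of back-tracking particles from a pumping well in a hypothetical 1-D
# velocity field: particles released at the well are advected backward until
# they exit at the river (x <= 0) or the regional aquifer boundary (x >= L).
def track_particle(x0, velocity, dt=0.1, L=100.0, max_steps=100000):
    """Advect a particle backward in time; return its terminating boundary."""
    x = x0
    for _ in range(max_steps):
        x -= velocity(x) * dt          # backward step along the flow
        if x <= 0.0:
            return "river"
        if x >= L:
            return "aquifer"
    return "aquifer"

# Hypothetical field: flow converges on the well at x = 40 from both sides.
well = 40.0
velocity = lambda x: 1.0 if x < well else -1.5   # units: m/day

starts = [well - 1e-3, well + 1e-3]              # one particle on each side
endpoints = [track_particle(x, velocity) for x in starts]
# Flow-weighted mixing ratio: river-side inflow 1.0 vs aquifer-side 1.5.
river_fraction = 1.0 / (1.0 + 1.5)
print(endpoints, round(river_fraction, 2))
```

Because only advection is simulated, each back-tracked path is deterministic; this is the property that lets the surrogate skip the expensive dispersive transport solve.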

  13. Supporting ITM Missions by Observing System Simulation Experiments: Initial Design, Challenges and Perspectives

    NASA Astrophysics Data System (ADS)

    Yudin, V. A.; England, S.; Matsuo, T.; Wang, H.; Immel, T. J.; Eastes, R.; Akmaev, R. A.; Goncharenko, L. P.; Fuller-Rowell, T. J.; Liu, H.; Solomon, S. C.; Wu, Q.

    2014-12-01

    We review and discuss the capability of novel configurations of global community (WACCM-X and TIME-GCM) and planned-operational (WAM) models to support current and forthcoming space-borne missions to monitor the dynamics and composition of the Ionosphere-Thermosphere-Mesosphere (ITM) system. In the specified meteorology model configuration of WACCM-X, the lower atmosphere is constrained by operational analyses and/or short-term forecasts provided by the Goddard Earth Observing System (GEOS-5) of GMAO/NASA/GSFC. With the terrestrial weather of GEOS-5 and updated model physics, WACCM-X simulations are capable of reproducing the observed signatures of the perturbed wave dynamics and ion-neutral coupling during recent (2006-2013) stratospheric warming events, as well as the short-term, annual, and year-to-year variability of prevailing flows, planetary waves, tides, and composition. With assimilation of the NWP data in the troposphere and stratosphere, the planned-operational configuration of WAM can also recreate the observed features of the ITM day-to-day variability. These "terrestrial-weather"-driven whole atmosphere simulations, with day-to-day variable solar and geomagnetic inputs, can provide a specification of the background state (first guess) and errors for the inverse algorithms of forthcoming NASA ITM missions, such as ICON and GOLD. With two different viewing geometries (sun-synchronous for ICON and geostationary for GOLD), these missions promise to perform complementary global observations of temperature, winds, and constituents to constrain first-principles space weather forecast models. The paper will discuss initial designs of Observing System Simulation Experiments (OSSE) in the coupled simulations of TIME-GCM/WACCM-X/GEOS5 and WAM/GIP. As is widely recognized, OSSEs represent an excellent learning tool for designing and evaluating observing capabilities of novel sensors. 
The choice of assimilation schemes, forecast and observational errors will be discussed along with challenges and perspectives to constrain fast-varying dynamics of tides and planetary waves by observations made from sun-synchronous and geostationary space-borne platforms. We will also discuss how correlative space-borne and ground-based observations can evaluate OSSE results.

  14. Modeling and Simulation at NASA

    NASA Technical Reports Server (NTRS)

    Steele, Martin J.

    2009-01-01

    This slide presentation is composed of two topics. The first reviews the use of modeling and simulation (M&S), particularly as it relates to the Constellation program and discrete event simulation (DES). DES is defined as a process and system analysis, through time-based and resource-constrained probabilistic simulation models, that provides insight into operational system performance. The DES shows that the cycle for a launch, from manufacturing and assembly to launch and recovery, is about 45 days and that approximately 4 launches per year are practicable. The second topic reviews a NASA Standard for Modeling and Simulation. The Columbia Accident Investigation Board made some recommendations related to models and simulations. Some of the ideas inherent in the new standard are the documentation of M&S activities, an assessment of credibility, and reporting to decision makers, which should include the analysis of the results, a statement as to the uncertainty in the results, and the credibility of the results. There is also discussion of the verification and validation (V&V) of models and of the different types of models and simulations.

  15. Mechanisms of Diurnal Precipitation over the United States Great Plains: A Cloud-Resolving Model Simulation

    NASA Technical Reports Server (NTRS)

    Lee, M.-I.; Choi, I.; Tao, W.-K.; Schubert, S. D.; Kang, I.-K.

    2010-01-01

    The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program's Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. 
This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs) many of which are overly sensitive to the parameterized boundary layer heating.

  16. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches.

    PubMed

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R, and short Python scripts used to execute this simulation study are uploaded to an open access repository.

  17. Reconsidering Cluster Bias in Multilevel Data: A Monte Carlo Comparison of Free and Constrained Baseline Approaches

    PubMed Central

    Guenole, Nigel

    2018-01-01

    The test for item level cluster bias examines the improvement in model fit that results from freeing an item's between level residual variance from a baseline model with equal within and between level factor loadings and between level residual variances fixed at zero. A potential problem is that this approach may include a misspecified unrestricted model if any non-invariance is present, but the log-likelihood difference test requires that the unrestricted model is correctly specified. A free baseline approach where the unrestricted model includes only the restrictions needed for model identification should lead to better decision accuracy, but no studies have examined this yet. We ran a Monte Carlo study to investigate this issue. When the referent item is unbiased, compared to the free baseline approach, the constrained baseline approach led to similar true positive (power) rates but much higher false positive (Type I error) rates. The free baseline approach should be preferred when the referent indicator is unbiased. When the referent assumption is violated, the false positive rate was unacceptably high for both free and constrained baseline approaches, and the true positive rate was poor regardless of whether the free or constrained baseline approach was used. Neither the free nor the constrained baseline approach can be recommended when the referent indicator is biased. We recommend paying close attention to ensuring the referent indicator is unbiased in tests of cluster bias. All Mplus input and output files, R, and short Python scripts used to execute this simulation study are uploaded to an open access repository. PMID:29551985
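    The log-likelihood difference test at the heart of both baseline approaches can be sketched on the simplest nested pair of models: a normal mean constrained to zero versus freely estimated. This is a deliberate reduction for illustration, not the multilevel factor models of the study. Under a correctly specified null, the Monte Carlo rejection rate should sit near the nominal 5%.

```python
import math, random

# Sketch of the log-likelihood difference (likelihood-ratio) test: constrained
# model (mean fixed at 0) vs free model (mean estimated), N(mu, 1) data.
def loglik_normal(data, mu):
    """Log-likelihood of the data under N(mu, 1)."""
    return sum(-0.5 * math.log(2 * math.pi) - 0.5 * (x - mu) ** 2 for x in data)

def lr_statistic(data):
    """-2*(constrained - free loglik); ~ chi-square(1) under the null."""
    xbar = sum(data) / len(data)
    return -2.0 * (loglik_normal(data, 0.0) - loglik_normal(data, xbar))

# Monte Carlo under the null: the rejection rate should be near 0.05.
rng = random.Random(7)
crit = 3.841   # 95th percentile of chi-square with 1 df
n_trials, n_reject = 2000, 0
for _ in range(n_trials):
    data = [rng.gauss(0.0, 1.0) for _ in range(50)]
    if lr_statistic(data) > crit:
        n_reject += 1
print(round(n_reject / n_trials, 3))
```

When the constrained model is misspecified (as with a biased referent), the statistic no longer follows the reference chi-square distribution, which is the mechanism behind the inflated Type I error rates reported above.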

  18. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.

  19. Using OCO-2 Observations and Lagrangian Modeling to Constrain Urban Carbon Dioxide Emissions in the Middle East

    NASA Astrophysics Data System (ADS)

    Yang, E. G.; Kort, E. A.; Ware, J.; Ye, X.; Lauvaux, T.; Wu, D.; Lin, J. C.; Oda, T.

    2017-12-01

    Anthropogenic carbon dioxide (CO2) emissions are greatly perturbing the Earth's carbon cycle. Rising emissions from the developing world are increasing uncertainties in global CO2 emissions. With the rapid urbanization of developing regions, methods of constraining urban CO2 emissions in these areas can address critical uncertainties in the global carbon budget. In this study, we work toward constraining urban CO2 emissions in the Middle East by comparing top-down observations and bottom-up simulations of total column CO2 (XCO2) in four cities (Riyadh, Cairo, Baghdad, and Doha), both separately and in aggregate. This comparison involves quantifying, for all available data from September 2014 through March 2016, the relationship between observations of XCO2 from the Orbiting Carbon Observatory-2 (OCO-2) satellite and simulations of XCO2 using the Stochastic Time-Inverted Lagrangian Transport (STILT) model coupled with Global Data Assimilation System (GDAS) reanalysis products and multiple CO2 emissions inventories. We discuss the extent to which our observation/model framework can distinguish between the different emissions representations and determine optimized emissions estimates for this domain. We also highlight the implications of our comparisons for the fidelity of the bottom-up inventories used, and how these implications may inform the use of OCO-2 data for urban regions around the world.

  20. Satellite Perspective of Aerosol Intercontinental Transport: From Qualitative Tracking to Quantitative Characterization

    NASA Technical Reports Server (NTRS)

    Yu, Hongbin; Remer, Lorraine A.; Kahn, Ralph A.; Chin, Mian; Zhang, Yan

    2012-01-01

    Evidence of aerosol intercontinental transport (ICT) is both widespread and compelling. Model simulations suggest that ICT could significantly affect regional air quality and climate, but the broad inter-model spread of results underscores the need to constrain model simulations with measurements. Satellites have inherent advantages over in situ measurements for characterizing aerosol ICT because of their spatial and temporal coverage. Significant progress in satellite remote sensing of aerosol properties during the Earth Observing System (EOS) era offers the opportunity to increase quantitative characterization and estimates of aerosol ICT, beyond the capability of pre-EOS era satellites that could only qualitatively track aerosol plumes. EOS satellites also observe emission strengths and injection heights of some aerosols, aerosol precursors, and aerosol-related gases, which can help characterize aerosol ICT. After an overview of these advances, we review how the current generation of satellite measurements have been used to (1) characterize the evolution of aerosol plumes (e.g., both horizontal and vertical transport, and properties) on an episodic basis, (2) understand the seasonal and inter-annual variations of aerosol ICT and their control factors, (3) estimate the export and import fluxes of aerosols, and (4) evaluate and constrain model simulations. Substantial effort is needed to further explore an integrated approach using measurements from on-orbit satellites (e.g., A-Train synergy) for observational characterization and model constraint of aerosol intercontinental transport and to develop advanced sensors for future missions.

  1. Configuration of the thermal landscape determines thermoregulatory performance of ectotherms

    PubMed Central

    Sears, Michael W.; Angilletta, Michael J.; Schuler, Matthew S.; Borchert, Jason; Dilliplane, Katherine F.; Stegman, Monica; Rusch, Travis W.; Mitchell, William A.

    2016-01-01

    Although most organisms thermoregulate behaviorally, biologists still cannot easily predict whether mobile animals will thermoregulate in natural environments. Current models fail because they ignore how the spatial distribution of thermal resources constrains thermoregulatory performance over space and time. To overcome this limitation, we modeled the spatially explicit movements of animals constrained by access to thermal resources. Our models predict that ectotherms thermoregulate more accurately when thermal resources are dispersed throughout space than when these resources are clumped. This prediction was supported by thermoregulatory behaviors of lizards in outdoor arenas with known distributions of environmental temperatures. Further, simulations showed how the spatial structure of the landscape qualitatively affects responses of animals to climate. Biologists will need spatially explicit models to predict impacts of climate change on local scales. PMID:27601639

  2. Using Coronal Hole Maps to Constrain MHD Models

    NASA Astrophysics Data System (ADS)

    Caplan, Ronald M.; Downs, Cooper; Linker, Jon A.; Mikic, Zoran

    2017-08-01

    In this presentation, we explore the use of coronal hole maps (CHMs) as a constraint for thermodynamic MHD models of the solar corona. Using our EUV2CHM software suite (predsci.com/chd), we construct CHMs from SDO/AIA 193Å and STEREO-A/EUVI 195Å images for multiple Carrington rotations leading up to the August 21, 2017 total solar eclipse. We then construct synoptic CHMs from synthetic EUV images generated from global thermodynamic MHD simulations of the corona for each rotation. Comparisons of apparent coronal hole boundaries and estimates of the net open flux are used to benchmark and constrain our MHD model leading up to the eclipse. Specifically, the comparisons are used to find optimal parameterizations of our wave turbulence dissipation (WTD) coronal heating model.

  3. Constraining continuous rainfall simulations for derived design flood estimation

    NASA Astrophysics Data System (ADS)

    Woldemeskel, F. M.; Sharma, A.; Mehrotra, R.; Westra, S.

    2016-11-01

    Stochastic rainfall generation is important for a range of hydrologic and water resources applications. Stochastic rainfall can be generated using a number of models; however, preserving relevant attributes of the observed rainfall (including rainfall occurrence, variability, and the magnitude of extremes) continues to be difficult. This paper develops an approach to constrain stochastically generated rainfall with the aim of preserving the intensity-frequency-duration (IFD) relationships of the observed data. Two main steps are involved. First, the generated annual maximum rainfall is corrected recursively by matching the generated intensity-frequency relationships to the target (observed) relationships. Second, the remaining (non-annual maximum) rainfall is rescaled such that the mass balance of the generated rain before and after scaling is maintained. The recursive correction is performed at selected storm durations to minimise the dependence between annual maximum values of higher and lower durations for the same year. This ensures that the resulting sequences remain true to the observed rainfall as well as represent the design extremes that may have been developed separately and are needed for compliance reasons. The method is tested on simulated 6-min rainfall series across five Australian stations with different climatic characteristics. The results suggest that the annual maximum and the IFD relationships are well reproduced after constraining the simulated rainfall. While our presentation focusses on the representation of design rainfall attributes (IFDs), the proposed approach can also be easily extended to constrain other attributes of the generated rainfall, providing an effective platform for post-processing of stochastic rainfall generators.
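    The two-step constraining idea, correcting the annual maximum toward a target value and rescaling the remainder to preserve mass balance, can be sketched for a single storm duration. The paper applies the correction recursively across durations; the series below is a toy one, and the target maximum is an assumed design value, not a fitted IFD quantile.

```python
# Sketch of the two-step constraining idea for one year and one duration:
# (1) set the year's maximum to the target value, (2) rescale the remaining
# values so the annual total (mass balance) is preserved. Toy intensities.
def constrain_year(rain, target_max):
    """Adjust the year's maximum to target_max, rescaling the rest."""
    total = sum(rain)
    i_max = max(range(len(rain)), key=lambda i: rain[i])
    rest = total - rain[i_max]
    # Rescale non-maximum values so the annual total is unchanged.
    # (Assumes rest > 0 and that scaling does not create a new maximum.)
    scale = (total - target_max) / rest
    out = [r * scale for r in rain]
    out[i_max] = target_max
    return out

year = [0.0, 2.0, 5.0, 1.0, 12.0, 3.0]        # generated series, max = 12
adjusted = constrain_year(year, target_max=15.0)
print(round(sum(adjusted), 6), max(adjusted))  # total preserved, max = 15.0
```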

  4. ENSO-Related Precipitation and Its Statistical Relationship with the Walker Circulation Trend in CMIP5 AMIP Models

    DOE PAGES

    Yim, Bo; Yeh, Sang -Wook; Sohn, Byung -Ju

    2016-01-29

    Observational evidence shows that the Walker circulation (WC) in the tropical Pacific has strengthened in recent decades. In this study, we examine the WC trend for 1979–2005 and its relationship with the precipitation associated with the El Niño Southern Oscillation (ENSO) using the sea surface temperature (SST)-constrained Atmospheric Model Intercomparison Project (AMIP) simulations of the Coupled Model Intercomparison Project Phase 5 (CMIP5) climate models. All of the 29 models show a strengthening of the WC trend in response to an increase in the SST zonal gradient along the equator. Despite the same SST-constrained AMIP simulations, however, a large diversity is found among the CMIP5 climate models in the magnitude of the WC trend. The relationship between the WC trend and precipitation anomalies (PRCPAs) associated with ENSO (ENSO-related PRCPAs) shows that the longitudinal position of the ENSO-related PRCPAs in the western tropical Pacific is closely related to the magnitude of the WC trend. Specifically, it is found that the strengthening of the WC trend is large (small) in the CMIP5 AMIP simulations in which the ENSO-related PRCPAs are located relatively westward (eastward) in the western tropical Pacific. Furthermore, the zonal shift of the ENSO-related precipitation in the western tropical Pacific, which is associated with the climatological mean precipitation in the tropical Pacific, could play an important role in modifying the WC trend in the CMIP5 climate models.

  5. Refined Use of Satellite Aerosol Optical Depth Snapshots to Constrain Biomass Burning Emissions in the GOCART Model

    NASA Technical Reports Server (NTRS)

    Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Limbacher, James

    2017-01-01

    Simulations of biomass burning (BB) emissions in global chemistry and aerosol transport models depend on external inventories, which provide the location and strength of burning aerosol sources. Our previous work (Petrenko et al., 2012) shows that satellite snapshots of aerosol optical depth (AOD) near the emitted smoke plume can be used to constrain model-simulated AOD and, effectively, the assumed source strength. We now refine the satellite-snapshot method and investigate whether applying simple multiplicative emission correction factors to the widely used Global Fire Emission Database version 3 (GFEDv3) emission inventory can achieve regional-scale consistency between MODIS AOD snapshots and the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model. The model and satellite AOD are compared over a set of more than 900 BB cases observed by the MODIS instrument during the 2004 and 2006-2008 biomass burning seasons. The AOD comparison presented here shows that regional discrepancies between the model and satellite are diverse around the globe yet quite consistent within most ecosystems. Additional analysis including a small-fire emission correction shows the complementary nature of correcting for source strength and adding missing sources, and also indicates that in some regions other factors may be significant in explaining model-satellite discrepancies. This work sets the stage for a larger intercomparison within the Aerosol Inter-comparisons between Observations and Models (AeroCom) multi-model biomass burning experiment. We discuss here some of the other possible factors affecting the remaining discrepancies between model simulations and observations, but await comparisons with other AeroCom models to draw further conclusions.
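    A multiplicative correction factor of the kind described can be sketched as the mean ratio of observed to simulated AOD over the burning cases in a region. The regions and AOD values below are invented for illustration; the actual method aggregates many MODIS snapshot cases per ecosystem.

```python
# Sketch of multiplicative emission correction factors: per-region mean of
# observed/simulated AOD near the plume. All case values are hypothetical.
cases = [  # (region, observed AOD, model AOD)
    ("boreal", 0.45, 0.30), ("boreal", 0.60, 0.42),
    ("savanna", 0.20, 0.25), ("savanna", 0.18, 0.24),
]

by_region = {}
for region, obs, mod in cases:
    by_region.setdefault(region, []).append(obs / mod)

# A factor > 1 suggests the inventory underestimates emissions in that region.
factors = {r: sum(v) / len(v) for r, v in by_region.items()}
print({r: round(f, 2) for r, f in factors.items()})
```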

  6. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters, some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
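The core operation of the truncation approach, taking the mean of the PDF after cutting it off at the constraints, can be illustrated in one dimension with the standard closed-form mean of a truncated Gaussian. This is a generic 1-D sketch, not the paper's multivariate filter:

```python
import math

def phi(x):
    """Standard normal probability density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_mean(mu, sigma, lo, hi):
    """Mean of a Gaussian N(mu, sigma^2) truncated to [lo, hi]."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    z = Phi(b) - Phi(a)
    return mu + sigma * (phi(a) - phi(b)) / z

# Unconstrained estimate is -0.5, but the state is known to be nonnegative:
x_hat = truncated_mean(-0.5, 1.0, 0.0, 1e9)
print(x_hat > 0)  # -> True: the constrained estimate lies in the feasible region
```

A simple projection filter would clip the estimate to exactly 0 here; the truncated-PDF mean instead lands strictly inside the feasible region, which is one intuition for its accuracy advantage.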

  7. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units are subdivided uniformly by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. Hydraulic-conductivity estimates simulated with AnalyzeHOLE for lithologic units across screened and cased intervals are as much as 100 times smaller than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable and therefore can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. 
The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.
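The role Tikhonov regularization plays above can be shown with a deliberately tiny example: a single observation cannot separate two adjacent conductivity parameters, but a smoothness penalty makes the estimate unique and minimizes the variance within the unit. The numbers and the two-parameter setup are illustrative only, not PEST's actual formulation:

```python
# Minimize (d - g1*k1 - g2*k2)^2 + alpha*(k1 - k2)^2 for two hydraulic-
# conductivity parameters k1, k2 given one observation d. The Tikhonov
# penalty alpha*(k1 - k2)^2 pulls neighbouring intervals toward a common
# value, resolving an otherwise underdetermined problem.

def tikhonov_pair(g1, g2, d, alpha):
    """Solve the 2x2 normal equations of the penalized least squares
    problem above via Cramer's rule."""
    a11, a12 = g1 * g1 + alpha, g1 * g2 - alpha
    a21, a22 = g1 * g2 - alpha, g2 * g2 + alpha
    b1, b2 = g1 * d, g2 * d
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Equal sensitivities and one observation: the penalty equalizes the pair.
k1, k2 = tikhonov_pair(1.0, 1.0, 2.0, alpha=1.0)
print(k1, k2)  # -> 1.0 1.0
```

Without the penalty (alpha = 0) the normal-equation matrix here is singular: any pair with k1 + k2 = 2 fits the data equally well.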

  8. Constraining 3-PG with a new δ13C submodel: a test using the δ13C of tree rings.

    PubMed

    Wei, Liang; Marshall, John D; Link, Timothy E; Kavanagh, Kathleen L; DU, Enhao; Pangle, Robert E; Gag, Peter J; Ubierna, Nerea

    2014-01-01

A semi-mechanistic forest growth model, 3-PG (Physiological Principles Predicting Growth), was extended to calculate δ13C in tree rings. The δ13C estimates were based on the model's existing description of carbon assimilation and canopy conductance. The model was tested in two ~80-year-old natural stands of Abies grandis (grand fir) in northern Idaho. We used as many independent measurements as possible to parameterize the model. Measured parameters included quantum yield, specific leaf area, soil water content and litterfall rate. Predictions were compared with measurements of transpiration by sap flux, stem biomass, tree diameter growth, leaf area index and δ13C. Sensitivity analysis showed that the model's predictions of δ13C were sensitive to key parameters controlling carbon assimilation and canopy conductance, which would have allowed it to fail had the model been parameterized or programmed incorrectly. Instead, the simulated δ13C of tree rings was no different from measurements (P > 0.05). The δ13C submodel provides a convenient means of constraining parameter space and avoiding model artefacts. This δ13C test may be applied to any forest growth model that includes realistic simulations of carbon assimilation and transpiration. © 2013 John Wiley & Sons Ltd.
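A δ13C submodel of this kind typically rests on a Farquhar-type discrimination equation, in which the ratio of intercellular to ambient CO2 (ci/ca) links isotopic composition to the model's assimilation and conductance terms (ci = ca - A/g by Fick's law). The sketch below uses the common textbook form with standard fractionation constants; it is a generic illustration, not necessarily the exact submodel added to 3-PG:

```python
# Simplified Farquhar-type carbon isotope discrimination:
#   Delta = a + (b - a) * ci/ca,   delta13C_leaf ~= delta13C_air - Delta
# a and b are the widely used diffusion and carboxylation fractionations.

A_FRAC = 4.4   # permil, fractionation during diffusion through stomata
B_FRAC = 27.0  # permil, fractionation during carboxylation by Rubisco

def delta13c_leaf(delta13c_air, ci_over_ca):
    """Leaf (and hence tree-ring precursor) delta13C from air delta13C
    and the intercellular-to-ambient CO2 ratio."""
    discrimination = A_FRAC + (B_FRAC - A_FRAC) * ci_over_ca
    return delta13c_air - discrimination

# Typical values: air ~ -8 permil, ci/ca ~ 0.7 for a conifer
print(round(delta13c_leaf(-8.0, 0.7), 2))  # -> -28.22
```

Because ci/ca falls when canopy conductance drops relative to assimilation, modelled drought stress shows up directly as less negative δ13C, which is what makes tree-ring δ13C a useful independent constraint.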

  9. Modeling slow-slip segmentation in Cascadia subduction zone constrained by tremor locations and gravity anomalies

    NASA Astrophysics Data System (ADS)

    Li, Duo; Liu, Yajing

    2017-04-01

Along-strike segmentation of slow-slip events (SSEs) and nonvolcanic tremors in Cascadia may reflect heterogeneities of the subducting slab or overlying continental lithosphere. However, the mechanism behind this segmentation is not fully understood. We develop a 3-D model for episodic SSEs in northern and central Cascadia, incorporating both seismological and gravitational observations to constrain the heterogeneities in megathrust fault properties. Six years of automatically detected tremors are used to constrain the rate-state friction parameters. The effective normal stress at SSE depths is constrained by along-margin free-air and Bouguer gravity anomalies. The along-strike variation in the long-term plate convergence rate is also taken into consideration. Simulation results show five segments of ~Mw 6.0 SSEs spontaneously appearing along strike, correlated with the distribution of tremor epicenters. Modeled SSE recurrence intervals are comparable to GPS observations under both types of gravity-anomaly constraint. However, the model constrained by the free-air anomaly better reproduces the cumulative slip and yields surface displacements more consistent with GPS observations. The modeled along-strike segmentation represents the averaged slip release over many SSE cycles, rather than permanent barriers. Individual slow-slip events can still propagate across the boundaries, which may cause interactions between adjacent SSEs, as observed in time-dependent GPS inversions. In addition, the moment-duration scaling is sensitive to the selection of the velocity criterion for determining when SSEs occur. Hence, the detection ability of the current GPS network should be considered in the interpretation of slow-earthquake source parameter scaling relations.
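The two observational constraints enter a rate-and-state model in different places: tremor locations inform the friction parameters (a, b, dc), while the gravity anomalies set the effective normal stress that scales the whole friction law. A minimal sketch of that law (aging-law state variable; all parameter values are illustrative, not the paper's):

```python
import math

# Rate-and-state friction: tau = sigma_eff * (f0 + a*ln(v/v0) + b*ln(v0*theta/dc)).
# sigma_eff (constrained by gravity anomalies above) multiplies the whole
# law; a, b, dc (constrained by tremor locations) control its rate dependence.
F0, V0 = 0.6, 1e-6            # reference friction coefficient and slip rate (m/s)
A, B, DC = 0.008, 0.012, 0.02  # a - b < 0: velocity-weakening, SSE-capable

def friction(sigma_eff, v, theta):
    """Shear resistance (Pa) for slip rate v (m/s) and state theta (s)."""
    return sigma_eff * (F0 + A * math.log(v / V0) + B * math.log(V0 * theta / DC))

# At steady state, theta = dc/v, so both logarithmic terms vanish at v = v0:
tau_ss = friction(50e6, V0, DC / V0)
print(abs(tau_ss - 50e6 * F0) < 1e-6)  # -> True
```

Raising sigma_eff where the gravity anomaly is high stiffens the fault segment, which is one way along-margin gravity variations can translate into the segmented SSE behavior the abstract describes.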

  10. Using palaeoclimate data to improve models of the Antarctic Ice Sheet

    NASA Astrophysics Data System (ADS)

    Phipps, Steven; King, Matt; Roberts, Jason; White, Duanne

    2017-04-01

Ice sheet models are the most descriptive tools available to simulate the future evolution of the Antarctic Ice Sheet (AIS), including its contribution towards changes in global sea level. However, our knowledge of the dynamics of the coupled ice-ocean-lithosphere system is inevitably limited, in part due to a lack of observations. Furthermore, to build computationally efficient models that can be run for multiple millennia, it is necessary to use simplified descriptions of ice dynamics. Ice sheet modelling is therefore an inherently uncertain exercise. The past evolution of the AIS provides an opportunity to constrain the description of physical processes within ice sheet models and, therefore, to constrain our understanding of the role of the AIS in driving changes in global sea level. We use the Parallel Ice Sheet Model (PISM) to demonstrate how palaeoclimate data can improve our ability to predict the future evolution of the AIS. A 50-member perturbed-physics ensemble is generated, spanning uncertainty in the parameterisations of three key physical processes within the model: (i) the stress balance within the ice sheet, (ii) basal sliding and (iii) calving of ice shelves. A Latin hypercube approach is used to optimally sample the range of uncertainty in parameter values. This perturbed-physics ensemble is used to simulate the evolution of the AIS from the Last Glacial Maximum (~21,000 years ago) to present. Palaeoclimate records are then used to determine which ensemble members are the most realistic. This allows us to use data on past climates to directly constrain our understanding of the past contribution of the AIS towards changes in global sea level. Critically, it also allows us to determine which ensemble members are likely to generate the most realistic projections of the future evolution of the AIS.
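The Latin hypercube step mentioned above is straightforward to sketch: each of the n ensemble members falls in a distinct stratum of each parameter's range, so a 50-member ensemble covers the uncertainty range of every perturbed process far more evenly than plain random sampling would. A minimal stdlib-only version (the sampler itself, not PISM's configuration):

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """Return n_samples points in [0, 1)^n_params such that each parameter's
    range is split into n_samples strata and every stratum is hit once."""
    rng = random.Random(seed)
    strata = []
    for _ in range(n_params):
        perm = list(range(n_samples))
        rng.shuffle(perm)          # independent stratum order per parameter
        strata.append(perm)
    return [[(strata[p][i] + rng.random()) / n_samples
             for p in range(n_params)]
            for i in range(n_samples)]

# E.g. 50 ensemble members spanning 3 perturbed physical processes:
ensemble = latin_hypercube(50, 3)
cols = list(zip(*ensemble))
# Every stratum [k/50, (k+1)/50) of every parameter is sampled exactly once:
print(all(sorted(int(x * 50) for x in col) == list(range(50)) for col in cols))  # -> True
```

Each unit-interval coordinate would then be mapped onto the physical range of its parameter (e.g. a sliding-law exponent or calving threshold) before the ensemble is run.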

  11. Using paleoclimate data to improve models of the Antarctic Ice Sheet

    NASA Astrophysics Data System (ADS)

    King, M. A.; Phipps, S. J.; Roberts, J. L.; White, D.

    2016-12-01

Ice sheet models are the most descriptive tools available to simulate the future evolution of the Antarctic Ice Sheet (AIS), including its contribution towards changes in global sea level. However, our knowledge of the dynamics of the coupled ice-ocean-lithosphere system is inevitably limited, in part due to a lack of observations. Furthermore, to build computationally efficient models that can be run for multiple millennia, it is necessary to use simplified descriptions of ice dynamics. Ice sheet modeling is therefore an inherently uncertain exercise. The past evolution of the AIS provides an opportunity to constrain the description of physical processes within ice sheet models and, therefore, to constrain our understanding of the role of the AIS in driving changes in global sea level. We use the Parallel Ice Sheet Model (PISM) to demonstrate how paleoclimate data can improve our ability to predict the future evolution of the AIS. A large, perturbed-physics ensemble is generated, spanning uncertainty in the parameterizations of four key physical processes within ice sheet models: ice rheology, ice shelf calving, and the stress balances within ice sheets and ice shelves. A Latin hypercube approach is used to optimally sample the range of uncertainty in parameter values. This perturbed-physics ensemble is used to simulate the evolution of the AIS from the Last Glacial Maximum (~21,000 years ago) to present. Paleoclimate records are then used to determine which ensemble members are the most realistic. This allows us to use data on past climates to directly constrain our understanding of the past contribution of the AIS towards changes in global sea level. Critically, it also allows us to determine which ensemble members are likely to generate the most realistic projections of the future evolution of the AIS.

  12. How wild is your model fire? Constraining WRF-Chem wildfire smoke simulations with satellite observations

    NASA Astrophysics Data System (ADS)

    Fischer, E. V.; Ford, B.; Lassman, W.; Pierce, J. R.; Pfister, G.; Volckens, J.; Magzamen, S.; Gan, R.

    2015-12-01

    Exposure to high concentrations of particulate matter (PM) present during acute pollution events is associated with adverse health effects. While many anthropogenic pollution sources are regulated in the United States, emissions from wildfires are difficult to characterize and control. With wildfire frequency and intensity in the western U.S. projected to increase, it is important to more precisely determine the effect that wildfire emissions have on human health, and whether improved forecasts of these air pollution events can mitigate the health risks associated with wildfires. One of the challenges associated with determining health risks associated with wildfire emissions is that the low spatial resolution of surface monitors means that surface measurements may not be representative of a population's exposure, due to steep concentration gradients. To obtain better estimates of ambient exposure levels for health studies, a chemical transport model (CTM) can be used to simulate the evolution of a wildfire plume as it travels over populated regions downwind. Improving the performance of a CTM would allow the development of a new forecasting framework that could better help decision makers estimate and potentially mitigate future health impacts. We use the Weather Research and Forecasting model with online chemistry (WRF-Chem) to simulate wildfire plume evolution. By varying the model resolution, meteorology reanalysis initial conditions, and biomass burning inventories, we are able to explore the sensitivity of model simulations to these various parameters. Satellite observations are used first to evaluate model skill, and then to constrain the model results. These data are then used to estimate population-level exposure, with the aim of better characterizing the effects that wildfire emissions have on human health.

  13. Estimating radiative feedbacks from stochastic fluctuations in surface temperature and energy imbalance

    NASA Astrophysics Data System (ADS)

    Proistosescu, C.; Donohoe, A.; Armour, K.; Roe, G.; Stuecker, M. F.; Bitz, C. M.

    2017-12-01

Joint observations of global surface temperature and energy imbalance provide a unique opportunity to empirically constrain radiative feedbacks. However, the satellite record of Earth's radiative imbalance is relatively short and dominated by stochastic fluctuations. Estimates of radiative feedbacks obtained by regressing energy imbalance against surface temperature depend strongly on sampling choices and on assumptions about whether the stochastic fluctuations are primarily forced by atmospheric or oceanic variability (e.g. Murphy and Forster 2010, Dessler 2011, Spencer and Braswell 2011, Forster 2016). We develop a framework around a stochastic energy balance model that allows us to parse the different contributions of atmospheric and oceanic forcing based on their differing impacts on the covariance structure - or lagged regression - of temperature and radiative imbalance. We validate the framework in a hierarchy of general circulation models: the impact of atmospheric forcing is examined in unforced control simulations of fixed sea-surface temperature and slab ocean model versions; the impact of oceanic forcing is examined in coupled simulations with prescribed ENSO variability. With the impact of atmospheric and oceanic forcing constrained, we are able to predict the relationship between temperature and radiative imbalance in a fully coupled control simulation, finding that both forcing sources are needed to explain the structure of the lagged regression. We further model the dependence of feedback estimates on sampling interval by considering the effects of a finite equilibration time for the atmosphere, and issues of smoothing and aliasing. Finally, we develop a method to fit the stochastic model to the short timeseries of temperature and radiative imbalance by performing a Bayesian inference based on a modified version of the spectral Whittle likelihood. 
We are thus able to place realistic joint uncertainty estimates on both stochastic forcing and radiative feedbacks derived from observational records. We find that these records are, as of yet, too short to be useful in constraining radiative feedbacks, and we provide estimates of how the uncertainty narrows as a function of record length.
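The regression approach the abstract discusses can be reproduced with a zero-dimensional stochastic energy balance model, C dT/dt = F(t) - lam*T with imbalance N(t) = F(t) - lam*T(t). When the forcing F is white "atmospheric" noise, regressing N on T recovers -lam; it is the oceanic forcing pathway (omitted here) that biases the simple regression in the real record. All parameter values below are illustrative:

```python
import random

# Zero-dimensional stochastic energy balance model:
#   C * dT/dt = F(t) - lam * T,   N(t) = F(t) - lam * T(t)
rng = random.Random(42)
C, lam, dt = 8.0, 1.2, 0.1     # heat capacity, feedback, time step (illustrative)
T, Ts, Ns = 0.0, [], []
for _ in range(200_000):
    F = rng.gauss(0.0, 1.0)     # white atmospheric forcing
    Ts.append(T)
    Ns.append(F - lam * T)      # top-of-atmosphere imbalance
    T += (dt / C) * (F - lam * T)

mean_T = sum(Ts) / len(Ts)
mean_N = sum(Ns) / len(Ns)
slope = (sum((t - mean_T) * (n - mean_N) for t, n in zip(Ts, Ns))
         / sum((t - mean_T) ** 2 for t in Ts))
print(abs(-slope - lam) < 0.2)  # -> True: regression recovers the feedback
```

Adding an independent noise term directly to T (an "oceanic" forcing of temperature) breaks the independence of F and T and biases the slope, which is the core identification problem the paper's covariance-based framework is built to address.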

  14. Developments in Stochastic Fuel Efficient Cruise Control and Constrained Control with Applications to Aircraft

    NASA Astrophysics Data System (ADS)

    McDonough, Kevin K.

The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first and computational procedures of such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures. 
Examples of these sets for longitudinal and lateral aircraft dynamics are reported, and it is shown that these sets can be larger in size compared to the more commonly used safe sets. An approach to constrained maneuver planning based on chaining recoverable sets or integral safe sets is described and illustrated with a simulation example. To facilitate the application of this maneuver planning approach in aircraft loss of control (LOC) situations, when the model is only identified at the current trim condition but these sets need to be predicted at other flight conditions, the dependence trends of the safe and recoverable sets on aircraft flight conditions are characterized. A scaling procedure to estimate subsets of safe and recoverable sets at one trim condition based on their knowledge at another trim condition is defined. Finally, two control schemes that exploit integral safe sets are proposed. The first scheme, referred to as the controller state governor (CSG), resets the controller state (typically an integrator) to enforce the constraints and enlarge the set of plant states that can be recovered without constraint violation. The second scheme, referred to as the controller state and reference governor (CSRG), combines the controller state governor with the reference governor control architecture and provides the capability of simultaneously modifying the reference command and the controller state to enforce the constraints. Theoretical results that characterize the response properties of both schemes are presented. Examples are reported that illustrate the operation of these schemes on aircraft flight dynamics models and gas turbine engine dynamic models.
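The stochastic dynamic programming idea from the first part of the dissertation can be sketched with a toy model in which road grade evolves as a Markov chain and the policy trades fuel (quadratic in speed, scaled by grade) against travel time. All transition probabilities, fuel coefficients, and weights below are invented for illustration:

```python
# Value iteration for a fuel-vs-time speed policy over a Markovian road grade.
GRADES = ["down", "flat", "up"]
P = {"down": [0.6, 0.3, 0.1],   # grade transition probabilities (illustrative)
     "flat": [0.2, 0.6, 0.2],
     "up":   [0.1, 0.3, 0.6]}
SPEEDS = [60.0, 80.0]                           # candidate cruise speeds (km/h)
GRADE_FACTOR = {"down": 0.6, "flat": 1.0, "up": 1.5}
TIME_WEIGHT, GAMMA = 800.0, 0.95                # travel-time weight, discount

def stage_cost(grade, v):
    fuel = 0.001 * v * v * GRADE_FACTOR[grade]  # fuel used per distance step
    return fuel + TIME_WEIGHT / v               # plus time spent per step

V = {g: 0.0 for g in GRADES}
for _ in range(500):                            # value iteration to convergence
    V = {g: min(stage_cost(g, v)
                + GAMMA * sum(p * V[g2] for p, g2 in zip(P[g], GRADES))
                for v in SPEEDS)
         for g in GRADES}

policy = {g: min(SPEEDS, key=lambda v: stage_cost(g, v)
                 + GAMMA * sum(p * V[g2] for p, g2 in zip(P[g], GRADES)))
          for g in GRADES}
print(policy)  # slows down on uphill grades, cruises fast otherwise
```

Even this toy policy is speed-varying with road conditions, which is the qualitative "time-varying cruise" behavior the dissertation reports; the real formulation additionally models traffic speed and uses a neural-network fuel map rather than a quadratic.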

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saide, Pablo E.; Peterson, David A.; de Silva, Arlindo

We couple airborne, ground-based, and satellite observations; conduct regional simulations; and develop and apply an inversion technique to constrain hourly smoke emissions from the Rim Fire, the third largest observed in California, USA. Emissions constrained with multiplatform data show notable nocturnal enhancements (sometimes over a factor of 20), correlate better with daily burned area data, and are a factor of 2–4 higher than a priori estimates, highlighting the need for improved characterization of diurnal profiles and day-to-day variability when modeling extreme fires. Constraining only with satellite data results in smaller enhancements, mainly due to missing retrievals near the emissions source, suggesting that top-down emission estimates for these events could be underestimated and a multiplatform approach is required to resolve them. Predictions driven by emissions constrained with multiplatform data present significant variations in downwind air quality and in aerosol feedback on meteorology, emphasizing the need for improved emissions estimates during exceptional events.

  16. Degree-constrained multicast routing for multimedia communications

    NASA Astrophysics Data System (ADS)

    Wang, Yanlin; Sun, Yugeng; Li, Guidan

    2005-02-01

Multicast services have been increasingly used by many multimedia applications. As one of the key techniques supporting multimedia applications, rational and effective multicast routing algorithms are very important to network performance. When switch nodes in a network have different multicast capabilities, the multicast routing problem is modeled as the degree-constrained Steiner problem. We present two heuristic algorithms, named BMSTA and BSPTA, for the degree-constrained case in multimedia communications. Both algorithms generate degree-constrained multicast trees with bandwidth and end-to-end delay bounds. Simulations over random networks were carried out to compare the performance of the two proposed algorithms. Experimental results show that the proposed algorithms have advantages in traffic load balancing, which can avoid link blocking and enhance network performance. BMSTA is better than BSPTA at finding unsaturated links and/or nodes for generating multicast trees. The performance of BMSTA is affected by the variation of degree constraints.
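The degree constraint captures a switch's limited capacity to replicate a multicast stream. A simple Prim-style heuristic in the same spirit (this is a generic sketch, not a reproduction of BMSTA or BSPTA) grows a tree from the source, always attaching the cheapest reachable node, but never through a node whose replication budget is exhausted:

```python
import heapq

def degree_constrained_tree(n, edges, source, max_degree):
    """Greedy degree-constrained spanning tree. edges: {(u, v): cost},
    undirected, nodes 0..n-1. Returns a parent map rooted at source, or
    None if the greedy growth cannot reach every node."""
    adj = {u: [] for u in range(n)}
    for (u, v), c in edges.items():
        adj[u].append((c, v))
        adj[v].append((c, u))
    degree = {u: 0 for u in range(n)}
    parent, in_tree = {source: None}, {source}
    heap = [(c, source, v) for c, v in adj[source]]
    heapq.heapify(heap)
    while heap and len(in_tree) < n:
        c, u, v = heapq.heappop(heap)
        if v in in_tree or degree[u] >= max_degree:
            continue                  # u cannot replicate to another branch
        degree[u] += 1
        degree[v] += 1
        parent[v] = u
        in_tree.add(v)
        for c2, w in adj[v]:
            if w not in in_tree:
                heapq.heappush(heap, (c2, v, w))
    return parent if len(in_tree) == n else None

# Star plus side links: with max_degree=2 the hub cannot feed all three
# leaves directly, so the last leaf is attached through another leaf.
edges = {(0, 1): 1, (0, 2): 1, (0, 3): 1, (1, 2): 2, (2, 3): 2}
tree = degree_constrained_tree(4, edges, source=0, max_degree=2)
print(tree is not None)  # -> True
```

The example shows the essential behavior the paper's algorithms refine: once a node is saturated, traffic is rerouted through unsaturated nodes, at the cost of a longer path (and hence the delay bound the paper additionally enforces).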

  17. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE PAGES

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.; ...

    2018-03-26

We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.
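Not the CHiMaD/NIST benchmark itself, but a minimal 1-D Allen-Cahn phase-field solve with explicit Euler time stepping, to illustrate the kind of time-integrator choice the dendritic-growth benchmark is designed to compare (all parameters are illustrative):

```python
import math

# 1-D Allen-Cahn equation: d(phi)/dt = kappa * d2(phi)/dx2 - (phi^3 - phi),
# with a diffuse interface between the phi = -1 and phi = +1 phases.
N, dx, dt, kappa = 100, 0.5, 0.05, 1.0   # dt respects the explicit stability limit
phi = [math.tanh((i - N / 2) * dx) for i in range(N)]  # initial interface profile

def step(phi):
    new = phi[:]                          # fixed (Dirichlet) boundary values
    for i in range(1, N - 1):
        lap = (phi[i - 1] - 2 * phi[i] + phi[i + 1]) / dx**2
        dfdphi = phi[i] ** 3 - phi[i]     # derivative of the double-well energy
        new[i] = phi[i] + dt * (kappa * lap - dfdphi)
    return new

for _ in range(200):
    phi = step(phi)
print(all(-1.001 <= p <= 1.001 for p in phi))  # -> True: profile stays bounded
```

Doubling dt beyond the explicit diffusion stability limit (dx²/2κ here) makes this scheme blow up, which is exactly the accuracy/stability trade-off between time integrators that standardized benchmark problems make visible.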

  18. Phase field benchmark problems for dendritic growth and linear elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jokisaari, Andrea M.; Voorhees, P. W.; Guyer, Jonathan E.

We present the second set of benchmark problems for phase field models that are being jointly developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST) along with input from other members in the phase field community. As the integrated computational materials engineering (ICME) approach to materials design has gained traction, there is an increasing need for quantitative phase field results. New algorithms and numerical implementations increase computational capabilities, necessitating standard problems to evaluate their impact on simulated microstructure evolution as well as their computational performance. We propose one benchmark problem for solidification and dendritic growth in a single-component system, and one problem for linear elasticity via the shape evolution of an elastically constrained precipitate. We demonstrate the utility and sensitivity of the benchmark problems by comparing the results of 1) dendritic growth simulations performed with different time integrators and 2) elastically constrained precipitate simulations with different precipitate sizes, initial conditions, and elastic moduli. As a result, these numerical benchmark problems will provide a consistent basis for evaluating different algorithms, both existing and those to be developed in the future, for accuracy and computational efficiency when applied to simulate physics often incorporated in phase field models.

  19. Status of GRMHD simulations and radiative models of Sgr A*

    NASA Astrophysics Data System (ADS)

    Mościbrodzka, Monika

    2017-01-01

The Galactic center is a perfect laboratory for testing various theoretical models of accretion flows onto a supermassive black hole. Here, I review general relativistic magnetohydrodynamic simulations that were used to model emission from the central object, Sgr A*. These models predict dynamical and radiative properties of hot, magnetized, thick accretion disks with jets around a Kerr black hole. Models are compared to radio-VLBI, mm-VLBI, NIR, and X-ray observations of Sgr A*. I present recent constraints on the free parameters of the model, such as the accretion rate onto the black hole, the black hole angular momentum, and the orientation of the system with respect to our line of sight.

  20. Thermomechanical Fatigue of Ductile Cast Iron and Its Life Prediction

    NASA Astrophysics Data System (ADS)

    Wu, Xijia; Quan, Guangchun; MacNeil, Ryan; Zhang, Zhong; Liu, Xiaoyang; Sloss, Clayton

    2015-06-01

    Thermomechanical fatigue (TMF) behaviors of ductile cast iron (DCI) were investigated under out-of-phase (OP), in-phase (IP), and constrained strain-control conditions with temperature hold in various temperature ranges: 573 K to 1073 K, 723 K to 1073 K, and 433 K to 873 K (300 °C to 800 °C, 450 °C to 800 °C, and 160 °C to 600 °C). The integrated creep-fatigue theory (ICFT) model was incorporated into the finite element method to simulate the hysteresis behavior and predict the TMF life of DCI under those test conditions. With the consideration of four deformation/damage mechanisms: (i) plasticity-induced fatigue, (ii) intergranular embrittlement, (iii) creep, and (iv) oxidation, as revealed from the previous study on low cycle fatigue of the material, the model delineates the contributions of these physical mechanisms in the asymmetrical hysteresis behavior and the damage accumulation process leading to final TMF failure. This study shows that the ICFT model can simulate the stress-strain response and life of DCI under complex TMF loading profiles (OP and IP, and constrained with temperature hold).
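The ICFT model's delineation of mechanism contributions suggests a simple linear damage-accumulation picture: per-cycle damage increments from fatigue, embrittlement, creep, and oxidation sum until failure at total damage 1. The sketch below is a hedged illustration of that bookkeeping only; the rates are invented placeholders, not DCI material constants, and the actual ICFT formulation couples the mechanisms to the hysteresis response:

```python
# Linear damage accumulation across the four mechanisms named above.
def tmf_life(d_fatigue, d_creep, d_oxidation, d_embrittlement):
    """Cycles to failure, assuming per-cycle damage contributions add and
    failure occurs when the total reaches 1."""
    per_cycle = d_fatigue + d_creep + d_oxidation + d_embrittlement
    return round(1.0 / per_cycle)

# In-phase cycling with a tensile temperature hold boosts the creep share:
print(tmf_life(2e-4, 1e-4, 5e-5, 5e-5))  # -> 2500
```

The value of separating the terms is diagnostic: comparing fitted per-cycle contributions across OP, IP, and constrained tests shows which mechanism dominates each loading profile.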

  1. Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network

    PubMed Central

    2012-01-01

    Background Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3. 
Conclusions Yeast 5 expands and refines the computational reconstruction of yeast metabolism and improves the predictive accuracy of a stoichiometrically constrained yeast metabolic model. It differs from previous reconstructions and models by emphasizing the distinction between the yeast metabolic reconstruction and the stoichiometrically constrained model, and makes both available as Additional file 4 and Additional file 5 and at http://yeast.sf.net/ as separate systems biology markup language (SBML) files. Through this separation, we intend to make the modeling process more accessible, explicit, transparent, and reproducible. PMID:22663945
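The flux balance analysis the Yeast 5 scripts perform can be illustrated with a deliberately tiny case. For a linear pathway, the steady-state condition S·v = 0 forces equal flux through every reaction, so the maximal "growth" flux is simply the tightest upper bound along the chain; a real stoichiometrically constrained model solves a full linear program over thousands of reactions instead. Reaction names and bounds below are hypothetical:

```python
# Toy flux-balance sketch for a linear pathway (not Yeast 5 itself).
pathway_bounds = {              # hypothetical reaction: (lower, upper) flux
    "glucose_uptake": (0.0, 10.0),
    "glycolysis":     (0.0, 8.5),
    "biomass":        (0.0, 100.0),
}

def max_growth_linear_chain(bounds):
    """Max feasible flux when steady state forces all fluxes equal:
    the intersection of every reaction's [lower, upper] interval."""
    lo = max(l for l, _ in bounds.values())
    hi = min(u for _, u in bounds.values())
    if lo > hi:
        raise ValueError("infeasible flux bounds")
    return hi

print(max_growth_linear_chain(pathway_bounds))  # -> 8.5 (glycolysis-limited)
```

Gene knockouts are simulated in the same framework by forcing a reaction's bounds to zero; a predicted maximal growth flux of zero then corresponds to an inviability prediction, which is what the model's viability-phenotype accuracy refers to.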

  2. GROWTH AND INEQUALITY: MODEL EVALUATION BASED ON AN ESTIMATION-CALIBRATION STRATEGY

    PubMed Central

    Jeong, Hyeok; Townsend, Robert

    2010-01-01

    This paper evaluates two well-known models of growth with inequality that have explicit micro underpinnings related to household choice. With incomplete markets or transactions costs, wealth can constrain investment in business and the choice of occupation and also constrain the timing of entry into the formal financial sector. Using the Thai Socio-Economic Survey (SES), we estimate the distribution of wealth and the key parameters that best fit cross-sectional data on household choices and wealth. We then simulate the model economies for two decades at the estimated initial wealth distribution and analyze whether the model economies at those micro-fit parameter estimates can explain the observed macro and sectoral aspects of income growth and inequality change. Both models capture important features of Thai reality. Anomalies and comparisons across the two distinct models yield specific suggestions for improved research on the micro foundations of growth and inequality. PMID:20448833

  3. Comparing proxy and model estimates of hydroclimate variability and change over the Common Era

    NASA Astrophysics Data System (ADS)

    PAGES Hydro2k Consortium

    2017-12-01

    Water availability is fundamental to societies and ecosystems, but our understanding of variations in hydroclimate (including extreme events, flooding, and decadal periods of drought) is limited because of a paucity of modern instrumental observations that are distributed unevenly across the globe and only span parts of the 20th and 21st centuries. Such data coverage is insufficient for characterizing hydroclimate and its associated dynamics because of its multidecadal to centennial variability and highly regionalized spatial signature. High-resolution (seasonal to decadal) hydroclimatic proxies that span all or parts of the Common Era (CE) and paleoclimate simulations from climate models are therefore important tools for augmenting our understanding of hydroclimate variability. In particular, the comparison of the two sources of information is critical for addressing the uncertainties and limitations of both while enriching each of their interpretations. We review the principal proxy data available for hydroclimatic reconstructions over the CE and highlight the contemporary understanding of how these proxies are interpreted as hydroclimate indicators. We also review the available last-millennium simulations from fully coupled climate models and discuss several outstanding challenges associated with simulating hydroclimate variability and change over the CE. A specific review of simulated hydroclimatic changes forced by volcanic events is provided, as is a discussion of expected improvements in estimated radiative forcings, models, and their implementation in the future. Our review of hydroclimatic proxies and last-millennium model simulations is used as the basis for articulating a variety of considerations and best practices for how to perform proxy-model comparisons of CE hydroclimate. 
This discussion provides a framework for how best to evaluate hydroclimate variability and its associated dynamics using these comparisons and how they can better inform interpretations of both proxy data and model simulations. We subsequently explore means of using proxy-model comparisons to better constrain and characterize future hydroclimate risks. This is explored specifically in the context of several examples that demonstrate how proxy-model comparisons can be used to quantitatively constrain future hydroclimatic risks as estimated from climate model projections.

  4. Comparing Proxy and Model Estimates of Hydroclimate Variability and Change over the Common Era

    NASA Technical Reports Server (NTRS)

    Smerdon, Jason E.; Luterbacher, Jurg; Phipps, Steven J.; Anchukaitis, Kevin J.; Ault, Toby; Coats, Sloan; Cobb, Kim M.; Cook, Benjamin I.; Colose, Chris; Felis, Thomas

    2017-01-01

    Water availability is fundamental to societies and ecosystems, but our understanding of variations in hydroclimate (including extreme events, flooding, and decadal periods of drought) is limited because of a paucity of modern instrumental observations that are distributed unevenly across the globe and only span parts of the 20th and 21st centuries. Such data coverage is insufficient for characterizing hydroclimate and its associated dynamics because of its multidecadal to centennial variability and highly regionalized spatial signature. High-resolution (seasonal to decadal) hydroclimatic proxies that span all or parts of the Common Era (CE) and paleoclimate simulations from climate models are therefore important tools for augmenting our understanding of hydroclimate variability. In particular, the comparison of the two sources of information is critical for addressing the uncertainties and limitations of both while enriching each of their interpretations. We review the principal proxy data available for hydroclimatic reconstructions over the CE and highlight the contemporary understanding of how these proxies are interpreted as hydroclimate indicators. We also review the available last-millennium simulations from fully coupled climate models and discuss several outstanding challenges associated with simulating hydroclimate variability and change over the CE. A specific review of simulated hydroclimatic changes forced by volcanic events is provided, as is a discussion of expected improvements in estimated radiative forcings, models, and their implementation in the future. Our review of hydroclimatic proxies and last-millennium model simulations is used as the basis for articulating a variety of considerations and best practices for how to perform proxy-model comparisons of CE hydroclimate. 
This discussion provides a framework for how best to evaluate hydroclimate variability and its associated dynamics using these comparisons and how they can better inform interpretations of both proxy data and model simulations. We subsequently explore means of using proxy-model comparisons to better constrain and characterize future hydroclimate risks. This is explored specifically in the context of several examples that demonstrate how proxy-model comparisons can be used to quantitatively constrain future hydroclimatic risks as estimated from climate model projections.

  5. NetFlow Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corbet Jr., Thomas F; Beyeler, Walter E; Vanwestrienen, Dirk

    NetFlow Dynamics is a web-accessible analysis environment for simulating dynamic flows of materials on model networks. Performing a simulation requires both the NetFlow Dynamics application and a network model, which is a description of the structure of the nodes and edges of a network, including the flow capacity of each edge, the storage capacity of each node, and the sources and sinks of the material flowing on the network. NetFlow Dynamics consists of databases for storing network models, algorithms to calculate flows on networks, and a GIS-based graphical interface for performing simulations and viewing simulation results. Simulated flows are dynamic in the sense that flows on each edge of the network and inventories at each node change with time and can be out of equilibrium with boundary conditions. Any number of network models could be simulated using NetFlow Dynamics. To date, the models simulated have been models of petroleum infrastructure. The main model has been the National Transportation Fuels Model (NTFM), a network of U.S. oil fields, transmission pipelines, rail lines, refineries, tank farms, and distribution terminals. NetFlow Dynamics supports two different flow algorithms, the Gradient Flow algorithm and the Inventory Control algorithm, that were developed specifically for the NetFlow Dynamics application. The intent is to add additional algorithms in the future as needed. The ability to select from multiple algorithms is desirable because a single algorithm never covers all analysis needs. The current algorithms use a demand-driven, capacity-constrained formulation, which means that the algorithms strive to use all available capacity and stored inventory to meet desired flows to sinks, subject to the capacity constraints of each network component.
The current flow algorithms are best suited for problems in which a material flows on a capacity-constrained network representing a supply chain in which the material supplied can be stored at each node of the network. In the petroleum models, the flowing materials are crude oil and refined products that can be stored at tank farms, refineries, or terminals (i.e., the nodes of the network). Examples of other network models that could be simulated are currency flowing in a financial network, agricultural products moving to market, or natural gas flowing on a pipeline network.
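
    The demand-driven, capacity-constrained idea can be illustrated as a linear program that maximizes delivery to sinks subject to edge capacities and node balance. This is only a sketch of the general formulation; the Gradient Flow and Inventory Control algorithms themselves are specific to NetFlow Dynamics and are not reproduced here. The four-edge network below is hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Edges: 0: S->A (cap 6), 1: S->B (cap 4), 2: A->T (cap 5), 3: B->T (cap 5)
caps = [6, 4, 5, 5]

# Flow conservation at internal nodes A and B (inflow - outflow = 0).
A_eq = np.array([
    [1, 0, -1, 0],   # node A
    [0, 1, 0, -1],   # node B
])
b_eq = np.zeros(2)

# Maximize total delivery to the sink T (edges 2 and 3);
# linprog minimizes, so negate.
c = np.array([0, 0, -1, -1])
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, u) for u in caps])
print(-res.fun)  # total delivered flow
```

    Adding per-node storage would turn each balance row into a time-stepped inventory update, which is the sense in which the NetFlow Dynamics flows are "dynamic."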

  6. Lunar and Planetary Science XXXV: Mars: Wind, Dust, Sand, and Debris

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The session "Mars: Wind, Dust, Sand, and Debris" included: Mars Exploration Rovers: Laboratory Simulations of Aeolian Interactions; Thermal and Spectral Analysis of an Intracrater Dune Field in Amazonis Planitia; How High is that Dune? A Comparison of Methods Used to Constrain the Morphometry of Aeolian Bedforms on Mars; Dust Devils on Mars: Scaling of Dust Flux Based on Laboratory Simulations; A Close Encounter with a Terrestrial Dust Devil; Interpretation of Wind Direction from Eolian Features: Herschel Crater, Mars; Erosion Rates at the Viking 2 Landing Site; Mars Dust: Characterization of Particle Size and Electrostatic Charge Distributions; Simple Non-fluvial Models of Planetary Surface Modification, with Application to Mars; Comparison of Geomorphically Determined Winds with a General Circulation Model: Herschel Crater, Mars; Analysis of Martian Debris Aprons in Eastern Hellas Using THEMIS; Origin of Martian Northern Hemisphere Mid-Latitude Lobate Debris Aprons; Debris Aprons in the Tempe/Mareotis Region of Mars; and Constraining Flow Dynamics of Mass Movements on Earth and Mars.

  7. Modelling and Vibration Control of Beams with Partially Debonded Active Constrained Layer Damping Patch

    NASA Astrophysics Data System (ADS)

    SUN, D.; TONG, L.

    2002-05-01

    A detailed model for beams with partially debonded active constrained layer damping (ACLD) treatment is presented. In this model, the transverse displacement of the constraining layer is considered to be non-identical to that of the host structure. In the perfect bonding region, the viscoelastic core is modelled to carry both peel and shear stresses, while in the debonded area, it is assumed that no peel or shear stresses are transferred between the host beam and the constraining layer. The adhesive layer between the piezoelectric sensor and the host beam is also considered in this model. In active control, positive position feedback control is employed to control the first mode of the beam. Based on this model, the incompatibility of the transverse displacements of the active constraining layer and the host beam is investigated. The passive and active damping behaviors of the ACLD patch with different thicknesses, locations and lengths are examined. Moreover, the effects of debonding of the damping layer on both passive and active control are examined via a simulation example. The results show that the incompatibility of the transverse displacements is remarkable in the regions near the ends of the ACLD patch, especially for the higher-order vibration modes. It is found that a thinner damping layer may lead to larger shear strain and consequently to larger passive and active damping. In addition to the thickness of the damping layer, its length and location are also key factors in the hybrid control. The numerical results reveal that edge debonding can lead to a reduction of both passive and active damping, and that the hybrid damping may be more sensitive to debonding of the damping layer than the passive damping.

  8. A conditional approach to determining the effect of anthropogenic climate change on very rare events.

    NASA Astrophysics Data System (ADS)

    Wehner, Michael; Pall, Pardeep; Zarzycki, Colin; Stone, Daithi

    2016-04-01

    Probabilistic extreme event attribution is especially difficult for weather events that are caused by extremely rare large-scale meteorological patterns. Traditional modeling techniques have involved using ensembles of climate models, either fully coupled or with prescribed ocean and sea ice. Ensemble sizes for the latter case range from several hundred to tens of thousands. However, even if the simulations are constrained by the observed ocean state, the requisite large-scale meteorological pattern may not occur frequently enough, or even at all, in free-running climate model simulations. We present a method to ensure that simulated events similar to the observed event are modeled with enough fidelity that robust statistics can be determined given the large-scale meteorological conditions. By initializing suitably constrained short-term ensemble hindcasts of both the actual weather system and a counterfactual weather system in which the human interference in the climate system is removed, the human contribution to the magnitude of the event can be determined. However, the change (if any) in the probability of an event of the observed magnitude is conditional not only on the state of the ocean/sea ice system but also on the prescribed initial conditions determined by the causal large-scale meteorological pattern. We will discuss the implications of this technique through two examples: the 2013 Colorado flood and the 2013 Typhoon Haiyan.

  9. Satellite-based emission constraint for nitrogen oxides: Capability and uncertainty

    NASA Astrophysics Data System (ADS)

    Lin, J.; McElroy, M. B.; Boersma, F.; Nielsen, C.; Zhao, Y.; Lei, Y.; Liu, Y.; Zhang, Q.; Liu, Z.; Liu, H.; Mao, J.; Zhuang, G.; Roozendael, M.; Martin, R.; Wang, P.; Spurr, R. J.; Sneep, M.; Stammes, P.; Clemer, K.; Irie, H.

    2013-12-01

    Vertical column densities (VCDs) of tropospheric nitrogen dioxide (NO2) retrieved from satellite remote sensing have been employed widely to constrain emissions of nitrogen oxides (NOx). A major strength of satellite-based emission constraints is the analysis of emission trends and variability, while a crucial limitation is errors both in satellite NO2 data and in the model simulations relating NOx emissions to NO2 columns. Through a series of studies, we have explored these aspects over China. We separate anthropogenic from natural sources of NOx by exploiting their different seasonality. We infer trends of NOx emissions in recent years and the effects of a variety of socioeconomic events at different spatiotemporal scales, including general economic growth, the global financial crisis, Chinese New Year, and the Beijing Olympics. We further investigate the impact of growing NOx emissions on particulate matter (PM) pollution in China. As part of recent developments, we identify and correct errors in both the satellite NO2 retrieval and the model simulation that ultimately affect the NOx emission constraint. We improve the treatments of aerosol optical effects, clouds and surface reflectance in the NO2 retrieval process, using ground-based MAX-DOAS measurements as a reference to evaluate the improved retrieval results. We analyze the sensitivity of simulated NO2 to errors in the model representation of major meteorological and chemical processes, with a subsequent correction of model bias. Future studies will implement these improvements to re-constrain NOx emissions.

  10. Local dynamic subgrid-scale models in channel flow

    NASA Technical Reports Server (NTRS)

    Cabot, William H.

    1994-01-01

    The dynamic subgrid-scale (SGS) model has given good results in the large-eddy simulation (LES) of homogeneous isotropic or shear flow, and in the LES of channel flow using averaging in two or three homogeneous directions (the DA model). In order to simulate flows in general, complex geometries (with few or no homogeneous directions), the dynamic SGS model needs to be applied at a local level in a numerically stable way. Channel flow, which is inhomogeneous and wall-bounded in only one direction, provides a good initial test for local SGS models. Tests of the dynamic localization model were performed previously in channel flow using a pseudospectral code, and good results were obtained. Numerical instability due to persistently negative eddy viscosity was avoided either by constraining the eddy viscosity to be positive or by limiting the time that eddy viscosities could remain negative by co-evolving the SGS kinetic energy (the DLk model). The DLk model, however, was too expensive to run in the pseudospectral code due to a large near-wall term in the auxiliary SGS kinetic energy (k) equation. One objective was then to implement the DLk model in a second-order central finite difference channel code, in which the auxiliary k equation could be integrated implicitly in time at greatly reduced cost, and to assess its performance in comparison with the plane-averaged dynamic model, with no model at all, and with direct numerical simulation (DNS) and/or experimental data. Other local dynamic SGS models have been proposed recently, e.g., constrained dynamic models with random backscatter, and with eddy viscosity terms that are averaged in time over material path lines rather than in space. Another objective was to incorporate and test these models in channel flow.
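
    The positivity constraint mentioned above amounts to clipping the dynamically computed coefficient so that the eddy viscosity never goes negative at any point. A minimal sketch of that clipping step (Smagorinsky-style form; the filter width, coefficients, and strain-rate magnitudes are all illustrative):

```python
import numpy as np

# Dynamically computed model coefficients can be locally negative;
# clipping enforces a nonnegative eddy viscosity for numerical stability.
delta = 0.1                                      # filter width (illustrative)
c_dyn = np.array([0.02, -0.01, 0.015, -0.005])   # local dynamic coefficients
strain_mag = np.array([5.0, 8.0, 3.0, 10.0])     # |S| at the same grid points

nu_t = np.maximum(0.0, c_dyn * delta**2 * strain_mag)
print(nu_t)  # negative-coefficient points are clipped to zero eddy viscosity
```

    The DLk alternative instead lets the coefficient stay negative for a limited time, tracking the SGS kinetic energy so backscatter cannot drain more energy than the subgrid scales hold.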

  11. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
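
    The smoothing-based two-stage pseudo-least squares baseline that the proposed estimator improves on can be sketched for a scalar ODE dx/dt = -θx: smooth the data, differentiate the smoother, then regress the derivative on the state. This shows only the baseline method, on noise-free synthetic data for clarity; the paper's constrained local polynomial regression additionally imposes the ODE as constraints on the local fit:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# True model: dx/dt = -theta * x, with theta = 0.5 (assumed for this sketch).
theta_true = 0.5
t = np.linspace(0, 4, 81)
x = np.exp(-theta_true * t)  # noise-free synthetic observations

# Stage 1: smooth the data and estimate the derivative from the smoother.
spline = UnivariateSpline(t, x, k=3, s=0)
x_hat = spline(t)
dx_hat = spline.derivative()(t)

# Stage 2: least-squares fit of theta in dx/dt = -theta * x,
# i.e. minimize sum((dx_hat + theta * x_hat)^2), which has a closed form:
theta_est = -np.sum(dx_hat * x_hat) / np.sum(x_hat ** 2)
print(theta_est)  # close to 0.5
```

    With noisy data the derivative estimate in stage 1 degrades quickly, which is the weakness the constrained local polynomial approach targets.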

  12. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093

  13. Implementation of remote sensing data for flood forecasting

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Li, Y.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2016-12-01

    Flooding is one of the most frequent and destructive natural disasters. A timely, accurate and reliable flood forecast can provide vital information for flood preparedness, warning delivery, and emergency response. An operational flood forecasting system typically consists of a hydrologic model, which simulates runoff generation and concentration, and a hydraulic model, which models riverine flood wave routing and floodplain inundation. However, these two types of models suffer from various sources of uncertainties, e.g., forcing data initial conditions, model structure and parameters. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using streamflow measurements, and such applications are limited in well-gauged areas. The recent increasing availability of spatially distributed Remote Sensing (RS) data offers new opportunities for flood events investigation and forecast. Based on an Australian case study, this presentation will discuss the use 1) of RS soil moisture data to constrain a hydrologic model, and 2) of RS-derived flood extent and level to constrain a hydraulic model. The hydrological model is based on a semi-distributed system coupled with a two-soil-layer rainfall-runoff model GRKAL and a linear Muskingum routing model. Model calibration was performed using either 1) streamflow data only or 2) both streamflow and RS soil moisture data. The model was then further constrained through the integration of real-time soil moisture data. The hydraulic model is based on LISFLOOD-FP which solves the 2D inertial approximation of the Shallow Water Equations. Streamflow data and RS-derived flood extent and levels were used to apply a multi-objective calibration protocol. The effectiveness with which each data source or combination of data sources constrained the parameter space was quantified and discussed.

  14. Coseismic slip on the southern Cascadia megathrust implied by tsunami deposits in an Oregon lake and earthquake-triggered marine turbidites

    NASA Astrophysics Data System (ADS)

    Witter, Robert C.; Zhang, Yinglong; Wang, Kelin; Goldfinger, Chris; Priest, George R.; Allan, Jonathan C.

    2012-10-01

    We test hypothetical tsunami scenarios against a 4,600-year record of sandy deposits in a southern Oregon coastal lake that offer minimum inundation limits for prehistoric Cascadia tsunamis. Tsunami simulations constrain coseismic slip estimates for the southern Cascadia megathrust and contrast with slip deficits implied by earthquake recurrence intervals from turbidite paleoseismology. We model the tsunamigenic seafloor deformation using a three-dimensional elastic dislocation model and test three Cascadia earthquake rupture scenarios: slip partitioned to a splay fault; slip distributed symmetrically on the megathrust; and slip skewed seaward. Numerical tsunami simulations use the hydrodynamic finite element model, SELFE, that solves nonlinear shallow-water wave equations on unstructured grids. Our simulations of the 1700 Cascadia tsunami require >12-13 m of peak slip on the southern Cascadia megathrust offshore southern Oregon. The simulations account for tidal and shoreline variability and must crest the ˜6-m-high lake outlet to satisfy geological evidence of inundation. Accumulating this slip deficit requires ≥360-400 years at the plate convergence rate, exceeding the 330-year span of two earthquake cycles preceding 1700. Predecessors of the 1700 earthquake likely involved >8-9 m of coseismic slip accrued over >260 years. Simple slip budgets constrained by tsunami simulations allow an average of 5.2 m of slip per event for 11 additional earthquakes inferred from the southern Cascadia turbidite record. By comparison, slip deficits inferred from time intervals separating earthquake-triggered turbidites are poor predictors of coseismic slip because they meet geological constraints for only 4 out of 12 (˜33%) Cascadia tsunamis.
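
    The slip-deficit timing above is simple arithmetic on the plate convergence rate; a quick check assuming a representative southern Cascadia convergence rate of about 33 mm/yr (the abstract states only the resulting time spans, not the rate used):

```python
# Years needed to accumulate a given coseismic slip at a steady
# convergence rate. The ~33 mm/yr rate is an assumption for this check;
# the abstract reports only the resulting >=360-400 year spans.
rate_m_per_yr = 0.033

for slip_m in (12.0, 13.0):
    print(slip_m, "m ->", round(slip_m / rate_m_per_yr), "yr")
```

    The 12-13 m of peak slip implied by the tsunami simulations thus maps onto the quoted 360-400 year accumulation interval.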

  15. Uncertainty in the fate of soil organic carbon: A comparison of three conceptually different soil decomposition models

    USGS Publications Warehouse

    He, Yujie; Yang, Jinyan; Zhuang, Qianlai; McGuire, A. David; Zhu, Qing; Liu, Yaling; Teskey, Robert O.

    2014-01-01

    Conventional Q10 soil organic matter decomposition models and more complex microbial models are available for making projections of future soil carbon dynamics. However, it is unclear (1) how well the conceptually different approaches can simulate observed decomposition and (2) to what extent the trajectories of long-term simulations differ when using the different approaches. In this study, we compared three structurally different soil carbon (C) decomposition models (one Q10 and two microbial models of different complexity), each with a one- and two-horizon version. The models were calibrated and validated using 4 years of measurements of heterotrophic soil CO2 efflux from trenched plots in a Dahurian larch (Larix gmelinii Rupr.) plantation. All models reproduced the observed heterotrophic component of soil CO2 efflux, but the trajectories of soil carbon dynamics differed substantially in 100 year simulations with and without warming and increased litterfall input, with the microbial models producing better agreement with observed changes in soil organic C in long-term warming experiments. Our results also suggest that both constant and varying carbon use efficiency are plausible when modeling future decomposition dynamics and that the use of a short-term (e.g., a few years) period of measurement is insufficient to adequately constrain model parameters that represent long-term responses of microbial thermal adaptation. These results highlight the need to reframe the representation of decomposition models and to constrain parameters with long-term observations and multiple data streams. We urge caution in interpreting future soil carbon responses derived from existing decomposition models because both conceptual and parameter uncertainties are substantial.
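
    The conventional Q10 formulation referred to above scales a first-order decomposition rate exponentially with temperature: k(T) = k_ref * Q10^((T - T_ref)/10). A minimal sketch (the parameter values are illustrative, not those calibrated in the study):

```python
def q10_decomposition(c_pool, t_celsius, k_ref=0.02, q10=2.0, t_ref=20.0):
    """First-order decomposition flux with Q10 temperature scaling.

    For q10 = 2, the rate doubles with every 10 degree C of warming.
    Returns the CO2 efflux from a carbon pool of size c_pool per time step.
    """
    k = k_ref * q10 ** ((t_celsius - t_ref) / 10.0)
    return k * c_pool

# Warming from 10 to 20 C doubles the simulated efflux for Q10 = 2.
print(q10_decomposition(100.0, 10.0), q10_decomposition(100.0, 20.0))
```

    The microbial models in the comparison replace this fixed temperature scaling with explicit microbial biomass and enzyme pools, which is what allows carbon use efficiency to vary over time.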

  16. Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.

    2016-12-01

    The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients associated with pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the TCAT model are postulated based on microscale averages, and a parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving mesh technique was used to ensure grid-independent solutions within the run-time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.

  17. Waves, Plumes and Bubbles from Jupiter Comet Impacts

    NASA Astrophysics Data System (ADS)

    Palotai, Csaba J.; Sankar, Ramanakumar; McCabe, Tyler; Korycansky, Donald

    2017-10-01

    We present results from our numerical simulations of jovian comet impacts that investigate various phases of the Shoemaker-Levy 9 (SL9) and the 2009 impacts into Jupiter's atmosphere. Our work includes a linked series of observationally constrained, three-dimensional radiative-hydrodynamic simulations to model the impact, plume blowout, plume flight/splash, and wave-propagation phases of those impact events. Studying these stages using a single model is challenging because the spatial and temporal scales and the temperature range of those phases may differ by orders of magnitude (Harrington et al. 2004). In our simulations we model subsequent phases starting with the interpolation of the results of previous simulations onto a new, larger grid that is optimized for capturing all key physics of the relevant phenomena while maintaining computational efficiency. This enables us to carry out end-to-end simulations that require no ad hoc initial conditions. In this work, we focus on the waves generated by various phenomena during the impact event and study the temporal evolution of their position and speed. In particular, we investigate the shocks generated by the impactor during atmospheric entry, the expansion of the ejected plume, and the ascent of the hot bubble of material from terminal depth. These results are compared to the observed characteristics of the expanding SL9 rings (Hammel et al. 1995). Additionally, we present results from our sensitivity tests that focus on studying the differences in ejecta plume generation under various impactor parameters (e.g., impact angle, impactor size, material, etc.). These simulations are used to explain various phenomena related to the SL9 event and to constrain the characteristics of the unknown 2009 impactor body. This research was supported by National Science Foundation Grant AST-1627409.

  18. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    NASA Astrophysics Data System (ADS)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Types of information additional to locally observed discharge can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment that was assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular for rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject.
The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for a basin treated as ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
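
    The constrain-and-reject workflow described above (sample parameter sets, simulate, and keep only those consistent with climate-informed bounds) can be sketched in miniature. Everything in this sketch is hypothetical: a one-parameter stand-in for the hydrological model and invented acceptability bounds, not the paper's model, signatures, or limits:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_mean_flow(k):
    """Toy stand-in for a hydrological model: long-term mean flow as a
    function of a single recession parameter k (purely illustrative)."""
    return 10.0 * k / (1.0 + k)

# Prior sample of the parameter.
k_prior = rng.uniform(0.1, 5.0, size=10_000)

# Climate-informed acceptability bounds on the simulated long-term mean
# flow (hypothetical values standing in for the paper's constraints).
lo, hi = 4.0, 7.0
flows = simulate_mean_flow(k_prior)
k_posterior = k_prior[(flows >= lo) & (flows <= hi)]

# Only parameter values consistent with the constraint survive.
print(len(k_posterior), k_posterior.min(), k_posterior.max())
```

    The surviving parameter range is what the paper then evaluates against the observed hydrograph and hydrological signatures.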

  19. Uncertainty assessment and implications for data acquisition in support of integrated hydrologic models

    NASA Astrophysics Data System (ADS)

    Brunner, Philip; Doherty, J.; Simmons, Craig T.

    2012-07-01

    The data set used for calibration of regional numerical models that simulate groundwater flow and vadose zone processes is often dominated by head observations. It is to be expected, therefore, that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales have explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are: (1) Linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types. Its use can be supplemented by nonlinear methods. (2) Measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree. (3) The benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater. (4) Measurements of groundwater levels and measurements of vadose zone ET or soil moisture poorly constrain regional groundwater system forcing functions.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dierickx, Marion I. P.; Loeb, Abraham, E-mail: mdierickx@cfa.harvard.edu, E-mail: aloeb@cfa.harvard.edu

The extensive span of the Sagittarius (Sgr) stream makes it a promising tool for studying the gravitational potential of the Milky Way (MW). Characterizing its stellar kinematics can constrain halo properties and provide a benchmark for the paradigm of galaxy formation from cold dark matter. Accurate models of the disruption dynamics of the Sgr progenitor are necessary to employ this tool. Using a combination of analytic modeling and N-body simulations, we build a new model of the Sgr orbit and resulting stellar stream. In contrast to previous models, we simulate the full infall trajectory of the Sgr progenitor from the time it first crossed the MW virial radius 8 Gyr ago. An exploration of the parameter space of initial phase-space conditions yields tight constraints on the angular momentum of the Sgr progenitor. Our best-fit model is the first to accurately reproduce existing data on the 3D positions and radial velocities of the debris detected 100 kpc away in the MW halo. In addition to replicating the mapped stream, the simulation also predicts the existence of several arms of the Sgr stream extending to hundreds of kiloparsecs. The two most distant stars known in the MW halo coincide with the predicted structure. Additional stars in the newly predicted arms can be found with future data from the Large Synoptic Survey Telescope. Detecting a statistical sample of stars in the most distant Sgr arms would provide an opportunity to constrain the MW potential out to unprecedented Galactocentric radii.

  1. Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.

    PubMed

    Medvigy, David; Moorcroft, Paul R

    2012-01-19

Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography model version 2 (ED2), a structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.

  2. Using Remote Sensing Data to Constrain Models of Fault Interactions and Plate Boundary Deformation

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Donnellan, A.; Lyzenga, G. A.; Parker, J. W.; Milliner, C. W. D.

    2016-12-01

    Determining the distribution of slip and behavior of fault interactions at plate boundaries is a complex problem. Field and remotely sensed data often lack the necessary coverage to fully resolve fault behavior. However, realistic physical models may be used to more accurately characterize the complex behavior of faults constrained with observed data, such as GPS, InSAR, and SfM. These results will improve the utility of using combined models and data to estimate earthquake potential and characterize plate boundary behavior. Plate boundary faults exhibit complex behavior, with partitioned slip and distributed deformation. To investigate what fraction of slip becomes distributed deformation off major faults, we examine a model fault embedded within a damage zone of reduced elastic rigidity that narrows with depth and forward model the slip and resulting surface deformation. The fault segments and slip distributions are modeled using the JPL GeoFEST software. GeoFEST (Geophysical Finite Element Simulation Tool) is a two- and three-dimensional finite element software package for modeling solid stress and strain in geophysical and other continuum domain applications [Lyzenga, et al., 2000; Glasscoe, et al., 2004; Parker, et al., 2008, 2010]. New methods to advance geohazards research using computer simulations and remotely sensed observations for model validation are required to understand fault slip, the complex nature of fault interaction and plate boundary deformation. These models help enhance our understanding of the underlying processes, such as transient deformation and fault creep, and can aid in developing observation strategies for sUAV, airborne, and upcoming satellite missions seeking to determine how faults behave and interact and assess their associated hazard. Models will also help to characterize this behavior, which will enable improvements in hazard estimation. 
Validating the model results against remotely sensed observations will allow us to better constrain fault zone rheology and physical properties, having implications for the overall understanding of earthquake physics, fault interactions, plate boundary deformation and earthquake hazard, preparedness and risk reduction.

  3. Characterizing Transiting Planets with JWST Spectra: Simulations and Retrievals

    NASA Technical Reports Server (NTRS)

    Greene, Tom; Line, Michael; Fortney, Jonathan

    2015-01-01

There are now well over a thousand confirmed exoplanets, ranging from hot to cold and large to small worlds. JWST spectra will provide much more detailed information on the molecular constituents, chemical compositions, and thermal properties of the atmospheres of transiting planets than is now known. We explore this by modeling clear, cloudy, and high mean molecular weight atmospheres of typical hot Jupiter, warm Neptune, warm sub-Neptune, and cool super-Earth planets and then simulating their JWST transmission and emission spectra. These simulations were performed for several JWST instrument modes over 1-11 microns and incorporate realistic signal and noise components. We then performed state-of-the-art retrievals to determine how well temperatures and abundances (CO, CO2, H2O, NH3) will be constrained and over what pressures for these different planet types. Using these results, we appraise which instrument modes will be most useful for determining which properties of the different planets, and we assess how well we can constrain their compositions, C/O ratios, and temperature profiles.

  4. The tangential velocity of M31: CLUES from constrained simulations

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Courtois, Hélène; Tully, R. Brent

    2016-07-01

Determining the precise value of the tangential component of the velocity of M31 is a non-trivial astrophysical issue that relies on complicated modelling. This has recently led to conflicting estimates, obtained by several groups that used different methodologies and assumptions. This Letter addresses the issue by computing a Bayesian posterior distribution function of this quantity, in order to measure the compatibility of those estimates with Λ cold dark matter (ΛCDM). This is achieved using an ensemble of Local Group (LG) look-alikes collected from a set of constrained simulations (CSs) of the local Universe, and a standard unconstrained ΛCDM simulation. The latter allows us to build a control sample of LG-like pairs and to single out the influence of the environment in our results. We find that neither estimate is at odds with ΛCDM; however, whereas CSs favour higher values of vtan, the reverse is true for estimates based on LG samples gathered from unconstrained simulations, which overlook the environmental element.

  5. Heat as a tracer to estimate dissolved organic carbon flux from a restored wetland

    USGS Publications Warehouse

    Burow, K.R.; Constantz, J.; Fujii, R.

    2005-01-01

Heat was used as a natural tracer to characterize shallow ground water flow beneath a complex wetland system. Hydrogeologic data were combined with measured vertical temperature profiles to constrain a series of two-dimensional, transient simulations of ground water flow and heat transport using the model code SUTRA (Voss 1990). The measured seasonal temperature signal reached depths of 2.7 m beneath the pond. Hydraulic conductivity was varied in each of the layers in the model in a systematic manual calibration of the two-dimensional model to obtain the best fit to the measured temperature and hydraulic head. Results of a series of representative best-fit simulations represent a range in hydraulic conductivity values that had the best agreement between simulated and observed temperatures and that resulted in simulated pond seepage values within 1 order of magnitude of pond seepage estimated from the water budget. Resulting estimates of ground water discharge to an adjacent agricultural drainage ditch were used to estimate potential dissolved organic carbon (DOC) loads resulting from the restored wetland. Estimated DOC loads ranged from 45 to 1340 g C/(m² year), which is higher than estimated DOC loads from surface water. In spite of the complexity in characterizing ground water flow in peat soils, using heat as a tracer provided a constrained estimate of subsurface flow from the pond to the agricultural drainage ditch. Copyright © 2005 National Ground Water Association.
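The forward half of such a heat-tracer calibration can be sketched as a 1-D model of a seasonal temperature signal propagating downward through sediment. This is a conduction-only toy with hypothetical diffusivity, grid, and forcing values; SUTRA additionally couples advective heat transport to the flow field, which is what makes temperature informative about seepage.

```python
import numpy as np

# Grid and time setup (illustrative values, not from the SUTRA study)
nz, dz = 60, 0.05          # 3 m profile in 5 cm cells
alpha = 1e-6               # thermal diffusivity, m^2/s (typical wet sediment)
dt = 0.4 * dz**2 / alpha   # satisfies the explicit stability limit
period = 365.0 * 86400.0   # one seasonal cycle, seconds

T = np.full(nz, 15.0)      # initial temperature, deg C
amp = np.zeros(nz)         # seasonal amplitude envelope at each depth
nsteps = int(2 * period / dt)

for n in range(nsteps):
    t = n * dt
    T[0] = 15.0 + 8.0 * np.sin(2 * np.pi * t / period)   # seasonal forcing at the bed
    # explicit finite-difference conduction step in the interior
    T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                                        # no-flux bottom boundary
    if t > period:                                       # record the second cycle only
        amp = np.maximum(amp, np.abs(T - 15.0))

print(amp[0], amp[20], amp[40])   # the seasonal amplitude decays with depth
```

Calibration then amounts to adjusting the (here fixed) thermal properties and an advective term until the simulated amplitude-and-phase profile matches the measured one.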

  6. Constraining the Intergalactic and Circumgalactic Media with Lyman-Alpha Absorption

    NASA Astrophysics Data System (ADS)

    Sorini, Daniele; Onorbe, Jose; Hennawi, Joseph F.; Lukic, Zarija

    2018-01-01

Lyman-alpha (Ly-a) absorption features detected in quasar spectra probe the diffuse gas of the intergalactic medium (IGM) and the circumgalactic medium of intervening galaxies. On scales larger than 2 Mpc, the simulations asymptotically match the observations, because the ΛCDM model successfully describes the ambient IGM. This represents a critical advantage of studying the mean absorption profile. However, significant differences between the simulations, and between simulations and observations, are present on scales of 20 kpc to 2 Mpc, illustrating the challenges of accurately modeling and resolving galaxy formation physics. It is noteworthy that these differences are observed as far out as ~2 Mpc, indicating that the `sphere-of-influence' of galaxies could extend to roughly 20 times the halo virial radius (~100 kpc). Current observations are very precise on these scales and can thus strongly discriminate between different galaxy formation models. I demonstrate that the Ly-a absorption profile is primarily sensitive to the underlying temperature-density relationship of diffuse gas around galaxies, and argue that it thus provides a fundamental test of galaxy formation models. With near-future high-precision observations of Ly-a absorption, the tools developed in my thesis set the stage for even stronger constraints on models of galaxy formation and cosmology.

  7. Exploring JWST's Capability to Constrain Habitability on Simulated Terrestrial TESS Planets

    NASA Astrophysics Data System (ADS)

    Tremblay, Luke; Britt, Amber; Batalha, Natasha; Schwieterman, Edward; Arney, Giada; Domagal-Goldman, Shawn; Mandell, Avi; Planetary Systems Laboratory; Virtual Planetary Laboratory

    2017-01-01

In the following, we have worked to develop a flexible "observability" scale of biologically relevant molecules in the atmospheres of newly discovered exoplanets for the instruments aboard NASA's next flagship mission, the James Webb Space Telescope (JWST). We sought to create such a scale in order to provide the community with a tool with which to optimize target selection for JWST observations based on detections by the upcoming Transiting Exoplanet Survey Satellite (TESS). Current literature has laid the groundwork for defining both biologically relevant molecules and the characteristics that would make a new world "habitable", but it has so far lacked a cohesive analysis of JWST's capabilities to observe these molecules in exoplanet atmospheres and thereby constrain habitability. In developing our Observability Scale, we utilized a range of hypothetical planets (over planetary radii and stellar insolation) and generated three self-consistent atmospheric models (of different molecular compositions) for each of our simulated planets. With these planets and their corresponding atmospheres, we utilized the most accurate JWST instrument simulator, created specifically to process transiting exoplanet spectra. Through careful analysis of these simulated outputs, we were able to determine the relevant parameters that affected JWST's ability to constrain each individual molecular band with statistical accuracy and therefore generate a scale based on those key parameters. As a preliminary test of our Observability Scale, we have also applied it to the list of TESS candidate stars in order to determine JWST's observational capabilities for any soon-to-be-detected planet in those systems.

  8. Explorations in dark energy

    NASA Astrophysics Data System (ADS)

    Bozek, Brandon

This dissertation describes three research projects on the topic of dark energy. The first project is an analysis of a scalar field model of dark energy with an exponential potential using the Dark Energy Task Force (DETF) simulated data models. Using Markov Chain Monte Carlo sampling techniques we examine the ability of each simulated data set to constrain the parameter space of the exponential potential for data sets based on a cosmological constant and a specific exponential scalar field model. We compare our results with the constraining power calculated by the DETF using their "w0-wa" parameterization of the dark energy. We find that the respective increases in constraining power from one stage to the next produced by our analysis are consistent with DETF results. To further investigate the potential impact of future experiments, we also generate simulated data for an exponential model background cosmology which cannot be distinguished from a cosmological constant at DETF Stage 2, and show that for this cosmology good DETF Stage 4 data would exclude a cosmological constant by better than 3σ. The second project repeats this analysis for an Inverse Power Law (IPL), or "Ratra-Peebles" (RP), model. This model belongs to a popular subset of scalar field quintessence models exhibiting "tracking" behavior, which makes it particularly interesting theoretically. We find that the relative increase in constraining power on the parameter space of this model is consistent with what was found in the first project and the DETF report. We also show, using a background cosmology based on an IPL scalar field model that is consistent with a cosmological constant with Stage 2 data, that good DETF Stage 4 data would exclude a cosmological constant by better than 3σ. The third project extends the Causal Entropic Principle to predict the preferred curvature within the "multiverse". The Causal Entropic Principle (Bousso, et al.) provides an alternative approach to anthropic attempts to predict our observed value of the cosmological constant by calculating the entropy created within a causal diamond. We find that values larger than ρ_k = 40ρ_m are disfavored by more than 99.99%, with a peak value at ρ_Λ = 7.9 × 10^-123 and ρ_k = 4.3ρ_m for open universes. For universes that allow only positive curvature or both positive and negative curvature, we find a correlation between curvature and dark energy that leads to an extended region of preferred values. Our universe is found to be disfavored to an extent depending on the priors on curvature. We also provide a comparison to previous anthropic constraints on open universes and discuss future directions for this work.
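The Markov Chain Monte Carlo machinery used to map such posterior constraints can be sketched with a Metropolis sampler on a toy one-parameter model. The linear "observable", noise level, and step size below are invented for illustration; the actual DETF analysis evaluates full cosmological likelihoods over multi-dimensional parameter spaces.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data from a one-parameter model y = w * x (hypothetical setup)
w_true = -1.0
x = np.linspace(0.1, 1.5, 40)
data = w_true * x + rng.normal(0, 0.05, x.size)

def log_post(w):
    """Gaussian log-likelihood with a flat prior on w."""
    return -0.5 * np.sum((data - w * x) ** 2) / 0.05**2

# Metropolis sampling of the posterior for w
w, lp, samples = -0.5, None, []
lp = log_post(w)
for _ in range(20000):
    w_new = w + rng.normal(0, 0.02)          # random-walk proposal
    lp_new = log_post(w_new)
    if np.log(rng.uniform()) < lp_new - lp:  # Metropolis acceptance rule
        w, lp = w_new, lp_new
    samples.append(w)

samples = np.array(samples[5000:])           # discard burn-in
print(samples.mean(), samples.std())         # posterior mean and spread
```

The chain's standard deviation plays the role of the "constraining power" compared across DETF data stages: tighter simulated data shrink it.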

  9. A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng

    2009-11-01

Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, entropy optimization model, and chance constrained programming model. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating the simulated annealing algorithm, neural networks and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of the fuzzy returns and fuzzy simulation is used to generate the training data for the neural network. Since these models are usually solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given in terms of numerical examples, which imply that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large problems.
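The simulated-annealing core of such a hybrid algorithm can be sketched on a crisp (non-fuzzy) stand-in problem: maximize expected return subject to a variance cap, searching over the weight simplex. The returns, covariance, risk cap, and cooling schedule below are all hypothetical, and the neural-network/fuzzy-simulation components of the paper's algorithm are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical expected returns and covariance for 5 assets
mu = np.array([0.08, 0.12, 0.10, 0.07, 0.15])
A = rng.normal(size=(5, 5))
cov = 0.01 * (A @ A.T + 5 * np.eye(5))

w0 = np.full(5, 0.2)                    # equal-weight starting portfolio
risk_cap = 1.5 * float(w0 @ cov @ w0)   # variance cap (the constraint)

def objective(w):
    """Expected return, with a -inf penalty outside the risk constraint."""
    return float(mu @ w) if w @ cov @ w <= risk_cap else -np.inf

def neighbour(w):
    """Shift weight between two random assets, staying on the simplex."""
    w = w.copy()
    i, j = rng.choice(5, size=2, replace=False)
    d = min(w[i], rng.uniform(0, 0.1))
    w[i] -= d
    w[j] += d
    return w

w, f, best = w0, objective(w0), w0
T = 1.0
for _ in range(5000):
    w_new = neighbour(w)
    f_new = objective(w_new)
    # accept improvements always, worse moves with Boltzmann probability
    if f_new > f or rng.uniform() < np.exp((f_new - f) / T):
        w, f = w_new, f_new
        if f > objective(best):
            best = w
    T *= 0.999                          # geometric cooling

print(best, objective(best))
```

In the paper's setting, `objective` would instead call the trained neural network approximating the fuzzy expected value and variance.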

  10. Using whole disease modeling to inform resource allocation decisions: economic evaluation of a clinical guideline for colorectal cancer using a single model.

    PubMed

    Tappenden, Paul; Chilcott, Jim; Brennan, Alan; Squires, Hazel; Glynne-Jones, Rob; Tappenden, Janine

    2013-06-01

    To assess the feasibility and value of simulating whole disease and treatment pathways within a single model to provide a common economic basis for informing resource allocation decisions. A patient-level simulation model was developed with the intention of being capable of evaluating multiple topics within National Institute for Health and Clinical Excellence's colorectal cancer clinical guideline. The model simulates disease and treatment pathways from preclinical disease through to detection, diagnosis, adjuvant/neoadjuvant treatments, follow-up, curative/palliative treatments for metastases, supportive care, and eventual death. The model parameters were informed by meta-analyses, randomized trials, observational studies, health utility studies, audit data, costing sources, and expert opinion. Unobservable natural history parameters were calibrated against external data using Bayesian Markov chain Monte Carlo methods. Economic analysis was undertaken using conventional cost-utility decision rules within each guideline topic and constrained maximization rules across multiple topics. Under usual processes for guideline development, piecewise economic modeling would have been used to evaluate between one and three topics. The Whole Disease Model was capable of evaluating 11 of 15 guideline topics, ranging from alternative diagnostic technologies through to treatments for metastatic disease. The constrained maximization analysis identified a configuration of colorectal services that is expected to maximize quality-adjusted life-year gains without exceeding current expenditure levels. This study indicates that Whole Disease Model development is feasible and can allow for the economic analysis of most interventions across a disease service within a consistent conceptual and mathematical infrastructure. This disease-level modeling approach may be of particular value in providing an economic basis to support other clinical guidelines. 
Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Pushing the Frontier of Data-Oriented Geodynamic Modeling: from Qualitative to Quantitative to Predictive

    NASA Astrophysics Data System (ADS)

    Liu, L.; Hu, J.; Zhou, Q.

    2016-12-01

The rapid accumulation of geophysical and geological data sets poses an increasing demand for the development of geodynamic models to better understand the evolution of the solid Earth. Consequently, the earlier qualitative physical models are no longer satisfactory. Recent efforts are focusing on more quantitative simulations and more efficient numerical algorithms. Among these, a particular line of research is the implementation of data-oriented geodynamic modeling, with the purpose of building an observationally consistent and physically correct geodynamic framework. Such models can catalyze new insights into the functioning mechanisms of the various aspects of plate tectonics, and their predictive nature can also guide future research in a deterministic fashion. Over the years, we have been working on constructing large-scale geodynamic models with both sequential and variational data assimilation techniques. These models act as a bridge between different observational records, and the superposition of the constraining power from different data sets helps reveal unknown processes and mechanisms of the dynamics of the mantle and lithosphere. We simulate the post-Cretaceous subduction history in South America using a forward (sequential) approach. The model is constrained using past subduction history, seafloor age evolution, tectonic architecture of continents, and present-day geophysical observations. Our results quantify the various driving forces shaping the present South American flat slabs, which we find are all internally torn. The 3-D geometry of these torn slabs further explains the abnormal seismicity pattern and enigmatic volcanic history. An inverse (variational) model simulating the late Cenozoic western U.S. mantle dynamics with similar constraints reveals a mechanism for the formation of Yellowstone-related volcanism that differs from the traditional understanding.
Furthermore, important insights on the mantle density and viscosity structures also emerge from these models.

  12. Estimating Latent Variable Interactions with Nonnormal Observed Data: A Comparison of Four Approaches

    ERIC Educational Resources Information Center

    Cham, Heining; West, Stephen G.; Ma, Yue; Aiken, Leona S.

    2012-01-01

    A Monte Carlo simulation was conducted to investigate the robustness of 4 latent variable interaction modeling approaches (Constrained Product Indicator [CPI], Generalized Appended Product Indicator [GAPI], Unconstrained Product Indicator [UPI], and Latent Moderated Structural Equations [LMS]) under high degrees of nonnormality of the observed…

  13. Simulations of ultra-high energy cosmic rays in the local Universe and the origin of cosmic magnetic fields

    NASA Astrophysics Data System (ADS)

    Hackstein, S.; Vazza, F.; Brüggen, M.; Sorce, J. G.; Gottlöber, S.

    2018-04-01

We simulate the propagation of cosmic rays at ultra-high energies, ≳10^18 eV, in models of extragalactic magnetic fields from constrained simulations of the local Universe. We use constrained initial conditions with the cosmological magnetohydrodynamics code ENZO. The resulting models of the distribution of magnetic fields in the local Universe are used in the CRPROPA code to simulate the propagation of ultra-high energy cosmic rays. We investigate the impact of six different magneto-genesis scenarios, both primordial and astrophysical, on the propagation of cosmic rays over cosmological distances. Moreover, we study the influence of different source distributions around the Milky Way. Our study shows that different scenarios of magneto-genesis do not have a large impact on the anisotropy measurements of ultra-high energy cosmic rays. However, at energies above the Greisen-Zatsepin-Kuzmin (GZK) limit, there is anisotropy caused by the distribution of nearby sources, independent of the magnetic field model. This provides a chance to identify cosmic ray sources with future full-sky measurements and high number statistics at the highest energies. Finally, we compare our results to the dipole signal measured by the Pierre Auger Observatory. All our source models and magnetic field models can reproduce the observed dipole amplitude with a pure iron injection composition. Our results indicate that the dipole is observed due to clustering of secondary nuclei in the direction of nearby sources of heavy nuclei. A light injection composition is disfavoured, since the increase in dipole angular power from 4 to 8 EeV is too slow compared to that observed by the Pierre Auger Observatory.

  14. Microenvironmental independence associated with tumor progression.

    PubMed

    Anderson, Alexander R A; Hassanein, Mohamed; Branch, Kevin M; Lu, Jenny; Lobdell, Nichole A; Maier, Julie; Basanta, David; Weidow, Brandy; Narasanna, Archana; Arteaga, Carlos L; Reynolds, Albert B; Quaranta, Vito; Estrada, Lourdes; Weaver, Alissa M

    2009-11-15

    Tumor-microenvironment interactions are increasingly recognized to influence tumor progression. To understand the competitive dynamics of tumor cells in diverse microenvironments, we experimentally parameterized a hybrid discrete-continuum mathematical model with phenotypic trait data from a set of related mammary cell lines with normal, transformed, or tumorigenic properties. Surprisingly, in a resource-rich microenvironment, with few limitations on proliferation or migration, transformed (but not tumorigenic) cells were most successful and outcompeted other cell types in heterogeneous tumor simulations. Conversely, constrained microenvironments with limitations on space and/or growth factors gave a selective advantage to phenotypes derived from tumorigenic cell lines. Analysis of the relative performance of each phenotype in constrained versus unconstrained microenvironments revealed that, although all cell types grew more slowly in resource-constrained microenvironments, the most aggressive cells were least affected by microenvironmental constraints. A game theory model testing the relationship between microenvironment resource availability and competitive cellular dynamics supports the concept that microenvironmental independence is an advantageous cellular trait in resource-limited microenvironments.

  15. Low-dimensional recurrent neural network-based Kalman filter for speech enhancement.

    PubMed

    Xia, Youshen; Wang, Jun

    2015-07-01

This paper proposes a new recurrent neural network-based Kalman filter for speech enhancement, based on a noise-constrained least squares estimate. The parameters of the speech signal, modeled as an autoregressive process, are first estimated by the proposed recurrent neural network, and the speech signal is then recovered by Kalman filtering. The proposed recurrent neural network is globally asymptotically convergent to the noise-constrained estimate. Because the noise-constrained estimate is robust against non-Gaussian noise, the proposed recurrent neural network-based speech enhancement algorithm can minimize the estimation error of the Kalman filter parameters under non-Gaussian noise. Furthermore, owing to its low-dimensional model structure, the proposed neural network-based speech enhancement algorithm is much faster than two existing recurrent neural network-based speech enhancement algorithms. Simulation results show that the proposed algorithm achieves good performance with fast computation and effective noise reduction. Copyright © 2015 Elsevier Ltd. All rights reserved.
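The Kalman-filtering stage the abstract builds on can be sketched in scalar form: a clean AR(1) "speech" signal is buried in observation noise and recovered by the standard predict/update recursion. The AR coefficient and noise variances below are invented and assumed known; the paper's contribution is estimating them with a recurrent neural network under a noise-constrained criterion.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic AR(1) signal in additive noise (hypothetical parameters)
a, q, r = 0.95, 0.1, 1.0     # AR coefficient, process and observation variances
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, np.sqrt(r), n)   # noisy observations

# Scalar Kalman filter with the AR parameters taken as known
xhat = np.zeros(n)
P = 1.0
for t in range(1, n):
    xp = a * xhat[t - 1]      # predict the next state
    Pp = a * a * P + q        # and its error variance
    K = Pp / (Pp + r)         # Kalman gain
    xhat[t] = xp + K * (y[t] - xp)   # update with the innovation
    P = (1 - K) * Pp

mse_raw = np.mean((y - x) ** 2)
mse_kf = np.mean((xhat - x) ** 2)
print(mse_raw, mse_kf)        # the filter reduces the error variance
```

In the full algorithm, `a`, `q`, and `r` would come from the recurrent network's noise-constrained least squares estimate rather than being fixed.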

  16. Boundary control for a constrained two-link rigid-flexible manipulator with prescribed performance

    NASA Astrophysics Data System (ADS)

    Cao, Fangfei; Liu, Jinkun

    2018-05-01

    In this paper, we consider a boundary control problem for a constrained two-link rigid-flexible manipulator. The nonlinear system is described by hybrid ordinary differential equation-partial differential equation (ODE-PDE) dynamic model. Based on the coupled ODE-PDE model, boundary control is proposed to regulate the joint positions and eliminate the elastic vibration simultaneously. With the help of prescribed performance functions, the tracking error can converge to an arbitrarily small residual set and the convergence rate is no less than a certain pre-specified value. Asymptotic stability of the closed-loop system is rigorously proved by the LaSalle's Invariance Principle extended to infinite-dimensional system. Numerical simulations are provided to demonstrate the effectiveness of the proposed controller.

  17. Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.

    PubMed

    Lv, Jie; Havlak, Paul; Putnam, Nicholas H

    2011-10-05

Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]"), and is available as open source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to limit genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
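The constrained-rearrangement idea can be sketched as rejection sampling over genome moves: inversions and translocations are proposed at random, and a translocation is rejected whenever it would change which constrained genes share a chromosome. The genome sizes, move mix, and constraint pattern below are invented, and real DCJ operates on signed gene adjacencies rather than the plain lists used here.

```python
import random

random.seed(4)

# Toy genome: 3 chromosomes of gene ids; ~7% of genes are "constrained"
genome = [list(range(0, 30)), list(range(30, 60)), list(range(60, 90))]
constrained = {g for g in range(90) if g % 14 == 0}

def groups(chromosomes):
    """Constrained-gene linkage groups, one frozenset per chromosome."""
    return {frozenset(g for g in ch if g in constrained) for ch in chromosomes}

def try_move(genome):
    """Propose an inversion or a reciprocal translocation; reject any move
    that changes chromosomal linkage among the constrained genes."""
    if random.random() < 0.9:                       # inversion: always legal
        c = random.randrange(len(genome))
        i, j = sorted(random.sample(range(len(genome[c]) + 1), 2))
        genome[c][i:j] = genome[c][i:j][::-1]
        return True
    c1, c2 = random.sample(range(len(genome)), 2)   # translocation: swap tails
    i = random.randrange(len(genome[c1]) + 1)
    j = random.randrange(len(genome[c2]) + 1)
    trial = [ch for k, ch in enumerate(genome) if k not in (c1, c2)]
    trial += [genome[c1][:i] + genome[c2][j:], genome[c2][:j] + genome[c1][i:]]
    if groups(trial) != groups(genome):             # linkage constraint violated
        return False
    genome[:] = trial
    return True

accepted = sum(try_move(genome) for _ in range(2000))
print(accepted, groups(genome))   # linkage among constrained genes survives
```

Counting accepted versus rejected translocations under different constraint fractions is the kind of experiment that yields rearrangement-rate estimates like those quoted in the abstract.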

  18. A predictive parameter estimation approach for the thermodynamically constrained averaging theory applied to diffusion in porous media

    NASA Astrophysics Data System (ADS)

    Valdes-Parada, F. J.; Ostvar, S.; Wood, B. D.; Miller, C. T.

    2017-12-01

    Modeling of hierarchical systems such as porous media can be performed by different approaches that bridge microscale physics to the macroscale. Among the several alternatives available in the literature, the thermodynamically constrained averaging theory (TCAT) has emerged as a robust modeling approach that provides macroscale models that are consistent across scales. For specific closure relation forms, TCAT models are expressed in terms of parameters that depend upon the physical system under study. These parameters are usually obtained from inverse modeling based upon either experimental data or direct numerical simulation at the pore scale. Other upscaling approaches, such as the method of volume averaging, involve an a priori scheme for parameter estimation for certain microscale and transport conditions. In this work, we show how such a predictive scheme can be implemented in TCAT by studying the simple problem of single-phase passive diffusion in rigid and homogeneous porous media. The components of the effective diffusivity tensor are predicted for several porous media by solving ancillary boundary-value problems in periodic unit cells. The results are validated through a comparison with data from direct numerical simulation. This extension of TCAT constitutes a useful advance for certain classes of problems amenable to this estimation approach.

  19. Constrained motion model of mobile robots and its applications.

    PubMed

    Zhang, Fei; Xi, Yugeng; Lin, Zongli; Chen, Weidong

    2009-06-01

    Target detecting and dynamic coverage are fundamental tasks in mobile robotics and represent two important features of mobile robots: mobility and perceptivity. This paper establishes the constrained motion model and sensor model of a mobile robot to represent these two features and defines the k -step reachable region to describe the states that the robot may reach. We show that the calculation of the k-step reachable region can be reduced from that of 2(k) reachable regions with the fixed motion styles to k + 1 such regions and provide an algorithm for its calculation. Based on the constrained motion model and the k -step reachable region, the problems associated with target detecting and dynamic coverage are formulated and solved. For target detecting, the k-step detectable region is used to describe the area that the robot may detect, and an algorithm for detecting a target and planning the optimal path is proposed. For dynamic coverage, the k-step detected region is used to represent the area that the robot has detected during its motion, and the dynamic-coverage strategy and algorithm are proposed. Simulation results demonstrate the efficiency of the coverage algorithm in both convex and concave environments.
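    The reduction from 2^k fixed-motion-style regions to k + 1 regions can be illustrated with a frontier-by-frontier reachable-set computation; the grid walker below is a minimal stand-in, not the authors' constrained kinematic model:

```python
def k_step_reachable(start, k, obstacles):
    # grow the set of cells reachable in at most k unit moves,
    # expanding only the newest frontier at each of the k passes
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    frontier, reachable = {start}, {start}
    for _ in range(k):
        nxt = set()
        for x, y in frontier:
            for dx, dy in moves:
                p = (x + dx, y + dy)
                if p not in obstacles and p not in reachable:
                    nxt.add(p)
        reachable |= nxt
        frontier = nxt
    return reachable
```

    Each pass touches only the newly reached states, so k passes suffice instead of enumerating every length-k motion sequence.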

  20. A modeling study of direct and indirect N2O emissions from a representative catchment in the U.S. Corn Belt

    USDA-ARS?s Scientific Manuscript database

    Indirect nitrous oxide (N2O) emissions from drainage ditches and headwater streams are poorly constrained. To date, few studies have monitored stream N2O emissions and, to our knowledge, no modeling studies have been conducted to simulate stream N2O emissions. In this study, we developed direct and i...

  1. Modeling Gas-Aerosol Processes during MILAGRO 2006

    NASA Astrophysics Data System (ADS)

    Zaveri, R. A.; Chapman, E. G.; Easter, R. C.; Fast, J. D.; Flocke, F.; Kleinman, L. I.; Madronich, S.; Springston, S. R.; Voss, P. B.; Weinheimer, A.

    2007-12-01

    Significant gas-aerosol interactions are expected in the Mexico City outflow due to formation of various semi-volatile secondary inorganic and organic gases that can partition into the particulate phase and due to various heterogeneous chemical processes. A number of T0-T1-T2 Lagrangian transport episodes during the MILAGRO campaign provide focused modeling opportunities to elucidate the roles of various chemical and physical processes in the evolution of the primary trace gases and aerosol particles emitted in Mexico City over a period of 4-8 hours. Additionally, one long-range Lagrangian transport episode on March 18-19, 2006, as characterized by the Controlled Meteorological (CMET) balloon trajectories, presents an excellent opportunity to model the evolution of Mexico City pollutants over 26 hours. The key tools in our analysis of these Lagrangian episodes include a comprehensive Lagrangian box model and the WRF-chem model based on the new Model for Simulating Aerosol Interactions and Chemistry (MOSAIC), which simulates gas-phase photochemistry, heterogeneous reactions, equilibrium particulate phase-state and water content, and dynamic gas-particle partitioning for size-resolved aerosols. Extensive gas, aerosol, and meteorological measurements onboard the G1 and C130 aircraft and at the T0, T1, and T2 ground sites will be used to initialize, constrain, and evaluate the models. For the long-range transport event, in-situ vertical profiles of wind vectors from repeated CMET balloon soundings in the Mexico City outflow will be used to nudge the winds in the WRF-chem simulation. Preliminary model results will be presented with the intention to explore further collaborative opportunities to use additional gas and particulate measurements to better constrain and evaluate the models.
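    In the equilibrium limit, the gas-particle partitioning that MOSAIC treats dynamically reduces to the standard Pankow absorptive-partitioning expression; a hedged one-liner (not the MOSAIC code):

```python
def particle_fraction(c_star, c_oa):
    # equilibrium fraction of a semi-volatile species in the particle
    # phase, given its effective saturation concentration c_star and the
    # absorbing organic aerosol concentration c_oa (same units, e.g. ug/m3)
    return 1.0 / (1.0 + c_star / c_oa)
```

    When c_star equals the available absorbing mass, half the species partitions to the particle phase; species far less volatile than the ambient aerosol loading are almost entirely particulate.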

  2. Mechanisms of diurnal precipitation over the US Great Plains: a cloud resolving model perspective

    NASA Astrophysics Data System (ADS)

    Lee, Myong-In; Choi, Ildae; Tao, Wei-Kuo; Schubert, Siegfried D.; Kang, In-Sik

    2010-02-01

    The mechanisms of summertime diurnal precipitation in the US Great Plains were examined with the two-dimensional (2D) Goddard Cumulus Ensemble (GCE) cloud-resolving model (CRM). The model was constrained by the observed large-scale background state and surface flux derived from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Program’s Intensive Observing Period (IOP) data at the Southern Great Plains (SGP). The model, when continuously-forced by realistic surface flux and large-scale advection, simulates reasonably well the temporal evolution of the observed rainfall episodes, particularly for the strongly forced precipitation events. However, the model exhibits a deficiency for the weakly forced events driven by diurnal convection. Additional tests were run with the GCE model in order to discriminate between the mechanisms that determine daytime and nighttime convection. In these tests, the model was constrained with the same repeating diurnal variation in the large-scale advection and/or surface flux. The results indicate that it is primarily the surface heat and moisture flux that is responsible for the development of deep convection in the afternoon, whereas the large-scale upward motion and associated moisture advection play an important role in preconditioning nocturnal convection. In the nighttime, high clouds are continuously built up through their interaction and feedback with long-wave radiation, eventually initiating deep convection from the boundary layer. Without these upper-level destabilization processes, the model tends to produce only daytime convection in response to boundary layer heating. 
This study suggests that the correct simulation of the diurnal variation in precipitation requires that the free-atmospheric destabilization mechanisms resolved in the CRM simulation must be adequately parameterized in current general circulation models (GCMs) many of which are overly sensitive to the parameterized boundary layer heating.

  3. Tradeoffs in Acceleration and Initialization of Superparameterized Global Atmospheric Models for MJO and Climate Science

    NASA Astrophysics Data System (ADS)

    Pritchard, M. S.; Bretherton, C. S.; DeMott, C. A.

    2014-12-01

    New trade-offs are discussed in the cloud superparameterization approach to explicitly representing deep convection in global climate models. Intrinsic predictability tests show that the memory of cloud-resolving-scale organization is not critical for producing desired modes of organized convection such as the Madden-Julian Oscillation (MJO). This has implications for the feasibility of data assimilation and real-world initialization for superparameterized weather forecasting. Climate simulation sensitivity tests demonstrate that a 400% acceleration of cloud superparameterization is possible by restricting the 32-128 km scale regime without deteriorating the realism of the simulated MJO, but the number of cloud-resolving model grid columns is found to constrain the efficiency of vertical mixing, with consequences for the simulated liquid cloud climatology. Tuning opportunities for next-generation accelerated superparameterized climate models are discussed.

  4. Evaluating Micrometeorological Estimates of Groundwater Discharge from Great Basin Desert Playas

    NASA Astrophysics Data System (ADS)

    Jackson, T.; Halford, K. J.; Gardner, P.

    2017-12-01

    Groundwater availability studies in the arid southwestern United States traditionally have assumed that groundwater discharge by evapotranspiration (ETg) from desert playas is a significant component of the groundwater budget. This assumption persists because desert playa ETg rates are poorly constrained by Bowen Ratio energy budget (BREB) and eddy-covariance (EC) micrometeorological measurement approaches. Best attempts by previous studies to constrain ETg from desert playas have resulted in ETg rates that are below the detection limit of micrometeorological approaches. This study uses numerical models to further constrain desert playa ETg rates that are below the detection limit of EC (0.1 mm/d) and BREB (0.3 mm/d) approaches, and to evaluate the effect of hydraulic properties and salinity-based groundwater-density contrasts on desert playa ETg rates. Numerical models simulated ETg rates from desert playas in Death Valley, California and Dixie Valley, Nevada. Results indicate that actual ETg rates from desert playas are significantly below the upper detection limits provided by the BREB- and EC-based micrometeorological measurements. Discharge from desert playas contributes less than 2 percent of total groundwater discharge from Dixie and Death Valleys, which suggests discharge from desert playas is negligible in other basins. Numerical simulation results also show that ETg from desert playas primarily is limited by differences in hydraulic properties between alluvial fan and playa sediments and, to a lesser extent, by salinity-based groundwater density contrasts.

  5. Numeric stratigraphic modeling: Testing sequence stratigraphic concepts using high resolution geologic examples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armentrout, J.M.; Smith-Rouch, L.S.; Bowman, S.A.

    1996-08-01

    Numeric simulations based on integrated data sets enhance our understanding of depositional geometry and facilitate quantification of depositional processes. Numeric values tested against well-constrained geologic data sets can then be used in iterations testing each variable, and in predicting lithofacies distributions under various depositional scenarios using the principles of sequence stratigraphic analysis. The stratigraphic modeling software provides a broad spectrum of techniques for modeling and testing elements of the petroleum system. Using well-constrained geologic examples, variations in depositional geometry and lithofacies distributions between different tectonic settings (passive vs. active margin) and climate regimes (hothouse vs. icehouse) can provide insight into potential source rock and reservoir rock distribution, maturation timing, migration pathways, and trap formation. Two data sets are used to illustrate such variations: both include a seismic reflection profile calibrated by multiple wells. The first is a Pennsylvanian mixed carbonate-siliciclastic system in the Paradox basin, and the second a Pliocene-Pleistocene siliciclastic system in the Gulf of Mexico. Numeric simulations result in geometry and facies distributions consistent with those interpreted using the integrated stratigraphic analysis of the calibrated seismic profiles. An exception occurs in the Gulf of Mexico study where the simulated sediment thickness from 3.8 to 1.6 Ma within an upper slope minibasin was less than that mapped using a regional seismic grid. Regional depositional patterns demonstrate that this extra thickness was probably sourced from out of the plane of the modeled transect, illustrating the necessity for three-dimensional constraints on two-dimensional modeling.

  6. A Model-Data Fusion Approach for Constraining Modeled GPP at Global Scales Using GOME2 SIF Data

    NASA Astrophysics Data System (ADS)

    MacBean, N.; Maignan, F.; Lewis, P.; Guanter, L.; Koehler, P.; Bacour, C.; Peylin, P.; Gomez-Dans, J.; Disney, M.; Chevallier, F.

    2015-12-01

    Predicting the fate of the ecosystem carbon, C, stocks and their sensitivity to climate change relies heavily on our ability to accurately model the gross carbon fluxes, i.e. photosynthesis and respiration. However, there are large differences in the Gross Primary Productivity (GPP) simulated by different land surface models (LSMs), not only in terms of mean value, but also in terms of phase and amplitude when compared to independent data-based estimates. This strongly limits our ability to provide accurate predictions of carbon-climate feedbacks. One possible source of this uncertainty is from inaccurate parameter values resulting from incomplete model calibration. Solar Induced Fluorescence (SIF) has been shown to have a linear relationship with GPP at the typical spatio-temporal scales used in LSMs (Guanter et al., 2011). New satellite-derived SIF datasets have the potential to constrain LSM parameters related to C uptake at global scales due to their coverage. Here we use SIF data derived from the GOME2 instrument (Köhler et al., 2014) to optimize parameters related to photosynthesis and leaf phenology of the ORCHIDEE LSM, as well as the linear relationship between SIF and GPP. We use a multi-site approach that combines many model grid cells covering a wide spatial distribution within the same optimization (e.g. Kuppel et al., 2014). The parameters are constrained per Plant Functional type as the linear relationship described above varies depending on vegetation structural properties. The relative skill of the optimization is compared to a case where only satellite-derived vegetation index data are used to constrain the model, and to a case where both data streams are used. We evaluate the results using an independent data-driven estimate derived from FLUXNET data (Jung et al., 2011) and with a new atmospheric tracer, Carbonyl sulphide (OCS) following the approach of Launois et al. (ACPD, in review). 
We show that the optimization reduces the strong positive bias of the ORCHIDEE model and increases the correlation compared to independent estimates. Differences in spatial patterns and gradients between simulated GPP and observed SIF remain largely unchanged however, suggesting that the underlying representation of vegetation type and/or structure and functioning in the model requires further investigation.
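    The per-PFT linear SIF-GPP relationship being optimized can be sketched as an ordinary least-squares fit (a minimal numpy illustration, not the ORCHIDEE assimilation system; function name is ours):

```python
import numpy as np

def fit_sif_gpp(gpp, sif):
    # least-squares slope and intercept for sif ~ slope * gpp + intercept
    A = np.column_stack([gpp, np.ones_like(gpp)])
    (slope, intercept), *_ = np.linalg.lstsq(A, sif, rcond=None)
    return slope, intercept
```

    In the actual optimization this linear mapping is estimated jointly with the photosynthesis and phenology parameters, one relationship per plant functional type.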

  7. Numerical Simulation of Shock/Detonation-Deformable-Particle Interaction with Constrained Interface Reinitialization

    NASA Astrophysics Data System (ADS)

    Zhang, Ju; Jackson, Thomas; Balachandar, Sivaramakrishnan

    2015-06-01

    We will develop a computational model built upon our verified and validated in-house SDT code to provide an improved description of multiphase blast wave dynamics in which solid particles are considered deformable and can even undergo phase transitions. Our SDT computational framework includes a reactive compressible flow solver with sophisticated material interface tracking capability and a realistic equation of state (EOS), such as the Mie-Gruneisen EOS, for multiphase flow modeling. The behavior of the diffuse interface models of Shukla et al. (2010) and Tiwari et al. (2013) at different shock impedance ratios will first be examined and characterized. The recent constrained interface reinitialization of Shukla (2014) will then be developed to examine whether the conservation property can be improved. This work was supported in part by the U.S. Department of Energy and by the Defense Threat Reduction Agency.

  8. Hamiltonian Effective Field Theory Study of the N^{*}(1535) Resonance in Lattice QCD.

    PubMed

    Liu, Zhan-Wei; Kamleh, Waseem; Leinweber, Derek B; Stokes, Finn M; Thomas, Anthony W; Wu, Jia-Jun

    2016-02-26

    Drawing on experimental data for baryon resonances, Hamiltonian effective field theory (HEFT) is used to predict the positions of the finite-volume energy levels to be observed in lattice QCD simulations of the lowest-lying J^{P}=1/2^{-} nucleon excitation. In the initial analysis, the phenomenological parameters of the Hamiltonian model are constrained by experiment and the finite-volume eigenstate energies are a prediction of the model. The agreement between HEFT predictions and lattice QCD results obtained on volumes with spatial lengths of 2 and 3 fm is excellent. These lattice results also admit a more conventional analysis where the low-energy coefficients are constrained by lattice QCD results, enabling a determination of resonance properties from lattice QCD itself. Finally, the role and importance of various components of the Hamiltonian model are examined.

  9. Modeling sustainable reuse of nitrogen-laden wastewater by poplar.

    PubMed

    Wang, Yusong; Licht, Louis; Just, Craig

    2016-01-01

    Numerical modeling was used to simulate the leaching of nitrogen (N) to groundwater as a consequence of irrigating food processing wastewater onto grass and poplar under various management scenarios. Under current management practices for a large food processor, a simulated annual N loading of 540 kg ha(-1) yielded 93 kg ha(-1) of N leaching for grass and no N leaching for poplar during the growing season. Increasing the annual growing season N loading to approximately 1,550 kg ha(-1) for poplar only, using "weekly", "daily" and "calculated" irrigation scenarios, yielded N leaching of 17 kg ha(-1), 6 kg ha(-1), and 4 kg ha(-1), respectively. Constraining the simulated irrigation schedule by the current onsite wastewater storage capacity of approximately 757 megaliters (Ml) yielded N leaching of 146 kg ha(-1) yr(-1) while storage capacity scenarios of 3,024 and 4,536 Ml yielded N leaching of 65 and 13 kg ha(-1) yr(-1), respectively, for a loading of 1,550 kg ha(-1) yr(-1). Further constraining the model by the current wastewater storage volume and the available land area (approximately 1,000 hectares) required a "diverse" irrigation schedule that was predicted to leach a weighted average of 13 kg-N ha(-1) yr(-1) when dosed with 1,063 kg-N ha(-1) yr(-1).

  10. Establishment of a rotor model basis

    NASA Technical Reports Server (NTRS)

    Mcfarland, R. E.

    1982-01-01

    Radial-dimension computations in the RSRA's blade-element model are modified for both the acquisition of extensive baseline data and for real-time simulation use. The baseline data, which are for the evaluation of model changes, use very small increments and are of high quality. The modifications to the real-time simulation model are for accuracy improvement, especially when a minimal number of blade segments is required for real-time synchronization. An accurate technique for handling tip loss in discrete blade models is developed. The mathematical consistency and convergence properties of summation algorithms for blade forces and moments are examined and generalized integration coefficients are applied to equal-annuli midpoint spacing. Rotor conditions identified as 'constrained' and 'balanced' are used and the propagation of error is analyzed.

  11. Qualitative simulation for process modeling and control

    NASA Technical Reports Server (NTRS)

    Dalle Molle, D. T.; Edgar, T. F.

    1989-01-01

    A qualitative model is developed for a first-order system with a proportional-integral controller without precise knowledge of the process or controller parameters. Simulation of the qualitative model yields all of the solutions to the system equations. In developing the qualitative model, a necessary condition for the occurrence of oscillatory behavior is identified. Initializations that cannot exhibit oscillatory behavior produce a finite set of behaviors. When the phase-space behavior of the oscillatory behavior is properly constrained, these initializations produce an infinite but comprehensible set of asymptotically stable behaviors. While the predictions include all possible behaviors of the real system, a class of spurious behaviors has been identified. When limited numerical information is included in the model, the number of predictions is significantly reduced.

  12. CONSTRAINING A MODEL OF TURBULENT CORONAL HEATING FOR AU MICROSCOPII WITH X-RAY, RADIO, AND MILLIMETER OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cranmer, Steven R.; Wilner, David J.; MacGregor, Meredith A.

    2013-08-01

    Many low-mass pre-main-sequence stars exhibit strong magnetic activity and coronal X-ray emission. Even after the primordial accretion disk has been cleared out, the star's high-energy radiation continues to affect the formation and evolution of dust, planetesimals, and large planets. Young stars with debris disks are thus ideal environments for studying the earliest stages of non-accretion-driven coronae. In this paper we simulate the corona of AU Mic, a nearby active M dwarf with an edge-on debris disk. We apply a self-consistent model of coronal loop heating that was derived from numerical simulations of solar field-line tangling and magnetohydrodynamic turbulence. We also synthesize the modeled star's X-ray luminosity and thermal radio/millimeter continuum emission. A realistic set of parameter choices for AU Mic produces simulated observations that agree with all existing measurements and upper limits. This coronal model thus represents an alternative explanation for a recently discovered ALMA central emission peak that was suggested to be the result of an inner 'asteroid belt' within 3 AU of the star. However, it is also possible that the central 1.3 mm peak is caused by a combination of active coronal emission and a bright inner source of dusty debris. Additional observations of this source's spatial extent and spectral energy distribution at millimeter and radio wavelengths will better constrain the relative contributions of the proposed mechanisms.

  13. Investigating the Potential Impact of the Surface Water and Ocean Topography (SWOT) Altimeter on Ocean Mesoscale Prediction

    NASA Astrophysics Data System (ADS)

    Carrier, M.; Ngodock, H.; Smith, S. R.; Souopgui, I.

    2016-02-01

    NASA's Surface Water and Ocean Topography (SWOT) satellite, scheduled for launch in 2020, will provide sea surface height anomaly (SSHA) observations with a wider swath width and higher spatial resolution than current satellite altimeters. It is expected that this will help to further constrain ocean models in terms of the mesoscale circulation. In this work, this expectation is investigated by way of twin data assimilation experiments using the Navy Coastal Ocean Model Four Dimensional Variational (NCOM-4DVAR) data assimilation system using a weak constraint formulation. Here, a nature run is created from which SWOT observations are sampled, as well as along-track SSHA observations from simulated Jason-2 tracks. The simulated SWOT data has appropriate spatial coverage, resolution, and noise characteristics based on an observation-simulator program provided by the SWOT science team. The experiment is run for a three-month period during which the analysis is updated every 24 hours and each analysis is used to initialize a 96 hour forecast. The forecasts in each experiment are compared to the available nature run to determine the impact of the assimilated data. It is demonstrated here that the SWOT observations help to constrain the model mesoscale in a more consistent manner than traditional altimeter observations. The findings of this study suggest that data from SWOT may have a substantial impact on improving the ocean model analysis and forecast of mesoscale features and surface ocean transport.

  14. Clues on the Milky Way disc formation from population synthesis simulations

    NASA Astrophysics Data System (ADS)

    Robin, A. C.; Reylé, C.; Bienaymé, O.; Fernandez-Trincado, J. G.; Amôres, E. B.

    2016-09-01

    In recent years the stellar populations of the Milky Way have been investigated from large scale surveys in different ways, from pure star count analysis to detailed studies based on spectroscopic surveys. While in the former case the data can constrain the scale height and scale length thanks to completeness, they suffer from a high correlation between these two values. On the other hand, spectroscopic surveys suffer from complex selection functions which make it difficult to derive accurate density distributions. The scale length in particular has been difficult to constrain, resulting in discrepant values in the literature. Here, we investigate the thick disc characteristics by comparing model simulations with large scale data sets. The simulations are done with the population synthesis model of Besançon. We explore the parameters of the thick disc (shape, local density, age, metallicity) using a Markov Chain Monte Carlo method to constrain the model free parameters (Robin et al. 2014). Correlations between parameters are limited due to the vast spatial coverage of the surveys used (SDSS + 2MASS). We show that the thick disc was created during a long phase of formation, starting about 12 Gyr ago and finishing about 10 Gyr ago, during which gravitational contraction occurred, both vertically and radially. Moreover, in its early phase the thick disc was flaring in the outskirts. We conclude that the thick disc was created prior to the thin disc during a gravitational collapse phase, slowed down by turbulence related to a high star formation rate, as explained for example in Bournaud et al. (2009) or Lehnert et al. (2009). Our result does not favor formation from an initial thin disc thickened later by merger events or by secular evolution of the thin disc. We then study the in-plane distribution of stars in the thin disc from 2MASS and show that the thin disc scale length varies as a function of age, indicating inside-out formation. 
Moreover, we investigate the warp and flare and demonstrate that the warp amplitude is changing with time and the node angle is slightly precessing. Finally, we show comparisons between the new model and spectroscopic surveys. The new model correctly simulates the kinematics, metallicity, and α-abundance distributions in the solar neighbourhood as well as in the bulge region.
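    The MCMC constraint of the thick-disc parameters can be illustrated with a bare-bones 1-D random-walk Metropolis sampler (a generic sketch, not the actual fitting machinery of Robin et al.):

```python
import math
import random

def metropolis(log_post, x0, prop_sd, n, seed=0):
    # random-walk Metropolis: propose a Gaussian step and accept it
    # with probability min(1, posterior ratio)
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, prop_sd)
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp
        chain.append(x)
    return chain
```

    In the survey-fitting context, log_post would compare simulated and observed star counts; the chain then maps the joint posterior over scale height, scale length, local density, age, and metallicity, exposing any residual parameter correlations.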

  15. Modeling of intercontinental Saharan dust transport: What consequences on atmospheric concentrations and deposition fluxes in the Caribbean?

    NASA Astrophysics Data System (ADS)

    Laurent, Benoit; Formenti, Paola; Desboeufs, Karine; Vincent, Julie; Denjean, Cyrielle; Siour, Guillaume; Mayol-Bracero, Olga L.

    2015-04-01

    The Dust Aging and Transport from Africa to the Caribbean (Dust-AttaCk) project aims to document the physical and optical properties of long-range transported African dust in the Caribbean. A comprehensive field campaign was conducted in Cape San Juan, Puerto Rico (18.38°N 65.62°W) during June-July 2012, offering the opportunity to constrain how Saharan dust is transported from North Africa to the Caribbean in 3D models. Our main objectives are: (i) to discuss the ability of the CHIMERE Eulerian off-line chemistry-transport model to simulate the atmospheric Saharan dust loads observed in the Caribbean during the Dust-AttaCk campaign, as well as the altitude at which dust plumes are transported over the North Atlantic Ocean up to the Caribbean, (ii) to study the main Saharan dust emission source areas contributing to the dust loads in the Caribbean, and (iii) to estimate Saharan dust deposition in the Caribbean for deposition events observed during the Dust-AttaCk campaign. The dust model outputs are hourly dust concentration fields in µg m-3 for 12 aerosol size bins up to 30 µm and for each of the 15 sigma pressure vertical levels, column-integrated dust aerosol optical depth (AOD), and dry and wet deposition fluxes. The simulations performed for the Dust-AttaCk campaign period, together with satellite observations (MODIS AOD, SEVIRI AOD), are used to identify the Saharan emission source regions activated and to study the evolution of the dust plumes to the Cape San Juan station. In addition, the vertical transport of dust plumes from Saharan sources over the North Atlantic Ocean is investigated by combining model simulations and CALIOP observations. Aerosol surface concentrations and AOD simulated with CHIMERE are compared with in-situ observations at Cape San Juan and AERONET stations. Wet deposition measurements allow us to constrain the dust deposition flux simulated in the Caribbean after long-range transport.

  16. Constraining the Enceladus Plume and Understanding Its Physics via Numerical Simulation from Underground Source to Infinity

    NASA Astrophysics Data System (ADS)

    Yeoh, S. K.; Li, Z.; Goldstein, D. B.; Varghese, P. L.; Trafton, L. M.; Levin, D. A.

    2014-12-01

    The Enceladus ice/vapor plume not only accounts for the various features observed in the Saturnian system, such as the E-ring, the narrow neutral H2O torus, and Enceladus' own bright albedo, but also raises exciting new possibilities, including the existence of liquid water on Enceladus. Therefore, understanding the plume and its physics is important. Here we assume that the plume arises from flow expansion within multiple narrow subsurface cracks connected to reservoirs of liquid water underground, and simulate this expanding flow from the underground reservoir out to several Enceladus radii where Cassini data are available for comparison. The direct simulation Monte Carlo (DSMC) method is used to simulate the subsurface and near-field collisional regions and a free-molecular model is used to propagate the plume out into the far-field. We include the following physical processes in our simulations: the flow interaction with the crack walls, grain condensation from the vapor phase, non-equilibrium effects (e.g. freezing of molecular internal energy modes), the interaction between the vapor and the ice grains, the gravitational fields of Enceladus and Saturn, and Coriolis and centrifugal forces (due to motion in a non-inertial reference frame). The end result is a plume model that includes the relevant physics of the flow from the underground source out to where Cassini measurements are taken. We have made certain assumptions about the channel geometry and reservoir conditions. The model is constrained using various available Cassini data (particularly those of INMS, CDA and UVIS) to understand the plume physics as well as to estimate the vapor and grain production rates and their temporal variability.
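    The far-field free-molecular stage, in which molecules and grains no longer collide and simply fly ballistically, can be caricatured with a 2-D point-mass integrator under Enceladus gravity alone (hedged: the real model also includes Saturn's field and the non-inertial forces; constants are standard published values):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_ENC = 1.08e20    # Enceladus mass, kg
R_ENC = 2.52e5     # Enceladus mean radius, m

def propagate(pos, vel, dt, steps):
    # collisionless ballistic flight under Enceladus point-mass gravity,
    # integrated with forward Euler
    x, y = pos
    vx, vy = vel
    for _ in range(steps):
        r = math.hypot(x, y)
        a = -G * M_ENC / r**3   # acceleration per unit position
        vx += a * x * dt
        vy += a * y * dt
        x += vx * dt
        y += vy * dt
    return (x, y), (vx, vy)
```

    The surface escape speed sqrt(2 G M / R) is about 240 m/s, so gas launched at typical plume speeds of several hundred m/s escapes toward the E-ring, while slower grains fall back.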

  17. Modeling possible spreadings of a buoyant surface plume with lagrangian and eulerian approaches at different resolutions using flow syntheses from 1992-2007 - a Gulf of Mexico study

    NASA Astrophysics Data System (ADS)

    Tulloch, R.; Hill, C. N.; Jahn, O.

    2010-12-01

    We present results from an ensemble of BP oil spill simulations. The oil spill slick is modeled as a buoyant surface plume that is transported by ocean currents modulated, in some experiments, by surface winds. Ocean currents are taken from ECCO2 project (see http://ecco2.org ) observationally constrained state estimates spanning 1992-2007. In this work we (i) explore the role of increased resolution of ocean eddies, (ii) compare inferences from particle-based, lagrangian, approaches with eulerian, field-based, approaches and (iii) examine the impact of the differential response of oil particles and water to normal and extreme, hurricane-derived, wind stress. We focus on three main questions. Is the simulated response to an oil spill markedly different for different years, depending on ocean circulation and wind forcing? Does the simulated response depend heavily on resolution? Are lagrangian and eulerian estimates comparable? We start from two regional configurations of the MIT General Circulation Model (MITgcm - see http://mitgcm.org ) at 16km and 4km resolutions respectively, both covering the Gulf of Mexico and western North Atlantic regions. The simulations are driven at open boundaries with momentum and hydrographic fields from ECCO2 observationally constrained global circulation estimates. The time-dependent surface flow fields from these simulations are used to transport a dye that can optionally decay over time (approximating biological breakdown) and to transport lagrangian particles. Using these experiments we examine the robustness of conclusions regarding the fate of a buoyant slick injected at a single point. In conclusion we discuss how future drilling operations could use similar approaches to better anticipate outcomes of accidents both in this region and elsewhere.
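    The two transport views being compared can be mimicked in a few lines: lagrangian particles stepped through a prescribed surface velocity field, and an eulerian dye amplitude that decays exponentially to approximate biological breakdown (an illustrative kinematic sketch, not the MITgcm/ECCO2 setup; names are ours):

```python
import math

def advect_particles(parts, velocity, dt, steps):
    # forward-Euler lagrangian advection in a prescribed 2-D velocity field
    for _ in range(steps):
        parts = [(x + velocity(x, y)[0] * dt, y + velocity(x, y)[1] * dt)
                 for x, y in parts]
    return parts

def decayed_dye(c0, half_life, t):
    # eulerian dye amplitude after time t with first-order decay
    return c0 * math.exp(-math.log(2.0) * t / half_life)
```

    Comparing the particle cloud against the decaying dye field in the same currents is one way to check whether the lagrangian and eulerian estimates of a slick's fate agree.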

  18. Testing manifest monotonicity using order-constrained statistical inference.

    PubMed

    Tijmstra, Jesper; Hessen, David J; van der Heijden, Peter G M; Sijtsma, Klaas

    2013-01-01

    Most dichotomous item response models share the assumption of latent monotonicity, which states that the probability of a positive response to an item is a nondecreasing function of a latent variable intended to be measured. Latent monotonicity cannot be evaluated directly, but it implies manifest monotonicity across a variety of observed scores, such as the restscore, a single item score, and in some cases the total score. In this study, we show that manifest monotonicity can be tested by means of the order-constrained statistical inference framework. We propose a procedure that uses this framework to determine whether manifest monotonicity should be rejected for specific items. This approach provides a likelihood ratio test for which the p-value can be approximated through simulation. A simulation study is presented that evaluates the Type I error rate and power of the test, and the procedure is applied to empirical data.
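    The simulation-based likelihood ratio test can be illustrated with a generic sketch (not the authors' exact procedure): fit the group response proportions with and without the monotonicity constraint via pool-adjacent-violators, and approximate the p-value by parametric bootstrap under the constrained fit.

```python
import numpy as np

def pava(phat, n):
    """Weighted isotonic (nondecreasing) fit of group proportions via
    the pool-adjacent-violators algorithm."""
    vals = list(map(float, phat))
    wts = list(map(float, n))
    sizes = [1] * len(vals)
    i = 0
    while i < len(vals) - 1:
        if vals[i] > vals[i + 1] + 1e-12:        # adjacent violation: pool
            w = wts[i] + wts[i + 1]
            vals[i] = (wts[i] * vals[i] + wts[i + 1] * vals[i + 1]) / w
            wts[i] = w
            sizes[i] += sizes[i + 1]
            del vals[i + 1], wts[i + 1], sizes[i + 1]
            if i > 0:
                i -= 1
        else:
            i += 1
    out = []
    for v, s in zip(vals, sizes):
        out.extend([v] * s)
    return np.array(out)

def loglik(k, n, p):
    p = np.clip(p, 1e-10, 1 - 1e-10)
    return np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

def monotonicity_test(k, n, n_sim=1000, seed=0):
    """Likelihood-ratio test of a nondecreasing response curve, with the
    p-value approximated by parametric bootstrap under the isotonic fit."""
    k, n = np.asarray(k, float), np.asarray(n, float)
    rng = np.random.default_rng(seed)
    p_free = k / n                       # unconstrained MLE per score group
    p_iso = pava(p_free, n)              # MLE under monotonicity
    g2 = 2 * (loglik(k, n, p_free) - loglik(k, n, p_iso))
    hits = 0
    for _ in range(n_sim):
        ks = rng.binomial(n.astype(int), p_iso)
        pf = ks / n
        g2_sim = 2 * (loglik(ks, n, pf) - loglik(ks, n, pava(pf, n)))
        hits += g2_sim >= g2 - 1e-12
    return g2, hits / n_sim
```

    Monotone data give a zero test statistic and a p-value of 1; clearly decreasing proportions give a large statistic and a small simulated p-value.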

  19. Shock interaction with deformable particles using a constrained interface reinitialization scheme

    NASA Astrophysics Data System (ADS)

    Sridharan, P.; Jackson, T. L.; Zhang, J.; Balachandar, S.; Thakur, S.

    2016-02-01

    In this paper, we present axisymmetric numerical simulations of shock propagation in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. We use the Mie-Grüneisen equation of state to describe both the medium and the particle. The numerical method is a finite-volume solver on a Cartesian grid that allows for multi-material interfaces and shocks, and uses a novel constrained reinitialization scheme to precisely preserve particle mass and volume. We compute the unsteady inviscid drag coefficient as a function of time, and show that, when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. We also compute the mass-averaged particle pressure and show that the observed oscillations inside the particle are on the particle-acoustic time scale. Finally, we present simplified point-particle models that can be used for macroscale simulations. In the Appendix, we extend the isothermal or isentropic assumption underlying the point-force models to non-ideal equations of state, thus justifying their use for the current problem.

  20. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) path planning method, the problem is recast as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is then translated into an unconstrained optimisation problem with the help of slack variables. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. The path planning problem is then solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective for path planning: in the planning space, the calculated path is shorter and smoother than that produced by the traditional APF method. In addition, the improved method solves the dead-point problem effectively.
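    The baseline APF mechanics can be sketched as follows: a quadratic attractive potential toward the goal plus the classic repulsive potential around each obstacle, with the path descending the net force field. The gains, influence distance, and the fixed-step integrator are invented illustrative choices; this shows the traditional APF baseline, not the paper's optimal-control update.

```python
import numpy as np

def apf_force(p, goal, obstacles, k_att=1.0, k_rep=50.0, d0=2.0):
    """Net APF force: quadratic attraction to the goal plus the classic
    repulsive term inside influence radius d0 of each obstacle."""
    f = k_att * (goal - p)
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if 1e-9 < d < d0:
            f += k_rep * (1.0 / d - 1.0 / d0) / d**3 * (p - obs)
    return f

def plan(start, goal, obstacles, step=0.05, max_iter=10000, tol=0.1):
    """Descend the potential field with fixed-length steps along the force."""
    p, goal = np.array(start, float), np.asarray(goal, float)
    path = [p.copy()]
    for _ in range(max_iter):
        f = apf_force(p, goal, obstacles)
        p = p + step * f / (np.linalg.norm(f) + 1e-9)
        path.append(p.copy())
        if np.linalg.norm(p - goal) < tol:
            break
    return np.array(path)

# Obstacle placed off the straight start-goal line; an obstacle exactly on
# that line can stall a pure APF planner at a dead point, which is the
# failure mode the optimal-control update addresses.
obstacles = [np.array([2.0, 2.5])]
path = plan([0.0, 0.0], [4.0, 4.0], obstacles)
```

    The planner reaches the goal while keeping a standoff from the obstacle; the dead-point comment marks exactly where the plain method fails and the paper's improvement takes over.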

  1. Super-Eddington accreting massive black holes explore high-z cosmology: Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Cai, Rong-Gen; Guo, Zong-Kuan; Huang, Qing-Guo; Yang, Tao

    2018-06-01

    In this paper, we simulate Super-Eddington accreting massive black holes (SEAMBHs) as candles to probe cosmology for the first time. SEAMBHs have been demonstrated to provide a new tool for estimating cosmological distances. We therefore create a series of mock data sets of SEAMBHs, especially in the high redshift region, to check their ability to probe cosmology. To fulfill the potential of SEAMBHs for cosmology, we apply the simulated data to three projects. The first explores their ability to constrain the cosmological parameters, in which we combine different data sets of current observations such as the cosmic microwave background from Planck and type Ia supernovae from the Joint Light-curve Analysis (JLA). We find that the high redshift SEAMBHs can help to break the degeneracies of the background cosmological parameters constrained by Planck and JLA, thus giving much tighter constraints on the cosmological parameters. The second uses the high redshift SEAMBHs as complements to the low redshift JLA to constrain the early expansion rate and the dark energy density evolution in the cold dark matter framework. Our results show that these high redshift SEAMBHs are very powerful in constraining the early Hubble rate and the evolution of the dark energy density; thus they can give us more information about the expansion history of our Universe, which is also crucial for testing the ΛCDM model in the high redshift region. Finally, we check the SEAMBH candles' ability to reconstruct the equation of state of dark energy at high redshift. In summary, our results show that SEAMBHs, as rare candles in the high redshift region, can provide a new and independent observation to probe cosmology in the future.

  2. Holocene constraints on simulated tropical Pacific climate

    NASA Astrophysics Data System (ADS)

    Emile-Geay, J.; Cobb, K. M.; Carre, M.; Braconnot, P.; Leloup, J.; Zhou, Y.; Harrison, S. P.; Correge, T.; Mcgregor, H. V.; Collins, M.; Driscoll, R.; Elliot, M.; Schneider, B.; Tudhope, A. W.

    2015-12-01

    The El Niño-Southern Oscillation (ENSO) influences climate and weather worldwide, so uncertainties in its response to external forcings contribute to the spread in global climate projections. Theoretical and modeling studies have argued that such forcings may affect ENSO either via the seasonal cycle, the mean state, or extratropical influences, but these mechanisms are poorly constrained by the short instrumental record. Here we synthesize a pan-Pacific network of high-resolution marine biocarbonates spanning discrete snapshots of the Holocene (past 10,000 years of Earth's history), which we use to constrain a set of global climate model (GCM) simulations via a forward model and a consistent treatment of uncertainty. Observations suggest important reductions in ENSO variability throughout the interval, most consistently during 3-5 kyBP, when approximately 2/3 reductions are inferred. The magnitude and timing of these ENSO variance reductions bear little resemblance to those simulated by GCMs, or to equatorial insolation. The central Pacific witnessed a mid-Holocene increase in seasonality, at odds with the reductions simulated by GCMs. Finally, while GCM aggregate behavior shows a clear inverse relationship between seasonal amplitude and ENSO-band variance in sea-surface temperature, in agreement with many previous studies, such a relationship is not borne out by these observations. Our synthesis suggests that tropical Pacific climate is highly variable, but exhibited millennia-long periods of reduced ENSO variability whose origins, whether forced or unforced, contradict existing explanations. It also points to deficiencies in the ability of current GCMs to simulate forced changes in the tropical Pacific seasonal cycle and its interaction with ENSO, highlighting a key area of growth for future modeling efforts.

  3. Analysis of the Effect of Interior Nudging on Temperature and Precipitation Distributions of Multi-year Regional Climate Simulations

    NASA Astrophysics Data System (ADS)

    Nolte, C. G.; Otte, T. L.; Bowden, J. H.; Otte, M. J.

    2010-12-01

    There is disagreement in the regional climate modeling community as to the appropriateness of interior nudging. Some investigators argue that the regional model should be minimally constrained and allowed to respond to regional-scale forcing, while others have noted that in the absence of interior nudging, significant large-scale discrepancies develop between the regional model solution and the driving coarse-scale fields. These discrepancies lead to reduced confidence in the ability of regional climate models to dynamically downscale global climate model simulations under climate change scenarios, and detract from the usability of the regional simulations for impact assessments. The advantages and limitations of interior nudging schemes for regional climate modeling are investigated in this study. Multi-year simulations using the WRF model driven by reanalysis data over the continental United States at 36-km resolution are conducted using spectral nudging, grid point nudging, and a base case without interior nudging. The means, distributions, and inter-annual variability of temperature and precipitation will be evaluated in comparison to regional analyses.

  4. A Novel Approach for Determining Source–Receptor Relationships in Model Simulations: A Case Study of Black Carbon Transport in Northern Hemisphere Winter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Po-Lun; Gattiker, J. R.; Liu, Xiaohong

    2013-06-27

    A Gaussian process (GP) emulator is applied to quantify the contribution of local and remote emissions of black carbon (BC) to the BC concentrations in different regions, using a Latin hypercube sampling strategy for emission perturbations in simulations with the offline version of the Community Atmosphere Model Version 5.1 (CAM5). The source-receptor relationships are computed based on simulations constrained by a standard free-running CAM5 simulation and the ERA-Interim reanalysis product. The analysis demonstrates that the emulator is capable of retrieving the source-receptor relationships based on a small number of CAM5 simulations. Most regions are found to be susceptible to their local emissions. The emulator also finds that the source-receptor relationships retrieved from the model-driven and the reanalysis-driven simulations are very similar, suggesting that the simulated circulation in CAM5 resembles the assimilated meteorology in ERA-Interim. The robustness of the results provides confidence for applying the emulator to detect dose-response signals in the climate system.
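    The workflow above can be sketched minimally: Latin hypercube sampling of emission-scaling parameters, a toy linear "model" standing in for a CAM5 run, and a small hand-rolled GP emulator with a fixed RBF kernel. The kernel settings and the toy model coefficients are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc

def rbf(a, b, ell=0.3, sf=1.0):
    """Squared-exponential kernel between two point sets."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

class GPEmulator:
    """Minimal Gaussian-process emulator (fixed kernel, zero mean)."""
    def __init__(self, x, y, jitter=1e-5):
        self.x = x
        k = rbf(x, x) + jitter * np.eye(len(x))
        L = np.linalg.cholesky(k)
        self.alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    def predict(self, xs):
        return rbf(xs, self.x) @ self.alpha

# Toy "model": receptor BC concentration as a function of two source-region
# emission scalings (coefficients invented; a CAM5 run would go here)
def toy_model(e):
    return 0.7 * e[:, 0] + 0.2 * e[:, 1] + 0.05 * e[:, 0] * e[:, 1]

sampler = qmc.LatinHypercube(d=2, seed=1)
x_train = sampler.random(n=40)            # 40 emission-perturbation "runs"
emu = GPEmulator(x_train, toy_model(x_train))

x_test = sampler.random(n=10)
err = np.max(np.abs(emu.predict(x_test) - toy_model(x_test)))
```

    Once trained on a few dozen runs, the emulator can be probed cheaply (e.g. by finite differences in each emission scaling) to read off source-receptor sensitivities.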

  5. Determining dynamical parameters of the Milky Way Galaxy based on high-accuracy radio astrometry

    NASA Astrophysics Data System (ADS)

    Honma, Mareki; Nagayama, Takumi; Sakai, Nobuyuki

    2015-08-01

    In this paper we evaluate how the dynamical structure of the Galaxy can be constrained by high-accuracy VLBI (Very Long Baseline Interferometry) astrometry such as VERA (VLBI Exploration of Radio Astrometry). We generate simulated samples of maser sources which follow the gas motion caused by a spiral or bar potential, with their distribution similar to those currently observed with VERA and the VLBA (Very Long Baseline Array). We apply Markov chain Monte Carlo analyses to the simulated sample sources to determine the dynamical parameters of the models. We show that one can successfully recover the input model parameters if astrometric results are obtained for a few hundred sources with currently achieved astrometric accuracy. If astrometric data are available for 500 sources, the expected accuracy of R0 and Θ0 is ~1% or better, and parameters related to the spiral structure can be constrained to within 10% or better. We also show that the parameter determination accuracy is basically independent of the locations of resonances such as corotation and/or the inner/outer Lindblad resonances. Finally, we discuss the possibility of model selection based on the Bayesian information criterion (BIC), and demonstrate that BIC can be used to discriminate between different dynamical models of the Galaxy.
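    The fit-simulated-sources-with-MCMC step can be sketched with a random-walk Metropolis sampler on a toy rotation curve. The "truth" values, noise level, and linear curve are invented stand-ins for the spiral/bar potential models of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "maser" sample: circular speeds on a gently sloping rotation
# curve v(R) = theta0 + slope * (R - 8 kpc), with 10 km/s noise.
# (theta0 = 240 km/s and slope = -1 km/s/kpc are invented truth values.)
R = rng.uniform(4.0, 12.0, 300)                       # kpc
v_obs = 240.0 - 1.0 * (R - 8.0) + rng.normal(0.0, 10.0, R.size)

def log_post(theta):
    theta0, slope = theta
    resid = v_obs - (theta0 + slope * (R - 8.0))
    return -0.5 * np.sum(resid**2) / 10.0**2          # flat priors assumed

def metropolis(n_steps=10000, step=(1.0, 0.2)):
    """Random-walk Metropolis sampler for (theta0, slope)."""
    th = np.array([200.0, 0.0])                       # deliberately poor start
    lp = log_post(th)
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        prop = th + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:      # accept/reject
            th, lp = prop, lp_prop
        chain[i] = th
    return chain[n_steps // 2:]                       # drop burn-in

chain = metropolis()
```

    With a few hundred sources the posterior on the toy Θ0 analogue is sub-percent, mirroring the scaling claimed in the abstract; competing models could then be compared via BIC computed from each model's maximized likelihood.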

  6. Parameter Tuning and Calibration of RegCM3 with MIT-Emanuel Cumulus Parameterization Scheme over CORDEX East Asian Domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Liwei; Qian, Yun; Zhou, Tianjun

    2014-10-01

    In this study, we calibrated the performance of the regional climate model RegCM3 with the Massachusetts Institute of Technology (MIT)-Emanuel cumulus parameterization scheme over the CORDEX East Asia domain by tuning seven selected parameters through the multiple very fast simulated annealing (MVFSA) sampling method. The seven parameters were selected based on previous studies that customized RegCM3 with the MIT-Emanuel scheme in three different ways using sensitivity experiments. The responses of model results to the seven parameters were investigated. When the monthly total rainfall is constrained, the simulated spatial pattern of rainfall and the probability density function (PDF) distribution of daily rainfall rates are significantly improved in the optimal simulation. Sensitivity analysis suggests that the parameter "relative humidity criteria" (RH), which had not been considered in the default simulation, has the largest effect on the model results. The responses of total rainfall over different regions to RH were examined. Positive responses of total rainfall to RH are found over the northern equatorial western Pacific, contributed by positive responses of explicit rainfall. Following an increase of RH, increases in low-level convergence and the associated increases in cloud water favor an increase of the explicit rainfall. The identified optimal parameters constrained by the total rainfall have positive effects on the low-level circulation and the surface air temperature. Furthermore, the optimized parameters based on the extreme case are suitable for a normal case and for the model's new version with a mixed convection scheme.

  7. 3D-PTV around Operational Wind Turbines

    NASA Astrophysics Data System (ADS)

    Brownstein, Ian; Dabiri, John

    2016-11-01

    Laboratory studies and numerical simulations of wind turbines are typically constrained in how they can inform operational turbine behavior. Laboratory experiments are usually unable to match both pertinent parameters of full-scale wind turbines, the Reynolds number (Re) and the tip speed ratio, using scaled-down models. Additionally, numerical simulations of the flow around wind turbines are constrained by the large domain size and high Re that need to be simulated. When these simulations are performed, turbine geometry is typically simplified, resulting in flow structures near the rotor not being well resolved. In order to bypass these limitations, a quantitative flow visualization method was developed to take in situ measurements of the flow around wind turbines at the Field Laboratory for Optimized Wind Energy (FLOWE) in Lancaster, CA. The apparatus constructed was able to seed an approximately 9 m x 9 m x 5 m volume in the wake of the turbine using artificial snow. Quantitative measurements were obtained by tracking the evolution of the artificial snow using a four-camera setup. The methodology for calibrating and collecting data, as well as preliminary results detailing the flow around a 2 kW vertical-axis wind turbine (VAWT), will be presented.

  8. System Engineering Infrastructure Evolution Galileo IOV and the Steps Beyond

    NASA Astrophysics Data System (ADS)

    Eickhoff, J.; Herpel, H.-J.; Steinle, T.; Birn, R.; Steiner, W.-D.; Eisenmann, H.; Ludwig, T.

    2009-05-01

    The trend toward increasingly constrained financial budgets in satellite engineering requires permanent optimization of the S/C system engineering processes and infrastructure. In recent years Astrium has built up a system simulation infrastructure - the "Model-based Development & Verification Environment" (MDVE) - which is now well known across Europe and established as Astrium's standard approach for ESA and DLR projects, and now even for the EU/ESA project Galileo IOV. The key feature of the MDVE / FVE approach is to provide an entire S/C simulation (with full-featured OBC simulation) already in early phases, so that OBSW code tests can start on a simulated S/C, with hardware later added in the loop step by step up to an entire "Engineering Functional Model (EFM)" or "FlatSat". The subsequent enhancements to this simulator infrastructure w.r.t. spacecraft design data handling are reported in the following sections.

  9. Aperiodic Robust Model Predictive Control for Constrained Continuous-Time Nonlinear Systems: An Event-Triggered Approach.

    PubMed

    Liu, Changxin; Gao, Jian; Li, Huiping; Xu, Demin

    2018-05-01

    Event-triggered control is a promising solution for cyber-physical systems, such as networked control systems, multiagent systems, and large-scale intelligent systems. In this paper, we propose an event-triggered model predictive control (MPC) scheme for constrained continuous-time nonlinear systems with bounded disturbances. First, a time-varying tightened state constraint is computed to achieve robust constraint satisfaction, and an event-triggered scheduling strategy is designed in the framework of dual-mode MPC. Second, sufficient conditions for ensuring feasibility and closed-loop robust stability are developed, respectively. We show that robust stability can be ensured and communication load reduced with the proposed MPC algorithm. Finally, numerical simulations and comparison studies are performed to verify the theoretical results.
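    The core event-triggering idea can be sketched on a toy problem: re-solve the finite-horizon optimization only when the nominal prediction drifts too far from the disturbed plant. Everything below (scalar plant, brute-force optimizer over a discretized input set, the trigger threshold) is an invented illustration, far simpler than the paper's continuous-time nonlinear scheme.

```python
import itertools
import numpy as np

A, B = 1.0, 1.0                  # scalar plant x+ = A x + B u + w
U = np.linspace(-1.0, 1.0, 5)    # admissible inputs: |u| <= 1, discretized
N = 5                            # prediction horizon

def solve_mpc(x0):
    """Brute-force finite-horizon MPC over the discretized input set."""
    best_cost, best_seq = float("inf"), None
    for seq in itertools.product(U, repeat=N):
        x, cost = x0, 0.0
        for u in seq:
            x = A * x + B * u            # nominal (disturbance-free) model
            cost += x * x + 0.1 * u * u
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return list(best_seq)

def run(x0=4.0, steps=30, delta=0.3, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x, x_pred, plan, solves = x0, x0, [], 0
    for _ in range(steps):
        # Event trigger: re-optimize only when the plan is exhausted or the
        # nominal prediction has drifted more than delta from the true state
        if not plan or abs(x - x_pred) > delta:
            plan, x_pred, solves = solve_mpc(x), x, solves + 1
        u = plan.pop(0)
        x_pred = A * x_pred + B * u                       # nominal model
        x = A * x + B * u + rng.uniform(-noise, noise)    # disturbed plant
    return x, solves

x_final, solves = run()
```

    The count of optimizations stays well below the number of time steps, which is the communication/computation saving the scheme targets; the constraint-tightening and dual-mode machinery of the paper then guarantees feasibility despite the disturbance.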

  10. Phase diagrams of Janus fluids with up-down constrained orientations

    NASA Astrophysics Data System (ADS)

    Fantoni, Riccardo; Giacometti, Achille; Maestre, Miguel Ángel G.; Santos, Andrés

    2013-11-01

    A class of binary mixtures of Janus fluids formed by colloidal spheres with the hydrophobic hemispheres constrained to point either up or down are studied by means of Gibbs ensemble Monte Carlo simulations and simple analytical approximations. These fluids can be experimentally realized by the application of an external static electrical field. The gas-liquid and demixing phase transitions in five specific models with different patch-patch affinities are analyzed. It is found that a gas-liquid transition is present in all the models, even if only one of the four possible patch-patch interactions is attractive. Moreover, provided the attraction between like particles is stronger than between unlike particles, the system demixes into two subsystems with different composition at sufficiently low temperatures and high densities.

  11. Probing the Milky Way electron density using multi-messenger astronomy

    NASA Astrophysics Data System (ADS)

    Breivik, Katelyn; Larson, Shane

    2015-04-01

    Multi-messenger observations of ultra-compact binaries in both gravitational waves and electromagnetic radiation supply highly complementary information, providing new ways of characterizing the internal dynamics of these systems, as well as new probes of the galaxy itself. Electron density models, used in pulsar distance measurements via the electron dispersion measure, are currently not well constrained. Simultaneous radio and gravitational wave observations of pulsars in binaries provide a method of measuring the average electron density along the line of sight to the pulsar, thus giving a new method for constraining current electron density models. We present this method and assess its viability with simulations of the compact binary component of the Milky Way using the public domain binary evolution code, BSE. This work is supported by NASA Award NNX13AM10G.
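    The arithmetic behind the proposed measurement is compact: the dispersion measure is the line-of-sight electron column, so an independent gravitational-wave distance converts it into an average electron density. All numbers below are invented for illustration.

```python
# Dispersion measure is the line-of-sight electron column, DM = integral
# of n_e dl (pc cm^-3). With an independent distance from the
# gravitational-wave signal, the sight-line-averaged electron density
# follows directly. Values are illustrative, not measurements.
dm = 45.0            # pc cm^-3, from radio pulse dispersion
d_gw = 1500.0        # pc, distance from the gravitational-wave observation
n_e_avg = dm / d_gw  # cm^-3, average electron density along the sight line

# DM itself comes from the arrival-time delay between two observing
# frequencies (f in GHz, delay in ms): dt = 4.149 * DM * (f1^-2 - f2^-2)
f1, f2 = 1.4, 1.6
dt_ms = 4.149 * dm * (f1**-2 - f2**-2)
```

    Repeating this for many binaries along different sight lines is what would constrain the Galactic electron density models mentioned above.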

  12. Constraining the models' response of tropical low clouds to SST forcings using CALIPSO observations

    NASA Astrophysics Data System (ADS)

    Cesana, G.; Del Genio, A. D.; Ackerman, A. S.; Brient, F.; Fridlind, A. M.; Kelley, M.; Elsaesser, G.

    2017-12-01

    Low-cloud response to a warmer climate is still identified as the largest source of uncertainty in the latest generation of climate models. To date there is no consensus among the models on whether tropical low cloudiness would increase or decrease in a warmer climate. In addition, it has been shown that - depending on their climate sensitivity - the models predict either deeper or shallower low clouds. Recently, several relationships between inter-model characteristics of the present-day climate and future climate changes have been highlighted. These so-called emergent constraints aim to target relevant model improvements and to constrain models' projections based on current climate observations. Here we propose to use - for the first time - 10 years of CALIPSO cloud statistics to assess the ability of the models to represent the vertical structure of tropical low clouds for abnormally warm SSTs. We use a simulator approach to compare observations and simulations, and focus on low-layered clouds (i.e. z < 3.2 km) as well as on a more detailed level-by-level perspective of clouds (40 levels from 0 to 19 km). Results show that in most models an increase of the SST leads to a decrease of the low-layer cloud fraction. Vertically, the clouds deepen, namely by decreasing the cloud fraction in the lowest levels and increasing it around the top of the boundary layer. This feature coincides with an increase of the high-level cloud fraction (z > 6.5 km). Although the models' spread is large, the multi-model mean captures the observed variations, but with a smaller amplitude. We then employ the GISS model to investigate how changes in cloud parameterizations affect the response of low clouds to warmer SSTs on the one hand, and how they affect the variations of the model's cloud profiles with respect to environmental parameters on the other hand.
Finally, we use CALIPSO observations to constrain the model by determining i) what set of parameters allows reproducing the observed relationships and ii) what the consequences are for the cloud feedbacks. These results point toward process-oriented constraints of low-cloud responses to surface warming and environmental parameters.

  13. Using Paleo-climate Comparisons to Constrain Future Projections in CMIP5

    NASA Technical Reports Server (NTRS)

    Schmidt, G. A.; Annan, J. D.; Bartlein, P. J.; Cook, B. I.; Guilyardi, E.; Hargreaves, J. C.; Harrison, S. P.; Kageyama, M.; LeGrande, A. N.; Konecky, B.

    2013-01-01

    We present a description of the theoretical framework and best practice for using the paleo-climate model component of the Coupled Model Intercomparison Project (Phase 5) (CMIP5) to constrain future projections of climate using the same models. The constraints arise from measures of skill in hindcasting paleo-climate changes from the present over 3 periods: the Last Glacial Maximum (LGM) (21 thousand years before present, ka), the mid-Holocene (MH) (6 ka) and the Last Millennium (LM) (850-1850 CE). The skill measures may be used to validate robust patterns of climate change across scenarios or to distinguish between models that have differing outcomes in future scenarios. We find that the multi-model ensemble of paleo-simulations is adequate for addressing at least some of these issues. For example, selected benchmarks for the LGM and MH are correlated to the rank of future projections of precipitation/temperature or sea ice extent to indicate that models that produce the best agreement with paleoclimate information give demonstrably different future results than the rest of the models. We also find that some comparisons, for instance associated with model variability, are strongly dependent on uncertain forcing timeseries, or show time dependent behaviour, making direct inferences for the future problematic. Overall, we demonstrate that there is a strong potential for the paleo-climate simulations to help inform the future projections and urge all the modeling groups to complete this subset of the CMIP5 runs.

  14. Climate Change Impacts for the Conterminous USA: An Integrated Assessment Part 7. Economic Analysis of Field Crops and Land Use with Climate Change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sands, Ronald D.; Edmonds, James A.

    PNNL's Agriculture and Land Use (AgLU) model is used to demonstrate the impact of potential changes in climate on agricultural production and land use in the United States. AgLU simulates production of four crop types in several world regions, in 15-year time steps from 1990 to 2095. Changes in yield of major field crops in the United States, for 12 climate scenarios, are obtained from simulations of the EPIC crop growth model. Results from the HUMUS model are used to constrain crop irrigation, and the BIOME3 model is used to simulate productivity of unmanaged ecosystems. Assumptions about changes in agricultural productivity outside the United States are treated on a scenario basis, either responding in the same way as in the United States, or not responding to climate.

  15. Constraining the effects of permeability uncertainty for geologic CO2 sequestration in a basalt reservoir

    NASA Astrophysics Data System (ADS)

    Jayne, R., Jr.; Pollyea, R.

    2016-12-01

    Carbon capture and sequestration (CCS) in geologic reservoirs is one strategy for reducing anthropogenic CO2 emissions from large-scale point-source emitters. Recent developments at the CarbFix CCS pilot in Iceland have shown that basalt reservoirs are highly effective for permanent mineral trapping on the basis of CO2-water-rock interactions, which result in the formation of carbonate minerals. In order to advance our understanding of basalt sequestration in large igneous provinces, this research uses numerical simulation to evaluate the feasibility of industrial-scale CO2 injections in the Columbia River Basalt Group (CRBG). Although bulk reservoir properties are well constrained on the basis of field and laboratory testing from the Wallula Basalt Sequestration Pilot Project, there remains significant uncertainty in the spatial distribution of permeability at the scale of individual basalt flows. Geostatistical analysis of hydrologic data from 540 wells illustrates that CRBG reservoirs are reasonably modeled as layered heterogeneous systems on the basis of basalt flow morphology; however, the regional dataset is insufficient to constrain permeability variability at the scale of an individual basalt flow. As a result, the permeability distribution for this modeling study is established by centering the lognormal permeability distribution of the regional dataset on the bulk permeability measured at the Wallula site, which results in a spatially random permeability distribution within the target reservoir. In order to quantify the effects of this permeability uncertainty, CO2 injections are simulated within 50 equally probable synthetic reservoir domains. Each model domain comprises a three-dimensional geometry with 530,000 grid blocks, and fracture-matrix interaction is simulated as interacting continua for the two low-permeability layers (flow interiors) bounding the injection zone.
Results from this research illustrate that permeability uncertainty at the scale of individual basalt flows may significantly impact both injection pressure accumulation and CO2 distribution.

  16. A probabilistic assessment of calcium carbonate export and dissolution in the modern ocean

    NASA Astrophysics Data System (ADS)

    Battaglia, Gianna; Steinacher, Marco; Joos, Fortunat

    2016-05-01

    The marine cycle of calcium carbonate (CaCO3) is an important element of the carbon cycle and co-governs the distribution of carbon and alkalinity within the ocean. However, CaCO3 export fluxes and mechanisms governing CaCO3 dissolution are highly uncertain. We present an observationally constrained, probabilistic assessment of the global and regional CaCO3 budgets. Parameters governing pelagic CaCO3 export fluxes and dissolution rates are sampled using a Monte Carlo scheme to construct a 1000-member ensemble with the Bern3D ocean model. Ensemble results are constrained by comparing simulated and observation-based fields of excess dissolved calcium carbonate (TA*). The minerals calcite and aragonite are modelled explicitly and ocean-sediment fluxes are considered. For local dissolution rates, either a strong or a weak dependency on CaCO3 saturation is assumed. In addition, there is the option to have saturation-independent dissolution above the saturation horizon. The median (and 68 % confidence interval) of the constrained model ensemble for global biogenic CaCO3 export is 0.90 (0.72-1.05) Gt C yr-1, that is within the lower half of previously published estimates (0.4-1.8 Gt C yr-1). The spatial pattern of CaCO3 export is broadly consistent with earlier assessments. Export is large in the Southern Ocean, the tropical Indo-Pacific, the northern Pacific and relatively small in the Atlantic. The constrained results are robust across a range of diapycnal mixing coefficients and, thus, ocean circulation strengths. Modelled ocean circulation and transport timescales for the different set-ups were further evaluated with CFC11 and radiocarbon observations. Parameters and mechanisms governing dissolution are hardly constrained by either the TA* data or the current compilation of CaCO3 flux measurements such that model realisations with and without saturation-dependent dissolution achieve skill. 
We suggest applying saturation-independent dissolution rates in Earth system models to minimise computational costs.
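    The observationally constrained ensemble logic above can be sketched generically: sample an uncertain parameter, run a toy model per member, keep members compatible with a pseudo-observation, and report the constrained median with a 68% interval. The toy model, parameter range, and "observation" are invented, not Bern3D values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "model": global CaCO3 export as production times an uncertain rain
# ratio (an illustrative stand-in for a Bern3D ensemble member; the
# parameter range, observation, and uncertainty below are invented).
def toy_export(rain_ratio, prod=10.0):
    return prod * rain_ratio

rain_ratio = rng.uniform(0.02, 0.2, 1000)   # 1000-member parameter ensemble
export = toy_export(rain_ratio)             # toy Gt C / yr

# Constrain the ensemble: keep members within 2 sigma of a pseudo-observation
obs, sigma = 0.9, 0.15
constrained = export[np.abs(export - obs) < 2 * sigma]

median = np.median(constrained)
lo, hi = np.percentile(constrained, [16, 84])  # 68% confidence interval
```

    In the study the comparison is against TA* fields rather than a scalar, but the principle is the same: the observation trims the prior parameter ensemble to the subset that achieves skill.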

  17. Penetration of Gold Nanoparticles through Human Skin: Unraveling Its Mechanisms at the Molecular Scale.

    PubMed

    Gupta, Rakesh; Rai, Beena

    2016-07-28

    Recent experimental studies suggest that nanosized gold nanoparticles (AuNPs) are able to penetrate into the deeper layer (epidermis and dermis) of rat and human skin. However, the mechanisms by which these AuNPs penetrate and disrupt the skin's lipid matrix are not well understood. In this study, we have used computer simulations to explore the translocation and the permeation of AuNPs through the model skin lipid membrane using both unconstrained and constrained coarse-grained molecular dynamics simulations. Each AuNP (1-6 nm) disrupted the bilayer packing and entered the interior of the bilayer rapidly (within 100 ns). It created a hydrophobic vacancy in the bilayer, which was mostly filled by skin constituents. Bigger AuNPs induced changes in the bilayer structure, and undulations were observed in the bilayer. The bilayer exhibited self-healing properties; it retained its original form once the simulation was run further after the removal of the AuNPs. Constrained simulation results showed that there was a trade-off between the kinetics and thermodynamics of AuNP permeation at a molecular scale. The combined effect of both resulted in a high permeation of small-sized AuNPs. The molecular-level information obtained through our simulations offers a very convenient method to design novel drug delivery systems and effective cosmetics.
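    The thermodynamic half of the constrained simulations can be sketched via the standard constraint-force route to a free-energy profile: integrate the depth-resolved mean force on the particle across the bilayer. The force profile below is synthetic, an invented stand-in for simulation output, not the paper's data.

```python
import numpy as np

# In constrained simulations, the potential of mean force G(z) across the
# bilayer follows from the mean force <F(z)> on the constrained particle:
# G(z) = -integral of <F> from bulk water to z.
z = np.linspace(-3.0, 3.0, 61)           # nm along the bilayer normal
mean_force = 2.0 * z * np.exp(-z**2)     # toy <F(z)>, kJ/mol/nm (synthetic)

# Cumulative trapezoid integration of -<F>, zeroed in bulk water at z = -3
segments = 0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(z)
g = -np.concatenate(([0.0], np.cumsum(segments)))
barrier = g.max()                        # free-energy barrier at the center
```

    The resulting barrier height at the bilayer center is the thermodynamic cost of permeation; combined with kinetics (how fast a particle crosses), it gives the trade-off the abstract describes for small versus large AuNPs.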

  18. Precipitation and runoff simulations of select perennial and ephemeral watersheds in the middle Carson River basin, Eagle, Dayton, and Churchill Valleys, west-central Nevada

    USGS Publications Warehouse

    Jeton, Anne E.; Maurer, Douglas K.

    2011-01-01

    The effect that land use may have on streamflow in the Carson River, and ultimately its impact on downstream users, can be evaluated by simulating precipitation-runoff processes and estimating groundwater inflow in the middle Carson River basin in west-central Nevada. To address these concerns, the U.S. Geological Survey, in cooperation with the Bureau of Reclamation, began a study in 2008 to evaluate groundwater flow in the Carson River basin extending from Eagle Valley to Churchill Valley, called the middle Carson River basin in this report. This report documents the development and calibration of 12 watershed models and presents model results and the estimated mean annual water budgets for the modeled watersheds. This part of the larger middle Carson River study will provide estimates of runoff tributary to the Carson River and of the potential for groundwater inflow (defined here as that component of recharge derived from percolation of excess water from the soil zone to the groundwater reservoir). The model used for the study was the U.S. Geological Survey's Precipitation-Runoff Modeling System, a physically based, distributed-parameter model designed to simulate precipitation and snowmelt runoff as well as snowpack accumulation and snowmelt processes. Models were developed for 2 perennial watersheds in Eagle Valley having gaged daily mean runoff, Ash Canyon Creek and Clear Creek, and for 10 ephemeral watersheds in the Dayton Valley and Churchill Valley hydrologic areas. Model calibration was constrained by daily mean runoff for the 2 perennial watersheds, and for the 10 ephemeral watersheds by limited indirect runoff estimates and by mean annual runoff estimates derived from empirical methods. The models were further constrained by limited climate data adjusted for altitude differences using annual precipitation volumes estimated in a previous study. The calibration periods were water years 1980-2007 for Ash Canyon Creek and water years 1991-2007 for Clear Creek.
To allow for water budget comparisons to the ephemeral models, the two perennial models were then run from 1980 to 2007, the time period constrained somewhat by the later record for the high-altitude climate station used in the simulation. The daily mean values of precipitation, runoff, evapotranspiration, and groundwater inflow simulated from the watershed models were summed to provide mean annual rates and volumes derived from each year of the simulation. Mean annual bias for the calibration period for Ash Canyon Creek and Clear Creek watersheds was within 6 and 3 percent, and relative errors were about 18 and -2 percent, respectively. For the 1980-2007 period of record, mean recharge efficiency and runoff efficiency (percentage of precipitation as groundwater inflow and runoff) averaged 7 and 39 percent, respectively, for Ash Canyon Creek, and 8 and 31 percent, respectively, for Clear Creek. For this same period, groundwater inflow volumes averaged about 500 acre-feet for Ash Canyon and 1,200 acre-feet for Clear Creek. The simulation period for the ephemeral watersheds ranged from water years 1978 to 2007. Mean annual simulated precipitation ranged from 6 to 11 inches. Estimates of recharge efficiency for the ephemeral watersheds ranged from 3 percent for Eureka Canyon to 7 percent for Eldorado Canyon. Runoff efficiency ranged from 7 percent for Eureka Canyon to 15 percent for Brunswick Canyon. For the 1978-2007 period, mean annual groundwater inflow volumes ranged from about 40 acre-feet for Eureka Canyon to just under 5,000 acre-feet for Churchill Canyon watershed. Watershed model results indicate significant interannual variability in the volumes of groundwater inflow caused by climate variations. For most of the modeled watersheds, little to no groundwater inflow was simulated for years with less than 8 inches of precipitation, unless those years were preceded by abnormally high precipitation years with significant subsurface storage carryover.
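    The recharge and runoff efficiencies reported above are simple ratios of annual water-budget components to precipitation. A minimal sketch in Python (the volumes below are hypothetical, chosen only to mirror the Ash Canyon Creek percentages):

```python
def efficiency(component_volume, precip_volume):
    """Percentage of annual precipitation leaving the watershed as a given
    water-budget component (runoff or groundwater inflow)."""
    return 100.0 * component_volume / precip_volume

# Hypothetical annual volumes, in acre-feet, for illustration only
precip = 10_000.0
runoff = 3_900.0      # simulated annual runoff
gw_inflow = 700.0     # percolation from the soil zone to the groundwater reservoir

runoff_eff = efficiency(runoff, precip)       # 39 percent
recharge_eff = efficiency(gw_inflow, precip)  # 7 percent
```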

  19. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
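    The LSQI subproblem named above has the generic form: minimize ||Ax - b|| subject to ||x|| <= alpha. A minimal sketch of one standard solution strategy, bisecting on a Tikhonov regularization parameter until the bound is met (this is a generic solver, not the paper's flight-dynamics implementation):

```python
import numpy as np

def lsqi(A, b, alpha, iters=60):
    """Solve min ||A x - b||_2 subject to ||x||_2 <= alpha."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    if np.linalg.norm(x) <= alpha:
        return x  # constraint inactive: ordinary least squares suffices
    # Constraint active: bisect on the Tikhonov parameter lam until the
    # regularized solution lands on the boundary ||x|| = alpha.
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    lo, hi = 0.0, 1.0
    while np.linalg.norm(np.linalg.solve(AtA + hi * np.eye(n), Atb)) > alpha:
        hi *= 2.0
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if np.linalg.norm(np.linalg.solve(AtA + lam * np.eye(n), Atb)) > alpha:
            lo = lam
        else:
            hi = lam
    return np.linalg.solve(AtA + hi * np.eye(n), Atb)  # feasible side
```

    When the unconstrained solution already satisfies the bound, no regularization is applied; otherwise the optimum lies on the boundary ||x|| = alpha.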

  20. The Hydrological Sensitivity to Global Warming and Solar Geoengineering Derived from Thermodynamic Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleidon, Alex; Kravitz, Benjamin S.; Renner, Maik

    2015-01-16

    We derive analytic expressions of the transient response of the hydrological cycle to surface warming from an extremely simple energy balance model in which turbulent heat fluxes are constrained by the thermodynamic limit of maximum power. For a given magnitude of steady-state temperature change, this approach predicts the transient response as well as the steady-state change in surface energy partitioning and the hydrologic cycle. We show that the transient behavior of the simple model as well as the steady state hydrological sensitivities to greenhouse warming and solar geoengineering are comparable to results from simulations using highly complex models. Many of the global-scale hydrological cycle changes can be understood from a surface energy balance perspective, and our thermodynamically-constrained approach provides a physically robust way of estimating global hydrological changes in response to altered radiative forcing.

  1. Signal decomposition for surrogate modeling of a constrained ultrasonic design space

    NASA Astrophysics Data System (ADS)

    Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.

    2018-04-01

    The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model used to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model from a subset of ultrasonic scans simulated with a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
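    A Gaussian chirplet atom is described by a handful of parameters (centre time, envelope width, centre frequency, chirp rate, amplitude), so each simulated B-scan reduces to a short parameter list and interpolation can happen in that space rather than on raw waveforms. A minimal sketch of a real-valued atom (this parameterization is one common convention, not necessarily the authors' exact form):

```python
import numpy as np

def chirplet(t, tc, sigma, f0, c, amp=1.0):
    # Real-valued Gaussian chirplet atom: a Gaussian envelope centred at
    # time tc (width sigma) modulating a tone of centre frequency f0 whose
    # instantaneous frequency sweeps linearly at chirp rate c.
    tau = t - tc
    return amp * np.exp(-0.5 * (tau / sigma) ** 2) * np.cos(
        2.0 * np.pi * (f0 * tau + 0.5 * c * tau ** 2))
```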

  2. Pumping strategies for management of a shallow water table: The value of the simulation-optimization approach

    USGS Publications Warehouse

    Barlow, P.M.; Wagner, B.J.; Belitz, K.

    1996-01-01

    The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.

  3. Application and Evaluation of a Snowmelt Runoff Model in the Tamor River Basin, Eastern Himalaya Using a Markov Chain Monte Carlo (MCMC) Data Assimilation Approach

    NASA Technical Reports Server (NTRS)

    Panday, Prajjwal K.; Williams, Christopher A.; Frey, Karen E.; Brown, Molly E.

    2013-01-01

    Previous studies have drawn attention to substantial hydrological changes taking place in mountainous watersheds where hydrology is dominated by cryospheric processes. Modelling is an important tool for understanding these changes but is particularly challenging in mountainous terrain owing to scarcity of ground observations and uncertainty of model parameters across space and time. This study utilizes a Markov Chain Monte Carlo data assimilation approach to examine and evaluate the performance of a conceptual, degree-day snowmelt runoff model applied in the Tamor River basin in the eastern Nepalese Himalaya. The snowmelt runoff model is calibrated using daily streamflow from 2002 to 2006 with fairly high accuracy (average Nash-Sutcliffe metric approx. 0.84, annual volume bias <3%). The Markov Chain Monte Carlo approach constrains the parameters to which the model is most sensitive (e.g. lapse rate and recession coefficient) and maximizes model fit and performance. Model simulated streamflow using an interpolated precipitation data set decreases the fractional contribution from rainfall versus snowmelt compared with simulations using observed station precipitation. The average snowmelt contribution to total runoff in the Tamor River basin for the 2002-2006 period is estimated to be 29.7+/-2.9% (which includes 4.2+/-0.9% from snowfall that promptly melts), whereas 70.3+/-2.6% is attributed to contributions from rainfall. On average, the elevation zone in the 4000-5500m range contributes the most to basin runoff, averaging 56.9+/-3.6% of all snowmelt input and 28.9+/-1.1% of all rainfall input to runoff.
Model experiments indicate that the hydrograph itself does not constrain estimates of snowmelt versus rainfall contributions to total outflow but that this derives from the degree-day melting model. Lastly, we demonstrate that the data assimilation approach is useful for quantifying and reducing uncertainty related to model parameters and thus provides uncertainty bounds on snowmelt and rainfall contributions in such mountainous watersheds.
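    The degree-day melt rule at the core of such snowmelt runoff models can be sketched in a few lines (the degree-day factor and temperature threshold below are illustrative defaults, not the calibrated Tamor basin values):

```python
def degree_day_melt(daily_temp_c, ddf=4.0, t_crit=0.0):
    """Daily snowmelt depth (mm/day) from a conceptual degree-day model:
    melt = ddf * max(T - t_crit, 0). The degree-day factor ddf
    (mm per deg C per day) is an assumed, calibratable value."""
    return [ddf * max(t - t_crit, 0.0) for t in daily_temp_c]
```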

  4. Double quick, double click reversible peptide "stapling".

    PubMed

    Grison, Claire M; Burslem, George M; Miles, Jennifer A; Pilsl, Ludwig K A; Yeo, David J; Imani, Zeynab; Warriner, Stuart L; Webb, Michael E; Wilson, Andrew J

    2017-07-01

    The development of constrained peptides for inhibition of protein-protein interactions is an emerging strategy in chemical biology and drug discovery. This manuscript introduces a versatile, rapid and reversible approach to constrain peptides in a bioactive helical conformation using BID and RNase S peptides as models. Dibromomaleimide is used to constrain BID and RNase S peptide sequence variants bearing cysteine (Cys) or homocysteine (hCys) amino acids spaced at i and i + 4 positions by double substitution. The constraint can be readily removed by displacement of the maleimide using excess thiol. This new constraining methodology results in enhanced α-helical conformation (BID and RNase S peptide) as demonstrated by circular dichroism and molecular dynamics simulations, resistance to proteolysis (BID) as demonstrated by trypsin proteolysis experiments and retained or enhanced potency of inhibition for Bcl-2 family protein-protein interactions (BID), or greater capability to restore the hydrolytic activity of the RNase S protein (RNase S peptide). Finally, use of a dibromomaleimide functionalized with an alkyne permits further divergent functionalization through alkyne-azide cycloaddition chemistry on the constrained peptide with fluorescein, oligoethylene glycol or biotin groups to facilitate biophysical and cellular analyses. Hence this methodology may extend the scope and accessibility of peptide stapling.

  5. Improved Parameter-Estimation With MRI-Constrained PET Kinetic Modeling: A Simulation Study

    NASA Astrophysics Data System (ADS)

    Erlandsson, Kjell; Liljeroth, Maria; Atkinson, David; Arridge, Simon; Ourselin, Sebastien; Hutton, Brian F.

    2016-10-01

    Kinetic analysis can be applied both to dynamic PET and dynamic contrast enhanced (DCE) MRI data. We have investigated the potential of MRI-constrained PET kinetic modeling using simulated [18F]2-FDG data for skeletal muscle. The volume of distribution, Ve, for the extra-vascular extra-cellular space (EES) is the link between the two models: It can be estimated by DCE-MRI, and then used to reduce the number of parameters to estimate in the PET model. We used a 3-tissue-compartment model with 5 rate constants (3TC5k), in order to distinguish between EES and the intra-cellular space (ICS). Time-activity curves were generated by simulation using the 3TC5k model for 3 different Ve values under basal and insulin stimulated conditions. Noise was added and the data were fitted with the 2TC3k model and with the 3TC5k model with and without Ve constraint. One hundred noise-realisations were generated at 4 different noise-levels. The results showed reductions in bias and variance with Ve constraint in the 3TC5k model. We calculated the parameter k3", representing the combined effect of glucose transport across the cellular membrane and phosphorylation, as an extra outcome measure. For k3", the average coefficient of variation was reduced from 52% to 9.7%, while for k3 in the standard 2TC3k model it was 3.4%. The accuracy of the parameters estimated with our new modeling approach depends on the accuracy of the assumed Ve value. In conclusion, we have shown that, by utilising information that could be obtained from DCE-MRI in the kinetic analysis of [18F]2-FDG-PET data, it is in principle possible to obtain better parameter estimates with a more complex model, which may provide additional information as compared to the standard model.
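    For context, the standard irreversible two-tissue-compartment model (2TC3k) used here as the reference can be integrated with a simple forward-Euler scheme. A minimal sketch (the plasma input cp, parameter values, and step size are illustrative, not the simulated-study settings):

```python
import numpy as np

def simulate_2tc3k(cp, dt, K1, k2, k3):
    # Forward-Euler integration of the irreversible two-tissue-compartment
    # model (2TC3k):
    #   dC1/dt = K1*Cp - (k2 + k3)*C1   (free tracer)
    #   dC2/dt = k3*C1                  (trapped, e.g. phosphorylated, tracer)
    c1 = c2 = 0.0
    tissue = []
    for cp_t in cp:
        dc1 = K1 * cp_t - (k2 + k3) * c1
        dc2 = k3 * c1
        c1 += dt * dc1
        c2 += dt * dc2
        tissue.append(c1 + c2)
    return np.array(tissue)
```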

  6. Predictability of Subsurface Temperature and the AMOC

    NASA Astrophysics Data System (ADS)

    Chang, Y.; Schubert, S. D.

    2013-12-01

    The GEOS-5 coupled model is extensively used for experimental decadal climate prediction. Understanding the limits of decadal ocean predictability is critical for making progress in these efforts. Using this model, we study the initial-value predictability of subsurface temperature, the variability of the Atlantic meridional overturning circulation (AMOC), and its impacts on the global climate. Our approach is to utilize the idealized data assimilation technology developed at the GMAO. The 'replay' technique allows us to assess, for example, the impact of the surface wind stresses and/or precipitation on the ocean in a very well controlled environment. By running the coupled model in replay mode we can in fact constrain the model using any existing reanalysis data set. We replay the model, constraining (nudging) it to the MERRA reanalysis in various fields from 1948-2012. The atmospheric fields u, v, T, q, and ps are adjusted towards the 6-hourly analyzed fields. The simulated AMOC variability is studied with a 400-year-long segment of the replay integration. The 84 cases of 10-year hindcasts are initialized from 4 different replay cycles. The variability and predictability are examined further with a measure quantifying how much the subsurface temperature and AMOC variability have been influenced by atmospheric forcing and by ocean internal variability. The simulated impact of the AMOC on the multi-decadal variability of the SST, sea surface height (SSH) and sea ice extent is also studied.

  7. Reducing uncertainty for estimating forest carbon stocks and dynamics using integrated remote sensing, forest inventory and process-based modeling

    NASA Astrophysics Data System (ADS)

    Poulter, B.; Ciais, P.; Joetzjer, E.; Maignan, F.; Luyssaert, S.; Barichivich, J.

    2015-12-01

    Accurately estimating forest biomass and forest carbon dynamics requires new integrated remote sensing, forest inventory, and carbon cycle modeling approaches. Presently, there is an increasing and urgent need to reduce forest biomass uncertainty in order to meet the requirements of carbon mitigation treaties, such as Reducing Emissions from Deforestation and forest Degradation (REDD+). Here we describe a new parameterization and assimilation methodology used to estimate tropical forest biomass using the ORCHIDEE-CAN dynamic global vegetation model. ORCHIDEE-CAN simulates carbon uptake and allocation to individual trees using a mechanistic representation of photosynthesis, respiration and other first-order processes. The model is first parameterized using forest inventory data to constrain background mortality rates, i.e., self-thinning, and productivity. Satellite remote sensing data for forest structure, i.e., canopy height, is used to constrain simulated forest stand conditions using a look-up table approach to match canopy height distributions. The resulting forest biomass estimates are provided for spatial grids that match REDD+ project boundaries and aim to provide carbon estimates for the criteria described in the IPCC Good Practice Guidelines Tier 3 category. With the increasing availability of forest structure variables derived from high-resolution LIDAR, RADAR, and optical imagery, new methodologies and applications with process-based carbon cycle models are becoming more readily available to inform land management.

  8. Patterns of deoxygenation: sensitivity to natural and anthropogenic drivers

    NASA Astrophysics Data System (ADS)

    Oschlies, Andreas; Duteil, Olaf; Getzlaff, Julia; Koeve, Wolfgang; Landolfi, Angela; Schmidtko, Sunke

    2017-08-01

    Observational estimates and numerical models both indicate a significant overall decline in marine oxygen levels over the past few decades. Spatial patterns of oxygen change, however, differ considerably between observed and modelled estimates. Particularly in the tropical thermocline that hosts open-ocean oxygen minimum zones, observations indicate a general oxygen decline, whereas most of the state-of-the-art models simulate increasing oxygen levels. Possible reasons for the apparent model-data discrepancies are examined. In order to attribute observed historical variations in oxygen levels, we here study mechanisms of changes in oxygen supply and consumption with sensitivity model simulations. Specifically, the role of equatorial jets, of lateral and diapycnal mixing processes, of changes in the wind-driven circulation and atmospheric nutrient supply, and of some poorly constrained biogeochemical processes are investigated. Predominantly wind-driven changes in the low-latitude oceanic ventilation are identified as a possible factor contributing to observed oxygen changes in the low-latitude thermocline during the past decades, while the potential role of biogeochemical processes remains difficult to constrain. We discuss implications for the attribution of observed oxygen changes to anthropogenic impacts and research priorities that may help to improve our mechanistic understanding of oxygen changes and the quality of projections into a changing future. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'.

  9. Reducing usage of the computational resources by event driven approach to model predictive control

    NASA Astrophysics Data System (ADS)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time optimal control of dynamic systems while also considering the constraints to which these systems may be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented and the performance of the proposed method is evaluated in a simulated environment.

  10. Deduction as Stochastic Simulation

    DTIC Science & Technology

    2013-07-01

    different tokens representing entities that it contains. The second parameter constrains the contents of a model, and in particular the different...of premises. In summary, the system manipulates stochastically the size, the contents, and the revisions of models. We now describe in detail each... [figure residue: probability distributions for λ = 4 and λ = 5; panel caption "The contents of a mental model (parameter ε)"] The second component

  11. Evaluating Micrometeorological Estimates of Groundwater Discharge from Great Basin Desert Playas.

    PubMed

    Jackson, Tracie R; Halford, Keith J; Gardner, Philip M

    2018-03-06

    Groundwater availability studies in the arid southwestern United States traditionally have assumed that groundwater discharge by evapotranspiration (ETg) from desert playas is a significant component of the groundwater budget. However, desert playa ETg rates are poorly constrained by Bowen ratio energy budget (BREB) and eddy-covariance (EC) micrometeorological measurement approaches. Best attempts by previous studies to constrain ETg from desert playas have resulted in ETg rates that are within the measurement error of micrometeorological approaches. This study uses numerical models to further constrain desert playa ETg rates that are within the measurement error of BREB and EC approaches, and to evaluate the effect of hydraulic properties and salinity-based groundwater density contrasts on desert playa ETg rates. Numerical models simulated ETg rates from desert playas in Death Valley, California and Dixie Valley, Nevada. Results indicate that actual ETg rates from desert playas are significantly below the uncertainty thresholds of BREB- and EC-based micrometeorological measurements. Discharge from desert playas likely contributes less than 2% of total groundwater discharge from Dixie and Death Valleys, which suggests discharge from desert playas also is negligible in other basins. Simulation results also show that ETg from desert playas primarily is limited by differences in hydraulic properties between alluvial fan and playa sediments and, to a lesser extent, by salinity-based groundwater density contrasts. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.

  12. Constraining early and interacting dark energy with gravitational wave standard sirens: the potential of the eLISA mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caprini, Chiara; Tamanini, Nicola, E-mail: chiara.caprini@cea.fr, E-mail: nicola.tamanini@cea.fr

    We perform a forecast analysis of the capability of the eLISA space-based interferometer to constrain models of early and interacting dark energy using gravitational wave standard sirens. We employ simulated catalogues of standard sirens given by merging massive black hole binaries visible by eLISA, with an electromagnetic counterpart detectable by future telescopes. We consider three-arm mission designs with arm lengths of 1, 2 and 5 million km, 5 years of mission duration and the best-level low frequency noise as recently tested by the LISA Pathfinder. Standard sirens with eLISA give access to an intermediate redshift range 1 ≲ z ≲ 8, and can therefore provide competitive constraints on models where the onset of the deviation from ΛCDM (i.e. the epoch when early dark energy starts to be non-negligible, or when the interaction with dark matter begins) occurs relatively late, at z ≲ 6. If instead early or interacting dark energy is relevant already in the pre-recombination era, current cosmological probes (especially the cosmic microwave background) are more efficient than eLISA in constraining these models, except possibly in the interacting dark energy model if the energy exchange is proportional to the energy density of dark energy.

  13. Steering of Upper Ocean Currents and Fronts by the Topographically Constrained Abyssal Circulation

    DTIC Science & Technology

    2008-07-06

    a) Mean surface dynamic height relative to 1000 m from version 2.5 of the Generalized Digital Environmental Model (GDEM) oceanic climatology, an... NLOM simulations in comparison to the mean surface dynamic height with respect to 1000 m from the Generalized Digital Environmental Model (GDEM)... the Kuroshio pathway east of Japan, giving much better agreement with the pathway in the GDEM climatology. Dynamics of the topographic impact on

  14. Simulating carbon and water fluxes at Arctic and boreal ecosystems in Alaska by optimizing the modified BIOME-BGC with eddy covariance data

    NASA Astrophysics Data System (ADS)

    Ueyama, M.; Kondo, M.; Ichii, K.; Iwata, H.; Euskirchen, E. S.; Zona, D.; Rocha, A. V.; Harazono, Y.; Nakai, T.; Oechel, W. C.

    2013-12-01

    To better predict carbon and water cycles in Arctic ecosystems, we modified a process-based ecosystem model, BIOME-BGC, by introducing new processes: change in active layer depth on permafrost and phenology of tundra vegetation. The modified BIOME-BGC was then calibrated by parameter optimization. The model was constrained using gross primary productivity (GPP) and net ecosystem exchange (NEE) at 23 eddy covariance sites in Alaska, and vegetation/soil carbon from a literature survey. The model was used to simulate regional carbon and water fluxes of Alaska from 1900 to 2011. Simulated regional fluxes were validated with upscaled GPP, ecosystem respiration (RE), and NEE based on two methods: (1) a machine learning technique and (2) a top-down model. Our initial simulation suggests that the original BIOME-BGC with default ecophysiological parameters substantially underestimated GPP and RE for tundra and overestimated those fluxes for boreal forests, suggesting that the incorporation of the active layer depth and plant phenology processes is important when simulating carbon and water fluxes in Arctic ecosystems. We will discuss how optimization using the eddy covariance data impacts the historical simulation by comparing the new version of the model with simulated results from the original BIOME-BGC with default ecophysiological parameters.

  15. Visualization in Mechanics: The Dynamics of an Unbalanced Roller

    ERIC Educational Resources Information Center

    Cumber, Peter S.

    2017-01-01

    It is well known that mechanical engineering students often find mechanics a difficult area to grasp. This article describes a system of equations describing the motion of a balanced and an unbalanced roller constrained by a pivot arm. A wide range of dynamics can be simulated with the model. The equations of motion are embedded in a graphical…

  16. A RSSI-based parameter tracking strategy for constrained position localization

    NASA Astrophysics Data System (ADS)

    Du, Jinze; Diouris, Jean-François; Wang, Yide

    2017-12-01

    In this paper, a received signal strength indicator (RSSI)-based parameter tracking strategy for constrained position localization is proposed. To estimate channel model parameters, the least mean squares (LMS) method is combined with the trilateration method. In the context of applications where the positions are constrained on a grid, a novel tracking strategy is proposed to determine the real position and obtain the actual parameters in the monitored region. Based on practical data acquired from a real localization system, an experimental channel model is constructed to provide RSSI values and verify the proposed tracking strategy. Quantitative criteria are given to guarantee the efficiency of the proposed tracking strategy by providing a trade-off between the grid resolution and parameter variation. The simulation results show good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with the existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more calculation time. In addition, a tracking test is performed to validate the effectiveness of the proposed tracking strategy.
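    RSSI-based localization of this kind typically converts received power to range with a log-distance path-loss model and then trilaterates against known anchors. A generic sketch (the reference power rssi0 and path-loss exponent n are assumed values, and the linearized least-squares solver stands in for the paper's LMS-based parameter tracking):

```python
import numpy as np

def rssi_to_distance(rssi, rssi0=-40.0, n=2.0, d0=1.0):
    # Log-distance path-loss model: rssi = rssi0 - 10*n*log10(d/d0).
    # rssi0 is the power at reference distance d0; n is the path-loss exponent.
    return d0 * 10.0 ** ((rssi0 - rssi) / (10.0 * n))

def trilaterate(anchors, dists):
    # Linearized least-squares trilateration in 2D: subtracting the first
    # range equation from the others removes the quadratic terms.
    x0, y0 = anchors[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
        b.append(dists[0] ** 2 - di ** 2 + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
    return np.linalg.lstsq(np.array(A), np.array(b), rcond=None)[0]
```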

  17. Fitting and forecasting coupled dark energy in the non-linear regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casas, Santiago; Amendola, Luca; Pettorino, Valeria

    2016-01-01

    We consider cosmological models in which dark matter feels a fifth force mediated by the dark energy scalar field, also known as coupled dark energy. Our interest resides in estimating forecasts for future surveys like Euclid when we take into account non-linear effects, relying on new fitting functions that reproduce the non-linear matter power spectrum obtained from N-body simulations. We obtain fitting functions for models in which the dark matter-dark energy coupling is constant. Their validity is demonstrated for all available simulations in the redshift range z = 0-1.6 and for wave modes below k = 1 h/Mpc. These fitting formulas can be used to test the predictions of the model in the non-linear regime without the need for additional computing-intensive N-body simulations. We then use these fitting functions to perform forecasts on the constraining power that future galaxy-redshift surveys like Euclid will have on the coupling parameter, using the Fisher matrix method for galaxy clustering (GC) and weak lensing (WL). We find that by using information in the non-linear power spectrum, and combining the GC and WL probes, we can constrain the dark matter-dark energy coupling constant squared, β², with precision smaller than 4% and all other cosmological parameters better than 1%, which is a considerable improvement of more than an order of magnitude compared to corresponding linear power spectrum forecasts with the same survey specifications.
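    For Gaussian errors, a Fisher matrix forecast of the kind described reduces to summing products of parameter derivatives of the observable, with marginalized 1-sigma errors read off the inverse matrix. A generic sketch (not the Euclid GC/WL pipeline):

```python
import numpy as np

def fisher_matrix(dmu_dp, sigma):
    # dmu_dp: (n_data, n_params) derivatives of the observable w.r.t. parameters
    # sigma: (n_data,) Gaussian errors
    # F_ab = sum_k (dmu_k/dp_a)(dmu_k/dp_b) / sigma_k^2
    w = 1.0 / sigma ** 2
    return dmu_dp.T @ (dmu_dp * w[:, None])

def marginalized_errors(F):
    # 1-sigma marginalized uncertainties: sqrt of the diagonal of F^{-1}.
    return np.sqrt(np.diag(np.linalg.inv(F)))
```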

  18. Recent Changes in Global Photosynthesis and Terrestrial Ecosystem Respiration Constrained From Multiple Observations

    NASA Astrophysics Data System (ADS)

    Li, Wei; Ciais, Philippe; Wang, Yilong; Yin, Yi; Peng, Shushi; Zhu, Zaichun; Bastos, Ana; Yue, Chao; Ballantyne, Ashley P.; Broquet, Grégoire; Canadell, Josep G.; Cescatti, Alessandro; Chen, Chi; Cooper, Leila; Friedlingstein, Pierre; Le Quéré, Corinne; Myneni, Ranga B.; Piao, Shilong

    2018-01-01

    To assess global carbon cycle variability, we decompose the net land carbon sink into the sum of gross primary productivity (GPP), terrestrial ecosystem respiration (TER), and fire emissions and apply a Bayesian framework to constrain these fluxes between 1980 and 2014. The constrained GPP and TER fluxes show an increasing trend of only half of the prior trend simulated by models. From the optimization, we infer that TER increased in parallel with GPP from 1980 to 1990, but then stalled during the cooler periods, in 1990-1994 coincident with the Pinatubo eruption, and during the recent warming hiatus period. After each of these TER stalling periods, TER is found to increase faster than GPP, explaining a relative reduction of the net land sink. These results shed light on decadal variations of GPP and TER and suggest that they exhibit different responses to temperature anomalies over the last 35 years.

  19. Simulations of Ground Motion in Southern California based upon the Spectral-Element Method

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Komatitsch, D.; Liu, Q.

    2003-12-01

    We use the spectral-element method to simulate ground motion generated by recent well-recorded small earthquakes in Southern California. Simulations are performed using a new sedimentary basin model that is constrained by hundreds of petroleum industry well logs and more than twenty thousand kilometers of seismic reflection profiles. The numerical simulations account for 3D variations of seismic wave speeds and density, topography and bathymetry, and attenuation. Simulations for several small recent events demonstrate that the combination of a detailed sedimentary basin model and an accurate numerical technique facilitates the simulation of ground motion at periods of 2 seconds and longer inside the Los Angeles basin and 6 seconds and longer elsewhere. Peak ground displacement, velocity and acceleration maps illustrate that significant amplification occurs in the basin. Centroid-Moment Tensor mechanisms are obtained based upon Pnl and surface waveforms and numerically calculated 3D Frechet derivatives. We use a combination of waveform and waveform-envelope misfit criteria, and facilitate pure double-couple or zero-trace moment-tensor inversions.

  20. Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.

    PubMed

    Lee, Won Hee; Bullmore, Ed; Frangou, Sophia

    2017-02-01

    There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs.
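    The simulation setup above can be sketched as a network of coupled phase oscillators whose output is scored against an empirical graph or synchrony measure by relative error. The connectivity matrix, frequencies, and the "empirical" reference value below are illustrative placeholders, not the study's data (which used a 66-region structural matrix).

```python
import math

def kuramoto_step(theta, omega, K, A, dt):
    """One forward-Euler step of the Kuramoto model: each node i is a
    phase oscillator coupled to its structural neighbours A[i][j]."""
    n = len(theta)
    return [theta[i] + dt * (omega[i] + K * sum(
                A[i][j] * math.sin(theta[j] - theta[i]) for j in range(n)))
            for i in range(n)]

def order_parameter(theta):
    """Global synchrony r in [0, 1]; r = 1 means full phase locking."""
    n = len(theta)
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

# Toy 4-node "structural connectivity" (symmetric, unweighted), standing in
# for the study's 66-region matrix.
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
omega = [1.0, 1.1, 0.9, 1.05]   # natural frequencies
theta = [0.0, 1.0, 2.0, 3.0]    # initial phases
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0, A=A, dt=0.01)
r = order_parameter(theta)

# Relative error between an "empirical" and the simulated value of a network
# measure, as used to score model fidelity; 0.45 is a made-up reference.
rel_err = abs(0.45 - r) / 0.45
```

    With strong coupling relative to the frequency spread, the toy network phase-locks and r approaches 1; sweeping K (or the connection density) and tracking the relative error mirrors the comparison performed in the study.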

  1. Mapping the function of neuronal ion channels in model and experiment

    PubMed Central

    Podlaski, William F; Seeholzer, Alexander; Groschner, Lukas N; Miesenböck, Gero; Ranjan, Rajnish; Vogels, Tim P

    2017-01-01

    Ion channel models are the building blocks of computational neuron models. Their biological fidelity is therefore crucial for the interpretation of simulations. However, the number of published models, and the lack of standardization, make the comparison of ion channel models with one another and with experimental data difficult. Here, we present a framework for the automated large-scale classification of ion channel models. Using annotated metadata and responses to a set of voltage-clamp protocols, we assigned 2378 models of voltage- and calcium-gated ion channels coded in NEURON to 211 clusters. The IonChannelGenealogy (ICGenealogy) web interface provides an interactive resource for the categorization of new and existing models and experimental recordings. It enables quantitative comparisons of simulated and/or measured ion channel kinetics, and facilitates field-wide standardization of experimentally-constrained modeling. DOI: http://dx.doi.org/10.7554/eLife.22152.001 PMID:28267430

  2. Constraining the low-cloud optical depth feedback at middle and high latitudes using satellite observations

    DOE PAGES

    Terai, C. R.; Klein, S. A.; Zelinka, M. D.

    2016-08-26

    The increase in cloud optical depth with warming at middle and high latitudes is a robust cloud feedback response found across all climate models. This study builds on results that suggest the optical depth response to temperature is timescale invariant for low-level clouds. The timescale invariance allows one to use satellite observations to constrain the models' optical depth feedbacks. Three passive-sensor satellite retrievals are compared against simulations from eight models from the Atmosphere Model Intercomparison Project (AMIP) of the 5th Coupled Model Intercomparison Project (CMIP5). This study confirms that the low-cloud optical depth response is timescale invariant in the AMIP simulations, generally at latitudes higher than 40°. Compared to satellite estimates, most models overestimate the increase in optical depth with warming at the monthly and interannual timescales. Many models also do not capture the increase in optical depth with estimated inversion strength that is found in all three satellite observations and in previous studies. The discrepancy between models and satellites exists in both hemispheres and in most months of the year. A simple replacement of the models' optical depth sensitivities with the satellites' sensitivities reduces the negative shortwave cloud feedback by at least 50% in the 40°–70°S latitude band and by at least 65% in the 40°–70°N latitude band. Furthermore, based on this analysis of satellite observations, we conclude that the low-cloud optical depth feedback at middle and high latitudes is likely too negative in climate models.
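    The sensitivity-replacement step reduces to a linear rescaling of the feedback. A minimal sketch, with illustrative numbers rather than the study's values:

```python
def scaled_feedback(model_feedback, model_sens, satellite_sens):
    """Rescale a model's shortwave cloud feedback by replacing its cloud
    optical depth sensitivity (d ln(tau)/dT) with the satellite-derived
    value, assuming the feedback scales linearly with that sensitivity."""
    return model_feedback * (satellite_sens / model_sens)

# A model whose sensitivity is twice the observed value has its negative
# feedback halved, consistent in spirit with the >=50% reduction reported
# for the 40-70 deg S band.  Units: W m-2 K-1 for the feedback.
adjusted = scaled_feedback(-1.0, 0.2, 0.1)
```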

  3. Constraining the low-cloud optical depth feedback at middle and high latitudes using satellite observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terai, C. R.; Klein, S. A.; Zelinka, M. D.

    The increase in cloud optical depth with warming at middle and high latitudes is a robust cloud feedback response found across all climate models. This study builds on results that suggest the optical depth response to temperature is timescale invariant for low-level clouds. The timescale invariance allows one to use satellite observations to constrain the models' optical depth feedbacks. Three passive-sensor satellite retrievals are compared against simulations from eight models from the Atmosphere Model Intercomparison Project (AMIP) of the 5th Coupled Model Intercomparison Project (CMIP5). This study confirms that the low-cloud optical depth response is timescale invariant in the AMIP simulations, generally at latitudes higher than 40°. Compared to satellite estimates, most models overestimate the increase in optical depth with warming at the monthly and interannual timescales. Many models also do not capture the increase in optical depth with estimated inversion strength that is found in all three satellite observations and in previous studies. The discrepancy between models and satellites exists in both hemispheres and in most months of the year. A simple replacement of the models' optical depth sensitivities with the satellites' sensitivities reduces the negative shortwave cloud feedback by at least 50% in the 40°–70°S latitude band and by at least 65% in the 40°–70°N latitude band. Furthermore, based on this analysis of satellite observations, we conclude that the low-cloud optical depth feedback at middle and high latitudes is likely too negative in climate models.

  4. Splines and polynomial tools for flatness-based constrained motion planning

    NASA Astrophysics Data System (ADS)

    Suryawan, Fajar; De Doná, José; Seron, María

    2012-08-01

    This article addresses the problem of trajectory planning for flat systems with constraints. Flat systems have the useful property that the input and the state can be completely characterised by the so-called flat output. We propose a spline parametrisation for the flat output, the performance output, the states and the inputs. Using this parametrisation, the problem of constrained trajectory planning can be cast into a simple quadratic programming problem. An important result is that the B-spline parametrisation used gives exact results for constrained linear continuous-time systems. The result is exact in the sense that the constrained signal can be made arbitrarily close to the boundary without having intersampling issues (as one would have in sampled-data systems). Simulation examples are presented, involving the generation of rest-to-rest trajectories. In addition, an experimental result of the method is also presented, in which two methods to generate trajectories for a magnetic-levitation (maglev) system in the presence of constraints are compared and each method's performance is discussed. The first method uses the nonlinear model of the plant, which turns out to belong to the class of flat systems. The second method uses a linearised version of the plant model around an operating point. In each case, a continuous-time description is used. The experimental results on a real maglev system reported here show that, in most scenarios, the nonlinear and linearised models produce almost indistinguishable trajectories.
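    For a concrete flavour of flatness-based planning, consider a double integrator, whose flat output is the position itself. The sketch below uses a closed-form quintic rather than the article's B-spline/QP formulation, generating a rest-to-rest trajectory and checking an input constraint by sampling:

```python
def quintic_rest_to_rest(y0, yf, T):
    """Quintic flat-output trajectory for a rest-to-rest move: y(0)=y0,
    y(T)=yf, with zero velocity and acceleration at both ends.  For a flat
    system the states and inputs are algebraic functions of y and its
    derivatives, so constraining them reduces to constraining this
    polynomial (here for a double integrator, whose input is y'')."""
    d = yf - y0
    def y(t):
        s = t / T
        return y0 + d * (10*s**3 - 15*s**4 + 6*s**5)
    def u(t):   # acceleration = input of the double integrator
        s = t / T
        return d * (60*s - 180*s**2 + 120*s**3) / T**2
    return y, u

y, u = quintic_rest_to_rest(0.0, 1.0, 2.0)
# Check an actuator bound |u| <= u_max on a sampling grid; the quintic's
# peak acceleration is (10/sqrt(3)) * d / T**2, about 1.44 here.
u_max = 1.5
feasible = all(abs(u(k * 2.0 / 100)) <= u_max for k in range(101))
```

    The article's spline parametrisation generalises this: the polynomial coefficients become decision variables of a quadratic program, with the constraints imposed exactly in continuous time rather than checked on a grid.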

  5. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    NASA Astrophysics Data System (ADS)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location of moving objects on a network was collected by GPS, which is costly, requires frequent updates, and raises privacy concerns. RFID (Radio Frequency IDentification) devices are now widely used to collect location information: they are cheaper, require fewer updates, and intrude less on privacy. They detect the identity of an object and the time at which it passes a node of the network, but they do not detect the object's exact movement along an edge, which leads to uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data is therefore a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied; it consists of four steps: spatial filtering, spatial refinement, temporal filtering, and probability calculation. Finally, experiments are conducted on simulated data to study the performance of the index. Precision and recall of the result set are defined, and the effect of the query arguments on precision and recall is discussed.
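    The final two query steps can be sketched for a single object on a single edge; the uniform-position assumption below is our simplification for illustration, not necessarily the paper's uncertainty model:

```python
def range_query_probability(t0, t1, edge_len, q_lo, q_hi, tq):
    """Spatio-temporal range query for one object on one edge: the object
    was detected at the entry node at time t0 and at the exit node at t1,
    so between detections its position along the edge is uncertain.
    Temporal filter first, then the probability calculation."""
    # temporal filter: the object is on this edge only between detections
    if not (t0 <= tq <= t1):
        return 0.0
    # probability calculation: position treated as uniform on [0, edge_len],
    # so P(position in [q_lo, q_hi]) is the covered fraction of the edge
    overlap = max(0.0, min(q_hi, edge_len) - max(q_lo, 0.0))
    return overlap / edge_len

p_inside = range_query_probability(0.0, 10.0, 100.0, 20.0, 60.0, tq=5.0)
p_outside = range_query_probability(0.0, 10.0, 100.0, 20.0, 60.0, tq=12.0)
```

    The spatial filtering and refinement steps (selecting candidate edges via the index before this per-edge computation) are omitted here.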

  6. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Maiti, Raman

    2016-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results, obtained in the form of internal-external rotations and anterior-posterior displacements for a new and experimentally simulated specimen of the patella femoral joint under standard gait conditions, were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and past studies was observed when the ligament load was removed and the medial-lateral displacement was constrained. The model is sensitive to ±5% changes in kinematic, friction, force and stiffness coefficients, and insensitive to the time step.

  7. Computational Modelling of Patella Femoral Kinematics During Gait Cycle and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Maiti, Raman

    2018-06-01

    The effect of loading and boundary conditions on patellar mechanics is significant due to the complications arising in patella femoral joints during total knee replacements. To understand the patellar mechanics with respect to loading and motion, a computational model representing the patella femoral joint was developed and validated against experimental results. The computational model was created in IDEAS NX and simulated in MSC ADAMS/VIEW software. The results, obtained in the form of internal-external rotations and anterior-posterior displacements for a new and experimentally simulated specimen of the patella femoral joint under standard gait conditions, were compared with experimental measurements performed on the Leeds ProSim knee simulator. A good overall agreement between the computational prediction and the experimental data was obtained for patella femoral kinematics. Good agreement between the model and past studies was observed when the ligament load was removed and the medial-lateral displacement was constrained. The model is sensitive to ±5% changes in kinematic, friction, force and stiffness coefficients, and insensitive to the time step.

  8. A method to identify and analyze biological programs through automated reasoning

    PubMed Central

    Yordanov, Boyan; Dunn, Sara-Jane; Kugler, Hillel; Smith, Austin; Martello, Graziano; Emmott, Stephen

    2016-01-01

    Predictive biology is elusive because rigorous, data-constrained, mechanistic models of complex biological systems are difficult to derive and validate. Current approaches tend to construct and examine static interaction network models, which are descriptively rich, but often lack explanatory and predictive power, or dynamic models that can be simulated to reproduce known behavior. However, in such approaches implicit assumptions are introduced as typically only one mechanism is considered, and exhaustively investigating all scenarios is impractical using simulation. To address these limitations, we present a methodology based on automated formal reasoning, which permits the synthesis and analysis of the complete set of logical models consistent with experimental observations. We test hypotheses against all candidate models, and remove the need for simulation by characterizing and simultaneously analyzing all mechanistic explanations of observed behavior. Our methodology transforms knowledge of complex biological processes from sets of possible interactions and experimental observations to precise, predictive biological programs governing cell function. PMID:27668090

  9. A detailed model for simulation of catchment scale subsurface hydrologic processes

    NASA Technical Reports Server (NTRS)

    Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.

  10. The consensus in the two-feature two-state one-dimensional Axelrod model revisited

    NASA Astrophysics Data System (ADS)

    Biral, Elias J. P.; Tilles, Paulo F. C.; Fontanari, José F.

    2015-04-01

    The Axelrod model for the dissemination of culture exhibits a rich spatial distribution of cultural domains, which depends on the values of the two model parameters: F, the number of cultural features, and q, the common number of states each feature can assume. In the one-dimensional model with F = q = 2, which is closely related to the constrained voter model, Monte Carlo simulations indicate the existence of multicultural absorbing configurations in which at least one macroscopic domain coexists with a multitude of microscopic ones in the thermodynamic limit. However, rigorous analytical results for the infinite system starting from the configuration where all cultures are equally likely show convergence to only monocultural or consensus configurations. Here we show that this disagreement is due simply to the order in which the time-asymptotic limit and the thermodynamic limit are taken in the simulations. In addition, we show how the consensus-only result can be derived using Monte Carlo simulations of finite chains.
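    A minimal Monte Carlo sketch of the one-dimensional Axelrod dynamics, with an explicit test for the absorbing condition (every neighbouring pair either identical or completely distinct); chain length, step budget, and seed are illustrative choices:

```python
import random

def axelrod_1d(n, F=2, q=2, steps=200000, seed=1):
    """Monte Carlo for the one-dimensional Axelrod model with F cultural
    features of q states each.  At each step a random site interacts with
    its right neighbour with probability equal to their cultural overlap,
    copying one of the features on which they disagree."""
    rng = random.Random(seed)
    culture = [[rng.randrange(q) for _ in range(F)] for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n - 1)
        j = i + 1
        shared = sum(a == b for a, b in zip(culture[i], culture[j]))
        if 0 < shared < F and rng.random() < shared / F:
            k = rng.choice([f for f in range(F)
                            if culture[i][f] != culture[j][f]])
            culture[i][k] = culture[j][k]
    return culture

def is_absorbing(culture):
    """Absorbing iff every neighbouring pair is identical or fully distinct."""
    F = len(culture[0])
    return all(sum(a == b for a, b in zip(x, y)) in (0, F)
               for x, y in zip(culture, culture[1:]))

final = axelrod_1d(20)
```

    Averaging the size of the largest cultural domain in `final` over many seeds and chain lengths is how one probes the order-of-limits issue discussed above.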

  11. A practical solution to reduce soft tissue artifact error at the knee using adaptive kinematic constraints.

    PubMed

    Potvin, Brigitte M; Shourijeh, Mohammad S; Smale, Kenneth B; Benoit, Daniel L

    2017-09-06

    Musculoskeletal modeling and simulations have vast potential in clinical and research fields, but face various challenges in representing the complexities of the human body. Soft tissue artifact from skin-mounted markers may lead to non-physiological representations of joint motion being used as inputs to models in simulations. To address this, we have developed adaptive joint constraints on five of the six degrees of freedom of the knee joint, based on in vivo tibiofemoral joint motions recorded during walking, hopping and cutting motions from subjects instrumented with intra-cortical pins inserted into their tibia and femur. The constraint boundaries vary as a function of knee flexion angle and were tested on four whole-body models including four to six knee degrees of freedom. A musculoskeletal model developed in OpenSim simulation software was constrained to these in vivo boundaries during level gait, and inverse kinematics and dynamics were then resolved. Statistical parametric mapping indicated significant differences (p<0.05) in kinematics between bone-pin-constrained and unconstrained model conditions, notably in knee translations, while hip and ankle flexion/extension angles were also affected, indicating that the error at the knee propagates to surrounding joints. These changes to hip, knee, and ankle kinematics led to measurable changes in hip and knee transverse-plane moments, and knee frontal-plane moments and forces. Since the knee flexion angle can be validly represented using skin-mounted markers, our tool uses this reliable measure to guide the five other degrees of freedom at the knee and provide a more valid representation of the kinematics for these degrees of freedom.
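    The adaptive constraints amount to clamping each degree of freedom to flexion-angle-dependent bounds. A sketch with made-up knot values (the paper's envelopes come from the bone-pin recordings):

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation, clamped at the end knots."""
    if x <= xs[0]:
        return ys[0]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return ys[-1]

def constrain_dof(value, flexion_deg, flex_knots, lo_bounds, hi_bounds):
    """Clamp one knee degree of freedom to bounds that vary with knee
    flexion angle, mimicking the adaptive constraints described above."""
    lo = interp(flexion_deg, flex_knots, lo_bounds)
    hi = interp(flexion_deg, flex_knots, hi_bounds)
    return min(max(value, lo), hi)

# Hypothetical anterior tibial translation bounds (mm) that widen with
# flexion; at 45 degrees the upper bound interpolates to 5.0 mm.
knots = [0.0, 30.0, 60.0, 90.0]
lo = [-2.0, -3.0, -4.0, -5.0]
hi = [2.0, 4.0, 6.0, 8.0]
clamped = constrain_dof(7.5, 45.0, knots, lo, hi)
```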

  12. Carbon balance of South Asia constrained by passenger aircraft CO2 measurements

    NASA Astrophysics Data System (ADS)

    Patra, P. K.; Niwa, Y.; Schuck, T. J.; Brenninkmeijer, C. A.; Machida, T.; Matsueda, H.; Sawa, Y.

    2011-12-01

    Quantifying the fluxes of carbon dioxide (CO2) between the atmosphere and terrestrial ecosystems in all their diversity, across the continents, is important and urgent for implementing effective mitigation policies. Whereas much is known for Europe and North America, for instance, South Asia, with 1.6 billion inhabitants and considerable CO2 fluxes, has in comparison remained terra incognita in this respect. The sole measurement site at Cape Rama does not constrain CO2 fluxes during the summer monsoon season. We use regional measurements of atmospheric CO2 aboard a Lufthansa passenger aircraft between Frankfurt (Germany) and Chennai (India) at cruise altitude, in addition to the existing network sites for 2008, to estimate monthly fluxes for 64 regions using Bayesian inversion and ACTM transport model simulations. The applicability of the model's transport parameterization is confirmed using multi-tracer (SF6, CH4, N2O) simulations for the CARIBIC datasets. The annual carbon flux obtained by including the aircraft data is twice as large as the fluxes simulated by a terrestrial ecosystem model that was applied to prescribe the fluxes used in the inversions. It is shown that South Asia sequestered carbon at a rate of 0.37±0.20 Pg C yr-1 for the years 2007 and 2008, primarily during the summer monsoon season when the water limitation for this tropical ecosystem is relaxed. The seasonality and strength of the calculated monthly fluxes are successfully validated using independent measurements of vertical CO2 profiles over Delhi and spatial variations at cruising altitude by the CONTRAIL program over Asia aboard Japan Airlines passenger aircraft (Patra et al., 2011). A major remaining challenge is the verification of the inverse-model flux seasonality and annual totals by bottom-up estimates using field measurements and terrestrial ecosystem models.

  13. Top-down estimate of dust emissions through integration of MODIS and MISR aerosol retrievals with the GEOS-Chem adjoint model

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping

    2012-04-01

    Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for the combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in the Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint, constrained by aerosol optical depth (AOD) derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance, with aerosol single-scattering properties consistent with GEOS-Chem. The adjoint inverse modeling yields an overall 51% decrease in the prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD, MODIS Deep Blue AOD over the dust source region, and surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data to constrain dust emissions at various temporal and spatial scales, and hence improve the quantification of dust effects on climate, air quality, and human health.

  14. Top-down Estimate of Dust Emissions Through Integration of MODIS and MISR Aerosol Retrievals With the Geos-chem Adjoint Model

    NASA Technical Reports Server (NTRS)

    Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping

    2012-01-01

    Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for the combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in the Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint, constrained by aerosol optical depth (AOD) derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance, with aerosol single-scattering properties consistent with GEOS-Chem. The adjoint inverse modeling yields an overall 51% decrease in the prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD, MODIS Deep Blue AOD over the dust source region, and surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data to constrain dust emissions at various temporal and spatial scales, and hence improve the quantification of dust effects on climate, air quality, and human health.

  15. McMAC: Towards a MAC Protocol with Multi-Constrained QoS Provisioning for Diverse Traffic in Wireless Body Area Networks

    PubMed Central

    Monowar, Muhammad Mostafa; Hassan, Mohammad Mehedi; Bajaber, Fuad; Al-Hussein, Musaed; Alamri, Atif

    2012-01-01

    The emergence of heterogeneous applications with diverse requirements for resource-constrained Wireless Body Area Networks (WBANs) poses significant challenges for provisioning Quality of Service (QoS) with multi-constraints (delay and reliability) while preserving energy efficiency. To address such challenges, this paper proposes McMAC, a MAC protocol with multi-constrained QoS provisioning for diverse traffic classes in WBANs. McMAC classifies traffic based on their multi-constrained QoS demands and introduces a novel superframe structure based on the “transmit-whenever-appropriate” principle, which allows diverse periods for diverse traffic classes according to their respective QoS requirements. Furthermore, a novel emergency packet handling mechanism is proposed to ensure packet delivery with the least possible delay and the highest reliability. McMAC is also modeled analytically, and extensive simulations were performed to evaluate its performance. The results reveal that McMAC achieves the desired delay and reliability guarantee according to the requirements of a particular traffic class while achieving energy efficiency. PMID:23202224

  16. Directional constraint of endpoint force emerges from hindlimb anatomy.

    PubMed

    Bunderson, Nathan E; McKay, J Lucas; Ting, Lena H; Burkholder, Thomas J

    2010-06-15

    Postural control requires the coordination of force production at the limb endpoints to apply an appropriate force to the body. Subjected to horizontal plane perturbations, quadruped limbs stereotypically produce force constrained along a line that passes near the center of mass. This phenomenon, referred to as the force constraint strategy, may reflect mechanical constraints on the limb or body, a specific neural control strategy, or an interaction among neural controls and mechanical constraints. We used a neuromuscular model of the cat hindlimb to test the hypothesis that anatomical constraints restrict the mechanical action of individual muscles during stance and constrain the response to perturbations to a line independent of perturbation direction. In a linearized neuromuscular model of the cat hindlimb, muscle lengthening directions were highly conserved across 10,000 different muscle activation patterns, each of which produced an identical, stance-like endpoint force. These lengthening directions were closely aligned with the sagittal plane and reveal an anatomical structure for directionally constrained force responses. Each of the 10,000 activation patterns was predicted to produce stable stance based on Lyapunov stability analysis. In forward simulations of the nonlinear, seven-degree-of-freedom model under the action of 200 random muscle activation patterns, displacement of the endpoint from its equilibrium position produced restoring forces, which were also biased toward the sagittal plane. The single exception was an activation pattern based on minimum muscle stress optimization, which produced destabilizing force responses in some perturbation directions. The sagittal force constraint increased during simulations as the system shifted from an inertial response during the acceleration phase to a viscoelastic response as peak velocity was obtained. These results qualitatively match similar experimental observations and suggest that the force constraint phenomenon may result from the anatomical arrangement of the limb.
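    The stability screening can be illustrated on a two-dimensional linearization, where the Lyapunov eigenvalue condition reduces to the trace-determinant criterion; the matrices below are illustrative, not derived from the cat hindlimb model:

```python
def is_stable_2d(A):
    """Linear (Lyapunov) stability of dx/dt = A x in two dimensions: both
    eigenvalues have negative real part iff trace(A) < 0 and det(A) > 0."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return tr < 0 and det > 0

# A pattern whose perturbation response is restoring (stiffness-like,
# negative-definite behaviour) passes; a pattern with a destabilizing
# direction (one positive eigenvalue) fails, like the minimum-stress
# exception noted in the abstract.
restoring = is_stable_2d([[-1.0, 0.5], [0.2, -2.0]])
destabilizing = is_stable_2d([[-0.5, 0.0], [0.0, 1.0]])
```

    The study applies the same idea to the full linearized musculoskeletal system, one eigenvalue analysis per activation pattern.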

  17. Directional constraint of endpoint force emerges from hindlimb anatomy

    PubMed Central

    Bunderson, Nathan E.; McKay, J. Lucas; Ting, Lena H.; Burkholder, Thomas J.

    2010-01-01

    Postural control requires the coordination of force production at the limb endpoints to apply an appropriate force to the body. Subjected to horizontal plane perturbations, quadruped limbs stereotypically produce force constrained along a line that passes near the center of mass. This phenomenon, referred to as the force constraint strategy, may reflect mechanical constraints on the limb or body, a specific neural control strategy, or an interaction among neural controls and mechanical constraints. We used a neuromuscular model of the cat hindlimb to test the hypothesis that anatomical constraints restrict the mechanical action of individual muscles during stance and constrain the response to perturbations to a line independent of perturbation direction. In a linearized neuromuscular model of the cat hindlimb, muscle lengthening directions were highly conserved across 10,000 different muscle activation patterns, each of which produced an identical, stance-like endpoint force. These lengthening directions were closely aligned with the sagittal plane and reveal an anatomical structure for directionally constrained force responses. Each of the 10,000 activation patterns was predicted to produce stable stance based on Lyapunov stability analysis. In forward simulations of the nonlinear, seven-degree-of-freedom model under the action of 200 random muscle activation patterns, displacement of the endpoint from its equilibrium position produced restoring forces, which were also biased toward the sagittal plane. The single exception was an activation pattern based on minimum muscle stress optimization, which produced destabilizing force responses in some perturbation directions. The sagittal force constraint increased during simulations as the system shifted from an inertial response during the acceleration phase to a viscoelastic response as peak velocity was obtained. These results qualitatively match similar experimental observations and suggest that the force constraint phenomenon may result from the anatomical arrangement of the limb. PMID:20511528

  18. Local Infrasound Variability Related to In Situ Atmospheric Observation

    NASA Astrophysics Data System (ADS)

    Kim, Keehoon; Rodgers, Arthur; Seastrand, Douglas

    2018-04-01

    Local infrasound is widely used to constrain source parameters of near-surface events (e.g., chemical explosions and volcanic eruptions). While atmospheric conditions are critical to infrasound propagation and source parameter inversion, local atmospheric variability is often ignored by assuming homogeneous atmospheres, and its impact on source inversion uncertainty has never been accounted for, owing to the lack of a quantitative understanding of infrasound variability. We investigate atmospheric impacts on local infrasound propagation through repeated explosion experiments with a dense acoustic network and in situ atmospheric measurements. We perform full 3-D waveform simulations with local atmospheric data and a numerical weather forecast model to quantify atmosphere-dependent infrasound variability, and assess the advantages and limitations of local weather data and numerical weather models for sound propagation simulation. Numerical simulations with stochastic atmosphere models also showed a nonnegligible influence of atmospheric heterogeneity on infrasound amplitude, suggesting an important role for local turbulence.

  19. Greenland-Wide Seasonal Temperatures During the Last Deglaciation

    NASA Astrophysics Data System (ADS)

    Buizert, C.; Keisling, B. A.; Box, J. E.; He, F.; Carlson, A. E.; Sinclair, G.; DeConto, R. M.

    2018-02-01

    The sensitivity of the Greenland ice sheet to climate forcing is of key importance in assessing its contribution to past and future sea level rise. Surface mass loss occurs during summer, and accounting for temperature seasonality is critical in simulating ice sheet evolution and in interpreting glacial landforms and chronologies. Ice core records constrain the timing and magnitude of climate change but are largely limited to annual mean estimates from the ice sheet interior. Here we merge ice core reconstructions with transient climate model simulations to generate Greenland-wide and seasonally resolved surface air temperature fields during the last deglaciation. Greenland summer temperatures peak in the early Holocene, consistent with records of ice core melt layers. We perform deglacial Greenland ice sheet model simulations to demonstrate that accounting for realistic temperature seasonality decreases simulated glacial ice volume, expedites the deglacial margin retreat, mutes the impact of abrupt climate warming, and gives rise to a clear Holocene ice volume minimum.

  20. Global Gross Primary Productivity for 2015 Inferred from OCO-2 SIF and a Carbon-Cycle Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Norton, A.; Rayner, P. J.; Scholze, M.; Koffi, E. N. D.

    2016-12-01

    The CMIP5 intercomparison, among other studies (e.g., Bodman et al., 2013), has shown that the land carbon flux contributes significantly to the uncertainty in projections of future CO2 concentration and climate (Friedlingstein et al., 2014). The main challenge lies in disaggregating the relatively well-known net land carbon flux into its component fluxes, gross primary production (GPP) and respiration. Model simulations of these processes disagree considerably, and the lack of accurate observations of photosynthetic activity has been a hindrance. Here we build upon the Carbon Cycle Data Assimilation System (CCDAS) (Rayner et al., 2005) to constrain estimates of one of these uncertain fluxes, GPP, using satellite observations of Solar Induced Fluorescence (SIF). SIF has considerable benefits over other proxy observations, as it tracks not just the presence of vegetation but actual photosynthetic activity (Walther et al., 2016; Yang et al., 2015). To combine these observations with process-based simulations of GPP we have coupled the model SCOPE with the CCDAS model BETHY. This provides a mechanistic relationship between SIF and GPP, and the means to constrain the processes relevant to SIF and GPP via model parameters in a data assimilation system. We ingest SIF observations from NASA's Orbiting Carbon Observatory-2 (OCO-2) for 2015 into the data assimilation system to constrain estimates of GPP in space and time, while allowing for explicit consideration of uncertainties in parameters and observations. Here we present first results of the assimilation with SIF. Preliminary results indicate an uncertainty reduction on global annual GPP of at least 75% when using SIF observations, reducing the uncertainty to < 3 PgC yr-1. A large portion of the constraint is propagated via parameters that describe leaf phenology. These results help to bring together state-of-the-art observations and models to improve understanding and predictive capability of GPP.

  1. Constraining a complex biogeochemical model for CO2 and N2O emission simulations from various land uses by model-data fusion

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz

    2017-07-01

    This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha-1 a-1, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha-1 a-1 for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete model processes (e.g. respiration rates after harvest). The concept further aids the identification of missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and of uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.

  2. First Top-Down Estimates of Anthropogenic NOx Emissions Using High-Resolution Airborne Remote Sensing Observations

    NASA Astrophysics Data System (ADS)

    Souri, Amir H.; Choi, Yunsoo; Pan, Shuai; Curci, Gabriele; Nowlan, Caroline R.; Janz, Scott J.; Kowalewski, Matthew G.; Liu, Junjie; Herman, Jay R.; Weinheimer, Andrew J.

    2018-03-01

    A number of satellite-based instruments have become an essential part of monitoring emissions. Despite sound theoretical inversion techniques, insufficient sampling and the large footprint of current observations have been an obstacle to narrowing the inversion window for regional models. These key limitations can be partially resolved by a set of modest high-quality measurements from airborne remote sensing. This study illustrates the feasibility of using nitrogen dioxide (NO2) columns from the Geostationary Coastal and Air Pollution Events Airborne Simulator (GCAS) to constrain anthropogenic NOx emissions in the Houston-Galveston-Brazoria area. We convert slant column densities to vertical columns using a radiative transfer model with (i) NO2 profiles from a high-resolution regional model (1 × 1 km2) constrained by P-3B aircraft measurements, (ii) the consideration of aerosol optical thickness impacts on radiance at the NO2 absorption line, and (iii) high-resolution surface albedo constrained by ground-based spectrometers. We characterize errors in the GCAS NO2 columns by comparing them to Pandora measurements and find a striking correlation (r > 0.74) with an uncertainty of 3.5 × 10^15 molecules cm-2. On 9 of 10 total days, the anthropogenic emissions constrained by a Kalman filter yield an overall 2-50% reduction in polluted areas, partly counterbalancing the well-documented positive bias of the model. The inversion, however, boosts emissions by 94% in the same areas on a day when an unprecedented local emissions event potentially occurred, significantly mitigating the bias of the model. The capability of GCAS to detect such an event demonstrates the promise of forthcoming geostationary satellites for timely top-down emission estimates.
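
    The Kalman-filter emission update used in inversions of this kind can be sketched in scalar form. The function, variable names, and numbers below are illustrative assumptions, not the study's actual inversion code:

```python
def kalman_update(x_prior, P_prior, y_obs, H, R):
    """One scalar Kalman-filter update of an emission scaling factor.
    x_prior: prior emission estimate; P_prior: its variance;
    y_obs: observed column; H: model sensitivity d(column)/d(emission);
    R: observation-error variance.  (Illustrative notation only.)"""
    K = P_prior * H / (H * P_prior * H + R)        # Kalman gain
    x_post = x_prior + K * (y_obs - H * x_prior)   # updated emission
    P_post = (1.0 - K * H) * P_prior               # reduced uncertainty
    return x_post, P_post

# A lower-than-modeled observation pulls the emission estimate down,
# which is the "reduction in polluted areas" mechanism described above.
x, P = kalman_update(x_prior=1.0, P_prior=0.25, y_obs=0.8, H=1.0, R=0.05)
```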

  3. Simulation of solar array slewing of Indian remote sensing satellite

    NASA Astrophysics Data System (ADS)

    Maharana, P. K.; Goel, P. S.

    The effect of flexible arrays on sun tracking for the IRS satellite is studied. Equations of motion of satellites carrying a rotating flexible appendage are developed following the Newton-Euler approach and utilizing the constrained modes of the appendage. The drive torque, detent torque and friction torque in the SADA are included in the model. Extensive simulations of the slewing motion are carried out. The phenomena of back-stepping, step-missing, step-slipping and the influences of array flexibility in the acquisition mode are observed for certain combinations of parameters.

  4. The robustness in dynamics of out of equilibrium bidirectional transport systems with constrained entrances

    NASA Astrophysics Data System (ADS)

    Sharma, Natasha; Verma, Atul Kumar; Gupta, Arvind Kumar

    2018-05-01

    Macroscopic and microscopic long-distance bidirectional transport depends on connections between the entrances and exits of various transport media. Motivated by these associations, we introduce a small system module of the Totally Asymmetric Simple Exclusion Process comprising oppositely directed species of particles moving on two parallel channels with constrained entrances. The dynamical rules that characterize the system obey symmetry between the two species and are identical for both channels. The model displays rich steady-state behavior, including a symmetry-breaking phenomenon. The phase diagram is analyzed theoretically within the mean-field approximation and substantiated with Monte Carlo simulations; the relevant mean-field calculations are presented. We further compare the phase segregation with that observed in previous works and find that the structure of phase separation in the proposed model differs from earlier ones. Interestingly, in phases with broken symmetry the symmetry with respect to channels is preserved: the two particle species behave differently, but particles of the same species behave identically in both channels. In symmetric phases, key properties, including the currents and densities in the channels, are identical for both particle species. The effect of the symmetry breaking on the Monte Carlo results is also examined using particle density histograms. Finally, the strong system-size dependence of the phase properties is explored through simulation.
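
    A single-channel, one-species open-boundary TASEP with random-sequential updates is the stripped-down building block of the two-channel, two-species model above. A minimal Monte Carlo sketch; the rates, lattice size, and sweep count are illustrative assumptions:

```python
import random

def tasep(L=100, alpha=0.7, beta=0.7, sweeps=10000, seed=1):
    """Random-sequential Monte Carlo for a single-channel open-boundary
    TASEP: particles enter at rate alpha, hop right at rate 1, and exit
    at rate beta.  Returns the measured exit current per sweep."""
    rng = random.Random(seed)
    lattice = [0] * L
    exits = 0
    for _ in range(sweeps):
        for _ in range(L + 1):                 # one sweep = L+1 bond updates
            i = rng.randrange(-1, L)           # bond i -> i+1; -1 = entrance
            if i == -1:
                if lattice[0] == 0 and rng.random() < alpha:
                    lattice[0] = 1             # injection at the entrance
            elif i == L - 1:
                if lattice[-1] == 1 and rng.random() < beta:
                    lattice[-1] = 0            # extraction at the exit
                    exits += 1
            elif lattice[i] == 1 and lattice[i + 1] == 0:
                lattice[i], lattice[i + 1] = 0, 1   # bulk hop (rate 1)
    return exits / sweeps

# Mean-field theory predicts the maximal-current phase, J = 1/4,
# whenever alpha, beta > 1/2.
J = tasep()
```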

  5. High regional climate sensitivity over continental China constrained by glacial-recent changes in temperature and the hydrological cycle.

    PubMed

    Eagle, Robert A; Risi, Camille; Mitchell, Jonathan L; Eiler, John M; Seibt, Ulrike; Neelin, J David; Li, Gaojun; Tripati, Aradhna K

    2013-05-28

    The East Asian monsoon is one of Earth's most significant climatic phenomena, and numerous paleoclimate archives have revealed that it exhibits variations on orbital and suborbital time scales. Quantitative constraints on the climate changes associated with these past variations are limited, yet are needed to constrain sensitivity of the region to changes in greenhouse gas levels. Here, we show central China is a region that experienced a much larger temperature change since the Last Glacial Maximum than typically simulated by climate models. We applied clumped isotope thermometry to carbonates from the central Chinese Loess Plateau to reconstruct temperature and water isotope shifts from the Last Glacial Maximum to present. We find a summertime temperature change of 6-7 °C that is reproduced by climate model simulations presented here. Proxy data reveal evidence for a shift to lighter isotopic composition of meteoric waters in glacial times, which is also captured by our model. Analysis of model outputs suggests that glacial cooling over continental China is significantly amplified by the influence of stationary waves, which, in turn, are enhanced by continental ice sheets. These results not only support high regional climate sensitivity in Central China but highlight the fundamental role of planetary-scale atmospheric dynamics in the sensitivity of regional climates to continental glaciation, changing greenhouse gas levels, and insolation.

  6. An effective parameter optimization with radiation balance constraints in the CAM5

    NASA Astrophysics Data System (ADS)

    Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.

    2017-12-01

    Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimizations, so simulation results with the optimal parameters may violate conditions that the model must satisfy. In this study, the radiation balance constraint is taken as an example and incorporated into an automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve the resulting constrained optimization problem. In our experiment, we use the CAM5 atmosphere model in a 5-year AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take a synthesized metric combining global means of radiation, precipitation, relative humidity, and temperature as the goal of optimization, and simultaneously treat the conditions that FLUT and FSNTOA should satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are required to be approximately 240 Wm-2 in CAM5. Experiment results show that the synthesized metric is 13.6% better than in the control run, while both FLUT and FSNTOA remain close to the constrained conditions. The FLUT condition is well satisfied, clearly better than the annual average FLUT obtained with the default parameters. FSNTOA deviates slightly from the observed value, but the relative error is less than 7.7‰.
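
    The Lagrangian multiplier approach to such constrained tuning can be sketched on a toy problem: minimize a skill metric subject to one balance constraint, via dual ascent on the Lagrangian L = f + λ g. The functions f, g and all values below are illustrative stand-ins, not the actual CAM5 metrics:

```python
def f(p1, p2):                 # "synthesized metric" (lower is better)
    return (p1 - 3.0) ** 2 + (p2 - 2.0) ** 2

def g(p1, p2):                 # balance constraint, must be driven to ~0
    return p1 + p2 - 4.0

p1, p2, lam, eta = 0.0, 0.0, 0.0, 0.1
for _ in range(2000):
    # gradient descent on the parameters, ascent on the multiplier
    p1 -= eta * (2 * (p1 - 3.0) + lam)
    p2 -= eta * (2 * (p2 - 2.0) + lam)
    lam += eta * g(p1, p2)

# Analytic optimum of this toy problem: p1 = 2.5, p2 = 1.5, lam = 1.0,
# i.e. the best metric value achievable while the constraint holds.
```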

  7. Crustal tracers in the atmosphere and ocean: Relating their concentrations, fluxes, and ages

    NASA Astrophysics Data System (ADS)

    Han, Qin

    Crustal tracers are important sources of key limiting nutrients (e.g., iron) in remote ocean regions where they have a large impact on global biogeochemical cycles. However, the atmospheric delivery of bio-available iron to oceans via mineral dust aerosol deposition is poorly constrained. This dissertation aims to improve understanding and model representation of oceanic dust deposition and to provide soluble iron flux maps by testing observations of crustal tracer concentrations and solubilities against predictions from two conceptual solubility models. First, we assemble a database of ocean surface dissolved Al and incorporate Al cycling into the global Biogeochemical Elemental Cycling (BEC) model. The observed Al concentrations show clear basin-scale differences that are useful for constraining dust deposition. The dynamic mixed layer depth and Al residence time in the BEC model significantly improve the simulated dissolved Al field. Some of the remaining model-data discrepancies appear related to the neglect of aerosol size, age, and air mass characteristics in estimating tracer solubility. Next, we develop the Mass-Age Tracking method (MAT) to efficiently and accurately estimate the mass-weighted age of tracers. We apply MAT to four sizes of desert dust aerosol and simulate, for the first time, global distributions of aerosol age in the atmosphere and at deposition. These dust size and age distributions at deposition, together with independent information on air mass acidity, allow us to test two simple yet plausible models for predicting the dissolution of mineral dust iron and aluminum during atmospheric transport. These models represent aerosol solubility as controlled (1) by a diffusive process leaching nutrients from the dust into equilibrium with the liquid water coating or (2) by a process that continually dissolves nutrients in proportion to the particle surface area. 
The surface-controlled model better captures the spatial pattern of observed solubility in the Atlantic. Neither model improves previous estimates of the solubility in the Pacific, nor do they significantly improve the global BEC simulation of dissolved iron or aluminum.

  8. A coupled stochastic inverse-management framework for dealing with nonpoint agriculture pollution under groundwater parameter uncertainty

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, Carlos; Palacios-Marqués, Daniel; Merigó, José M.

    2014-04-01

    In this paper a methodology for the stochastic management of groundwater quality problems is presented, which can be used to provide agricultural advisory services. A stochastic algorithm to solve the coupled flow and mass transport inverse problem is combined with a stochastic management approach to develop methods for integrating uncertainty; thus obtaining more reliable policies on groundwater nitrate pollution control from agriculture. The stochastic inverse model allows identifying non-Gaussian parameters and reducing uncertainty in heterogeneous aquifers by constraining stochastic simulations to data. The management model determines the spatial and temporal distribution of fertilizer application rates that maximizes net benefits in agriculture constrained by quality requirements in groundwater at various control sites. The quality constraints can be taken, for instance, by those given by water laws such as the EU Water Framework Directive (WFD). Furthermore, the methodology allows providing the trade-off between higher economic returns and reliability in meeting the environmental standards. Therefore, this new technology can help stakeholders in the decision-making process under an uncertainty environment. The methodology has been successfully applied to a 2D synthetic aquifer, where an uncertainty assessment has been carried out by means of Monte Carlo simulation techniques.

  9. Measuring tongue shapes and positions with ultrasound imaging: a validation experiment using an articulatory model.

    PubMed

    Ménard, Lucie; Aubin, Jérôme; Thibeault, Mélanie; Richard, Gabrielle

    2012-01-01

    The goal of this paper is to assess the validity of various metrics developed to characterize tongue shapes and positions collected through ultrasound imaging in experimental setups where the probe is not constrained relative to the subject's head. Midsagittal contours were generated using an articulatory-acoustic model of the vocal tract. Sections of the tongue were extracted to simulate ultrasound imaging. Various transformations were applied to the tongue contours in order to simulate ultrasound probe displacements: vertical displacement, horizontal displacement, and rotation. The proposed data analysis method reshapes tongue contours into triangles and then extracts measures of angles, x and y coordinates of the highest point of the tongue, curvature degree, and curvature position. Parameters related to the absolute tongue position (tongue height and front/back position) are more sensitive to horizontal and vertical displacements of the probe, whereas parameters related to tongue curvature are less sensitive to such displacements. Because of their robustness to probe displacements, parameters related to tongue shape (especially curvature) are particularly well suited to cases where the transducer is not constrained relative to the head (studies with clinical populations or children). Copyright © 2011 S. Karger AG, Basel.

  10. A Computer Simulation Study of Vntr Population Genetics: Constrained Recombination Rules Out the Infinite Alleles Model

    PubMed Central

    Harding, R. M.; Boyce, A. J.; Martinson, J. J.; Flint, J.; Clegg, J. B.

    1993-01-01

    Extensive allelic diversity in variable numbers of tandem repeats (VNTRs) has been discovered in the human genome. For population genetic studies of VNTRs, such as forensic applications, it is important to know whether a neutral mutation-drift balance of VNTR polymorphism can be represented by the infinite alleles model. The assumption of the infinite alleles model that each new mutant is unique is very likely to be violated by unequal sister chromatid exchange (USCE), the primary process believed to generate VNTR mutants. We show that increasing both mutation rates and misalignment constraint for intrachromosomal recombination in a computer simulation model reduces simulated VNTR diversity below the expectations of the infinite alleles model. Maximal constraint, represented as slippage of single repeats, reduces simulated VNTR diversity to levels expected from the stepwise mutation model. Although misalignment rule is the more important variable, mutation rate also has an effect. At moderate rates of USCE, simulated VNTR diversity fluctuates around infinite alleles expectation. However, if rates of USCE are high, as for hypervariable VNTRs, simulated VNTR diversity is consistently lower than predicted by the infinite alleles model. This has been observed for many VNTRs and accounted for by technical problems in distinguishing alleles of neighboring size classes. We use sampling theory to confirm the intrinsically poor fit to the infinite alleles model of both simulated VNTR diversity and observed VNTR polymorphisms sampled from two Papua New Guinean populations. PMID:8293988

  11. A computer simulation study of VNTR population genetics: constrained recombination rules out the infinite alleles model.

    PubMed

    Harding, R M; Boyce, A J; Martinson, J J; Flint, J; Clegg, J B

    1993-11-01

    Extensive allelic diversity in variable numbers of tandem repeats (VNTRs) has been discovered in the human genome. For population genetic studies of VNTRs, such as forensic applications, it is important to know whether a neutral mutation-drift balance of VNTR polymorphism can be represented by the infinite alleles model. The assumption of the infinite alleles model that each new mutant is unique is very likely to be violated by unequal sister chromatid exchange (USCE), the primary process believed to generate VNTR mutants. We show that increasing both mutation rates and misalignment constraint for intrachromosomal recombination in a computer simulation model reduces simulated VNTR diversity below the expectations of the infinite alleles model. Maximal constraint, represented as slippage of single repeats, reduces simulated VNTR diversity to levels expected from the stepwise mutation model. Although misalignment rule is the more important variable, mutation rate also has an effect. At moderate rates of USCE, simulated VNTR diversity fluctuates around infinite alleles expectation. However, if rates of USCE are high, as for hypervariable VNTRs, simulated VNTR diversity is consistently lower than predicted by the infinite alleles model. This has been observed for many VNTRs and accounted for by technical problems in distinguishing alleles of neighboring size classes. We use sampling theory to confirm the intrinsically poor fit to the infinite alleles model of both simulated VNTR diversity and observed VNTR polymorphisms sampled from two Papua New Guinean populations.
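
    The contrast between the infinite alleles and stepwise mutation expectations can be illustrated with a toy Wright-Fisher simulation in which every mutation is a single-repeat slippage event (maximal misalignment constraint). The haploid simplification, population size, and mutation rate are assumptions for illustration, not the paper's simulation model; the two closed-form homozygosity expectations are the standard Ewens and Ohta-Kimura results:

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu, gens = 1000, 0.002, 5000   # haploid alleles, slippage rate, generations
theta = 2 * N * mu                # scaled mutation rate (theta = 4 here)

# Wright-Fisher drift with maximally constrained mutation: every event is
# a single-repeat slippage, so each allele's state is just its repeat count.
repeats = np.full(N, 20)
for _ in range(gens):
    repeats = rng.choice(repeats, size=N)                # resampling drift
    hits = rng.random(N) < mu                            # which alleles mutate
    repeats = repeats + hits * rng.choice([-1, 1], size=N)

# Homozygosity: probability that two random alleles share a size class.
_, counts = np.unique(repeats, return_counts=True)
F_sim = float(np.sum((counts / N) ** 2))

F_infinite_alleles = 1 / (1 + theta)        # Ewens expectation (= 0.2)
F_stepwise = 1 / np.sqrt(1 + 2 * theta)     # Ohta-Kimura expectation (= 1/3)
```

    A single run's homozygosity fluctuates around the stepwise expectation, which lies above the infinite-alleles value, i.e. diversity below the infinite alleles prediction, as the abstract describes.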

  12. Hamiltonian thermostats fail to promote heat flow

    NASA Astrophysics Data System (ADS)

    Hoover, Wm. G.; Hoover, Carol G.

    2013-12-01

    Hamiltonian mechanics can be used to constrain temperature simultaneously with energy. We illustrate the interesting situations that develop when two different temperatures are imposed within a composite Hamiltonian system. The model systems we treat are ϕ4 chains, with quartic tethers and quadratic nearest-neighbor Hooke's-law interactions. This model is known to satisfy Fourier's law. Our prototypical problem sandwiches a Newtonian subsystem between hot and cold Hamiltonian reservoir regions. We have characterized four different Hamiltonian reservoir types. There is no tendency for any of these two-temperature Hamiltonian simulations to transfer heat from the hot to the cold degrees of freedom. Evidently steady heat flow simulations require energy sources and sinks, and are therefore incompatible with Hamiltonian mechanics.

  13. Simulation of aerobic and anaerobic biodegradation processes at a crude oil spill site

    USGS Publications Warehouse

    Essaid, Hedeff I.; Bekins, Barbara A.; Godsy, E. Michael; Warren, Ean; Baedecker, Mary Jo; Cozzarelli, Isabelle M.

    1995-01-01

    A two-dimensional, multispecies reactive solute transport model with sequential aerobic and anaerobic degradation processes was developed and tested. The model was used to study the field-scale solute transport and degradation processes at the Bemidji, Minnesota, crude oil spill site. The simulations included the biodegradation of volatile and nonvolatile fractions of dissolved organic carbon by aerobic processes, manganese and iron reduction, and methanogenesis. Model parameter estimates were constrained by published Monod kinetic parameters, theoretical yield estimates, and field biomass measurements. Despite the considerable uncertainty in the model parameter estimates, results of simulations reproduced the general features of the observed groundwater plume and the measured bacterial concentrations. In the simulation, 46% of the total dissolved organic carbon (TDOC) introduced into the aquifer was degraded. Aerobic degradation accounted for 40% of the TDOC degraded. Anaerobic processes accounted for the remaining 60% of degradation of TDOC: 5% by Mn reduction, 19% by Fe reduction, and 36% by methanogenesis. Thus anaerobic processes account for more than half of the removal of DOC at this site.
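
    Monod-kinetic degradation of a single substrate pool, the building block of such multispecies models, can be sketched with an explicit-Euler integration. All parameter values are illustrative placeholders, not the Bemidji site calibration:

```python
# Monod-kinetics sketch for one substrate pool: dissolved organic carbon
# S (mg/L) degraded by a microbial biomass X (mg/L).
k_max, K_s = 2.0, 5.0   # max specific uptake rate (1/d), half-saturation (mg/L)
Y, b = 0.4, 0.05        # yield (mg biomass per mg C), biomass decay rate (1/d)
S, X, dt = 50.0, 1.0, 0.01
for _ in range(int(30 / dt)):            # 30 days, explicit Euler
    uptake = k_max * X * S / (K_s + S)   # Monod rate law
    S = max(S - uptake * dt, 0.0)
    X += (Y * uptake - b * X) * dt
# Substrate is consumed within a few days; biomass grows, then slowly decays.
```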

  14. A satellite simulator for TRMM PR applied to climate model simulations

    NASA Astrophysics Data System (ADS)

    Spangehl, T.; Schroeder, M.; Bodas-Salcedo, A.; Hollmann, R.; Riley Dellaripa, E. M.; Schumacher, C.

    2017-12-01

    Climate model simulations have to be compared against observation-based datasets in order to assess their skill in representing precipitation characteristics. Here we use a satellite simulator for TRMM PR to evaluate simulations performed with MPI-ESM (the Earth system model of the Max Planck Institute for Meteorology in Hamburg, Germany) within the MiKlip project (https://www.fona-miklip.de/, funded by the Federal Ministry of Education and Research in Germany). While classical evaluation methods focus on geophysical parameters such as precipitation amounts, the application of the satellite simulator enables an evaluation in the instrument's parameter space, thereby reducing uncertainties on the reference side. The CFMIP Observation Simulator Package (COSP) provides a framework for the application of satellite simulators to climate model simulations. The approach requires the introduction of sub-grid cloud and precipitation variability. Radar reflectivities are obtained by applying Mie theory, with the microphysical assumptions chosen to match the atmosphere component of MPI-ESM (ECHAM6). The results are found to be sensitive to the methods used to distribute the convective precipitation over the sub-grid boxes. Simple parameterization methods are used to introduce sub-grid variability of convective clouds and precipitation. In order to constrain uncertainties, a comprehensive comparison with sub-grid-scale convective precipitation variability deduced from TRMM PR observations is carried out.

  15. Constraining the Origin of Phobos with the Elpasolite Planetary Ice and Composition Spectrometer (EPICS) - Simulated Performance

    NASA Astrophysics Data System (ADS)

    Nowicki, S. F.; Mesick, K.; Coupland, D. D. S.; Dallmann, N. A.; Feldman, W. C.; Stonehill, L. C.; Hardgrove, C.; Dibb, S.; Gabriel, T. S. J.; West, S.

    2017-12-01

    Elpasolites are a promising new family of inorganic scintillators that can detect both gamma rays and neutrons within a single detector volume, reducing the instrument size, weight, and power (SWaP), all of which are critical for planetary science missions. The ability to distinguish between neutron and gamma events is provided through pulse shape discrimination (PSD). The Elpasolite Planetary Ice and Composition Spectrometer (EPICS) utilizes elpasolites in a next-generation, highly capable, low-SWaP gamma-ray and neutron spectrometer. We present the simulated sensitivities of EPICS to neutrons and gamma rays and demonstrate how EPICS can constrain the origin of Phobos among three main hypotheses: 1) accretion after a giant impact with Mars, 2) co-accretion with Mars, and 3) capture of an external body. The MCNP6 code was used to calculate the neutron and gamma-ray fluxes escaping the surface of Phobos, and GEANT4 to model the response of the EPICS instrument in orbit around Phobos.

  16. Cosmological structure formation in Decaying Dark Matter models

    NASA Astrophysics Data System (ADS)

    Cheng, Dalong; Chu, M.-C.; Tang, Jiayu

    2015-07-01

    The standard cold dark matter (CDM) model predicts small-scale structures that are too numerous and too dense. We consider an alternative model in which the dark matter undergoes two-body decays, with cosmological lifetime τ, into only one type of massive daughter with non-relativistic recoil velocity Vk. This decaying dark matter model (DDM) can suppress structure formation below its free-streaming scale on time scales comparable to τ. Compared with warm dark matter (WDM), DDM can better reduce the small structures while remaining consistent with high-redshift observations. We study cosmological structure formation in DDM by performing self-consistent N-body simulations and point out that cosmological simulations are necessary to understand the DDM structures, especially on non-linear scales. We propose empirical fitting functions for the DDM suppression of the mass function and the concentration-mass relation, which depend on the decay parameters (lifetime τ, recoil velocity Vk) and redshift. The fitting functions lead to accurate reconstruction of the non-linear power transfer function of DDM to CDM in the framework of the halo model. Using these results, we set constraints on the DDM parameter space by demanding that DDM does not induce larger suppression than the Lyman-α-constrained WDM models. We further generalize and constrain the DDM models to initial conditions with non-trivial mother fractions and show that the halo model predictions remain valid after considering a global decayed fraction. Finally, we point out that DDM is unlikely to resolve the disagreement on cluster numbers between the Planck primary CMB prediction and the Sunyaev-Zeldovich (SZ) effect number count for τ ~ H0^-1.

  17. Monte Carlo-based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2014-04-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures - for example, by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow for a more detailed analysis of the dynamic behaviour of the soil-plant interface. We coupled two such high-process-oriented independent models and calibrated both models simultaneously. The catchment modelling framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem model of the soil hydraulic properties. CMF was coupled with the plant growth modelling framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo-based generalized likelihood uncertainty estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniform distribution. The model was applied to three sites with different management in Müncheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storages, stems and leaves. The shape parameter of the retention curve n was highly constrained, whereas other parameters of the retention curve showed a large equifinality. We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF.
The most constrained parameters for the plant growth model were the radiation-use efficiency and the base temperature. Cross validation helped to identify deficits in the model structure, pointing out the need for including agricultural management options in the coupled model.
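    The GLUE procedure described above — Monte Carlo draws from uniform priors followed by a behavioural acceptance criterion — can be sketched as below. The toy recession model, the parameter names (s0, k) and the Nash-Sutcliffe threshold of 0.5 are illustrative assumptions, not the coupled CMF-PMF setup of the paper:

```python
import random

def glue_sample(model, priors, observed, n_runs=2000, threshold=0.5, seed=1):
    """Minimal GLUE sketch: uniform Monte Carlo sampling of the priors,
    keeping only 'behavioural' runs whose Nash-Sutcliffe efficiency (NSE)
    exceeds a threshold."""
    rng = random.Random(seed)
    mean_obs = sum(observed) / len(observed)
    var_obs = sum((o - mean_obs) ** 2 for o in observed)
    behavioural = []
    for _ in range(n_runs):
        # draw one parameter set from the uniform priors
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in priors.items()}
        sim = model(params)
        sse = sum((s - o) ** 2 for s, o in zip(sim, observed))
        nse = 1.0 - sse / var_obs          # Nash-Sutcliffe efficiency
        if nse >= threshold:               # keep only behavioural runs
            behavioural.append((nse, params))
    return behavioural

# hypothetical toy "model": linear recession of a soil-water store
def toy_model(params):
    return [params["s0"] * (1.0 - params["k"]) ** t for t in range(10)]

observed = [50 * 0.9 ** t for t in range(10)]
runs = glue_sample(toy_model, {"s0": (20.0, 80.0), "k": (0.01, 0.3)}, observed)
```

    The retained parameter sets (and their likelihood weights) give the GLUE uncertainty bounds; a well-constrained parameter shows a narrow spread among behavioural runs, whereas equifinality shows up as behavioural runs scattered across the whole prior range.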

  18. Interactive, graphical processing unit-based evaluation of evacuation scenarios at the state scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Aaby, Brandon G; Yoginath, Srikanth B

    2011-01-01

    In large-scale scenarios, transportation modeling and simulation is severely constrained by simulation time. For example, few real-time simulators scale to evacuation traffic scenarios at the level of an entire state, such as Louisiana (approximately 1 million links) or Florida (2.5 million links). New simulation approaches are needed to overcome the severe computational demands of conventional (microscopic or mesoscopic) modeling techniques. Here, a new modeling and execution methodology is explored that holds the potential to provide a tradeoff among the level of behavioral detail, the scale of the transportation network, and real-time execution capabilities. A novel, field-based modeling technique and its implementation on graphical processing units are presented. Although additional research with input from domain experts is needed for refining and validating the models, the techniques reported here afford interactive experience at very large scales of multi-million road segments. Illustrative experiments on a few state-scale networks are described based on an implementation of this approach in a software system called GARFIELD. Current modeling capabilities and implementation limitations are described, along with possible use cases and future research.

  19. Constrained parameterisation of photosynthetic capacity causes significant increase of modelled tropical vegetation surface temperature

    NASA Astrophysics Data System (ADS)

    Kattge, J.; Knorr, W.; Raddatz, T.; Wirth, C.

    2009-04-01

    Photosynthetic capacity is one of the most sensitive parameters of terrestrial biosphere models, and its representation in global-scale simulations has been severely hampered by a lack of systematic analyses using a sufficiently broad database. Because photosynthetic capacity is coupled to stomatal conductance, changes in its parameterisation may influence transpiration rates and vegetation surface temperature. Here, we provide a constrained parameterisation of photosynthetic capacity for different plant functional types in the context of the photosynthesis model proposed by Farquhar et al. (1980), based on a comprehensive compilation of leaf photosynthesis rates and leaf nitrogen content. Mean values of photosynthetic capacity were implemented into the coupled climate-vegetation model ECHAM5/JSBACH, and modelled gross primary production (GPP) was compared to a compilation of independent observations at the stand scale. Compared with the current standard parameterisation, the new parameterisation of photosynthetic capacity substantially reduces the root-mean-squared difference between modelled and observed GPP for almost all PFTs. We find a systematic depression of NUE (photosynthetic capacity divided by leaf nitrogen content) on certain tropical soils that are known to be deficient in phosphorus. The photosynthetic capacity of tropical trees derived in this study is substantially lower than the standard estimates currently used in terrestrial biosphere models. This decreases modelled GPP while significantly increasing modelled tropical vegetation surface temperatures, by up to 0.8°C. These results emphasise the importance of a constrained parameterisation of photosynthetic capacity not only for the carbon cycle, but also for the climate system.

  20. Zero dimensional model of atmospheric SMD discharge and afterglow in humid air

    NASA Astrophysics Data System (ADS)

    Smith, Ryan; Kemaneci, Efe; Offerhaus, Bjoern; Stapelmann, Katharina; Peter Brinkmann, Ralph

    2016-09-01

    A novel mesh-like Surface Micro Discharge (SMD) device designed for surface wound treatment is simulated with multiple time-scaled zero-dimensional models. The chemical dynamics of the discharge are resolved in time at atmospheric pressure under humid conditions. Simulated are the particle densities of electrons, 26 ionic species, and 26 reactive neutral species, including O3, NO, and HNO3. The 53 species are coupled by 624 reactions within the simulated plasma discharge volume. The neutral species are allowed to diffuse into a diffusive gas regime, which is of primary interest. Two interdependent zero-dimensional models, separated by nine orders of magnitude in temporal resolution, are used to accomplish this, thereby reducing the computational load. By varying control parameters such as ignition frequency, deposited power density, duty cycle, humidity level, and N2 content, the ideal operating conditions for the SMD device can be predicted. The described model has been verified by matching simulation parameters and comparing results to those of previous works. The current operating conditions of the experimental mesh-like SMD were matched, and the results are compared to the simulations. Work supported by SFB TR 87.
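    The kind of coupled rate equations such a zero-dimensional model integrates can be illustrated with a deliberately tiny scheme; the two reactions and all rate constants below are arbitrary stand-ins for the 624-reaction chemistry, not values from the paper:

```python
def integrate_rates(n0, rates, dt, steps):
    """Explicit-Euler integration of a toy zero-dimensional rate-equation
    system: O + O2 -> O3 (rate k1*[O][O2]) and first-order O3 loss (k2*[O3]).
    Densities and rate constants are in arbitrary units."""
    k1, k2 = rates
    o, o2, o3 = n0
    for _ in range(steps):
        r1 = k1 * o * o2      # production of O3
        r2 = k2 * o3          # first-order O3 loss
        o += dt * (-r1)
        o2 += dt * (-r1)
        o3 += dt * (r1 - r2)
    return o, o2, o3

# a small time step keeps the explicit scheme stable for these rates
final = integrate_rates((1.0, 10.0, 0.0), (0.1, 0.01), dt=1e-3, steps=20000)
```

    A production model would use a stiff implicit solver rather than explicit Euler; the multi-timescale splitting described in the abstract serves the same purpose of making the nine decades of chemical timescales tractable.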

  1. How well do simulated last glacial maximum tropical temperatures constrain equilibrium climate sensitivity?

    NASA Astrophysics Data System (ADS)

    Hopcroft, Peter O.; Valdes, Paul J.

    2015-07-01

    Previous work demonstrated a significant correlation between tropical surface air temperature and equilibrium climate sensitivity (ECS) in PMIP (Paleoclimate Modelling Intercomparison Project) phase 2 model simulations of the last glacial maximum (LGM). This implies that reconstructed LGM cooling in this region could provide information about the climate system ECS value. We analyze results from new simulations of the LGM performed as part of Coupled Model Intercomparison Project (CMIP5) and PMIP phase 3. These results show no consistent relationship between the LGM tropical cooling and ECS. A radiative forcing and feedback analysis shows that a number of factors are responsible for this decoupling, some of which are related to vegetation and aerosol feedbacks. While several of the processes identified are LGM specific and do not impact on elevated CO2 simulations, this analysis demonstrates one area where the newer CMIP5 models behave in a qualitatively different manner compared with the older ensemble. The results imply that so-called Earth System components such as vegetation and aerosols can have a significant impact on the climate response in LGM simulations, and this should be taken into account in future analyses.

  2. Chemical Feedback From Decreasing Carbon Monoxide Emissions

    NASA Astrophysics Data System (ADS)

    Gaubert, B.; Worden, H. M.; Arellano, A. F. J.; Emmons, L. K.; Tilmes, S.; Barré, J.; Martinez Alonso, S.; Vitt, F.; Anderson, J. L.; Alkemade, F.; Houweling, S.; Edwards, D. P.

    2017-10-01

    Understanding changes in the burden and growth rate of atmospheric methane (CH4) has been the focus of several recent studies but still lacks scientific consensus. Here we investigate the role of decreasing anthropogenic carbon monoxide (CO) emissions since 2002 on hydroxyl radical (OH) sinks and tropospheric CH4 loss. We quantify this impact by contrasting two model simulations for 2002-2013: (1) a Measurements of Pollution in the Troposphere (MOPITT) CO reanalysis and (2) a control run without CO assimilation. These simulations are performed with the Community Atmosphere Model with Chemistry of the Community Earth System Model, a fully coupled chemistry-climate model, with prescribed CH4 surface concentrations. The assimilation of MOPITT observations constrains the global CO burden, which decreased significantly, by 20%, over this period. We find that this decrease results in (a) an increase in CO chemical production, (b) higher CH4 oxidation by OH, and (c) an 8% shorter CH4 lifetime. We elucidate this coupling with a surrogate CO-OH-CH4 mechanism that is quantified from the full chemistry simulations.
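    The sign of the CO-OH-CH4 coupling can be illustrated with a steady-state OH budget in which OH is produced at a fixed rate and consumed by reaction with CO and CH4: less CO leaves more OH, which shortens the CH4 lifetime. The burdens and rate constants below are rough, illustrative numbers, not the model's:

```python
def steady_state_oh(co, ch4, p_oh=1.0, k_co=2.4e-13, k_ch4=6.4e-15):
    """Toy surrogate of the CO-OH-CH4 coupling: with fixed OH production
    P and loss to CO and CH4, steady state gives
    [OH] = P / (k_co*[CO] + k_ch4*[CH4]).  All values are illustrative."""
    return p_oh / (k_co * co + k_ch4 * ch4)

def ch4_lifetime(co, ch4=4.5e13, k_ch4=6.4e-15):
    """CH4 lifetime against OH loss: tau = 1 / (k_ch4 * [OH])."""
    return 1.0 / (k_ch4 * steady_state_oh(co, ch4))

# a 20% drop in the CO burden raises OH and shortens the CH4 lifetime
tau_hi = ch4_lifetime(co=1.0e13)
tau_lo = ch4_lifetime(co=0.8e13)
```

    This is the qualitative chain the abstract describes: assimilating MOPITT lowers the CO burden, the OH sink weakens, OH rises, and the CH4 lifetime falls.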

  3. Observations give us CLUES to Cosmic Flows' origins

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny; Courtois, H.; Gottloeber, S.; Hoffman, Y.; Pomarede, D.; Tully, R. B.; Flows, Cosmic; CLUES

    2014-01-01

    In an era when the wealth of telescope data and the development of computer superclusters keep increasing, understanding the formation and evolution of large-scale structures constitutes a tremendous challenge. Within this context, the Cosmic Flows project has recently produced a catalog of peculiar velocities out to 150 Mpc. These velocities, obtained from direct distance measurements, are ideal markers of the underlying gravitational potential, and they form an excellent input for constrained simulations of the Local Universe within the CLUES project. A new method has recently been elaborated to achieve these simulations, which prove to be excellent replicas of our neighborhood. The Wiener-Filter, the Reverse Zel'dovich Approximation and the Constrained Realization techniques are combined to build the initial conditions. The resulting second generation of constrained simulations presents the formidable history of the formation of the Great Attractor and nearby superclusters.

  4. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    NASA Astrophysics Data System (ADS)

    Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-12-01

    In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  5. Recent Results From MINERvA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patrick, Cheryl

    The MINERvA detector is situated in Fermilab's NuMI beam, which provides neutrinos and antineutrinos in the 1-20 GeV range. It is designed to make precision cross-section measurements for scattering processes on various nuclei. These proceedings summarize the differential cross-section distributions measured for several different processes. Comparison of these with various models hints at additional nuclear effects not included in common simulations. These results will help constrain generators' nuclear models and reduce systematic uncertainties on their predictions. An accurate cross-section model, with minimal uncertainties, is vital to oscillation experiments.

  6. A computer simulation study of VNTR population genetics: Constrained recombination rules out the infinite alleles model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, R.M.; Martinson, J.J.; Flint, J.

    1993-11-01

    Extensive allelic diversity in variable numbers of tandem repeats (VNTRs) has been discovered in the human genome. For population genetic studies of VNTRs, such as forensic applications, it is important to know whether a neutral mutation-drift balance of VNTR polymorphism can be represented by the infinite alleles model. The assumption of the infinite alleles model that each new mutant is unique is very likely to be violated by unequal sister chromatid exchange (USCE), the primary process believed to generate VNTR mutants. The authors show that increasing both mutation rates and misalignment constraint for intrachromosomal recombination in a computer simulation model reduces simulated VNTR diversity below the expectations of the infinite alleles model. Maximal constraint, represented as slippage of single repeats, reduces simulated VNTR diversity to levels expected from the stepwise mutation model. Although the misalignment rule is the more important variable, mutation rate also has an effect. At moderate rates of USCE, simulated VNTR diversity fluctuates around the infinite alleles expectation. However, if rates of USCE are high, as for hypervariable VNTRs, simulated VNTR diversity is consistently lower than predicted by the infinite alleles model. This has been observed for many VNTRs and has been accounted for by technical problems in distinguishing alleles of neighboring size classes. The authors use sampling theory to confirm the intrinsically poor fit to the infinite alleles model of both simulated VNTR diversity and observed VNTR polymorphisms sampled from two Papua New Guinean populations. 25 refs., 20 figs., 4 tabs.
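    The contrast between stepwise and infinite-alleles mutation under drift can be sketched with a toy Wright-Fisher simulation; the population size, mutation rate and run length below are illustrative, and the sketch omits the USCE misalignment rules the authors actually model:

```python
import random

def simulate_homozygosity(n=100, mu=0.005, generations=1000,
                          stepwise=True, seed=5):
    """Toy Wright-Fisher model: drift by resampling, plus either
    stepwise mutation (slip by +/- one repeat unit) or infinite-alleles
    mutation (every mutant is a brand-new allele).  Returns the
    homozygosity sum(p_i^2); higher homozygosity = lower diversity."""
    rng = random.Random(seed)
    pop = [50] * n            # repeat counts (stepwise) or allele labels
    label = 100
    for _ in range(generations):
        pop = [rng.choice(pop) for _ in range(n)]   # resampling (drift)
        for i in range(n):
            if rng.random() < mu:
                if stepwise:
                    pop[i] += rng.choice((-1, 1))   # slip one repeat unit
                else:
                    label += 1
                    pop[i] = label                  # unique new allele
    counts = {}
    for a in pop:
        counts[a] = counts.get(a, 0) + 1
    return sum((c / n) ** 2 for c in counts.values())

hom_step = simulate_homozygosity(stepwise=True)
hom_inf = simulate_homozygosity(stepwise=False)
```

    Because stepwise mutation recreates existing allele sizes, it is expected (on average over realizations) to retain fewer distinct alleles than the infinite alleles model, which is the direction of the discrepancy the abstract reports.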

  7. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

    PubMed Central

    Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen

    2018-01-01

    The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, but its local exploitation capability is relatively poor, which affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome this deficiency. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion of simulated annealing. The redesigned F decreases adaptively as the number of iterations increases and introduces no extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms on thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrate that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. PMID:29666635
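    The two ingredients of the modification — a Metropolis acceptance test and an amplitude factor that decays as iterations accumulate — can be sketched on a one-dimensional problem. The decay schedule and the search loop below are simplified stand-ins, not the BSAISA update rules themselves:

```python
import math
import random

def metropolis_accept(delta, temperature, rng):
    """Metropolis criterion: always accept an improvement; accept a
    worse trial with probability exp(-delta / T)."""
    if delta <= 0:
        return True
    return rng.random() < math.exp(-delta / temperature)

def amplitude(iteration, max_iter):
    """Hypothetical amplitude-control factor that decays with iteration
    count (the paper's exact schedule may differ)."""
    return math.exp(-iteration / max_iter)

def annealed_search(f, lo, hi, max_iter=500, seed=7):
    """Toy annealed random search combining the two ingredients above."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)
    best = fx = f(x)
    for it in range(max_iter):
        # step size shrinks as the amplitude factor decays
        step = amplitude(it, max_iter) * (hi - lo) * rng.gauss(0.0, 1.0)
        trial = min(max(x + step, lo), hi)      # clamp to the bounds
        ft = f(trial)
        temperature = max(amplitude(it, max_iter), 1e-9)
        if metropolis_accept(ft - fx, temperature, rng):
            x, fx = trial, ft
        best = min(best, fx)
    return best

best = annealed_search(lambda x: (x - 2.0) ** 2, -5.0, 5.0)
```

    In the actual BSAISA the same Metropolis idea modulates the population-level amplitude factor F rather than a single-point step size.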

  8. Dynamic Financial Constraints: Distinguishing Mechanism Design from Exogenously Incomplete Regimes*

    PubMed Central

    Karaivanov, Alexander; Townsend, Robert M.

    2014-01-01

    We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross-sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model fitting best combined business and consumption data. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured. PMID:25246710

  9. PI controller design of a wind turbine: evaluation of the pole-placement method and tuning using constrained optimization

    NASA Astrophysics Data System (ADS)

    Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.

    2016-09-01

    PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and extensive aeroelastic simulations are then used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In traditional tuning approaches, the properties of the different open-loop and closed-loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. A constrained optimization setup is then suggested to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2, and the model is then reduced with model order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
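    The IAE cost used in such a tuning can be illustrated on a toy loop: a first-order plant under PI feedback, hit by a unit step disturbance, with the gains chosen by a coarse grid search. The plant, gains and grid below are illustrative assumptions; the paper uses a reduced HAWCStab2 model and robustness constraints (Ms, Mt) instead:

```python
def iae_of_pi(kp, ki, a=0.5, b=1.0, d=1.0, dt=0.01, t_end=20.0):
    """IAE of a toy first-order plant dx/dt = -a*x + b*u + d under PI
    feedback u = -kp*x - ki*integral(x), after a unit step disturbance d.
    (A stand-in for the rotor-speed loop and wind-speed step of the paper.)"""
    x = 0.0       # regulated output (e.g. speed error)
    xi = 0.0      # integral of the error
    iae = 0.0
    for _ in range(int(t_end / dt)):
        u = -kp * x - ki * xi
        x += dt * (-a * x + b * u + d)   # explicit-Euler plant update
        xi += dt * x
        iae += dt * abs(x)               # accumulate integral |error| dt
    return iae

# coarse grid search; a real tuning would also impose robustness constraints
best = min((iae_of_pi(kp, ki), kp, ki)
           for kp in (0.5, 1.0, 2.0, 4.0)
           for ki in (0.5, 1.0, 2.0, 4.0))
```

    With integral action the disturbance is rejected completely, so the IAE is dominated by how quickly the loop recovers, which is exactly why it is a useful scalar cost for the constrained optimization.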

  10. Section-constrained local geological interface dynamic updating method based on the HRBF surface

    NASA Astrophysics Data System (ADS)

    Guo, Jiateng; Wu, Lixin; Zhou, Wenhui; Li, Chaoling; Li, Fengdan

    2018-02-01

    Boundaries, attitudes and sections are the most common data acquired from regional field geological surveys, and they are used for three-dimensional (3D) geological modelling. However, constructing topologically consistent 3D geological models from rapid and automatic regional modelling with convenient local modifications remains unresolved. In previous works, the Hermite radial basis function (HRBF) surface was introduced for the simulation of geological interfaces from geological boundaries and attitudes, which allows 3D geological models to be automatically extracted from the modelling area by the interfaces. However, the reasonability and accuracy of non-supervised subsurface modelling is limited without further modifications generated through explanations and analyses performed by geology experts. In this paper, we provide flexible and convenient manual interactive manipulation tools for geologists to sketch constraint lines, and these tools may help geologists transform and apply their expert knowledge to the models. In the modified modelling workflow, the geological sections were treated as auxiliary constraints to construct more reasonable 3D geological models. The geometric characteristics of section lines were abstracted to coordinates and normal vectors, and along with the transformed coordinates and vectors from boundaries and attitudes, these characteristics were adopted to co-calculate the implicit geological surface function parameters of the HRBF equations and form constrained geological interfaces from topographic (boundaries and attitudes) and subsurface data (sketched sections). Based on this new modelling method, a prototype system was developed, in which the section lines could be imported from databases or interactively sketched, and the models could be immediately updated after the new constraints were added. 
Experimental comparisons showed that all boundary, attitude and section data are well represented in the constrained models, which are consistent with expert explanations and help improve the quality of the models.

  11. Integrated reservoir characterization and flow simulation for well targeting and reservoir management, Iagifu-Hedinia field, Southern Highlands Province, Papua New Guinea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Franklin, S.P.; Livingston, J.E.; Fitzmorris, R.E.

    Infill drilling based on integrated reservoir characterization and flow simulation is increasing recoverable reserves by 20 MMBO in Iagifu-Hedinia Field (IHF). Stratigraphically-zoned models are input to window and full-field flow simulations, and results of the flow simulations target deviated and horizontal wells. Logging and pressure surveys facilitate detailed reservoir management. Flooding surfaces are the dominant control on differential depletion within and between reservoirs. The primary reservoir is the basal Cretaceous Toro Sandstone. Within the IHF, the Toro is a 100 m quartz sandstone composed of stacked, coarsening-upward parasequences within a wave-dominated deltaic complex. Flooding surfaces are used to form a hydraulic zonation. The zonation is refined using discontinuities in RFT pressure gradients and logs from development wells. For flow simulation, models use 3D geostatistical techniques. First, variograms defining spatial correlation are developed. The variograms are used to construct 3D porosity and permeability models which reflect the stratigraphic facies models. Structure models are built using dipmeter, biostratigraphic, and surface data. Deviated wells often cross axial surfaces, and geometry is predicted from dip domain and SCAT. Faults are identified using pressure transient data and dipmeter. The Toro reservoir is subnormally pressured and fluid contacts are hydrodynamically tilted. The hydrodynamic flow and tilted contacts are modeled by flow simulation and constrained by maps of the potentiometric surface.

  12. Constraining the Late Miocene paleo-CO2 estimates through GCM model-data comparisons

    NASA Astrophysics Data System (ADS)

    Bradshaw, Catherine; Pound, Matthew; Lunt, Daniel; Flecker, Rachel; Salzmann, Ulrich; Haywood, Alan; Riding, James; Francis, Jane

    2010-05-01

    The period following the Mid-Miocene Climatic Optimum experienced a continued downward trend in the δ18O record, a record acknowledged as a proxy indicator of both ice volume and temperature (Zachos et al., 2001). Given the link between atmospheric CO2 and temperature (IPCC, 2007), one might expect a general decline in CO2 throughout the Late Miocene in accordance with the δ18O record. However, examination of the palaeo-CO2 record shows a relatively flat profile across this time, or perhaps even a slight increase, and there is wide variation in the palaeo-CO2 estimates from the different approximation methods. We use the fully coupled atmosphere-ocean-vegetation model of the Hadley Centre, HadCM3L (Hadley Centre Coupled Model, Version 3, with a low-resolution ocean), with TRIFFID (Top-down Representation of Interactive Foliage and Flora Including Dynamics; Cox, 2001) to generate CO2 sensitivity scenarios for the Late Miocene (180 ppmv, 280 ppmv and 400 ppmv) as well as a preindustrial control simulation (280 ppmv). We also run the BIOME4 model offline to produce predicted biome distributions for each of our scenarios. We compare the modelled marine and terrestrial temperatures and the predicted vegetation distributions for these scenarios against the available palaeodata. Because we simulate with a coupled dynamic ocean model, we can compare planktonic and benthic foraminiferal proxy palaeotemperature estimates to the modelled marine temperatures at depths consistent with the reconstructed palaeoecology of the foraminifera. We compare our modelled terrestrial temperatures to vegetation-based proxy palaeotemperatures, and we use a newly compiled vegetation reconstruction for the Late Miocene to compare to our modelled vegetation distributions. The new Late Miocene vegetation reconstruction is based on a database of more than 200 palaeobotanical sites. Each location is classified into a biome consistent with the BIOME4 model, to allow for easy data-model comparison. We use all these data-model comparisons to constrain the best-fit scenario and the overall most likely Late Miocene CO2 estimate according to the model simulations. Preliminary results suggest that the 400 ppmv simulation provides the best fit to the proxy data.

  13. The Compton-thick Growth of Supermassive Black Holes constrained

    NASA Astrophysics Data System (ADS)

    Buchner, Johannes; Georgakakis, Antonis; Nandra, Kirpal; Brightman, Murray; Menzel, Marie-Luise; Liu, Zhu; Hsu, Li-Ting; Salvato, Mara; Rangel, Cyprian; Aird, James

    2017-08-01

    A heavily obscured growth phase of supermassive black holes (SMBH) is thought to be important in the co-evolution with galaxies. X-rays provide a clean and efficient selection of unobscured and obscured AGN. Recent work with deeper observations and improved analysis methodology allowed us to extend constraints to Compton-thick number densities. We present the first luminosity function of Compton-thick AGN at z=0.5-4 and constrain the overall mass density locked into black holes over cosmic time, a fundamental constraint for cosmological simulations. Recent studies including ours find that the obscuration is redshift and luminosity-dependent in a complex way, which rules out entire sets of obscurer models. A new paradigm, the radiation-lifted torus model, is proposed, in which the obscurer is Eddington-rate dependent and accretion creates and displaces torus clouds. We place observational limits on the behaviour of this mechanism.

  15. Finite-density effects in the Fredrickson-Andersen and Kob-Andersen kinetically-constrained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teomy, Eial, E-mail: eialteom@post.tau.ac.il; Shokef, Yair, E-mail: shokef@tau.ac.il

    2014-08-14

    We calculate the corrections to the thermodynamic limit of the critical density for jamming in the Kob-Andersen and Fredrickson-Andersen kinetically-constrained models, and find them to be finite-density corrections, not finite-size corrections. We do this by introducing a new numerical algorithm, which requires negligible computer memory since, contrary to alternative approaches, it generates at each point only the necessary data. The algorithm starts from a single unfrozen site and at each step randomly generates the neighbors of the unfrozen region and checks whether they are frozen or not. Our results correspond to systems of size greater than 10^7 × 10^7, much larger than any simulated before, and are consistent with the rigorous bounds on the asymptotic corrections. We also find that the average number of sites that seed a critical droplet is greater than 1.
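    A conventional way to find the frozen structure in such kinetically-constrained models is iterative culling over a full stored lattice, which is loosely related to, but far less memory-efficient than, the authors' on-the-fly construction. The lattice size, density and constraint threshold below are illustrative:

```python
import random

def frozen_backbone(lattice, m=2):
    """Iterative culling sketch for a kinetically-constrained lattice gas:
    repeatedly remove any particle with at least m empty neighbours
    (it could move under the kinetic constraint); the particles that
    survive form the frozen (jammed) backbone."""
    n = len(lattice)
    occ = {(i, j) for i in range(n) for j in range(n) if lattice[i][j]}
    changed = True
    while changed:
        changed = False
        for i, j in list(occ):
            # count empty neighbours on the periodic square lattice
            empty = sum((ni % n, nj % n) not in occ
                        for ni, nj in ((i + 1, j), (i - 1, j),
                                       (i, j + 1), (i, j - 1)))
            if empty >= m:
                occ.discard((i, j))   # this particle can escape: cull it
                changed = True
    return occ

rng = random.Random(0)
n, density = 32, 0.7
lattice = [[rng.random() < density for _ in range(n)] for _ in range(n)]
backbone = frozen_backbone(lattice)
```

    The memory cost here is the whole n × n lattice, which is exactly what the authors' single-seed, generate-neighbors-on-demand algorithm avoids, enabling their effective system sizes beyond 10^7 × 10^7.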

  16. Impact of isoprene and HONO chemistry on ozone and OVOC formation in a semirural South Korean forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Saewung; Kim, So-Young; Lee, Meehye

    Rapid urbanization and economic development in East Asia in past decades has led to photochemical air pollution problems such as excess photochemical ozone and aerosol formation. Asian megacities such as Seoul, Tokyo, Shanghai, Guangzhou, and Beijing are surrounded by densely forested areas, and recent research has consistently demonstrated the importance of biogenic volatile organic compounds from vegetation in determining oxidation capacity in suburban Asian megacity regions. Uncertainties in constraining tropospheric oxidation capacity, dominated by hydroxyl radical concentrations, undermine our ability to assess regional photochemical air pollution problems. We present an observational dataset of CO, NOx, SO2, ozone, HONO, and VOCs (anthropogenic and biogenic) from Taehwa Research Forest (TRF) near the Seoul Metropolitan Area (SMA) in early June 2012. The data show that TRF is influenced both by aged pollution and by fresh BVOC emissions. With this dataset, we diagnose HOx (OH, HO2, and RO2) distributions calculated with the University of Washington Chemical Box Model (UWCM v2.1). Uncertainty from unconstrained HONO sources and radical recycling processes highlighted in recent studies is examined using multiple model simulations with different model constraints. The results suggest that 1) different model simulation scenarios cause systematic differences in HOx distributions, especially OH levels (up to 2.5 times), and 2) radical destruction (HO2+HO2 or HO2+RO2) could be more efficient than radical recycling (HO2+NO), especially in the afternoon. Implications of the uncertainties in radical chemistry are discussed with respect to ozone-VOC-NOx sensitivity and oxidation product formation rates. Overall, a VOC-limited regime in ozone photochemistry is predicted, but the degree of sensitivity can vary significantly depending on the model scenario. The model results also suggest that RO2 levels are positively correlated with the production of OVOCs that are not routinely constrained by observations. These unconstrained OVOCs can cause higher-than-expected OH loss rates (missing OH reactivity) and secondary organic aerosol formation. This series of modeling experiments constrained by observations strongly urges observational constraint of the radical pool to enable precise understanding of regional photochemical pollution problems in East Asian megacity regions.

  17. Reionization Models Classifier using 21cm Map Deep Learning

    NASA Astrophysics Data System (ADS)

    Hassan, Sultan; Liu, Adrian; Kohn, Saul; Aguirre, James E.; La Plante, Paul; Lidz, Adam

    2018-05-01

    Next-generation 21cm observations will enable imaging of reionization on very large scales. These images will contain more astrophysical and cosmological information than the power spectrum, and hence provide an alternative way to constrain the contribution of different reionizing source populations to cosmic reionization. Using convolutional neural networks, we present a simple network architecture that is sufficient to discriminate between galaxy-dominated and AGN-dominated models, even in the presence of simulated noise from different experiments such as HERA and the SKA.

  18. Constraining Future Sea Level Rise Estimates from the Amundsen Sea Embayment, West Antarctica

    NASA Astrophysics Data System (ADS)

    Nias, I.; Cornford, S. L.; Edwards, T.; Gourmelen, N.; Payne, A. J.

    2016-12-01

    The Amundsen Sea Embayment (ASE) is the primary source of mass loss from the West Antarctic Ice Sheet. The catchment is particularly susceptible to grounding-line retreat because the ice sheet is grounded on bedrock that is below sea level and deepens towards the interior. Mass loss from the ASE ice streams, which include Pine Island, Thwaites and Smith glaciers, is a major uncertainty in future sea level rise, and understanding the dynamics of these ice streams is essential to constraining this uncertainty. The aim of this study is to construct a distribution of future ASE sea level contributions from an ensemble of ice sheet model simulations and observations of surface elevation change. A 284-member ensemble was performed using BISICLES, a vertically-integrated ice flow model with adaptive mesh refinement. Within the ensemble, parameters associated with basal traction, ice rheology and sub-shelf melt rate were perturbed, and the effects of bed topography and sliding law were also investigated. Initially, each configuration was run to 50 model years. Satellite observations of surface height change were then used within a Bayesian framework to assign likelihoods to each ensemble member: simulations that better reproduced the current thinning patterns across the catchment were given a higher score. The resulting posterior distribution of sea level contributions is narrower than the prior distribution, although the central estimates of sea level rise are similar between the prior and posterior. The most extreme simulations were eliminated, and the remaining ensemble members were extended to 200 years using a simple melt rate forcing.
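    The Bayesian scoring step can be sketched as Gaussian likelihood weighting of ensemble predictions against an observation; the ensemble values, the observation and the error scale below are invented for illustration, and the real study scores spatial thinning patterns rather than a single scalar:

```python
import math
import random

def posterior_weights(predictions, observation, sigma):
    """Score each ensemble member with a Gaussian likelihood of the
    observation and normalise to posterior weights (uniform prior)."""
    lik = [math.exp(-0.5 * ((p - observation) / sigma) ** 2)
           for p in predictions]
    total = sum(lik)
    return [l / total for l in lik]

rng = random.Random(3)
# hypothetical 284-member ensemble of modelled thinning rates (m/yr)
ensemble = [rng.uniform(0.0, 4.0) for _ in range(284)]
weights = posterior_weights(ensemble, observation=1.5, sigma=0.5)
posterior_mean = sum(w * p for w, p in zip(weights, ensemble))
```

    Members far from the observation receive negligible weight, which is how the posterior distribution of sea level contributions ends up narrower than the prior even though the weighting never discards members outright.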

  19. Planetary geology: Impact processes on asteroids

    NASA Technical Reports Server (NTRS)

    Chapman, C. R.; Davis, D. R.; Greenberg, R.; Weidenschilling, S. J.

    1982-01-01

    The fundamental geological and geophysical properties of asteroids were studied by theoretical and simulation studies of their collisional evolution. Numerical simulations incorporating realistic physical models were developed to study the collisional evolution of hypothetical asteroid populations over the age of the solar system. Ideas and models are constrained by the observed distributions of sizes, shapes, and spin rates in the asteroid belt, by properties of Hirayama families, and by experimental studies of cratering and collisional phenomena. It is suggested that many asteroids are gravitationally-bound "rubble piles." Those that rotate rapidly may have nonspherical quasi-equilibrium shapes, such as ellipsoids or binaries. Through comparison of models with astronomical data, physical properties of these asteroids (including bulk density) are determined, and physical processes that have operated in the solar system in primordial and subsequent epochs are studied.

  20. NASA Handbook for Models and Simulations: An Implementation Guide for NASA-STD-7009

    NASA Technical Reports Server (NTRS)

    Steele, Martin J.

    2013-01-01

    The purpose of this Handbook is to provide technical information, clarification, examples, processes, and techniques to help institute good modeling and simulation practices in the National Aeronautics and Space Administration (NASA). As a companion guide to NASA-STD-7009, Standard for Models and Simulations, this Handbook provides a broader scope of information than may be included in a Standard and promotes good practices in the production, use, and consumption of NASA modeling and simulation products. NASA-STD-7009 specifies what a modeling and simulation activity shall or should do (in the requirements) but does not prescribe how the requirements are to be met, which varies with the specific engineering discipline, or who is responsible for complying with the requirements, which depends on the size and type of project. A guidance document, which is not constrained by the requirements of a Standard, is better suited to address these additional aspects and provide necessary clarification. This Handbook stems from the Space Shuttle Columbia Accident Investigation (2003), which called for Agency-wide improvements in the "development, documentation, and operation of models and simulations," which subsequently elicited additional guidance from the NASA Office of the Chief Engineer to include "a standard method to assess the credibility of the models and simulations." General methods applicable across the broad spectrum of model and simulation (M&S) disciplines were sought to help guide the modeling and simulation processes within NASA and to provide for consistent reporting of M&S activities and analysis results. From this, the standardized process for the M&S activity was developed. The major contents of this Handbook are the implementation details of the general M&S requirements of NASA-STD-7009, including explanations, examples, and suggestions for improving the credibility assessment of an M&S-based analysis.

  1. Multiplatform Mission Planning and Operations Simulation Environment for Adaptive Remote Sensors

    NASA Astrophysics Data System (ADS)

    Smith, G.; Ball, C.; O'Brien, A.; Johnson, J. T.

    2017-12-01

    We report on the design and development of mission simulator libraries to support the emerging field of adaptive remote sensors. We will outline the current state of the art in adaptive sensing, provide analysis of how the current approach to performing observing system simulation experiments (OSSEs) must be changed to enable adaptive sensors for remote sensing, and present an architecture to enable their inclusion in future OSSEs. The growing potential of sensors capable of real-time adaptation of their operational parameters calls for a new class of mission planning and simulation tools. Existing simulation tools used in OSSEs assume a fixed set of sensor parameters in terms of observation geometry, frequencies used, resolution, or observation time, which allows simplifications to be made in the simulation and allows sensor observation errors to be characterized a priori. Adaptive sensors may vary these parameters depending on the details of the scene observed, so that sensor performance is not simple to model without conducting OSSE simulations that include sensor adaptation in response to a varying observational environment. Adaptive sensors are of significance to resource-constrained, small satellite platforms because they enable the management of power and data volumes while providing methods for multiple sensors to collaborate. The new class of OSSEs required to utilize adaptive sensors located on multiple platforms must answer the question: if the physical act of sensing has a cost, how does the system determine whether the science value of a measurement is worth the cost, and how should that cost be shared among the collaborating sensors? Here we propose to answer this question using an architecture structured around three modules: ADAPT, MANAGE and COLLABORATE. The ADAPT module is a set of routines to facilitate modeling of adaptive sensors, the MANAGE module will implement a set of routines to facilitate simulations of sensor resource management when power and data volume are constrained, and the COLLABORATE module will support simulations of coordination among multiple platforms with adaptive sensors. When used together, these modules will form a simulation framework for OSSEs that can enable both the design of adaptive algorithms to support remote sensing and the prediction of sensor performance.

  2. Unsupervised Bayesian linear unmixing of gene expression microarrays.

    PubMed

    Bazot, Cécile; Dobigeon, Nicolas; Tourneret, Jean-Yves; Zaas, Aimee K; Ginsburg, Geoffrey S; Hero, Alfred O

    2013-03-19

    This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high-dimensional assays like gene expression microarrays. The basis for uBLU is a Bayesian model for the data samples, which are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. The particularity of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors. Furthermore, it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted here to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors. These samples are then used to estimate all the unknown parameters. Firstly, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non-negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Secondly, we illustrate the application of uBLU on a real time-evolving gene expression dataset from a recent viral challenge study in which individuals were inoculated with influenza A/H3N2/Wisconsin. The results obtained on synthetic and real data show that uBLU significantly outperforms the other factor decomposition methods considered here (PCA, NMF, BFRM, and GB-GMF). The uBLU method identifies an inflammatory component closely associated with clinical symptom scores collected during the study. Using a constrained model allows recovery of all the inflammatory genes in a single factor.
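
    The model structure (an additive mixture of non-negative signatures with factor scores constrained to the probability simplex) can be sketched as below. The factors, gene count, and sample count are invented for illustration, and the Gibbs sampler itself is omitted:

```python
import random

def normalize_simplex(v):
    """Clip to non-negative and rescale to sum to 1: the factor-score
    constraint in a uBLU-style model."""
    v = [max(x, 0.0) for x in v]
    s = sum(v) or 1.0
    return [x / s for x in v]

def reconstruct(scores, factors):
    """data[i][g] = sum_k scores[i][k] * factors[k][g] (the linear mixture)."""
    return [[sum(a[k] * factors[k][g] for k in range(len(factors)))
             for g in range(len(factors[0]))] for a in scores]

random.seed(1)
# Two hypothetical non-negative "gene signatures" (factors) over 4 genes:
factors = [[2.0, 1.0, 0.0, 0.5], [0.0, 0.5, 2.0, 1.0]]
# Factor scores for 3 samples, each constrained to the probability simplex:
scores = [normalize_simplex([random.random(), random.random()])
          for _ in range(3)]
data = reconstruct(scores, factors)   # 3 samples x 4 genes
```

    In the full method, factors, scores, and the number of factors are all sampled from their posterior; the simplex constraint above is what makes each score vector interpretable as relative signature contributions.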

  3. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    NASA Astrophysics Data System (ADS)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models tend to contain a large number of poorly defined and spatially varying model parameters which are therefore computationally expensive to calibrate. Insufficient data can result in model parameter and structural equifinality, particularly when calibration is reliant on catchment outlet discharge behaviour alone. Evaluating spatial patterns of internal hydrological behaviour has the potential to reveal simulations that, whilst consistent with measured outlet discharge, are qualitatively dissimilar to our perceptual understanding of how the system should behave. We argue that such understanding, which may be derived from stakeholder knowledge across different catchments for certain process dynamics, is a valuable source of information to help reject non-behavioural models, and therefore identify feasible model structures and parameters. The challenge, however, is to convert different sources of often qualitative and/or semi-quantitative information into robust quantitative constraints on model states and fluxes, and to combine these sources of information to reject models within an efficient calibration framework. Here we present the development of a framework to incorporate different sources of data to efficiently calibrate distributed catchment models. For each source of information, an interval or inequality is used to define the behaviour of the catchment system. These intervals are then combined to produce a hyper-volume in state space, which is used to identify behavioural models. We apply the methodology to calibrate the Penn State Integrated Hydrological Model (PIHM) at the Wye catchment, Plynlimon, UK. Outlet discharge behaviour is successfully simulated when perceptual understanding of relative groundwater levels between lowland peat, upland peat and valley slopes within the catchment is used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constrain model states.
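
    A minimal sketch of turning such perceptual understanding into interval and inequality constraints follows. The state names, bounds, and run values are hypothetical, chosen only to show the rejection logic:

```python
# Hypothetical perceptual constraints expressed as intervals on simulated
# states (e.g. groundwater depth below surface, in metres) ...
constraints = [
    ("lowland_peat_gw", 0.0, 0.3),
    ("upland_peat_gw", 0.0, 0.5),
]
# ... and inequalities encoding relative behaviour (valley-slope water table
# deeper than in the lowland peat).
inequalities = [
    lambda s: s["valley_slope_gw"] > s["lowland_peat_gw"],
]

def is_behavioural(sim):
    """A run is behavioural only if every interval and inequality holds,
    i.e. it lies inside the hyper-volume in state space."""
    in_intervals = all(lo <= sim[k] <= hi for k, lo, hi in constraints)
    in_order = all(rule(sim) for rule in inequalities)
    return in_intervals and in_order

runs = [
    {"lowland_peat_gw": 0.1, "upland_peat_gw": 0.2, "valley_slope_gw": 1.5},
    {"lowland_peat_gw": 0.4, "upland_peat_gw": 0.2, "valley_slope_gw": 1.5},   # interval fails
    {"lowland_peat_gw": 0.1, "upland_peat_gw": 0.2, "valley_slope_gw": 0.05},  # ordering fails
]
behavioural = [r for r in runs if is_behavioural(r)]
```

    Each information source contributes one constraint; their intersection defines the hyper-volume, and only runs inside it survive calibration.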

  4. Statistical modelling as an aid to the design of retail sampling plans for mycotoxins in food.

    PubMed

    MacArthur, Roy; MacDonald, Susan; Brereton, Paul; Murray, Alistair

    2006-01-01

    A study has been carried out to assess appropriate statistical models for use in evaluating retail sampling plans for the determination of mycotoxins in food. A compound gamma model was found to be a suitable fit. A simulation model based on the compound gamma model was used to produce operating characteristic curves for a range of parameters relevant to retail sampling. The model was also used to estimate the minimum number of increments necessary to minimize the overall measurement uncertainty. Simulation results showed that measurements based on retail samples (for which the maximum number of increments is constrained by cost) may produce fit-for-purpose results for the measurement of ochratoxin A in dried fruit, but are unlikely to do so for the measurement of aflatoxin B1 in pistachio nuts. In order to produce a more accurate simulation, further work is required to determine the degree of heterogeneity associated with batches of food products. With appropriate parameterization in terms of physical and biological characteristics, the systems developed in this study could be applied to other analyte/matrix combinations.
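
    The kind of simulation involved can be sketched with a plain gamma model (a simplification of the paper's compound-gamma fit); the shape parameter, regulatory limit, and increment count below are illustrative assumptions:

```python
import random

random.seed(42)

def sample_measurement(true_mean, shape, n_increments):
    """Mean of n increments whose concentrations follow a gamma distribution,
    a stand-in for the compound-gamma heterogeneity model in the paper."""
    scale = true_mean / shape
    incs = [random.gammavariate(shape, scale) for _ in range(n_increments)]
    return sum(incs) / n_increments

def p_accept(true_mean, limit, shape=2.0, n_increments=10, trials=2000):
    """Monte-Carlo estimate of the acceptance probability of a batch whose
    true contamination is true_mean, i.e. one point on the OC curve."""
    ok = sum(sample_measurement(true_mean, shape, n_increments) <= limit
             for _ in range(trials))
    return ok / trials

# Operating-characteristic points: acceptance probability vs true contamination
limit = 10.0   # hypothetical regulatory limit (same units as the mean)
oc = {m: p_accept(m, limit) for m in (5.0, 10.0, 20.0)}
```

    Sweeping the true mean traces out the operating-characteristic curve; increasing `n_increments` narrows the sampling distribution of the measurement and steepens it.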

  5. Cosmological constraints with weak-lensing peak counts and second-order statistics in a large-field survey

    NASA Astrophysics Data System (ADS)

    Peel, Austin; Lin, Chieh-An; Lanusse, François; Leonard, Adrienne; Starck, Jean-Luc; Kilbinger, Martin

    2017-03-01

    Peak statistics in weak-lensing maps access the non-Gaussian information contained in the large-scale distribution of matter in the Universe. They are therefore a promising complementary probe to two-point and higher-order statistics to constrain our cosmological models. Next-generation galaxy surveys, with their advanced optics and large areas, will measure the cosmic weak-lensing signal with unprecedented precision. To prepare for these anticipated data sets, we assess the constraining power of peak counts in a simulated Euclid-like survey on the cosmological parameters Ωm, σ8, and w_0^de. In particular, we study how Camelus, a fast stochastic model for predicting peaks, can be applied to such large surveys. The algorithm avoids the need for time-costly N-body simulations, and its stochastic approach provides full PDF information of observables. Considering peaks with a signal-to-noise ratio ≥ 1, we measure the abundance histogram in a mock shear catalogue of approximately 5000 deg² using a multiscale mass-map filtering technique. We constrain the parameters of the mock survey using Camelus combined with approximate Bayesian computation (ABC), a robust likelihood-free inference algorithm. Peak statistics yield a tight but significantly biased constraint in the σ8-Ωm plane, as measured by the width ΔΣ8 of the 1σ contour. We find Σ8 = σ8(Ωm/0.27)^α = 0.77 (+0.06, -0.05) with α = 0.75 for a flat ΛCDM model. The strong bias indicates the need to better understand and control the model systematics before applying it to a real survey of this size or larger. We perform a calibration of the model and compare results to those from the two-point correlation functions ξ± measured on the same field. We calibrate the ξ± result as well, since its contours are also biased, although not as severely as for peaks. In this case, we find for peaks Σ8 = 0.76 (+0.02, -0.03) with α = 0.65, while for the combined ξ+ and ξ- statistics the values are Σ8 = 0.76 (+0.02, -0.01) and α = 0.70. We conclude that the constraining power can be comparable between the two weak-lensing observables in large-field surveys. Furthermore, the tilt in the σ8-Ωm degeneracy direction for peaks with respect to that of ξ± suggests that a combined analysis would yield tighter constraints than either measure alone. As expected, w_0^de cannot be well constrained without a tomographic analysis, but its degeneracy directions with the other two varied parameters are still clear for both peaks and ξ±.
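
    Approximate Bayesian computation in its simplest rejection form can be sketched as follows. The toy forward model linking σ8 to a peak count is a stand-in for the Camelus prediction, and the prior range, tolerance, and normalisation are invented for illustration:

```python
import math
import random

random.seed(7)

def poisson(lam):
    """Poisson draw via Knuth's algorithm (pure stdlib)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_peak_count(sigma8):
    """Toy stochastic forward model: the expected number of high-S/N peaks
    grows with sigma_8 (hypothetical scaling, not the Camelus model)."""
    return poisson(50.0 * sigma8 ** 2)

observed = simulate_peak_count(0.8)   # pretend this is the measured count

def abc_rejection(observed, n_draws=3000, tol=3):
    """Keep prior draws whose simulated summary lies within tol of the data;
    the accepted draws approximate the posterior without any likelihood."""
    accepted = []
    for _ in range(n_draws):
        sigma8 = random.uniform(0.5, 1.1)   # flat prior on sigma_8
        if abs(simulate_peak_count(sigma8) - observed) <= tol:
            accepted.append(sigma8)
    return accepted

posterior = abc_rejection(observed)
post_mean = sum(posterior) / len(posterior)
```

    The real analysis replaces the scalar count with the full abundance histogram and a distance between histograms, but the accept/reject structure is the same.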

  6. A Mixed Phase Tale: New Ways of using in-situ cloud observations to reduce climate model biases in Southern Ocean

    NASA Astrophysics Data System (ADS)

    Gettelman, A.; Stith, J. L.

    2014-12-01

    Southern Ocean clouds are a critical part of the earth's energy budget, and significant biases in the climatology of these clouds exist in models used to predict climate change. We compare in situ measurements of cloud microphysical properties of ice and liquid over the Southern Ocean with constrained output from the atmospheric component of an Earth System Model. Observations taken during the HIAPER (the NSF/NCAR G-V aircraft) Pole-to-Pole Observations (HIPPO) multi-year field campaign are compared with simulations from the atmospheric component of the Community Earth System Model (CESM). Remarkably, CESM is able to accurately simulate the locations of cloud formation, and even cloud microphysical properties are comparable between the model and observations. However, the simulations do not predict sufficient supercooled liquid. Altering the model cloud and aerosol processes to better reproduce the observations of supercooled liquid acts to reduce long-standing biases in Southern Ocean clouds in CESM, which are typical of other models. Furthermore, sensitivity tests show where better observational constraints on aerosols and cloud microphysics can reduce uncertainty and biases in global models. These results are intended to show how we can connect large scale simulations with field observations in the Southern Ocean to better understand Southern Ocean cloud processes and reduce biases in global climate simulations.

  7. SPH/N-body simulations of small (D = 10 km) monolithic asteroidal breakups and improved parametric relations for Monte-Carlo collisional models

    NASA Astrophysics Data System (ADS)

    Ševeček, Pavel; Brož, Miroslav; Nesvorný, David; Durda, Daniel D.; Asphaug, Erik; Walsh, Kevin J.; Richardson, Derek C.

    2016-10-01

    Detailed models of asteroid collisions can yield important constraints for the evolution of the Main Asteroid Belt, but the respective parameter space is large and often unexplored. We thus performed a new set of simulations of asteroidal breakups, i.e. fragmentations of intact targets, subsequent gravitational reaccumulation and formation of small asteroid families, focusing on parent bodies with diameters D = 10 km. Simulations were performed with a smoothed-particle hydrodynamics (SPH) code (Benz & Asphaug 1994), combined with an efficient N-body integrator (Richardson et al. 2000). We assumed a number of projectile sizes, impact velocities and impact angles. The rheology used in the physical model includes neither friction nor crushing; this allows for a direct comparison to the results of Durda et al. (2007). The resulting size-frequency distributions are significantly different from scaled-down simulations with D = 100 km monolithic targets, although they may be even more different for pre-shattered targets. We derive new parametric relations describing fragment distributions, suitable for Monte-Carlo collisional models. We also characterize velocity fields and angular distributions of fragments, which can be used as initial conditions in N-body simulations of small asteroid families. Finally, we discuss various uncertainties related to SPH simulations.
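
    The role such parametric relations play in a Monte-Carlo collisional model can be sketched with a generic truncated power-law size-frequency distribution. The slope, largest-fragment fraction, and size cut-offs below are illustrative assumptions, not the fitted relations from this work:

```python
import random

random.seed(3)

def sample_fragments(d_parent, d_largest_frac=0.5, slope=-2.5, n=1000):
    """Draw fragment diameters from a truncated power-law SFD,
    N(>D) proportional to D**slope, via inverse-CDF sampling.
    A generic stand-in for fitted parametric fragment relations."""
    d_max = d_largest_frac * d_parent
    d_min = 0.01 * d_parent        # smallest fragment tracked
    a = slope
    frags = []
    for _ in range(n):
        u = random.random()
        # invert the cumulative distribution on [d_min, d_max]
        frags.append((d_min**a + u * (d_max**a - d_min**a)) ** (1.0 / a))
    return frags

frags = sample_fragments(10.0)     # break up a D = 10 km parent body
```

    A collisional-evolution code would draw fragment sets like this at every modelled impact instead of rerunning an SPH simulation, which is exactly what makes parametric relations useful.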

  8. A Statistical Comparison of PSC Model Simulations and POAM Observations

    NASA Technical Reports Server (NTRS)

    Strawa, A. W.; Drdla, K.; Fromm, M.; Bokarius, K.; Gore, Warren J. (Technical Monitor)

    2002-01-01

    A better knowledge of PSC composition and formation mechanisms is important to better understand and predict stratospheric ozone depletion. Several past studies have attempted to compare modeling results with satellite observations; these comparisons have concentrated on case studies. In this paper we adopt a statistical approach. POAM PSC observations from several Arctic winters are categorized into Type Ia and Ib PSCs using a technique based on Strawa et al. The discrimination technique has been modified to employ the wavelength dependence of the extinction signal at all wavelengths rather than only at 603 and 1018 nm. Winter-long simulations for the 1999-2000 Arctic winter have been made using the IMPACT model. These simulations have been constrained by aircraft observations made during the SOLVE/THESEO 2000 campaign. A complete set of winter-long simulations was run for several different microphysical and PSC formation scenarios. The simulations give us perfect knowledge of PSC type (Ia, Ib, or II) and composition, especially condensed-phase HNO3, which is important for denitrification, and condensed-phase H2O. Comparisons are made between simulated and observed PSC extinction at 1018 nm versus its wavelength dependence, winter-long percentages of Ia and Ib occurrence, and temporal and altitude trends of the PSCs. These comparisons allow us to comment on how realistic some modeling scenarios are.

  9. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

    Surface reconstruction is an important task in the fields of 3D GIS, computer-aided design and computer graphics (CAD & CG), virtual simulation, and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud under the rules of curvature extremes and a minimum spanning tree. By projecting local sample points onto fitted tangent planes and using the extracted features to guide and constrain the process of local triangulation and surface propagation, the topological relationship among sample points can be achieved. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that the correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction methods; meanwhile, it avoids improper propagation of normals across sharp edges, which means the applicability of incremental surface reconstruction is greatly improved. Moreover, an appropriate k-neighborhood can help to recognize insufficiently sampled areas and boundary parts, and the presented approach can be used to reconstruct both open and closed surfaces without additional interference.
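
    The feature-detection idea (flagging points whose local neighbourhood deviates from a smooth surface) can be sketched with a simple stand-in criterion. The offset-from-centroid proxy below is not the paper's curvature-extreme rule; it only illustrates how a k-neighborhood exposes candidate feature points:

```python
def knn(points, idx, k):
    """Indices of the k nearest neighbours of points[idx] (brute force)."""
    d2 = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    order = sorted(range(len(points)), key=lambda j: d2(points[idx], points[j]))
    return order[1:k + 1]            # skip the point itself

def curvature_proxy(points, idx, k=6):
    """Offset of a point from its k-neighbourhood centroid, normalised by the
    neighbourhood radius; large values flag candidate feature points."""
    nbrs = [points[j] for j in knn(points, idx, k)]
    cen = tuple(sum(c) / len(nbrs) for c in zip(*nbrs))
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    radius = max(dist(points[idx], n) for n in nbrs)
    return dist(points[idx], cen) / radius

# Flat 5x5 grid with one lifted point: the spike should score highest.
points = [(float(x), float(y), 0.0) for x in range(5) for y in range(5)]
points[12] = (2.0, 2.0, 2.0)          # lift the centre point off the plane
scores = [curvature_proxy(points, i) for i in range(len(points))]
feature_idx = max(range(len(points)), key=lambda i: scores[i])
```

    In the actual pipeline the flagged points, linked through a minimum spanning tree, become the feature curves that constrain triangulation and stop surface propagation across sharp edges.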

  10. Plan View Pattern Control for Steel Plates through Constrained Locally Weighted Regression

    NASA Astrophysics Data System (ADS)

    Shigemori, Hiroyasu; Nambu, Koji; Nagao, Ryo; Araki, Tadashi; Mizushima, Narihito; Kano, Manabu; Hasebe, Shinji

    A technique for performing parameter identification in a locally weighted regression model using foresight information on the physical properties of the object of interest as constraints was proposed. This method was applied to plan view pattern control of steel plates, and a reduction of shape nonconformity (crop) at the plate head end was confirmed by computer simulation based on real operation data.
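
    The idea of constraining a locally weighted fit with prior physical knowledge can be sketched as follows. Projecting the local slope onto an admissible interval is one simple way to impose such a constraint; the data, bandwidth, and bounds are invented for illustration:

```python
def tricube(d, bandwidth):
    """Standard tricube kernel used for locality weighting."""
    u = min(abs(d) / bandwidth, 1.0)
    return (1.0 - u ** 3) ** 3

def constrained_lwr(xs, ys, x0, bandwidth, slope_bounds):
    """Locally weighted linear fit at x0 with the slope clipped to a
    physically admissible interval (the 'foresight' constraint)."""
    w = [tricube(x - x0, bandwidth) for x in xs]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    num = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs))
    b = num / den                     # unconstrained weighted LS slope
    lo, hi = slope_bounds
    b = max(lo, min(hi, b))           # project onto the constraint set
    a = my - b * mx
    return a + b * x0                 # local prediction at x0

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 2.1, 3.9, 6.2, 8.0]        # noisy data, true slope about 2
pred = constrained_lwr(xs, ys, 2.0, bandwidth=3.0, slope_bounds=(0.0, 5.0))
```

    When the data are sparse or noisy, the constraint keeps the identified local parameters physically plausible, which is the role the foresight information plays in the plate-shape application.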

  11. Exact symmetries in the velocity fluctuations of a hot Brownian swimmer

    NASA Astrophysics Data System (ADS)

    Falasco, Gianmaria; Pfaller, Richard; Bregulla, Andreas P.; Cichos, Frank; Kroy, Klaus

    2016-09-01

    Symmetries constrain dynamics. We test this fundamental physical principle, experimentally and by molecular dynamics simulations, for a hot Janus swimmer operating far from thermal equilibrium. Our results establish scalar and vectorial steady-state fluctuation theorems and a thermodynamic uncertainty relation that link the fluctuating particle current to its entropy production at an effective temperature. A Markovian minimal model elucidates the underlying nonequilibrium physics.

  12. Numerical Simulation of Shock Interaction with Deformable Particles Using a Constrained Interface Reinitialization Scheme

    NASA Astrophysics Data System (ADS)

    Jackson, Thomas L.; Sridharan, Prashanth; Zhang, Ju; Balachandar, S.

    2015-11-01

    In this work we present axisymmetric numerical simulations of shock propagating in nitromethane over an aluminum particle for post-shock pressures up to 10 GPa. The numerical method is a finite-volume based solver on a Cartesian grid, which allows for multi-material interfaces and shocks. To preserve particle mass and volume, a novel constraint reinitialization scheme is introduced. We compute the unsteady drag coefficient as a function of post-shock pressure, and show that when normalized by post-shock conditions, the maximum drag coefficient decreases with increasing post-shock pressure. Using this information, we also present a simplified point-particle force model that can be used for mesoscale simulations.

  13. Modeling intrinsic electrophysiology of AII amacrine cells: preliminary results.

    PubMed

    Apollo, Nick; Grayden, David B; Burkitt, Anthony N; Meffin, Hamish; Kameneva, Tatiana

    2013-01-01

    In patients who have lost their photoreceptors due to retinal degenerative diseases, it is possible to restore rudimentary vision by electrically stimulating surviving neurons. AII amacrine cells, which reside in the inner plexiform layer, split the signal from rod bipolar cells into ON and OFF cone pathways. As a result, it is of interest to develop a computational model to aid in the understanding of how these cells respond to the electrical stimulation delivered by a prosthetic implant. The aim of this work is to develop and constrain parameters in a single-compartment model of an AII amacrine cell using data from whole-cell patch clamp recordings. This model will be used to explore responses of AII amacrine cells to electrical stimulation. Single-compartment Hodgkin-Huxley-type neural models are simulated in the NEURON environment. Simulations showed successful reproduction of the potassium current-voltage relationship and some of the spiking properties observed in vitro.
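
    The single-compartment Hodgkin-Huxley-type formulation can be sketched in plain Python with forward Euler integration. The parameters below are the textbook squid-axon values, not the AII amacrine fits from the paper, and the injected current is arbitrary:

```python
import math

def safe_rate(x, y):
    """x / (1 - exp(-x/y)) with the removable singularity at x = 0 handled."""
    return y if abs(x) < 1e-7 else x / (1.0 - math.exp(-x / y))

def rates(v):
    """Voltage-dependent opening/closing rates for the m, h, n gates."""
    am = 0.1 * safe_rate(v + 40.0, 10.0)
    bm = 4.0 * math.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    an = 0.01 * safe_rate(v + 55.0, 10.0)
    bn = 0.125 * math.exp(-(v + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

def simulate_hh(i_inj=10.0, t_stop=50.0, dt=0.01):
    """Classic Hodgkin-Huxley single compartment, forward Euler (mV, ms)."""
    c_m = 1.0                                   # uF/cm^2
    g_na, g_k, g_l = 120.0, 36.0, 0.3           # mS/cm^2
    e_na, e_k, e_l = 50.0, -77.0, -54.4         # mV
    v = -65.0
    am, bm, ah, bh, an, bn = rates(v)
    m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)  # steady state
    trace = [v]
    for _ in range(int(t_stop / dt)):
        am, bm, ah, bh, an, bn = rates(v)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_inj - i_ion) / c_m
        trace.append(v)
    return trace

trace = simulate_hh()    # sustained current injection elicits spiking
```

    Constraining such a model to patch-clamp data amounts to replacing these conductances and rate functions with ones fitted to the recorded current-voltage relationships.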

  14. Parallelization of the Coupled Earthquake Model

    NASA Technical Reports Server (NTRS)

    Block, Gary; Li, P. Peggy; Song, Yuhe T.

    2007-01-01

    This Web-based tsunami simulation system allows users to remotely run a model on JPL's supercomputers for a given undersea earthquake. At the time of this reporting, predicting tsunamis over the Internet had not been done before. This new code directly couples the earthquake model and the ocean model on parallel computers and improves simulation speed. Seismometers can only detect information from earthquakes; they cannot detect whether or not a tsunami may occur as a result of the earthquake. When earthquake-tsunami models are coupled with the improved computational speed of modern, high-performance computers and constrained by remotely sensed data, they are able to provide early warnings for those coastal regions at risk. The software is capable of testing NASA's satellite observations of tsunamis. It has been successfully tested for several historical tsunamis, has passed all alpha and beta testing, and is well documented for users.

  15. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2011-12-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced into an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and to the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
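
    The annealing-based sampling strategy can be sketched in a simplified form. The quadratic "model error" surface, bounds, cooling schedule, and iteration count below are illustrative assumptions, not the MVFSA implementation or the WRF skill score used in the study:

```python
import math
import random

random.seed(5)

def model_error(params):
    """Toy stand-in for the precipitation skill score: minimum at (0.3, 0.7)."""
    a, b = params
    return (a - 0.3) ** 2 + (b - 0.7) ** 2

def vfsa(bounds, n_iter=4000, t0=1.0):
    """Very-fast-simulated-annealing-style search: fast-cooling temperature,
    Gaussian proposals clipped to the bounds, Metropolis acceptance."""
    cur = [random.uniform(lo, hi) for lo, hi in bounds]
    cur_err, best, best_err = model_error(cur), None, float("inf")
    for k in range(n_iter):
        t = t0 / (k + 1)                      # fast cooling schedule
        cand = [min(hi, max(lo, c + random.gauss(0.0, t)))
                for c, (lo, hi) in zip(cur, bounds)]
        err = model_error(cand)
        # always accept downhill moves, occasionally accept uphill ones
        if err < cur_err or random.random() < math.exp((cur_err - err) / t):
            cur, cur_err = cand, err
        if cur_err < best_err:
            best, best_err = list(cur), cur_err
    return best, best_err

best, best_err = vfsa([(0.0, 1.0), (0.0, 1.0)])
```

    In the study each "error evaluation" is a full WRF run scored against observations, which is why an importance-sampling scheme that concentrates evaluations near the minimum matters so much.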

  16. Uncertainty Quantification and Parameter Tuning: A Case Study of Convective Parameterization Scheme in the WRF Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Yang, B.; Lin, G.; Leung, R.; Zhang, Y.

    2012-04-01

    The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced into an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used.
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and to the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.

  17. Some issues in uncertainty quantification and parameter tuning: a case study of convective parameterization scheme in the WRF regional climate model

    NASA Astrophysics Data System (ADS)

    Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.

    2012-03-01

The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or an improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which has important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when the five optimal parameters identified by the MVFSA algorithm were used. 
The model performance was found to be sensitive to downdraft- and entrainment-related parameters and to the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. A larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation also had a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North American monsoon region). These results suggest that the benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess strategies for UQ and parameter optimization at both global and regional scales.
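The sampling strategy described above can be sketched in miniature. The following is a hedged illustration of a very-fast-annealing-style sampler minimizing a toy skill score; the cooling schedule, parameter bounds, target values, and objective are illustrative stand-ins, not the actual MVFSA configuration or the WRF skill score.

```python
import math
import random

def anneal(objective, bounds, n_iter=2000, t0=1.0, seed=0):
    """Toy very-fast-annealing-style sampler: propose Gaussian moves whose
    size shrinks with a fast cooling schedule; always accept improvements,
    and accept worse points with probability exp(-increase / temperature)."""
    rng = random.Random(seed)
    d = len(bounds)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    f = objective(x)
    best_x, best_f = x[:], f
    for k in range(1, n_iter + 1):
        t = t0 * math.exp(-k ** (1.0 / d))  # fast (exponential) cooling
        cand = [min(hi, max(lo, xi + rng.gauss(0.0, t * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        fc = objective(cand)
        if fc < f or rng.random() < math.exp(-(fc - f) / t):
            x, f = cand, fc
            if f < best_f:
                best_x, best_f = x[:], f
    return best_x, best_f

# toy "skill score": squared error against hypothetical optimal parameters
TARGET = [0.3, 0.7, 0.5, 0.2, 0.9]
skill = lambda p: sum((a - b) ** 2 for a, b in zip(p, TARGET))
params, err = anneal(skill, [(0.0, 1.0)] * 5)
```

As the temperature falls, the walk concentrates in low-error regions of the five-dimensional parameter space, mimicking how MVFSA progressively focuses sampling where the model error is small.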

  18. Exploring the Efficacy and Limitations of Shock-cooling Models: New Analysis of Type II Supernovae Observed by the Kepler Mission

    NASA Astrophysics Data System (ADS)

    Rubin, Adam; Gal-Yam, Avishay

    2017-10-01

Modern transient surveys have begun discovering and following supernovae (SNe) shortly after first light, providing systematic measurements of the rise of Type II SNe. We explore how analytic models of early shock-cooling emission from core-collapse SNe can constrain the progenitor's radius, explosion velocity, and local host extinction. We simulate synthetic photometry in several realistic observing scenarios; assuming the models describe the typical explosions well, we find that ultraviolet observations can constrain the progenitor's radius to a statistical uncertainty of ±10%-15%, with a systematic uncertainty of ±20%. With these observations the local host extinction (A_V) can be constrained to a factor of two, and the shock velocity to ±5% with a systematic uncertainty of ±10%. We also reanalyze the SN light curves presented by Garnavich et al. (2016) and find that KSN 2011a can be fit by a blue supergiant model with a progenitor radius of R_s < 7.7 (+8.8 stat, +1.9 sys) R_⊙, while KSN 2011d can be fit with a red supergiant model with a progenitor radius of R_s = 111 (+89/-21 stat, +49/-1 sys) R_⊙. Our results do not agree with those of Garnavich et al. Moreover, we re-evaluate their claims and find that there is no statistically significant evidence for a shock-breakout flare in the light curve of KSN 2011d.

  19. EDIN design study alternate space shuttle booster replacement concepts. Volume 2: Design simulation results

    NASA Technical Reports Server (NTRS)

    Demakes, P. T.; Hirsch, G. N.; Stewart, W. A.; Glatt, C. R.

    1976-01-01

Historical weight estimating relationships were developed for the liquid rocket booster (LRB) using Saturn technology, and modified as required to support the EDIN05 study. Mission performance was computed using February 1975 shuttle configuration ground rules to allow reasonable comparison of the existing shuttle with the EDIN05 designs. The launch trajectory was constrained to pass through both the RTLS/AOA and main engine cut-off points. Performance analysis was based on a point design trajectory model which optimized initial tilt rate and exo-atmospheric pitch profile. A gravity turn was employed during the boost phase in place of the shuttle angle-of-attack profile. Engine throttling and/or shutdown was used to constrain dynamic pressure and/or longitudinal acceleration where necessary.

  20. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

For dynamic decoupling of a polynomial linear parameter varying (PLPV) system, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is achieved, which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  1. Real time simulation of computer-assisted sequencing of terminal area operations

    NASA Technical Reports Server (NTRS)

    Dear, R. G.

    1981-01-01

A simulation was developed to investigate the utilization of computer-assisted decision making for the task of sequencing and scheduling aircraft in a high-density terminal area. The simulation incorporates a decision methodology termed Constrained Position Shifting. This methodology accounts for aircraft velocity profiles, routes, and weight classes in dynamically sequencing and scheduling arriving aircraft. A sample demonstration of Constrained Position Shifting is presented in which six aircraft types (including both light and heavy aircraft) are sequenced to land at Denver's Stapleton International Airport. A graphical display is utilized, and Constrained Position Shifting with a maximum shift of four positions (rearward or forward) is compared to first-come, first-served sequencing with respect to arrival at the runway. The implementation of computer-assisted sequencing and scheduling methodologies is investigated. A time-based control concept will be required, and design considerations for such a system are discussed.
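The core of Constrained Position Shifting is a search over landing orders in which no aircraft moves more than a fixed number of positions from its first-come, first-served slot. A minimal sketch follows; the weight-class separation times are hypothetical placeholders, not FAA standards, and a real scheduler would also use velocity profiles and routes rather than exhaustive enumeration.

```python
from itertools import permutations

# Illustrative separation seconds required between a leader and follower,
# indexed by (leader class, follower class); values are hypothetical.
SEP = {("H", "H"): 96, ("H", "L"): 120, ("L", "H"): 72, ("L", "L"): 72}

def last_landing_time(seq):
    """Time from the first landing to the last, summing pairwise separations."""
    return sum(SEP[(lead[1], foll[1])] for lead, foll in zip(seq, seq[1:]))

def constrained_position_shift(fcfs, max_shift):
    """Search all landing orders in which no aircraft moves more than
    max_shift positions from its first-come-first-served slot; return the
    order that lands the last aircraft earliest."""
    best_t, best_seq = None, None
    for perm in permutations(range(len(fcfs))):
        # perm[new_position] = old FCFS position
        if all(abs(new - old) <= max_shift for new, old in enumerate(perm)):
            seq = [fcfs[i] for i in perm]
            t = last_landing_time(seq)
            if best_t is None or t < best_t:
                best_t, best_seq = t, seq
    return best_t, best_seq

# alternating heavy (H) and light (L) arrivals in FCFS order
fcfs = [("A1", "H"), ("A2", "L"), ("A3", "H"),
        ("A4", "L"), ("A5", "H"), ("A6", "L")]
fcfs_t = last_landing_time(fcfs)
best_t, best_seq = constrained_position_shift(fcfs, max_shift=4)
```

With these toy separations, regrouping the lights ahead of the heavies shortens the total landing interval relative to strict FCFS, which is the kind of gain the methodology exploits.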

2. Just-in-time Data Analytics and Visualization of Climate Simulations using the Bellerophon Framework

    NASA Astrophysics Data System (ADS)

    Anantharaj, V. G.; Venzke, J.; Lingerfelt, E.; Messer, B.

    2015-12-01

Climate model simulations are used to understand the evolution and variability of Earth's climate. Unfortunately, high-resolution multi-decadal climate simulations can take days to weeks to complete. Typically, the simulation results are not analyzed until the model runs have ended. During the course of the simulation, the output may be processed periodically to ensure that the model is performing as expected, but most of the data analytics and visualization are not performed until the simulation is finished. The lengthy time period needed for the completion of the simulation constrains the productivity of climate scientists. Our implementation of near-real-time data analytics and visualization capabilities allows scientists to monitor the progress of their simulations while the model is running. Our analytics software executes concurrently in a co-scheduling mode, monitoring data production. When new data are generated by the simulation, a co-scheduled data analytics job is submitted to render visualization artifacts of the latest results. These visualization outputs are automatically transferred to Bellerophon's data server located at ORNL's Compute and Data Environment for Science (CADES), where they are processed and archived into Bellerophon's database. During the course of the experiment, climate scientists can then use Bellerophon's graphical user interface to view animated plots and their associated metadata. The quick turnaround from the start of the simulation until the data are analyzed permits research decisions and projections to be made days or sometimes even weeks sooner than otherwise possible. The supercomputer resources used to run the simulation are unaffected by co-scheduling the data visualization jobs, so the model runs continuously while the data are visualized. Our just-in-time data visualization software aims to increase climate scientists' productivity as climate modeling moves into the exascale era of computing.

  3. Improving the realism of hydrologic model through multivariate parameter estimation

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Kumar, Rohini; Attinger, Sabine; Samaniego, Luis

    2017-04-01

Increased availability and quality of near-real-time observations should improve understanding of the predictive skill of hydrological models. Recent studies have shown the limited capability of river discharge data alone to adequately constrain different components of distributed model parameterizations. In this study, the GRACE satellite-based total water storage (TWS) anomaly is used to complement the discharge data with the aim of improving the fidelity of the mesoscale hydrologic model (mHM) through multivariate parameter estimation. The study is conducted in 83 European basins covering a wide range of hydro-climatic regimes. The model parameterization complemented with the TWS anomalies leads to statistically significant improvements in (1) discharge simulations during low-flow periods, and (2) evapotranspiration estimates, which are evaluated against independent (FLUXNET) data. Overall, there is no significant deterioration in model performance for the discharge simulations when complemented by information from the TWS anomalies. However, considerable changes in the partitioning of precipitation into runoff components are noticed depending on the in- or exclusion of TWS during the parameter estimation. A cross-validation test carried out to assess the transferability and robustness of the calibrated parameters to other locations further confirms the benefit of complementary TWS data. In particular, the evapotranspiration estimates show more robust performance when TWS data are incorporated during the parameter estimation, in comparison with the benchmark model constrained against discharge only. This study highlights the value of incorporating multiple data sources during parameter estimation to improve the overall realism of a hydrologic model and its applications over large domains. Rakovec, O., Kumar, R., Attinger, S. and Samaniego, L. (2016): Improving the realism of hydrologic model functioning through multivariate parameter estimation. Water Resour. Res., 52, http://dx.doi.org/10.1002/2016WR019430
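A multivariate calibration of this kind boils down to a composite objective over both variables. The sketch below combines Nash-Sutcliffe efficiencies of discharge and TWS anomalies with an assumed equal weighting; it is illustrative only and not the actual mHM objective function.

```python
def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 performs like the
    observed mean, negative values are worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    err = sum((s - o) ** 2 for s, o in zip(sim, obs))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - err / var

def multivariate_objective(q_sim, q_obs, tws_sim, tws_obs, w=0.5):
    """Composite cost to minimize: weighted (1 - NSE) for discharge plus
    (1 - NSE) for TWS anomalies (w is an assumed, tunable weight)."""
    return w * (1.0 - nse(q_sim, q_obs)) + (1.0 - w) * (1.0 - nse(tws_sim, tws_obs))

# toy series: a uniformly biased discharge simulation, a perfect TWS one
q_obs   = [1.0, 3.0, 2.0, 5.0, 4.0]
q_sim   = [1.5, 3.5, 2.5, 5.5, 4.5]
tws_obs = [-1.0, 0.0, 2.0, 1.0, -2.0]
tws_sim = [-1.0, 0.0, 2.0, 1.0, -2.0]
cost = multivariate_objective(q_sim, q_obs, tws_sim, tws_obs)
```

A parameter search that minimizes this composite cost is pulled toward parameter sets that honour both data sources, rather than overfitting discharge alone.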

  4. Improving the Performance of Highly Constrained Water Resource Systems using Multiobjective Evolutionary Algorithms and RiverWare

    NASA Astrophysics Data System (ADS)

    Smith, R.; Kasprzyk, J. R.; Zagona, E. A.

    2015-12-01

Instead of building new infrastructure to increase their supply reliability, water resource managers are often tasked with better management of current systems. The managers often have existing simulation models that aid their planning, but lack methods for efficiently generating and evaluating planning alternatives. This presentation discusses how multiobjective evolutionary algorithm (MOEA) decision support can be used with the sophisticated water infrastructure model, RiverWare, in highly constrained water planning environments. We first discuss a study that performed a many-objective tradeoff analysis of water supply in the Tarrant Regional Water District (TRWD) in Texas. RiverWare is combined with the Borg MOEA to solve a seven-objective problem that includes systemwide performance objectives and individual reservoir storage reliability. Decisions within the formulation balance supply in multiple reservoirs and control pumping between the eastern and western parts of the system. The RiverWare simulation model is forced by two stochastic hydrology scenarios to inform how management changes in wet versus dry conditions. The second part of the presentation suggests how a broader set of RiverWare-MOEA studies can inform tradeoffs in other systems, especially in political situations where multiple actors are in conflict over finite water resources. By incorporating quantitative representations of diverse parties' objectives during the search for solutions, MOEAs may provide support for negotiations and lead to more widely beneficial water management outcomes.
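What an MOEA ultimately delivers is an approximation of the Pareto-optimal set of tradeoffs. The kernel of that idea, a non-dominated filter for a minimization problem, can be sketched with hypothetical (cost, shortage-risk) policy outcomes; the numbers are invented for illustration.

```python
def dominates(a, b):
    """a dominates b (minimization) if a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points; an MOEA such as Borg searches
    for an approximation of this set when the objectives come from an
    expensive simulation."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (cost, shortage-risk) outcomes of candidate operating policies
policies = [(3.0, 0.9), (2.0, 1.5), (2.5, 1.0), (4.0, 0.8), (2.0, 2.0)]
front = pareto_front(policies)
```

Presenting the surviving tradeoff set, rather than a single "optimal" answer, is what lets negotiating parties see exactly what each of them gives up under each alternative.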

  5. Initialising reservoir models for history matching using pre-production 3D seismic data: constraining methods and uncertainties

    NASA Astrophysics Data System (ADS)

    Niri, Mohammad Emami; Lumley, David E.

    2017-10-01

Integration of 3D and time-lapse 4D seismic data into reservoir modelling and history matching processes poses a significant challenge due to the frequent mismatch between the initial reservoir model, the true reservoir geology, and the pre-production (baseline) seismic data. A fundamental step of a reservoir characterisation and performance study is the preconditioning of the initial reservoir model to equally honour both the geological knowledge and the seismic data. In this paper we analyse the issues that have a significant impact on the (mis)match of the initial reservoir model with well logs and inverted 3D seismic data. These issues include the constraining methods for reservoir lithofacies modelling, the sensitivity of the results to the presence of realistic resolution and noise in the seismic data, the geostatistical modelling parameters, and the uncertainties associated with quantitative incorporation of inverted seismic data in reservoir lithofacies modelling. We demonstrate that in a geostatistical lithofacies simulation process, seismic constraining methods based on seismic litho-probability curves and seismic litho-probability cubes yield the best match to the reference model, even when realistic resolution and noise are included in the dataset. In addition, our analyses show that quantitative incorporation of inverted 3D seismic data in static reservoir modelling carries a range of uncertainties and should be cautiously applied in order to minimise the risk of misinterpretation. These uncertainties are due to the limited vertical resolution of the seismic data compared to the scale of the geological heterogeneities, the fundamental instability of the inverse problem, and the non-unique elastic properties of different lithofacies types.
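The litho-probability constraint amounts to drawing a facies per cell from seismic-derived probabilities. A minimal sketch, with invented probabilities and no spatial correlation (a real geostatistical simulation would add variogram-based correlation between cells):

```python
import random

def draw_facies(prob_cells, seed=0):
    """Draw one lithofacies per cell by inverting the cumulative
    distribution of seismic-derived litho-probabilities; a stand-in for
    the probability-cube constraint inside a geostatistical simulation."""
    rng = random.Random(seed)
    result = []
    for probs in prob_cells:  # probs: {facies: probability}, summing to 1
        u, acc = rng.random(), 0.0
        for name, p in probs.items():
            acc += p
            if u <= acc:
                result.append(name)
                break
    return result

# two cells with hypothetical sand/shale probabilities from inversion
cells = [{"sand": 0.7, "shale": 0.3}, {"sand": 0.2, "shale": 0.8}]
facies = draw_facies(cells)
```

Because the draw honours the seismic probabilities cell by cell, realizations statistically reproduce the inverted volume even though each individual realization differs.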

  6. Shock compression of strongly correlated oxides: A liquid-regime equation of state for cerium(IV) oxide

    NASA Astrophysics Data System (ADS)

    Weck, Philippe F.; Cochrane, Kyle R.; Root, Seth; Lane, J. Matthew D.; Shulenburger, Luke; Carpenter, John H.; Sjostrom, Travis; Mattsson, Thomas R.; Vogler, Tracy J.

    2018-03-01

    The shock Hugoniot for full-density and porous CeO2 was investigated in the liquid regime using ab initio molecular dynamics (AIMD) simulations with Erpenbeck's approach based on the Rankine-Hugoniot jump conditions. The phase space was sampled by carrying out NVT simulations for isotherms between 6000 and 100 000 K and densities ranging from ρ =2.5 to 20 g /cm3 . The impact of on-site Coulomb interaction corrections +U on the equation of state (EOS) obtained from AIMD simulations was assessed by direct comparison with results from standard density functional theory simulations. Classical molecular dynamics (CMD) simulations were also performed to model atomic-scale shock compression of larger porous CeO2 models. Results from AIMD and CMD compression simulations compare favorably with Z-machine shock data to 525 GPa and gas-gun data to 109 GPa for porous CeO2 samples. Using results from AIMD simulations, an accurate liquid-regime Mie-Grüneisen EOS was built for CeO2. In addition, a revised multiphase SESAME-type EOS was constrained using AIMD results and experimental data generated in this work. This study demonstrates the necessity of acquiring data in the porous regime to increase the reliability of existing analytical EOS models.
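The Rankine-Hugoniot jump conditions at the heart of Erpenbeck's approach relate the pre- and post-shock states. A minimal sketch for a material initially at rest follows; the input numbers are illustrative, not the fitted CeO2 data.

```python
def hugoniot_state(rho0, p0, e0, us, up):
    """Post-shock density, pressure, and specific internal energy from the
    Rankine-Hugoniot jump conditions, for material initially at rest with
    shock velocity us and particle velocity up."""
    rho = rho0 * us / (us - up)                          # mass conservation
    p = p0 + rho0 * us * up                              # momentum conservation
    e = e0 + 0.5 * (p + p0) * (1.0 / rho0 - 1.0 / rho)   # energy conservation
    return rho, p, e

# with density in g/cm^3 and velocities in km/s, pressure comes out in GPa
rho, p, e = hugoniot_state(rho0=7.22, p0=0.0, e0=0.0, us=8.0, up=3.0)
```

A quick consistency check: for p0 = 0 the energy jump reduces algebraically to e - e0 = up^2 / 2, independent of the other inputs.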

  7. Evaluation of the land surface water budget in NCEP/NCAR and NCEP/DOE reanalyses using an off-line hydrologic model

    NASA Astrophysics Data System (ADS)

    Maurer, Edwin P.; O'Donnell, Greg M.; Lettenmaier, Dennis P.; Roads, John O.

    2001-08-01

    The ability of the National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis (NRA1) and the follow-up NCEP/Department of Energy (DOE) reanalysis (NRA2), to reproduce the hydrologic budgets over the Mississippi River basin is evaluated using a macroscale hydrology model. This diagnosis is aided by a relatively unconstrained global climate simulation using the NCEP global spectral model, and a more highly constrained regional climate simulation using the NCEP regional spectral model, both employing the same land surface parameterization (LSP) as the reanalyses. The hydrology model is the variable infiltration capacity (VIC) model, which is forced by gridded observed precipitation and temperature. It reproduces observed streamflow, and by closure is constrained to balance other terms in the surface water and energy budgets. The VIC-simulated surface fluxes therefore provide a benchmark for evaluating the predictions from the reanalyses and the climate models. The comparisons, conducted for the 10-year period 1988-1997, show the well-known overestimation of summer precipitation in the southeastern Mississippi River basin, a consistent overestimation of evapotranspiration, and an underprediction of snow in NRA1. These biases are generally lower in NRA2, though a large overprediction of snow water equivalent exists. NRA1 is subject to errors in the surface water budget due to nudging of modeled soil moisture to an assumed climatology. The nudging and precipitation bias alone do not explain the consistent overprediction of evapotranspiration throughout the basin. Another source of error is the gravitational drainage term in the NCEP LSP, which produces the majority of the model's reported runoff. This may contribute to an overprediction of persistence of surface water anomalies in much of the basin. 
Residual evapotranspiration inferred from an atmospheric balance of NRA1, which is more directly related to observed atmospheric variables, matches the VIC prediction much more closely than the coupled models. However, the persistence of the residual evapotranspiration is much less than is predicted by the hydrological model or the climate models.
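The closure constraint that makes the VIC fluxes a useful benchmark is just the surface water budget, P = ET + R + dS, per timestep. A small sketch with made-up numbers:

```python
def budget_residuals(precip, et, runoff, storage):
    """Surface water budget residuals P - ET - R - dS per timestep; a model
    that closes its water budget (as VIC does by construction) returns
    residuals of ~0, while reanalysis fluxes generally do not."""
    res = []
    for t in range(1, len(storage)):
        ds = storage[t] - storage[t - 1]
        res.append(precip[t] - et[t] - runoff[t] - ds)
    return res

# toy monthly series (mm) whose storage change is consistent with the fluxes
storage = [100.0, 102.0, 101.0, 103.0]
precip  = [0.0, 5.0, 3.0, 6.0]
et      = [0.0, 2.0, 2.0, 3.0]
runoff  = [0.0, 1.0, 2.0, 1.0]
res = budget_residuals(precip, et, runoff, storage)
```

Nonzero residuals in a reanalysis diagnosed this way flag non-physical terms such as the soil moisture nudging discussed above.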

  8. Relaxation model for extended magnetohydrodynamics: Comparison to magnetohydrodynamics for dense Z-pinches

    DOE PAGES

    Seyler, C. E.; Martin, M. R.

    2011-01-14

In this study, it is shown that the two-fluid model under a generalized Ohm’s law formulation and the resistive magnetohydrodynamics (MHD) can both be described as relaxation systems. In the relaxation model, the under-resolved stiff source terms constrain the dynamics of a set of hyperbolic equations to give the correct asymptotic solution. When applied to the collisional two-fluid model, the relaxation of fast time scales associated with displacement current and finite electron mass allows for a natural transition from a system where Ohm’s law determines the current density to a system where Ohm’s law determines the electric field. This result is used to derive novel algorithms, which allow for multiscale simulation of low- and high-frequency extended-MHD physics. This relaxation formulation offers an efficient way to implicitly advance the Hall term and naturally simulate a plasma-vacuum interface without invoking phenomenological models. The relaxation model is implemented as an extended-MHD code, which is used to analyze pulsed power loads such as wire arrays and ablating foils. Two-dimensional simulations of pulsed power loads are compared for extended-MHD and MHD. For these simulations, it is also shown that the relaxation model properly recovers the resistive-MHD limit.

  9. Multi-centennial ecosystem modelling in northeastern America at the species level

    NASA Astrophysics Data System (ADS)

    Steinkamp, J.; Biskupovic, A.; Rollinson, C.; Dawson, A.; Goring, S. J.; McLachlan, J. S.; Mladenoff, D. J.; Williams, J.; Hickler, T.

    2016-12-01

Most dynamic global vegetation models (DGVMs) are based on a small set of plant functional types (PFTs) to simulate biome distribution, vegetation dynamics, and carbon and nutrient cycles, which is of limited use for more regional studies and stakeholders. We tested a tree-species-based parameterization approach of the LPJ-GUESS DGVM in the northeastern USA, which previously has been successful in simulating the main potential natural vegetation zones in Europe. A transient model run was carried out from 850 A.D. to today, and the model results have been evaluated against pre-settlement vegetation maps, reconstructed vegetation from pollen within the PalEON project, and hypothesized potential natural vegetation zones. We will analyze the simulation with respect to long-term carbon cycling and its driving forces. The main reconstructed vegetation features were reproduced by the model, which implies that the general processes shaping the forested vegetation in parts of Europe and the northeastern USA are similar. However, so far the decrease in biomass towards the prairie in the west could not be fully captured by the model, which is currently being analyzed with additional simulations. Moisture and fire are the important drivers at the prairie-forest transition zone, which we need to better constrain for this model domain.

  10. Impact of oceanic processes on the carbon cycle during the last termination

    NASA Astrophysics Data System (ADS)

    Bouttes, N.; Paillard, D.; Roche, D. M.; Waelbroeck, C.; Kageyama, M.; Lourantou, A.; Michel, E.; Bopp, L.

    2012-01-01

During the last termination (from ~18 000 years ago to ~9000 years ago), the climate significantly warmed and the ice sheets melted. Simultaneously, atmospheric CO2 increased from ~190 ppm to ~260 ppm. Although this CO2 rise plays an important role in the deglacial warming, the reasons for its evolution are difficult to explain. Only box models have been used to run transient simulations of this carbon cycle transition, and only by forcing the model with data-constrained scenarios of the evolution of temperature, sea level, sea ice, NADW formation, Southern Ocean vertical mixing and the biological carbon pump. More complex models (including GCMs) have investigated some of these mechanisms, but they have only been used to explain LGM versus present-day steady-state climates. In this study we use a coupled climate-carbon model of intermediate complexity to explore the role of three oceanic processes in transient simulations: the sinking of brines, stratification-dependent diffusion and iron fertilization. Carbonate compensation is accounted for in these simulations. We show that neither iron fertilization nor the sinking of brines alone can account for the evolution of CO2, and that only the combination of the sinking of brines and interactive diffusion can simultaneously simulate the increase in deep Southern Ocean δ13C. The scenario that agrees best with the data takes into account all mechanisms and favours a rapid cessation of the sinking of brines around 18 000 years ago, when the Antarctic ice sheet extent was at its maximum. In this scenario, we make the hypothesis that sea ice formation was then shifted to the open ocean, where the salty water is quickly mixed with fresher water, which prevents deep sinking of salty water and therefore breaks down the deep stratification and releases carbon from the abyss. 
Based on this scenario, it is possible to simulate both the amplitude and timing of the long-term CO2 increase during the last termination in agreement with ice core data. The atmospheric δ13C appears to be highly sensitive to changes in the terrestrial biosphere, underlining the need to better constrain the vegetation evolution during the termination.

  11. Impact of oceanic processes on the carbon cycle during the last termination

    NASA Astrophysics Data System (ADS)

    Bouttes, N.; Paillard, D.; Roche, D. M.; Waelbroeck, C.; Kageyama, M.; Lourantou, A.; Michel, E.; Bopp, L.

    2011-06-01

During the last termination (from ~18 000 yr ago to ~9000 yr ago) the climate significantly warmed and the ice sheets melted. Simultaneously, atmospheric CO2 increased from ~190 ppm to ~260 ppm. Although this CO2 rise plays an important role in the deglacial warming, the reasons for its evolution are difficult to explain. Only box models have been used to run transient simulations of this carbon cycle transition, and only by forcing the model with data-constrained scenarios of the evolution of temperature, sea level, sea ice, NADW formation, Southern Ocean vertical mixing and the biological carbon pump. More complex models (including GCMs) have investigated some of these mechanisms, but they have only been used to explain LGM versus present-day steady-state climates. In this study we use a climate-carbon coupled model of intermediate complexity to explore the role of three oceanic processes in transient simulations: the sinking of brines, stratification-dependent diffusion and iron fertilization. Carbonate compensation is accounted for in these simulations. We show that neither iron fertilization nor the sinking of brines alone can account for the evolution of CO2, and that only the combination of the sinking of brines and interactive diffusion can simultaneously simulate the increase in deep Southern Ocean δ13C. The scenario that agrees best with the data takes into account all mechanisms and favours a rapid cessation of the sinking of brines around 18 000 yr ago, when the Antarctic ice sheet extent was at its maximum. Sea ice formation was then shifted to the open ocean, where the salty water is quickly mixed with fresher water, which prevents deep sinking of salty water and therefore breaks down the deep stratification and releases carbon from the abyss. Based on this scenario it is possible to simulate both the amplitude and timing of the CO2 increase during the last termination in agreement with data. 
The atmospheric δ13C appears to be highly sensitive to changes in the terrestrial biosphere, underlining the need to better constrain the vegetation evolution during the termination.

  12. Bayesian Analysis of the Glacial-Interglacial Methane Increase Constrained by Stable Isotopes and Earth System Modeling

    NASA Astrophysics Data System (ADS)

    Hopcroft, Peter O.; Valdes, Paul J.; Kaplan, Jed O.

    2018-04-01

    The observed rise in atmospheric methane (CH4) from 375 ppbv during the Last Glacial Maximum (LGM: 21,000 years ago) to 680 ppbv during the late preindustrial era is not well understood. Atmospheric chemistry considerations implicate an increase in CH4 sources, but process-based estimates fail to reproduce the required amplitude. CH4 stable isotopes provide complementary information that can help constrain the underlying causes of the increase. We combine Earth System model simulations of the late preindustrial and LGM CH4 cycles, including process-based estimates of the isotopic discrimination of vegetation, in a box model of atmospheric CH4 and its isotopes. Using a Bayesian approach, we show how model-based constraints and ice core observations may be combined in a consistent probabilistic framework. The resultant posterior distributions point to a strong reduction in wetland and other biogenic CH4 emissions during the LGM, with a modest increase in the geological source, or potentially natural or anthropogenic fires, accounting for the observed enrichment of δ13CH4.
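The box-model bookkeeping behind such an analysis can be sketched as follows. The lifetime, burden-conversion factor, source fluxes, isotope signatures, and sink enrichment below are illustrative assumptions for the sketch, not the paper's posterior values.

```python
TG_PER_PPBV = 2.75  # assumed conversion, Tg CH4 per ppbv of mixing ratio

def steady_state_ch4_ppbv(total_source_tg_yr, lifetime_yr):
    """Steady-state mixing ratio from burden = total source x lifetime."""
    return total_source_tg_yr * lifetime_yr / TG_PER_PPBV

def atmospheric_d13c(sources, sink_enrichment):
    """Flux-weighted mean source signature, shifted by the net kinetic
    isotope enrichment of the sinks (all in permil)."""
    total = sum(flux for flux, _ in sources)
    mean_src = sum(flux * d13c for flux, d13c in sources) / total
    return mean_src + sink_enrichment

# hypothetical LGM budget: a wetland/biogenic term plus a heavier
# geological/fire term, as (flux Tg/yr, d13C permil) pairs
lgm_sources = [(80.0, -60.0), (30.0, -40.0)]
ch4_ppbv = steady_state_ch4_ppbv(sum(f for f, _ in lgm_sources), lifetime_yr=9.0)
d13c_atm = atmospheric_d13c(lgm_sources, sink_enrichment=6.8)
```

Because the mixing ratio constrains the total flux while δ13CH4 constrains the flux-weighted source mix, the two observables together pin down the partition between light biogenic and heavier geological or pyrogenic sources, which is exactly the leverage exploited by the Bayesian inversion.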

  13. Constraining the hadronic spectrum through QCD thermodynamics on the lattice

    NASA Astrophysics Data System (ADS)

    Alba, Paolo; Bellwied, Rene; Borsányi, Szabolcs; Fodor, Zoltan; Günther, Jana; Katz, Sandor D.; Mantovani Sarti, Valentina; Noronha-Hostler, Jacquelyn; Parotto, Paolo; Pasztor, Attila; Vazquez, Israel Portillo; Ratti, Claudia

    2017-08-01

    Fluctuations of conserved charges allow us to study the chemical composition of hadronic matter. A comparison between lattice simulations and the hadron resonance gas (HRG) model suggested the existence of missing strange resonances. To clarify this issue we calculate the partial pressures of mesons and baryons with different strangeness quantum numbers using lattice simulations in the confined phase of QCD. In order to make this calculation feasible, we perform simulations at imaginary strangeness chemical potentials. We systematically study the effect of different hadronic spectra on thermodynamic observables in the HRG model and compare to lattice QCD results. We show that, for each hadronic sector, the well-established states are not enough in order to have agreement with the lattice results. Additional states, either listed in the Particle Data Group booklet (PDG) but not well established, or predicted by the quark model (QM), are necessary in order to reproduce the lattice data. For mesons, it appears that the PDG and the quark model do not list enough strange mesons, or that, in this sector, interactions beyond those included in the HRG model are needed to reproduce the lattice QCD results.

  14. Evaluation of the tropospheric flows to a major Southern Hemisphere stratospheric warming event using NCEP/NCAR Reanalysis data with a PSU/NCAR nudging MM5V3 model

    NASA Astrophysics Data System (ADS)

    Wang, K.

    2008-04-01

Previous studies of the exceptional 2002 Southern Hemisphere (SH) stratospheric warming event left some uncertainty, namely the question of whether excessive heat fluxes in the upper troposphere and lower stratosphere are a symptom or a cause of the 2002 SH warming event. In this work, we use a hemispheric version of the MM5 model with nudging capability and devise a novel approach to separately test the significance of the stratosphere and troposphere for this year. We paired the flow conditions from 2002 in the stratosphere and troposphere, respectively, against the conditions in 1998 (a year with a displaced polar vortex) and in 1948 (a year with a strong polar vortex centered on the geographic South Pole). Our experiments show that the flow conditions from below determine the stratospheric flow features over the polar region. Regardless of the initial stratospheric conditions in 1998 or 1948, when we simulated these past stratospheres with the troposphere/lower-stratosphere conditions constrained to 2002 levels, the simulated middle stratospheres resembled the observed 2002 stratosphere over the polar region. On the other hand, when the 2002 stratosphere was integrated with the troposphere/lower-stratosphere conditions constrained to 1948 and 1998, respectively, the simulated middle-stratospheric conditions over the polar region shifted toward those of 1948 and 1998. Thus, our experiments further support the wave-forcing theory as the cause of the 2002 SH warming event.
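The nudging used to constrain the model toward a chosen year is Newtonian relaxation: a tendency term (x_ref - x)/tau is added so the state is pulled toward the reference analysis. A scalar toy sketch (not MM5 itself; the forcing, timescale, and step are illustrative):

```python
def nudged_step(x, forcing, x_ref, tau, dt):
    """One forward-Euler step with Newtonian relaxation: the model's own
    tendency plus a (x_ref - x)/tau term pulling toward the reference."""
    return x + dt * (forcing(x) + (x_ref - x) / tau)

def integrate(x0, forcing, x_ref, tau, dt, nsteps):
    x = x0
    for _ in range(nsteps):
        x = nudged_step(x, forcing, x_ref, tau, dt)
    return x

# with zero intrinsic forcing the state simply relaxes to the reference
x_end = integrate(x0=10.0, forcing=lambda x: 0.0,
                  x_ref=0.0, tau=5.0, dt=0.1, nsteps=2000)
```

Choosing a short tau in the troposphere and a long (or infinite) tau in the middle stratosphere is what lets an experiment prescribe one layer's flow while leaving the other free to respond.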

  15. A new potential for the numerical simulations of electrolyte solutions on a hypersphere

    NASA Astrophysics Data System (ADS)

    Caillol, Jean-Michel

    1993-12-01

    We propose a new way of performing numerical simulations of the restricted primitive model of electrolytes, and related models, on a hypersphere. In this approach, the system is viewed as a single-component fluid of charged bihard spheres constrained to move on the surface of a four-dimensional sphere. A charged bihard sphere is defined as the rigid association of two antipodal charged hard spheres of opposite signs. These objects interact via a simple analytical potential obtained by solving the Poisson-Laplace equation on the hypersphere. This simulation technique enables a precise determination of the chemical potential of the charged species in the canonical ensemble by a straightforward application of Widom's insertion method. Comparisons with previous simulations demonstrate the efficiency and reliability of the method.
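    Widom's insertion method, used above to determine the chemical potential, is simple to sketch. For hard particles the Boltzmann factor exp(-βΔU) of a trial insertion is 1 when the test particle overlaps nothing and 0 otherwise, so the excess chemical potential reduces to μ_ex = -kT ln(insertion acceptance fraction). A minimal illustration for hard rods in a 1D box; this toy system, box size, and particle count are assumptions for demonstration, not the hypersphere electrolyte of the paper:

```python
import math
import random

random.seed(42)

L = 20.0      # box length
N = 5         # number of hard rods
SIGMA = 1.0   # rod diameter: centers closer than SIGMA overlap

def sample_config():
    """Rejection-sample N non-overlapping rod centers in [SIGMA/2, L - SIGMA/2]."""
    while True:
        xs = [random.uniform(SIGMA / 2, L - SIGMA / 2) for _ in range(N)]
        ok = all(abs(a - b) >= SIGMA
                 for i, a in enumerate(xs) for b in xs[i + 1:])
        if ok:
            return xs

def widom_mu_ex(n_configs=200, n_trials=200, kT=1.0):
    """Estimate the excess chemical potential by Widom test-particle insertion.

    For hard particles exp(-beta * dU) is 1 for a non-overlapping trial
    insertion and 0 otherwise, so the Boltzmann average reduces to the
    acceptance fraction of random insertions.
    """
    accepted = 0
    total = 0
    for _ in range(n_configs):
        xs = sample_config()
        for _ in range(n_trials):
            xt = random.uniform(SIGMA / 2, L - SIGMA / 2)
            total += 1
            if all(abs(xt - x) >= SIGMA for x in xs):
                accepted += 1
    p_insert = accepted / total
    return -kT * math.log(p_insert), p_insert

mu_ex, p_insert = widom_mu_ex()
print(f"insertion probability ~ {p_insert:.3f}, mu_ex ~ {mu_ex:.3f} kT")
```

    At this packing fraction a sizeable share of insertions succeed, giving a small positive μ_ex; denser systems drive the acceptance fraction, and hence the estimator's precision, down.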

  16. Fire emissions constrained by the synergistic use of formaldehyde and glyoxal SCIAMACHY columns in a two-compound inverse modelling framework

    NASA Astrophysics Data System (ADS)

    Stavrakou, T.; Muller, J.; de Smedt, I.; van Roozendael, M.; Vrekoussis, M.; Wittrock, F.; Richter, A.; Burrows, J.

    2008-12-01

    Formaldehyde (HCHO) and glyoxal (CHOCHO) are carbonyls formed in the oxidation of volatile organic compounds (VOCs) emitted by plants, anthropogenic activities, and biomass burning. They are also directly emitted by fires. Although this primary production represents only a small part of the global source for both species, it can be locally important during intense fire events. Simultaneous observations of formaldehyde and glyoxal retrieved from the SCIAMACHY satellite instrument in 2005, provided by BIRA/IASB and the Bremen group, respectively, are compared with the corresponding columns simulated with the IMAGESv2 global CTM. The chemical mechanism has been optimized with respect to HCHO and CHOCHO production from pyrogenically emitted NMVOCs, based on the Master Chemical Mechanism (MCM) and on an explicit profile for biomass burning emissions. Gas-to-particle conversion of glyoxal in clouds and in aqueous aerosols is considered in the model. In this study we provide top-down estimates for fire emissions of HCHO and CHOCHO precursors by performing a two-compound inversion of emissions using the adjoint of the IMAGES model. The pyrogenic fluxes are optimized at the model resolution. The two-compound inversion offers the advantage that information gained from measurements of one species constrains the sources of both compounds, owing to the existence of common precursors. In a first inversion, only the burnt biomass amounts are optimized. In subsequent simulations, the emission factors for key individual NMVOC compounds are also varied.

  17. Modelling absorbing aerosol with ECHAM-HAM: Insights from regional studies

    NASA Astrophysics Data System (ADS)

    Tegen, Ina; Heinold, Bernd; Schepanski, Kerstin; Banks, Jamie; Kubin, Anne; Schacht, Jacob

    2017-04-01

    Quantifying the distributions and properties of absorbing aerosol is the basis for investigating the interactions of aerosol particles with radiation and climate. While evaluations of aerosol models against field measurements can be particularly successful at the regional scale, such results need to be put into a global context for climate studies. We present an overview of studies performed at the Leibniz Institute for Tropospheric Research aimed at constraining the properties of mineral dust and soot aerosol in the global aerosol model ECHAM6-HAM2 on the basis of different regional studies. An example is the impact of different sources on dust transported to central Asia, which is influenced by long-range transport of dust from Arabia and the Sahara together with dust from local sources. Dust types from these different source regions were investigated in the context of the CADEX project and are expected to have different optical properties. For Saharan dust, satellite retrievals from MSG SEVIRI are used to constrain Saharan dust sources and optical properties. In the Arctic region, dust aerosol is simulated in the framework of the PalMod project; in addition, aerosol measurements taken during the DFG-funded (AC)3 field campaigns will be used to evaluate the simulated transport pathways of soot aerosol from European, North American and Asian sources, as well as the parameterization of soot ageing processes in ECHAM6-HAM2. Ultimately, results from these studies will improve the representation of aerosol absorption in the global model.

  18. Prediction of hydrographs and flow-duration curves in almost ungauged catchments: Which runoff measurements are most informative for model calibration?

    NASA Astrophysics Data System (ADS)

    Pool, Sandra; Viviroli, Daniel; Seibert, Jan

    2017-11-01

    Applications of runoff models usually rely on long and continuous runoff time series for model calibration. However, many catchments around the world are ungauged, and estimating runoff for these catchments is challenging. One approach is to perform a few runoff measurements in a previously fully ungauged catchment and to constrain a runoff model by these measurements. In this study we investigated the value of such individual runoff measurements, taken at strategic points in time, for applying a bucket-type runoff model (HBV) in ungauged catchments. Based on the assumption that only a limited number of runoff measurements can be taken, we sought the optimal sampling strategy (i.e. when to measure the streamflow) to obtain the most informative data for constraining the runoff model. We used twenty gauged catchments across the eastern US, treated them as if they were ungauged, and applied different runoff sampling strategies. All tested strategies consisted of twelve runoff measurements within one year and ranged from simply using monthly flow maxima to a more complex selection of observation times. In each case the twelve runoff measurements were used to select the 100 best parameter sets in a Monte Carlo calibration approach. Runoff simulations using these 'informed' parameter sets were then evaluated for an independent validation period in terms of the Nash-Sutcliffe efficiency of the hydrograph and the mean absolute relative error of the flow-duration curve. Model performance measures were normalized by relating them to an upper and a lower benchmark representing a well-informed and an uninformed model calibration, respectively. The hydrographs were best simulated with strategies that included high runoff magnitudes, whereas the flow-duration curves were generally better estimated with strategies that captured low and mean flows. A sampling strategy covering the full range of runoff magnitudes enabled hydrograph and flow-duration curve simulations close to a well-informed model calibration. The differences among such full-range strategies were small, indicating that the exact choice of strategy may be less crucial. Our study corroborates the information value of a small number of strategically selected runoff measurements for simulating runoff with a bucket-type runoff model in almost ungauged catchments.
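    The calibration scheme described above, drawing many random parameter sets, scoring each against only twelve strategically timed runoff measurements, and keeping the 100 best, can be sketched with a toy one-parameter linear-reservoir model. The model, the synthetic forcing, and the "monthly rainfall maximum" sampling rule below are illustrative assumptions, not the HBV setup of the study:

```python
import random

random.seed(0)

# Synthetic daily forcing: 365 days of rainfall (mm/day).
rain = [max(0.0, random.gauss(2.0, 4.0)) for _ in range(365)]

def bucket_model(k, rain):
    """Toy linear-reservoir runoff model: storage S, daily outflow Q = k * S."""
    S, q = 10.0, []
    for p in rain:
        S += p
        out = k * S
        S -= out
        q.append(out)
    return q

# Pretend the catchment is gauged with k_true, but we only "measure" runoff
# on 12 strategic days: here, the day of each month's rainfall maximum.
k_true = 0.3
q_obs = bucket_model(k_true, rain)
month_starts = range(0, 360, 30)
sample_days = [max(range(m, m + 30), key=lambda d: rain[d]) for m in month_starts]

def score(k):
    """Mean absolute error at the 12 measurement days only."""
    q = bucket_model(k, rain)
    return sum(abs(q[d] - q_obs[d]) for d in sample_days) / len(sample_days)

# Monte Carlo calibration: draw random parameters, keep the 100 best.
candidates = [random.uniform(0.01, 0.99) for _ in range(1000)]
ranked = sorted(candidates, key=score)
best_100 = ranked[:100]
print(f"best k estimates span [{min(best_100):.3f}, {max(best_100):.3f}]")
```

    Swapping the `sample_days` rule (e.g. low-flow days instead of monthly maxima) is the analogue of changing the sampling strategy compared in the study.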

  19. Advanced EMT and Phasor-Domain Hybrid Simulation with Simulation Mode Switching Capability for Transmission and Distribution Systems

    DOE PAGES

    Huang, Qiuhua; Vittal, Vijay

    2018-05-09

    Conventional electromagnetic transient (EMT) and phasor-domain hybrid simulation approaches presently exist for transmission system level studies. Their simulation efficiency is generally constrained by the EMT simulation. With an increasing number of distributed energy resources and non-conventional loads being installed in distribution systems, it is imperative to extend hybrid simulation to distribution systems and to integrated transmission and distribution systems. It is equally important to improve simulation efficiency as the modeling scope and complexity of the detailed system in the EMT simulation increase. To meet both requirements, this paper introduces an advanced EMT and phasor-domain hybrid simulation approach with two main features: 1) a comprehensive phasor-domain modeling framework that supports positive-sequence, three-sequence, three-phase and mixed three-sequence/three-phase representations, and 2) a robust and flexible simulation mode switching scheme. The developed scheme enables switching from hybrid simulation mode back to pure phasor-domain dynamic simulation mode to achieve significantly improved simulation efficiency. The proposed method has been tested on integrated transmission and distribution systems. The results show that, with the developed simulation switching feature, the total computational time is significantly reduced compared to running the hybrid simulation for the whole simulation period, while maintaining good simulation accuracy.

  1. Algorithms and architecture for multiprocessor based circuit simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deutsch, J.T.

    Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch-level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell-level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method that uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective-trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.

  2. Interconnection-wide hour-ahead scheduling in the presence of intermittent renewables and demand response: A surplus maximizing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned

    This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus subject to transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection, apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost, respectively. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B, while increasing the annual surplus by 9.32B in comparison to the standalone case.

  3. A Study of Interactions between Mixing and Chemical Reaction Using the Rate-Controlled Constrained-Equilibrium Method

    NASA Astrophysics Data System (ADS)

    Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed

    2016-10-01

    The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for constraint potentials, density and temperature, which allows mixing to be taken into account alongside chemical reaction without operator splitting. RCCE is a dimension-reduction method for chemical kinetics based on the laws of thermodynamics. It describes the time evolution of reacting systems by a series of constrained-equilibrium states determined by the RCCE constraints; the full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. RCCE is applied to a spatially homogeneous, constant-pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprising 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed the detailed kinetics model (DKM). RCCE accurately predicts combustion in the PaSR at different mixing intensities. The method also demonstrates reduced numerical stiffness and overall computational cost compared to DKM.
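    The constrained-equilibrium step at the heart of RCCE, maximizing entropy subject to the instantaneous constraint values, can be illustrated on a toy system. With entropy S = -Σ x_i ln x_i, normalization, and one linear constraint Σ e_i x_i = E, the maximizer has the Gibbs form x_i ∝ exp(-λ e_i), and the Lagrange multiplier λ follows from a one-dimensional root find. The three "species", their e values, and the target E below are arbitrary illustrative choices, not the 12-constraint methane mechanism of the paper:

```python
import math

def constrained_equilibrium(e, E_target, lam_lo=-50.0, lam_hi=50.0, tol=1e-10):
    """Maximize S = -sum(x_i ln x_i) subject to sum(x_i) = 1 and
    sum(e_i x_i) = E_target.  The maximizer has the Gibbs form
    x_i ~ exp(-lam * e_i); bisect on the Lagrange multiplier lam
    until the linear constraint is satisfied."""
    def mean_e(lam):
        w = [math.exp(-lam * ei) for ei in e]
        Z = sum(w)
        return sum(ei * wi for ei, wi in zip(e, w)) / Z

    # mean_e is strictly decreasing in lam, so bisection converges:
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if mean_e(mid) > E_target:
            lam_lo = mid
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    w = [math.exp(-lam * ei) for ei in e]
    Z = sum(w)
    return [wi / Z for wi in w]

# Three-"species" toy system with a constrained mean "energy" of 0.6:
x = constrained_equilibrium(e=[0.0, 1.0, 2.0], E_target=0.6)
print([round(xi, 4) for xi in x])
```

    In RCCE proper there are several such constraints (elements, total moles, active-valence pools, etc.), so λ becomes a vector solved by a multidimensional Newton iteration rather than scalar bisection.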

  5. Commensurate comparisons of models with energy budget observations reveal consistent climate sensitivities

    NASA Astrophysics Data System (ADS)

    Armour, K.

    2017-12-01

    Global energy budget observations have been widely used to constrain the effective, or instantaneous climate sensitivity (ICS), producing median estimates around 2°C (Otto et al. 2013; Lewis & Curry 2015). A key question is whether the comprehensive climate models used to project future warming are consistent with these energy budget estimates of ICS. Yet, performing such comparisons has proven challenging. Within models, values of ICS robustly vary over time, as surface temperature patterns evolve with transient warming, and are generally smaller than the values of equilibrium climate sensitivity (ECS). Naively comparing values of ECS in CMIP5 models (median of about 3.4°C) to observation-based values of ICS has led to the suggestion that models are overly sensitive. This apparent discrepancy can partially be resolved by (i) comparing observation-based values of ICS to model values of ICS relevant for historical warming (Armour 2017; Proistosescu & Huybers 2017); (ii) taking into account the "efficacies" of non-CO2 radiative forcing agents (Marvel et al. 2015); and (iii) accounting for the sparseness of historical temperature observations and differences in sea-surface temperature and near-surface air temperature over the oceans (Richardson et al. 2016). Another potential source of discrepancy is a mismatch between observed and simulated surface temperature patterns over recent decades, due to either natural variability or model deficiencies in simulating historical warming patterns. The nature of the mismatch is such that simulated patterns can lead to more positive radiative feedbacks (higher ICS) relative to those engendered by observed patterns. The magnitude of this effect has not yet been addressed. Here we outline an approach to perform fully commensurate comparisons of climate models with global energy budget observations that take all of the above effects into account. 
We find that when apples-to-apples comparisons are made, values of ICS in models are consistently in good agreement with values of ICS inferred from global energy budget constraints. This suggests that the current generation of coupled climate models is not overly sensitive. However, since global energy budget observations do not constrain ECS, it is less certain whether model ECS values are realistic.
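    The energy-budget estimate of ICS referenced above follows, in the Otto et al. (2013) formulation, from ICS = F_2x ΔT / (ΔF - ΔN), where ΔT is the change in global mean temperature, ΔF the change in radiative forcing, ΔN the change in top-of-atmosphere energy imbalance, and F_2x the forcing from a doubling of CO2. A minimal sketch; the numerical values are illustrative round numbers of the right magnitude, not those of any particular study:

```python
def energy_budget_ics(dT, dF, dN, F2x=3.44):
    """Effective (instantaneous) climate sensitivity from the global energy
    budget: ICS = F2x * dT / (dF - dN).
    dT: warming (K); dF: radiative forcing change (W/m^2);
    dN: change in top-of-atmosphere imbalance (W/m^2);
    F2x: forcing for doubled CO2 (W/m^2)."""
    return F2x * dT / (dF - dN)

# Illustrative inputs only; chosen to land near the ~2 K median estimates
# cited in the abstract.
ics = energy_budget_ics(dT=0.75, dF=1.95, dN=0.65)
print(f"ICS ~ {ics:.2f} K")
```

    The points (i)-(iii) in the abstract amount to adjusting dT, dF, and dN (and the model quantity being compared) so that both sides of the comparison are computed the same way.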

  6. Combined micro and macro geodynamic modelling of mantle flow: methods, potentialities and limits.

    NASA Astrophysics Data System (ADS)

    Faccenda, M.

    2015-12-01

    Over the last few years, geodynamic simulations aiming at reconstructing the Earth's internal dynamics have increasingly attempted to link processes occurring at the micro scale (i.e., strain-induced lattice preferred orientation (LPO) of crystal aggregates) and the macro scale (2D/3D mantle convection). As a major outcome, such a combined approach yields predictions of the modelled region's elastic properties that, in turn, can be used to perform synthetic seismological experiments. By comparison with observables, the geodynamic simulations can then be considered a good numerical analogue of specific tectonic settings, constraining their deep structure and recent tectonic evolution. In this contribution, I will discuss recent methodologies, potentialities and current limits of combined micro- and macro-flow simulations, with particular attention to convergent margins, whose dynamics and deep structure are still the object of extensive study.

  7. Visualization in mechanics: the dynamics of an unbalanced roller

    NASA Astrophysics Data System (ADS)

    Cumber, Peter S.

    2017-04-01

    It is well known that mechanical engineering students often find mechanics a difficult area to grasp. This article describes a system of equations describing the motion of a balanced and an unbalanced roller constrained by a pivot arm. A wide range of dynamics can be simulated with the model. The equations of motion are embedded in a graphical user interface for its numerical solution in MATLAB. This allows a student's focus to be on the influence of different parameters on the system dynamics. The simulation tool can be used as a dynamics demonstrator in a lecture or as an educational tool driven by the imagination of the student. By way of demonstration the simulation tool has been applied to a range of roller-pivot arm configurations. In addition, approximations to the equations of motion are explored and a second-order model is shown to be accurate for a limited range of parameters.

  8. Solar Corona Simulation Model With Positivity-preserving Property

    NASA Astrophysics Data System (ADS)

    Feng, X. S.

    2015-12-01

    Positivity preservation is one of the crucial problems in solar corona simulation. In numerical simulations of low plasma-β regions, keeping density and pressure positive is essential for obtaining physically sound solutions. In the present paper, we utilize the maximum-principle-preserving flux-limiting technique to develop a class of second-order positivity-preserving Godunov finite volume HLL methods for the solar wind MHD equations. Built on a first-order positivity-preserving Lax-Friedrichs building block, our schemes, under the constrained transport (CT) and generalized Lagrange multiplier (GLM) framework, achieve high-order accuracy, a discrete divergence-free condition, and positivity of the numerical solution simultaneously, without extra CFL constraints. Numerical results for four Carrington rotations during the declining, rising, minimum and maximum solar activity phases are provided to demonstrate the performance of the proposed method in modeling small plasma-β regions while preserving positivity.

  9. Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.

    PubMed

    Chen, C W; Chen, D Z

    2001-11-01

    Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under an increasing-monotonicity condition. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
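    The exponential weight method mentioned above enforces monotonicity structurally: writing each weight as exp(u) keeps it positive, and a composition of increasing activations with positive weights is non-decreasing in the input, whatever values the unconstrained parameters u take during training. A minimal untrained sketch; the network size and parameter values are arbitrary assumptions:

```python
import math

def monotone_net(x, u_hidden, b_hidden, u_out, b_out):
    """Tiny 1-H-1 feedforward net whose hidden and output weights are
    exp(u) > 0.  Because tanh is increasing and all weights are positive,
    the output is guaranteed to be a non-decreasing function of x."""
    h = [math.tanh(math.exp(u) * x + b) for u, b in zip(u_hidden, b_hidden)]
    return sum(math.exp(u) * hj for u, hj in zip(u_out, h)) + b_out

# Arbitrary (untrained) parameters: monotonicity holds by construction,
# so gradient descent on u never has to leave the feasible set.
u_h, b_h = [0.2, -1.0, 0.5], [0.0, 1.0, -0.5]
u_o, b_o = [-0.3, 0.1, -2.0], 0.4

xs = [i / 10 for i in range(-30, 31)]
ys = [monotone_net(x, u_h, b_h, u_o, b_o) for x in xs]
print("non-decreasing:", all(a <= b for a, b in zip(ys, ys[1:])))
```

    This is the appeal over penalty methods: the constraint can never be violated, so no penalty weight has to be tuned.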

  10. The role of bias in simulation of the Indian monsoon and its relationship to predictability

    NASA Astrophysics Data System (ADS)

    Kelly, P.

    2016-12-01

    Confidence in future projections of how climate change will affect the Indian monsoon is currently limited by, among other things, model biases: the systematic error in simulating the mean present-day climate. An important question in seamless prediction involves the role of the mean state. How much of the prediction error in imperfect models stems from a biased mean state (itself a result of many interacting process errors), and how much stems from the flow dependence of processes during an oscillation or variation we are trying to predict? Using simple but effective nudging techniques, we are able to address this question in a clean and incisive framework that teases apart the roles of the mean state vs. transient flow dependence in constraining predictability. The role of bias in model fidelity of simulations of the Indian monsoon is investigated in CAM5, and the relationship to predictability in remote regions in the "free" (non-nudged) domain is explored.

  11. Effect of Substrate Wetting on the Morphology and Dynamics of Phase Separating Multi-Component Mixture

    NASA Astrophysics Data System (ADS)

    Goyal, Abheeti; Toschi, Federico; van der Schoot, Paul

    2017-11-01

    We study the morphological evolution and dynamics of phase separation of a multi-component mixture in a thin film constrained by a substrate. Specifically, we explore surface-directed spinodal decomposition of a multi-component mixture numerically using free-energy lattice Boltzmann (LB) simulations. The distinguishing feature of this model over the Shan-Chen (SC) model is that we have explicit and independent control over the free energy functional and the equation of state (EoS) of the system. This vastly expands the range of physical systems that can be realistically simulated by LB methods. We investigate the effect of composition, film thickness and substrate wetting on the phase morphology and the mechanism of growth in the vicinity of the substrate. The phase morphology and averaged domain size near the substrate fluctuate greatly due to wetting of the substrate, in both the parallel and perpendicular directions. We also describe how the model presented here can be extended to include an arbitrary number of fluid components.

  12. The ShakeOut earthquake source and ground motion simulations

    USGS Publications Warehouse

    Graves, R.W.; Houston, Douglas B.; Hudnut, K.W.

    2011-01-01

    The ShakeOut Scenario is premised upon the detailed description of a hypothetical Mw 7.8 earthquake on the southern San Andreas Fault and the associated simulated ground motions. The main features of the scenario, such as its endpoints, magnitude, and gross slip distribution, were defined through expert opinion and incorporated information from many previous studies. Slip at smaller length scales, rupture speed, and rise time were constrained using empirical relationships and experience gained from previous strong-motion modeling. Using this rupture description and a 3-D model of the crust, broadband ground motions were computed over a large region of Southern California. The largest simulated peak ground acceleration (PGA) and peak ground velocity (PGV) generally range from 0.5 to 1.0 g and 100 to 250 cm/s, respectively, with the waveforms exhibiting strong directivity and basin effects. Use of a slip-predictable model results in a high static stress drop event and produces ground motions somewhat higher than median level predictions from NGA ground motion prediction equations (GMPEs).

  13. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study

    PubMed Central

    Kim, Minjung; Lamont, Andrea E.; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M. Lee

    2015-01-01

    Regression mixture models are a novel approach for modeling heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. This simulation study investigated the impact of an equality constraint on the residual variances across latent classes. We examine the consequences of constraining the residual variances for class enumeration (finding the true number of latent classes) and for parameter estimates, under a number of simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. Results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. An inappropriate equality constraint on the residual variances also greatly impacted estimated class sizes and showed the potential to greatly impact parameter estimates in each class. Results suggest that it is important to make assumptions about residual variances with care and to carefully report the assumptions made. PMID:26139512

  14. A statistical motion model based on biomechanical simulations for data fusion during image-guided prostate interventions.

    PubMed

    Hu, Yipeng; Morgan, Dominic; Ahmed, Hashim Uddin; Pendsé, Doug; Sahu, Mahua; Allen, Clare; Emberton, Mark; Hawkes, David; Barratt, Dean

    2008-01-01

    A method is described for generating a patient-specific, statistical motion model (SMM) of the prostate gland. Finite element analysis (FEA) is used to simulate the motion of the gland using an ultrasound-based 3D FE model over a range of plausible boundary conditions and soft-tissue properties. By applying principal component analysis to the displacements of the FE mesh node points inside the gland, the simulated deformations are then used as training data to construct the SMM. The SMM is used to both predict the displacement field over the whole gland and constrain a deformable surface registration algorithm, given only a small number of target points on the surface of the deformed gland. Using 3D transrectal ultrasound images of the prostates of five patients, acquired before and after imposing a physical deformation, to evaluate the accuracy of predicted landmark displacements, the mean target registration error was found to be less than 1.9 mm.
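    The SMM construction described above, principal component analysis applied to a set of simulated displacement fields, can be sketched in pure Python. Here random mixes of two fixed "deformation modes" stand in for the FE training simulations, and power iteration extracts the leading principal component; the dimensions, modes, and noise level are all illustrative assumptions, not the authors' FE data:

```python
import random

random.seed(1)

DIM = 6          # e.g. 3 mesh nodes x 2 displacement components
N_SIM = 50       # number of simulated deformations in the training set

# Two fixed "deformation modes" standing in for FE-simulated motion patterns.
mode_a = [1.0, 0.5, -0.5, 1.0, 0.0, -1.0]
mode_b = [0.0, 1.0, 1.0, -0.5, 0.5, 0.5]

def simulate_displacement():
    """Stand-in for one FE run: a random mix of the two modes plus noise,
    with mode_a carrying most of the variance."""
    a, b = random.gauss(0, 3.0), random.gauss(0, 1.0)
    return [a * ma + b * mb + random.gauss(0, 0.05)
            for ma, mb in zip(mode_a, mode_b)]

train = [simulate_displacement() for _ in range(N_SIM)]
mean = [sum(col) / N_SIM for col in zip(*train)]
centered = [[x - m for x, m in zip(row, mean)] for row in train]

# Sample covariance matrix of the displacement vectors:
cov = [[sum(r[i] * r[j] for r in centered) / (N_SIM - 1) for j in range(DIM)]
       for i in range(DIM)]

def power_iteration(A, n_iter=200):
    """Leading eigenvector of a symmetric positive semidefinite matrix."""
    v = [1.0] * len(A)
    for _ in range(n_iter):
        w = [sum(A[i][j] * v[j] for j in range(len(A))) for i in range(len(A))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

pc1 = power_iteration(cov)

# The first principal mode should be dominated by mode_a (higher variance):
na = sum(x * x for x in mode_a) ** 0.5
cos = abs(sum(p * ma / na for p, ma in zip(pc1, mode_a)))
print(f"|cos(pc1, mode_a)| = {cos:.2f}")
```

    In the paper the training displacements come from FE simulations over varied boundary conditions and tissue properties, and the retained modes both predict the full displacement field and regularize the surface registration; power iteration is used here only to keep the sketch free of external linear-algebra dependencies.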

  15. DATA-CONSTRAINED CORONAL MASS EJECTIONS IN A GLOBAL MAGNETOHYDRODYNAMICS MODEL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, M.; Manchester, W. B.; Van der Holst, B.

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO/LASCO, and STEREO/COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using the Gibson-Low configuration, we present a method to derive Gibson-Low flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. A test result with CMEs launched with different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).

  16. A global wetland methane emissions and uncertainty dataset for atmospheric chemical transport models (WetCHARTs version 1.0)

    NASA Astrophysics Data System (ADS)

    Bloom, A. Anthony; Bowman, Kevin W.; Lee, Meemong; Turner, Alexander J.; Schroeder, Ronny; Worden, John R.; Weidner, Richard; McDonald, Kyle C.; Jacob, Daniel J.

    2017-06-01

    Wetland emissions remain one of the principal sources of uncertainty in the global atmospheric methane (CH4) budget, largely due to poorly constrained process controls on CH4 production in waterlogged soils. Process-based estimates of global wetland CH4 emissions and their associated uncertainties can provide crucial prior information for model-based top-down CH4 emission estimates. Here we construct a global wetland CH4 emission model ensemble for use in atmospheric chemical transport models (WetCHARTs version 1.0). Our 0.5° × 0.5° resolution model ensemble is based on satellite-derived surface water extent and precipitation reanalyses, nine heterotrophic respiration simulations (eight carbon cycle models and a data-constrained terrestrial carbon cycle analysis) and three temperature dependence parameterizations for the period 2009-2010; an extended ensemble subset based solely on precipitation and the data-constrained terrestrial carbon cycle analysis is derived for the period 2001-2015. We incorporate the mean of the full and extended model ensembles into GEOS-Chem and compare the model against surface measurements of atmospheric CH4; the model performance (site-level and zonal mean anomaly residuals) compares favourably against published wetland CH4 emissions scenarios. We find that uncertainties in carbon decomposition rates and wetland extent together account for more than 80 % of the uncertainty in the timing, magnitude and seasonal variability of wetland CH4 emissions, although uncertainty in the temperature dependence of the CH4 : C ratio is a significant contributor to seasonal variations in mid-latitude wetland CH4 emissions. The combination of satellite data, carbon cycle models and temperature dependence parameterizations provides a physically informed structural a priori uncertainty that is critical for top-down estimates of wetland CH4 fluxes. Specifically, our ensemble can provide enhanced information on the prior CH4 emission uncertainty and the error covariance structure, as well as a means for using posterior flux estimates and their uncertainties to quantitatively constrain the biogeochemical process controls of global wetland CH4 emissions.
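
    The ensemble construction can be illustrated with a minimal sketch. The multiplicative form below (scaled respiration modulated by wetland extent and a Q10 temperature dependence) and all parameter values are illustrative assumptions, not the exact WetCHARTs formulation.

```python
import numpy as np

def wetland_ch4(area_frac, resp, temp, q10, scale=1.0, t_ref=0.0):
    """Grid-cell CH4 emission: scaled heterotrophic respiration modulated
    by wetland extent and an exponential (Q10) temperature dependence."""
    return scale * area_frac * resp * q10 ** ((temp - t_ref) / 10.0)

# Build a small ensemble over 2 respiration fields x 3 Q10 values.
rng = np.random.default_rng(1)
area = rng.uniform(0, 0.3, size=10)           # wetland fraction per cell
temps = rng.uniform(-5, 25, size=10)          # surface temperature, deg C
resps = [rng.uniform(50, 150, size=10) for _ in range(2)]
ensemble = np.array([wetland_ch4(area, r, temps, q)
                     for r in resps for q in (1.5, 2.0, 3.0)])
ensemble_mean = ensemble.mean(axis=0)         # prior for a transport model
ensemble_sd = ensemble.std(axis=0)            # structural prior uncertainty
```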

  17. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the Social Emotional Optimization Algorithm (SEOA), a swarm intelligence technique that simulates human behavior guided by emotion, is used to solve this class of problems. Simulation results show that the proposed algorithm is effective and efficient for nonlinear constrained programming problems.
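
    SEOA's emotion-guided update rules are not reproduced here. As a hedged illustration of the problem setting only, the sketch below handles a nonlinear constraint inside a generic stochastic swarm search via a quadratic penalty; all parameters and the update rule are invented.

```python
import random

def penalized(f, g, x, rho=1e3):
    """Objective plus quadratic penalty for violating g(x) <= 0."""
    return f(x) + rho * max(0.0, g(x)) ** 2

def swarm_minimize(f, g, dim, n=30, iters=200, seed=0):
    rng = random.Random(seed)
    pts = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    best = min(pts, key=lambda x: penalized(f, g, x))
    for _ in range(iters):
        for i, x in enumerate(pts):
            # Pull each point toward the current best, with random jitter.
            cand = [xi + 0.5 * (bi - xi) + rng.gauss(0, 0.1)
                    for xi, bi in zip(x, best)]
            if penalized(f, g, cand) < penalized(f, g, x):
                pts[i] = cand
        best = min(pts + [best], key=lambda x: penalized(f, g, x))
    return best

# Minimize x^2 + y^2 subject to x + y >= 1 (optimum at x = y = 0.5).
sol = swarm_minimize(lambda x: x[0] ** 2 + x[1] ** 2,
                     lambda x: 1.0 - (x[0] + x[1]), dim=2)
```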

  18. Modeling Coronal Mass Ejections with EUHFORIA: A Parameter Study of the Gibson-Low Flux Rope Model using Multi-Viewpoint Observations

    NASA Astrophysics Data System (ADS)

    Verbeke, C.; Asvestari, E.; Scolini, C.; Pomoell, J.; Poedts, S.; Kilpua, E.

    2017-12-01

    Coronal mass ejections (CMEs) are among the main drivers of coronal and interplanetary dynamics. Understanding their origin and evolution from the Sun to the Earth is crucial for determining their impact on the Earth and society. One of the key parameters that determine the geo-effectiveness of a CME is its internal magnetic configuration. We present a detailed parameter study of the Gibson-Low flux rope model, focusing on changes in the input parameters and how these changes affect the characteristics of the CME at Earth. Recently, the Gibson-Low flux rope model has been implemented into the inner heliosphere model EUHFORIA, a magnetohydrodynamics forecasting model of large-scale dynamics from 0.1 AU up to 2 AU. Coronagraph observations can be used to constrain the kinematics and morphology of the flux rope, but one of the key parameters, the magnetic field, is difficult to determine directly from observations. In this work, we approach the problem by conducting a parameter study in which flux ropes with varying magnetic configurations are simulated. We then use the obtained dataset to look for signatures in imaging and in-situ observations in order to find an empirical way of constraining the parameters related to the magnetic field of the flux rope. In particular, we focus on events observed by at least two spacecraft (STEREO + L1) in order to discuss the merits of using observations from multiple viewpoints to constrain the parameters.

  19. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    NASA Astrophysics Data System (ADS)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method. This procedure bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we use the full posterior parameter distribution only for generic nonprecessing binaries, drawing inferences away from the set of NR simulations used, via interpolation of a single scalar quantity (the marginalized log likelihood, ln L) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations, and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤ 2 and l ≤ 3 harmonic modes. Using the l ≤ 3 higher modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters for equal mass, zero spin, GW150914-like, and unequal mass, precessing spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand the systematic and statistical error. This method allows us to use higher order modes from numerical relativity simulations to better constrain the black hole binary parameters.

  20. Models of volcanic eruption hazards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohletz, K.H.

    1992-01-01

    Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water in eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.

  1. The Impact of Parameterized Convection on Climatological Precipitation in Atmospheric Global Climate Models

    NASA Astrophysics Data System (ADS)

    Maher, Penelope; Vallis, Geoffrey K.; Sherwood, Steven C.; Webb, Mark J.; Sansom, Philip G.

    2018-04-01

    Convective parameterizations are widely believed to be essential for realistic simulations of the atmosphere. However, their deficiencies also result in model biases. The role of convection schemes in modern atmospheric models is examined using Selected Process On/Off Klima Intercomparison Experiment simulations without parameterized convection and forced with observed sea surface temperatures. Convection schemes are not required for reasonable climatological precipitation. However, they are essential for reasonable daily precipitation and constraining extreme daily precipitation that otherwise develops. Systematic effects on lapse rate and humidity are likewise modest compared with the intermodel spread. Without parameterized convection Kelvin waves are more realistic. An unexpectedly large moist Southern Hemisphere storm track bias is identified. This storm track bias persists without convection schemes, as does the double Intertropical Convergence Zone and excessive ocean precipitation biases. This suggests that model biases originate from processes other than convection or that convection schemes are missing key processes.

  2. Models of volcanic eruption hazards

    NASA Astrophysics Data System (ADS)

    Wohletz, K. H.

    Volcanic eruptions pose an ever present but poorly constrained hazard to life and property for geothermal installations in volcanic areas. Because eruptions occur sporadically and may limit field access, quantitative and systematic field studies of eruptions are difficult to complete. Circumventing this difficulty, laboratory models and numerical simulations are pivotal in building our understanding of eruptions. For example, the results of fuel-coolant interaction experiments show that magma-water interaction controls many eruption styles. Applying these results, increasing numbers of field studies now document and interpret the role of external water in eruptions. Similarly, numerical simulations solve the fundamental physics of high-speed fluid flow and give quantitative predictions that elucidate the complexities of pyroclastic flows and surges. A primary goal of these models is to guide geologists in searching for critical field relationships and making their interpretations. Coupled with field work, modeling is beginning to allow more quantitative and predictive volcanic hazard assessments.

  3. Sensitivity of Global Terrestrial Gross Primary Production to Hydrologic States Simulated by the Community Land Model Using Two Runoff Parameterizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Huimin; Huang, Maoyi; Leung, Lai-Yung R.

    2014-09-01

    The terrestrial water and carbon cycles interact strongly at various spatio-temporal scales. To elucidate how hydrologic processes may influence carbon cycle processes, differences in terrestrial carbon cycle simulations induced by structural differences in two runoff generation schemes were investigated using the Community Land Model 4 (CLM4). Simulations were performed with runoff generation using the default TOPMODEL-based and the Variable Infiltration Capacity (VIC) model approaches under the same experimental protocol. The comparisons showed that differences in the simulated gross primary production (GPP) are mainly attributed to differences in the simulated leaf area index (LAI) rather than soil moisture availability. More specifically, differences in runoff simulations can influence LAI through changes in soil moisture, soil temperature, and their seasonality that affect the onset of the growing season and the subsequent dynamic feedbacks between terrestrial water, energy, and carbon cycles. As a result of a relative difference of 36% in global mean total runoff between the two models and subsequent changes in soil moisture, soil temperature, and LAI, the simulated global mean GPP differs by 20.4%. However, the relative difference in the global mean net ecosystem exchange between the two models is small (2.1%) due to competing effects on total mean ecosystem respiration and other fluxes, although large regional differences can still be found. Our study highlights the significant interactions among the water, energy, and carbon cycles and the need for reducing uncertainty in the hydrologic parameterization of land surface models to better constrain carbon cycle modeling.

  4. Dynamics and control for Constrained Multibody Systems modeled with Maggi's equation: Application to Differential Mobile Robots, Part II

    NASA Astrophysics Data System (ADS)

    Amengonu, Yawo H.; Kakad, Yogendra P.

    2014-07-01

    Quasivelocity techniques were applied to derive the dynamics of a Differential Wheeled Mobile Robot (DWMR) in the companion paper. The present paper formulates a control system design for trajectory tracking of this class of robots. The method develops a feedback linearization technique for the nonlinear system using a dynamic extension algorithm. The effectiveness of the nonlinear controller is illustrated with a simulation example.

  5. Broadband ground-motion simulation using a hybrid approach

    USGS Publications Warehouse

    Graves, R.W.; Pitarka, A.

    2010-01-01

    This paper describes refinements to the hybrid broadband ground-motion simulation methodology of Graves and Pitarka (2004), which combines a deterministic approach at low frequencies (f < 1 Hz) with a semistochastic approach at high frequencies (f > 1 Hz). In our approach, fault rupture is represented kinematically and incorporates spatial heterogeneity in slip, rupture speed, and rise time. The prescribed slip distribution is constrained to follow an inverse wavenumber-squared fall-off and the average rupture speed is set at 80% of the local shear-wave velocity, which is then adjusted such that the rupture propagates faster in regions of high slip and slower in regions of low slip. We use a Kostrov-like slip-rate function having a rise time proportional to the square root of slip, with the average rise time across the entire fault constrained empirically. Recent observations from large surface rupturing earthquakes indicate a reduction of rupture propagation speed and lengthening of rise time in the near surface, which we model by applying a 70% reduction of the rupture speed and increasing the rise time by a factor of 2 in a zone extending from the surface to a depth of 5 km. We demonstrate the fidelity of the technique by modeling the strong-motion recordings from the Imperial Valley, Loma Prieta, Landers, and Northridge earthquakes.
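
    The kinematic scaling rules quoted above can be sketched as follows: rupture speed is taken as 80% of the local shear-wave velocity, rise time is made proportional to the square root of slip, and the quoted near-surface adjustments are interpreted here as a factor of 0.7 on speed and 2 on rise time above 5 km depth. The proportionality constant k and all sample values are placeholders; in the actual method the average rise time is constrained empirically, so treat this only as an illustration of the scaling.

```python
import numpy as np

def rupture_params(slip, vs, depth_km, k=1.0):
    """Kinematic scaling: vr = 0.8 * Vs, rise time = k * sqrt(slip),
    with the shallow (depth < 5 km) zone slowed and lengthened."""
    vrup = 0.8 * vs
    trise = k * np.sqrt(slip)
    shallow = depth_km < 5.0
    vrup = np.where(shallow, 0.7 * vrup, vrup)
    trise = np.where(shallow, 2.0 * trise, trise)
    return vrup, trise

slip = np.array([1.0, 4.0])        # slip, m
vs = np.array([3000.0, 3500.0])    # local shear-wave velocity, m/s
depth = np.array([2.0, 10.0])      # subfault depth, km
vr, tr = rupture_params(slip, vs, depth)
# Shallow point: vr = 0.7 * 0.8 * 3000 = 1680 m/s, tr = 2 * sqrt(1) = 2 s
# Deep point:    vr = 0.8 * 3500 = 2800 m/s,       tr = sqrt(4) = 2 s
```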

  6. Uncertainties in the Modelled CO2 Threshold for Antarctic Glaciation

    NASA Technical Reports Server (NTRS)

    Gasson, E.; Lunt, D. J.; DeConto, R.; Goldner, A.; Heinemann, M.; Huber, M.; LeGrande, A. N.; Pollard, D.; Sagoo, N.; Siddall, M.

    2014-01-01

    The frequently cited atmospheric CO2 threshold for the onset of Antarctic glaciation of approximately 780 parts per million by volume is based on the study of DeConto and Pollard (2003) using an ice sheet model and the GENESIS climate model. Proxy records suggest that atmospheric CO2 concentrations passed through this threshold across the Eocene-Oligocene transition approximately 34 million years ago. However, atmospheric CO2 concentrations may have been close to this threshold earlier than this transition, which is used by some to suggest the possibility of Antarctic ice sheets during the Eocene. Here we investigate the climate model dependency of the threshold for Antarctic glaciation by performing offline ice sheet model simulations using the climate from 7 different climate models with Eocene boundary conditions (HadCM3L, CCSM3, CESM1.0, GENESIS, FAMOUS, ECHAM5 and GISS_ER). These climate simulations are sourced from a number of independent studies, and as such the boundary conditions, which are poorly constrained during the Eocene, are not identical between simulations. The results of this study suggest that the atmospheric CO2 threshold for Antarctic glaciation is highly dependent on the climate model used and the climate model configuration. A large discrepancy between the climate model and ice sheet model grids for some simulations leads to a strong sensitivity to the lapse rate parameter.

  7. Modeling nonstructural carbohydrate reserve dynamics in forest trees

    NASA Astrophysics Data System (ADS)

    Richardson, Andrew; Keenan, Trevor; Carbone, Mariah; Pederson, Neil

    2013-04-01

    Understanding the factors influencing the availability of nonstructural carbohydrate (NSC) reserves is essential for predicting the resilience of forests to climate change and environmental stress. However, carbon allocation processes remain poorly understood and many models either ignore NSC reserves, or use simple and untested representations of NSC allocation and pool dynamics. Using model-data fusion techniques, we combined a parsimonious model of forest ecosystem carbon cycling with novel field sampling and laboratory analyses of NSCs. Simulations were conducted for an evergreen conifer forest and a deciduous broadleaf forest in New England. We used radiocarbon methods based on the 14C "bomb spike" to estimate the age of NSC reserves, and used this to constrain the mean residence time of modeled NSCs. We used additional data, including tower-measured fluxes of CO2, soil and biomass carbon stocks, woody biomass increment, and leaf area index and litterfall, to further constrain the model's parameters and initial conditions. Incorporation of fast- and slow-cycling NSC pools improved the ability of the model to reproduce the measured interannual variability in woody biomass increment. We show how model performance varies according to model structure and total pool size, and we use novel diagnostic criteria, based on autocorrelation statistics of annual biomass growth, to evaluate the model's ability to correctly represent lags and memory effects.
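
    The fast- and slow-cycling NSC pools mentioned above can be sketched as two first-order reservoirs with distinct mean residence times. The pool sizes, turnover times and allocation split below are illustrative assumptions, not the fitted parameters of the study.

```python
def step_nsc(fast, slow, photosynthate, tau_fast=1.0, tau_slow=10.0,
             frac_fast=0.8, dt=1.0):
    """Advance the two NSC pools one Euler step (times in years).

    A fraction of new photosynthate enters the fast pool, the rest the
    slow pool; each pool turns over at rate 1/tau, and in a full model
    the outflow would supply growth and respiration."""
    fast += dt * (frac_fast * photosynthate - fast / tau_fast)
    slow += dt * ((1.0 - frac_fast) * photosynthate - slow / tau_slow)
    return fast, slow

fast, slow = 1.0, 5.0          # initial pool sizes (arbitrary units)
for _ in range(500):           # 50 years at dt = 0.1
    fast, slow = step_nsc(fast, slow, photosynthate=2.0, dt=0.1)
# Steady state: fast -> frac_fast * P * tau_fast = 1.6,
#               slow -> (1 - frac_fast) * P * tau_slow = 4.0
```

The slow pool's long residence time is what the radiocarbon ages constrain in the study: older mean NSC ages imply larger tau_slow.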

  8. Current Challenges in the First Principle Quantitative Modelling of the Lower Hybrid Current Drive in Tokamaks

    NASA Astrophysics Data System (ADS)

    Peysson, Y.; Bonoli, P. T.; Chen, J.; Garofalo, A.; Hillairet, J.; Li, M.; Qian, J.; Shiraiwa, S.; Decker, J.; Ding, B. J.; Ekedahl, A.; Goniche, M.; Zhai, X.

    2017-10-01

    The Lower Hybrid (LH) wave is widely used in existing tokamaks for tailoring the current density profile or extending the pulse duration to steady-state regimes. Its high efficiency makes it particularly attractive for a fusion reactor, leading to its consideration for this purpose in the ITER tokamak. Nevertheless, while the basics of the LH wave in tokamak plasmas are well known, quantitative modeling of experimental observations based on first principles remains a highly challenging exercise, despite the considerable numerical efforts achieved so far. In this context, a rigorous methodology must be carried out in the simulations to identify the minimum number of physical mechanisms that must be considered to reproduce shot-to-shot experimental observations and also scalings (density, power spectrum). Based on recent simulations carried out for the EAST, Alcator C-Mod and Tore Supra tokamaks, the state of the art in LH modeling is reviewed. The capability of fast electron bremsstrahlung, internal inductance li and LH driven current at zero loop voltage to together constrain LH simulations is discussed, as well as the need for further improvements (diagnostics, codes, LH model) for robust interpretative and predictive simulations.

  9. Discretely Integrated Condition Event (DICE) Simulation for Pharmacoeconomics.

    PubMed

    Caro, J Jaime

    2016-07-01

    Several decision-analytic modeling techniques are in use for pharmacoeconomic analyses. Discretely integrated condition event (DICE) simulation is proposed as a unifying approach that has been deliberately designed to meet the modeling requirements in a straightforward transparent way, without forcing assumptions (e.g., only one transition per cycle) or unnecessary complexity. At the core of DICE are conditions that represent aspects that persist over time. They have levels that can change and many may coexist. Events reflect instantaneous occurrences that may modify some conditions or the timing of other events. The conditions are discretely integrated with events by updating their levels at those times. Profiles of determinant values allow for differences among patients in the predictors of the disease course. Any number of valuations (e.g., utility, cost, willingness-to-pay) of conditions and events can be applied concurrently in a single run. A DICE model is conveniently specified in a series of tables that follow a consistent format and the simulation can be implemented fully in MS Excel, facilitating review and validation. DICE incorporates both state-transition (Markov) models and non-resource-constrained discrete event simulation in a single formulation; it can be executed as a cohort or a microsimulation; and deterministically or stochastically.
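
    The core DICE mechanics (persistent conditions with levels, discrete events that update them and reschedule other events) can be sketched with a minimal event queue. The disease example, event times and costs below are invented for illustration only.

```python
import heapq

def run_dice(horizon=10.0):
    """One simulated individual: conditions persist, events update them."""
    conditions = {"alive": 1, "severity": 0, "cost": 0.0}
    events = [(1.0, "progression"), (6.0, "death")]   # (time, event name)
    heapq.heapify(events)
    while events:
        t, name = heapq.heappop(events)
        if t > horizon or not conditions["alive"]:
            break
        if name == "progression":
            # The event discretely updates condition levels and accruals,
            # and reschedules itself while severity can still rise.
            conditions["severity"] += 1
            conditions["cost"] += 500.0
            if conditions["severity"] < 3:
                heapq.heappush(events, (t + 1.0, "progression"))
        elif name == "death":
            conditions["alive"] = 0
    return conditions

state = run_dice()
```

Running many such individuals with sampled profiles gives the microsimulation mode; replacing the sampled times with expected flows recovers a cohort-style calculation.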

  10. Automated parameter tuning applied to sea ice in a global climate model

    NASA Astrophysics Data System (ADS)

    Roach, Lettie A.; Tett, Simon F. B.; Mineter, Michael J.; Yamazaki, Kuniko; Rae, Cameron D.

    2018-01-01

    This study investigates the hypothesis that a significant portion of spread in climate model projections of sea ice is due to poorly-constrained model parameters. New automated methods for optimization are applied to historical sea ice in a global coupled climate model (HadCM3) in order to calculate the combination of parameters required to reduce the difference between simulation and observations to within the range of model noise. The optimized parameters result in a simulated sea-ice time series which is more consistent with Arctic observations throughout the satellite record (1980-present), particularly in the September minimum, than the standard configuration of HadCM3. Divergence from observed Antarctic trends and mean regional sea ice distribution reflects broader structural uncertainty in the climate model. We also find that the optimized parameters do not cause adverse effects on the model climatology. This simple approach provides evidence for the contribution of parameter uncertainty to spread in sea ice extent trends and could be customized to investigate uncertainties in other climate variables.
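
    The tuning loop described above can be caricatured in a few lines: search parameter space until the simulated series falls within the noise range of observations. The "model" below is a stand-in linear function, not HadCM3, and the search is a plain greedy random search rather than the paper's optimization methods.

```python
import numpy as np

years = np.arange(30)
rng = np.random.default_rng(2)
# Synthetic 'observations': a declining sea-ice extent series plus noise.
obs = 12.0 - 0.08 * years + rng.normal(0.0, 0.2, size=30)
noise_level = 0.2

def simulate(mean_extent, trend):
    """Stand-in 'climate model': extent as a linear function of time."""
    return mean_extent + trend * years

def cost(params):
    return float(np.sqrt(np.mean((simulate(*params) - obs) ** 2)))

best = (10.0, 0.0)             # initial parameter guess
best_cost = cost(best)
for _ in range(2000):
    cand = (best[0] + rng.normal(0.0, 0.1), best[1] + rng.normal(0.0, 0.01))
    c = cost(cand)
    if c < best_cost:
        best, best_cost = cand, c
# Accept the calibration once the misfit is within the noise range.
converged = best_cost <= 1.5 * noise_level
```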

  11. Constraints on Biogenic Emplacement of Crystalline Calcium Carbonate and Dolomite

    NASA Astrophysics Data System (ADS)

    Colas, B.; Clark, S. M.; Jacob, D. E.

    2015-12-01

    Amorphous calcium carbonate (ACC) is a biogenic precursor of the calcium carbonates forming the shells and skeletons of marine organisms, which are key components of the marine environment. Understanding carbonate formation is an essential prerequisite to quantifying the effect that climate change and pollution have on marine populations. Water is a critical component of the structure of ACC and the key component controlling the stability of the amorphous state. Addition of small amounts of magnesium (1-5% of the calcium content) is known to promote the stability of ACC, presumably through stabilization of the hydrogen bonding network; understanding this network is therefore fundamental to understanding the stability of ACC. Our approach is to use Monte-Carlo simulations constrained by X-ray and neutron scattering data to determine hydrogen bonding networks in ACC as a function of magnesium doping. We have developed a synthesis protocol to make ACC and have collected X-ray data suitable for determining Ca, Mg and O correlations, as well as neutron data, which give information on hydrogen/deuterium (since X-rays interact too weakly with hydrogen to constrain hydrogen atom positions on their own). The X-ray and neutron data are used to constrain reverse Monte-Carlo modelling of the ACC structure using the Empirical Potential Structure Refinement program, in order to yield a complete structural model for ACC including water molecule positions. We will present details of our sample synthesis and characterization methods, X-ray and neutron scattering data, and reverse Monte-Carlo simulation results, together with a discussion of the role of hydrogen bonding in ACC stability.
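
    The reverse Monte-Carlo idea can be illustrated with a toy 1D sketch: move particles at random and accept moves that bring a computed pair-distance histogram closer to a target "data" histogram, via a Metropolis rule on chi-squared. Real RMC refinement (e.g. in EPSR) works in 3D against S(Q) or g(r); everything below, including the temperature-like factor, is an invented illustration.

```python
import math
import random

def histogram(positions, bins, width):
    """Histogram of all pair distances (the 'data' RMC tries to match)."""
    h = [0] * bins
    for i, xi in enumerate(positions):
        for xj in positions[i + 1:]:
            b = int(abs(xi - xj) / width)
            if b < bins:
                h[b] += 1
    return h

def chi2(h, target):
    return sum((a - b) ** 2 for a, b in zip(h, target))

rng = random.Random(3)
# 'Experimental' target: a slightly disordered chain of 10 particles.
target = histogram([i + 0.05 * rng.gauss(0, 1) for i in range(10)],
                   bins=10, width=1.0)

pos = [rng.uniform(0, 10) for _ in range(10)]      # random starting model
cost = chi2(histogram(pos, 10, 1.0), target)
initial_cost = cost
for _ in range(5000):
    i = rng.randrange(10)
    trial = pos[:]
    trial[i] += rng.gauss(0, 0.2)
    c = chi2(histogram(trial, 10, 1.0), target)
    # Metropolis rule: accept improvements, sometimes accept worse moves.
    if c < cost or rng.random() < math.exp((cost - c) / 2.0):
        pos, cost = trial, c
```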

  12. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    NASA Astrophysics Data System (ADS)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
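
    The revised Monod rate can be sketched as follows, in the spirit of Jin and Bethke's thermodynamically consistent rate law: the classical kinetic factor is multiplied by a thermodynamic factor that drives the rate to zero as the reaction's energy yield approaches the energy conserved by the cell. The parameter values are illustrative, not calibrated for Methanosarcina barkeri, and the exact functional form used in the study may differ.

```python
import math

R = 8.314e-3  # gas constant, kJ/(mol K)

def respiration_rate(conc, k_max, K_s, dG_reaction, dG_conserved,
                     chi=2.0, T=298.15):
    """Revised Monod rate: kinetic factor times thermodynamic factor.

    f = 1 - exp((dG_reaction + dG_conserved) / (chi * R * T)); the rate
    vanishes once the conserved energy exhausts the reaction's yield."""
    kinetic = conc / (K_s + conc)
    f = 1.0 - math.exp((dG_reaction + dG_conserved) / (chi * R * T))
    return k_max * kinetic * max(f, 0.0)

# Far from equilibrium: rate approaches the pure Monod value.
v_far = respiration_rate(1.0, k_max=10.0, K_s=0.1,
                         dG_reaction=-50.0, dG_conserved=45.0)
# Near equilibrium: energy yield barely exceeds the conserved energy.
v_near = respiration_rate(1.0, k_max=10.0, K_s=0.1,
                          dG_reaction=-46.0, dG_conserved=45.0)
```

In the workflow described above, such a rate would set the substrate uptake bound handed to the FBA problem, replacing an arbitrary fixed uptake constraint.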

  13. Significant impacts of irrigation water sources and methods on modeling irrigation effects in the ACME Land Model

    DOE PAGES

    Leng, Guoyong; Leung, L. Ruby; Huang, Maoyi

    2017-06-20

    An irrigation module that considers both irrigation water sources and irrigation methods has been incorporated into the ACME Land Model (ALM). Global numerical experiments were conducted to evaluate the impacts of irrigation water sources and irrigation methods on the simulated irrigation effects. All simulations shared the same irrigation soil moisture target constrained by a global census dataset of irrigation amounts. Irrigation has large impacts on terrestrial water balances especially in regions with extensive irrigation. Such effects depend on the irrigation water sources: surface-water-fed irrigation leads to decreases in runoff and water table depth, while groundwater-fed irrigation increases water table depth, with positive or negative effects on runoff depending on the pumping intensity. Irrigation effects also depend significantly on the irrigation methods. Flood irrigation applies water in large volumes within short durations, resulting in much larger impacts on runoff and water table depth than drip and sprinkler irrigations. Differentiating the irrigation water sources and methods is important not only for representing the distinct pathways of how irrigation influences the terrestrial water balances, but also for estimating irrigation water use efficiency. Specifically, groundwater pumping has lower irrigation water use efficiency due to enhanced recharge rates. Different irrigation methods also affect water use efficiency, with drip irrigation the most efficient followed by sprinkler and flood irrigation. Furthermore, our results highlight the importance of explicitly accounting for irrigation sources and irrigation methods, which are the least understood and constrained aspects in modeling irrigation water demand, water scarcity and irrigation effects in Earth System Models.

  14. Significant impacts of irrigation water sources and methods on modeling irrigation effects in the ACME Land Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leng, Guoyong; Leung, L. Ruby; Huang, Maoyi

    An irrigation module that considers both irrigation water sources and irrigation methods has been incorporated into the ACME Land Model (ALM). Global numerical experiments were conducted to evaluate the impacts of irrigation water sources and irrigation methods on the simulated irrigation effects. All simulations shared the same irrigation soil moisture target constrained by a global census dataset of irrigation amounts. Irrigation has large impacts on terrestrial water balances especially in regions with extensive irrigation. Such effects depend on the irrigation water sources: surface-water-fed irrigation leads to decreases in runoff and water table depth, while groundwater-fed irrigation increases water table depth, with positive or negative effects on runoff depending on the pumping intensity. Irrigation effects also depend significantly on the irrigation methods. Flood irrigation applies water in large volumes within short durations, resulting in much larger impacts on runoff and water table depth than drip and sprinkler irrigations. Differentiating the irrigation water sources and methods is important not only for representing the distinct pathways of how irrigation influences the terrestrial water balances, but also for estimating irrigation water use efficiency. Specifically, groundwater pumping has lower irrigation water use efficiency due to enhanced recharge rates. Different irrigation methods also affect water use efficiency, with drip irrigation the most efficient followed by sprinkler and flood irrigation. Furthermore, our results highlight the importance of explicitly accounting for irrigation sources and irrigation methods, which are the least understood and constrained aspects in modeling irrigation water demand, water scarcity and irrigation effects in Earth System Models.

  15. Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise

    NASA Astrophysics Data System (ADS)

    Kocheemoolayil, Joseph; Lele, Sanjiva

    2014-11-01

    Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.

  16. Slab stagnation and detachment under northeast China

    NASA Astrophysics Data System (ADS)

    Honda, Satoru

    2016-03-01

    Results of tomography models around the Japanese Islands show the existence of a gap between the horizontally lying (stagnant) slab extending under northeastern China and the fast seismic velocity anomaly in the lower mantle. A simple conversion from the fast velocity anomaly to the low-temperature anomaly shows a similar feature. This feature appears to be inconsistent with the results of numerical simulations on the interaction between the slab and phase transitions with temperature-dependent viscosity. Such numerical models predict a continuous slab throughout the mantle. I extend previous analyses of the tomography model and model calculations to infer the origins of the gap beneath northeastern China. Results of numerical simulations that take the geologic history of the subduction zone into account suggest two possible origins for the gap: (1) the opening of the Japan Sea led to a breaking off of the otherwise continuous subducting slab, or (2) the western edge of the stagnant slab is the previously subducted ridge, which was the plate boundary between the extinct Izanagi and the Pacific plates. Origin (2), suggesting that the present horizontally lying slab has accumulated since the ridge subduction, is preferable for explaining the present length of the horizontally lying slab in the upper mantle. Numerical models of origin (1) predict a stagnant slab in the upper mantle that is too short, and a narrow or non-existent gap. Preferred models require rather stronger flow resistance at the 660-km phase change than expected from current estimates of the phase transition property. Future detailed estimates of the amount of the subducted Izanagi plate and the present stagnant slab would be useful to constrain models. A systematic along-arc variation of the slab morphology from the northeast Japan to Kurile arcs is also recognized, and its understanding may constrain the 3D mantle flow there.

  17. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    NASA Astrophysics Data System (ADS)

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) reduces the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral-projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
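    The LB iteration referenced above is simple enough to sketch directly: accumulate the data residual in a dual variable and soft-threshold it to obtain the sparse update. Below is a minimal toy example on a random compressive-sensing problem; all sizes and parameters are illustrative, and the actual SPFWI updates act on wave-equation Jacobians, not random matrices.

```python
import numpy as np

def linearized_bregman(A, b, mu=2.0, n_iter=4000):
    """Linearized Bregman sketch for min ||x||_1 s.t. Ax = b.
    mu is the soft-threshold level; the step size comes from ||A||_2."""
    def shrink(v, t):
        # soft-thresholding (proximal map of the l1 norm)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # keeps the iteration stable
    v = np.zeros(A.shape[1])                  # dual variable accumulating residuals
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = delta * shrink(v, mu)             # sparse primal update
        v += A.T @ (b - A @ x)                # accumulate the data residual
    return x

# toy sparse-recovery demo (hypothetical sizes, not a seismic problem)
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [1.0, -1.0, 2.0]
b = A @ x_true
x_rec = linearized_bregman(A, b)
```

    At convergence the constraint Ax = b is honored; the implicit ℓ2 term in LB is what keeps the iteration this simple, and larger mu brings the solution closer to the basis-pursuit answer.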

  18. Model based systems engineering (MBSE) applied to Radio Aurora Explorer (RAX) CubeSat mission operational scenarios

    NASA Astrophysics Data System (ADS)

    Spangelo, S. C.; Cutler, J.; Anderson, L.; Fosse, E.; Cheng, L.; Yntema, R.; Bajaj, M.; Delp, C.; Cole, B.; Soremekum, G.; Kaslow, D.

    Small satellites are more highly resource-constrained by mass, power, volume, delivery timelines, and financial cost relative to their larger counterparts. Small satellites are operationally challenging because subsystem functions are coupled and constrained by the limited available commodities (e.g. data, energy, and access times to ground resources). Furthermore, additional operational complexities arise because small satellite components are physically integrated, which may yield thermal or radio frequency interference. In this paper, we extend our initial Model Based Systems Engineering (MBSE) framework developed for a small satellite mission by demonstrating the ability to model different behaviors and scenarios. We integrate several simulation tools to execute SysML-based behavior models, including subsystem functions and internal states of the spacecraft. We demonstrate the utility of this approach to drive the system analysis and design process. We demonstrate the applicability of the simulation environment to capture realistic satellite operational scenarios, which include energy collection, data acquisition, and downloading to ground stations. The integrated modeling environment enables users to extract feasibility, performance, and robustness metrics. This enables visualization of both the physical states (e.g. position, attitude) and functional states (e.g. operating points of various subsystems) of the satellite for representative mission scenarios. The modeling approach presented in this paper offers satellite designers and operators the opportunity to assess the feasibility of vehicle and network parameters, as well as the feasibility of operational schedules. This will enable future missions to benefit from using these models throughout the full design, test, and fly cycle. In particular, vehicle and network parameters and schedules can be verified prior to being implemented, during mission operations, and can also be updated in near real-time with operational performance feedback.

  19. An observational constraint on stomatal function in forests: evaluating coupled carbon and water vapor exchange with carbon isotopes in the Community Land Model (CLM4.5)

    NASA Astrophysics Data System (ADS)

    Raczka, Brett; Duarte, Henrique F.; Koven, Charles D.; Ricciuto, Daniel; Thornton, Peter E.; Lin, John C.; Bowling, David R.

    2016-09-01

    Land surface models are useful tools to quantify contemporary and future climate impact on terrestrial carbon cycle processes, provided they can be appropriately constrained and tested with observations. Stable carbon isotopes of CO2 offer the potential to improve model representation of the coupled carbon and water cycles because they are strongly influenced by stomatal function. Recently, a representation of stable carbon isotope discrimination was incorporated into the Community Land Model component of the Community Earth System Model. Here, we tested the model's capability to simulate whole-forest isotope discrimination in a subalpine conifer forest at Niwot Ridge, Colorado, USA. We distinguished between isotopic behavior in response to a decrease of δ13C within atmospheric CO2 (Suess effect) vs. photosynthetic discrimination (Δcanopy), by creating a site-customized atmospheric CO2 and δ13C of CO2 time series. We implemented a seasonally varying Vcmax model calibration that best matched site observations of net CO2 carbon exchange, latent heat exchange, and biomass. The model accurately simulated observed δ13C of needle and stem tissue, but underestimated the δ13C of bulk soil carbon by 1-2 ‰. The model overestimated the multiyear (2006-2012) average Δcanopy relative to prior data-based estimates by 2-4 ‰. The amplitude of the average seasonal cycle of Δcanopy (i.e., higher in spring/fall as compared to summer) was correctly modeled but only when using a revised, fully coupled An - gs (net assimilation rate, stomatal conductance) version of the model in contrast to the partially coupled An - gs version used in the default model. The model attributed most of the seasonal variation in discrimination to An, whereas interannual variation in simulated Δcanopy during the summer months was driven by stomatal response to vapor pressure deficit (VPD). 
The model simulated a 10 % increase in both photosynthetic discrimination and water-use efficiency (WUE) since 1850 which is counter to established relationships between discrimination and WUE. The isotope observations used here to constrain CLM suggest (1) the model overestimated stomatal conductance and (2) the default CLM approach to representing nitrogen limitation (partially coupled model) was not capable of reproducing observed trends in discrimination. These findings demonstrate that isotope observations can provide important information related to stomatal function driven by environmental stress from VPD and nitrogen limitation. Future versions of CLM that incorporate carbon isotope discrimination are likely to benefit from explicit inclusion of mesophyll conductance.

  20. Photosynthetic productivity and its efficiencies in ISIMIP2a biome models: benchmarking for impact assessment studies

    NASA Astrophysics Data System (ADS)

    Ito, Akihiko; Nishina, Kazuya; Reyer, Christopher P. O.; François, Louis; Henrot, Alexandra-Jane; Munhoven, Guy; Jacquemin, Ingrid; Tian, Hanqin; Yang, Jia; Pan, Shufen; Morfopoulos, Catherine; Betts, Richard; Hickler, Thomas; Steinkamp, Jörg; Ostberg, Sebastian; Schaphoff, Sibyll; Ciais, Philippe; Chang, Jinfeng; Rafique, Rashid; Zeng, Ning; Zhao, Fang

    2017-08-01

    Simulating vegetation photosynthetic productivity (or gross primary production, GPP) is a critical feature of the biome models used for impact assessments of climate change. We conducted a benchmarking of global GPP simulated by eight biome models participating in the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2a) with four meteorological forcing datasets (30 simulations), using independent GPP estimates and recent satellite data of solar-induced chlorophyll fluorescence as a proxy of GPP. The simulated global terrestrial GPP ranged from 98 to 141 Pg C yr-1 (1981-2000 mean); considerable inter-model and inter-data differences were found. Major features of the spatial distribution and seasonal change of GPP were captured by each model, showing good agreement with the benchmarking data. All simulations showed increasing trends in annual GPP, seasonal-cycle amplitude, radiation-use efficiency, and water-use efficiency, mainly caused by the CO2 fertilization effect. The trend slopes were higher than those obtained by remote sensing studies, but comparable with those from recent atmospheric observations. Apparent differences were found in the relationship between GPP and incoming solar radiation, for which the forcing data differed considerably. The simulated GPP trends co-varied with a vegetation structural parameter, leaf area index, at model-dependent strengths, implying the importance of constraining canopy properties. In terms of extreme events, GPP anomalies associated with a historical El Niño event and a large volcanic eruption were not consistently simulated in the model experiments due to deficiencies in both the forcing data and the parameterized environmental responsiveness. Although the benchmarking demonstrated the overall advancement of contemporary biome models, further refinements are required, for example, for solar radiation data and vegetation canopy schemes.

  1. The Trend Odds Model for Ordinal Data‡

    PubMed Central

    Capuano, Ana W.; Dawson, Jeffrey D.

    2013-01-01

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520
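    The structural idea in this abstract can be made concrete in a few lines of code: under proportional odds a single log-odds parameter β applies at every cut-point, while the trend odds variant lets it change monotonically across cut-points. A minimal sketch follows; the cut-points and coefficients below are illustrative values, not from the paper.

```python
import numpy as np

def ordinal_probs(alphas, beta, x, gamma=0.0):
    """Category probabilities for an ordinal outcome with cut-points alphas.
    gamma = 0 recovers the proportional odds model; gamma != 0 imposes a
    linear (trend odds) change in the odds parameter across cut-points."""
    j = np.arange(len(alphas))
    beta_j = beta + gamma * j                      # linear trend across cut-points
    logits = np.asarray(alphas) - beta_j * x       # logit P(Y <= j)
    cum = 1.0 / (1.0 + np.exp(-logits))            # cumulative probabilities
    cum = np.append(cum, 1.0)                      # last category closes the scale
    return np.diff(np.concatenate(([0.0], cum)))   # per-category probabilities

p = ordinal_probs([-1.0, 0.0, 1.0], beta=0.5, x=1.0, gamma=0.2)
```

    Note that for some parameter combinations the implied cumulative probabilities can fail to be monotone, so fitting software such as SAS Proc NLMIXED must constrain the parameters accordingly.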

  2. The trend odds model for ordinal data.

    PubMed

    Capuano, Ana W; Dawson, Jeffrey D

    2013-06-15

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values. We consider a trend odds version of this constrained model, wherein the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical data set is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example wherein the proportional odds assumption appears to be violated. Copyright © 2012 John Wiley & Sons, Ltd.

  3. Quantifying atmospheric pollutant emissions from open biomass burning with multiple methods: a case study for Yangtze River Delta region, China

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Zhao, Y.

    2017-12-01

    To understand the differences among emission inventories based on various methods for this source, and the origins of those differences, emissions of PM10, PM2.5, OC, BC, CH4, VOCs, CO, CO2, NOX, SO2 and NH3 from open biomass burning (OBB) in the Yangtze River Delta (YRD) are calculated for 2005-2012 using three approaches: bottom-up, FRP-based and constraining. The inter-annual trends in emissions with the FRP-based and constraining methods are consistent with the fire counts in 2005-2012, whereas the trend with the bottom-up method is different. For most years, emissions of all species estimated with the constraining method are smaller than those with the bottom-up method (except for VOCs), while they are larger than those with the FRP-based method (except for EC, CH4 and NH3). Such discrepancies result mainly from the different masses of crop residues burned in the field (CRBF) estimated in the three methods. Among the three methods, the simulated concentrations from chemistry transport modeling with the constrained emissions are the closest to available observations, implying that the constraining method provides the best estimate of OBB emissions. CO emissions from the three methods are compared with other studies. Similar temporal variations were found for the constrained emissions, FRP-based emissions, GFASv1.0 and GFEDv4.1s, with the largest and smallest emissions estimated for 2012 and 2006, respectively. The constrained CO emissions in this study are smaller than those in other studies based on the bottom-up method and larger than those based on burned area and FRP derived from satellite. The contributions of OBB to two particulate pollution events in 2010 and 2012 are analyzed with the brute-force method. The average contribution of OBB to PM10 mass concentrations during 8-14 June 2012 was estimated at 38.9% (74.8 μg m-3), larger than that during 17-24 June 2010 at 23.6% (38.5 μg m-3). The influences of diurnal profiles and meteorology on air pollution caused by OBB are also evaluated; the results suggest that air pollution caused by OBB becomes heavier when meteorological conditions are unfavorable, and that more attention should be paid to supervision at night. Quantified with Monte Carlo simulation, the uncertainties of OBB emissions with the constraining method are significantly lower than those with the bottom-up or FRP-based methods.

  4. Multivariate constrained shape optimization: Application to extrusion bell shape for pasta production

    NASA Astrophysics Data System (ADS)

    Sarghini, Fabrizio; De Vivo, Angela; Marra, Francesco

    2017-10-01

    Computational science and engineering methods have allowed a major change in the way products and processes are designed, as validated virtual models - capable of simulating the physical, chemical and biological changes occurring during production processes - can be realized and used in place of real prototypes and experiments, which are often time- and money-consuming. Among such techniques, Optimal Shape Design (OSD) (Mohammadi & Pironneau, 2004) represents an interesting approach. While most classical numerical simulations consider fixed geometrical configurations, in OSD a certain number of geometrical degrees of freedom are treated as part of the unknowns: the geometry is not completely defined, but part of it is allowed to move dynamically in order to minimize or maximize the objective function. The applications of OSD are countless. For systems governed by partial differential equations, they range from structural mechanics to electromagnetism and fluid mechanics, or to a combination of the three. This paper presents one possible application of OSD, showing how an extrusion bell shape for pasta production can be designed by applying multivariate constrained shape optimization.

  5. Improving Hydrological Simulations by Incorporating GRACE Data for Parameter Calibration

    NASA Astrophysics Data System (ADS)

    Bai, P.

    2017-12-01

    Hydrological model parameters are commonly calibrated against observed streamflow data. This calibration strategy is questionable when the modeled hydrological variables of interest are not limited to streamflow: well-performing streamflow simulations do not guarantee the reliable reproduction of other hydrological variables, partly because the model parameters are not uniquely identified. The Gravity Recovery and Climate Experiment (GRACE) satellite-derived total water storage change (TWSC) data provide an opportunity to constrain hydrological model parameterizations in combination with streamflow observations. We constructed a multi-objective calibration scheme based on GRACE-derived TWSC and streamflow observations, with the aim of improving the parameterizations of hydrological models. The multi-objective calibration scheme was compared with the traditional single-objective scheme, which is based only on streamflow observations. Two monthly hydrological models were employed on 22 Chinese catchments with different hydroclimatic conditions. The model evaluation was performed using observed streamflows, GRACE-derived TWSC, and evapotranspiration (ET) estimates from flux towers and from the water balance approach. Results showed that the multi-objective calibration provided more reliable TWSC and ET simulations than the single-objective calibration, without significant deterioration in the accuracy of streamflow simulations. In addition, the improvements in TWSC and ET simulations were more significant in relatively dry catchments than in relatively wet catchments. This study highlights the importance of including additional constraints besides streamflow observations in parameter estimation to improve the performance of hydrological models.
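    The multi-objective idea above reduces to optimizing a weighted combination of goodness-of-fit scores for streamflow and TWSC. A minimal sketch using Nash-Sutcliffe efficiency follows; the metric choice, equal weighting, and the `run_model` interface are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit, <= 0 is poor."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def multi_objective(run_model, params, q_obs, twsc_obs, w=0.5):
    """Composite calibration objective: weighted NSE of simulated
    streamflow (q) and GRACE-derived total water storage change (TWSC)."""
    q_sim, twsc_sim = run_model(params)
    return w * nse(q_sim, q_obs) + (1.0 - w) * nse(twsc_sim, twsc_obs)

# toy check with a hypothetical "model" that reproduces both series exactly
q_obs = np.array([1.0, 2.0, 3.0, 2.0])
twsc_obs = np.array([0.0, 1.0, -1.0, 0.5])
score = multi_objective(lambda p: (q_obs, twsc_obs), None, q_obs, twsc_obs)
```

    In a real calibration, `run_model` would wrap the hydrological model and the composite score would be maximized by a search algorithm over the parameter space.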

  6. Evaluation of Global Observations-Based Evapotranspiration Datasets and IPCC AR4 Simulations

    NASA Technical Reports Server (NTRS)

    Mueller, B.; Seneviratne, S. I.; Jimenez, C.; Corti, T.; Hirschi, M.; Balsamo, G.; Ciais, P.; Dirmeyer, P.; Fisher, J. B.; Guo, Z.; hide

    2011-01-01

    Quantification of global land evapotranspiration (ET) has long been associated with large uncertainties due to the lack of reference observations. Several recently developed products now provide the capacity to estimate ET at global scales. These products, partly based on observational data, include satellite-based products, land surface model (LSM) simulations, atmospheric reanalysis output, estimates based on empirical upscaling of eddy-covariance flux measurements, and atmospheric water balance datasets. The LandFlux-EVAL project aims to evaluate and compare these newly developed datasets. Additionally, an evaluation of IPCC AR4 global climate model (GCM) simulations is presented, providing an assessment of their capacity to reproduce flux behavior relative to the observations-based products. Though differently constrained with observations, the analyzed reference datasets display similar large-scale ET patterns. ET from the IPCC AR4 simulations was significantly smaller than that from the other products for India (up to 1 mm/d) and parts of eastern South America, and larger in the western USA, Australia and China. The inter-product variance is lower across the IPCC AR4 simulations than across the reference datasets in several regions, which indicates that uncertainties may be underestimated in the IPCC AR4 models due to shared biases of these simulations.

  7. Recent Progress in Measuring and Modeling Patterns of Biomass and Soil Carbon Pools Across the Amazon Basin

    NASA Technical Reports Server (NTRS)

    Potter, Christopher; Malhi, Yadvinder

    2004-01-01

    Ever more detailed representations of above-ground biomass and soil carbon pools have been developed during the LBA project. Environmental controls such as regional climate, land cover history, secondary forest regrowth, and soil fertility are now being taken into account in regional inventory studies. This paper will review the evolution of measurement-extrapolation approaches, remote sensing, and simulation modeling techniques for biomass and soil carbon pools, which together help constrain regional carbon budgets and enhance our understanding of uncertainty at the regional level.

  8. Analysis of modern and Pleistocene hydrologic exchange between Saginaw Bay (Lake Huron) and the Saginaw Lowlands area

    USGS Publications Warehouse

    Hoaglund, J. R.; Kolak, J.J.; Long, D.T.; Larson, G.J.

    2004-01-01

    Two numerical models, one simulating present groundwater flow conditions and one simulating ice-induced hydraulic loading from the Port Huron ice advance, were used to characterize both modern and Pleistocene groundwater exchange between the Michigan Basin and near-surface water systems of Saginaw Bay (Lake Huron) and the surrounding Saginaw Lowlands area. These models were further used to constrain the origin of saline, isotopically light groundwater, and porewater from the study area. Output from the groundwater-flow model indicates that, at present conditions, head in the Marshall aquifer beneath Saginaw Bay exceeds the modern lake elevation by as much as 21 m. Despite this potential for flow, simulated groundwater discharge through the Saginaw Bay floor constitutes only 0.028 m3 s-1 (~1 cfs). Bedrock lithology appears to regulate the rate of groundwater discharge, as the portion of the Saginaw Bay floor underlain by the Michigan confining unit exhibits an order of magnitude lower flux than the portion underlain by the Saginaw aquifer. The calculated shoreline discharge of groundwater to Saginaw Bay is also relatively small (1.13 m3 s-1 or ~40 cfs) because of low gradients across the Saginaw Lowlands area and the low hydraulic conductivities of lodgement tills and glacial-lake clays surrounding the bay. In contrast to the present groundwater flow conditions, the Port Huron ice-induced hydraulic-loading model generates a groundwater-flow reversal that is localized to the region of a Pleistocene ice sheet and proglacial lake. This area of reversed vertical gradient is largely commensurate with the distribution of isotopically light groundwater presently found in the study area. Mixing scenarios, constrained by chloride concentrations and δ18O values in porewater samples, demonstrate that a mixing event involving subglacial recharge could have produced the groundwater chemistry currently observed in the Saginaw Lowlands area. 
The combination of models and mixing scenarios indicates that structural control is a major influence on both the present and Pleistocene flow systems.

  9. Single-particle dispersion in stably stratified turbulence

    NASA Astrophysics Data System (ADS)

    Sujovolsky, N. E.; Mininni, P. D.; Rast, M. P.

    2018-03-01

    We present models for single-particle dispersion in the vertical and horizontal directions of stably stratified flows. The model in the vertical direction is based on the observed Lagrangian spectrum of the vertical velocity, while the model in the horizontal direction combines a continuous-time eddy-constrained random walk process with a contribution to transport from horizontal winds. Transport at times larger than the Lagrangian turnover time is not universal and depends on these winds. The models yield results in good agreement with direct numerical simulations of stratified turbulence, for which single-particle dispersion differs from the well-studied case of homogeneous and isotropic turbulence.
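    The horizontal model just described can be caricatured in a few lines: an eddy-constrained random walk (here an Ornstein-Uhlenbeck velocity process) plus transport by a mean horizontal wind. All parameter values below are illustrative, not those fitted to the stratified-turbulence simulations.

```python
import numpy as np

def horizontal_positions(n_particles=500, n_steps=1000, dt=0.01,
                         tau=1.0, sigma_u=1.0, wind=0.5, seed=0):
    """Single-particle positions from an Ornstein-Uhlenbeck fluctuating
    velocity (decorrelation time tau, rms sigma_u) plus a constant wind."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_particles)   # fluctuating (eddy) velocity
    x = np.zeros(n_particles)   # particle positions
    for _ in range(n_steps):
        # Euler-Maruyama step for the OU velocity process
        u += -u * dt / tau + sigma_u * np.sqrt(2.0 * dt / tau) * rng.standard_normal(n_particles)
        x += (u + wind) * dt    # advect by eddy velocity plus mean wind
    return x

x = horizontal_positions()      # positions after t = n_steps * dt = 10 tau
```

    At times well beyond tau the mean drift wind*t dominates the eddy-driven spread, which illustrates why long-time transport is not universal but wind-dependent.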

  10. Modelling baryonic effects on galaxy cluster mass profiles

    NASA Astrophysics Data System (ADS)

    Shirasaki, Masato; Lau, Erwin T.; Nagai, Daisuke

    2018-06-01

    Gravitational lensing is a powerful probe of the mass distribution of galaxy clusters and cosmology. However, accurate measurements of the cluster mass profiles are limited by uncertainties in cluster astrophysics. In this work, we present a physically motivated model of baryonic effects on the cluster mass profiles, which self-consistently takes into account the impact of baryons on the concentration as well as mass accretion histories of galaxy clusters. We calibrate this model using the Omega500 hydrodynamical cosmological simulations of galaxy clusters with varying baryonic physics. Our model will enable us to simultaneously constrain cluster mass, concentration, and cosmological parameters using stacked weak lensing measurements from upcoming optical cluster surveys.

  11. A chemical model for generating the sources of mare basalts - Combined equilibrium and fractional crystallization of the lunar magmasphere

    NASA Technical Reports Server (NTRS)

    Snyder, Gregory A.; Taylor, Lawrence A.; Neal, Clive R.

    1992-01-01

    A chemical model for simulating the sources of the lunar mare basalts was developed by considering a modified mafic cumulate source formed during the combined equilibrium and fractional crystallization of a lunar magma ocean (LMO). The parameters which influence the initial LMO and its subsequent crystallization are examined, and both trace and major elements are modeled. It is shown that major elements tightly constrain the composition of mare basalt sources and the pathways to their creation. The ability of this LMO model to generate viable mare basalt source regions was tested through a case study involving the high-Ti basalts.
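    Trace-element calculations of this kind conventionally build on the Rayleigh fractionation law, in which the residual-liquid concentration evolves as C_L = C_0 * F^(D-1) for melt fraction F and bulk partition coefficient D. A minimal sketch of that standard relation follows; the paper's combined equilibrium plus fractional crystallization scheme is more elaborate than this.

```python
def rayleigh_liquid(c0, f, d):
    """Rayleigh fractional crystallization: concentration in the residual
    liquid, C_L = C_0 * F**(D - 1), where F is the remaining melt fraction
    and D is the bulk solid/liquid partition coefficient."""
    return c0 * f ** (d - 1.0)

# incompatible elements (D < 1) enrich the melt; compatible ones (D > 1) deplete it
enriched = rayleigh_liquid(10.0, 0.5, 0.1)   # incompatible element
depleted = rayleigh_liquid(10.0, 0.5, 2.0)   # compatible element
```

    Sweeping F from 1 toward 0 with element-specific D values is the basic machinery behind modeling a crystallizing magma ocean's residual-liquid chemistry.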

  12. Probing cosmic anisotropy with gravitational waves as standard sirens

    NASA Astrophysics Data System (ADS)

    Cai, Rong-Gen; Liu, Tong-Bo; Liu, Xue-Wen; Wang, Shao-Jiang; Yang, Tao

    2018-05-01

    The gravitational wave (GW) as a standard siren directly determines the luminosity distance from the gravitational waveform without reference to a specific cosmological model, while the redshift can be obtained separately by means of an electromagnetic counterpart, as for GW events from binary neutron stars and massive black hole binaries (MBHBs). Examining to what extent standard sirens can recover a presumed dipole anisotropy written into simulated data of standard siren events from typical configurations of GW detectors, we find that: (1) for the Laser Interferometer Space Antenna with different MBHB models during five-year observations, cosmic isotropy can be ruled out at 3σ confidence level (C.L.) and the dipole direction can be constrained to within roughly 20% at 2σ C.L., as long as the dipole amplitude is larger than 0.04, 0.06 and 0.03 for MBHB models Q3d, pop III and Q3nod, respectively, with increasing constraining ability; (2) for the Einstein Telescope with no fewer than 200 standard siren events, cosmic isotropy can be ruled out at 3σ C.L. if the dipole amplitude is larger than 0.06, and the dipole direction can be constrained within 20% at 3σ C.L. if the dipole amplitude is near 0.1; (3) for the Deci-Hertz Interferometer Gravitational wave Observatory with no fewer than 100 standard siren events, cosmic isotropy can be ruled out at 3σ C.L. for dipole amplitudes larger than 0.03, and the dipole direction can even be constrained within 10% at 3σ C.L. if the dipole amplitude is larger than 0.07. Our work demonstrates the promising prospects of constraining cosmic anisotropy with the standard siren approach.

  13. Optimizing Fukushima Emissions Through Pattern Matching and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Simpson, M. D.; Philip, C. S.; Baskett, R.

    2017-12-01

    Hazardous conditions during the Fukushima Daiichi nuclear power plant (NPP) accident hindered direct observations of the emissions of radioactive materials into the atmosphere. A wide range of emissions are estimated from bottom-up studies using reactor inventories and top-down approaches based on inverse modeling. We present a new inverse modeling estimate of cesium-137 emitted from the Fukushima NPP. Our estimate considers weather uncertainty through a large ensemble of Weather Research and Forecasting model simulations and uses the FLEXPART atmospheric dispersion model to transport and deposit cesium. The simulations are constrained by observations of the spatial distribution of cumulative cesium deposited on the surface of Japan through April 2, 2012. Multiple spatial metrics are used to quantify differences between observed and simulated deposition patterns. In order to match the observed pattern, we use a multi-objective genetic algorithm to optimize the time-varying emissions. We find that large differences with published bottom-up estimates are required to explain the observations. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
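    The optimization step can be sketched with a toy genetic algorithm, substituting a small linear source-receptor matrix for the FLEXPART dispersion runs and mean squared error for the multiple spatial metrics; the operator, population settings, and mutation scale below are all hypothetical.

```python
import numpy as np

def ga_optimize(G, d_obs, n_gen=200, pop_size=40, seed=0):
    """Toy genetic algorithm: fit non-negative time-varying emissions q so
    that a linear source-receptor operator G reproduces the observed
    deposition pattern d_obs (fitness = negative mean squared error)."""
    rng = np.random.default_rng(seed)
    n_t = G.shape[1]
    pop = rng.uniform(0.0, 2.0, size=(pop_size, n_t))
    for _ in range(n_gen):
        err = np.mean((pop @ G.T - d_obs) ** 2, axis=1)
        elite = pop[np.argsort(err)[: pop_size // 2]]   # selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        mask = rng.random((pop_size, n_t)) < 0.5        # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop += rng.normal(0.0, 0.05, pop.shape)         # mutation
        pop = np.clip(pop, 0.0, None)                   # emissions stay non-negative
        pop[0] = elite[0]                               # elitism: keep the best
    err = np.mean((pop @ G.T - d_obs) ** 2, axis=1)
    return pop[np.argmin(err)]

# toy demo with a hypothetical 6-receptor, 4-time-step problem
rng = np.random.default_rng(3)
G = rng.uniform(0.0, 1.0, (6, 4))         # hypothetical source-receptor matrix
q_true = np.array([1.0, 0.5, 1.5, 0.2])   # "true" time-varying emissions
d_obs = G @ q_true                        # synthetic deposition observations
q_fit = ga_optimize(G, d_obs)
```

    Elitism guarantees the best candidate never degrades across generations, which is one reason GA variants are attractive for noisy, multi-metric objectives like deposition-pattern matching.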

  14. The Challenge of Identifying Controls on Cloud Properties and Precipitation Onset for Cumulus Congestus Sampled During MC3E

    DOE PAGES

    Mechem, David B.; Giangrande, Scott E.

    2018-03-01

    Here, the controls on precipitation onset and the transition from shallow cumulus to congestus are explored using a suite of 16 large-eddy simulations based on the 25 May 2011 event from the Midlatitude Continental Convective Clouds Experiment (MC3E). The thermodynamic variables in the model are relaxed at various timescales to observationally constrained temperature and moisture profiles in order to better reproduce the observed behavior of precipitation onset and total precipitation. Three of the simulations stand out as best matching the precipitation observations and also perform well for independent comparisons of cloud fraction, precipitation area fraction, and evolution of cloud top occurrence. All three simulations exhibit a destabilization over time, which leads to a transition to deeper clouds, but the evolution of traditional stability metrics by themselves is not able to explain differences in the simulations. Conditionally sampled cloud properties (in particular, mean cloud buoyancy), however, do elicit differences among the simulations. The inability of environmental profiles alone to discern subtle differences among the simulations and the usefulness of conditionally sampled model quantities argue for hybrid observational/modeling approaches. These combined approaches enable a more complete physical understanding of cloud systems by combining observational sampling of time-varying three-dimensional meteorological quantities and cloud properties, along with detailed representation of cloud microphysical and dynamical processes from numerical models.

  15. The Challenge of Identifying Controls on Cloud Properties and Precipitation Onset for Cumulus Congestus Sampled During MC3E

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mechem, David B.; Giangrande, Scott E.

    Here, the controls on precipitation onset and the transition from shallow cumulus to congestus are explored using a suite of 16 large-eddy simulations based on the 25 May 2011 event from the Midlatitude Continental Convective Clouds Experiment (MC3E). The thermodynamic variables in the model are relaxed at various timescales to observationally constrained temperature and moisture profiles in order to better reproduce the observed behavior of precipitation onset and total precipitation. Three of the simulations stand out as best matching the precipitation observations and also perform well for independent comparisons of cloud fraction, precipitation area fraction, and evolution of cloud top occurrence. All three simulations exhibit a destabilization over time, which leads to a transition to deeper clouds, but the evolution of traditional stability metrics by themselves is not able to explain differences in the simulations. Conditionally sampled cloud properties (in particular, mean cloud buoyancy), however, do elicit differences among the simulations. The inability of environmental profiles alone to discern subtle differences among the simulations and the usefulness of conditionally sampled model quantities argue for hybrid observational/modeling approaches. These combined approaches enable a more complete physical understanding of cloud systems by combining observational sampling of time-varying three-dimensional meteorological quantities and cloud properties, along with detailed representation of cloud microphysical and dynamical processes from numerical models.

  16. The Challenge of Identifying Controls on Cloud Properties and Precipitation Onset for Cumulus Congestus Sampled During MC3E

    NASA Astrophysics Data System (ADS)

    Mechem, David B.; Giangrande, Scott E.

    2018-03-01

    Controls on precipitation onset and the transition from shallow cumulus to congestus are explored using a suite of 16 large-eddy simulations based on the 25 May 2011 event from the Midlatitude Continental Convective Clouds Experiment (MC3E). The thermodynamic variables in the model are relaxed at various timescales to observationally constrained temperature and moisture profiles in order to better reproduce the observed behavior of precipitation onset and total precipitation. Three of the simulations stand out as best matching the precipitation observations and also perform well for independent comparisons of cloud fraction, precipitation area fraction, and evolution of cloud top occurrence. All three simulations exhibit a destabilization over time, which leads to a transition to deeper clouds, but the evolution of traditional stability metrics by themselves is not able to explain differences in the simulations. Conditionally sampled cloud properties (in particular, mean cloud buoyancy), however, do elicit differences among the simulations. The inability of environmental profiles alone to discern subtle differences among the simulations and the usefulness of conditionally sampled model quantities argue for hybrid observational/modeling approaches. These combined approaches enable a more complete physical understanding of cloud systems by combining observational sampling of time-varying three-dimensional meteorological quantities and cloud properties, along with detailed representation of cloud microphysical and dynamical processes from numerical models.
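
    The relaxation of model thermodynamic variables toward observed profiles described above is Newtonian relaxation (nudging), dX/dt = (X_obs - X)/τ, where a shorter timescale τ pulls the model harder toward the observationally constrained profile. A minimal sketch (values illustrative):

```python
def relax(value, target, dt, tau):
    """One explicit step of Newtonian relaxation (nudging):
    dX/dt = (X_obs - X) / tau."""
    return value + dt * (target - value) / tau

# Nudge a model temperature toward an observed 300 K value with a 1-hour
# relaxation timescale; after one hour the initial 10 K error has decayed
# by roughly a factor of e.
T, T_obs, dt, tau = 290.0, 300.0, 60.0, 3600.0
for _ in range(60):  # one hour of 1-minute steps
    T = relax(T, T_obs, dt, tau)
```

Varying τ across the suite of simulations, as the study does, trades fidelity to the observed profiles against freedom for the model's own dynamics.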

  17. Perturbations of the optical properties of mineral dust particles by mixing with black carbon: a numerical simulation study

    DOE PAGES

    Scarnato, B. V.; China, S.; Nielsen, K.; ...

    2015-06-25

    Field observations show that individual aerosol particles are a complex mixture of a wide variety of species, reflecting different sources and physico-chemical transformations. The impacts of individual aerosol morphology and mixing characteristics on the Earth system are not yet fully understood. Here we present a sensitivity study of climate-relevant aerosol optical properties to various approximations. Based on aerosol samples collected in various geographical locations, we have observationally constrained size, morphology and mixing, and accordingly simulated, using the discrete dipole approximation model (DDSCAT), the optical properties of three aerosol types: (1) bare black carbon (BC) aggregates, (2) bare mineral dust, and (3) an internal mixture of a BC aggregate lying on top of a mineral dust particle, also referred to as polluted dust. DDSCAT predicts optical properties and their spectral dependence consistently with observations for all the studied cases. Predicted values of the mass absorption, scattering and extinction coefficients (MAC, MSC, MEC) for bare BC show a weak dependence on the BC aggregate size, while the asymmetry parameter (g) shows the opposite behavior. The simulated optical properties of bare mineral dust present a large variability depending on the modeled dust shape, confirming the limited range of applicability of spheroids over different types and sizes of mineral dust aerosols, in agreement with previous modeling studies. The polluted dust cases show a strong decrease in MAC values with increasing dust particle size (for the same BC size) and an increase of the single scattering albedo (SSA). Furthermore, particles with a radius between 180 and 300 nm are characterized by a decrease in SSA values compared to bare dust, in agreement with field observations. This paper demonstrates that observationally constrained DDSCAT simulations allow one to better understand the variability of the measured aerosol optical properties in ambient air and to define benchmark biases due to different approximations in aerosol parametrization.
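
    The quantities above are related by MEC = MAC + MSC and SSA = MSC/MEC, so adding an absorbing BC inclusion (larger MAC at similar MSC) necessarily lowers the SSA. A minimal sketch with hypothetical coefficient values:

```python
def single_scattering_albedo(msc, mac):
    """SSA = scattering / extinction, with extinction MEC = MAC + MSC.
    Inputs are mass coefficients (e.g. m^2 g^-1); values below are
    hypothetical, not taken from the study."""
    return msc / (mac + msc)

# Mixing absorbing BC with dust raises MAC and lowers SSA, consistent
# with the polluted-dust behavior described in the abstract.
ssa_bare_dust = single_scattering_albedo(msc=3.0, mac=0.03)
ssa_polluted = single_scattering_albedo(msc=3.0, mac=0.3)
```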

  18. Overcoming potential energy distortions in constrained internal coordinate molecular dynamics simulations.

    PubMed

    Kandel, Saugat; Salomon-Ferrer, Romelia; Larsen, Adrien B; Jain, Abhinandan; Vaidehi, Nagarajan

    2016-01-28

    The Internal Coordinate Molecular Dynamics (ICMD) method is an attractive molecular dynamics (MD) method for studying the dynamics of bonded systems such as proteins and polymers. It offers a simple avenue for coarsening the dynamics model of a system at multiple hierarchical levels. For example, large-scale protein dynamics can be studied using torsional dynamics, where large domains or helical structures can be treated as rigid bodies and the loops connecting them as flexible torsions. ICMD with such a dynamic model of the protein, combined with an enhanced conformational sampling method such as temperature replica exchange, allows the sampling of large-scale domain motion involving high-energy-barrier transitions. Once these large-scale conformational transitions are sampled, all-torsion, or even all-atom, MD simulations can be carried out for the low-energy conformations sampled via coarse-grained ICMD to calculate the energetics of distinct conformations. Such hierarchical MD simulations can be carried out with standard all-atom force fields without compromising the accuracy of the forces. Using constraints to treat bond lengths and bond angles as rigid can, however, distort the potential energy landscape of the system and reduce the number of dihedral transitions as well as conformational sampling. We present here a two-part solution to overcome such distortions of the potential energy landscape with ICMD models. To alleviate the intrinsic distortion that stems from the reduced phase space in torsional MD, we use the Fixman compensating potential. To additionally alleviate the extrinsic distortion that arises from the coupling between the dihedral angles and bond angles within a force field, we propose a hybrid ICMD method that allows the selective relaxing of bond angles. This hybrid ICMD method bridges the gap between all-atom MD and torsional MD.
We demonstrate with examples that these methods together offer a solution to eliminate the potential energy distortions encountered in constrained ICMD simulations of peptide molecules.
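
    The Fixman compensating potential mentioned above has a standard form in the literature; in the notation below (assumed here, not quoted from the paper), M(θ) is the mass-metric tensor associated with the hard, constrained degrees of freedom evaluated at torsional configuration θ:

```latex
% Standard form of the Fixman compensating potential; adding U_Fixman to
% the constrained dynamics corrects the equilibrium distribution toward
% that of the unconstrained (flexible) system.
U_{\mathrm{Fixman}}(\theta) \;=\; \tfrac{1}{2}\, k_{B} T \, \ln \det M(\theta)
```

This addresses only the intrinsic (phase-space) distortion; the extrinsic dihedral/bond-angle coupling is what the paper's hybrid ICMD scheme handles separately.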

  19. Overcoming potential energy distortions in constrained internal coordinate molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Kandel, Saugat; Salomon-Ferrer, Romelia; Larsen, Adrien B.; Jain, Abhinandan; Vaidehi, Nagarajan

    2016-01-01

    The Internal Coordinate Molecular Dynamics (ICMD) method is an attractive molecular dynamics (MD) method for studying the dynamics of bonded systems such as proteins and polymers. It offers a simple avenue for coarsening the dynamics model of a system at multiple hierarchical levels. For example, large-scale protein dynamics can be studied using torsional dynamics, where large domains or helical structures can be treated as rigid bodies and the loops connecting them as flexible torsions. ICMD with such a dynamic model of the protein, combined with an enhanced conformational sampling method such as temperature replica exchange, allows the sampling of large-scale domain motion involving high-energy-barrier transitions. Once these large-scale conformational transitions are sampled, all-torsion, or even all-atom, MD simulations can be carried out for the low-energy conformations sampled via coarse-grained ICMD to calculate the energetics of distinct conformations. Such hierarchical MD simulations can be carried out with standard all-atom force fields without compromising the accuracy of the forces. Using constraints to treat bond lengths and bond angles as rigid can, however, distort the potential energy landscape of the system and reduce the number of dihedral transitions as well as conformational sampling. We present here a two-part solution to overcome such distortions of the potential energy landscape with ICMD models. To alleviate the intrinsic distortion that stems from the reduced phase space in torsional MD, we use the Fixman compensating potential. To additionally alleviate the extrinsic distortion that arises from the coupling between the dihedral angles and bond angles within a force field, we propose a hybrid ICMD method that allows the selective relaxing of bond angles. This hybrid ICMD method bridges the gap between all-atom MD and torsional MD.
We demonstrate with examples that these methods together offer a solution to eliminate the potential energy distortions encountered in constrained ICMD simulations of peptide molecules.

  20. Marine N2O Emissions From Nitrification and Denitrification Constrained by Modern Observations and Projected in Multimillennial Global Warming Simulations

    NASA Astrophysics Data System (ADS)

    Battaglia, G.; Joos, F.

    2018-01-01

    Nitrous oxide (N2O) is a potent greenhouse gas (GHG) and ozone-destroying agent, yet global estimates of N2O emissions are uncertain. Marine N2O stems from nitrification and denitrification processes, which depend on organic matter cycling and dissolved oxygen (O2). We introduce N2O as an obligate intermediate product of denitrification and as an O2-dependent by-product of nitrification in the Bern3D ocean model. A large model ensemble is used to probabilistically constrain modern marine N2O production and to project it for a low (Representative Concentration Pathway (RCP)2.6) and a high (RCP8.5) GHG scenario extended to A.D. 10,000. Water column N2O and surface ocean partial pressure N2O data serve as constraints in this Bayesian framework. The constrained median for modern N2O production is 4.5 (±1σ range: 3.0 to 6.1) Tg N yr⁻¹, of which 4.5% stems from denitrification. Modeled denitrification is 65.1 (40.9 to 91.6) Tg N yr⁻¹, well within current estimates. For high GHG forcing, N2O production decreases by 7.7% over this century due to decreasing organic matter export and remineralization. Thereafter, production increases slowly by 21% due to widespread deoxygenation and high remineralization. Deoxygenation peaks after two millennia, and the global O2 inventory is then reduced by a factor of 2 compared to today. Net denitrification is responsible for 7.8% of the long-term increase in N2O production. On millennial timescales, marine N2O emissions constitute a small positive feedback to climate change. Our simulations reveal tight coupling between the marine carbon cycle, O2, N2O, and climate.
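
    The probabilistic-constraint step above can be sketched as weighting ensemble members by a likelihood against observations. This minimal version scores each member against a single scalar with a Gaussian likelihood; the study instead uses spatially resolved water-column N2O and surface pN2O data. All numbers are hypothetical.

```python
import math

def posterior_weights(model_values, obs, obs_sigma):
    """Normalized Gaussian-likelihood weights for ensemble members,
    computed in log space for numerical stability."""
    logl = [-0.5 * ((m - obs) / obs_sigma) ** 2 for m in model_values]
    mx = max(logl)
    w = [math.exp(l - mx) for l in logl]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical ensemble of modern marine N2O production values (Tg N / yr)
ensemble = [2.0, 3.5, 4.5, 5.5, 8.0]
weights = posterior_weights(ensemble, obs=4.5, obs_sigma=1.0)
constrained_mean = sum(w * m for w, m in zip(weights, ensemble))
```

Members far from the observational constraint get little weight, pulling the constrained estimate toward the observationally supported range.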

  1. Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouri, Drew Philip; Surowiec, Thomas M.

    Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. In this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.

  2. Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization

    DOE PAGES

    Kouri, Drew Philip; Surowiec, Thomas M.

    2018-06-05

    Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. In this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.
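
    The two risk measures the new measure fuses can be illustrated on a sample of losses: conditional value-at-risk (CVaR) averages the worst tail of outcomes, while the entropic risk exponentially tilts the whole distribution. A minimal sketch with sample estimators (illustrative, not the paper's PDE-constrained setting):

```python
import math

def cvar(losses, alpha=0.9):
    """Sample conditional value-at-risk: the mean of (roughly) the
    worst (1 - alpha) fraction of losses."""
    xs = sorted(losses)
    k = int(math.ceil(alpha * len(xs)))
    tail = xs[k:] if k < len(xs) else [xs[-1]]
    return sum(tail) / len(tail)

def entropic_risk(losses, t=1.0):
    """Sample entropic risk: (1/t) * log E[exp(t * loss)]."""
    return math.log(sum(math.exp(t * x) for x in losses) / len(losses)) / t

losses = [0.5, 1.0, 1.2, 1.4, 2.0, 2.1, 2.5, 3.0, 4.0, 9.0]
```

Both exceed the plain mean for a right-skewed loss distribution; a risk-averse PDE-constrained problem minimizes such a measure of the random objective rather than its expectation.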

  3. Experimentally modeling stochastic processes with less memory by the use of a quantum processor

    PubMed Central

    Palsson, Matthew S.; Gu, Mile; Ho, Joseph; Wiseman, Howard M.; Pryde, Geoff J.

    2017-01-01

    Computer simulation of observable phenomena is an indispensable tool for engineering new technology, understanding the natural world, and studying human society. However, the most interesting systems are often so complex that simulating their future behavior demands storing immense amounts of information regarding how they have behaved in the past. For increasingly complex systems, simulation becomes increasingly difficult and is ultimately constrained by resources such as computer memory. Recent theoretical work shows that quantum theory can reduce this memory requirement beyond ultimate classical limits, as measured by a process’ statistical complexity, C. We experimentally demonstrate this quantum advantage in simulating stochastic processes. Our quantum implementation observes a memory requirement of Cq = 0.05 ± 0.01, far below the ultimate classical limit of C = 1. Scaling up this technique would substantially reduce the memory required in simulations of more complex systems. PMID:28168218
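
    The statistical complexity C referenced above is the Shannon entropy of the process's causal states. For the kind of two-state process used in such demonstrations, with both causal states equally likely, this gives the classical limit C = 1 bit quoted in the abstract. A minimal sketch (the two-state distribution is an assumption for illustration):

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a distribution over causal states; the
    statistical complexity C is this entropy for the process's
    epsilon-machine."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two equally occupied causal states -> C = 1 bit, the ultimate
# classical limit that the quantum implementation (Cq ~ 0.05) beats.
C_classical = shannon_entropy([0.5, 0.5])
```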

  4. Phonotactic constraints: Implications for models of oral reading in Russian.

    PubMed

    Ulicheva, Anastasia; Coltheart, Max; Saunders, Steven; Perry, Conrad

    2016-04-01

    The present article investigates how phonotactic rules constrain oral reading in the Russian language. The pronunciation of letters in Russian is regular and consistent, but it is subject to substantial phonotactic influence: the position of a phoneme and its phonological context within a word can alter its pronunciation. In Part 1 of the article, we analyze the orthography-to-phonology and phonology-to-phonology (i.e., phonotactic) relationships in Russian monosyllabic words. In Part 2 of the article, we report empirical data from an oral word reading task that show an effect of phonotactic dependencies on skilled reading in Russian: readers are slower when reading words whose letter-phoneme correspondences are highly constrained by phonotactic rules compared with words where few or no such constraints are present. A further question of interest in this article is how computational models of oral reading deal with the phonotactics of the Russian language. To answer this question, in Part 3, we report simulations from the Russian dual-route cascaded model (DRC) and the Russian connectionist dual-process model (CDP++) and assess the performance of the two models by testing them against human data. (c) 2016 APA, all rights reserved.

  5. Network-constrained group lasso for high-dimensional multinomial classification with application to cancer subtype prediction.

    PubMed

    Tian, Xinyu; Wang, Xuefeng; Chen, Jun

    2014-01-01

    The classic multinomial logit model, commonly used in multiclass regression problems, is restricted to few predictors and does not take into account the relationships among variables. It has limited use for genomic data, where the number of genomic features far exceeds the sample size. Genomic features such as gene expressions are usually related by an underlying biological network. Efficient use of the network information is important to improve classification performance as well as biological interpretability. We proposed a multinomial logit model that is capable of addressing both the high dimensionality of predictors and the underlying network information. Group lasso was used to induce model sparsity, and a network constraint was imposed to induce smoothness of the coefficients with respect to the underlying network structure. To deal with the non-smoothness of the objective function in optimization, we developed a proximal gradient algorithm for efficient computation. The proposed model was compared to models with no prior structure information in both simulations and a problem of cancer subtype prediction with real TCGA (The Cancer Genome Atlas) gene expression data. The network-constrained model outperformed the traditional ones in both cases.
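
    The non-smooth piece handled by the proximal gradient algorithm above is the group-lasso penalty, whose proximal operator has a closed form: block soft-thresholding, which shrinks a whole coefficient group and sets it exactly to zero when its norm is small. A minimal sketch of that one step (not the paper's full algorithm, which also carries the network penalty):

```python
import math

def group_soft_threshold(beta_group, step, lam):
    """Proximal operator of the group-lasso penalty lam * ||beta||_2:
    scale the whole group toward zero, zeroing it when its L2 norm
    falls below step * lam."""
    norm = math.sqrt(sum(b * b for b in beta_group))
    if norm <= step * lam:
        return [0.0] * len(beta_group)
    scale = 1.0 - step * lam / norm
    return [scale * b for b in beta_group]

weak = group_soft_threshold([0.1, -0.1], step=1.0, lam=0.5)    # zeroed out
strong = group_soft_threshold([3.0, 4.0], step=1.0, lam=0.5)   # shrunk, kept
```

Applying this after each gradient step on the smooth multinomial log-likelihood is what yields group-level sparsity in the fitted model.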

  6. Land cover change or land-use intensification: simulating land system change with a global-scale land change model.

    PubMed

    van Asselen, Sanneke; Verburg, Peter H

    2013-12-01

    Land-use change is both a cause and consequence of many biophysical and socioeconomic changes. The CLUMondo model provides an innovative approach for global land-use change modeling to support integrated assessments. Demands for goods and services are, in the model, supplied by a variety of land systems that are characterized by their land cover mosaic, the agricultural management intensity, and livestock. Land system changes are simulated by the model, driven by regional demand for goods and influenced by local factors that either constrain or promote land system conversion. A characteristic of the new model is the endogenous simulation of intensification of agricultural management versus expansion of arable land, and urban versus rural settlements expansion based on land availability in the neighborhood of the location. Model results for the OECD Environmental Outlook scenario show that allocation of increased agricultural production by either management intensification or area expansion varies both among and within world regions, providing useful insight into the land sparing versus land sharing debate. The land system approach allows the inclusion of different types of demand for goods and services from the land system as a driving factor of land system change. Simulation results are compared to observed changes over the 1970-2000 period and projections of other global and regional land change models. © 2013 John Wiley & Sons Ltd.
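
    The endogenous choice between management intensification and area expansion described above can be caricatured as a two-stage allocation rule: meet extra demand by raising yield up to a ceiling, then expand area. This is a toy illustration only; the function, parameters, and rules are hypothetical simplifications, not CLUMondo's actual allocation scheme.

```python
def allocate_demand(demand, area, yield_per_area, max_yield, area_available):
    """Meet demand by intensification first (up to max_yield), then by
    area expansion, limited by the land available in the neighborhood.
    Returns the new (area, yield_per_area)."""
    if area * yield_per_area >= demand:
        return area, yield_per_area           # demand already met
    needed_yield = demand / area
    if needed_yield <= max_yield:
        return area, needed_yield             # intensify only
    extra_area = demand / max_yield - area    # expand at maximum intensity
    return area + min(extra_area, area_available), max_yield

# Moderate demand growth is absorbed by intensification; larger growth
# forces expansion -- the land sparing vs. land sharing trade-off.
a1, y1 = allocate_demand(105.0, 10.0, 10.0, 11.0, 50.0)
a2, y2 = allocate_demand(120.0, 10.0, 10.0, 11.0, 50.0)
```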

  7. Northern African and Indian Precipitation at the end of the 21st Century: An Integrated Application of Regional and Global Climate Models

    NASA Astrophysics Data System (ADS)

    Patricola, C. M.; Cook, K. H.

    2008-12-01

    As greenhouse warming continues, there is growing concern about the future climate of both Africa, which is highlighted by the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4) as exceptionally vulnerable to climate change, and India. Precipitation projections from the AOGCMs of the IPCC AR4 are relatively consistent over India, but not over northern Africa. Inconsistencies can be related to a model's inability to capture climate processes correctly, deficiencies in physical parameterizations, different SST projections, or horizontal atmospheric resolution that is too coarse to realistically represent the tight gradients over West Africa and the complex topography of East Africa and India. Treatment of the land surface in a model may also be an issue over West Africa and India, where land-surface/atmosphere interactions are very important. Here a method for simulating future climate is developed and applied using a high-resolution regional model in conjunction with output from a suite of AOGCMs, drawing on the advantages of both the regional and global modeling approaches. Integration by the regional model allows for finer horizontal resolution and regionally appropriate selection of parameterizations and land-surface model. AOGCM output is used to provide SST projections and lateral boundary conditions to constrain the regional model. The control simulation corresponds to 1981-2000, and eight future simulations representing 2081-2100 are conducted, each constrained by a different AOGCM and forced by CO2 concentrations from the SRES A2 emissions scenario. After model spin-up, the months of May through October remain for analysis. Analysis is focused on climate change parameters important for impacts on agriculture and water resource management, and is presented in a format compatible with the IPCC reports.
    Precipitation projections simulated by the regional model are quite consistent, with 75% or more ensemble members agreeing on the sign of the anomaly over vast regions of Africa and India. Over West Africa, where the regional model provides the greatest improvement over the AOGCMs in consistency of ensemble members, precipitation at the end of the century is generally projected to increase during May and decrease in June and July. Wetter conditions are simulated during August through October, with the exception of drying close to the Guinean Coast in August. In late summer, high rainfall rates are simulated more frequently in the future, indicating the possibility of increases in flooding events. The regional model's projections over India are in stark contrast to the AOGCMs', producing intense and generally widespread drying in August and September. The method developed here is promising but young, and further developments are envisioned, including the addition of ocean, vegetation, and dust models. Ensembles that employ other regional models, sets of parameterizations, and emissions scenarios should also be explored.
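
    The ensemble-consistency measure quoted above (75% or more members agreeing on the sign of the anomaly) amounts to a simple fraction over the ensemble. A minimal sketch with hypothetical anomaly values:

```python
def sign_agreement(anomalies):
    """Fraction of ensemble members sharing the majority sign of the
    projected anomaly; members with exactly zero anomaly count for
    neither sign."""
    pos = sum(1 for a in anomalies if a > 0)
    neg = sum(1 for a in anomalies if a < 0)
    return max(pos, neg) / len(anomalies)

# 7 of 8 hypothetical members project drying: agreement 0.875 >= 0.75,
# so this grid cell would count as "consistent" by the criterion above.
agreement = sign_agreement([-0.2, -0.5, -0.1, 0.3, -0.4, -0.2, -0.3, -0.6])
```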

  8. Constraining the Sensitivity of Amazonian Rainfall with Observations of Surface Temperature

    NASA Astrophysics Data System (ADS)

    Dolman, A. J.; von Randow, C.; de Oliveira, G. S.; Martins, G.; Nobre, C. A.

    2016-12-01

    Earth System models generally do a poor job of predicting Amazonian rainfall, motivating the search for observational constraints on its predictability. We use observed surface temperature and precipitation of the Amazon and a set of 21 CMIP5 models to derive an observational constraint on the sensitivity of rainfall to surface temperature (dP/dT). From first principles, such a relation between the surface temperature of the Earth and the amount of precipitation should exist through the surface energy balance, particularly in the tropics. When de-trended anomalies in surface temperature and precipitation from a set of datasets are plotted, a clear linear relation between surface temperature and precipitation appears. CMIP5 models show a similar relation, with relatively cool models having a larger sensitivity and producing more rainfall. Using the ensemble of models and the observed surface temperature, we were able to derive an emergent constraint, shifting the dP/dT sensitivity of the CMIP5 ensemble from -0.75 mm day⁻¹ °C⁻¹ (+/- 0.54 SD) to -0.77 mm day⁻¹ °C⁻¹ and reducing the uncertainty by about a factor of 5. The observed dP/dT is -0.89 mm day⁻¹ °C⁻¹. We applied the method to the wet and dry seasons separately, noting that in the wet season we shifted the mean and reduced the uncertainty, while in the dry season we largely reduced the uncertainty only. The method can be applied to other model simulations, such as specific deforestation scenarios, to constrain the sensitivity of rainfall to surface temperature. We discuss the implications of the constrained sensitivity for future Amazonian predictions.
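
    The emergent-constraint step above can be sketched as follows: regress each model's sensitivity against an observable metric across the ensemble, then read the constrained sensitivity off the fitted line at the observed value. The ensemble values below are hypothetical, not the CMIP5 numbers.

```python
def emergent_constraint(xs, ys, x_obs):
    """Ordinary-least-squares line through the ensemble's
    (observable x, sensitivity y) pairs, evaluated at the observed x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return my + (sxy / sxx) * (x_obs - mx)

# Hypothetical ensemble: cooler models (lower x) have stronger (more
# negative) dP/dT, as described in the abstract.
x_models = [0.0, 1.0, 2.0, 3.0]       # de-trended surface-temperature metric
y_models = [-1.1, -0.9, -0.6, -0.4]   # dP/dT, mm/day per degree
y_constrained = emergent_constraint(x_models, y_models, x_obs=1.5)
```

The spread of the ensemble about the fitted line, rather than the raw spread of the sensitivities, is what sets the reduced uncertainty of the constrained estimate.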

  9. Evaluating simulated functional trait patterns and quantifying modelled trait diversity effects on simulated ecosystem fluxes

    NASA Astrophysics Data System (ADS)

    Pavlick, R.; Schimel, D.

    2014-12-01

    Dynamic Global Vegetation Models (DGVMs) typically employ only a small set of Plant Functional Types (PFTs) to represent the vast diversity of observed vegetation forms and functioning. There is growing evidence, however, that this abstraction may not adequately represent the observed variation in plant functional traits, which is thought to play an important role for many ecosystem functions and for ecosystem resilience to environmental change. The geographic distribution of PFTs in these models is also often based on empirical relationships between present-day climate and vegetation patterns. Projections of future climate change, however, point toward the possibility of novel regional climates, which could lead to no-analog vegetation compositions incompatible with the PFT paradigm. Here, we present results from the Jena Diversity-DGVM (JeDi-DGVM), a novel traits-based vegetation model, which simulates a large number of hypothetical plant growth strategies constrained by functional tradeoffs, thereby allowing for a more flexible temporal and spatial representation of the terrestrial biosphere. First, we compare simulated present-day geographical patterns of functional traits with empirical trait observations (in-situ and from airborne imaging spectroscopy). The observed trait patterns are then used to improve the tradeoff parameterizations of JeDi-DGVM. Finally, focusing primarily on the simulated leaf traits, we run the model with various amounts of trait diversity. We quantify the effects of these modeled biodiversity manipulations on simulated ecosystem fluxes and stocks for both present-day conditions and transient climate change scenarios. The simulation results reveal that the coarse treatment of plant functional traits by current PFT-based vegetation models may contribute substantial uncertainty regarding carbon-climate feedbacks. 
Further development of trait-based models and further investment in global in-situ and spectroscopic plant trait observations are needed.

  10. Directly comparing GW150914 with numerical solutions of Einstein's equations for binary black hole coalescence

    NASA Astrophysics Data System (ADS)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Bejger, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bogan, C.; Bohe, A.; Bond, C.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Broida, J. E.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. 
B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Cowan, E. E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Engels, W.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fan, X.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fenyvesi, E.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. 
P.; Flaminio, R.; Fletcher, M.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gehrels, N.; Gemme, G.; Geng, P.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; Holz, D. E.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacqmin, T.; Jang, H.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jian, L.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Kapadia, S. J.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelley, D. B.; Kells, W.; Kennedy, R.; Key, J. 
S.; Khalili, F. Y.; Khan, I.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chi-Woong; Kim, Chunglee; Kim, J.; Kim, K.; Kim, N.; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lange, J.; Lantz, B.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Lockerbie, N. A.; Lombardi, A. L.; Lord, J. E.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Zertuche, L. Magaña; Magee, R. M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. 
P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Nedkova, K.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Neunzert, A.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Patrick, Z.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Perri, L. M.; Phelps, M.; Piccinni, O. J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. 
J.; Romano, J. D.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sanchez, E. J.; Sandberg, V.; Sandeen, B.; Sanders, J. R.; Sassolas, B.; Saulson, P. R.; Sauter, O. E. S.; Savage, R. L.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T.; Shahriar, M. S.; Shaltev, M.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stone, R.; Strain, K. A.; Straniero, N.; Stratta, G.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torres, C. V.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Trozzo, L.; Tse, M.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. 
C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, L. E.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yablon, J.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yu, H.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.; Boyle, M.; Campanelli, M.; Chu, T.; Clark, M.; Fauchon-Jones, E.; Fong, H.; Healy, J.; Hemberger, D.; Hinder, I.; Husa, S.; Kalaghati, C.; Khan, S.; Kidder, L. E.; Kinsey, M.; Laguna, P.; London, L. T.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pannarale, F.; Pfeiffer, H. P.; Scheel, M.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Vinuales, A. Vano; Zlochower, Y.; LIGO Scientific Collaboration; Virgo Collaboration

    2016-09-01

We compare GW150914 directly to simulations of coalescing binary black holes in full general relativity, including several performed specifically to reproduce this event. Our calculations go beyond existing semianalytic models, because for all simulations—including sources with two independent, precessing spins—we perform comparisons which account for all the spin-weighted quadrupolar modes, and separately which account for all the quadrupolar and octopolar modes. Consistent with the posterior distributions reported by Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)] (at the 90% credible level), we find the data are compatible with a wide range of nonprecessing and precessing simulations. Follow-up simulations performed using previously estimated binary parameters most resemble the data, even when all quadrupolar and octopolar modes are included. Comparisons including only the quadrupolar modes constrain the total redshifted mass Mz ∈ [64 M⊙, 82 M⊙], mass ratio 1/q = m2/m1 ∈ [0.6, 1], and effective aligned spin χeff ∈ [−0.3, 0.2], where χeff = (S1/m1 + S2/m2)·L̂/M. Including both quadrupolar and octopolar modes, we find the mass ratio is even more tightly constrained. Even accounting for precession, simulations with extreme mass ratios and effective spins are highly inconsistent with the data, at any mass. Several nonprecessing and precessing simulations with similar mass ratio and χeff are consistent with the data. Though correlated, the components' spins (both in magnitude and direction) are not significantly constrained by the data: the data are consistent with simulations with component spin magnitudes a1,2 up to at least 0.8, with random orientations. Further detailed follow-up calculations are needed to determine whether the data contain a weak imprint from transverse (precessing) spins. For nonprecessing binaries, interpolating between simulations, we reconstruct a posterior distribution consistent with previous results. 
The final black hole's redshifted mass is consistent with Mf,z in the range 64.0-73.5 M⊙, and the final black hole's dimensionless spin parameter is consistent with af = 0.62-0.73. As our approach invokes no intermediate approximations to general relativity and can strongly reject binaries whose radiation is inconsistent with the data, our analysis provides a valuable complement to Abbott et al. [Phys. Rev. Lett. 116, 241102 (2016)].
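The effective aligned spin quoted in this abstract is a mass-weighted projection of the two component spins onto the orbital angular momentum: with S_i = m_i² a_i, the expression χeff = (S1/m1 + S2/m2)·L̂/M reduces to (m1 a1z + m2 a2z)/(m1 + m2). A minimal sketch of that arithmetic (function name and argument conventions are ours):

```python
def chi_eff(m1, m2, a1z, a2z):
    """Effective aligned spin chi_eff = (m1*a1z + m2*a2z) / (m1 + m2),
    where a1z, a2z are the dimensionless component spin projections on the
    orbital angular momentum direction L-hat. Masses can be in any common
    unit (e.g. solar masses); only their ratio matters."""
    return (m1 * a1z + m2 * a2z) / (m1 + m2)
```

A nonspinning binary gives χeff = 0, and equal, oppositely aligned spins on equal masses cancel exactly.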

  11. Sensitivity of single column model simulations of Arctic springtime clouds to different cloud cover and mixed phase cloud parameterizations

    NASA Astrophysics Data System (ADS)

    Zhang, Junhua; Lohmann, Ulrike

    2003-08-01

The single column model of the Canadian Centre for Climate Modelling and Analysis (CCCma) climate model is used to simulate Arctic spring cloud properties observed during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. The model is driven by European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data constrained by rawinsonde observations. Five cloud parameterizations, including three statistical and two explicit schemes, are compared, and the sensitivity to mixed phase cloud parameterizations is studied. Using the original mixed phase cloud parameterization of the model, the statistical cloud schemes produce more cloud cover, cloud water, and precipitation than the explicit schemes and in general agree better with observations. The mixed phase cloud parameterization from ECMWF decreases the initial saturation specific humidity threshold of cloud formation. This improves the simulated cloud cover in the explicit schemes and reduces the difference between the cloud schemes. On the other hand, because the ECMWF mixed phase cloud scheme does not consider the Bergeron-Findeisen process, fewer ice crystals are formed. This leads to a higher liquid water path and less precipitation than observed.

  12. Modelling cell motility and chemotaxis with evolving surface finite elements

    PubMed Central

    Elliott, Charles M.; Stinner, Björn; Venkataraman, Chandrasekhar

    2012-01-01

We present a mathematical and a computational framework for the modelling of cell motility. The cell membrane is represented by an evolving surface, with the movement of the cell determined by the interaction of various forces that act normal to the surface. We consider external forces such as those that may arise owing to inhomogeneities in the medium and a pressure that constrains the enclosed volume, as well as internal forces that arise from the reaction of the cell's surface to stretching and bending. We also consider a protrusive force associated with a reaction–diffusion system (RDS) posed on the cell membrane, with cell polarization modelled by this surface RDS. The computational method is based on an evolving surface finite-element method. The general method can account for the large deformations that arise in cell motility and allows the simulation of cell migration in three dimensions. We illustrate applications of the proposed modelling framework and numerical method by reporting on numerical simulations of a model for eukaryotic chemotaxis and a model for the persistent movement of keratocytes in two and three space dimensions. Movies of the simulated cells can be obtained from http://homepages.warwick.ac.uk/∼maskae/CV_Warwick/Chemotaxis.html. PMID:22675164

  13. Cosmic shear as a probe of galaxy formation physics

    DOE PAGES

    Foreman, Simon; Becker, Matthew R.; Wechsler, Risa H.

    2016-09-01

Here, we evaluate the potential for current and future cosmic shear measurements from large galaxy surveys to constrain the impact of baryonic physics on the matter power spectrum. We do so using a model-independent parametrization that describes deviations of the matter power spectrum from the dark-matter-only case as a set of principal components that are localized in wavenumber and redshift. We perform forecasts for a variety of current and future data sets, and find that at least ~90 per cent of the constraining power of these data sets is contained in no more than nine principal components. The constraining power of different surveys can be quantified using a figure of merit defined relative to currently available surveys. With this metric, we find that the final Dark Energy Survey data set (DES Y5) and the Hyper Suprime-Cam Survey will be roughly an order of magnitude more powerful than existing data in constraining baryonic effects. Upcoming Stage IV surveys (Large Synoptic Survey Telescope, Euclid, and Wide Field Infrared Survey Telescope) will improve upon this by a further factor of a few. We show that this conclusion is robust to marginalization over several key systematics. The ultimate power of cosmic shear to constrain galaxy formation is dependent on understanding systematics in the shear measurements at small (sub-arcminute) scales. Lastly, if these systematics can be sufficiently controlled, cosmic shear measurements from DES Y5 and other future surveys have the potential to provide a very clean probe of galaxy formation and to strongly constrain a wide range of predictions from modern hydrodynamical simulations.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petiteau, Antoine; Babak, Stanislav; Sesana, Alberto

Gravitational wave (GW) signals from coalescing massive black hole (MBH) binaries could be used as standard sirens to measure cosmological parameters. The future space-based GW observatory Laser Interferometer Space Antenna (LISA) will detect up to a hundred such events, providing very accurate measurements of their luminosity distances. To constrain the cosmological parameters, we also need to measure the redshift of the galaxy (or cluster of galaxies) hosting the merger. This requires the identification of a distinctive electromagnetic event associated with the binary coalescence. However, putative electromagnetic signatures may be too weak to be observed. Instead, we study here the possibility of constraining the cosmological parameters by enforcing statistical consistency among all the possible hosts detected within the measurement error box of a few dozen low-redshift (z < 3) events. We construct MBH populations using merger tree realizations of the dark matter hierarchy in a ΛCDM universe, and we use data from the Millennium simulation to model the galaxy distribution in the LISA error box. We show that, assuming that all the other cosmological parameters are known, the parameter w describing the dark energy equation of state can be constrained to the 4%-8% level (2σ error), competitive with current uncertainties obtained from Type Ia supernovae measurements, providing an independent test of our cosmological model.
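The standard-siren approach above compares a GW-measured luminosity distance with the redshift of the candidate hosts, so the sensitivity to w enters through the distance-redshift relation. In a flat wCDM cosmology this relation can be sketched as below (parameter defaults and the simple trapezoidal integrator are our illustrative choices, not the paper's pipeline):

```python
import math

C_KM_S = 299792.458  # speed of light [km/s]

def lum_dist(z, h0=70.0, om=0.3, w=-1.0, steps=2000):
    """Luminosity distance [Mpc] in a flat wCDM universe:
    d_L = (1+z) * (c/H0) * Integral_0^z dz'/E(z'), with
    E(z) = sqrt(Om*(1+z)^3 + (1-Om)*(1+z)^(3*(1+w))).
    Trapezoidal quadrature with a fixed step count."""
    ol = 1.0 - om
    def inv_e(zz):
        return 1.0 / math.sqrt(om * (1 + zz) ** 3 + ol * (1 + zz) ** (3 * (1 + w)))
    dz = z / steps
    integral = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, steps):
        integral += inv_e(i * dz)
    integral *= dz
    return (1 + z) * (C_KM_S / h0) * integral
```

Holding the other parameters fixed, different values of w shift d_L(z) by a few per cent at z ~ 1, which is the lever arm the statistical host-matching method exploits.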

  15. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

Groundwater remediation designs rely heavily on simulation models, whose predictions are subject to various sources of uncertainty. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multilayer framework, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared with parameter uncertainty. Using chance-constrained programming along with HBMA can provide a rigorous tool for groundwater remediation design under uncertainty. In this research, the HBMA-CC approach was applied to a remediation design in a synthetic aquifer. The design used a scavenger-well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation, and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to significant error in evaluating prediction variances, for two reasons. First, with the single best model, variances that stem from uncertainty in the model structure are ignored. 
Second, choosing a best model that lacks a dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow a remediation design to be developed with a desired reliability. However, with the single best model, the calculated reliability differs from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels of the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate changed. We therefore concluded that the optimal pumping rate is sensitive to the prediction variance. The prediction variance also changed with the extraction rate: a very high extraction rate causes the prediction variances of chloride concentration at the production wells to approach zero, regardless of which HBMA model is used.
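The reliability argument here can be illustrated with a toy calculation: under Bayesian model averaging the predictive distribution is a weighted mixture over models, so the reliability P(concentration ≤ limit) depends on all plausible models, not just the single best one. A hedged sketch assuming Gaussian predictive distributions (function names and the numbers in the test are ours, not from the study):

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def bma_reliability(c_max, means, sds, weights):
    """Reliability P(concentration <= c_max) under model averaging:
    a weight-normalized mixture of each model's (assumed Gaussian)
    predictive distribution for the concentration."""
    total_w = sum(weights)
    return sum(w * normal_cdf(c_max, m, s)
               for w, m, s in zip(weights, means, sds)) / total_w
```

Putting all the weight on one optimistic model inflates the apparent reliability relative to the full mixture, which is exactly the failure mode the abstract warns about.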

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, R.; Hong, Seungkyu K.; Kwon, Hyoung-Ahn

We used a 3-D regional atmospheric chemistry transport model (WRF-Chem) to examine processes that determine O3 in East Asia; in particular, we focused on O3 dry deposition, which is an uncertain research area due to insufficient observation and numerical studies in East Asia. Here, we compare two widely used dry deposition parameterization schemes, Wesely and M3DRY, which are used in the WRF-Chem and CMAQ models, respectively. The O3 dry deposition velocities simulated using the two aforementioned schemes under identical meteorological conditions show considerable differences (a factor of 2) due to surface resistance parameterization discrepancies. The O3 concentration differed by up to 10 ppbv for the monthly mean. The simulated and observed dry deposition velocities were compared, which showed that the Wesely scheme model is consistent with the observations and successfully reproduces the observed diurnal variation. We conduct several sensitivity simulations by changing the land use data, the surface resistance of the water and the model’s spatial resolution to examine the factors that affect O3 concentrations in East Asia. As shown, the model was considerably sensitive to the input parameters, which indicates a high uncertainty for such O3 dry deposition simulations. Observations are necessary to constrain the dry deposition parameterization and input data to improve the East Asia air quality models.
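Both deposition schemes compared here build on the standard resistance-in-series form for the dry deposition velocity, and their differences enter mainly through how the surface resistance is parameterized. A minimal sketch of the shared formula (the resistance values in the test are illustrative, not taken from the study):

```python
def deposition_velocity(r_a, r_b, r_c):
    """Dry deposition velocity [m/s] in the resistance-in-series form
    common to Wesely-type schemes: v_d = 1 / (r_a + r_b + r_c), with
    aerodynamic (r_a), quasi-laminar sublayer (r_b), and surface/canopy
    (r_c) resistances, all in s/m. Differences in r_c between schemes
    translate directly into v_d differences."""
    return 1.0 / (r_a + r_b + r_c)
```

With fixed meteorology (fixed r_a and r_b), a factor-of-two change in the surface resistance is enough to drive the kind of v_d discrepancy the abstract reports.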

  17. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

    NASA Technical Reports Server (NTRS)

    VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

    1997-01-01

The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).

  18. Vehicle routing problem with time windows using natural inspired algorithms

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

The distribution of goods requires a strategy that minimizes the total cost of operational activities, while satisfying several constraints: the capacity of the vehicles and the service times of the customers. This Vehicle Routing Problem with Time Windows (VRPTW) is a complex constrained problem. This paper proposes nature-inspired algorithms for handling the constraints of VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm for simpler and faster convergence. The computational results show that these algorithms perform well in minimizing total distance, and that a larger population improves computational performance. The improved Cat Swarm Optimization with Crow Search outperforms the hybrid of the Bat Algorithm and Simulated Annealing when dealing with large data sets.
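Whichever metaheuristic generates candidate solutions, every VRPTW route must pass the capacity and time-window checks the abstract describes. A small feasibility test in that spirit (the data layout and names are our assumptions, not the paper's implementation):

```python
def route_feasible(route, demand, capacity, travel, ready, due, service):
    """Check one vehicle route (a list of customer ids, starting from
    depot 0) against VRPTW constraints: total demand must fit the vehicle
    capacity, and the vehicle must reach each customer no later than its
    due time, waiting if it arrives before the ready time."""
    if sum(demand[c] for c in route) > capacity:
        return False                  # capacity constraint violated
    t, prev = 0.0, 0                  # leave the depot (node 0) at time 0
    for c in route:
        t += travel[prev][c]          # drive to the next customer
        t = max(t, ready[c])          # wait if arriving early
        if t > due[c]:                # time-window constraint violated
            return False
        t += service[c]               # service time at the customer
        prev = c
    return True
```

In a metaheuristic, a check like this typically gates candidate acceptance or feeds a penalty term in the objective.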

  19. Northeast Parallel Architectures Center (NPAC) at Syracuse University

    DTIC Science & Technology

    1990-12-01

lattice models. On the CM-2 we will run a lattice gauge theory simulation of quantum chromodynamics (QCD), and on the CM-1 we will investigate the...into a three-dimensional grid with the stipulation that adjacent processors in the lattice correspond to proximate regions of space. Light paths will...be constrained to follow lattice links and the sum over all paths from light sources to each lattice site will be computed inductively by all

Constraining the Source of the Mw 8.1 Chiapas, Mexico Earthquake of 8 September 2017 Using Teleseismic and Tsunami Observations

    NASA Astrophysics Data System (ADS)

    Heidarzadeh, Mohammad; Ishibe, Takeo; Harada, Tomoya

    2018-04-01

The September 2017 Chiapas (Mexico) normal-faulting intraplate earthquake (Mw 8.1) occurred within the Tehuantepec seismic gap offshore Mexico. We constrained the finite-fault slip model of this great earthquake using teleseismic and tsunami observations. First, teleseismic body-wave inversions were conducted for both the steep (NP-1) and low-angle (NP-2) nodal planes for rupture velocities (Vr) of 1.5-4.0 km/s. The teleseismic inversion pointed to NP-1 as the actual fault plane but was not conclusive about the best Vr. Tsunami simulations also confirmed that NP-1 is favored over NP-2 and identified Vr = 2.5 km/s as the best source model. Our model has maximum and average slips of 13.1 and 3.7 m, respectively, over a 130 km × 80 km fault plane. Coulomb stress transfer analysis revealed that the probability of a future large thrust interplate earthquake offshore of the Tehuantepec seismic gap increased following the 2017 Chiapas normal-faulting intraplate earthquake.
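Coulomb stress transfer analyses of this kind evaluate the standard Coulomb failure stress change on a receiver fault, ΔCFS = Δτ + μ'Δσn. A one-line sketch (the sign convention with unclamping positive, and the effective friction μ' = 0.4, are common choices in the literature, not values quoted by this abstract):

```python
def coulomb_stress_change(d_shear, d_normal, mu_eff=0.4):
    """Coulomb failure stress change on a receiver fault:
    dCFS = d_tau + mu' * d_sigma_n, where d_tau is the shear-stress
    change resolved in the slip direction and d_sigma_n is the
    normal-stress change (positive = unclamping). Positive dCFS
    brings the fault closer to failure."""
    return d_shear + mu_eff * d_normal
```

A positive ΔCFS on the interplate thrust interface is what translates into the increased probability of a future megathrust event reported above.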

  1. Isoprene emissions over Asia 1979-2012 : impact of climate and land use changes

    NASA Astrophysics Data System (ADS)

    Stavrakou, Trissevgeni; Müller, Jean-Francois; Bauwens, Maite; Guenther, Alex; De Smedt, Isabelle; Van Roozendael, Michel

    2014-05-01

Due to the scarcity of observational constraints and the rapidly changing environment in East and Southeast Asia, isoprene emissions predicted by models are expected to carry substantial uncertainties. This study aims to improve upon current bottom-up estimates and to investigate the temporal evolution of isoprene fluxes in Asia over 1979-2012. To this end, we use the MEGAN model and incorporate (i) changes in land use, including the rapid expansion of oil palms, (ii) meteorological variability, (iii) long-term changes in solar radiation constrained by surface network measurements, and (iv) recent experimental evidence that South Asian forests are much weaker isoprene emitters than previously assumed. These effects lead to a significant reduction of the total isoprene fluxes over the studied domain compared to the standard simulation. The bottom-up emissions are evaluated against satellite-based emission estimates derived from inverse modelling constrained by GOME-2/MetOp-A formaldehyde columns over 2007-2012. The top-down estimates support our assumptions and confirm the lower isoprene emission rates in the tropical forests of Indonesia and Malaysia.
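MEGAN-type isoprene emissions are computed as a landscape emission factor scaled by environmental activity factors, which is why meteorological variability and radiation changes feed directly into the flux trends discussed above. A sketch using the classic Guenther et al. (1993) light and temperature response functions that MEGAN builds on (constants are the published 1993 values; this is an illustrative simplification, not the full MEGAN parameterization used in the study):

```python
import math

R = 8.314                            # gas constant [J mol-1 K-1]
ALPHA, C_L1 = 0.0027, 1.066          # light-response constants (Guenther et al. 1993)
C_T1, C_T2 = 95000.0, 230000.0       # temperature-response constants [J mol-1]
T_S, T_M = 303.0, 314.0              # reference and optimum temperatures [K]

def isoprene_activity(ppfd, temp_k):
    """Combined light (C_L) and temperature (C_T) activity factor for
    isoprene; the emission is then EF * C_L * C_T for emission factor EF.
    ppfd is photosynthetic photon flux density [umol m-2 s-1], temp_k is
    leaf temperature [K]."""
    c_l = ALPHA * C_L1 * ppfd / math.sqrt(1.0 + ALPHA ** 2 * ppfd ** 2)
    num = math.exp(C_T1 * (temp_k - T_S) / (R * T_S * temp_k))
    den = 1.0 + math.exp(C_T2 * (temp_k - T_M) / (R * T_S * temp_k))
    return c_l * (num / den)
```

The activity factor rises steeply with temperature up to an optimum near 314 K and falls off above it, so multi-decadal changes in radiation and temperature modulate the emission trend even at fixed land cover.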

  2. A Global Analysis of Light and Charge Yields in Liquid Xenon

    DOE PAGES

    Lenardo, Brian; Kazkaz, Kareem; Manalaysay, Aaron; ...

    2015-11-04

Here, we present an updated model of light and charge yields from nuclear recoils in liquid xenon with a simultaneously constrained parameter set. A global analysis is performed using measurements of electron and photon yields compiled from all available historical data, as well as measurements of the ratio of the two. These data sweep over a wide range of recoil energies (keV scale) and external applied electric fields (V/cm scale). The model is constrained by constructing global cost functions and using a simulated annealing algorithm and a Markov Chain Monte Carlo approach to optimize and find confidence intervals on all free parameters in the model. This analysis contrasts with previous work in that we do not unnecessarily exclude datasets nor impose artificially conservative assumptions, do not use spline functions, and reduce the number of parameters used in NEST v0.98. Here, we report our results and the calculated best-fit charge and light yields. These quantities are crucial to understanding the response of liquid xenon detectors in the energy regime important for rare event searches such as the direct detection of dark matter particles.
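The global-fit strategy described, building one cost function over all datasets and minimizing it with simulated annealing, can be sketched generically as follows (the quadratic chi-square cost and the minimal annealer are our illustration, not the NEST fitting code):

```python
import math
import random

def global_cost(params, datasets, model):
    """Global chi-square: sum over all datasets of squared, error-weighted
    residuals between measured yields and the model prediction. Each
    dataset is a (x_values, y_values, sigma_values) tuple; this layout is
    our convention."""
    cost = 0.0
    for xs, ys, sigmas in datasets:
        for x, y, s in zip(xs, ys, sigmas):
            cost += ((y - model(x, params)) / s) ** 2
    return cost

def anneal(cost_fn, start, step=0.1, t0=1.0, cooling=0.995, iters=3000, seed=1):
    """Minimal simulated-annealing minimizer: propose Gaussian moves,
    accept uphill moves with Boltzmann probability exp(-dCost/T), and
    cool the temperature geometrically. Returns (best_params, best_cost)."""
    rng = random.Random(seed)
    x, f = list(start), cost_fn(start)
    best, fbest, t = list(x), f, t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = cost_fn(cand)
        if fc < f or rng.random() < math.exp((f - fc) / t):
            x, f = cand, fc
            if f < fbest:
                best, fbest = list(x), f
        t *= cooling
    return best, fbest
```

In the real analysis the annealing stage locates the global minimum and an MCMC stage then maps out confidence intervals around it; the sketch covers only the first step.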

  3. New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations

    NASA Technical Reports Server (NTRS)

    Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.

    2012-01-01

In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large-scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.
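The first set of experiments uses ensemble spread as the measure of parameterization-driven transport error. Point-wise mean and spread across equally sized model fields can be computed as in this sketch (the flat-list data layout is our assumption):

```python
def ensemble_stats(members):
    """Point-wise ensemble mean and spread (sample standard deviation
    across members) for a set of equally sized model fields, each given
    as a flat list of gridpoint values. Large spread flags regions where
    the perturbed parameterizations disagree most."""
    n = len(members)
    npts = len(members[0])
    means, spreads = [], []
    for j in range(npts):
        vals = [m[j] for m in members]
        mu = sum(vals) / n
        var = sum((v - mu) ** 2 for v in vals) / (n - 1)
        means.append(mu)
        spreads.append(var ** 0.5)
    return means, spreads
```

Applied to simulated CO2 mixing-ratio fields, the spread map highlights where sub-grid transport assumptions dominate the simulated distribution.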

  4. Model-Observation "Data Cubes" for the DOE Atmospheric Radiation Measurement Program's LES ARM Symbiotic Simulation and Observation (LASSO) Workflow

    NASA Astrophysics Data System (ADS)

    Vogelmann, A. M.; Gustafson, W. I., Jr.; Toto, T.; Endo, S.; Cheng, X.; Li, Z.; Xiao, H.

    2015-12-01

    The Department of Energy's Atmospheric Radiation Measurement (ARM) Climate Research Facility's Large-Eddy Simulation (LES) ARM Symbiotic Simulation and Observation (LASSO) Workflow is currently being designed to provide output from routine LES to complement its extensive observations. The modeling portion of the LASSO workflow, presented by Gustafson et al., will initially focus on shallow convection over the ARM megasite in Oklahoma, USA. This presentation describes how the LES output will be combined with observations to construct multi-dimensional and dynamically consistent "data cubes", aimed at providing the best description of the atmospheric state for use in analyses by the community. The megasite observations are used to constrain large-eddy simulations that provide complete spatial and temporal coverage of observables; further, the simulations also provide information on processes that cannot be observed. Statistical comparisons of model output with their observables are used to assess the quality of a given simulated realization and its associated uncertainties. A data cube is a model-observation package that provides: (1) metrics of model-observation statistical summaries to assess the simulations and the ensemble spread; (2) statistical summaries of additional model property output that cannot be, or is very difficult to, observe; and (3) snapshots of the 4-D simulated fields from the integration period. Searchable metrics are provided that characterize the general atmospheric state to assist users in finding cases of interest, such as categorization of daily weather conditions and their specific attributes. The data cubes will be accompanied by tools designed for easy access to cube contents from within the ARM archive and externally, the ability to compare multiple data streams within an event as well as across events, and the ability to use common grids and time sampling, where appropriate.

  5. Advances in the simulation and automated measurement of well-sorted granular material: 1. Simulation

    USGS Publications Warehouse

    Daniel Buscombe,; Rubin, David M.

    2012-01-01

    In this, the first of a pair of papers which address the simulation and automated measurement of well-sorted natural granular material, a method is presented for the simulation of two-phase (solid, void) assemblages of discrete non-cohesive particles. The purpose is to have a flexible, yet computationally and theoretically simple, suite of tools with well-constrained and well-known statistical properties, in order to simulate realistic granular material as a discrete element model with realistic size and shape distributions, for a variety of purposes. The stochastic modeling framework is based on three-dimensional tessellations with variable degrees of order in the particle-packing arrangement. Examples of sediments with a variety of particle size distributions and spatial variability in grain size are presented. The relationship between particle shape and porosity conforms to published data. The immediate application is testing new algorithms for automated measurements of particle properties (mean and standard deviation of particle sizes, and apparent porosity) from images of natural sediment, as detailed in the second of this pair of papers. The model could also prove useful for simulating specific depositional structures found in natural sediments, the result of physical alterations to packing and grain fabric, using discrete particle flow models. While the principal focus here is on naturally occurring sediment and sedimentary rock, the methods presented might also be useful for simulations of similar granular or cellular material encountered in the engineering, industrial and life sciences.

  6. Antarctic contribution to meltwater pulse 1A from reduced Southern Ocean overturning.

    PubMed

    Golledge, N R; Menviel, L; Carter, L; Fogwill, C J; England, M H; Cortese, G; Levy, R H

    2014-09-29

    During the last glacial termination, the upwelling strength of the southern polar limb of the Atlantic Meridional Overturning Circulation varied, changing the ventilation and stratification of the high-latitude Southern Ocean. During the same period, at least two phases of abrupt global sea-level rise--meltwater pulses--took place. Although the timing and magnitude of these events have become better constrained, a causal link between ocean stratification, the meltwater pulses and accelerated ice loss from Antarctica has not been proven. Here we simulate Antarctic ice sheet evolution over the last 25 kyr using a data-constrained ice-sheet model forced by changes in Southern Ocean temperature from an Earth system model. Results reveal several episodes of accelerated ice-sheet recession, the largest being coincident with meltwater pulse 1A. This resulted from reduced Southern Ocean overturning following Heinrich Event 1, when warmer subsurface water thermally eroded grounded marine-based ice and instigated a positive feedback that further accelerated ice-sheet retreat.

  7. Gravitational Wave Signals from the First Massive Black Hole Seeds

    NASA Astrophysics Data System (ADS)

    Hartwig, Tilman; Agarwal, Bhaskar; Regan, John A.

    2018-05-01

    Recent numerical simulations reveal that the isothermal collapse of pristine gas in atomic cooling haloes may result in stellar binaries of supermassive stars with M* ≳ 10⁴ M⊙. For the first time, we compute the in-situ merger rate for such massive black hole remnants by combining their abundance and multiplicity estimates. For black holes with initial masses in the range 10⁴-10⁶ M⊙ merging at redshifts z ≳ 15, our optimistic model predicts that LISA should be able to detect 0.6 mergers per year. This rate of detection can be attributed, without confusion, to the in-situ mergers of seeds from the collapse of very massive stars. Equally, in the case where LISA observes no mergers from heavy seeds at z ≳ 15, we can constrain the combined number density, multiplicity, and coalescence times of these high-redshift systems. This letter proposes gravitational wave signatures as a means to constrain theoretical models and processes that govern the abundance of massive black hole seeds in the early Universe.

  8. Using CATS Near-Real-time Lidar Observations to Monitor and Constrain Volcanic Sulfur Dioxide (SO2) Forecasts

    NASA Technical Reports Server (NTRS)

    Hughes, E. J.; Yorks, J.; Krotkov, N. A.; da Silva, A. M.; Mcgill, M.

    2016-01-01

    An eruption of Italian volcano Mount Etna on 3 December 2015 produced fast-moving sulfur dioxide (SO2) and sulfate aerosol clouds that traveled across Asia and the Pacific Ocean, reaching North America in just 5 days. The Ozone Profiler and Mapping Suite's Nadir Mapping UV spectrometer aboard the U.S. National Polar-orbiting Partnership satellite observed the horizontal transport of the SO2 cloud. Vertical profiles of the colocated volcanic sulfate aerosols were observed between 11.5 and 13.5 km by the new Cloud Aerosol Transport System (CATS) space-based lidar aboard the International Space Station. Backward trajectory analysis estimates the SO2 cloud altitude at 7-12 km. Eulerian model simulations of the SO2 cloud constrained by CATS measurements produced more accurate dispersion patterns compared to those initialized with the back trajectory height estimate. The near-real-time data processing capabilities of CATS are unique, and this work demonstrates the use of these observations to monitor and model volcanic clouds.

  10. Statistical Issues in Galaxy Cluster Cosmology

    NASA Technical Reports Server (NTRS)

    Mantz, Adam

    2013-01-01

    The number and growth of massive galaxy clusters are sensitive probes of cosmological structure formation. Surveys at various wavelengths can detect clusters to high redshift, but the fact that cluster mass is not directly observable complicates matters, requiring us to simultaneously constrain scaling relations of observable signals with mass. The problem can be cast as one of regression, in which the data set is truncated, the (cosmology-dependent) underlying population must be modeled, and strong, complex correlations between measurements often exist. Simulations of cosmological structure formation provide a robust prediction for the number of clusters in the Universe as a function of mass and redshift (the mass function), but they cannot reliably predict the observables used to detect clusters in sky surveys (e.g. X-ray luminosity). Consequently, observers must constrain observable-mass scaling relations using additional data, and use the scaling relation model in conjunction with the mass function to predict the number of clusters as a function of redshift and luminosity.
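    The truncation problem described above can be illustrated with a toy forward model: draw masses from a falling mass function, map mass to an observable through a scattered scaling relation, and keep only clusters above a detection limit. The slope, scatter, and limit below are hypothetical numbers chosen for illustration, not fitted values from the literature:

```python
import math
import random

random.seed(3)

# Toy forward model of a truncated cluster survey. The mass-function
# slope, scaling-relation slope and scatter, and detection limit are
# illustrative choices, not measured quantities.
ALPHA = 2.0      # power-law slope of the toy mass function
M_MIN = 1.0      # minimum mass, arbitrary units
L_LIMIT = 5.0    # survey detection limit on the observable

def draw_mass():
    # inverse-CDF sampling of p(M) ∝ M**-(ALPHA + 1) for M > M_MIN
    return M_MIN * (1.0 - random.random()) ** (-1.0 / ALPHA)

def observable(mass):
    # log-normal scaling relation: L ∝ M**1.6 with ~40% scatter
    return mass**1.6 * math.exp(random.gauss(0.0, 0.4))

population = [(m, observable(m)) for m in (draw_mass() for _ in range(100_000))]
detected = [(m, l) for m, l in population if l >= L_LIMIT]

mean_all = sum(m for m, _ in population) / len(population)
mean_det = sum(m for m, _ in detected) / len(detected)
print(f"detected fraction:          {len(detected) / len(population):.3f}")
print(f"mean mass, full population: {mean_all:.2f}")
print(f"mean mass, detected sample: {mean_det:.2f}")  # biased high by truncation
```

    The detected sample's mean mass exceeds the population's, which is the selection bias that forces observers to model the underlying population and scaling relation jointly rather than regress on the detected clusters alone.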

  11. Compton Reflection in AGN with Simbol-X

    NASA Astrophysics Data System (ADS)

    Beckmann, V.; Courvoisier, T. J.-L.; Gehrels, N.; Lubiński, P.; Malzac, J.; Petrucci, P. O.; Shrader, C. R.; Soldi, S.

    2009-05-01

    AGN exhibit complex hard X-ray spectra. Our current understanding is that the emission is dominated by inverse Compton processes which take place in the corona above the accretion disk, and that absorption and reflection in a distant absorber play a major role. These processes can be directly observed through the shape of the continuum, the Compton reflection hump around 30 keV, and the iron fluorescence line at 6.4 keV. We demonstrate the capabilities of Simbol-X to constrain complex models for cases like MCG-05-23-016, NGC 4151, NGC 2110, and NGC 4051 in short (10 ksec) observations. We compare the simulations with recent observations of these sources by INTEGRAL, Swift and Suzaku. Constraining reflection models for AGN with Simbol-X will help us to get a clear view of the processes and geometry near the central engine in AGN, and will give insight into which sources are responsible for the Cosmic X-ray background at energies >20 keV.

  12. Analytical Model of Large Data Transactions in CoAP Networks

    PubMed Central

    Ludovici, Alessandro; Di Marco, Piergiuseppe; Calveras, Anna; Johansson, Karl H.

    2014-01-01

    We propose a novel analytical model to study fragmentation methods in wireless sensor networks adopting the Constrained Application Protocol (CoAP) and the IEEE 802.15.4 standard for medium access control (MAC). The blockwise transfer technique proposed in CoAP and the 6LoWPAN fragmentation are included in the analysis. The two techniques are compared in terms of reliability and delay, depending on the traffic, the number of nodes and the parameters of the IEEE 802.15.4 MAC. The results are validated through Monte Carlo simulations. To the best of our knowledge, this is the first study that evaluates and compares analytically the performance of CoAP blockwise transfer and 6LoWPAN fragmentation. A major contribution is the possibility to understand the behavior of both techniques under different network conditions. Our results show that 6LoWPAN fragmentation is preferable for delay-constrained applications. For highly congested networks, the blockwise transfer slightly outperforms 6LoWPAN fragmentation in terms of reliability. PMID:25153143
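    The reliability difference between the two techniques can be illustrated with a crude Monte Carlo along the lines of the comparison above. The per-frame delivery probability, fragment count, and retry budget are hypothetical placeholders; the paper's analytical model of the IEEE 802.15.4 MAC is far more detailed than this sketch:

```python
import random

random.seed(0)

# Hypothetical numbers for illustration only: per-frame delivery
# probability on a congested channel, fragments per payload, and a
# small CoAP retransmission budget.
P_FRAME = 0.9
N_FRAGMENTS = 8
TRIALS = 100_000

def sixlowpan_ok():
    # 6LoWPAN fragmentation: no per-fragment recovery, so losing any
    # one fragment loses the whole packet.
    return all(random.random() < P_FRAME for _ in range(N_FRAGMENTS))

def blockwise_ok(retries=2):
    # CoAP blockwise transfer: each block is its own confirmable
    # exchange and can be retransmitted individually (at a delay cost).
    return all(any(random.random() < P_FRAME for _ in range(1 + retries))
               for _ in range(N_FRAGMENTS))

r6 = sum(sixlowpan_ok() for _ in range(TRIALS)) / TRIALS
rb = sum(blockwise_ok() for _ in range(TRIALS)) / TRIALS
print(f"6LoWPAN reliability:   {r6:.3f}")   # analytically 0.9**8 ≈ 0.430
print(f"blockwise reliability: {rb:.3f}")   # analytically (1 - 0.1**3)**8 ≈ 0.992
```

    The toy reproduces the qualitative finding: per-block recovery buys reliability under congestion, paid for in extra round trips, which is why 6LoWPAN fragmentation remains attractive when delay dominates.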

  13. Enriching Triangle Mesh Animations with Physically Based Simulation.

    PubMed

    Li, Yijing; Xu, Hongyi; Barbic, Jernej

    2017-10-01

    We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.

  14. Performance enhancement of fin attached ice-on-coil type thermal storage tank for different fin orientations using constrained and unconstrained simulations

    NASA Astrophysics Data System (ADS)

    Kim, M. H.; Duong, X. Q.; Chung, J. D.

    2017-03-01

    One of the drawbacks of latent thermal energy storage systems is the slow charging and discharging time due to the low thermal conductivity of the phase change materials (PCM). This study numerically investigated the PCM melting process inside a finned tube to determine enhanced heat transfer performance. The influences of fin length and fin number were investigated. Also, two different fin orientations, a vertical and a horizontal type, were examined using two different simulation methods, constrained and unconstrained. The unconstrained simulation, which considers the density difference between the solid and liquid PCM, showed an approximately 40% faster melting rate than the constrained simulation. For a precise estimation of discharging performance, unconstrained simulation is essential. Thermal instability was found in the liquid layer below the solid PCM, contrary to linear stability theory, due to the strong convection driven by heat flux from the coil wall. As the fin length increases, the area affected by the fin becomes larger, and thus the discharging time becomes shorter. The discharging performance also increased as the fin number increased, but the additional enhancement from more than two fins was not discernible. The horizontal type shortened the complete melting time by approximately 10% compared to the vertical type.

  15. North American water availability under stress and duress: building understanding from simulations, observations and data products

    NASA Astrophysics Data System (ADS)

    Maxwell, R. M.; Condon, L. E.; Atchley, A. L.; Hector, B.

    2017-12-01

    Quantifying the available freshwater for human use and ecological function depends on fluxes and stores that are hard to observe. Evapotranspiration (ET) is the largest terrestrial flux of water behind precipitation but is observed with low spatial density. Likewise, groundwater is the largest freshwater store, yet is equally uncertain. The ability to upscale observations of these variables is an additional complication; point measurements are made at scales orders of magnitude smaller than remote sensing data products. Integrated hydrologic models that simulate continental extents at fine spatial resolution are now becoming an additional tool to constrain fluxes and address interconnections. For example, recent work has shown connections between water table depth and transpiration partitioning, and demonstrated the ability to reconcile point observations and large-scale inferences. Here we explore the dynamics of large hydrologic systems experiencing change and stress across continental North America using integrated model simulations, observations and data products. Simulations of aquifer depletion due to pervasive groundwater pumping diagnose both stream depletion and changes in ET. Simulations of systematic increases in temperature are used to understand the relationship between snowpack dynamics, surface and groundwater flow, ET and a changing climate. Remotely sensed products including the GRACE estimates of total storage change are downscaled using model simulations to better understand human impacts to the hydrologic cycle. These example applications motivate a path forward to better use simulations to understand water availability.

  16. The biomechanics of an overarm throwing task: a simulation model examination of optimal timing of muscle activations.

    PubMed

    Chowdhary, A G; Challis, J H

    2001-07-07

    A series of overarm throws, constrained to the parasagittal plane, were simulated using a muscle-actuated two-segment model representing the forearm and hand plus projectile. The parameters defining the modeled muscles and the anthropometry of the two-segment models were specific to the two young male subjects. All simulations commenced from a position of full elbow flexion and full wrist extension. The study was designed to elucidate the optimal inter-muscular coordination strategies for throwing projectiles to achieve maximum range, as well as maximum projectile kinetic energy, for a variety of projectile masses. A proximal-to-distal (PD) sequence of muscle activations was seen in many of the simulated throws, but not all. Under certain conditions moment reversal produced a longer throw and greater projectile energy, and deactivation of the muscles resulted in increased projectile energy. Therefore, simple timing of muscle activation does not fully describe the patterns of muscle recruitment which can produce optimal throws. The models of the two subjects required different timings of muscle activations, and for some of the tasks used different coordination patterns. Optimal strategies were found to vary with the mass of the projectile, the anthropometry and the muscle characteristics of the subjects modeled. The tasks examined were relatively simple, but basic rules for coordinating these tasks were not evident. Copyright 2001 Academic Press.

  17. Enabling parallel simulation of large-scale HPC network systems

    DOE PAGES

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; ...

    2016-04-07

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
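    The event-queue abstraction underlying discrete-event simulators like the one above can be shown in a few lines. This sequential sketch is not CODES/ROSS code (ROSS adds optimistic, rollback-capable parallel scheduling on top of the same abstraction); the three-hop "network" is a made-up toy model:

```python
import heapq

# Minimal sequential discrete-event core: a priority queue of
# (timestamp, sequence, handler, args) tuples, popped in time order.
class Simulator:
    def __init__(self):
        self.now = 0.0
        self._seq = 0          # tie-breaker so handlers never compare
        self._queue = []

    def schedule(self, delay, handler, *args):
        self._seq += 1
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler, args))

    def run(self):
        while self._queue:
            self.now, _, handler, args = heapq.heappop(self._queue)
            handler(*args)

# Toy model: a packet hopping across a 3-router path with fixed latency.
sim = Simulator()
log = []

def arrive(hop):
    log.append((sim.now, hop))
    if hop < 3:
        sim.schedule(1.5, arrive, hop + 1)   # per-hop latency of 1.5

sim.schedule(0.0, arrive, 1)
sim.run()
print(log)   # [(0.0, 1), (1.5, 2), (3.0, 3)]
```

    A conservative parallel engine would partition the routers across processes and only advance each one when it is provably safe; an optimistic engine like ROSS advances speculatively and rolls back on causality violations.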

  19. Application of constrained k-means clustering in ground motion simulation validation

    NASA Astrophysics Data System (ADS)

    Khoshnevis, N.; Taborda, R.

    2017-12-01

    The validation of ground motion synthetics has received increased attention over the last few years due to the advances in physics-based deterministic and hybrid simulation methods. Unlike for low frequency simulations (f ≤ 0.5 Hz), for which it has become reasonable to expect a good match between synthetics and data, in the case of high-frequency simulations (f ≥ 1 Hz) it is not possible to match results on a wiggle-by-wiggle basis. This is mostly due to the various complexities and uncertainties involved in earthquake ground motion modeling. Therefore, in order to compare synthetics with data, we turn to different time-series metrics, which are used as a means to characterize how the synthetics match the data in a qualitative and statistical sense. In general, these metrics provide GOF scores that measure the level of similarity in the time and frequency domains. It is common for these scores to be scaled from 0 to 10, with 10 representing a perfect match. Although using individual metrics for particular applications is considered more adequate, there is no consensus or unified method to classify the comparison between a set of synthetic and recorded seismograms when the various metrics offer different scores. We study the relationship among these metrics through a constrained k-means clustering approach. We define 4 hypothetical stations with scores 3, 5, 7, and 9 for all metrics. We put these stations in the category of cannot-link constraints. We generate the dataset through the validation of the results from a deterministic (physics-based) ground motion simulation for a moderate magnitude earthquake in the greater Los Angeles basin using three velocity models. The maximum frequency of the simulation is 4 Hz. The dataset involves over 300 stations and 11 metrics, or features, as they are understood in the clustering process, where the metrics form a multi-dimensional space.
We address the high-dimensional feature effects with a subspace-clustering analysis, generate a final labeled dataset of stations, and discuss the within-class statistical characteristics of each metric. Labeling these stations is the first step towards developing a unified metric to evaluate ground motion simulations in an application-independent manner.
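    The cannot-link construction described above can be sketched as a COP-k-means-style assignment step: a point may not join a cluster that already contains a point it is cannot-linked to. The one-dimensional synthetic scores below stand in for the study's 11-metric, 300-station dataset:

```python
import random

random.seed(2)

# COP-k-means-style clustering in one dimension: each point joins the
# nearest centroid that does not violate a cannot-link constraint.
# The four anchor "stations" with scores 3, 5, 7, and 9 mirror the
# setup described above; all other values here are synthetic.
def constrained_kmeans(points, k, cannot_link, iters=50):
    centroids = random.sample(points, k)
    labels = [None] * len(points)
    for _ in range(iters):
        labels = [None] * len(points)
        for i, p in enumerate(points):
            # try centroids from nearest to farthest
            for c in sorted(range(k), key=lambda cc: (p - centroids[cc]) ** 2):
                # reject c if a cannot-linked partner already sits in c
                if all(labels[j] != c for j in cannot_link.get(i, ())):
                    labels[i] = c
                    break
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return labels, centroids

anchors = [3.0, 5.0, 7.0, 9.0]
stations = [random.gauss(m, 0.4) for m in anchors for _ in range(20)]
points = anchors + stations
# the four anchors are pairwise cannot-linked (indices 0-3)
cannot_link = {i: [j for j in range(4) if j != i] for i in range(4)}

labels, centroids = constrained_kmeans(points, 4, cannot_link)
print("anchor labels:", labels[:4])     # guaranteed four distinct clusters
print("centroids:", sorted(round(c, 2) for c in centroids))
```

    Because the anchors are assigned first and pairwise cannot-linked, they necessarily land in four distinct clusters, which pins each cluster to one of the reference score levels.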

  20. Using Lidar and Radar measurements to constrain predictions of forest ecosystem structure and function.

    PubMed

    Antonarakis, Alexander S; Saatchi, Sassan S; Chazdon, Robin L; Moorcroft, Paul R

    2011-06-01

    Insights into vegetation and aboveground biomass dynamics within terrestrial ecosystems have come almost exclusively from ground-based forest inventories that are limited in their spatial extent. Lidar and synthetic-aperture Radar are promising remote-sensing-based techniques for obtaining comprehensive measurements of forest structure at regional to global scales. In this study we investigate how Lidar-derived forest heights and Radar-derived aboveground biomass can be used to constrain the dynamics of the ED2 terrestrial biosphere model. Four-year simulations initialized with Lidar and Radar structure variables were compared against simulations initialized from forest-inventory data and output from a long-term potential-vegetation simulation. Both height and biomass initializations from Lidar and Radar measurements significantly improved the representation of forest structure within the model, eliminating the bias of too many large trees that arose in the potential-vegetation-initialized simulation. The Lidar and Radar initializations decreased the proportion of larger trees estimated by the potential vegetation by approximately 20-30%, matching the forest inventory. This resulted in improved predictions of ecosystem-scale carbon fluxes and structural dynamics compared to predictions from the potential-vegetation simulation. The Radar initialization produced biomass values that were 75% closer to the forest inventory, with Lidar initializations producing canopy height values closest to the forest inventory. Net primary production values for the Radar and Lidar initializations were around 6-8% closer to the forest inventory. Correcting the Lidar and Radar initializations for forest composition resulted in improved biomass and basal-area dynamics as well as leaf-area index.
Correcting the Lidar and Radar initializations for forest composition and fine-scale structure by combining the remote-sensing measurements with ground-based inventory data further improved predictions, suggesting that further improvements of structural and carbon-flux metrics will also depend on obtaining reliable estimates of forest composition and accurate representation of the fine-scale vertical and horizontal structure of plant canopies.

  1. LSST: Cadence Design and Simulation

    NASA Astrophysics Data System (ADS)

    Cook, Kem H.; Pinto, P. A.; Delgado, F.; Miller, M.; Petry, C.; Saha, A.; Gee, P. A.; Tyson, J. A.; Ivezic, Z.; Jones, L.; LSST Collaboration

    2009-01-01

    The LSST Project has developed an operations simulator to investigate how best to observe the sky to achieve its multiple science goals. The simulator has a sophisticated model of the telescope and dome to properly constrain potential observing cadences. This model has also proven useful for investigating various engineering issues ranging from sizing of slew motors, to design of cryogen lines to the camera. The simulator is capable of balancing cadence goals from multiple science programs, and attempts to minimize time spent slewing as it carries out these goals. The operations simulator has been used to demonstrate a 'universal' cadence which delivers the science requirements for a deep cosmology survey, a Near Earth Object Survey and good sampling in the time domain. We will present the results of simulating 10 years of LSST operations using realistic seeing distributions, historical weather data, scheduled engineering downtime and current telescope and camera parameters. These simulations demonstrate the capability of the LSST to deliver a 25,000 square degree survey probing the time domain including 20,000 square degrees for a uniform deep, wide, fast survey, while effectively surveying for NEOs over the same area. We will also present our plans for future development of the simulator--better global minimization of slew time and eventual transition to a scheduler for the real LSST.

  2. Models for small-scale structure on cosmic strings. II. Scaling and its stability

    NASA Astrophysics Data System (ADS)

    Vieira, J. P. P.; Martins, C. J. A. P.; Shellard, E. P. S.

    2016-11-01

    We make use of the formalism described in a previous paper [Martins et al., Phys. Rev. D 90, 043518 (2014)] to address general features of wiggly cosmic string evolution. In particular, we highlight the important role played by poorly understood energy loss mechanisms and propose a simple Ansatz which tackles this problem in the context of an extended velocity-dependent one-scale model. We find a general procedure to determine all the scaling solutions admitted by a specific string model and study their stability, enabling a detailed comparison with future numerical simulations. A simpler comparison with previous Goto-Nambu simulations supports earlier evidence that scaling is easier to achieve in the matter era than in the radiation era. In addition, we also find that the requirement that a scaling regime be stable seems to notably constrain the allowed range of energy loss parameters.

  3. Assessing the applicability of WRF optimal parameters under the different precipitation simulations in the Greater Beijing Area

    NASA Astrophysics Data System (ADS)

    Di, Zhenhua; Duan, Qingyun; Wang, Chen; Ye, Aizhong; Miao, Chiyuan; Gong, Wei

    2018-03-01

    Forecasting skills of complex weather and climate models have been improved by using more effective optimization methods to tune the sensitive parameters that exert the greatest impact on simulated results. However, whether the optimal parameter values still work when the model simulation conditions vary remains a scientific question deserving of study. In this study, a highly effective optimization method, adaptive surrogate model-based optimization (ASMO), was first used to tune nine sensitive parameters from four physical parameterization schemes of the Weather Research and Forecasting (WRF) model to obtain better summer precipitation forecasting over the Greater Beijing Area in China. Then, to assess the applicability of the optimal parameter values, simulation results from the WRF model with default and optimal parameter values were compared across precipitation events, boundary conditions, spatial scales, and physical processes in the Greater Beijing Area. Summer precipitation events from six years were used to calibrate and evaluate the optimal parameter values of the WRF model. Three sets of boundary data and two spatial resolutions were adopted to evaluate the superiority of the calibrated optimal parameters over the default parameters under WRF simulations with different boundary conditions and spatial resolutions, respectively. Physical interpretations of the optimal parameters, indicating how they improve precipitation simulation results, were also examined. All the results showed that the optimal parameters obtained by ASMO are superior to the default parameters for WRF simulations predicting summer precipitation in the Greater Beijing Area because the optimal parameters are not constrained by specific precipitation events, boundary conditions, and spatial resolutions.
The optimal values of the nine parameters were determined from 127 parameter samples using the ASMO method, which showed that the ASMO method is very highly-efficient for optimizing WRF model parameters.

  4. Constraining the temperature history of the past millennium using early instrumental observations

    NASA Astrophysics Data System (ADS)

    Brohan, P.

    2012-12-01

The current assessment that twentieth-century global temperature change is unusual in the context of the last thousand years relies on estimates of temperature changes from natural proxies (tree-rings, ice-cores, etc.) and climate model simulations. Confidence in such estimates is limited by difficulties in calibrating the proxies and by systematic differences between proxy reconstructions and model simulations, notably large differences in multi-decadal variability among proxy reconstructions and substantial uncertainties in the effect of volcanic eruptions. Because the disagreement between the estimates extends into the relatively recent period of the early nineteenth century, it is possible to compare them with a reliable instrumental estimate of the temperature change over that period, provided that enough early thermometer observations, covering a wide enough expanse of the world, can be collected. By constraining key aspects of the reconstructions and simulations, instrumental observations, inevitably from a limited period, can reduce reconstruction uncertainty throughout the millennium. A considerable quantity of early instrumental observations is preserved in the world's archives. One organisation which systematically made observations and collected the results was the English East India Company (EEIC), and 900 log-books of EEIC ships containing daily instrumental measurements of temperature and pressure have been preserved in the British Library. Similar records from voyages of exploration and scientific investigation are preserved in the published literature and in national archives. Some of these records have been extracted and digitised, providing hundreds of thousands of new weather records and offering an unprecedentedly detailed view of the weather and climate of the late eighteenth and early nineteenth centuries. 
The new thermometer observations demonstrate that the large-scale temperature response to the Tambora eruption and the 1809 eruption was modest (perhaps 0.5 °C). This provides a powerful out-of-sample validation for the proxy reconstructions, supporting their use for longer-term climate reconstructions. However, some of the climate model simulations in the CMIP5 ensemble show much larger volcanic effects than this; such simulations are unlikely to be accurate in this respect.

  5. Warm Dark Matter and Cosmic Reionization

    DOE PAGES

    Villanueva-Domingo, Pablo; Gnedin, Nickolay Y.; Mena, Olga

    2018-01-10

In models with dark matter made of particles with keV masses, such as a sterile neutrino, small-scale density perturbations are suppressed, delaying the period at which the lowest mass galaxies are formed and therefore shifting the reionization processes to later epochs. In this study, focusing on Warm Dark Matter (WDM) with masses close to its present lower bound, i.e., around the 3 keV region, we derive constraints from galaxy luminosity functions, the ionization history and the Gunn–Peterson effect. We show that even if star formation efficiency in the simulations is adjusted to match the observed UV galaxy luminosity functions in both CDM and WDM models, the full distribution of Gunn–Peterson optical depth retains the strong signature of delayed reionization in the WDM model. Furthermore, until the star formation and stellar feedback model used in modern galaxy formation simulations is constrained better, any conclusions on the nature of dark matter derived from reionization observables remain model-dependent.

  6. Warm Dark Matter and Cosmic Reionization

    NASA Astrophysics Data System (ADS)

    Villanueva-Domingo, Pablo; Gnedin, Nickolay Y.; Mena, Olga

    2018-01-01

    In models with dark matter made of particles with keV masses, such as a sterile neutrino, small-scale density perturbations are suppressed, delaying the period at which the lowest mass galaxies are formed and therefore shifting the reionization processes to later epochs. In this study, focusing on Warm Dark Matter (WDM) with masses close to its present lower bound, i.e., around the 3 keV region, we derive constraints from galaxy luminosity functions, the ionization history and the Gunn–Peterson effect. We show that even if star formation efficiency in the simulations is adjusted to match the observed UV galaxy luminosity functions in both CDM and WDM models, the full distribution of Gunn–Peterson optical depth retains the strong signature of delayed reionization in the WDM model. However, until the star formation and stellar feedback model used in modern galaxy formation simulations is constrained better, any conclusions on the nature of dark matter derived from reionization observables remain model-dependent.

  7. Modelling Management Practices in Viticulture while Considering Resource Limitations: The Dhivine Model

    PubMed Central

    Martin-Clouaire, Roger; Rellier, Jean-Pierre; Paré, Nakié; Voltz, Marc; Biarnès, Anne

    2016-01-01

Many farming-system studies have investigated the design and evaluation of crop-management practices with respect to economic performance and reduction in environmental impacts. In contrast, little research has been devoted to analysing these practices in terms of matching the recurrent context-dependent demand for resources (labour in particular) with those available on the farm. This paper presents Dhivine, a simulation model of operational management of grape production at the vineyard scale. Particular attention is paid to representing a flexible plan that organises activities temporally, the resources available to the vineyard manager, and the process of scheduling and executing the activities. The model relies on a generic production-system ontology used in several agricultural production domains. The types of investigations that the model supports are briefly illustrated. The enhanced realism of the production-management situations simulated makes it possible to examine and understand properties of resource-constrained work-organisation strategies and possibilities for improving them. PMID:26990089

  9. The effect of stochastic technique on estimates of population viability from transition matrix models

    USGS Publications Warehouse

    Kaye, T.N.; Pyke, David A.

    2003-01-01

Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5–10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to ≤100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results, and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
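The matrix-selection method described in this record can be sketched in a few lines: a whole observed projection matrix is drawn at random at each time step, and the long-run stochastic log growth rate is accumulated from the per-step growth of a renormalised stage vector. The two 2-stage matrices below are hypothetical placeholders, not transition data from the study's plant populations.

```python
import numpy as np

def stochastic_growth_rate(matrices, n_steps=5000, seed=0):
    """Estimate the stochastic population growth rate (log lambda_s)
    by the matrix-selection method: one whole observed matrix is
    drawn at random and applied at each time step."""
    rng = np.random.default_rng(seed)
    n = np.ones(matrices[0].shape[0])  # arbitrary starting stage vector
    log_growth = 0.0
    for _ in range(n_steps):
        A = matrices[rng.integers(len(matrices))]
        n = A @ n
        total = n.sum()
        log_growth += np.log(total)   # per-step log growth
        n /= total                    # renormalise to avoid overflow
    return log_growth / n_steps

# Two hypothetical annual transition matrices (illustrative values only)
A_good = np.array([[0.2, 1.5], [0.4, 0.8]])
A_bad = np.array([[0.1, 0.6], [0.2, 0.5]])
log_lambda_s = stochastic_growth_rate([A_good, A_bad])
```

A negative estimate indicates long-run decline; drawing whole matrices preserves the observed correlations among vital rates within a year, which element-by-element sampling does not.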

  10. A Simulation Tool for Dynamic Contrast Enhanced MRI

    PubMed Central

    Mauconduit, Franck; Christen, Thomas; Barbier, Emmanuel Luc

    2013-01-01

The quantification of bolus-tracking MRI techniques remains challenging. The acquisition usually relies on one contrast and the analysis on a simplified model of the various phenomena that arise within a voxel, leading to inaccurate perfusion estimates. To evaluate how simplifications in the interstitial model impact perfusion estimates, we propose a numerical tool to simulate the MR signal provided by a dynamic contrast enhanced (DCE) MRI experiment. Our model encompasses the intrinsic longitudinal and transverse relaxations, the magnetic field perturbations induced by susceptibility interfaces (vessels and cells), the diffusion of the water protons, the blood flow, the permeability of the vessel wall to the contrast agent (CA) and the constrained diffusion of the CA within the voxel. The blood compartment is modeled as a uniform compartment. The different blocks of the simulation are validated and compared to classical models. The impact of the CA diffusivity on the permeability and blood volume estimates is evaluated. Simulations demonstrate that the CA diffusivity only slightly impacts the permeability estimates for classical blood flow and CA diffusion values. The effect of long echo times is investigated. Simulations show that DCE-MRI performed with a long echo time may already lead to significant underestimation of the blood volume (up to 30% lower for brain tumor permeability values). The potential and the versatility of the proposed implementation are evaluated by running the simulation with realistic vascular geometry obtained from two-photon microscopy and with impermeable cells in the extravascular environment. In conclusion, the proposed simulation tool describes DCE-MRI experiments and may be used to evaluate and optimize acquisition and processing strategies. PMID:23516414
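For context, the "classical models" that such voxel-scale simulators are compared against are typically compartmental descriptions like the extended Tofts model, in which tissue CA concentration is a plasma term plus a convolution of the arterial input with an exponential impulse response. The sketch below uses an entirely hypothetical arterial input function and parameter values; it illustrates the form of that reference model, not the simulator described in this record.

```python
import numpy as np

def extended_tofts(t, cp, ktrans, kep, vp):
    """Extended Tofts model:
    Ct(t) = vp*Cp(t) + Ktrans * integral_0^t Cp(tau)*exp(-kep*(t - tau)) dtau."""
    dt = t[1] - t[0]
    irf = np.exp(-kep * t)                        # exponential impulse response
    conv = np.convolve(cp, irf)[: len(t)] * dt    # discrete convolution with Cp
    return vp * cp + ktrans * conv

# Hypothetical bi-exponential arterial input function (illustrative only)
t = np.arange(0.0, 5.0, 0.01)                     # time in minutes
cp = 5.0 * (np.exp(-0.5 * t) - np.exp(-4.0 * t))  # plasma CA concentration
ct = extended_tofts(t, cp, ktrans=0.2, kep=0.5, vp=0.03)
```

Fitting Ktrans, kep and vp to measured Ct(t) is the standard analysis that the abstract's simulator is designed to stress-test.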

  11. Relative frequencies of constrained events in stochastic processes: An analytical approach.

    PubMed

    Rusconi, S; Akhmatskaya, E; Sokolovski, D; Ballard, N; de la Cal, J C

    2015-10-01

The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. If the shapes of the PDFs are known, different optimization schemes can be applied to experimental data in order to evaluate the probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least ≈10^4). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, which makes the method useful for various applications.

  12. Study of Interpolated Timing Recovery Phase-Locked Loop with Linearly Constrained Adaptive Prefilter for Higher-Density Optical Disc

    NASA Astrophysics Data System (ADS)

    Kajiwara, Yoshiyuki; Shiraishi, Junya; Kobayashi, Shoei; Yamagami, Tamotsu

    2009-03-01

    A digital phase-locked loop (PLL) with a linearly constrained adaptive filter (LCAF) has been studied for higher-linear-density optical discs. LCAF has been implemented before an interpolated timing recovery (ITR) PLL unit in order to improve the quality of phase error calculation by using an adaptively equalized partial response (PR) signal. Coefficient update of an asynchronous sampled adaptive FIR filter with a least-mean-square (LMS) algorithm has been constrained by a projection matrix in order to suppress the phase shift of the tap coefficients of the adaptive filter. We have developed projection matrices that are suitable for Blu-ray disc (BD) drive systems by numerical simulation. Results have shown the properties of the projection matrices. Then, we have designed the read channel system of the ITR PLL with an LCAF model on the FPGA board for experiments. Results have shown that the LCAF improves the tilt margins of 30 gigabytes (GB) recordable BD (BD-R) and 33 GB BD read-only memory (BD-ROM) with a sufficient LMS adaptation stability.
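The constrained coefficient update described in this record (project each LMS gradient step so the taps always satisfy a linear constraint, preventing drift such as a net phase shift) follows the classic linearly constrained LMS scheme of Frost. A generic sketch is below; the particular projection matrices the authors developed for BD read channels are not reproduced here, so the constraint (C, g), step size, and toy channel are illustrative assumptions only.

```python
import numpy as np

def constrained_lms(x, d, C, g, mu=0.01, n_taps=8):
    """Linearly constrained LMS (Frost's algorithm, a sketch).
    Each update is projected so that C.T @ w == g holds exactly,
    suppressing drift of the adaptive tap coefficients."""
    Ci = C @ np.linalg.inv(C.T @ C)
    P = np.eye(n_taps) - Ci @ C.T   # projection onto the constraint null space
    f = Ci @ g                      # minimum-norm vector satisfying C.T @ w = g
    w = f.copy()
    for k in range(n_taps, len(x)):
        xk = x[k - n_taps:k][::-1]  # most recent samples, newest first
        e = d[k] - w @ xk           # a-priori error
        w = P @ (w + mu * e * xk) + f  # gradient step, then re-project
    return w

rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.5, 0.3, 0.1], mode="full")[:2000]  # toy unknown channel
C = np.ones((8, 1))               # illustrative constraint: fixed tap sum (DC gain)
g = np.array([0.9])
w = constrained_lms(x, d, C, g)
```

By construction C.T @ w equals g after every iteration, so the constrained degrees of freedom cannot wander no matter how long the filter adapts.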

  13. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  14. Ice-free Arctic projections under the Paris Agreement

    NASA Astrophysics Data System (ADS)

    Sigmond, Michael; Fyfe, John C.; Swart, Neil C.

    2018-05-01

    Under the Paris Agreement, emissions scenarios are pursued that would stabilize the global mean temperature at 1.5-2.0 °C above pre-industrial levels, but current emission reduction policies are expected to limit warming by 2100 to approximately 3.0 °C. Whether such emissions scenarios would prevent a summer sea-ice-free Arctic is unknown. Here we employ stabilized warming simulations with an Earth System Model to obtain sea-ice projections under stabilized global warming, and correct biases in mean sea-ice coverage by constraining with observations. Although there is some sensitivity to details in the constraining method, the observationally constrained projections suggest that the benefits of going from 2.0 °C to 1.5 °C stabilized warming are substantial; an eightfold decrease in the frequency of ice-free conditions is expected, from once in every five to once in every forty years. Under 3.0 °C global mean warming, however, permanent summer ice-free conditions are likely, which emphasizes the need for nations to increase their commitments to the Paris Agreement.

  15. AirSWOT observations versus hydrodynamic model outputs of water surface elevation and slope in a multichannel river

    NASA Astrophysics Data System (ADS)

    Altenau, Elizabeth H.; Pavelsky, Tamlin M.; Moller, Delwyn; Lion, Christine; Pitcher, Lincoln H.; Allen, George H.; Bates, Paul D.; Calmant, Stéphane; Durand, Michael; Neal, Jeffrey C.; Smith, Laurence C.

    2017-04-01

Anabranching rivers make up a large proportion of the world's major rivers, but quantifying their flow dynamics is challenging due to their complex morphologies. Traditional in situ measurements of water levels collected at gauge stations cannot capture out-of-bank flows and are limited to defined cross sections, which presents an incomplete picture of water fluctuations in multichannel systems. Similarly, current remotely sensed measurements of water surface elevations (WSEs) and slopes are constrained by resolutions and accuracies that limit the visibility of surface waters at global scales. Here, we present new measurements of river WSE and slope along the Tanana River, AK, acquired from AirSWOT, an airborne analogue to the Surface Water and Ocean Topography (SWOT) mission. Additionally, we compare the AirSWOT observations to hydrodynamic model outputs of WSE and slope simulated across the same study area. Results indicate that AirSWOT errors are significantly lower than those of the model outputs. When compared to field measurements, the RMSE for AirSWOT measurements of WSE is 9.0 cm when averaged over 1 km² areas and 1.0 cm/km for slopes along 10 km reaches. Also, AirSWOT can accurately reproduce the spatial variations in slope critical for characterizing reach-scale hydraulics, whereas model outputs of spatial variations in slope are very poor. Combining AirSWOT and future SWOT measurements with hydrodynamic models can result in major improvements in model simulations at local to global scales. Scientists can use AirSWOT measurements to constrain model parameters over long reach distances, improve understanding of the physical processes controlling the spatial distribution of model parameters, and validate models' abilities to reproduce spatial variations in slope. Additionally, AirSWOT and SWOT measurements can be assimilated into lower-complexity models to approach the accuracies achieved by higher-complexity models.

  16. Martian atmospheric gravity waves simulated by a high-resolution general circulation model

    NASA Astrophysics Data System (ADS)

    Kuroda, Takeshi; Yiǧit, Erdal; Medvedev, Alexander S.; Hartogh, Paul

    2016-07-01

Gravity waves (GWs) significantly affect temperature and wind fields in the Martian middle and upper atmosphere. They are also one of the observational targets of the MAVEN mission. We report on the first simulations with a high-resolution general circulation model (GCM) and present global distributions of small-scale GWs in the Martian atmosphere. The simulated GW-induced temperature variances are in good agreement with available radio occultation data in the lower atmosphere between 10 and 30 km. For the northern winter solstice, the model reveals a latitudinal asymmetry with stronger wave generation in the winter hemisphere and two distinctive sources of GWs: mountainous regions and the meandering winter polar jet. Orographic GWs are filtered upon propagating upward, and the mesosphere is primarily dominated by harmonics with faster horizontal phase velocities. Wave fluxes are directed mainly against the local wind. GW dissipation in the upper mesosphere generates a body force per unit mass of tens of m s^{-1} per Martian solar day (sol^{-1}), which tends to close the simulated jets. The results represent a realistic surrogate for missing observations, which can be used for constraining GW parameterizations and validating GCMs.

  17. Episodic fluid flow in the Nankai accretionary complex: Timescale, geochemistry, flow rates, and fluid budget

    USGS Publications Warehouse

    Saffer, D.M.; Bekins, B.A.

    1998-01-01

Down-hole geochemical anomalies encountered in active accretionary systems can be used to constrain the timing, rates, and localization of fluid flow. Here we combine a coupled flow and solute transport model with a kinetic model for smectite dehydration to better understand and quantify fluid flow in the Nankai accretionary complex offshore of Japan. Compaction of sediments and clay dehydration provide fluid sources which drive the model flow system. We explicitly include the consolidation rate of underthrust sediments in our calculations to evaluate the impact that variations in this unknown quantity have on pressure and chloride distribution. Sensitivity analysis of steady state pressure solutions constrains bulk and flow conduit permeabilities. Steady state simulations with 30% smectite in the incoming sedimentary sequence result in minimum chloride concentrations at Site 808 of 550 mM, but measured chlorinity is as low as 447 mM. We simulate the transient effects of hydrofracture or a strain event by assuming an instantaneous permeability increase of 3-4 orders of magnitude along a flow conduit (in this case the décollement), using steady state results as initial conditions. Transient results with an increase in décollement permeability from 10^-16 m^2 to 10^-13 m^2 and 20% smectite reproduce the observed chloride profile at Site 808 after 80-160 kyr. Modeled chloride concentrations are highly sensitive to the consolidation rate of underthrust sediments, such that rapid compaction of underthrust material leads to increased freshening. Pressures within the décollement during transient simulations rise rapidly to a significant fraction of lithostatic and remain high for at least 160 kyr, providing a mechanism for maintaining high permeability. Flow rates at the deformation front for transient simulations are in good agreement with direct measurements, but steady state flow rates are 2-3 orders of magnitude smaller than observed. 
Fluid budget calculations indicate that nearly 71% of the incoming water in the sediments leaves the accretionary wedge via diffuse flow out the seafloor, 0-5% escapes by focused flow along the décollement, and roughly 1% is subducted. Copyright 1998 by the American Geophysical Union.

  18. Secular trends and climate drift in coupled ocean-atmosphere general circulation models

    NASA Astrophysics Data System (ADS)

    Covey, Curt; Gleckler, Peter J.; Phillips, Thomas J.; Bader, David C.

    2006-02-01

    Coupled ocean-atmosphere general circulation models (coupled GCMs) with interactive sea ice are the primary tool for investigating possible future global warming and numerous other issues in climate science. A long-standing problem with such models is that when different components of the physical climate system are linked together, the simulated climate can drift away from observation unless constrained by ad hoc adjustments to interface fluxes. However, 11 modern coupled GCMs, including three that do not employ flux adjustments, behave much better in this respect than the older generation of models. Surface temperature trends in control run simulations (with external climate forcing such as solar brightness and atmospheric carbon dioxide held constant) are small compared with observed trends, which include 20th century climate change due to both anthropogenic and natural factors. Sea ice changes in the models are dominated by interannual variations. Deep ocean temperature and salinity trends are small enough for model control runs to extend over 1000 simulated years or more, but trends in some regions, most notably the Arctic, differ substantially among the models and may be problematic. Methods used to initialize coupled GCMs can mitigate climate drift but cannot eliminate it. Lengthy "spin-ups" of models, made possible by increasing computer power, are one reason for the improvements this paper documents.

  19. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback: The PCN Incubation-Panarctic Thermal (PInc-PanTher) Scaling Approach

    NASA Astrophysics Data System (ADS)

    Koven, C. D.; Schuur, E.; Schaedel, C.; Bohn, T. J.; Burke, E.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A.; Marchenko, S. S.; McGuire, A. D.; Natali, S.; Nicolsky, D.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M. R.

    2015-12-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a 3-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100.
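The core accounting of the PInc-PanTher approach (no decomposition while frozen; once thawed, fixed first-order decay trajectories per pool as a function of soil temperature) can be sketched as follows. The pool sizes, reference rates, Q10 value, and temperature trajectory below are hypothetical placeholders, not the fitted per-horizon parameters of the actual scaling approach.

```python
import numpy as np

def thawed_c_remaining(c_pools, k_ref, temps_c, q10=2.5, t_ref=5.0, dt=1.0):
    """First-order multi-pool decomposition after thaw (sketch).
    c_pools: initial C stocks per pool (e.g. fast, slow, passive)
    k_ref:   decay rates (yr^-1) at reference temperature t_ref
    temps_c: annual soil temperatures (deg C); frozen years (<= 0) lose no C
    Returns total remaining C after each year."""
    c = np.array(c_pools, dtype=float)
    trajectory = []
    for temp in temps_c:
        if temp > 0.0:  # no decomposition while frozen
            k = np.array(k_ref) * q10 ** ((temp - t_ref) / 10.0)
            c = c * np.exp(-k * dt)  # pool-specific first-order decay
        trajectory.append(c.sum())
    return np.array(trajectory)

# Hypothetical 3-pool stocks and a simple warming trajectory (illustrative)
remaining = thawed_c_remaining([5.0, 20.0, 50.0], [0.5, 0.05, 0.001],
                               temps_c=np.linspace(-1.0, 8.0, 90))
```

The difference between the initial and final totals, summed over the panarctic soil maps, is the kind of quantity the approach aggregates into a committed carbon loss under each warming scenario.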

  20. Double quick, double click reversible peptide “stapling”

    PubMed Central

    Grison, Claire M.; Burslem, George M.; Miles, Jennifer A.; Pilsl, Ludwig K. A.; Yeo, David J.; Imani, Zeynab; Warriner, Stuart L.; Webb, Michael E.

    2017-01-01

The development of constrained peptides for inhibition of protein–protein interactions is an emerging strategy in chemical biology and drug discovery. This manuscript introduces a versatile, rapid and reversible approach to constrain peptides in a bioactive helical conformation using BID and RNase S peptides as models. Dibromomaleimide is used to constrain BID and RNase S peptide sequence variants bearing cysteine (Cys) or homocysteine (hCys) amino acids spaced at i and i + 4 positions by double substitution. The constraint can be readily removed by displacement of the maleimide using excess thiol. This new constraining methodology results in enhanced α-helical conformation (BID and RNase S peptide) as demonstrated by circular dichroism and molecular dynamics simulations, resistance to proteolysis (BID) as demonstrated by trypsin proteolysis experiments and retained or enhanced potency of inhibition for Bcl-2 family protein–protein interactions (BID), or greater capability to restore the hydrolytic activity of the RNase S protein (RNase S peptide). Finally, use of a dibromomaleimide functionalized with an alkyne permits further divergent functionalization through alkyne–azide cycloaddition chemistry on the constrained peptide with fluorescein, oligoethylene glycol or biotin groups to facilitate biophysical and cellular analyses. Hence this methodology may extend the scope and accessibility of peptide stapling. PMID:28970902

  1. THE IMPACT OF SURFACE TEMPERATURE INHOMOGENEITIES ON QUIESCENT NEUTRON STAR RADIUS MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elshamouty, K. G.; Heinke, C. O.; Morsink, S. M.

Fitting the thermal X-ray spectra of neutron stars (NSs) in quiescent X-ray binaries can constrain the masses and radii of NSs. The effect of undetected hot spots on the spectrum, and thus on the inferred NS mass and radius, has not yet been explored for appropriate atmospheres and spectra. A hot spot would harden the observed spectrum, so that spectral modeling tends to infer radii that are too small. However, a hot spot may also produce detectable pulsations. We simulated the effects of a hot spot on the pulsed fraction and spectrum of the quiescent NSs X5 and X7 in the globular cluster 47 Tucanae, using appropriate spectra and beaming for hydrogen atmosphere models, incorporating special and general relativistic effects, and sampling a range of system angles. We searched for pulsations in archival Chandra HRC-S observations of X5 and X7, placing 90% confidence upper limits on their pulsed fractions below 16%. We use these pulsation limits to constrain the temperature differential of any hot spots, and to then constrain the effects of possible hot spots on the X-ray spectrum and the inferred radius from spectral fitting. We find that hot spots below our pulsation limit could bias the spectroscopically inferred radius downward by up to 28%. For Cen X-4 (which has deeper published pulsation searches), an undetected hot spot could bias its inferred radius downward by up to 10%. Improving constraints on pulsations from quiescent LMXBs may be essential for progress in constraining their radii.

  2. A new space-time characterization of Northern Hemisphere drought in model simulations of the past and future as compared to the paleoclimate record

    NASA Astrophysics Data System (ADS)

    Coats, S.; Smerdon, J. E.; Stevenson, S.; Fasullo, J.; Otto-Bliesner, B. L.

    2017-12-01

The observational record, which provides only limited sampling of past climate variability, has made it difficult to quantitatively analyze the complex spatio-temporal character of drought. To provide a more complete characterization of drought, machine learning based methods that identify drought in three-dimensional space-time are applied to climate model simulations of the last millennium and the future, as well as to tree-ring based reconstructions of hydroclimate over the Northern Hemisphere extratropics. A focus is given to the most persistent and severe droughts of the past 1000 years. Analyzing reconstructions and simulations in this context allows for a validation of the spatio-temporal character of persistent and severe drought in climate model simulations. Furthermore, the long records provided by the reconstructions and simulations allow for sufficient sampling to constrain projected changes to the spatio-temporal character of these features using the reconstructions. Along these lines, climate models suggest that there will be large increases in the persistence and severity of droughts over the coming century, but little change in their spatial extent. These models, however, exhibit biases in the spatio-temporal character of persistent and severe drought over parts of the Northern Hemisphere, which may undermine their usefulness for future projections. Despite these limitations, and in contrast to previous claims, there are no systematic changes in the character of persistent and severe droughts in simulations of the historical interval. This suggests that climate models are not systematically overestimating the hydroclimate response to anthropogenic forcing over this period, with critical implications for confidence in hydroclimate projections.

  3. Constraining the dynamics of the water budget at high spatial resolution in the world's water towers using models and remote sensing data; Snake River Basin, USA

    NASA Astrophysics Data System (ADS)

    Watson, K. A.; Masarik, M. T.; Flores, A. N.

    2016-12-01

    Mountainous, snow-dominated basins are often referred to as the water towers of the world because they store precipitation in seasonal snowpacks, which gradually melt and provide water supplies to downstream communities. Yet significant uncertainties remain in terms of quantifying the stores and fluxes of water in these regions as well as the associated energy exchanges. Constraining these stores and fluxes is crucial for advancing process understanding and managing these water resources in a changing climate. Remote sensing data are particularly important to these efforts due to the remoteness of these landscapes and the high spatial variability in water budget components. We have developed a high resolution regional climate dataset extending from 1986 to the present for the Snake River Basin in the northwestern USA. The Snake River Basin is the largest tributary of the Columbia River by volume and a critically important basin for regional economies and communities. The core of the dataset was developed using a regional climate model forced by reanalysis data. Specifically, the Weather Research and Forecasting (WRF) model was used to dynamically downscale the North American Regional Reanalysis (NARR) over the region at 3 km horizontal resolution for the period of interest. A suite of satellite remote sensing products provides independent, albeit uncertain, constraints on a number of components of the water and energy budgets for the region across a range of spatial and temporal scales. For example, GRACE data are used to constrain basinwide terrestrial water storage and MODIS products are used to constrain the spatial and temporal evolution of evapotranspiration and snow cover. The joint use of both models and remote sensing products allows for both a better understanding of water cycle dynamics and associated hydrometeorologic processes, and the identification of limitations in both the remote sensing products and the regional climate simulations.

  4. Constraining properties of disintegrating exoplanets

    NASA Astrophysics Data System (ADS)

    Veras, D.; Carter, P. J.; Leinhardt, Z. M.; Gänsicke, B. T.

    2017-09-01

    Evaporating and disintegrating planets provide unique insights into the chemical makeup and physical properties of exoplanetary bodies. The striking variability, depth (~10-60%) and shape of the photometric transit curves due to the disintegrating minor planet orbiting the white dwarf WD 1145+017 have galvanised the post-main-sequence exoplanetary science community. We have performed the first tidal disruption simulations of this planetary object, and have succeeded in constraining its mass, density, eccentricity and physical nature. We illustrate how our simulations can bound these properties, and be used in the future for other exoplanetary systems.

  5. Refining multi-model projections of temperature extremes by evaluation against land-atmosphere coupling diagnostics

    NASA Astrophysics Data System (ADS)

    Sippel, Sebastian; Zscheischler, Jakob; Mahecha, Miguel D.; Orth, Rene; Reichstein, Markus; Vogel, Martha; Seneviratne, Sonia I.

    2017-05-01

    The Earth's land surface and the atmosphere are strongly interlinked through the exchange of energy and matter. This coupled behaviour causes various land-atmosphere feedbacks, and an insufficient understanding of these feedbacks contributes to uncertain global climate model projections. For example, a crucial role of the land surface in exacerbating summer heat waves in midlatitude regions has been identified empirically for high-impact heat waves, but individual climate models differ widely in their respective representation of land-atmosphere coupling. Here, we compile an ensemble of 54 combinations of observations-based temperature (T) and evapotranspiration (ET) benchmarking datasets and investigate coincidences of T anomalies with ET anomalies as a proxy for land-atmosphere interactions during periods of anomalously warm temperatures. First, we demonstrate that a large fraction of state-of-the-art climate models from the Coupled Model Intercomparison Project (CMIP5) archive produces systematically too frequent coincidences of high T anomalies with negative ET anomalies in midlatitude regions during the warm season and in several tropical regions year-round. These coincidences (high T, low ET) are closely related to the representation of temperature variability and extremes across the multi-model ensemble. Second, we derive a land-coupling constraint based on the spread of the T-ET datasets and consequently retain only a subset of CMIP5 models that produce a land-coupling behaviour that is compatible with these benchmark estimates. The constrained multi-model simulations exhibit more realistic temperature extremes of reduced magnitude in present climate in regions where models show substantial spread in T-ET coupling, i.e. biases in the model ensemble are consistently reduced. Also the multi-model simulations for the coming decades display decreased absolute temperature extremes in the constrained ensemble. 
On the other hand, the differences between projected and present-day climate extremes are affected to a lesser extent by the applied constraint, i.e. projected changes are reduced locally by around 0.5 to 1 °C, and this remains a local effect in regions that are highly sensitive to land-atmosphere coupling. In summary, our approach offers a physically consistent, diagnostic-based avenue to evaluate multi-model ensembles and subsequently reduce model biases in simulated and projected extreme temperatures.
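
The coincidence diagnostic described in this abstract, how often warm-temperature anomalies co-occur with negative evapotranspiration anomalies, can be sketched in a few lines. The 90th-percentile threshold and the synthetic data below are illustrative assumptions, not the authors' exact procedure:

```python
import numpy as np

def coincidence_rate(t_anom, et_anom, q=0.9):
    """Fraction of hot months (T anomaly above the q-quantile) that coincide
    with negative ET anomalies -- a proxy for soil-moisture-limited
    land-atmosphere coupling. The q=0.9 threshold is an illustrative choice."""
    hot = t_anom > np.quantile(t_anom, q)
    return float(np.mean(et_anom[hot] < 0.0)) if hot.any() else 0.0

# Synthetic, strongly coupled series: hot anomalies suppress ET
rng = np.random.default_rng(0)
t = rng.normal(size=1000)
et = -0.8 * t + rng.normal(scale=0.5, size=1000)
print(coincidence_rate(t, et))  # close to 1 for strong negative coupling
```

A model whose coincidence rate sits far above the observational ensemble spread would, in the paper's approach, be excluded from the constrained ensemble.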

  6. The electrochemistry of carbon steel in simulated concrete pore water in boom clay repository environments

    NASA Astrophysics Data System (ADS)

    MacDonald, D. D.; Saleh, A.; Lee, S. K.; Azizi, O.; Rosas-Camacho, O.; Al-Marzooqi, A.; Taylor, M.

    2011-04-01

    The prediction of corrosion damage to canisters over experimentally inaccessible times is vitally important in assessing various concepts for the disposal of high-level nuclear waste. Such predictions can only be made using deterministic models, whose outputs are constrained by time-invariant natural laws. In this paper, we describe the measurement of experimental electrochemical data that will allow the prediction of damage to the carbon steel overpack of the supercontainer in Belgium's proposed Boom Clay repository by using the Point Defect Model (PDM). PDM parameter values are obtained by optimizing the model on experimental, wide-band electrochemical impedance spectroscopy data.

  7. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    NASA Astrophysics Data System (ADS)

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented, independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the associated uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R2), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matters of roots, storages, stems and leaves. Best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. 
The shape parameter of the retention curve n was highly constrained, whilst other parameters of the retention curve showed a large equifinality. The root and storage dry matter observations were predicted with an NSE of 0.94, a low bias of 58.2 kg ha^-1 and a high R2 of 0.98. Dry matters of stems and leaves were predicted with lower, but still high, accuracy (NSE = 0.79, bias = 221.7 kg ha^-1, R2 = 0.87). We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the plant growth model were the radiation use efficiency and the base temperature. Cross-validation helped to identify deficits in the model structure, pointing out the need to include agricultural management options in the coupled model.
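
The GLUE procedure described in this abstract, uniform random sampling of the parameter space with runs scored against observations by an objective function such as NSE, can be sketched generically. The toy two-parameter linear model, bounds, and behavioural threshold below are hypothetical placeholders, not the coupled CMF-PMF setup:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 = perfect, <= 0 = no better than mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue(model, bounds, obs, n_runs=2000, threshold=0.5, seed=42):
    """Minimal GLUE loop: sample parameter sets uniformly from their ranges,
    run the model, and keep the 'behavioural' sets whose NSE clears the
    threshold. Threshold and run count are illustrative choices."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    params = rng.uniform(lo, hi, size=(n_runs, len(bounds)))
    scores = np.array([nse(obs, model(p)) for p in params])
    keep = scores >= threshold
    return params[keep], scores[keep]

# Hypothetical 2-parameter model y = a*x + b, "observed" with a=2, b=1
x = np.linspace(0.0, 1.0, 50)
obs = 2.0 * x + 1.0
behavioural, scores = glue(lambda p: p[0] * x + p[1], [(0, 4), (0, 2)], obs)
```

The spread of the retained behavioural sets then quantifies parameter uncertainty; a tightly clustered parameter (like the retention-curve shape parameter n in the study) is well constrained, while a widely scattered one exhibits equifinality.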

  8. Monte Carlo based calibration and uncertainty analysis of a coupled plant growth and hydrological model

    NASA Astrophysics Data System (ADS)

    Houska, T.; Multsch, S.; Kraft, P.; Frede, H.-G.; Breuer, L.

    2013-12-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented, independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the associated uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R2), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matters of roots, storages, stems and leaves. Best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. 
The shape parameter of the retention curve n was highly constrained, whilst other parameters of the retention curve showed a large equifinality. The root and storage dry matter observations were predicted with an NSE of 0.94, a low bias of -58.2 kg ha^-1 and a high R2 of 0.98. Dry matters of stems and leaves were predicted with lower, but still high, accuracy (NSE = 0.79, bias = 221.7 kg ha^-1, R2 = 0.87). We attribute this slightly poorer model performance to missing leaf senescence, which is currently not implemented in PMF. The most constrained parameters for the plant growth model were the radiation use efficiency and the base temperature. Cross-validation helped to identify deficits in the model structure, pointing out the need to include agricultural management options in the coupled model.

  9. Non Linear Programming (NLP) Formulation for Quantitative Modeling of Protein Signal Transduction Pathways

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to the lack of data relative to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239
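
Constrained fuzzy logic replaces Boolean gates with continuous transfer functions so that signal strength propagates quantitatively. A minimal sketch using a Hill-type transfer function and min/max gates; the parameter values and the two-input toy pathway are hypothetical illustrations, not the paper's fitted formulation:

```python
def hill(x, k=0.5, n=3.0):
    """Hill-type transfer function mapping a normalized input signal in
    [0, 1] to a normalized output: the building block that turns a discrete
    logic gate into a quantitative one. k and n are illustrative values."""
    return x ** n / (k ** n + x ** n)

def and_gate(a, b):
    """Fuzzy AND: both inputs must be active (min)."""
    return min(a, b)

def or_gate(a, b):
    """Fuzzy OR: either input suffices (max)."""
    return max(a, b)

# Signal propagation through a hypothetical two-input node
stim_a = hill(0.9)   # strong stimulus
stim_b = hill(0.2)   # weak stimulus
downstream = and_gate(stim_a, stim_b)
```

Fitting the k and n parameters of every transfer function to phosphoproteomic data is the optimization task the abstract recasts as a regular nonlinear program.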

  10. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    PubMed

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, yielding quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to the lack of data relative to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem, and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.

  11. Constraining the intermediate-mass range of the Initial Mass Function using Galactic Cepheids

    NASA Astrophysics Data System (ADS)

    Mor, R.; Figueras, F.; Robin, A. C.; Lemasle, B.

    2015-05-01

    Aims. To use the Besançon Galaxy Model (Robin et al., 2003) and the most complete observational catalogues of Galactic Cepheids to constrain the intermediate-mass range of the Initial Mass Function (IMF) in the Milky Way's thin disc. Methods. We optimized the flexibility of the new Besançon Galaxy Model (Czekaj et al., 2014) to simulate magnitude- and distance-complete samples of young intermediate-mass stars assuming different IMFs and Star Formation Histories (SFH). Comparing the simulated synthetic catalogues with the observational data, we studied which IMF best reproduces the observed number of Cepheids in the Galactic thin disc. We analysed three different IMFs: (1) Salpeter, (2) Kroupa-Haywood and (3) Haywood-Robin, all of them with a decreasing SFH from Aumer and Binney (2009). Results. For the first time, the Besançon Galaxy Model is used to characterize the Galactic Cepheids. We find that in most cases the Salpeter IMF overestimates the number of observed Cepheids and the Haywood-Robin IMF underestimates it. The Kroupa-Haywood IMF, with a slope α=3.2, best reproduces the observed Cepheids. From the comparison of the predicted and observed numbers of Cepheids up to V=12, we note that the model might underestimate the scale height of the young population. The effects of varying the model ingredients remain to be quantified. Conclusions. In agreement with Kroupa and Weidner (2003), our study shows that the Salpeter IMF (α=2.35) overestimates the star counts in the range 4 ≤ M/M_{⊙} ≤ 10 and supports the idea that the IMF slope for intermediate-mass and massive stars is steeper than Salpeter.
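
Why a steeper IMF slope yields fewer intermediate-mass stars follows directly from the power-law integral; a sketch comparing the two slopes from the abstract (normalization is arbitrary here, so only the ratio between slopes is meaningful, unlike the full star-count machinery of the Besançon model):

```python
def imf_count(alpha, m_lo=4.0, m_hi=10.0):
    """Relative number of stars in [m_lo, m_hi] solar masses for a
    power-law IMF dN/dm ∝ m**(-alpha), from the analytic integral.
    Arbitrary normalization: compare slopes only via ratios."""
    return (m_hi ** (1.0 - alpha) - m_lo ** (1.0 - alpha)) / (1.0 - alpha)

salpeter = imf_count(2.35)        # Salpeter slope
kroupa_haywood = imf_count(3.2)   # steeper slope favoured by the study
print(salpeter / kroupa_haywood)  # steeper IMF -> several times fewer stars
```

Because Cepheid progenitors sit in exactly this 4-10 solar-mass window, the observed Cepheid counts discriminate between the two slopes.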

  12. Modeling the evolution of the Laurentide Ice Sheet from MIS 3 to the Last Glacial Maximum: an approach using sea level modeling and ice flow dynamics

    NASA Astrophysics Data System (ADS)

    Weisenberg, J.; Pico, T.; Birch, L.; Mitrovica, J. X.

    2017-12-01

    The history of the Laurentide Ice Sheet since the Last Glacial Maximum (~26 ka; LGM) is constrained by geological evidence of ice margin retreat in addition to relative sea-level (RSL) records in both the near and far field. Nonetheless, few observations exist constraining the ice sheet's extent across the glacial build-up phase preceding the LGM. Recent work correcting RSL records along the U.S. mid-Atlantic dated to mid-MIS 3 (50-35 ka) for glacial-isostatic adjustment (GIA) infers that the Laurentide Ice Sheet grew more than threefold in the 15 ky leading into the LGM. Here we test the plausibility of such a late and extremely rapid glaciation by driving a high-resolution ice sheet model, based on a nonlinear diffusion equation for the ice thickness. We initialize this model at 44 ka with the mid-MIS 3 ice sheet configuration proposed by Pico et al. (2017), GIA-corrected basal topography, and a mass balance representative of mid-MIS 3 conditions. These simulations predict rapid growth of the eastern Laurentide Ice Sheet, with rates consistent with achieving LGM ice volumes within 15 ky. We use these simulations to refine the initial ice configuration and present an improved and higher-resolution model for North American ice cover during mid-MIS 3. In addition, we show that assumptions about ice loads during the glacial phase, and the associated reconstructions of GIA-corrected basal topography, produce a bias that can underpredict ice growth rates in the late stages of the glaciation, which has important consequences for our understanding of the speed limit for ice growth on glacial timescales.
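
The "nonlinear diffusion equation for the ice thickness" mentioned above can be sketched in one dimension with a shallow-ice-style diffusivity. The lumped flow constant gamma, the flat bed, the grid and the time step are all illustrative assumptions, not the study's configuration:

```python
import numpy as np

def shallow_ice_step(H, dx, dt, gamma=1.0, n=3):
    """One explicit step of the 1-D ice-thickness equation
    dH/dt = d/dx( D dH/dx ), with nonlinear diffusivity
    D = gamma * H**(n+2) * |dH/dx|**(n-1) (flat bed, no mass balance).
    gamma lumps the Glen flow-law constants; values here are illustrative."""
    s = H                                        # surface = thickness on a flat bed
    dsdx = (s[1:] - s[:-1]) / dx                 # gradient on staggered points
    H_mid = 0.5 * (H[1:] + H[:-1])
    D = gamma * H_mid ** (n + 2) * np.abs(dsdx) ** (n - 1)
    flux = -D * dsdx
    Hnew = H.copy()
    Hnew[1:-1] -= dt * (flux[1:] - flux[:-1]) / dx   # flux divergence
    return np.maximum(Hnew, 0.0)                     # ice thickness stays non-negative

# Hypothetical triangular ice dome relaxing under its own flow
x = np.linspace(-1.0, 1.0, 41)
H0 = np.maximum(0.0, 1.0 - np.abs(x))
H1 = shallow_ice_step(H0, dx=x[1] - x[0], dt=1e-4)
```

In the actual model a surface mass-balance term is added to the right-hand side, which is what drives the rapid growth tested in the abstract.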

  13. Simulating Ice Dynamics in the Amundsen Sea Sector

    NASA Astrophysics Data System (ADS)

    Schwans, E.; Parizek, B. R.; Morlighem, M.; Alley, R. B.; Pollard, D.; Walker, R. T.; Lin, P.; St-Laurent, P.; LaBirt, T.; Seroussi, H. L.

    2017-12-01

    Thwaites and Pine Island Glaciers (TG; PIG) exhibit patterns of dynamic retreat forced from their floating margins, and could act as gateways for destabilization of deep marine basins in the West Antarctic Ice Sheet (WAIS). Poorly constrained basal conditions can cause model predictions to diverge. Thus, there is a need for efficient simulations that account for shearing within the ice column, and include adequate basal sliding and ice-shelf melting parameterizations. To this end, UCI/NASA JPL's Ice Sheet System Model (ISSM) with coupled SSA/higher-order physics is used in the Amundsen Sea Embayment (ASE) to examine threshold behavior of TG and PIG, highlighting areas particularly vulnerable to retreat from oceanic warming and ice-shelf removal. These moving-front experiments will aid in targeting critical areas for additional data collection in ASE as well as for weighting accuracy in further melt parameterization development. Furthermore, a sub-shelf melt parameterization, resulting from Regional Ocean Modeling System (ROMS; St-Laurent et al., 2015) and coupled ISSM-Massachusetts Institute of Technology general circulation model (MITgcm; Seroussi et al., 2017) output, is incorporated and initially tested in ISSM. Data-guided experiments include variable basal conditions and ice hardness, and are also forced with constant modern climate in ISSM, providing valuable insight into i) effects of different basal friction parameterizations on ice dynamics, illustrating the importance of constraining the variable bed character beneath TG and PIG; ii) the impact of including vertical shear in ice flow models of outlet glaciers, confirming its role in capturing complex feedbacks proximal to the grounding zone; and iii) ASE's sensitivity to sub-shelf melt and ice-front retreat, possible thresholds, and how these affect ice-flow evolution.

  14. Construction of a 3D model of nattokinase, a novel fibrinolytic enzyme from Bacillus natto. A novel nucleophilic catalytic mechanism for nattokinase.

    PubMed

    Zheng, Zhong-liang; Zuo, Zhen-yu; Liu, Zhi-gang; Tsai, Keng-chang; Liu, Ai-fu; Zou, Guo-lin

    2005-01-01

    A three-dimensional structural model of nattokinase (NK) from Bacillus natto was constructed by homology modeling. High-resolution X-ray structures of Subtilisin BPN' (SB), Subtilisin Carlsberg (SC), Subtilisin E (SE) and Subtilisin Savinase (SS), four proteins with sequence, structural and functional homology, were used as templates. Initial models of NK were built with MODELLER and analyzed with the PROCHECK programs. The best-quality model was chosen for further refinement by constrained molecular dynamics simulations. The overall quality of the refined model was evaluated: the refined model NKC1 was analyzed by different protein analysis programs, including PROCHECK for the evaluation of Ramachandran plot quality, PROSA for testing interaction energies and WHATIF for the calculation of packing quality. This structure was found to be satisfactory and also stable at room temperature, as demonstrated by a 300 ps unconstrained molecular dynamics (MD) simulation. Further docking analysis suggested a novel nucleophilic catalytic mechanism for NK, induced by nucleophilic attack from the hydroxyl-rich catalytic environment and the positioning of S221.

  15. Simulating oil droplet dispersal from the Deepwater Horizon spill with a Lagrangian approach

    USGS Publications Warehouse

    North, Elizabeth W.; Schlag, Zachary; Adams, E. Eric; Sherwood, Christopher R.; He, Ruoying; Hyun, Hoon; Socolofsky, Scott A.

    2011-01-01

    An analytical multiphase plume model, combined with time-varying flow and hydrographic fields generated by the 3-D South Atlantic Bight and Gulf of Mexico (SABGOM) hydrodynamic model, was used as input to a Lagrangian transport model (LTRANS) to simulate transport of oil droplets dispersed at depth from the recent Deepwater Horizon MC 252 oil spill. The plume model predicts a stratification-dominated near field, in which small oil droplets detrain from the central plume containing faster rising large oil droplets and gas bubbles and become trapped by density stratification. Simulated intrusion (trap) heights of ∼310-370 m agree well with the midrange of conductivity-temperature-depth observations, though the simulated variation in trap height was lower than observed, presumably in part due to unresolved variability in source composition (percentage oil versus gas) and location (multiple leaks during the first half of the spill). Simulated droplet trajectories from the SABGOM-LTRANS modeling system showed that droplets with diameters between 10 and 50 μm formed a distinct subsurface plume, which was transported horizontally and remained in the subsurface for >1 month. In contrast, droplets with diameters ≥90 μm rose rapidly to the surface. Simulated trajectories of droplets ≤50 μm in diameter were found to be consistent with field observations of a southwest-tending subsurface plume in late June 2010 reported by Camilli et al. [2010]. Model results suggest that the subsurface plume looped around to the east, with potential subsurface oil transport to the northeast and southeast. Ongoing work is focusing on adding degradation processes to the model to constrain droplet dispersal.
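
The size-dependent fates of the droplets (10-50 μm staying at depth for over a month, ≥90 μm surfacing quickly) can be rationalized with a simple Stokes-law rise-velocity estimate. The density and viscosity values below are generic deep-water assumptions, not values taken from the study:

```python
def stokes_rise_velocity(d, rho_oil=858.0, rho_water=1027.0, mu=1.4e-3, g=9.81):
    """Terminal rise velocity (m/s) of a small oil droplet of diameter d (m)
    from Stokes' law for creeping flow. Fluid properties are illustrative
    deep-Gulf values and only order-of-magnitude accurate."""
    return g * (rho_water - rho_oil) * d ** 2 / (18.0 * mu)

# Rise velocity scales with diameter squared, hence the sharp size cutoff
for d_um in (10, 50, 90):
    v = stokes_rise_velocity(d_um * 1e-6)
    print(f"{d_um} um droplet: {v:.2e} m/s")
```

Because velocity grows with the square of the diameter, a 90 μm droplet rises almost two orders of magnitude faster than a 10 μm droplet, consistent with the separation between the subsurface plume and the surfacing oil described above.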

  16. Redshift-space distortions with the halo occupation distribution - II. Analytic model

    NASA Astrophysics Data System (ADS)

    Tinker, Jeremy L.

    2007-01-01

    We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. 
We demonstrate the ability of the model to separately constrain Ωm, σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.

  17. One-Dimensional Fast Transient Simulator for Modeling Cadmium Sulfide/Cadmium Telluride Solar Cells

    NASA Astrophysics Data System (ADS)

    Guo, Da

    Solar energy, including solar heating, solar architecture, solar thermal electricity and solar photovoltaics, is one of the primary alternative energy sources to fossil fuels. Significant research has been conducted on improving solar cell efficiency, and simulation of various solar cell structures and materials provides a deeper understanding of device operation and of ways to improve efficiency. Over the last two decades, polycrystalline thin-film cadmium sulfide/cadmium telluride (CdS/CdTe) solar cells fabricated on glass substrates have been considered one of the most promising candidates among photovoltaic technologies, for their comparable efficiency and lower costs relative to traditional silicon-based solar cells. In this work, a fast one-dimensional time-dependent/steady-state drift-diffusion simulator for modeling solar cells, accelerated by an adaptive non-uniform mesh and automatic time-step control, has been developed and used to simulate a CdS/CdTe solar cell. These models are used to reproduce transients of carrier transport in response to step-function signals of different bias and varied light intensity. The time-step control models also help convergence in steady-state simulations where constrained material constants, such as carrier lifetimes on the order of nanoseconds and carrier mobilities on the order of 100 cm^2/Vs, must be applied.
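
The automatic time-step control mentioned above can be sketched generically: shrink the step when the per-step solution change is too large, grow it when the solution is smooth. The explicit 1-D diffusion problem and all tolerances below are illustrative stand-ins; the thesis solves the full coupled drift-diffusion system:

```python
import numpy as np

def adaptive_step_diffusion(u, D, dx, t_end, dt0=1e-3, tol=1e-3):
    """Explicit 1-D diffusion with a crude adaptive time step: retry with a
    smaller dt when the update exceeds tol, grow dt when the update is tiny.
    A generic sketch of automatic time-step control, with illustrative
    parameter values."""
    t, dt = 0.0, dt0
    dt_max = 0.4 * dx * dx / D              # explicit stability limit
    while t < t_end:
        dt = min(dt, dt_max, t_end - t)
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        du = D * dt * lap
        step = np.max(np.abs(du))
        if step > tol and dt > 1e-12:       # change too large: halve dt, retry
            dt *= 0.5
            continue
        u = u + du
        t += dt
        if step < 0.1 * tol:                # solution very smooth: accelerate
            dt *= 1.5
    return u

# Hypothetical initial condition: a unit spike diffusing outward
u0 = np.zeros(101)
u0[50] = 1.0
u = adaptive_step_diffusion(u0, D=1.0, dx=0.01, t_end=1e-3)
```

The same accept/reject logic, applied per Newton iteration of the drift-diffusion equations, is what lets such simulators take nanosecond-scale steps during fast transients and much larger steps near steady state.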

  18. Lyapunov optimal feedback control of a nonlinear inverted pendulum

    NASA Technical Reports Server (NTRS)

    Grantham, W. J.; Anderson, M. J.

    1989-01-01

    Lyapunov optimal feedback control is applied to a nonlinear inverted pendulum in which the control torque is constrained to be less than the nonlinear gravity torque in the model. This necessitates a control algorithm which 'rocks' the pendulum out of its potential wells in order to stabilize it at a unique vertical position. Simulation results indicate that a preliminary Lyapunov feedback controller can successfully overcome the nonlinearity and bring almost all trajectories to the target.

  19. Mesh-Sequenced Realizations for Evaluation of Subgrid-Scale Models for Turbulent Combustion (Short Term Innovative Research Program)

    DTIC Science & Technology

    2018-02-15

    ...conservation equations. The closure problem hinges on the evaluation of the filtered chemical production rates. In MRA/MSR, simultaneous large-eddy simulations of a reactive flow are performed at different mesh resolution levels. The solutions at each coarser mesh level are constrained by the filtered ...include the replacement of chemical production rates with those filtered from the underlying fine mesh and the construction of 'exact' forms for...

  20. Observational Signatures of Mass-loading in Jets Launched by Rotating Black Holes

    NASA Astrophysics Data System (ADS)

    O’Riordan, Michael; Pe’er, Asaf; McKinney, Jonathan C.

    2018-01-01

    It is widely believed that relativistic jets in X-ray binaries (XRBs) and active galactic nuclei are powered by the rotational energy of black holes. This idea is supported by general-relativistic magnetohydrodynamic (GRMHD) simulations of accreting black holes, which demonstrate efficient energy extraction via the Blandford–Znajek mechanism. However, due to uncertainties in the physics of mass loading, and the failure of GRMHD numerical schemes in the highly magnetized funnel region, the matter content of the jet remains poorly constrained. We investigate the observational signatures of mass loading in the funnel by performing general-relativistic radiative transfer calculations on a range of 3D GRMHD simulations of accreting black holes. We find significant observational differences between cases in which the funnel is empty and cases in which the funnel is filled with plasma, particularly in the optical and X-ray bands. In the context of Sgr A*, current spectral data constrain the jet filling only if the black hole is rapidly rotating, with a ≳ 0.9. In this case, the limits on the infrared flux disfavor a strong contribution from material in the funnel. We comment on the implications of our models for interpreting future Event Horizon Telescope observations. We also scale our models to stellar-mass black holes and discuss their applicability to the low-luminosity state in XRBs.

  1. Warming caused by cumulative carbon emissions towards the trillionth tonne.

    PubMed

    Allen, Myles R; Frame, David J; Huntingford, Chris; Jones, Chris D; Lowe, Jason A; Meinshausen, Malte; Meinshausen, Nicolai

    2009-04-30

    Global efforts to mitigate climate change are guided by projections of future temperatures. But the eventual equilibrium global mean temperature associated with a given stabilization level of atmospheric greenhouse gas concentrations remains uncertain, complicating the setting of stabilization targets to avoid potentially dangerous levels of global warming. Similar problems apply to the carbon cycle: observations currently provide only a weak constraint on the response to future emissions. Here we use ensemble simulations of simple climate-carbon-cycle models constrained by observations and projections from more comprehensive models to simulate the temperature response to a broad range of carbon dioxide emission pathways. We find that the peak warming caused by a given cumulative carbon dioxide emission is better constrained than the warming response to a stabilization scenario. Furthermore, the relationship between cumulative emissions and peak warming is remarkably insensitive to the emission pathway (timing of emissions or peak emission rate). Hence policy targets based on limiting cumulative emissions of carbon dioxide are likely to be more robust to scientific uncertainty than emission-rate or concentration targets. Total anthropogenic emissions of one trillion tonnes of carbon (3.67 trillion tonnes of CO2), about half of which has already been emitted since industrialization began, result in a most likely peak carbon-dioxide-induced warming of 2 degrees C above pre-industrial temperatures, with a 5-95% confidence interval of 1.3-3.9 degrees C.
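    The near-linear scaling between cumulative emissions and peak warming described above can be sketched as a simple proportionality. The central value and 5-95% range per trillion tonnes of carbon (TtC) are taken directly from the abstract; treating the response as exactly linear is the simplifying assumption.

```python
# Peak-warming scaling per trillion tonnes of carbon (TtC), using the
# abstract's central value and 5-95% range; linearity is the assumption.

TCRE_BEST = 2.0   # deg C per TtC (most likely peak warming)
TCRE_LOW = 1.3    # deg C per TtC (5th percentile)
TCRE_HIGH = 3.9   # deg C per TtC (95th percentile)

def peak_warming(cumulative_emissions_ttc):
    """Return (best, low, high) peak CO2-induced warming in deg C
    for a given cumulative emission in trillion tonnes of carbon."""
    e = cumulative_emissions_ttc
    return (TCRE_BEST * e, TCRE_LOW * e, TCRE_HIGH * e)

# About half of the trillionth tonne has already been emitted:
best, low, high = peak_warming(0.5)
print(f"0.5 TtC -> {best:.1f} C (5-95%: {low:.2f}-{high:.2f} C)")
```

    Under this scaling, policy targets framed in cumulative tonnes translate directly into a warming range, independent of emission timing.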

  2. Refined Use of Satellite Aerosol Optical Depth Snapshots to Constrain Biomass Burning Emissions in the GOCART Model

    NASA Astrophysics Data System (ADS)

    Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Limbacher, James

    2017-10-01

    Simulations of biomass burning (BB) emissions in global chemistry and aerosol transport models depend on external inventories, which provide location and strength for BB aerosol sources. Our previous work shows that, to first order, satellite snapshots of aerosol optical depth (AOD) near the emitted smoke plume can be used to constrain model-simulated AOD and, effectively, the smoke source strength. We now refine the satellite-snapshot method and investigate where applying simple multiplicative emission adjustment factors alone to the widely used Global Fire Emission Database version 3 emission inventory can achieve regional-scale consistency between Moderate Resolution Imaging Spectroradiometer (MODIS) AOD snapshots and the Goddard Chemistry Aerosol Radiation and Transport model. The model and satellite AOD are compared globally, over a set of BB cases observed by the MODIS instrument during the 2004 and 2006-2008 biomass burning seasons. Regional discrepancies between the model and satellite are diverse around the globe yet quite consistent within most ecosystems. We refine our approach to address physically based limitations of our earlier work (1) by expanding the number of fire cases from 124 to almost 900, (2) by using scaled reanalysis-model simulations to fill missing AOD retrievals in the MODIS observations, (3) by distinguishing the BB components of the total aerosol load from background aerosol in the near-source regions, and (4) by including emissions from fires too small to be identified explicitly in the satellite observations. The small-fire emission adjustment shows the complementary nature of correcting for source strength and adding geographically distinct missing sources. Our analysis indicates that the method works best for fire cases where the BB fraction of total AOD is high, primarily evergreen or deciduous forests. In heavily polluted or agricultural burning regions, where smoke and background AOD values tend to be comparable, this approach encounters large uncertainties, and in some regions other model- or measurement-related factors might contribute significantly to model-satellite discrepancies. This work sets the stage for a larger study within the Aerosol Comparison between Observations and Models (AeroCom) multimodel biomass burning experiment. By comparing multiple model results using the refined technique presented here, we aim to separate BB inventory contributions from model-specific contributions to the remaining discrepancies.
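    The first-order adjustment idea can be sketched as a ratio of the BB components of observed and modeled AOD. The function name, the simple ratio form, and the assumption that the background AOD is the same in both fields are illustrative, not the paper's exact algorithm.

```python
# Hypothetical sketch of a multiplicative emission adjustment: scale the
# biomass-burning (BB) source so the model's BB AOD matches the satellite
# snapshot, after removing the background (non-BB) aerosol contribution.

def emission_adjustment(aod_satellite, aod_model_total, aod_background):
    """Multiplicative factor for the BB emission source strength."""
    bb_observed = aod_satellite - aod_background   # BB component seen by MODIS
    bb_modeled = aod_model_total - aod_background  # BB component in the model
    if bb_modeled <= 0:
        raise ValueError("model shows no BB signal to scale")
    return bb_observed / bb_modeled

# Example: satellite sees AOD 0.8 near the plume, model predicts 0.5,
# and background aerosol contributes 0.2 in both fields.
factor = emission_adjustment(0.8, 0.5, 0.2)
print(f"scale inventory emissions by {factor:.1f}x")  # (0.6 / 0.3) = 2.0x
```

    The abstract's caveat maps directly onto this sketch: when the background term is comparable to the total AOD, the numerator and denominator are small differences of large numbers, and the factor becomes highly uncertain.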

  3. Simulating carbon flows in Amazonian rainforests: how intensive C-cycle data can help to reduce vegetation model uncertainty

    NASA Astrophysics Data System (ADS)

    Galbraith, D.; Levine, N. M.; Christoffersen, B. O.; Imbuzeiro, H. A.; Powell, T.; Costa, M. H.; Saleska, S. R.; Moorcroft, P. R.; Malhi, Y.

    2014-12-01

    The mathematical codes embedded within different vegetation models ultimately represent alternative hypotheses of biosphere functioning. While formulations for some processes (e.g. leaf-level photosynthesis) are often shared across vegetation models, other processes (e.g. carbon allocation) are much more variable in their representation across models. This creates the opportunity for equifinality - models can simulate similar values of key metrics such as NPP or biomass through very different underlying causal pathways. Intensive carbon cycle measurements allow for quantification of a comprehensive suite of carbon fluxes such as the productivity and respiration of leaves, roots and wood, allowing for in-depth assessment of carbon flows within ecosystems. Thus, they provide important information on poorly-constrained C-cycle processes such as allocation. We conducted an in-depth evaluation of the ability of four commonly used dynamic global vegetation models (CLM, ED2, IBIS, JULES) to simulate carbon cycle processes at ten lowland Amazonian rainforest sites where individual C-cycle components have been measured. The rigorous model-data comparison procedure allowed identification of biases which were specific to different models, providing clear avenues for model improvement and allowing determination of internal C-cycling pathways that were better supported by data. Furthermore, the intensive C-cycle data allowed for explicit testing of the validity of a number of assumptions made by specific models in the simulation of carbon allocation and plant respiration. For example, the ED2 model assumes that maintenance respiration of stems is negligible while JULES assumes equivalent allocation of NPP to fine roots and leaves. We argue that field studies focusing on simultaneous measurement of a large number of component fluxes are fundamentally important for reducing uncertainty in vegetation model simulations.

  4. Constraining the noise-free distribution of halo spin parameters

    NASA Astrophysics Data System (ADS)

    Benson, Andrew J.

    2017-11-01

    Any measurement made using an N-body simulation is subject to noise due to the finite number of particles used to sample the dark matter distribution function, and the lack of structure below the simulation resolution. This noise can be particularly significant when attempting to measure intrinsically small quantities, such as halo spin. In this work, we develop a model to describe the effects of particle noise on halo spin parameters. This model is calibrated using N-body simulations in which the particle noise can be treated as a Poisson process on the underlying dark matter distribution function, and we demonstrate that this calibrated model reproduces measurements of halo spin parameter error distributions previously measured in N-body convergence studies. Utilizing this model, along with previous measurements of the distribution of halo spin parameters in N-body simulations, we place constraints on the noise-free distribution of halo spins. We find that the noise-free median spin is 3 per cent lower than that measured directly from the N-body simulation, corresponding to a shift of approximately 40 times the statistical uncertainty in this measurement arising purely from halo counting statistics. We also show that measurement of the spin of an individual halo to 10 per cent precision requires at least 4 × 10^4 particles in the halo; for haloes containing 200 particles, the fractional error on spins measured for individual haloes is of order unity. N-body simulations should be viewed as the results of a statistical experiment applied to a model of dark matter structure formation. When viewed in this way, it is clear that determination of any quantity from such a simulation should be made through forward modelling of the effects of particle noise.
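    The forward-modelling viewpoint above can be sketched with a toy noise model. The 1/sqrt(N) scaling of the fractional spin error and the lognormal scatter are assumptions; the calibration constant is chosen only so that the two numbers quoted in the abstract (10% at 4 × 10^4 particles, order unity at 200 particles) are reproduced.

```python
import math
import random

# Toy forward model of particle noise on a halo spin measurement.
# Assumption: fractional spin error ~ C / sqrt(N_particles), with C set so
# that 4e4 particles give 10% precision, matching the abstract's figures.

C = 0.1 * math.sqrt(4e4)  # = 20

def spin_fractional_error(n_particles):
    return C / math.sqrt(n_particles)

def noisy_spin(true_spin, n_particles, rng=random):
    """Forward-model one noisy spin measurement (lognormal scatter assumed)."""
    sigma = spin_fractional_error(n_particles)
    return true_spin * math.exp(rng.gauss(0.0, sigma))

print(f"{spin_fractional_error(4e4):.2f}")  # 0.10
print(f"{spin_fractional_error(200):.2f}")  # 1.41 -> order unity, as stated
print(f"one noisy draw: {noisy_spin(0.035, 1e4, random.Random(1)):.4f}")
```

    Constraining the noise-free spin distribution then amounts to finding the input distribution that, pushed through a model like `noisy_spin`, reproduces the spins measured in the simulation.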

  5. Impact of an equality constraint on the class-specific residual variances in regression mixtures: A Monte Carlo simulation study.

    PubMed

    Kim, Minjung; Lamont, Andrea E; Jaki, Thomas; Feaster, Daniel; Howe, George; Van Horn, M Lee

    2016-06-01

    Regression mixture models are a novel approach to modeling the heterogeneous effects of predictors on an outcome. In the model-building process, residual variances are often disregarded and simplifying assumptions are made without thorough examination of the consequences. In this simulation study, we investigated the impact of an equality constraint on the residual variances across latent classes. We examined the consequences of constraining the residual variances on class enumeration (finding the true number of latent classes) and on the parameter estimates, under a number of different simulation conditions meant to reflect the types of heterogeneity likely to exist in applied analyses. The results showed that bias in class enumeration increased as the difference in residual variances between the classes increased. Also, an inappropriate equality constraint on the residual variances greatly impacted the estimated class sizes and showed the potential to greatly affect the parameter estimates in each class. These results suggest that it is important to make assumptions about residual variances with care and to carefully report what assumptions are made.

  6. Transient Earth system responses to cumulative carbon dioxide emissions: linearities, uncertainties, and probabilities in an observation-constrained model ensemble

    NASA Astrophysics Data System (ADS)

    Steinacher, M.; Joos, F.

    2016-02-01

    Information on the relationship between cumulative fossil CO2 emissions and multiple climate targets is essential to design emission mitigation and climate adaptation strategies. In this study, the transient response of a climate or environmental variable per trillion tonnes of CO2 emissions, termed TRE, is quantified for a set of impact-relevant climate variables and from a large set of multi-forcing scenarios extended to year 2300 towards stabilization. A ~1000-member ensemble of the Bern3D-LPJ carbon-climate model is applied and model outcomes are constrained by 26 physical and biogeochemical observational data sets in a Bayesian, Monte Carlo-type framework. Uncertainties in TRE estimates include both scenario uncertainty and model response uncertainty. Cumulative fossil emissions of 1000 Gt C result in a global mean surface air temperature change of 1.9 °C (68 % confidence interval (c.i.): 1.3 to 2.7 °C), a decrease in surface ocean pH of 0.19 (0.18 to 0.22), and a steric sea level rise of 20 cm (13 to 27 cm until 2300). Linearity between cumulative emissions and transient response is high for pH and reasonably high for surface air and sea surface temperatures, but less pronounced for changes in Atlantic meridional overturning, Southern Ocean and tropical surface water saturation with respect to biogenic structures of calcium carbonate, and carbon stocks in soils. The constrained model ensemble is also applied to determine the response to a pulse-like emission and in idealized CO2-only simulations. The transient climate response is constrained, primarily by long-term ocean heat observations, to 1.7 °C (68 % c.i.: 1.3 to 2.2 °C) and the equilibrium climate sensitivity to 2.9 °C (2.0 to 4.2 °C). This is consistent with results by CMIP5 models but inconsistent with recent studies that relied on short-term air temperature data affected by natural climate variability.
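    The TRE concept is a linear response per unit of cumulative emission, which can be sketched for the variables quoted above. The central values per 1000 Gt C come from the abstract; treating every variable as perfectly linear is the approximation whose validity the study assesses.

```python
# Transient response per cumulative emission (TRE), sketched as a linear
# scaling. Central values per 1000 Gt C are taken from the abstract.

TRE = {                      # response per 1000 Gt C emitted
    "surface air temperature (deg C)": 1.9,
    "surface ocean pH change": -0.19,
    "steric sea level rise (cm, by 2300)": 20.0,
}

def transient_response(cumulative_gtc):
    """Scale each TRE variable to a given cumulative emission (Gt C)."""
    scale = cumulative_gtc / 1000.0
    return {var: val * scale for var, val in TRE.items()}

for var, val in transient_response(500.0).items():
    print(f"{var}: {val:+.3f}")
```

    The abstract notes that this linearity holds well for pH and temperature but degrades for overturning circulation, saturation states, and soil carbon, so a table like `TRE` is only a first-order summary for those variables.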

  7. Soil warming response: field experiments to Earth system models

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Bradford, M.; Wieder, W. R.; Crowther, T. W.

    2017-12-01

    The soil carbon response to climate change is extremely uncertain at the global scale, in part because of the uncertainty in the magnitude of the temperature response. To address this uncertainty, we collected data from 48 soil warming manipulation studies and examined the temperature response using two different methods. First, we constructed a mixed effects model and extrapolated the effect of soil warming on soil carbon stocks under anticipated shifts in surface temperature during the 21st century. We saw significant vulnerability of soil carbon stocks, especially in high-carbon soils. To place this effect in the context of anticipated changes in carbon inputs and moisture shifts, we applied a one-pool decay model with temperature sensitivities to the field data and imposed a post-hoc correction on the Earth system model simulations to integrate the field data with the simulated temperature response. We found a slight elevation in the overall soil carbon losses, but the field uncertainty of the temperature sensitivity parameter was as large as the variation among model soil carbon projections. This implies that model-data integration is unlikely to constrain soil carbon simulations and highlights the importance of representing parameter uncertainty in these Earth system models to inform emissions targets.
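    A one-pool decay model with a temperature sensitivity can be sketched in a few lines. A Q10 form for the temperature dependence and the parameter values below are illustrative assumptions, not the study's fitted values.

```python
# One-pool soil carbon model with a Q10 temperature sensitivity:
#   dC/dt = I - k_ref * Q10**((T - T_ref)/10) * C
# Parameter values are illustrative placeholders.

def soil_carbon(c0, inputs, k_ref, q10, temp, t_ref=15.0, years=100, dt=0.1):
    """Integrate the one-pool model forward with simple Euler steps."""
    k = k_ref * q10 ** ((temp - t_ref) / 10.0)   # warming speeds up decay
    c = c0
    for _ in range(int(years / dt)):
        c += (inputs - k * c) * dt
    return c

# Start at equilibrium (C = I/k = 10 kgC/m2), then warm by 4 K:
c_control = soil_carbon(c0=10.0, inputs=0.5, k_ref=0.05, q10=2.0, temp=15.0)
c_warmed = soil_carbon(c0=10.0, inputs=0.5, k_ref=0.05, q10=2.0, temp=19.0)
print(f"control: {c_control:.2f} kgC/m2, +4K warming: {c_warmed:.2f} kgC/m2")
```

    The post-hoc correction described above amounts to replacing a simulated temperature sensitivity with one fitted to field data in a model of this form, and propagating the (large) uncertainty in that parameter.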

  8. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model - Part 1: Assessing the influence of constrained multi-generational ageing

    NASA Astrophysics Data System (ADS)

    Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.

    2015-09-01

    Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data, and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the Statistical Oxidation Model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional UCD/CIT air quality model and applied to air quality episodes in California and the eastern US. The mass, composition and properties of SOA predicted using SOM are compared to SOA predictions generated by a traditional "two-product" model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models capture the majority of the SOA mass formation from multi-generational oxidation under the conditions tested. Consequently, the choice of low versus high NOx yields perturbs SOA concentrations by a factor of two and is probably a much stronger determinant in 3-D models than constrained multi-generational oxidation. While total predicted SOA mass is similar for the SOM and two-product models, the SOM model predicts increased SOA contributions from anthropogenic precursors (alkanes, aromatics) and sesquiterpenes and decreased SOA contributions from isoprene and monoterpenes relative to the two-product model calculations. The SOA predicted by SOM has a much lower volatility than that predicted by the traditional model, resulting in better qualitative agreement with volatility measurements of ambient OA. On account of its lower volatility, the SOA mass produced by SOM does not appear to be as strongly influenced by the inclusion of oligomerization reactions, whereas the two-product model relies heavily on oligomerization to form low-volatility SOA products. Finally, an unconstrained contemporary hybrid scheme to model multi-generational oxidation within the framework of a two-product model, in which "ageing" reactions are added on top of the existing two-product parameterization, is considered. This hybrid scheme formed at least three times more SOA than the SOM during regional simulations as a result of excessive transformation of semi-volatile vapors into lower-volatility material that strongly partitions to the particle phase. This finding suggests that these "hybrid" multi-generational schemes should be used with great caution in regional models.
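    The "two-product" baseline above is an absorptive-partitioning calculation: each oxidation product has a mass yield and a partitioning coefficient, and the organic aerosol mass is solved self-consistently. The yields and coefficients below are illustrative, not fitted chamber parameters.

```python
# Two-product absorptive partitioning: for products i with yield alpha_i and
# partitioning coefficient K_i (m3/ug), the organic aerosol mass satisfies
#   C_OA = sum_i alpha_i * dHC * (K_i * C_OA) / (1 + K_i * C_OA)
# solved here by fixed-point iteration.

def two_product_soa(delta_hc, alphas, ks, seed=1e-3, tol=1e-10):
    """Solve for C_OA (ug/m3) given reacted VOC mass delta_hc (ug/m3)."""
    c_oa = seed
    for _ in range(1000):
        new = sum(a * delta_hc * k * c_oa / (1.0 + k * c_oa)
                  for a, k in zip(alphas, ks))
        if abs(new - c_oa) < tol:
            return new
        c_oa = new
    return c_oa

# 50 ug/m3 of reacted VOC; one semi-volatile and one low-volatility product:
print(f"{two_product_soa(50.0, alphas=(0.1, 0.2), ks=(0.05, 1.0)):.2f} ug/m3")
```

    SOM generalizes this picture by tracking carbon and oxygen numbers through successive functionalization and fragmentation steps rather than fixing two surrogate products.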

  9. Assessing stratospheric transport in the CMAM30 simulations using ACE-FTS measurements

    NASA Astrophysics Data System (ADS)

    Kolonjari, Felicia; Plummer, David A.; Walker, Kaley A.; Boone, Chris D.; Elkins, James W.; Hegglin, Michaela I.; Manney, Gloria L.; Moore, Fred L.; Pendlebury, Diane; Ray, Eric A.; Rosenlof, Karen H.; Stiller, Gabriele P.

    2018-05-01

    Stratospheric transport in global circulation models and chemistry-climate models is an important component in simulating the recovery of the ozone layer as well as changes in the climate system. The Brewer-Dobson circulation is not well constrained by observations and further investigation is required to resolve uncertainties related to the mechanisms driving the circulation. This study has assessed the specified dynamics mode of the Canadian Middle Atmosphere Model (CMAM30) by comparing to the Atmospheric Chemistry Experiment Fourier transform spectrometer (ACE-FTS) profile measurements of CFC-11 (CCl3F), CFC-12 (CCl2F2), and N2O. In the CMAM30 specified dynamics simulation, the meteorological fields are nudged using the ERA-Interim reanalysis and a specified tracer was employed for each species, with hemispherically defined surface measurements used as the boundary condition. A comprehensive sampling technique along the line of sight of the ACE-FTS measurements has been utilized to allow for direct comparisons between the simulated and measured tracer concentrations. The model consistently overpredicts tracer concentrations of CFC-11, CFC-12, and N2O in the lower stratosphere, particularly in the northern hemispheric winter and spring seasons. The three mixing barriers investigated, including the polar vortex, the extratropical tropopause, and the tropical pipe, show that there are significant inconsistencies between the measurements and the simulations. In particular, the CMAM30 simulation underpredicts mixing efficiency in the tropical lower stratosphere during the June-July-August season.

  10. Simulations of Tidally Driven Formation of Binary Planet Systems

    NASA Astrophysics Data System (ADS)

    Murray, R. Zachary P.; Guillochon, James

    2018-01-01

    In the last decade, hundreds of exoplanets have been discovered by Kepler, CoRoT, and many other initiatives. This wealth of data suggests the possibility of detecting exoplanets with large satellites. This project seeks to model the interactions between orbiting planets using the FLASH hydrodynamics code developed by the Flash Center for Computational Science at the University of Chicago. We model encounters over a wide variety of scenarios and initial conditions, including variations in encounter depth, mass ratio, and encounter velocity, and attempt to constrain what sorts of binary planet configurations are possible and stable.

  11. Evaluation of Cloud-Resolving Model Intercomparison Simulations Using TWP-ICE Observations: Precipitation and Cloud Structure

    NASA Technical Reports Server (NTRS)

    Varble, Adam; Fridlind, Ann M.; Zipser, Edward J.; Ackerman, Andrew S.; Chaboureau, Jean-Pierre; Fan, Jiwen; Hill, Adrian; McFarlane, Sally A.; Pinty, Jean-Pierre; Shipway, Ben

    2011-01-01

    The Tropical Warm Pool-International Cloud Experiment (TWP-ICE) provided extensive observational data sets designed to initialize, force, and constrain atmospheric model simulations. In this first of a two-part study, precipitation and cloud structures within nine cloud-resolving model simulations are compared with scanning radar reflectivity and satellite infrared brightness temperature observations during an active monsoon period from 19 to 25 January 2006. Seven of nine simulations overestimate convective area by 20% or more, leading to general overestimation of convective rainfall. This is balanced by underestimation of stratiform rainfall by 5% to 50%, despite overestimation of stratiform area by up to 65%, because of a preponderance of very low stratiform rain rates in all simulations. All simulations fail to reproduce observed radar reflectivity distributions above the melting level in convective regions and throughout the troposphere in stratiform regions. Observed precipitation-sized ice reaches higher altitudes than simulated precipitation-sized ice, despite some simulations predicting lower than observed top-of-atmosphere infrared brightness temperatures. For the simulations that overestimate radar reflectivity aloft, graupel is the cause in one-moment microphysics schemes, whereas snow is the cause in two-moment microphysics schemes. Differences in simulated radar reflectivity are more highly correlated with differences in mass mean melted diameter (Dm) than differences in ice water content. Dm is largely dependent on the mass-dimension relationship and gamma size distribution parameters such as the size intercept (N0) and shape parameter (μ). Having variable density, variable N0, or μ greater than zero produces radar reflectivities closest to those observed.
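    The dependence of Dm on the gamma size distribution parameters can be made concrete, since the moments of a gamma particle size distribution N(D) = N0 D^μ exp(-ΛD) have closed forms. The mass-dimension exponent b = 3 (constant-density spheres) and the Rayleigh approximation for reflectivity are assumptions for illustration.

```python
import math

# Moments of a gamma PSD, N(D) = N0 * D**mu * exp(-lam*D):
#   M_n = N0 * Gamma(mu + n + 1) / lam**(mu + n + 1)
# With mass m(D) ~ D**b, the mass mean diameter is Dm = M_{b+1}/M_b = (mu+b+1)/lam.

def psd_moment(n, n0, mu, lam):
    return n0 * math.gamma(mu + n + 1.0) / lam ** (mu + n + 1.0)

def mass_mean_diameter(n0, mu, lam, b=3.0):
    """Mass-weighted mean diameter (same length units as 1/lam)."""
    return psd_moment(b + 1, n0, mu, lam) / psd_moment(b, n0, mu, lam)

def rayleigh_reflectivity(n0, mu, lam):
    """Z ~ 6th moment of the PSD (Rayleigh scattering, constant density)."""
    return psd_moment(6, n0, mu, lam)

# Raising the shape parameter mu at fixed lam narrows the PSD and raises Dm:
for mu in (0.0, 2.0):
    print(f"mu={mu}: Dm={mass_mean_diameter(1.0, mu, 2.0):.2f} mm")
```

    This is why the study finds reflectivity differences tracking Dm: Z depends on a higher moment of the same distribution, so shifting μ, N0, or the mass-dimension relation moves Dm and Z together.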

  12. Management of groundwater in-situ bioremediation system using reactive transport modelling under parametric uncertainty: field scale application

    NASA Astrophysics Data System (ADS)

    Verardo, E.; Atteia, O.; Rouvreau, L.

    2015-12-01

    In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for follow-on predictions of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model was calibrated by matching simulated BTEX concentrations to a total of 48 observations from historical data before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model: the first consists of pumping/injection wells, and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which could otherwise lead to a poor quantification of predictive uncertainty. Application of the proposed approach to manage bioremediation of groundwater at a real site shows that it is effective in providing support for the management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for applying model predictive uncertainty methods in environmental management.
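    The core NSMC idea can be sketched with a deliberately tiny example: perturb calibrated parameters only along directions the calibration data cannot see, so every sampled set still fits the observations. A linear model with one observation is used so the one-dimensional null space can be written down directly; the real method builds the null space from the SVD of the full Jacobian.

```python
import math
import random

# Null-space Monte Carlo sketch for a linear model y = a*p1 + b*p2 with a
# single calibrated observation. The null direction satisfies a*u1 + b*u2 = 0,
# so adding any multiple of it leaves the simulated observation unchanged.

def nsmc_samples(p_calibrated, jacobian_row, n_sets, scale=1.0, seed=0):
    a, b = jacobian_row
    norm = math.hypot(a, b)
    null_dir = (-b / norm, a / norm)      # J @ null_dir == 0
    rng = random.Random(seed)
    p1, p2 = p_calibrated
    sets = []
    for _ in range(n_sets):
        z = scale * rng.gauss(0.0, 1.0)   # random step along the null space
        sets.append((p1 + z * null_dir[0], p2 + z * null_dir[1]))
    return sets

jacobian = (2.0, 1.0)   # sensitivity of the one observation to (p1, p2)
sets = nsmc_samples((1.0, 3.0), jacobian, n_sets=5)
for p1, p2 in sets:
    # every set reproduces the calibrated observation 2*1 + 1*3 = 5
    print(f"p = ({p1:+.3f}, {p2:+.3f}), simulated obs = {2*p1 + p2:.6f}")
```

    In a nonlinear model the null space is only locally valid, which is why the study's second variant, re-deriving it from multiple calibrated models, recovers a wider and more honest parameter spread.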

  13. An Agent-Based Modeling Framework and Application for the Generic Nuclear Fuel Cycle

    NASA Astrophysics Data System (ADS)

    Gidden, Matthew J.

    Key components of a novel methodology and implementation of an agent-based, dynamic nuclear fuel cycle simulator, Cyclus, are presented. The nuclear fuel cycle is a complex, physics-dependent supply chain. To date, existing dynamic simulators have not treated constrained fuel supply, time-dependent isotopic-quality-based demand, or fuel fungibility particularly well. Utilizing an agent-based methodology that incorporates sophisticated graph theory and operations research techniques can overcome these deficiencies. This work describes a simulation kernel and the agents that interact with it, highlighting the Dynamic Resource Exchange (DRE), the supply-demand framework at the heart of the kernel. The key agent-DRE interaction mechanisms are described, which enable complex entity interaction through the use of physics and socio-economic models. The translation of an exchange instance to a variant of the Multicommodity Transportation Problem, which can be solved feasibly or optimally, follows. An extensive investigation of solution performance and fidelity is then presented. Finally, recommendations for future users of Cyclus and the DRE are provided.
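    The kind of problem an exchange reduces to can be illustrated with a single-commodity toy instance: suppliers with capacities, consumers with demands, and unit costs (preferences) on each arc. This is not Cyclus's actual multicommodity formulation; a balanced 2x2 instance is chosen because it has one degree of freedom, so the linear cost is minimized at an endpoint of the feasible interval.

```python
# Toy balanced 2x2 transportation problem: let t = flow from supplier 1 to
# consumer 1; all other flows follow from the supply/demand balances, and the
# total cost is linear in t, so the optimum lies at an endpoint.

def solve_2x2_transport(supply, demand, cost):
    """cost is ((c11, c12), (c21, c22)); returns (flows, total_cost)."""
    s1, s2 = supply
    d1, d2 = demand
    assert abs((s1 + s2) - (d1 + d2)) < 1e-9, "balanced problem assumed"
    (c11, c12), (c21, c22) = cost
    t_lo, t_hi = max(0.0, d1 - s2), min(s1, d1)   # feasible range for t
    def flows(t):
        return ((t, s1 - t), (d1 - t, d2 - s1 + t))
    def total(t):
        (x11, x12), (x21, x22) = flows(t)
        return c11 * x11 + c12 * x12 + c21 * x21 + c22 * x22
    t_best = min((t_lo, t_hi), key=total)
    return flows(t_best), total(t_best)

# Two fuel fabricators supply two reactors; costs encode preferences.
flows, obj = solve_2x2_transport(supply=(10, 5), demand=(8, 7),
                                 cost=((1.0, 3.0), (2.0, 1.0)))
print(flows, obj)  # ((8, 2), (0, 5)) 19.0
```

    General instances need a real LP or network-flow solver; the endpoint trick only works because the 2x2 balanced case is one-dimensional.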

  14. Collective degrees of freedom involved in absorption and desorption of surfactant molecules in spherical non-ionic micelles

    NASA Astrophysics Data System (ADS)

    Ahn, Yong Nam; Mohan, Gunjan; Kopelevich, Dmitry I.

    2012-10-01

    Dynamics of absorption and desorption of a surfactant monomer into and out of a spherical non-ionic micelle is investigated by coarse-grained molecular dynamics (MD) simulations. It is shown that these processes involve a complex interplay between the micellar structure and the monomer configuration. A quantitative model for collective dynamics of these degrees of freedom is developed. This is accomplished by reconstructing a multi-dimensional free energy landscape of the surfactant-micelle system using constrained MD simulations in which the distance between the micellar and monomer centers of mass is held constant. Results of this analysis are verified by direct (unconstrained) MD simulations of surfactant absorption in the micelle. It is demonstrated that the system dynamics is likely to deviate from the minimum energy path on the energy landscape. These deviations create an energy barrier for the monomer absorption and increase an existing barrier for the monomer desorption. A reduced Fokker-Planck equation is proposed to model these effects.

  15. Constraints on Lobate Debris Apron Evolution and Rheology from Numerical Modeling of Ice Flow

    NASA Astrophysics Data System (ADS)

    Parsons, R.; Nimmo, F.

    2010-12-01

    Recent radar observations of mid-latitude lobate debris aprons (LDAs) have confirmed the presence of ice within these deposits. Radar observations in Deuteronilus Mensae have constrained the concentration of dust found within the ice deposits to <30% by volume based on the strength of the returned signal. In addition to constraining the dust fraction, these radar observations can measure the ice thickness - providing an opportunity to more accurately estimate the flow behavior of ice responsible for the formation of LDAs. In order to further constrain the age and rheology of LDA ice, we developed a numerical model simulating ice flow under Martian conditions using results from ice deformation experiments, theory of ice grain growth based on terrestrial ice cores, and observational constraints from radar profiles and laser altimetry. This finite difference model calculates the LDA profile shape as it flows over time assuming no basal slip. In our model, the ice rheology is determined by the concentration of dust which influences the ice grain size by pinning the ice grain boundaries and halting ice grain growth. By varying the dust fraction (and therefore the ice grain size), the ice temperature, the subsurface slope, and the initial ice volume we are able to determine the combination of parameters that best reproduce the observed LDA lengths and thicknesses over a period of time comparable to crater age dates of LDA surfaces (90 - 300 My, see figure). Based on simulations using different combinations of ice temperature, ice grain size, and basal slope, we find that an ice temperature of 205 K, a dust volume fraction of 0.5% (resulting in an ice grain size of 5 mm), and a flat subsurface slope give reasonable model LDA ages for many LDAs in the northern mid-latitudes of Mars. 
    However, we find that there is no single combination of dust fraction, temperature, and subsurface slope which can give realistic ages for all LDAs, suggesting that all or some of these variables are spatially heterogeneous. We conclude that there are important regional differences in either the amount of dust mixed in with the ice, or in the presence of a basal slope below the LDA ice. Alternatively, the ice temperature and/or timing of ice deposition may vary significantly between different mid-latitude regions. [Figure caption: (a) Topographic profiles plotted every 200 My (thin solid lines) from a 1 Gy simulation of ice flow for an initial ice deposit (thick solid line) 5 km long and 1 km thick, using an ice temperature of 205 K and a dust fraction φ of 0.047%. A MOLA profile of an LDA at 38.6°N, 24.3°E (dashed line) is shown for comparison. (b) Final profiles for simulations lasting 100 My using temperatures of 195, 205, and 215 K illustrate the effect of both temperature and increasing the dust volume fraction to 1.2% (resulting in an ice grain size of 1 mm).]
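    A finite-difference ice-flow model of the kind described can be sketched in one dimension: shallow-ice approximation with Glen's flow law (n = 3), no basal slip, flat bed. The rate factor A, Mars-like gravity, and the short integration time are illustrative placeholders; in the study the effective rheology depends on temperature and the dust-controlled grain size, and runs span tens to hundreds of millions of years.

```python
# 1-D shallow-ice flow with Glen's law on a flat bed (no basal slip).
# Flux at interfaces: q = -(2A/(n+2)) * (rho*g)^n * H^(n+2) * |dh/dx|^(n-1) * dh/dx
# Explicit conservative update; A and the run length are placeholders.

def evolve_ice(h, dx, dt, steps, A=1.0e-24, n=3, rho=900.0, g=3.7):
    """Evolve ice thickness h (m, list of cells) with zero-flux boundaries."""
    coef = (2.0 * A / (n + 2)) * (rho * g) ** n
    h = list(h)
    for _ in range(steps):
        q = [0.0] * (len(h) + 1)              # interface fluxes (m^2/s)
        for i in range(1, len(h)):
            slope = (h[i] - h[i - 1]) / dx
            h_mid = 0.5 * (h[i] + h[i - 1])
            q[i] = -coef * h_mid ** (n + 2) * abs(slope) ** (n - 1) * slope
        for i in range(len(h)):
            h[i] -= dt * (q[i + 1] - q[i]) / dx   # mass-conserving update
    return h

h0 = [1000.0] * 5 + [0.0] * 15   # 5 km long, 1 km thick deposit; dx = 1 km
h1 = evolve_ice(h0, dx=1000.0, dt=1.0e4, steps=2000)
print(f"max thickness now {max(h1):.1f} m; "
      f"{sum(hi > 1.0 for hi in h1)} cells hold ice")
```

    Matching a model like this to MOLA profiles and crater ages is what lets the study trade off dust fraction (grain size), temperature, and basal slope against one another.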

  16. Constraints on cosmic ray propagation in the galaxy

    NASA Technical Reports Server (NTRS)

    Cordes, James M.

    1992-01-01

    The goal was to derive a more detailed picture of magnetohydrodynamic (MHD) turbulence in the interstellar medium and its effects on cosmic ray propagation. To do so, radio astronomical observations (scattering and Faraday rotation) were combined with solar system spacecraft observations of MHD turbulence, simulations of wave propagation, and modeling of the galactic distribution. A more sophisticated model was developed for the galactic distribution of electron density turbulence. Faraday rotation measure data were analyzed to constrain magnetic field fluctuations in the ISM. VLBI observations were acquired of compact sources behind the supernova remnant CTA1. Simple calculations were made of the energies of the turbulence, assuming a direct link between electron density and magnetic field variations. A simulation is outlined of cosmic ray propagation through the galaxy using the above results.

  17. Simulating flight boundary conditions for orbiter payload modal survey

    NASA Technical Reports Server (NTRS)

    Chung, Y. T.; Sernaker, M. L.; Peebles, J. H.

    1993-01-01

An approach was developed to simulate the characteristics of the payload/orbiter interfaces for the payload modal survey. The flexure designed for this approach must provide adequate stiffness separation between the free and constrained interface degrees of freedom to closely resemble the flight boundary condition. If the flexure fixture is used for the payload modal survey, payloads will behave linearly and exhibit modal effective mass distributions and load paths similar to those in flight. The potential non-linearities caused by trunnion slippage during a conventional fixed-base modal survey may be eliminated. Consequently, the effort to correlate the test and analysis models can be significantly reduced. An example is given to illustrate the selection and the sensitivity of the flexure stiffness. The advantages of using flexure fixtures for the modal survey and for analytical model verification are also demonstrated.
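The "stiffness separation" idea can be illustrated with a single-mass sketch: the flexure is very stiff in the constrained interface directions and very soft in the free ones, so the corresponding natural frequencies are widely separated. All numbers below are hypothetical, chosen only to show the scaling:

```python
import math

def natural_freq_hz(k, m):
    """Natural frequency of a 1-DOF spring-mass system, f = sqrt(k/m)/(2*pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

m = 2000.0            # payload mass, kg (illustrative)
k_constrained = 1e8   # stiff flexure direction approximates a fixed interface, N/m
k_free = 1e4          # soft flexure direction approximates a free boundary, N/m

f_c = natural_freq_hz(k_constrained, m)
f_f = natural_freq_hz(k_free, m)
print(f_c / f_f)  # separation ratio = sqrt(k_c / k_f) = 100
```

A frequency separation of this order keeps the soft-direction fixture modes well below the payload modes of interest, which is what lets the flexure survey approximate the flight boundary condition.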

  18. Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale

    DOE PAGES

    Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.; ...

    2017-01-26

Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.
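The hop-latency effect the abstract describes can be seen in a toy model: map communicating AMR boxes onto nodes of a ring network and count hops over all communicating pairs. This sketch is not Mota Mapper's algorithm, just a minimal illustration of why topology-aware placement matters:

```python
def ring_hops(a, b, n):
    """Shortest hop distance between nodes a and b on an n-node ring."""
    d = abs(a - b)
    return min(d, n - d)

def total_hops(mapping, edges, n_nodes):
    """Sum of network hops over all communicating box pairs."""
    return sum(ring_hops(mapping[u], mapping[v], n_nodes) for u, v in edges)

# 8 boxes in a 1-D chain, each communicating with its neighbour; 8-node ring
edges = [(i, i + 1) for i in range(7)]
naive = {i: (i * 3) % 8 for i in range(8)}  # strided, locality-blind mapping
aware = {i: i for i in range(8)}            # contiguous, locality-aware mapping
print(total_hops(aware, edges, 8), total_hops(naive, edges, 8))  # 7 vs 21
```

Even on this tiny example the locality-aware mapping cuts total hops threefold; when the bottom solve is latency-bound, such differences dominate scaling.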

  19. Topology-Aware Performance Optimization and Modeling of Adaptive Mesh Refinement Codes for Exascale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Cy P.; Bachan, John D.; Kenny, Joseph P.

Here, we introduce a topology-aware performance optimization and modeling workflow for AMR simulation that includes two new modeling tools, ProgrAMR and Mota Mapper, which interface with the BoxLib AMR framework and the SSTmacro network simulator. ProgrAMR allows us to generate and model the execution of task dependency graphs from high-level specifications of AMR-based applications, which we demonstrate by analyzing two example AMR-based multigrid solvers with varying degrees of asynchrony. Mota Mapper generates multiobjective, network topology-aware box mappings, which we apply to optimize the data layout for the example multigrid solvers. While the sensitivity of these solvers to layout and execution strategy appears to be modest for balanced scenarios, the impact of better mapping algorithms can be significant when performance is highly constrained by network hop latency. Furthermore, we show that network latency in the multigrid bottom solve is the main contributing factor preventing good scaling on exascale-class machines.

  20. Positive tropical marine low-cloud cover feedback inferred from cloud-controlling factors

    DOE PAGES

    Qu, Xin; Hall, Alex; Klein, Stephen A.; ...

    2015-09-28

Differences in simulations of tropical marine low-cloud cover (LCC) feedback are sources of significant spread in temperature responses of climate models to anthropogenic forcing. Here we show that in models the feedback is mainly driven by three large-scale changes—a strengthening tropical inversion, increasing surface latent heat flux, and an increasing vertical moisture gradient. Variations in the LCC response to these changes alone account for most of the spread in model-projected 21st century LCC changes. A methodology is devised to constrain the LCC response observationally using sea surface temperature (SST) as a surrogate for the latent heat flux and moisture gradient. In models where the current climate's LCC sensitivities to inversion strength and SST variations are consistent with observed, LCC decreases systematically, which would increase absorption of solar radiation. These results support a positive LCC feedback. Finally, correcting biases in the sensitivities will be an important step toward more credible simulation of cloud feedbacks.
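The cloud-controlling-factor framework behind this result is a linear decomposition: the LCC change is the sum of sensitivities to each controlling factor times that factor's change, e.g. dLCC = s_EIS·dEIS + s_SST·dSST. The sensitivity values below are hypothetical, chosen only to show how a net decrease (a positive feedback) arises:

```python
def lcc_change(d_eis, d_sst, s_eis=1.0, s_sst=-1.5):
    """Linear cloud-controlling-factor estimate of low-cloud-cover change.

    s_eis: %/K sensitivity to inversion strength (stronger inversion -> more cloud)
    s_sst: %/K sensitivity to SST (warmer surface -> less cloud)
    Both sensitivities here are illustrative assumptions, not observed values.
    """
    return s_eis * d_eis + s_sst * d_sst

# Warming scenario: inversion strengthens less than SST rises
print(lcc_change(d_eis=0.5, d_sst=1.0))  # -1.0 (% LCC): net decrease
```

Because the SST term outweighs the inversion term under warming, the estimated LCC change is negative: less low cloud, more absorbed sunlight, a positive feedback, as the abstract concludes.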
