NASA Technical Reports Server (NTRS)
Tapiador, Francisco; Tao, Wei-Kuo; Angelis, Carlos F.; Martinez, Miguel A.; Marcos, Cecilia; Rodriguez, Antonio; Hou, Arthur; Shi, Jainn Jong
2012-01-01
Ensembles of numerical model forecasts are of interest to operational early warning forecasters as the spread of the ensemble provides an indication of the uncertainty of the alerts, and the mean value is deemed to outperform the forecasts of the individual models. This paper explores two ensembles on a severe weather episode in Spain, aiming to ascertain the relative usefulness of each one. One ensemble uses sensible choices of physical parameterizations (precipitation microphysics, land surface physics, and cumulus physics) while the other follows a perturbed initial conditions approach. The results show that, depending on the parameterizations, large differences can be expected in terms of storm location, spatial structure of the precipitation field, and rain intensity. It is also found that the spread of the perturbed initial conditions ensemble is smaller than the dispersion due to physical parameterizations. This confirms that in severe weather situations operational forecasts should address moist physics deficiencies to realize the full benefits of the ensemble approach, in addition to optimizing initial conditions. The results also provide insights into differences in simulations arising from ensembles of weather models using several combinations of different physical parameterizations.
NASA Astrophysics Data System (ADS)
Zheng, Fei; Zhu, Jiang
2017-04-01
How to design a reliable ensemble prediction strategy that accounts for the major uncertainties of a forecasting system is a crucial issue in ensemble forecasting. In this study, a new stochastic perturbation technique is developed to improve the prediction skill for the El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from ensemble Kalman filter analysis results obtained by assimilating observed sea surface temperatures. Then, based on the pre-analyzed properties of the model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by physical processes missing from the original model (e.g., stochastic atmospheric forcing, extratropical effects, the Indian Ocean Dipole). Finally, we perturb each ensemble member at each step with the developed stochastic model-error model throughout the 12-month forecasting process, adding the zero-mean perturbations to the physical fields to mimic the presence of missing processes and high-frequency stochastic noise. The impacts of the stochastic model-error perturbations on deterministic ENSO predictions are examined by performing two sets of 21-yr hindcast experiments, initialized from the same initial conditions and differing only in whether they include the stochastic perturbations. The comparison shows that the stochastic perturbations significantly improve the ensemble-mean prediction skill throughout the 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble-mean response to a series of zero-mean perturbations, which reduces the forecast biases and thereby corrects the forecast through this nonlinear heating mechanism.
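The rectification mechanism described above — nonlinear model terms turning zero-mean perturbations into a nonzero ensemble-mean effect — can be illustrated with a toy model. The dynamics, amplitudes, and step counts below are illustrative stand-ins, not taken from the intermediate coupled model:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, dt=0.1):
    # Toy tendency with a quadratic (nonlinear) term; stable fixed point at x = 1.
    return x + dt * (1.0 - x**2)

n_members, n_steps, sigma = 1000, 120, 0.2

# Deterministic forecast: converges to the fixed point x = 1.
x_det = 0.5
for _ in range(n_steps):
    x_det = step(x_det)

# Stochastic ensemble: zero-mean noise added to every member at every step.
x_ens = np.full(n_members, 0.5)
for _ in range(n_steps):
    x_ens = step(x_ens) + rng.normal(0.0, sigma, n_members)

# The quadratic term rectifies the zero-mean noise, so the ensemble mean
# is systematically shifted relative to the unperturbed deterministic run.
drift = x_ens.mean() - x_det
```

Because the noise enters a quadratic term, the ensemble-mean state settles where the mean plus the noise-induced variance balances the tendency, not where the deterministic run does — the same kind of mean response the abstract attributes to the nonlinear heating mechanism.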
NASA Astrophysics Data System (ADS)
Stainforth, D. A.; Allen, M.; Kettleborough, J.; Collins, M.; Heaps, A.; Stott, P.; Wehner, M.
2001-12-01
The climateprediction.com project is preparing to carry out the first systematic uncertainty analysis of climate forecasts using large ensembles of GCM climate simulations. This will be done by involving schools, businesses and members of the public, and by utilizing the novel technology of distributed computing. Each participant will be asked to run one member of the ensemble on their PC. The model used will initially be the UK Met Office's Unified Model (UM). It will be run under Windows, and software will be provided to enable those involved to view their model output as it develops. The project will use this method to carry out large perturbed-physics GCM ensembles and thereby analyse the uncertainty in the forecasts from such models. Each participant/ensemble member will therefore have a version of the UM in which certain aspects of the model physics have been perturbed from their default values. Of course, the non-linear nature of the system means that it will be necessary to look not just at perturbations to individual parameters in specific schemes, such as the cloud parameterization, but also at the many combinations of perturbations. This rapidly leads to the need for very large, perhaps multi-million-member ensembles, which could only be undertaken using the distributed computing methodology. The status of the project will be presented and the Windows client will be demonstrated. In addition, initial results will be presented from beta test runs using a demo release for Linux PCs and Alpha workstations. Although small by comparison with the whole project, these pilot results constitute a 20-50 member perturbed-physics climate ensemble, with results indicating how climate sensitivity can be substantially affected by individual parameter values in the cloud scheme.
Stochastic Approaches Within a High Resolution Rapid Refresh Ensemble
NASA Astrophysics Data System (ADS)
Jankov, I.
2017-12-01
It is well known that global and regional numerical weather prediction (NWP) ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system is the use of stochastic physics. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), and Stochastic Perturbation of Physics Tendencies (SPPT). The focus of this study is to assess model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) using a variety of stochastic approaches. A single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model was utilized, with ensemble members produced by employing stochastic methods. Parameter perturbations (using SPP) for select fields were employed in the Rapid Update Cycle (RUC) land surface model (LSM) and the Mellor-Yamada-Nakanishi-Niino (MYNN) planetary boundary layer (PBL) scheme. Within MYNN, SPP was applied to sub-grid cloud fraction, mixing length, roughness length, mass fluxes, and Prandtl number. In the RUC LSM, SPP was applied to hydraulic conductivity, and perturbations of the initial soil moisture were also tested. Iterative testing was first conducted to assess the initial performance of several configuration settings (e.g., a variety of spatial and temporal de-correlation lengths). Upon selection of the most promising candidate configurations using SPP, a 10-day period was run and more robust statistics were gathered.
SKEB and SPPT were included in additional retrospective tests to assess the impact of using all three stochastic approaches to address model uncertainty. Results from the stochastic perturbation testing were compared to a baseline multi-physics control ensemble. For probabilistic forecast performance the Model Evaluation Tools (MET) verification package was used.
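As a sketch of how an SPP-style parameter perturbation with a prescribed temporal de-correlation scale can be generated: a first-order autoregressive (AR(1)) sequence gives the perturbation its e-folding time, and a mean-corrected lognormal mapping keeps the perturbed parameter positive. The parameter name, default value, and amplitudes below are illustrative, not taken from the HRRR configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_pattern(n_steps, tau, sigma, rng):
    """Zero-mean AR(1) sequence with e-folding (de-correlation) time tau,
    in model steps, and stationary standard deviation sigma."""
    phi = np.exp(-1.0 / tau)
    eps = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2), n_steps)
    x = np.empty(n_steps)
    x[0] = rng.normal(0.0, sigma)
    for t in range(1, n_steps):
        x[t] = phi * x[t - 1] + eps[t]
    return x

# Multiplicative, lognormal perturbation of a hypothetical mixing-length
# parameter; the -0.5*sigma**2 shift keeps the expected value near default.
default_value, sigma = 25.0, 0.3
pattern = ar1_pattern(n_steps=240, tau=10.0, sigma=sigma, rng=rng)
perturbed = default_value * np.exp(pattern - 0.5 * sigma**2)
```

A full SPP implementation would use a spatially correlated 2-D random field rather than a scalar sequence, but the temporal de-correlation control is the same AR(1) idea.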
On the generation of climate model ensembles
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy; Phipps, Steven J.
2014-10-01
Climate model ensembles are used to estimate uncertainty in future projections, typically by interpreting the ensemble distribution for a particular variable probabilistically. There are, however, different ways to produce climate model ensembles that yield different results, and therefore different probabilities for a future change in a variable. Perhaps equally importantly, there are different approaches to interpreting the ensemble distribution that lead to different conclusions. Here we use a reduced-resolution climate system model to compare three common ways to generate ensembles: initial conditions perturbation, physical parameter perturbation, and structural changes. Despite these three approaches conceptually representing very different categories of uncertainty within a modelling system, when comparing simulations to observations of surface air temperature they can be very difficult to separate. Using the twentieth century CMIP5 ensemble for comparison, we show that initial conditions ensembles, in theory representing internal variability, significantly underestimate observed variance. Structural ensembles, perhaps less surprisingly, exhibit over-dispersion in simulated variance. We argue that future climate model ensembles may need to include parameter or structural perturbation members in addition to perturbed initial conditions members to ensure that they sample uncertainty due to internal variability more completely. We note that where ensembles are over- or under-dispersive, such as for the CMIP5 ensemble, estimates of uncertainty need to be treated with care.
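The over- and under-dispersion diagnosis used here rests on the spread-skill relationship: for a reliable ensemble, the mean ensemble variance should match the mean squared error of the ensemble mean. A minimal synthetic check, with all numbers illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

n_cases, n_members = 500, 20
truth = rng.normal(0.0, 1.0, n_cases)

# Build a deliberately under-dispersive ensemble: members cluster with
# spread 0.4 around a centre whose actual error scale is 1.0.
centre = truth + rng.normal(0.0, 1.0, n_cases)
members = centre[:, None] + rng.normal(0.0, 0.4, (n_cases, n_members))

spread2 = members.var(axis=1, ddof=1).mean()          # mean ensemble variance
err2 = ((members.mean(axis=1) - truth) ** 2).mean()   # MSE of ensemble mean

# ratio << 1 flags under-dispersion; ratio >> 1 flags over-dispersion.
ratio = spread2 / err2
```

An initial-conditions ensemble that underestimates observed variance would show up here exactly as this small ratio does.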
NASA Astrophysics Data System (ADS)
Wolff, J.; Jankov, I.; Beck, J.; Carson, L.; Frimel, J.; Harrold, M.; Jiang, H.
2016-12-01
It is well known that global and regional numerical weather prediction ensemble systems are under-dispersive, producing unreliable and overconfident ensemble forecasts. Typical approaches to alleviate this problem include the use of multiple dynamic cores, multiple physics suite configurations, or a combination of the two. While these approaches may produce desirable results, they have practical and theoretical deficiencies and are more difficult and costly to maintain. An active area of research that promotes a more unified and sustainable system for addressing the deficiencies in ensemble modeling is the use of stochastic physics to represent model-related uncertainty. Stochastic approaches include Stochastic Parameter Perturbations (SPP), Stochastic Kinetic Energy Backscatter (SKEB), Stochastic Perturbation of Physics Tendencies (SPPT), or some combination of all three. The focus of this study is to assess the model performance within a convection-permitting ensemble at 3-km grid spacing across the Contiguous United States (CONUS) when using stochastic approaches. For this purpose, the test utilized a single physics suite configuration based on the operational High-Resolution Rapid Refresh (HRRR) model, with ensemble members produced by employing stochastic methods. Parameter perturbations were employed in the Rapid Update Cycle (RUC) land surface model and Mellor-Yamada-Nakanishi-Niino (MYNN) planetary boundary layer scheme. Results will be presented in terms of bias, error, spread, skill, accuracy, reliability, and sharpness using the Model Evaluation Tools (MET) verification package. Due to the high level of complexity of running a frequently updating (hourly), high spatial resolution (3 km), large domain (CONUS) ensemble system, extensive high performance computing (HPC) resources were needed to meet this objective. 
Supercomputing resources were provided through the National Center for Atmospheric Research (NCAR) Strategic Capability (NSC) project support, allowing for a more extensive set of tests over multiple seasons, consequently leading to more robust results. Through the use of these stochastic innovations and powerful supercomputing at NCAR, further insights and advancements in ensemble forecasting at convection-permitting scales will be possible.
NASA Astrophysics Data System (ADS)
Vervatis, Vassilios; De Mey, Pierre; Ayoub, Nadia; Kailas, Marios; Sofianos, Sarantis
2017-04-01
The project entitled Stochastic Coastal/Regional Uncertainty Modelling (SCRUM) aims at strengthening CMEMS in the areas of ocean uncertainty quantification, ensemble consistency verification, and ensemble data assimilation. The project has been initiated by the University of Athens and LEGOS/CNRS research teams, in the framework of CMEMS Service Evolution. The work is based on stochastic modelling of ocean physics and biogeochemistry in the Bay of Biscay, on an identical sub-grid configuration of the IBI-MFC system in its latest CMEMS operational version V2. In a first step, we use a perturbed-tendencies scheme to generate ensembles describing uncertainties in the open ocean and on the shelf, focusing on upper-ocean processes. In a second step, we introduce two methodologies (rank histograms and array modes) aimed at checking the consistency of the above ensembles with respect to TAC data and arrays. Preliminary results highlight that wind uncertainties dominate all other atmosphere-ocean sources of model error. The ensemble spread in medium-range ensembles is approximately 0.01 m for SSH and 0.15 °C for SST, though these values vary with season and across shelf regions. Ecosystem model uncertainties emerging from perturbations in the physics appear to be moderately larger than those from perturbing the concentrations of the biogeochemical compartments, resulting in a total chlorophyll spread of about 0.01 mg m-3. First consistency results show that the model ensemble and the pseudo-ensemble of OSTIA (L4) observed SSTs appear to exhibit nonzero joint probabilities with each other, since their error vicinities overlap. Rank histograms show that the model ensemble is initially under-dispersive, though results improve in the context of seasonal-range ensembles.
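The rank-histogram consistency check mentioned above can be computed directly: for each case, count how many ensemble members fall below the verifying observation and histogram the resulting ranks. A flat histogram indicates a consistent ensemble; inflated end bins indicate under-dispersion. A synthetic illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def rank_histogram(obs, members):
    """obs: (n_cases,), members: (n_cases, n_members) -> bin counts."""
    ranks = (members < obs[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=members.shape[1] + 1)

n_cases, n_members = 2000, 9
obs = rng.normal(0.0, 1.0, n_cases)
# Under-dispersive ensemble: member spread (0.5) is smaller than the
# spread of the verifying observations (1.0).
members = rng.normal(0.0, 0.5, (n_cases, n_members))

counts = rank_histogram(obs, members)
# Observations frequently fall outside the ensemble envelope, so the
# two outermost rank bins are inflated (the classic U shape).
```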
Ensemble of Thermostatically Controlled Loads: Statistical Physics Approach.
Chertkov, Michael; Chernyak, Vladimir
2017-08-17
Thermostatically controlled loads, e.g., air conditioners and heaters, are by far the most widespread consumers of electricity. Normally the devices are calibrated to provide the so-called bang-bang control - changing from on to off, and vice versa, depending on temperature. We considered aggregation of a large group of similar devices into a statistical ensemble, where the devices operate following the same dynamics, subject to stochastic perturbations and randomized, Poisson on/off switching policy. Using theoretical and computational tools of statistical physics, we analyzed how the ensemble relaxes to a stationary distribution and established a relationship between the relaxation and the statistics of the probability flux associated with devices' cycling in the mixed (discrete, switch on/off, and continuous temperature) phase space. This allowed us to derive the spectrum of the non-equilibrium (detailed balance broken) statistical system and uncover how switching policy affects oscillatory trends and the speed of the relaxation. Relaxation of the ensemble is of practical interest because it describes how the ensemble recovers from significant perturbations, e.g., forced temporary switching off aimed at utilizing the flexibility of the ensemble to provide "demand response" services to change consumption temporarily to balance a larger power grid. We discuss how the statistical analysis can guide further development of the emerging demand response technology.
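The relaxation behaviour described here can be reproduced qualitatively with a toy ensemble of thermostatic loads under a Poissonized bang-bang policy. All parameter values below are illustrative. After a forced synchronized switch-off (a crude "demand response" event), the aggregate on-fraction relaxes back toward its stationary level:

```python
import numpy as np

rng = np.random.default_rng(4)

n, dt = 5000, 0.01
t_env, t_set, band = 32.0, 22.0, 1.0   # ambient, setpoint, deadband (deg C)
tau, cool = 1.0, 20.0                  # thermal time constant, cooling rate
p_switch = 20.0 * dt                   # Poisson switching probability per step

T = rng.uniform(t_set - band, t_set + band, n)
on = np.zeros(n, dtype=bool)           # demand-response event: all forced off

duty = []
for _ in range(3000):
    T += dt * ((t_env - T) / tau - cool * on) + rng.normal(0.0, 0.02, n)
    # Poissonized bang-bang: outside the deadband, a device switches with
    # a fixed probability per step rather than deterministically.
    flip = rng.random(n) < p_switch
    on = np.where(flip & ~on & (T > t_set + band), True, on)
    on = np.where(flip & on & (T < t_set - band), False, on)
    duty.append(on.mean())

duty = np.array(duty)
# duty starts near 0 (synchronized off) and relaxes to a stationary
# level near 0.5, since heating and cooling rates are symmetric here.
```

The randomized switching is what damps the synchronized oscillations: with deterministic bang-bang control the ensemble would stay phase-locked much longer.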
An operational mesoscale ensemble data assimilation and prediction system: E-RTFDDA
NASA Astrophysics Data System (ADS)
Liu, Y.; Hopson, T.; Roux, G.; Hacker, J.; Xu, M.; Warner, T.; Swerdlin, S.
2009-04-01
Mesoscale (2-2000 km) meteorological processes differ from synoptic circulations in that mesoscale weather changes rapidly in space and time, and physical processes that are parameterized in NWP models play a major role. Complex interactions among synoptic circulations, regional and local terrain, land-surface heterogeneity and its physical properties, and the physical processes of radiative transfer, cloud and precipitation, and boundary-layer mixing are crucial in shaping regional weather and climate. Mesoscale ensemble analysis and prediction should sample the uncertainties of mesoscale modeling systems in representing these factors. An innovative mesoscale Ensemble Real-Time Four Dimensional Data Assimilation (E-RTFDDA) and forecasting system has been developed at NCAR. E-RTFDDA contains diverse ensemble perturbation approaches that consider uncertainties in all major system components to produce multi-scale, continuously cycling, probabilistic data assimilation and forecasting. A 30-member E-RTFDDA system with three nested domains with grid sizes of 30, 10 and 3.33 km has been running on a Department of Defense high-performance computing platform since September 2007. It has been applied at two very different US geographical locations, one in the western inter-mountain area and the other in the northeastern states, producing 6-h analyses and 48-h forecasts with four forecast cycles per day. The operational model outputs are analyzed to a) assess overall ensemble performance and properties, b) study terrain effects on mesoscale predictability, c) quantify the contribution of different ensemble perturbation approaches to the overall forecast skill, and d) assess the additional skill contributed by an ensemble calibration process based on a quantile-regression algorithm. The system and the results will be reported at the meeting.
NASA Astrophysics Data System (ADS)
Kunii, Masaru; Saito, Kazuo; Seko, Hiromu; Hara, Masahiro; Hara, Tabito; Yamaguchi, Munehiko; Gong, Jiandong; Charron, Martin; Du, Jun; Wang, Yong; Chen, Dehui
2011-05-01
During the period around the Beijing 2008 Olympic Games, the Beijing 2008 Olympics Research and Development Project (B08RDP) was conducted as part of the World Weather Research Program short-range weather forecasting research project. Mesoscale ensemble prediction (MEP) experiments were carried out by six organizations in near-real time, in order to share their experiences in the development of MEP systems. The purpose of this study is to objectively verify these experiments and to clarify the problems associated with current MEP systems. Verification was performed using the MEP outputs interpolated onto a common verification domain with a horizontal resolution of 15 km. For all systems, the ensemble spreads grew as the forecast time increased, and the ensemble mean improved on the forecast errors of the individual control forecasts in the verification against the analysis fields. However, each system exhibited individual characteristics according to the MEP method. Some participants used physical perturbation methods, and the significance of these methods was confirmed by the verification. However, the mean error (ME) of the ensemble forecast in some systems was worse than that of the individual control forecast. This result suggests that careful attention must be paid to the design of physical perturbations.
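The finding that the ensemble mean improves on the control forecast, while the mean error (bias) need not improve, follows from error averaging; a minimal synthetic illustration, assuming independent member errors:

```python
import numpy as np

rng = np.random.default_rng(5)

n_cases, n_members = 2000, 10
truth = rng.normal(0.0, 1.0, n_cases)
# Each member = truth + an independent unit-variance error;
# the "control" is member 0.
members = truth[:, None] + rng.normal(0.0, 1.0, (n_cases, n_members))

rmse_control = np.sqrt(((members[:, 0] - truth) ** 2).mean())
rmse_mean = np.sqrt(((members.mean(axis=1) - truth) ** 2).mean())
# Averaging independent errors shrinks the RMSE by ~1/sqrt(n_members);
# a systematic bias shared by all members (a nonzero ME) would not be
# reduced this way, which is the caveat the verification above raises.
```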
Ideas for a pattern-oriented approach towards a VERA analysis ensemble
NASA Astrophysics Data System (ADS)
Gorgas, T.; Dorninger, M.
2010-09-01
For many applications in meteorology, and especially for verification purposes, it is important to have information about the uncertainties of observation and analysis data. A high quality of these "reference data" is an absolute necessity, as their uncertainties are reflected in the verification measures. The VERA (Vienna Enhanced Resolution Analysis) scheme includes a sophisticated quality control tool which corrects observational data and provides an estimate of the observation uncertainty. It is crucial for meteorologically and physically reliable analysis fields. VERA is based on a variational principle and does not need any first-guess fields. It is therefore independent of NWP models and can also be used as an unbiased reference for real-time model verification. For downscaling purposes VERA uses a priori knowledge of small-scale physical processes over complex terrain, the so-called "fingerprint technique", which transfers information from data-rich to data-sparse regions. The enhanced joint D-PHASE and COPS data set forms the data base for the analysis ensemble study. For the WWRP projects D-PHASE and COPS, a joint activity was started to collect GTS and non-GTS data from the national and regional meteorological services in Central Europe for 2007. Data from more than 11,000 stations are available for high-resolution analyses. The use of random numbers as perturbations in ensemble experiments is a common approach in meteorology. In most implementations, as in NWP-model ensemble systems, the focus lies on error growth and propagation in space and time. When defining errors in analysis fields, we have to consider the fact that analyses are not time dependent, so no perturbation method aimed at temporal evolution is possible.
Further, the method applied should respect the two major sources of analysis error: observation errors and analysis (interpolation) errors. With the concept of an analysis ensemble we hope to gain a more detailed view of both sources of analysis error. For the computation of the VERA ensemble members, a sample of Gaussian random perturbations is produced for each station and parameter. The standard deviation of the perturbations is based on the correction proposals of the VERA QC scheme, which provides "natural" limits for the ensemble. In order to put more emphasis on the weather situation, we aim to integrate the main synoptic field structures as weighting factors for the perturbations. Two well-established approaches are used to define these main field structures: principal component analysis and a 2-D discrete wavelet transform. The results of tests concerning the implementation of this pattern-supported analysis ensemble system, and a comparison of the different approaches, are given in the presentation.
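A sketch of the principal-component variant of such pattern-supported perturbations, using synthetic station data (the QC-based scaling is a single illustrative number, not the actual VERA QC output):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic station anomalies: 200 time samples at 50 stations,
# dominated by 3 large-scale patterns plus weak noise.
n_times, n_stations, k = 200, 50, 3
field = (rng.normal(0.0, 1.0, (n_times, k)) @ rng.normal(0.0, 1.0, (k, n_stations))
         + 0.1 * rng.normal(0.0, 1.0, (n_times, n_stations)))

# Leading principal components (EOFs) define the main field structures.
anom = field - field.mean(axis=0)
_, _, vt = np.linalg.svd(anom, full_matrices=False)
patterns = vt[:k]                     # (k, n_stations), orthonormal rows

# Pattern-supported perturbation: random amplitudes on the leading EOFs
# plus weak white noise, scaled by a QC-based error estimate.
qc_sigma = 0.5                        # illustrative per-station error scale
perturbation = qc_sigma * (rng.normal(0.0, 1.0, k) @ patterns
                           + 0.1 * rng.normal(0.0, 1.0, n_stations))
```

Weighting the Gaussian noise by the leading patterns concentrates the ensemble spread on synoptically meaningful structures instead of station-by-station white noise.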
NASA Astrophysics Data System (ADS)
Subramanian, Aneesh C.; Palmer, Tim N.
2017-06-01
Stochastic schemes to represent model uncertainty in the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble prediction system have helped improve its probabilistic forecast skill over the past decade, both by improving its reliability and by reducing the ensemble-mean error. The largest uncertainties in the model arise from the physics parameterizations. In the tropics, the parameterization of moist convection presents a major challenge for the accurate prediction of weather and climate. Superparameterization is a promising alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud-resolving model (CRM) embedded within a global climate model (GCM). In this paper, we compare the impact of initial random perturbations in embedded CRMs, within the ECMWF ensemble prediction system, with the stochastically perturbed physical tendencies (SPPT) scheme as a way to represent model uncertainty in medium-range tropical weather forecasts. We focus in particular on forecasts of tropical convection and dynamics during the MJO events of October-November 2011. These are well-studied events for MJO dynamics, as they were heavily observed during the DYNAMO field campaign. We show that a multiscale ensemble modeling approach helps improve forecasts of certain aspects of tropical convection during the MJO events, while it also tends to degrade certain large-scale dynamic fields relative to the SPPT approach used operationally at ECMWF.
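Schematically, SPPT multiplies the net physics tendency by (1 + r), where r is a bounded, temporally correlated random pattern. A toy one-column version, with all constants illustrative rather than the operational ECMWF settings:

```python
import numpy as np

rng = np.random.default_rng(10)

n_members, n_steps, dt = 1000, 48, 1.0
tau, sigma, clip = 6.0, 0.5, 0.9
phi = np.exp(-1.0 / tau)

x = np.full(n_members, 10.0)    # identical initial state for every member
r = np.zeros(n_members)
for _ in range(n_steps):
    physics = -0.1 * (x - 5.0)                       # toy physics tendency
    r = phi * r + np.sqrt(1 - phi**2) * rng.normal(0.0, sigma, n_members)
    r = np.clip(r, -clip, clip)                      # keep 1 + r positive
    x += dt * physics * (1.0 + r)                    # SPPT-perturbed update

# Identical initial states diverge purely through the multiplicative
# tendency perturbations, generating ensemble spread.
spread = x.std()
```

The clipping bound keeps the multiplier positive, so the perturbed tendency never reverses the sign of the physics it is meant to perturb.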
Comparison of two perturbation methods to estimate the land surface modeling uncertainty
NASA Astrophysics Data System (ADS)
Su, H.; Houser, P.; Tian, Y.; Kumar, S.; Geiger, J.; Belvedere, D.
2007-12-01
In land surface modeling, it is almost impossible to simulate the land surface processes without error, because the Earth system is highly complex and the physics of land processes is not yet sufficiently understood. In most cases, users need not only the model output but also an estimate of the modeling uncertainty, to judge how reliable the modeling is. Ensemble perturbation is an effective way to estimate the uncertainty in land surface modeling, since land surface models are highly nonlinear, which makes analytical approaches inapplicable. Ideally, perturbation noise follows a zero-mean Gaussian distribution; however, this requirement cannot be satisfied if the perturbed variables in a land surface model have physical boundaries, because part of the perturbation noise has to be removed for the land surface model to be fed properly. Two different perturbation methods are employed in our study to investigate their impact on quantifying land surface modeling uncertainty, based on the Land Information System (LIS) framework developed by the NASA/GSFC land team. One perturbation method is the built-in algorithm named "STATIC" in LIS version 5; the other is a new perturbation algorithm recently developed to minimize the overall bias of the perturbations by incorporating additional information from the whole time series of the perturbed variable. The statistical properties of the perturbation noise generated by the two algorithms are investigated thoroughly, using a large ensemble size on a NASA supercomputer, and the corresponding uncertainty estimates based on the two perturbation methods are compared. Their further impacts on data assimilation are also discussed. Finally, an optimal perturbation method is suggested.
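The bias problem described here — clipping zero-mean noise at a physical bound makes it no longer zero-mean — is easy to demonstrate, together with one bound-respecting alternative (a mean-corrected lognormal multiplier). All values are illustrative, and neither scheme is the specific LIS "STATIC" algorithm:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 100_000
soil_moisture, sigma = 0.05, 0.04   # state near its physical bound at 0

# Additive zero-mean noise, clipped at the bound: no longer zero-mean,
# so the perturbed ensemble acquires a positive bias.
clipped = np.clip(soil_moisture + rng.normal(0.0, sigma, n), 0.0, None)
bias_clip = clipped.mean() - soil_moisture

# A bound-respecting alternative: a mean-one lognormal multiplier
# (the -0.5*s**2 shift makes E[exp(.)] = 1), which never violates the bound.
s = sigma / soil_moisture
multiplier = np.exp(rng.normal(-0.5 * s**2, s, n))
bias_logn = soil_moisture * multiplier.mean() - soil_moisture
```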
NASA Astrophysics Data System (ADS)
Christensen, H. M.; Moroz, I.; Palmer, T.
2015-12-01
It is now acknowledged that representing model uncertainty in atmospheric simulators is essential for the production of reliable probabilistic ensemble forecasts, and a number of different techniques have been proposed for this purpose. Stochastic convection parameterization schemes use random numbers to represent the difference between a deterministic parameterization scheme and the true atmosphere, accounting for the unresolved subgrid-scale variability associated with convective clouds. An alternative approach varies the values of poorly constrained physical parameters in the model to represent the uncertainty in these parameters. This study presents new perturbed parameter schemes for use in the European Centre for Medium-Range Weather Forecasts (ECMWF) convection scheme. Two types of scheme are developed and implemented. Both schemes represent the joint uncertainty in four of the parameters in the convection parametrisation scheme, which was estimated using the Ensemble Prediction and Parameter Estimation System (EPPES). The first scheme is a fixed perturbed parameter scheme, where the values of the uncertain parameters are changed between ensemble members but held constant over the duration of the forecast. The second is a stochastically varying perturbed parameter scheme. The performance of these schemes was compared to the ECMWF operational stochastic scheme, Stochastically Perturbed Parametrisation Tendencies (SPPT), and to a model which does not represent uncertainty in convection. The skill of probabilistic forecasts made using the different models was evaluated. While the perturbed parameter schemes improve on the stochastic parametrisation in some regards, the SPPT scheme outperforms the perturbed parameter approaches for forecast variables that are particularly sensitive to convection. Overall, SPPT schemes are the most skilful representations of model uncertainty due to convection parametrisation. Reference: H. M. Christensen, I. M. Moroz, and T. N. Palmer, 2015: Stochastic and Perturbed Parameter Representations of Model Uncertainty in Convection Parameterization. J. Atmos. Sci., 72, 2525-2544.
A comparison of breeding and ensemble transform vectors for global ensemble generation
NASA Astrophysics Data System (ADS)
Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan
2012-02-01
To compare initial perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial perturbation methods but with different ensemble sizes, based on the spectral model T213/L31, are constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores, such as the forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability, are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in terms of the anomaly correlation coefficient (ACC), a deterministic measure of the ensemble mean; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to the orthogonality among its ensemble perturbations as well as its consistency with the data assimilation system. Therefore, this study may serve as a reference for configuring the best ensemble prediction system for operational use.
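The CRPS used in such verification can be estimated directly from ensemble members via the energy-form identity CRPS = E|X − y| − ½ E|X − X′|, where X, X′ are independent draws from the forecast distribution and y is the verifying value. A synthetic sketch (the two ensembles here are stand-ins, not the breeding/transform systems):

```python
import numpy as np

rng = np.random.default_rng(8)

def crps_ensemble(obs, members):
    """Energy-form sample CRPS: E|X - y| - 0.5*E|X - X'| per case."""
    term1 = np.abs(members - obs[:, None]).mean(axis=1)
    term2 = np.abs(members[:, :, None] - members[:, None, :]).mean(axis=(1, 2))
    return term1 - 0.5 * term2

n_cases, m = 1000, 20
truth = rng.normal(0.0, 1.0, n_cases)
sharp = truth[:, None] + rng.normal(0.0, 1.0, (n_cases, m))  # spread 1
broad = truth[:, None] + rng.normal(0.0, 3.0, (n_cases, m))  # spread 3

crps_sharp = crps_ensemble(truth, sharp).mean()
crps_broad = crps_ensemble(truth, broad).mean()
# CRPS rewards forecasts that are both sharp and centred on the truth,
# so the needlessly broad ensemble scores worse.
```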
Using palaeoclimate data to improve models of the Antarctic Ice Sheet
NASA Astrophysics Data System (ADS)
Phipps, Steven; King, Matt; Roberts, Jason; White, Duanne
2017-04-01
Ice sheet models are the most descriptive tools available to simulate the future evolution of the Antarctic Ice Sheet (AIS), including its contribution towards changes in global sea level. However, our knowledge of the dynamics of the coupled ice-ocean-lithosphere system is inevitably limited, in part due to a lack of observations. Furthermore, to build computationally efficient models that can be run for multiple millennia, it is necessary to use simplified descriptions of ice dynamics. Ice sheet modelling is therefore an inherently uncertain exercise. The past evolution of the AIS provides an opportunity to constrain the description of physical processes within ice sheet models and, therefore, to constrain our understanding of the role of the AIS in driving changes in global sea level. We use the Parallel Ice Sheet Model (PISM) to demonstrate how palaeoclimate data can improve our ability to predict the future evolution of the AIS. A 50-member perturbed-physics ensemble is generated, spanning uncertainty in the parameterisations of three key physical processes within the model: (i) the stress balance within the ice sheet, (ii) basal sliding and (iii) calving of ice shelves. A Latin hypercube approach is used to optimally sample the range of uncertainty in parameter values. This perturbed-physics ensemble is used to simulate the evolution of the AIS from the Last Glacial Maximum (~21,000 years ago) to present. Palaeoclimate records are then used to determine which ensemble members are the most realistic. This allows us to use data on past climates to directly constrain our understanding of the past contribution of the AIS towards changes in global sea level. Critically, it also allows us to determine which ensemble members are likely to generate the most realistic projections of the future evolution of the AIS.
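The Latin hypercube step described above can be sketched as follows; the three parameter ranges are invented placeholders, not the actual PISM values:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Stratified LHS: one sample per equal-probability stratum for each parameter."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)       # shape (n_params, 2): [lo, hi]
    n_params = bounds.shape[0]
    # stratified uniforms on [0, 1), strata shuffled independently per parameter
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))) / n_samples
    for j in range(n_params):
        rng.shuffle(u[:, j])
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# e.g. three hypothetical parameters: enhancement factor, sliding exponent, calving constant
design = latin_hypercube(50, [(1.0, 5.0), (0.25, 1.0), (1e15, 1e18)])
```

Each parameter's range is split into 50 equal-probability strata and every stratum is sampled exactly once, which is what makes LHS more space-filling than plain random sampling for the same ensemble size.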
Using paleoclimate data to improve models of the Antarctic Ice Sheet
NASA Astrophysics Data System (ADS)
King, M. A.; Phipps, S. J.; Roberts, J. L.; White, D.
2016-12-01
Ice sheet models are the most descriptive tools available to simulate the future evolution of the Antarctic Ice Sheet (AIS), including its contribution towards changes in global sea level. However, our knowledge of the dynamics of the coupled ice-ocean-lithosphere system is inevitably limited, in part due to a lack of observations. Furthermore, to build computationally efficient models that can be run for multiple millennia, it is necessary to use simplified descriptions of ice dynamics. Ice sheet modeling is therefore an inherently uncertain exercise. The past evolution of the AIS provides an opportunity to constrain the description of physical processes within ice sheet models and, therefore, to constrain our understanding of the role of the AIS in driving changes in global sea level. We use the Parallel Ice Sheet Model (PISM) to demonstrate how paleoclimate data can improve our ability to predict the future evolution of the AIS. A large, perturbed-physics ensemble is generated, spanning uncertainty in the parameterizations of four key physical processes within ice sheet models: ice rheology, ice shelf calving, and the stress balances within ice sheets and ice shelves. A Latin hypercube approach is used to optimally sample the range of uncertainty in parameter values. This perturbed-physics ensemble is used to simulate the evolution of the AIS from the Last Glacial Maximum (~21,000 years ago) to present. Paleoclimate records are then used to determine which ensemble members are the most realistic. This allows us to use data on past climates to directly constrain our understanding of the past contribution of the AIS towards changes in global sea level. Critically, it also allows us to determine which ensemble members are likely to generate the most realistic projections of the future evolution of the AIS.
NASA Astrophysics Data System (ADS)
Lopez, Ana; Fung, Fai; New, Mark; Watts, Glenn; Weston, Alan; Wilby, Robert L.
2009-08-01
The majority of climate change impacts and adaptation studies so far have been based on at most a few deterministic realizations of future climate, usually representing different emissions scenarios. Large ensembles of climate models are increasingly available, either as ensembles of opportunity or perturbed physics ensembles, providing a wealth of additional data that is potentially useful for improving adaptation strategies to climate change. Because of the novelty of this ensemble information, there is little previous experience of practical applications or of the added value of this information for impacts and adaptation decision making. This paper evaluates the value of perturbed physics ensembles of climate models for understanding and planning public water supply under climate change. We deliberately select water resource models that are already used by water supply companies and regulators, on the assumption that uptake of information from large ensembles of climate models will be more likely if it does not involve significant investment in new modeling tools and methods. We illustrate the methods with a case study on the Wimbleball water resource zone in the southwest of England. This zone is sufficiently simple to demonstrate the utility of the approach but has enough complexity to allow a variety of different decisions to be made. Our research shows that the additional information contained in the climate model ensemble provides a better understanding of the possible ranges of future conditions, compared to the use of single-model scenarios. Furthermore, with careful presentation, decision makers will find the results from large ensembles of models more accessible and be able to more easily compare the merits of different management options and the timing of different adaptation measures. The overhead in additional time and expertise for carrying out the impacts analysis will be justified by the increased quality of the decision-making process.
We remark that even though we have focused our study on a water resource system in the United Kingdom, our conclusions about the added value of climate model ensembles in guiding adaptation decisions can be generalized to other sectors and geographical regions.
NASA Astrophysics Data System (ADS)
Millar, R.; Ingram, W.; Allen, M. R.; Lowe, J.
2013-12-01
Temperature and precipitation patterns are the climate variables with the greatest impacts on both natural and human systems. Due to the small spatial scales and the many interactions involved in the global hydrological cycle, general circulation model (GCM) representations of precipitation changes are subject to considerable uncertainty. Quantifying and understanding the causes of uncertainty (and identifying robust features of predictions) in both global and local precipitation change is an essential challenge of climate science. We have used the huge distributed computing capacity of the climateprediction.net citizen science project to examine parametric uncertainty in an ensemble of 20,000 perturbed-physics versions of the HadCM3 general circulation model. The ensemble has been selected to have a control climate in top-of-atmosphere energy balance [Yamazaki et al. 2013, J.G.R.]. We force this ensemble with several idealised climate-forcing scenarios, including carbon dioxide step and transient profiles, solar radiation management geoengineering experiments with stratospheric aerosols, and short-lived climate forcing agents. We will present the results from several of these forcing scenarios under GCM parametric uncertainty. We examine the global mean precipitation energy budget to understand whether a simple non-linear global precipitation model [Good et al. 2012, Clim. Dyn.] explains precipitation changes in transient climate projections under GCM parametric uncertainty more robustly than a simple linear tropospheric energy balance model. We will also present work investigating robust conclusions about precipitation changes in a balanced ensemble of idealised solar radiation management scenarios [Kravitz et al. 2011, Atmos. Sci. Let.].
NASA Astrophysics Data System (ADS)
Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong
2017-07-01
This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of Hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of the Weather Research and Forecasting (WRF) model. For the generation of ensemble perturbations we apply two techniques: the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). For the ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for the EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. The ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance for both EnVar schemes. The sensitivity of analyses and forecasts to the two ensemble generation techniques is investigated in this study. It is found that the EnVar system is rather stable with different ensemble update techniques in terms of its skill in improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations are likely to include slightly less organized spatial structures than those in the ETKF-EnVar, and the perturbations of the latter are constructed more dynamically. Detailed diagnostics reveal that both EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location with the hurricane-specific error covariance. On average, the analyses and forecasts from the ETKF-EnVar have slightly smaller errors than those from the EDA-EnVar in terms of track, intensity, and precipitation. Moreover, the ETKF-EnVar yields better forecasts when verified against conventional observations.
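The ETKF perturbation update mentioned above can be sketched in a few lines: it rotates and shrinks the forecast perturbations with the symmetric square root of the analysis-error covariance expressed in ensemble space. A toy version (not the WRF hybrid EnVar implementation; localization and inflation are omitted):

```python
import numpy as np

def etkf_update(Xf, H, R, y):
    """One ETKF analysis step. Xf: (n_state, K) forecast ensemble members."""
    n, K = Xf.shape
    xbar = Xf.mean(axis=1, keepdims=True)
    Xp = Xf - xbar                                    # forecast perturbations
    Y = H @ Xp                                        # obs-space perturbations
    Rinv = np.linalg.inv(R)
    A = Y.T @ Rinv @ Y / (K - 1)                      # ensemble-space precision increment
    gam, C = np.linalg.eigh(A)                        # A = C diag(gam) C^T
    gam = np.clip(gam, 0.0, None)
    T = C @ np.diag(1.0 / np.sqrt(gam + 1.0)) @ C.T   # symmetric square-root transform
    d = np.asarray(y, dtype=float).reshape(-1, 1) - H @ xbar
    wbar = C @ np.diag(1.0 / (gam + 1.0)) @ C.T @ (Y.T @ Rinv @ d) / (K - 1)
    return (xbar + Xp @ wbar) + Xp @ T                # analysis ensemble
```

The transform T has eigenvalues no larger than one, so the analysis spread never exceeds the forecast spread, consistent with assimilating information.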
Parameter Uncertainty on AGCM-simulated Tropical Cyclones
NASA Astrophysics Data System (ADS)
He, F.
2015-12-01
This work studies parameter uncertainty in tropical cyclone (TC) simulations in Atmospheric General Circulation Models (AGCMs) using the Reed-Jablonowski TC test case, implemented in the Community Atmosphere Model (CAM). It examines the impact of 24 parameters across the physical parameterization schemes that represent the convection, turbulence, precipitation and cloud processes in AGCMs. The one-at-a-time (OAT) sensitivity analysis method first quantifies their relative importance for TC simulations and identifies the key parameters for six TC characteristics: intensity, precipitation, longwave cloud radiative forcing (LWCF), shortwave cloud radiative forcing (SWCF), cloud liquid water path (LWP) and ice water path (IWP). Then, 8 physical parameters are chosen and perturbed using the Latin hypercube sampling (LHS) method. The comparison between the OAT ensemble run and the LHS ensemble run shows that the simulated TC intensity is mainly affected by the parcel fractional mass entrainment rate in the Zhang-McFarlane (ZM) deep convection scheme. The nonlinear interactive effect among different physical parameters is negligible for simulated TC intensity. In contrast, this nonlinear interactive effect plays a significant role in the other simulated tropical cyclone characteristics (precipitation, LWCF, SWCF, LWP and IWP) and greatly enlarges their simulated uncertainties. The statistical emulator Extended Multivariate Adaptive Regression Splines (EMARS) is applied to characterize the response functions for the nonlinear effect. Last, we find that the intensity uncertainty caused by physical parameters is comparable to the uncertainty caused by model structure (e.g., grid) and initial conditions (e.g., sea surface temperature, atmospheric moisture). These findings suggest the importance of using the perturbed physics ensemble (PPE) method to revisit tropical cyclone prediction under climate change scenarios.
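The OAT screening stage can be sketched generically; the toy response function and parameter ranges below are invented for illustration and are unrelated to the actual CAM physics:

```python
import numpy as np

def oat_sensitivity(model, defaults, bounds):
    """Vary each parameter alone over its range; report the response range per parameter."""
    sens = {}
    for name, (lo, hi) in bounds.items():
        outs = []
        for val in (lo, defaults[name], hi):
            params = dict(defaults)      # all other parameters stay at defaults
            params[name] = val
            outs.append(model(params))
        sens[name] = max(outs) - min(outs)
    return sens

# toy response: an "intensity" metric dominated by an entrainment-like parameter a
toy = lambda p: p["a"] ** 2 + 0.1 * p["b"]
ranking = oat_sensitivity(toy, {"a": 1.0, "b": 1.0}, {"a": (0.0, 2.0), "b": (0.0, 2.0)})
```

OAT is cheap (2-3 runs per parameter) but, by construction, blind to the parameter interactions that the LHS ensemble is then used to expose.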
Leptonic decay constants for D-mesons from 3-flavour CLS ensembles
NASA Astrophysics Data System (ADS)
Collins, Sara; Eckert, Kevin; Heitger, Jochen; Hofmann, Stefan; Söldner, Wolfgang
2018-03-01
We report on the status of an ongoing effort by the RQCD and ALPHA Collaborations, aimed at determining leptonic decay constants of charmed mesons. Our analysis is based on large-volume ensembles generated within the CLS effort, employing Nf = 2 + 1 non-perturbatively O(a) improved Wilson quarks, a tree-level Symanzik-improved gauge action and open boundary conditions. The ensembles cover lattice spacings from a ≈ 0.09 fm to a ≈ 0.05 fm, with pion masses varied from 420 to 200 MeV. To extrapolate to the physical masses, we follow both the (2ml + ms) = const. and the ms = const. lines in parameter space.
NASA Astrophysics Data System (ADS)
Bustamante, J. F. F.; Chou, S. C.; Gomes, J. L.
2009-04-01
Southeast Brazil, in the coastal and mountainous region called Serra do Mar between Sao Paulo and Rio de Janeiro, is subject to frequent landslides and floods. The Eta Model has been producing good quality forecasts over South America at about 40-km horizontal resolution. For these hazards, however, more detailed and probabilistic information on the risks should be provided with the forecasts. Thus, a short-range ensemble prediction system (SREPS) based on the Eta Model is being constructed. Ensemble members derived from perturbed initial and lateral boundary conditions did not provide enough spread for the forecasts, so members with model physics perturbations are being included and tested. The objective of this work is to construct more members for the Eta SREPS by adding physics-perturbed members. The Eta Model is configured at 10-km resolution with 38 layers in the vertical. The domain covers most of Southeast Brazil, centered over the Serra do Mar region. The constructed members comprise variations of the Betts-Miller-Janjic (BMJ) and Kain-Fritsch (KF) cumulus parameterization schemes. Three members were constructed from the BMJ scheme by varying the saturation pressure deficit profile over land and sea, and two members of the KF scheme were included, using the standard KF scheme and a version of the KF scheme with an added momentum flux. One of the runs with the BMJ scheme is the control run, as it was used for the initial condition perturbation SREPS. The forecasts were tested for 6 cases of South Atlantic Convergence Zone (SACZ) events. The SACZ is a common summer season feature of the Southern Hemisphere that causes persistent rain for a few days over Southeast Brazil, and it frequently organizes over the Serra do Mar region. These events are particularly interesting because the persistent rains can accumulate large amounts and cause generalized landslides and deaths. 
With respect to precipitation, the KF scheme versions have been shown to reach the larger precipitation peaks of the events. On the other hand, for predicted 850-hPa temperature, the KF scheme versions produce a positive bias and the BMJ versions produce a negative bias; therefore, the ensemble mean forecast of 850-hPa temperature from this SREPS exhibits smaller error than the control member. Specific humidity shows a smaller bias in the KF scheme. In general, the ensemble mean produced forecasts closer to the observations than the control run.
NASA Technical Reports Server (NTRS)
Maggioni, V.; Anagnostou, E. N.; Reichle, R. H.
2013-01-01
The contribution of rainfall forcing errors relative to model (structural and parameter) uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM), forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty) or by adding randomly generated noise (representing model structure and parameter uncertainty) to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.
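The rank-histogram diagnostic used in this study counts where each observation falls within the sorted ensemble; a flat histogram indicates consistent spread, while the U-shape reported above signals under-dispersion. A minimal sketch (ties and observation error are ignored):

```python
import numpy as np

def rank_histogram(ensemble, obs):
    """ensemble: (n_cases, K) members; obs: (n_cases,). Returns K+1 rank counts."""
    ensemble = np.asarray(ensemble, dtype=float)
    obs = np.asarray(obs, dtype=float)
    ranks = (ensemble < obs[:, None]).sum(axis=1)   # members falling below each obs
    return np.bincount(ranks, minlength=ensemble.shape[1] + 1)

# an observation between the 2nd and 3rd of three members gets rank 2
print(rank_histogram([[1.0, 2.0, 3.0]], [2.5]))
```

When the observation is statistically indistinguishable from the members, every rank is equally likely, which is the flatness criterion applied to the soil moisture ensembles above.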
NASA Astrophysics Data System (ADS)
Booth, B.; Collins, M.; Harris, G.; Chris, H.; Jones, C.
2007-12-01
A number of recent studies have highlighted the risk of abrupt dieback of the Amazon rain forest as a result of climate changes over the next century. The recent 2005 Amazon drought brought wider acceptance of the idea that climate drivers will play a significant role in future rain forest stability, yet that stability is still subject to a considerable degree of uncertainty. We present a study which seeks to explore some of the underlying uncertainties, both in the climate drivers of dieback and in the terrestrial land surface formulation used in GCMs. We adopt a perturbed physics approach which forms part of a wider project covered in an accompanying abstract submitted to the multi-model ensembles session. We first couple the same interactive land surface model to a number of different versions of the Hadley Centre atmosphere-ocean model that exhibit a wide range of different physical climate responses in the future. The rainforest extent is shown to collapse in all model cases, but the timing of the collapse is dependent on the magnitude of the climate drivers. In the second part, we explore uncertainties in the terrestrial land surface model using the perturbed physics ensemble approach, perturbing uncertain parameters which have an important role in the vegetation and soil response. Contrasting the two approaches enables a greater understanding of the relative importance of climatic and land surface model uncertainties in Amazon dieback.
Using a Very Large Ensemble to Examine the Role of the Ocean in Recent Warming Trends.
NASA Astrophysics Data System (ADS)
Sparrow, S. N.; Millar, R.; Otto, A.; Yamazaki, K.; Allen, M. R.
2014-12-01
Results from a very large (~10,000 member) perturbed physics and perturbed initial condition ensemble are presented for the period 1980 to present. A set of model versions that can shadow recent surface and upper ocean observations are identified and the range of uncertainty in the Atlantic Meridional Overturning Circulation (AMOC) assessed. This experiment uses the Met Office Hadley Centre Coupled Model version 3 (HadCM3), a coupled model with fully dynamic atmosphere and ocean components, as part of the climateprediction.net distributed computing project. Parameters are selected so that the model has good top-of-atmosphere radiative balance, and simulations are run without flux adjustments, which "nudge" the climate towards a realistic state but have an adverse effect on important ocean processes. This ensemble provides scientific insights on the possible role of the AMOC, among other factors, in climate trends, or lack thereof, over the past 20 years. This ensemble is also used to explore how the occurrence of hiatus events of different durations varies for models with different transient climate response (TCR). We show that models with a higher TCR are less likely to produce a 15-year warming hiatus in global surface temperature than those with a lower TCR.
Potentialities of ensemble strategies for flood forecasting over the Milano urban area
NASA Astrophysics Data System (ADS)
Ravazzani, Giovanni; Amengual, Arnau; Ceppi, Alessandro; Homar, Víctor; Romero, Romu; Lombardi, Gabriele; Mancini, Marco
2016-08-01
Analysis of ensemble forecasting strategies, which can provide a tangible backing for flood early warning procedures and mitigation measures over the Mediterranean region, is one of the fundamental motivations of the international HyMeX programme. Here, we examine two severe hydrometeorological episodes that affected the Milano urban area and for which the complex flood protection system of the city did not completely succeed. Indeed, flood damage has increased exponentially during the last 60 years, due to industrial and urban development. Thus, the improvement of the Milano flood control system requires a synergy between structural and non-structural approaches. First, we examine how land-use changes due to urban development have altered the hydrological response to intense rainfall. Second, we test a flood forecasting system which comprises the Flash-flood Event-based Spatially distributed rainfall-runoff Transformation, including Water Balance (FEST-WB) and the Weather Research and Forecasting (WRF) models. Deep moist convection and extreme precipitation are difficult to forecast accurately, owing to uncertainties arising from the numerical weather prediction (NWP) physical parameterizations and to high sensitivity to misrepresentation of the atmospheric state; two hydrological ensemble prediction systems (HEPS) have therefore been designed to explicitly cope with uncertainties in the initial and lateral boundary conditions (IC/LBCs) and physical parameterizations of the NWP model. No substantial differences in skill have been found between the two ensemble strategies, even when considering an enhanced diversity of IC/LBCs for the perturbed initial conditions ensemble. Furthermore, no additional benefit has been found from considering more frequent LBCs in a mixed physics ensemble, as the ensemble spread seems to be reduced. 
These findings could help to design the most appropriate ensemble strategies in advance of such hydrometeorological extremes, given the computational cost of running such advanced HEPSs for operational purposes.
NASA Astrophysics Data System (ADS)
Saito, Kazuo; Hara, Masahiro; Kunii, Masaru; Seko, Hiromu; Yamaguchi, Munehiko
2011-05-01
Different initial perturbation methods for the mesoscale ensemble prediction were compared by the Meteorological Research Institute (MRI) as a part of the intercomparison of mesoscale ensemble prediction systems (EPSs) of the World Weather Research Programme (WWRP) Beijing 2008 Olympics Research and Development Project (B08RDP). Five initial perturbation methods for mesoscale ensemble prediction were developed for B08RDP and compared at MRI: (1) a downscaling method of the Japan Meteorological Agency (JMA)'s operational one-week EPS (WEP), (2) a targeted global model singular vector (GSV) method, (3) a mesoscale model singular vector (MSV) method based on the adjoint model of the JMA non-hydrostatic model (NHM), (4) a mesoscale breeding growing mode (MBD) method based on the NHM forecast and (5) a local ensemble transform (LET) method based on the local ensemble transform Kalman filter (LETKF) using NHM. These perturbation methods were applied to the preliminary experiments of the B08RDP Tier-1 mesoscale ensemble prediction with a horizontal resolution of 15 km. To make the comparison easier, the same horizontal resolution (40 km) was employed for the three mesoscale model-based initial perturbation methods (MSV, MBD and LET). The GSV method completely outperformed the WEP method, confirming the advantage of targeting in mesoscale EPS. The GSV method generally performed well with regard to root mean square errors of the ensemble mean, large growth rates of ensemble spreads throughout the 36-h forecast period, and high detection rates and high Brier skill scores (BSSs) for weak rains. On the other hand, the mesoscale model-based initial perturbation methods showed good detection rates and BSSs for intense rains. The MSV method showed a rapid growth in the ensemble spread of precipitation up to a forecast time of 6 h, which suggests suitability of the mesoscale SV for short-range EPSs, but the initial large growth of the perturbation did not last long. 
The performance of the MBD method was good for ensemble prediction of intense rain with a relatively small computing cost. The LET method showed similar characteristics to the MBD method, but the spread and growth rate were slightly smaller and the relative operating characteristic area skill score and BSS did not surpass those of MBD. These characteristic features of the five methods were confirmed by checking the evolution of the total energy norms and their growth rates. Characteristics of the initial perturbations obtained by four methods (GSV, MSV, MBD and LET) were examined for the case of a synoptic low-pressure system passing over eastern China. With GSV and MSV, the regions of large spread were near the low-pressure system, but with MSV, the distribution was more concentrated on the mesoscale disturbance. On the other hand, large-spread areas were observed southwest of the disturbance in MBD and LET. The horizontal pattern of LET perturbation was similar to that of MBD, but the amplitude of the LET perturbation reflected the observation density.
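The breeding technique underlying the MBD method above is straightforward to sketch: advance a control and a perturbed run together, and periodically rescale their difference back to a fixed amplitude so it aligns with the fastest-growing modes. A toy version with a chaotic logistic-map lattice standing in for the NHM forecast model (all settings are illustrative):

```python
import numpy as np

def model_step(x):
    """Toy chaotic 'model': independent logistic maps (stand-in for a forecast model)."""
    return 4.0 * x * (1.0 - x)

def breed(x0, n_cycles=40, size=1e-4, seed=0):
    """Breeding of growing modes: rescale the control/perturbed difference each cycle."""
    rng = np.random.default_rng(seed)
    ctrl = x0.copy()
    pert = np.clip(x0 + size * rng.standard_normal(x0.shape), 0.0, 1.0)
    for _ in range(n_cycles):
        ctrl = model_step(ctrl)
        pert = model_step(pert)
        diff = pert - ctrl
        diff *= size / np.linalg.norm(diff)    # rescale the bred vector
        pert = np.clip(ctrl + diff, 0.0, 1.0)
    return diff                                # the bred vector

bv = breed(np.linspace(0.2, 0.8, 10))
```

After enough cycles the random initial difference is dominated by the flow's fastest-growing directions, which is why breeding is cheap: it needs only forecast runs, no adjoint or tangent-linear model.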
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.
NASA Astrophysics Data System (ADS)
Pazó, Diego; Rodríguez, Miguel A.; López, Juan M.
2010-05-01
We study the evolution of finite perturbations in the Lorenz '96 model, a meteorological toy model of the atmosphere. The initial perturbations are chosen to be aligned along different dynamic vectors: bred, Lyapunov, and singular vectors. Using a particular vector determines not only the amplification rate of the perturbation but also the spatial structure of the perturbation and its stability under the evolution of the flow. The evolution of perturbations is systematically studied by means of the so-called mean-variance of logarithms diagram that provides in a very compact way the basic information to analyse the spatial structure. We discuss the corresponding advantages of using those different vectors for preparing initial perturbations to be used in ensemble prediction systems, focusing on key properties: dynamic adaptation to the flow, robustness, equivalence between members of the ensemble, etc. Among all the vectors considered here, the so-called characteristic Lyapunov vectors are possibly optimal, in the sense that they are both perfectly adapted to the flow and extremely robust.
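For reference, the Lorenz '96 model studied above is dX_i/dt = (X_{i+1} - X_{i-2})X_{i-1} - X_i + F with cyclic indices, and the asymptotic growth rate toward which bred and Lyapunov perturbations converge can be estimated by repeatedly renormalizing a small finite perturbation (a sketch; the integrator, step sizes, and spin-up length are our own choices):

```python
import numpy as np

def l96_tendency(x, F=8.0):
    """Lorenz '96 tendency with cyclic boundary conditions."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4(x, dt=0.05, F=8.0):
    """One classical Runge-Kutta step of the Lorenz '96 system."""
    k1 = l96_tendency(x, F)
    k2 = l96_tendency(x + 0.5 * dt * k1, F)
    k3 = l96_tendency(x + 0.5 * dt * k2, F)
    k4 = l96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def leading_growth_rate(n=40, F=8.0, dt=0.05, steps=2000, eps=1e-8, seed=1):
    """Estimate the leading exponential growth rate via renormalized perturbations."""
    rng = np.random.default_rng(seed)
    x = F + rng.standard_normal(n)
    for _ in range(500):                 # spin up onto the attractor
        x = rk4(x, dt, F)
    d = rng.standard_normal(n)
    d *= eps / np.linalg.norm(d)
    total = 0.0
    for _ in range(steps):
        xp = rk4(x + d, dt, F)
        x = rk4(x, dt, F)
        d = xp - x                       # finite-difference tangent perturbation
        total += np.log(np.linalg.norm(d) / eps)
        d *= eps / np.linalg.norm(d)     # renormalize (a breeding-like cycle)
    return total / (steps * dt)

lam = leading_growth_rate()
```

The renormalization loop is essentially breeding with an infinitesimal amplitude, which is why bred vectors approximate Lyapunov vectors in this limit.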
Spatio-temporal behaviour of medium-range ensemble forecasts
NASA Astrophysics Data System (ADS)
Kipling, Zak; Primo, Cristina; Charlton-Perez, Andrew
2010-05-01
Using the recently-developed mean-variance of logarithms (MVL) diagram, together with the TIGGE archive of medium-range ensemble forecasts from nine different centres, we present an analysis of the spatio-temporal dynamics of their perturbations, and show how the differences between models and perturbation techniques can explain the shape of their characteristic MVL curves. We also consider the use of the MVL diagram to compare the growth of perturbations within the ensemble with the growth of the forecast error, showing that there is a much closer correspondence for some models than others. We conclude by looking at how the MVL technique might assist in selecting models for inclusion in a multi-model ensemble, and suggest an experiment to test its potential in this context.
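The MVL diagram used here plots, for each lead time, the spatial variance of the log-perturbation magnitude against its spatial mean. A sketch of the per-time computation (the floor guarding against the log of zero is our choice, not part of the definition):

```python
import numpy as np

def mvl_point(perturbation, floor=1e-30):
    """Mean and variance of log|perturbation| over grid points at one lead time."""
    logs = np.log(np.maximum(np.abs(perturbation), floor))
    return logs.mean(), logs.var()

# a perturbation field uniform in log-magnitude has zero MVL variance
mean, var = mvl_point(np.full(100, np.e))
```

Tracing (mean, var) pairs over forecast time yields the characteristic MVL curve: localized, growing perturbations show large log-variance, while spatially saturated ones collapse toward small variance.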
NASA Astrophysics Data System (ADS)
Li, S.; Rupp, D. E.; Hawkins, L.; Mote, P.; McNeall, D. J.; Sarah, S.; Wallom, D.; Betts, R. A.
2017-12-01
This study investigates the potential to reduce known summer hot/dry biases over the Pacific Northwest in the UK Met Office's atmospheric model (HadAM3P) by simultaneously varying multiple model parameters. The bias-reduction process is done through a series of steps: 1) generation of a perturbed physics ensemble (PPE) through the volunteer computing network weather@home; 2) using machine learning to train "cheap" and fast statistical emulators of the climate model, to rule out regions of parameter space that lead to model variants that do not satisfy observational constraints, where the observational constraints (e.g., top-of-atmosphere energy flux, magnitude of the annual temperature cycle, summer/winter temperature and precipitation) are introduced sequentially; 3) designing a new PPE by "pre-filtering" using the emulator results. Steps 1) through 3) are repeated until results are considered satisfactory (3 times in our case). The process includes a sensitivity analysis to find dominant parameters for various model output metrics, which reduces the number of parameters to be perturbed with each new PPE. Relative to observational uncertainty, we achieve regional improvements without introducing large biases in other parts of the globe. Our results illustrate the potential of using machine learning to train cheap and fast statistical emulators of the climate model, in combination with PPEs, for systematic model improvement.
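Steps 2) and 3) above can be sketched with a deliberately cheap emulator; plain linear least squares stands in for whatever regression was actually used, and the toy metric, target, and tolerance are invented for illustration:

```python
import numpy as np

def train_emulator(X, y):
    """Fit a cheap linear emulator of a model metric as a function of parameters."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xnew: np.column_stack([np.ones(len(Xnew)), np.asarray(Xnew)]) @ coef

def prefilter(candidates, emulator, target, tol):
    """Rule out candidate parameter sets whose emulated metric misses the constraint."""
    return candidates[np.abs(emulator(candidates) - target) <= tol]

rng = np.random.default_rng(0)
X_train = rng.random((200, 2))                  # PPE parameter samples (toy)
y_train = 2.0 * X_train[:, 0] - X_train[:, 1]   # toy "model" output metric
emu = train_emulator(X_train, y_train)
keep = prefilter(rng.random((1000, 2)), emu, target=0.5, tol=0.1)
```

The emulator is evaluated thousands of times at negligible cost, so the expensive climate model only needs to be rerun on the pre-filtered candidates, which is the point of the iteration described above.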
A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models
NASA Astrophysics Data System (ADS)
Keller, J. D.; Bach, L.; Hense, A.
2012-12-01
The estimation of fast growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors and breeding of growing modes (now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter, which does not necessarily focus on the large scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research we use a mesoscale model with an implemented nudging data assimilation scheme which does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast growing error modes for mesoscale limited area models. The so-called self-breeding method is a development of the breeding of growing modes technique.
Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) for a given norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows for estimating uncertainty patterns targeted at specific sources of error (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
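The rapid breeding cycle with ensemble-transform-style orthogonalization can be sketched compactly. Here a Lorenz '96 system stands in for the mesoscale model, and the perturbation amplitude, member count, and (Euclidean) norm are illustrative choices, not those of the reanalysis system.

```python
import numpy as np

rng = np.random.default_rng(1)

def model_step(x, dt=0.01, F=8.0):
    """One Euler step of the Lorenz '96 system (a stand-in for the mesoscale
    model; self-breeding itself is model-agnostic). Works on a single state
    vector or on an array whose columns are ensemble states."""
    d = (np.roll(x, -1, axis=0) - np.roll(x, 2, axis=0)) * np.roll(x, 1, axis=0) - x + F
    return x + dt * d

def self_breed(x0, n_members=6, amp=1e-3, steps=5, cycles=40):
    """Rapid breeding cycle: integrate perturbed states forward for a short
    period, rescale the perturbations to their initial amplitude, and
    re-orthogonalize with a QR step (an ensemble-transform variant) so the
    members do not all collapse onto the leading local Lyapunov vector."""
    P = amp * rng.standard_normal((x0.size, n_members))   # initial perturbations
    for _ in range(cycles):
        Xp = x0[:, None] + P
        for _ in range(steps):                 # short forward integration
            x0 = model_step(x0)
            Xp = model_step(Xp)
        Q, _ = np.linalg.qr(Xp - x0[:, None])  # orthogonalize in ensemble subspace
        P = amp * Q                            # rescale to the chosen norm
    return P  # columns approximate leading local Lyapunov directions

x0 = 8.0 + 0.5 * rng.standard_normal(40)
modes = self_breed(x0)
print(modes.shape)
```

Choosing a different norm (e.g. a moist-energy norm restricted to convective variables) in the rescaling step is what targets the estimated structures at specific error sources.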
Improving wave forecasting by integrating ensemble modelling and machine learning
NASA Astrophysics Data System (ADS)
O'Donncha, F.; Zhang, Y.; James, S. C.
2017-12-01
Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
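A minimal sketch of such a learning-aggregation step, using exponentially weighted model averaging on synthetic wave-height data. The weighting rule, learning rate, and data here are illustrative assumptions; the study's actual aggregation technique and SWAN ensemble may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical history: significant-wave-height forecasts (m) from three
# ensemble members and the matching buoy observations over 100 past cycles.
T, M = 100, 3
truth = 2.0 + 0.5 * np.sin(np.linspace(0.0, 8.0, T))
forecasts = truth[:, None] + rng.normal([0.4, -0.2, 0.1], [0.3, 0.5, 0.2], size=(T, M))

eta = 2.0                      # learning rate (illustrative tuning choice)
cum_loss = np.zeros(M)         # cumulative squared error per model
agg = np.empty(T)
for t in range(T):
    w = np.exp(-eta * cum_loss)
    w /= w.sum()                                 # weights favour low past error
    agg[t] = w @ forecasts[t]                    # aggregated "best estimate"
    cum_loss += (forecasts[t] - truth[t]) ** 2   # update with the new observation

rmse_members = np.sqrt(((forecasts - truth[:, None]) ** 2).mean(axis=0))
rmse_agg = np.sqrt(((agg - truth) ** 2).mean())
print(rmse_members.round(3), rmse_agg.round(3))
```

Because the weights adapt as observations arrive, the aggregate tracks whichever member has been performing best, rather than committing to a fixed combination.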
Blanton, Brian; Dresback, Kendra; Colle, Brian; Kolar, Randy; Vergara, Humberto; Hong, Yang; Leonardo, Nicholas; Davidson, Rachel; Nozick, Linda; Wachtendorf, Tricia
2018-04-25
Hurricane track and intensity can change rapidly in unexpected ways, thus making predictions of hurricanes and related hazards uncertain. This inherent uncertainty often translates into suboptimal decision-making outcomes, such as unnecessary evacuation. Representing this uncertainty is thus critical in evacuation planning and related activities. We describe a physics-based hazard modeling approach that (1) dynamically accounts for the physical interactions among hazard components and (2) captures hurricane evolution uncertainty using an ensemble method. This loosely coupled model system provides a framework for probabilistic water inundation and wind speed levels for a new, risk-based approach to evacuation modeling, described in a companion article in this issue. It combines the Weather Research and Forecasting (WRF) meteorological model, the Coupled Routing and Excess STorage (CREST) hydrologic model, and the ADvanced CIRCulation (ADCIRC) storm surge, tide, and wind-wave model to compute inundation levels and wind speeds for an ensemble of hurricane predictions. Perturbations to WRF's initial and boundary conditions and different model physics/parameterizations generate an ensemble of storm solutions, which are then used to drive the coupled hydrologic + hydrodynamic models. Hurricane Isabel (2003) is used as a case study to illustrate the ensemble-based approach. The inundation, river runoff, and wind hazard results are strongly dependent on the accuracy of the mesoscale meteorological simulations, which improves with decreasing lead time to hurricane landfall. The ensemble envelope brackets the observed behavior while providing "best-case" and "worst-case" scenarios for the subsequent risk-based evacuation model. © 2018 Society for Risk Analysis.
NASA Astrophysics Data System (ADS)
Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.
2017-12-01
Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere, with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at subgrid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using the 25-km-resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in these interactions. Regions of parameter space that did not satisfy observational constraints were eliminated, and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes was selected.
This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.
NASA Astrophysics Data System (ADS)
Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.
2017-12-01
Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (or bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are impacted by limited atmospheric data, uncertain prior flux estimates and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction, in atmospheric CO2 simulations. These errors are assessed by an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four CO2 global inversion models which have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. The flux ensemble members overall overestimate CO2 concentrations. They also show larger temporal variability than the observations. These results suggest that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows a strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada.
We hypothesize that the former is caused by the 3-hourly downscaling algorithm from which the nighttime respiration dominates the daytime modeled CO2 signals and that the latter is mainly caused by the large-scale transport associated with the jet stream that carries the negative biogenic CO2 signals to the northeastern coast. We apply comprehensive statistics to eliminate outliers. We generate a set of flux perturbations based on pre-calibrated flux ensemble members and apply them to the simulations.
Stochastic Parametrisations and Regime Behaviour of Atmospheric Models
NASA Astrophysics Data System (ADS)
Arnold, Hannah; Moroz, Irene; Palmer, Tim
2013-04-01
The presence of regimes is a characteristic of non-linear, chaotic systems (Lorenz, 2006). In the atmosphere, regimes emerge as familiar circulation patterns such as the El Niño-Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and Scandinavian blocking events. In recent years there has been much interest in the problem of identifying and studying atmospheric regimes (Solomon et al., 2007). In particular, how do these regimes respond to an external forcing such as anthropogenic greenhouse gas emissions? The importance of regimes in observed trends over the past 50-100 years indicates that in order to predict anthropogenic climate change, our climate models must be able to accurately represent natural circulation regimes, their statistics and variability. It is well established that representing model uncertainty as well as initial condition uncertainty is important for reliable weather forecasts (Palmer, 2001). In particular, stochastic parametrisation schemes have been shown to improve the skill of weather forecast models (e.g. Berner et al., 2009; Frenkel et al., 2012; Palmer et al., 2009). It is possible that including stochastic physics as a representation of model uncertainty could also be beneficial in climate modelling, enabling the simulator to explore larger regions of the climate attractor, including other flow regimes. An alternative representation of model uncertainty is a perturbed parameter scheme, whereby physical parameters in subgrid parametrisation schemes are perturbed about their optimal values. Perturbing parameters gives greater control over the ensemble than multi-model or multi-parametrisation ensembles, and has been used as a representation of model uncertainty in climate prediction (Stainforth et al., 2005; Rougier et al., 2009). We investigate the effect of including representations of model uncertainty on the regime behaviour of a simulator.
A simple chaotic model of the atmosphere, the Lorenz '96 system, is used to study the predictability of regime changes (Lorenz 1996, 2006). Three types of models are considered: a deterministic parametrisation scheme, stochastic parametrisation schemes with additive or multiplicative noise, and a perturbed parameter ensemble. Each forecasting scheme was tested on its ability to reproduce the attractor of the full system, defined in a reduced space based on EOF decomposition. None of the forecast models accurately capture the less common regime, though a significant improvement is observed over the deterministic parametrisation when a temporally correlated stochastic parametrisation is used. The attractor for the perturbed parameter ensemble improves on that forecast by the deterministic or white additive schemes, showing a distinct peak in the attractor corresponding to the less common regime. However, the 40 constituent members of the perturbed parameter ensemble each differ greatly from the true attractor, with many only showing one dominant regime with very rare transitions. These results indicate that perturbed parameter ensembles must be carefully analysed as individual members may have very different characteristics to the ensemble mean and to the true system being modelled. On the other hand, the stochastic parametrisation schemes tested performed well, improving the simulated climate, and motivating the development of a stochastic earth-system simulator for use in climate prediction. J. Berner, G. J. Shutts, M. Leutbecher, and T. N. Palmer. A spectral stochastic kinetic energy backscatter scheme and its impact on flow dependent predictability in the ECMWF ensemble prediction system. J. Atmos. Sci., 66(3):603-626, 2009. Y. Frenkel, A. J. Majda, and B. Khouider. Using the stochastic multicloud model to improve tropical convective parametrisation: A paradigm example. J. Atmos. Sci., 69(3):1080-1105, 2012. E. N. Lorenz. Predictability: a problem partly solved. 
In Proceedings, Seminar on Predictability, 4-8 September 1995, volume 1, pages 1-18, Shinfield Park, Reading, 1996. ECMWF. E. N. Lorenz. Regimes in simple systems. J. Atmos. Sci., 63(8):2056-2073, 2006. T. N. Palmer. A nonlinear dynamical perspective on model error: A proposal for non-local stochastic-dynamic parametrisation in weather and climate prediction models. Q. J. Roy. Meteor. Soc., 127(572):279-304, 2001. T. N. Palmer, R. Buizza, F. Doblas-Reyes, T. Jung, M. Leutbecher, G. J. Shutts, M. Steinheimer, and A. Weisheimer. Stochastic parametrization and model uncertainty. Technical Report 598, European Centre for Medium-Range Weather Forecasts, 2009. J. Rougier, D. M. H. Sexton, J. M. Murphy, and D. Stainforth. Analyzing the climate sensitivity of the HadSM3 climate model using ensembles from different but related experiments. J. Climate, 22:3540-3557, 2009. S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, M. Tignor, and H. L. Miller. Climate models and their evaluation. In Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge, United Kingdom and New York, NY, USA, 2007. Cambridge University Press. D. A. Stainforth, T. Aina, C. Christensen, M. Collins, N. Faull, D. J. Frame, J. A. Kettleborough, S. Knight, A. Martin, J. M. Murphy, C. Piani, D. Sexton, L. A. Smith, R. A. Spicer, A. J. Thorpe, and M. R. Allen. Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature, 433(7024):403-406, 2005.
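The contrast drawn in this abstract between deterministic and temporally correlated stochastic parametrisations can be illustrated on a single-scale Lorenz '96 system. The parametrised forcing and AR(1) noise parameters below are hypothetical stand-ins, not the schemes actually tested in the study.

```python
import numpy as np

rng = np.random.default_rng(3)

def l96_tendency(x, forcing):
    # Single-scale Lorenz '96 tendency with a supplied forcing term.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(x, steps, dt=0.005, stochastic=False, phi=0.95, sigma=0.5):
    """Euler integration with a crude parametrised forcing U(x).
    stochastic=True adds red (AR(1), temporally correlated) additive noise
    to the deterministic parametrisation."""
    e = np.zeros_like(x)                    # AR(1) noise state per grid point
    traj = np.empty((steps, x.size))
    for t in range(steps):
        U = 8.0 - 0.5 * x                   # hypothetical deterministic parametrisation
        if stochastic:
            e = phi * e + sigma * np.sqrt(1 - phi**2) * rng.standard_normal(x.size)
            U = U + e
        x = x + dt * l96_tendency(x, U)
        traj[t] = x
    return traj

x0 = 2.0 + rng.standard_normal(8)
det = integrate(x0.copy(), 4000)
sto = integrate(x0.copy(), 4000, stochastic=True)
print(det.std(), sto.std())   # compare the spread of states explored
```

Comparing histograms (or EOF-space densities) of `det` and `sto` against a "truth" run is the kind of attractor comparison the abstract describes.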
NASA Astrophysics Data System (ADS)
Sinsky, E.; Zhu, Y.; Li, W.; Guan, H.; Melhauser, C.
2017-12-01
Optimal forecast quality is crucial for the preservation of life and property. Improving monthly forecast performance over both the tropics and extra-tropics requires attention to various physical aspects such as the representation of the underlying SST, the model physics, and the representation of model physics uncertainty in an ensemble forecast system. This work focuses on the impact of stochastic physics, SST and the convection scheme on forecast performance at the sub-seasonal scale over the tropics and extra-tropics, with emphasis on the Madden-Julian Oscillation (MJO). A 2-year period is evaluated using the National Centers for Environmental Prediction (NCEP) Global Ensemble Forecast System (GEFS). Three experiments with configurations different from the operational GEFS were performed to illustrate the impact of the stochastic physics, SST and convection scheme. These experiments are compared against a control experiment (CTL), which consists of the operational GEFS with its integration extended from 16 to 35 days. The three configurations are: 1) SPs, which uses Stochastically Perturbed Physics Tendencies (SPPT), Stochastic Perturbed Humidity (SHUM) and Stochastic Kinetic Energy Backscatter (SKEB); 2) SPs+SST_bc, which combines SPs with a bias-corrected forecast SST from the NCEP Climate Forecast System Version 2 (CFSv2); and 3) SPs+SST_bc+SA_CV, which combines SPs, the bias-corrected forecast SST and a scale-aware convection scheme. Compared to the CTL experiment, SPs shows substantial improvement: MJO skill improved by about 4 lead days during the 2-year period. Improvement is also seen over the extra-tropics due to the updated stochastic physics, with gains of 3.1% and 4.2% during weeks 3 and 4 over the northern and southern hemispheres, respectively. Further improvement is seen when the bias-corrected CFSv2 SST is combined with SPs.
Additionally, forecast performance improves when the scale-aware convection scheme is added (SPs+SST_bc+SA_CV), especially over the tropics. Of the three experiments, SPs+SST_bc+SA_CV yields the best MJO forecast skill.
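An SPPT-style perturbation, as used in the SPs configuration, multiplies the parametrized physics tendency by (1 + r), where r is a temporally correlated random pattern. A toy single-column sketch follows; the operational GEFS implementation also includes spatial correlation and vertical tapering, which are omitted here, and the "dynamics" and "physics" tendencies are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def sppt_factor(shape, prev, phi=0.9, sigma=0.3, clip=0.9):
    """One update of an SPPT-style multiplier pattern: an AR(1) (temporally
    correlated) random field r, clipped so the factor (1 + r) stays positive."""
    r = phi * prev + sigma * np.sqrt(1 - phi**2) * rng.standard_normal(shape)
    return np.clip(r, -clip, clip)

# Toy column model: temperature tendency = dynamics + perturbed physics
nlev = 10
T = np.full(nlev, 280.0)                   # column temperature (K)
r = np.zeros(nlev)                         # SPPT pattern state
for step in range(100):
    dyn = 1e-4 * (285.0 - T)               # relaxation "dynamics" (hypothetical)
    phys = 2e-4 * np.sin(np.arange(nlev))  # parametrized physics tendency (hypothetical)
    r = sppt_factor(nlev, r)
    T += dyn + (1.0 + r) * phys            # SPPT: perturb only the physics tendency
print(T.round(2))
```

Only the physics tendency is perturbed; the resolved dynamics are left untouched, which is what distinguishes SPPT from simple additive noise on the full state.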
Fidelity decay of the two-level bosonic embedded ensembles of random matrices
NASA Astrophysics Data System (ADS)
Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.
2010-12-01
We study the fidelity decay of the k-body embedded ensembles of random matrices for bosons distributed over two single-particle states. Fidelity is defined in terms of a reference Hamiltonian, which is a purely diagonal matrix consisting of a fixed one-body term and includes the diagonal of the perturbing k-body embedded ensemble matrix, and the perturbed Hamiltonian which includes the residual off-diagonal elements of the k-body interaction. This choice mimics the typical mean-field basis used in many calculations. We study separately the cases k = 2 and 3. We compute the ensemble-averaged fidelity decay as well as the fidelity of typical members with respect to an initial random state. Average fidelity displays a revival at the Heisenberg time, t = tH = 1, and a freeze in the fidelity decay, during which periodic revivals of period tH are observed. We obtain the relevant scaling properties with respect to the number of bosons and the strength of the perturbation. For certain members of the ensemble, we find that the period of the revivals during the freeze of fidelity occurs at fractional times of tH. These fractional periodic revivals are related to the dominance of specific k-body terms in the perturbation.
Gradient Flow and Scale Setting on MILC HISQ Ensembles
Bazavov, A.; Bernard, C.; Brown, N.; ...
2016-05-25
We report on a scale determination with gradient-flow techniques on the Nf = 2 + 1 + 1 HISQ ensembles generated by the MILC collaboration. The ensembles include four lattice spacings, ranging from approximately 0.15 to 0.06 fm, and both physical and unphysical values of the quark masses. The scales √t0/a and w0/a and their tree-level improvements, √t0,imp/a and w0,imp/a, are computed on each ensemble using Symanzik flow and the cloverleaf definition of the energy density E. Using a combination of continuum chiral perturbation theory and a Taylor-series ansatz for the lattice-spacing and strong-coupling dependence, the results are simultaneously extrapolated to the continuum and interpolated to physical quark masses. We determine the scales √t0 = 0.1416(+8/-5) fm and w0 = 0.1717(+12/-11) fm, where the errors are sums, in quadrature, of statistical and all systematic errors. The precision of w0 and √t0 is comparable to or better than that of the best previous estimates. We also find the continuum mass-dependence of w0, which will be useful for estimating the scales of other ensembles. Furthermore, we estimate the integrated autocorrelation length of the flowed energy density. For long flow times, this autocorrelation length appears to be comparable to or smaller than that of the topological charge.
NASA Astrophysics Data System (ADS)
Hollenberg, Sebastian; Päs, Heinrich
2012-01-01
The standard wave function approach for the treatment of neutrino oscillations fails in situations where quantum ensembles at a finite temperature with or without an interacting background plasma are encountered. As a first step to treat such phenomena in a novel way, we propose a unified approach to both adiabatic and nonadiabatic two-flavor oscillations in neutrino ensembles with finite temperature and generic (e.g., matter) potentials. Neglecting effects of ensemble decoherence for now, we study the evolution of a neutrino ensemble governed by the associated quantum kinetic equations, which apply to systems with finite temperature. The quantum kinetic equations are solved formally using the Magnus expansion and it is shown that a convenient choice of the quantum mechanical picture (e.g., the interaction picture) reveals suitable parameters to characterize the physics of the underlying system (e.g., an effective oscillation length). It is understood that this method also provides a promising starting point for the treatment of the more general case in which decoherence is taken into account.
Benefits of an ultra large and multiresolution ensemble for estimating available wind power
NASA Astrophysics Data System (ADS)
Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik
2016-04-01
In this study we investigate the benefits of an ultra large ensemble with up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble shall serve as a basis to detect events of extreme errors in wind power forecasting. The forecast quantity is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions have served as input so far, with limited spatial resolution (~10-80 km) and member number (~50), and perturbations related to the specific merits of wind power production are still missing. Thus, single extreme error events that are not detected by such ensemble power forecasts occur infrequently. The numerical forecast model used in this study is the Weather Research and Forecasting (WRF) model. Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies, in conjunction with the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter, improving model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations connected to extreme error events are located and corresponding perturbation techniques are applied. The demanding computational effort is overcome by utilising the supercomputer JUQUEEN at Forschungszentrum Juelich.
NASA Astrophysics Data System (ADS)
Wang, S.; Huang, G. H.; Baetz, B. W.; Cai, X. M.; Ancell, B. C.; Fan, Y. R.
2017-11-01
The ensemble Kalman filter (EnKF) is recognized as a powerful data assimilation technique that generates an ensemble of model variables through stochastic perturbations of forcing data and observations. However, relatively little guidance exists with regard to the proper specification of the magnitude of the perturbation and the ensemble size, posing a significant challenge in optimally implementing the EnKF. This paper presents a robust data assimilation system (RDAS), in which a multi-factorial design of the EnKF experiments is first proposed for hydrologic ensemble predictions. A multi-way analysis of variance is then used to examine potential interactions among factors affecting the EnKF experiments, achieving optimality of the RDAS with maximized performance of hydrologic predictions. The RDAS is applied to the Xiangxi River watershed which is the most representative watershed in China's Three Gorges Reservoir region to demonstrate its validity and applicability. Results reveal that the pairwise interaction between perturbed precipitation and streamflow observations has the most significant impact on the performance of the EnKF system, and their interactions vary dynamically across different settings of the ensemble size and the evapotranspiration perturbation. In addition, the interactions among experimental factors vary greatly in magnitude and direction depending on different statistical metrics for model evaluation including the Nash-Sutcliffe efficiency and the Box-Cox transformed root-mean-square error. It is thus necessary to test various evaluation metrics in order to enhance the robustness of hydrologic prediction systems.
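For reference, the perturbed-observations EnKF analysis step underlying such experiments can be written compactly. This is a generic textbook sketch with a toy three-variable state, not the RDAS code; the ensemble size, observation operator, and error magnitudes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def enkf_update(X, y, H, obs_err):
    """Stochastic (perturbed-observations) EnKF analysis step.
    X: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: linear observation operator; obs_err: observation error std."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)       # observed-space anomalies
    R = obs_err**2 * np.eye(len(y))
    # Kalman gain built from ensemble covariances: K = P H^T (H P H^T + R)^-1
    K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (n_ens - 1) * R)
    # Perturb the observations so the analysis ensemble has the right spread
    Y = y[:, None] + obs_err * rng.standard_normal((len(y), n_ens))
    return X + K @ (Y - HX)

# Toy example: 3-variable state, variable 0 observed, 50 members
X = rng.normal(1.0, 1.0, size=(3, 50))
H = np.array([[1.0, 0.0, 0.0]])
Xa = enkf_update(X, np.array([2.0]), H, obs_err=0.5)
print(X[0].mean().round(2), "->", Xa[0].mean().round(2))
```

The magnitude of the observation perturbations (here `obs_err`) and the ensemble size are exactly the design factors the RDAS varies in its multi-factorial experiments.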
Estimation of the uncertainty of a climate model using an ensemble simulation
NASA Astrophysics Data System (ADS)
Barth, A.; Mathiot, P.; Goosse, H.
2012-04-01
The atmospheric forcings play an important role in the study of the ocean and sea-ice dynamics of the Southern Ocean. Errors in the atmospheric forcings will inevitably result in uncertain model results. The sensitivity of the model results to errors in the atmospheric forcings is studied with ensemble simulations using multivariate perturbations of the atmospheric forcing fields. The numerical ocean model used is NEMO-LIM in a global configuration with a horizontal resolution of 2°. NCEP reanalyses are used to provide air temperature and wind data to force the ocean model over the last 50 years. A climatological mean is used to prescribe relative humidity, cloud cover and precipitation. In a first step, the model results are compared with OSTIA SST and OSI SAF sea-ice concentration for the southern hemisphere. The seasonal behavior of the RMS difference and bias in SST and ice concentration is highlighted, as well as regions with relatively high RMS errors and biases such as the Antarctic Circumpolar Current and the ice edge. Ensemble simulations are performed to statistically characterize the model error due to uncertainties in the atmospheric forcings. Such information is a crucial element for future data assimilation experiments. Ensemble simulations are performed with perturbed air temperature and wind forcings. A Fourier decomposition of the NCEP wind vectors and air temperature for 2007 is used to generate ensemble perturbations. The perturbations are scaled such that the resulting ensemble spread approximately matches the RMS differences between the model and the satellite SST and sea-ice concentration. The ensemble spread and covariance are analyzed at the times of minimum and maximum sea ice extent. It is shown that the effects of errors in the atmospheric forcings can extend several hundred meters in depth near the Antarctic Circumpolar Current.
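A Fourier-based perturbation of a forcing time series, in the spirit described above, can be sketched by keeping the amplitudes of the dominant modes, randomizing their phases, and rescaling to a target spread. The series, mode count, and target value below are hypothetical, and the real method perturbs multivariate 2-D fields rather than a single series.

```python
import numpy as np

rng = np.random.default_rng(6)

def fourier_perturbation(series, n_modes=20, scale=1.0):
    """Perturbation sharing the dominant temporal scales of the input forcing:
    keep the amplitudes of the leading Fourier modes, randomize their phases,
    then rescale to the target standard deviation."""
    c = np.fft.rfft(series - series.mean())
    amp = np.abs(c)
    keep = np.argsort(amp)[-n_modes:]          # indices of dominant modes
    cp = np.zeros_like(c)
    phases = rng.uniform(0.0, 2 * np.pi, size=n_modes)
    cp[keep] = amp[keep] * np.exp(1j * phases) # same amplitudes, random phases
    pert = np.fft.irfft(cp, n=len(series))
    return scale * pert / pert.std()           # rescale to the target spread

# Hypothetical daily air-temperature forcing for one year
t = np.arange(365)
forcing = 10.0 * np.sin(2 * np.pi * t / 365) + rng.normal(0.0, 2.0, 365)
perturbed = forcing + fourier_perturbation(forcing, scale=1.5)
print(perturbed.shape)
```

In the study, the `scale` factor plays the role of tuning the ensemble spread to match the observed model-satellite RMS differences.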
A transient stochastic weather generator incorporating climate model uncertainty
NASA Astrophysics Data System (ADS)
Glenis, Vassilis; Pinamonti, Valentina; Hall, Jim W.; Kilsby, Chris G.
2015-11-01
Stochastic weather generators (WGs), which provide long synthetic time series of weather variables such as rainfall and potential evapotranspiration (PET), have found widespread use in water resources modelling. When conditioned upon the changes in climatic statistics (change factors, CFs) predicted by climate models, WGs provide a useful tool for climate impacts assessment and adaptation planning. The latest climate modelling exercises have involved large numbers of global and regional climate model integrations, designed to explore the implications of uncertainties in climate model formulation and parameter settings: so-called 'perturbed physics ensembles' (PPEs). In this paper we show how these climate model uncertainties can be propagated through to impact studies by testing multiple vectors of CFs, each vector derived from a different sample from a PPE. We combine this with a new methodology to parameterise the projected time-evolution of CFs. We demonstrate how, when conditioned upon these time-dependent CFs, an existing, well-validated and widely used WG can generate non-stationary simulations of future climate that are consistent with probabilistic outputs from the Met Office Hadley Centre's perturbed physics ensemble. The WG enables extensive sampling of natural variability and climate model uncertainty, providing the basis for development of robust water resources management strategies in the context of a non-stationary climate.
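Conditioning a weather generator on time-dependent CFs sampled from a PPE can be illustrated schematically. The CF distribution, linear time-evolution, and gamma-based rainfall generator below are simplified stand-ins, not the actual WG or Hadley Centre values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical PPE-derived precipitation change factors (CFs) for one month:
# 20 ensemble members, each giving a multiplicative change by the 2080s.
cf_2080s = rng.normal(0.85, 0.08, size=20)       # drier future, member spread

def cf_at(year, cf_end, base=2000, end=2080):
    """Parameterised time-evolution: CF grows linearly from 1 at the
    baseline year to its end-of-century value (an illustrative choice)."""
    frac = np.clip((year - base) / (end - base), 0.0, 1.0)
    return 1.0 + frac * (cf_end - 1.0)

def generate_rainfall(year, cf_end, n_days=30):
    """Stand-in stochastic rainfall generator conditioned on the CF:
    baseline daily totals are drawn from a gamma distribution and scaled."""
    base_rain = rng.gamma(shape=0.7, scale=5.0, size=n_days)
    return base_rain * cf_at(year, cf_end)

# One non-stationary series per PPE member propagates climate-model
# uncertainty through to the impact study.
series_2050 = [generate_rainfall(2050, cf).sum() for cf in cf_2080s]
print(round(float(np.mean(series_2050)), 1))
```

Repeating the draw many times per member additionally samples natural variability, which is the double sampling the abstract highlights.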
Exploring the implication of climate process uncertainties within the Earth System Framework
NASA Astrophysics Data System (ADS)
Booth, B.; Lambert, F. H.; McNeal, D.; Harris, G.; Sexton, D.; Boulton, C.; Murphy, J.
2011-12-01
Uncertainties in the magnitude of future climate change have been a focus of a great deal of research. Much of the work with General Circulation Models has focused on the atmospheric response to changes in atmospheric composition, while other processes remain outside these frameworks. Here we introduce an ensemble of new simulations, based on an Earth System configuration of HadCM3C, designed to explore uncertainties in both physical (atmospheric, oceanic and aerosol physics) and carbon cycle processes, using perturbed parameter approaches previously used to explore atmospheric uncertainty. Framed in the context of the climate response to future changes in emissions, the resultant future projections represent significantly broader uncertainty than existing concentration-driven GCM assessments. The systematic nature of the ensemble design enables interactions between components to be explored. For example, we show how metrics of physical processes (such as climate sensitivity) are also influenced by carbon cycle parameters. The suggestion from this work is that carbon cycle processes contribute as much to uncertainty in future climate projections as the more conventionally explored atmospheric feedbacks. The broad range of climate responses explored within these ensembles, rather than representing a reason for inaction, provides information on lower-likelihood but high-impact changes. For example, while the majority of these simulations suggest that future Amazon forest extent is resilient to the projected climate changes, a small number simulate dramatic forest dieback. This ensemble represents a framework to examine these risks, breaking them down into physical processes (such as ocean temperature drivers of rainfall change) and vegetation processes (where uncertainties point towards requirements for new observational constraints).
Kutzelnigg, Werner; Mukherjee, Debashis
2004-04-22
We analyze the structure and the solutions of the irreducible k-particle Brillouin conditions (IBCk) and the irreducible contracted Schrödinger equations (ICSEk) for an n-electron system without electron interaction. This exercise is very instructive in that it gives one both the perspective and the strategies to be followed in applying the IBC and ICSE to physically realistic systems with electron interaction. The IBC1 leads to a Liouville equation for the one-particle density matrix gamma1 = gamma, consistent with our earlier analysis that the IBC1 holds both for a pure and an ensemble state. The IBC1 or the ICSE1 must be solved subject to the constraints imposed by the n-representability condition, which is particularly simple for gamma. For a closed-shell state gamma is idempotent, i.e., all natural spin orbitals (NSO's) have occupation numbers 0 or 1, and all cumulants lambdak with k ≥ 2 vanish. For open-shell states there are NSO's with fractional occupation number, and at the same time nonvanishing elements of lambda2, which are related to spin and symmetry coupling. It is often useful to describe an open-shell state by a totally symmetric ensemble state. If one wants to treat a one-particle perturbation by means of perturbation theory (mainly as a run-up to the study of a two-particle perturbation), one is faced with the problem that the perturbation expansion of the Liouville equation gives information only on the nondiagonal elements (in a basis of the unperturbed states) of gamma. There are essentially three possibilities for constructing the diagonal elements of gamma: (i) to consider the perturbation expansion of the characteristic polynomial of gamma, especially the idempotency for closed-shell states, (ii) to rely on the ICSE1, which (at variance with the IBC1) also gives information on the diagonal elements, though not in a very efficient manner, and (iii) to formulate the perturbation theory in terms of a unitary transformation in Fock space. 
The latter is particularly powerful, especially when one wishes to study realistic Hamiltonians with a two-body interaction. (c) 2004 American Institute of Physics
Exploring uncertainty of Amazon dieback in a perturbed parameter Earth system ensemble.
Boulton, Chris A; Booth, Ben B B; Good, Peter
2017-12-01
The future of the Amazon rainforest is unknown due to uncertainties in projected climate change and the response of the forest to this change (forest resiliency). Here, we explore the effect of some uncertainties in climate and land surface processes on the future of the forest, using a perturbed physics ensemble of HadCM3C. This is the first time Amazon forest changes are presented using an ensemble exploring both land vegetation processes and physical climate feedbacks in a fully coupled modelling framework. Under three different emissions scenarios, we measure the change in the forest coverage by the end of the 21st century (the transient response) and make a novel adaptation to a previously used method known as "dry-season resilience" to predict the long-term committed response of the forest, should the state of the climate remain constant past 2100. Our analysis of this ensemble suggests that there will be a high chance of greater forest loss on longer timescales than is realized by 2100, especially for mid-range and low emissions scenarios. In both the transient and predicted committed responses, there is an increasing uncertainty in the outcome of the forest as the strength of the emissions scenarios increases. It is important to note, however, that very few of the simulations produce future forest loss of the magnitude previously shown under the standard model configuration. We find that low optimum temperatures for photosynthesis and a high minimum leaf area index needed for the forest to compete for space appear to be precursors for dieback. We then decompose the uncertainty into that associated with future climate change and that associated with forest resiliency, finding that it is important to reduce the uncertainty in both of these if we are to better determine the Amazon's outcome. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Zhumagulov, Yaroslav V.; Krasavin, Andrey V.; Kashurnikov, Vladimir A.
2018-05-01
A method is developed for calculating the electronic properties of an ensemble of metal nanoclusters using cluster perturbation theory, and is applied to a system of gold nanoclusters. The Green's function of a single nanocluster is obtained by ab initio calculations within the framework of density functional theory, and is then used in the Dyson equation to couple nanoclusters together and to compute the Green's function, as well as the electron density of states, of the whole ensemble. The transition from the insulating state of a single nanocluster to the metallic state of bulk gold is observed.
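The coupling step described above, a single-cluster Green's function fed into the Dyson equation to obtain the ensemble Green's function and density of states, can be sketched numerically. In this minimal Python sketch the cluster levels, the inter-cluster coupling strength, and the chain geometry are illustrative placeholders; the paper obtains the single-cluster input from DFT.

```python
import numpy as np

# Illustrative single-cluster energy levels; in the paper these come
# from a DFT calculation of a gold nanocluster (placeholder values here)
levels = np.array([-1.0, -0.3, 0.4, 1.1])
eta = 0.05            # small imaginary broadening of the Green's function
t = 0.1               # hypothetical inter-cluster coupling strength
n_clusters = 6        # clusters coupled in a 1D chain for simplicity
n = len(levels)
dim = n * n_clusters

# Inverse of the block-diagonal isolated-cluster Green's function G0
def g0_inv(omega):
    return np.diag(np.tile(omega + 1j * eta - levels, n_clusters))

# Inter-cluster coupling V: every orbital couples to every orbital
# of the neighbouring cluster
V = np.zeros((dim, dim))
for c in range(n_clusters - 1):
    a, b = slice(c * n, (c + 1) * n), slice((c + 1) * n, (c + 2) * n)
    V[a, b] = t
    V[b, a] = t

# Dyson equation G = (G0^-1 - V)^-1; density of states = -Im Tr G / pi
omegas = np.linspace(-2.5, 2.5, 400)
dos = np.array([-np.trace(np.linalg.inv(g0_inv(w) - V)).imag / np.pi
                for w in omegas])
```

Coupling-induced broadening of the discrete cluster levels into bands is the mechanism behind the insulator-to-metal trend as the ensemble grows.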
NASA Technical Reports Server (NTRS)
Braginsky, V. B.; Vorontsov, Y. I.; Thorne, K. S.
1979-01-01
Future gravitational wave antennas will be approximately 100 kilogram cylinders, whose end-to-end vibrations must be measured so accurately (10⁻¹⁹ cm) that they behave quantum mechanically. Moreover, the vibration amplitude must be measured over and over again without perturbing it (quantum nondemolition measurement). This contrasts with quantum chemistry, quantum optics, or atomic, nuclear, and elementary particle physics where measurements are usually made on an ensemble of identical objects, and care is not given to whether any single object is perturbed or destroyed by the measurement. Electronic techniques required for quantum nondemolition measurements are described as well as the theory underlying them.
Quasi-most unstable modes: a window to 'À la carte' ensemble diversity?
NASA Astrophysics Data System (ADS)
Homar Santaner, Victor; Stensrud, David J.
2010-05-01
The atmospheric scientific community is nowadays facing the ambitious challenge of providing useful forecasts of atmospheric events that produce high societal impact. The low level of social resilience to false alarms creates tremendous pressure on forecasting offices to issue accurate, timely and reliable warnings. Currently, no operational numerical forecasting system is able to respond to the societal demand for high-resolution (in time and space) predictions in the 12-72 h time span. The main reasons for such deficiencies are the lack of adequate observations and the high non-linearity of the numerical models that are currently used. The whole weather forecasting problem is intrinsically probabilistic and current methods aim at coping with the various sources of uncertainties and the error propagation throughout the forecasting system. This probabilistic perspective is often created by generating ensembles of deterministic predictions that are aimed at sampling the most important sources of uncertainty in the forecasting system. The ensemble generation/sampling strategy is a crucial aspect of ensemble performance and various methods have been proposed. Although global forecasting offices have been using ensembles of perturbed initial conditions for medium-range operational forecasts since 1994, no consensus exists regarding the optimum sampling strategy for high-resolution short-range ensemble forecasts. Bred vectors, however, have been hypothesized to better capture the growing modes in the highly nonlinear mesoscale dynamics of severe episodes than singular vectors or observation perturbations. Yet even this technique is not able to produce enough diversity in the ensembles to accurately and routinely predict extreme phenomena such as severe weather. Thus, we propose a new method to generate ensembles of initial-condition perturbations that is based on the breeding technique. 
Given a standard bred mode, a set of customized perturbations is derived with specified amplitudes and horizontal scales. This allows the ensemble to excite growing modes across a wider range of scales. Results show that this approach produces significantly more spread in the ensemble prediction than standard bred modes alone. Several examples that illustrate the benefits from this approach for severe weather forecasts will be provided.
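The breeding cycle that underlies the bred-vector technique (integrate control and perturbed runs, then rescale their difference back to a prescribed amplitude) can be illustrated on a toy chaotic system. This sketch uses the Lorenz-63 model as a stand-in; the rescaling interval and amplitude are arbitrary choices, not the paper's settings.

```python
import numpy as np

# Lorenz-63 tendencies, a standard toy for perturbation-growth experiments
def l63(x, s=10.0, r=28.0, b=8.0 / 3.0):
    return np.array([s * (x[1] - x[0]),
                     x[0] * (r - x[2]) - x[1],
                     x[0] * x[1] - b * x[2]])

def step(x, dt=0.01):
    # Classical 4th-order Runge-Kutta step
    k1 = l63(x); k2 = l63(x + 0.5 * dt * k1)
    k3 = l63(x + 0.5 * dt * k2); k4 = l63(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
ctrl = np.array([1.0, 1.0, 20.0])
for _ in range(1000):            # spin up onto the attractor
    ctrl = step(ctrl)

amp = 1e-3                       # prescribed bred-vector amplitude
pert = ctrl + amp * rng.standard_normal(3)

# Breeding cycle: advance both runs, then rescale the difference
# back to the prescribed amplitude at the end of each interval
for cycle in range(20):
    for _ in range(8):
        ctrl = step(ctrl)
        pert = step(pert)
    diff = pert - ctrl
    bred = diff * (amp / np.linalg.norm(diff))   # the bred vector
    pert = ctrl + bred
```

The "quasi-most unstable" modes of the paper would then rescale copies of `bred` to several customized amplitudes and horizontal scales to form the ensemble members.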
NASA Astrophysics Data System (ADS)
Vich, M.; Romero, R.; Richard, E.; Arbogast, P.; Maynard, K.
2010-09-01
Heavy precipitation events occur regularly in the western Mediterranean region. These events often have a high impact on society due to economic and personal losses. Improved mesoscale numerical forecasts of these events can help prevent or minimize their impact. In previous studies, two ensemble prediction systems (EPSs) based on perturbing the model initial and boundary conditions were developed and tested for a collection of high-impact MEDEX cyclonic episodes. These EPSs perturb the initial and boundary potential vorticity (PV) field through a PV inversion algorithm. This technique ensures modifications of all the meteorological fields without compromising the mass-wind balance. One EPS introduces the perturbations along the zones of the three-dimensional PV structure presenting the locally most intense values and gradients of the field (a semi-objective choice, PV-gradient), while the other perturbs the PV field over the sensitivity zones calculated with the MM5 adjoint model (an objective method, PV-adjoint). The PV perturbations are set from a PV error climatology (PVEC) that characterizes typical PV errors in the ECMWF forecasts, both in intensity and displacement. The intensity and displacement perturbation of the PV field is chosen randomly, while its location is given by the perturbation zones defined by each ensemble generation method. Encouraged by the good results obtained by these two EPSs that perturb the PV field, a new approach based on a manual perturbation of the PV field has been tested and compared with the previous results. This technique uses satellite water vapor (WV) observations to guide the correction of initial PV structures. The correction of the PV field intends to improve the match between the PV distribution and the WV image, taking advantage of the relation between dark and bright features of WV images and PV anomalies, under some assumptions. 
Afterwards, the PV inversion algorithm is applied to run a forecast with the corresponding perturbed initial state (PV-satellite). The non-hydrostatic MM5 mesoscale model has been used to run all forecasts. The simulations are performed for a two-day period with a 22.5 km resolution domain (Domain 1 in http://mm5forecasts.uib.es) nested in the ECMWF large-scale forecast fields. The MEDEX cyclone of 10 June 2000, also known as the Montserrat case, is a suitable testbed to compare the performance of each ensemble and the PV-satellite method. This case is characterized by an Atlantic upper-level trough and a low-level cold front which generated a stationary mesoscale cyclone over the Spanish Mediterranean coast, advecting warm and moist air toward Catalonia from the Mediterranean Sea. The resulting mesoscale convective system produced 6-h accumulated rainfall amounts of 180 mm, with material losses estimated by the media to exceed 65 million euros. The performance of both ensemble forecasting systems and of the PV-satellite technique for our case study is evaluated through verification of the rainfall field. Since the EPSs are probabilistic forecasts and the PV-satellite is deterministic, their comparison is done using the individual ensemble members. The verification procedure therefore uses deterministic scores, such as the ROC curve, the Taylor diagram and the Q-Q plot. These scores cover the different quality attributes of the forecast, such as reliability, resolution, uncertainty and sharpness. The results show that the performance of the PV-satellite technique lies within the performance range obtained by both ensembles; it is even better than the non-perturbed ensemble member. Thus, perturbing randomly using the PV error climatology and introducing the perturbations in the zones given by each EPS captures the mismatch between PV and WV fields better than manual perturbations made by an expert forecaster, at least for this case study.
Time-dependent generalized Gibbs ensembles in open quantum systems
NASA Astrophysics Data System (ADS)
Lange, Florian; Lenarčič, Zala; Rosch, Achim
2018-04-01
Generalized Gibbs ensembles have been used as powerful tools to describe the steady state of integrable many-particle quantum systems after a sudden change of the Hamiltonian. Here, we demonstrate numerically that they can be used for a much broader class of problems. We consider integrable systems in the presence of weak perturbations which both break integrability and drive the system to a state far from equilibrium. Under these conditions, we show that the steady state and the time evolution on long timescales can be accurately described by a (truncated) generalized Gibbs ensemble with time-dependent Lagrange parameters, determined from simple rate equations. We compare the numerically exact time evolutions of density matrices for small systems with a theory based on block-diagonal density matrices (diagonal ensemble) and a time-dependent generalized Gibbs ensemble containing only a small number of approximately conserved quantities, using the one-dimensional Heisenberg model with perturbations described by Lindblad operators as an example.
A hybrid variational ensemble data assimilation for the HIgh Resolution Limited Area Model (HIRLAM)
NASA Astrophysics Data System (ADS)
Gustafsson, N.; Bojarova, J.; Vignes, O.
2014-02-01
A hybrid variational ensemble data assimilation has been developed on top of the HIRLAM variational data assimilation. It provides the possibility of applying a flow-dependent background error covariance model during the data assimilation at the same time as full rank characteristics of the variational data assimilation are preserved. The hybrid formulation is based on an augmentation of the assimilation control variable with localised weights to be assigned to a set of ensemble member perturbations (deviations from the ensemble mean). The flow-dependency of the hybrid assimilation is demonstrated in single simulated observation impact studies and the improved performance of the hybrid assimilation in comparison with pure 3-dimensional variational as well as pure ensemble assimilation is also proven in real observation assimilation experiments. The performance of the hybrid assimilation is comparable to the performance of the 4-dimensional variational data assimilation. The sensitivity to various parameters of the hybrid assimilation scheme and the sensitivity to the applied ensemble generation techniques are also examined. In particular, the inclusion of ensemble perturbations with a lagged validity time has been examined with encouraging results.
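The hybrid idea above, a flow-dependent ensemble covariance blended with a full-rank static covariance, can be written down compactly. The HIRLAM scheme implements this implicitly, by augmenting the control variable with localised ensemble weights; the explicit covariance blend below is the mathematically equivalent view, with illustrative sizes, scales and weights.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_ens = 40, 10          # state size and ensemble size (illustrative)

# Static (climatological) background error covariance: Gaussian-shaped
# correlations decaying with grid distance
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B_static = np.exp(-(dist / 5.0) ** 2)

# Draw an ensemble consistent with B_static, then form perturbations
# (deviations from the ensemble mean)
evals, evecs = np.linalg.eigh(B_static)
L = evecs @ np.diag(np.sqrt(np.clip(evals, 0.0, None)))
ens = L @ rng.standard_normal((n, n_ens))
X = (ens - ens.mean(axis=1, keepdims=True)) / np.sqrt(n_ens - 1)

# Schur (element-wise) localization damps the spurious long-range
# covariances caused by the small ensemble size
loc = np.exp(-(dist / 8.0) ** 2)
B_ens = loc * (X @ X.T)

# Hybrid covariance: weighted blend of static and localized ensemble parts
beta = 0.5                 # ensemble weight (a tuning parameter)
B_hybrid = (1.0 - beta) * B_static + beta * B_ens
```

Because the Schur product of positive semi-definite matrices is positive semi-definite, the blended covariance remains a valid covariance for any weight between 0 and 1.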
Constraining a Coastal Ocean Model by Surface Observations Using an Ensemble Kalman Filter
NASA Astrophysics Data System (ADS)
De Mey, P. J.; Ayoub, N. K.
2016-02-01
We explore the impact of assimilating sea surface temperature (SST) and sea surface height (SSH) observations in the Bay of Biscay (North-East Atlantic). The study is conducted in the SYMPHONIE coastal circulation model (Marsaleix et al., 2009) on a 3 km × 3 km grid, with 43 sigma levels. Ensembles are generated by perturbing the wind forcing to analyze the model error subspace spanned by its response to wind forcing uncertainties. The assimilation method is a 4D Ensemble Kalman Filter algorithm with localization. We use the SDAP code developed in the team (https://sourceforge.net/projects/sequoia-dap/). In a first step before the assimilation of real observations, we set up an Ensemble twin experiment protocol where a nature run as well as noisy pseudo-observations of SST and SSH are generated from an Ensemble member (later discarded from the assimilative Ensemble). Our objectives are to assess (1) the adequacy of the choice of error source and perturbation strategy and (2) how effective the surface observational constraint is at constraining the surface and subsurface fields. We first illustrate characteristics of the error subspace generated by the perturbation strategy. We then show that, while the EnKF solves a single seamless problem regardless of the region within our domain, the nature and effectiveness of the data constraint over the shelf differ from those over the abyssal plain.
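A minimal analysis step of the stochastic (perturbed-observation) Ensemble Kalman Filter used in such twin experiments can be sketched as follows. The grid size, observation network and error levels here are arbitrary stand-ins, not the SYMPHONIE/SDAP configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ens, n_obs = 30, 20, 5     # state, ensemble and observation sizes
obs_err = 0.1

truth = np.sin(np.linspace(0.0, 2.0 * np.pi, n))   # stand-in "nature run"

# Observe n_obs evenly spaced grid points (e.g. surface fields)
H = np.zeros((n_obs, n))
for k, idx in enumerate(np.linspace(0, n - 1, n_obs).astype(int)):
    H[k, idx] = 1.0
R = obs_err ** 2 * np.eye(n_obs)

# Prior ensemble: truth plus noise; pseudo-observations from the truth
ens = truth[:, None] + 0.5 * rng.standard_normal((n, n_ens))
obs = H @ truth + obs_err * rng.standard_normal(n_obs)

# Stochastic EnKF analysis: ensemble covariance, Kalman gain, and a
# freshly perturbed copy of the observations for each member
Xp = (ens - ens.mean(axis=1, keepdims=True)) / np.sqrt(n_ens - 1)
Pf = Xp @ Xp.T
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
for m in range(n_ens):
    y_m = obs + obs_err * rng.standard_normal(n_obs)
    ens[:, m] += K @ (y_m - H @ ens[:, m])
```

Localization, as used with the SDAP code, would taper `Pf` before computing the gain; it is omitted here for brevity.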
Examination of multi-model ensemble seasonal prediction methods using a simple climate system
NASA Astrophysics Data System (ADS)
Kang, In-Sik; Yoo, Jin Ho
2006-02-01
A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240-year) historical hindcast predictions was performed with the various prediction models and used to examine various issues of multi-model ensemble seasonal prediction, such as the best ways of blending multiple models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, the superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for the case of small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
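The three combination methods compared above can be sketched on synthetic hindcasts. Here each "model" is the truth rescaled, biased and degraded by noise; in-sample, the superensemble regression fits at least as well as either composite, which is exactly why it is the most prone to overfitting on small training samples.

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_models = 40, 5                 # 40 "years", 5 models

truth = rng.standard_normal(n_train)      # synthetic observed anomaly
# Each model: scaled and biased truth plus noise (stand-in for hindcasts)
bias = rng.uniform(-1, 1, n_models)
scale = rng.uniform(0.5, 1.5, n_models)
preds = scale[:, None] * truth + bias[:, None] \
        + 0.8 * rng.standard_normal((n_models, n_train))

# 1) Simple composite: plain multi-model mean
composite = preds.mean(axis=0)

# 2) Superensemble: multiple linear regression of truth on all models
A = np.vstack([preds, np.ones(n_train)]).T
coef, *_ = np.linalg.lstsq(A, truth, rcond=None)
superensemble = A @ coef

# 3) Corrected composite: regress truth on each model separately to
#    remove its bias and amplitude error, then average the corrections
corrected = np.zeros((n_models, n_train))
for m in range(n_models):
    a = np.vstack([preds[m], np.ones(n_train)]).T
    c, *_ = np.linalg.lstsq(a, truth, rcond=None)
    corrected[m] = a @ c
corrected_composite = corrected.mean(axis=0)

def rmse(x):
    return np.sqrt(np.mean((x - truth) ** 2))
```

Out-of-sample skill would be assessed with cross-validation over the training years; the in-sample ranking above does not carry over when the training sample is small.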
NASA Astrophysics Data System (ADS)
Rowley, C. D.; Hogan, P. J.; Martin, P.; Thoppil, P.; Wei, M.
2017-12-01
An extended range ensemble forecast system is being developed in the US Navy Earth System Prediction Capability (ESPC), and a global ocean ensemble generation capability to represent uncertainty in the ocean initial conditions has been developed. At extended forecast times, the uncertainty due to model error overtakes initial-condition uncertainty as the primary source of forecast uncertainty. Recently, stochastic parameterization or stochastic forcing techniques have been applied to represent the model error in research and operational atmospheric, ocean, and coupled ensemble forecasts. A simple stochastic forcing technique has been developed for application to US Navy high resolution regional and global ocean models, for use in ocean-only and coupled atmosphere-ocean-ice-wave ensemble forecast systems. Perturbation forcing is added to the tendency equations for state variables, with the forcing defined by random 3- or 4-dimensional fields with horizontal, vertical, and temporal correlations specified to characterize different possible kinds of error. Here, we demonstrate the stochastic forcing in regional and global ensemble forecasts with varying perturbation amplitudes and length and time scales, and assess the change in ensemble skill measured by a range of deterministic and probabilistic metrics.
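A common way to build such perturbation forcing fields, random fields with specified horizontal and temporal correlations added to the tendency equations, is an AR(1) recursion in time over spatially smoothed noise. The scales and amplitude below are illustrative, not the Navy ESPC settings.

```python
import numpy as np

rng = np.random.default_rng(4)
nx, ny, nt = 64, 64, 50
L = 6.0          # horizontal correlation scale (grid points)
tau = 10.0       # decorrelation time (time steps)
sigma = 0.1      # target perturbation amplitude

# Spectral filter imposing a Gaussian horizontal correlation of scale L
kx = np.fft.fftfreq(nx)[:, None]
ky = np.fft.fftfreq(ny)[None, :]
filt = np.exp(-0.5 * (2 * np.pi * L) ** 2 * (kx ** 2 + ky ** 2))

def smooth_noise():
    w = rng.standard_normal((nx, ny))
    f = np.fft.ifft2(np.fft.fft2(w) * filt).real
    return f / f.std()            # renormalise to unit variance

# AR(1) recursion keeps the field temporally correlated with scale tau
phi = np.exp(-1.0 / tau)
field = smooth_noise()
history = np.empty((nt, nx, ny))
for t in range(nt):
    field = phi * field + np.sqrt(1.0 - phi ** 2) * smooth_noise()
    history[t] = field
forcing = sigma * history         # added to the model tendency equations
```

The variance-preserving AR(1) coefficients keep the forcing amplitude stationary, so `sigma` alone controls the perturbation magnitude.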
Zhu, Guanhua; Liu, Wei; Bao, Chenglong; Tong, Dudu; Ji, Hui; Shen, Zuowei; Yang, Daiwen; Lu, Lanyuan
2018-05-01
The structural variations of multidomain proteins with flexible parts mediate many biological processes, and a structure ensemble can be determined by selecting a weighted combination of representative structures from a simulated structure pool, producing the best fit to experimental constraints such as interatomic distance. In this study, a hybrid structure-based and physics-based atomistic force field with an efficient sampling strategy is adopted to simulate a model di-domain protein against experimental paramagnetic relaxation enhancement (PRE) data that correspond to distance constraints. The molecular dynamics simulations produce a wide range of conformations depicted on a protein energy landscape. Subsequently, a conformational ensemble recovered with low-energy structures and the minimum-size restraint is identified in good agreement with experimental PRE rates, and the result is also supported by chemical shift perturbations and small-angle X-ray scattering data. It is illustrated that the regularizations of energy and ensemble size prevent an arbitrary interpretation of protein conformations. Moreover, energy is found to serve as a critical control to refine the structure pool and prevent data overfitting, because the absence of energy regularization exposes ensemble construction to the noise from high-energy structures and causes a more ambiguous representation of protein conformations. Finally, we perform structure-ensemble optimizations with a topology-based structure pool, to enhance the understanding of the ensemble results obtained from different sources of pool candidates. © 2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Lamraoui, F.; Booth, J. F.; Naud, C. M.
2017-12-01
The representation of subgrid-scale processes in low-level marine clouds located in the post-cold-frontal region poses a serious challenge for climate models. More precisely, the boundary layer parameterizations are predominantly designed for individual regimes that evolve gradually over time, and do not accommodate a cold-front passage that can modify the boundary layer rapidly. Also, the microphysics schemes respond differently to the quick development produced by the boundary layer schemes, especially under unstable conditions. To improve the understanding of cloud physics in the post-cold-frontal region, the present study focuses on exploring the relationship between cloud properties, local processes and large-scale conditions. To address these questions, we explore the WRF sensitivity to the interaction between various combinations of boundary layer and microphysics parameterizations, including the Community Atmospheric Model version 5 (CAM5) physics package, in a perturbed physics ensemble. We then evaluate these simulations against ground-based ARM observations over the Azores. The WRF-based simulations demonstrate particular sensitivities of the marine cold front passage and the associated post-cold-frontal clouds to the domain size, the resolution and the physical parameterizations. First, it is found in multiple case studies that the model cannot generate the cold front passage when the domain size is larger than 3000 km2. Instead, the modeled cold front stalls, which shows the importance of properly capturing the synoptic-scale conditions. The simulations reveal a persistent delay in capturing the cold front passage and an underestimated duration of the post-cold-frontal conditions. Analysis of the perturbed physics ensemble shows that changing the microphysics scheme leads to larger differences in the modeled clouds than changing the boundary layer scheme. 
The in-cloud heating tendencies are analyzed to explain this sensitivity.
Fidelity decay in interacting two-level boson systems: Freezing and revivals
NASA Astrophysics Data System (ADS)
Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.
2011-05-01
We study the fidelity decay in the k-body embedded ensembles of random matrices for bosons distributed in two single-particle states, considering the reference or unperturbed Hamiltonian as the one-body terms and the diagonal part of the k-body embedded ensemble of random matrices and the perturbation as the residual off-diagonal part of the interaction. We calculate the ensemble-averaged fidelity with respect to an initial random state within linear response theory to second order in the perturbation strength and demonstrate that it displays the freeze of the fidelity. During the freeze, the average fidelity exhibits periodic revivals at integer values of the Heisenberg time tH. By selecting specific k-body terms of the residual interaction, we find that the periodicity of the revivals during the freeze of fidelity is an integer fraction of tH, thus relating the period of the revivals with the range of the interaction k of the perturbing terms. Numerical calculations confirm the analytical results.
NASA Astrophysics Data System (ADS)
Taniguchi, Kenji
2018-04-01
To investigate future variations in high-impact weather events, numerous samples are required. For detailed assessment in a specific region, a high spatial resolution is also required. A simple ensemble simulation technique is proposed in this paper. In the proposed technique, new ensemble members are generated from one basic state vector and two perturbation vectors, which are obtained from lagged average forecasting simulations. Sensitivity experiments with different numbers of ensemble members, different simulation lengths, and different perturbation magnitudes were performed. An experimental application to a global warming study was also implemented for a typhoon event. Ensemble-mean results and ensemble spreads of total precipitation and of atmospheric conditions showed similar characteristics across the sensitivity experiments. The frequencies of the maximum total and hourly precipitation also showed similar distributions. These results indicate the robustness of the proposed technique. On the other hand, considerable ensemble spread was found in each ensemble experiment. In addition, the results of the application to a global warming study showed possible variations in the future. These results indicate that the proposed technique is useful for investigating various meteorological phenomena and the impacts of global warming. The results of the ensemble simulations also enable the stochastic evaluation of differences in high-impact weather events. In addition, the impacts of a spectral nudging technique were examined. The tracks of a typhoon were quite different between cases with and without spectral nudging; however, the ranges of the tracks among ensemble members were comparable. This indicates that spectral nudging does not necessarily suppress ensemble spread.
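The member-generation rule described above (one basic state vector plus scaled combinations of two lagged-forecast perturbation vectors) reduces to a few lines. The coefficient pattern below is a hypothetical choice for illustration; a pattern symmetric about zero keeps the ensemble mean on the basic state.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100                 # size of the model state vector (illustrative)

# Stand-ins for the basic state and the two perturbation vectors taken
# from lagged average forecasting simulations
x_basic = rng.standard_normal(n)
p1 = 0.1 * rng.standard_normal(n)
p2 = 0.1 * rng.standard_normal(n)

# New members: basic state plus scaled combinations of p1 and p2.
# The symmetric coefficient grid is an assumption for this sketch.
coeffs = [(a, b) for a in (-1.0, -0.5, 0.5, 1.0)
                 for b in (-1.0, -0.5, 0.5, 1.0)]
members = np.array([x_basic + a * p1 + b * p2 for a, b in coeffs])
```

Varying the grid of coefficients is what lets the technique trade off ensemble size against perturbation magnitude in the sensitivity experiments.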
NASA Astrophysics Data System (ADS)
Sastre, Francisco; Moreno-Hilario, Elizabeth; Sotelo-Serna, Maria Guadalupe; Gil-Villegas, Alejandro
2018-02-01
The microcanonical-ensemble computer simulation method (MCE) is used to evaluate the perturbation terms Ai of the Helmholtz free energy of a square-well (SW) fluid. The MCE method offers a very efficient and accurate procedure for the determination of perturbation terms of discrete-potential systems such as the SW fluid and surpasses the standard NVT canonical-ensemble Monte Carlo method, allowing the calculation of the first six expansion terms. Results are presented for the case of a SW potential with attractive ranges 1.1 ≤ λ ≤ 1.8. Using a semi-empirical representation of the MCE values for Ai, we also discuss the accuracy of the determination of the phase diagram of this system.
NASA Astrophysics Data System (ADS)
Batté, Lauriane; Déqué, Michel
2016-06-01
Stochastic methods are increasingly used in global coupled model climate forecasting systems to account for model uncertainties. In this paper, we describe in more detail the stochastic dynamics technique introduced by Batté and Déqué (2012) in the ARPEGE-Climate atmospheric model. We present new results with an updated version of CNRM-CM using ARPEGE-Climate v6.1, and show that the technique can be used both as a means of analyzing model error statistics and accounting for model inadequacies in a seasonal forecasting framework. The perturbations are designed as corrections of model drift errors estimated from a preliminary weakly nudged re-forecast run over an extended reference period of 34 boreal winter seasons. A detailed statistical analysis of these corrections is provided, and shows that they are mainly made of intra-month variance, thereby justifying their use as in-run perturbations of the model in seasonal forecasts. However, the interannual and systematic error correction terms cannot be neglected. Time correlation of the errors is limited, but some consistency is found among the errors on up to 3 consecutive days. These findings encourage us to test several settings of the random draws of perturbations in seasonal forecast mode. Perturbations are drawn randomly but consistently for all three prognostic variables perturbed. We explore the impact of using monthly mean perturbations throughout a given forecast month in a first ensemble re-forecast (SMM, for stochastic monthly means), and test the use of 5-day sequences of perturbations in a second ensemble re-forecast (S5D, for stochastic 5-day sequences). Both experiments are compared in the light of a REF reference ensemble with initial perturbations only. Results in terms of forecast quality are contrasted depending on the region and variable of interest, but very few areas exhibit a clear degradation of forecasting skill with the introduction of stochastic dynamics. 
We highlight some positive impacts of the method, mainly on Northern Hemisphere extra-tropics. The 500 hPa geopotential height bias is reduced, and improvements project onto the representation of North Atlantic weather regimes. A modest impact on ensemble spread is found over most regions, which suggests that this method could be complemented by other stochastic perturbation techniques in seasonal forecasting mode.
An ensemble forecast of the South China Sea monsoon
NASA Astrophysics Data System (ADS)
Krishnamurti, T. N.; Tewari, Mukul; Bensman, Ed; Han, Wei; Zhang, Zhan; Lau, William K. M.
1999-05-01
This paper presents a generalized ensemble forecast procedure for the tropical latitudes. Here we propose an empirical orthogonal function-based procedure for the definition of a seven-member ensemble. The wind and temperature fields are perturbed over the global tropics. Although the forecasts are made over the global belt with a high-resolution model, the emphasis of this study is on the South China Sea monsoon. The South China Sea domain includes the passage of Tropical Storm Gary, which moved eastwards north of the Philippines. The ensemble forecast handled the precipitation of this storm reasonably well. A global model at the resolution of Triangular Truncation 126 waves is used to carry out these seven forecasts. The evaluation of the ensemble of forecasts is carried out via standard root mean square errors of the precipitation and wind fields. The ensemble average is shown to have higher skill than a control experiment, based on a first analysis from operational data sets, over both the global tropical and South China Sea domains. All of these experiments were subjected to physical initialization, which provides a spin-up of the model rain close to that obtained from satellite and gauge-based estimates. The results furthermore show that inherently much higher skill resides in the forecast precipitation fields if they are averaged over area elements of the order of 4° latitude by 4° longitude squares.
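An EOF-based definition of a seven-member ensemble, the control plus and minus the leading anomaly patterns, can be sketched with an SVD. The synthetic states and the perturbation amplitude below are placeholders for the wind and temperature analyses perturbed in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n_grid, n_samples = 200, 30   # gridded state size and number of analyses

# Synthetic stand-in for a sequence of tropical wind/temperature states
states = rng.standard_normal((n_samples, n_grid)).cumsum(axis=0)
anom = states - states.mean(axis=0)          # anomalies about the mean

# EOFs are the right singular vectors of the anomaly matrix
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
eofs = Vt                                     # each row is one EOF pattern

# Seven members: the control analysis plus +/- the three leading EOFs
# scaled by a chosen perturbation amplitude
control = states[-1]
amp = 0.5
members = [control]
for k in range(3):
    members.append(control + amp * eofs[k])
    members.append(control - amp * eofs[k])
members = np.array(members)
```

Paired plus/minus perturbations keep the ensemble mean on the control analysis while sampling the directions of largest historical variance.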
NASA Astrophysics Data System (ADS)
Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.
2018-02-01
The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.
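The flavour of such an ensemble consistency test can be conveyed with a simplified PCA-based pass/fail rule: project a test run onto the accepted ensemble's principal components and flag scores falling outside the ensemble's range. The synthetic data, the number of retained PCs, and the pass/fail rule here are simplifications of the actual CESM-ECT statistics.

```python
import numpy as np

rng = np.random.default_rng(7)
n_vars, n_ens = 20, 60    # output variables and accepted-ensemble size

# Synthetic accepted ensemble: runs differing only through small
# perturbations (stand-in for CAM variable summaries)
ensemble = rng.standard_normal((n_ens, n_vars)) * np.linspace(2.0, 0.2, n_vars)

# Standardize the variables, then diagonalize the ensemble covariance
mean, std = ensemble.mean(axis=0), ensemble.std(axis=0)
Z = (ensemble - mean) / std
cov = Z.T @ Z / (n_ens - 1)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

scores = Z @ evecs                   # PC scores of the accepted runs
lo, hi = scores.min(axis=0), scores.max(axis=0)

def consistent(run, n_pc=10):
    """Pass if the run's leading PC scores lie inside the accepted
    ensemble's score range (a simplified pass/fail rule)."""
    s = ((run - mean) / std) @ evecs
    return bool(np.all((s[:n_pc] >= lo[:n_pc]) & (s[:n_pc] <= hi[:n_pc])))
```

The ultra-fast variant in the abstract applies this kind of statistic to instantaneous variable values after only nine time steps rather than to full simulation output.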
NASA Astrophysics Data System (ADS)
Romanova, Vanya; Hense, Andreas
2017-08-01
In our study we use the anomaly transform (AT), a special case of the ensemble transform method, in which a selected set of initial oceanic anomalies in space, time and variables is defined and orthogonalized. The resulting orthogonal perturbation patterns are designed such that they pick up typical balanced anomaly structures in space and time and between variables. The metric used to set up the eigenproblem is taken either as the weighted total energy, with its zonal kinetic, meridional kinetic and available potential energy terms having equal contributions, or the weighted ocean heat content, in which a disturbance is applied only to the initial temperature fields. The choices of a reference state for defining the initial anomalies are such that perturbations on either seasonal or interannual timescales are constructed. These project a priori only onto the slow modes of the ocean physical processes, such that the disturbances grow mainly in the western boundary currents, the Antarctic Circumpolar Current and the El Niño-Southern Oscillation regions. An additional set of initial conditions is designed to fit, in a least-squares sense, data from a global ocean reanalysis. Applying the AT-produced sets of disturbances to observation-initialized oceanic initial conditions of the MPIOM-ESM coupled model at T63L47/GR15 resolution, four ensemble experiments and one hindcast experiment were performed. The weighted total energy norm is used to monitor the amplitudes and rates of the fastest growing error modes. The results showed minor dependence of the instabilities or error growth on the selected metric but considerable change due to the magnitude of the scaling amplitudes of the perturbation patterns. In contrast to similar atmospheric applications, we find an energy conversion from kinetic to available potential energy, which suggests a different source of uncertainty generation in the ocean than in the atmosphere, mainly associated with changes in the density field.
NASA Astrophysics Data System (ADS)
Vlasov, Vladimir; Rosenblum, Michael; Pikovsky, Arkady
2016-08-01
As has been shown by Watanabe and Strogatz (WS) (1993 Phys. Rev. Lett. 70 2391), a population of identical phase oscillators, sine-coupled to a common field, is a partially integrable system: for any ensemble size its dynamics reduce to equations for three collective variables. Here we develop a perturbation approach for weakly nonidentical ensembles. We calculate corrections to the WS dynamics for two types of perturbations: those due to a distribution of natural frequencies and of forcing terms, and those due to small white noise. We demonstrate that in both cases, the complex mean field for which the dynamical equations are written is close to the Kuramoto order parameter, up to the leading order in the perturbation. This supports the validity of the dynamical reduction suggested by Ott and Antonsen (2008 Chaos 18 037113) for weakly inhomogeneous populations.
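The complex mean field discussed above is straightforward to compute numerically. The following sketch integrates identical sine-coupled oscillators with a simple Euler scheme and evaluates the Kuramoto order parameter; the coupling form, step size, and ensemble size are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def order_parameter(theta):
    """Kuramoto complex mean field Z = (1/N) sum_j exp(i*theta_j)."""
    return np.mean(np.exp(1j * theta))

def simulate_identical_oscillators(n=100, coupling=1.0, steps=2000,
                                   dt=0.01, seed=0):
    """Euler integration of identical oscillators sine-coupled to their
    own mean field: dtheta_j/dt = omega + K * Im(Z exp(-i*theta_j))."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    omega = 1.0
    for _ in range(steps):
        z = order_parameter(theta)
        theta = theta + dt * (omega + coupling * (z * np.exp(-1j * theta)).imag)
    return theta

theta = simulate_identical_oscillators()
r = abs(order_parameter(theta))   # attractive coupling synchronizes the ensemble
```

For identical oscillators this system is exactly the WS-reducible case; the perturbation theory in the paper addresses what happens to |Z| when small frequency spreads or noise break that degeneracy.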
Climate Risk Management in the Anthropocene: From Basic Science to Decisionmaking and Back.
NASA Astrophysics Data System (ADS)
King, A.; Karoly, D. J.
2014-12-01
In this talk I will discuss studies our group has conducted to investigate the role of anthropogenic climate change in the heavy rains of 2010-2012 and the heat and drought of 2013. Using a range of methodologies based on coupled climate models from the CMIP5 archive and very large atmosphere-only ensembles from the Weather@Home Australia-New Zealand project, we have found increases in the likelihood of hot extremes, such as the summer of 2012/13 and individual record-breaking hot days within that summer. In contrast, studies of the precipitation extremes that occurred in the summer of 2011/12 found limited evidence for a substantial anthropogenic role in these events. I will also present briefly on avenues of research we are currently pursuing in the Australian community. These include investigating whether anthropogenic climate change has altered the likelihood of weather associated with bushfires, and the implementation of perturbed physics in the Weather@Home ensemble to allow us to study the potential role of human-induced climate change in extreme rainfall events.
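Attribution statements like "increases in the likelihood of hot extremes" are typically quantified with a probability ratio between a factual (all-forcings) and a counterfactual (natural-only) ensemble. The sketch below shows that calculation on synthetic data; the Gaussian ensembles, the +0.5 shift, and the threshold are invented for illustration and are not results from the talk.

```python
import numpy as np

def probability_ratio(factual, counterfactual, threshold):
    """Probability ratio PR = P1/P0 and fraction of attributable risk
    FAR = 1 - P0/P1 for exceeding a threshold, from two ensembles."""
    p1 = np.mean(np.asarray(factual) >= threshold)
    p0 = np.mean(np.asarray(counterfactual) >= threshold)
    return p1 / p0, 1.0 - p0 / p1

rng = np.random.default_rng(2)
natural = rng.normal(0.0, 1.0, 100_000)   # counterfactual summer index
forced = rng.normal(0.5, 1.0, 100_000)    # factual ensemble, shifted warm
pr, far = probability_ratio(forced, natural, threshold=2.0)
```

Even a modest mean shift multiplies the exceedance probability of a fixed extreme threshold severalfold, which is why very large ensembles such as Weather@Home are needed to estimate these tail probabilities reliably.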
DOE Office of Scientific and Technical Information (OSTI.GOV)
Covey, Curt; Lucas, Donald D.; Trenberth, Kevin E.
2016-03-02
This document presents the large-scale water budget statistics of a perturbed input-parameter ensemble of atmospheric model runs. The model is Version 5.1.02 of the Community Atmosphere Model (CAM). These runs are the “C-Ensemble” described by Qian et al., “Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5” (Journal of Advances in Modeling Earth Systems, 2015). As noted by Qian et al., the simulations are “AMIP type” with temperature and sea ice boundary conditions chosen to match surface observations for the five-year period 2000-2004. There are 1100 ensemble members in addition to one run with default input-parameter values.
Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...
2015-04-10
We investigate the sensitivity of precipitation characteristics (mean, extreme and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that of the 22 parameters perturbed in the cloud ensemble, the six having the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approaches or concomitant parameters selected. Generally the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
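A Latin hypercube design of the kind used to build the 1100-member cloud ensemble can be sketched in a few lines. The implementation below is generic, and the unit parameter bounds are placeholder assumptions (the actual CAM5 parameters have physical ranges).

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample of a box-shaped parameter space.
    bounds: sequence of (low, high) pairs, one per uncertain parameter."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # One stratum index per sample, independently permuted per parameter.
    strata = rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
    # Jitter each sample uniformly within its stratum.
    u = (strata + rng.uniform(size=(n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# 1100 design points over 22 parameters, as in the cloud ensemble.
design = latin_hypercube(1100, [(0.0, 1.0)] * 22)
```

The stratification guarantees that every parameter's marginal range is covered exactly once per stratum, which is why such designs explore high-dimensional spaces far more efficiently than plain random sampling at the same ensemble size.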
Fidelity under isospectral perturbations: a random matrix study
NASA Astrophysics Data System (ADS)
Leyvraz, F.; García, A.; Kohler, H.; Seligman, T. H.
2013-07-01
The set of Hamiltonians generated by all unitary transformations from a single Hamiltonian is the largest set of isospectral Hamiltonians we can form. Taking advantage of the fact that the unitary group can be generated from Hermitian matrices, we can take the unitaries generated by Gaussian unitary ensemble matrices with a small parameter as small perturbations. Similarly, the transformations generated by Hermitian antisymmetric matrices are orthogonal matrices and form isospectral transformations among symmetric matrices. Based on this concept we can obtain the fidelity decay of a system that decays under a random isospectral perturbation with well-defined properties regarding time-reversal invariance. If we choose the Hamiltonian itself also from a classical random matrix ensemble, then we obtain solutions in terms of form factors in the limit of large matrices.
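The construction of a random isospectral perturbation is easy to demonstrate numerically: draw a GUE matrix W, exponentiate it into a unitary U = exp(-i*epsilon*W), and conjugate the Hamiltonian. The sketch below is a minimal illustration with an arbitrary dimension and epsilon, not the paper's ensemble-averaged fidelity calculation.

```python
import numpy as np

def gue(n, rng):
    """Draw an n x n matrix from the Gaussian unitary ensemble."""
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (a + a.conj().T) / 2.0

def isospectral_perturbation(h, epsilon, rng):
    """Conjugate h by U = exp(-i*epsilon*W) with W drawn from the GUE,
    giving a randomly perturbed but exactly isospectral Hamiltonian."""
    vals, vecs = np.linalg.eigh(gue(len(h), rng))
    u = (vecs * np.exp(-1j * epsilon * vals)) @ vecs.conj().T
    return u @ h @ u.conj().T

rng = np.random.default_rng(0)
h = gue(6, rng)
h_pert = isospectral_perturbation(h, epsilon=0.05, rng=rng)
```

Conjugation changes the eigenvectors but leaves the spectrum exactly fixed, which is the defining property exploited in the fidelity-decay analysis.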
Cosmological ensemble and directional averages of observables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonvin, Camille; Clarkson, Chris; Durrer, Ruth
We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.
NASA Technical Reports Server (NTRS)
Jahshan, S. N.; Singleterry, R. C.
2001-01-01
The effect of random fuel redistribution on the eigenvalue of a one-speed reactor is investigated. An ensemble of such reactors that are identical to a homogeneous reference critical reactor except for the fissile isotope density distribution is constructed such that it meets a set of well-posed redistribution requirements. The average eigenvalue,
A statistical state dynamics approach to wall turbulence.
Farrell, B F; Gayme, D F; Ioannou, P J
2017-03-13
This paper reviews results obtained using statistical state dynamics (SSD) that demonstrate the benefits of adopting this perspective for understanding turbulence in wall-bounded shear flows. The SSD approach used in this work employs a second-order closure that retains only the interaction between the streamwise mean flow and the streamwise mean perturbation covariance. This closure restricts nonlinearity in the SSD to that explicitly retained in the streamwise constant mean flow together with nonlinear interactions between the mean flow and the perturbation covariance. This dynamical restriction, in which explicit perturbation-perturbation nonlinearity is removed from the perturbation equation, results in a simplified dynamics referred to as the restricted nonlinear (RNL) dynamics. RNL systems, in which a finite ensemble of realizations of the perturbation equation share the same mean flow, provide tractable approximations to the SSD, which is equivalent to an infinite ensemble RNL system. This infinite ensemble system, referred to as the stochastic structural stability theory system, introduces new analysis tools for studying turbulence. RNL systems provide computationally efficient means to approximate the SSD and produce self-sustaining turbulence exhibiting qualitative features similar to those observed in direct numerical simulations despite greatly simplified dynamics. The results presented show that RNL turbulence can be supported by as few as a single streamwise varying component interacting with the streamwise constant mean flow and that judicious selection of this truncated support or 'band-limiting' can be used to improve quantitative accuracy of RNL turbulence. These results suggest that the SSD approach provides new analytical and computational tools that allow new insights into wall turbulence. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).
A mesoscale hybrid data assimilation system based on the JMA nonhydrostatic model
NASA Astrophysics Data System (ADS)
Ito, K.; Kunii, M.; Kawabata, T. T.; Saito, K. K.; Duc, L. L.
2015-12-01
This work evaluates the potential of a hybrid ensemble Kalman filter and four-dimensional variational (4D-Var) data assimilation system for predicting severe weather events from a deterministic point of view. This hybrid system is an adjoint-based 4D-Var system using a background error covariance matrix constructed from a mixture of the so-called NMC method and perturbations from a local ensemble transform Kalman filter data assimilation system, both of which are based on the Japan Meteorological Agency nonhydrostatic model. To construct the background error covariance matrix, we investigated two types of schemes. One is a spatial localization scheme and the other is a neighboring ensemble approach, which regards the result at a horizontally shifted point in each ensemble member as that obtained from a different realization of the ensemble simulation. An assimilation of a pseudo single observation located to the north of a tropical cyclone (TC) yielded an analysis increment of wind and temperature physically consistent with what is expected for a mature TC in both hybrid systems, whereas the analysis increment in a 4D-Var system using a static background error covariance distorted the structure of the mature TC. Real data assimilation experiments applied to 4 TCs and 3 local heavy rainfall events showed that the hybrid systems and EnKF provided better initial conditions than the NMC-based 4D-Var, both for TC intensity and track forecasts and for the location and amount of local heavy rainfall.
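The core of any such hybrid scheme is a background error covariance blended from a static part and a localized ensemble part. The sketch below shows that blending for a 1-D state; the Gaussian localization, the equal weights, and the grid are illustrative assumptions, not the JMA system's actual formulation.

```python
import numpy as np

def hybrid_covariance(b_static, ensemble, beta=0.5, loc_radius=2.0):
    """Hybrid background-error covariance for a 1-D state:
    B = (1 - beta) * B_static + beta * (C o P_ens), where P_ens is the
    ensemble sample covariance and C a Gaussian localization matrix
    (o denotes the element-wise Schur product)."""
    perts = ensemble - ensemble.mean(axis=0)
    p_ens = perts.T @ perts / (len(ensemble) - 1)
    n = b_static.shape[0]
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    loc = np.exp(-0.5 * (dist / loc_radius) ** 2)
    return (1.0 - beta) * b_static + beta * loc * p_ens

rng = np.random.default_rng(0)
members = rng.standard_normal((20, 10))   # 20 members, 10 grid points
b_hybrid = hybrid_covariance(np.eye(10), members)
```

Localization suppresses the spurious long-range correlations of a small ensemble, while the static term keeps the blended matrix full rank and positive definite.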
Simple Emergent Power Spectra from Complex Inflationary Physics
NASA Astrophysics Data System (ADS)
Dias, Mafalda; Frazer, Jonathan; Marsh, M. C. David
2016-09-01
We construct ensembles of random scalar potentials for Nf-interacting scalar fields using nonequilibrium random matrix theory, and use these to study the generation of observables during small-field inflation. For Nf=O (few ), these heavily featured scalar potentials give rise to power spectra that are highly nonlinear, at odds with observations. For Nf≫1 , the superhorizon evolution of the perturbations is generically substantial, yet the power spectra simplify considerably and become more predictive, with most realizations being well approximated by a linear power spectrum. This provides proof of principle that complex inflationary physics can give rise to simple emergent power spectra. We explain how these results can be understood in terms of large Nf universality of random matrix theory.
Uncertainties in climate assessment for the case of aviation NOx
Holmes, Christopher D.; Tang, Qi; Prather, Michael J.
2011-01-01
Nitrogen oxides emitted from aircraft engines alter the chemistry of the atmosphere, perturbing the greenhouse gases methane (CH4) and ozone (O3). We quantify uncertainties in radiative forcing (RF) due to short-lived increases in O3, long-lived decreases in CH4 and O3, and their net effect, using the ensemble of published models and a factor decomposition of each forcing. The decomposition captures major features of the ensemble, and also shows which processes drive the total uncertainty in several climate metrics. Aviation-specific factors drive most of the uncertainty for the short-lived O3 and long-lived CH4 RFs, but a nonaviation factor dominates for long-lived O3. The model ensemble shows strong anticorrelation between the short-lived and long-lived RF perturbations (R2 = 0.87). Uncertainty in the net RF is highly sensitive to this correlation. We reproduce the correlation and ensemble spread in one model, showing that processes controlling the background tropospheric abundance of nitrogen oxides are likely responsible for the modeling uncertainty in climate impacts from aviation. PMID:21690364
NASA Astrophysics Data System (ADS)
Caumont, Olivier; Hally, Alan; Garrote, Luis; Richard, Évelyne; Weerts, Albrecht; Delogu, Fabio; Fiori, Elisabetta; Rebora, Nicola; Parodi, Antonio; Mihalović, Ana; Ivković, Marija; Dekić, Ljiljana; van Verseveld, Willem; Nuissier, Olivier; Ducrocq, Véronique; D'Agostino, Daniele; Galizia, Antonella; Danovaro, Emanuele; Clematis, Andrea
2015-04-01
The FP7 DRIHM (Distributed Research Infrastructure for Hydro-Meteorology, http://www.drihm.eu, 2011-2015) project intends to develop a prototype e-Science environment to facilitate the collaboration between meteorologists, hydrologists, and Earth science experts for accelerated scientific advances in Hydro-Meteorology Research (HMR). As the project comes to its end, this presentation will summarize the HMR results that have been obtained in the framework of DRIHM. The vision shaped and implemented in the framework of the DRIHM project enables the production and interpretation of numerous, complex compositions of hydrometeorological simulations of flood events, from rainfall down to discharge. Each element of a composition is drawn from a set of various state-of-the-art models. Atmospheric simulations providing high-resolution rainfall forecasts involve different global and limited-area convection-resolving models, the former being used as boundary conditions for the latter. Some of these models can be run as ensembles, i.e. with perturbed boundary conditions, initial conditions and/or physics, thus sampling the probability density function of rainfall forecasts. In addition, a stochastic downscaling algorithm can be used to create high-resolution rainfall ensemble forecasts from deterministic lower-resolution forecasts. All these rainfall forecasts may be used as input to various rainfall-discharge hydrological models that compute the resulting stream flows for catchments of interest. In some hydrological simulations, physical parameters are perturbed to take into account model errors. As a result, six different kinds of rainfall data (either deterministic or probabilistic) can currently be compared with each other and combined with three different hydrological model engines running in either deterministic or probabilistic mode.
HMR topics which are enabled or facilitated by such unprecedented sets of hydrometeorological forecasts include: physical process studies, intercomparison of models and ensembles, sensitivity studies of a particular component of the forecasting chain, and design of flash-flood early-warning systems. These benefits will be illustrated with the different key cases that have been under investigation in the course of the project. These are four catastrophic cases of flooding, namely 4 November 2011 in Genoa, Italy, 6 November 2011 in Catalonia, Spain, 13-16 May 2014 in eastern Europe, and 9 October 2014, again in Genoa, Italy.
Impact of inherent meteorology uncertainty on air quality ...
It is well established that there are a number of different classifications and sources of uncertainties in environmental modeling systems. Air quality models rely on two key inputs, namely, meteorology and emissions. When using air quality models for decision making, it is important to understand how uncertainties in these inputs affect the simulated concentrations. Ensembles are one method to explore how uncertainty in meteorology affects air pollution concentrations. Most studies explore this uncertainty by running different meteorological models or the same model with different physics options, and in some cases combinations of different meteorological and air quality models. While these have been shown to be useful techniques in some cases, we present a technique that leverages the initial condition perturbations of a weather forecast ensemble, namely the Short-Range Ensemble Forecast system, to drive the four-dimensional data assimilation in the Weather Research and Forecasting (WRF)-Community Multiscale Air Quality (CMAQ) model, with a key focus being the response of ozone chemistry and transport. Results confirm that a sizable spread in WRF solutions, including common weather variables of temperature, wind, boundary layer depth, clouds, and radiation, can cause a relatively large range of ozone-mixing ratios. Pollutant transport can be altered by hundreds of kilometers over several days. Ozone-mixing ratios across the ensemble can vary by as much as 10–20 ppb.
NASA Astrophysics Data System (ADS)
Kleist, D. T.; Ide, K.; Mahajan, R.; Thomas, C.
2014-12-01
The use of hybrid error covariance models has become quite popular for numerical weather prediction (NWP). One such method for incorporating localized covariances from an ensemble within the variational framework utilizes an augmented control variable (EnVar) and has been implemented in the operational NCEP data assimilation system (GSI). By taking the existing 3D EnVar algorithm in GSI and allowing for four-dimensional ensemble perturbations, coupled with the 4DVar infrastructure already in place, a 4D EnVar capability has been developed. The 4D EnVar algorithm has a few attractive qualities relative to 4DVar, including no need for tangent-linear and adjoint models as well as reduced computational cost. Preliminary results using real observations have been encouraging, showing forecast improvements nearly as large as were found in moving from 3DVar to hybrid 3D EnVar. 4D EnVar is the method of choice for the next-generation assimilation system for use with the operational NCEP global model, the Global Forecast System (GFS). The use of an outer loop has long been the method of choice in 4DVar data assimilation to help address nonlinearity. An outer loop involves re-running the (deterministic) background forecast from the updated initial condition at the beginning of the assimilation window and proceeding with another inner-loop minimization. Within 4D EnVar, a similar procedure can be adopted, since the solver evaluates a 4D analysis increment throughout the window, consistent with the valid times of the 4D ensemble perturbations. In this procedure, the ensemble perturbations are kept fixed and centered about the updated background state. This is analogous to the quasi-outer-loop idea developed for the EnKF. Here, we present results for both toy-model and real NWP systems demonstrating the impact of incorporating outer loops to address nonlinearity within the 4D EnVar context.
The appropriate amplitudes for observation and background error covariances in subsequent outer loops will be explored. Lastly, variable transformations on the ensemble perturbations will be utilized to help address issues of non-Gaussianity. This may be particularly important for variables that clearly have non-Gaussian error characteristics such as water vapor and cloud condensate.
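The defining idea of EnVar, expanding the 4D analysis increment in the span of 4D ensemble perturbations, can be sketched as a small quadratic minimization. This is a bare-bones illustration with scalar weights and no localization (the operational GSI formulation uses weight fields and the full hybrid covariance); all dimensions and the observation operator are assumptions.

```python
import numpy as np

def envar_4d_increment(perts4d, h, obs, obs_err=1.0):
    """Solve for weights alpha minimizing
    J = |alpha|^2 / 2 + sum_t |H dx_t - d_t|^2 / (2 obs_err^2),
    where dx_t = sum_k alpha_k x'_k(t), then return the 4D increment."""
    k, nt, nx = perts4d.shape
    # Each member maps to its observed values over the whole window.
    g = np.array([[h @ perts4d[m, t] for t in range(nt)]
                  for m in range(k)]).reshape(k, -1)
    d = obs.reshape(-1)
    # Normal equations of the quadratic cost function.
    alpha = np.linalg.solve(g @ g.T / obs_err**2 + np.eye(k),
                            g @ d / obs_err**2)
    return np.tensordot(alpha, perts4d, axes=1)    # shape (nt, nx)

rng = np.random.default_rng(0)
perts = rng.standard_normal((10, 4, 30))  # 10 members, 4 times, 30 grid points
h_op = np.eye(30)[:5]                     # observe the first 5 grid points
innovations = rng.standard_normal((4, 5))
dx = envar_4d_increment(perts, h_op, innovations)
```

Because the increment is a fixed linear combination of precomputed member trajectories, no tangent-linear or adjoint model is needed, which is the computational advantage the abstract highlights.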
NASA Technical Reports Server (NTRS)
Druyan, Leonard M.; Fulakeza, Matthew B.
2014-01-01
The Atlantic cold tongue (ACT) develops during spring and early summer near the Equator in the eastern Atlantic Ocean and Gulf of Guinea. The hypothesis that the ACT accelerates the timing of West African monsoon (WAM) onset is tested by comparing two regional climate model (RM3) simulation ensembles. Observed sea surface temperatures (SST) that include the ACT are used to force a control ensemble. An idealized, warm SST perturbation is designed to represent lower boundary forcing without the ACT for the experiment ensemble. Summer simulations forced by observed SST and reanalysis boundary conditions for each of five consecutive years are compared to five parallel runs forced by SST with the warm perturbation. The article summarizes the sequence of events leading to the onset of the WAM in the Sahel region. The representation of WAM onset in RM3 simulations is examined and compared to Tropical Rainfall Measuring Mission (TRMM), Global Precipitation Climatology Project (GPCP) and reanalysis data. The study evaluates the sensitivity of WAM onset indicators to the presence of the ACT by analysing the differences between the two simulation ensembles. Results show that the timing of major rainfall events, and therefore the WAM onset in the Sahel, is not sensitive to the presence of the ACT. However, the warm SST perturbation does increase downstream rainfall rates over West Africa as a consequence of enhanced specific humidity and enhanced northward moisture flux in the lower troposphere.
NASA Astrophysics Data System (ADS)
Yu, Liuqian; Fennel, Katja; Bertino, Laurent; Gharamti, Mohamad El; Thompson, Keith R.
2018-06-01
Effective data assimilation methods for incorporating observations into marine biogeochemical models are required to improve hindcasts, nowcasts and forecasts of the ocean's biogeochemical state. Recent assimilation efforts have shown that updating model physics alone can degrade biogeochemical fields while only updating biogeochemical variables may not improve a model's predictive skill when the physical fields are inaccurate. Here we systematically investigate whether multivariate updates of physical and biogeochemical model states are superior to only updating either physical or biogeochemical variables. We conducted a series of twin experiments in an idealized ocean channel that experiences wind-driven upwelling. The forecast model was forced with biased wind stress and perturbed biogeochemical model parameters compared to the model run representing the "truth". Taking advantage of the multivariate nature of the deterministic Ensemble Kalman Filter (DEnKF), we assimilated different combinations of synthetic physical (sea surface height, sea surface temperature and temperature profiles) and biogeochemical (surface chlorophyll and nitrate profiles) observations. We show that when biogeochemical and physical properties are highly correlated (e.g., thermocline and nutricline), multivariate updates of both are essential for improving model skill and can be accomplished by assimilating either physical (e.g., temperature profiles) or biogeochemical (e.g., nutrient profiles) observations. In our idealized domain, the improvement is largely due to a better representation of nutrient upwelling, which results in a more accurate nutrient input into the euphotic zone. In contrast, assimilating surface chlorophyll improves the model state only slightly, because surface chlorophyll contains little information about the vertical density structure. 
We also show that a degradation of the correlation between observed subsurface temperature and nutrient fields, which has been an issue in several previous assimilation studies, can be reduced by multivariate updates of physical and biogeochemical fields.
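The mechanism that makes the multivariate update work, cross-covariances in the ensemble gain carrying an observed variable's innovation into an unobserved one, can be shown with a stochastic EnKF analysis step. This is a generic textbook sketch, not the DEnKF used in the study, and the two-variable "temperature/nutrient" setup is an invented illustration.

```python
import numpy as np

def enkf_update(ensemble, h, obs, obs_var, seed=0):
    """Stochastic EnKF analysis step. The gain uses the full multivariate
    sample covariance, so assimilating one variable also updates any
    variable that covaries with it."""
    rng = np.random.default_rng(seed)
    n = len(ensemble)
    perts = ensemble - ensemble.mean(axis=0)
    p = perts.T @ perts / (n - 1)
    gain = p @ h.T @ np.linalg.inv(h @ p @ h.T + obs_var * np.eye(len(obs)))
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), (n, len(obs)))
    return ensemble + (perturbed_obs - ensemble @ h.T) @ gain.T

# Two correlated variables (say, temperature and nutrient); observe only
# the first, and let the cross-covariance update the second.
rng = np.random.default_rng(1)
base = rng.standard_normal(200)
prior = np.column_stack([base, 0.8 * base + 0.2 * rng.standard_normal(200)])
h_op = np.array([[1.0, 0.0]])
analysis = enkf_update(prior, h_op, np.array([2.0]), obs_var=0.1)
```

The unobserved second variable is pulled toward the value implied by the correlation, which is exactly why a degraded temperature-nutrient correlation weakens the multivariate update discussed above.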
Ensemble sea ice forecast for predicting compressive situations in the Baltic Sea
NASA Astrophysics Data System (ADS)
Lehtiranta, Jonni; Lensu, Mikko; Kokkonen, Iiro; Haapala, Jari
2017-04-01
Forecasting of sea ice hazards is important for winter shipping in the Baltic Sea. In current numerical models the ice thickness distribution and drift are captured well, but compressive situations are often missing from forecast products. The shipping community has requested their inclusion, as compressing ice poses a threat to ship operations: it can stop ships for days and even damage them. However, we have found that compression cannot be predicted well in a deterministic forecast, since it can be a local and quickly changing phenomenon. It is also very sensitive to small changes in the wind speed and direction, the prevailing ice conditions, and the model parameters. Thus, a probabilistic ensemble simulation is needed to produce a meaningful compression forecast. An ensemble model setup was developed in the SafeWIN project for this purpose. It uses the HELMI multicategory ice model, which was amended to run simulations in parallel. The ensemble was built by perturbing the atmospheric forcing and the physical parameters of the ice pack. The model setup will provide probabilistic forecasts of compression in the Baltic Sea ice. Additionally, the model setup provides insight into the uncertainties related to different model parameters and their impact on the model results. We have completed several hindcast simulations for the Baltic Sea for verification purposes. These results are shown to match compression reports gathered from ships. In addition, an ensemble forecast is in a preoperational testing phase and its first evaluation will be presented in this work.
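Building such an ensemble by perturbing the atmospheric forcing can be sketched as follows. Perturbing wind speed multiplicatively and direction additively is an illustrative choice; the magnitudes, member count, and 1-D wind field are assumptions rather than the SafeWIN configuration.

```python
import numpy as np

def perturb_wind_forcing(u, v, speed_sigma=0.1, dir_sigma_deg=10.0,
                         n_members=20, seed=0):
    """Ensemble of wind forcings: perturb speed multiplicatively and
    direction additively, with one independent draw per member."""
    rng = np.random.default_rng(seed)
    speed = np.hypot(u, v) * (1.0 + speed_sigma *
                              rng.standard_normal((n_members, 1)))
    angle = np.arctan2(v, u) + (np.deg2rad(dir_sigma_deg) *
                                rng.standard_normal((n_members, 1)))
    return speed * np.cos(angle), speed * np.sin(angle)

u10 = np.full(50, 8.0)    # 8 m/s westerly wind over 50 grid points
v10 = np.zeros(50)
u_ens, v_ens = perturb_wind_forcing(u10, v10)
```

Running the ice model once per perturbed forcing member then yields an exceedance probability for compression at each point, rather than a single yes/no deterministic forecast.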
Time series, correlation matrices and random matrix models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vinayak; Seligman, Thomas H.
2014-01-08
In this set of five lectures the authors have presented techniques to analyze open classical and quantum systems using correlation matrices. For diverse reasons we shall see that random matrices play an important role in describing a null hypothesis, or minimum-information hypothesis, for the description of a quantum system or subsystem. In the former case, various forms of correlation matrices of time series associated with the classical observables of some system are used. The fact that such series are necessarily finite inevitably introduces noise, and this finite-time influence leads to a random or stochastic component in these time series. As a consequence, random correlation matrices have a random component, and corresponding ensembles are used. In the latter case we use random matrices to describe a high-temperature environment or uncontrolled perturbations, ensembles of differing chaotic systems, etc. The common theme of the lectures is thus the importance of random matrix theory in a wide range of fields in and around physics.
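The random-matrix null hypothesis for finite time series can be illustrated directly: the eigenvalues of a sample correlation matrix of independent series should fall (almost entirely) within the Marchenko-Pastur support. The example below is a generic demonstration with invented dimensions, not taken from the lectures.

```python
import numpy as np

def correlation_spectrum(series):
    """Sorted eigenvalues of the sample correlation matrix
    (rows = variables, columns = observations)."""
    return np.sort(np.linalg.eigvalsh(np.corrcoef(series)))

def marchenko_pastur_bounds(n_vars, n_obs):
    """Support edges of the Marchenko-Pastur law, the random-matrix null
    for correlations estimated from finite independent series."""
    q = n_vars / n_obs
    return (1.0 - np.sqrt(q)) ** 2, (1.0 + np.sqrt(q)) ** 2

rng = np.random.default_rng(0)
eigvals = correlation_spectrum(rng.standard_normal((50, 1000)))
lo, hi = marchenko_pastur_bounds(50, 1000)
```

Eigenvalues of real-system correlation matrices that lie well outside [lo, hi] signal genuine collective correlations rather than finite-sample noise, which is the practical use of this null hypothesis.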
Girsanov reweighting for path ensembles and Markov state models
NASA Astrophysics Data System (ADS)
Donati, L.; Hartmann, C.; Keller, B. G.
2017-06-01
The sensitivity of molecular dynamics to changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of a path probability measure and the Girsanov theorem, a result from stochastic analysis that estimates the change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics to external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.
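As an illustration of the idea, the following sketch reweights overdamped-Langevin paths generated under a reference harmonic potential to a linearly tilted potential. The potentials, discretization, and parameter values are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

beta, dt, n_steps, n_paths = 1.0, 0.01, 200, 2000
a = 0.5                           # strength of the hypothetical perturbation U(x) = -a*x

grad_V = lambda x: x              # reference potential V(x) = x**2 / 2
grad_U = lambda x: -a + 0.0 * x   # gradient of the perturbation, vectorized
U = lambda x: -a * x

# Paths are generated once under the *reference* dynamics; the Girsanov
# log-weight converts their probabilities into those of the perturbed dynamics.
x = rng.standard_normal(n_paths)      # reference equilibrium is N(0, 1)
logw = -beta * U(x)                   # Boltzmann reweighting of the start points
sigma = np.sqrt(2.0 * dt / beta)
for _ in range(n_steps):
    noise = sigma * rng.standard_normal(n_paths)
    db = -grad_U(x)                   # drift difference: perturbed minus reference
    # Girsanov increment for Euler-Maruyama overdamped Langevin dynamics
    logw += 0.5 * beta * db * noise - 0.25 * beta * db**2 * dt
    x = x - grad_V(x) * dt + noise

w = np.exp(logw - logw.max())         # stabilized weights
mean_perturbed = np.sum(w * x) / np.sum(w)
# V + U has its minimum at x = a, so the reweighted mean should be near a
```

The reweighted estimate recovers the equilibrium mean of the perturbed potential without ever simulating the perturbed dynamics, which is the essence of the method.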
NASA Astrophysics Data System (ADS)
Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.
2017-12-01
Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties, providing critical information for decision makers in water resources management. This study evaluates a data assimilation system for the Guantao groundwater flow model, coupled with a one-dimensional soil column simulation (Hydrus 1D), that uses an Unbiased Ensemble Square Root Filter (UnEnSRF), a variant of the Ensemble Kalman Filter (EnKF), to update parameters and states, separately or simultaneously. To simplify the coupling between the unsaturated and saturated zones, a linear relationship obtained from analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike the EnKF, the UnEnSRF updates the parameter ensemble mean and the ensemble perturbations separately. To keep the ensemble filter working well during data assimilation, two factors are introduced. One is a damping factor, which dampens the update amplitude of the posterior ensemble mean to avoid unrealistic values. The other is an inflation factor, which relaxes the posterior ensemble perturbations back toward the prior to avoid filter inbreeding. The sensitivities of the two factors are studied and their favorable values for the Guantao model are determined, along with an appropriate observation error and ensemble size. The study demonstrates that assimilating both model parameters and states gives a smaller model prediction error but a larger uncertainty, while assimilating only model states provides a smaller predictive uncertainty but a larger model prediction error.
Data assimilation in a groundwater flow model thus improves model prediction and at the same time makes the model converge toward the true parameters, which provides a solid basis for applications in real-time modelling or real-time control strategies in groundwater resources management.
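The roles of the damping and inflation (relaxation-to-prior) factors can be sketched for a scalar state with a square-root ensemble update. The factor values and observation below are hypothetical, not the ones tuned for the Guantao model:

```python
import numpy as np

rng = np.random.default_rng(1)

n_ens = 20
prior = 2.0 + 0.5 * rng.standard_normal(n_ens)   # ensemble of a scalar state/parameter
y_obs, r_obs = 3.0, 0.1**2                       # observation and its error variance

alpha_damp = 0.5   # damping factor: shrinks the mean update (hypothetical value)
beta_infl = 0.8    # relaxation of posterior perturbations toward the prior (hypothetical)

x_mean = prior.mean()
x_pert = prior - x_mean
p = x_pert.var(ddof=1)                           # prior ensemble variance

k = p / (p + r_obs)                              # Kalman gain for a direct observation
mean_a = x_mean + alpha_damp * k * (y_obs - x_mean)   # damped mean update

# Square-root update of the perturbations, then relax them back toward the
# prior perturbations to counteract filter inbreeding (spread collapse).
shrink = np.sqrt(1.0 - k)
pert_a = shrink * x_pert
pert_a = beta_infl * x_pert + (1.0 - beta_infl) * pert_a

posterior = mean_a + pert_a
```

Because mean and perturbations are updated separately, the two factors can be tuned independently, mirroring the separate-update design of the UnEnSRF described above.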
Random matrices with external source and the asymptotic behaviour of multiple orthogonal polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aptekarev, Alexander I; Lysov, Vladimir G; Tulyakov, Dmitrii N
2011-02-28
Ensembles of random Hermitian matrices with a distribution measure defined by an anharmonic potential perturbed by an external source are considered. The limiting characteristics of the eigenvalue distribution of the matrices in these ensembles are related to the asymptotic behaviour of a certain system of multiple orthogonal polynomials. Strong asymptotic formulae are derived for this system. As a consequence, for matrices in this ensemble the limit mean eigenvalue density is found, and a variational principle is proposed to characterize this density. Bibliography: 35 titles.
A Wind Forecasting System for Energy Application
NASA Astrophysics Data System (ADS)
Courtney, Jennifer; Lynch, Peter; Sweeney, Conor
2010-05-01
Accurate forecasting of available energy is crucial for the efficient management and use of wind power in the national power grid. With energy output critically dependent upon wind strength, there is a need to reduce the errors associated with wind forecasting. The objective of this research is to obtain the best possible wind forecasts for the wind energy industry. To achieve this goal, three methods are being applied. First, a mesoscale numerical weather prediction (NWP) model called WRF (Weather Research and Forecasting) is being used to predict wind values over Ireland. Currently, a grid resolution of 10 km is used, and higher model resolutions are being evaluated to establish whether they are economically viable given the forecast skill improvement they produce. Second, the WRF model is being used in conjunction with ECMWF (European Centre for Medium-Range Weather Forecasts) ensemble forecasts to produce a probabilistic weather forecasting product. Due to the chaotic nature of the atmosphere, a single, deterministic weather forecast can only have limited skill. The ECMWF ensemble methods produce an ensemble of 51 global forecasts, twice a day, by perturbing the initial conditions of a 'control' forecast, which is the best estimate of the initial state of the atmosphere. This method provides an indication of the reliability of the forecast and a quantitative basis for probabilistic forecasting. The limitation of ensemble forecasting lies in the fact that the perturbed model runs behave differently under different weather patterns, and each model run is equally likely to be closest to the observed weather situation. Models have biases and involve assumptions about physical processes and forcing factors such as underlying topography. Third, Bayesian Model Averaging (BMA) is being applied to the output of the ensemble forecasts in order to statistically post-process the results and achieve a better wind forecasting system.
BMA is a promising technique that offers calibrated probabilistic wind forecasts, which will be invaluable in wind energy management. In brief, this method turns the ensemble forecasts into a calibrated predictive probability distribution. Each ensemble member is assigned a weight determined by its relative predictive skill over a training period of around 30 days. Verification is carried out using observed wind data from operational wind farms; the resulting forecasts are then compared, in terms of skill scores, to existing forecasts produced by ECMWF and Met Éireann. We are developing decision-making models to show the benefits achieved using the data produced by our wind energy forecasting system. An energy trading model will be developed, based on the rules currently used by the Single Electricity Market Operator for energy trading in Ireland. This trading model will illustrate the potential for financial savings from using the forecast data generated by this research.
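The core BMA step, turning member forecasts and training-period weights into a calibrated predictive distribution, can be sketched as a Gaussian mixture. The forecasts, weights, and spread below are hypothetical stand-ins for quantities estimated over the training period:

```python
import numpy as np

# Hypothetical forecasts of three ensemble members (m/s) and BMA weights
# estimated from a training period (weights sum to one).
forecasts = np.array([8.0, 10.0, 12.5])
weights = np.array([0.5, 0.3, 0.2])
sigma = 1.5                          # common member spread (assumption)

def bma_pdf(x):
    """BMA predictive density: a weighted mixture of member-centred Gaussians."""
    x = np.atleast_1d(x)[:, None]
    comps = np.exp(-0.5 * ((x - forecasts) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return (comps * weights).sum(axis=1)

def bma_mean():
    """Calibrated point forecast: the weight-averaged member forecast."""
    return float(np.dot(weights, forecasts))

# Probabilities (e.g. of exceeding a turbine cut-in speed) follow by
# integrating bma_pdf; here we just check the density integrates to one.
grid = np.linspace(0.0, 25.0, 2001)
total_prob = np.sum(bma_pdf(grid)) * (grid[1] - grid[0])
```

In an operational setting the weights and spread would be refit each day from the sliding training window rather than fixed as here.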
Uncertainty and dispersion in air parcel trajectories near the tropical tropopause
NASA Astrophysics Data System (ADS)
Bergman, John; Jensen, Eric; Pfister, Leonhard; Bui, Thoapaul
2016-04-01
The Tropical Tropopause Layer (TTL) is important as the gateway to the stratosphere for chemical constituents produced at the Earth's surface. As such, understanding the processes that transport air through the upper tropical troposphere is important for a number of current scientific issues, such as the impact of stratospheric water vapor on the global radiative budget and the depletion of ozone by both anthropogenically and naturally produced halocarbons. Compared to the lower troposphere, transport in the TTL is relatively unaffected by turbulent motion. Consequently, Lagrangian particle models are thought to provide reasonable estimates of parcel pathways through the TTL. However, there are complications that make trajectory analyses difficult to interpret, uncertainty in the wind data used to drive the calculations and trajectory dispersion being among the most important. These issues are examined using ensembles of backward air parcel trajectories, initially tightly grouped near the tropical tropopause, computed with three approaches: (1) a Monte Carlo ensemble, in which different members use identical resolved wind fields but different realizations of stochastic, multi-fractal simulations of unresolved winds; (2) perturbed initial location ensembles, in which members use identical resolved wind fields but initial locations are displaced 2° in latitude and longitude; and (3) a multi-model ensemble that uses identical initial conditions but different resolved wind fields and/or trajectory formulations. Comparisons among the approaches distinguish, to some degree, physical dispersion from that due to data uncertainty, and the impact of unresolved wind fluctuations from that of resolved variability.
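A minimal sketch of the first approach: backward trajectories that share a resolved wind but differ in a stochastic model of unresolved fluctuations. Here simple red noise stands in for the multi-fractal simulations used in the study, and the wind field and parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

n_members, n_steps, dt = 100, 240, 0.25   # 100 members, 60 h at 15-min steps (hours)

def resolved_wind(x, y):
    """Idealized, steady resolved wind field (assumption): easterly flow with weak shear."""
    u = -5.0 + 0.1 * y
    v = 0.5 * np.sin(0.1 * x)
    return u, v

# All members see the same resolved wind; they differ only in the stochastic
# representation of unresolved fluctuations (red noise / AR(1) here).
x = np.zeros(n_members); y = np.zeros(n_members)
up = np.zeros(n_members); vp = np.zeros(n_members)
tau, sigma_w = 3.0, 1.0      # decorrelation time (h) and fluctuation scale (assumptions)
rho = np.exp(-dt / tau)
amp = sigma_w * np.sqrt(1.0 - rho**2)
for _ in range(n_steps):
    u, v = resolved_wind(x, y)
    up = rho * up + amp * rng.standard_normal(n_members)
    vp = rho * vp + amp * rng.standard_normal(n_members)
    x -= (u + up) * dt        # backward-in-time step
    y -= (v + vp) * dt

spread = np.sqrt(x.var() + y.var())   # ensemble dispersion after 60 h
```

The growth of `spread` with integration time isolates the dispersion attributable to unresolved winds, since the resolved field is identical across members.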
NASA Astrophysics Data System (ADS)
Hartin, C.; Lynch, C.; Kravitz, B.; Link, R. P.; Bond-Lamberty, B. P.
2017-12-01
Typically, uncertainty quantification of internal variability relies on large ensembles of climate model runs under multiple forcing scenarios or perturbations in a parameter space. Computationally efficient, standard pattern scaling techniques generate only one realization and do not capture the complicated dynamics of the climate system (i.e., stochastic variations with a frequency-domain structure). In this study, we generate large ensembles of climate data with spatially and temporally coherent variability across a subselection of Coupled Model Intercomparison Project Phase 5 (CMIP5) models. First, for each CMIP5 model we apply a pattern emulation approach to derive the model response to external forcing. We then take all the spatial and temporal variability that is not explained by the emulator and decompose it into non-physically based structures using empirical orthogonal functions (EOFs). Finally, we perform a Fourier decomposition of the EOF projection coefficients to capture the input fields' temporal autocorrelation, so that the emulated patterns reproduce the proper timescales of climate response and "memory" in the climate system. Through this three-step process, we derive computationally efficient climate projections that are consistent with CMIP5 model trends and modes of variability, and that address a number of deficiencies in the ability of pattern scaling to reproduce complex climate model behavior.
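The EOF-plus-Fourier step can be sketched as phase randomization: EOFs are obtained by SVD of a (hypothetical, synthetic) residual field, and surrogate projection coefficients preserve each mode's power spectrum, hence its temporal autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(3)

n_time, n_space = 128, 40
# Synthetic stand-in for the residual field the pattern emulator leaves
# unexplained: red-noise time series with spatial structure.
base = np.cumsum(rng.standard_normal((n_time, 4)), axis=0)
mixing = rng.standard_normal((4, n_space))
resid = base @ mixing + 0.1 * rng.standard_normal((n_time, n_space))
resid -= resid.mean(axis=0)

# EOF decomposition via SVD: resid = pcs @ diag(s) @ eofs
pcs, s, eofs = np.linalg.svd(resid, full_matrices=False)

def surrogate_pc(pc):
    """Phase-randomized surrogate with the same power spectrum (autocorrelation)."""
    spec = np.fft.rfft(pc)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = 0.0    # keep mean (DC) bin unchanged
    phases[-1] = 0.0   # keep Nyquist bin real (n_time is even)
    surr = np.abs(spec) * np.exp(1j * phases)
    return np.fft.irfft(surr, n=pc.size)

new_pcs = np.column_stack([surrogate_pc(pcs[:, i]) for i in range(s.size)])
new_resid = new_pcs @ np.diag(s) @ eofs   # one new realization of the variability
```

Because the magnitudes of every Fourier bin are preserved, each surrogate realization has the same total variance and the same modal spectra as the original residual, only with randomized phasing.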
Multipoint propagators in cosmological gravitational instability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernardeau, Francis; Crocce, Martin; Scoccimarro, Roman
2008-11-15
We introduce the concept of multipoint propagators between linear cosmic fields and their nonlinear counterparts in the context of cosmological perturbation theory. Such functions express how a nonlinearly evolved Fourier mode depends on the full ensemble of modes in the initial density field. We identify and resum the dominant diagrams in the large-k limit, showing explicitly that multipoint propagators decay into the nonlinear regime at the same rate as the two-point propagator. These analytic results generalize the large-k limit behavior of the two-point propagator to arbitrary order. We measure the three-point propagator as a function of triangle shape in numerical simulations and confirm the results of our high-k resummation. We show that any n-point spectrum can be reconstructed from multipoint propagators, which leads to a physical connection between nonlinear corrections to the power spectrum at small scales and higher-order correlations at large scales. As a first application of these results, we calculate the reduced bispectrum at one loop in renormalized perturbation theory and show that we can predict the decrease in its dependence on triangle shape at redshift zero, when standard perturbation theory is least successful.
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produce estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
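The basic variance decomposition, internal variability from within-model member spread versus model uncertainty from the spread of model means, can be sketched with a toy ensemble of opportunity (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical ensemble of opportunity: 4 models, 3 initial-condition members
# each, projecting a regional temperature change (degrees C).
forced_response = np.array([2.0, 2.6, 3.1, 2.3])   # per-model forced signal
sigma_internal = 0.4                                # true internal variability
proj = forced_response[:, None] + sigma_internal * rng.standard_normal((4, 3))

# Internal variability: pooled spread across members within each model,
# since members of one model differ only in their initial conditions.
internal_var = proj.var(axis=1, ddof=1).mean()

# Model (structural) uncertainty: spread of the model-mean responses.
model_var = proj.mean(axis=1).var(ddof=1)
```

Pooling the within-model variances across many model-scenario pairings is what lets the small ensembles of an ensemble of opportunity stand in for a single large initial-condition ensemble.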
NASA Astrophysics Data System (ADS)
Wetterhall, F.; Cloke, H. L.; He, Y.; Freer, J.; Pappenberger, F.
2012-04-01
Evidence provided by modelled assessments of climate change impact on flooding is fundamental to water resource and flood risk decision making. Impact models usually rely on climate projections from Global and Regional Climate Models (GCMs/RCMs), and there is no doubt that these provide a useful assessment of future climate change. However, cascading ensembles of climate projections into impact models is not straightforward because of the coarse resolution of GCMs/RCMs and their deficiencies in modelling high-intensity precipitation events. Thus decisions must be made on how to appropriately pre-process the meteorological variables from GCMs/RCMs, such as the selection of downscaling methods and the application of Model Output Statistics (MOS). In this paper a grand ensemble of projections from several GCMs/RCMs is used to drive a hydrological model and analyse the resulting future flood projections for the Upper Severn, UK. The impact and implications of applying MOS techniques to precipitation, as well as hydrological model parameter uncertainty, are taken into account. The resultant grand ensemble of future river discharge projections from the GCM/RCM-hydrological model chain is evaluated against a response surface technique combined with a perturbed physics experiment that creates a probabilistic ensemble of climate model outputs. The ensemble distribution of results shows that the future risk of flooding in the Upper Severn increases compared to present conditions; however, the study highlights that the uncertainties are large and that strong assumptions were made in using Model Output Statistics to produce the estimates of future discharge. The importance of analysing on a seasonal rather than an annual basis is highlighted. The inability of the RCMs (and GCMs) to produce realistic precipitation patterns, even under present conditions, is a major caveat of local climate impact studies on flooding, and this should be a focus for future development.
Bayesian Hierarchical Models to Augment the Mediterranean Forecast System
2010-09-30
In part 2 (Bonazzi et al., 2010), the impact of the ensemble forecast methodology based on MFS-Wind-BHM perturbations is documented. [...] In the absence of dt data stage inputs, the forecast impact of MFS-Error-BHM is neutral. Experiments are underway to introduce dt back into the MFS-Error-BHM and quantify forecast impacts at MFS. MFS-SuperEnsemble-BHM: we have assembled all needed datasets and completed algorithmic development.
NASA Astrophysics Data System (ADS)
Fernández, J.; Primo, C.; Cofiño, A. S.; Gutiérrez, J. M.; Rodríguez, M. A.
2009-08-01
In a recent paper, Gutiérrez et al. (Nonlinear Process Geophys 15(1):109-114, 2008) introduced a new characterization of spatiotemporal error growth, the so-called mean-variance logarithmic (MVL) diagram, and applied it to study ensemble prediction systems (EPS); in particular, they analyzed single-model ensembles obtained by perturbing the initial conditions. In the present work, the MVL diagram is applied to multi-model ensembles, also analyzing the effect of differences in model formulation. To this end, the MVL diagram is systematically applied to the multi-model ensemble produced in the EU-funded DEMETER project. It is shown that the shared building blocks (atmospheric and ocean components) impose similar dynamics among different models and thus lead to a poor sampling of model formulation uncertainty. This dynamical similarity should be taken into account, at least as a pre-screening process, before applying any objective weighting method.
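A minimal sketch of the MVL construction: for a perturbation field, compute the spatial mean and variance of the log-perturbation; spatially localized young perturbations and saturated mature ones then occupy different parts of the diagram. The fields below are synthetic stand-ins, not DEMETER output:

```python
import numpy as np

rng = np.random.default_rng(13)

def mvl_point(field):
    """One point of the MVL diagram: spatial mean and variance of log|perturbation|."""
    logp = np.log(np.abs(field))
    return logp.mean(), logp.var()

# Synthetic perturbation fields: young errors are small and spatially
# intermittent (broad log-distribution); mature errors have saturated
# and homogenized (narrow log-distribution).
young = 1e-4 * np.exp(2.0 * rng.standard_normal(10000))
mature = np.exp(0.3 * rng.standard_normal(10000))

m_y, v_y = mvl_point(young)
m_m, v_m = mvl_point(mature)
# Error growth traces a curve in the (mean, variance) plane
# from (m_y, v_y) toward (m_m, v_m).
```

Plotting such points over forecast time for each ensemble yields the MVL curve whose shape the cited work uses to compare initial-condition and model-formulation perturbations.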
Cervera, Javier; Manzanares, Jose Antonio; Mafe, Salvador
2015-02-19
We analyze the coupling of model nonexcitable (non-neural) cells assuming that the cell membrane potential is the basic individual property. We obtain this potential on the basis of the inward and outward rectifying voltage-gated channels characteristic of cell membranes. We concentrate on the electrical coupling of a cell ensemble rather than on the biochemical and mechanical characteristics of the individual cells, obtain the map of single cell potentials using simple assumptions, and suggest procedures to collectively modify this spatial map. The response of the cell ensemble to an external perturbation and the consequences of cell isolation, heterogeneity, and ensemble size are also analyzed. The results suggest that simple coupling mechanisms can be significant for the biophysical chemistry of model biomolecular ensembles. In particular, the spatiotemporal map of single cell potentials should be relevant for the uptake and distribution of charged nanoparticles over model cell ensembles and the collective properties of droplet networks incorporating protein ion channels inserted in lipid bilayers.
NASA Astrophysics Data System (ADS)
Waldman, Robin; Somot, Samuel; Herrmann, Marine; Bosse, Anthony; Caniaux, Guy; Estournel, Claude; Houpert, Loic; Prieur, Louis; Sevault, Florence; Testor, Pierre
2017-02-01
The northwestern Mediterranean Sea is a well-observed ocean deep convection site, and winter 2012-2013 was an intense and intensely documented dense water formation (DWF) event. We evaluate this DWF event in an ensemble configuration of the regional ocean model NEMOMED12. We then assess for the first time the impact of ocean intrinsic variability on DWF with a novel perturbed initial state ensemble method. Finally, we identify the main physical mechanisms driving water mass transformations. NEMOMED12 accurately reproduces the deep convection chronology between late January and March, its location off the Gulf of Lions (although with a southward shift), and its magnitude. It fails to reproduce the salinification and warming of the Western Mediterranean Deep Waters, consistent with too strong a surface heat loss. The ocean intrinsic variability modulates half of the DWF area, especially in the open sea where the bathymetry slope is low. It modulates the integrated DWF rate only marginally (3-5%), but its increase with time suggests its impact could be larger at interannual timescales. We conclude that ensemble frameworks are necessary to evaluate numerical simulations of DWF accurately. Each phase of DWF has distinct diapycnal and thermohaline regimes: during preconditioning, the Mediterranean thermohaline circulation is driven by exchanges with the Algerian basin. During the intense mixing phase, surface heat fluxes trigger deep convection and internal mixing largely determines the resulting deep water properties. During restratification, lateral exchanges and internal mixing are enhanced. Finally, isopycnal mixing is shown to play a large role in water mass transformations during the preconditioning and restratification phases.
How does the sensitivity of climate affect stratospheric solar radiation management?
NASA Astrophysics Data System (ADS)
Ricke, K.; Rowlands, D. J.; Ingram, W.; Keith, D.; Morgan, M. G.
2011-12-01
If implementation of proposals to engineer the climate through solar radiation management (SRM) ever occurs, it is likely to be contingent upon climate sensitivity. Despite this, no modeling studies have examined how the effectiveness of SRM forcings differs between typical Atmosphere-Ocean General Circulation Models (AOGCMs), with climate sensitivities close to the Coupled Model Intercomparison Project (CMIP) mean, and ones with high climate sensitivities. Here, we use a perturbed physics ensemble modeling experiment to examine variations in the response of climate to SRM under different climate sensitivities. When SRM is used as a substitute for mitigation, its ability to maintain the current climate state worsens with increased climate sensitivity and with increased concentrations of greenhouse gases. However, our results also demonstrate that the potential of SRM to slow climate change, even at the regional level, grows with climate sensitivity. On average, SRM reduces regional rates of temperature change by more than 90 percent and rates of precipitation change by more than 50 percent in these higher-sensitivity model configurations. To investigate how SRM might behave in models with high climate sensitivity that are also consistent with recent observed climate change, we perform a "perturbed physics" ensemble (PPE) modelling experiment with the climateprediction.net (cpdn) version of the HadCM3L AOGCM. Like other perturbed physics climate modelling experiments, we simulate past and future climate scenarios using a wide range of model parameter combinations that both reproduce past climate within a specified level of accuracy and simulate future climates with a wide range of climate sensitivities. We chose 43 members ("model versions") from a subset of the 1,550 in the British Broadcasting Corporation (BBC) climateprediction.net project for which restart data are available.
We use our results to explore how much assessments of SRM that use best-estimate models, and so near-median climate sensitivity, may be ignoring important contingencies associated with implementing SRM in reality. A primary motivation for studying SRM via the injection of aerosols in the stratosphere is to evaluate its potential effectiveness as "insurance" in the case of higher-than-expected climate response to global warming. We find that this is precisely when SRM appears to be least effective in returning regional climates to their baseline states and reducing regional rates of precipitation change. On the other hand, given the very high regional temperature anomalies associated with rising greenhouse gas concentrations in high sensitivity models, it is also where SRM is most effective in reducing rates of change relative to a no SRM alternative.
Sampling-based ensemble segmentation against inter-operator variability
NASA Astrophysics Data System (ADS)
Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew
2011-03-01
Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing the user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging, and expectation-maximization (EM). The reproducibility of the proposed framework was evaluated in a controlled experiment on 16 tumor cases from a multicenter drug trial. The ensemble framework had significantly better reproducibility than the individual base Otsu thresholding method (p < .001).
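The fusion step can be sketched for binary masks; majority voting and averaging are shown (the EM-based combination is omitted), with synthetic noisy masks standing in for the perturbed segmentations:

```python
import numpy as np

rng = np.random.default_rng(9)

def dice(a, b):
    """Dice overlap between two binary masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Hypothetical set of 5 binary tumor masks, as if produced by perturbing the
# initialization/parameters of a base segmentation method.
truth = np.zeros((20, 20), dtype=bool)
truth[5:15, 5:15] = True
masks = np.stack([truth ^ (rng.random(truth.shape) < 0.1)   # 10% label noise
                  for _ in range(5)])

# Majority voting: a voxel is tumor if more than half the members agree.
consensus = masks.sum(axis=0) > masks.shape[0] / 2

# Averaging instead yields a soft probability map, which can be thresholded
# or used directly to express per-voxel confidence.
prob_map = masks.mean(axis=0)
```

Because independent perturbation errors rarely agree at the same voxel, the voted consensus is typically closer to the underlying structure than any single perturbed mask, which is the mechanism behind the improved reproducibility.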
From cyclone tracks to the costs of European winter storms: A probabilistic loss assessment model
NASA Astrophysics Data System (ADS)
Renggli, Dominik; Corti, Thierry; Reese, Stefan; Wueest, Marc; Viktor, Elisabeth; Zimmerli, Peter
2014-05-01
The quantitative assessment of the potential losses of European winter storms is essential for the economic viability of a global reinsurance company. For this purpose, reinsurance companies generally use probabilistic loss assessment models. This work presents an innovative approach to develop physically meaningful probabilistic events for Swiss Re's new European winter storm loss model. The meteorological hazard component of the new model is based on cyclone and windstorm tracks identified in the 20th Century Reanalysis data. The knowledge of the evolution of winter storms both in time and space allows the physically meaningful perturbation of properties of historical events (e.g. track, intensity). The perturbation includes a random element but also takes the local climatology and the evolution of the historical event into account. The low-resolution wind footprints taken from 20th Century Reanalysis are processed by a statistical-dynamical downscaling to generate high-resolution footprints of the historical and probabilistic winter storm events. Downscaling transfer functions are generated using ENSEMBLES regional climate model data. The result is a set of reliable probabilistic events representing thousands of years. The event set is then combined with country- and risk-specific vulnerability functions and detailed market- or client-specific exposure information to compute (re-)insurance risk premiums.
NASA Astrophysics Data System (ADS)
Romanova, Vanya; Hense, Andreas; Wahl, Sabrina; Brune, Sebastian; Baehr, Johanna
2016-04-01
The decadal variability of surface net freshwater fluxes, and its predictability, are compared in a set of retrospective predictions that all use the same model setup and differ only in the implemented ocean initialisation and ensemble generation methods. The basic aim is to deduce the differences between the initialization/ensemble generation methods in view of the uncertainty of the verifying observational data sets. The analysis gives an approximation of the uncertainties of the net freshwater fluxes, which remain among the most uncertain products in both observational data and model outputs. All ensemble generation methods are implemented in the MPI-ESM earth system model in the framework of the ongoing MiKlip project (www.fona-miklip.de). Hindcast experiments are initialised annually between 2000-2004, and from each start year 10 ensemble members are integrated for 5 years each. Four different ensemble generation methods are compared: (i) a method based on the Anomaly Transform (Romanova and Hense, 2015), in which the initial oceanic perturbations represent orthogonal and balanced anomaly structures, in space and time and between the variables, taken from a control run; (ii) one-day-lagged ocean states from the MPI-ESM-LR baseline system; (iii) one-day-lagged ocean and atmospheric states with preceding full-field nudging to re-analyses in both the atmospheric and the oceanic components of the system; (iv) an Ensemble Kalman Filter (EnKF) implemented in the oceanic part of MPI-ESM (Brune et al. 2015), assimilating monthly subsurface oceanic temperature and salinity (EN3) using the Parallel Data Assimilation Framework (PDAF). The hindcasts are evaluated probabilistically using freshwater flux data from four different reanalysis data sets: MERRA, NCEP-R1, the GFDL ocean reanalysis, and GECCO2. The assessments show no clear differences in the evaluation scores on regional scales.
However, on the global scale the physically motivated methods (i) and (iv) provide probabilistic hindcasts with a consistently higher reliability than the lagged initialization methods (ii)/(iii) despite the large uncertainties in the verifying observations and in the simulations.
2013-01-01
Background Many problems in protein modeling require obtaining a discrete representation of the protein conformational space as an ensemble of conformations. In ab-initio structure prediction, in particular, where the goal is to predict the native structure of a protein chain given its amino-acid sequence, the ensemble needs to satisfy energetic constraints. Given the thermodynamic hypothesis, an effective ensemble contains low-energy conformations which are similar to the native structure. The high-dimensionality of the conformational space and the ruggedness of the underlying energy surface currently make it very difficult to obtain such an ensemble. Recent studies have proposed that Basin Hopping is a promising probabilistic search framework to obtain a discrete representation of the protein energy surface in terms of local minima. Basin Hopping performs a series of structural perturbations followed by energy minimizations with the goal of hopping between nearby energy minima. This approach has been shown to be effective in obtaining conformations near the native structure for small systems. Recent work by us has extended this framework to larger systems through employment of the molecular fragment replacement technique, resulting in rapid sampling of large ensembles. Methods This paper investigates the algorithmic components in Basin Hopping to both understand and control their effect on the sampling of near-native minima. Realizing that such an ensemble is reduced before further refinement in full ab-initio protocols, we take an additional step and analyze the quality of the ensemble retained by ensemble reduction techniques. We propose a novel multi-objective technique based on the Pareto front to filter the ensemble of sampled local minima. 
Results and conclusions We show that controlling the magnitude of the perturbation allows directly controlling the distance between consecutively-sampled local minima and, in turn, steering the exploration towards conformations near the native structure. For the minimization step, we show that the addition of Metropolis Monte Carlo-based minimization is no more effective than a simple greedy search. Finally, we show that the size of the ensemble of sampled local minima can be effectively and efficiently reduced by a multi-objective filter to obtain a simpler representation of the probed energy surface. PMID:24564970
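The perturb-minimize-accept loop, with the perturbation magnitude exposed as the control knob discussed above, can be sketched on a toy rugged energy surface. The potential, the greedy minimizer, and all parameter values are illustrative assumptions, not the protein energy functions used in the study:

```python
import numpy as np

rng = np.random.default_rng(11)

def energy(x):
    """Hypothetical rugged 1-D energy surface: quadratic well with periodic barriers."""
    return x**2 + 5.0 * np.sin(2.0 * np.pi * x) ** 2

def greedy_minimize(x, step=0.01, n=500):
    """Simple greedy local search standing in for the minimization stage."""
    for _ in range(n):
        trial = x + step * rng.choice([-1.0, 1.0])
        if energy(trial) < energy(x):
            x = trial
    return x

def basin_hopping(x0, perturb_size, n_hops=50, temperature=1.0):
    """Perturb + minimize + Metropolis accept; perturb_size controls how far
    apart consecutively sampled minima can be (the knob analyzed above)."""
    x = greedy_minimize(x0)
    minima = [x]
    for _ in range(n_hops):
        cand = greedy_minimize(x + perturb_size * rng.standard_normal())
        de = energy(cand) - energy(x)
        if de < 0.0 or rng.random() < np.exp(-de / temperature):
            x = cand
        minima.append(x)
    return np.array(minima)

minima = basin_hopping(x0=3.0, perturb_size=0.7)
best = minima[np.argmin(energy(minima))]
```

Shrinking `perturb_size` confines consecutive minima to neighboring basins, while enlarging it hops over barriers more aggressively, which is how the perturbation magnitude steers the exploration.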
Mesoscale model response to random, surface-based perturbations — A sea-breeze experiment
NASA Astrophysics Data System (ADS)
Garratt, J. R.; Pielke, R. A.; Miller, W. F.; Lee, T. J.
1990-09-01
The introduction into a mesoscale model of random (in space) variations in roughness length, or random (in space and time) surface perturbations of temperature and friction velocity, produces a measurable, but barely significant, response in the simulated flow dynamics of the lower atmosphere. The perturbations are an attempt to include the effects of sub-grid variability into the ensemble-mean parameterization schemes used in many numerical models. Their magnitude is set in our experiments by appeal to real-world observations of the spatial variations in roughness length and daytime surface temperature over the land on horizontal scales of one to several tens of kilometers. With sea-breeze simulations, comparisons of a number of realizations forced by roughness-length and surface-temperature perturbations with the standard simulation reveal no significant change in ensemble mean statistics, and only small changes in the sea-breeze vertical velocity. Changes in the updraft velocity for individual runs, of up to several cm s-1 (compared to a mean of 14 cm s-1), are directly the result of prefrontal temperature changes of 0.1 to 0.2 K, produced by the random surface forcing. The correlation and magnitude of the changes are entirely consistent with a gravity-current interpretation of the sea breeze.
Stability of quantum-dot excited-state laser emission under simultaneous ground-state perturbation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaptan, Y., E-mail: yuecel.kaptan@physik.tu-berlin.de; Herzog, B.; Schöps, O.
2014-11-10
The impact of ground-state amplification on the laser emission of In(Ga)As quantum dot excited-state lasers is studied in time-resolved experiments. We find that a depopulation of the quantum dot ground state is followed by a drop in excited-state lasing intensity. The magnitude of the drop is strongly dependent on the wavelength of the depletion pulse and the applied injection current. Numerical simulations based on laser rate equations reproduce the experimental results and explain the wavelength dependence by the different dynamics in lasing and non-lasing sub-ensembles within the inhomogeneously broadened quantum dots. At high injection levels, the observed response even upon perturbation of the lasing sub-ensemble is small and followed by a fast recovery, thus supporting the capacity for fast modulation in dual-state devices.
Dispersion Modeling Using Ensemble Forecasts Compared to ETEX Measurements.
NASA Astrophysics Data System (ADS)
Straume, Anne Grete; N'dri Koffi, Ernest; Nodop, Katrin
1998-11-01
Numerous numerical models have been developed to predict the long-range transport of hazardous air pollution in connection with accidental releases. When evaluating and improving such a model, it is important to detect uncertainties connected to the meteorological input data. A Lagrangian dispersion model, the Severe Nuclear Accident Program, is used here to investigate the effect of errors in the meteorological input data due to analysis error. An ensemble forecast, produced at the European Centre for Medium-Range Weather Forecasts, is used as model input. The ensemble forecast members are generated by perturbing the initial meteorological fields of the weather forecast. The perturbations are calculated from singular vectors meant to represent possible forecast developments generated by instabilities in the atmospheric flow during the early part of the forecast; the instabilities are generated by errors in the analyzed fields. Puff predictions from the dispersion model, using ensemble forecast input, are compared, and a large spread in the predicted puff evolutions is found. This shows that the quality of the meteorological input data is important for the success of the dispersion model. To evaluate the dispersion model, the calculations are compared with measurements from the European Tracer Experiment. The model predicts the measured puff evolution, in terms of shape and time of arrival, fairly well up to 60 h after the start of the release, although the modeled puff is still too narrow in the advection direction.
Can nudging be used to quantify model sensitivities in precipitation and cloud forcing?
NASA Astrophysics Data System (ADS)
Lin, Guangxing; Wan, Hui; Zhang, Kai; Qian, Yun; Ghan, Steven J.
2016-09-01
Efficient simulation strategies are crucial for the development and evaluation of high-resolution climate models. This paper evaluates simulations with constrained meteorology for the quantification of parametric sensitivities in the Community Atmosphere Model version 5 (CAM5). Two parameters are perturbed as illustrative examples: the convection relaxation time scale (TAU) and the threshold relative humidity for the formation of low-level stratiform clouds (rhminl). Results suggest that the fidelity of the constrained simulations depends on the detailed implementation of nudging and on the mechanism through which the perturbed parameter affects precipitation and clouds. The relative computational costs of nudged and free-running simulations are determined by the magnitude of internal variability in the physical quantities of interest, as well as by the magnitude of the parameter perturbation. In the case of a strong perturbation to convection, nudging temperature and/or winds with a 6 h relaxation time scale leads to non-negligible side effects due to the distorted interactions between resolved dynamics and parameterized convection, while 1-year free-running simulations can satisfactorily capture the annual mean precipitation and cloud forcing sensitivities. In the case of a relatively weak perturbation in the large-scale condensation scheme, results from 1-year free-running simulations are strongly affected by natural noise, while nudging winds effectively reduces the noise and reasonably reproduces the sensitivities. These results indicate that caution is needed when using nudged simulations to assess precipitation and cloud forcing sensitivities to parameter changes in general circulation models. We also demonstrate that ensembles of short simulations are useful for understanding the evolution of model sensitivities.
Climate Modeling and Causal Identification for Sea Ice Predictability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark
This project aims to better understand the causes of ongoing changes in the Arctic climate system, particularly as decreasing sea ice trends have been observed in recent decades and are expected to continue in the future. As part of the Sea Ice Prediction Network, a multi-agency effort to improve sea ice prediction products on seasonal-to-interannual time scales, our team is studying the sensitivity of sea ice to a collection of physical processes and feedback mechanisms in the coupled climate system. During 2017 we completed a set of climate model simulations using the fully coupled ACME-HiLAT model. The simulations consisted of experiments in which cloud, sea ice, and air-ocean turbulent exchange parameters previously identified as important drivers of output uncertainty in climate models were perturbed to account for parameter uncertainty in simulated climate variables. We conducted a sensitivity study of these parameters, which built upon a previous study we made for standalone simulations (Urrego-Blanco et al., 2016, 2017). Using the results from the ensemble of coupled simulations, we are examining robust relationships between climate variables that emerge across the experiments. We are also using causal discovery techniques to identify interaction pathways among climate variables, which can help identify physical mechanisms and provide guidance in predictability studies. This work further builds on and leverages the large ensemble of standalone sea ice simulations produced in our previous w14_seaice project.
Simulating large-scale crop yield by using perturbed-parameter ensemble method
NASA Astrophysics Data System (ADS)
Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.
2010-12-01
One pressing issue for food security under a changing climate is predicting the interannual variation of crop production induced by climate extremes and a modulated climate. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, in developing a process-based large-scale crop model at the scale of general circulation models (GCMs), i.e., 100 km in latitude and longitude, researchers encounter difficulties from the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposes an ensemble-based simulation method that uses a process-based crop model and a systematic parameter-perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We call the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and their uncertainty given the data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e., the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield corresponds well with the reported yield: the Pearson correlation coefficient is over 0.6 for all countries.
On a grid scale, the correspondence likewise remains high in most grids regardless of country. However, the model shows comparatively low reproducibility in sloping areas, such as around the Rocky Mountains in South Dakota, the Great Xing'anling Mountains in Heilongjiang, and the Brazilian Plateau. As local climate conditions vary widely in complex terrain, such as on mountain slopes, the GCM grid-scale weather inputs are likely a major source of error there. The results of this study highlight the benefits of the perturbed-parameter ensemble method in simulating crop yield on a GCM grid scale: (1) the posterior PDF of the parameters quantifies the uncertainty in the crop model's parameter values associated with local crop production; (2) the method explicitly accounts for this parameter uncertainty in the crop model simulations; (3) the method achieves a Monte Carlo approximation of the probability of sub-grid-scale yield, accounting for the nonlinear response of crop yield to weather and management; and (4) the method is therefore appropriate for aggregating the simulated sub-grid-scale yields to a grid-scale yield, which may explain the model's high performance in capturing interannual variation of yield.
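The perturbed-parameter ensemble workflow described above can be sketched in a few lines. This is a toy construction of ours, not PRYSBI itself: the "crop model", the two-parameter posterior, and all numbers are illustrative stand-ins for the real 17-parameter Bayesian-inversion posterior.

```python
# Toy sketch of a perturbed-parameter ensemble: draw parameter sets from an
# assumed posterior PDF, run a stand-in "crop model" for each set and year,
# and summarize the ensemble with its median.
import numpy as np

rng = np.random.default_rng(0)

def toy_crop_model(w, a, b):
    # Nonlinear yield response to a weather anomaly w: heat stress scales
    # the base yield a down as the anomaly grows.
    return a * np.exp(-b * w**2)

# Stand-in posterior for two parameters: independent normals around
# plausible values (purely hypothetical).
n_members = 1500
a_samples = rng.normal(10.0, 1.0, n_members)
b_samples = np.abs(rng.normal(0.5, 0.1, n_members))

weather = rng.normal(0.0, 1.0, 27)  # 27 years of weather anomalies (1980-2006)
ensemble = toy_crop_model(weather[None, :], a_samples[:, None], b_samples[:, None])
yearly_median = np.median(ensemble, axis=0)  # quantity compared to reported yields
```

The spread of `ensemble` across its first axis is the Monte Carlo approximation of sub-grid yield probability that point (3) above refers to.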
NASA Astrophysics Data System (ADS)
Semenova, N. I.; Strelkova, G. I.; Anishchenko, V. S.; Zakharova, A.
2017-06-01
We describe numerical results for the dynamics of networks of nonlocally coupled chaotic maps. Switching in time between amplitude and phase chimera states is established and studied here for the first time. We show that in autonomous ensembles, the nonstationary switching regime has a finite lifetime and represents a transient process towards a stationary regime of phase chimeras. The lifetime of the nonstationary switching regime can be extended to infinity by applying short-term noise perturbations.
NASA Astrophysics Data System (ADS)
Karmalkar, A.; Sexton, D.; Murphy, J.
2017-12-01
We present exploratory work towards developing an efficient strategy to select variants of a state-of-the-art but expensive climate model suitable for climate projection studies. The strategy combines information from a set of idealized perturbed parameter ensemble (PPE) and CMIP5 multi-model ensemble (MME) experiments, and uses two criteria as the basis for selecting model variants for a PPE suitable for future projections: (a) acceptable model performance at two different timescales, and (b) maintained diversity in the model response to climate change. We demonstrate that there is a strong relationship between model errors at weather and climate timescales for a variety of key variables. This relationship is used to filter out parts of parameter space that do not give credible simulations of historical climate, while minimizing the impact on the ranges of forcings and feedbacks that drive model responses to climate change. We use statistical emulation to explore the parameter space thoroughly, and demonstrate that about 90% of it can be filtered out without affecting diversity in global-scale climate change responses. This leads to the identification of plausible parts of parameter space from which model variants can be selected for projection studies.
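The filtering step above can be illustrated with a toy emulator. This is our own hedged sketch, not the authors' emulation machinery: the three "parameters", the quadratic error surface, and the 10% retention threshold are all assumptions chosen only to mimic the reported ~90% reduction.

```python
# Toy illustration of filtering parameter space with an emulated error
# metric: sample candidate parameter settings, emulate their historical-
# climate error, and discard the implausible ~90%.
import numpy as np

rng = np.random.default_rng(4)
n_points = 100_000
params = rng.uniform(-1.0, 1.0, (n_points, 3))  # three hypothetical parameters

def emulated_error(p):
    # Stand-in emulator: error grows with distance from a "good" region.
    return np.sum((p - 0.5) ** 2, axis=1)

err = emulated_error(params)
threshold = np.quantile(err, 0.10)  # keep roughly the best 10 %
plausible = params[err <= threshold]
frac_removed = 1.0 - len(plausible) / n_points  # ~0.9, as in the study
```

A real application would replace `emulated_error` with an emulator trained on PPE runs and would additionally check that the retained points preserve diversity in forcings and feedbacks.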
Physical results from 2+1 flavor domain wall QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholz,E.E.
2008-07-14
We review recent results for the chiral behavior of meson masses and decay constants and the determination of the light quark masses by the RBC and UKQCD collaborations. We find that one-loop SU(2) chiral perturbation theory represents the behavior of our lattice data better than one-loop SU(3) chiral perturbation theory in both the pion and kaon sectors. The simulations were performed using the Iwasaki gauge action at two different lattice spacings, with the physical spatial volume held approximately fixed at (2.7 fm)^3. The domain wall fermion formulation was used for the 2+1 dynamical quark flavors: two (mass-degenerate) light flavors with masses as light as roughly 1/5 of the physical strange quark mass, and one heavier flavor at approximately the physical strange quark mass. On the ensembles generated with the coarser lattice spacing, we obtain for the physical average up/down-quark mass and the strange quark mass m_ud^{MS-bar}(2 GeV) = 3.72(0.16)_stat(0.33)_ren(0.18)_syst MeV and m_s^{MS-bar}(2 GeV) = 107.3(4.4)_stat(9.7)_ren(4.9)_syst MeV, respectively, while for the pion and kaon decay constants we find f_pi = 124.1(3.6)_stat(6.9)_syst MeV and f_K = 149.6(3.6)_stat(6.3)_syst MeV. The analysis for the finer lattice spacing has not yet been fully completed, but we already present some first (preliminary) results.
NASA Astrophysics Data System (ADS)
Laux, Patrick; Nguyen, Phuong N. B.; Cullmann, Johannes; Kunstmann, Harald
2016-04-01
Regional climate models (RCMs) comprise both terrestrial and atmospheric compartments and thereby allow studying land-atmosphere feedbacks, in particular the impacts of land-use and climate change. In this study, a methodological framework is developed to separate the land-use change induced signals in RCM simulations from the noise caused by perturbed initial boundary conditions. The framework is applied to two case studies in SE Asia, an urbanization and a deforestation scenario, implemented in the Weather Research and Forecasting (WRF) model. The urbanization scenario is produced for Da Nang, one of the fastest growing cities in Central Vietnam, by systematically converting the land use in a 20 km, 14 km, and 9 km radius around the Da Nang meteorological station from cropland to urban. Likewise, three deforestation scenarios are derived for Nong Son (Central Vietnam). Based on WRF ensemble simulations with perturbed initial conditions for 2010, the signal-to-noise ratio (SNR) is calculated to identify areas with pronounced signals induced by LULCC. While clear and significant signals are found for air temperature and latent and sensible heat flux in the urbanization scenario (SNR values up to 24), the signals are not pronounced for deforestation (SNR values < 1). Although statistically significant signals are found for precipitation, low SNR values hinder scientifically sound inferences for climate change adaptation options. It is demonstrated that ensemble simulations with at least five members are required to derive robust LULCC adaptation strategies, particularly when precipitation is considered. This is rarely done in practice, potentially leading to erroneous estimates of the LULCC-induced signals of water and energy fluxes, which propagate through the regional climate-hydrological modeling chain and ultimately lead to unfavorable decision support.
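The SNR diagnostic described above can be sketched in numpy. This is our own toy construction with synthetic data, not the study's code; the specific signal/noise definition (mean difference over average ensemble spread) is one common choice and an assumption here.

```python
# Minimal sketch of a signal-to-noise ratio for LULCC experiments: the
# signal is the ensemble-mean difference between scenario and control runs,
# and the noise is the spread generated by perturbed initial conditions.
import numpy as np

rng = np.random.default_rng(1)
n_members, ny, nx = 5, 40, 40  # five perturbed-IC members, a small grid

# Synthetic control and urbanization-scenario ensembles of, e.g., 2 m air
# temperature; the scenario carries a strong 1.5 K urban warming signal.
control = 25.0 + 0.3 * rng.standard_normal((n_members, ny, nx))
scenario = 26.5 + 0.3 * rng.standard_normal((n_members, ny, nx))

signal = scenario.mean(axis=0) - control.mean(axis=0)
noise = 0.5 * (scenario.std(axis=0, ddof=1) + control.std(axis=0, ddof=1))
snr = np.abs(signal) / noise  # large values flag robust LULCC-induced signals
```

With only five members the `noise` estimate itself is uncertain, which is one reason the abstract stresses that small ensembles can mislead for noisy fields such as precipitation.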
NASA Astrophysics Data System (ADS)
Csordás, A.; Graham, R.; Szépfalusy, P.; Vattay, G.
1994-01-01
One wall of Artin's billiard on the Poincaré half-plane is replaced by a one-parameter (cp) family of non-geodetic walls. A brief description of the classical phase space of this system is given. In the quantum domain, the continuous and gradual transition from Poisson-like to Gaussian-orthogonal-ensemble (GOE) level statistics, due to small perturbations breaking the symmetry responsible for the "arithmetic chaos" at cp=1, is studied. Another transition, from GOE back to Poisson statistics due to the mixed phase space at large perturbations, is also investigated. A satisfactory description of the intermediate level statistics by the Brody distribution is found in both cases. The study supports the existence of a scaling region around cp=1. A finite-size scaling relation for the Brody parameter as a function of 1-cp and the number of levels considered can be established.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Hui; Rasch, Philip J.; Zhang, Kai
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model's cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and the aerosol life cycle are perturbed simultaneously to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly, and complex climate models.
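The trade-off underlying the short-ensemble strategy can be illustrated with a toy statistical model. This sketch is our own construction, not the paper's code: an AR(1) process stands in for a fast, weakly autocorrelated model variable, and all parameters are assumptions.

```python
# Toy illustration of the short-ensemble idea: for a fast process, the mean
# over many short, independently initialized runs matches the time mean of
# one long serial-in-time integration, but the short runs can execute in
# parallel.
import numpy as np

rng = np.random.default_rng(5)

def ar1(n_steps, x0, phi=0.7, sigma=1.0):
    # AR(1) process standing in for a fast model variable with true mean 0.
    x = np.empty(n_steps)
    prev = x0
    for i in range(n_steps):
        prev = phi * prev + sigma * rng.standard_normal()
        x[i] = prev
    return x

long_run = ar1(100_000, x0=0.0)  # one traditional long integration
short_runs = np.array([ar1(200, x0=rng.standard_normal()) for _ in range(500)])

long_mean = long_run.mean()
ensemble_mean = short_runs[:, 50:].mean()  # discard a short spin-up per member
# Both estimates land near the true mean of 0.
```

The spin-up discard mirrors the paper's caveat that the strategy suits fast processes: a slowly adjusting component would need far longer per-member spin-up, eroding the efficiency gain.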
Solution structural ensembles of substrate-free cytochrome P450(cam).
Asciutto, Eliana K; Young, Matthew J; Madura, Jeffry; Pochapsky, Susan Sondej; Pochapsky, Thomas C
2012-04-24
Removal of the substrate (+)-camphor from the active site of cytochrome P450(cam) (CYP101A1) results in nuclear magnetic resonance-detected perturbations in multiple regions of the enzyme. The (1)H-(15)N correlation map of the substrate-free diamagnetic Fe(II) CO-bound CYP101A1 permits these perturbations to be mapped onto the solution structure of the enzyme. Residual dipolar couplings (RDCs) were measured for (15)N-(1)H amide pairs in two independent alignment media for the substrate-free enzyme and used as restraints in solvated molecular dynamics (MD) simulations to generate an ensemble of best-fit structures of the substrate-free enzyme in solution. Nuclear magnetic resonance-detected chemical shift perturbations reflect changes in the electronic environment of the NH pairs, such as hydrogen bonding and ring current shifts, and are observed for residues in the active site as well as in hinge regions between secondary structural features. RDCs provide information about the relative orientations of secondary structures, and RDC-restrained MD simulations indicate that portions of a β-rich region adjacent to the active site shift so as to partially occupy the vacancy left by removal of the substrate. The accessible volume of the active site is reduced in the substrate-free enzyme relative to the substrate-bound structure calculated using the same methods. Both symmetric and asymmetric broadening of multiple resonances observed upon substrate removal, as well as localized increased errors in RDC fits, suggest that an ensemble of enzyme conformations is present in the substrate-free form.
Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts
NASA Astrophysics Data System (ADS)
Gingrich, Mark
Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties to error growth at the mesoscales remain largely unknown. Here, 100-member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalent, total liquid water, and 850 hPa temperatures, representing mesoscale features; and the sea level pressure field, representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may be more important in limiting predictability than errors in the unresolved, small-scale initial conditions.
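The ensemble perturbation kinetic energy used above as an error-growth diagnostic is simple to compute: half the ensemble variance of the horizontal wind. This sketch is ours, with synthetic winds; the real analysis would additionally decompose it by spatial scale to test for upscale propagation.

```python
# Ensemble perturbation kinetic energy at each grid point: half the
# ensemble variance of the horizontal wind components.
import numpy as np

rng = np.random.default_rng(2)
n_members, ny, nx = 100, 32, 32

# Synthetic ensemble: a common mean flow plus member-dependent perturbations.
u = 10.0 + rng.standard_normal((n_members, ny, nx))
v = 2.0 + rng.standard_normal((n_members, ny, nx))

u_pert = u - u.mean(axis=0)  # deviation of each member from the ensemble mean
v_pert = v - v.mean(axis=0)
pke = 0.5 * (u_pert**2 + v_pert**2).mean(axis=0)  # per-grid-point spread measure
```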
NASA Astrophysics Data System (ADS)
Calini, A.; Schober, C. M.
2013-09-01
In this article we present the results of a broad numerical investigation of the stability of breather-type solutions of the nonlinear Schrödinger (NLS) equation, specifically the one- and two-mode breathers for an unstable plane wave, which are frequently used to model rogue waves. The numerical experiments involve large ensembles of perturbed initial data for six typical random perturbations. Ensemble estimates of the "closeness", A(t), of the perturbed solution to an element of the respective unperturbed family indicate that the only neutrally stable breathers are the ones of maximal dimension; that is, given an unstable background with N unstable modes, the only neutrally stable breathers are the N-dimensional ones (obtained as a superposition of N simple breathers via iterated Bäcklund transformations). Conversely, breathers which are not fully saturated are sensitive to noisy environments and are unstable. Interestingly, A(t) is smallest for the coalesced two-mode breather, indicating that the coalesced case may be the most robust two-mode breather in a laboratory setting. The numerical simulations confirm and provide a realistic realization of the stability behavior established analytically by the authors.
Implications of global warming for the climate of African rainforests
James, Rachel; Washington, Richard; Rowell, David P.
2013-01-01
African rainforests are likely to be vulnerable to changes in temperature and precipitation, yet there has been relatively little research to suggest how the regional climate might respond to global warming. This study presents projections of temperature and precipitation indices of relevance to African rainforests, using global climate model experiments to identify local change as a function of global temperature increase. A multi-model ensemble and two perturbed physics ensembles are used, one with over 100 members. In the east of the Congo Basin, most models (92%) show a wet signal, whereas in west equatorial Africa, the majority (73%) project an increase in dry season water deficits. This drying is amplified as global temperature increases, and in over half of coupled models by greater than 3% per °C of global warming. Analysis of atmospheric dynamics in a subset of models suggests that this could be partly because of a rearrangement of zonal circulation, with enhanced convection in the Indian Ocean and anomalous subsidence over west equatorial Africa, the Atlantic Ocean and, in some seasons, the Amazon Basin. Further research to assess the plausibility of this and other mechanisms is important, given the potential implications of drying in these rainforest regions. PMID:23878329
Extended Range Prediction of Indian Summer Monsoon: Current status
NASA Astrophysics Data System (ADS)
Sahai, A. K.; Abhilash, S.; Borah, N.; Joseph, S.; Chattopadhyay, R.; S, S.; Rajeevan, M.; Mandal, R.; Dey, A.
2014-12-01
The main focus of this study is to develop a forecast consensus for the extended-range prediction (ERP) of monsoon intraseasonal oscillations using a suite of different variants of the Climate Forecast System (CFS) model. In this CFS-based grand multi-model ensemble prediction system (CGMME), the ensemble members are generated by perturbing the initial conditions and using different configurations of CFSv2. This addresses the role of different physical mechanisms known to control error growth in ERP on the 15-20 day time scale. The final formulation of CGMME is based on 21 ensemble members of the standalone Global Forecast System (GFS) forced with bias-corrected SST forecasts from CFS, 11 members of the low-resolution CFS T126, and 11 of the high-resolution CFS T382. Thus, we develop a multi-model consensus forecast for the ERP of the Indian summer monsoon (ISM) using a suite of different variants of the CFS model. This coordinated international effort has led to the development of tailor-made regional forecast products for the Indian region. Verification of deterministic and probabilistic categorical rainfall forecasts, as well as of large-scale low-frequency monsoon intraseasonal oscillations, has been carried out using hindcasts from 2001-2012 during the monsoon season, in which all models are initialized every five days from 16 May to 28 September. The deterministic forecast skill of CGMME is better than that of the best participating single-model ensemble configuration (SME). The CGMME approach is believed to quantify the uncertainty in both initial conditions and model formulation. The main improvement is in the probabilistic forecast, owing to an increase in ensemble spread that reduces the error due to overconfident ensembles in a single model configuration. For the probabilistic forecast, three tercile ranges are determined by a ranking method based on the percentage of ensemble members from all participating models falling in each category.
CGMME thus adds value to both deterministic and probabilistic forecasts compared to the raw SMEs; this better skill probably flows from the larger spread and improved spread-error relationship. The CGMME system is currently capable of generating ER predictions in real time and has successfully delivered experimental operational ER forecasts of the ISM for the last few years.
Machine Learning Predictions of a Multiresolution Climate Model Ensemble
NASA Astrophysics Data System (ADS)
Anderson, Gemma J.; Lucas, Donald D.
2018-05-01
Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.
Toward canonical ensemble distribution from self-guided Langevin dynamics simulation
NASA Astrophysics Data System (ADS)
Wu, Xiongwu; Brooks, Bernard R.
2011-04-01
This work derives a quantitative description of the conformational distribution in self-guided Langevin dynamics (SGLD) simulations. SGLD simulations employ guiding forces calculated from local average momenta to enhance low-frequency motion. This enhancement of low-frequency motion dramatically accelerates conformational search efficiency, but also induces certain perturbations in the conformational distribution. Through the local averaging, we separate properties of molecular systems into low-frequency and high-frequency portions. The guiding-force effect on the conformational distribution is quantitatively described using these low-frequency and high-frequency properties. This quantitative relation provides a way to convert between a canonical ensemble and a self-guided ensemble. Using example systems, we demonstrate how to utilize the relation to obtain canonical ensemble properties and conformational distributions from SGLD simulations. This development makes SGLD not only an efficient approach for conformational searching, but also an accurate means of conformational sampling.
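The local-averaging idea behind SGLD, separating a trajectory into low- and high-frequency portions, can be illustrated with a one-dimensional toy signal. This is our own sketch, not the authors' code: the boxcar window, the signal, and the noise level are all assumptions.

```python
# A moving average over a local window splits a trajectory into a
# low-frequency part (from which SGLD builds its guiding force) and a
# high-frequency remainder.
import numpy as np

def local_average(x, window):
    # Boxcar moving average with edge padding, same length as the input.
    kernel = np.ones(window) / window
    padded = np.pad(x, window // 2, mode="edge")
    smoothed = np.convolve(padded, kernel, mode="same")
    return smoothed[window // 2 : window // 2 + len(x)]

rng = np.random.default_rng(3)
t = np.arange(1000)
slow = np.sin(2 * np.pi * t / 500.0)    # slow conformational motion
fast = 0.3 * rng.standard_normal(1000)  # high-frequency thermal noise
x = slow + fast

low = local_average(x, 25)  # low-frequency portion
high = x - low              # high-frequency portion
# The local average tracks the slow component far better than the raw signal.
```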
Weisheimer, Antje; Corti, Susanna; Palmer, Tim; Vitart, Frederic
2014-01-01
The finite resolution of general circulation models of the coupled atmosphere–ocean system and the effects of sub-grid-scale variability present a major source of uncertainty in model simulations on all time scales. The European Centre for Medium-Range Weather Forecasts has been at the forefront of developing new approaches to account for these uncertainties. In particular, the stochastically perturbed physical tendency scheme and the stochastically perturbed backscatter algorithm for the atmosphere are now used routinely for global numerical weather prediction. The European Centre also performs long-range predictions of the coupled atmosphere–ocean climate system in operational forecast mode, and the latest seasonal forecasting system—System 4—has the stochastically perturbed tendency and backscatter schemes implemented in a similar way to that for the medium-range weather forecasts. Here, we present results of the impact of these schemes in System 4 by contrasting the operational performance on seasonal time scales during the retrospective forecast period 1981–2010 with comparable simulations that do not account for the representation of model uncertainty. We find that the stochastic tendency perturbation schemes helped to reduce excessively strong convective activity especially over the Maritime Continent and the tropical Western Pacific, leading to reduced biases of the outgoing longwave radiation (OLR), cloud cover, precipitation and near-surface winds. Positive impact was also found for the statistics of the Madden–Julian oscillation (MJO), showing an increase in the frequencies and amplitudes of MJO events. Further, the errors of El Niño southern oscillation forecasts become smaller, whereas increases in ensemble spread lead to a better calibrated system if the stochastic tendency is activated. The backscatter scheme has overall neutral impact. 
Finally, evidence for noise-activated regime transitions has been found in a cluster analysis of mid-latitude circulation regimes over the Pacific–North America region. PMID:24842026
NASA Astrophysics Data System (ADS)
Waldman, Robin; Herrmann, Marine; Somot, Samuel; Arsouze, Thomas; Benshila, Rachid; Bosse, Anthony; Chanut, Jerome; Giordani, Herve; Sevault, Florence; Testor, Pierre
2017-11-01
Winter 2012-2013 was a particularly intense and well-observed Dense Water Formation (DWF) event in the Northwestern Mediterranean Sea. In this study, we investigate the impact of the mesoscale dynamics on DWF. We perform two perturbed initial state simulation ensembles from summer 2012 to 2013, respectively mesoscale-permitting and mesoscale-resolving, with the AGRIF refinement tool in the Mediterranean configuration NEMOMED12. The mean impact of the mesoscale on DWF occurs mainly through the high-resolution physics and not the high-resolution bathymetry. This impact is shown to be modest: the mesoscale modifies neither the chronology of the deep convective winter nor the volume of dense waters formed. It however impacts the location of the mixed patch by reducing its extent to the west of the North Balearic Front and by increasing it along the Northern Current, in better agreement with observations. The maximum mixed patch volume is significantly reduced from 5.7 ± 0.2 to 4.2 ± 0.6 × 10^13 m^3. Finally, the spring restratification volume is more realistic, enhanced from 1.4 ± 0.2 to 1.8 ± 0.2 × 10^13 m^3 by the mesoscale. We also address the mesoscale impact on the ocean intrinsic variability by performing perturbed initial state ensemble simulations. The mesoscale enhances the intrinsic variability of the deep convection geography, with most of the mixed patch area affected by intrinsic variability. The DWF volume has a low intrinsic variability, but it is increased 2-3 times by the mesoscale. We relate this to a dramatic increase of the Gulf of Lions eddy kinetic energy from 5.0 ± 0.6 to 17.3 ± 1.5 cm^2/s^2, in remarkable agreement with observations.
Nonlinear gyrokinetics: a powerful tool for the description of microturbulence in magnetized plasmas
NASA Astrophysics Data System (ADS)
Krommes, John A.
2010-12-01
Gyrokinetics is the description of low-frequency dynamics in magnetized plasmas. In magnetic-confinement fusion, it provides the most fundamental basis for numerical simulations of microturbulence; there are astrophysical applications as well. In this tutorial, a sketch of the derivation of the novel dynamical system comprising the nonlinear gyrokinetic (GK) equation (GKE) and the coupled electrostatic GK Poisson equation will be given by using modern Lagrangian and Lie perturbation methods. No background in plasma physics is required in order to appreciate the logical development. The GKE describes the evolution of an ensemble of gyrocenters moving in a weakly inhomogeneous background magnetic field and in the presence of electromagnetic perturbations with wavelength of the order of the ion gyroradius. Gyrocenters move with effective drifts, which may be obtained by an averaging procedure that systematically, order by order, removes gyrophase dependence. To that end, the use of the Lagrangian differential one-form as well as the content and advantages of Lie perturbation theory will be explained. The electromagnetic fields follow via Maxwell's equations from the charge and current density of the particles. Particle and gyrocenter densities differ by an important polarization effect. That is calculated formally by a 'pull-back' (a concept from differential geometry) of the gyrocenter distribution to the laboratory coordinate system. A natural truncation then leads to the closed GK dynamical system. Important properties such as GK energy conservation and fluctuation noise will be mentioned briefly, as will the possibility (and difficulties) of deriving nonlinear gyrofluid equations suitable for rapid numerical solution—although it is probably best to directly simulate the GKE. By the end of the tutorial, students should appreciate the GKE as an extremely powerful tool and will be prepared for later lectures describing its applications to physical problems.
Automatic Estimation of Osteoporotic Fracture Cases by Using Ensemble Learning Approaches.
Kilic, Niyazi; Hosgormez, Erkan
2016-03-01
Ensemble learning methods are among the most powerful tools for pattern classification problems. In this paper, the effects of ensemble learning methods and some physical bone densitometry parameters on osteoporotic fracture detection were investigated. Six feature set models were constructed from different physical parameters and fed into the ensemble classifiers as input features. As ensemble learning techniques, bagging, gradient boosting and the random subspace method (RSM) were used. Instance-based learning (IBk) and random forest (RF) classifiers were applied to the six feature set models. The patients were classified into three groups, namely osteoporosis, osteopenia and control (healthy), using ensemble classifiers. Total classification accuracy and F-measure were used to evaluate the diagnostic performance of the proposed ensemble classification system. The classification accuracy reached 98.85% with model 6 (five BMD + five T-score values) using the RSM-RF classifier. The findings of this paper suggest that patients can be warned before a bone fracture occurs, by examining some physical parameters that can easily be measured without invasive procedures.
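The random subspace method used above can be sketched in a few lines. This is a minimal numpy illustration, with a 1-nearest-neighbour base learner standing in for IBk; it is not the authors' pipeline, and the function name and defaults are illustrative:

```python
import numpy as np

def rsm_fit_predict(X_train, y_train, X_test, n_learners=15,
                    subspace_frac=0.5, rng=None):
    """Random subspace method sketch: each base learner is a 1-NN
    classifier (an IBk analogue) trained on a random subset of the
    features; the ensemble predicts by majority vote."""
    rng = np.random.default_rng(rng)
    n_features = X_train.shape[1]
    k = max(1, int(subspace_frac * n_features))
    votes = []
    for _ in range(n_learners):
        feats = rng.choice(n_features, size=k, replace=False)
        # 1-NN prediction restricted to the sampled feature subspace
        d = np.linalg.norm(X_test[:, None, feats] - X_train[None, :, feats],
                           axis=2)
        votes.append(y_train[np.argmin(d, axis=1)])
    votes = np.stack(votes)           # shape: (n_learners, n_test)
    # majority vote across base learners (labels assumed to be small ints)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

Bagging follows the same pattern with resampled training rows instead of feature subsets.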
NASA Astrophysics Data System (ADS)
Yang, Yi-Bo; Chen, Ying; Draper, Terrence; Liang, Jian; Liu, Keh-Fei
2018-03-01
We report the results on the proton mass decomposition and also on the related quark and glue momentum fractions. The results are based on overlap valence fermions on four ensembles of Nf = 2+1 DWF configurations with three lattice spacings and volumes, and several pion masses including the physical pion mass. With a 1-loop perturbative calculation and proper normalization of the glue operator, we find that the u, d, and s quark masses contribute 9(2)% to the proton mass. The quark energy and glue field energy contribute 31(5)% and 37(5)%, respectively, in the MS-bar scheme at µ = 2 GeV. The trace anomaly gives the remaining 23(1)% contribution. The u, d, s and glue momentum fractions in the MS-bar scheme are consistent with the global analysis at µ = 2 GeV.
Tests of oceanic stochastic parameterisation in a seasonal forecast system.
NASA Astrophysics Data System (ADS)
Cooper, Fenwick; Andrejczuk, Miroslaw; Juricke, Stephan; Zanna, Laure; Palmer, Tim
2015-04-01
Our aim is to compare, over seasonal time scales, the relative impact of ocean initial condition and model uncertainty upon ocean forecast skill and reliability. We compare four oceanic stochastic parameterisation schemes applied in a 1x1 degree ocean model (NEMO) with a fully coupled T159 atmosphere (ECMWF IFS). The relative impacts upon the ocean of the resulting eddy-induced activity, wind forcing and typical initial condition perturbations are quantified. Following the historical success of stochastic parameterisation in the atmosphere, two of the parameterisations tested were multiplicative in nature: a stochastic variation of the Gent-McWilliams scheme and a stochastic diffusion scheme. We also consider a surface flux parameterisation (similar to that introduced by Williams, 2012), and stochastic perturbation of the equation of state (similar to that introduced by Brankart, 2013). The amplitude of the stochastic term in the Williams (2012) scheme was set to the physically reasonable amplitude considered in that paper. The amplitude of the stochastic term in each of the other schemes was increased to the limits of model stability. As expected, variability was increased. Up to 1 month after initialisation, the ensemble spread induced by stochastic parameterisation is greater than that induced by the atmosphere, whilst being smaller than the initial condition perturbations currently used at ECMWF. After 1 month, the wind forcing becomes the dominant source of model ocean variability, even at depth.
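The multiplicative schemes above share a simple core idea: scale the model tendency by a zero-mean random factor at each step, and build the ensemble from members with independent noise. A minimal sketch of that idea (an Euler step on a toy tendency, not the NEMO implementation; names and defaults are illustrative):

```python
import numpy as np

def step_perturbed(x, tendency, dt, sigma=0.3, rng=None):
    """One Euler step with a multiplicatively perturbed tendency,
        x_{n+1} = x_n + dt * (1 + r) * f(x_n),   r ~ N(0, sigma^2),
    the basic idea behind stochastically perturbed tendency schemes."""
    rng = np.random.default_rng(rng)
    r = rng.normal(0.0, sigma, size=np.shape(x))
    return x + dt * (1.0 + r) * tendency(x)

def ensemble(x0, tendency, dt, n_steps, n_members, seed=0):
    """Integrate many members with independent noise realisations."""
    rng = np.random.default_rng(seed)
    members = np.full(n_members, float(x0))
    for _ in range(n_steps):
        members = step_perturbed(members, tendency, dt, rng=rng)
    return members
```

Because the perturbation has zero mean, the ensemble mean tracks the unperturbed trajectory while the spread grows with the noise amplitude.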
Flavor-singlet meson decay constants from Nf=2 +1 +1 twisted mass lattice QCD
NASA Astrophysics Data System (ADS)
Ottnad, Konstantin; Urbach, Carsten; ETM Collaboration
2018-03-01
We present an improved analysis of our lattice data for the η-η′ system, including a correction of the relevant correlation functions for residual topological finite-size effects and employing consistent chiral and continuum fits. From this analysis we update our physical results for the masses Mη = 557(11)_stat(3)_χPT MeV and Mη′ = 911(64)_stat(3)_χPT MeV, as well as the mixing angle in the quark flavor basis, φ = 38.8(2.2)_stat(2.4)_χPT°, in excellent agreement with other results from phenomenology. Similarly, we include an analysis of the decay constant parameters, leading to fl = 125(5)_stat(6)_χPT MeV and fs = 178(4)_stat(1)_χPT MeV. The second error reflects the uncertainty related to the chiral extrapolation. The data used for this study were generated on gauge ensembles provided by the European Twisted Mass Collaboration with Nf = 2+1+1 dynamical flavors of Wilson twisted mass fermions. These ensembles cover a range of pion masses from 220 MeV to 500 MeV and three values of the lattice spacing. Combining our data with a prediction from chiral perturbation theory, we give an estimate for the physical η, η′ → γγ decay widths and the singly virtual η, η′ → γγ* transition form factors in the limit of large momentum transfer.
Perturbed-input-data ensemble modeling of magnetospheric dynamics
NASA Astrophysics Data System (ADS)
Morley, S.; Steinberg, J. T.; Haiducek, J. D.; Welling, D. T.; Hassan, E.; Weaver, B. P.
2017-12-01
Many models of Earth's magnetospheric dynamics - including global magnetohydrodynamic models, reduced-complexity models of substorms and empirical models - are driven by solar wind parameters. To provide consistent coverage of the upstream solar wind, these measurements are generally taken near the first Lagrangian point (L1) and algorithmically propagated to the nose of Earth's bow shock. However, the plasma and magnetic field measured near L1 constitute point measurements of an inhomogeneous medium, so an individual measurement may not be sufficiently representative of the broader region near L1. The measured plasma may not actually interact with the Earth, and the solar wind structure may evolve between L1 and the bow shock. To quantify uncertainties in simulations, as well as to provide probabilistic forecasts, it is desirable to use perturbed-input ensembles of magnetospheric and space weather forecasting models. Using concurrent measurements of the solar wind near L1 and near the Earth, we construct a statistical model of the distributions of solar wind parameters conditioned on their upstream values. To draw random variates from our model, we specify the conditional probability distributions using kernel density estimation. We demonstrate the utility of this approach using ensemble runs of selected models that can be used for space weather prediction.
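For a product Gaussian kernel density estimate, conditioning on the upstream value reduces to reweighting the kernel mixture, which makes sampling straightforward. A minimal numpy sketch of that idea (variable names and bandwidths are illustrative, not the authors' configuration):

```python
import numpy as np

def sample_conditional_kde(u_obs, v_obs, u_star, n_draws,
                           h_u=1.0, h_v=1.0, seed=0):
    """Draw near-Earth values v conditioned on an upstream (L1) value
    u_star, from a Gaussian-kernel density estimate of the joint (u, v)
    sample.  For a product Gaussian KDE, p(v | u*) is a mixture of the
    v-kernels with weights proportional to K((u_i - u*) / h_u)."""
    u_obs = np.asarray(u_obs, dtype=float)
    v_obs = np.asarray(v_obs, dtype=float)
    rng = np.random.default_rng(seed)
    w = np.exp(-0.5 * ((u_obs - u_star) / h_u) ** 2)
    w /= w.sum()
    idx = rng.choice(len(v_obs), size=n_draws, p=w)   # pick mixture component
    return v_obs[idx] + rng.normal(0.0, h_v, size=n_draws)
```

Sampling a component index and adding kernel noise is exact for this mixture, so no accept/reject step is needed.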
Robustness of Synchrony in Complex Networks and Generalized Kirchhoff Indices
NASA Astrophysics Data System (ADS)
Tyloo, M.; Coletta, T.; Jacquod, Ph.
2018-02-01
In network theory, a question of prime importance is how to assess network vulnerability in a fast and reliable manner. With this issue in mind, we investigate the response to external perturbations of coupled dynamical systems on complex networks. We find that for specific, nonaveraged perturbations, the response of synchronous states depends on the eigenvalues of the stability matrix of the unperturbed dynamics, as well as on its eigenmodes via their overlap with the perturbation vector. Once averaged over properly defined ensembles of perturbations, the response is given by new graph topological indices, which we introduce as generalized Kirchhoff indices. These findings allow for a fast and reliable method for assessing the specific or average vulnerability of a network against changing operational conditions, faults, or external attacks.
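As a concrete special case, the classical Kirchhoff index can be computed directly from the Laplacian spectrum; the generalized indices introduced in the paper replace the inverse-eigenvalue sum with other spectral functions. A minimal sketch, assuming an undirected, connected graph:

```python
import numpy as np

def kirchhoff_index(adj):
    """Classical Kirchhoff index Kf = N * sum_{i>=2} 1/lambda_i over the
    nonzero eigenvalues of the graph Laplacian L = D - A.  The paper's
    generalized indices substitute other functions of the lambda_i."""
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj     # Laplacian of the graph
    lam = np.linalg.eigvalsh(lap)
    nz = lam[lam > 1e-9]                     # drop the zero mode
    return adj.shape[0] * np.sum(1.0 / nz)
```

For the complete graph K_n the index evaluates to n - 1 (e.g. 2 for a triangle), a handy sanity check.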
Simulating Quantitative Cellular Responses Using Asynchronous Threshold Boolean Network Ensembles
With increasing knowledge about the potential mechanisms underlying cellular functions, it is becoming feasible to predict the response of biological systems to genetic and environmental perturbations. Due to the lack of homogeneity in living tissues it is difficult to estimate t...
Motion compensation using origin ensembles in awake small animal positron emission tomography
NASA Astrophysics Data System (ADS)
Gillam, John E.; Angelis, Georgios I.; Kyme, Andre Z.; Meikle, Steven R.
2017-02-01
In emission tomographic imaging, the stochastic origin ensembles algorithm provides unique information regarding the detected counts given the measured data. Precision in both voxel and region-wise parameters may be determined for a single data set based on the posterior distribution of the count density, allowing uncertainty estimates to be attached to quantitative measures. Uncertainty estimates are of particular importance in awake animal neurological and behavioral studies, for which head motion, unique to each acquired data set, perturbs the measured data. Motion compensation can be conducted when rigid head pose is measured during the scan. However, errors in the pose measurements used for compensation can degrade the data and hence quantitative outcomes. In this investigation, motion compensation and detector resolution models were incorporated into the basic origin ensembles algorithm and an efficient approach to computation was developed. The approach was validated against maximum-likelihood expectation maximisation and tested using simulated data. The resultant algorithm was then used to analyse quantitative uncertainty in regional activity estimates arising from changes in pose measurement precision. Finally, the posterior covariance acquired from a single data set was used to describe correlations between regions of interest, providing information about pose measurement precision that may be useful in system analysis and design. The investigation demonstrates the use of origin ensembles as a powerful framework for evaluating the statistical uncertainty of voxel and regional estimates. While rigid motion was considered here in the context of awake animal PET, the extension to arbitrary motion may provide clinical utility where respiratory or cardiac motion perturbs the measured data.
Tsuchiya, Masa; Giuliani, Alessandro; Hashimoto, Midori; Erenpreisa, Jekaterina; Yoshikawa, Kenichi
2016-01-01
Background: A fundamental issue in bioscience is to understand the mechanism that underlies the dynamic control of genome-wide expression through the complex temporal-spatial self-organization of the genome to regulate the change in cell fate. We address this issue by elucidating a physically motivated mechanism of self-organization. Principal Findings: Building upon transcriptome experimental data for seven distinct cell fates, including early embryonic development, we demonstrate that self-organized criticality (SOC) plays an essential role in the dynamic control of global gene expression regulation at both the population and single-cell levels. The novel findings are as follows: i) Mechanism of cell-fate changes: A sandpile-type critical transition self-organizes overall expression into a few transcription response domains (critical states). A cell-fate change occurs by means of a dissipative pulse-like global perturbation in self-organization through the erasure of initial-state critical behaviors (criticality). Most notably, the reprogramming of early embryo cells destroys the zygote SOC control to initiate self-organization in the new embryonal genome, which passes through a stochastic overall expression pattern. ii) Mechanism of perturbation of SOC controls: Global perturbations in self-organization involve the temporal regulation of critical states. Quantitative evaluation of this perturbation in terminal cell fates reveals that dynamic interactions between critical states determine the critical-state coherent regulation. The occurrence of a temporal change in criticality perturbs this between-states interaction, which directly affects the entire genomic system. Surprisingly, a sub-critical state, corresponding to an ensemble of genes that show only marginal changes in expression and consequently are considered to be devoid of any interest, plays an essential role in generating a global perturbation in self-organization directed toward the cell-fate change.
Conclusion and Significance: ‘Whole-genome’ regulation of gene expression through self-regulatory SOC control complements gene-by-gene fine tuning and represents a still largely unexplored non-equilibrium statistical mechanism that is responsible for the massive reprogramming of genome expression. PMID:27997556
NASA Technical Reports Server (NTRS)
Ham, Yoo-Geun; Schubert, Siegfried; Chang, Yehui
2012-01-01
An initialization strategy, tailored to the prediction of the Madden-Julian oscillation (MJO), is evaluated using the Goddard Earth Observing System Model, version 5 (GEOS-5), coupled general circulation model (CGCM). The approach is based on the empirical singular vectors (ESVs) of a reduced-space, statistically determined linear approximation of the full nonlinear CGCM. The initial ESV, extracted using 10 years (1990-99) of boreal winter hindcast data, has zonal wind anomalies over the western Indian Ocean, while the final ESV (at a forecast lead time of 10 days) reflects a propagation of the zonal wind anomalies to the east over the Maritime Continent, an evolution that is characteristic of the MJO. A new set of ensemble hindcasts is produced for the boreal winter seasons from 1990 to 1999 in which the leading ESV provides the initial perturbations. The results are compared with those from a set of control hindcasts generated using random perturbations. It is shown that the ESV-based predictions have a systematically higher bivariate correlation skill in predicting the MJO compared to those using the random perturbations. Furthermore, the improvement in skill depends on the phase of the MJO. The ESV is particularly effective in increasing the forecast skill during those phases of the MJO in which the control has low skill (with correlations increasing by as much as 0.2 at 20-25-day lead times), as well as during those times in which the MJO is weak.
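The ESV construction can be illustrated compactly: fit a linear propagator between initial and lead-time anomaly states by least squares, then take its SVD; the leading right singular vector is the fastest-growing initial perturbation. A toy numpy sketch (not the reduced-space GEOS-5 procedure; the pseudoinverse fit stands in for the statistical linear approximation):

```python
import numpy as np

def empirical_singular_vectors(X0, Xtau):
    """Empirical singular vectors: fit a least-squares linear propagator M
    mapping initial-state anomalies X0 to lead-time anomalies Xtau
    (columns = samples), then take the SVD of M.  The leading right
    singular vector is the optimal (fastest-growing) initial perturbation,
    the corresponding left singular vector its evolved pattern."""
    M = Xtau @ np.linalg.pinv(X0)        # least-squares propagator estimate
    U, s, Vt = np.linalg.svd(M)
    return Vt[0], U[:, 0], s[0]          # initial ESV, final ESV, growth
```

In practice the fit is done in a reduced (e.g. EOF) space so that the propagator is well conditioned.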
Davey, James A; Chica, Roberto A
2014-05-01
Multistate computational protein design (MSD) with backbone ensembles approximating conformational flexibility can predict higher quality sequences than single-state design with a single fixed backbone. However, it is currently unclear what characteristics of backbone ensembles are required for the accurate prediction of protein sequence stability. In this study, we aimed to improve the accuracy of protein stability predictions made with MSD by using a variety of backbone ensembles to recapitulate the experimentally measured stability of 85 Streptococcal protein G domain β1 sequences. Ensembles tested here include an NMR ensemble as well as those generated by molecular dynamics (MD) simulations, by Backrub motions, and by PertMin, a new method that we developed involving the perturbation of atomic coordinates followed by energy minimization. MSD with the PertMin ensembles resulted in the most accurate predictions by providing the highest number of stable sequences in the top 25, and by correctly binning sequences as stable or unstable with the highest success rate (≈90%) and the lowest number of false positives. The performance of PertMin ensembles is due to the fact that their members closely resemble the input crystal structure and have low potential energy. Conversely, the NMR ensemble as well as those generated by MD simulations at 500 or 1000 K reduced prediction accuracy due to their low structural similarity to the crystal structure. The ensembles tested herein thus represent on- or off-target models of the native protein fold and could be used in future studies to design for desired properties other than stability. Copyright © 2013 Wiley Periodicals, Inc.
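The PertMin idea, perturbing the atomic coordinates and then energy-minimising each copy, can be sketched on a toy potential. This illustrates the concept only: the steepest-descent minimiser and the harmonic test potential below are stand-ins for a real force-field minimisation, and all names are mine:

```python
import numpy as np

def pertmin_ensemble(coords, energy_grad, n_members=10, noise=0.1,
                     lr=0.05, n_min_steps=200, seed=0):
    """PertMin-style ensemble sketch: perturb the input coordinates with
    small Gaussian noise, then relax each copy by gradient-descent energy
    minimisation.  Members therefore stay close to the input structure
    and sit near local energy minima, the two properties the paper
    credits for the ensemble's prediction accuracy."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        x = coords + rng.normal(0.0, noise, size=coords.shape)
        for _ in range(n_min_steps):     # crude steepest-descent minimiser
            x = x - lr * energy_grad(x)
        members.append(x)
    return np.stack(members)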
Update on SU(2) gauge theory with NF = 2 fundamental flavours.
NASA Astrophysics Data System (ADS)
Drach, Vincent; Janowski, Tadeusz; Pica, Claudio
2018-03-01
We present a non-perturbative study of SU(2) gauge theory with two fundamental Dirac flavours. This theory provides a minimal template which is ideal for a wide class of Standard Model extensions featuring novel strong dynamics, such as a minimal realization of composite Higgs models. We present an update on the status of the meson spectrum and decay constants based on increased statistics on our existing ensembles and the inclusion of new ensembles with lighter pion masses, resulting in a more reliable chiral extrapolation. Preprint: CP3-Origins-2017-048 DNRF90
A Flexible Approach for the Statistical Visualization of Ensemble Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, K.; Wilson, A.; Bremer, P.
2009-09-29
Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.
Data-driven reverse engineering of signaling pathways using ensembles of dynamic models.
Henriques, David; Villaverde, Alejandro F; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R
2017-02-01
Despite significant efforts and remarkable progress, the inference of signaling networks from experimental data remains very challenging. The problem is particularly difficult when the objective is to obtain a dynamic model capable of predicting the effect of novel perturbations not considered during model training. The problem is ill-posed due to the nonlinear nature of these systems, the fact that only a fraction of the involved proteins and their post-translational modifications can be measured, and limitations on the technologies used for growing cells in vitro, perturbing them, and measuring their variations. As a consequence, there is a pervasive lack of identifiability. To overcome these issues, we present a methodology called SELDOM (enSEmbLe of Dynamic lOgic-based Models), which builds an ensemble of logic-based dynamic models, trains them to experimental data, and combines their individual simulations into an ensemble prediction. It also includes a model reduction step to prune spurious interactions and mitigate overfitting. SELDOM is a data-driven method, in the sense that it does not require any prior knowledge of the system: the interaction networks that act as scaffolds for the dynamic models are inferred from data using mutual information. We have tested SELDOM on a number of experimental and in silico signal transduction case-studies, including the recent HPN-DREAM breast cancer challenge. We found that its performance is highly competitive compared to state-of-the-art methods for the purpose of recovering network topology. More importantly, the utility of SELDOM goes beyond basic network inference (i.e. uncovering static interaction networks): it builds dynamic (based on ordinary differential equation) models, which can be used for mechanistic interpretations and reliable dynamic predictions in new experimental conditions (i.e. not used in the training). 
For this task, SELDOM's ensemble prediction is not only consistently better than predictions from individual models, but also often outperforms the state of the art represented by the methods used in the HPN-DREAM challenge.
Conservation of Mass and Preservation of Positivity with Ensemble-Type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; Mclaughlin, Dennis; Cohn, Stephen E.; Verlaan, Martin
2014-01-01
This paper considers the incorporation of constraints to enforce physically based conservation laws in the ensemble Kalman filter. In particular, constraints are used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In certain situations, filtering algorithms such as the ensemble Kalman filter (EnKF) and ensemble transform Kalman filter (ETKF) yield updated ensembles that conserve mass but are negative, even though the actual states must be nonnegative. In such situations, if negative values are set to zero, or a log transform is introduced, the total mass will not be conserved. In this study, mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate non-negativity constraints. Simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. In two examples, an update that includes a non-negativity constraint is able to properly describe the transport of a sharp feature (e.g., a triangle or cone). A number of implementation questions still need to be addressed, particularly the need to develop a computationally efficient quadratic programming update for large ensembles.
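When the quadratic cost uses an identity weighting, the constrained update described above reduces to a Euclidean projection onto the set {x >= 0, sum(x) = mass}, which has a well-known exact sorting-based solution. A minimal sketch of that special case only, not the paper's full quadratic program:

```python
import numpy as np

def project_mass_nonneg(x, mass):
    """Euclidean projection of an updated state vector onto
    {x >= 0, sum(x) = mass}: the simplest instance of the constrained
    (quadratic-programming) filter update, solved exactly by the classic
    sorting algorithm for projection onto a scaled simplex."""
    x = np.asarray(x, dtype=float)
    u = np.sort(x)[::-1]                       # sorted descending
    css = np.cumsum(u) - mass                  # cumulative sums minus mass
    k = np.arange(len(x)) + 1.0
    rho = np.nonzero(u - css / k > 0)[0][-1]   # largest feasible support
    theta = css[rho] / (rho + 1.0)             # shift enforcing the mass
    return np.maximum(x - theta, 0.0)
```

For example, projecting [0.5, -0.2, 0.7] with mass 1.0 gives [0.4, 0.0, 0.6]: the negative entry is removed while the total mass is conserved.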
NASA Astrophysics Data System (ADS)
Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique
2018-05-01
Intense precipitation events in the Mediterranean often lead to devastating flash floods (FF). FF modelling is affected by several kinds of uncertainties, and Hydrological Ensemble Prediction Systems (HEPS) are designed to take those uncertainties into account. The major source of uncertainty comes from the rainfall forcing, and convective-scale meteorological ensemble prediction systems can manage it for forecasting purposes. But other sources are related to the hydrological modelling part of the HEPS. This study focuses on the uncertainties arising from the hydrological model parameters and initial soil moisture, with the aim of designing an ensemble-based version of a hydrological model dedicated to simulations of fast-responding Mediterranean rivers, the ISBA-TOP coupled system. The first step consists in identifying the parameters that have the strongest influence on FF simulations by assuming perfect precipitation. A sensitivity study is carried out, first using a synthetic framework and then for several real events and several catchments. Perturbation methods varying the most sensitive parameters as well as the initial soil moisture allow an ensemble-based version of ISBA-TOP to be designed. The first results of this system on some real events are presented. The next step will be to drive this ensemble-based version with the members of a convective-scale meteorological ensemble prediction system to design a complete HEPS for FF forecasting.
NASA Astrophysics Data System (ADS)
Fukui, Shin; Iwasaki, Toshiki; Saito, Kazuo; Seko, Hiromu; Kunii, Masaru
2016-04-01
Several long-term global reanalyses have been produced by major operational centres and have contributed considerably to the advance of weather and climate research. Although the horizontal resolutions of these global reanalyses are increasing, partly owing to the development of computing technology, they are still too coarse to reproduce local circulations and precipitation realistically. To solve this problem, dynamical downscaling is often employed. However, forcing from the lateral boundaries alone cannot necessarily control the inner fields, especially in long-term dynamical downscaling. Regional reanalysis is expected to overcome this difficulty. To maintain the long-term consistency of the analysis quality, it is preferable to assimilate only the conventional observations that are available over a long period. To confirm the effectiveness of the regional reanalysis, assimilation experiments are performed in which only conventional observations (SYNOP, SHIP, BUOY, TEMP, PILOT, TC-Bogus) are assimilated with the NHM-LETKF system, which consists of the nonhydrostatic model (NHM) of the Japan Meteorological Agency (JMA) and the local ensemble transform Kalman filter (LETKF). The horizontal resolution is 25 km and the domain covers Japan and its surroundings. The Japanese 55-year reanalysis (JRA-55) is adopted as the initial and lateral boundary conditions for the NHM-LETKF forecast-analysis cycles. The ensemble size is 10. The experimental period is August 2014, as a representative of the warm season for the region. The results are verified against JMA's operational Meso-scale Analysis, which is produced by assimilating observation data, including various remote-sensing observations, with a 4D-Var scheme, and compared with those of a simple dynamical downscaling experiment without data assimilation. The effects of implementing lateral boundary perturbations derived from an EOF analysis of JRA-55 over the targeted domain are also examined.
The comparison suggests that the assimilation system can reproduce more accurate fields than dynamical downscaling alone. The experiments with lateral boundary perturbations imply that the perturbations contribute to providing more appropriate ensemble spreads, though they are not necessarily consistent with the perturbations of the inner fields given by NHM-LETKF.
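The lateral boundary perturbation idea (an EOF analysis of reanalysis anomalies, then random combinations of the leading modes for each member) might be sketched as follows. The array shapes and the standard-deviation scaling are assumptions for illustration, not the NHM-LETKF implementation:

```python
import numpy as np

def eof_boundary_perturbations(anomalies, n_modes, n_members, seed=0):
    """Build lateral-boundary perturbations from the leading EOFs of
    reanalysis anomalies. `anomalies` has shape (time, space)."""
    rng = np.random.default_rng(seed)
    # EOFs via SVD of the time-centered anomaly matrix
    a = anomalies - anomalies.mean(axis=0)
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    eofs = vt[:n_modes]                        # (modes, space) patterns
    amp = s[:n_modes] / np.sqrt(len(a) - 1)    # std of each principal component
    # one random combination of the leading modes per ensemble member
    w = rng.standard_normal((n_members, n_modes)) * amp
    return w @ eofs                            # (members, space)

perts = eof_boundary_perturbations(
    np.random.default_rng(1).standard_normal((40, 100)),
    n_modes=5, n_members=10)
```

Each row of `perts` would be added to the (unperturbed) lateral boundary conditions of one ensemble member.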
Projecting the release of carbon from permafrost soils using a perturbed physics ensemble
NASA Astrophysics Data System (ADS)
MacDougall, A. H.; Knutti, R.
2015-12-01
The soils of the Northern Hemisphere permafrost region are estimated to contain 1100 to 1500 Pg of carbon (Pg C). A substantial fraction of this carbon has been frozen, and therefore protected from microbial decay, for millennia. As anthropogenic climate warming progresses, much of this permafrost is expected to thaw. Here we conduct perturbed physics experiments on a climate model of intermediate complexity, with an improved permafrost carbon module, to estimate with formal uncertainty bounds the release of carbon from permafrost soils by the years 2100 and 2300. We estimate that by 2100 the permafrost region may release between 56 (13 to 118) Pg C under Representative Concentration Pathway (RCP) 2.6 and 102 (27 to 199) Pg C under RCP 8.5, with substantially more to be released under each scenario by the year 2300. A subset of 25 model variants was projected 8000 years into the future under continued RCP 4.5 and 8.5 forcing. Under the high forcing scenario the permafrost carbon pool decays away over several thousand years. Under the moderate forcing scenario, a remnant near-surface permafrost region persists in the high Arctic, which develops a large permafrost carbon pool, leading to a global recovery of the pool beginning in the mid third millennium of the common era (CE). Overall, our simulations suggest that the permafrost carbon cycle feedback to climate change will make a significant but not cataclysmic contribution to climate change over the coming centuries and millennia.
NASA Astrophysics Data System (ADS)
MacDougall, Andrew; Knutti, Reto
2016-04-01
The soils of the northern hemisphere permafrost region are estimated to contain 1100 to 1500 Pg of carbon. A substantial fraction of this carbon has been frozen, and therefore protected from microbial decay, for millennia. As anthropogenic climate warming progresses, permafrost soils are expected to thaw. Here we conduct perturbed physics experiments on a climate model of intermediate complexity, with an improved permafrost carbon module, to estimate with formal uncertainty bounds the release of carbon from permafrost soils by the years 2100 and 2300. We estimate that by year 2100 the permafrost region may release between 56 (13 to 118) Pg C under Representative Concentration Pathway (RCP) 2.6 and 102 (27 to 199) Pg C under RCP 8.5, with substantially more to be released under each scenario by 2300. A subset of 25 model variants is projected 8000 years into the future under continued RCP 4.5 and 8.5 forcing. Under the high forcing scenario the permafrost carbon pool decays away over several thousand years. Under the moderate forcing scenario, a remnant near-surface permafrost region persists in the High Arctic, which develops a large permafrost carbon pool, leading to a global recovery of the pool beginning in the mid third millennium of the common era. Overall, our simulations suggest that the permafrost carbon cycle feedback to climate change will make a significant but not cataclysmic contribution to climate change over the coming centuries and millennia.
From Cyclone Tracks to the Costs of European Winter Storms: A Probabilistic Loss Assessment Model
NASA Astrophysics Data System (ADS)
Orwig, K.; Renggli, D.; Corti, T.; Reese, S.; Wueest, M.; Viktor, E.; Zimmerli, P.
2014-12-01
European winter storms cause billions of dollars of insured losses every year. Therefore, it is essential to understand the potential impacts of future events and the role reinsurance can play in mitigating the losses. The authors will present an overview of natural catastrophe risk assessment modeling in the reinsurance industry and the development of an innovative new approach for modeling the risk associated with European winter storms. The approach develops physically meaningful probabilistic (i.e. simulated) events for European winter storm loss assessment. The meteorological hazard component of the new model is based on cyclone and windstorm tracks identified in the 20th Century Reanalysis data. Knowledge of the evolution of winter storms in both time and space allows the physically meaningful perturbation of historical event properties (e.g. track, intensity). The perturbation includes a random element but also takes the local climatology and the evolution of the historical event into account. The low-resolution wind footprints taken from the 20th Century Reanalysis are processed by a statistical-dynamical downscaling to generate high-resolution footprints for both the simulated and historical events. Downscaling transfer functions are generated using ENSEMBLES regional climate model data. The result is a set of reliable probabilistic events representing thousands of years. The event set is then combined with country- and site-specific vulnerability functions and detailed market- or client-specific information to compute annual expected losses.
NASA Astrophysics Data System (ADS)
Farrell, Brian; Ioannou, Petros; Nikolaidis, Marios-Andreas
2017-11-01
While linear non-normality underlies the mechanism of energy transfer from the externally driven flow to the perturbation field, nonlinearity is also known to play an essential role in sustaining turbulence. We report a study based on the statistical state dynamics of Couette flow turbulence with the goal of better understanding the role of nonlinearity in sustaining turbulence. The statistical state dynamics implementations used are ensemble closures at second order in a cumulant expansion of the Navier-Stokes equations in which the averaging operator is the streamwise mean. Two fundamentally non-normal mechanisms potentially contributing to maintaining the second cumulant are identified: essentially parametric perturbation growth arising from interaction of the perturbations with the fluctuating mean flow, and transient growth of perturbations arising from nonlinear interaction between components of the perturbation field. By selectively including these mechanisms, parametric growth is found to maintain the perturbation field in the turbulent state, while the more commonly invoked mechanism, transient growth of perturbations arising from scattering by nonlinear interaction, is found to suppress perturbation variance. Funded by the ERC Coturb Madrid Summer Program and NSF AGS-1246929.
NASA Astrophysics Data System (ADS)
Motzoi, F.; Mølmer, K.
2018-05-01
We propose to use the interaction between a single qubit atom and a surrounding ensemble of three-level atoms to control the phase of light reflected by an optical cavity. Our scheme employs an ensemble dark resonance that is perturbed by the qubit atom to yield a single-atom single-photon gate. We show here that off-resonant excitation towards Rydberg states with strong dipolar interactions offers experimentally viable regimes of operation with low errors (in the 10^-3 range), as required for fault-tolerant optical-photon, gate-based quantum computation. We also propose and analyze an implementation within microwave circuit-QED, where a strongly coupled ancilla superconducting qubit can be used in place of the atomic ensemble to provide high-fidelity coupling to microwave photons.
NASA Astrophysics Data System (ADS)
Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D.
2012-09-01
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from an emission-driven rather than concentration-driven perturbed parameter ensemble of a Global Climate Model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, the land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses when considering emission-driven rather than concentration-driven simulations (with 10-90 percentile ranges of 1.7 K for the aggressive mitigation scenario up to 3.9 K for the high-end business-as-usual scenario). A small minority of simulations, resulting from combinations of strong atmospheric feedbacks and carbon cycle responses, show temperature increases in excess of 9 K under RCP8.5, and in excess of 4 K even under aggressive mitigation (RCP2.6). While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) of the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. In the case of both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration pathways used to drive GCM ensembles lie towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses. Our ensemble of emission-driven simulations spans the global temperature response of other multi-model frameworks except at the low end, where combinations of low climate sensitivity and low carbon cycle feedbacks lead to responses outside our ensemble range.
The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to sampling a number of stronger carbon cycle feedbacks and to sampling climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world constraints on climate sensitivity which, if achieved, would reduce the upper bound of projected global mean temperature change. The ensemble of simulations presented here provides a framework to explore relationships between present-day observables and future changes, while the large spread of projected future changes highlights the ongoing need for such work.
The role of internal variability for decadal carbon uptake anomalies in the Southern Ocean
NASA Astrophysics Data System (ADS)
Spring, Aaron; Hi, Hongmei; Ilyina, Tatiana
2017-04-01
The Southern Ocean is a major sink for anthropogenic CO2 emissions, and hence it plays an essential role in modulating the global carbon cycle and climate change. Previous studies based on observations (e.g., Landschützer et al. 2015) show pronounced decadal variations of carbon uptake in the Southern Ocean in recent decades, and this variability is largely driven by internal climate variability. However, owing to the limited ensemble size of simulations, the variability of this important ocean sink is still poorly assessed by state-of-the-art earth system models (ESMs). To assess the internal variability of the carbon sink in the Southern Ocean, we use a large ensemble of 100 simulations based on the Max Planck Institute ESM (MPI-ESM). The large ensemble is generated via perturbed initial conditions in the ocean and atmosphere. Each ensemble member includes a historical simulation from 1850 to 2005 with an extension until 2100 under Representative Concentration Pathway (RCP) 4.5 future projections. Here we use model simulations from 1980-2015 to compare with the available observation-based dataset. We find several ensemble members showing decadal decreasing trends in the carbon sink similar to the trend shown in observations. This result suggests that the MPI-ESM large ensemble simulations are able to reproduce the decadal variation of the carbon sink in the Southern Ocean. Moreover, the decreasing trends of the Southern Ocean carbon sink in MPI-ESM are mainly contributed by the region between 50°S and 60°S. To understand the internal variability of the air-sea carbon fluxes in the Southern Ocean, we further investigate the variability of underlying processes, such as physical climate variability and ocean biological processes.
Our results indicate two main drivers of the decadal decreasing trend in the carbon sink: i) intensified winds enhance the upwelling of old carbon-rich waters, which increases ocean surface pCO2; ii) primary production is reduced in the area from 50°S to 60°S, probably induced by reduced euphotic water-column stability; the biological drawdown of ocean surface pCO2 is therefore weakened, and the ocean accordingly favors carbon outgassing. Landschützer, P., et al. (2015): The reinvigoration of the Southern Ocean carbon sink, Science, 349, 1221-1224.
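Screening a large ensemble for members whose carbon sink shows a decadal decreasing trend could look like the following minimal sketch. The synthetic data and the trend threshold are arbitrary assumptions, not MPI-ESM output:

```python
import numpy as np

def decadal_trends(flux, years):
    """Least-squares trend (per year) of air-sea carbon flux for each
    ensemble member. `flux` has shape (members, years)."""
    return np.array([np.polyfit(years, f, 1)[0] for f in flux])

rng = np.random.default_rng(0)
years = np.arange(1980, 2016)
flux = rng.normal(0.0, 0.1, (100, years.size))   # 100-member synthetic ensemble
flux[:3] -= 0.01 * (years - years[0])            # impose decreasing trends on 3 members
trends = decadal_trends(flux, years)
decreasing = np.flatnonzero(trends < -0.005)     # candidate members to compare with obs
```

The members picked out this way would then be inspected against the observation-based trend estimate.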
A statistical analysis of RNA folding algorithms through thermodynamic parameter perturbation.
Layton, D M; Bundschuh, R
2005-01-01
Computational RNA secondary structure prediction is rather well established. However, such prediction algorithms always depend on a large number of experimentally measured parameters. Here, we study how sensitive structure prediction algorithms are to changes in these parameters. We find that for changes corresponding to the actual experimental error with which these parameters have been determined, 30% of the structure is falsely predicted, whereas the ground-state structure is preserved under parameter perturbation in only 5% of all cases. We establish that base-pairing probabilities calculated in a thermal ensemble are a viable, although not perfect, measure of the reliability of the prediction of individual structure elements. A new measure of stability using parameter perturbation is proposed, and its limitations are discussed.
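The perturbation test can be sketched generically: jitter each thermodynamic parameter within its experimental error, refold, and count how often the ground-state structure survives. The `toy_fold` below is a stand-in assumption (a two-state sign test), not a real RNA folding algorithm:

```python
import numpy as np

def perturbation_reliability(fold, params, errors, n_trials=100, seed=0):
    """Fraction of trials in which the predicted ground-state structure
    survives when each parameter is jittered within its experimental error."""
    rng = np.random.default_rng(seed)
    reference = fold(params)                       # unperturbed prediction
    survived = 0
    for _ in range(n_trials):
        jittered = params + rng.normal(0.0, errors)
        if fold(jittered) == reference:
            survived += 1
    return survived / n_trials

# Toy "fold": picks one of two structures by the sign of a free-energy gap.
toy_fold = lambda p: "hairpin" if p[0] - p[1] < 0 else "open"
rate = perturbation_reliability(toy_fold, np.array([-1.0, 0.0]),
                                np.array([0.3, 0.3]))
```

A low survival rate under error-sized perturbations is exactly the instability the abstract reports for real folding parameters.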
Mass Conservation and Positivity Preservation with Ensemble-type Kalman Filter Algorithms
NASA Technical Reports Server (NTRS)
Janjic, Tijana; McLaughlin, Dennis B.; Cohn, Stephen E.; Verlaan, Martin
2013-01-01
Maintaining conservative physical laws numerically has long been recognized as important in the development of numerical weather prediction (NWP) models. In the broader context of data assimilation, concerted efforts to maintain conservation laws numerically, and to understand the significance of doing so, have begun only recently. In order to enforce physically based conservation laws of total mass and positivity in the ensemble Kalman filter, we incorporate constraints to ensure that the filter ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. We show that the analysis steps of the ensemble transform Kalman filter (ETKF) and ensemble Kalman filter (EnKF) algorithms can conserve the mass integral but do not preserve positivity. Further, if localization is applied or if negative values are simply set to zero, then the total mass is not conserved either. In order to ensure mass conservation, a projection matrix that corrects for localization effects is constructed. In order to maintain both mass conservation and positivity preservation through the analysis step, we construct a data assimilation algorithm based on quadratic programming and ensemble Kalman filtering: mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate constraints. Simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. The results show clear improvements in both analyses and forecasts, particularly in the presence of localized features. The behavior of the algorithm is also tested in the presence of model error.
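A minimal sketch of the constrained update (assuming a generic QP solver, not the authors' implementation): each analysis member is projected onto the feasible set {x >= 0, sum(x) = m} by minimizing the squared distance to the unconstrained update, here with SciPy's SLSQP:

```python
import numpy as np
from scipy.optimize import minimize

def constrained_update(x_analysis, total_mass):
    """Project an analysis ensemble member onto the set
    {x >= 0, sum(x) = total_mass} by minimizing the squared distance
    to the unconstrained update (a small quadratic program)."""
    n = x_analysis.size
    res = minimize(
        lambda x: 0.5 * np.sum((x - x_analysis) ** 2),
        x0=np.clip(x_analysis, 0.0, None),     # near-feasible start
        jac=lambda x: x - x_analysis,
        bounds=[(0.0, None)] * n,              # positivity
        constraints=[{"type": "eq",            # mass conservation
                      "fun": lambda x: x.sum() - total_mass}],
        method="SLSQP",
    )
    return res.x

# An unconstrained EnKF update produced a negative value:
xa = np.array([0.8, -0.2, 0.4])
x_proj = constrained_update(xa, total_mass=1.0)
```

The projected member keeps the prescribed total mass while the negative value is removed; in the paper this is done for every member (and the mean) at each measurement update.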
On the relationships among cloud cover, mixed-phase partitioning, and planetary albedo in GCMs
McCoy, Daniel T.; Tan, Ivy; Hartmann, Dennis L.; ...
2016-05-06
In this study, it is shown that CMIP5 global climate models (GCMs) that convert supercooled water to ice at relatively warm temperatures tend to have a greater mean-state cloud fraction and more negative cloud feedback in the middle- and high-latitude Southern Hemisphere. We investigate possible reasons for these relationships by analyzing the mixed-phase parameterizations in 26 GCMs. The atmospheric temperature where ice and liquid are equally prevalent (T5050) is used to characterize the mixed-phase parameterization in each GCM. Liquid clouds have a higher albedo than ice clouds, so, all else being equal, models with more supercooled liquid water would also have a higher planetary albedo. The lower cloud fraction in these models compensates for the higher cloud reflectivity and results in clouds that reflect shortwave radiation (SW) in reasonable agreement with observations, but that are too bright and too few. The temperature at which supercooled liquid can remain unfrozen is strongly anti-correlated with cloud fraction in the climate mean state across the model ensemble, but we know of no robust physical mechanism to explain this behavior, especially because this anti-correlation extends through the subtropics. A set of perturbed physics simulations with the Community Atmospheric Model Version 4 (CAM4) shows that, if its temperature-dependent phase partitioning is varied and the critical relative humidity for cloud formation in each model run is also tuned to bring reflected SW into agreement with observations, then cloud fraction increases and liquid water path (LWP) decreases with T5050, as in the CMIP5 ensemble.
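The T5050 metric can be illustrated with a simple linear ramp between an all-ice and an all-liquid temperature, a common idealization of temperature-dependent phase partitioning; the threshold temperatures below are assumptions for illustration, not any particular GCM's scheme:

```python
def liquid_fraction(t, t_ice=-40.0, t_melt=0.0):
    """Idealized linear mixed-phase partitioning: all ice at or below
    t_ice, all liquid at or above t_melt (temperatures in deg C)."""
    if t <= t_ice:
        return 0.0
    if t >= t_melt:
        return 1.0
    return (t - t_ice) / (t_melt - t_ice)

def t5050(t_ice=-40.0, t_melt=0.0):
    """Temperature where liquid and ice are equally prevalent,
    i.e. the midpoint of the linear ramp."""
    return 0.5 * (t_ice + t_melt)
```

Raising `t_ice` (converting supercooled water to ice at warmer temperatures) raises T5050, which is the quantity the study correlates with mean-state cloud fraction and cloud feedback.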
Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections
NASA Astrophysics Data System (ADS)
Wakazuki, Y.
2015-12-01
A dynamical downscaling method for probabilistic regional-scale climate change projections was developed to cover the uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from GCM simulation results were statistically analyzed using singular vector decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes of one standard deviation, were extracted and added to the ensemble mean of the climatological increments. The resulting multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to an objective analysis dataset. This data handling can be regarded as an advanced version of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of GCM simulations realizes approximate probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by RCMs for each mode were used to estimate the response to the perturbation of that mode. For the probabilistic analysis, climatological variables of the RCMs were assumed to respond linearly to the multiple modal perturbations, although non-linearity was seen for local-scale rainfall. The probability distribution of temperature could be estimated with two-mode perturbation simulations, where the number of RCM simulations for the future climate is five. On the other hand, local-scale rainfall needed four-mode simulations, where the number of RCM simulations is nine. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
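The modal-increment construction (ensemble mean plus positive and negative one-standard-deviation perturbations along each leading singular mode) might be sketched as follows; with two modes this yields the five future-climate boundary conditions mentioned above. The scaling details are assumptions about the method:

```python
import numpy as np

def modal_increments(increments, n_modes):
    """Decompose multi-GCM climatological increments (models x space)
    and return the ensemble mean plus +/- one-standard-deviation
    perturbations along each leading singular mode."""
    mean = increments.mean(axis=0)
    u, s, vt = np.linalg.svd(increments - mean, full_matrices=False)
    n_gcm = increments.shape[0]
    out = [mean]
    for k in range(n_modes):
        sigma_mode = s[k] / np.sqrt(n_gcm - 1) * vt[k]  # std-scaled mode
        out.append(mean + sigma_mode)   # positive perturbation
        out.append(mean - sigma_mode)   # negative perturbation
    return np.array(out)                # (1 + 2*n_modes, space)

incs = modal_increments(
    np.random.default_rng(0).standard_normal((8, 50)), n_modes=2)
```

Each row of `incs` would be added to the objective analysis to build one modal lateral boundary condition for an RCM run.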
NASA Astrophysics Data System (ADS)
Zhang, X.; Cornuelle, B. D.; Martin, A.; Weihs, R. R.; Ralph, M.
2017-12-01
We evaluated the merit of including high-resolution sea surface temperature (SST) from blended satellite and in situ observations as a boundary condition (BC) to the Weather Research and Forecasting (WRF) mesoscale model for coastal precipitation forecasts, through simple perturbation tests. Our sensitivity analyses show that the limited improvement of the watershed-scale precipitation forecast is credible. When only the SST BC is changed, an uncertainty is introduced because of the artificial model-state equilibrium and the nonlinear nature of the WRF model system. With a change of SST on the order of a fraction of a degree centigrade, we found that the random-perturbation part of the forecast response saturates after 48 hours, when it reaches the order of magnitude of the linear response. It is important to update the SST over a shorter time period, so that the independently excited nonlinear modes can cancel each other. The uncertainty in our SST configuration is quantitatively equivalent to adding spatially uncorrelated Gaussian noise with zero mean and a standard deviation of 0.05 degrees to the SST. At this random-noise perturbation magnitude, the ensemble average behaves well within a convergent range. It is also found that the sensitivity of the forecast changes in response to SST changes; this is measured by the ratio of the spatial variability of the mean of the ensemble perturbations to the spatial variability of the corresponding forecast. The ratio is about 10% for surface latent heat flux, 5% for IWV, and less than 1% for surface pressure.
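The noise-equivalence statement suggests a simple recipe for building the SST perturbation ensemble (a sketch under the stated sigma = 0.05 assumption, not the authors' WRF configuration):

```python
import numpy as np

def perturb_sst(sst, sigma=0.05, n_members=20, seed=0):
    """Add spatially uncorrelated zero-mean Gaussian noise
    (standard deviation `sigma`, in degrees C) to a 2-D SST field,
    returning one perturbed field per ensemble member."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_members,) + sst.shape)
    return sst[None, :, :] + noise

sst = 15.0 + np.zeros((4, 5))   # toy uniform SST field
ens = perturb_sst(sst)
# averaging over members cancels the independently excited noise,
# leaving the ensemble mean close to the unperturbed field
```

Each member would then force one WRF run, and the spread of the resulting forecasts measures the sensitivity to the SST BC.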
Ensemble forecasting has been used for operational numerical weather prediction in the United States and Europe since the early 1990s. An ensemble of weather or climate forecasts is used to characterize the two main sources of uncertainty in computer models of physical systems: ...
The use of perturbed physics ensembles and emulation in palaeoclimate reconstruction (Invited)
NASA Astrophysics Data System (ADS)
Edwards, T. L.; Rougier, J.; Collins, M.
2010-12-01
Climate is a coherent process, with correlations and dependencies across space, time, and climate variables. However, reconstructions of palaeoclimate traditionally consider individual pieces of information independently, rather than making use of this covariance structure. Such reconstructions are at risk of being unphysical or at least implausible. Climate simulators such as General Circulation Models (GCMs), on the other hand, contain climate system theory in the form of dynamical equations describing physical processes, but are imperfect and computationally expensive. These two datasets - pointwise palaeoclimate reconstructions and climate simulator evaluations - contain complementary information, and a statistical synthesis can produce a palaeoclimate reconstruction that combines them while not ignoring their limitations. We use an ensemble of simulators with perturbed parameterisations, to capture the uncertainty about the simulator variant, and our method also accounts for structural uncertainty. The resulting reconstruction contains a full expression of climate uncertainty, not just pointwise but also jointly over locations. Such joint information is crucial in determining spatially extensive features such as isotherms, or the location of the tree-line. A second outcome of the statistical analysis is a refined distribution for the simulator parameters. In this way, information from palaeoclimate observations can be used directly in quantifying uncertainty in future climate projections. The main challenge is the expense of running a large scale climate simulator: each evaluation of an atmosphere-ocean GCM takes several months of computing time. The solution is to interpret the ensemble of evaluations within an 'emulator', which is a statistical model of the simulator. 
This technique has been used fruitfully in the statistical field of computer models for two decades, and has recently been applied to estimating uncertainty in future climate projections in UKCP09 (http://ukclimateprojections.defra.gov.uk). But only in the last couple of years has it developed to the point where it can be applied to large-scale spatial fields. We construct an emulator for the mid-Holocene (6000 calendar years BP) temperature anomaly over North America, at the resolution of our simulator (2.5° latitude by 3.75° longitude). This allows us to explore the behaviour of simulator variants that we could not afford to evaluate directly. We introduce the technique of 'co-emulation' of two versions of the climate simulator: the coupled atmosphere-ocean model HadCM3, and an equivalent with a simplified ocean, HadSM3. Running two different versions of a simulator is a powerful tool for increasing the information yield from a fixed budget of computer time, but the results must be combined statistically to account for the reduced fidelity of the quicker version. Emulators provide the appropriate framework.
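The emulation idea, fitting a statistical model to a modest number of simulator evaluations and then predicting (with uncertainty) anywhere in parameter space, can be sketched with a Gaussian process. The toy "simulator" below is an assumption for illustration, not HadCM3:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy "simulator": an output (e.g. a temperature anomaly) as a smooth
# function of two perturbed parameters. In practice each evaluation
# would be a months-long GCM run.
def simulator(params):
    return np.sin(params[:, 0]) + 0.5 * params[:, 1]

rng = np.random.default_rng(0)
design = rng.uniform(0, 3, size=(30, 2))   # perturbed-parameter design points
evals = simulator(design)                   # the ensemble of evaluations

# The emulator is a statistical model of the simulator, fitted to the
# ensemble; it predicts, with uncertainty, at unevaluated parameter settings.
emu = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                               normalize_y=True).fit(design, evals)
mean, std = emu.predict(rng.uniform(0, 3, size=(5, 2)), return_std=True)
```

The predictive standard deviation is what lets the emulator's own uncertainty be folded into the palaeoclimate reconstruction.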
2011-09-01
the ensemble perturbations {X^b(k): k = 1, …, K} are from the same distribution; thus P̂^bc ≈ [1/(K−1)] Σ_{k=1}^{K−1} P^btc ≈ P^btc, (18) and p̂^bc_u ≈ p^btc_u, (19) where p̂^bc_u and p^btc_u are the u-th columns of P̂^bc and P^btc, respectively. Similar arguments can be made to show that the filtered estimate should also satisfy p̃^bc_u ≈ p^btc_u, (20) where p̃^bc_u is the u-th column of P̃^bc. We emphasize that Eqs. (19) and (20) do not provide
Project FIRES. Volume 4: Prototype Protective Ensemble Qualification Test Report, Phase 1B
NASA Technical Reports Server (NTRS)
Abeles, F. J.
1980-01-01
The qualification testing of a prototype firefighter's protective ensemble is documented, including descriptions of the design requirements, the testing methods, and the test apparatus. The tests include measurements of individual subsystem characteristics in areas relating both to physical testing, such as heat, flame, and impact penetration, and to human factors testing, such as dexterity, grip, and mobility. Measurements relating to both physical and human factors testing of the complete ensemble, such as water protection, metabolic expenditure, and compatibility, are also considered.
From a structural average to the conformational ensemble of a DNA bulge
Shi, Xuesong; Beauchamp, Kyle A.; Harbury, Pehr B.; Herschlag, Daniel
2014-01-01
Direct experimental measurements of conformational ensembles are critical for understanding macromolecular function, but traditional biophysical methods do not directly report the solution ensemble of a macromolecule. Small-angle X-ray scattering interferometry has the potential to overcome this limitation by providing the instantaneous distance distribution between pairs of gold-nanocrystal probes conjugated to a macromolecule in solution. Our X-ray interferometry experiments reveal an increasing bend angle of DNA duplexes with bulges of one, three, and five adenosine residues, consistent with previous FRET measurements, and further reveal an increasingly broad conformational ensemble with increasing bulge length. The distance distributions for the AAA bulge duplex (3A-DNA) with six different Au-Au pairs provide strong evidence against a simple elastic model in which fluctuations occur about a single conformational state. Instead, the measured distance distributions suggest a 3A-DNA ensemble with multiple conformational states predominantly across a region of conformational space with bend angles between 24 and 85 degrees and characteristic bend directions and helical twists and displacements. Additional X-ray interferometry experiments revealed perturbations to the ensemble from changes in ionic conditions and the bulge sequence, effects that can be understood in terms of electrostatic and stacking contributions to the ensemble and that demonstrate the sensitivity of X-ray interferometry. Combining X-ray interferometry ensemble data with molecular dynamics simulations gave atomic-level models of representative conformational states and of the molecular interactions that may shape the ensemble, and fluorescence measurements with 2-aminopurine-substituted 3A-DNA provided initial tests of these atomistic models. More generally, X-ray interferometry will provide powerful benchmarks for testing and developing computational methods. PMID:24706812
Glue Spin and Helicity in the Proton from Lattice QCD.
Yang, Yi-Bo; Sufian, Raza Sabbir; Alexandru, Andrei; Draper, Terrence; Glatzmaier, Michael J; Liu, Keh-Fei; Zhao, Yong
2017-03-10
We report the first lattice QCD calculation of the glue spin in the nucleon. The lattice calculation is carried out with valence overlap fermions on 2+1-flavor domain-wall fermion gauge configurations on four lattice spacings and four volumes, including an ensemble with physical values for the quark masses. The glue spin S_G in the Coulomb gauge in the modified minimal subtraction (MS-bar) scheme is obtained with one-loop perturbative matching. We find the results fairly insensitive to lattice spacing and quark masses. We also find that the proton momentum dependence of S_G in the range 0 ≤ |p| < 1.5 GeV is very mild, and we determine it in the large-momentum limit to be S_G = 0.251(47)(16) at the physical pion mass in the MS-bar scheme at μ² = 10 GeV². If the matching procedure in large-momentum effective theory is neglected, S_G is equal to the glue helicity measured in high-energy scattering experiments.
The NRL relocatable ocean/acoustic ensemble forecast system
NASA Astrophysics Data System (ADS)
Rowley, C.; Martin, P.; Cummings, J.; Jacobs, G.; Coelho, E.; Bishop, C.; Hong, X.; Peggion, G.; Fabre, J.
2009-04-01
A globally relocatable regional ocean nowcast/forecast system has been developed to support rapid implementation of new regional forecast domains. The system is in operational use at the Naval Oceanographic Office for a growing number of regional and coastal implementations. The new system is the basis for an ocean acoustic ensemble forecast and adaptive sampling capability. We present an overview of the forecast system and the ocean ensemble and adaptive sampling methods. The forecast system consists of core ocean data analysis and forecast modules, software for domain configuration, surface and boundary condition forcing processing, and job control, and global databases for ocean climatology, bathymetry, tides, and river locations and transports. The analysis component is the Navy Coupled Ocean Data Assimilation (NCODA) system, a 3D multivariate optimum interpolation system that produces simultaneous analyses of temperature, salinity, geopotential, and vector velocity using remotely-sensed SST, SSH, and sea ice concentration, plus in situ observations of temperature, salinity, and currents from ships, buoys, XBTs, CTDs, profiling floats, and autonomous gliders. The forecast component is the Navy Coastal Ocean Model (NCOM). The system supports one-way nesting and multiple assimilation methods. The ensemble system uses the ensemble transform technique with error variance estimates from the NCODA analysis to represent initial condition error. Perturbed surface forcing or an atmospheric ensemble is used to represent errors in surface forcing. The ensemble transform Kalman filter is used to assess the impact of adaptive observations on future analysis and forecast uncertainty for both ocean and acoustic properties.
Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.
Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G
2017-09-01
The aim was to investigate whether ensemble learning algorithms improve physical activity recognition accuracy compared with single-classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
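As a concrete illustration of the decision fusion step, a weighted majority vote over several classifiers can be sketched as follows. This is a minimal sketch, not the authors' implementation; the weights (e.g. each classifier's validation F1 score) and the integer class encoding are assumptions for illustration.

```python
import numpy as np

def weighted_majority_vote(predictions, weights, n_classes):
    """Fuse label predictions from several classifiers.

    predictions: (n_classifiers, n_samples) integer class labels
    weights:     (n_classifiers,) nonnegative weights, e.g. each
                 classifier's validation F1 score
    Returns fused labels of shape (n_samples,).
    """
    predictions = np.asarray(predictions)
    weights = np.asarray(weights, dtype=float)
    n_samples = predictions.shape[1]
    scores = np.zeros((n_samples, n_classes))
    # Each classifier adds its weight to the class it votes for.
    for w, preds in zip(weights, predictions):
        scores[np.arange(n_samples), preds] += w
    return scores.argmax(axis=1)

# Three classifiers disagree on samples 1 and 2; the heavier votes win.
preds = [[0, 1, 2],
         [0, 2, 2],
         [0, 2, 1]]
fused = weighted_majority_vote(preds, weights=[0.5, 0.8, 0.7], n_classes=3)
# fused → [0, 2, 2]
```

With richer weights (e.g. class-conditional accuracies) this same accumulation scheme moves toward the naïve Bayes combination also mentioned above.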
NASA Astrophysics Data System (ADS)
Teutschbein, Claudia; Grabs, Thomas; Laudon, Hjalmar; Karlsen, Reinert H.; Bishop, Kevin
2018-06-01
In this paper we explored how landscape characteristics such as topography, geology, soils and land cover influence the way catchments respond to changing climate conditions. Based on an ensemble of 15 regional climate models bias-corrected with a distribution-mapping approach, present and future streamflow in 14 neighboring and rather similar catchments in Northern Sweden was simulated with the HBV model. We established functional relationships between a range of landscape characteristics and projected changes in streamflow signatures. These were then used to analyze hydrological consequences of physical perturbations in a hypothetically ungauged basin in a climate change context. Our analysis showed a strong connection between the forest cover extent and the sensitivity of different components of a catchment's hydrological regime to changing climate conditions. This emphasizes the need to redefine forestry goals and practices in advance of climate change-related risks and uncertainties.
NASA Astrophysics Data System (ADS)
Kutty, Govindan; Muraleedharan, Rohit; Kesarkar, Amit P.
2018-03-01
Uncertainties in numerical weather prediction models are generally not well represented in ensemble-based data assimilation (DA) systems. The performance of an ensemble-based DA system becomes suboptimal if the sources of error are undersampled in the forecast system. The present study examines the effect of accounting for model error treatments in the hybrid ensemble transform Kalman filter / three-dimensional variational (3DVAR) DA system (hybrid) in the track forecast of two tropical cyclones, Hudhud and Thane, formed over the Bay of Bengal, using the Advanced Research Weather Research and Forecasting (ARW-WRF) model. We investigated the effect of two types of model error treatment schemes and their combination on the hybrid DA system: (i) a multiphysics approach, which uses different combinations of cumulus, microphysics, and planetary boundary layer schemes; (ii) the stochastic kinetic energy backscatter (SKEB) scheme, which perturbs the horizontal wind and potential temperature tendencies; and (iii) a combination of both the multiphysics and SKEB schemes. Substantial improvements are noticed in the track positions of both cyclones when flow-dependent ensemble covariance is used in the 3DVAR framework. Explicit model error representation is found to be beneficial in treating underdispersive ensembles. Among the model error schemes used in this study, the combination of the multiphysics and SKEB schemes outperformed the other two, with improved track forecasts for both tropical cyclones.
Bavassi, M Luz; Tagliazucchi, Enzo; Laje, Rodrigo
2013-02-01
Time processing in the few hundred milliseconds range is involved in the human skill of sensorimotor synchronization, like playing music in an ensemble or finger tapping to an external beat. In finger tapping, a mechanistic explanation in biologically plausible terms of how the brain achieves synchronization is still missing despite considerable research. In this work we show that nonlinear effects are important for the recovery of synchronization following a perturbation (a step change in stimulus period), even for perturbation magnitudes smaller than 10% of the period, which is well below the amount of perturbation needed to evoke other nonlinear effects like saturation. We build a nonlinear mathematical model for the error correction mechanism and test its predictions, and further propose a framework that allows us to unify the description of the three common types of perturbations. While previous authors have used two different model mechanisms for fitting different perturbation types, or have fitted different parameter value sets for different perturbation magnitudes, we propose the first unified description of the behavior following all perturbation types and magnitudes as the dynamical response of a compound model with fixed terms and a single set of parameter values.
Probabilistic forecasts based on radar rainfall uncertainty
NASA Astrophysics Data System (ADS)
Liguori, S.; Rico-Ramirez, M. A.
2012-04-01
The potential advantages resulting from integrating weather radar rainfall estimates into hydro-meteorological forecasting systems are limited by the inherent uncertainty affecting radar rainfall measurements, which is due to various sources of error [1-3]. The improvement of quality control and correction techniques is recognized to play a role in the future improvement of radar-based flow predictions. However, knowledge of the uncertainty affecting radar rainfall data can also be used effectively to build a hydro-meteorological forecasting system in a probabilistic framework. This work discusses the results of the implementation of a novel probabilistic forecasting system developed to improve ensemble predictions over a small urban area located in the north of England. An ensemble of radar rainfall fields can be determined as the sum of a deterministic component and a perturbation field, the latter being informed by knowledge of the spatial-temporal characteristics of the radar error assessed with reference to rain-gauge measurements. This approach is similar to the REAL system [4] developed for use in the Southern Alps. The radar uncertainty estimate can then be propagated with a nowcasting model, used to extrapolate an ensemble of radar rainfall forecasts, which can ultimately drive hydrological ensemble predictions. A radar ensemble generator has been calibrated using radar rainfall data made available by the UK Met Office after applying post-processing and correction algorithms [5-6]. One-hour rainfall accumulations from 235 rain gauges recorded for the year 2007 have provided the reference to determine the radar error. Statistics describing the spatial characteristics of the error (i.e. mean and covariance) have been computed off-line at gauge locations, along with the parameters describing the error temporal correlation.
A system has then been set up to impose the space-time error properties on stochastic perturbations, generated in real time at gauge locations and then interpolated back onto the radar domain, in order to obtain probabilistic radar rainfall fields in real time. The deterministic nowcasting model integrated in the STEPS system [7-8] has been used to propagate the uncertainty and to assess the benefit of implementing the radar ensemble generator for probabilistic rainfall forecasts and ultimately sewer flow predictions. For this purpose, events representative of different types of precipitation (i.e. stratiform/convective) and significant at the urban catchment scale (i.e. in terms of sewer overflow within the urban drainage system) have been selected. As high spatial/temporal resolution is required of the forecasts for their use in urban areas [9-11], the probabilistic nowcasts have been set up to be produced at 1 km resolution and 5 min intervals. The forecasting chain is completed by a hydrodynamic model of the urban drainage network. The aim of this work is to discuss the implementation of this probabilistic system, which takes into account the radar error to characterize the forecast uncertainty, with consequent potential benefits in the management of urban systems. It will also allow a comparison with previous findings related to the analysis of different approaches to uncertainty estimation and quantification in terms of rainfall [12] and flows at the urban scale [13]. Acknowledgements The authors would like to acknowledge the BADC, the UK Met Office and Dr. Alan Seed from the Australian Bureau of Meteorology for providing the radar data and the nowcasting model. The authors acknowledge the support from the Engineering and Physical Sciences Research Council (EPSRC) via grant EP/I012222/1.
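The perturbation step described above can be sketched as adding spatially correlated, temporally AR(1)-evolving noise with a prescribed covariance to a deterministic field. This is a simplified sketch under stated assumptions (Gaussian errors, fixed gauge locations, no interpolation back to the radar grid); function and variable names are illustrative, not from the cited system.

```python
import numpy as np

rng = np.random.default_rng(0)

def radar_ensemble(det_field, err_mean, err_cov, rho, n_members, n_steps):
    """Generate ensemble rainfall fields as deterministic + stochastic error.

    det_field: (n_loc,) deterministic radar rainfall at gauge locations
    err_mean:  (n_loc,) mean radar error estimated against rain gauges
    err_cov:   (n_loc, n_loc) spatial error covariance
    rho:       lag-1 temporal autocorrelation of the error (AR(1))
    Returns an array of shape (n_steps, n_members, n_loc).
    """
    L = np.linalg.cholesky(err_cov)              # imposes spatial correlation
    n_loc = det_field.size
    pert = L @ rng.standard_normal((n_loc, n_members))
    out = np.empty((n_steps, n_members, n_loc))
    for t in range(n_steps):
        out[t] = np.maximum(det_field + err_mean + pert.T, 0.0)  # rain >= 0
        # AR(1) evolution keeps the prescribed covariance stationary in time.
        pert = rho * pert + np.sqrt(1.0 - rho**2) * (
            L @ rng.standard_normal((n_loc, n_members)))
    return out

ens = radar_ensemble(np.array([1.0, 2.0, 0.5]), np.zeros(3),
                     0.1 * np.eye(3), rho=0.8, n_members=20, n_steps=5)
```

In the full system the error statistics would come from the gauge comparison described above, and each perturbed field would be interpolated back onto the radar domain before being fed to the nowcasting model.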
NASA Astrophysics Data System (ADS)
Peishu, Zong; Jianping, Tang; Shuyu, Wang; Lingyun, Xie; Jianwei, Yu; Yunqian, Zhu; Xiaorui, Niu; Chao, Li
2017-08-01
The parameterization of physical processes is one of the critical elements in properly simulating the regional climate over eastern China. It is essential to conduct detailed analyses of the effect of physical parameterization schemes on regional climate simulation, to provide more reliable regional climate change information. In this paper, we evaluate the 25-year (1983-2007) summer monsoon climate characteristics of precipitation and surface air temperature by using the regional spectral model (RSM) with different physical schemes. The ensemble results using the reliability ensemble averaging (REA) method are also assessed. The results show that the RSM has the capacity to reproduce the spatial patterns, the variations, and the temporal tendency of surface air temperature and precipitation over eastern China, and that it tends to predict better climatological characteristics over the Yangtze River basin and South China. The impact of different physical schemes on RSM simulations is also investigated. Generally, the CLD3 cloud water prediction scheme tends to produce larger precipitation because of its overestimation of low-level moisture. The systematic biases derived from the KF2 cumulus scheme are larger than those from the RAS scheme. The scale-selective bias correction (SSBC) method improves the simulation of the temporal and spatial characteristics of surface air temperature and precipitation and improves the simulated circulation. The REA ensemble results show significant improvement in simulating the temperature and precipitation distributions, with much higher correlation coefficients and lower root mean square errors. The REA result of selected experiments is better than that of nonselected experiments, indicating the necessity of choosing better ensemble samples for ensemble averaging.
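The REA weighting can be illustrated with a simplified scalar version of Giorgi and Mearns' scheme: each model is downweighted by a bias criterion (distance from observations) and a convergence criterion (distance from the iteratively updated weighted mean). This toy function is a sketch under those assumptions, not the paper's code.

```python
import numpy as np

def rea_mean(sims, obs, eps, n_iter=20):
    """Simplified scalar Reliability Ensemble Averaging (REA).

    sims: simulated values from each model; obs: observed reference;
    eps:  natural variability scale. Weights penalize model bias and
    distance from the (iteratively updated) REA mean.
    """
    sims = np.asarray(sims, float)
    # Bias criterion: downweight models far from the observed value.
    r_bias = np.minimum(1.0, eps / np.maximum(np.abs(sims - obs), 1e-12))
    mean = sims.mean()
    for _ in range(n_iter):
        # Convergence criterion: downweight outliers from the current mean.
        r_conv = np.minimum(1.0, eps / np.maximum(np.abs(sims - mean), 1e-12))
        w = r_bias * r_conv
        mean = np.sum(w * sims) / np.sum(w)
    return mean, w

# Toy example: two models near the observation, one outlier.
mean, w = rea_mean([1.0, 1.1, 3.0], obs=1.0, eps=0.5)
```

The outlier model receives a small weight through both criteria, so the REA mean stays close to the two consistent models rather than the arithmetic mean of all three.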
NASA Astrophysics Data System (ADS)
Rivière, G.; Hua, B. L.
2004-10-01
A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast, whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This alignment-based initialization method is hereafter denoted the AI method.

In terms of spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palenstrophy norm, and significantly more efficient than that for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palenstrophy singular vector forecast sometimes leads to very good scores and sometimes to very bad ones.

A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.
Entropy criteria applied to pattern selection in systems with free boundaries
NASA Astrophysics Data System (ADS)
Kirkaldy, J. S.
1985-10-01
The steady state differential or integral equations which describe patterned dissipative structures, typically to be identified with first order phase transformation morphologies like isothermal pearlites, are invariably degenerate in one or more order parameters (the lamellar spacing in the pearlite case). It is often observed that a different pattern is attained at the steady state for each initial condition (the hysteresis or metastable case). Alternatively, boundary perturbations and internal fluctuations during transition up to, or at the steady state, destroy the path coherence. In this case a statistical ensemble of imperfect patterns often emerges which represents a fluctuating but recognizably patterned and unique average steady state. It is cases like cellular, lamellar pearlite, involving an assembly of individual cell patterns which are regularly perturbed by local fluctuation and growth processes, which concern us here. Such weakly fluctuating nonlinear steady state ensembles can be arranged in a thought experiment so as to evolve as subsystems linking two very large mass-energy reservoirs in isolation. Operating on this discontinuous thermodynamic ideal, Onsager’s principle of maximum path probability for isolated systems, which we interpret as a minimal time correlation function connecting subsystem and baths, identifies the stable steady state at a parametric minimum or maximum (or both) in the dissipation rate. This nonlinear principle is independent of the Principle of Minimum Dissipation which is applicable in the linear regime of irreversible thermodynamics. The statistical argument is equivalent to the weak requirement that the isolated system entropy as a function of time be differentiable to the second order despite the macroscopic pattern fluctuations which occur in the subsystem. This differentiability condition is taken for granted in classical stability theory based on the 2nd Law. 
The optimal principle as applied to isothermal and forced velocity pearlites (in this case maximal) possesses a Le Chatelier (perturbation) principle which can be formulated exactly via Langer's conjecture that "each lamella must grow in a direction which is perpendicular to the solidification front". This is the first example of such an equivalence to be experimentally and theoretically recognized in nonlinear irreversible thermodynamics. A further application to binary solidification cells is reviewed. In this case the optimum in the dissipation is a minimum and the closure between theory and experiment is excellent. Other applications in thermal-hydraulics, biology, and solid state physics are briefly described.
Preservation of physical properties with Ensemble-type Kalman Filter Algorithms
NASA Astrophysics Data System (ADS)
Janjic, T.
2017-12-01
We show the behavior of the localized ensemble Kalman filter (EnKF) with respect to preservation of positivity and conservation of mass, energy, and enstrophy in toy models that conserve these properties. In order to preserve physical properties in the analysis as well as to deal with non-Gaussianity in an EnKF framework, Janjic et al. (2014) proposed the use of physically based constraints in the analysis step to constrain the solution. In particular, constraints were used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In that study, mass and positivity were both preserved by formulating the filter update as a set of quadratic programming problems that incorporate nonnegativity constraints. Simple numerical experiments indicated that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. Moreover, in experiments designed to mimic the most important characteristics of convective motion, it is shown that the mass conservation- and positivity-constrained rain significantly suppresses the noise seen in localized EnKF results. This is highly desirable in order to avoid spurious storms from appearing in a forecast started from this initial condition (Lange and Craig 2014). In addition, the root mean square error is reduced for all fields and the total mass of rain is correctly simulated. Similarly, the enstrophy, divergence, and energy spectra can also be strongly affected by the localization radius, thinning interval, and inflation, and depend on the variable that is observed (Zeng and Janjic, 2016). We constructed an ensemble data assimilation algorithm that conserves mass, total energy, and enstrophy (Zeng et al., 2017).
With 2D shallow water model experiments, it is found that the conservation of enstrophy within the data assimilation effectively avoids the spurious energy cascade of rotational part and thereby successfully suppresses the noise generated by the data assimilation algorithm. The 14-day deterministic and ensemble free forecast, starting from the initial condition enforced by both total energy and enstrophy constraints, produces the best prediction.
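The mass- and positivity-constrained update can be illustrated in its simplest form: a Euclidean projection of the raw analysis onto the set of nonnegative fields with a fixed total mass, i.e. the QP min ||y − x||² subject to y ≥ 0 and 1ᵀy = mass. The sketch below uses the standard simplex-projection algorithm and is only a schematic stand-in for the full constrained EnKF update of Janjic et al. (2014).

```python
import numpy as np

def project_mass_nonneg(x, mass):
    """Project x onto {y >= 0, sum(y) = mass} in the Euclidean norm.

    Solves the QP  min ||y - x||^2  s.t.  y >= 0, 1'y = mass
    with the standard sort-based simplex-projection algorithm.
    """
    u = np.sort(x)[::-1]                       # sort descending
    css = np.cumsum(u) - mass
    # Largest k for which the running threshold still leaves u[k] positive.
    k = np.nonzero(u - css / np.arange(1, x.size + 1) > 0)[0][-1]
    tau = css[k] / (k + 1)
    return np.maximum(x - tau, 0.0)

# Raw EnKF analysis with a spurious negative rain value; the projection
# removes the negativity while preserving the total mass of 0.6.
xa = np.array([0.4, -0.1, 0.3])
x_constrained = project_mass_nonneg(xa, mass=0.6)
# x_constrained → [0.35, 0.0, 0.25]
```

Applying such a projection member by member keeps each ensemble member (and hence the mean) nonnegative and mass-conserving, which is the mechanism credited above with suppressing spurious storms.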
Free energy calculations, enhanced by a Gaussian ansatz, for the "chemical work" distribution.
Boulougouris, Georgios C
2014-05-15
The evaluation of the free energy is essential in molecular simulation because it is intimately related to the existence of multiphase equilibrium. Recently, it was demonstrated that it is possible to evaluate the Helmholtz free energy using a single statistical ensemble along an entire isotherm by accounting for the "chemical work" of transforming each molecule from an interacting one to an ideal gas. In this work, we show that it is possible to perform such a free energy perturbation over a liquid-vapor phase transition. Furthermore, we investigate the link between a general free energy perturbation scheme and the nonequilibrium theories of Crooks and Jarzynski. We find that for finite systems away from the thermodynamic limit the second law of thermodynamics will always be an inequality for isothermal free energy perturbations, always resulting in a dissipated work that may tend to zero only in the thermodynamic limit. The work, the heat, and the entropy produced during a thermodynamic free energy perturbation can be viewed in the context of the Crooks and Jarzynski formalism, revealing that for a given value of the ensemble average of the "irreversible" work, the minimum entropy production corresponds to a Gaussian distribution for the histogram of the work. We propose evaluating the free energy difference in any free energy perturbation based scheme as the average irreversible "chemical work" minus the dissipated work, where the latter can be calculated from the variance of the work distribution within the Gaussian approximation. As a consequence, using the Gaussian ansatz for the distribution of the "chemical work," accurate estimates of the chemical potential and the free energy of the system can be obtained from much shorter simulations, avoiding the necessity of sampling the computationally costly tails of the "chemical work."
For a more general free energy perturbation scheme for which the Gaussian ansatz may not be valid, the free energy calculation can be expressed in terms of the moment generating function of the "chemical work" distribution.
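Under the Gaussian ansatz, the free energy estimate reduces to a two-moment formula. A minimal numerical sketch (with synthetic work samples, not actual simulation data) is:

```python
import numpy as np

def free_energy_gaussian(work, beta):
    """Gaussian-ansatz free energy from nonequilibrium work samples.

    For Gaussian-distributed work, Jarzynski's equality
    <exp(-beta*W)> = exp(-beta*dF) reduces to
        dF = <W> - beta * var(W) / 2,
    where the second term is the dissipated work.
    """
    work = np.asarray(work, float)
    w_diss = 0.5 * beta * work.var(ddof=1)
    return work.mean() - w_diss, w_diss

# Synthetic "chemical work" samples: mean 2.0, variance 1.0 at beta = 1,
# so dF should come out near 2.0 - 0.5 = 1.5.
rng = np.random.default_rng(1)
samples = rng.normal(2.0, 1.0, 200_000)
dF, w_diss = free_energy_gaussian(samples, beta=1.0)
```

Because only the mean and variance of the work are needed, the estimate converges without sampling the costly tails of the work distribution, which is the practical advantage claimed above.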
Understanding genetic regulatory networks
NASA Astrophysics Data System (ADS)
Kauffman, Stuart
2003-04-01
Random Boolean networks (RBNs) were introduced about 35 years ago as first crude models of genetic regulatory networks. RBNs comprise N on-off genes, connected by a randomly assigned regulatory wiring diagram where each gene has K inputs, and each gene is controlled by a randomly assigned Boolean function. This procedure samples at random from the ensemble of all possible NK Boolean networks. The central ideas are to study the typical, or generic, properties of this ensemble, and to see (1) whether characteristic differences appear as K and biases in the Boolean functions are introduced, and (2) whether a subclass of this ensemble has properties matching real cells. Such networks behave in an ordered or a chaotic regime, with a phase transition, "the edge of chaos," between the two regimes. Networks with continuous variables exhibit the same two regimes. Substantial evidence suggests that real cells are in the ordered regime. A key concept is that of an attractor. This is a reentrant trajectory of states of the network, called a state cycle. The central biological interpretation is that cell types are attractors. A number of properties differentiate the ordered and chaotic regimes. These include the size and number of attractors; the existence in the ordered regime of a percolating "sea" of genes frozen in the on or off state, with a remainder of isolated twinkling islands of genes; a power law distribution of avalanches of gene activity changes following perturbation of a single gene in the ordered regime, versus a similar power law distribution plus a spike of enormous avalanches of gene changes in the chaotic regime; and the existence of branching pathways of "differentiation" between attractors induced by perturbations in the ordered regime. Noise is a serious issue, since noise disrupts attractors. But numerical evidence suggests that attractors can be made very stable to noise, and meanwhile, metaplasias may be a biological manifestation of noise.
As we learn more about the wiring diagram and constraints on rules controlling real genes, we can build refined ensembles reflecting these properties, study the generic properties of the refined ensembles, and hope to gain insight into the dynamics of real cells.
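Kauffman's NK ensemble and its state-cycle attractors are straightforward to sample computationally. A minimal sketch (pure Python, with illustrative names) is:

```python
import random

def random_boolean_network(n, k, seed=0):
    """Build a random NK Boolean network: each of the n genes reads k
    randomly chosen inputs through a randomly drawn Boolean function,
    stored as a lookup table over the 2**k input patterns."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

    def step(state):
        # Synchronous update: every gene reads its k inputs at once.
        return tuple(
            tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
            for i in range(n)
        )
    return step

def attractor(step, state):
    """Iterate the deterministic, finite dynamics until a state repeats;
    the repeating cycle is the attractor (a "cell type" in Kauffman's
    interpretation)."""
    seen = {}
    trajectory = []
    while state not in seen:
        seen[state] = len(trajectory)
        trajectory.append(state)
        state = step(state)
    return trajectory[seen[state]:]          # the state cycle

step = random_boolean_network(n=8, k=2, seed=42)
cycle = attractor(step, state=(0,) * 8)
```

Sampling many networks and initial states from this ensemble is exactly how the generic properties described above (number and size of attractors, frozen cores, avalanche statistics) are estimated numerically.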
Displacement data assimilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, W. Steven; Venkataramani, Shankar; Mariano, Arthur J.
We show that modifying a Bayesian data assimilation scheme by incorporating kinematically-consistent displacement corrections produces a scheme that is demonstrably better at estimating partially observed state vectors in a setting where feature information is important. While the displacement transformation is generic, here we implement it within an ensemble Kalman Filter framework and demonstrate its effectiveness in tracking stochastically perturbed vortices.
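The ensemble Kalman filter framework referred to above (without the displacement correction) has a compact standard analysis step. A sketch of the stochastic, perturbed-observation variant, with illustrative dimensions, is:

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(X, y, H, R):
    """Stochastic (perturbed-observation) EnKF analysis step.

    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs error cov
    """
    n_ens = X.shape[1]
    Xm = X - X.mean(axis=1, keepdims=True)
    Pf = Xm @ Xm.T / (n_ens - 1)                 # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    # Each member assimilates its own perturbed copy of the observations.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)

# Two-variable state, 50 members; only the first variable is observed.
X = rng.normal(0.0, 1.0, (2, 50)) + np.array([[5.0], [1.0]])
y = np.array([4.0])
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_update(X, y, H, R)
```

The displacement approach of the entry above would act before this step, correcting feature positions so that the linear-Gaussian update is applied to fields whose features are already aligned.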
Application of an Ensemble Smoother to Precipitation Assimilation
NASA Technical Reports Server (NTRS)
Zhang, Sara; Zupanski, Dusanka; Hou, Arthur; Zupanski, Milija
2008-01-01
Assimilation of precipitation in a global modeling system poses a special challenge in that the observation operators for precipitation processes are highly nonlinear. In the variational approach, substantial development work and model simplifications are required to include precipitation-related physical processes in the tangent linear model and its adjoint. An ensemble based data assimilation algorithm "Maximum Likelihood Ensemble Smoother (MLES)" has been developed to explore the ensemble representation of the precipitation observation operator with nonlinear convection and large-scale moist physics. An ensemble assimilation system based on the NASA GEOS-5 GCM has been constructed to assimilate satellite precipitation data within the MLES framework. The configuration of the smoother takes the time dimension into account for the relationship between state variables and observable rainfall. The full nonlinear forward model ensembles are used to represent components involving the observation operator and its transpose. Several assimilation experiments using satellite precipitation observations have been carried out to investigate the effectiveness of the ensemble representation of the nonlinear observation operator and the data impact of assimilating rain retrievals from the TMI and SSM/I sensors. Preliminary results show that this ensemble assimilation approach is capable of extracting information from nonlinear observations to improve the analysis and forecast if ensemble size is adequate, and a suitable localization scheme is applied. In addition to a dynamically consistent precipitation analysis, the assimilation system produces a statistical estimate of the analysis uncertainty.
A Random Forest-based ensemble method for activity recognition.
Feng, Zengtao; Mo, Lingfei; Li, Meng
2015-01-01
This paper presents a multi-sensor ensemble approach to human physical activity (PA) recognition using random forest. We designed an ensemble learning algorithm which integrates several independent random forest classifiers based on different sensor feature sets to build a more stable, more accurate, and faster classifier for human activity recognition. To evaluate the algorithm, PA data collected from PAMAP (Physical Activity Monitoring for Aging People), a standard, publicly available database, were used for training and testing. The experimental results show that the algorithm is able to correctly recognize 19 PA types with an accuracy of 93.44%, while training is faster than for the other methods. The ensemble classifier system based on the random forest algorithm can achieve high recognition accuracy and fast calculation.
A multiphysical ensemble system of numerical snow modelling
NASA Astrophysics Data System (ADS)
Lafaysse, Matthieu; Cluzet, Bertrand; Dumont, Marie; Lejeune, Yves; Vionnet, Vincent; Morin, Samuel
2017-05-01
Physically based multilayer snowpack models suffer from various modelling errors. To represent these errors, we built the new multiphysical ensemble system ESCROC (Ensemble System Crocus) by implementing new representations of different physical processes in the deterministic coupled multilayer ground/snowpack model SURFEX/ISBA/Crocus. This ensemble was driven and evaluated at Col de Porte (1325 m a.s.l., French Alps) over 18 years with a high-quality meteorological and snow data set. A total of 7776 simulations were evaluated separately, accounting for the uncertainties in the evaluation data. The ability of the ensemble to capture the uncertainty associated with modelling errors is assessed for snow depth, snow water equivalent, bulk density, albedo, and surface temperature. Different sub-ensembles of the ESCROC system were studied with probabilistic tools to compare their performance. Results show that optimal members of the ESCROC system are able to explain more than half of the total simulation errors. Integrating members with biases exceeding the range corresponding to observational uncertainty is necessary to obtain an optimal dispersion, but this may also be a consequence of the fact that meteorological forcing uncertainties were not accounted for. The ESCROC system paves the way for the integration of numerical snow-modelling errors in ensemble forecasting and ensemble assimilation systems, in support of avalanche hazard forecasting and other snowpack-modelling applications.
Regionalization of post-processed ensemble runoff forecasts
NASA Astrophysics Data System (ADS)
Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian
2016-05-01
For many years, meteorological models have been run with perturbed initial conditions or parameters to produce ensemble forecasts that are used as a proxy for the uncertainty of the forecasts. However, the ensembles are usually both biased (the mean is systematically too high or too low, compared with the observed weather) and affected by dispersion errors (the ensemble variance indicates too low or too high a confidence in the forecast, compared with the observed weather). The ensembles are therefore commonly post-processed to correct for these shortcomings. Here we look at one of these techniques, referred to as Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). Originally, the post-processing parameters were identified as a fixed set of parameters for a region. The application of our work is the European Flood Awareness System (http://www.efas.eu), where a distributed model is run with meteorological ensembles as input. We are therefore dealing with a considerably larger data set than previous analyses. We also want to regionalize the parameters themselves for locations other than the calibration gauges. The post-processing parameters are therefore estimated for each calibration station, but with a spatial penalty for deviations from neighbouring stations, depending on the expected semivariance between the calibration catchment and these stations. The estimated post-processing parameters can then be used for regionalization of the post-processing parameters also for uncalibrated locations, using top-kriging in the rtop package (Skøien et al., 2006, 2014). We will show results from cross-validation of the methodology, and although our interest is mainly in identifying exceedance probabilities for certain return levels, we will also show how the rtop package can be used to create a set of post-processed ensembles through simulations.
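The EMOS idea, fitting a calibrated Gaussian N(a + b·m, c + d·s²) from the raw ensemble mean m and variance s², can be caricatured with a moment-based fit. The original method estimates a, b, c, d by minimizing the CRPS; the least-squares shortcut below is an assumption for illustration, with synthetic data standing in for forecast/observation pairs.

```python
import numpy as np

def fit_emos(ens_mean, ens_var, obs):
    """Simplified EMOS fit: calibrated forecast ~ N(a + b*m, c + d*s2).

    a, b come from least squares of obs on the raw ensemble mean m;
    c, d from least squares of squared residuals on the raw ensemble
    variance s2 (a moment-based shortcut; EMOS proper minimizes the CRPS).
    """
    A = np.column_stack([np.ones_like(ens_mean), ens_mean])
    (a, b), *_ = np.linalg.lstsq(A, obs, rcond=None)
    resid2 = (obs - (a + b * ens_mean)) ** 2
    B = np.column_stack([np.ones_like(ens_var), ens_var])
    (c, d), *_ = np.linalg.lstsq(B, resid2, rcond=None)
    return a, b, max(c, 0.0), max(d, 0.0)

rng = np.random.default_rng(2)
m = rng.normal(10.0, 3.0, 20000)                   # raw ensemble means
s2 = rng.uniform(0.5, 2.0, 20000)                  # raw ensemble variances
# Synthetic "truth": bias of +2 and underdispersion relative to the raw ensemble.
obs = m + 2.0 + rng.normal(0.0, 1.0, 20000) * np.sqrt(1.0 + 3.0 * s2)
a, b, c, d = fit_emos(m, s2, obs)
```

For this synthetic setup the fit should recover roughly a ≈ 2, b ≈ 1, c ≈ 1, d ≈ 3, i.e. both the bias and the dispersion error. In the regionalized setting described above, the station-wise (a, b, c, d) would then be interpolated to ungauged catchments by top-kriging.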
Guidelines for the analysis of free energy calculations
Klimovich, Pavel V.; Shirts, Michael R.; Mobley, David L.
2015-01-01
Free energy calculations based on molecular dynamics (MD) simulations show considerable promise for applications ranging from drug discovery to prediction of physical properties and structure-function studies. But these calculations are still difficult and tedious to analyze, and best practices for analysis are not well defined or propagated. Essentially, each group analyzing these calculations needs to decide how to conduct the analysis and, usually, develop its own analysis tools. Here, we review and recommend best practices for analysis yielding reliable free energies from molecular simulations. Additionally, we provide a Python tool, alchemical-analysis.py, freely available on GitHub at https://github.com/choderalab/pymbar-examples, that implements the analysis practices reviewed here for several reference simulation packages and can be adapted to handle data from other packages. Both this review and the tool cover analysis of alchemical calculations generally, including free energy estimates via both thermodynamic integration and free energy perturbation-based estimators. Our Python tool also handles output from multiple types of free energy calculations, including expanded ensemble and Hamiltonian replica exchange, as well as standard fixed ensemble calculations. We also survey a range of statistical and graphical ways of assessing the quality of the data and free energy estimates, and provide prototypes of these in our tool. We hope these tools and discussion will serve as a foundation for more standardization of and agreement on best practices for analysis of free energy calculations. PMID:25808134
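Of the estimators covered, thermodynamic integration is the simplest to sketch: average dH/dλ in each λ window and integrate over λ. A minimal, self-contained illustration (with synthetic ⟨dH/dλ⟩ values, not output from the tool) is:

```python
import numpy as np

def ti_free_energy(lambdas, dhdl_means):
    """Thermodynamic integration: dF = integral of <dH/dlambda> over
    lambda, evaluated here with the trapezoid rule across windows."""
    lambdas = np.asarray(lambdas, float)
    d = np.asarray(dhdl_means, float)
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(lambdas)))

# Synthetic window averages with <dH/dlambda> = 3*lambda**2,
# whose exact integral over [0, 1] is 1.
lam = np.linspace(0.0, 1.0, 101)
dF = ti_free_energy(lam, 3.0 * lam**2)
```

In practice the per-window averages come with statistical uncertainty, and a key part of the analysis reviewed above is propagating those uncertainties (and checking window spacing) rather than just performing the quadrature.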
Scale-Similar Models for Large-Eddy Simulations
NASA Technical Reports Server (NTRS)
Sarghini, F.
1999-01-01
Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.
Advanced Atmospheric Ensemble Modeling Techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckley, R.; Chiswell, S.; Kurzeja, R.
Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation. It enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
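The EnKF analysis step used in approaches like the one above can be sketched in a few lines. This is a generic stochastic (perturbed-observation) EnKF update demonstrated on a toy linear-Gaussian problem — our own illustration, not the configuration used in the study:

```python
import numpy as np

rng = np.random.default_rng(6)

def enkf_update(ens, y, H, r):
    """Stochastic EnKF analysis: ens is (n_state, n_members), y the observation
    vector, H the (n_obs, n_state) observation operator, r the obs-error variance."""
    n_obs, n_mem = H.shape[0], ens.shape[1]
    anomalies = ens - ens.mean(axis=1, keepdims=True)
    pf = anomalies @ anomalies.T / (n_mem - 1)          # sample forecast covariance
    K = pf @ H.T @ np.linalg.inv(H @ pf @ H.T + r * np.eye(n_obs))  # Kalman gain
    # perturb the observation for each member so the analysis spread is correct
    y_pert = y[:, None] + np.sqrt(r) * rng.normal(size=(n_obs, n_mem))
    return ens + K @ (y_pert - H @ ens)

# scalar sanity check: prior N(0, 1), observation y = 1 with error variance 1,
# so the exact posterior is N(0.5, 0.5)
prior = rng.normal(size=(1, 2000))
post = enkf_update(prior, np.array([1.0]), np.eye(1), 1.0)
```

Operational implementations add localization and inflation on top of this update; the toy case only checks that the analysis mean and spread match the Bayesian answer.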
Non-Perturbative Renormalization of the Lattice Heavy Quark Classical Velocity
NASA Astrophysics Data System (ADS)
Mandula, Jeffrey E.; Ogilvie, Michael C.
1997-02-01
We discuss the renormalization of the lattice formulation of the Heavy Quark Effective Theory (LHQET). In addition to wave function and composite operator renormalizations, on the lattice the classical velocity is also renormalized. The origin of this renormalization is the reduction of Lorentz (or O(4)) invariance to (hyper)cubic invariance. We present results of a new, direct lattice simulation of this finite renormalization, and compare the results to the perturbative (one loop) result. The simulation results are obtained with the use of a variationally optimized heavy-light meson operator, using an ensemble of lattices provided by the Fermilab ACP-MAPS collaboration.
Renormalization of the Lattice Heavy Quark Classical Velocity
NASA Astrophysics Data System (ADS)
Mandula, Jeffrey E.; Ogilvie, Michael C.
1996-03-01
In the lattice formulation of the Heavy Quark Effective Theory (LHQET), the "classical velocity" v becomes renormalized. The origin of this renormalization is the reduction of Lorentz (or O(4)) invariance to (hyper)cubic invariance. The renormalization is finite and depends on the form of the discretization of the reduced heavy quark Dirac equation. For the Forward-Time Centered-Space discretization, the renormalization is computed both perturbatively, to one loop, and non-perturbatively, using two ensembles of lattices, one at β = 5.7 and the other at β = 6.1. The estimates agree, and indicate that for small classical velocities, v is reduced by about 25-30%.
NASA Astrophysics Data System (ADS)
Sinha, Bablu; Blaker, Adam; Duchez, Aurelie; Grist, Jeremy; Hewitt, Helene; Hirschi, Joel; Hyder, Patrick; Josey, Simon; Maclachlan, Craig; New, Adrian
2017-04-01
A high-resolution coupled ocean-atmosphere model is used to study the effects of seasonal re-emergence of North Atlantic subsurface ocean temperature anomalies on northern hemisphere winter climate. A 50-member control simulation is integrated from 1 September to 28 February and compared with a similar ensemble with perturbed ocean initial conditions. The perturbation consists of a density-compensated subsurface (deeper than 180 m) temperature anomaly corresponding to the observed subsurface temperature anomaly for September 2010, which is known to have re-emerged at the ocean surface in subsequent months. The perturbation is confined to the North Atlantic Ocean between the Equator and 65 degrees North. The model has 1/4 degree horizontal resolution in the ocean, and the experiment is repeated for two atmosphere horizontal resolutions (60 km and 25 km) in order to determine whether the sensitivity of the atmosphere to re-emerging temperature anomalies is dependent on resolution. The ensembles display a wide range of re-emergence behaviour: in some cases re-emergence occurs by November; in others it is delayed or does not occur at all. A wide range of amplitudes of the re-emergent temperature anomalies is observed. In cases where re-emergence occurs, there is a marked effect on both the regional (North Atlantic and Europe) and hemispheric surface pressure and temperature patterns. The results highlight a potentially important process whereby ocean memory of conditions up to a year earlier can significantly enhance seasonal forecast skill.
Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model
Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.
2013-01-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874
Pre-relaxation in weakly interacting models
NASA Astrophysics Data System (ADS)
Bertini, Bruno; Fagotti, Maurizio
2015-07-01
We consider time evolution in models close to integrable points with hidden symmetries that generate infinitely many local conservation laws that do not commute with one another. The system is expected to (locally) relax to a thermal ensemble if integrability is broken, or to a so-called generalised Gibbs ensemble if unbroken. In some circumstances expectation values exhibit quasi-stationary behaviour long before their typical relaxation time. For integrability-breaking perturbations, these are also called pre-thermalisation plateaux, and emerge e.g. in the strong coupling limit of the Bose-Hubbard model. As a result of the hidden symmetries, quasi-stationarity appears also in integrable models, for example in the Ising limit of the XXZ model. We investigate a weak coupling limit, identify a time window in which the effects of the perturbations become significant and solve the time evolution through a mean-field mapping. As an explicit example we study the XYZ spin-1/2 chain with additional perturbations that break integrability. One of the most intriguing results of the analysis is the appearance of persistent oscillatory behaviour. To unravel its origin, we study in detail a toy model: the transverse-field Ising chain with an additional nonlocal interaction proportional to the square of the transverse spin per unit length (Phys. Rev. Lett. 111, 197203 (2013)). Despite being nonlocal, this belongs to a class of models that emerge as intermediate steps of the mean-field mapping and shares many dynamical properties with the weakly interacting models under consideration.
Random matrix ensembles for many-body quantum systems
NASA Astrophysics Data System (ADS)
Vyas, Manan; Seligman, Thomas H.
2018-04-01
Classical random matrix ensembles were originally introduced in physics to approximate quantum many-particle nuclear interactions. However, there exists a plethora of quantum systems whose dynamics is explained in terms of few-particle (predominantly two-particle) interactions. The random matrix models incorporating the few-particle nature of interactions are known as embedded random matrix ensembles. In the present paper, we provide a brief overview of these two ensembles and illustrate how the embedded ensembles can be successfully used to study decoherence of a qubit interacting with an environment, both for fermionic and bosonic embedded ensembles. Numerical calculations show the dependence of decoherence on the nature of the environment.
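For the classical ensembles discussed above, the basic numerical machinery is easy to sketch (our own illustration): draw one Gaussian Orthogonal Ensemble matrix, and check two textbook signatures — the Wigner semicircle support of the spectrum and level repulsion via the nearest-neighbour spacing ratio:

```python
import numpy as np

rng = np.random.default_rng(2)

def goe(n):
    """One n x n draw from the Gaussian Orthogonal Ensemble, scaled so the
    spectrum converges to the Wigner semicircle on [-2, 2]."""
    a = rng.normal(size=(n, n))
    return (a + a.T) / np.sqrt(2.0 * n)

evals = np.linalg.eigvalsh(goe(1000))   # already sorted ascending

# nearest-neighbour spacing ratio r = min(s_i, s_{i+1}) / max(s_i, s_{i+1}):
# its mean is ~0.53 for GOE (level repulsion) versus ~0.39 for Poisson statistics
s = np.diff(evals)
r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
mean_r = r.mean()
```

Embedded ensembles replace the full random matrix by a few-body interaction embedded in the many-body space; the spectral diagnostics above are applied in the same way.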
NASA Astrophysics Data System (ADS)
Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D. M. H.
2013-04-01
We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from emission-driven rather than concentration-driven perturbed parameter ensembles of a global climate model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses arising when considering emission rather than concentration-driven simulations (with 10-90th percentile ranges of 1.7 K for the aggressive mitigation scenario, up to 3.9 K for the high-end, business-as-usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 K (RCP8.5) and, even under aggressive mitigation (RCP2.6), temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) on the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. For both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration scenario used to drive GCM ensembles lies towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses in future projections. Our ensemble of emission-driven simulations spans the global temperature response of the CMIP5 emission-driven simulations, except at the low end. Combinations of low climate sensitivity and low carbon cycle feedbacks lead a number of CMIP5 responses to lie below our ensemble range.
The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to sampling a number of stronger carbon cycle feedbacks and to sampling climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real-world climate-sensitivity constraints which, if achieved, would lead to reductions in the upper bound of projected global mean temperature change. The ensembles of simulations presented here provide a framework to explore relationships between present-day observables and future changes, while the large spread of future-projected changes highlights the ongoing need for such work.
Role of Forcing Uncertainty and Background Model Error Characterization in Snow Data Assimilation
NASA Technical Reports Server (NTRS)
Kumar, Sujay V.; Dong, Jiarul; Peters-Lidard, Christa D.; Mocko, David; Gomez, Breogan
2017-01-01
Accurate specification of the model error covariances in data assimilation systems is a challenging issue. Ensemble land data assimilation methods rely on stochastic perturbations of input forcing and model prognostic fields for developing representations of input model error covariances. This article examines the limitations of using a single forcing dataset for specifying forcing uncertainty inputs for assimilating snow depth retrievals. Using an idealized data assimilation experiment, the article demonstrates that the use of hybrid forcing input strategies (either through the use of an ensemble of forcing products or through the added use of the forcing climatology) provide a better characterization of the background model error, which leads to improved data assimilation results, especially during the snow accumulation and melt-time periods. The use of hybrid forcing ensembles is then employed for assimilating snow depth retrievals from the AMSR2 (Advanced Microwave Scanning Radiometer 2) instrument over two domains in the continental USA with different snow evolution characteristics. Over a region near the Great Lakes, where the snow evolution tends to be ephemeral, the use of hybrid forcing ensembles provides significant improvements relative to the use of a single forcing dataset. Over the Colorado headwaters characterized by large snow accumulation, the impact of using the forcing ensemble is less prominent and is largely limited to the snow transition time periods. The results of the article demonstrate that improving the background model error through the use of a forcing ensemble enables the assimilation system to better incorporate the observational information.
Probabilistic Predictions of PM2.5 Using a Novel Ensemble Design for the NAQFC
NASA Astrophysics Data System (ADS)
Kumar, R.; Lee, J. A.; Delle Monache, L.; Alessandrini, S.; Lee, P.
2017-12-01
Poor air quality (AQ) in the U.S. is estimated to cause about 60,000 premature deaths, with costs of $100B-$150B annually. To reduce such losses, the National AQ Forecasting Capability (NAQFC) at the National Oceanic and Atmospheric Administration (NOAA) produces forecasts of ozone, particulate matter less than 2.5 μm in diameter (PM2.5), and other pollutants so that advance notice and warnings can be issued to help individuals and communities limit exposure and reduce air pollution-caused health problems. The current NAQFC, based on the U.S. Environmental Protection Agency Community Multi-scale AQ (CMAQ) modeling system, provides only deterministic AQ forecasts and does not quantify the uncertainty associated with the predictions, which could be large due to the chaotic nature of the atmosphere and nonlinearity in atmospheric chemistry. This project aims to take NAQFC a step further in the direction of probabilistic AQ prediction by exploring and quantifying the potential value of ensemble predictions of PM2.5, perturbing three key aspects of PM2.5 modeling: the meteorology, the emissions, and the CMAQ secondary organic aerosol formulation. This presentation focuses on the impact of meteorological variability, which is represented by three members of NOAA's Short-Range Ensemble Forecast (SREF) system that were down-selected by hierarchical cluster analysis. These three SREF members provide the physics configurations and initial/boundary conditions for the Weather Research and Forecasting (WRF) model runs that generate the output variables required for driving CMAQ that are missing in operational SREF output. We conducted WRF runs for January, April, July, and October 2016 to capture seasonal changes in meteorology. Estimated emissions of trace gases and aerosols via the Sparse Matrix Operator Kernel Emissions (SMOKE) system were developed using the WRF output. WRF and SMOKE output drive a 3-member CMAQ mini-ensemble of once-daily, 48-h PM2.5 forecasts for the same four months.
The CMAQ mini-ensemble is evaluated against both observations and the current operational deterministic NAQFC products, and analyzed to assess the impact of meteorological biases on PM2.5 variability. Quantification of the PM2.5 prediction uncertainty will prove a key factor to support cost-effective decision-making while protecting public health.
NASA Astrophysics Data System (ADS)
Zhu, Kefeng; Xue, Ming
2016-11-01
On 21 July 2012, an extreme rainfall event, with a maximum recorded 24-hour rainfall amount of 460 mm, occurred in Beijing, China. Most operational models failed to predict such an extreme amount. In this study, a convection-permitting ensemble forecast system (CEFS), at 4-km grid spacing, covering the entire mainland of China, is applied to this extreme rainfall case. CEFS consists of 22 members and uses multiple physics parameterizations. For the event, the predicted maximum is 415 mm d-1 in the probability-matched ensemble mean. The predicted high-probability heavy rain region is located in southwest Beijing, as was observed. Ensemble-based verification scores are then investigated. For a small verification domain covering Beijing and its surrounding areas, the precipitation rank histogram of CEFS is much flatter than that of a reference global ensemble. CEFS has a lower (better) Brier score and a higher resolution than the global ensemble for precipitation, indicating more reliable probabilistic forecasting by CEFS. Additionally, forecasts of different ensemble members are compared and discussed. Most of the extreme rainfall comes from convection in the warm sector east of an approaching cold front. A few members of CEFS successfully reproduce such precipitation, and orographic lift of highly moist low-level flows with a significant southeasterly component is suggested to have played an important role in producing the initial convection. Comparisons between good and bad forecast members indicate a strong sensitivity of the extreme rainfall to the mesoscale environmental conditions and, to a lesser extent, the model physics.
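The two verification diagnostics named above, the rank histogram and the Brier score, are straightforward to compute. A hedged sketch on synthetic data — our own illustration, not the CEFS verification code:

```python
import numpy as np

def rank_histogram(ens, obs):
    """Rank of each observation within its ensemble; flat counts indicate reliable spread."""
    ranks = (ens < obs[:, None]).sum(axis=1)            # 0..n_members per case
    return np.bincount(ranks, minlength=ens.shape[1] + 1)

def brier_score(prob, event):
    """Mean squared error of probability forecasts against binary outcomes (lower is better)."""
    return np.mean((prob - event) ** 2)

# synthetic, well-calibrated 9-member ensemble: observation and members
# share the same error model, so the observation is exchangeable with members
rng = np.random.default_rng(3)
signal = rng.normal(size=2000)
obs = signal + rng.normal(size=2000)
ens = signal[:, None] + rng.normal(size=(2000, 9))

hist = rank_histogram(ens, obs)                         # ~flat: 10 bins of ~200
prob = (ens > 1.0).mean(axis=1)                         # probability of exceeding a threshold
event = (obs > 1.0).astype(float)
bs = brier_score(prob, event)
bs_clim = brier_score(np.full_like(prob, event.mean()), event)  # climatology baseline
```

An under-dispersive ensemble would instead pile counts into the outer ranks (a U-shaped histogram), and a skillful probabilistic forecast should beat the climatological Brier score, as here.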
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del Ben, Mauro, E-mail: mauro.delben@chem.uzh.ch; Hutter, Jürg, E-mail: hutter@chem.uzh.ch; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch
The forces acting on the atoms as well as the stress tensor are crucial ingredients for calculating the structural and dynamical properties of systems in the condensed phase. Here, these derivatives of the total energy are evaluated for the second-order Møller-Plesset perturbation energy (MP2) in the framework of the resolution-of-identity Gaussian and plane waves method, in a way that is fully consistent with how the total energy is computed. This consistency is non-trivial, given the different ways employed to compute Coulomb, exchange, and canonical four-center integrals, and allows, for example, for energy-conserving dynamics in various ensembles. Based on this formalism, a massively parallel algorithm has been developed for finite and extended systems. The designed parallel algorithm displays, with respect to the system size, cubic, quartic, and quintic requirements, respectively, for the memory, communication, and computation. All these requirements are reduced with an increasing number of processes, and the measured performance shows excellent parallel scalability and efficiency up to thousands of nodes. Additionally, the computationally more demanding quintic-scaling steps can be accelerated by employing graphics processing units (GPUs), showing, for large systems, a gain of almost a factor of two compared to the standard central-processing-unit-only case. In this way, the evaluation of the derivatives of the RI-MP2 energy can be performed within a few minutes for systems containing hundreds of atoms and thousands of basis functions. With good time to solution, the implementation thus opens the possibility to perform molecular dynamics (MD) simulations in various ensembles (microcanonical and isobaric-isothermal) at the MP2 level of theory. Geometry optimization, full cell relaxation, and energy-conserving MD simulations have been performed for a variety of molecular crystals including NH₃, CO₂, formic acid, and benzene.
Lattice black branes: sphere packing in general relativity
NASA Astrophysics Data System (ADS)
Dias, Óscar J. C.; Santos, Jorge E.; Way, Benson
2018-05-01
We perturbatively construct asymptotically R^{1,3}× T^2 black branes with multiple inhomogeneous directions and show that some of them are thermodynamically preferred over uniform branes in both the microcanonical and canonical ensembles. This demonstrates that, unlike five-dimensional black strings, the instability of some unstable black branes has a plausible endpoint that does not require a violation of cosmic censorship.
Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers
NASA Astrophysics Data System (ADS)
Sendersky, Dmitry
2000-10-01
The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.
NASA Astrophysics Data System (ADS)
van Westen, Thijs; Gross, Joachim
2017-07-01
The Helmholtz energy of a fluid interacting by a Lennard-Jones pair potential is expanded in a perturbation series. Both the methods of Barker-Henderson (BH) and of Weeks-Chandler-Andersen (WCA) are evaluated for the division of the intermolecular potential into reference and perturbation parts. The first four perturbation terms are evaluated for various densities and temperatures (in the ranges ρ* = 0-1.5 and T* = 0.5-12) using Monte Carlo simulations in the canonical ensemble. The simulation results are used to test several approximate theoretical methods for describing perturbation terms or for developing an approximate infinite order perturbation series. Additionally, the simulations serve as a basis for developing fully analytical third-order BH and WCA perturbation theories. The development of analytical theories allows (1) a careful comparison between the BH and WCA formalisms, and (2) a systematic examination of the effect of higher-order perturbation terms on calculated thermodynamic properties of fluids. Properties included in the comparison are supercritical thermodynamic properties (pressure, internal energy, and chemical potential), vapor-liquid phase equilibria, second virial coefficients, and heat capacities. For all properties studied, we find a systematically improved description upon using a higher-order perturbation theory. A result of particular relevance is that a third-order perturbation theory is capable of providing a quantitative description of second virial coefficients to temperatures as low as the triple-point of the Lennard-Jones fluid. We find no reason to prefer the WCA formalism over the BH formalism.
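The BH and WCA divisions of the Lennard-Jones potential compared above can be written down directly. A sketch in reduced units (ε = σ = 1), as our own illustration; evaluating the perturbation terms themselves would require the canonical-ensemble Monte Carlo sampling described in the abstract:

```python
import numpy as np

EPS, SIG = 1.0, 1.0
RMIN = 2.0 ** (1.0 / 6.0) * SIG   # distance of the LJ minimum, where u = -EPS

def u_lj(r):
    sr6 = (SIG / r) ** 6
    return 4.0 * EPS * (sr6 ** 2 - sr6)

def u_ref_wca(r):
    # WCA reference: the repulsive branch only, shifted up by EPS so it vanishes at RMIN
    return np.where(r < RMIN, u_lj(r) + EPS, 0.0)

def u_ref_bh(r):
    # BH reference: the full potential inside r = SIG (where u crosses zero), zero outside
    return np.where(r < SIG, u_lj(r), 0.0)

r = np.linspace(0.85, 3.0, 400)
u_pert_wca = u_lj(r) - u_ref_wca(r)   # attractive part; constant -EPS inside the well
u_pert_bh = u_lj(r) - u_ref_bh(r)     # zero inside r = SIG, the full tail outside
```

The hallmark difference is visible immediately: the WCA perturbation is flat (equal to -ε) everywhere inside the potential minimum, whereas the BH perturbation retains the curvature of the well between σ and the minimum.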
NASA Astrophysics Data System (ADS)
Khade, Vikram; Kurian, Jaison; Chang, Ping; Szunyogh, Istvan; Thyng, Kristen; Montuoro, Raffaele
2017-05-01
This paper demonstrates the potential of ocean ensemble forecasting in the Gulf of Mexico (GoM). The Bred Vector (BV) technique with a one-week rescaling frequency is implemented on a 9 km resolution version of the Regional Ocean Modeling System (ROMS). Numerical experiments are carried out by using the HYCOM analysis products to define the initial conditions and the lateral boundary conditions. The growth rates of the forecast uncertainty are estimated to be about 10% of initial amplitude per week. By carrying out ensemble forecast experiments with and without perturbed surface forcing, it is demonstrated that in the coastal regions accounting for uncertainties in the atmospheric forcing is more important than accounting for uncertainties in the ocean initial conditions. In the Loop Current region, the initial condition uncertainties are the dominant source of the forecast uncertainty. The root-mean-square error of the Lagrangian track forecasts at the 15-day forecast lead time can be reduced by about 10-50 km by using the ensemble-mean Eulerian forecast of the oceanic flow for the computation of the tracks, instead of the single-initial-condition Eulerian forecast.
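The BV technique itself is simple to sketch: integrate a control and a perturbed forecast, difference them, and rescale the difference back to a fixed amplitude at each rescaling interval. A toy illustration on the Lorenz-63 system — our own example, not the ROMS implementation, where the rescaling interval is one week:

```python
import numpy as np

def rhs(s):
    """Lorenz-63 tendencies (classic sigma=10, rho=28, beta=8/3 parameters)."""
    x, y, z = s
    return np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - (8.0 / 3.0) * z])

def step(s, dt=0.01):
    # one fourth-order Runge-Kutta step
    k1 = rhs(s)
    k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2)
    k4 = rhs(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def breed(x0, delta0, n_cycles=50, steps=100, size=0.1):
    """Breeding: run control and perturbed forecasts, rescale the difference each cycle."""
    ctrl = x0.copy()
    pert = x0 + delta0 * (size / np.linalg.norm(delta0))
    growth = []
    for _ in range(n_cycles):
        for _ in range(steps):
            ctrl, pert = step(ctrl), step(pert)
        bv = pert - ctrl
        amp = np.linalg.norm(bv)
        growth.append(amp / size)          # amplification factor over this cycle
        pert = ctrl + bv * (size / amp)    # rescale back to the fixed amplitude
    return bv / amp, np.array(growth)

rng = np.random.default_rng(4)
x0 = np.array([1.0, 1.0, 1.05])
for _ in range(1000):                      # spin up onto the attractor first
    x0 = step(x0)
bv, growth = breed(x0, rng.normal(size=3))
```

After a few cycles the bred vector aligns with the locally fastest-growing direction, and the per-cycle amplification factors are the growth-rate diagnostic quoted in the abstract (there, about 10% per week).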
CLS 2+1 flavor simulations at physical light-and strange-quark masses
NASA Astrophysics Data System (ADS)
Mohler, Daniel; Schaefer, Stefan; Simeth, Jakob
2018-03-01
We report recent efforts by CLS to generate an ensemble with physical light- and strange-quark masses in a lattice volume of 192 × 96³ at β = 3.55, corresponding to a lattice spacing of 0.064 fm. This ensemble is being generated as part of the CLS 2+1 flavor effort with improved Wilson fermions. Our simulations currently cover 5 lattice spacings ranging from 0.039 fm to 0.086 fm at various pion masses along chiral trajectories with either the sum of the quark masses kept fixed, or with the strange-quark mass at the physical value. The current status of the simulations is briefly reviewed, including a short discussion of measured autocorrelation times and of the main features of the simulations. We then proceed to discuss the thermalization strategy employed for the generation of the physical quark-mass ensemble and present first results for some simple observables. Challenges encountered in the simulation are highlighted.
ENSO Bred Vectors in Coupled Ocean-Atmosphere General Circulation Models
NASA Technical Reports Server (NTRS)
Yang, S. C.; Cai, Ming; Kalnay, E.; Rienecker, M.; Yuan, G.; Toth, Z.
2004-01-01
The breeding method has been implemented in the NASA Seasonal-to-Interannual Prediction Project (NSIPP) Coupled General Circulation Model (CGCM) with the goal of improving operational seasonal to interannual climate predictions through ensemble forecasting and data assimilation. The coupled instability as captured by the breeding method is the first attempt to isolate the evolving ENSO instability and its corresponding global atmospheric response in a fully coupled ocean-atmosphere GCM. Our results show that the growth rate of the coupled bred vectors (BV) peaks at about 3 months before a background ENSO event. The dominant growing BV modes are reminiscent of the background ENSO anomalies and show a strong tropical response with wind/SST/thermocline interrelated in a manner similar to the background ENSO mode. They exhibit larger amplitudes in the eastern tropical Pacific, reflecting the natural dynamical sensitivity associated with the presence of the shallow thermocline. Moreover, the extratropical perturbations associated with these coupled BV modes reveal the variations related to the atmospheric teleconnection patterns associated with background ENSO variability, e.g. over the North Pacific and North America. A similar experiment was carried out with the NCEP/CFS03 CGCM. Comparisons between bred vectors from the NSIPP CGCM and NCEP/CFS03 CGCM demonstrate the robustness of the results. Our results strongly suggest that the breeding method can serve as a natural filter to identify the slowly varying, coupled instabilities in a coupled GCM, which can be used to construct ensemble perturbations for ensemble forecasts and to estimate the coupled background error covariance for coupled data assimilation.
Time-Hierarchical Clustering and Visualization of Weather Forecast Ensembles.
Ferstl, Florian; Kanzler, Mathias; Rautenhaus, Marc; Westermann, Rudiger
2017-01-01
We propose a new approach for analyzing the temporal growth of the uncertainty in ensembles of weather forecasts which are started from perturbed but similar initial conditions. As an alternative to traditional approaches in meteorology, which use juxtaposition and animation of spaghetti plots of iso-contours, we make use of contour clustering and provide means to encode forecast dynamics and spread in one single visualization. Based on a given ensemble clustering in a specified time window, we merge clusters in time-reversed order to indicate when and where forecast trajectories start to diverge. We present and compare different visualizations of the resulting time-hierarchical grouping, including space-time surfaces built by connecting cluster representatives over time, and stacked contour variability plots. We demonstrate the effectiveness of our visual encodings with forecast examples of the European Centre for Medium-Range Weather Forecasts, which convey the evolution of specific features in the data as well as the temporally increasing spatial variability.
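A reduced sketch of the divergence-detection idea underlying the approach above — our own toy construction, not the authors' contour-clustering algorithm, which operates on iso-contours of 2-D fields: cluster scalar ensemble trajectories on their final-time values, then locate the first time at which the final-time clusters separate beyond the within-cluster spread:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# synthetic "ensemble": 20 scalar forecast trajectories that share a common
# evolution until t = 50 and then drift apart into two regimes
rng = np.random.default_rng(5)
T, n = 100, 20
t = np.arange(T)
drift = np.where(t < 50, 0.0, 0.05 * (t - 50))
sign = np.repeat([1.0, -1.0], n // 2)
traj = sign[:, None] * drift[None, :] + 0.2 * rng.normal(size=(n, T))

# hierarchical (Ward) clustering of members on their final-time values
Z = linkage(traj[:, -1:], method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")

# divergence time: first step where the separation of the cluster means
# exceeds three times the mean within-cluster spread
m1 = traj[labels == 1].mean(axis=0)
m2 = traj[labels == 2].mean(axis=0)
within = 0.5 * (traj[labels == 1].std(axis=0) + traj[labels == 2].std(axis=0))
sep = np.abs(m1 - m2)
t_div = int(np.argmax(sep > 3.0 * within))   # ~55 here, shortly after the split at t = 50
```

The paper's method generalizes this by merging contour clusters in time-reversed order, so the resulting hierarchy encodes when and where the forecast trajectories start to diverge.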
NASA Astrophysics Data System (ADS)
Gaspari, M.; McDonald, M.; Hamer, S. L.; Brighenti, F.; Temi, P.; Gendron-Marsolais, M.; Hlavacek-Larrondo, J.; Edge, A. C.; Werner, N.; Tozzi, P.; Sun, M.; Stone, J. M.; Tremblay, G. R.; Hogan, M. T.; Eckert, D.; Ettori, S.; Yu, H.; Biffi, V.; Planelles, S.
2018-02-01
We propose a novel method to constrain turbulence and bulk motions in massive galaxies, galaxy groups, and clusters, exploring both simulations and observations. As emerged in the recent picture of top-down multiphase condensation, hot gaseous halos are tightly linked to all other phases in terms of cospatiality and thermodynamics. While hot halos (∼10⁷ K) are perturbed by subsonic turbulence, warm (∼10⁴ K) ionized and neutral filaments condense out of the turbulent eddies. The peaks condense into cold molecular clouds (<100 K) raining onto the core via chaotic cold accretion (CCA). We show that all phases are tightly linked in terms of the ensemble (wide-aperture) velocity dispersion along the line of sight. The correlation arises in complementary long-term AGN feedback simulations and high-resolution CCA runs, and is corroborated by the combined Hitomi and new Integral Field Unit measurements in the Perseus cluster. The ensemble multiphase gas distributions (from the UV to the radio band) are characterized by substantial spectral line broadening (σ_v,los ≈ 100–200 km s⁻¹) with a mild line shift. On the other hand, pencil-beam detections (such as H I absorption against the AGN backlight) sample the small-scale clouds, displaying smaller broadening and significant line shifts of up to several 100 km s⁻¹ (for those falling toward the AGN), with increased scatter due to the turbulence intermittency. We present new ensemble σ_v,los of the warm Hα+[N II] gas in 72 observed cluster/group cores: the constraints are consistent with the simulations and can be used as robust proxies for the turbulent velocities, in particular for the challenging hot plasma (which otherwise requires extremely long X-ray exposures). Finally, we show that the physically motivated criterion C ≡ t_cool/t_eddy ≈ 1 best traces the extent of the condensation region and the presence of multiphase gas in observed clusters and groups.
The ensemble method can be applied to many available spectroscopic data sets and can substantially advance our understanding of multiphase halos in light of the next-generation multiwavelength missions.
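As a rough numerical sketch, the criterion C ≡ t_cool/t_eddy can be evaluated once a turbulent velocity dispersion and an injection scale are chosen; here the eddy turnover time follows the standard scaling t_eddy ∼ 2π r^(2/3) L^(1/3)/σ_v. All numbers below are illustrative, not fits to any observed system.

```python
import numpy as np

kpc = 3.086e21   # cm
km = 1e5         # cm
Gyr = 3.156e16   # s

def t_eddy(r_kpc, L_kpc, sigma_v_kms):
    """Eddy turnover time (Gyr) at radius r for injection scale L."""
    r, L = r_kpc*kpc, L_kpc*kpc
    return 2*np.pi * r**(2/3) * L**(1/3) / (sigma_v_kms*km) / Gyr

def C(t_cool_gyr, r_kpc, L_kpc=10.0, sigma_v_kms=150.0):
    """Condensation criterion C = t_cool / t_eddy (illustrative inputs)."""
    return t_cool_gyr / t_eddy(r_kpc, L_kpc, sigma_v_kms)

# Multiphase condensation is expected where C ~ 1; with a fixed cooling
# time the criterion weakens outward as eddies turn over more slowly.
for r in (5, 20, 50):
    print(f"r = {r:3d} kpc: C = {C(0.5, r):.2f}")
```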
The Impact of STTP on the GEFS Forecast of Week-2 and Beyond in the Presence of Stochastic Physics
NASA Astrophysics Data System (ADS)
Hou, D.
2015-12-01
The Stochastic Total Tendency Perturbation (STTP) scheme was designed to represent model-related uncertainties not accounted for by the numerical model itself or by the physics-based stochastic schemes. It has been applied in NCEP's Global Ensemble Forecast System (GEFS) since 2010, showing significant positive impacts on the forecast, with an improved spread-error ratio and better probabilistic forecast skill. The scheme is robust and carried over through the resolution increases and model improvements of 2012 and 2015 with minimal changes. Recently, a set of stochastic physics schemes was coded into the Global Forecast System model and tested in the GEFS package. With these schemes turned on and STTP off, forecast performance is comparable or even superior to the operational GEFS, in which STTP is the only representation of model-related uncertainties. This is true especially in week one. However, in the second week and beyond, both the experimental and the operational GEFS have insufficient spread, especially during the warm seasons. This is a major challenge when the GEFS is extended to sub-seasonal (week 4-6) time scales. The impact of STTP on the GEFS forecast in the presence of stochastic physics is investigated by turning on both the stochastic physics schemes and STTP and carefully tuning their amplitudes. The analysis focuses on the extended-range forecast, especially week 2. Impacts on weeks 3-4 will also be addressed.
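The spread-error diagnostic referenced above can be illustrated on synthetic data: for a well-calibrated ensemble, the spread (ensemble standard deviation about the ensemble mean) should match the RMSE of the ensemble mean against the verifying truth. The Gaussian toy data below are an assumption for illustration, not GEFS output.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_members = 500, 20

# Each case has a forecast pdf N(center, 1); a calibrated ensemble and
# the verifying truth are both draws from that same pdf.
centers = 2.0*rng.standard_normal(n_cases)
truth = centers + rng.standard_normal(n_cases)
members = centers[:, None] + rng.standard_normal((n_cases, n_members))

ens_mean = members.mean(axis=1)
rmse = np.sqrt(np.mean((ens_mean - truth)**2))
spread = np.sqrt(np.mean(members.var(axis=1, ddof=1)))

print(f"RMSE of ensemble mean: {rmse:.3f}")
print(f"ensemble spread:       {spread:.3f}")
print(f"spread-error ratio:    {spread/rmse:.2f}")
```

An under-dispersive ensemble, like the week-2 GEFS described above, would show a ratio well below 1; tuning stochastic perturbation amplitudes aims to push it back toward 1.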
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr
2017-06-07
We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.
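The idea behind true-ensemble averaging can be sketched with a surrogate signal: statistics are accumulated over M de-correlated parallel realizations instead of one long time series, cutting wall-clock time roughly by M once each instance has de-correlated. Here an AR(1) process stands in for a turbulent signal; the process and sample sizes are illustrative assumptions.

```python
import numpy as np

phi = 0.9   # autocorrelation of the surrogate turbulent signal

def ar1(n, seed):
    """AR(1) series with unit variance and lag-1 correlation phi."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for i in range(1, n):
        x[i] = phi*x[i-1] + np.sqrt(1 - phi**2)*rng.standard_normal()
    return x

# One long run vs. 16 short runs with the same total number of samples;
# different seeds play the role of perturbed initial conditions.
long_run = ar1(16000, seed=10)
short_runs = [ar1(1000, seed=100 + k) for k in range(16)]
ens_mean_val = np.mean([r.mean() for r in short_runs])

print("long-run mean:", long_run.mean())
print("ensemble mean:", ens_mean_val)
# Both estimate the true mean (0) with comparable accuracy, but the 16
# instances can run in parallel, i.e. 1/16 the wall-clock time.
```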
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brewer, Jasmine; Rajagopal, Krishna; Sadofyev, Andrey
2018-02-02
Some of the most important experimentally accessible probes of the quark- gluon plasma (QGP) produced in heavy ion collisions come from the analysis of how the shape and energy of sprays of energetic particles produced within a cone with a specified opening angle (jets) in a hard scattering are modified by their passage through the strongly coupled, liquid, QGP. We model an ensemble of back-to-back dijets for the purpose of gaining a qualitative understanding of how the shapes of the individual jets and the asymmetry in the energy of the pairs of jets in the ensemble are modified by their passage through an expanding cooling droplet of strongly coupled plasma, in the model in a holographic gauge theory that is dual to a 4+1-dimensional black-hole spacetime that is asymptotically anti-de Sitter (AdS). We build our model by constructing an ensemble of strings in the dual gravitational description of the gauge theory. We model QCD jets in vacuum using strings whose endpoints are moving “downward” into the gravitational bulk spacetime with some fixed small angle, an angle that represents the opening angle (ratio of jet mass to jet energy) that the QCD jet would have in vacuum. Such strings must be moving through the gravitational bulk at (close to) the speed of light; they must be (close to) null. This condition does not specify the energy distribution along the string, meaning that it does not specify the shape of the jet being modeled. We study the dynamics of strings that are initially not null and show that strings with a wide range of initial conditions rapidly accelerate and become null and, as they do, develop a similar distribution of their energy density. 
We use this distribution of the energy density along the string, choose an ensemble of strings whose opening angles and energies are distributed as in perturbative QCD, and show that we can then fix one of the two model parameters such that the mean jet shape for the jets in the ensemble that we have built matches that measured in proton-proton collisions reasonably well. This is a novel way for hybridizing relevant inputs from perturbative QCD and a strongly coupled holographic gauge theory in the service of modeling jets in QGP. We send our ensemble of strings through an expanding cooling droplet of strongly coupled plasma, choosing the second model parameter so as to get a reasonable value for R_AA^jet, the suppression in the number of jets, and study how the mean jet shape and the dijet asymmetry are modified, comparing both to measurements from heavy ion collisions at the LHC.
NASA Astrophysics Data System (ADS)
Brewer, Jasmine; Rajagopal, Krishna; Sadofyev, Andrey; van der Schee, Wilke
2018-02-01
Some of the most important experimentally accessible probes of the quark- gluon plasma (QGP) produced in heavy ion collisions come from the analysis of how the shape and energy of sprays of energetic particles produced within a cone with a specified opening angle (jets) in a hard scattering are modified by their passage through the strongly coupled, liquid, QGP. We model an ensemble of back-to-back dijets for the purpose of gaining a qualitative understanding of how the shapes of the individual jets and the asymmetry in the energy of the pairs of jets in the ensemble are modified by their passage through an expanding cooling droplet of strongly coupled plasma, in the model in a holographic gauge theory that is dual to a 4+1-dimensional black-hole spacetime that is asymptotically anti-de Sitter (AdS). We build our model by constructing an ensemble of strings in the dual gravitational description of the gauge theory. We model QCD jets in vacuum using strings whose endpoints are moving "downward" into the gravitational bulk spacetime with some fixed small angle, an angle that represents the opening angle (ratio of jet mass to jet energy) that the QCD jet would have in vacuum. Such strings must be moving through the gravitational bulk at (close to) the speed of light; they must be (close to) null. This condition does not specify the energy distribution along the string, meaning that it does not specify the shape of the jet being modeled. We study the dynamics of strings that are initially not null and show that strings with a wide range of initial conditions rapidly accelerate and become null and, as they do, develop a similar distribution of their energy density. 
We use this distribution of the energy density along the string, choose an ensemble of strings whose opening angles and energies are distributed as in perturbative QCD, and show that we can then fix one of the two model parameters such that the mean jet shape for the jets in the ensemble that we have built matches that measured in proton-proton collisions reasonably well. This is a novel way for hybridizing relevant inputs from perturbative QCD and a strongly coupled holographic gauge theory in the service of modeling jets in QGP. We send our ensemble of strings through an expanding cooling droplet of strongly coupled plasma, choosing the second model parameter so as to get a reasonable value for R_AA^jet, the suppression in the number of jets, and study how the mean jet shape and the dijet asymmetry are modified, comparing both to measurements from heavy ion collisions at the LHC.
Challenges in Visual Analysis of Ensembles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crossno, Patricia
2018-04-12
Modeling physical phenomena through computational simulation increasingly relies on generating a collection of related runs, known as an ensemble. In this paper, we explore the challenges we face in developing analysis and visualization systems for large and complex ensemble data sets, which we seek to understand without having to view the results of every simulation run. Implementing approaches and ideas developed in response to this goal, we demonstrate the analysis of a 15K-run material fracturing study using Slycat, our ensemble analysis system.
NASA Technical Reports Server (NTRS)
Wilcox, E. M.; Sud, Y. C.; Walker, G.
2009-01-01
Aerosol perturbations over selected land regions are imposed in Version-4 of the Goddard Earth Observing System (GEOS-4) general circulation model (GCM) to assess the influence of increasing aerosol concentrations on regional circulation patterns and precipitation in four selected regions: India, Africa, and North and South America. Part 1 of this paper addresses the responses to aerosol perturbations in India and Africa. This paper presents the same for aerosol perturbations over the Americas. GEOS-4 is forced with prescribed aerosols based on climatological data, which interact with clouds using a prognostic scheme for cloud microphysics including aerosol nucleation of water and ice cloud hydrometeors. In clear-sky conditions the aerosols interact with radiation. Thus the model includes comprehensive physics describing the aerosol direct and indirect effects on climate (hereafter ADE and AIE respectively). Each simulation is started from analyzed initial conditions for 1 May and integrated through June-July-August of each of the six years 1982–1987 to provide a six-member ensemble set. Results are presented for the difference between simulations with double the climatological aerosol concentration and one-half the climatological aerosol concentration for three experiments: two where the ADE and AIE are applied separately and one in which both the ADE and AIE are applied. The ADE and AIE both yield reductions in net radiation at the top of the atmosphere and surface, while the direct absorption of shortwave radiation contributes a net radiative heating in the atmosphere. A large net heating of the atmosphere is also apparent over the subtropical North Atlantic Ocean that is attributable to the large aerosol perturbation imposed over Africa. 
This atmospheric warming and the depression of the surface pressure over North America contribute to a northward shift of the inter-Tropical Convergence Zone over northern America, an increase in precipitation over Central America and the Caribbean, and an enhancement of convergence in the North American monsoon region.
Pumping approximately integrable systems
Lange, Florian; Lenarčič, Zala; Rosch, Achim
2017-01-01
Weak perturbations can drive an interacting many-particle system far from its initial equilibrium state if one is able to pump into degrees of freedom approximately protected by conservation laws. This concept has been used, for example, to realize Bose–Einstein condensates of photons, magnons and excitons. Integrable quantum systems, like the one-dimensional Heisenberg model, are characterized by an infinite set of conservation laws. Here, we develop a theory of weakly driven integrable systems and show that pumping can induce large spin or heat currents even in the presence of integrability-breaking perturbations, since it activates local and quasi-local approximate conserved quantities. The resulting steady state is qualitatively captured by a truncated generalized Gibbs ensemble with Lagrange parameters that depend on the structure, but not on the overall amplitude, of the perturbations, nor on the initial state. We suggest using spin-chain materials driven by terahertz radiation to realize integrability-based spin and heat pumps. PMID:28598444
Wang, Chuangqi; Choi, Hee June; Kim, Sung-Jin; Desai, Aesha; Lee, Namgyu; Kim, Dohoon; Bae, Yongho; Lee, Kwonmoo
2018-04-27
Cell protrusion is morphodynamically heterogeneous at the subcellular level. However, the mechanism of cell protrusion has been understood based on the ensemble average of actin regulator dynamics. Here, we establish a computational framework called HACKS (deconvolution of heterogeneous activity in coordination of cytoskeleton at the subcellular level) to deconvolve the subcellular heterogeneity of lamellipodial protrusion from live cell imaging. HACKS identifies distinct subcellular protrusion phenotypes based on machine-learning algorithms and reveals their underlying actin regulator dynamics at the leading edge. Using our method, we discover "accelerating protrusion", which is driven by the temporally ordered coordination of Arp2/3 and VASP activities. We validate our finding by pharmacological perturbations and further identify the fine regulation of Arp2/3 and VASP recruitment associated with accelerating protrusion. Our study suggests HACKS can identify specific subcellular protrusion phenotypes susceptible to pharmacological perturbation and reveal how actin regulator dynamics are changed by the perturbation.
The Influence of Particle Charge on Heterogeneous Reaction Rate Coefficients
NASA Technical Reports Server (NTRS)
Aikin, A. C.; Pesnell, W. D.
2000-01-01
The effects of particle charge on heterogeneous reaction rates are presented. Many atmospheric particles, whether liquid or solid, are charged. This surface charge causes a redistribution of charge within a liquid particle and, as a consequence, a perturbation in the gaseous uptake coefficient. The amount of perturbation is proportional to the external potential and the square of the ratio of the Debye length in the liquid to the particle radius. Previous modeling has shown how surface charge affects the uptake coefficient of charged aerosols. This effect is now included in the heterogeneous reaction rate of an aerosol ensemble. Extension of this analysis to ice particles will be discussed and examples presented.
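The stated scaling, a perturbation proportional to the surface potential and to the squared ratio of Debye length to particle radius, can be sketched numerically. The prefactor k, the baseline uptake coefficient, and the exact functional form below are assumptions for illustration, not the paper's derived expression.

```python
def uptake(gamma0, potential_V, debye_nm, radius_nm, k=1.0):
    """Charge-perturbed uptake coefficient (illustrative form):
    gamma = gamma0 * (1 + k * potential * (lambda_D / a)^2)."""
    return gamma0 * (1.0 + k * potential_V * (debye_nm/radius_nm)**2)

gamma0 = 0.1           # assumed uncharged uptake coefficient
for a in (50.0, 100.0, 500.0):   # particle radius in nm
    print(f"a = {a:5.0f} nm: gamma = {uptake(gamma0, 0.05, 10.0, a):.6f}")
# For a >> lambda_D the charge correction vanishes and gamma -> gamma0,
# so the effect matters most for the smallest particles in the ensemble.
```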
Generalized canonical ensembles and ensemble equivalence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costeniuc, M.; Ellis, R.S.; Turkington, B.
2006-02-15
This paper is a companion piece to our previous work [J. Stat. Phys. 119, 1283 (2005)], which introduced a generalized canonical ensemble obtained by multiplying the usual Boltzmann weight factor e^(-βH) of the canonical ensemble with an exponential factor involving a continuous function g of the Hamiltonian H. We provide here a simplified introduction to our previous work, focusing now on a number of physical rather than mathematical aspects of the generalized canonical ensemble. The main result discussed is that, for suitable choices of g, the generalized canonical ensemble reproduces, in the thermodynamic limit, all the microcanonical equilibrium properties of the many-body system represented by H even if this system has a nonconcave microcanonical entropy function. This is something that in general the standard (g=0) canonical ensemble cannot achieve. Thus a virtue of the generalized canonical ensemble is that it can often be made equivalent to the microcanonical ensemble in cases in which the canonical ensemble cannot. The case of quadratic g functions is discussed in detail; it leads to the so-called Gaussian ensemble.
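A minimal numerical sketch of the generalized weight: the canonical factor e^(-βH) is multiplied by e^(-g(H)), and the quadratic choice g(H) = γH² gives the Gaussian ensemble mentioned above. The discrete energy levels and toy density of states below are assumptions for illustration.

```python
import numpy as np

H = np.linspace(-2.0, 2.0, 401)        # energy grid of a toy system
density = np.exp(-H**2)                 # assumed toy density of states

def mean_energy(beta, gamma=0.0):
    """<H> under the generalized weight density(H)*exp(-beta*H - gamma*H^2);
    gamma=0 recovers the standard canonical ensemble."""
    w = density * np.exp(-beta*H - gamma*H**2)
    return np.sum(H*w) / np.sum(w)

print("canonical <H> at beta=1:          ", mean_energy(1.0))
print("Gaussian  <H> at beta=1, gamma=2: ", mean_energy(1.0, gamma=2.0))
# gamma > 0 narrows the energy distribution about its mean, which is
# the mechanism that lets the generalized ensemble remain equivalent to
# the microcanonical ensemble where the canonical ensemble fails.
```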
Single Aerosol Particle Studies Using Optical Trapping Raman And Cavity Ringdown Spectroscopy
NASA Astrophysics Data System (ADS)
Gong, Z.; Wang, C.; Pan, Y. L.; Videen, G.
2017-12-01
Due to the physical and chemical complexity of aerosol particles and the interdisciplinary nature of aerosol science that involves physics, chemistry, and biology, our knowledge of aerosol particles is rather incomplete; our current understanding of aerosol particles is limited by averaged (over size, composition, shape, and orientation) and/or ensemble (over time, size, and multi-particles) measurements. Physically, single aerosol particles are the fundamental units of any large aerosol ensembles. Chemically, single aerosol particles carry individual chemical components (properties and constituents) in particle ensemble processes. Therefore, the study of single aerosol particles can bridge the gap between aerosol ensembles and bulk/surface properties and provide a hierarchical progression from a simple benchmark single-component system to a mixed-phase multicomponent system. A single aerosol particle can be an effective reactor to study heterogeneous surface chemistry in multiple phases. Latest technological advances provide exciting new opportunities to study single aerosol particles and to further develop single aerosol particle instrumentation. We present updates on our recent studies of single aerosol particles optically trapped in air using the optical-trapping Raman and cavity ringdown spectroscopy.
NASA Astrophysics Data System (ADS)
Mandula, Jeffrey E.; Ogilvie, Michael C.
1998-02-01
In the lattice formulation of heavy quark effective theory, the value of the "classical velocity" v, as defined through the separation of the four-momentum of a heavy quark into a part proportional to the heavy quark mass and a residual part that remains finite in the heavy quark limit (P = Mv + p), is different from its value as it appears in the bare heavy quark propagator [S⁻¹(p) = v·p]. The origin of the difference, which is effectively a lattice-induced renormalization, is the reduction of Lorentz [or O(4)] invariance to (hyper)cubic invariance. The renormalization is finite and depends specifically on the form of the discretization of the reduced heavy quark Dirac equation. For the forward-time, centered-space discretization, we compute this renormalization nonperturbatively, using an ensemble of lattices at β=6.1 provided by the Fermilab ACP-MAPS Collaboration. The calculation makes crucial use of a variationally optimized smeared operator for creating composite heavy-light mesons. It has the property that its propagator achieves an asymptotic plateau in just a few Euclidean time steps. For comparison, we also compute the shift perturbatively, to one loop in lattice perturbation theory. The nonperturbative calculation of the leading multiplicative shift in the classical velocity is considerably different from the one-loop estimate and indicates that for the above parameters v is reduced by about 10–13%.
Characterization of Mesoscale Predictability
2013-09-30
2009), which, it had been argued, had high mesoscale predictability. More recently, we have considered the prediction of lowland snow in the Puget ... averaged total and perturbation kinetic energy spectra on the 5-km, convection-permitting grid. The ensembles clearly captured the observed k^(-5/3) total ... kinetic energy spectrum at wavelengths less than approximately 400 km and also showed a transition to a roughly k^(-3) dependence at longer wavelengths
NASA Astrophysics Data System (ADS)
Wu, Xiongwu; Brooks, Bernard R.
2011-11-01
The self-guided Langevin dynamics (SGLD) is a method to accelerate conformational searching. This method is unique in that it selectively enhances and suppresses molecular motions based on their frequency to accelerate conformational searching without modifying energy surfaces or raising temperatures. It has been applied to studies of many long-time-scale events, such as protein folding. Recent progress in the understanding of the conformational distribution in SGLD simulations also makes SGLD an accurate method for quantitative studies. The SGLD partition function provides a way to convert the SGLD conformational distribution to the canonical ensemble distribution and to calculate ensemble-average properties through reweighting. Based on the SGLD partition function, this work presents a force-momentum-based self-guided Langevin dynamics (SGLDfp) simulation method to directly sample the canonical ensemble. This method includes interaction forces in its guiding force to compensate for the perturbation caused by the momentum-based guiding force, so that it can approximately sample the canonical ensemble. Using several example systems, we demonstrate that SGLDfp simulations can approximately maintain the canonical ensemble distribution and significantly accelerate conformational searching. With optimal parameters, SGLDfp and SGLD simulations can cross energy barriers of more than 15 kT and 20 kT, respectively, at rates similar to those at which LD simulations cross energy barriers of 10 kT. The SGLDfp method is size extensive and works well for large systems. For studies where preserving accessible conformational space is critical, such as free energy calculations and protein folding studies, SGLDfp is an efficient approach to search and sample the conformational space.
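The reweighting idea, converting averages taken under a modified sampling distribution back to the canonical ensemble, can be sketched generically. The harmonic potential and the effective-temperature form of the bias below are illustrative assumptions, not the actual SGLD partition function.

```python
import numpy as np

rng = np.random.default_rng(3)
U = lambda x: 0.5*x**2      # harmonic potential, in kT units
kT, kT_eff = 1.0, 1.5       # target (canonical) and sampling temperatures

# Samples drawn from the modified distribution p'(x) ~ exp(-U(x)/kT_eff),
# standing in for a guided (e.g. SGLD-like) trajectory.
x = rng.normal(0.0, np.sqrt(kT_eff), 200_000)

# Reweighting factors w(x) = exp(U/kT_eff - U/kT) recover canonical averages.
w = np.exp(U(x)/kT_eff - U(x)/kT)

x2_canonical = np.average(x**2, weights=w)
print("reweighted <x^2>:", x2_canonical)   # should approach kT = 1.0
print("raw        <x^2>:", np.mean(x**2))  # ~ kT_eff = 1.5
```

The same structure underlies the SGLD-to-canonical conversion described above, with the SGLD partition function supplying the actual weight in place of this assumed temperature bias.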
Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.
Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S
2012-11-01
One of the most popular and simple models for the calculation of pKa values from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKa values. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKa values; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKa values. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues.
A variational ensemble scheme for noisy image data assimilation
NASA Astrophysics Data System (ADS)
Yang, Yin; Robinson, Cordelia; Heitz, Dominique; Mémin, Etienne
2014-05-01
Data assimilation techniques aim at recovering the trajectory of a system's state variables, denoted X, over time from partially observed noisy measurements of the system, denoted Y. These procedures, which couple the dynamics and noisy measurements of the system, fulfill a twofold objective. On the one hand, they provide a denoising - or reconstruction - procedure for the data through a given model framework and, on the other hand, they provide estimation procedures for unknown parameters of the dynamics. A standard variational data assimilation problem can be formulated as the minimization of the following objective function with respect to the initial discrepancy, η, from the background initial guess: J(η(x)) = (1/2)‖X_b(x) − X(t_0,x)‖²_B + (1/2)∫_{t_0}^{t_f} ‖H(X(t,x)) − Y(t,x)‖²_R dt, (1) where the observation operator H links the state variable and the measurements. The cost function can be interpreted as the log-likelihood function associated with the a posteriori distribution of the state given the past history of measurements and the background. In this work, we aim at studying ensemble-based optimal control strategies for data assimilation. Such a formulation nicely combines the ingredients of ensemble Kalman filters and variational data assimilation (4DVar). It is also formulated as the minimization of the objective function (1), but, similarly to ensemble filters, it introduces in its objective function an empirical ensemble-based background-error covariance defined as: B ≡ <(Xb -
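A minimal numerical sketch of the objective function (1): a quadratic background term weighted by B⁻¹ plus observation terms weighted by R⁻¹, accumulated along the model trajectory (the time integral becomes a sum over observation times). The linear toy model and the identity observation operator H are simplifying assumptions.

```python
import numpy as np

def cost(eta, x_b, obs, B_inv, R_inv, step):
    """4DVar-style cost J(eta) for the initial state x0 = x_b + eta."""
    x = x_b + eta
    J = 0.5 * eta @ B_inv @ eta          # background discrepancy term
    for y in obs:                        # integral -> sum over obs times
        d = x - y                        # H = identity (assumption)
        J += 0.5 * d @ R_inv @ d
        x = step(x)                      # advance the model state
    return J

n = 3
A = 0.95*np.eye(n)                       # toy linear dynamics (assumption)
x_b = np.zeros(n)                        # background initial guess
# Synthetic observations generated by the model from the "true" state 0.5:
obs = [np.full(n, 0.5) * 0.95**k for k in range(4)]
B_inv, R_inv = np.eye(n), np.eye(n)

J_bg = cost(np.zeros(n), x_b, obs, B_inv, R_inv, lambda x: A @ x)
J_truth = cost(np.full(n, 0.5), x_b, obs, B_inv, R_inv, lambda x: A @ x)
print("J at eta = 0:              ", J_bg)
print("J at eta = true discrepancy:", J_truth)   # only the background term remains
```

The ensemble variant discussed above replaces the fixed B⁻¹ with the inverse of an empirical covariance estimated from the ensemble members.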
Manual physical therapy and perturbation exercises in knee osteoarthritis.
Rhon, Daniel; Deyle, Gail; Gill, Norman; Rendeiro, Daniel
2013-11-01
Knee osteoarthritis (OA) causes disability among the elderly and is often associated with impaired balance and proprioception. Perturbation exercises may help improve these impairments. Although manual physical therapy is generally a well-tolerated treatment for knee OA, perturbation exercises have not been evaluated when used with a manual physical therapy approach. The purpose of this study was to observe tolerance to perturbation exercises and the effect of a manual physical therapy approach with perturbation exercises on patients with knee OA. This was a prospective observational cohort study of 15 patients with knee OA. The Western Ontario and McMaster Universities Arthritis Index (WOMAC), global rating of change (GROC), and 72-hour post-treatment tolerance were primary outcome measures. Patients received perturbation balance exercises along with a manual physical therapy approach, twice weekly for 4 weeks. Follow-up evaluation was done at 1, 3, and 6 months after beginning the program. Mean total WOMAC score significantly improved (P = 0.001) after the 4-week program (total WOMAC: initial, 105; 4 weeks, 56; 3 months, 54; 6 months, 57). Mean improvements were similar to previously published trials of manual physical therapy without perturbation exercises. The GROC score showed a minimal clinically important difference (MCID) ≥ +3 in 13 patients (87%) at 4 weeks, 12 patients (80%) at 3 months, and 9 patients (60%) at 6 months. No patients reported exacerbation of symptoms within 72 hours following each treatment session. A manual physical therapy approach that also included perturbation exercises was well tolerated and resulted in improved outcome scores in patients with knee OA.
Decadal climate predictions improved by ocean ensemble dispersion filtering
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-06-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
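The shift-toward-the-mean step at the heart of the ensemble dispersion filter is simple to state in code. The following is a minimal sketch under assumptions of our own: a scalar "ocean state" per member, a logistic map standing in for the model dynamics, and an illustrative nudging weight `alpha`; none of these specifics come from the abstract.

```python
import numpy as np

def dispersion_filter(x, alpha):
    """Shift each ensemble member toward the ensemble mean by weight alpha
    (alpha = 0: no shift; alpha = 1: collapse onto the mean)."""
    return (1 - alpha) * x + alpha * x.mean()

rng = np.random.default_rng(1)

# Toy ensemble: one scalar per member; the logistic map plays the model.
n_members, n_steps, interval, alpha = 10, 120, 30, 0.5   # all values illustrative
x = rng.uniform(0.2, 0.8, size=n_members)
for t in range(1, n_steps + 1):
    x = 3.7 * x * (1.0 - x)              # advance each member
    if t % interval == 0:
        x = dispersion_filter(x, alpha)  # periodic ("seasonal") shift to the mean
print("final ensemble spread:", x.std())
```

By construction the filter leaves the ensemble mean unchanged and shrinks the spread by exactly the factor (1 - alpha) at each application.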
NASA Astrophysics Data System (ADS)
Van Uytven, Els; Willems, Patrick
2017-04-01
Current trends in hydro-meteorological variables indicate the potential impact of climate change on hydrological extremes, and therefore increase the importance of climate adaptation strategies in water management. The impact of climate change on hydro-meteorological and hydrological extremes is, however, highly uncertain. This is due to uncertainties introduced by the climate models, the internal variability inherent to the climate system, the greenhouse gas scenarios and the statistical downscaling methods. In view of the need to define sustainable climate adaptation strategies, these uncertainties must be assessed, which is commonly done by means of ensemble approaches. As more and more climate models and statistical downscaling methods become available, there is a need to facilitate the climate impact and uncertainty analysis. A Climate Perturbation Tool has been developed for that purpose, which combines a set of statistical downscaling methods including weather typing, weather generator, transfer function and advanced perturbation based approaches. By use of an interactive interface, climate impact modelers can apply these statistical downscaling methods in a semi-automatic way to an ensemble of climate model runs. The tool is applicable to any region, but has so far been demonstrated on cases in Belgium, Suriname, Vietnam and Bangladesh. Time series representing future local-scale precipitation, temperature and potential evapotranspiration (PET) conditions were obtained, starting from time series of historical observations. Uncertainties in the future meteorological conditions are represented in two different ways: through an ensemble of time series, and through a reduced set of synthetic scenarios. Both aim to span the full uncertainty range as assessed from the ensemble of climate model runs and downscaling methods.
For Belgium, for instance, use was made of 100-year time series of 10-minute precipitation observations and daily temperature and PET observations at Uccle, together with a large ensemble of 160 global climate model runs (CMIP5). They cover all four representative concentration pathway (RCP) greenhouse gas scenarios. While evaluating the downscaled meteorological series, particular attention was given to the performance of extreme value metrics (e.g. for precipitation, by means of intensity-duration-frequency statistics). Moreover, the total uncertainty was decomposed into the fractional uncertainties for each of the uncertainty sources considered. Research assessing the additional uncertainty due to parameter and structural uncertainties of the hydrological impact model is ongoing.
Can decadal climate predictions be improved by ocean ensemble dispersion filtering?
NASA Astrophysics Data System (ADS)
Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.
2017-12-01
Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, the decadal climate prediction falls in-between these two time scales. The ocean's memory, owing to its heat capacity, holds large potential skill on the decadal scale. In recent years, more precise initialization techniques of coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: running slightly perturbed predictions yields an ensemble, and using and evaluating the whole ensemble or its ensemble average, instead of a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
This study is part of MiKlip (fona-miklip.de), a major project on decadal climate prediction in Germany. We focus on the Max-Planck-Institute Earth System Model in its low-resolution version (MPI-ESM-LR) and MiKlip's basic initialization strategy, as used for the decadal climate forecast published in 2017: http://www.fona-miklip.de/decadal-forecast-2017-2026/decadal-forecast-for-2017-2026/ More information about this study in JAMES: DOI: 10.1002/2016MS000787
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Diana N.; Neelin, J. David
2016-04-28
A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive dangerous ranges. Here, the low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.
Understanding the Central Equatorial African long-term drought using AMIP-type simulations
NASA Astrophysics Data System (ADS)
Hua, Wenjian; Zhou, Liming; Chen, Haishan; Nicholson, Sharon E.; Jiang, Yan; Raghavendra, Ajay
2018-02-01
Previous studies show that Indo-Pacific sea surface temperature (SST) variations may help to explain the observed long-term drought during April-May-June (AMJ) since the 1990s over Central equatorial Africa (CEA). However, the underlying physical mechanisms for this drought are still not clear due to observation limitations. Here we use AMIP-type simulations with 24 ensemble members forced by observed SSTs from the ECHAM4.5 model to explore the likely physical processes that determine the rainfall variations over CEA. We not only examine the ensemble mean (EM), but also compare the "good" and "poor" ensemble members to understand the intra-ensemble variability. In general, the EM and the "good" ensemble members can simulate the drought and the associated reduced vertical velocity and anomalous anti-cyclonic circulation in the lower troposphere. However, the "poor" ensemble members cannot simulate the drought and associated circulation patterns. These contrasts indicate that the drought is tightly associated with the tropical Walker circulation and atmospheric teleconnection patterns. If the observed circulation patterns cannot be reproduced, the CEA drought will not be captured. Despite the large intra-ensemble spread, the model simulations indicate an essential role of SST forcing in causing the drought. These results suggest that the long-term drought may result from tropical Indo-Pacific SST variations associated with the enhanced and westward extended tropical Walker circulation.
Guidelines for the analysis of free energy calculations.
Klimovich, Pavel V; Shirts, Michael R; Mobley, David L
2015-05-01
Free energy calculations based on molecular dynamics simulations show considerable promise for applications ranging from drug discovery to prediction of physical properties and structure-function studies. But these calculations are still difficult and tedious to analyze, and best practices for analysis are not well defined or propagated. Essentially, each group analyzing these calculations needs to decide how to conduct the analysis and, usually, develop its own analysis tools. Here, we review and recommend best practices for analysis yielding reliable free energies from molecular simulations. Additionally, we provide a Python tool, alchemical-analysis.py, freely available on GitHub as part of the pymbar package (located at http://github.com/choderalab/pymbar), that implements the analysis practices reviewed here for several reference simulation packages, and which can be adapted to handle data from other packages. Both this review and the tool cover analysis of alchemical calculations generally, including free energy estimates via both thermodynamic integration and free energy perturbation-based estimators. Our Python tool also handles output from multiple types of free energy calculations, including expanded ensemble and Hamiltonian replica exchange, as well as standard fixed ensemble calculations. We also survey a range of statistical and graphical ways of assessing the quality of the data and free energy estimates, and provide prototypes of these in our tool. We hope this tool and discussion will serve as a foundation for more standardization of and agreement on best practices for analysis of free energy calculations.
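The two families of estimators named here, thermodynamic integration and free energy perturbation, can be illustrated on synthetic data with known answers. This sketch is not taken from alchemical-analysis.py; the per-window averages and the Gaussian work distribution below are fabricated precisely so that the exact results are known (1.0 for TI, and mu - sigma^2/2 for the Gaussian FEP case, in units of kT).

```python
import numpy as np

rng = np.random.default_rng(2)

# Thermodynamic integration: Delta F = integral_0^1 <dU/dlambda>_lambda dlambda,
# approximated by the trapezoidal rule over per-window averages. With the
# synthetic choice <dU/dlambda> = 2*lambda, the exact integral is 1.0.
lambdas = np.linspace(0.0, 1.0, 11)
dudl = 2.0 * lambdas
dF_ti = np.sum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lambdas))

# Exponential (Zwanzig / free energy perturbation) estimator on synthetic
# Gaussian work values Delta U ~ N(mu, sigma^2), for which the exact result
# is Delta F = mu - sigma^2 / 2.
mu, sigma = 1.0, 0.5
dU = rng.normal(mu, sigma, size=200_000)
dF_fep = -np.log(np.exp(-dU).mean())

print("TI estimate:", dF_ti)
print("FEP estimate:", dF_fep)
```

The trapezoidal rule is exact for the linear integrand chosen here, so the TI estimate matches 1.0 to machine precision, while the FEP estimate carries the usual sampling noise of exponential averaging.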
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.
Reciprocity in directed networks
NASA Astrophysics Data System (ADS)
Yin, Mei; Zhu, Lingjiong
2016-04-01
Reciprocity is an important characteristic of directed networks and has been widely used in the modeling of World Wide Web, email, social, and other complex networks. In this paper, we take a statistical physics point of view and study the limiting entropy and free energy densities from the microcanonical ensemble, the canonical ensemble, and the grand canonical ensemble whose sufficient statistics are given by edge and reciprocal densities. The sparse case is also studied for the grand canonical ensemble. Extensions to more general reciprocal models including reciprocal triangle and star densities will likewise be discussed.
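The two sufficient statistics named here, edge density and reciprocal density, are easy to compute for a concrete directed graph. A minimal sketch follows; the normalization by the n(n-1) possible directed pairs is our assumption for illustration, as the abstract does not fix a convention.

```python
def edge_and_reciprocal_density(edges, n):
    """Return (edge density, reciprocal density) of a simple directed graph.

    edges: set of (i, j) directed pairs with i != j; n: number of nodes.
    Both statistics are normalized by the n*(n-1) possible directed pairs
    (an assumed convention); a reciprocated pair contributes one count
    per direction.
    """
    possible = n * (n - 1)
    e = len(edges) / possible
    r = sum((j, i) in edges for (i, j) in edges) / possible
    return e, r

# 3 nodes; (0,1) and (1,0) form the only mutual pair.
edges = {(0, 1), (1, 0), (1, 2), (2, 0)}
e, r = edge_and_reciprocal_density(edges, 3)
print(e, r)   # 4 of 6 directed pairs present -> e = 2/3; one mutual pair -> r = 1/3
```

In an exponential-family ("canonical ensemble") model on directed graphs, these two densities are exactly the statistics conjugate to the edge and reciprocity parameters.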
NASA Astrophysics Data System (ADS)
van Westen, Thijs; Oyarzún, Bernardo; Vlugt, Thijs J. H.; Gross, Joachim
2015-06-01
We develop an equation of state (EoS) for describing isotropic-nematic (IN) phase equilibria of Lennard-Jones (LJ) chain fluids. The EoS is developed by applying a second order Barker-Henderson perturbation theory to a reference fluid of hard chain molecules. The chain molecules consist of tangentially bonded spherical segments and are allowed to be fully flexible, partially flexible (rod-coil), or rigid linear. The hard-chain reference contribution to the EoS is obtained from a Vega-Lago rescaled Onsager theory. For the description of the (attractive) dispersion interactions between molecules, we adopt a segment-segment approach. We show that the perturbation contribution for describing these interactions can be divided into an "isotropic" part, which depends only implicitly on orientational ordering of molecules (through density), and an "anisotropic" part, for which an explicit dependence on orientational ordering is included (through an expansion in the nematic order parameter). The perturbation theory is used to study the effect of chain length, molecular flexibility, and attractive interactions on IN phase equilibria of pure LJ chain fluids. Theoretical results for the IN phase equilibrium of rigid linear LJ 10-mers are compared to results obtained from Monte Carlo simulations in the isobaric-isothermal (NPT) ensemble, and an expanded formulation of the Gibbs-ensemble. Our results show that the anisotropic contribution to the dispersion attractions is irrelevant for LJ chain fluids. Using the isotropic (density-dependent) contribution only (i.e., using a zeroth order expansion of the attractive Helmholtz energy contribution in the nematic order parameter), excellent agreement between theory and simulations is observed. 
These results suggest that an EoS contribution for describing the attractive part of the dispersion interactions in real LCs can be obtained from conventional theoretical approaches designed for isotropic fluids, such as a Perturbed-Chain Statistical Associating Fluid Theory approach.
Kelly, Kristen L; Dalton, Shannon R; Wai, Rebecca B; Ramchandani, Kanika; Xu, Rosalind J; Linse, Sara; Londergan, Casey H
2018-03-22
Seven native residues on the regulatory protein calmodulin, including three key methionine residues, were replaced (one by one) by the vibrational probe amino acid cyanylated cysteine, which has a unique CN stretching vibration that reports on its local environment. Almost no perturbation was caused by this probe at any of the seven sites, as reported by CD spectra of calcium-bound and apo calmodulin and binding thermodynamics for the formation of a complex between calmodulin and a canonical target peptide from skeletal muscle myosin light chain kinase measured by isothermal titration. The surprising lack of perturbation suggests that this probe group could be applied directly in many protein-protein binding interfaces. The infrared absorption bands for the probe groups reported many dramatic changes in the probes' local environments as CaM went from apo- to calcium-saturated to target peptide-bound conditions, including large frequency shifts and a variety of line shapes from narrow (interpreted as a rigid and invariant local environment) to symmetric to broad and asymmetric (likely from multiple coexisting and dynamically exchanging structures). The fast intrinsic time scale of infrared spectroscopy means that the line shapes report directly on site-specific details of calmodulin's variable structural distribution. Though quantitative interpretation of the probe line shapes depends on a direct connection between simulated ensembles and experimental data that does not yet exist, formation of such a connection to data such as that reported here would provide a new way to evaluate conformational ensembles from data that directly contains the structural distribution. The calmodulin probe sites developed here will also be useful in evaluating the binding mode of calmodulin with many uncharacterized regulatory targets.
Influence of Aerosol Heating on the Stratospheric Transport of the Mt. Pinatubo Eruption
NASA Technical Reports Server (NTRS)
Aquila, Valentina; Oman, Luke D.; Stolarski, Richard S.
2011-01-01
On June 15, 1991, the eruption of Mt. Pinatubo (15.1 deg. N, 120.3 deg. E) in the Philippines injected about 20 Tg of sulfur dioxide into the stratosphere, which was transformed into sulfuric acid aerosol. The large perturbation of the background aerosol caused an increase in temperature in the lower stratosphere of 2-3 K. Even though stratospheric winds climatologically tend to hinder air mixing between the two hemispheres, observations have shown that a large part of the SO2 emitted by Mt. Pinatubo was transported from the Northern to the Southern Hemisphere. We simulate the eruption of Mt. Pinatubo with the Goddard Earth Observing System (GEOS) version 5 global climate model, coupled to the aerosol module GOCART and the stratospheric chemistry module StratChem, to investigate the influence of the eruption of Mt. Pinatubo on the stratospheric transport pattern. We perform two ensembles of simulations: the first ensemble consists of runs without coupling between aerosol and radiation. In these simulations the plume of aerosols is treated as a passive tracer and the atmosphere is unperturbed. In the second ensemble of simulations aerosols and radiation are coupled. We show that the set of runs with interactive aerosol produces a larger cross-equatorial transport of the Pinatubo cloud. In our simulations the local heating perturbation caused by the sudden injection of volcanic aerosol changes the pattern of the stratospheric winds, causing more intrusion of air from the Northern into the Southern Hemisphere. Furthermore, we perform simulations changing the injection height of the cloud, and study the transport of the plume resulting from the different scenarios. Comparisons of model results with SAGE II and AVHRR satellite observations will be shown.
NASA Astrophysics Data System (ADS)
Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.
2016-01-01
The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e., the observation that large-scale rainfall structures are more persistent and predictable than small-scale convective cells. This paper presents the development, adaptation and verification of the STEPS system for Belgium (STEPS-BE). STEPS-BE provides in real-time 20-member ensemble precipitation nowcasts at 1 km and 5 min resolutions up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 75-90 % of the forecast errors.
NASA Astrophysics Data System (ADS)
Foresti, L.; Reyniers, M.; Seed, A.; Delobbe, L.
2015-07-01
The Short-Term Ensemble Prediction System (STEPS) is implemented in real-time at the Royal Meteorological Institute (RMI) of Belgium. The main idea behind STEPS is to quantify the forecast uncertainty by adding stochastic perturbations to the deterministic Lagrangian extrapolation of radar images. The stochastic perturbations are designed to account for the unpredictable precipitation growth and decay processes and to reproduce the dynamic scaling of precipitation fields, i.e. the observation that large scale rainfall structures are more persistent and predictable than small scale convective cells. This paper presents the development, adaptation and verification of the system STEPS for Belgium (STEPS-BE). STEPS-BE provides in real-time 20 member ensemble precipitation nowcasts at 1 km and 5 min resolution up to 2 h lead time using a 4 C-band radar composite as input. In the context of the PLURISK project, STEPS forecasts were generated to be used as input in sewer system hydraulic models for nowcasting urban inundations in the cities of Ghent and Leuven. Comprehensive forecast verification was performed in order to detect systematic biases over the given urban areas and to analyze the reliability of probabilistic forecasts for a set of case studies in 2013 and 2014. The forecast biases over the cities of Leuven and Ghent were found to be small, which is encouraging for future integration of STEPS nowcasts into the hydraulic models. Probabilistic forecasts of exceeding 0.5 mm h-1 are reliable up to 60-90 min lead time, while the ones of exceeding 5.0 mm h-1 are only reliable up to 30 min. The STEPS ensembles are slightly under-dispersive and represent only 80-90 % of the forecast errors.
DNA origami as biocompatible surface to match single-molecule and ensemble experiments
Gietl, Andreas; Holzmeister, Phil; Grohmann, Dina; Tinnefeld, Philip
2012-01-01
Single-molecule experiments on immobilized molecules allow unique insights into the dynamics of molecular machines and enzymes as well as their interactions. The immobilization, however, can invoke perturbation to the activity of biomolecules causing incongruities between single molecule and ensemble measurements. Here we introduce the recently developed DNA origami as a platform to transfer ensemble assays to the immobilized single molecule level without changing the nano-environment of the biomolecules. The idea is a stepwise transfer of common functional assays first to the surface of a DNA origami, which can be checked at the ensemble level, and then to the microscope glass slide for single-molecule inquiry using the DNA origami as a transfer platform. We studied the structural flexibility of a DNA Holliday junction and the TATA-binding protein (TBP)-induced bending of DNA both on freely diffusing molecules and attached to the origami structure by fluorescence resonance energy transfer. This resulted in highly congruent data sets demonstrating that the DNA origami does not influence the functionality of the biomolecule. Single-molecule data collected from surface-immobilized biomolecule-loaded DNA origami are in very good agreement with data from solution measurements supporting the fact that the DNA origami can be used as biocompatible surface in many fluorescence-based measurements. PMID:22523083
Hamiltonian mean-field model: effect of temporal perturbation in coupling matrix
NASA Astrophysics Data System (ADS)
Bhadra, Nivedita; Patra, Soumen K.
2018-05-01
The Hamiltonian mean-field (HMF) model is a system of fully coupled rotators which exhibits a second-order phase transition at some critical energy in its canonical ensemble. We investigate the case where the interaction between the rotors is governed by a time-dependent coupling matrix. Our numerical study reveals a shift in the critical point due to the temporal modulation. The shift in the critical point is shown to be independent of the modulation frequency above some threshold value, whereas the impact of the amplitude of modulation is dominant. In the microcanonical ensemble, the system with constant coupling reaches a quasi-stationary state (QSS) at an energy near the critical point. Our result indicates that the QSS persists in the presence of such temporal modulation of the coupling parameter.
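A numerical study of this kind typically integrates the HMF equations of motion and monitors the magnetization order parameter. The sketch below is a toy version under our own assumptions: a sinusoidal form K(t) = 1 + eps*sin(omega*t) for the time-modulated coupling (the paper's modulation may differ), leapfrog integration, and illustrative values for N, dt, and the initial condition.

```python
import numpy as np

def magnetization(theta):
    """HMF order parameter M = |(1/N) * sum_j exp(i * theta_j)|."""
    return np.abs(np.exp(1j * theta).mean())

def hmf_step(theta, p, t, dt, eps=0.2, omega=1.0):
    """One leapfrog step of the mean-field equations of motion
    dtheta_i/dt = p_i, dp_i/dt = -K(t) * M * sin(theta_i - phi),
    with an assumed modulated coupling K(t) = 1 + eps*sin(omega*t)."""
    def force(theta, t):
        z = np.exp(1j * theta).mean()
        M, phi = np.abs(z), np.angle(z)
        K = 1.0 + eps * np.sin(omega * t)
        return -K * M * np.sin(theta - phi)

    p_half = p + 0.5 * dt * force(theta, t)
    theta = theta + dt * p_half
    p = p_half + 0.5 * dt * force(theta, t + dt)
    return theta, p

rng = np.random.default_rng(5)
N = 500
theta = rng.normal(scale=0.3, size=N)   # start near the magnetized state
p = rng.normal(scale=0.1, size=N)
t, dt = 0.0, 0.05
for _ in range(200):
    theta, p = hmf_step(theta, p, t, dt)
    t += dt
print("final magnetization:", magnetization(theta))
```

Sweeping eps and omega in such a loop, and recording the energy at which the long-time magnetization drops to zero, is one way to trace the shift of the critical point described above.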
Gienger, Jonas; Bär, Markus; Neukammer, Jörg
2018-01-10
A method is presented to infer simultaneously the wavelength-dependent real refractive index (RI) of the material of microspheres and their size distribution from extinction measurements of particle suspensions. To derive the averaged spectral optical extinction cross section of the microspheres from such ensemble measurements, we determined the particle concentration by flow cytometry to an accuracy of typically 2% and adjusted the particle concentration to ensure that perturbations due to multiple scattering are negligible. For analysis of the extinction spectra, we employ Mie theory, a series-expansion representation of the refractive index and nonlinear numerical optimization. In contrast to other approaches, our method offers the advantage to simultaneously determine size, size distribution, and spectral refractive index of ensembles of microparticles including uncertainty estimation.
Regge trajectories and Hagedorn behavior: Hadronic realizations of dynamical dark matter
NASA Astrophysics Data System (ADS)
Dienes, Keith R.; Huang, Fei; Su, Shufang; Thomas, Brooks
2017-11-01
Dynamical Dark Matter (DDM) is an alternative framework for dark-matter physics in which the dark sector comprises a vast ensemble of particle species whose Standard-Model decay widths are balanced against their cosmological abundances. In this talk, we study the properties of a hitherto-unexplored class of DDM ensembles in which the ensemble constituents are the "hadronic" resonances associated with the confining phase of a strongly-coupled dark sector. Such ensembles exhibit masses lying along Regge trajectories and Hagedorn-like densities of states that grow exponentially with mass. We investigate the applicable constraints on such dark-"hadronic" DDM ensembles and find that these constraints permit a broad range of mass and confinement scales for these ensembles. We also find that the distribution of the total present-day abundance across the ensemble is highly correlated with the values of these scales. This talk reports on research originally presented in Ref. [1].
NASA Astrophysics Data System (ADS)
Pan, Yujie; Xue, Ming; Zhu, Kefeng; Wang, Mingjun
2018-05-01
A dual-resolution (DR) version of a regional ensemble Kalman filter (EnKF)-3D ensemble variational (3DEnVar) coupled hybrid data assimilation system is implemented as a prototype for the operational Rapid Refresh forecasting system. The DR 3DEnVar system combines a high-resolution (HR) deterministic background forecast with lower-resolution (LR) EnKF ensemble perturbations used for flow-dependent background error covariance to produce a HR analysis. The computational cost is substantially reduced by running the ensemble forecasts and EnKF analyses at LR. The DR 3DEnVar system is tested with 3-h cycles over a 9-day period using a 40/~13-km grid spacing combination. The HR forecasts from the DR hybrid analyses are compared with forecasts launched from HR Gridpoint Statistical Interpolation (GSI) 3D variational (3DVar) analyses, and single LR hybrid analyses interpolated to the HR grid. With the DR 3DEnVar system, a 90% weight for the ensemble covariance yields the lowest forecast errors and the DR hybrid system clearly outperforms the HR GSI 3DVar. Humidity and wind forecasts are also better than those launched from interpolated LR hybrid analyses, but the temperature forecasts are slightly worse. The humidity forecasts are improved most. For precipitation forecasts, the DR 3DEnVar always outperforms HR GSI 3DVar. It also outperforms the LR 3DEnVar, except for the initial forecast period and lower thresholds.
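The hybrid idea, blending a static background-error covariance with a flow-dependent ensemble estimate, reduces to a weighted sum of matrices. The sketch below is a toy stand-in, not the GSI implementation: all matrices, the identity observation operator, and the sizes are assumptions; only the 90% ensemble weight echoes the result quoted above.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 30   # toy state dimension and ensemble size

# Hybrid background-error covariance: blend of a static (climatological)
# matrix and an ensemble-estimated, flow-dependent one.
B_static = np.eye(n)
X = rng.normal(size=(n, m))
Xp = X - X.mean(axis=1, keepdims=True)
B_ens = (Xp @ Xp.T) / (m - 1)
beta = 0.9                                   # 90% weight on the ensemble part
B = (1 - beta) * B_static + beta * B_ens

# One linear analysis step with this covariance (H = I, R = I for brevity).
H, R = np.eye(n), np.eye(n)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain implied by the cost function
x_b = rng.normal(size=n)                       # high-resolution background
y = rng.normal(size=n)                         # observations
x_a = x_b + K @ (y - H @ x_b)                  # analysis pulled toward the data
print("analysis:", x_a)
```

Because B is a convex combination of two positive semidefinite matrices it stays symmetric positive semidefinite, and the analysis can only move toward the observations, never away from them.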
Minimization for conditional simulation: Relationship to optimal transport
NASA Astrophysics Data System (ADS)
Oliver, Dean S.
2014-05-01
In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
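One member of the algorithm class described here, randomized maximum likelihood, is easy to demonstrate in the linear-Gaussian case, where the minimization has a closed form and the mapping from prior to posterior samples is exact. Everything below (the prior, observation operator, noise levels, and sample count) is a toy setup of our own, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_samples = 3, 2000

# Linear-Gaussian setup where RML sampling is exact.
C = np.diag([1.0, 2.0, 0.5])    # prior covariance (toy values)
x_pr = np.zeros(n)              # prior mean
H = np.eye(n)                   # linear observation operator
R = 0.5 * np.eye(n)             # observation-error covariance
y = np.array([1.0, -1.0, 0.5])  # data

# Closed-form minimizer of ||x - xj||_C^2 + ||Hx - yj||_R^2 per sample.
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)

samples = []
for _ in range(n_samples):
    xj = rng.multivariate_normal(x_pr, C)             # draw from the prior
    yj = y + rng.multivariate_normal(np.zeros(n), R)  # perturb the observations
    samples.append(xj + K @ (yj - H @ xj))            # minimize and store
samples = np.array(samples)

# Exact Gaussian posterior, for comparison.
post_mean = x_pr + K @ (y - H @ x_pr)
post_cov = (np.eye(n) - K @ H) @ C
print("sample mean:", samples.mean(axis=0))
```

Each loop iteration is precisely the deterministic mapping discussed in the text: a prior draw is pushed to a posterior draw by minimizing a perturbed-data cost function. For nonlinear H the same loop only samples the posterior approximately, for the Jacobian-related reasons the abstract notes.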
Challenges in Downscaling Surge and Flooding Predictions Associated with Major Coastal Storm Events
NASA Astrophysics Data System (ADS)
Bowman, M. J.
2015-12-01
Coastal zone managers, elected officials and emergency planning personnel are continually seeking more reliable estimates of storm surge and inundation for better land use planning, the design, construction and operation of coastal defense systems, resilience evaluation and evacuation planning. Customers of modern regional weather and storm surge prediction models demand high resolution, speed and accuracy, with informative, interactive graphics and easy evaluation of potentially dangerous threats to life and property. These challenges continue to get more difficult as the demand for street-scale and even building-scale predictions increases. Fluctuations in sub-grid-scale wind and water velocities can lead to unsuspected, unanticipated and dangerous flooding in local communities. But how reliable and believable are these models given the inherent natural uncertainty and chaotic behavior in the underlying dynamics, which can lead to rapid and unexpected perturbations in the wind and pressure fields and hence coastal flooding? Traditionally this uncertainty has been quantified by the ensemble method, in which a suite of model runs is made with varying physics and initial conditions, presenting the mean and variance of the ensemble as the best metrics possible. This assumes, however, that each member is equally likely and statistically independent of the others, which is rarely the case, although the "safety in numbers" approach is comforting to those faced with life and death decisions. An example of the ensemble method is presented for the trajectory of superstorm Sandy's storm center as it approached coastal New Jersey. If one were to ask the question "was Sandy a worst case scenario", the answer would be "no: small variations in the timing (vis-à-vis tide phase) and location of landfall could easily have led to an additional surge of +50 cm at The Battery NY with even more catastrophic consequences to those experienced".
Accessing protein conformational ensembles using room-temperature X-ray crystallography
Fraser, James S.; van den Bedem, Henry; Samelson, Avi J.; Lang, P. Therese; Holton, James M.; Echols, Nathaniel; Alber, Tom
2011-01-01
Modern protein crystal structures are based nearly exclusively on X-ray data collected at cryogenic temperatures (generally 100 K). The cooling process is thought to introduce little bias in the functional interpretation of structural results, because cryogenic temperatures minimally perturb the overall protein backbone fold. In contrast, here we show that flash cooling biases previously hidden structural ensembles in protein crystals. By analyzing available data for 30 different proteins using new computational tools for electron-density sampling, model refinement, and molecular packing analysis, we found that crystal cryocooling remodels the conformational distributions of more than 35% of side chains and eliminates packing defects necessary for functional motions. In the signaling switch protein, H-Ras, an allosteric network consistent with fluctuations detected in solution by NMR was uncovered in the room-temperature, but not the cryogenic, electron-density maps. These results expose a bias in structural databases toward smaller, overpacked, and unrealistically unique models. Monitoring room-temperature conformational ensembles by X-ray crystallography can reveal motions crucial for catalysis, ligand binding, and allosteric regulation. PMID:21918110
Holographic Jet Shapes and their Evolution in Strongly Coupled Plasma
NASA Astrophysics Data System (ADS)
Brewer, Jasmine; Rajagopal, Krishna; Sadofyev, Andrey; van der Schee, Wilke
2017-11-01
Recently our group analyzed how the probability distribution for the jet opening angle is modified in an ensemble of jets that has propagated through an expanding cooling droplet of plasma [K. Rajagopal, A. V. Sadofyev, W. van der Schee, Phys. Rev. Lett. 116 (2016) 211603]. Each jet in the ensemble is represented holographically by a string in the dual 4+1-dimensional gravitational theory, with the distribution of initial energies and opening angles in the ensemble given by perturbative QCD. In [K. Rajagopal, A. V. Sadofyev, W. van der Schee, Phys. Rev. Lett. 116 (2016) 211603], the full string dynamics were approximated by assuming that the string moves at the speed of light. We are now able to analyze the full string dynamics for a range of possible initial conditions, giving us access to the dynamics of holographic jets just after their creation. The nullification timescale and the features of the string when it has nullified are all results of the string evolution. This emboldens us to analyze the full jet shape modification, rather than just the opening angle modification of each jet in the ensemble as in [K. Rajagopal, A. V. Sadofyev, W. van der Schee, Phys. Rev. Lett. 116 (2016) 211603]. We find that the jet shape scales with the opening angle at any particular energy. We construct an ensemble of dijets with energies and energy asymmetry distributions taken from events in proton-proton collisions, opening angle distribution as in [K. Rajagopal, A. V. Sadofyev, W. van der Schee, Phys. Rev. Lett. 116 (2016) 211603], and jet shape taken from proton-proton collisions and scaled according to our result. We study how these observables are modified after we send the ensemble of dijets through the strongly-coupled plasma.
NASA Astrophysics Data System (ADS)
Yin, Shengwen; Yu, Dejie; Yin, Hui; Lü, Hui; Xia, Baizhan
2017-09-01
Considering the epistemic uncertainties within the hybrid Finite Element/Statistical Energy Analysis (FE/SEA) model when it is used for the response analysis of built-up systems in the mid-frequency range, the hybrid Evidence Theory-based Finite Element/Statistical Energy Analysis (ETFE/SEA) model is established by introducing evidence theory. Based on the hybrid ETFE/SEA model and the sub-interval perturbation technique, the hybrid Sub-interval Perturbation and Evidence Theory-based Finite Element/Statistical Energy Analysis (SIP-ETFE/SEA) approach is proposed. In the hybrid ETFE/SEA model, the uncertainty in the SEA subsystem is modeled by a non-parametric ensemble, while the uncertainty in the FE subsystem is described by the focal element and basic probability assignment (BPA) and handled with evidence theory. Within the hybrid SIP-ETFE/SEA approach, the mid-frequency response of interest, such as the ensemble average of the energy response and the cross-spectrum response, is calculated analytically by using the conventional hybrid FE/SEA method. Inspired by probability theory, the intervals of the mean value, variance and cumulative distribution are used to describe the distribution characteristics of mid-frequency responses of built-up systems with epistemic uncertainties. To alleviate the computational burden of the extreme-value analysis, the sub-interval perturbation technique based on the first-order Taylor series expansion is used in the ETFE/SEA model to acquire the lower and upper bounds of the mid-frequency responses over each focal element. Three numerical examples are given to illustrate the feasibility and effectiveness of the proposed method.
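The sub-interval perturbation idea, a first-order Taylor bound applied on each sub-interval and then enveloped, can be sketched for a scalar response. Here sin(x) merely stands in for the mid-frequency response; the function, interval, and subdivision count are illustrative assumptions.

```python
import math

def taylor_bounds(f, fprime, x0, half_width):
    """First-order Taylor estimate of the response bounds over
    [x0 - h, x0 + h]: f(x0) minus/plus |f'(x0)| * h."""
    spread = abs(fprime(x0)) * half_width
    return f(x0) - spread, f(x0) + spread

def subinterval_bounds(f, fprime, lo, hi, n_sub):
    """Split [lo, hi] into n_sub sub-intervals, apply the first-order
    estimate on each, and take the envelope (the sub-interval technique)."""
    width = (hi - lo) / n_sub
    bounds = [taylor_bounds(f, fprime, lo + (i + 0.5) * width, width / 2)
              for i in range(n_sub)]
    return min(b[0] for b in bounds), max(b[1] for b in bounds)

f, fp = math.sin, math.cos
lo_1, hi_1 = subinterval_bounds(f, fp, 0.0, 1.0, 1)   # one coarse interval
lo_8, hi_8 = subinterval_bounds(f, fp, 0.0, 1.0, 8)   # eight sub-intervals
```

Refining the subdivision tightens the estimated envelope toward the true range [0, sin 1], which is the motivation for working focal element by focal element.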
NASA Astrophysics Data System (ADS)
Li, Xiaojing; Tang, Youmin; Yao, Zhixiong
2017-04-01
The predictability of the convection related to the Madden-Julian Oscillation (MJO) is studied using the coupled Community Earth System Model (CESM) and the climatically relevant singular vector (CSV) approach. The CSV approach is an ensemble-based strategy to calculate the optimal initial error on the climate scale. In this study, we focus on the optimal initial error of the sea surface temperature in the Indian Ocean (IO), where the MJO onset occurs. Six MJO events are chosen from 10 years of model simulation output. The results show that the large values of the SVs are mainly located in the Bay of Bengal and the south central IO (around (25°S, 90°E)), forming a meridional dipole-like pattern. The fast error growth of the CSVs has important impacts on the prediction of the convection related to the MJO. The initial perturbations with the SV pattern cause the deep convection to damp more quickly in the east Pacific Ocean. Moreover, sensitivity studies of the CSVs show that different initial fields do not noticeably affect the CSVs, whereas the perturbation domain has a stronger influence on them. The rapid growth of the CSVs is found to be related to the western Bay of Bengal, where the wind stress starts to be perturbed due to the CSV initial error. These results contribute to the establishment of an ensemble prediction system, as well as an optimal observation network. In addition, analysis of the error growth can offer some insight into the relationship between SST and the intraseasonal convection related to the MJO.
ICE CONTROL - Towards optimizing wind energy production during icing events
NASA Astrophysics Data System (ADS)
Dorninger, Manfred; Strauss, Lukas; Serafin, Stefano; Beck, Alexander; Wittmann, Christoph; Weidle, Florian; Meier, Florian; Bourgeois, Saskia; Cattin, René; Burchhart, Thomas; Fink, Martin
2017-04-01
Forecasts of wind power production loss caused by icing weather conditions are produced by a chain of physical models. The model chain consists of a numerical weather prediction model, an icing model and a production loss model. Each element of the model chain is affected by significant uncertainty, which can be quantified using targeted observations and a probabilistic forecasting approach. In this contribution, we present preliminary results from the recently launched project ICE CONTROL, an Austrian research initiative on measurements, probabilistic forecasting, and verification of icing on wind turbine blades. ICE CONTROL includes an experimental field phase, consisting of measurement campaigns in a wind park in Rhineland-Palatinate, Germany, in the winters 2016/17 and 2017/18. Instruments deployed during the campaigns consist of a conventional icing detector on the turbine hub and newly devised ice sensors (eologix Sensor System) on the turbine blades, as well as meteorological sensors for wind, temperature, humidity, visibility, and precipitation type and spectra. Liquid water content and spectral characteristics of super-cooled water droplets are measured using a Fog Monitor FM-120. Three cameras document the icing conditions on the instruments and on the blades. Different modelling approaches are used to quantify the components of the model-chain uncertainties. The uncertainty related to the initial conditions of the weather prediction is evaluated using the existing global ensemble prediction system (EPS) of the European Centre for Medium-Range Weather Forecasts (ECMWF). Furthermore, observation system experiments are conducted with the AROME model and its 3D-Var data assimilation to investigate the impact of additional observations (such as Mode-S aircraft data, SCADA data and MSG cloud mask initialization) on the numerical icing forecast. 
The uncertainty related to model formulation is estimated from multi-physics ensembles based on the Weather Research and Forecasting model (WRF) by perturbing parameters in the physical parameterization schemes. In addition, uncertainties of the icing model and of its adaptations to the rotating turbine blade are addressed. The model forecasts combined with the suite of instruments and their measurements make it possible to conduct a step-wise verification of all the components of the model chain - a novel aspect compared to similar ongoing and completed forecasting projects.
NASA Technical Reports Server (NTRS)
Taylor, Patrick C.; Baker, Noel C.
2015-01-01
Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distribution of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are, therefore, required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaptation planning studies. These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equal-weighted average: one model, one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity, influencing the projection of individual climate variables differently. Presented here is a new approach, termed the "Intelligent Ensemble", that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., the precipitation probability distribution. This approach provides added value over the equal-weighted average method. Physical process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.
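The difference between one-model-one-vote and a process-weighted mean reduces to a weighted average. The projections and skill scores below are hypothetical numbers, not CMIP5 results, and the scoring itself (against satellite and surface data) is where the real work lies.

```python
def intelligent_ensemble_mean(projections, skill_scores):
    """Skill-weighted ensemble mean: each model's projection is weighted by
    its normalized process-based skill score instead of one-model-one-vote."""
    total = sum(skill_scores)
    weights = [s / total for s in skill_scores]
    return sum(w * p for w, p in zip(weights, projections))

projections = [2.0, 3.0, 4.0]   # hypothetical warming projections (K)
skills = [1.0, 1.0, 2.0]        # hypothetical process-fidelity scores
weighted = intelligent_ensemble_mean(projections, skills)
equal = sum(projections) / len(projections)
```

The better-scoring third model pulls the weighted mean above the equal-weighted one, which is exactly the added value the approach claims.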
NASA Technical Reports Server (NTRS)
Todling, Ricardo; Diniz, F. L. R.; Takacs, L. L.; Suarez, M. J.
2018-01-01
Many hybrid data assimilation systems currently used for NWP employ some form of dual-analysis approach. Typically a hybrid variational analysis is responsible for creating initial conditions for high-resolution forecasts, and an ensemble analysis system is responsible for creating sample perturbations used to form the flow-dependent part of the background error covariance required in the hybrid analysis component. In many of these, the two analysis components employ different methodologies, e.g., variational and ensemble Kalman filter. In such cases, it is not uncommon to have observations treated rather differently between the two analysis components; recentering of the ensemble analysis around the hybrid analysis is used to compensate for such differences. Furthermore, in many cases, the hybrid variational high-resolution system implements some type of four-dimensional approach, whereas the underlying ensemble system relies on a three-dimensional approach, which again introduces discrepancies in the overall system. Connected to these is the expectation that one can reliably estimate observation impact on forecasts issued from hybrid analyses by using an ensemble approach based on the underlying ensemble strategy of dual-analysis systems. Just the realization that the ensemble analysis makes substantially different use of observations as compared to its hybrid counterpart should serve as enough evidence of the implausibility of such an expectation. This presentation assembles considerable anecdotal evidence to illustrate the fact that hybrid dual-analysis systems must, at the very minimum, strive for consistent use of the observations in both analysis sub-components. Simpler than that, this work suggests that hybrid systems can reliably be constructed without the need to employ a dual-analysis approach. In practice, the idea of relying on a single analysis system is appealing from a cost-maintenance perspective.
More generally, single-analysis systems avoid inconsistencies such as having one sub-component generate performance diagnostics for another, possibly not fully consistent, component.
Zhou, Shiqi; Jamnik, Andrej
2005-09-22
The structure of a Lennard-Jones (LJ) fluid subjected to diverse external fields, maintaining equilibrium with the bulk LJ fluid, is studied on the basis of the third-order+second-order perturbation density-functional approximation (DFA). The chosen density and potential parameters for the bulk fluid correspond to conditions situated in "dangerous" regions of the phase diagram, i.e., near the critical temperature or close to the gas-liquid coexistence curve. The accuracy of the DFA predictions is tested against the results of a grand canonical ensemble Monte Carlo simulation. It is found that the DFA theory presented in this work performs successfully for the nonuniform LJ fluid only if the required bulk second-order direct correlation function is highly accurate. The present report further indicates that the proposed perturbation DFA is efficient and suitable for both supercritical and subcritical temperatures.
NASA Astrophysics Data System (ADS)
Luo, Jing-Jia; Masson, Sebastien; Behera, Swadhin; Shingu, Satoru; Yamagata, Toshio
2005-11-01
Predictabilities of tropical climate signals are investigated using a relatively high resolution Scale Interaction Experiment Frontier Research Center for Global Change (FRCGC) coupled GCM (SINTEX-F). Five ensemble forecast members are generated by perturbing the model’s coupling physics, which accounts for the uncertainties of both initial conditions and model physics. Because of the model’s good performance in simulating the climatology and ENSO in the tropical Pacific, a simple coupled SST-nudging scheme generates realistic thermocline and surface wind variations in the equatorial Pacific. Several westerly and easterly wind bursts in the western Pacific are also captured. Hindcast results for the period 1982-2001 show a high predictability of ENSO. All past El Niño and La Niña events, including the strongest 1997/98 warm episode, are successfully predicted with anomaly correlation coefficient (ACC) skill scores above 0.7 at the 12-month lead time. The predicted signals of some particular events, however, become weak with a delay in the phase at mid and long lead times. This is found to be related to the intraseasonal wind bursts that are unpredicted beyond a few months of lead time. The model forecasts also show a “spring prediction barrier” similar to that in observations. Spatial SST anomalies, teleconnection, and global drought/flood during three different phases of ENSO are successfully predicted at 9-12-month lead times. In the tropical North Atlantic and southwestern Indian Ocean, where ENSO has predominant influences, the model shows skillful predictions at 7-12-month lead times. The distinct signal of the Indian Ocean dipole (IOD) event in 1994 is predicted at the 6-month lead time. SST anomalies near the western coast of Australia are also predicted beyond the 12-month lead time because of pronounced decadal signals there.
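The anomaly correlation coefficient used as the skill score above is the correlation between forecast and observed anomalies about a shared climatology. A minimal sketch with synthetic series (the hindcast data themselves are not reproduced here):

```python
import math

def anomaly_correlation(forecast, observed, climatology):
    """Anomaly correlation coefficient (ACC) between forecast and observed
    anomalies, both taken about the same climatology."""
    fa = [f - c for f, c in zip(forecast, climatology)]
    oa = [o - c for o, c in zip(observed, climatology)]
    num = sum(f * o for f, o in zip(fa, oa))
    den = math.sqrt(sum(f * f for f in fa) * sum(o * o for o in oa))
    return num / den

clim = [0.0, 0.0, 0.0, 0.0]
obs = [1.0, -1.0, 2.0, -2.0]             # synthetic observed anomalies
acc_perfect = anomaly_correlation(obs, obs, clim)
acc_opposed = anomaly_correlation([-o for o in obs], obs, clim)
```

A perfect forecast scores 1, a phase-reversed one scores -1; the 0.7 threshold quoted above is a conventional cutoff for a useful ENSO forecast.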
A Localized Ensemble Kalman Smoother
NASA Technical Reports Server (NTRS)
Butala, Mark D.
2012-01-01
Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality and the standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
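A minimal perturbed-observation ensemble Kalman update with covariance localization conveys the flavor of such high-dimensional Monte-Carlo methods (a smoother additionally carries these increments backward in time). The taper here is a simple linear decay, not the Gaspari-Cohn function commonly used, and all sizes and values are illustrative.

```python
import numpy as np

def linear_taper(dist, radius):
    """Compactly supported localization taper: linear decay to zero at the
    localization radius (a stand-in for the usual Gaspari-Cohn polynomial)."""
    return np.maximum(0.0, 1.0 - dist / radius)

def localized_enkf_update(ens, obs_value, obs_idx, obs_var, coords, radius):
    """Perturbed-observation EnKF analysis for one scalar observation of
    state component obs_idx, with Schur-product covariance localization."""
    n_ens, _ = ens.shape
    X = ens - ens.mean(axis=0)
    cov_xy = X.T @ X[:, obs_idx] / (n_ens - 1)     # cov(state, observed variable)
    var_y = cov_xy[obs_idx]
    taper = linear_taper(np.abs(coords - coords[obs_idx]), radius)
    K = taper * cov_xy / (var_y + obs_var)         # localized Kalman gain
    rng = np.random.default_rng(1)
    pert_obs = obs_value + rng.normal(0.0, np.sqrt(obs_var), n_ens)
    return ens + np.outer(pert_obs - ens[:, obs_idx], K)

rng = np.random.default_rng(0)
ens = rng.normal(size=(20, 5))                     # 20 members, 5 grid points
coords = np.arange(5.0)
analysis = localized_enkf_update(ens, obs_value=1.0, obs_idx=0, obs_var=0.25,
                                 coords=coords, radius=2.0)
```

Localization zeroes the increment beyond the radius, which is what keeps the method tractable and statistically stable in high dimensions with small ensembles.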
Rethinking the Default Construction of Multimodel Climate Ensembles
Rauser, Florian; Gleckler, Peter; Marotzke, Jochem
2015-07-21
Here, we discuss the current code of practice in the climate sciences to routinely create climate model ensembles as ensembles of opportunity from the newest phase of the Coupled Model Intercomparison Project (CMIP). We give a two-step argument to rethink this process. First, the differences between generations of ensembles corresponding to different CMIP phases in key climate quantities are not large enough to warrant an automatic separation into generational ensembles for CMIP3 and CMIP5. Second, we suggest that climate model ensembles cannot continue to be mere ensembles of opportunity but should always be based on a transparent scientific decision process. If ensembles can be constrained by observation, then they should be constructed as target ensembles that are specifically tailored to a physical question. If model ensembles cannot be constrained by observation, then they should be constructed as cross-generational ensembles, including all available model data to enhance structural model diversity and to better sample the underlying uncertainties. To facilitate this, CMIP should guide the necessarily ongoing process of updating experimental protocols for the evaluation and documentation of coupled models. Finally, with an emphasis on easy access to model data and facilitating the filtering of climate model data across all CMIP generations and experiments, our community could return to the underlying idea of using model data ensembles to improve uncertainty quantification, evaluation, and cross-institutional exchange.
Ensembles of physical states and random quantum circuits on graphs
NASA Astrophysics Data System (ADS)
Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo
2012-11-01
In this paper we continue and extend the investigations of the ensembles of random physical states introduced in Hamma et al. [Phys. Rev. Lett. 109, 040502 (2012)]. These ensembles are constructed by finite-length random quantum circuits (RQC) acting on the (hyper)edges of an underlying (hyper)graph structure. The latter encodes the locality structure associated with finite-time quantum evolutions generated by physical, i.e., local, Hamiltonians. Our goal is to analyze physical properties of typical states in these ensembles; in particular, here we focus on proxies of quantum entanglement such as purity and α-Renyi entropies. The problem is formulated in terms of matrix elements of superoperators which depend on the graph structure, the choice of probability measure over the local unitaries, and the circuit length. In the α=2 case these superoperators act on a restricted multiqubit space generated by permutation operators associated to the subsets of vertices of the graph. For permutationally invariant interactions the dynamics can be further restricted to an exponentially smaller subspace. We consider different families of RQCs and study their typical entanglement properties for finite time as well as their asymptotic behavior. We find that the area law holds on average and that the volume law is a typical property of physical states (that is, it holds on average and the fluctuations around the average vanish for large systems). The area law arises when the evolution time is O(1) with respect to the size L of the system, while the volume law arises, and is typical, when the evolution time scales like O(L).
Sanchez-Martinez, M; Crehuet, R
2014-12-21
We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however, even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between the two force fields. Thus the spread of IDP ensembles highlights the need for better force fields. We distribute our algorithm as an open-source Python code.
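A one-observable version of maximum-entropy re-weighting shows the mechanics: the minimally perturbed weights are exponential in the observable, with the Lagrange multiplier tuned so that the ensemble average matches the data. The per-structure values and target below are invented for illustration; the published method handles many couplings simultaneously.

```python
import math

def maxent_reweight(values, target, lam_lo=-50.0, lam_hi=50.0):
    """Maximum-entropy re-weighting of a uniform ensemble so that the
    weighted mean of one observable matches its target. The weights take
    the exponential form w_i proportional to exp(lam * f_i); the multiplier
    lam is found by bisection (the weighted mean is monotone in lam)."""
    def weighted_mean(lam):
        ws = [math.exp(lam * v) for v in values]
        return sum(w * v for w, v in zip(ws, values)) / sum(ws)
    for _ in range(200):
        lam = 0.5 * (lam_lo + lam_hi)
        if weighted_mean(lam) < target:
            lam_lo = lam
        else:
            lam_hi = lam
    ws = [math.exp(lam * v) for v in values]
    z = sum(ws)
    return [w / z for w in ws]

# Hypothetical per-structure back-calculated observable values and target mean.
rdcs = [-2.0, -0.5, 0.5, 1.0, 3.0]
weights = maxent_reweight(rdcs, target=1.0)
mean = sum(w * v for w, v in zip(weights, rdcs))
```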
Dynamical predictive power of the generalized Gibbs ensemble revealed in a second quench.
Zhang, J M; Cui, F C; Hu, Jiangping
2012-04-01
We show that a quenched and relaxed completely integrable system is hardly distinguishable from the corresponding generalized Gibbs ensemble in a dynamical sense. To be specific, the response of the quenched and relaxed system to a second quench can be accurately reproduced by using the generalized Gibbs ensemble as a substitute. Remarkably, as demonstrated with the transverse Ising model and the hard-core bosons in one dimension, not only the steady values but even the transient, relaxation dynamics of the physical variables can be accurately reproduced by using the generalized Gibbs ensemble as a pseudoinitial state. This result is an important complement to the previously established result that a quenched and relaxed system is hardly distinguishable from the generalized Gibbs ensemble in a static sense. The relevance of the generalized Gibbs ensemble in the nonequilibrium dynamics of completely integrable systems is then greatly strengthened.
Assimilation of sea ice concentration data in the Arctic via DART/CICE5 in the CESM1
NASA Astrophysics Data System (ADS)
Zhang, Y.; Bitz, C. M.; Anderson, J. L.; Collins, N.; Hendricks, J.; Hoar, T. J.; Raeder, K.
2016-12-01
Arctic sea ice cover has been experiencing significant reduction in the past few decades. Climate models predict that the Arctic Ocean may be ice-free in late summer within a few decades. Better sea ice prediction is crucial for regional and global climate predictions that are vital to human activities such as maritime shipping and subsistence hunting, as well as wildlife protection as animals face habitat loss. The physical processes involved with the persistence and re-emergence of sea ice cover are found to extend the predictability of sea ice concentration (SIC) and thickness at the regional scale up to several years. This motivates us to investigate sea ice predictability stemming from initial values of the sea ice cover. Data assimilation is a useful technique to combine observations and model forecasts to reconstruct the states of sea ice in the past and provide more accurate initial conditions for sea ice prediction. This work links the most recent version of the Los Alamos sea ice model (CICE5) within the Community Earth System Model version 1.5 (CESM1.5) and the Data Assimilation Research Testbed (DART). The linked DART/CICE5 is ideal for assimilating multi-scale and multivariate sea ice observations using an ensemble Kalman filter (EnKF). The study focuses on the assimilation of SIC data and its impact on SIC, sea ice thickness, and snow thickness. The ensemble sea ice model states are constructed by introducing uncertainties in atmospheric forcing and key model parameters. The ensemble atmospheric forcing is a reanalysis product generated with DART and the Community Atmosphere Model (CAM). We also perturb two model parameters that were found to contribute significantly to the model uncertainty in previous studies. This study applies perfect-model observing system simulation experiments (OSSEs) to investigate data assimilation algorithms and post-processing methods. One of the ensemble members of a CICE5 free run is chosen as the truth.
Daily synthetic observations are obtained by adding 15% random noise to the truth. Experiments assimilating the synthetic observations are then conducted to test the effectiveness of different data assimilation algorithms (e.g., localization and inflation) and post-processing methods (e.g., how to distribute the total increment of SIC into each ice thickness category).
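Generating the daily synthetic observations in such an OSSE amounts to perturbing the truth run with noise at the stated 15% level. This sketch assumes multiplicative Gaussian noise and clipping to the physical SIC range [0, 1], details the abstract does not specify.

```python
import random

def synthetic_obs(truth, noise_frac=0.15, seed=42):
    """OSSE-style synthetic observations: the truth-run values plus
    multiplicative random noise at the stated 15% level, clipped to the
    valid sea ice concentration range [0, 1]."""
    rng = random.Random(seed)
    obs = []
    for t in truth:
        o = t * (1.0 + rng.gauss(0.0, noise_frac))
        obs.append(min(1.0, max(0.0, o)))
    return obs

truth_sic = [0.0, 0.2, 0.8, 1.0]   # hypothetical true sea ice concentrations
obs_sic = synthetic_obs(truth_sic)
```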
Coherently coupling distinct spin ensembles through a high-Tc superconducting resonator
NASA Astrophysics Data System (ADS)
Ghirri, A.; Bonizzoni, C.; Troiani, F.; Buccheri, N.; Beverina, L.; Cassinese, A.; Affronte, M.
2016-06-01
The problem of coupling multiple spin ensembles through cavity photons is revisited by using (3,5-dichloro-4-pyridyl)bis(2,4,6-trichlorophenyl)methyl (PyBTM) organic radicals and a high-Tc superconducting coplanar resonator. An exceptionally strong coupling is obtained, and up to three spin ensembles are simultaneously coupled. The ensembles are made physically distinguishable by chemically varying the g factor and by exploiting the inhomogeneities of the applied magnetic field. The coherent mixing of the spin and field modes is demonstrated by the observed multiple anticrossings, along with simulations performed within the input-output formalism, and quantified by suitable entropic measures.
Constraining Future Sea Level Rise Estimates from the Amundsen Sea Embayment, West Antarctica
NASA Astrophysics Data System (ADS)
Nias, I.; Cornford, S. L.; Edwards, T.; Gourmelen, N.; Payne, A. J.
2016-12-01
The Amundsen Sea Embayment (ASE) is the primary source of mass loss from the West Antarctic Ice Sheet. The catchment is particularly susceptible to grounding line retreat, because the ice sheet is grounded on bedrock that is below sea level and deepening towards its interior. Mass loss from the ASE ice streams, which include Pine Island, Thwaites and Smith glaciers, is a major uncertainty on future sea level rise, and understanding the dynamics of these ice streams is essential to constraining this uncertainty. The aim of this study is to construct a distribution of future ASE sea level contributions from an ensemble of ice sheet model simulations and observations of surface elevation change. A 284-member ensemble was performed using BISICLES, a vertically-integrated ice flow model with adaptive mesh refinement. Within the ensemble, parameters associated with basal traction, ice rheology and sub-shelf melt rate were perturbed, and the effects of bed topography and the sliding law were also investigated. Initially each configuration was run to 50 model years. Satellite observations of surface height change were then used within a Bayesian framework to assign likelihoods to each ensemble member. Simulations that better reproduced the current thinning patterns across the catchment were given a higher score. The resulting posterior distribution of sea level contributions is narrower than the prior distribution, although the central estimates of sea level rise are similar between the prior and posterior. The most extreme simulations were eliminated, and the remaining ensemble members were extended to 200 years using a simple melt rate forcing.
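Scoring ensemble members against observed thinning can be sketched as Gaussian likelihood weighting under a uniform prior. The misfit values and error scale sigma below are hypothetical, and the actual study scores spatial fields of elevation change rather than scalars.

```python
import math

def posterior_weights(misfits, sigma):
    """Normalized Gaussian-likelihood weights for ensemble members, given
    their misfit to observations and an assumed error scale (uniform prior)."""
    logl = [-0.5 * (m / sigma) ** 2 for m in misfits]
    ref = max(logl)                          # subtract the max for numerical safety
    ws = [math.exp(l - ref) for l in logl]
    z = sum(ws)
    return [w / z for w in ws]

# Hypothetical RMS misfits (m/yr) between modeled and observed elevation change.
misfits = [0.1, 0.5, 1.0, 2.0]
weights = posterior_weights(misfits, sigma=0.5)
```

Members that reproduce the observed thinning get most of the weight, so the weighted (posterior) distribution of sea level contributions narrows relative to the prior, as reported above.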
NASA Astrophysics Data System (ADS)
Schunk, R. W.; Scherliess, L.; Eccles, V.; Gardner, L. C.; Sojka, J. J.; Zhu, L.; Pi, X.; Mannucci, A. J.; Komjathy, A.; Wang, C.; Rosen, G.
2016-12-01
As part of the NASA-NSF Space Weather Modeling Collaboration, we created a Multimodel Ensemble Prediction System (MEPS) for the Ionosphere-Thermosphere-Electrodynamics system that is based on Data Assimilation (DA) models. MEPS is composed of seven physics-based data assimilation models that cover the globe. Ensemble modeling can be conducted for the mid-low latitude ionosphere using the four GAIM data assimilation models, including the Gauss Markov (GM), Full Physics (FP), Band Limited (BL) and 4DVAR DA models. These models can assimilate Total Electron Content (TEC) from a constellation of satellites, bottom-side electron density profiles from digisondes, in situ plasma densities, occultation data and ultraviolet emissions. The four GAIM models were run for the March 16-17, 2013, geomagnetic storm period with the same data, but we also systematically added new data types and re-ran the GAIM models to see how the different data types affected the GAIM results, with the emphasis on elucidating differences in the underlying ionospheric dynamics and thermospheric coupling. Also, for each scenario the outputs from the four GAIM models were used to produce an ensemble mean for TEC, NmF2, and hmF2. A simple average of the models was used in the ensemble averaging to see if there was an improvement of the ensemble average over the individual models. For the scenarios considered, the ensemble average yielded better specifications than the individual GAIM models. The model differences and averages, and the consequent differences in ionosphere-thermosphere coupling and dynamics will be discussed.
Admissible perturbations and false instabilities in PT-symmetric quantum systems
NASA Astrophysics Data System (ADS)
Znojil, Miloslav
2018-03-01
One of the most characteristic mathematical features of PT-symmetric quantum mechanics is the explicit Hamiltonian dependence of its physical Hilbert space of states, H = H(H). Some of the most important physical consequences are discussed, with emphasis on the dynamical regime in which the system is close to phase transition. A consistent perturbation treatment of such a regime is proposed. An illustrative application of the innovated perturbation theory to a non-Hermitian but PT-symmetric user-friendly family of J-parametric "discrete anharmonic" quantum Hamiltonians H = H(λ⃗) is provided. The models are shown to admit the standard probabilistic interpretation if and only if the parameters remain compatible with the reality of the spectrum, λ⃗ ∈ D^(physical). In contradiction to conventional wisdom, the systems are then shown to be stable with respect to admissible perturbations, inside the domain D^(physical), even in the immediate vicinity of the phase-transition boundaries ∂D^(physical).
Impacts of snow cover fraction data assimilation on modeled energy and moisture budgets
NASA Astrophysics Data System (ADS)
Arsenault, Kristi R.; Houser, Paul R.; De Lannoy, Gabriëlle J. M.; Dirmeyer, Paul A.
2013-07-01
Two data assimilation (DA) methods, a simple rule-based direct insertion (DI) approach and a one-dimensional ensemble Kalman filter (EnKF) method, are evaluated by assimilating snow cover fraction observations into the Community Land Model. The ensemble perturbation needed for the EnKF resulted in negative snowpack biases. Therefore, a correction is made to the ensemble bias using an approach that constrains the ensemble forecasts with a single unperturbed deterministic LSM run. This is shown to improve the final snow state analyses. The EnKF method produces slightly better results in higher-elevation locations, whereas results indicate that the DI method has a performance advantage in lower-elevation regions. In addition, the two DA methods are evaluated in terms of their overall impacts on the other land surface state variables (e.g., soil moisture) and fluxes (e.g., latent heat flux). The EnKF method is shown to have less impact overall than the DI method and causes less distortion of the hydrological budget. However, the land surface model adjusts more slowly to the smaller EnKF increments, which leads to smaller but slightly more persistent moisture budget errors than found with the DI updates. The DI method can almost instantly remove much of the modeled snowpack, but this also allows the model system to quickly revert to hydrological balance under nonsnowpack conditions.
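A minimal sketch of the two update rules compared above, for a single grid cell and a scalar observation: direct insertion overwrites the state, while the EnKF (shown here in its scalar Kalman-gain form) blends prior and observation by their error variances, producing the smaller, damped increments the abstract describes. All numbers (ensemble size, SWE values, error variances) are illustrative assumptions rather than CLM or study values.

```python
import numpy as np

rng = np.random.default_rng(2)

# One grid cell: an ensemble of modeled snow water equivalent (SWE, mm)
# and an observation mapped to an SWE-like value.
ensemble = rng.normal(120.0, 15.0, size=32)   # prior ensemble (mm)
obs = 90.0                                     # observation (mm)
obs_err_var = 10.0 ** 2

# Direct insertion: overwrite the model state with the observation.
di_analysis = np.full_like(ensemble, obs)

# EnKF (scalar form): weight prior and observation by their error
# variances; increments are damped when the observation is noisy.
prior_var = ensemble.var(ddof=1)
gain = prior_var / (prior_var + obs_err_var)
enkf_analysis = ensemble + gain * (obs - ensemble)

print(f"prior mean {ensemble.mean():.1f}, "
      f"EnKF mean {enkf_analysis.mean():.1f}, DI mean {di_analysis.mean():.1f}")
```

Because the gain lies strictly between 0 and 1, the EnKF analysis mean lands between the prior mean and the observation, whereas DI jumps all the way to the observation; this is the mechanism behind the slower EnKF adjustment noted above.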
A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions
NASA Astrophysics Data System (ADS)
Lienert, Sebastian; Joos, Fortunat
2018-05-01
A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties, Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90 % confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated, and estimates of ELUC for the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as the parameter-induced uncertainty and in some cases could potentially even be offset with an appropriate parameter choice.
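The Latin hypercube construction of a 1000-member perturbed parameter ensemble can be sketched as below: each parameter range is split into 1000 equal-probability strata, one sample is drawn per stratum, and the strata are shuffled independently per dimension. The parameter names and ranges are hypothetical placeholders, not LPX-Bern parameters.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Stratified LHS: one sample per equal-probability bin per dimension."""
    n_dims = len(bounds)
    # Random position within each of n_samples strata, per dimension.
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    # Shuffle strata independently per dimension to decorrelate them.
    for d in range(n_dims):
        u[:, d] = u[rng.permutation(n_samples), d]
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(3)
# Hypothetical DGVM parameter ranges (names illustrative only).
bounds = [(0.1, 0.9),   # e.g. a photosynthesis scaling factor
          (5.0, 60.0),  # e.g. a carbon turnover time in years
          (0.0, 1.0)]   # e.g. a fire-disturbance fraction
ensemble = latin_hypercube(1000, bounds, rng)
print(ensemble.shape)  # (1000, 3)
```

Unlike plain random sampling, LHS guarantees that every stratum of every parameter range is visited exactly once, which is why it covers a high-dimensional parameter space efficiently with a fixed ensemble size.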
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yawen; Zhang, Kai; Qian, Yun
Aerosols from fire emissions can potentially have a large impact on clouds and radiation. However, fire aerosol sources are often intermittent, and their effect on weather and climate is difficult to quantify. Here we investigated the short-term effective radiative forcing of fire aerosols using the global aerosol-climate model Community Atmosphere Model version 5 (CAM5). Different from previous studies, we used nudged hindcast ensembles to quantify the forcing uncertainty due to the chaotic response to small perturbations in the atmosphere state. Daily mean emissions from three fire inventories were used to consider the uncertainty in emission strength and injection heights. The simulated aerosol optical depth (AOD) and mass concentrations were evaluated against in situ measurements and reanalysis data. Overall, the results show the model has reasonably good predictive skill. Short (10-day) nudged ensemble simulations were then performed with and without fire emissions to estimate the effective radiative forcing. Results show fire aerosols have large effects on both liquid and ice clouds over the two selected regions in April 2009. Ensemble mean results show a strong negative shortwave cloud radiative effect (SCRE) over almost the entirety of southern Mexico, with a 10-day regional mean value of −3.0 W m⁻². Over the central US, the SCRE is positive in the north but negative in the south, and the regional mean SCRE is small (−0.56 W m⁻²). For the 10-day average, we found a large ensemble spread of regional mean shortwave cloud radiative effect over southern Mexico (15.6 % of the corresponding ensemble mean) and the central US (64.3 %), despite the regional mean AOD time series being almost indistinguishable during the 10-day period. Moreover, the ensemble spread is much larger when using daily averages instead of 10-day averages.
In conclusion, this demonstrates the importance of using a large ensemble of simulations to estimate the short-term aerosol effective radiative forcing.
2018-01-03
NASA Astrophysics Data System (ADS)
Sanderson, B. M.
2017-12-01
The CMIP ensembles represent the most comprehensive source of information available to decision-makers for climate adaptation, yet it is clear that there are fundamental limitations in our ability to treat the ensemble as an unbiased sample of possible future climate trajectories. There is considerable evidence that models are not independent, and increasing complexity and resolution combined with computational constraints prevent a thorough exploration of parametric uncertainty or internal variability. Although more data than ever are available for calibration, the optimization of each model is influenced by institutional priorities, historical precedent and available resources. The resulting ensemble thus represents a miscellany of climate simulators that defies traditional statistical interpretation. Models are in some cases interdependent, but are sufficiently complex that the degree of interdependency is conditional on the application. Configurations have been updated using available observations to some degree, but not in a consistent or easily identifiable fashion. This means that the ensemble cannot be viewed as a true posterior distribution updated by available data, but nor can observational data alone be used to assess individual model likelihood. We assess recent literature for combining projections from an imperfect ensemble of climate simulators. Beginning with our published methodology for addressing model interdependency and skill in the weighting scheme for the 4th US National Climate Assessment, we consider strategies for incorporating process-based constraints on future response, perturbed parameter experiments and multi-model output into an integrated framework. We focus on a number of guiding questions: Is the traditional framework of confidence in projections inferred from model agreement leading to biased or misleading conclusions?
Can the benefits of upweighting skillful models be reconciled with the increased risk of truth lying outside the weighted ensemble distribution? If CMIP is an ensemble of partially informed best-guesses, can we infer anything about the parent distribution of all possible models of the climate system (and if not, are we implicitly under-representing the risk of a climate catastrophe outside of the envelope of CMIP simulations)?
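A sketch of the general skill-and-independence weighting idea referenced above (in the spirit of the scheme used for the 4th US National Climate Assessment, but heavily simplified): each model is upweighted for closeness to observations and downweighted for closeness to other models. The distance matrices and radius parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical distances: model-to-observation (skill) and pairwise
# inter-model distances (e.g. RMS difference of some observable field).
n = 10
d_obs = rng.uniform(0.5, 2.0, size=n)          # skill distances
d_mod = rng.uniform(0.5, 2.0, size=(n, n))     # pairwise distances
d_mod = 0.5 * (d_mod + d_mod.T)
np.fill_diagonal(d_mod, 0.0)

D_skill, D_indep = 1.0, 1.0   # radii of similarity (tuning choices)

# Skill weight: models close to observations are upweighted.
w_skill = np.exp(-(d_obs / D_skill) ** 2)

# Independence weight: models with many near-duplicates are downweighted.
# Each row sum includes the model's similarity to itself (the diagonal
# term exp(0) = 1), so it acts as an effective repetition count >= 1.
s = np.exp(-(d_mod / D_indep) ** 2)
w_indep = 1.0 / s.sum(axis=1)

w = w_skill * w_indep
w /= w.sum()
print(np.round(w, 3))
```

The two radii control how aggressively skill is rewarded and duplication penalized, which connects directly to the guiding question above about upweighting skillful models versus the risk of truth falling outside the weighted distribution.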
Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data
Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the f...
NASA Astrophysics Data System (ADS)
Biju, K. G.; Bagchi, Joydeep; Ishwara-Chandra, C. H.; Pandey-Pommier, M.; Jacob, Joe; Patil, M. K.; Kumar, P. Sunil; Pandge, Mahadev; Dabhade, Pratik; Gaikwad, Madhuri; Dhurde, Samir; Abraham, Sheelu; Vivek, M.; Mahabal, Ashish A.; Djorgovski, S. G.
2017-10-01
We report the results of our radio, optical and infrared studies of a peculiar radio source 4C 35.06, an extended radio-loud active galactic nucleus (AGN) at the centre of galaxy cluster Abell 407 (z = 0.047). The central region of this cluster hosts a remarkably tight ensemble of nine galaxies, the spectra of which resemble those of passive red ellipticals, embedded within a diffuse stellar halo of ~1 arcmin size. This system (named 'Zwicky's Nonet') provides unique and compelling evidence for a multiple-nucleus cD galaxy precursor. Multifrequency radio observations of 4C 35.06 with the Giant Metrewave Radio Telescope (GMRT) at 610, 235 and 150 MHz reveal a system of 400-kpc scale helically twisted and kinked radio jets and outer diffuse lobes. The outer extremities of the jets contain extremely steep-spectrum (spectral index -1.7 to -2.5) relic/fossil radio plasma with a spectral age of a few ×10⁷-10⁸ yr. Such ultra-steep spectrum relic radio lobes without definitive hotspots are rare, and they provide an opportunity to understand the life cycle of relativistic jets and the physics of black hole mergers in dense environments. We interpret our observations of this radio source in the context of the growth of its central black hole, the triggering of its AGN activity and jet precession, all possibly caused by galaxy mergers in this dense galactic system. A slow conical precession of the jet axis due to gravitational perturbation between interacting black holes is invoked to explain the unusual jet morphology.
Evaluation and uncertainty analysis of regional-scale CLM4.5 net carbon flux estimates
NASA Astrophysics Data System (ADS)
Post, Hanna; Hendricks Franssen, Harrie-Jan; Han, Xujun; Baatz, Roland; Montzka, Carsten; Schmidt, Marius; Vereecken, Harry
2018-01-01
Modeling net ecosystem exchange (NEE) at the regional scale with land surface models (LSMs) is relevant for the estimation of regional carbon balances, but studies on it are very limited. Furthermore, it is essential to better understand and quantify the uncertainty of LSMs in order to improve them. An important key variable in this respect is the prognostic leaf area index (LAI), which is very sensitive to forcing data and strongly affects the modeled NEE. We applied the Community Land Model (CLM4.5-BGC) to the Rur catchment in western Germany and compared estimated and default ecological key parameters for modeling carbon fluxes and LAI. The parameter estimates were previously estimated with the Markov chain Monte Carlo (MCMC) approach DREAM(zs) for four of the most widespread plant functional types in the catchment. It was found that the catchment-scale annual NEE was strongly positive with default parameter values but negative (and closer to observations) with the estimated values. Thus, the estimation of CLM parameters with local NEE observations can be highly relevant when determining regional carbon balances. To obtain a more comprehensive picture of model uncertainty, CLM ensembles were set up with perturbed meteorological input and uncertain initial states in addition to uncertain parameters. C3 grass and C3 crops were particularly sensitive to the perturbed meteorological input, which resulted in a strong increase in the standard deviation of the annual NEE sum (σ
Perturbed effects at radiation physics
NASA Astrophysics Data System (ADS)
Külahcı, Fatih; Şen, Zekâi
2013-09-01
Perturbation methodology is applied in order to assess the behavior of the linear attenuation coefficient, mass attenuation coefficient and cross-section with random components in the basic variables, such as the radiation amounts frequently used in radiation physics and chemistry. Additionally, the layer attenuation coefficient (LAC) and perturbed LAC (PLAC) are proposed for different contact materials. Perturbation methodology provides the opportunity to obtain results with random deviations from the average behavior of each variable that enters the whole mathematical expression. The basic photon intensity variation expression, the inverse exponential power law (the Beer-Lambert law), is adopted for the exposition of the perturbation method. Perturbed results are presented not only in terms of the mean but also the standard deviation and the correlation coefficients. Such perturbation expressions allow one to assess small random variability in the basic variables.
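The perturbation idea can be illustrated with a Monte Carlo version of the Beer-Lambert law: treat the linear attenuation coefficient as a random variable and report the mean, standard deviation, and correlation of the transmitted intensity. All numerical values are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Beer-Lambert law: I = I0 * exp(-mu * x).  Treat the linear
# attenuation coefficient mu as a random variable with a small
# coefficient of variation.
I0 = 1.0e6                     # incident photon intensity (counts)
x = 2.0                        # absorber thickness (cm)
mu_mean, mu_cv = 0.15, 0.05    # cm^-1, 5 % relative scatter
mu = rng.normal(mu_mean, mu_cv * mu_mean, size=100_000)

I = I0 * np.exp(-mu * x)

# Perturbed results: mean, standard deviation, and the correlation
# between the input coefficient and the transmitted intensity.
print(f"mean I = {I.mean():.0f}, std I = {I.std():.0f}")
print(f"corr(mu, I) = {np.corrcoef(mu, I)[0, 1]:.3f}")
```

For a small coefficient of variation the exponential is nearly linear over the sampled range, so the correlation between mu and I is close to -1 and the relative spread of I is approximately x times the absolute spread of mu.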
Diagnostics of sources of tropospheric ozone using data assimilation during the KORUS-AQ campaign
NASA Astrophysics Data System (ADS)
Gaubert, B.; Emmons, L. K.; Miyazaki, K.; Buchholz, R. R.; Tang, W.; Arellano, A. F., Jr.; Tilmes, S.; Barré, J.; Worden, H. M.; Raeder, K.; Anderson, J. L.; Edwards, D. P.
2017-12-01
Atmospheric oxidative capacity plays a crucial role in the fate of greenhouse gases and air pollutants as well as in the formation of secondary pollutants such as tropospheric ozone. The attribution of sources of tropospheric ozone is a difficult task because of biases in input parameters and forcings such as emissions and meteorology in addition to errors in chemical schemes. We assimilate satellite remote sensing observations of ozone precursors such as carbon monoxide (CO) and nitrogen dioxide (NO2) in the global coupled chemistry-transport model: Community Atmosphere Model with Chemistry (CAM-Chem). The assimilation is completed using an Ensemble Adjustment Kalman Filter (EAKF) in the Data Assimilation Research Testbed (DART) framework which allows estimates of unobserved parameters and potential constraints on secondary pollutants and emissions. The ensemble will be constructed using perturbations in chemical kinetics, different emission fields and by assimilating meteorological observations to fully assess uncertainties in the chemical fields of targeted species. We present a set of tools such as emission tags (CO and propane), combined with diagnostic analysis of chemical regimes and perturbation of emissions ratios to estimate a regional budget of primary and secondary pollutants in East Asia and their sensitivity to data assimilation. This study benefits from the large set of aircraft and ozonesonde in-situ observations from the Korea-United States Air Quality (KORUS-AQ) campaign that occurred in South Korea in May-June 2016.
NASA Astrophysics Data System (ADS)
Westervelt, D. M.; Fiore, A. M.; Lamarque, J. F.; Previdi, M. J.; Conley, A. J.; Shindell, D. T.; Mascioli, N. R.; Correa, G. J. P.; Faluvegi, G.; Horowitz, L. W.
2016-12-01
Regional emissions of anthropogenic aerosols and their precursors will likely decrease for the remainder of the 21st century, due to emission reduction policies enacted to protect human health. Although there is some evidence that regional climate effects of aerosols can be significant, we currently lack a robust understanding of the magnitude, spatio-temporal pattern, statistical significance, and physical processes responsible for these influences, especially for precipitation. Here, we aim to quantify systematically the precipitation response to regional changes in aerosols and investigate underlying mechanisms using three fully coupled chemistry-climate models: the NOAA Geophysical Fluid Dynamics Laboratory Coupled Model 3 (GFDL-CM3), the NCAR Community Earth System Model (CESM), and the NASA Goddard Institute for Space Studies ModelE2 (GISS-E2). The central approach we use is to contrast a long control experiment (400 years, run with perpetual year 2000 emissions) with 14 individual aerosol emissions perturbation experiments (~200 years each). We perturb emissions of sulfur dioxide (SO2) and carbonaceous aerosol (BC and OM) within several world regions and assess which responses are significant relative to internal variability determined by the control run and robust across the three models. Initial results show significant changes in precipitation in several vulnerable regions including the Western Sahel and the Indian subcontinent. SO2 emissions reductions from Europe and the United States have the largest impact on precipitation among most of the selected response regions. The precipitation response to emissions changes from these regions projects onto known modes of variability, such as the North Atlantic Oscillation (NAO) and the El Niño Southern Oscillation (ENSO).
Across all perturbation experiments, we find a strong linear relationship between the responses of Sahel precipitation and the interhemispheric temperature difference, suggesting a common mechanism of an anomalous Hadley cell circulation and a shift of the Intertropical Convergence Zone (ITCZ). GFDL-CM3 and CESM1 each show strong changes in regional precipitation in response to the various regional aerosol emissions perturbations, whereas a more modest response occurs in GISS-E2, owing to a weaker aerosol indirect effect.
Probabilistic Climate Scenario Information for Risk Assessment
NASA Astrophysics Data System (ADS)
Dairaku, K.; Ueno, G.; Takayabu, I.
2014-12-01
Climate information and services for Impacts, Adaptation and Vulnerability (IAV) assessments are of great concern. In order to develop probabilistic regional climate information that represents the uncertainty in climate scenario experiments in Japan, we compared physics ensemble experiments using the 60 km global atmospheric model of the Meteorological Research Institute (MRI-AGCM) with multi-model ensemble experiments with global atmosphere-ocean coupled models (CMIP3) of SRES A1b scenario experiments. The MRI-AGCM shows relatively good skill, particularly in the tropics, for temperature and geopotential height. Variability in surface air temperature of the physics ensemble experiments with MRI-AGCM was within the range of one standard deviation of the CMIP3 models in the Asia region. On the other hand, the variability of precipitation was relatively well represented compared with the variation of the CMIP3 models. Models that show similar reproducibility of the present climate show different future climate changes. We could not find clear relationships between the present climate and future climate change in temperature and precipitation. We develop a new method to produce probabilistic information on climate change scenarios by weighting model ensemble experiments based on a regression model (Krishnamurti et al., Science, 1999). The method can be easily applied to other regions and other physical quantities, and also to downscaling to finer scales, depending on the availability of observational datasets. The prototype of probabilistic information in Japan represents the quantified structural uncertainties of multi-model ensemble experiments of climate change scenarios. Acknowledgments: This study was supported by the SOUSEI Program, funded by the Ministry of Education, Culture, Sports, Science and Technology, Government of Japan.
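The regression-based weighting of ensemble members cited above (after Krishnamurti et al. 1999, the "superensemble" approach) reduces, in its simplest form, to a least-squares fit of observations on the member forecasts over a training period; the fitted coefficients then weight the members. The synthetic training data below are placeholders with arbitrary error levels.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical training data: anomalies of an observed quantity and of
# m model forecasts over t training times (all values synthetic).
t, m = 200, 5
obs = rng.normal(size=t)
# Each model = truth plus its own noise level.
forecasts = obs[:, None] + rng.normal(0.0, [0.5, 0.8, 1.0, 1.2, 1.5],
                                      size=(t, m))

# Multiple linear regression of observations on the model forecasts
# (least squares); the coefficients are the superensemble weights.
coef, *_ = np.linalg.lstsq(forecasts, obs, rcond=None)

superensemble = forecasts @ coef
plain_mean = forecasts.mean(axis=1)

def mse(field):
    return float(np.mean((field - obs) ** 2))

print(f"plain-mean MSE {mse(plain_mean):.3f}, "
      f"superensemble MSE {mse(superensemble):.3f}")
```

Because the equal-weight ensemble mean is itself one linear combination of the members, the least-squares fit can never do worse on the training period; its value in practice depends on whether the weights generalize outside training, which is why availability of observational datasets matters for the method.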
Propagation of radar rainfall uncertainty in urban flood simulations
NASA Astrophysics Data System (ADS)
Liguori, Sara; Rico-Ramirez, Miguel
2013-04-01
This work discusses the results of the implementation of a novel probabilistic system designed to improve ensemble sewer flow predictions for the drainage network of a small urban area in the North of England. The probabilistic system has been developed to model the uncertainty associated with radar rainfall estimates and propagate it through radar-based ensemble sewer flow predictions. The assessment of this system aims at outlining the benefits of addressing the uncertainty associated with radar rainfall estimates in a probabilistic framework, to be potentially implemented in the real-time management of the sewer network in the study area. Radar rainfall estimates are affected by uncertainty due to various factors [1-3], and quality control and correction techniques have been developed in order to improve their accuracy. However, the hydrological use of radar rainfall estimates and forecasts remains challenging. A significant effort has been devoted by the international research community to the assessment of uncertainty propagation through probabilistic hydro-meteorological forecast systems [4-5], and various approaches have been implemented for the purpose of characterizing the uncertainty in radar rainfall estimates and forecasts [6-11]. A radar-based ensemble stochastic approach, similar to the one implemented for use in the Southern Alps by the REAL system [6], has been developed for the purpose of this work. An ensemble generator has been calibrated on the basis of the spatial-temporal characteristics of the residual error in radar estimates, assessed with reference to rainfall records from around 200 rain gauges available for the year 2007, previously post-processed and corrected by the UK Met Office [12-13]. Each ensemble member is determined by summing a perturbation field to the unperturbed radar rainfall field. The perturbations are generated by imposing the radar error spatial and temporal correlation structure on purely stochastic fields.
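The ensemble generation step described above, summing a spatially correlated perturbation field to the unperturbed radar field, can be sketched as follows. A Fourier-filtered white-noise field is a simplification of the full calibrated error model; the field size, correlation length and error standard deviation are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def correlated_field(shape, length_scale, rng):
    """White noise smoothed in Fourier space to impose a spatial
    correlation structure (a simplification of full error modelling)."""
    white = rng.normal(size=shape)
    ky = np.fft.fftfreq(shape[0])[:, None]
    kx = np.fft.fftfreq(shape[1])[None, :]
    # Gaussian low-pass filter sets the correlation length (in pixels).
    filt = np.exp(-0.5 * (length_scale ** 2) * ((2 * np.pi) ** 2)
                  * (kx ** 2 + ky ** 2))
    field = np.real(np.fft.ifft2(np.fft.fft2(white) * filt))
    return field / field.std()

# Unperturbed radar rainfall field (mm/h, illustrative values).
radar = np.clip(rng.gamma(2.0, 1.5, size=(64, 64)), 0, None)

# Each ensemble member = radar field + one correlated perturbation,
# scaled by an assumed error standard deviation, clipped at zero rain.
n_members, sigma_err = 20, 0.5
ensemble = np.stack([
    np.clip(radar + sigma_err * correlated_field(radar.shape, 4.0, rng),
            0, None)
    for _ in range(n_members)])
print(ensemble.shape)  # (20, 64, 64)
```

Feeding each member through the sewer network model then yields an ensemble of flow hydrographs whose envelope expresses the radar rainfall uncertainty, which is the propagation mechanism assessed in this work.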
A hydrodynamic sewer network model implemented in the InfoWorks software was used to model the rainfall-runoff process in the urban area. The software calculates the flow through the sewer conduits of the urban model using rainfall as the primary input. The sewer network is covered by 25 radar pixels with a spatial resolution of 1 km². The majority of the sewer system is combined, carrying urban rainfall runoff as well as domestic and trade waste water [11]. The urban model was configured to receive the probabilistic radar rainfall fields. The results showed that the radar rainfall ensembles provide additional information about the uncertainty in the radar rainfall measurements that can be propagated in urban flood modelling. The peaks of the measured flow hydrographs are often bounded within the uncertainty area produced by using the radar rainfall ensembles. This is in fact one of the benefits of using radar rainfall ensembles in urban flood modelling. More work needs to be done on improving the urban models, but this is beyond the scope of this research. The rainfall uncertainty cannot explain the whole uncertainty shown in the flow simulations, and additional sources of uncertainty will come from the structure of the urban models as well as the large number of parameters required by these models. Acknowledgements: The authors would like to acknowledge the BADC, the UK Met Office and the UK Environment Agency for providing the various data sets. We also thank Yorkshire Water Services Ltd for providing the urban model. The authors acknowledge the support from the Engineering and Physical Sciences Research Council (EPSRC) via grant EP/I012222/1. References: [1] Browning KA, 1978. Meteorological applications of radar. Reports on Progress in Physics 41 761 Doi: 10.1088/0034-4885/41/5/003 [2] Rico-Ramirez MA, Cluckie ID, Shepherd G, Pallot A, 2007. A high-resolution radar experiment on the island of Jersey. Meteorological Applications 14: 117-129.
[3] Villarini G, Krajewski WF, 2010. Review of the different sources of uncertainty in single polarization radar-based estimates of rainfall. Surveys in Geophysics 31: 107-129. [4] Rossa A, Liechti K, Zappa M, Bruen M, Germann U, Haase G, Keil C, Krahe P, 2011. The COST 731 Action: A review on uncertainty propagation in advanced hydrometeorological forecast systems. Atmospheric Research 100, 150-167. [5] Rossa A, Bruen M, Germann U, Haase G, Keil C, Krahe P, Zappa M, 2010. Overview and Main Results on the interdisciplinary effort in flood forecasting COST 731-Propagation of Uncertainty in Advanced Meteo-Hydrological Forecast Systems. Proceedings of Sixth European Conference on Radar in Meteorology and Hydrology ERAD 2010. [6] Germann U, Berenguer M, Sempere-Torres D, Zappa M, 2009. REAL - ensemble radar precipitation estimation for hydrology in a mountainous region. Quarterly Journal of the Royal Meteorological Society 135: 445-456. [8] Bowler NEH, Pierce CE, Seed AW, 2006. STEPS: a probabilistic precipitation forecasting scheme which merges and extrapolation nowcast with downscaled NWP. Quarterly Journal of the Royal Meteorological Society 132: 2127-2155. [9] Zappa M, Rotach MW, Arpagaus M, Dorninger M, Hegg C, Montani A, Ranzi R, Ament F, Germann U, Grossi G et al., 2008. MAP D-PHASE: real-time demonstration of hydrological ensemble prediction systems. Atmospheric Science Letters 9, 80-87. [10] Liguori S, Rico-Ramirez MA. Quantitative assessment of short-term rainfall forecasts from radar nowcasts and MM5 forecasts. Hydrological Processes, accepted article. DOI: 10.1002/hyp.8415 [11] Liguori S, Rico-Ramirez MA, Schellart ANA, Saul AJ, 2012. Using probabilistic radar rainfall nowcasts and NWP forecasts for flow prediction in urban catchments. Atmospheric Research 103: 80-95. [12] Harrison DL, Driscoll SJ, Kitchen M, 2000. Improving precipitation estimates from weather radar using quality control and correction techniques. Meteorological Applications 7: 135-144. 
[13] Harrison DL, Scovell RW, Kitchen M, 2009. High-resolution precipitation estimates for hydrological uses. Proceedings of the Institution of Civil Engineers - Water Management 162: 125-135.
Exploring the calibration of a wind forecast ensemble for energy applications
NASA Astrophysics Data System (ADS)
Heppelmann, Tobias; Ben Bouallegue, Zied; Theis, Susanne
2015-04-01
In the German research project EWeLiNE, Deutscher Wetterdienst (DWD) and the Fraunhofer Institute for Wind Energy and Energy System Technology (IWES) are collaborating with three German Transmission System Operators (TSOs) in order to provide the TSOs with improved probabilistic power forecasts. Probabilistic power forecasts are derived from probabilistic weather forecasts, themselves derived from ensemble prediction systems (EPS). Since the considered raw ensemble wind forecasts suffer from underdispersiveness and bias, calibration methods are developed for the correction of the model bias and the ensemble spread bias. The overall aim is to improve the ensemble forecasts such that the uncertainty of the possible weather development is captured by the ensemble spread from the first forecast hours onwards. Additionally, the ensemble members after calibration should remain physically consistent scenarios. We focus on probabilistic hourly wind forecasts with a horizon of 21 h delivered by the convection-permitting high-resolution ensemble system COSMO-DE-EPS, which became operational in 2012 at DWD. The ensemble consists of 20 members driven by four different global models. The model area includes the whole of Germany and parts of Central Europe with a horizontal resolution of 2.8 km and a vertical resolution of 50 model levels. For verification we use wind mast measurements at around 100 m height, which corresponds to the hub height of the wind energy plants that belong to wind farms within the model area. Calibration of the ensemble forecasts can be performed by different statistical methods applied to the raw ensemble output. Here, we explore local bivariate Ensemble Model Output Statistics at individual sites and quantile regression with different predictors. Applying different methods, we show an improvement of the ensemble wind forecasts from COSMO-DE-EPS for energy applications.
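One simple member-preserving calibration of a biased, underdispersive ensemble, removing the mean bias and inflating each member's departure from the ensemble mean, can be sketched as follows. This is a far simpler scheme than the bivariate EMOS and quantile regression explored above, and all data are synthetic with invented bias and spread values.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic training set: raw ensemble wind forecasts (times x members)
# with a +1 m/s bias, a shared forecast error, and too little spread.
t, m = 500, 20
truth = 8 + 2 * rng.normal(size=t)
raw = (truth[:, None] + 1.0                      # systematic bias
       + 1.0 * rng.normal(size=(t, 1))           # shared (unresolved) error
       + 0.2 * rng.normal(size=(t, m)))          # small member spread

# Remove the mean bias, then inflate departures from the ensemble mean
# so ensemble variance matches the debiased ensemble-mean squared error.
bias = (raw.mean(axis=1) - truth).mean()
ens_mean = raw.mean(axis=1, keepdims=True)
target_var = np.mean((ens_mean[:, 0] - bias - truth) ** 2)
inflation = np.sqrt(target_var / raw.var(axis=1).mean())
calibrated = (ens_mean - bias) + inflation * (raw - ens_mean)

print(f"bias {bias:.2f} m/s, inflation factor {inflation:.2f}")
```

Because the transformation is affine per time step, member rank ordering and temporal trajectories are preserved, which is in the spirit of keeping calibrated members physically consistent scenarios; the ensemble copula coupling mentioned below serves the same purpose for more general calibration methods.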
In addition, an ensemble copula coupling approach transfers the time-dependencies of the raw ensemble to the calibrated ensemble. The calibrated wind forecasts are evaluated first with univariate probabilistic scores and additionally with diagnostics of wind ramps in order to assess the time-consistency of the calibrated ensemble members.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ullrich, C. A.; Kohn, W.
An electron density distribution n(r) which can be represented by that of a single-determinant ground state of noninteracting electrons in an external potential v(r) is called pure-state v-representable (P-VR). Most physical electronic systems are P-VR. Systems which require a weighted sum of several such determinants to represent their density are called ensemble v-representable (E-VR). This paper develops formal Kohn-Sham equations for E-VR physical systems, using the appropriate coupling-constant integration. It also derives local-density and generalized-gradient approximations, and conditions and corrections specific to ensembles.
A brief history of the introduction of generalized ensembles to Markov chain Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Berg, Bernd A.
2017-03-01
The most efficient weights for Markov chain Monte Carlo calculations of physical observables are not necessarily those of the canonical ensemble. Generalized ensembles, which do not exist in nature but can be simulated on computers, often lead to much faster convergence. In particular, they have been used for simulations of first-order phase transitions and for simulations of complex systems in which conflicting constraints lead to a rugged free energy landscape. Starting off with the Metropolis algorithm and Hastings' extension, I present a minireview which focuses on the explosive use of generalized ensembles in the early 1990s. Illustrations are given, ranging from spin models to peptides.
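The generalized-ensemble idea can be illustrated in a few lines: run Metropolis sampling with an arbitrary weight w(E) instead of the canonical exp(-beta*E). The sketch below (a toy 1D Ising chain; chain length, sweep counts and beta are arbitrary illustrative choices, not from the review) uses log w(E) = -beta*E to recover the canonical ensemble and a flat log-weight to mimic a multicanonical-style run that wanders over a broader energy range:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(spins):
    # 1D Ising chain with periodic boundary conditions, J = 1
    return -np.sum(spins * np.roll(spins, 1))

def metropolis_generalized(n_spins=16, n_sweeps=2000, log_weight=lambda E: -E):
    """Metropolis sampling with generalized weight w(E) = exp(log_weight(E)).
    log_weight(E) = -beta*E gives the canonical ensemble; a flat
    log_weight accepts every move and samples energies broadly."""
    spins = rng.choice([-1, 1], size=n_spins)
    E = energy(spins)
    energies = []
    for _ in range(n_sweeps):
        for _ in range(n_spins):
            i = rng.integers(n_spins)
            # energy change from flipping spin i (periodic neighbours)
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n_spins])
            # accept with probability min(1, w(E + dE) / w(E))
            if np.log(rng.random()) < log_weight(E + dE) - log_weight(E):
                spins[i] *= -1
                E += dE
        energies.append(E)
    return np.array(energies)

canonical = metropolis_generalized(log_weight=lambda E: -0.5 * E)  # beta = 0.5
flat = metropolis_generalized(log_weight=lambda E: 0.0 * E)        # flat weights
# the flat-weight run stays near E ~ 0, while the canonical run sinks
# toward low energies; reweighting would recover canonical averages
```

In a real multicanonical simulation the weights are iteratively tuned so the energy histogram becomes flat; the flat log-weight here is only the simplest stand-in for a non-canonical choice.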
Friedman, Beth; Brophy, Patrick; Brune, William H; Farmer, Delphine K
2016-02-02
In order to probe how anthropogenic pollutants can impact the atmospheric oxidation of biogenic emissions, we investigated how sulfur dioxide (SO2) perturbations impact the oxidation of two monoterpenes, α- and β-pinene. We used chemical ionization mass spectrometry to examine changes in both individual molecules and gas-phase bulk properties of oxidation products as a function of SO2 addition. SO2 perturbations impacted the oxidation systems of α- and β-pinene, leading to an ensemble of products with a lesser degree of oxygenation than in unperturbed systems. These changes may be due to shifts in the OH:HO2 ratio from SO2 oxidation and/or to SO3 reacting directly with organic molecules. Van Krevelen diagrams suggest a shift from gas-phase functionalization by alcohol/peroxide groups to functionalization by carboxylic acid or carbonyl groups, consistent with a decreased OH:HO2 ratio. Increasing relative humidity dampens the impact of the perturbation. This decrease in oxygenation may impact secondary organic aerosol formation in regions dominated by biogenic emissions with nearby SO2 sources. We observed sulfur-containing organic compounds following SO2 perturbations of monoterpene oxidation; whether these are the result of photochemistry or an instrumental artifact from ion-molecule clustering remains uncertain. However, our results demonstrate that the two monoterpene isomers produce unique suites of oxidation products.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregoire, Lauren J.; Otto-Bliesner, Bette; Valdes, Paul J.
Elucidating the source(s) of Meltwater Pulse 1a, the largest rapid sea level rise caused by ice melt (14-18 m in less than 340 years, 14,600 years ago), is important for understanding mechanisms of rapid ice melt and the links with abrupt climate change. Here we quantify how much and by what mechanisms the North American ice sheet could have contributed to Meltwater Pulse 1a, by driving an ice sheet model with two transient climate simulations of the last 21,000 years. Ice sheet perturbed physics ensembles were run to account for model uncertainties, constraining ice extent and volume with reconstructions from 21,000 years ago to present. We determine that the North American ice sheet produced 3-4 m of global mean sea level rise in 340 years due to the abrupt Bølling warming, but this response is amplified to 5-6 m when it triggers the ice sheet saddle collapse.
Gregoire, Lauren J.; Otto-Bliesner, Bette; Valdes, Paul J.; ...
2016-08-23
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark
Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
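The elementary-effect screening described in this record can be sketched generically: for each random base point in parameter space, perturb one parameter at a time and record the scaled output change; the mean absolute effect (often called mu*) ranks parameter importance. The sketch below runs a Morris-style screening on a toy function, not the HiLAT model, and the sampling details are simplified assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def elementary_effects(model, n_params, n_trajectories=20, delta=0.5):
    """One-at-a-time elementary effects: at each random base point in
    the unit hypercube, perturb one parameter by `delta` and record the
    scaled output change.  Returns mu* (mean |EE|) per parameter."""
    effects = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.random(n_params) * (1 - delta)   # keep x + delta inside [0, 1]
        base = model(x)
        for j in range(n_params):
            xp = x.copy()
            xp[j] += delta
            effects[t, j] = (model(xp) - base) / delta
    return np.abs(effects).mean(axis=0)

# toy model: output depends strongly on x0, weakly on x2, not at all on x1
f = lambda x: 10 * x[0] + 0 * x[1] + x[2] ** 2
mu_star = elementary_effects(f, n_params=3)
print(mu_star.argsort()[::-1])   # importance ranking, most important first
```

The efficiency gain over naive one-at-a-time analysis comes from reusing each base evaluation for all parameters while still averaging over many locations in parameter space.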
NASA Astrophysics Data System (ADS)
Alexandrou, Constantia; Athenodorou, Andreas; Cichy, Krzysztof; Constantinou, Martha; Horkel, Derek P.; Jansen, Karl; Koutsou, Giannis; Larkin, Conor
2018-04-01
We compare lattice QCD determinations of topological susceptibility using a gluonic definition from the gradient flow and a fermionic definition from the spectral-projector method. We use ensembles with dynamical light, strange and charm flavors of maximally twisted mass fermions. For both definitions of the susceptibility we employ ensembles at three values of the lattice spacing and several quark masses at each spacing. The data are fitted to chiral perturbation theory predictions with a discretization term to determine the continuum chiral condensate in the massless limit and estimate the overall discretization errors. We find that both approaches lead to compatible results in the continuum limit, but the gluonic ones are much more affected by cutoff effects. This finally yields a much smaller total error in the spectral-projector results. We show that there exists, in principle, a value of the spectral cutoff which would completely eliminate discretization effects in the topological susceptibility.
Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel
2014-06-05
Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate molecular solvation free energies, while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality to those obtained from molecular dynamics free-energy perturbation simulations, at a computational cost reduced by 2-3 orders of magnitude. This requires introducing the proper partial molar volume correction to transform the results from the grand canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to empirical partial molar volume corrections that have been proposed recently.
NASA Technical Reports Server (NTRS)
Kalnay, Eugenia; Dalcher, Amnon
1987-01-01
It is shown that it is possible to predict the skill of numerical weather forecasts - a quantity which is variable from day to day and region to region. This has been accomplished using as predictor the dispersion (measured by the average correlation) between members of an ensemble of forecasts started from five different analyses. The analyses had been previously derived for satellite-data-impact studies and included, in the Northern Hemisphere, moderate perturbations associated with the use of different observing systems. When the Northern Hemisphere was used as a verification region, the prediction of skill was rather poor. This is due to the fact that such a large area usually contains regions with excellent forecasts as well as regions with poor forecasts, and does not allow for discrimination between them. However, when regional verifications were used, the ensemble forecast dispersion provided a very good prediction of the quality of the individual forecasts.
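The core diagnostic in this record, that ensemble dispersion predicts forecast skill, can be demonstrated on synthetic data. In the sketch below (all numbers illustrative), cases with larger true uncertainty produce both a larger ensemble spread and a larger ensemble-mean error, so the two are positively correlated across cases:

```python
import numpy as np

def spread_skill(ens_forecasts, truth):
    """Correlate ensemble dispersion with ensemble-mean error across cases.
    ens_forecasts: (n_cases, n_members); truth: (n_cases,)."""
    spread = ens_forecasts.std(axis=1)
    error = np.abs(ens_forecasts.mean(axis=1) - truth)
    return np.corrcoef(spread, error)[0, 1]

rng = np.random.default_rng(2)
n_cases, n_members = 500, 5
sigma = rng.uniform(0.2, 2.0, n_cases)       # case-dependent forecast uncertainty
truth = rng.normal(0.0, 1.0, n_cases)
# each member scatters around the truth with the case's own uncertainty
ens = truth[:, None] + sigma[:, None] * rng.normal(size=(n_cases, n_members))
r = spread_skill(ens, truth)
print(r)   # positive: large spread flags cases likely to verify poorly
```

The record's regional-versus-hemispheric finding corresponds to computing this correlation over areas small enough that spread and error are not both averaged toward their climatological values.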
Interpolation of property-values between electron numbers is inconsistent with ensemble averaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miranda-Quintana, Ramón Alain; Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1; Ayers, Paul W.
2016-06-28
In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer numbers of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem's surroundings in determining its properties.
Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics
NASA Astrophysics Data System (ADS)
Lazarus, S. M.; Holman, B. P.; Splitt, M. E.
2017-12-01
A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withholding and 28 east-central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both the deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented: a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
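As a rough illustration of the local EMOS step (before any spatial spreading of the parameters), the sketch below fits a nonhomogeneous Gaussian regression with predictive distribution N(a + b*mean, c + d*variance) by maximum likelihood on synthetic training data. The EMOS literature often minimizes the CRPS instead of the likelihood, and every name and number here is an illustrative assumption, not the paper's configuration:

```python
import numpy as np
from scipy.optimize import minimize

def fit_emos(ens_mean, ens_var, obs):
    """Nonhomogeneous Gaussian regression (a simple univariate EMOS):
    predictive distribution N(a + b*mean, c + d*var), with (a, b, c, d)
    fit by maximum likelihood over training cases."""
    def neg_log_lik(p):
        a, b, c, d = p
        mu = a + b * ens_mean
        s2 = np.maximum(c + d * ens_var, 1e-6)   # keep variance positive
        return 0.5 * np.sum(np.log(2 * np.pi * s2) + (obs - mu) ** 2 / s2)
    res = minimize(neg_log_lik, x0=[0.0, 1.0, 1.0, 1.0],
                   method="Nelder-Mead",
                   options={"maxiter": 5000, "xatol": 1e-6, "fatol": 1e-6})
    return res.x

rng = np.random.default_rng(3)
n = 1000
raw_mean = rng.normal(8.0, 3.0, n)        # biased, underdispersive toy ensemble
raw_var = np.full(n, 0.25)
obs = (raw_mean - 1.0) + rng.normal(0.0, 1.0, n)   # truth: bias -1, sd 1
a, b, c, d = fit_emos(raw_mean, raw_var, obs)
# the fitted mean correction should recover roughly a ~ -1 and b ~ 1,
# while c + d*0.25 inflates the too-small raw variance toward 1
```

With constant raw variance, c and d are not separately identifiable; in practice the fit is done over cases whose ensemble variance actually varies.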
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latypov, Ramil F.; Liu, Dingjiang; Jacob, Jaby
2010-01-12
Conformational properties of the folded and unfolded ensembles of human interleukin-1 receptor antagonist (IL-1ra) are strongly denaturant-dependent, as evidenced by high-resolution two-dimensional nuclear magnetic resonance (NMR), limited proteolysis, and small-angle X-ray scattering (SAXS). The folded ensemble was characterized in detail in the presence of different urea concentrations by 1H-15N HSQC NMR. The β-trefoil fold characteristic of native IL-1ra was preserved until the unfolding transition region beginning at 4 M urea. At the same time, a subset of native resonances disappeared gradually starting at low denaturant concentrations, indicating noncooperative changes in the folded state. Additional evidence of structural perturbations came from the chemical shift analysis, nonuniform and bell-shaped peak intensity profiles, and limited proteolysis. In particular, the following nearby regions of the tertiary structure became progressively destabilized with increasing urea concentrations: the β-hairpin interface of trefoils 1 and 2 and the H2a-H2 helical region. These regions underwent small-scale perturbations within the native baseline region in the absence of populated molten globule-like states. Similar regions were affected by elevated temperatures known to induce irreversible aggregation of IL-1ra. Further evidence of structural transitions involving near-native conformations came from an optical spectroscopy analysis of its single-tryptophan variant W17A. The increase in the radius of gyration was associated with a single equilibrium unfolding transition in the case of two different denaturants, urea and guanidine hydrochloride (GuHCl). However, the compactness of urea- and GuHCl-unfolded molecules was comparable only at high denaturant concentrations and deviated under less denaturing conditions.
Our results identified the role of conformational flexibility in IL-1ra aggregation and shed light on the nature of structural transitions within the folded ensembles of other β-trefoil proteins, such as IL-1β and hFGF-1.
Ensemble inequivalence and Maxwell construction in the self-gravitating ring model
NASA Astrophysics Data System (ADS)
Rocha Filho, T. M.; Silvestre, C. H.; Amato, M. A.
2018-06-01
The statement that Gibbs equilibrium ensembles are equivalent is a baseline assumption in many approaches in equilibrium statistical mechanics. However, it is known that for some physical systems this equivalence does not hold. In this paper we illustrate from first principles the inequivalence between the canonical and microcanonical ensembles for a system with long-range interactions. We use molecular dynamics and Monte Carlo simulations to explore the thermodynamic properties of the self-gravitating ring model and discuss under what conditions the Maxwell construction is applicable.
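The Maxwell construction discussed in this record amounts to replacing a non-concave microcanonical entropy s(e) by its concave envelope; wherever the two differ, the canonical and microcanonical ensembles are inequivalent. A numerical sketch on a toy entropy curve (not the self-gravitating ring model; the functional form is an arbitrary illustration with a convex dip):

```python
import numpy as np

def concave_envelope(e, s):
    """Upper concave envelope of s(e) via a monotone-chain hull.
    In the canonical ensemble the entropy is effectively replaced by
    this envelope (Maxwell construction); points where s lies strictly
    below it mark the region of ensemble inequivalence."""
    hull = [0]
    for i in range(1, len(e)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # pop the last hull point if it lies on or below the chord i0 -> i
            if (s[i1] - s[i0]) * (e[i] - e[i0]) <= (s[i] - s[i0]) * (e[i1] - e[i0]):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(e, e[hull], s[hull])

e = np.linspace(0.0, 1.0, 201)
# toy entropy: concave parabola with a convex (non-concave) dip in the middle
s = e * (1 - e) - 0.05 * np.exp(-((e - 0.5) / 0.1) ** 2)
env = concave_envelope(e, s)
gap = env - s
print(gap.max())   # positive: the ensembles are inequivalent in the dip region
```

The linear segment of the envelope across the dip is the Maxwell line; in the microcanonical description that same region can exhibit features such as negative specific heat that the canonical ensemble cannot reproduce.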
NASA Astrophysics Data System (ADS)
Erfanian, A.; Fomenko, L.; Wang, G.
2016-12-01
Multi-model ensemble (MME) averaging is widely considered the most reliable approach for simulating both present-day and future climates, and it has been a primary basis for conclusions in major coordinated studies such as the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially prohibitive for regional climate modeling, where model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: multi-model ensemble, ensemble analysis, ERF, regional climate modeling
Interactions between moist heating and dynamics in atmospheric predictability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Straus, D.M.; Huntley, M.A.
1994-02-01
The predictability properties of a fixed heating version of a GCM, in which the moist heating is specified beforehand, are studied in a series of identical twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed heating ensemble. The errors grow less rapidly in the fixed heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days, compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed heating case to that in the control falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE), developed in terms of global wavenumber n. The diabatic generation of EAPE (G_APE) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed heating G_APE is negative at all times due to longwave radiative cooling.
Christ, Norman H.; Flynn, Jonathan M.; Izubuchi, Taku; ...
2015-03-10
We calculate the B-meson decay constants f_B, f_Bs, and their ratio in unquenched lattice QCD using domain-wall light quarks and relativistic b quarks. We use gauge-field ensembles generated by the RBC and UKQCD collaborations using the domain-wall fermion action and Iwasaki gauge action with three flavors of light dynamical quarks. We analyze data at two lattice spacings of a ≈ 0.11, 0.086 fm with unitary pion masses as light as M_π ≈ 290 MeV; this enables us to control the extrapolation to the physical light-quark masses and continuum. For the b quarks we use the anisotropic clover action with the relativistic heavy-quark interpretation, such that discretization errors from the heavy-quark action are of the same size as from the light-quark sector. We renormalize the lattice heavy-light axial-vector current using a mostly nonperturbative method in which we compute the bulk of the matching factor nonperturbatively, with a small correction, close to unity, in lattice perturbation theory. We also improve the lattice heavy-light current through O(α_s a). We extrapolate our results to the physical light-quark masses and continuum using SU(2) heavy-meson chiral perturbation theory, and provide a complete systematic error budget. We obtain f_B0 = 199.5(12.6) MeV, f_B+ = 195.6(14.9) MeV, f_Bs = 235.4(12.2) MeV, f_Bs/f_B0 = 1.197(50), and f_Bs/f_B+ = 1.223(71), where the errors are statistical and total systematic added in quadrature. Finally, these results are in good agreement with other published results and provide an important independent cross-check of other three-flavor determinations of B-meson decay constants using staggered light quarks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christ, Norman H.; Flynn, Jonathan M.; Izubuchi, Taku
2015-03-10
We calculate the B-meson decay constants f_B, f_Bs, and their ratio in unquenched lattice QCD using domain-wall light quarks and relativistic b quarks. We use gauge-field ensembles generated by the RBC and UKQCD collaborations using the domain-wall fermion action and Iwasaki gauge action with three flavors of light dynamical quarks. We analyze data at two lattice spacings of a ≈ 0.11, 0.086 fm with unitary pion masses as light as M_π ≈ 290 MeV; this enables us to control the extrapolation to the physical light-quark masses and continuum. For the b quarks we use the anisotropic clover action with the relativistic heavy-quark interpretation, such that discretization errors from the heavy-quark action are of the same size as from the light-quark sector. We renormalize the lattice heavy-light axial-vector current using a mostly nonperturbative method in which we compute the bulk of the matching factor nonperturbatively, with a small correction, close to unity, in lattice perturbation theory. We also improve the lattice heavy-light current through O(α_s a). We extrapolate our results to the physical light-quark masses and continuum using SU(2) heavy-meson chiral perturbation theory, and provide a complete systematic error budget. We obtain f_B0 = 196.2(15.7) MeV, f_B+ = 195.4(15.8) MeV, f_Bs = 235.4(12.2) MeV, f_Bs/f_B0 = 1.193(59), and f_Bs/f_B+ = 1.220(82), where the errors are statistical and total systematic added in quadrature. In addition, these results are in good agreement with other published results and provide an important independent cross-check of other three-flavor determinations of B-meson decay constants using staggered light quarks.
Sagoo, Navjit; Valdes, Paul; Flecker, Rachel; Gregoire, Lauren J
2013-10-28
Geological data for the Early Eocene (56-47.8 Ma) indicate extensive global warming, with very warm temperatures at both poles. However, despite numerous attempts to simulate this warmth, there are remarkable data-model differences in the prediction of these polar surface temperatures, resulting in the so-called 'equable climate problem'. In this paper, for the first time a perturbed-physics ensemble approach, varying climate-sensitive model parameters, has been applied to modelling the Early Eocene climate. We performed more than 100 simulations with perturbed physics parameters and identified two simulations that have an optimal fit with the proxy data. We have simulated the warmth of the Early Eocene at 560 ppmv CO2, a much lower CO2 level than in many other models. We investigate the changes in atmospheric circulation, cloud properties and ocean circulation that are common to these simulations, and how they differ from the remaining simulations, in order to understand what mechanisms contribute to the polar warming. The parameter set from one of the optimal Early Eocene simulations also produces a favourable fit for the last glacial maximum boundary climate and outperforms the control parameter set for the present day. Although this does not 'prove' that this model is correct, it is very encouraging that there is a parameter set that creates a climate model able to simulate very different palaeoclimates and the present-day climate well. Interestingly, to achieve the great warmth of the Early Eocene this version of the model does not require a strong Charney climate sensitivity: it produces a Charney climate sensitivity of 2.7 °C, whereas the mean value of the 18 models in the IPCC Fourth Assessment Report (AR4) is 3.26 ± 0.69 °C. Thus, this value is within the range and below the mean of the models included in the AR4.
Performance of multi-physics ensembles in convective precipitation events over northeastern Spain
NASA Astrophysics Data System (ADS)
García-Ortega, E.; Lorenzana, J.; Merino, A.; Fernández-González, S.; López, L.; Sánchez, J. L.
2017-07-01
Convective precipitation with hail greatly affects southwestern Europe, causing major economic losses. The local character of this meteorological phenomenon is a serious obstacle to forecasting, so the development of reliable short-term forecasts constitutes an essential challenge for minimizing and managing risks. However, deterministic outcomes are affected by different uncertainty sources, such as physics parameterizations. This study examines the performance of different combinations of physics schemes of the Weather Research and Forecasting model in describing the spatial distribution of precipitation in convective environments with hail falls. Two 30-member multi-physics ensembles, with two and three nested domains and maximum resolutions of 9 and 3 km, respectively, were designed using various combinations of cumulus, microphysics and radiation schemes. The experiment was evaluated for 10 convective precipitation days with hail over 2005-2010 in northeastern Spain. Different indexes were used to evaluate the ability of each ensemble member to capture the precipitation patterns, which were compared with observations from a rain-gauge network. A standardized metric was constructed to identify optimal performers. Results show interesting differences between the two ensembles. In two-domain simulations, the selection of the cumulus parameterization was crucial, with the Betts-Miller-Janjic scheme the best. In contrast, the Kain-Fritsch cumulus scheme gave the poorest results, suggesting that it should not be used in the study area. Nevertheless, in three-domain simulations, the cumulus schemes used in the coarser domains were not critical and the best results depended mainly on the microphysics schemes. The best performance was shown by the Morrison, new Thompson and Goddard microphysics schemes.
Pervasive orbital eccentricities dictate the habitability of extrasolar earths.
Kita, Ryosuke; Rasio, Frederic; Takeda, Genya
2010-09-01
The long-term habitability of Earth-like planets requires low orbital eccentricities. A secular perturbation from a distant stellar companion is a very important mechanism in exciting planetary eccentricities, as many of the extrasolar planetary systems are associated with stellar companions. Although the orbital evolution of an Earth-like planet in a stellar binary system is well understood, the effect of a binary perturbation on a more realistic system containing additional gas-giant planets has been very little studied. Here, we provide analytic criteria confirmed by a large ensemble of numerical integrations that identify the initial orbital parameters leading to eccentric orbits. We show that an extrasolar earth is likely to experience a broad range of orbital evolution dictated by the location of a gas-giant planet, which necessitates more focused studies on the effect of eccentricity on the potential for life.
NASA Astrophysics Data System (ADS)
Kumar, Narender; Singh, Ram Kishor; Sharma, Swati; Uma, R.; Sharma, R. P.
2018-01-01
This paper presents numerical simulations of laser beam (x-mode) coupling with a magnetosonic wave (MSW) in a collisionless plasma. The coupling arises through ponderomotive nonlinearity. The pump beam is perturbed by a periodic perturbation that leads to the nonlinear evolution of the laser beam. It is observed that the frequency spectra of the MSW have peaks at terahertz frequencies. The simulation results show quite complex localized structures that grow with time. The ensemble-averaged power spectrum has also been studied; it indicates that the spectral index follows an approximate scaling of ~k^(-2.1) at large scales and ~k^(-3.6) at smaller scales. The results indicate considerable randomness in the spatial structure of the magnetic field profile, which gives sufficient indication of turbulence.
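Estimating a spectral index from an ensemble-averaged power spectrum, as reported in this record, follows a standard recipe: average the power over ensemble members, then fit a straight line in log-log space. A self-contained sketch on a synthetic ensemble with a known k^-2 spectrum (illustrative only, not the simulation data):

```python
import numpy as np

def spectral_index(fields, dx=1.0):
    """Ensemble-averaged 1D power spectrum and its log-log slope.
    fields: (n_members, n_points) array of 1D field realizations."""
    n = fields.shape[-1]
    power = np.abs(np.fft.rfft(fields, axis=-1)) ** 2
    power = power.mean(axis=0)                 # average over ensemble members
    k = np.fft.rfftfreq(n, d=dx)[1:]           # drop k = 0
    slope, _ = np.polyfit(np.log(k), np.log(power[1:]), 1)
    return slope

# synthesize an ensemble with |F(k)|^2 ~ k^-2 and random phases
rng = np.random.default_rng(4)
n, members = 4096, 32
k = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -1.0                        # amplitude k^-1 -> power k^-2
phases = rng.uniform(0, 2 * np.pi, (members, k.size))
fields = np.fft.irfft(amp * np.exp(1j * phases), n=n, axis=-1)
slope = spectral_index(fields)
print(slope)   # close to -2
```

In practice one fits the two scaling ranges separately (with a break wavenumber between them) rather than a single line over all scales.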
Bullying Victimization among Music Ensemble and Theatre Students in the United States
ERIC Educational Resources Information Center
Elpus, Kenneth; Carter, Bruce Allen
2016-01-01
The purpose of this study was to analyze the prevalence of reported school victimization through physical, verbal, social/relational, and cyberbullying aggression among music ensemble and theatre students in the middle and high schools of the United States as compared to their peers involved in other school-based activities. We analyzed nationally…
Patterns of variability in steady- and non steady-state Ross Ice Shelf flow
NASA Astrophysics Data System (ADS)
Campbell, A. J.; Hulbe, C. L.; Scambos, T. A.; Klinger, M. J.; Lee, C. K.
2016-12-01
Ice shelves are gateways through which climate change can be transmitted from the ocean or atmosphere to a grounded ice sheet. It is thus important to separate patterns of ice shelf change driven internally (from the ice sheet) from patterns driven externally (by the ocean or atmosphere), so that modern observations can be viewed in an appropriate context. Here, we focus on the Ross Ice Shelf (RIS), a major component of the West Antarctic Ice Sheet system and a feature known to experience variable ice flux from tributary ice streams and glaciers, for example, ice stream stagnation and glacier surges. We perturb a model of the Ross Ice Shelf with periodic influx variations, ice rise and ice plain grounding events, and iceberg calving in order to generate transients in the ice shelf flow and thickness. Characteristic patterns associated with those perturbations are identified using empirical orthogonal functions (EOFs). The leading EOFs reveal shelf-wide patterns of response to local perturbations that can be interpreted in terms of coupled mass and momentum balance. For example, speed changes on Byrd Glacier cause both thinning and thickening in a broad region that extends to Roosevelt Island. We calculate decay times at various locations for various perturbations and find that multi-decadal to century time scales are typical. Unique identification of responses to particular forcings may thus be difficult to achieve, and flow divergence cannot be assumed to be constant when interpreting observed changes in ice thickness. In reality, perturbations to the ice shelf do not occur individually; rather, the ice shelf contains a history of boundary perturbations. To explore the degree to which individual perturbations are separable from their ensemble, EOFs from individual events are combined in pairs and compared against experiments with the same periodic perturbation pairs. Residuals between these EOFs reveal the degree of interaction between distinct perturbations.
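The EOF decomposition used in this record is, computationally, a singular value decomposition of the anomaly matrix. A minimal sketch on a toy space-time field (the "ice shelf" here is a synthetic oscillating pattern plus noise, not RIS model output):

```python
import numpy as np

def eofs(field, n_modes=3):
    """Empirical orthogonal functions via SVD of the anomaly matrix.
    field: (n_times, n_points).  Returns the leading spatial patterns,
    their principal-component time series, and explained-variance fractions."""
    anom = field - field.mean(axis=0)
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    var_frac = s ** 2 / np.sum(s ** 2)
    return vt[:n_modes], u[:, :n_modes] * s[:n_modes], var_frac[:n_modes]

# toy field: one dominant periodic pattern (a stand-in for a forced response)
rng = np.random.default_rng(5)
t = np.linspace(0, 20, 200)[:, None]
x = np.linspace(0, 1, 50)[None, :]
signal = np.sin(2 * np.pi * t / 5.0) * np.cos(np.pi * x)
data = signal + 0.1 * rng.normal(size=(200, 50))
patterns, pcs, var = eofs(data)
print(var[0])   # the leading mode captures most of the variance
```

Comparing EOFs from single-perturbation runs against those from combined-perturbation runs, as the record describes, then amounts to projecting one set of patterns onto the other and inspecting the residuals.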
Ligare, Martin
2016-05-01
Multiple-pulse NMR experiments are a powerful tool for the investigation of molecules with coupled nuclear spins. The product operator formalism provides a way to understand the quantum evolution of an ensemble of weakly coupled spins in such experiments using some of the more intuitive concepts of classical physics and semi-classical vector representations. In this paper I present a new way in which to interpret the quantum evolution of an ensemble of spins. I recast the quantum problem in terms of mixtures of pure states of two spins whose expectation values evolve identically to those of classical moments. Pictorial representations of these classically evolving states provide a way to calculate the time evolution of ensembles of weakly coupled spins without the full machinery of quantum mechanics, offering insight to anyone who understands precession of magnetic moments in magnetic fields.
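The classical picture invoked in this record, expectation values evolving like classical moments, is straightforward to compute for a constant field: the moment simply rotates about the field axis by the Larmor angle. The sketch below uses the Rodrigues rotation formula (exact for a static field; gamma, the field, and the initial moment are illustrative values, not tied to any experiment in the paper):

```python
import numpy as np

def precess(m0, b, gamma, t):
    """Precession of a classical moment in a static field: rotate m0
    about the unit field axis by the Larmor angle theta = -gamma*|b|*t,
    using the Rodrigues rotation formula."""
    b = np.asarray(b, dtype=float)
    m0 = np.asarray(m0, dtype=float)
    axis = b / np.linalg.norm(b)
    theta = -gamma * np.linalg.norm(b) * t
    return (m0 * np.cos(theta)
            + np.cross(axis, m0) * np.sin(theta)
            + axis * np.dot(axis, m0) * (1 - np.cos(theta)))

m0 = np.array([1.0, 0.0, 0.3])     # transverse moment with a z component
b = np.array([0.0, 0.0, 1.0])      # static field along z
gamma = 2 * np.pi                  # one revolution per unit time
print(precess(m0, b, gamma, 0.25)) # quarter turn in the x-y plane
print(precess(m0, b, gamma, 1.0))  # one full Larmor period: back to the start
```

The component along the field is unchanged while the transverse component rotates, which is exactly the semiclassical vector picture the product operator formalism builds on.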
NASA Astrophysics Data System (ADS)
Ghirri, Alberto; Bonizzoni, Claudio; Troiani, Filippo; Affronte, Marco
The problem of coupling remote ensembles of two-level systems through cavity photons is revisited by using molecular spin centers and a high critical temperature superconducting coplanar resonator. By using PyBTM organic radicals, we achieved the strong coupling regime with values of the cooperativity reaching 4300 at 2 K. We show that up to three distinct spin ensembles are simultaneously coupled through the resonator mode. The ensembles are made physically distinguishable by chemically varying the g-factor and by exploiting the inhomogeneities of the applied magnetic field. The coherent mixing of the spin and field modes is demonstrated by the observed multiple anticrossing, along with the simulations performed within the input-output formalism, and quantified by suitable entropic measures.
Developing Stochastic Models as Inputs for High-Frequency Ground Motion Simulations
NASA Astrophysics Data System (ADS)
Savran, William Harvey
High-frequency (10 Hz) deterministic ground motion simulations are challenged by our limited understanding of the small-scale structure of the Earth's crust and of the rupture process during an earthquake. We will likely never obtain deterministic models that can accurately describe these processes down to the meter-scale lengths required for broadband wave propagation. Instead, we can attempt to explain the behavior, in a statistical sense, by including stochastic models defined by correlations observed in the natural earth and through physics-based simulations of the earthquake rupture process. Toward this goal, we develop stochastic models to address both of the primary considerations for deterministic ground motion simulations: namely, the description of the material properties in the crust, and broadband earthquake source descriptions. Using borehole sonic log data recorded in the Los Angeles basin, we estimate the spatial correlation structure of the small-scale fluctuations in P-wave velocities by determining the best-fitting parameters of a von Karman correlation function. We find that Hurst exponents, nu, between 0.0-0.2, vertical correlation lengths, az, of 15-150 m, and standard deviations, sigma, of about 5% characterize the variability in the borehole data. Using these parameters, we generated a stochastic model of velocity and density perturbations and combined it with leading seismic velocity models to perform a validation exercise for the 2008 Chino Hills, CA, earthquake using heterogeneous media. We find that models of velocity and density perturbations can have significant effects on the wavefield at frequencies as low as 0.3 Hz, with ensemble median values of various ground motion metrics varying by up to +/-50% at certain stations compared to those computed solely from the CVM. Finally, we develop a kinematic rupture generator based on dynamic rupture simulations on geometrically complex faults.
We analyze 100 dynamic rupture simulations on strike-slip faults ranging from Mw 6.4-7.2. We find that our dynamic simulations follow empirical scaling relationships for inter-plate strike-slip events, and provide source spectra comparable with an ω^-2 model. Our rupture generator reproduces GMPE medians and intra-event standard deviations of spectral accelerations for an ensemble of 10 Hz fully-deterministic ground motion simulations, as compared to NGA-West2 GMPE relationships up to 0.2 seconds.
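In one dimension, a von Karman medium with Hurst exponent nu = 0.5 has an exponential correlation function, which an AR(1) recursion reproduces exactly. The following sketch (illustrative only, not the author's method; parameter values echo the abstract's az ≈ 50 m and sigma ≈ 5%) synthesizes such a correlated velocity-perturbation profile and checks its lag-one correlation:

```python
import math
import random

def synthesize_profile(n, dz, a_z, sigma, seed=0):
    """1-D fractional velocity-perturbation profile with exponential
    (von Karman, nu = 0.5) correlation of length a_z and standard
    deviation sigma, generated by an AR(1) recursion on a grid of
    spacing dz. Illustrative sketch only."""
    rng = random.Random(seed)
    r = math.exp(-dz / a_z)  # lag-one correlation coefficient
    x = [rng.gauss(0.0, sigma)]
    for _ in range(n - 1):
        x.append(r * x[-1] + math.sqrt(1.0 - r * r) * rng.gauss(0.0, sigma))
    return x

def lag1_corr(x):
    """Sample lag-one autocorrelation of a series."""
    n = len(x)
    mu = sum(x) / n
    num = sum((x[i] - mu) * (x[i + 1] - mu) for i in range(n - 1))
    den = sum((xi - mu) ** 2 for xi in x)
    return num / den
```

For a 1 m grid and a 50 m correlation length the lag-one correlation should come out near exp(-1/50) ≈ 0.98, and the marginal standard deviation near the prescribed 5%.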
Space weather forecasting with a Multimodel Ensemble Prediction System (MEPS)
NASA Astrophysics Data System (ADS)
Schunk, R. W.; Scherliess, L.; Eccles, V.; Gardner, L. C.; Sojka, J. J.; Zhu, L.; Pi, X.; Mannucci, A. J.; Butala, M.; Wilson, B. D.; Komjathy, A.; Wang, C.; Rosen, G.
2016-07-01
The goal of the Multimodel Ensemble Prediction System (MEPS) program is to improve space weather specification and forecasting with ensemble modeling. Space weather can have detrimental effects on a variety of civilian and military systems and operations, and many of the applications pertain to the ionosphere and upper atmosphere. Space weather can affect over-the-horizon radars, HF communications, surveying and navigation systems, surveillance, spacecraft charging, power grids, pipelines, and the Federal Aviation Administration's (FAA) Wide Area Augmentation System (WAAS). Because of its importance, numerous space weather forecasting approaches are being pursued, including those involving empirical, physics-based, and data assimilation models. Clearly, if there are sufficient data, the data assimilation modeling approach is expected to be the most reliable, but different data assimilation models can produce different results. Therefore, like the meteorology community, we created a Multimodel Ensemble Prediction System (MEPS) for the Ionosphere-Thermosphere-Electrodynamics (ITE) system that is based on different data assimilation models. The MEPS ensemble is composed of seven physics-based data assimilation models for the ionosphere, ionosphere-plasmasphere, thermosphere, high-latitude ionosphere-electrodynamics, and middle- to low-latitude ionosphere-electrodynamics. Hence, multiple data assimilation models can be used to describe each region. A selected storm event that was reconstructed with four different data assimilation models covering the middle- and low-latitude ionosphere is presented and discussed. In addition, the effect of different data types on the reconstructions is shown.
Ensemble climate projections of mean and extreme rainfall over Vietnam
NASA Astrophysics Data System (ADS)
Raghavan, S. V.; Vu, M. T.; Liong, S. Y.
2017-01-01
A systematic ensemble high-resolution climate modelling study over Vietnam has been performed using the PRECIS model developed by the Hadley Centre in the UK. A 5-member subset of the 17-member Perturbed Physics Ensemble (PPE) of the Quantifying Uncertainty in Model Predictions (QUMP) project was simulated and analyzed. The PRECIS model simulations were conducted at a horizontal resolution of 25 km for the baseline period 1961-1990 and a future climate period 2061-2090 under scenario A1B. The results of model simulations show that the model was able to reproduce the mean state of climate over Vietnam when compared to observations. The annual cycles and seasonal averages of precipitation over different sub-regions of Vietnam show the ability of the model also to reproduce the observed peak and magnitude of monthly rainfall. The climate extremes of precipitation were also fairly well captured. Projections of future climate show both increases and decreases in the mean climate over different regions of Vietnam. The analyses of future extreme rainfall using the STARDEX precipitation indices show an increase in 90th percentile precipitation (P90p) over the northern provinces (15-25%) and central highlands (5-10%) and over southern Vietnam (up to 5%). The total number of wet days (Prcp) indicates a decrease of about 5-10% all over Vietnam. Consequently, an increase in the wet-day rainfall intensity (SDII) is likely, implying that the projected rainfall would be more severe and intense, with the potential to cause flooding in some regions. Risks due to extreme drought also exist in other regions where the number of wet days decreases. In addition, the maximum 5-day consecutive rainfall (R5d) increases by 20-25% over northern Vietnam but decreases by a similar range over central and southern Vietnam.
These results have strong implications for the management of water resources, agriculture, biodiversity and the economy, and serve as useful findings for policy makers to consider within a wider range of climate uncertainties.
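The precipitation indices named above can be computed directly from a daily rainfall series. The sketch below uses simplified, illustrative definitions (nearest-rank percentile, a hypothetical 1 mm wet-day threshold), not the official STARDEX formulations:

```python
def stardex_indices(daily_mm, wet_threshold=1.0):
    """Simplified precipitation indices from a daily rainfall series (mm):
    Prcp  - number of wet days (>= wet_threshold)
    SDII  - mean rainfall on wet days (wet-day intensity)
    P90p  - 90th percentile of wet-day amounts (nearest-rank)
    R5d   - maximum 5-day consecutive rainfall total
    Illustrative definitions only."""
    wet = sorted(x for x in daily_mm if x >= wet_threshold)
    n_wet = len(wet)
    sdii = sum(wet) / n_wet if n_wet else 0.0
    p90p = wet[max(0, int(0.9 * n_wet) - 1)] if n_wet else 0.0
    r5d = max(sum(daily_mm[i:i + 5]) for i in range(len(daily_mm) - 4))
    return {"Prcp": n_wet, "SDII": sdii, "P90p": p90p, "R5d": r5d}
```

Climate-change signals in these indices are then obtained by computing them for the baseline and future simulations and differencing.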
NASA Astrophysics Data System (ADS)
Slawinska, J. M.; Bartoszek, K.; Gabriel, C. J.
2016-12-01
Long-term predictions of changes in extreme event frequency are of utmost importance due to their high societal and economic impact. Yet, current projections are of limited skills as they rely on satellite records that are relatively short compared to the timescale of interest, and also due to the presence of a significant anthropogenic trend superimposed onto other low-frequency variabilities. Novel simulations of past climates provide unique opportunity to separate external perturbations from internal climate anomalies and to attribute the latter to systematic changes in different types of synoptic scale circulation and distributions of high-frequency events. Here we study such changes by employing the Last Millennium Ensemble of climate simulations carried out with the Community Earth System Model (CESM) at the U.S. National Center for Atmospheric Research, focusing in particular on decadal changes in frequency of extreme precipitation events over south-east Poland. We analyze low-frequency modulations of dominant patterns of synoptic scale circulations over Europe and their dependence on the Atlantic Meridional Overturning Circulation, along with their coupling with the North Atlantic Oscillation. Moreover, we examine whether some decades of persistently anomalous statistics of extreme events can be attributed to externally forced (e.g., via volcanic eruptions) perturbations of the North Atlantic climate. In the end, we discuss the possible linkages and physical mechanisms connecting volcanic eruptions, low-frequency variabilities of North Atlantic climate and changes in statistics of high impact weather, and compare briefly our results with some historical and paleontological records.
Geospace Environment Modeling 2008-2009 Challenge: Ground Magnetic Field Perturbations
NASA Technical Reports Server (NTRS)
Pulkkinen, A.; Kuznetsova, M.; Ridley, A.; Raeder, J.; Vapirev, A.; Weimer, D.; Weigel, R. S.; Wiltberger, M.; Millward, G.; Rastatter, L.;
2011-01-01
Acquiring quantitative metrics-based knowledge about the performance of various space physics modeling approaches is central for the space weather community. Quantification of the performance helps the users of the modeling products to better understand the capabilities of the models and to choose the approach that best suits their specific needs. Further, metrics-based analyses are important for addressing the differences between various modeling approaches and for measuring and guiding the progress in the field. In this paper, the metrics-based results of the ground magnetic field perturbation part of the Geospace Environment Modeling 2008-2009 Challenge are reported. Predictions made by 14 different models, including an ensemble model, are compared to geomagnetic observatory recordings from 12 different northern hemispheric locations. Five different metrics are used to quantify the model performances for four storm events. It is shown that the ranking of the models is strongly dependent on the type of metric used to evaluate the model performance. None of the models ranks near or at the top systematically for all of the metrics used. Consequently, one cannot pick an absolute "winner": the choice of the best model depends on the characteristics of the signal one is interested in. Model performances also vary from event to event. This is particularly clear for the root-mean-square difference and utility metric-based analyses. Further, the analyses indicate that, for some of the models, increasing the global magnetohydrodynamic model spatial resolution and including ring current dynamics improve the models' capability to generate more realistic ground magnetic field fluctuations.
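One of the metrics mentioned, root-mean-square difference, and the resulting model ranking can be sketched in a few lines (an illustrative sketch, not the Challenge's evaluation code; as the abstract stresses, a different metric can produce an entirely different ranking):

```python
import math

def rmse(obs, pred):
    """Root-mean-square difference between observed and modelled series."""
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))

def rank_models(obs, predictions):
    """Rank models (name -> predicted series) by RMS difference, best
    first. One possible metric among several; rankings are
    metric-dependent."""
    return sorted(predictions, key=lambda name: rmse(obs, predictions[name]))
```

A ranking by, say, prediction efficiency or a utility metric would be computed the same way with `rmse` swapped out, which is exactly why no single "winner" emerges.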
Meson and baryon dispersion relations with Brillouin fermions
NASA Astrophysics Data System (ADS)
Dürr, Stephan; Koutsou, Giannis; Lippert, Thomas
2012-12-01
We study the dispersion relations of mesons and baryons built from Brillouin quarks on one Nf=2 gauge ensemble provided by QCDSF. For quark masses up to the physical strange quark mass, there is hardly any improvement over the Wilson discretization, if either action is link-smeared and tree-level clover improved. For quark masses in the range of the physical charm quark mass, the Brillouin action still shows a perfect relativistic behavior, while the Wilson action induces severe cutoff effects. As an application we determine the masses of the Ωc0, Ωcc+ and Ωccc++ baryons on that ensemble.
NASA Astrophysics Data System (ADS)
Pribram-Jones, Aurora
Warm dense matter (WDM) is a high energy phase between solids and plasmas, with characteristics of both. It is present in the centers of giant planets, within the earth's core, and on the path to ignition of inertial confinement fusion. The high temperatures and pressures of warm dense matter lead to complications in its simulation, as both classical and quantum effects must be included. One of the most successful simulation methods is density functional theory-molecular dynamics (DFT-MD). Despite great success in a diverse array of applications, DFT-MD remains computationally expensive and it neglects the explicit temperature dependence of electron-electron interactions known to exist within exact DFT. Finite-temperature density functional theory (FT DFT) is an extension of the wildly successful ground-state DFT formalism via thermal ensembles, broadening its quantum mechanical treatment of electrons to include systems at non-zero temperatures. Exact mathematical conditions have been used to predict the behavior of approximations in limiting conditions and to connect FT DFT to the ground-state theory. An introduction to FT DFT is given within the context of ensemble DFT and the larger field of DFT is discussed for context. Ensemble DFT is used to describe ensembles of ground-state and excited systems. Exact conditions in ensemble DFT and the performance of approximations depend on ensemble weights. Using an inversion method, exact Kohn-Sham ensemble potentials are found and compared to approximations. The symmetry eigenstate Hartree-exchange approximation is in good agreement with exact calculations because of its inclusion of an ensemble derivative discontinuity. Since ensemble weights in FT DFT are temperature-dependent Fermi weights, this insight may help develop approximations well-suited to both ground-state and FT DFT. 
A novel, highly efficient approach to free energy calculations, finite-temperature potential functional theory, is derived, which has the potential to transform the simulation of warm dense matter. As a semiclassical method, it connects the normally disparate regimes of cold condensed matter physics and hot plasma physics. This orbital-free approach captures the smooth classical density envelope and quantum density oscillations that are both crucial to accurate modeling of materials where temperature and pressure effects are influential.
NASA Astrophysics Data System (ADS)
Tinker, Jonathan; Palmer, Matthew; Lowe, Jason; Howard, Tom
2017-04-01
The North Sea, and the wider Northwest European Shelf seas (NWS), are economically, environmentally, and culturally important for a number of European countries. They are protected by European legislation, often with specific reference to the potential impacts of climate change. Coastal climate change projections are an important source of information for effective management of European shelf seas. For example, potential changes in the marine environment are a key component of the climate change risk assessments (CCRAs) carried out under the UK Climate Change Act. We use the NEMO shelf seas model combined with CMIP5 climate model and EURO-CORDEX regional atmospheric model data to generate new simulations of the NWS. Building on previous work using a climate model perturbed physics ensemble and the POLCOMS shelf seas model, this new model setup is used to provide a first indication of the uncertainties associated with: (i) the driving climate model; (ii) the atmospheric downscaling model; (iii) the shelf seas downscaling model; (iv) the choice of climate change scenario. Our analysis considers a range of physical marine impacts and the drivers of coastal variability and change, including sea level and the propagation of open ocean signals onto the shelf. The simulations are being carried out as part of the UK Climate Projections 2018 (UKCP18) and will feed into the following UK CCRA.
NASA Astrophysics Data System (ADS)
Lee, Lindsay; Mann, Graham; Carslaw, Ken; Toohey, Matthew; Aquila, Valentina
2016-04-01
The World Climate Research Programme's SPARC initiative has a new international activity "Stratospheric Sulphur and its Role in Climate" (SSiRC) to better understand changes in stratospheric aerosol and precursor gaseous sulphur species. One component of SSiRC involves an intercomparison "ISA-MIP" of composition-climate models that simulate the stratospheric aerosol layer interactively. Within PoEMS, each modelling group will run a "perturbed physics ensemble" (PPE) of interactive stratospheric aerosol (ISA) simulations of the Pinatubo eruption, varying several uncertain parameters associated with the eruption's SO2 emissions and model processes. A powerful new technique to quantify and attribute sources of uncertainty in complex global models is described by Lee et al. (2011, ACP). The analysis uses Gaussian emulation to derive a probability density function (pdf) of predicted quantities, essentially interpolating the PPE results in multi-dimensional parameter space. Once trained on the ensemble, the fast Gaussian emulator enables a Monte Carlo simulation and a full variance-based sensitivity analysis. The approach has already been used effectively by Carslaw et al. (2013, Nature) to quantify the uncertainty in the cloud albedo effect forcing from a 3-D global aerosol-microphysics model, allowing the sensitivity of different predicted quantities to be compared across uncertainties in natural and anthropogenic emission types and structural parameters in the models. Within ISA-MIP, each group will carry out a PPE of runs, with the subsequent emulator analysis assessing the uncertainty in the volcanic forcings predicted by each model. In this poster presentation we will give an outline of the "PoEMS" analysis, describing the uncertain parameters to be varied and the relevance to further understanding differences identified in previous international stratospheric aerosol assessments.
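The end product of the emulator step is a variance-based sensitivity analysis: first-order Sobol indices, Var(E[Y|X_i]) / Var(Y), attributing output variance to each uncertain parameter. A crude Monte Carlo sketch (binning samples instead of training a Gaussian-process emulator, which is what PoEMS actually does; the test function is a made-up stand-in for a model output) looks like this:

```python
import random

def first_order_sobol(f, n=20000, bins=20, seed=1):
    """Crude first-order Sobol indices of f(x1, x2) with independent
    U(0,1) inputs, estimated by binning Monte Carlo samples on each
    input and taking the variance of the conditional means.
    Illustrative sketch only; a trained emulator replaces f in practice."""
    rng = random.Random(seed)
    xs = [(rng.random(), rng.random()) for _ in range(n)]
    ys = [f(x1, x2) for x1, x2 in xs]
    mu = sum(ys) / n
    var_y = sum((y - mu) ** 2 for y in ys) / n
    indices = []
    for dim in range(2):
        sums, counts = [0.0] * bins, [0] * bins
        for (x1, x2), y in zip(xs, ys):
            b = min(int((x1 if dim == 0 else x2) * bins), bins - 1)
            sums[b] += y
            counts[b] += 1
        cond_means = [s / c for s, c in zip(sums, counts) if c]
        var_cond = sum((m - mu) ** 2 for m in cond_means) / len(cond_means)
        indices.append(var_cond / var_y)
    return indices
```

For a linear toy output Y = 3*X1 + X2 the indices should come out near 0.9 and 0.1, i.e. most of the forcing uncertainty would be attributed to the first parameter.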
B → Dℓν form factors at nonzero recoil and |V_cb| from 2+1-flavor lattice QCD
Bailey, Jon A.
2015-08-10
We present the first unquenched lattice-QCD calculation of the hadronic form factors for the exclusive decay B¯→Dℓν¯ at nonzero recoil. We carry out numerical simulations on 14 ensembles of gauge-field configurations generated with 2+1 flavors of asqtad-improved staggered sea quarks. The ensembles encompass a wide range of lattice spacings (approximately 0.045 to 0.12 fm) and ratios of light (up and down) to strange sea-quark masses ranging from 0.05 to 0.4. For the b and c valence quarks we use improved Wilson fermions with the Fermilab interpretation, while for the light valence quarks we use asqtad-improved staggered fermions. We extrapolate our results to the physical point using rooted staggered heavy-light meson chiral perturbation theory. We then parametrize the form factors and extend them to the full kinematic range using model-independent functions based on analyticity and unitarity. We present our final results for f_+(q^2) and f_0(q^2), including statistical and systematic errors, as coefficients of a series in the variable z and the covariance matrix between these coefficients. We then fit the lattice form-factor data jointly with the experimentally measured differential decay rate from BABAR to determine the CKM matrix element |V_cb| = (39.6 ± 1.7_QCD+exp ± 0.2_QED) × 10^-3. As a byproduct of the joint fit we obtain the form factors with improved precision at large recoil. Finally, we use them to update our calculation of the ratio R(D) in the Standard Model, which yields R(D) = 0.299(11).
NASA Astrophysics Data System (ADS)
Soundharajan, Bankaru-Swamy; Adeloye, Adebayo J.; Remesan, Renji
2016-07-01
This study employed a Monte Carlo simulation approach to characterise the uncertainties in climate-change-induced variations in the storage requirements and performance (reliability (time- and volume-based), resilience, vulnerability and sustainability) of surface water reservoirs. Using a calibrated rainfall-runoff (R-R) model, the baseline runoff scenario was first simulated. The R-R inputs (rainfall and temperature) were then perturbed using plausible delta-changes to produce simulated climate change runoff scenarios. Stochastic models of the runoff were developed and used to generate ensembles of both the current and climate-change-perturbed future runoff scenarios. The resulting runoff ensembles were used to force simulation models of the behaviour of the reservoir to produce 'populations' of the required reservoir storage capacity to meet demands, and of the performance indices. Comparing these between the current and perturbed scenarios provided the population of climate change effects, which was then analysed to determine the variability in the impacts. The methodology was applied to the Pong reservoir on the Beas River in northern India. The reservoir serves irrigation and hydropower needs, and the hydrology of the catchment is highly influenced by Himalayan seasonal snow and glaciers, and by Monsoon rainfall, both of which are predicted to change due to climate change. The results show that required reservoir capacity is highly variable, with a coefficient of variation (CV) as high as 0.3 as the future climate becomes drier. Of the performance indices, vulnerability recorded the highest variability (CV up to 0.5) while the volume-based reliability was the least variable.
Such variabilities or uncertainties will, no doubt, complicate the development of climate change adaptation measures; however, knowledge of their sheer magnitudes as obtained in this study will help in the formulation of appropriate policy and technical interventions for sustaining and possibly enhancing water security for irrigation and other uses served by Pong reservoir.
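The behaviour simulation and performance indices referred to above can be sketched with a simple mass-balance loop (an illustrative sketch of Hashimoto-style reliability/resilience/vulnerability, not the study's code; evaporation and more detailed spill accounting are ignored):

```python
def simulate_reservoir(inflows, demand, capacity, s0=None):
    """Behaviour simulation of a single reservoir: storage is updated by
    inflow minus release, truncated at capacity (spill) and zero.
    Returns (time-based reliability, resilience, vulnerability).
    Simplified illustrative definitions."""
    s = capacity if s0 is None else s0
    shortfalls, failures, recoveries = [], 0, 0
    prev_fail = False
    for q in inflows:
        avail = s + q
        release = min(demand, avail)          # meet demand if possible
        s = min(avail - release, capacity)    # excess spills
        fail = release < demand
        if fail:
            failures += 1
            shortfalls.append(demand - release)
        if prev_fail and not fail:
            recoveries += 1                   # recovery from failure
        prev_fail = fail
    n = len(inflows)
    reliability = 1.0 - failures / n
    resilience = recoveries / failures if failures else 1.0
    vulnerability = max(shortfalls) / demand if shortfalls else 0.0
    return reliability, resilience, vulnerability
```

Running this over an ensemble of stochastic inflow traces, for current and delta-change-perturbed climates, yields the 'populations' of indices whose spread (e.g. CV) the study reports.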
Wu, Shenping; Liu, Jun; Perz-Edwards, Robert J.; Tregear, Richard T.; Winkler, Hanspeter; Franzini-Armstrong, Clara; Sasaki, Hiroyuki; Goldman, Yale E.; Reedy, Michael K.; Taylor, Kenneth A.
2012-01-01
The application of rapidly applied length steps to actively contracting muscle is a classic method for synchronizing the response of myosin cross-bridges so that the average response of the ensemble can be measured. Alternatively, electron tomography (ET) is a technique that can report the structure of the individual members of the ensemble. We probed the structure of active myosin motors (cross-bridges) by applying 0.5% changes in length (either a stretch or a release) within 2 ms to isometrically contracting insect flight muscle (IFM) fibers, followed after 5-6 ms by rapid freezing against a liquid-helium-cooled copper mirror. ET of freeze-substituted fibers, embedded and thin-sectioned, provides 3-D cross-bridge images, sorted by multivariate data analysis into ∼40 classes, distinct in average structure, population size and lattice distribution. Individual actin subunits are resolved, facilitating quasi-atomic modeling of each class average to determine its binding strength (weak or strong) to actin. ∼98% of strong-binding acto-myosin attachments present after a length perturbation are confined to "target zones" of only two actin subunits located exactly midway between successive troponin complexes along each long-pitch helical repeat of actin. Significant changes in the types, distribution and structure of actin-myosin attachments occurred in a manner consistent with the mechanical transients. Most dramatic is the near disappearance, after either length perturbation, of a class of weak-binding cross-bridges, attached within the target zone, that are highly likely to be precursors of strong-binding cross-bridges. These weak-binding cross-bridges were originally observed in isometrically contracting IFM. Their disappearance following a quick stretch or release can be explained by a recent kinetic model for muscle contraction, as behaviour consistent with their identification as precursors of strong-binding cross-bridges.
The results provide a detailed model for contraction in IFM that may be applicable to contraction in other types of muscle. PMID:22761792
Fast and Slow Precipitation Responses to Individual Climate Forcers: A PDRMIP Multimodel Study
NASA Technical Reports Server (NTRS)
Samset, B. H.; Myhre, G.; Forster, P.M.; Hodnebrog, O.; Andrews, T.; Faluvegi, G.; Flaschner, D.; Kasoar, M.; Kharin, V.; Kirkevag, A.;
2016-01-01
Precipitation is expected to respond differently to various drivers of anthropogenic climate change. We present the first results from the Precipitation Driver and Response Model Intercomparison Project (PDRMIP), where nine global climate models have perturbed CO2, CH4, black carbon, sulfate, and solar insolation. We divide the resulting changes to global mean and regional precipitation into fast responses that scale with changes in atmospheric absorption and slow responses scaling with surface temperature change. While the overall features are broadly similar between models, we find significant regional intermodel variability, especially over land. Black carbon stands out as a component that may cause significant model diversity in predicted precipitation change. Processes linked to atmospheric absorption are less consistently modeled than those linked to top-of-atmosphere radiative forcing. We identify a number of land regions where the model ensemble consistently predicts that fast precipitation responses to climate perturbations dominate over the slow, temperature-driven responses.
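A common way to separate the two response components described above is to regress the precipitation change on the surface temperature change: the slope estimates the slow, temperature-driven sensitivity and the intercept the fast, absorption-driven adjustment. A minimal ordinary-least-squares sketch of that decomposition (illustrative only, not the PDRMIP analysis code):

```python
def fast_slow_split(dT, dP):
    """Fit dP ~ fast + k * dT by ordinary least squares.
    Returns (fast, k): the intercept 'fast' estimates the rapid
    (temperature-independent) precipitation response, the slope k the
    slow hydrological sensitivity. Simplified illustrative sketch."""
    n = len(dT)
    mt = sum(dT) / n
    mp = sum(dP) / n
    k = (sum((t - mt) * (p - mp) for t, p in zip(dT, dP))
         / sum((t - mt) ** 2 for t in dT))
    fast = mp - k * mt
    return fast, k
```

Applied to, say, annual means from an abrupt-perturbation experiment, a perfectly linear response dP = 3 + 2*dT would split into a fast component of 3 and a sensitivity of 2.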
Kinetic field theory: exact free evolution of Gaussian phase-space correlations
NASA Astrophysics Data System (ADS)
Fabis, Felix; Kozlikin, Elena; Lilow, Robert; Bartelmann, Matthias
2018-04-01
In recent work we developed a description of cosmic large-scale structure formation in terms of non-equilibrium ensembles of classical particles, with time evolution obtained in the framework of a statistical field theory. In these works, the initial correlations between particles sampled from random Gaussian density and velocity fields have so far been treated perturbatively or restricted to pure momentum correlations. Here we treat the correlations between all phase-space coordinates exactly by adopting a diagrammatic language for the different forms of correlations, directly inspired by the Mayer cluster expansion. We will demonstrate that explicit expressions for phase-space density cumulants of arbitrary n-point order, which fully capture the non-linear coupling of free streaming kinematics due to initial correlations, can be obtained from a simple set of Feynman rules. These cumulants will be the foundation for future investigations of perturbation theory in particle interactions.
Stability of Granular Packings Jammed under Gravity: Avalanches and Unjamming
NASA Astrophysics Data System (ADS)
Merrigan, Carl; Birwa, Sumit; Tewari, Shubha; Chakraborty, Bulbul
Granular avalanches indicate the sudden destabilization of a jammed state due to a perturbation. We propose that the perturbation needed depends on the entire force network of the jammed configuration. Some networks are stable, while others are fragile, leading to the unpredictability of avalanches. To test this claim, we simulated an ensemble of jammed states in a hopper using LAMMPS. These simulations were motivated by experiments with vibrated hoppers where the unjamming times followed power-law distributions. We compare the force networks for these simulated states with respect to their overall stability. The states are classified by how long they remain stable when subject to continuous vibrations. We characterize the force networks through both their real space geometry and representations in the associated force-tile space, extending this tool to jammed states with body forces. Supported by NSF Grant DMR1409093 and DGE1068620.
NASA Astrophysics Data System (ADS)
Arellano, A. F., Jr.; Tang, W.
2017-12-01
Assimilating observational data of chemical constituents into a modeling system is a powerful approach for assessing changes in atmospheric composition and estimating associated emissions. However, the results of such chemical data assimilation (DA) experiments are largely subject to various key factors such as: a) a priori information, b) error specification and representation, and c) structural biases in the modeling system. Here we investigate the sensitivity of ensemble-based data assimilation state and emission estimates to these key factors. We focus on the assimilation performance of the Community Earth System Model (CESM)/CAM-Chem with the Data Assimilation Research Testbed (DART) in representing biomass burning plumes in Amazonia during the 2008 fire season. We conduct the following ensemble DA experiments assimilating MOPITT CO: 1) use of monthly-averaged NCAR FINN surface fire emissions, 2) use of daily FINN surface fire emissions, 3) use of daily FINN emissions with climatological injection heights, and 4) use of perturbed FINN emission parameters to represent not only the uncertainties in combustion activity but also those in combustion efficiency. We show key diagnostics of assimilation performance for these experiments and verify them with available ground-based and aircraft-based measurements.
Emergence of a Stable Cortical Map for Neuroprosthetic Control
Ganguly, Karunesh; Carmena, Jose M.
2009-01-01
Cortical control of neuroprosthetic devices is known to require neuronal adaptations. It remains unclear whether a stable cortical representation for prosthetic function can be stored and recalled in a manner that mimics our natural recall of motor skills. Especially in light of the mixed evidence for a stationary neuron-behavior relationship in cortical motor areas, understanding this relationship during long-term neuroprosthetic control can elucidate principles of neural plasticity as well as improve prosthetic function. Here, we paired stable recordings from ensembles of primary motor cortex neurons in macaque monkeys with a constant decoder that transforms neural activity to prosthetic movements. Proficient control was closely linked to the emergence of a surprisingly stable pattern of ensemble activity, indicating that the motor cortex can consolidate a neural representation for prosthetic control in the presence of a constant decoder. The importance of such a cortical map was evident in that small perturbations to either the size of the neural ensemble or to the decoder could reversibly disrupt function. Moreover, once a cortical map became consolidated, a second map could be learned and stored. Thus, long-term use of a neuroprosthetic device is associated with the formation of a cortical map for prosthetic function that is stable across time, readily recalled, resistant to interference, and resembles a putative memory engram. PMID:19621062
Cortical Specializations Underlying Fast Computations
Volgushev, Maxim
2016-01-01
The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex, while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is the time interval that neuronal ensembles need to process changes at their input and communicate the results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computations by cortical ensembles is only a few milliseconds (1 to 3 ms), considerably faster than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988
Modelling of Heat and Moisture Loss Through NBC Ensembles
1991-11-01
the heat and moisture transport through various NBC clothing ensembles. The analysis involves simplifying the three-dimensional physical problem of clothing on a person to that of a one-dimensional problem of flow through parallel layers of clothing and air. Body temperatures are calculated based on prescribed work rates, ambient conditions and clothing properties. Sweat response and respiration rates are estimated based on empirical data to ...
Time arrow is influenced by the dark energy.
Allahverdyan, A E; Gurzadyan, V G
2016-05-01
The arrow of time and the accelerated expansion are two fundamental empirical facts of the universe. We advance the viewpoint that the dark energy (positive cosmological constant) accelerating the expansion of the universe also supports the time asymmetry. It is related to the decay of metastable states under generic perturbations, as we show using the example of a microcanonical ensemble. These states would not be metastable without dark energy. Dark energy also ensures hyperbolic motion, leading to dynamical entropy production at a rate determined by the cosmological constant.
Refined counting of necklaces in one-loop N=4 SYM
NASA Astrophysics Data System (ADS)
Suzuki, Ryo
2017-06-01
We compute the grand partition function of N=4 SYM at one loop in the SU(2) sector with general chemical potentials, extending the results of Pólya's theorem. We make use of finite group theory, applicable to all orders of the perturbative 1/N_c expansion. We show that only the planar terms contribute to the grand partition function, which is therefore equal to the grand partition function of an ensemble of XXX_{1/2} spin chains. We discuss how the Hagedorn temperature changes on the complex plane of chemical potentials.
Insights into the deterministic skill of air quality ensembles ...
Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the other three methods rely on assigning optimum weights to members or on constraining the ensemble to those members that meet certain conditions in the time or frequency domain. The two different datasets were created for the first and second phase of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground-level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate O3 better than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each stati
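The contrast between the unconditional ensemble mean and a weighted ensemble can be sketched with synthetic data; the inverse-MSE weighting below is one plausible choice of "optimum weights", not the specific methods evaluated in the work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hourly ozone observations and four imperfect model forecasts,
# each with its own bias and noise level.
obs = 60.0 + 10.0 * np.sin(np.linspace(0, 8 * np.pi, 240))
models = np.stack([obs + rng.normal(b, s, obs.size)
                   for b, s in [(5, 4), (-3, 6), (1, 9), (8, 3)]])

# Unconditional ensemble mean: equal weight for every member.
ens_mean = models.mean(axis=0)

# Skill-based weights: inverse mean-squared error on a training split,
# normalized so the weights sum to one.
train = slice(0, 120)
mse = ((models[:, train] - obs[train]) ** 2).mean(axis=1)
w = (1.0 / mse) / (1.0 / mse).sum()
weighted = w @ models

def rmse(forecast):
    """RMSE on the held-out half of the series."""
    test = slice(120, None)
    return float(np.sqrt(((forecast[test] - obs[test]) ** 2).mean()))

print(rmse(ens_mean), rmse(weighted))
```

Skilful members (low bias, low noise) dominate the weighted mean, while the unconditional mean gives every member, however poor, the same influence.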
Uncertainty Quantification in Climate Modeling and Projection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qian, Yun; Jackson, Charles; Giorgi, Filippo
The projection of future climate is one of the most complex problems undertaken by the scientific community. Although scientists have been striving to better understand the physical basis of the climate system and to improve climate models, the overall uncertainty in projections of future climate has not been significantly reduced (e.g., from the IPCC AR4 to AR5). With the rapid increase of complexity in Earth system models, reducing uncertainties in climate projections becomes extremely challenging. Since uncertainties always exist in climate models, interpreting the strengths and limitations of future climate projections is key to evaluating risks, and climate change information for use in Vulnerability, Impact, and Adaptation (VIA) studies should be provided with both well-characterized and well-quantified uncertainty. The workshop aimed at providing participants, many of them from developing countries, information on strategies to quantify the uncertainty in climate model projections and assess the reliability of climate change information for decision-making. The program included a mixture of lectures on fundamental concepts in Bayesian inference and sampling, applications, and hands-on computer laboratory exercises employing software packages for Bayesian inference, Markov Chain Monte Carlo methods, and global sensitivity analyses. The lectures covered a range of scientific issues underlying the evaluation of uncertainties in climate projections, such as the effects of uncertain initial and boundary conditions, uncertain physics, and limitations of observational records. Progress in quantitatively estimating uncertainties in hydrologic, land surface, and atmospheric models at both regional and global scales was also reviewed. The application of Uncertainty Quantification (UQ) concepts to coupled climate system models is still in its infancy.
The Coupled Model Intercomparison Project (CMIP) multi-model ensemble currently represents the primary data for assessing reliability and uncertainties of climate change information. An alternative approach is to generate similar ensembles by perturbing parameters within a single-model framework. One of the workshop's objectives was to give participants a deeper understanding of these approaches within a Bayesian statistical framework. However, there remain significant challenges to be resolved before UQ can be applied in a convincing way to climate models and their projections.
HESS Opinions "Should we apply bias correction to global and regional climate model data?"
NASA Astrophysics Data System (ADS)
Ehret, U.; Zehe, E.; Wulfmeyer, V.; Warrach-Sagi, K.; Liebert, J.
2012-04-01
Despite considerable progress in recent years, output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for its subsequent application in climate change impact studies, has now become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Bias correction largely neglects feedback mechanisms, and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases the agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, which may lead to avoidable forejudging by end users and decision makers. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction, and propose ways to cope with biased output of Circulation Models in the short term and to reduce the bias in the long term.
The most promising strategy for improved future Global and Regional Circulation Model simulations is the increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating the entire uncertainty range associated with climate change predictions openly and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological community and end users of climate change impact studies.
Utilization of Short-Simulations for Tuning High-Resolution Climate Model
NASA Astrophysics Data System (ADS)
Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.
2016-12-01
Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning, guided by a combination of short (< 10 days) and longer (1 year) Perturbed Parameter Ensemble (PPE) simulations at low resolution to identify model feature sensitivity to parameter changes. The CAPT tests have been found effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that the approach is also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "home in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations.
An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter used to verify the results in a climate context, along with assessment in greater detail once an educated set of parameter choices is selected. Limitations of using short-term simulations for tuning climate models are also discussed.
Ensemble of European regional climate simulations for the winter of 2013 and 2014 from HadAM3P-RM3P
NASA Astrophysics Data System (ADS)
Schaller, Nathalie; Sparrow, Sarah N.; Massey, Neil R.; Bowery, Andy; Miller, Jonathan; Wilson, Simon; Wallom, David C. H.; Otto, Friederike E. L.
2018-04-01
Large data sets used to study the impact of anthropogenic climate change on the 2013/14 floods in the UK are provided. The data consist of perturbed initial conditions simulations using the Weather@Home regional climate modelling framework. Two different base conditions are available: Actual, including atmospheric forcings (anthropogenic greenhouse gases and human-induced aerosols) as at present, and Natural, with these forcings removed. The data set is made up of 13 different ensembles (2 actual and 11 natural), each having more than 7500 members. The data are available as NetCDF V3 files representing monthly data within the period of interest (1 December 2013 to 15 February 2014), both for a specified European region at 50 km horizontal resolution and globally at N96 resolution. The data are stored in the UK Natural Environment Research Council Centre for Environmental Data Analysis repository.
Dai, Tie; Schutgens, Nick A J; Goto, Daisuke; Shi, Guangyu; Nakajima, Teruyuki
2014-12-01
A new global aerosol assimilation system adopting a more complex icosahedral grid configuration is developed. Sensitivity tests for the assimilation system are performed utilizing satellite-retrieved aerosol optical depth (AOD) from the Moderate Resolution Imaging Spectroradiometer (MODIS), and the results over Eastern Asia are analyzed. The assimilated results are validated against independent Aerosol Robotic Network (AERONET) observations. Our results reveal that the ensemble and local patch sizes have little effect on the assimilation performance, whereas the ensemble perturbation method has the largest effect. Assimilation has a significantly positive effect on the simulated AOD field, improving agreement at all 12 AERONET sites over Eastern Asia in terms of both the correlation coefficient and the root mean square difference (assimilation efficiency). Meanwhile, better agreement of the Ångström Exponent (AE) field is achieved for 8 of the 12 sites, even though only AOD is assimilated. Copyright © 2014 Elsevier Ltd. All rights reserved.
Mushroom body output neurons encode valence and guide memory-based action selection in Drosophila
Aso, Yoshinori; Sitaraman, Divya; Ichinose, Toshiharu; Kaun, Karla R; Vogt, Katrin; Belliart-Guérin, Ghislain; Plaçais, Pierre-Yves; Robie, Alice A; Yamagata, Nobuhiro; Schnaitmann, Christopher; Rowell, William J; Johnston, Rebecca M; Ngo, Teri-T B; Chen, Nan; Korff, Wyatt; Nitabach, Michael N; Heberlein, Ulrike; Preat, Thomas; Branson, Kristin M; Tanimoto, Hiromu; Rubin, Gerald M
2014-01-01
Animals discriminate stimuli, learn their predictive value and use this knowledge to modify their behavior. In Drosophila, the mushroom body (MB) plays a key role in these processes. Sensory stimuli are sparsely represented by ∼2000 Kenyon cells, which converge onto 34 output neurons (MBONs) of 21 types. We studied the role of MBONs in several associative learning tasks and in sleep regulation, revealing the extent to which information flow is segregated into distinct channels and suggesting possible roles for the multi-layered MBON network. We also show that optogenetic activation of MBONs can, depending on cell type, induce repulsion or attraction in flies. The behavioral effects of MBON perturbation are combinatorial, suggesting that the MBON ensemble collectively represents valence. We propose that local, stimulus-specific dopaminergic modulation selectively alters the balance within the MBON network for those stimuli. Our results suggest that valence encoded by the MBON ensemble biases memory-based action selection. DOI: http://dx.doi.org/10.7554/eLife.04580.001 PMID:25535794
Measuring effective temperatures in a generalized Gibbs ensemble
NASA Astrophysics Data System (ADS)
Foini, Laura; Gambassi, Andrea; Konik, Robert; Cugliandolo, Leticia F.
2017-05-01
The local physical properties of an isolated quantum statistical system in the stationary state reached long after a quench are generically described by the Gibbs ensemble, which involves only its Hamiltonian and the temperature as a parameter. If the system is instead integrable, additional quantities conserved by the dynamics intervene in the description of the stationary state. The resulting generalized Gibbs ensemble involves a number of temperature-like parameters, the determination of which is practically difficult. Here we argue that in a number of simple models these parameters can be effectively determined by using fluctuation-dissipation relationships between response and correlation functions of natural observables, quantities which are accessible in experiments.
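The fluctuation-dissipation route sketched above can be made concrete. In equilibrium, the linear response R and the two-time correlation C of an observable O satisfy the fluctuation-dissipation theorem; an effective temperature is then the parameter that restores this relation in the post-quench stationary state (a standard definition of T_eff, stated here for orientation; the specific observables used in the paper are not reproduced):

```latex
C(t,t') = \langle O(t)\,O(t')\rangle, \qquad
R(t,t') = \left.\frac{\delta \langle O(t)\rangle}{\delta h(t')}\right|_{h=0}, \qquad
R(t,t') = \frac{1}{T_{\mathrm{eff}}}\,\frac{\partial C(t,t')}{\partial t'} \quad (t>t'),
```

where h is a field conjugate to O. Plotting the integrated response against C then yields a straight line of slope 1/T_eff whenever such an effective temperature exists for that observable.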
Bayesian refinement of protein structures and ensembles against SAXS data using molecular dynamics
Shevchuk, Roman; Hub, Jochen S.
2017-01-01
Small-angle X-ray scattering (SAXS) is an increasingly popular technique used to study protein structures and ensembles in solution. However, the refinement of structures and ensembles against SAXS data is often ambiguous due to the low information content of SAXS data, unknown systematic errors, and unknown scattering contributions from the solvent. We offer a solution to such problems by combining Bayesian inference with all-atom molecular dynamics simulations and explicit-solvent SAXS calculations. The Bayesian formulation correctly weights the SAXS data against prior physical knowledge, quantifies the precision or ambiguity of fitted structures and ensembles, and accounts for unknown systematic errors due to poor buffer matching. The method further provides a probabilistic criterion for identifying the number of states required to explain the SAXS data. The method is validated by refining ensembles of a periplasmic binding protein against calculated SAXS curves. Subsequently, we derive the solution ensembles of the eukaryotic chaperone heat shock protein 90 (Hsp90) against experimental SAXS data. We find that the SAXS data of the apo state of Hsp90 are compatible with a single wide-open conformation, whereas the SAXS data of Hsp90 bound to ATP or to an ATP analogue strongly suggest heterogeneous ensembles of a closed and a wide-open state. PMID:29045407
Inner Radiation Belt Dynamics and Climatology
NASA Astrophysics Data System (ADS)
Guild, T. B.; O'Brien, P. P.; Looper, M. D.
2012-12-01
We present preliminary results of inner belt proton data assimilation using an augmented version of the Selesnick et al. Inner Zone Model (SIZM). By varying modeled physics parameters and solar particle injection parameters to generate many ensembles of the inner belt, then optimizing the ensemble weights according to inner belt observations from SAMPEX/PET at LEO and HEO/DOS at high altitude, we obtain the best-fit state of the inner belt. We need to fully sample the range of solar proton injection sources among the ensemble members to ensure reasonable agreement between the model ensembles and observations. Once this is accomplished, we find the method is fairly robust. We will demonstrate the data assimilation by presenting an extended interval of solar proton injections and losses, illustrating how these short-term dynamics dominate long-term inner belt climatology.
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to a large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical errors, and show that the degradation in the parameter estimation is restored if we employ the prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction if the effects up to larger wave numbers in the nonlinear regime can be included.
NASA Astrophysics Data System (ADS)
Ntegeka, Victor; Willems, Patrick; Baguis, Pierre; Roulin, Emmanuel
2015-04-01
It is advisable to account for a wide range of uncertainty by including the maximum possible number of climate models and scenarios in assessments of future impacts. As this is not always feasible, impact assessments are inevitably performed with a limited set of scenarios. The development of tailored scenarios is a challenge that needs more attention as the number of available climate change simulations grows. Whether these scenarios are representative enough for climate change impacts is a question that needs addressing. This study presents a methodology for constructing tailored scenarios for assessing runoff flows, including extreme conditions (peak flows), from an ensemble of future climate change signals of precipitation and potential evapotranspiration (ETo) derived from climate model simulations. The aim of the tailoring process is to formulate scenarios that can optimally represent the uncertainty spectrum of climate scenarios. These tailored scenarios have the advantage of being few in number as well as having a clear description of the seasonal variation of the climate signals, hence allowing easy interpretation of the implications of future changes. The tailoring process requires an analysis of the hydrological impacts of the likely future change signals from all available climate model simulations in a simplified (computationally less expensive) impact model. Historical precipitation and ETo time series are perturbed with the climate change signals based on a quantile perturbation technique that accounts for the changes in extremes. For precipitation, the change in wet-day frequency is taken into account using a Markov-chain approach. The resulting hydrological impacts from the perturbed time series are then subdivided into high, mean and low hydrological impacts using a quantile change analysis. From this classification, the corresponding precipitation and ETo change factors are back-tracked on a seasonal basis to determine the precipitation-ETo covariation.
The established precipitation-ETo covariations are used to inform the scenario construction process. Additionally, the back-tracking of extreme flows from driving scenarios allows for a diagnosis of the physical responses to climate change scenarios. The method is demonstrated through the application of scenarios from 10 Regional Climate Models, 21 Global Climate Models and selected catchments in central Belgium. Reference: Ntegeka, V., Baguis, P., Roulin, E., & Willems, P. (2014). Developing tailored climate change scenarios for hydrological impact assessments. Journal of Hydrology, 508, 307-321.
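The quantile perturbation step can be sketched as follows; the series and change factors are synthetic and hypothetical, and the Markov-chain wet-day frequency adjustment is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic historical daily precipitation (mm) with many dry days.
hist = np.where(rng.random(3650) < 0.6, 0.0, rng.gamma(2.0, 4.0, 3650))

# Hypothetical climate-change signal: change factors that vary with the
# quantile of the wet-day distribution, larger for the extremes.
q_levels = np.array([0.0, 0.5, 0.9, 0.99, 1.0])
factors = np.array([1.00, 1.02, 1.10, 1.25, 1.30])

wet = hist > 0.0
# Empirical quantile (rank in [0, 1]) of each wet day among wet days.
ranks = np.argsort(np.argsort(hist[wet])) / (wet.sum() - 1)
# Interpolate a change factor for each wet day from its quantile.
f = np.interp(ranks, q_levels, factors)

perturbed = hist.copy()
perturbed[wet] = hist[wet] * f  # dry days stay dry

# Extremes are amplified more than the mean.
print(perturbed[wet].mean() / hist[wet].mean(),
      np.quantile(perturbed[wet], 0.99) / np.quantile(hist[wet], 0.99))
```

Because the factor grows monotonically with quantile, the perturbed series preserves the rank order of wet days while stretching the upper tail, which is the property that lets the technique account for changes in extremes.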
Abbs, J H; Gracco, V L
1984-04-01
The contribution of ascending afferents to the control of speech movement was evaluated by applying unanticipated loads to the lower lip during the generation of combined upper lip-lower lip speech gestures. To eliminate potential contamination due to anticipation or adaptation, loads were applied randomly on only 10-15% of the trials. Physical characteristics of the perturbations were within the normal range of forces and movements involved in natural lip actions for speech. Compensatory responses in multiple facial muscles and lip movements were observed the first time a load was introduced, and achievement of the multimovement speech goals was never disrupted by these perturbations. Muscle responses were seen in the lower lip muscles, implicating corrective feedback processes. Additionally, compensatory responses to these lower lip loads were also observed in the independently controlled muscles of the upper lip, reflecting the parallel operation of open-loop, sensorimotor mechanisms. Compensatory responses from both the upper and lower lip muscles were observed with small (1 mm) as well as large (15 mm) perturbations. The latencies of these compensatory responses were not discernible by conventional ensemble averaging. Moreover, responses at latencies of lower brain stem-mediated reflexes (i.e., 10-18 ms) were not apparent with inspection of individual records. Response latencies were determined on individual loaded trials through the use of a computer algorithm that took into account the variability of electromyograms (EMG) among the control trials. These latency measures confirmed the absence of brain stem-mediated responses and yielded response latencies that ranged from 22 to 75 ms. Response latencies appeared to be influenced by the time relation between load onset and the initiation of muscle activation.
Examination of muscle activity changes for individual loaded trials revealed complementary variations in the magnitude of responses among multiple muscles contributing to a movement compensation. These observations may have implications for limb movement control if multimovement speech gestures are considered analogous to a limb action requiring coordinated movements around multiple joints. In this context, these speech motor control data might be interpreted to suggest that for complex movements, both corrective feedback and open-loop predictive processes are operating, with the latter involved in the control of coordination among multiple movement subcomponents.
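The single-trial latency idea described above, a detection threshold derived from control-trial EMG variability, can be sketched as follows; the data, the 3-SD threshold and the consecutive-sample criterion are hypothetical illustrations, not the algorithm actually used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1000  # sampling rate in Hz, i.e. 1 ms per sample

# Synthetic rectified EMG: 20 unloaded control trials of baseline noise,
# and one loaded trial with a compensatory burst starting at 35 ms.
t = np.arange(200)
controls = rng.normal(1.0, 0.2, (20, t.size)) ** 2
loaded = rng.normal(1.0, 0.2, t.size) ** 2
loaded[35:80] += 4.0

# Per-sample threshold: control-trial mean plus 3 SD across trials.
mu = controls.mean(axis=0)
sd = controls.std(axis=0)
threshold = mu + 3.0 * sd

def onset_latency_ms(trial, thresh, run=5):
    """First sample at which the trial exceeds the control-based
    threshold for `run` consecutive samples; latency in ms."""
    above = trial > thresh
    for i in range(above.size - run):
        if above[i:i + run].all():
            return int(i * 1000 / fs)
    return None

print(onset_latency_ms(loaded, threshold))
```

Requiring several consecutive supra-threshold samples guards against isolated noise excursions that a single-sample criterion would misread as an onset.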
Application of Generalized Feynman-Hellmann Theorem in Quantization of LC Circuit in Thermo Bath
NASA Astrophysics Data System (ADS)
Fan, Hong-Yi; Tang, Xu-Bing
For the quantized LC electric circuit, when taking the Joule thermal effect into account, we argue that physical observables should be evaluated as ensemble averages. We then use the generalized Feynman-Hellmann theorem for ensemble averages to calculate them, which proves convenient. The fluctuations of observables in various LC electric circuits in the presence of a thermal bath are shown to grow with temperature.
NASA Astrophysics Data System (ADS)
Lee, Hyun-Chul; Kumar, Arun; Wang, Wanqiu
2018-03-01
Coupled prediction systems for seasonal and inter-annual variability in the tropical Pacific are initialized from ocean analyses. In ocean initial states, small-scale perturbations are inevitably smoothed or distorted by observational limits and data assimilation procedures, which tends to induce ocean initial errors for El Niño-Southern Oscillation (ENSO) prediction. Here, the evolution and effects of ocean initial errors arising from small-scale perturbations on the developing phase of ENSO are investigated with an ensemble of coupled model predictions. Results show that ocean initial errors at the thermocline in the western tropical Pacific grow rapidly, project onto the first mode of the equatorial Kelvin wave, and propagate eastward along the thermocline. In boreal spring, when the surface buoyancy flux weakens in the eastern tropical Pacific, the subsurface errors influence sea surface temperature variability and would account for the seasonal dependence of prediction skill in the NINO3 region. It is concluded that ENSO prediction in the eastern tropical Pacific after boreal spring can be improved by increasing the observational accuracy of subsurface ocean initial states in the western tropical Pacific.
Mid-Piacensian mean annual sea surface temperature: an analysis for data-model comparisons
Dowsett, Harry J.; Robinson, Marci M.; Foley, Kevin M.; Stoll, Danielle K.
2010-01-01
Numerical models of the global climate system are the primary tools used to understand and project climate disruptions in the form of future global warming. The Pliocene has been identified as the closest, albeit imperfect, analog to climate conditions expected for the end of this century, making an independent data set of Pliocene conditions necessary for ground-truthing model results. Because most climate model output is produced in the form of mean annual conditions, we present a derivative of the USGS PRISM3 Global Climate Reconstruction which integrates multiple proxies of sea surface temperature (SST) into single surface temperature anomalies. We analyze temperature estimates from faunal and floral assemblage data, Mg/Ca values and alkenone unsaturation indices to arrive at a single mean annual SST anomaly (Pliocene minus modern) best describing each PRISM site, understanding that multiple proxies should not necessarily show concordance. The power of the multiple-proxy approach lies within its diversity, as no two proxies measure the same environmental variable. This data set can be used to verify climate model output, to serve as a starting point for model inter-comparisons, and to quantify uncertainty in Pliocene model prediction in perturbed physics ensembles.
Observation of discrete time-crystalline order in a disordered dipolar many-body system
NASA Astrophysics Data System (ADS)
Choi, Soonwon; Choi, Joonhee; Landig, Renate; Kucsko, Georg; Zhou, Hengyun; Isoya, Junichi; Jelezko, Fedor; Onoda, Shinobu; Sumiya, Hitoshi; Khemani, Vedika; von Keyserlingk, Curt; Yao, Norman Y.; Demler, Eugene; Lukin, Mikhail D.
2017-03-01
Understanding quantum dynamics away from equilibrium is an outstanding challenge in the modern physical sciences. Out-of-equilibrium systems can display a rich variety of phenomena, including self-organized synchronization and dynamical phase transitions. More recently, advances in the controlled manipulation of isolated many-body systems have enabled detailed studies of non-equilibrium phases in strongly interacting quantum matter; for example, the interplay between periodic driving, disorder and strong interactions has been predicted to result in exotic ‘time-crystalline’ phases, in which a system exhibits temporal correlations at integer multiples of the fundamental driving period, breaking the discrete time-translational symmetry of the underlying drive. Here we report the experimental observation of such discrete time-crystalline order in a driven, disordered ensemble of about one million dipolar spin impurities in diamond at room temperature. We observe long-lived temporal correlations, experimentally identify the phase boundary and find that the temporal order is protected by strong interactions. This order is remarkably stable to perturbations, even in the presence of slow thermalization. Our work opens the door to exploring dynamical phases of matter and controlling interacting, disordered many-body systems.
The effect of denaturant on protein stability: a Monte Carlo lattice simulation
NASA Astrophysics Data System (ADS)
Choi, Ho Sup; Huh, June; Jo, Won Ho
2003-03-01
Denaturants are reagents that decrease protein stability by interacting with both nonpolar and polar surfaces of a protein when added to the aqueous solvent. However, the physical nature of these interactions is not clearly understood, and it is difficult to elucidate theoretically or experimentally. Even in computer simulations, denaturant atoms usually cannot be treated explicitly because of the enormous computational cost. We have used a lattice model of protein and denaturant. By varying the concentration of denaturant and the interaction energy between protein and denaturant, we have measured the change in stability of the protein. This simple model reproduces the experimental observation that the free energy of unfolding is a linear function of denaturant concentration in the transition range. We have also performed a simulation under isotropic perturbation. In this case, denaturant molecules are not included and a biasing potential is introduced to increase the radius of gyration of the protein, incorporating the effect of denaturant implicitly. The calculated free energy landscape and conformational ensembles sampled under this condition are very close to those of the simulation using denaturant molecules interacting with the protein. We have applied this simple approach to simulate the effect of denaturants on real proteins.
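The linear dependence of the unfolding free energy on denaturant concentration noted above is the basis of the linear extrapolation method; a minimal sketch with synthetic, hypothetical values (kJ/mol and molar units chosen for illustration only):

```python
import numpy as np

# Linear extrapolation model: dG_unfold(c) = dG_water - m * c, with
# hypothetical values loosely typical of a small protein.
dG_water, m = 25.0, 5.0
conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # denaturant concentration, M
rng = np.random.default_rng(3)
dG_obs = dG_water - m * conc + rng.normal(0.0, 0.3, conc.size)

# Fit the transition-range points and extrapolate back to c = 0 to
# recover the stability in water; the midpoint is where dG = 0.
slope, intercept = np.polyfit(conc, dG_obs, 1)
c_mid = intercept / -slope

print(intercept, -slope, c_mid)
```

The fitted intercept estimates the stability in pure water and the (negative) slope estimates the m-value, the sensitivity of stability to denaturant; the linearity observed in the lattice simulations is exactly what licenses this extrapolation.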
Weinkam, Patrick; Romesberg, Floyd E.; Wolynes, Peter G.
2010-01-01
A grand canonical formalism is developed to combine discrete simulations for chemically distinct species in equilibrium. Each simulation is based on a perturbed funneled landscape. The formalism is illustrated using the alkaline-induced transitions of cytochrome c as observed by FTIR spectroscopy and various other experimental approaches. The grand canonical simulation method accounts for the acid/base chemistry of deprotonation, the inorganic chemistry of heme ligation and misligation, and the minimally frustrated folding energy landscape, thus elucidating the physics of protein folding involved in the acid/base titration of a protein. The formalism combines simulations for each of the relevant chemical species, which vary by protonation and ligation state. In contrast to models based on perfectly funneled energy landscapes that contain only contacts found in the native structure, the current study introduces “chemical frustration” from deprotonation and misligation that gives rise to many intermediates at alkaline pH. While the nature of these intermediates cannot be easily inferred from available experimental data, the current study provides specific structural details of these intermediates, thus extending our understanding of how cytochrome c changes with increasing pH. The results demonstrate the importance of chemical frustration for understanding biomolecular energy landscapes. PMID:19199810
Space-time-modulated stochastic processes
NASA Astrophysics Data System (ADS)
Giona, Massimiliano
2017-10-01
Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved-space times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.
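As a point of reference, an unmodulated Poisson-Kac (telegraph) process, the prototype named above, can be simulated directly; the parameters below are illustrative:

```python
import numpy as np

def poisson_kac(T=10.0, dt=1e-3, b=1.0, lam=2.0, seed=0):
    """Simulate x'(t) = b * s(t), where s(t) = +/-1 flips at Poisson rate lam."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = 0.0
    s = 1.0
    for i in range(n):
        # The direction flips with probability lam*dt in each small step.
        if rng.random() < lam * dt:
            s = -s
        x[i + 1] = x[i] + b * s * dt
    return x

x = poisson_kac()
# Finite propagation velocity: |x(t)| can never exceed b*t.
print(np.abs(x).max())
```

The bounded increment per step is the finite-propagation-velocity property; a space-time modulation would make the amplitude b and the transition statistics depend on the observable itself.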
Random density matrices versus random evolution of open system
NASA Astrophysics Data System (ADS)
Pineda, Carlos; Seligman, Thomas H.
2015-10-01
We present and compare two families of ensembles of random density matrices. The first, static, ensemble is obtained by foliating an unbiased ensemble of density matrices; as the criterion we use fixed purity, the simplest example of a useful convex function. The second, dynamic, ensemble is inspired by random matrix models of decoherence, in which a separable pure state is evolved with a random Hamiltonian until a given value of purity in the central system is reached. Several families of Hamiltonians, adequate for different physical situations, are studied. We focus on a two-qubit central system and obtain exact expressions for the static case. The ensemble displays a peak around Werner-like states, modulated by nodes at the degeneracies of the density matrices. For moderate and strong interactions, good agreement between the static and dynamic ensembles is found. Even in a model where one qubit does not interact with the environment, excellent agreement is found, but only if there is maximal entanglement with the interacting one. The discussion begins by recalling similar considerations in scattering theory. At the end, we comment on the reach of the results for other convex functions of the density matrix, and exemplify the situation with the von Neumann entropy.
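For illustration only (this is not the paper's foliation construction), density matrices can be drawn from the unbiased Ginibre-induced ensemble and their purity, the convex function used as the foliation criterion, evaluated:

```python
import numpy as np

def random_density_matrix(dim, rng):
    """Draw a density matrix from the Ginibre-induced (unbiased) ensemble."""
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T          # positive semidefinite by construction
    return rho / np.trace(rho).real

rng = np.random.default_rng(1)
rho = random_density_matrix(4, rng)   # dimension 4: a two-qubit central system

purity = np.trace(rho @ rho).real
# Any physical density matrix obeys 1/dim <= Tr(rho^2) <= 1.
print(purity)
```

A static ensemble at fixed purity would keep only draws on one leaf, i.e. those with Tr(ρ²) equal to the chosen value.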
Collective coupling in hybrid superconducting circuits
NASA Astrophysics Data System (ADS)
Saito, Shiro
Hybrid quantum systems utilizing superconducting circuits have attracted significant recent attention, not only for quantum information processing tasks but also as a way to explore fundamentally new physics regimes. In this talk, I will discuss two superconducting-circuit-based hybrid quantum system approaches. The first is a superconducting flux qubit-electron spin ensemble hybrid system in which quantum information manipulated in the flux qubit can be transferred to, stored in, and retrieved from the ensemble. Although the coherence time of the ensemble is short, about 20 ns, this is a significant first step toward utilizing the spin ensemble as a quantum memory for superconducting flux qubits. The second approach is a superconducting resonator-flux qubit ensemble hybrid system, in which we fabricated a superconducting LC resonator coupled to a large ensemble of flux qubits. Here we observed a dispersive frequency shift of approximately 250 MHz in the resonator's transmission spectrum, indicating that thousands of flux qubits couple to the resonator collectively. Although we still need to reduce the inhomogeneity of our qubits, the system has many potential uses, including the creation of new quantum metamaterials and novel applications in quantum metrology. This work was partially supported by JSPS KAKENHI Grant Number 25220601.
Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case
NASA Astrophysics Data System (ADS)
Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann
2017-04-01
Short-term ocean analyses for Sea Surface Temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting, although the best performance is achieved when correlation is added to the super-ensemble structure using a simple spatial filter applied after the regression. Our results show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the Root Mean Square Error (RMSE) of the MMSE analysis, evaluated with respect to observed satellite SST. The lowest analysis RMSE results from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least-squares algorithm filtered a posteriori.
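The regression step can be sketched at a single grid point with synthetic data; the truncated least-squares fit below stands in for the EOF-filtered algorithm (member count, noise levels, and the `rcond` threshold are illustrative assumptions, not the operational setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_members = 15, 5          # 15-day training period, 5 members

# Synthetic training data: members are noisy versions of a "true" SST (deg C).
truth = 20.0 + np.sin(np.linspace(0.0, 3.0, n_time))
X = truth[:, None] + rng.normal(0.0, 0.3, (n_time, n_members))  # member analyses
y = truth + rng.normal(0.0, 0.1, n_time)                        # satellite SST

# Least-squares combination weights; truncating small singular values via
# rcond plays the role of EOF-style filtering that guards against overfitting.
w, *_ = np.linalg.lstsq(X, y, rcond=0.1)

analysis = X @ w
rmse_se = np.sqrt(np.mean((analysis - y) ** 2))
rmse_mean = np.sqrt(np.mean((X.mean(axis=1) - y) ** 2))
print(rmse_se, rmse_mean)
```

In the operational technique the weights are trained over the learning period and then applied to the forecast day; here both are evaluated on the training window for brevity.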
Perturbational formulation of principal component analysis in molecular dynamics simulation.
Koyama, Yohei M; Kobayashi, Tetsuya J; Tomoda, Shuji; Ueda, Hiroki R
2008-10-01
Conformational fluctuations of a molecule are important to its function since such intrinsic fluctuations enable the molecule to respond to the external environmental perturbations. For extracting large conformational fluctuations, which predict the primary conformational change by the perturbation, principal component analysis (PCA) has been used in molecular dynamics simulations. However, several versions of PCA, such as Cartesian coordinate PCA and dihedral angle PCA (dPCA), are limited to use with molecules with a single dominant state or proteins where the dihedral angle represents an important internal coordinate. Other PCAs with general applicability, such as the PCA using pairwise atomic distances, do not represent the physical meaning clearly. Therefore, a formulation that provides general applicability and clearly represents the physical meaning is yet to be developed. For developing such a formulation, we consider the conformational distribution change by the perturbation with arbitrary linearly independent perturbation functions. Within the second order approximation of the Kullback-Leibler divergence by the perturbation, the PCA can be naturally interpreted as a method for (1) decomposing a given perturbation into perturbations that independently contribute to the conformational distribution change or (2) successively finding the perturbation that induces the largest conformational distribution change. In this perturbational formulation of PCA, (i) the eigenvalue measures the Kullback-Leibler divergence from the unperturbed to perturbed distributions, (ii) the eigenvector identifies the combination of the perturbation functions, and (iii) the principal component determines the probability change induced by the perturbation. Based on this formulation, we propose a PCA using potential energy terms, and we designate it as potential energy PCA (PEPCA). The PEPCA provides both general applicability and clear physical meaning. 
For demonstrating its power, we apply the PEPCA to an alanine dipeptide molecule in vacuum as a minimal model of a nonsingle dominant conformational biomolecule. The first and second principal components clearly characterize two stable states and the transition state between them. Positive and negative components with larger absolute values of the first and second eigenvectors identify the electrostatic interactions, which stabilize or destabilize each stable state and the transition state. Our result therefore indicates that PCA can be applied, by carefully selecting the perturbation functions, not only to identify the molecular conformational fluctuation but also to predict the conformational distribution change by the perturbation beyond the limitation of the previous methods.
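The PEPCA recipe (PCA applied to per-frame potential energy terms) can be sketched with synthetic energy time series in which two conformational states shift a pair of terms; the data, term count, and state model are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_terms = 1000, 6   # MD frames x potential energy terms (synthetic)

# Two "states" shift a subset of energy terms, mimicking two conformers.
state = rng.random(n_frames) < 0.5
E = rng.normal(0.0, 0.1, (n_frames, n_terms))
E[state, 0] += 1.0            # e.g. an electrostatic term stabilizing state A
E[state, 1] -= 1.0            # and another destabilizing it

# PCA on the covariance matrix of the energy terms.
Ec = E - E.mean(axis=0)
cov = Ec.T @ Ec / (n_frames - 1)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

pc1 = Ec @ evecs[:, 0]        # leading principal component separates the states
print(evals[0] / evals.sum()) # fraction of variance in the leading mode
```

In the formulation above, the leading eigenvector identifies which combination of energy terms (here, terms 0 and 1 with opposite signs) drives the largest distribution change, mirroring how PEPCA pinpoints stabilizing and destabilizing interactions.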
Aerosol microphysical and radiative effects on continental cloud ensembles
NASA Astrophysics Data System (ADS)
Wang, Yuan; Vogel, Jonathan M.; Lin, Yun; Pan, Bowen; Hu, Jiaxi; Liu, Yangang; Dong, Xiquan; Jiang, Jonathan H.; Yung, Yuk L.; Zhang, Renyi
2018-02-01
Aerosol-cloud-radiation interactions represent one of the largest uncertainties in current climate assessment. Much of the complexity arises from the non-monotonic responses of clouds, precipitation, and radiative fluxes to aerosol perturbations under various meteorological conditions. In this study, an aerosol-aware WRF model is used to investigate the microphysical and radiative effects of aerosols in three weather systems during the March 2000 Cloud Intensive Observational Period campaign at the US Southern Great Plains. The three simulated cloud ensembles are a low-pressure deep convective cloud system, a collection of less-precipitating stratus and shallow cumulus, and a cold frontal passage. The WRF simulations are evaluated against several ground-based measurements. The microphysical properties of cloud hydrometeors, such as their mass and number concentrations, generally show monotonic trends as a function of cloud condensation nuclei concentration. Aerosol radiative effects do not alter these trends, except in the stratus and shallow cumulus cases, where aerosol semi-direct effects are identified. Aerosol-induced precipitation changes vary with cloud type and evolution stage, with a prominent aerosol invigoration effect and associated enhanced precipitation from the convective sources. The simulated aerosol direct effect suppresses precipitation in all three cases but does not overturn the aerosol indirect effect. Cloud fraction exhibits much smaller sensitivity (typically less than 2%) to aerosol perturbations, and the responses vary with aerosol concentration and cloud regime. The surface shortwave radiation decreases monotonically with increasing aerosols, with the magnitude of the decrease depending on cloud type.
Using the theory of small perturbations in performance calculations of the RBMK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isaev, N.V.; Druzhinin, V.E.; Pogosbekyan, L.R.
The theory of small perturbations in reactor physics is discussed and applied to two-dimensional calculations of the RBMK. The classical theory of small perturbations introduces considerable errors into such calculations because the perturbations cannot be considered small. The modified theory of small perturbations presented here can be used at nuclear power stations to determine reactivity effects and channel reloading rates in reactors, and to assess the reactivity margin stored in control rods.
Dynamic principle for ensemble control tools.
Samoletov, A; Vasiev, B
2017-11-28
Dynamical equations describing physical systems in contact with a thermal bath are commonly extended by mathematical tools called "thermostats." These tools are designed for sampling ensembles in statistical mechanics. Here we propose a dynamic principle underlying a range of thermostats which is derived using fundamental laws of statistical physics and ensures invariance of the canonical measure. The principle covers both stochastic and deterministic thermostat schemes. Our method has a clear advantage over a range of proposed and widely used thermostat schemes that are based on formal mathematical reasoning. Following the derivation of the proposed principle, we show its generality and illustrate its applications including design of temperature control tools that differ from the Nosé-Hoover-Langevin scheme.
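As a concrete instance of a stochastic thermostat that preserves the canonical measure, Langevin dynamics for a harmonic oscillator can be sketched as follows (a generic textbook splitting scheme, not the specific tools proposed in the paper; parameters are illustrative):

```python
import numpy as np

def langevin_harmonic(n_steps=200_000, dt=0.05, gamma=1.0, kT=1.0, seed=3):
    """Langevin dynamics for H = p^2/2 + q^2/2; samples exp(-H/kT)."""
    rng = np.random.default_rng(seed)
    q, p = 0.0, 0.0
    c1 = np.exp(-gamma * dt)            # exact Ornstein-Uhlenbeck damping
    c2 = np.sqrt(kT * (1.0 - c1 ** 2))  # matching fluctuation amplitude
    qs = np.empty(n_steps)
    for i in range(n_steps):
        p -= 0.5 * dt * q               # B: half kick (force = -q)
        q += 0.5 * dt * p               # A: half drift
        p = c1 * p + c2 * rng.normal()  # O: stochastic thermostat step
        q += 0.5 * dt * p               # A: half drift
        p -= 0.5 * dt * q               # B: half kick
        qs[i] = q
    return qs

qs = langevin_harmonic()
# For kT = 1 the canonical distribution gives <q^2> = 1.
print(qs[1000:].var())
```

The O-step is the thermostat proper: its drift and noise are balanced so that the canonical measure is invariant, which is the property the proposed dynamic principle generalizes.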
The GMAO Hybrid Ensemble-Variational Atmospheric Data Assimilation System: Version 2.0
NASA Technical Reports Server (NTRS)
Todling, Ricardo; El Akkraoui, Amal
2018-01-01
This document describes the implementation and usage of the Goddard Earth Observing System (GEOS) Hybrid Ensemble-Variational Atmospheric Data Assimilation System (Hybrid EVADAS). Its aim is to provide comprehensive guidance to users of GEOS ADAS interested in experimenting with its hybrid functionalities. The document also provides a short summary of the state of science in this release of the hybrid system. As explained here, the ensemble data assimilation system (EnADAS) mechanism added to GEOS ADAS to enable hybrid data assimilation applications has been introduced to the pre-existing machinery of GEOS in the least intrusive way possible. Only very minor changes have been made to the original scripts controlling GEOS ADAS, with the objective of facilitating its usage by both researchers and the GMAO's near-real-time Forward Processing applications. In a hybrid scenario, two data assimilation systems run concurrently in a two-way feedback mode: the ensemble provides background ensemble perturbations required by the ADAS deterministic (typically high-resolution) hybrid analysis, and the deterministic ADAS provides analysis information for recentering the EnADAS analyses, together with the information necessary to ensure that observation bias correction procedures are consistent between the deterministic ADAS and the EnADAS. The non-intrusive approach to introducing hybrid capability to GEOS ADAS means, in particular, that previously existing features continue to be available. Thus, not only is this upgraded version of GEOS ADAS capable of supporting new applications such as Hybrid 3D-Var, 3D-EnVar, 4D-EnVar and Hybrid 4D-EnVar, it remains possible to use GEOS ADAS in its traditional 3D-Var mode, which has been used in both MERRA and MERRA-2. Furthermore, as described in this document, GEOS ADAS also supports a configuration for exercising a purely ensemble-based assimilation strategy, which can be fully decoupled from its variational component.
We should point out that Release 1.0 of this document was made available to GMAO in mid-2013, when we introduced the Hybrid 3D-Var capability to GEOS ADAS. That initial version included a considerably different state-of-science introductory section but much of the same detailed description of the mechanisms of GEOS EnADAS. We are glad to report that a few of the desirable future work items listed in Release 1.0 have now been implemented in the present version of GEOS EnADAS. These include the ability to exercise an Ensemble Prediction System that uses the ensemble analyses of GEOS EnADAS and (a very early, but functional version of) a tool to support Ensemble Forecast Sensitivity and Observation Impact applications.
High Performance Nuclear Magnetic Resonance Imaging Using Magnetic Resonance Force Microscopy
2013-12-12
Micron-Size Ferromagnet. Physical Review Letters, 92(3), 037205 (2004). [22] A. Z. Genack and A. G. Redfield. Theory of nuclear spin diffusion in a... We perform spatially resolved scanned probe studies of spin dynamics in nanoscale ensembles of few electron spins of varying size. Our research culminated...
ERIC Educational Resources Information Center
Dega, Bekele Gashe; Kriek, Jeanne; Mogese, Temesgen Fereja
2013-01-01
The purpose of this study was to investigate Ethiopian physics undergraduate students' conceptual change in the concepts of electric potential and energy (EPE) and electromagnetic induction (EMI). A quasi-experimental design was used to study the effect of cognitive perturbation using physics interactive simulations (CPS) in relation to cognitive…
A relativistic signature in large-scale structure
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David
2016-09-01
In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.
Measuring effective temperatures in a generalized Gibbs ensemble
Foini, Laura; Gambassi, Andrea; Konik, Robert; ...
2017-05-11
The local physical properties of an isolated quantum statistical system in the stationary state reached long after a quench are generically described by the Gibbs ensemble, which involves only its Hamiltonian and the temperature as a parameter. If the system is instead integrable, additional quantities conserved by the dynamics intervene in the description of the stationary state. The resulting generalized Gibbs ensemble involves a number of temperature-like parameters, the determination of which is practically difficult. We argue that in a number of simple models these parameters can be effectively determined by using fluctuation-dissipation relationships between response and correlation functions of natural observables, quantities which are accessible in experiments.
Quantifying rapid changes in cardiovascular state with a moving ensemble average.
Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T
2018-04-01
MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinetogram) on 3 separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.
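The difference from fixed-window ensemble averaging can be sketched on synthetic beat-aligned waveforms; this is a schematic illustration, not MEAP's implementation (window size and signal model are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n_beats, n_samples = 100, 50

# Synthetic beat-aligned waveforms whose amplitude drifts across beats,
# mimicking a slowly changing cardiovascular state, plus measurement noise.
template = np.sin(np.linspace(0.0, np.pi, n_samples))
amp = np.linspace(1.0, 2.0, n_beats)
beats = amp[:, None] * template[None, :] + rng.normal(0.0, 0.2, (n_beats, n_samples))

def moving_ensemble_average(beats, half_width=10):
    """Average each beat with its neighbors in a sliding window."""
    out = np.empty_like(beats)
    for i in range(len(beats)):
        lo, hi = max(0, i - half_width), min(len(beats), i + half_width + 1)
        out[i] = beats[lo:hi].mean(axis=0)
    return out

mea = moving_ensemble_average(beats)
fixed = beats.mean(axis=0)           # one grand average loses the drift
print(mea[5].max(), mea[-5].max())   # the moving average tracks the drift
```

The fixed ensemble average yields a single denoised waveform for the whole recording, whereas the moving ensemble retains one denoised waveform per beat, which is what permits continuous estimation of indices like cardiac output.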
Analyses and forecasts of a tornadic supercell outbreak using a 3DVAR system ensemble
NASA Astrophysics Data System (ADS)
Zhuang, Zhaorong; Yussouf, Nusrat; Gao, Jidong
2016-05-01
As part of NOAA's "Warn-On-Forecast" initiative, a convective-scale data assimilation and prediction system was developed using the WRF-ARW model and ARPS 3DVAR data assimilation technique. The system was then evaluated using retrospective short-range ensemble analyses and probabilistic forecasts of the tornadic supercell outbreak event that occurred on 24 May 2011 in Oklahoma, USA. A 36-member multi-physics ensemble system provided the initial and boundary conditions for a 3-km convective-scale ensemble system. Radial velocity and reflectivity observations from four WSR-88Ds were assimilated into the ensemble using the ARPS 3DVAR technique. Five data assimilation and forecast experiments were conducted to evaluate the sensitivity of the system to data assimilation frequencies, in-cloud temperature adjustment schemes, and fixed- and mixed-microphysics ensembles. The results indicated that the experiment with 5-min assimilation frequency quickly built up the storm and produced a more accurate analysis compared with the 10-min assimilation frequency experiment. The predicted vertical vorticity from the moist-adiabatic in-cloud temperature adjustment scheme was larger in magnitude than that from the latent heat scheme. Cycled data assimilation yielded good forecasts, where the ensemble probability of high vertical vorticity matched reasonably well with the observed tornado damage path. Overall, the results of the study suggest that the 3DVAR analysis and forecast system can provide reasonable forecasts of tornadic supercell storms.
NASA Astrophysics Data System (ADS)
Botto, Anna; Camporese, Matteo
2017-04-01
Hydrological models allow scientists to predict the response of water systems under varying forcing conditions. In particular, many physically based integrated models have recently been developed to understand the fundamental hydrological processes occurring at the catchment scale. However, the use of this class of hydrological models is still relatively limited, as their prediction skill depends heavily on reliable parameter estimation, an operation that is never trivial, being normally affected by large uncertainty and requiring huge computational effort. The objective of this work is to test the potential of data assimilation to be used as an inverse modeling procedure for the broad class of integrated hydrological models. To pursue this goal, a Bayesian data assimilation (DA) algorithm based on a Monte Carlo approach, namely the ensemble Kalman filter (EnKF), is combined with the CATchment HYdrology (CATHY) model. In this approach, input variables (atmospheric forcing, soil parameters, initial conditions) are statistically perturbed, providing an ensemble of realizations aimed at taking into account the uncertainty involved in the process. Each realization is propagated forward by the CATHY hydrological model within a parallel R framework developed to reduce the computational effort. When measurements are available, the EnKF is used to update both the system state and soil parameters. In particular, four different assimilation scenarios are applied to test the capability of the modeling framework: first, only pressure head or water content is assimilated; then, the combination of both; and finally, pressure head and water content together with the subsurface outflow. To demonstrate the effectiveness of the approach in a real-world scenario, an artificial hillslope was designed and built to provide real measurements for the DA analyses.
The experimental facility, located in the Department of Civil, Environmental and Architectural Engineering of the University of Padova (Italy), consists of a reinforced concrete box containing a soil prism with maximum height of 3.5 m, length of 6 m and width of 2 m. The hillslope is equipped with six pairs of tensiometers and water content reflectometers, to monitor the pressure head and soil moisture content, respectively. Moreover, two tipping bucket flow gages were used to measure the surface and subsurface discharges at the outlet. A 12-day long experiment was carried out, during which a series of four rainfall events with constant rainfall rate were generated, interspersed with phases of drainage. During the experiment, measurements were collected at a relatively high resolution of 0.5 Hz. We report here on the capability of the data assimilation framework to estimate sets of plausible parameters that are consistent with the experimental setup.
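The joint state-parameter update at the heart of this approach can be sketched for a scalar toy system with an augmented ensemble; the stochastic (perturbed-observation) EnKF variant and all numbers below are illustrative, not the CATHY configuration used in the study:

```python
import numpy as np

rng = np.random.default_rng(5)
n_ens = 50

# Augmented ensemble: column 0 = model state, column 1 = soil parameter.
# A toy "model" ties the state to the parameter, creating the ensemble
# correlation through which the parameter can be updated.
param = rng.normal(0.5, 0.2, n_ens)                 # parameter prior
state = 2.0 * param + rng.normal(0.0, 0.1, n_ens)   # state prior
ens = np.column_stack([state, param])

obs, obs_var = 1.6, 0.1 ** 2        # observation of the state only
H = np.array([1.0, 0.0])            # observation operator

def enkf_update(ens, obs, obs_var, H, rng):
    """Stochastic (perturbed-observation) EnKF analysis step."""
    A = ens - ens.mean(axis=0)
    P = A.T @ A / (len(ens) - 1)            # ensemble covariance
    S = H @ P @ H + obs_var                 # innovation variance (scalar)
    K = P @ H / S                           # Kalman gain, shape (2,)
    y_pert = obs + rng.normal(0.0, np.sqrt(obs_var), len(ens))
    innov = y_pert - ens @ H
    return ens + np.outer(innov, K)

post = enkf_update(ens, obs, obs_var, H, rng)
print(post[:, 0].mean(), post[:, 1].mean())
```

The parameter (column 1) is never observed directly; it is pulled toward values consistent with the observation through its sample covariance with the observed state, which is the mechanism exploited when soil parameters are updated alongside pressure head and water content.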
Zhang, Ming; Xu, Yan; Li, Lei; Liu, Zi; Yang, Xibei; Yu, Dong-Jun
2018-06-01
RNA 5-methylcytosine (m5C) is an important post-transcriptional modification that plays an indispensable role in biological processes. The accurate identification of m5C sites from primary RNA sequences is especially useful for deeply understanding the mechanisms and functions of m5C. Because identifying m5C sites with wet-lab techniques is difficult and expensive, developing fast and accurate machine-learning-based prediction methods is urgently needed. In this study, we proposed a new m5C site predictor, called M5C-HPCR, by introducing a novel heuristic nucleotide physicochemical property reduction (HPCR) algorithm and a classifier ensemble. HPCR extracts multiple reducts of physicochemical properties for encoding discriminative features, while the classifier ensemble integrates multiple base predictors, each trained on a separate reduct of the physicochemical properties obtained from HPCR. Rigorous jackknife tests on two benchmark datasets demonstrate that M5C-HPCR outperforms state-of-the-art m5C site predictors, with the highest values of MCC (0.859) and AUC (0.962). We also implemented the webserver of M5C-HPCR, which is freely available at http://cslab.just.edu.cn:8080/M5C-HPCR/. Copyright © 2018 Elsevier Inc. All rights reserved.
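The reduct-based classifier ensemble can be sketched with synthetic features and a trivial base learner; the feature layout, reduct boundaries, and nearest-centroid predictor below are illustrative stand-ins for the actual M5C-HPCR components:

```python
import numpy as np

rng = np.random.default_rng(6)
n, d = 400, 12

# Synthetic encoded sequences: the label depends on a few informative features.
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 4] + X[:, 8] > 0).astype(int)

# Three "reducts": disjoint feature subsets, one base predictor per reduct.
reducts = [slice(0, 4), slice(4, 8), slice(8, 12)]
train, test = np.arange(300), np.arange(300, n)

def fit_centroid(Xr, yr):
    """A trivial nearest-centroid base predictor."""
    c0, c1 = Xr[yr == 0].mean(axis=0), Xr[yr == 1].mean(axis=0)
    return lambda Z: (np.linalg.norm(Z - c1, axis=1)
                      < np.linalg.norm(Z - c0, axis=1)).astype(int)

preds = []
for r in reducts:
    model = fit_centroid(X[train, r], y[train])
    preds.append(model(X[test, r]))

vote = (np.mean(preds, axis=0) >= 0.5).astype(int)   # majority vote
acc = (vote == y[test]).mean()
print(acc)
```

Each base predictor sees only its own reduct; fusing their votes recovers information spread across the reducts, which is the rationale for the ensemble step in the paper.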
A new approach to the stability analysis of transient natural convection in porous media
NASA Astrophysics Data System (ADS)
Tilton, Nils
2016-11-01
Onset of natural convection due to transient diffusion in porous media has attracted considerable attention for its applications to CO2 sequestration. Stability analyses typically investigate onset of convection using an initial value problem approach in which a perturbation is introduced to the concentration field at an initial time t = tp. This leads to debate concerning physically appropriate perturbations, the critical time tc for linear instability, and to the counter-intuitive notion of an optimal initial time tp that maximizes perturbation growth. We propose a new approach in which transient diffusion is continuously perturbed by small variations in the porosity. With this approach, instability occurs immediately (tc = 0) without violating any physical constraints, such that the concepts of initial time tp and critical time tc have less relevance. We argue that the onset time for nonlinear convection is a more physically relevant parameter, and show that it can be predicted using a simple asymptotic expansion. Using the expansion, we consider porosity perturbations that vary sinusoidally in the horizontal and vertical directions, and show there are optimal combinations of wavelengths that minimize the onset time of nonlinear convection.
NASA Astrophysics Data System (ADS)
Tong, M.; Xue, M.
2006-12-01
An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of the drop size distributions of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters are the intercept parameters for rain, snow, and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. Observing system simulation (OSS) experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations, starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with the end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
Ensemble Bayesian forecasting system Part I: Theory and algorithms
NASA Astrophysics Data System (ADS)
Herr, Henry D.; Krzysztofowicz, Roman
2015-05-01
The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. 
Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of the predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
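The Monte Carlo chain of the EBFSR (input ensemble, deterministic hydrologic model, stochastic HUP with auxiliary randomization) can be sketched as follows. The stand-in linear "hydrologic model" is hypothetical, and the meta-Gaussian HUP is approximated here by additive Gaussian noise; the point is only that each deterministic model run is expanded into several predictand samples, so a large Bayesian ensemble comes from few model runs.

```python
import numpy as np

rng = np.random.default_rng(1)

def hydrologic_model(precip):
    """Stand-in deterministic model: a toy runoff response (hypothetical)."""
    return 0.6 * precip + 2.0

def hup_sample(model_output, n_sub, hyd_err=0.3):
    """Hydrologic uncertainty processor, sketched as additive Gaussian noise:
    each deterministic model run is expanded into n_sub predictand samples.
    (The actual HUP is an analytic meta-Gaussian processor.)"""
    return model_output + hyd_err * rng.standard_normal(n_sub)

# Input ensemble of 50 precipitation amounts; the auxiliary randomization
# expands each model run into 20 members -> a 1000-member Bayesian ensemble.
precip_ens = rng.gamma(2.0, 5.0, 50)
stage_ens = np.concatenate([hup_sample(hydrologic_model(p), 20)
                            for p in precip_ens])
print(stage_ens.size)   # 1000 members from only 50 hydrologic-model runs
```

This mirrors the computational argument in the abstract: the expensive step (the hydrologic model) runs 50 times, yet the final predictand ensemble is large enough to keep the sampling error acceptable.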
Wang, Jihua; Zhao, Liling; Dou, Xianghua; Zhang, Zhiyong
2008-06-01
Forty-nine molecular dynamics simulations of unfolding trajectories of the segment B1 of streptococcal protein G (GB1) provide a direct demonstration of the diversity of unfolding pathways and identify a statistically dominant unfolding pathway in a physical property space. Twelve physical properties of the protein were chosen to construct a 12-dimensional property space, which was then reduced to a 3-dimensional principal component property space. In this property space, the multiple unfolding trajectories look like "trees" that share some common characteristics: the "root of the tree" corresponds to the native state, the "bole" to the partially unfolded conformations, and the "crown" to the unfolded state. These unfolding trajectories can be divided into three types. The first has a straight "bole" and "crown", corresponding to a fast two-state unfolding pathway of GB1. The second shows "a standstill in the middle of the bole", which may correspond to a three-state unfolding pathway. The third has a "circuitous bole", corresponding to a slow two-state unfolding pathway. The fast two-state unfolding pathway is the statistically dominant, or preferred, pathway of GB1, accounting for 53% of the 49 unfolding trajectories. In the property space, all the unfolding trajectories together constitute a thermal unfolding pathway ensemble of GB1. This ensemble resembles a funnel that gradually widens from the native state ensemble to the unfolded state ensemble. In the property space, the thermal unfolded state distribution resembles an electron cloud in quantum mechanics. The unfolded states of the independent unfolding simulation trajectories overlap substantially, indicating that the thermal unfolded states are confined by the physical property values and that the number of unfolded states is much smaller than previously believed.
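The reduction of the 12-dimensional property space to a 3-dimensional principal component space can be sketched with a standard SVD-based PCA. The data below are synthetic stand-ins for the per-frame property values (the real properties would be quantities such as radius of gyration or native contacts); the weighting of the first three columns is an illustrative assumption to give three directions most of the variance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for 12 physical properties sampled over 500 trajectory
# frames; in the study these would come from the unfolding simulations.
X = rng.standard_normal((500, 12))
X[:, :3] *= np.array([5.0, 3.0, 2.0])     # three directions dominate variance

# Principal component analysis via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc3 = Xc @ Vt[:3].T                        # projection onto top 3 components
explained = (s[:3]**2).sum() / (s**2).sum()
print(pc3.shape)                           # (500, 3)
```

Plotting `pc3` for each trajectory is what yields the "tree"-shaped pathways described in the abstract; `explained` reports how much of the total variance the 3-dimensional view retains.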
Constraining the ensemble Kalman filter for improved streamflow forecasting
NASA Astrophysics Data System (ADS)
Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James
2018-05-01
Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation; to a "typical" hydrological implementation, in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded; and to a more tightly constrained implementation in which flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent bounds. A three-year period (2008-2010) was selected from the available data record (1976-2010); it was chosen specifically because it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard (unconstrained) implementation of the EnKF produced eight hydrological events in which (multiple) physically inconsistent state adjustments were made; all were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several forecasts were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making.
We also experiment with the observation error, which has a profound effect on filter performance. We note that an interesting tension exists between specifying an error that reflects known uncertainties and errors in the measurement and specifying an error that allows "optimal" filter updating.
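A minimal sketch of combined mass and flux constraints applied after an EnKF analysis step is given below. The capacity and maximum-flux thresholds are illustrative values, not the paper's; the point is the order of operations: limit the implied rate of change first, then enforce non-negativity and storage capacity.

```python
import numpy as np

def constrain_states(prior, analysis, capacity, max_flux, dt=1.0):
    """Post-process an EnKF analysis so updates stay physically consistent:
    - flux constraints: the implied rate of change |analysis - prior| / dt
      must not exceed a plausible rate of water movement;
    - mass constraints: states non-negative and below storage capacity.
    Thresholds here are illustrative, not the paper's values."""
    prior = np.asarray(prior, dtype=float)
    analysis = np.asarray(analysis, dtype=float)
    # Flux constraint first: limit the adjustment rate, then re-check mass.
    delta = np.clip(analysis - prior, -max_flux * dt, max_flux * dt)
    constrained = prior + delta
    return np.clip(constrained, 0.0, capacity)

prior = np.array([10.0, 40.0, 5.0])           # forecast storages (mm)
analysis = np.array([-3.0, 90.0, 12.0])       # raw EnKF update, partly unphysical
out = constrain_states(prior, analysis, capacity=50.0, max_flux=10.0)
print(out)                                     # [ 0. 50. 12.]
```

The first store is rate-limited and then floored at zero, the second is rate-limited and capped at capacity, and the third (already physical) passes through unchanged.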
Is "the Theory of Everything" Merely the Ultimate Ensemble Theory?
NASA Astrophysics Data System (ADS)
Tegmark, Max
1998-11-01
We discuss some physical consequences of what might be called "the ultimate ensemble theory", in which not only worlds corresponding to, say, different sets of initial data or different physical constants are considered equally real, but also worlds ruled by altogether different equations. The only postulate in this theory is that all structures that exist mathematically exist also physically, by which we mean that in those complex enough to contain self-aware substructures (SASs), these SASs will subjectively perceive themselves as existing in a physically "real" world. We find that it is far from clear that this simple theory, which has no free parameters whatsoever, is observationally ruled out. The predictions of the theory take the form of probability distributions for the outcomes of experiments, which makes it testable. In addition, it may be possible to rule it out by comparing its a priori predictions for the observable attributes of nature (the particle masses, the dimensionality of spacetime, etc.) with what is observed.
NASA Astrophysics Data System (ADS)
Arnault, Joel; Rummler, Thomas; Baur, Florian; Lerch, Sebastian; Wagner, Sven; Fersch, Benjamin; Zhang, Zhenyu; Kerandi, Noah; Keil, Christian; Kunstmann, Harald
2017-04-01
Precipitation predictability can be assessed by the spread within an ensemble of atmospheric simulations perturbed, within a range of uncertainty, in the initial conditions, lateral boundary conditions, and/or modeled processes. Surface-related processes are more likely to change precipitation when synoptic forcing is weak. This study investigates the effect of uncertainty in the representation of terrestrial water flows on precipitation predictability. The tools used for this investigation are the Weather Research and Forecasting (WRF) model and its hydrologically enhanced version WRF-Hydro, applied over Central Europe during April-October 2008. The WRF grid is that of COSMO-DE, with a resolution of 2.8 km. In WRF-Hydro, the WRF grid is coupled with a sub-grid at 280 m resolution to resolve lateral terrestrial water flows. Vertical flow uncertainty is considered by modifying the parameter controlling the partitioning between surface runoff and infiltration in WRF, and horizontal flow uncertainty is considered by comparing WRF with WRF-Hydro. Precipitation predictability is deduced from the spread of an ensemble based on three turbulence parameterizations. Model results are validated against E-OBS precipitation and surface temperature, ESA-CCI soil moisture, FLUXNET-MTE surface evaporation and GRDC discharge. It is found that uncertainty in the representation of terrestrial water flows is more likely to significantly affect precipitation predictability when surface flux spatial variability is high. In comparison to the WRF ensemble, WRF-Hydro slightly improves the adjusted continuous ranked probability score of daily precipitation. The reproduction of observed daily discharge with Nash-Sutcliffe model efficiency coefficients of up to 0.91 demonstrates the potential of WRF-Hydro for flood forecasting.
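The continuous ranked probability score mentioned above can be illustrated with the standard empirical (kernel-form) ensemble CRPS; the "adjusted" variant used in the study is not reproduced here, and the three-member ensemble values are purely illustrative.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of an ensemble forecast against a single observation:
    CRPS = E|X - y| - 0.5 * E|X - X'| (kernel form of the score).
    Lower is better; it reduces to absolute error for a one-member ensemble."""
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - obs).mean()
    term2 = np.abs(members[:, None] - members[None, :]).mean()
    return term1 - 0.5 * term2

# Three-member "turbulence parameterization" ensemble of daily precipitation
# (mm); the values are illustrative.
ens = [4.0, 6.0, 9.0]
print(round(crps_ensemble(ens, obs=5.0), 3))   # 0.889
```

Averaging this score over days and grid cells is the kind of comparison that lets one say WRF-Hydro "slightly improves" on the WRF ensemble.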
Nullspace Sampling with Holonomic Constraints Reveals Molecular Mechanisms of Protein Gαs.
Pachov, Dimitar V; van den Bedem, Henry
2015-07-01
Proteins perform their function or interact with partners by exchanging between conformational substates on a wide range of spatiotemporal scales. Structurally characterizing these exchanges is challenging, both experimentally and computationally. Large, diffusional motions are often on timescales that are difficult to access with molecular dynamics simulations, especially for large proteins and their complexes. The low frequency modes of normal mode analysis (NMA) report on molecular fluctuations associated with biological activity. However, NMA is limited to a second order expansion about a minimum of the potential energy function, which limits opportunities to observe diffusional motions. By contrast, kino-geometric conformational sampling (KGS) permits large perturbations while maintaining the exact geometry of explicit conformational constraints, such as hydrogen bonds. Here, we extend KGS and show that a conformational ensemble of the α subunit Gαs of heterotrimeric stimulatory protein Gs exhibits structural features implicated in its activation pathway. Activation of protein Gs by G protein-coupled receptors (GPCRs) is associated with GDP release and large conformational changes of its α-helical domain. Our method reveals a coupled α-helical domain opening motion while, simultaneously, Gαs helix α5 samples an activated conformation. These motions are moderated in the activated state. The motion centers on a dynamic hub near the nucleotide-binding site of Gαs, and radiates to helix α4. We find that comparative NMA-based ensembles underestimate the amplitudes of the motion. Additionally, the ensembles fall short in predicting the accepted direction of the full activation pathway. 
Taken together, our findings suggest that nullspace sampling with explicit, holonomic constraints yields ensembles that illuminate molecular mechanisms involved in GDP release and protein Gs activation, and further establish conformational coupling between key structural elements of Gαs.
Measurements of the interaction of wave groups with shorter wind-generated waves
NASA Technical Reports Server (NTRS)
Chu, Jacob S.; Long, Steven R.; Phillips, O. M.
1992-01-01
Fields of statistically steady wind-generated waves produced in a wind wave facility were perturbed by the injection of groups of longer, mechanically generated waves with various slopes. The time histories of the surface displacements were measured at four fetches in ensembles consisting of 100 realizations of each set of experimental conditions; the data were stored and analyzed digitally. Four distinct stages in the overall interaction are identified and characterized. The properties of the wave energy front are documented, and a preliminary discussion is given of the dynamic processes involved in its formation.
COLAcode: COmoving Lagrangian Acceleration code
NASA Astrophysics Data System (ADS)
Tassev, Svetlin V.
2016-02-01
COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.
NASA Astrophysics Data System (ADS)
Warner, Thomas T.; Sheu, Rong-Shyang; Bowers, James F.; Sykes, R. Ian; Dodd, Gregory C.; Henn, Douglas S.
2002-05-01
Ensemble simulations made using a coupled atmospheric dynamic model and a probabilistic Lagrangian puff dispersion model were employed in a forensic analysis of the transport and dispersion of a toxic gas that may have been released near Al Muthanna, Iraq, during the Gulf War. The ensemble study had two objectives, the first of which was to determine the sensitivity of the calculated dosage fields to the choices that must be made about the configuration of the atmospheric dynamic model. In this test, various choices were used for model physics representations and for the large-scale analyses that were used to construct the model initial and boundary conditions. The second study objective was to examine the dispersion model's ability to use ensemble inputs to predict dosage probability distributions. Here, the dispersion model was used with the ensemble mean fields from the individual atmospheric dynamic model runs, including the variability in the individual wind fields, to generate dosage probabilities. These are compared with the explicit dosage probabilities derived from the individual runs of the coupled modeling system. The results demonstrate that the specific choices made about the dynamic-model configuration and the large-scale analyses can have a large impact on the simulated dosages. For example, the area near the source that is exposed to a selected dosage threshold varies by up to a factor of 4 among members of the ensemble. The agreement between the explicit and ensemble dosage probabilities is relatively good for both low and high dosage levels. Although only one ensemble was considered in this study, the encouraging results suggest that a probabilistic dispersion model may be of value in quantifying the effects of uncertainties in a dynamic-model ensemble on dispersion model predictions of atmospheric transport and dispersion.
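The ensemble dosage probabilities described above can be sketched as the per-grid-cell fraction of ensemble members exceeding a dosage threshold; the dosage fields, grid size, and threshold below are hypothetical stand-ins for the coupled-model output.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical dosage fields from a 12-member ensemble on a 50x50 grid.
n_members, ny, nx = 12, 50, 50
dosage = rng.lognormal(mean=0.0, sigma=1.0, size=(n_members, ny, nx))

threshold = 2.0
# Explicit exceedance probability per grid cell: the fraction of members
# whose dosage exceeds the threshold at that cell.
p_exceed = (dosage > threshold).mean(axis=0)

# Exposed area (cell count) per member: its spread across members is the
# kind of member-to-member variability (up to a factor of 4 near the
# source, in the study) that the ensemble quantifies.
areas = (dosage > threshold).sum(axis=(1, 2))
print(p_exceed.shape)                          # (50, 50)
```

Comparing a probability field like `p_exceed` computed explicitly from the individual runs against one generated by the dispersion model from ensemble-mean winds plus wind variability is the second objective described in the abstract.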
NASA Astrophysics Data System (ADS)
Schalge, Bernd; Rihani, Jehan; Haese, Barbara; Baroni, Gabriele; Erdal, Daniel; Haefliger, Vincent; Lange, Natascha; Neuweiler, Insa; Hendricks-Franssen, Harrie-Jan; Geppert, Gernot; Ament, Felix; Kollet, Stefan; Cirpka, Olaf; Saavedra, Pablo; Han, Xujun; Attinger, Sabine; Kunstmann, Harald; Vereecken, Harry; Simmer, Clemens
2017-04-01
Currently, an integrated approach to simulating the earth system is evolving in which several compartment models are coupled to achieve the best possible physically consistent representation. We used the model TerrSysMP, which fully couples subsurface, land surface and atmosphere, in a synthetic study that mimicked the Neckar catchment in Southern Germany. A virtual reality run was made at a high resolution of 400 m for the land surface and subsurface and 1.1 km for the atmosphere. Ensemble runs at a lower resolution (800 m for the land surface and subsurface) were also made. The ensemble was generated by varying soil and vegetation parameters and lateral atmospheric forcing among the different ensemble members in a systematic way. It was found that, for some variables and time periods, the ensemble runs deviated strongly from the virtual reality reference run (i.e., the reference run was not covered by the ensemble), which could be related to the different model resolutions. This was, for example, the case for river discharge in summer. We also analyzed the spread of model states as a function of time and found clear relations between the spread, the time of year, and weather conditions. For example, the ensemble spread of latent heat flux related to uncertain soil parameters was larger under dry soil conditions than under wet soil conditions. Another example is that the ensemble spread of atmospheric states was more influenced by uncertain soil and vegetation parameters under conditions of low air pressure gradients (in summer) than under the larger air pressure gradients of winter. The analysis of the ensemble of fully coupled model simulations provided valuable insights into the dynamics of land-atmosphere feedbacks, which we will further highlight in the presentation.
NASA Astrophysics Data System (ADS)
Ribeiro Neto, A.; Scott, C. A.; Lima, E. A.; Montenegro, S. M. G. L.; Cirilo, J. A.
2014-09-01
Water availability for a range of human uses will increasingly be affected by climate change, especially in the arid and semiarid tropics. The main objective of this study is to evaluate the sufficiency of infrastructure in meeting water demand under climate-induced socio-hydrological transition in the Capibaribe River basin (CRB). The basin has experienced spatial and sectoral (agriculture-to-urban) reconfiguration of water demands. Human settlements that were once dispersed, relying on intermittent sources of surface water, are now larger and more spatially concentrated, which increases water-scarcity effects. Based on the application of linked hydrologic and water-resources models using precipitation and temperature projections of the IPCC SRES (Special Report on Emissions Scenarios) A1B scenario, a reduction in rainfall of 26.0% translated to a streamflow reduction of 60.0%. We used simulations from four members of the HadCM3 (UK Met Office Hadley Centre) perturbed physics ensemble, in which a single model structure is used and perturbations are introduced to the physical parameterization schemes in the model (Chou et al., 2012). We consider that the change in water availability in the basin under the future scenarios must drive water management and the development of adaptation strategies to manage water demand. Several adaptive responses are considered, including water-loss reductions, wastewater collection and reuse, and rainwater collection cisterns, which together have the potential to reduce future water demand by 23.0%. This study demonstrates the vulnerabilities of the infrastructure system during socio-hydrological transition in response to hydroclimatic and demand variabilities in the CRB and also indicates the differential spatial impacts and vulnerability of multiple uses of water to changes over time. 
The simulations showed that the proposed measures, together with water from the São Francisco River interbasin transfer project, had a positive impact on the water supply in the basin, mainly for human use. Industry and irrigation will suffer impacts unless further measures are implemented for demand control.
The Mark III Hypercube-Ensemble Computers
NASA Technical Reports Server (NTRS)
Peterson, John C.; Tuazon, Jesus O.; Lieberman, Don; Pniel, Moshe
1988-01-01
Mark III Hypercube concept applied in development of series of increasingly powerful computers. Processor of each node of Mark III Hypercube ensemble is specialized computer containing three subprocessors and shared main memory. Solves problem quickly by simultaneously processing part of problem at each such node and passing combined results to host computer. Disciplines benefitting from speed and memory capacity include astrophysics, geophysics, chemistry, weather, high-energy physics, applied mechanics, image processing, oil exploration, aircraft design, and microcircuit design.
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, David; Juan Collados-Lara, Antonio; Pardo-Iguzquiza, Eulogio; Jimeno-Saez, Patricia; Fernandez-Chacon, Francisca
2016-04-01
In order to design adaptive strategies to global change we need to assess the future impact of climate change on water resources, which depends on the precipitation and temperature series in the systems. The objective of this work is to generate future climate series in the "Alto Genil" Basin (southeast Spain) for the period 2071-2100 by perturbing the historical series using different statistical methods. To this end we use information from regional climate model (RCM) simulations available in two European projects: CORDEX (2013), with a spatial resolution of 12.5 km, and ENSEMBLES (2009), with a spatial resolution of 25 km. The historical climate series for the period 1971-2000 have been obtained from the Spain02 project (2012), which has the same spatial resolution as the CORDEX project (both use the EURO-CORDEX grid). Two emission scenarios have been considered: the Representative Concentration Pathway (RCP) 8.5 emissions scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), and the A1B emission scenario of the Fourth Assessment Report (AR4). We use the RCM simulations to create an ensemble of predictions, weighting their information according to their ability to reproduce the main statistics of the historical climatology. A multi-objective analysis has been performed to identify which models are better in terms of goodness of fit to the cited statistics of the historical series. The ensembles for the CORDEX and ENSEMBLES projects have finally been created with nine and four models, respectively. These ensemble series have been used to assess the anomalies in mean and standard deviation (differences between the control and future RCM series). A "delta-change" method (Pulido-Velazquez et al., 2011) has been applied to define future series by modifying the historical climate series in accordance with the cited anomalies in mean and standard deviation. 
A comparison between results for scenarios A1B and RCP8.5 has been performed. The reductions obtained in mean rainfall with respect to the historical series are 24.2% and 24.4%, respectively, and the increases in temperature are 46.3% and 31.2%, respectively. A sensitivity analysis of the results to the statistical downscaling technique employed has also been performed. The following techniques have been explored: the perturbation or "delta-change" method; the regression method (a regression function relating the RCM and the historical information is used to generate future climate series for the fixed period); quantile mapping (which attempts to find a transformation function relating the observed and modeled variables, so that the distribution of the transformed variable matches that of the observed variable); and stochastic weather generators (SWGs), which can be single-site or multi-site (the latter considering the spatial correlation of the climatic series). A comparative analysis of these techniques has been performed, identifying the advantages and disadvantages of each. Acknowledgments: This research has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the Spain02, ENSEMBLES and CORDEX projects for the data provided for this study.
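The delta-change perturbation described above (after Pulido-Velazquez et al., 2011) can be sketched as a shift-and-rescale of the historical series so that the RCM-projected anomalies in mean and standard deviation are reproduced. This is a simplified sketch; the precipitation values and anomaly numbers are illustrative.

```python
import numpy as np

def delta_change(hist, fut_mean, ctrl_std, fut_std):
    """Shift and rescale a historical series so the future series carries the
    RCM-projected mean and the projected change in standard deviation
    (fut_std / ctrl_std is the anomaly between future and control RCM runs)."""
    hist = np.asarray(hist, dtype=float)
    return fut_mean + (fut_std / ctrl_std) * (hist - hist.mean())

# Illustrative monthly precipitation (mm) and projected anomalies.
hist_precip = np.array([30.0, 50.0, 20.0, 60.0, 40.0])
fut = delta_change(hist_precip, fut_mean=30.0, ctrl_std=10.0, fut_std=8.0)
print(round(fut.mean(), 1))                     # 30.0: projected mean imposed
print(round(fut.std() / hist_precip.std(), 2))  # 0.8: std scaled by 8/10
```

Unlike quantile mapping, which adjusts the whole distribution, this method changes only the first two moments while preserving the temporal sequence of the historical record.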
NASA Astrophysics Data System (ADS)
Szunyogh, Istvan; Kostelich, Eric J.; Gyarmati, G.; Patil, D. J.; Hunt, Brian R.; Kalnay, Eugenia; Ott, Edward; Yorke, James A.
2005-08-01
The accuracy and computational efficiency of the recently proposed local ensemble Kalman filter (LEKF) data assimilation scheme is investigated on a state-of-the-art operational numerical weather prediction model using simulated observations. The model selected for this purpose is the T62 horizontal- and 28-level vertical-resolution version of the Global Forecast System (GFS) of the National Centers for Environmental Prediction. The performance of the data assimilation system is assessed for different configurations of the LEKF scheme. It is shown that a modest size (40-member) ensemble is sufficient to track the evolution of the atmospheric state with high accuracy. For this ensemble size, the computational time per analysis is less than 9 min on a cluster of PCs. The analyses are extremely accurate in the mid-latitude storm track regions. The largest analysis errors, which are typically much smaller than the observational errors, occur where parametrized physical processes play important roles. Because these are also the regions where model errors are expected to be the largest, limitations of a real-data implementation of the ensemble-based Kalman filter may be easily mistaken for model errors. In light of these results, the importance of testing ensemble-based Kalman filter data assimilation systems on simulated observations is stressed.
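The "local" aspect of the LEKF can be illustrated on a 1-D toy grid: each point is analyzed using only the observations inside a surrounding window, so local analyses are independent and parallelizable. A plain perturbed-observation EnKF update per patch stands in for the actual LEKF algebra, and all dimensions and error levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def lekf_step(ens, obs, obs_err, halfwidth):
    """Toy local ensemble Kalman analysis on a 1-D grid: every grid point is
    updated from observations within +/- halfwidth points only (perturbed-
    observation EnKF per local patch). A simplified sketch of localization,
    not the actual LEKF scheme."""
    n_ens, n_grid = ens.shape
    analysis = np.empty_like(ens)
    for i in range(n_grid):
        lo, hi = max(0, i - halfwidth), min(n_grid, i + halfwidth + 1)
        Y = ens[:, lo:hi]                              # local obs-space ensemble
        Ya = Y - Y.mean(axis=0)
        xa = ens[:, i] - ens[:, i].mean()              # anomalies at point i
        Pxy = xa @ Ya / (n_ens - 1)                    # cross-covariance (m,)
        Pyy = Ya.T @ Ya / (n_ens - 1) + obs_err**2 * np.eye(hi - lo)
        K = np.linalg.solve(Pyy, Pxy)                  # local gain (m,)
        obs_pert = obs[lo:hi] + obs_err * rng.standard_normal((n_ens, hi - lo))
        analysis[:, i] = ens[:, i] + (obs_pert - Y) @ K
    return analysis

# Smooth truth; a 20-member ensemble with a noisy prior; direct observations.
n_grid = 40
truth = np.sin(np.linspace(0, 2 * np.pi, n_grid))
ens = truth + 1.0 * rng.standard_normal((20, n_grid))
obs = truth + 0.1 * rng.standard_normal(n_grid)
an = lekf_step(ens, obs, obs_err=0.1, halfwidth=3)
rmse_prior = np.sqrt(((ens.mean(0) - truth) ** 2).mean())
rmse_post = np.sqrt(((an.mean(0) - truth) ** 2).mean())
print(rmse_post < rmse_prior)   # True: analysis mean closer to truth
```

Because each loop iteration touches only a small window, the cost per point stays fixed as the grid grows, which is the property that makes the scheme computationally attractive for a global model.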
NASA Astrophysics Data System (ADS)
Amengual, A.; Romero, R.; Vich, M.; Alonso, S.
2009-06-01
The improvement of the short- and mid-range numerical runoff forecasts over the flood-prone Spanish Mediterranean area is a challenging issue. This work analyses four intense precipitation events which produced floods of different magnitude over the Llobregat river basin, a medium-size catchment located in Catalonia, north-eastern Spain. One of them was a devastating flash flood, known as the "Montserrat" event, which produced 5 fatalities and material losses estimated at about 65 million euros. The characterization of the Llobregat basin's hydrological response to these floods is first assessed by using rain-gauge data and the Hydrologic Engineering Center's Hydrological Modeling System (HEC-HMS) runoff model. Second, the non-hydrostatic fifth-generation Pennsylvania State University/NCAR mesoscale model (MM5) is nested within the ECMWF large-scale forecast fields in a set of 54-h simulations to provide quantitative precipitation forecasts (QPFs) for each hydrometeorological episode. The hydrological model is forced with these QPFs to evaluate the reliability of the resulting discharge forecasts, while an ensemble prediction system (EPS) based on perturbed atmospheric initial and boundary conditions has been designed to test the value of a probabilistic strategy versus the previous deterministic approach. Specifically, a Potential Vorticity (PV) inversion technique has been used to perturb the MM5 model initial and boundary states (i.e. the ECMWF forecast fields). For that purpose, a PV error climatology has been previously derived in order to introduce realistic PV perturbations in the EPS. Results show the benefits of using a probabilistic approach in those cases where the deterministic QPF presents significant deficiencies over the Llobregat river basin in terms of rainfall amounts, timing and localization. These deficiencies in the precipitation fields have a major impact on flood forecasts. 
Our ensemble strategy has been found useful to reduce the biases at different hydrometric sections along the watershed. Therefore, in an operational context, the devised methodology could be useful to expand the lead times associated with the prediction of similar future floods, helping to alleviate their possible hazardous consequences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyle, P. A.; Christ, N. H.; Garron, N.
2016-03-09
Here, we have performed fits of the pseudoscalar masses and decay constants, from a variety of the RBC-UKQCD Collaboration's domain wall fermion ensembles, to SU(2) partially quenched chiral perturbation theory at next-to-leading order (NLO) and next-to-next-to-leading order (NNLO). We report values for 9 NLO and 8 linearly independent combinations of NNLO partially quenched low-energy constants, which we compare to other lattice and phenomenological determinations. We discuss the size of successive terms in the chiral expansion and use our large set of low-energy constants to make predictions for mass splittings due to QCD isospin-breaking effects and the S-wave ππ scattering lengths. Lastly, we conclude that, for the range of pseudoscalar masses explored in this work, 115 MeV ≲ mPS ≲ 430 MeV, the NNLO SU(2) expansion is quite robust and can fit lattice data with percent-scale accuracy.
Hadron electric polarizability from lattice QCD
NASA Astrophysics Data System (ADS)
Alexandru, Andrei; Lujan, Michael; Freeman, Walter; Lee, Frank
2015-04-01
Electric polarizability measures the ability of an electric field to deform a particle. Experimentally, electric and magnetic polarizabilities can be measured in Compton scattering experiments. To compute these quantities theoretically we need to understand the internal structure of the scatterer and the dynamics of its constituents. For hadrons - bound states of quarks and gluons - this is a very difficult problem. Lattice QCD can be used to compute the polarizabilities directly in terms of quark and gluon degrees of freedom. In this talk we focus on the neutron. We present results for the electric polarizability for two different quark masses, light enough to connect to chiral perturbation theory. These are currently the lightest quark masses used in lattice QCD polarizability studies. For each pion mass we compute the polarizability at four different volumes and perform an infinite-volume extrapolation. For one ensemble, we also discuss the effect of turning on the coupling between the background field and the sea quarks. We compare our results to chiral perturbation theory expectations.
Entanglement prethermalization in an interaction quench between two harmonic oscillators.
Ikeda, Tatsuhiko N; Mori, Takashi; Kaminishi, Eriko; Ueda, Masahito
2017-02-01
Entanglement prethermalization (EP) refers to a quasi-stationary nonequilibrium state of a composite system in which each individual subsystem looks thermal but the entire system remains nonthermal due to quantum entanglement between subsystems. We theoretically study the dynamics of EP following a coherent split of a one-dimensional harmonic potential in which two interacting bosons are confined. This problem is equivalent to that of an interaction quench between two harmonic oscillators. We show that this simple model captures the bare essentials of EP; that is, each subsystem relaxes to an approximate thermal equilibrium, whereas the total system remains entangled. We find that a generalized Gibbs ensemble exactly describes the total system if we take into account nonlocal conserved quantities that act nontrivially on both subsystems. In the presence of a symmetry-breaking perturbation, the relaxation dynamics of the system exhibits a quasi-stationary EP plateau and eventually reaches thermal equilibrium. We analytically show that the lifetime of EP is inversely proportional to the magnitude of the perturbation.
Perturbation theory in light-cone quantization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langnau, A.
1992-01-01
A thorough investigation of light-cone properties which are characteristic for higher dimensions is very important. The easiest way of addressing these issues is by first analyzing the perturbative structure of light-cone field theories. Perturbative studies cannot substitute for an analysis of problems related to a nonperturbative approach. However, in order to lay the groundwork for upcoming nonperturbative studies, it is indispensable to validate the renormalization methods at the perturbative level, i.e., to gain control over the perturbative treatment first. A clear understanding of divergences in perturbation theory, as well as their numerical treatment, is a necessary first step towards formulating such a program. The first objective of this dissertation is to clarify this issue, at least at second and fourth order in perturbation theory. The work in this dissertation can provide guidance for the choice of counterterms in Discrete Light-Cone Quantization or the Tamm-Dancoff approach. A second objective of this work is the study of light-cone perturbation theory as a competitive tool for conducting perturbative Feynman diagram calculations. Feynman perturbation theory has become the most practical tool for computing cross sections in high energy physics and other physical properties of field theory. Although this standard covariant method has been applied to a great range of problems, computations beyond one-loop corrections are very difficult. Because of the algebraic complexity of Feynman calculations in higher-order perturbation theory, it is desirable to automate Feynman diagram calculations so that algebraic manipulation programs can carry out almost the entire calculation. This thesis presents a step in this direction. The technique we elaborate on here is known as light-cone perturbation theory.
Statistical thermodynamics of clustered populations.
Matsoukas, Themis
2014-08-01
We present a thermodynamic theory for a generic population of M individuals distributed into N groups (clusters). We construct the ensemble of all distributions with fixed M and N, introduce a selection functional that embodies the physics that governs the population, and obtain the distribution that emerges in the scaling limit as the most probable among all distributions consistent with the given physics. We develop the thermodynamics of the ensemble and establish a rigorous mapping to regular thermodynamics. We treat the emergence of a so-called giant component as a formal phase transition and show that the criteria for its emergence are entirely analogous to the equilibrium conditions in molecular systems. We demonstrate the theory by an analytic model and confirm the predictions by Monte Carlo simulation.
Semileptonic B-meson decays to light pseudoscalar mesons on the HISQ ensembles
NASA Astrophysics Data System (ADS)
Gelzer, Zechariah; Bernard, C.; De Tar, C.; El-Khadra, A. X.; Gámiz, E.; Gottlieb, Steven; Kronfeld, Andreas S.; Liu, Yuzhi; Meurice, Y.; Simone, J. N.; Toussaint, D.; Van de Water, R. S.; Zhou, R.
2018-03-01
We report the status of an ongoing lattice-QCD calculation of form factors for exclusive semileptonic decays of B mesons with both charged currents (B → πℓν, Bs → Kℓν) and neutral currents (B → πℓ+ℓ-, B → Kℓ+ℓ-). The results are important for constraining or revealing physics beyond the Standard Model. This work uses MILC's (2+1+1)-flavor ensembles with the HISQ action for the sea and light valence quarks and the clover action in the Fermilab interpretation for the b quark. Simulations are carried out at three lattice spacings down to 0.088 fm, with both physical and unphysical sea-quark masses. We present preliminary results for correlation-function fits.
Harnessing Disordered-Ensemble Quantum Dynamics for Machine Learning
NASA Astrophysics Data System (ADS)
Fujii, Keisuke; Nakajima, Kohei
2017-08-01
The quantum computer has amazing potential for fast information processing. However, the realization of a digital quantum computer is still a challenging problem requiring highly accurate controls and key application strategies. Here we propose a platform, quantum reservoir computing, to solve these issues by exploiting for machine learning the natural quantum dynamics of ensemble systems, which are ubiquitous in laboratories nowadays. This framework enables ensemble quantum systems to universally emulate nonlinear dynamical systems, including classical chaos. A number of numerical experiments show that quantum systems consisting of 5-7 qubits possess computational capabilities comparable to conventional recurrent neural networks of 100-500 nodes. This discovery opens up a paradigm for information processing with artificial intelligence powered by quantum physics.
NASA Astrophysics Data System (ADS)
Fowler, H. J.; Forsythe, N. D.; Blenkinsop, S.; Archer, D.; Hardy, A.; Janes, T.; Jones, R. G.; Holderness, T.
2013-12-01
We present results of two distinct, complementary analyses to assess evidence of elevation dependency in temperature change in the UIB (Karakoram, Eastern Hindu Kush) and the wider WH. The first analysis component examines historical remotely sensed land surface temperature (LST) from the second and third generations of the Advanced Very High Resolution Radiometer (AVHRR/2, AVHRR/3) instrument flown on NOAA satellite platforms from the mid-1980s to the present day. The high spatial resolution (<4 km) of the AVHRR instrument enables precise consideration of the relationship between estimated LST and surface topography. The LST data product was developed as part of an initiative to produce continuous time series for key remotely sensed spatial products (LST, snow-covered area, cloud cover, NDVI) extending as far back into the historical record as feasible. Context for the AVHRR LST data product is provided by results of bias assessment and validation against available local observations from both manned and automatic weather stations. These observations provide meaningful validation and bias assessment of the vertical gradients found in the AVHRR LST, as the elevation range from the lowest manned meteorological station (at 1460 m asl) to the highest automatic weather station (4733 m asl) covers much of the key range yielding runoff from seasonal snowmelt. Furthermore, the common available record period of these stations (1995 to 2007) enables assessment not only of the AVHRR LST but also performance comparisons with the more recent MODIS LST data product. A range of spatial aggregations (from minor tributary catchments to primary basin headwaters) is performed to assess regional homogeneity and identify potential latitudinal or longitudinal gradients in elevation dependency.
The second analysis component investigates elevation dependency, including its uncertainty, in projected temperature change trajectories in the downscaling of a seventeen-member Global Climate Model (GCM) perturbed physics ensemble (PPE) of transient (130-year) simulations using a moderate-resolution (25 km) regional climate model (RCM). The GCM ensemble is the 17-member QUMP (Quantifying Uncertainty in Model Projections) ensemble, and the downscaling is done using HadRM3P, part of the PRECIS regional climate modelling system. Both the RCM and the GCMs are models developed by the UK Met Office Hadley Centre and are based on the HadCM3 GCM. Use of the multi-member PPE enables quantification of uncertainty in projected temperature change, while the spatial resolution of the RCM improves insight into the role of elevation in projected rates of change. Furthermore, comparison with the results of the remote sensing analysis component - considered to provide an 'observed climatology' - permits evaluation of individual ensemble members with regard to biases in spatial gradients in temperature as well as the timing and magnitude of annual cycles.
NASA Astrophysics Data System (ADS)
Yin, Dong-shan; Gao, Yu-ping; Zhao, Shu-hong
2017-07-01
Millisecond pulsars can generate a time scale that is totally independent of the atomic time scale, because the physical mechanisms underlying the pulsar time scale and the atomic time scale are quite different. Pulsar timing observations are usually not evenly sampled, and the intervals between two data points range from several hours to more than half a month. Furthermore, these data sets are sparse. All this makes it difficult to generate an ensemble pulsar time scale. Hence, a new algorithm to calculate the ensemble pulsar time scale is proposed. First, cubic spline interpolation is used to densify the data set and make the intervals between data points uniform. Then, the Vondrak filter is employed to smooth the data set and remove high-frequency noise, and finally the weighted average method is adopted to generate the ensemble pulsar time scale. The newly released NANOGrav (North American Nanohertz Observatory for Gravitational Waves) 9-year data set is used to generate the ensemble pulsar time scale. This data set includes 9 years of observational data for 37 millisecond pulsars observed by the 100-meter Green Bank telescope and the 305-meter Arecibo telescope. It is found that the algorithm used in this paper can effectively reduce the influence of noise in the pulsar timing residuals and improve the long-term stability of the ensemble pulsar time scale. Results indicate that the long-term (> 1 yr) stability of the ensemble pulsar time scale is better than 3.4 × 10^-15.
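The three-step pipeline described above (spline densification, smoothing, weighted averaging) can be sketched as follows. This is an illustrative outline, not the authors' code: the function name `ensemble_timescale`, the common grid, and the weights are invented for illustration, and SciPy's Savitzky-Golay filter stands in for the Vondrak filter, which has no standard SciPy implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import savgol_filter

def ensemble_timescale(times_list, resid_list, weights, grid):
    """Densify each pulsar's timing residuals onto a common, evenly spaced
    grid, smooth out high-frequency noise, and form a weighted average."""
    smoothed = []
    for t, r in zip(times_list, resid_list):
        dense = CubicSpline(t, r)(grid)               # step 1: spline densification
        smoothed.append(savgol_filter(dense, 51, 3))  # step 2: low-pass smoothing
    smoothed = np.array(smoothed)
    w = np.asarray(weights)[:, None]
    return (w * smoothed).sum(axis=0) / w.sum()       # step 3: weighted average
```

In practice the weights would reflect each pulsar's timing noise, so stable pulsars dominate the ensemble average.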
Wigner Functions for Arbitrary Quantum Systems.
Tilma, Todd; Everitt, Mark J; Samson, John H; Munro, William J; Nemoto, Kae
2016-10-28
The possibility of constructing a complete, continuous Wigner function for any quantum system has been a subject of investigation for over 50 years. A key system that has served to illustrate the difficulties of this problem has been an ensemble of spins. Here we present a general and consistent framework for constructing Wigner functions exploiting the underlying symmetries in the physical system at hand. The Wigner function can be used to fully describe any quantum system of arbitrary dimension or ensemble size.
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
NASA Astrophysics Data System (ADS)
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observing system simulation experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters (for example, observation error, model covariances, ensemble size, and the perturbation distribution in the initial conditions) for atmospheric chemistry and data assimilation research as well as for educational purposes. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
Steric interactions lead to collective tilting motion in the ribosome during mRNA-tRNA translocation
NASA Astrophysics Data System (ADS)
Nguyen, Kien; Whitford, Paul C.
2016-02-01
Translocation of mRNA and tRNA through the ribosome is associated with large-scale rearrangements of the head domain in the 30S ribosomal subunit. To elucidate the relationship between 30S head dynamics and mRNA-tRNA displacement, we apply molecular dynamics simulations using an all-atom structure-based model. Here we provide a statistical analysis of 250 spontaneous transitions between the A/P-P/E and P/P-E/E ensembles. Consistent with structural studies, the ribosome samples a chimeric ap/P-pe/E intermediate, where the 30S head is rotated ~18°. It then transiently populates a previously unreported intermediate ensemble, which is characterized by a ~10° tilt of the head. To identify the origins of head tilting, we analyse 781 additional simulations in which specific steric features are perturbed. These calculations show that head tilting may be attributed to specific steric interactions between tRNA and the 30S subunit (PE loop and protein S13). Taken together, this study demonstrates how molecular structure can give rise to large-scale collective rearrangements.
On the global well-posedness of BV weak solutions to the Kuramoto-Sakaguchi equation
NASA Astrophysics Data System (ADS)
Amadori, Debora; Ha, Seung-Yeal; Park, Jinyeong
2017-01-01
The Kuramoto model is a prototype phase model describing the synchronous behavior of weakly coupled limit-cycle oscillators. When the number of oscillators is sufficiently large, the dynamics of the Kuramoto ensemble can be effectively approximated by the corresponding mean-field equation, namely "the Kuramoto-Sakaguchi (KS) equation". This KS equation is a scalar conservation law with a nonlocal flux function due to the mean-field interactions among oscillators. In this paper, we establish the unique global solvability of bounded variation (BV) weak solutions to the kinetic KS equation for identical oscillators using the method of front-tracking in hyperbolic conservation laws. We also show that our BV weak solutions satisfy local-in-time L1-stability with respect to BV initial data. For the ensemble of identical Kuramoto oscillators, we explicitly construct an exponentially growing BV weak solution generated from a BV perturbation of the incoherent state for any positive coupling strength. This implies the nonlinear instability of the incoherent state in the positive coupling strength regime. We provide several numerical examples and compare them with our analytical results.
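For reference, the kinetic Kuramoto-Sakaguchi equation for the one-oscillator density f(θ, ω, t) is commonly written in the mean-field form below (conventions and normalizations vary; this is the standard textbook form, not necessarily the paper's exact notation):

```latex
\partial_t f + \partial_\theta \left[ \left( \omega + K \int_0^{2\pi}\!\!\int_{\mathbb{R}} \sin(\theta_* - \theta)\, f(\theta_*, \omega_*, t)\, \mathrm{d}\omega_*\, \mathrm{d}\theta_* \right) f \right] = 0
```

The integral term is the nonlocal flux referred to in the abstract; for identical oscillators the frequency dependence drops out and f depends on θ and t only.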
NASA Technical Reports Server (NTRS)
Nearing, Grey S.; Crow, Wade T.; Thorp, Kelly R.; Moran, Mary S.; Reichle, Rolf H.; Gupta, Hoshin V.
2012-01-01
Observing system simulation experiments were used to investigate ensemble Bayesian state updating data assimilation of observations of leaf area index (LAI) and soil moisture (theta) for the purpose of improving single-season wheat yield estimates with the Decision Support System for Agrotechnology Transfer (DSSAT) CropSim-Ceres model. Assimilation was conducted in an energy-limited environment and a water-limited environment. Modeling uncertainty was prescribed to weather inputs, soil parameters and initial conditions, and cultivar parameters and through perturbations to model state transition equations. The ensemble Kalman filter and the sequential importance resampling filter were tested for the ability to attenuate effects of these types of uncertainty on yield estimates. LAI and theta observations were synthesized according to characteristics of existing remote sensing data, and effects of observation error were tested. Results indicate that the potential for assimilation to improve end-of-season yield estimates is low. Limitations are due to a lack of root zone soil moisture information, error in LAI observations, and a lack of correlation between leaf and grain growth.
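The sequential importance resampling filter mentioned above can be illustrated with a minimal scalar sketch; the Gaussian likelihood, the function name `sir_update`, and the parameter choices are illustrative assumptions, not the DSSAT study's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_update(particles, obs, obs_err):
    """One sequential-importance-resampling step: weight each ensemble
    member by its likelihood under the observation, then resample."""
    # Gaussian observation likelihood for each particle
    w = np.exp(-0.5 * ((particles - obs) / obs_err) ** 2)
    w /= w.sum()
    # Multinomial resampling: duplicate likely members, drop unlikely ones
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

For example, a prior ensemble centered at 2.0 assimilating an observation of 1.0 with error 0.5 yields a posterior ensemble whose mean sits between the prior mean and the observation, weighted by their relative uncertainties.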
Jiang, Wei; Roux, Benoît
2010-07-01
Free Energy Perturbation with Replica Exchange Molecular Dynamics (FEP/REMD) offers a powerful strategy to improve the convergence of free energy computations. In particular, it has been shown previously that a FEP/REMD scheme allowing random moves within an extended replica ensemble of thermodynamic coupling parameters "lambda" can improve the statistical convergence in calculations of absolute binding free energy of ligands to proteins [J. Chem. Theory Comput. 2009, 5, 2583]. In the present study, FEP/REMD is extended and combined with an accelerated MD simulation method based on Hamiltonian replica-exchange MD (H-REMD) to overcome the additional problems arising from the existence of kinetically trapped conformations within the protein receptor. In the combined strategy, each system with a given thermodynamic coupling factor lambda in the extended ensemble is further coupled with a set of replicas evolving on a biased energy surface with boosting potentials used to accelerate the interconversion among different rotameric states of the side chains in the neighborhood of the binding site. Exchanges are allowed to occur alternately along the axes corresponding to the thermodynamic coupling parameter lambda and the boosting potential, in an extended dual array of coupled lambda- and H-REMD simulations. The method is implemented on the basis of new extensions to the REPDSTR module of the biomolecular simulation program CHARMM. As an illustrative example, the absolute binding free energy of p-xylene to the nonpolar cavity of the L99A mutant of T4 lysozyme was calculated. The tests demonstrate that the dual lambda-REMD and H-REMD simulation scheme greatly accelerates the configurational sampling of the rotameric states of the side chains around the binding pocket, thereby improving the convergence of the FEP computations.
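The exchange moves along the lambda axis rely on the standard Metropolis swap criterion for replica exchange. A schematic sketch follows; the energy-table layout `U[a][b]` (energy of replica a's configuration evaluated with replica b's coupling parameter) and the function name are hypothetical conventions for illustration, not CHARMM's REPDSTR interface.

```python
import math
import random

random.seed(1)

def attempt_swap(U, i, j, beta):
    """Metropolis criterion for exchanging configurations between
    replicas i and j at inverse temperature beta. U[a][b] is the
    potential energy of replica a's configuration under lambda_b."""
    # Energy change from evaluating each configuration under the
    # other replica's coupling parameter
    delta = beta * (U[i][j] + U[j][i] - U[i][i] - U[j][j])
    # Accept if the swap lowers the energy, else with Boltzmann probability
    return delta <= 0 or random.random() < math.exp(-delta)
```

In a dual lambda/H-REMD array, the same criterion is applied alternately along the lambda axis and the boosting-potential axis.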
Granovsky, Alexander A
2011-06-07
The distinctive desirable features, both mathematically and physically meaningful, for all partially contracted multi-state multi-reference perturbation theories (MS-MR-PT) are explicitly formulated. An original approach to MS-MR-PT theory, called extended multi-configuration quasi-degenerate perturbation theory (XMCQDPT), having most, if not all, of the desirable properties, is introduced. The new method is applied at the second order of perturbation theory (XMCQDPT2) to the 1(1)A(')-2(1)A(') conical intersection in the allene molecule, the avoided crossing in the LiF molecule, and the 1(1)A(1) to 2(1)A(1) electronic transition in cis-1,3-butadiene. The new theory has several advantages compared to well-established approaches, such as second-order multi-configuration quasi-degenerate perturbation theory and multi-state second-order complete active space perturbation theory. The analysis of the prevalent approaches to MS-MR-PT theory performed within the framework of the XMCQDPT theory unveils the origin of their common inherent problems. We describe the efficient implementation strategy that makes XMCQDPT2 an especially useful general-purpose tool in the high-level modeling of small to large molecular systems. © 2011 American Institute of Physics
NASA Astrophysics Data System (ADS)
Zhang, G.; Chen, F.; Gan, Y.
2017-12-01
Uncertainties in the Noah with multiparameterization (Noah-MP) land surface model were assessed through physics-ensemble simulations for four sparsely vegetated sites located in the Tibetan Plateau region. The simulations were evaluated using observations collected at the four sites during the third Tibetan Plateau Experiment (TIPEX III). The impacts of uncertainties in the precipitation data used as forcing and in parameterizations of sub-processes such as soil organic matter and the rhizosphere on the physics-ensemble simulations were identified using two different methods: natural selection and Tukey's test. This study attempts to answer the following questions: 1) what is the relative contribution of precipitation-forcing uncertainty to the overall uncertainty range of Noah-MP simulations at these sites, as compared to that at a more moist and densely vegetated site; 2) which physical parameterizations are most sensitive for these sites; and 3) can we identify the parameterizations that need to be improved? The investigation was conducted by evaluating the simulated seasonal evolution of soil temperature, soil moisture, and surface heat fluxes through a number of Noah-MP ensemble simulations.
NASA Astrophysics Data System (ADS)
Peterson, Brittany Ann
Winter storms can affect millions of people, with impacts such as disruptions to transportation, hazards to human health, reductions in retail sales, and structural damage. Blizzard forecasts for Alberta Clippers can be a particular challenge in the Northern Plains, as these systems typically depart from the Canadian Rockies, intensify, and impact the Northern Plains all within 24 hours. The purpose of this study is to determine whether probabilistic forecasts derived from a local physics-based ensemble can improve specific aspects of winter storm forecasts for three Alberta Clipper cases. Verification is performed on the ensemble members and the ensemble mean, with a focus on quantifying uncertainty in the storm track, two-meter winds, and precipitation using the MERRA and NOHRSC SNODAS datasets. This study finds that additional improvements are needed before proceeding with operational use of the ensemble blizzard products, but the use of a proxy for blizzard conditions yields promising results.
Lahiri, A; Roy, Abhijit Guha; Sheet, Debdoot; Biswas, Prabir Kumar
2016-08-01
Automated segmentation of retinal blood vessels in label-free fundus images plays a pivotal role in computer-aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders, and cardiovascular diseases. The challenge remains active in medical image analysis research due to the varied distribution of blood vessels, which manifest variations in their physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using an ensemble of two-level sparsely trained denoising stacked autoencoders. First-level training with bootstrap samples ensures decoupling, and the second-level ensemble, formed from different network architectures, ensures architectural revision. We show that ensemble training of autoencoders fosters diversity in the learned dictionary of visual kernels for vessel segmentation. A softmax classifier is used for fine-tuning each member autoencoder, and multiple strategies are explored for two-level fusion of ensemble members. On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.
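Among the fusion strategies one can explore for combining ensemble members, probability averaging and max-voting over the members' per-pixel softmax outputs are the simplest; the sketch below is illustrative (the function name, the 0.5 threshold, and the array layout are invented), not the paper's exact scheme.

```python
import numpy as np

def fuse_ensemble(prob_maps, strategy="mean"):
    """Fuse per-pixel vessel probabilities from N ensemble members.
    prob_maps: array-like of shape (n_members, H, W) with softmax outputs."""
    p = np.asarray(prob_maps, dtype=float)
    if strategy == "mean":
        fused = p.mean(axis=0)   # average member probabilities
    elif strategy == "max":
        fused = p.max(axis=0)    # most confident member wins
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return (fused >= 0.5).astype(np.uint8)  # binary vessel mask
```

Averaging tends to suppress individual members' false positives, while max-fusion favors sensitivity to thin vessels at the cost of more noise.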
A study of regional-scale aerosol assimilation using a Stretch-NICAM
NASA Astrophysics Data System (ADS)
Misawa, S.; Dai, T.; Schutgens, N.; Nakajima, T.
2013-12-01
Although aerosol is considered harmful to human health and has become a social issue, aerosol models and emission inventories include large uncertainties. In recent studies, data assimilation has been applied to aerosol simulation to obtain more accurate aerosol fields and emission inventories. Most of these studies, however, are carried out only on the global scale, and there is little research on regional-scale aerosol assimilation. In this study, we have created and verified an aerosol assimilation system on the regional scale, in the hope of reducing the error associated with the aerosol emission inventory. Our aerosol assimilation system has been developed using an atmospheric climate model, NICAM (Non-hydrostatic ICosahedral Atmospheric Model; Satoh et al., 2008), with a stretched grid system and coupled with an aerosol transport model, SPRINTARS (Takemura et al., 2000). The assimilation system is based on the local ensemble transform Kalman filter (LETKF). To validate the system, we used simulated observational data created by adding artificial errors to the surface aerosol fields constructed by Stretch-NICAM-SPRINTARS, and we also included a small perturbation in the original emission inventory. The assimilation with the modified observational data and emission inventory was performed for the Kanto-plain region around Tokyo, Japan, and the result indicates that the system reduces the relative error of aerosol concentration by 20%. Furthermore, we examined the sensitivity of the assimilation system by varying the total ensemble size (5, 10, and 15 members) and the local patch (domain) size (radius of 50 km, 100 km, and 200 km), both of which are tuning parameters in the LETKF. The assimilation results with 5, 10, and 15 ensemble members show that the larger the ensemble, the smaller the relative error becomes. This is consistent with ensemble Kalman filter theory and implies that the assimilation system works properly.
We also found that the assimilation system does not work well with a 200 km radius, while a 50 km radius domain is less efficient than a 100 km radius domain. Therefore, we expect that the optimal patch size lies somewhere between 50 km and 200 km. We will also present an analysis of real data from the suspended particulate matter (SPM) network in the Kanto-plain region.
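The ensemble Kalman update at the core of the LETKF can be illustrated, for a single scalar observation and without localization, by a stochastic-EnKF sketch. Names and shapes are illustrative; a full LETKF additionally performs this analysis locally within each patch and uses a deterministic transform rather than perturbed observations.

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(X, y, H, r):
    """Stochastic ensemble Kalman filter update for one scalar observation.
    X: (n_state, n_ens) ensemble, y: observed value,
    H: (n_state,) linear observation operator, r: observation error variance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)   # ensemble anomalies
    HX = H @ X                              # observed ensemble, shape (n_ens,)
    HA = HX - HX.mean()
    Pxy = A @ HA / (n_ens - 1)              # state-observation covariance
    Pyy = HA @ HA / (n_ens - 1) + r         # innovation variance
    K = Pxy / Pyy                           # Kalman gain, shape (n_state,)
    y_pert = y + rng.normal(0.0, np.sqrt(r), n_ens)  # perturbed observations
    return X + np.outer(K, y_pert - HX)
```

With more members, the sampled covariances Pxy and Pyy are estimated more accurately, which is the mechanism behind the error reduction seen when the ensemble size grows from 5 to 15.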
Wave ensemble forecast in the Western Mediterranean Sea, application to an early warning system.
NASA Astrophysics Data System (ADS)
Pallares, Elena; Hernandez, Hector; Moré, Jordi; Espino, Manuel; Sairouni, Abdel
2015-04-01
The Western Mediterranean Sea is a highly heterogeneous and variable area, as reflected in the wind field, the current field, and the waves, mainly in the first kilometers offshore. As a result of this variability, wave forecasting in these regions is quite complicated, usually with accuracy problems during energetic storm events. Moreover, it is in these areas where most economic activities take place, including fisheries, sailing, tourism, coastal management, and offshore renewable energy platforms. In order to introduce an indicator of the probability of occurrence of the different sea states and give more detailed forecast information to end users, an ensemble wave forecast system is considered. Ensemble prediction systems have been used for meteorological forecasting over the last decades: to deal with the uncertainties in the initial conditions and in the different parameterizations used in the models, which may introduce errors into the forecast, a set of perturbed meteorological simulations is considered as possible future scenarios and compared with the deterministic forecast. In the present work, the SWAN wave model (v41.01) has been implemented for the Western Mediterranean Sea, forced with wind fields produced by the deterministic Global Forecast System (GFS) and the Global Ensemble Forecast System (GEFS). The wind fields include a deterministic forecast (also named the control), between 11 and 21 ensemble members, and some intelligent members obtained from the ensemble, such as the mean of all members. Four buoys located in the study area, moored in coastal waters, have been used to validate the results. The outputs include the full time series, with a forecast horizon of 8 days, represented in spaghetti diagrams, together with the spread of the system and the exceedance probability at different thresholds.
The main goal of this exercise is to determine the degree of uncertainty of the wave forecast, which is most meaningful between the 5th and the 8th day of the prediction. The information obtained is then included in an early warning system, designed in the framework of the European project iCoast (ECHO/SUB/2013/661009), with the aim of setting alarms in coastal areas depending on the wave conditions, the sea level, flooding, and run-up at the coast.
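The ensemble products described above (mean, spread, and exceedance probabilities at thresholds) reduce to simple statistics over the member axis; a minimal sketch with invented names and array layout follows.

```python
import numpy as np

def ensemble_products(hs, thresholds):
    """From significant-wave-height forecasts hs with shape
    (n_members, n_times), compute ensemble mean, spread, and
    the probability of exceeding each threshold at each time."""
    hs = np.asarray(hs, dtype=float)
    mean = hs.mean(axis=0)
    spread = hs.std(axis=0)  # member dispersion: the uncertainty indicator
    prob = {th: (hs >= th).mean(axis=0) for th in thresholds}
    return mean, spread, prob
```

The exceedance probabilities map directly onto alarm levels in a warning system (e.g., raise an alert when the probability of exceeding a hazardous wave height passes a chosen fraction of members).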
NASA Astrophysics Data System (ADS)
Feldman, D.; Collins, W. D.; Wielicki, B. A.; Shea, Y.; Mlynczak, M. G.; Kuo, C.; Nguyen, N.
2017-12-01
Shortwave feedbacks are a persistent source of uncertainty for climate models and a large contributor to the diagnosed range of equilibrium climate sensitivity (ECS) for the international multi-model ensemble. The processes that contribute to these feedbacks affect top-of-atmosphere energetics and produce spectral signatures that may be time-evolving. We explore the value of such spectral signatures for providing an observational constraint on model ECS by simulating top-of-atmosphere shortwave reflectance spectra across much of the energetically-relevant shortwave bandpass (300 to 2500 nm). We present centennial-length shortwave hyperspectral simulations from low, medium and high ECS models that reported to the CMIP5 archive as part of an Observing System Simulation Experiment (OSSE) in support of the CLimate Absolute Radiance and Refractivity Observatory (CLARREO). Our framework interfaces with CMIP5 archive results and is agnostic to the choice of model. We simulated spectra from the INM-CM4 model (ECS of 2.08 K/2xCO2), the MIROC5 model (ECS of 2.70 K/2xCO2), and the CSIRO Mk3-6-0 model (ECS of 4.08 K/2xCO2) based on those models' integrations of the RCP8.5 scenario for the 21st century. This approach allows us to explore how perfect data records can exclude models of lower or higher climate sensitivity. We find that spectral channels covering visible and near-infrared water-vapor overtone bands can potentially exclude a low or high sensitivity model with under 15 years of absolutely-calibrated data. These different spectral channels are sensitive to model cloud radiative effect and cloud height changes, respectively. These unprecedented calculations lay the groundwork for spectral simulations of perturbed-physics ensembles in order to identify those shortwave observations that can help narrow the range in shortwave model feedbacks and ultimately help reduce the stubbornly-large range in model ECS.
Quantifying Uncertainty in Model Predictions for the Pliocene (Plio-QUMP): Initial results
Pope, J.O.; Collins, M.; Haywood, A.M.; Dowsett, H.J.; Hunter, S.J.; Lunt, D.J.; Pickering, S.J.; Pound, M.J.
2011-01-01
Examination of the mid-Pliocene Warm Period (mPWP; ~3.3 to 3.0 Ma BP) provides an excellent opportunity to test the ability of climate models to reproduce warm climate states, thereby assessing our confidence in model predictions. To do this it is necessary to relate the uncertainty in model simulations of mPWP climate to uncertainties in projections of future climate change. The uncertainties introduced by the model can be estimated through the use of a Perturbed Physics Ensemble (PPE). Building on the UK Met Office Quantifying Uncertainty in Model Predictions (QUMP) Project, this paper presents the results from an initial investigation using the end members of a PPE in a fully coupled atmosphere-ocean model (HadCM3) running with appropriate mPWP boundary conditions. Prior work has shown that the unperturbed version of HadCM3 may underestimate mPWP sea surface temperatures at higher latitudes. Initial results indicate that neither the low sensitivity nor the high sensitivity simulations produce unequivocally improved mPWP climatology relative to the standard. Whilst the high sensitivity simulation was able to reconcile up to 6 °C of the data/model mismatch in sea surface temperatures in the high latitudes of the Northern Hemisphere (relative to the standard simulation), it did not produce a better prediction of global vegetation than the standard simulation. Overall the low sensitivity simulation was degraded compared to the standard and high sensitivity simulations in all aspects of the data/model comparison. The results have shown that a PPE has the potential to explore weaknesses in mPWP modelling simulations which have been identified by geological proxies, but that a 'best fit' simulation will more likely come from a full ensemble in which simulations that contain the strengths of the two end member simulations shown here are combined. © 2011 Elsevier B.V.
Nonlinear phenomena in general relativity
NASA Astrophysics Data System (ADS)
Allahyari, Alireza; Firouzjaee, Javad T.; Mansouri, Reza
2018-04-01
Perturbation theory plays an important role in studying structure formation in cosmology and post-Newtonian physics, but not all phenomena can be described by linear perturbation theory. Thus, it is necessary to study exact solutions or higher-order perturbations. Specifically, we study black hole (apparent) horizons and cosmological event horizon formation in perturbation theory. We emphasize that in the perturbative regime of the gravitational potential these horizons cannot form at lower orders. Studying the infinite plane metric, we show that, to capture the effect of the cosmological constant, we need at least a second-order expansion.
NASA Astrophysics Data System (ADS)
Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.
2015-05-01
A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice component of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.
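The bred-vector initialization described above can be illustrated with a toy model. This is only a sketch of the generic rescale-to-fixed-norm breeding cycle on a chaotic map, not the MPI-ESM implementation, which applies a vertically varying norm inside the ocean component; the model, amplitudes, and cycle count are invented:

```python
import numpy as np

def breed_vector(step, x0, delta0, n_cycles, norm=0.01):
    """Toy breeding cycle: run control and perturbed forecasts,
    take their difference, rescale it to a fixed amplitude, repeat.
    The rescaled difference converges toward the fastest-growing
    error direction of the flow."""
    x = np.asarray(x0, dtype=float)
    d = np.asarray(delta0, dtype=float)
    for _ in range(n_cycles):
        xp = step(x + d)                  # perturbed forecast
        x = step(x)                       # control forecast
        d = xp - x                        # raw forecast difference
        d *= norm / np.linalg.norm(d)     # rescale to fixed norm
    return d

# Toy chaotic "model": a logistic map applied componentwise
step = lambda s: 3.9 * s * (1.0 - s)
bv = breed_vector(step, np.array([0.3, 0.6]), np.array([1e-3, -1e-3]), 50)
```

Ensemble members would then be initialized as the analysis state plus and minus such bred vectors.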
Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures
NASA Astrophysics Data System (ADS)
Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.
2012-12-01
Simulations using IPCC-class climate models can fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
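The failure-classification idea can be sketched with a standard SVM library. The failure rule, sample sizes, and parameter ranges below are invented stand-ins for the actual POP2 ensemble data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, n_params = 500, 18                        # 18 ocean-model parameters
X = rng.uniform(0.0, 1.0, size=(n, n_params))

# Hypothetical failure rule: runs "crash" when two viscosity-like
# parameters are jointly small (a stand-in for real numerical failures).
y = ((X[:, 0] + X[:, 1]) < 0.5).astype(int)

# Fit on a training ensemble, evaluate on an independent validation set
clf = SVC(kernel="rbf", probability=True).fit(X[:400], y[:400])
p_fail = clf.predict_proba(X[400:])[:, 1]    # predicted failure probability
acc = clf.score(X[400:], y[400:])            # validation accuracy
```

In practice one would follow this with a global sensitivity analysis over the fitted classifier to rank which parameters drive the failures.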
Future of Lattice Calculations with Staggered Sea Quarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gottlieb, Steven
2011-05-23
The MILC collaboration has for some years been creating gauge ensembles with 2+1 flavors of asqtad improved staggered quarks. There are some 40 ensembles covering a wide range of quark mass and lattice spacing, thus allowing control of the chiral and continuum limits. An extensive review of that program has been published in Reviews of Modern Physics. Recently, MILC has begun a new program using HPQCD's highly improved staggered quark (HISQ) action. This action has smaller taste symmetry breaking than asqtad and improved scaling properties. We also include a dynamical charm quark in these calculations. We summarize the achievements of the asqtad program, what has been done so far with HISQ quarks, and then consider what future ensembles will be created and their impact.
Generalized ensemble theory with non-extensive statistics
NASA Astrophysics Data System (ADS)
Shen, Ke-Ming; Zhang, Ben-Wei; Wang, En-Ke
2017-12-01
The non-extensive canonical ensemble theory is reconsidered with the method of Lagrange multipliers by maximizing the Tsallis entropy, under the constraint that the normalization term of Tsallis' q-average of physical quantities, the sum ∑_j p_j^q, is independent of the probability p_i for Tsallis parameter q. The self-referential problem in the deduced probability and thermal quantities in non-extensive statistics is thus avoided, and thermodynamical relationships are obtained in a consistent and natural way. We also extend the study to the non-extensive grand canonical ensemble theory and obtain the q-deformed Bose-Einstein distribution as well as the q-deformed Fermi-Dirac distribution. The theory is further applied to the generalized Planck law to demonstrate the distinct behaviors of the various generalized q-distribution functions discussed in the literature.
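One common convention for the q-deformed distributions mentioned above can be sketched numerically via the Tsallis q-exponential; the exact form derived in the paper may differ in detail:

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q)x]^(1/(1-q));
    reduces to exp(x) in the limit q -> 1, and is cut off to 0
    where the bracket is non-positive."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

def bose_einstein_q(E, beta, mu, q):
    """q-deformed Bose-Einstein occupation number (one common
    convention): n_q = 1 / (e_q(beta (E - mu)) - 1)."""
    return 1.0 / (q_exp(beta * (E - mu), q) - 1.0)

# q -> 1 recovers the ordinary Bose-Einstein distribution
E, beta, mu = 2.0, 1.0, 0.0
n_q = bose_einstein_q(E, beta, mu, 1.0)
n_std = 1.0 / (np.exp(beta * E) - 1.0)
```

The q-deformed Fermi-Dirac form follows by changing the minus sign in the denominator to a plus.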
Chiral behavior of K →π l ν decay form factors in lattice QCD with exact chiral symmetry
NASA Astrophysics Data System (ADS)
Aoki, S.; Cossu, G.; Feng, X.; Fukaya, H.; Hashimoto, S.; Kaneko, T.; Noaki, J.; Onogi, T.; Jlqcd Collaboration
2017-08-01
We calculate the form factors of the K → πlν semileptonic decays in three-flavor lattice QCD and study their chiral behavior as a function of the momentum transfer and the Nambu-Goldstone boson masses. Chiral symmetry is exactly preserved by using the overlap quark action, which enables us to directly compare the lattice data with chiral perturbation theory (ChPT). We generate gauge ensembles at a lattice spacing of 0.11 fm with four pion masses covering 290-540 MeV and a strange quark mass m_s close to its physical value. By using the all-to-all quark propagator, we calculate the vector and scalar form factors with high precision. Their dependence on m_s and the momentum transfer is studied by using the reweighting technique and twisted boundary conditions for the quark fields. We compare the results for the semileptonic form factors with ChPT at next-to-next-to-leading order in detail. While many low-energy constants appear at this order, we make use of our data on the light meson electromagnetic form factors in order to control the chiral extrapolation. We determine the normalization of the form factors as f_+(0) = 0.9636(36)(+57/−35) and observe reasonable agreement of their shape with experiment.
Emergent rogue wave structures and statistics in spontaneous modulation instability.
Toenger, Shanti; Godin, Thomas; Billet, Cyril; Dias, Frédéric; Erkintalo, Miro; Genty, Goëry; Dudley, John M
2015-05-20
The nonlinear Schrödinger equation (NLSE) is a seminal equation of nonlinear physics describing wave packet evolution in weakly-nonlinear dispersive media. The NLSE is especially important in understanding how high amplitude "rogue waves" emerge from noise through the process of modulation instability (MI) whereby a perturbation on an initial plane wave can evolve into strongly-localised "breather" or "soliton on finite background (SFB)" structures. Although there has been much study of such structures excited under controlled conditions, there remains the open question of how closely the analytic solutions of the NLSE actually model localised structures emerging in noise-seeded MI. We address this question here using numerical simulations to compare the properties of a large ensemble of emergent peaks in noise-seeded MI with the known analytic solutions of the NLSE. Our results show that both elementary breather and higher-order SFB structures are observed in chaotic MI, with the characteristics of the noise-induced peaks clustering closely around analytic NLSE predictions. A significant conclusion of our work is to suggest that the widely-held view that the Peregrine soliton forms a rogue wave prototype must be revisited. Rather, we confirm earlier suggestions that NLSE rogue waves are most appropriately identified as collisions between elementary SFB solutions.
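Noise-seeded MI of the kind simulated above can be reproduced qualitatively with a standard split-step Fourier integration of the focusing NLSE; the grid, step sizes, and noise amplitude below are illustrative choices, not those of the study:

```python
import numpy as np

def nlse_split_step(psi, dz, nz, dx):
    """Split-step Fourier integrator for the focusing NLSE
    i psi_z + (1/2) psi_xx + |psi|^2 psi = 0 (periodic domain)."""
    k = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dx)  # angular wavenumbers
    lin = np.exp(-0.5j * k**2 * dz)                   # dispersion factor
    for _ in range(nz):
        psi = np.fft.ifft(lin * np.fft.fft(psi))      # linear half
        psi = psi * np.exp(1j * np.abs(psi)**2 * dz)  # nonlinear phase
    return psi

# Plane wave of unit amplitude seeded with weak complex noise
rng = np.random.default_rng(1)
nx, L = 256, 50.0
psi0 = 1.0 + 1e-3 * (rng.standard_normal(nx) + 1j * rng.standard_normal(nx))
psi = nlse_split_step(psi0, dz=0.01, nz=2000, dx=L / nx)

# MI amplifies the noise into localized peaks above the background
peak = np.abs(psi).max()
```

Collecting `peak` statistics over many noise realizations gives the kind of emergent-peak ensemble the paper compares against analytic SFB solutions.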
Observation of discrete time-crystalline order in a disordered dipolar many-body system
Kucsko, Georg; Zhou, Hengyun; Isoya, Junichi; Jelezko, Fedor; Onoda, Shinobu; Sumiya, Hitoshi; Khemani, Vedika; von Keyserlingk, Curt; Yao, Norman Y.; Demler, Eugene; Lukin, Mikhail D.
2017-01-01
Understanding quantum dynamics away from equilibrium is an outstanding challenge in the modern physical sciences. It is well known that out-of-equilibrium systems can display a rich array of phenomena, ranging from self-organized synchronization to dynamical phase transitions1,2. More recently, advances in the controlled manipulation of isolated many-body systems have enabled detailed studies of non-equilibrium phases in strongly interacting quantum matter3–6. As a particularly striking example, the interplay of periodic driving, disorder, and strong interactions has recently been predicted to result in exotic “time-crystalline” phases7, which spontaneously break the discrete time-translation symmetry of the underlying drive8–11. Here, we report the experimental observation of such discrete time-crystalline order in a driven, disordered ensemble of ~10^6 dipolar spin impurities in diamond at room temperature12–14. We observe long-lived temporal correlations at integer multiples of the fundamental driving period, experimentally identify the phase boundary and find that the temporal order is protected by strong interactions; this order is remarkably stable against perturbations, even in the presence of slow thermalization15,16. Our work opens the door to exploring dynamical phases of matter and controlling interacting, disordered many-body systems17–19. PMID:28277511
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staten, Paul; Reichler, Thomas; Lu, Jian
Tropospheric circulation shifts have strong potential to impact surface climate, but the magnitude of these shifts in a changing climate, and the attending regional hydrological changes, are difficult to project. Part of this difficulty arises from our lack of understanding of the physical mechanisms behind the circulation shifts themselves. In order to better delineate circulation shifts and their respective causes, we decompose the circulation response into (1) the "direct" response to radiative forcings themselves, and (2) the "indirect" response to changing sea surface temperatures. Using ensembles of 90-day climate model simulations with immediate switch-on forcings, including perturbed greenhouse gas concentrations, stratospheric ozone concentrations, and sea surface temperatures, we document the direct and indirect transient responses of the zonal mean general circulation, and investigate the roles of previously proposed mechanisms in shifting the midlatitude jet. We find that both the direct and indirect wind responses often begin in the lower stratosphere. Changes in midlatitude eddies are ubiquitous and synchronous with the midlatitude zonal wind response. Shifts in the critical latitude of wave absorption on either flank of the jet are not implicated as primary factors for the poleward shifting jet, although we see some evidence for increasing equatorward wave reflection over the southern hemisphere in response to sea surface warming. Mechanisms for the northern hemisphere jet shift are less clear.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, Seth, E-mail: seth.olsen@uq.edu.au
2015-01-28
This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed (“microcanonical”) SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with “more diabatic than adiabatic” states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse “temperature,” unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint.
The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler’s hydrol blue. The diabatic CASVB representation is shown to vary weakly for “temperatures” corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.
Emergent Societal Effects of Crimino-Social Forces in an Animat Agent Model
NASA Astrophysics Data System (ADS)
Scogings, Chris J.; Hawick, Ken A.
Societal behaviour can be studied at a causal level by perturbing a stable multi-agent model with new microscopic behaviours and observing the statistical response over an ensemble of simulated model systems. We report on the effects of introducing criminal and law-enforcing behaviours into a large scale animat agent model and describe the complex spatial agent patterns and population changes that result. Our well-established predator-prey substrate model provides a background framework against which these new microscopic behaviours can be trialled and investigated. We describe some quantitative results and some surprising conclusions concerning the overall societal health when individually anti-social behaviour is introduced.
Path integrals, the ABL rule and the three-box paradox
NASA Astrophysics Data System (ADS)
Sokolovski, D.; Puerto Giménez, I.; Sala Mayato, R.
2008-10-01
The three-box problem is analysed in terms of virtual pathways, interference between which is destroyed by a number of intermediate measurements. The Aharonov-Bergmann-Lebowitz (ABL) rule is shown to be a particular case of Feynman's recipe for assigning probabilities to exclusive alternatives. The ‘paradoxical’ features of the three box case arise in an attempt to attribute, in contradiction to the uncertainty principle, properties pertaining to different ensembles produced by different intermediate measurements to the same particle. The effect can be mimicked by a classical system, provided an observation is made to perturb the system in a non-local manner.
Avetissian, H K; Ghazaryan, A G; Matevosyan, H H; Mkrtchian, G F
2015-10-01
The microscopic quantum theory of nonlinear plasma interaction with coherent shortwave electromagnetic radiation of arbitrary intensity is developed. The Liouville-von Neumann equation for the density matrix is solved analytically, treating the wave field exactly and the scattering potential of the plasma ions as a perturbation. With the help of this solution we calculate the nonlinear inverse-bremsstrahlung absorption rate for a grand canonical ensemble of electrons. The latter is studied in Maxwellian as well as in degenerate quantum plasma for x-ray lasers at superhigh intensities, and it is shown that an efficient absorption coefficient can be achieved in these cases.
Probabilistic flood warning using grand ensemble weather forecasts
NASA Astrophysics Data System (ADS)
He, Y.; Wetterhall, F.; Cloke, H.; Pappenberger, F.; Wilson, M.; Freer, J.; McGregor, G.
2009-04-01
As the severity of floods increases, possibly due to climate and land-use change, there is an urgent need for more effective and reliable warning systems. The incorporation of numerical weather predictions (NWP) into a flood warning system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient as it involves considerable non-predictable uncertainties and can lead to a high number of false or missed warnings. An ensemble of weather forecasts from one Ensemble Prediction System (EPS), when used on catchment hydrology, can provide improved early flood warning as some of the uncertainties can be quantified. EPS forecasts from a single weather centre only account for part of the uncertainties, originating from initial conditions and stochastic physics. Other sources of uncertainty, including numerical implementations and/or data assimilation, can only be assessed if a grand ensemble of EPSs from different weather centres is used. When the various EPSs from different weather centres are aggregated, the probabilistic nature of the ensemble precipitation forecasts can be better retained and accounted for. The availability of twelve global EPSs through the 'THORPEX Interactive Grand Global Ensemble' (TIGGE) offers a new opportunity for the design of an improved probabilistic flood forecasting framework. This work presents a case study using the TIGGE database for flood warning on a meso-scale catchment. The upper reach of the River Severn catchment, located in the Midlands Region of England, is selected due to its abundant data and its relatively small size (4062 km²) compared to the resolution of the NWPs.
This choice was deliberate, as we hypothesize that the uncertainty in the forcing of smaller catchments cannot be represented by a single EPS with a very limited number of ensemble members, but only through the variance given by a large number of ensembles and ensemble systems. A coupled atmospheric-hydrologic-hydraulic cascade system driven by the TIGGE ensemble forecasts is set up to study the potential benefits of using the TIGGE database in early flood warning. The physically based and fully distributed LISFLOOD suite of models is selected to simulate discharge and flood inundation consecutively. The results show that the TIGGE database is a promising tool to produce forecasts of discharge and flood inundation comparable with the observed discharge and the simulated inundation driven by the observed discharge. The spread of the discharge forecasts varies from centre to centre, but it is generally large, implying a significant level of uncertainty. Precipitation input uncertainties dominate and propagate through the cascade chain. The current NWPs fall short of representing the spatial variability of precipitation over a comparatively small catchment. This perhaps indicates the need to improve NWP resolution and/or disaggregation techniques to narrow the spatial gap between meteorology and hydrology. It is not necessarily true that early flood warning becomes more reliable when more ensemble forecasts are employed. It is difficult to identify the best forecast centre(s), but in general the chance of detecting floods is increased by using the TIGGE database. Only one flood event was studied because most of the TIGGE data became available after October 2007. It is necessary to test the TIGGE ensemble forecasts on other flood events in other catchments with different hydrological and climatic regimes before general conclusions can be made on its robustness and applicability.
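The grand-ensemble pooling idea can be sketched simply. The equal-centre weighting and toy discharge values below are illustrative assumptions, not the study's actual aggregation method:

```python
import numpy as np

def pooled_flood_probability(centre_forecasts, threshold):
    """Pool discharge ensembles from several EPS centres (TIGGE-style)
    and return the probability that discharge exceeds a flood threshold.
    Each centre contributes equally regardless of its member count
    (one simple weighting choice among several possible ones)."""
    probs = [np.mean(np.asarray(m) > threshold) for m in centre_forecasts]
    return float(np.mean(probs))

# Toy example: three centres with different ensemble sizes (discharge, m^3/s)
centre_a = [120, 140, 90, 160, 180]                  # 5 members
centre_b = [100, 150, 170]                           # 3 members
centre_c = [130, 135, 145, 155, 125, 165, 95, 175]   # 8 members
p = pooled_flood_probability([centre_a, centre_b, centre_c], threshold=150)
```

An alternative design is to pool all members into one flat grand ensemble, which instead weights centres by member count; which choice is better depends on the relative skill and spread of each centre.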
NASA Astrophysics Data System (ADS)
Idier, Déborah; Falqués, Albert; Rohmer, Jérémy; Arriaga, Jaime
2017-09-01
The instability mechanisms for self-organized kilometer-scale shoreline sand waves have been extensively explored by modeling. However, while the assumed bathymetric perturbation associated with the sand wave controls the feedback between morphology and waves, its effect on the instability onset has not been explored. In addition, no systematic investigation of the effect of the physical parameters has been done yet. Using a linear stability model, we investigate the effect of wave conditions, cross-shore profile, closure depth, and two perturbation shapes (P1: cross-shore bathymetric profile shift, and P2: bed level perturbation linearly decreasing offshore). For a P1 perturbation, no instability occurs below an absolute critical angle θ_c0 ≈ 40-50°. For a P2 perturbation, there is no absolute critical angle: sand waves can develop also for low-angle waves. In fact, the bathymetric perturbation shape plays a key role in low-angle wave instability: such instability only develops if the curvature of the depth contours offshore the breaking zone is larger than the shoreline one. This can occur for the P2 perturbation but not for P1. The analysis of bathymetric data suggests that both curvature configurations could exist in nature. For both perturbation types, a large wave angle, a small wave period, and a large closure depth strongly favor instability. The cross-shore profile has almost no effect with a P1 perturbation, whereas a large surf zone slope and a gently sloping shoreface strongly enhance instability under low-angle waves for a P2 perturbation. Finally, predictive statistical models are set up to identify sites prone to exhibit either a critical angle close to θ_c0 or low-angle wave instability.
NASA Astrophysics Data System (ADS)
Costin, Ovidiu; Dunne, Gerald V.
2018-01-01
We show how to convert divergent series, which typically occur in many applications in physics, into rapidly convergent inverse factorial series. This can be interpreted physically as a novel resummation of perturbative series. Being convergent, these new series allow rigorous extrapolation from an asymptotic region with a large parameter, to the opposite region where the parameter is small. We illustrate the method with various physical examples, and discuss how these convergent series relate to standard methods such as Borel summation, and also how they incorporate the physical Stokes phenomenon. We comment on the relation of these results to Dyson’s physical argument for the divergence of perturbation theory. This approach also leads naturally to a wide class of relations between bosonic and fermionic partition functions, and Klein-Gordon and Dirac determinants.
Tuning the chemosensory window
Zhou, Shanshan; Mackay, Trudy FC
2010-01-01
Accurate perception of chemical signals from the environment is critical for the fitness of most animals. Drosophila melanogaster experiences its chemical environment through families of chemoreceptors that include olfactory receptors, gustatory receptors and odorant binding proteins. Its chemical environment, however, changes during its life cycle and the interpretation of chemical signals is dependent on dynamic social and physical surroundings. Phenotypic plasticity of gene expression of the chemoreceptor repertoire allows flies to adjust the chemosensory window through which they “view” their world and to modify the ensemble of expressed chemoreceptor proteins in line with their developmental and physiological state and according to their needs to locate food and oviposition sites under different social and physical environmental conditions. Furthermore, males and females differ in their expression profiles of chemoreceptor genes. Thus, each sex experiences its chemical environment via combinatorial activation of distinct chemoreceptor ensembles. The remarkable phenotypic plasticity of the chemoreceptor repertoire raises several fundamental questions. What are the mechanisms that translate environmental cues into regulation of chemoreceptor gene expression? How are gustatory and olfactory cues integrated perceptually? What is the relationship between ensembles of odorant binding proteins and odorant receptors? And, what is the significance of co-regulated chemoreceptor transcriptional networks? PMID:20305396
NASA Astrophysics Data System (ADS)
Lawi, Armin; Adhitya, Yudhi
2018-03-01
The objective of this research is to determine the quality of cocoa beans through the morphology of their digital images. Samples of cocoa beans were scattered on a bright white paper under a controlled lighting condition. A compact digital camera was used to capture the images. The images were then processed to extract their morphological parameters. The classification process begins with an analysis of the cocoa bean images based on morphological feature extraction. The extracted morphological (physical) features are Area, Perimeter, Major Axis Length, Minor Axis Length, Aspect Ratio, Circularity, Roundness, and Feret Diameter. The cocoa beans are classified into 4 groups, i.e.: Normal Beans, Broken Beans, Fractured Beans, and Skin Damaged Beans. The model of classification used in this paper is the Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM), a proposed improvement of SVM using an ensemble method in which the separating hyperplanes are obtained by a least-squares approach and the multiclass procedure uses the One-Against-All method. The proposed model achieved a classification accuracy of 99.705% over the four classes using the morphological features as input parameters.
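The shape descriptors named in the abstract have widely used textbook definitions; the sketch below computes three of them from basic measurements using the common ImageJ-style formulas (the paper may normalize differently, so treat these formulas as an assumption, not the authors' exact feature set):

```python
import math

def shape_features(area, perimeter, major_axis, minor_axis):
    """Common shape descriptors (ImageJ-style conventions; the paper's
    normalizations may differ)."""
    return {
        "aspect_ratio": major_axis / minor_axis,
        "circularity": 4.0 * math.pi * area / perimeter ** 2,
        "roundness": 4.0 * area / (math.pi * major_axis ** 2),
    }

# Sanity check: a circle of radius r has all three descriptors equal to 1.
r = 3.0
feats = shape_features(area=math.pi * r ** 2,
                       perimeter=2 * math.pi * r,
                       major_axis=2 * r,
                       minor_axis=2 * r)
```

Descriptors like these are attractive inputs for an SVM-style classifier because they are scale- and rotation-invariant summaries of each bean's silhouette.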
Dynamical dark matter: A new framework for dark-matter physics
NASA Astrophysics Data System (ADS)
Dienes, Keith R.; Thomas, Brooks
2013-05-01
Although much remains unknown about the dark matter of the universe, one property is normally considered sacrosanct: dark matter must be stable well beyond cosmological time scales. However, a new framework for dark-matter physics has recently been proposed which challenges this assumption. In the "dynamical dark matter" (DDM) framework, the dark sector consists of a vast ensemble of individual dark-matter components with differing masses, lifetimes, and cosmological abundances. Moreover, the usual requirement of stability is replaced by a delicate balancing between lifetimes and cosmological abundances across the ensemble as a whole. As a result, it is possible for the DDM ensemble to remain consistent with all experimental and observational bounds on dark matter while nevertheless giving rise to collective behaviors which transcend those normally associated with traditional dark-matter candidates. These include a new, non-trivial dark-matter equation of state as well as potentially distinctive signatures in collider and direct-detection experiments. In this review article, we provide a self-contained introduction to the DDM framework and summarize some of the work which has recently been done in this area. We also present an explicit model within the DDM framework, and outline a number of ideas for future investigation.
Numerical weather prediction model tuning via ensemble prediction system
NASA Astrophysics Data System (ADS)
Jarvinen, H.; Laine, M.; Ollinaho, P.; Solonen, A.; Haario, H.
2011-12-01
This paper discusses a novel approach to tuning the predictive skill of numerical weather prediction (NWP) models. NWP models contain tunable parameters which appear in parameterization schemes of sub-grid scale physical processes. Currently, numerical values of these parameters are specified manually. In a recent dual manuscript (QJRMS, revised) we developed a new concept and method for on-line estimation of the NWP model parameters. The EPPES ("Ensemble prediction and parameter estimation system") method requires only minimal changes to the existing operational ensemble prediction infrastructure and is very cost-effective because practically no new computations are introduced. The approach provides an algorithmic decision-making tool for model parameter optimization in operational NWP. In EPPES, statistical inference about the NWP model tunable parameters is made by (i) generating each member of the ensemble of predictions using different model parameter values, drawn from a proposal distribution, and (ii) feeding back the relative merits of the parameter values to the proposal distribution, based on evaluation of a suitable likelihood function against verifying observations. In the presentation, the method is first illustrated in low-order numerical tests using a stochastic version of the Lorenz-95 model which effectively emulates the principal features of ensemble prediction systems. The EPPES method correctly detects the unknown and wrongly specified parameter values, and leads to an improved forecast skill. Second, results with an ensemble prediction system based on an atmospheric general circulation model show that the NWP model tuning capacity of EPPES scales up to realistic models and ensemble prediction systems. Finally, preliminary results from a tuning exercise with a top-end global NWP model are presented.
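The two-step loop described above (draw parameter values for the ensemble members, then feed likelihood-weighted merit back into the proposal) can be sketched as an importance-weighted update of a Gaussian proposal. Everything below — the one-dimensional parameter, the stand-in likelihood, the sample sizes — is invented for illustration and is not the EPPES implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

true_param = 2.0  # hypothetical "best" parameter value, unknown to the method

def log_likelihood(theta):
    # Stand-in for "evaluate forecast skill against verifying observations".
    return -0.5 * (theta - true_param) ** 2 / 0.25

mean, var = 0.0, 4.0               # deliberately wrong initial proposal
for _ in range(20):                # one update per forecast cycle
    # (i) each ensemble member runs with its own parameter draw
    thetas = rng.normal(mean, np.sqrt(var), size=50)
    # (ii) relative merits are fed back into the proposal distribution
    w = np.exp(log_likelihood(thetas))
    w /= w.sum()
    mean = float(np.sum(w * thetas))                        # weighted mean
    var = float(max(np.sum(w * (thetas - mean) ** 2), 1e-3))  # weighted variance
```

The variance floor is a guard against premature collapse of the proposal; after a few cycles the proposal mean migrates toward the parameter value that verifies best.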
A Caveat Note on Tuning in the Development of Coupled Climate Models
NASA Astrophysics Data System (ADS)
Dommenget, Dietmar; Rezny, Michael
2018-01-01
State-of-the-art coupled general circulation models (CGCMs) have substantial errors in their simulations of climate. In particular, these errors can lead to large uncertainties in the simulated climate response (both globally and regionally) to a doubling of CO2. Currently, tuning of the parameterization schemes in CGCMs is a significant part of their development. It is not clear whether such tuning actually improves models. The tuning process is (in general) neither documented nor reproducible. Alternative methods such as flux correction are not used, nor is it clear whether such methods would perform better. In this study, ensembles of perturbed physics experiments are performed with the Globally Resolved Energy Balance (GREB) model to test the impact of tuning. The work illustrates that tuning has, on average, limited skill given the complexity of the system, the limited computing resources, and the limited observations available to optimize parameters. While tuning may improve model performance (such as reproducing observed past climate), it will not get closer to the "true" physics, nor will it significantly improve future climate change projections. Tuning will introduce artificial compensating error interactions between submodels that will hamper further model development. In turn, flux corrections do perform well in most, but not all, aspects. A main advantage of flux correction is that it is much cheaper, simpler, and more transparent, and it does not introduce artificial error interactions between submodels. These GREB model experiments should be considered as a pilot study to motivate further CGCM studies that address the issues of model tuning.
NASA Astrophysics Data System (ADS)
Palter, Jaime B.; Frölicher, Thomas L.; Paynter, David; John, Jasmin G.
2018-06-01
The Paris Agreement has initiated a scientific debate on the role that carbon removal - or net negative emissions - might play in achieving less than 1.5 K of global mean surface warming by 2100. Here, we probe the sensitivity of a comprehensive Earth system model (GFDL-ESM2M) to three different atmospheric CO2 concentration pathways, two of which arrive at 1.5 K of warming in 2100 by very different pathways. We run five ensemble members of each of these simulations: (1) a standard Representative Concentration Pathway (RCP4.5) scenario, which produces 2 K of surface warming by 2100 in our model; (2) a stabilization pathway in which atmospheric CO2 concentration never exceeds 440 ppm and the global mean temperature rise is approximately 1.5 K by 2100; and (3) an overshoot pathway that passes through 2 K of warming at mid-century, before ramping down atmospheric CO2 concentrations, as if using carbon removal, to end at 1.5 K of warming at 2100. Although the global mean surface temperature change in response to the overshoot pathway is similar to the stabilization pathway in 2100, this similarity belies several important differences in other climate metrics, such as warming over land masses, the strength of the Atlantic Meridional Overturning Circulation (AMOC), ocean acidification, sea ice coverage, and the global mean sea level change and its regional expressions. In 2100, the overshoot ensemble shows a greater global steric sea level rise and weaker AMOC mass transport than in the stabilization scenario, with both of these metrics close to the ensemble mean of RCP4.5. There is strong ocean surface cooling in the North Atlantic Ocean and Southern Ocean in response to overshoot forcing due to perturbations in the ocean circulation. Thus, overshoot forcing in this model reduces the rate of sea ice loss in the Labrador, Nordic, Ross, and Weddell seas relative to the stabilized pathway, suggesting a negative radiative feedback in response to the early rapid warming. Finally, the ocean perturbation in response to warming leads to strong pathway dependence of sea level rise in northern North American cities, with overshoot forcing producing up to 10 cm of additional sea level rise by 2100 relative to stabilization forcing.
Mapping quantum-classical Liouville equation: projectors and trajectories.
Kelly, Aaron; van Zon, Ramses; Schofield, Jeremy; Kapral, Raymond
2012-02-28
The evolution of a mixed quantum-classical system is expressed in the mapping formalism where discrete quantum states are mapped onto oscillator states, resulting in a phase space description of the quantum degrees of freedom. By defining projection operators onto the mapping states corresponding to the physical quantum states, it is shown that the mapping quantum-classical Liouville operator commutes with the projection operator so that the dynamics is confined to the physical space. It is also shown that a trajectory-based solution of this equation can be constructed that requires the simulation of an ensemble of entangled trajectories. An approximation to this evolution equation which retains only the Poisson bracket contribution to the evolution operator does admit a solution in an ensemble of independent trajectories but it is shown that this operator does not commute with the projection operators and the dynamics may take the system outside the physical space. The dynamical instabilities, utility, and domain of validity of this approximate dynamics are discussed. The effects are illustrated by simulations on several quantum systems.
Driven similarity renormalization group for excited states: A state-averaged perturbation theory
NASA Astrophysics Data System (ADS)
Li, Chenyang; Evangelista, Francesco A.
2018-03-01
The multireference driven similarity renormalization group (MRDSRG) approach [C. Li and F. A. Evangelista, J. Chem. Theory Comput. 11, 2097 (2015)] is generalized to treat quasi-degenerate electronic excited states. The new scheme, termed state-averaged (SA) MRDSRG, is a state-universal approach that considers an ensemble of quasi-degenerate states on an equal footing. Using the SA-MRDSRG framework, we implement second- (SA-DSRG-PT2) and third-order (SA-DSRG-PT3) perturbation theories. These perturbation theories can treat a manifold of near-degenerate states at the cost of a single state-specific computation. At the same time, they have several desirable properties: (1) they are intruder-free and size-extensive, (2) their energy expressions can be evaluated non-iteratively and require at most the three-body density cumulant of the reference states, and (3) the reference states are allowed to relax in the presence of dynamical correlation effects. Numerical benchmarks on the potential energy surfaces of lithium fluoride, ammonia, and the penta-2,4-dieniminium cation reveal that the SA-DSRG-PT2 method yields results with accuracy similar to that of other second-order quasi-degenerate perturbation theories. The SA-DSRG-PT3 results are instead consistent with those from multireference configuration interaction with singles and doubles (MRCISD). Finally, we compute the vertical excitation energies of (E,E)-1,3,5,7-octatetraene. The ordering of the lowest three states is predicted to be 2 ¹Ag⁻ < 1 ¹Bu⁺ < 1 ¹Bu⁻ by both SA-DSRG-PT2 and SA-DSRG-PT3, in accordance with MRCISD plus Davidson correction.
Perturbation theory for arbitrary coupling strength?
NASA Astrophysics Data System (ADS)
Mahapatra, Bimal P.; Pradhan, Noubihary
2018-03-01
We present a new formulation of perturbation theory for quantum systems, designated here as: “mean field perturbation theory” (MFPT), which is free from power-series-expansion in any physical parameter, including the coupling strength. Its application is thereby extended to deal with interactions of arbitrary strength and to compute system-properties having non-analytic dependence on the coupling, thus overcoming the primary limitations of the “standard formulation of perturbation theory” (SFPT). MFPT is defined by developing perturbation about a chosen input Hamiltonian, which is exactly solvable but which acquires the nonlinearity and the analytic structure (in the coupling strength) of the original interaction through a self-consistent, feedback mechanism. We demonstrate Borel-summability of MFPT for the case of the quartic- and sextic-anharmonic oscillators and the quartic double-well oscillator (QDWO) by obtaining uniformly accurate results for the ground state of the above systems for arbitrary physical values of the coupling strength. The results obtained for the QDWO may be of particular significance since “renormalon”-free, unambiguous results are achieved for its spectrum in contrast to the well-known failure of SFPT in this case.
Kicking the rugby ball: perturbations of 6D gauged chiral supergravity
NASA Astrophysics Data System (ADS)
Burgess, C. P.; de Rham, C.; Hoover, D.; Mason, D.; Tolley, A. J.
2007-02-01
We analyse the axially symmetric scalar perturbations of 6D chiral gauged supergravity compactified on the general warped geometries in the presence of two source branes. We find that all of the conical geometries are marginally stable for normalizable perturbations (in disagreement with some recent calculations) and the non-conical ones for regular perturbations, even though none of them are supersymmetric (apart from the trivial Salam-Sezgin solution, for which there are no source branes). The marginal direction is the one whose presence is required by the classical scaling property of the field equations, and all other modes have positive squared mass. In the special case of the conical solutions, including (but not restricted to) the unwarped 'rugby-ball' solutions, we find closed-form expressions for the mode functions in terms of Legendre and hypergeometric functions. In so doing we show how to match the asymptotic near-brane form for the solution to the physics of the source branes, and thereby how to physically interpret perturbations which can be singular at the brane positions.
Black hole evaporation in conformal gravity
NASA Astrophysics Data System (ADS)
Bambi, Cosimo; Modesto, Leonardo; Porey, Shiladitya; Rachwał, Lesław
2017-09-01
We study the formation and the evaporation of a spherically symmetric black hole in conformal gravity. From the collapse of a spherically symmetric thin shell of radiation, we find a singularity-free non-rotating black hole. This black hole has the same Hawking temperature as a Schwarzschild black hole with the same mass, and it completely evaporates either in a finite or in an infinite time, depending on the ensemble. We consider the analysis both in the canonical and in the micro-canonical statistical ensembles. Last, we discuss the corresponding Penrose diagram of this physical process.
Robust dynamic mitigation of instabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawata, S.; Karino, T.
2015-04-15
A dynamic mitigation mechanism for instability growth was proposed and discussed in the paper [S. Kawata, Phys. Plasmas 19, 024503 (2012)]. In the present paper, the robustness of the dynamic instability mitigation mechanism is discussed further. The results presented here show that the mechanism of dynamic instability mitigation is rather robust against changes in the phase, the amplitude, and the wavelength of the wobbling perturbation applied. In general, an instability emerges from a perturbation of a physical quantity. Normally the perturbation phase is unknown, so only the instability growth rate is discussed. However, if the perturbation phase is known, the instability growth can be controlled by a superposition of perturbations imposed actively: if the perturbation is induced by, for example, a driving beam axis oscillation or wobbling, the perturbation phase could be controlled, and the instability growth is mitigated by the superposition of the growing perturbations.
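The phase-averaging idea behind this mitigation can be illustrated with a toy superposition: perturbations seeded at time tau grow as exp(gamma*(t - tau)); a static driver imprints a fixed phase, while a wobbling driver imprints a rotating phase exp(i*Omega*tau), so the superposed amplitudes partially cancel. The growth model and all parameter values below are illustrative, not those of the cited papers:

```python
import numpy as np

gamma = 1.0     # illustrative instability growth rate
Omega = 10.0    # illustrative wobbling (phase-rotation) frequency
t_end = 3.0

# Seed times and Riemann-sum weights for the superposition integral.
taus = np.linspace(0.0, t_end, 2000)
dtau = taus[1] - taus[0]

# Static driver: all seeded perturbations share one phase and add coherently.
static = np.abs(np.sum(np.exp(gamma * (t_end - taus)) * dtau))

# Wobbling driver: each seeded perturbation carries phase exp(i*Omega*tau),
# so contributions largely cancel in the superposition.
wobble = np.abs(np.sum(np.exp(gamma * (t_end - taus))
                       * np.exp(1j * Omega * taus) * dtau))

reduction = wobble / static  # < 1 means the wobbling mitigates the growth
```

For Omega well above gamma the reduction factor scales roughly like gamma/Omega, which is the sense in which fast wobbling suppresses the coherent growth.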
Chatterjee, Ayan; Sarkar, Sudipta
2012-03-02
We establish the physical process version of the first law by studying small perturbations of a stationary black hole with a regular bifurcation surface in Einstein-Gauss-Bonnet gravity. Our result shows that when the stationary black hole is perturbed by a matter stress energy tensor and finally settles down to a new stationary state, the Wald entropy increases as long as the matter satisfies the null energy condition.
NASA Astrophysics Data System (ADS)
Pantillon, Florian; Knippertz, Peter; Corsmeier, Ulrich
2017-10-01
New insights into the synoptic-scale predictability of 25 severe European winter storms of the 1995-2015 period are obtained using the homogeneous ensemble reforecast dataset from the European Centre for Medium-Range Weather Forecasts. The predictability of the storms is assessed with different metrics including (a) the track and intensity to investigate the storms' dynamics and (b) the Storm Severity Index to estimate the impact of the associated wind gusts. The storms are well predicted by the whole ensemble up to 2-4 days ahead. At longer lead times, the number of members predicting the observed storms decreases and the ensemble average is not clearly defined for the track and intensity. The Extreme Forecast Index and Shift of Tails are therefore computed from the deviation of the ensemble from the model climate. Based on these indices, the model has some skill in forecasting the area covered by extreme wind gusts up to 10 days, which indicates a clear potential for early warnings. However, large variability is found between the individual storms. The poor predictability of outliers appears related to their physical characteristics such as explosive intensification or small size. Longer datasets with more cases would be needed to further substantiate these points.
NASA Astrophysics Data System (ADS)
Shulman, Igor; Gould, Richard W.; Frolov, Sergey; McCarthy, Sean; Penta, Brad; Anderson, Stephanie; Sakalaukus, Peter
2018-03-01
An ensemble-based approach to specifying the observational error covariance in the data assimilation of satellite bio-optical properties is proposed. The observational error covariance is derived from statistical properties of a generated ensemble of satellite MODIS-Aqua chlorophyll (Chl) images. The proposed observational error covariance is used in the Optimal Interpolation scheme for the assimilation of MODIS-Aqua Chl observations. The forecast error covariance is specified in the subspace of the multivariate (bio-optical, physical) empirical orthogonal functions (EOFs) estimated from a month-long model run. The assimilation of surface MODIS-Aqua Chl improved surface and subsurface model Chl predictions. Comparisons with surface and subsurface water samples demonstrate that the data assimilation run with the proposed observational error covariance has a higher RMSE than the data assimilation run with an "optimistic" assumption about observational errors (10% of the ensemble mean), but a smaller or comparable RMSE than the data assimilation run assuming observational errors equal to 35% of the ensemble mean (the target error for the satellite chlorophyll data product). Also, with the assimilation of the MODIS-Aqua Chl data, the RMSE between the observed and model-predicted fractions of diatoms to total phytoplankton is reduced by a factor of two in comparison to the non-assimilative run.
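The Optimal Interpolation update itself has a standard textbook form, x_a = x_f + K(y - H x_f) with gain K = P Hᵀ (H P Hᵀ + R)⁻¹; the abstract's contribution is how R is built (from the satellite-image ensemble) and how P is built (from EOFs). The minimal sketch below uses made-up diagonal covariances standing in for those estimates:

```python
import numpy as np

n, m = 4, 2                                       # state and observation sizes
x_f = np.array([1.0, 2.0, 3.0, 4.0])              # forecast (background) state
H = np.array([[1.0, 0, 0, 0],
              [0, 0, 1.0, 0]])                    # observe components 0 and 2
P = 0.5 * np.eye(n)   # forecast error covariance (EOF-based in the paper)
R = 0.1 * np.eye(m)   # observational error covariance (ensemble-derived in the paper)
y = np.array([1.5, 2.5])                          # observations

# Optimal Interpolation / BLUE analysis step.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
x_a = x_f + K @ (y - H @ x_f)
```

With these numbers each observed component is pulled 0.5/0.6 of the way from forecast to observation (the ratio of forecast to total error variance), while unobserved components are untouched because P is diagonal; a fuller P with cross-covariances would spread the increments to unobserved variables as well.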
Encoding qubits into oscillators with atomic ensembles and squeezed light
NASA Astrophysics Data System (ADS)
Motes, Keith R.; Baragiola, Ben Q.; Gilchrist, Alexei; Menicucci, Nicolas C.
2017-05-01
The Gottesman-Kitaev-Preskill (GKP) encoding of a qubit within an oscillator provides a number of advantages when used in a fault-tolerant architecture for quantum computing, most notably that Gaussian operations suffice to implement all single- and two-qubit Clifford gates. The main drawback of the encoding is that the logical states themselves are challenging to produce. Here we present a method for generating optical GKP-encoded qubits by coupling an atomic ensemble to a squeezed state of light. Particular outcomes of a subsequent spin measurement of the ensemble herald successful generation of the resource state in the optical mode. We analyze the method in terms of the resources required (total spin and amount of squeezing) and the probability of success. We propose a physical implementation using a Faraday-based quantum nondemolition interaction.
Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni
2017-10-01
Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.
Beyond the bump-hunt: A game plan for discovering dynamical dark matter at the LHC
NASA Astrophysics Data System (ADS)
Dienes, Keith R.; Su, Shufang; Thomas, Brooks
2016-06-01
Dynamical Dark Matter (DDM) is an alternative framework for dark-matter physics in which an ensemble of individual constituent fields with a spectrum of masses, lifetimes, and cosmological abundances collectively constitute the dark-matter candidate, and in which the traditional notion of dark-matter stability is replaced by a balancing between lifetimes and abundances across the ensemble. In this talk, we discuss the prospects for distinguishing between DDM ensembles and traditional dark-matter candidates at hadron colliders - and in particular, at the upgraded LHC - via the analysis of event-shape distributions of kinematic variables. We also examine the correlations between these kinematic variables and other relevant collider variables in order to assess how imposing cuts on these additional variables may distort - for better or worse - their event-shape distributions.
Nonambipolar Transport and Torque in Perturbed Equilibria
NASA Astrophysics Data System (ADS)
Logan, N. C.; Park, J.-K.; Wang, Z. R.; Berkery, J. W.; Kim, K.; Menard, J. E.
2013-10-01
A new Perturbed Equilibrium Nonambipolar Transport (PENT) code has been developed to calculate the neoclassical toroidal torque from radial current composed of both passing and trapped particles in perturbed equilibria. This presentation outlines the physics approach used in the development of the PENT code, with emphasis on the effects of retaining general aspect-ratio geometric effects. First, nonambipolar transport coefficients and corresponding neoclassical toroidal viscous (NTV) torque in perturbed equilibria are re-derived from the first order gyro-drift-kinetic equation in the "combined-NTV" PENT formalism. The equivalence of NTV torque and change in potential energy due to kinetic effects [J-K. Park, Phys. Plas., 2011] is then used to showcase computational challenges shared between PENT and stability codes MISK and MARS-K. Extensive comparisons to a reduced model, which makes numerous large aspect ratio approximations, are used throughout to emphasize geometry dependent physics such as pitch angle resonances. These applications make extensive use of the PENT code's native interfacing with the Ideal Perturbed Equilibrium Code (IPEC), and the combination of these codes is a key step towards an iterative solver for self-consistent perturbed equilibrium torque. Supported by US DOE contract #DE-AC02-09CH11466 and the DOE Office of Science Graduate Fellowship administered by the Oak Ridge Institute for Science & Education under contract #DE-AC05-06OR23100.
Multi-RCM ensemble downscaling of global seasonal forecasts (MRED)
NASA Astrophysics Data System (ADS)
Arritt, R. W.
2008-12-01
The Multi-RCM Ensemble Downscaling (MRED) project was recently initiated to address the question: can regional climate models provide additional useful information from global seasonal forecasts? MRED will use a suite of regional climate models to downscale seasonal forecasts produced by the new National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) seasonal forecast system and the NASA GEOS5 system. The initial focus will be on wintertime forecasts in order to evaluate topographic forcing, snowmelt, and the potential usefulness of higher resolution, especially for near-surface fields influenced by high resolution orography. Each regional model will cover the conterminous US (CONUS) at approximately 32 km resolution, and will perform an ensemble of 15 runs for each year 1982-2003 for the forecast period 1 December - 30 April. MRED will compare individual regional and global forecasts, as well as ensemble-mean precipitation and temperature forecasts (which are currently being used to drive macroscale land surface models, LSMs), along with wind, humidity, radiation, and turbulent heat fluxes, which are important for more advanced coupled macro-scale hydrologic models. Metrics of ensemble spread will also be evaluated. Extensive analysis will be performed to link improvements in downscaled forecast skill to regional forcings and physical mechanisms. Our overarching goal is to determine what additional skill can be provided by a community ensemble of high resolution regional models, which we believe will eventually define a strategy for more skillful and useful regional seasonal climate forecasts.
Hernández, Griselda; Anderson, Janet S.; LeMaster, David M.
2012-01-01
The acute sensitivity to conformation exhibited by amide hydrogen exchange reactivity provides a valuable test for the physical accuracy of model ensembles developed to represent the Boltzmann distribution of the protein native state. A number of molecular dynamics studies of ubiquitin have predicted a well-populated transition in the tight turn immediately preceding the primary site of proteasome-directed polyubiquitylation Lys 48. Amide exchange reactivity analysis demonstrates that this transition is 10³-fold rarer than these predictions. More strikingly, for the most populated novel conformational basin predicted from a recent 1 ms MD simulation of bovine pancreatic trypsin inhibitor (at 13% of total), experimental hydrogen exchange data indicates a population below 10⁻⁶. The most sophisticated efforts to directly incorporate experimental constraints into the derivation of model protein ensembles have been applied to ubiquitin, as illustrated by three recently deposited studies (PDB codes 2NR2, 2K39 and 2KOX). Utilizing the extensive set of experimental NOE constraints, each of these three ensembles yields a modestly more accurate prediction of the exchange rates for the highly exposed amides than does a standard unconstrained molecular simulation. However, for the less frequently exposed amide hydrogens, the 2NR2 ensemble offers no improvement in rate predictions as compared to the unconstrained MD ensemble. The other two NMR-constrained ensembles performed markedly worse, either underestimating (2KOX) or overestimating (2K39) the extent of conformational diversity. PMID:22425325
Rixen, M.; Ferreira-Coelho, E.; Signell, R.
2008-01-01
Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skills when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models are discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).
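At its core, a hyper-ensemble is a regression: weights (plus a bias term, for the "local correction and/or bias removal" mentioned above) are learned for the member models against past observations and then applied to new forecasts. The three synthetic "models" below are invented for illustration and are not the paper's ocean, atmospheric, or wave models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic truth signal and three imperfect "model" forecasts of it.
t = np.linspace(0.0, 10.0, 200)
truth = np.sin(t)
models = np.stack([
    0.8 * truth + 0.1 + rng.normal(0.0, 0.05, t.size),  # biased, damped
    truth + rng.normal(0.0, 0.3, t.size),               # unbiased, noisy
    0.5 * truth + 0.2 * np.cos(t),                      # partly unrelated
], axis=1)

# Learn combination weights on a training window; the constant column
# implements bias removal.
A = np.column_stack([models, np.ones(t.size)])
train = slice(0, 150)
w, *_ = np.linalg.lstsq(A[train], truth[train], rcond=None)
combined = A @ w

def rmse(a):
    return float(np.sqrt(np.mean((a - truth) ** 2)))

rmse_best_single = min(rmse(models[:, j]) for j in range(3))
rmse_combined = rmse(combined)
```

The combined forecast beats the best individual model here because the regression simultaneously rescales the damped model, removes its bias, and down-weights the noisy and poorly correlated members; the real method does this locally in space and time rather than with one global fit.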
Breaking of Ensemble Equivalence in Networks
NASA Astrophysics Data System (ADS)
Squartini, Tiziano; de Mol, Joey; den Hollander, Frank; Garlaschelli, Diego
2015-12-01
It is generally believed that, in the thermodynamic limit, the microcanonical description as a function of energy coincides with the canonical description as a function of temperature. However, various examples of systems for which the microcanonical and canonical ensembles are not equivalent have been identified. A complete theory of this intriguing phenomenon is still missing. Here we show that ensemble nonequivalence can also manifest itself in random graphs with topological constraints. We find that, while graphs with a given number of links are ensemble equivalent, graphs with a given degree sequence are not. This result holds irrespective of whether the energy is nonadditive (as in unipartite graphs) or additive (as in bipartite graphs). In contrast with previous expectations, our results show that (1) physically, nonequivalence can be induced by an extensive number of local constraints, and not necessarily by long-range interactions or nonadditivity, and (2) mathematically, nonequivalence is determined by a different large-deviation behavior of microcanonical and canonical probabilities for a single microstate, and not necessarily for almost all microstates. The latter criterion, which is entirely local, is not restricted to networks and holds in general.
Geometric integrator for simulations in the canonical ensemble
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tapias, Diego, E-mail: diego.tapias@nucleares.unam.mx; Sanders, David P., E-mail: dpsanders@ciencias.unam.mx; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139
2016-08-28
We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.
Jiang, Xukai; Li, Wen; Chen, Guanjun; Wang, Lushan
2017-02-27
The temperature dependence of enzyme catalysis is highly debated. Specifically, how high temperatures induce enzyme inactivation has broad implications for both fundamental and applied science. Here, we explored the mechanism of reversible thermal inactivation in glycoside hydrolase family 12 (GH12) using comparative molecular dynamics simulations. First, we investigated the distribution of structural flexibility over the enzyme and found that the active site was the general thermal-sensitive region in GH12 cellulases. The dynamic perturbation of the active site before enzyme denaturation was explored through principal-component analysis, which indicated that variations in the collective motion and conformational ensemble of the active site may precisely correspond to the enzyme's transition from its active form to the inactive form. Furthermore, the degree of dynamic perturbation of the active site was found to be negatively correlated with the melting temperatures of GH12 enzymes, further proving the importance of the dynamic stability of the active site. Additionally, analysis of the residue-interaction network revealed that the active site in the thermophilic enzyme was capable of forming more contacts with other amino acids than observed in the mesophilic enzyme. These interactions are likely the key mechanisms underlying the differences in rigidity of the active site. These findings provide further biophysical insights into the reversible thermal inactivation of enzymes and potential applications in future protein engineering.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geller, Drew Adam; Backhaus, Scott N.
Control of consumer electrical devices for providing electrical grid services is expanding in both the scope and the diversity of loads that are engaged in control, but there are few experimentally-based models of these devices suitable for control designs and for assessing the cost of control. A laboratory-scale test system is developed to experimentally evaluate the use of a simple window-mount air conditioner for electrical grid regulation services. The experimental test bed is a single, isolated air conditioner embedded in a test system that both emulates the thermodynamics of an air conditioned room and also isolates the air conditioner from the real-world external environmental and human variables that perturb the careful measurements required to capture a model that fully characterizes both the control response functions and the cost of control. The control response functions and cost of control are measured using harmonic perturbation of the temperature set point and a test protocol that further isolates the air conditioner from low frequency environmental variability.
NASA Astrophysics Data System (ADS)
Lemler, Paul M.; Vaccaro, Patrick
2016-06-01
The non-resonant interaction of electromagnetic radiation with an isotropic ensemble of chiral molecules, which causes the incident state of linear polarization to undergo a signed rotation, has long served as a metric for gauging the enantiomeric purity of asymmetric syntheses. While the underlying phenomenon of circular birefringence (CB) typically is probed in the condensed phase, recent advances in ultrasensitive circular-differential detection schemes, as exemplified by the techniques of Cavity Ring-Down Polarimetry (CRDP), have permitted the first quantitative analyses of such processes to be performed in rarefied media. Efforts to extend vapor-phase investigations of CB to new families of chiral substrates will be discussed, with particular emphasis directed towards the elucidation of intrinsic (e.g., solvent-free) properties and their mediation by environmental perturbations (e.g., solvation). Specific species targeted by this work include the stereoselective building blocks phenylpropylene oxide and α-methylbenzylamine, both of which exhibit pronounced solvent-dependent changes in measured optical activity. The nature of chiroptical response in different environments will be highlighted, with quantum-chemical calculations serving to unravel the structural and electronic provenance of observed behavior.
NASA Astrophysics Data System (ADS)
Garzon, B.
Several simulations of dipolar and quadrupolar linear Kihara fluids using the Monte Carlo method in the canonical ensemble have been performed. Pressure and internal energy have been determined directly from the simulations, and the Helmholtz free energy by thermodynamic integration. Simulations were carried out for fluids of fixed elongation at two different densities and at several values of temperature and dipolar or quadrupolar moment for each density. Results are compared with the perturbation theory developed by Boublik for this same type of fluid, and good agreement between simulated and theoretical values was obtained, especially for quadrupolar fluids. The simulations are also used to obtain the liquid structure, giving the first few coefficients of the expansion of the pair correlation functions in terms of spherical harmonics. Estimates of the triple point temperature to critical temperature ratio are given for some dipolar and quadrupolar linear fluids. The stability range of the liquid phase of these substances is briefly discussed, and an analysis of the opposing roles of the dipole moment and the molecular elongation on this stability is also given.
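Canonical-ensemble Monte Carlo of the kind used above rests on the Metropolis acceptance rule. The sketch below applies it to a single particle in a harmonic well rather than a Kihara fluid (a deliberate simplification), but the acceptance criterion min(1, exp(-βΔE)) that preserves the Boltzmann distribution is the same:

```python
import math
import random

random.seed(1)

def metropolis_sample(beta, n_steps=200_000, step=0.5):
    """Metropolis Monte Carlo in the canonical (NVT) ensemble for one
    particle in a harmonic well E(x) = x**2 / 2.  Trial moves are
    accepted with probability min(1, exp(-beta * dE)), which samples
    the Boltzmann distribution exp(-beta * E)."""
    x, samples = 0.0, []
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)
        dE = 0.5 * x_new**2 - 0.5 * x**2
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            x = x_new                          # accept the trial move
        samples.append(x)
    return samples

samples = metropolis_sample(beta=1.0)
# For beta = 1 the sampled variance of x should approach 1 (= k_B T / k).
var = sum(s * s for s in samples) / len(samples)
```

For a molecular fluid, ΔE would come from the pair potential (Kihara plus multipolar terms) and moves would include rotations, but the acceptance logic is unchanged.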
A comparison of ensemble post-processing approaches that preserve correlation structures
NASA Astrophysics Data System (ADS)
Schefzik, Roman; Van Schaeybroeck, Bert; Vannitsem, Stéphane
2016-04-01
Although ensemble forecasts address the major sources of uncertainty, they exhibit biases and dispersion errors and therefore benefit from calibration, or statistical post-processing. For instance, the ensemble model output statistics (EMOS) method, also known as the non-homogeneous regression approach (Gneiting et al., 2005), strongly improves forecast skill. EMOS is based on fitting and adjusting a parametric probability density function (PDF). However, EMOS and other common post-processing approaches apply to a single weather quantity at a single location for a single look-ahead time. They are therefore unable to take into account spatial, inter-variable and temporal dependence structures. Recently, much research effort has been invested in designing post-processing methods that overcome this drawback, as well as in verification methods that enable the detection of dependence structures. Here, new verification methods are applied to two classes of post-processing methods, both of which generate physically coherent ensembles. The first class uses ensemble copula coupling (ECC), which starts from EMOS but adjusts the rank structure (Schefzik et al., 2013). The second class is a member-by-member post-processing (MBM) approach that maps each raw ensemble member to a corrected one (Van Schaeybroeck and Vannitsem, 2015). We compare variants of the EMOS-ECC and MBM classes and highlight a specific theoretical connection between them. All post-processing variants are applied in the context of the ensemble system of the European Centre for Medium-Range Weather Forecasts (ECMWF) and compared using multivariate verification tools, including the energy score, the variogram score (Scheuerer and Hamill, 2015) and the band depth rank histogram (Thorarinsdottir et al., 2015). Gneiting, Raftery, Westveld and Goldman, 2005: Calibrated probabilistic forecasting using ensemble model output statistics and minimum CRPS estimation. Mon. Wea. Rev., 133, 1098-1118.
Scheuerer and Hamill, 2015: Variogram-based proper scoring rules for probabilistic forecasts of multivariate quantities. Mon. Wea. Rev., 143, 1321-1334. Schefzik, Thorarinsdottir and Gneiting, 2013: Uncertainty quantification in complex simulation models using ensemble copula coupling. Statistical Science, 28, 616-640. Thorarinsdottir, Scheuerer and Heinz, 2015: Assessing the calibration of high-dimensional ensemble forecasts using rank histograms. arXiv:1310.0236. Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc., 141, 807-818.
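The EMOS model described above fits a Gaussian forecast PDF N(a + b·m, c + d·s²), with m the ensemble mean and s² the ensemble variance. The sketch below estimates the coefficients by simple moment matching on synthetic data; note that Gneiting et al. (2005) instead minimise the CRPS, so this is a simplified stand-in for the real estimator:

```python
import numpy as np

def fit_emos_moments(ens_mean, ens_var, obs):
    """Moment-based variant of EMOS: the forecast PDF is
    N(a + b*m, c + d*s2).  a, b come from ordinary least squares of
    obs on the ensemble mean; c, d from regressing the squared
    residuals on the ensemble variance."""
    A = np.column_stack([np.ones_like(ens_mean), ens_mean])
    (a, b), *_ = np.linalg.lstsq(A, obs, rcond=None)
    resid2 = (obs - (a + b * ens_mean)) ** 2
    B = np.column_stack([np.ones_like(ens_var), ens_var])
    (c, d), *_ = np.linalg.lstsq(B, resid2, rcond=None)
    return a, b, max(c, 0.0), max(d, 0.0)

# Synthetic check: observations with a known bias and spread relation.
rng = np.random.default_rng(0)
m = rng.normal(10.0, 2.0, size=2000)        # ensemble means
s2 = rng.uniform(0.5, 1.5, size=2000)       # ensemble variances
obs = 2.0 + 0.8 * m + rng.normal(0.0, np.sqrt(1.0 + 0.5 * s2))
a, b, c, d = fit_emos_moments(m, s2, obs)   # should recover a ~ 2, b ~ 0.8
```

ECC then restores the multivariate rank structure by reordering samples from such calibrated marginals according to the raw ensemble's ranks.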
Probable Maximum Precipitation in the U.S. Pacific Northwest in a Changing Climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, Lai-Yung
2017-12-22
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate and society. In current engineering practice, such safety is ensured by designing infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated, and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimates for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparing them with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Changes in moisture tracks tend to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus, high-quality data of both precipitation and related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
Molecular Dynamics Simulation of Membranes and a Transmembrane Helix
NASA Astrophysics Data System (ADS)
Duong, Tap Ha; Mehler, Ernest L.; Weinstein, Harel
1999-05-01
Three molecular dynamics (MD) simulations of 1.5-ns length were carried out on fully hydrated patches of dimyristoyl phosphatidylcholine (DMPC) bilayers in the liquid-crystalline phase. The simulations were performed using different ensembles and electrostatic conditions: a microcanonical ensemble or constant pressure-temperature ensemble, with or without truncated electrostatic interactions. Calculated properties of the membrane patches from the three different protocols were compared to available data from experiments. These data include the resulting overall geometrical dimensions, the order characteristics of the lipid hydrocarbon chains, as well as various measures of the conformations of the polar head groups. The comparisons indicate that the simulation carried out within the microcanonical ensemble with truncated electrostatic interactions yielded results closest to the experimental data, provided that the initial equilibration phase preceding the production run was sufficiently long. The effects of embedding a non-ideal helical protein domain in the membrane patch were studied with the same MD protocols. This simulation was carried out for 2.5 ns. The protein domain corresponds to the seventh transmembrane segment (TMS7) of the human serotonin 5-HT2A receptor. The peptide is composed of two α-helical segments linked by a hinge domain around a perturbing Asn-Pro motif that produces at the end of the simulation a kink angle of nearly 80° between the two helices. Several aspects of the TMS7 structure, such as the bending angle, backbone Φ and Ψ torsion angles, the intramolecular hydrogen bonds, and the overall conformation, were found to be very similar to those determined by NMR for the corresponding transmembrane segment of the tachykinin NK-1 receptor. In general, the simulations were found to yield structural and dynamic characteristics that are in good agreement with experiment. 
These findings support the application of simulation methods to the study of the complex biomolecular systems at the membrane interface of cells.
NASA Astrophysics Data System (ADS)
Ahmad, Zeeshan; Viswanathan, Venkatasubramanian
2016-08-01
Computationally guided material discovery is increasingly employed using descriptor-based screening through the calculation of a few properties of interest. A precise understanding of the uncertainty associated with first-principles density functional theory calculated property values is important for the success of descriptor-based screening. The Bayesian error estimation approach has been built in to several recently developed exchange-correlation functionals, which allows an estimate of the uncertainty associated with properties related to the ground state energy, for example, adsorption energies. Here, we propose a robust and computationally efficient method for quantifying uncertainty in mechanical properties, which depend on the derivatives of the energy. The procedure involves calculating energies around the equilibrium cell volume with different strains and fitting the obtained energies to the corresponding energy-strain relationship. At each strain we use, instead of a single energy, an ensemble of energies, giving us an ensemble of fits and thereby an ensemble of mechanical properties associated with each fit, whose spread can be used to quantify the uncertainty. The generation of the ensemble of energies is only a post-processing step involving a perturbation of the parameters of the exchange-correlation functional and solving for the energy non-self-consistently. The proposed method is computationally efficient and provides a more robust uncertainty estimate compared to the approach of self-consistent calculations employing several different exchange-correlation functionals. We demonstrate the method by calculating the uncertainty bounds for several materials belonging to different classes and having different structures. We show that the calculated uncertainty bounds the property values obtained using three different GGA functionals: PBE, PBEsol, and RPBE. 
Finally, we apply the approach to calculate the uncertainty associated with the DFT-calculated elastic properties of solid state Li-ion and Na-ion conductors.
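The ensemble-of-fits procedure can be sketched as follows: each ensemble member contributes one energy-strain curve, each curve is fitted by a quadratic, and the spread of the fitted curvatures quantifies the uncertainty of the derived elastic property. All numbers below are synthetic stand-ins, not DFT output:

```python
import numpy as np

rng = np.random.default_rng(0)
strains = np.linspace(-0.02, 0.02, 9)
k_true = 100.0                                # "true" curvature (arbitrary units)
n_members = 500

curvatures = []
for _ in range(n_members):
    # Each ensemble member perturbs the energies, mimicking the spread
    # of the exchange-correlation functional ensemble (synthetic here).
    k_member = k_true + rng.normal(0.0, 5.0)
    energies = 0.5 * k_member * strains**2    # E = 0.5 * k * eps**2
    coeffs = np.polyfit(strains, energies, 2) # quadratic fit per member
    curvatures.append(2.0 * coeffs[0])        # E'' = 2 * a2

curvatures = np.asarray(curvatures)
k_mean, k_std = curvatures.mean(), curvatures.std()  # property and its uncertainty
```

The key point is that only one self-consistent calculation per strain is needed; the ensemble of energies is generated non-self-consistently, so the uncertainty estimate comes almost for free.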
Polyakov loop modeling for hot QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukushima, Kenji; Skokov, Vladimir
2017-06-19
Here, we review theoretical aspects of quantum chromodynamics (QCD) at finite temperature. The most important physical variable to characterize hot QCD is the Polyakov loop, which is an approximate order parameter for quark deconfinement in a hot gluonic medium. In addition to its role as an order parameter, the Polyakov loop has rich physical contents in both perturbative and non-perturbative sectors. This review covers a wide range of subjects associated with the Polyakov loop, from topological defects in hot QCD to model building with coupling to the Polyakov loop.
Hidden Structural Codes in Protein Intrinsic Disorder.
Borkosky, Silvia S; Camporeale, Gabriela; Chemes, Lucía B; Risso, Marikena; Noval, María Gabriela; Sánchez, Ignacio E; Alonso, Leonardo G; de Prat Gay, Gonzalo
2017-10-17
Intrinsic disorder is a major structural category in biology, accounting for more than 30% of coding regions across the domains of life, yet it consists of conformational ensembles in equilibrium, a major challenge in protein chemistry. Anciently evolved papillomavirus genomes constitute an unparalleled case for sequence to structure-function correlation in cases in which there are no folded structures. E7, the major transforming oncoprotein of human papillomaviruses, is a paradigmatic example among the intrinsically disordered proteins. Analysis of a large number of sequences of the same viral protein allowed for the identification of a handful of residues with absolute conservation, scattered along the sequence of its N-terminal intrinsically disordered domain, which intriguingly are mostly leucine residues. Mutation of these residues led to a pronounced increase in both α-helix and β-sheet structural content, reflected by drastic effects on equilibrium propensities and oligomerization kinetics, and uncovers the existence of local structural elements that oppose canonical folding. These folding relays suggest the existence of yet undefined hidden structural codes behind intrinsic disorder in this model protein. Thus, evolution pinpoints conformational hot spots that could not have been identified by direct experimental methods for analyzing or perturbing the equilibrium of an intrinsically disordered protein ensemble.
NASA Technical Reports Server (NTRS)
Chavez, F. P.; Strutton, P. G.; McPhaden, M. J.
1996-01-01
Using physical and bio-optical data from moorings in the central equatorial Pacific, the perturbations to phytoplankton biomass and productivity associated with the onset of the 1997-98 El Niño event were investigated. The data presented depict the physical progression of El Niño onset, from the reversal of the trade winds in the western equatorial Pacific, through the eastward propagation of equatorially trapped Kelvin waves and the advection of waters from the nutrient-poor western equatorial warm pool. The physical perturbations led to fluctuations in phytoplankton biomass and in the quantum yield of fluorescence, and to a 50% reduction in primary productivity.
Creating "Intelligent" Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, Noel; Taylor, Patrick
2014-05-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. 
Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
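A minimal sketch of an unequally weighted ("intelligent") ensemble average, assuming inverse-metric-error weights. That weighting rule is one simple choice introduced here for illustration; the framework described above tests many metrics and schemes:

```python
import numpy as np

def metric_weighted_mean(projections, metric_errors):
    """Weight each model's projection by the inverse of its error on a
    process-based evaluation metric, so that models which better
    reproduce the observed process relationship contribute more."""
    w = 1.0 / np.asarray(metric_errors, dtype=float)
    w /= w.sum()                          # normalize weights to sum to 1
    return float(w @ np.asarray(projections, dtype=float)), w

# Hypothetical example: three models projecting one regional quantity.
proj = [3.0, 2.0, 4.0]    # e.g. regional warming (K)
err = [0.5, 1.0, 2.0]     # process-metric error per model (smaller = better)
mean, weights = metric_weighted_mean(proj, err)
```

Comparing `mean` against the equal-weight average (3.0 K here) shows how the weighting pulls the consensus toward the better-performing models.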
NASA Astrophysics Data System (ADS)
Clark, Elizabeth; Wood, Andy; Nijssen, Bart; Mendoza, Pablo; Newman, Andy; Nowak, Kenneth; Arnold, Jeffrey
2017-04-01
In an automated forecast system, hydrologic data assimilation (DA) performs the valuable function of correcting raw simulated watershed model states to better represent external observations, including measurements of streamflow, snow, soil moisture, and the like. Yet the incorporation of automated DA into operational forecasting systems has been a long-standing challenge due to the complexities of the hydrologic system, which include numerous lags between state and output variations. To help demonstrate that such methods can succeed in operational automated implementations, we present results from the real-time application of an ensemble particle filter (PF) for short-range (7 day lead) ensemble flow forecasts in western US river basins. We use the System for Hydromet Applications, Research and Prediction (SHARP), developed by the National Center for Atmospheric Research (NCAR) in collaboration with the University of Washington, U.S. Army Corps of Engineers, and U.S. Bureau of Reclamation. SHARP is a fully automated platform for short-term to seasonal hydrologic forecasting applications, incorporating uncertainty in initial hydrologic conditions (IHCs) and in hydrometeorological predictions through ensemble methods. In this implementation, IHC uncertainty is estimated by propagating an ensemble of 100 temperature and precipitation time series through conceptual and physically-oriented models. The resulting ensemble of derived IHCs exhibits a broad range of possible soil moisture and snow water equivalent (SWE) states. The PF selects and/or weights and resamples the IHCs that are most consistent with external streamflow observations, and uses the particles to initialize a streamflow forecast ensemble driven by ensemble precipitation and temperature forecasts downscaled from the Global Ensemble Forecast System (GEFS). 
We apply this method in real time for several basins in the western US that are important for water resources management, and perform a hindcast experiment to evaluate the utility of PF-based data assimilation for streamflow forecast skill. This presentation describes findings, including a comparison of sequential and non-sequential particle weighting methods.
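One particle-filter analysis step of the kind described, likelihood weighting of the initial-condition particles against an external streamflow observation followed by resampling, can be sketched as follows. The interface and numbers are hypothetical, not SHARP's actual code:

```python
import numpy as np

def pf_update(ensemble_flows, obs_flow, obs_sigma, rng):
    """One particle-filter analysis step: weight each particle by the
    Gaussian likelihood of the observed streamflow given its simulated
    flow, then resample particle indices in proportion to the weights."""
    w = np.exp(-0.5 * ((ensemble_flows - obs_flow) / obs_sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(ensemble_flows), size=len(ensemble_flows), p=w)
    return idx, w

rng = np.random.default_rng(0)
flows = rng.normal(100.0, 20.0, size=100)   # simulated flows of 100 particles
idx, w = pf_update(flows, obs_flow=110.0, obs_sigma=5.0, rng=rng)
# The resampled particles (flows[idx]) cluster near the observation;
# their underlying soil-moisture/SWE states would initialize the forecast.
```

In the real system each particle carries full model states (soil moisture, SWE), and the resampled states initialize the GEFS-driven forecast ensemble.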
Transmutation of Matter in Byzantium: The Case of Michael Psellos, the Alchemist
ERIC Educational Resources Information Center
Katsiampoura, Gianna
2008-01-01
There is thus nothing paradoxical about the inclusion of alchemy in the ensemble of the physical sciences nor in the preoccupation with it on the part of learned men engaged in scientific study. In the context of the Medieval model, where discourse on the physical world was ambiguous, often unclear, and lacking the support of experimental…
NASA Astrophysics Data System (ADS)
Imran, H. M.; Kala, J.; Ng, A. W. M.; Muthukumaran, S.
2018-04-01
Appropriate choice of physics options among the many available parameterizations is important when using the Weather Research and Forecasting (WRF) model. The responses of different physics parameterizations of the WRF model may vary with geographical location, the application of interest, and the temporal and spatial scales being investigated. Several studies have evaluated the performance of the WRF model in simulating the mean climate and extreme rainfall events for various regions in Australia. However, no study has explicitly evaluated the sensitivity of the WRF model in simulating heatwaves. Therefore, this study evaluates the performance of a WRF multi-physics ensemble comprising 27 model configurations for a series of heatwave events in Melbourne, Australia. Unlike most previous studies, we evaluate not only temperature but also wind speed and relative humidity, which are key factors influencing heatwave dynamics. No single ensemble member explicitly showed the best performance for all events and all variables across all evaluation metrics. This study also found that the choice of planetary boundary layer (PBL) scheme had the largest influence on temperature simulations, the radiation scheme a moderate influence, and the microphysics scheme the least. The PBL and microphysics schemes were found to be more sensitive than the radiation scheme for wind speed and relative humidity. Additionally, the study tested the role of an Urban Canopy Model (UCM) and three Land Surface Models (LSMs). Although the UCM did not play a significant role, the Noah LSM showed better performance than the CLM4 and Noah-MP LSMs in simulating the heatwave events. The study finally identifies an optimal configuration of WRF that will be a useful modelling tool for further investigations of heatwaves in Melbourne. Although inevitably region-specific, our results will be useful to WRF users investigating heatwave dynamics elsewhere.
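Scoring every member of a multi-physics ensemble per variable, and then checking which member "wins" where, can be sketched as below. Configuration names, variables, and data are synthetic stand-ins, and RMSE is used as a single representative metric:

```python
import numpy as np

def rmse(sim, obs):
    """Root-mean-square error of a simulated series against observations."""
    return float(np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2)))

# Hypothetical evaluation: 27 physics configurations, three variables.
rng = np.random.default_rng(0)
obs = {v: rng.normal(size=48) for v in ("T2", "wind", "rh")}
members = {
    f"cfg{i:02d}": {v: obs[v] + rng.normal(0.0, 1.0 + 0.02 * i, 48) for v in obs}
    for i in range(27)
}

# Score every member for each variable, then find the best member per variable.
scores = {m: {v: rmse(sim[v], obs[v]) for v in obs} for m, sim in members.items()}
best = {v: min(scores, key=lambda m: scores[m][v]) for v in obs}
```

When `best` names different configurations for different variables, no single member dominates, which is exactly the situation the study reports for its heatwave events.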
Fournet, Damien; Hodder, Simon; Havenith, George
2015-01-01
Humans sense the wetness of a wet surface through the somatosensory integration of thermal and tactile inputs generated by the interaction between skin and moisture. However, little is known about how wetness is sensed when moisture is produced via sweating. We tested the hypothesis that, in the absence of skin cooling, intermittent tactile cues, as coded by low-threshold skin mechanoreceptors, modulate the perception of sweat-induced skin wetness, independently of the level of physical wetness. Ten males (22 yr old) performed an incremental exercise protocol during two trials designed to induce the same physical skin wetness but to induce lower (TIGHT-FIT) and higher (LOOSE-FIT) wetness perception. In the TIGHT-FIT, a tight-fitting clothing ensemble limited intermittent skin-sweat-clothing tactile interactions. In the LOOSE-FIT, a loose-fitting ensemble allowed free skin-sweat-clothing interactions. Heart rate, core and skin temperature, galvanic skin conductance (GSC), and physical (wbody) and perceived skin wetness were recorded. Exercise-induced sweat production and physical wetness increased significantly [GSC: 3.1 μS, SD 0.3, to 18.8 μS, SD 1.3, P < 0.01; wbody: 0.26 no-dimension units (nd), SD 0.02, to 0.92 nd, SD 0.01, P < 0.01], with no differences between TIGHT-FIT and LOOSE-FIT (P > 0.05). However, the limited intermittent tactile inputs generated by the TIGHT-FIT ensemble significantly reduced whole-body and regional wetness perception (P < 0.01). This reduction was more pronounced when between 40 and 80% of the body was covered in sweat. We conclude that the central integration of intermittent mechanical interactions between skin, sweat, and clothing, as coded by low-threshold skin mechanoreceptors, significantly contributes to the ability to sense sweat-induced skin wetness. PMID:25878153
Sensitivity analysis and calibration of a dynamic physically based slope stability model
NASA Astrophysics Data System (ADS)
Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens
2017-06-01
Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs
with the highest proportion of correctly predicted landslides and non-landslides, is then validated against another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting that precipitation intensities during the investigated landslide-triggering rainfall events were already close to or above the soil's infiltration capacity.
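A local one-at-a-time sensitivity screening of the kind described can be sketched generically; the toy model and the 10% perturbation step below are illustrative assumptions, not the study's coupled hydrological-geomechanical slope stability model:

```python
def oat_sensitivity(model, base, step=0.10):
    """Local one-at-a-time (OAT) sensitivity: normalized response of the
    model output to a +/-10% perturbation of each parameter in turn,
    holding all other parameters at their base values."""
    y0 = model(base)
    sens = {}
    for name, value in base.items():
        hi = model({**base, name: value * (1 + step)})
        lo = model({**base, name: value * (1 - step)})
        sens[name] = (hi - lo) / (2 * step * y0)  # elasticity-like measure
    return sens

# Toy stand-in model (NOT the study's slope stability model):
toy = lambda p: p["conductivity"] ** 2 * p["cohesion"]
ranking = oat_sensitivity(toy, {"conductivity": 2.0, "cohesion": 3.0})
```

Parameters whose sensitivity magnitude dominates the ranking are the ones worth systematic sampling and calibration.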
Development of the NHM-LETKF regional reanalysis system assimilating conventional observations only
NASA Astrophysics Data System (ADS)
Fukui, S.; Iwasaki, T.; Saito, K. K.; Seko, H.; Kunii, M.
2016-12-01
The information from long-term high-resolution atmospheric fields is very useful for studying mesoscale responses to climate change and for analyzing extreme events. We are developing an NHM-LETKF (the local ensemble transform Kalman filter with the nonhydrostatic model of the Japan Meteorological Agency (JMA)) regional reanalysis system assimilating only conventional observations that are available over about 60 years, such as surface observations at observatories and upper-air observations with radiosondes. The domain covers Japan and its surroundings. Before performing the long-term reanalysis, an experiment using the system was conducted for August 2014 in order to identify the effectiveness and problems of the regional reanalysis system. In this study, we investigated the six-hour accumulated precipitation obtained by integration from the analysis fields. The reproduced precipitation was compared with JMA's Radar/Rain-gauge Analyzed Precipitation data over the Japanese islands and with the precipitation of JRA-55, which is used for the lateral boundary conditions. The comparisons reveal an underestimation of precipitation in the regional reanalysis. The underestimation is improved by extending the forecast time. In the regional reanalysis system, the analysis fields are derived using the ensemble mean fields, in which the conflicting components among ensemble members are filtered out. Therefore, it is important to tune the inflation factor and lateral boundary perturbations so as not to smooth the analysis fields excessively, and to allow more time for the fields to spin up. In the extended run, some underestimation still remains, which implies that it is attributable to the forecast model itself as well as to the analysis scheme.
Climate change on the Colorado River: a method to search for robust management strategies
NASA Astrophysics Data System (ADS)
Keefe, R.; Fischbach, J. R.
2010-12-01
The Colorado River is a principal source of water for the seven Basin States, providing approximately 16.5 maf per year to users in the southwestern United States and Mexico. Though the dynamics of the river ensure Upper Basin users a reliable supply of water, the three Lower Basin states (California, Nevada, and Arizona) are in danger of delivery interruptions as Upper Basin demand increases and climate change threatens to reduce future streamflows. In light of the recent drought and uncertain effects of climate change on Colorado River flows, we evaluate the performance of a suite of policies modeled after the shortage sharing agreement adopted in December 2007 by the Department of the Interior. We build on the current literature by using a simplified model of the Lower Colorado River to consider future streamflow scenarios given climate change uncertainty. We also generate different scenarios of parametric consumptive use growth in the Upper Basin and evaluate alternate management strategies in light of these uncertainties. Uncertainty associated with climate change is represented with a multi-model ensemble from the literature, using a nearest neighbor perturbation to increase the size of the ensemble. We use Robust Decision Making to compare near-term or long-term management strategies across an ensemble of plausible future scenarios with the goal of identifying one or more approaches that are robust to alternate assumptions about the future. This method entails using search algorithms to quantitatively identify vulnerabilities that may threaten a given strategy (including the current operating policy) and characterize key tradeoffs between strategies under different scenarios.
Robust electroencephalogram phase estimation with applications in brain-computer interface systems.
Seraj, Esmaeil; Sameni, Reza
2017-03-01
In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations, previously attributed to the brain response, are systematic side effects of the methods used for EEG phase calculation, especially during low-analytic-amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytic form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and to minor changes of the filter parameters, and it reduces the effect of spurious EEG phase jumps that do not have a cerebral origin. As proof of concept, the proposed method is used for extracting EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features and standard K-nearest neighbors and random forest classifiers, over a standard BCI dataset. The average performance improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired-sample t-test, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
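The Monte Carlo phase estimation scheme can be sketched as follows. The filter order, jitter level, and ensemble size are illustrative assumptions, and where the paper perturbs the filter zero-pole loci directly, this sketch perturbs the band edges to a similar effect:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def robust_phase(x, fs, f_lo, f_hi, n_ensemble=50, jitter=0.05, seed=0):
    """Monte Carlo phase estimate: average unit analytic phasors over an
    ensemble of slightly perturbed narrow-band filters, then take the
    angle of the mean phasor as the robust phase."""
    rng = np.random.default_rng(seed)
    phasors = np.zeros(len(x), dtype=complex)
    for _ in range(n_ensemble):
        lo = f_lo * (1 + jitter * rng.standard_normal())
        hi = f_hi * (1 + jitter * rng.standard_normal())
        lo, hi = min(lo, hi), max(lo, hi)           # keep band well ordered
        b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        phasors += np.exp(1j * np.angle(hilbert(filtfilt(b, a, x))))
    return np.angle(phasors / n_ensemble)
```

For a clean alpha-band oscillation the slope of the unwrapped output phase recovers the oscillation frequency.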
Development of the Lunar and Solar Perturbations in the Motion of an Artificial Satellite
NASA Technical Reports Server (NTRS)
Musen, P.; Bailie, A.; Upton, E.
1961-01-01
Problems relating to the influence of lunar and solar perturbations on the motion of artificial satellites are analyzed by an extension of Cayley's development of the perturbative function in the lunar theory. In addition, the results are modified for incorporation into the Hansen-type theory used by the NASA Space Computing Center. The theory is applied to the orbits of the Vanguard I and Explorer VI satellites, and the results of detailed computations for these satellites are given together with a physical description of the perturbations in terms of resonance effects.
NASA Astrophysics Data System (ADS)
Qin, Hong; Davidson, Ronald C.; Lee, W. Wei-Li
1999-11-01
The Beam Equilibrium Stability and Transport (BEST) code, a 3D multispecies nonlinear perturbative particle simulation code, has been developed to study collective effects in intense charged particle beams described self-consistently by the Vlasov-Maxwell equations. A Darwin model is adopted for transverse electromagnetic effects. As a 3D multispecies perturbative particle simulation code, it provides several unique capabilities. Since the simulation particles are used to simulate only the perturbed distribution function and self-fields, the simulation noise is reduced significantly. The perturbative approach also enables the code to investigate different physics effects separately, as well as simultaneously. The code can easily be switched between linear and nonlinear operation, and used to study both linear stability properties and nonlinear beam dynamics. These features, combined with the 3D and multispecies capabilities, provide an effective tool to investigate the electron-ion two-stream instability, periodically focused solutions in alternating focusing fields, and many other important problems in nonlinear beam dynamics and accelerator physics. Applications to the two-stream instability are presented.
Superradiance of cold atoms coupled to a superconducting circuit
NASA Astrophysics Data System (ADS)
Braun, Daniel; Hoffman, Jonathan; Tiesinga, Eite
2011-06-01
We investigate superradiance of an ensemble of atoms coupled to an integrated superconducting LC circuit. Particular attention is paid to the effect of inhomogeneous coupling constants. Combining perturbation theory in the inhomogeneity and numerical simulations, we show that inhomogeneous coupling constants can significantly affect the superradiant relaxation process. Incomplete relaxation terminating in “dark states” can occur, from which the only escape is through individual spontaneous emission on a much longer time scale. The relaxation dynamics can be significantly accelerated or retarded, depending on the distribution of the coupling constants. On the technical side, we also generalize the previously known propagator of superradiance for identical couplings in the completely symmetric sector to the full exponentially large Hilbert space.
First Lattice Calculation of the QED Corrections to Leptonic Decay Rates
NASA Astrophysics Data System (ADS)
Giusti, D.; Lubicz, V.; Tarantino, C.; Martinelli, G.; Sachrajda, C. T.; Sanfilippo, F.; Simula, S.; Tantalo, N.
2018-02-01
The leading-order electromagnetic and strong isospin-breaking corrections to the ratio of Kμ2 and πμ2 decay rates are evaluated for the first time on the lattice, following a method recently proposed. The lattice results are obtained using the gauge ensembles produced by the European Twisted Mass Collaboration with Nf = 2+1+1 dynamical quarks. Systematic effects are evaluated and the impact of the quenched QED approximation is estimated. Our result for the correction to the tree-level Kμ2/πμ2 decay ratio is -1.22(16)%, to be compared to the estimate of -1.12(21)% based on chiral perturbation theory and adopted by the Particle Data Group.
Summing Feynman graphs by Monte Carlo: Planar ϕ3-theory and dynamically triangulated random surfaces
NASA Astrophysics Data System (ADS)
Boulatov, D. V.; Kazakov, V. A.
1988-12-01
New combinatorial identities are suggested relating the ratio of (n - 1)th and nth orders of (planar) perturbation expansion for any quantity to some average over the ensemble of all planar graphs of the nth order. These identities are used for Monte Carlo calculation of critical exponents γstr (string susceptibility) in planar ϕ3-theory and in the dynamically triangulated random surface (DTRS) model near the convergence circle for various dimensions. In the solvable case D = 1 the exact critical properties of the theory are reproduced numerically.
Experimental optimization of directed field ionization
NASA Astrophysics Data System (ADS)
Liu, Zhimin Cheryl; Gregoric, Vincent C.; Carroll, Thomas J.; Noel, Michael W.
2017-04-01
The state distribution of an ensemble of Rydberg atoms is commonly measured using selective field ionization. The resulting time-resolved ionization signal from a single energy eigenstate tends to spread out due to the multiple avoided Stark level crossings that atoms must traverse on the way to ionization. The shape of the ionization signal can be modified by adding a perturbation field to the main field ramp. Here, we present experimental results on the manipulation of the ionization signal using a genetic algorithm. We address how both the genetic algorithm and the experimental parameters were adjusted to achieve an optimized result. This work was supported by the National Science Foundation under Grants No. 1607335 and No. 1607377.
Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.
2013-02-01
Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies, the Marina catchment (Singapore) and the Canning River (Western Australia), representing two different morphoclimatic contexts, in comparison with other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.
Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.
2013-07-01
Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.
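Extra-Trees of the kind evaluated here are available in scikit-learn, including the variable-importance ranking used for ex-post interpretation. The synthetic lagged predictors below are illustrative stand-ins, not the Marina or Canning data:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(42)
# Columns stand in for rain(t), rain(t-1), antecedent flow, temperature
X = rng.random((500, 4))
# Streamflow-like target driven by columns 0 and 2 plus noise
y = 2.0 * X[:, 0] + 1.0 * X[:, 2] + 0.1 * rng.standard_normal(500)

et = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, y)
importance = et.feature_importances_   # relative importance of inputs
```

The importance vector sums to one; in this toy setup the two informative columns dominate the two noise columns, mirroring the physically meaningful ranking reported for the real catchments.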
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of the large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
Critical mingling and universal correlations in model binary active liquids
NASA Astrophysics Data System (ADS)
Bain, Nicolas; Bartolo, Denis
2017-06-01
Ensembles of driven or motile bodies moving along opposite directions are generically reported to self-organize into strongly anisotropic lanes. Here, building on a minimal model of self-propelled bodies targeting opposite directions, we first evidence a critical phase transition between a mingled state and a phase-separated lane state specific to active particles. We then demonstrate that the mingled state displays algebraic structural correlations also found in driven binary mixtures. Finally, constructing a hydrodynamic theory, we single out the physical mechanisms responsible for these universal long-range correlations typical of ensembles of oppositely moving bodies.
Barvinsky, A O
2007-08-17
The density matrix of the Universe for the microcanonical ensemble in quantum cosmology describes an equipartition in the physical phase space of the theory (a sum over everything), but in terms of the observable spacetime geometry this ensemble is peaked about the set of recently obtained cosmological instantons limited to a bounded range of the cosmological constant. This suggests a mechanism for constraining the landscape of string vacua and a possible solution to the dark energy problem in the form of the quasiequilibrium decay of the microcanonical state of the Universe.
Multi-RCM ensemble downscaling of global seasonal forecasts (MRED)
NASA Astrophysics Data System (ADS)
Arritt, R.
2009-04-01
Regional climate models (RCMs) have long been used to downscale global climate simulations. In contrast, the ability of RCMs to downscale seasonal climate forecasts has received little attention. The Multi-RCM Ensemble Downscaling (MRED) project was recently initiated to address the question: does dynamical downscaling using RCMs provide additional useful information for seasonal forecasts made by global models? MRED is using a suite of RCMs to downscale seasonal forecasts produced by the National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) seasonal forecast system and the NASA GEOS5 system. The initial focus is on wintertime forecasts in order to evaluate topographic forcing, snowmelt, and the usefulness of higher resolution for near-surface fields influenced by high-resolution orography. Each RCM covers the conterminous U.S. at approximately 32 km resolution, comparable to the scale of the North American Regional Reanalysis (NARR), which will be used to evaluate the models. The forecast ensemble for each RCM comprises 15 members over a period of 22+ years (from 1982 to 2003+) for the forecast period 1 December - 30 April. Each RCM will create a 15-member lagged ensemble by starting on different dates in the preceding November. This results in a 120-member ensemble for each projection (8 RCMs by 15 members per RCM). The RCMs will be continually updated at their lateral boundaries using 6-hourly output from CFS or GEOS5. Hydrometeorological output will be produced in a standard netCDF-based format on a common analysis grid, which simplifies both model intercomparison and the generation of ensembles. MRED will compare individual RCM and global forecasts as well as ensemble mean precipitation and temperature forecasts, which are currently being used to drive macroscale land surface models (LSMs). Metrics of ensemble spread will also be evaluated.
Extensive process-oriented analysis will be performed to link improvements in downscaled forecast skill to regional forcings and physical mechanisms. Our overarching goal is to determine what additional skill can be provided by a community ensemble of high resolution regional models, which we believe will define a strategy for more skillful and useful regional seasonal climate forecasts.
Application Bayesian Model Averaging method for ensemble system for Poland
NASA Astrophysics Data System (ADS)
Guzikowski, Jakub; Czerwinska, Agnieszka
2014-05-01
The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) Model and calibrating these data by means of a Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF model configurations. The WRF models have 35 vertical levels and a 2.5 km x 2.5 km horizontal resolution. The main emphasis is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive Probability Density Function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test case we chose an episode with a heat wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 to 29 July 2013, temperatures oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time, an increase in hospitalized patients with cardiovascular problems was registered. On 29 July 2013, an advection of moist tropical air masses over Poland caused a strong convective event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, and posed a direct threat to life. A comparison of the meteorological data from the ensemble system with data recorded at 74 weather stations in Poland is made. We prepare a set of model-observation pairs. The data from single ensemble members and the median from the WRF BMA system are then evaluated using the deterministic error statistics Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
To evaluate the probabilistic data, the Brier Score (BS) and Continuous Ranked Probability Score (CRPS) were used. Finally, a comparison between the BMA-calibrated data and the data from the ensemble members is displayed.
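The deterministic and probabilistic scores mentioned (RMSE, MAE, CRPS) can be sketched as follows; the sample-based CRPS estimator is one standard choice, not necessarily the exact formulation used in the project:

```python
import numpy as np

def rmse(forecast, obs):
    """Root mean square error of a forecast series against observations."""
    return float(np.sqrt(np.mean((np.asarray(forecast) - obs) ** 2)))

def mae(forecast, obs):
    """Mean absolute error of a forecast series against observations."""
    return float(np.mean(np.abs(np.asarray(forecast) - obs)))

def crps_ensemble(members, obs):
    """Sample-based CRPS for one forecast/observation pair:
    E|X - y| - 0.5 E|X - X'| over the ensemble members X."""
    m = np.asarray(members, dtype=float)
    return float(np.mean(np.abs(m - obs))
                 - 0.5 * np.mean(np.abs(m[:, None] - m[None, :])))
```

CRPS reduces to the absolute error for a single deterministic member and is zero only when every member equals the observation.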
Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework
NASA Astrophysics Data System (ADS)
Baker, N. C.; Taylor, P. C.
2014-12-01
The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean-state metrics. Which metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables, e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and the Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation.
The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble; since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models which simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics to advise better methods for ensemble averaging of models and create better climate predictions.
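The contrast between equal and skill-weighted ensemble means can be illustrated with a minimal sketch; the projection values and skill scores below are invented for illustration, not CMIP5 results:

```python
import numpy as np

# Three hypothetical model projections of regional precipitation (mm/day)
projections = np.array([[3.1, 2.9, 3.4],    # model A
                        [2.0, 2.2, 2.1],    # model B
                        [4.5, 4.8, 4.6]])   # model C
# Hypothetical metric-based skill scores, e.g. inverse metric error
skill = np.array([1.0, 2.0, 0.5])

unweighted = projections.mean(axis=0)       # equal-weighted consensus
weights = skill / skill.sum()               # normalize skill to weights
weighted = weights @ projections            # skill-weighted ensemble mean
```

Because model B gets the largest weight, the weighted mean is pulled toward its (lower) precipitation values, which is exactly the kind of regional shift the framework quantifies.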
Virulo is a probabilistic model for predicting virus attenuation. Monte Carlo methods are used to generate ensemble simulations of virus attenuation due to physical, biological, and chemical factors. The model generates a probability of failure to achieve a chosen degree o...
Dong, Ming-Xin; Zhang, Wei; Hou, Zhi-Bo; Yu, Yi-Chen; Shi, Shuai; Ding, Dong-Sheng; Shi, Bao-Sen
2017-11-15
Multi-photon entangled states not only play a crucial role in research on quantum physics but also have many applications in quantum information fields such as quantum computation, quantum communication, and quantum metrology. To fully exploit multi-photon entangled states, it is important to establish the interaction between entangled photons and matter, which requires that the photons have a narrow bandwidth. Here, we report on the experimental generation of a narrowband four-photon Greenberger-Horne-Zeilinger (GHZ) state with a fidelity of 64.9% through multiplexing two spontaneous four-wave mixings in a cold 85Rb atomic ensemble. The full bandwidth of the generated GHZ state is about 19.5 MHz. Thus, the generated photons can effectively match the atoms, making them very suitable for building quantum computation and quantum communication networks based on atomic ensembles.
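The quoted figure of merit, the fidelity F = <GHZ|rho|GHZ> of the produced state with the ideal four-qubit GHZ state, can be computed directly; the mixed state below is a toy white-noise model, not the experimental density matrix:

```python
import numpy as np

# Ideal four-qubit GHZ state (|0000> + |1111>)/sqrt(2) as a length-16 vector
ghz = np.zeros(16)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)
rho_ghz = np.outer(ghz, ghz)

def ghz_fidelity(rho):
    """Fidelity F = <GHZ|rho|GHZ> of a density matrix with the GHZ state."""
    return float(np.real(ghz @ rho @ ghz))

# Toy noisy state: GHZ mixed with white noise (illustrative only)
p = 0.6
rho_noisy = p * rho_ghz + (1 - p) * np.eye(16) / 16
```

For this white-noise mixture F = p + (1 - p)/16, so F > 0.5 already certifies genuine GHZ-type entanglement for moderate p.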
Fast adaptive flat-histogram ensemble to enhance the sampling in large systems
NASA Astrophysics Data System (ADS)
Xu, Shun; Zhou, Xin; Jiang, Yi; Wang, YanTing
2015-09-01
An efficient novel algorithm was developed to estimate the Density of States (DOS) for large systems by calculating the ensemble means of an extensive physical variable, such as the potential energy U, in generalized canonical ensembles to interpolate the interior reverse temperature curve β(U) = dS(U)/dU, where S(U) is the logarithm of the DOS. This curve is computed with different accuracies in different energy regions to capture the dependence of the reverse temperature on U without setting a prior grid in the U space. By combining with a U-compression transformation, we decrease the computational complexity from O(N^(3/2)) in the normal Wang-Landau-type method to O(N^(1/2)) in the current algorithm, where N is the number of degrees of freedom of the system. The efficiency of the algorithm is demonstrated by applying it to Lennard-Jones fluids with various N, along with its ability to find different macroscopic states, including metastable states.
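For context, the Wang-Landau-type flat-histogram baseline that the adaptive method improves on can be sketched on a tiny system; the periodic 1D Ising chain below is an illustrative toy, not the Lennard-Jones fluids of the study:

```python
import numpy as np

def wang_landau_1d_ising(n=8, lnf_final=1e-4, flatness=0.8, seed=1):
    """Plain Wang-Landau flat-histogram estimate of ln g(E) for a periodic
    1D Ising chain with E = -sum_i s_i s_{i+1} (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], n)
    idx = {e: i for i, e in enumerate(range(-n, n + 1, 4))}  # energy bins
    lng = np.zeros(len(idx))           # running estimate of ln g(E)
    hist = np.zeros(len(idx))          # visit histogram
    e = -int(np.sum(spins * np.roll(spins, 1)))
    lnf = 1.0                          # modification factor, halved when flat
    while lnf > lnf_final:
        for _ in range(1000 * n):
            i = rng.integers(n)
            de = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % n])
            # accept with probability min(1, g(e)/g(e + de))
            if np.log(rng.random()) < lng[idx[e]] - lng[idx[e + de]]:
                spins[i] *= -1
                e += de
            lng[idx[e]] += lnf
            hist[idx[e]] += 1
        if hist.min() > flatness * hist.mean():   # histogram flat enough
            lnf /= 2.0
            hist[:] = 0.0
    return lng - lng.min()
```

The exact counts for n = 8 are g(E) = 2, 56, 140, 56, 2 at E = -8, -4, 0, 4, 8, so the estimated ln g(E) should be symmetric with its maximum at E = 0.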
Ensembles and Experiments in Classical and Quantum Physics
NASA Astrophysics Data System (ADS)
Neumaier, Arnold
A philosophically consistent axiomatic approach to classical and quantum mechanics is given. The approach realizes a strong formal implementation of Bohr's correspondence principle. In all instances, classical and quantum concepts are fully parallel: the same general theory has a classical realization and a quantum realization. Extending the "probability via expectation" approach of Whittle to noncommuting quantities, this paper defines quantities, ensembles, and experiments as mathematical concepts and shows how to model complementarity, uncertainty, probability, nonlocality and dynamics in these terms. The approach carries no connotation of unlimited repeatability; hence it can be applied to unique systems such as the universe. Consistent experiments provide an elegant solution to the reality problem, confirming the orthodox Copenhagen interpretation's insistence that there is nothing but ensembles, while avoiding its elusive reality picture. The weak law of large numbers explains the emergence of classical properties for macroscopic systems.
Weak ergodicity breaking, irreproducibility, and ageing in anomalous diffusion processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metzler, Ralf
2014-01-14
Single-particle traces are standardly evaluated in terms of time averages of the second moment of the position time series r(t). For ergodic processes, one can interpret such results in terms of the known theories for the corresponding ensemble-averaged quantities. In anomalous diffusion processes, that are widely observed in nature over many orders of magnitude, the equivalence between (long) time and ensemble averages may be broken (weak ergodicity breaking), and these time averages may no longer be interpreted in terms of ensemble theories. Here we detail some recent results on weakly non-ergodic systems with respect to the time-averaged mean squared displacement, the inherent irreproducibility of individual measurements, and methods to determine the exact underlying stochastic process. We also address the phenomenon of ageing, the dependence of physical observables on the time span between initial preparation of the system and the start of the measurement.
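The time-averaged mean squared displacement against which ensemble averages are compared can be computed from a single trace; the Brownian walk below is an ergodic reference case, not one of the anomalous processes discussed:

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged MSD of a single trace:
    delta^2(lag) = <[x(t + lag) - x(t)]^2> averaged over start times t."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[d:] - x[:-d]) ** 2) for d in lags])

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(100_000))   # ordinary Brownian walk
msd = time_averaged_msd(walk, [1, 2, 4, 8])
```

For this ergodic walk the time average grows linearly with the lag and reproduces the ensemble-averaged 2*D*lag; weak ergodicity breaking is precisely the failure of this equivalence for anomalous processes.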
An Ensemble Multilabel Classification for Disease Risk Prediction
Liu, Wei; Zhao, Hongling; Zhang, Chaoyang
2017-01-01
It is important to identify and prevent disease risk as early as possible through regular physical examinations. We formulate disease risk prediction as a multilabel classification problem. A novel Ensemble Label Power-set Pruned datasets Joint Decomposition (ELPPJD) method is proposed in this work. First, we transform the multilabel classification into a multiclass classification. Then, we propose the pruned datasets and joint decomposition methods to deal with the imbalanced learning problem. Two strategies, size balanced (SB) and label similarity (LS), are designed to decompose the training dataset. In the experiments, the dataset is from real physical examination records. We contrast the performance of the ELPPJD method with the two different decomposition strategies. Moreover, a comparison between ELPPJD and the classic multilabel classification methods RAkEL and HOMER is carried out. The experimental results show that the ELPPJD method with the label similarity strategy has outstanding performance. PMID:29065647
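The first step described, turning the multilabel problem into a multiclass one, is the standard label power-set mapping; the toy label matrix below is illustrative, not the physical examination data:

```python
import numpy as np

def label_powerset(Y):
    """Label power-set transform: each distinct combination of labels
    becomes one class of an ordinary multiclass problem (the first step
    of ELPPJD, before pruning and joint decomposition)."""
    keys = [tuple(int(v) for v in row) for row in Y]
    classes = {k: i for i, k in enumerate(sorted(set(keys)))}
    return np.array([classes[k] for k in keys]), classes

# Toy disease-risk labels: rows = patients, columns = risk conditions
Y = np.array([[1, 0, 1],
              [1, 0, 1],
              [0, 1, 0],
              [1, 1, 0]])
y_multiclass, mapping = label_powerset(Y)
```

Patients sharing the same label combination land in the same class; the pruning and decomposition steps then address the class imbalance this mapping tends to create.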
Organization and scaling in water supply networks
NASA Astrophysics Data System (ADS)
Cheng, Likwan; Karney, Bryan W.
2017-12-01
Public water supply is one of society's most vital resources and most costly infrastructures. Traditional concepts of these networks capture their engineering identity as isolated, deterministic hydraulic units, but overlook their physics identity as related entities in a probabilistic, geographic ensemble, characterized by size organization and property scaling. Although discoveries of allometric scaling in natural supply networks (organisms and rivers) raised the prospect of similar findings in anthropogenic supplies, so far no such finding has been reported in public water or related civic resource supplies. Examining an empirical ensemble of large number and wide size range, we show that water supply networks possess self-organized size abundance and theory-explained allometric scaling in spatial, infrastructural, and resource- and emission-flow properties. These discoveries establish a scaling physics for water supply networks and may lead to novel applications in resource- and jurisdiction-scale water governance.
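Allometric scaling of the kind reported here, Y = a·X^b across an ensemble of networks, is conventionally estimated as the slope of a log-log regression. A generic sketch of that standard procedure (the synthetic ensemble is an assumption for illustration, not the paper's data):

```python
import numpy as np

# Allometric scaling Y = a * X^b appears as a straight line in
# log-log space; the regression slope estimates the exponent b.
def scaling_exponent(size, prop):
    slope, intercept = np.polyfit(np.log(size), np.log(prop), 1)
    return slope, np.exp(intercept)

# Synthetic ensemble obeying Y = 2 * X^0.75 exactly.
size = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
prop = 2.0 * size ** 0.75
b, a = scaling_exponent(size, prop)
# b -> 0.75, a -> 2.0
```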
Bolia, Ashini; Gerek, Z. Nevin; Ozkan, S. Banu
2016-01-01
Molecular docking serves as an important tool in modeling protein–ligand interactions. However, it is still challenging to incorporate overall receptor flexibility, especially backbone flexibility, in docking due to the large conformational space that needs to be sampled. To overcome this problem, we developed a novel flexible docking approach, BP-Dock (Backbone Perturbation-Dock), that can integrate both backbone and side chain conformational changes induced by ligand binding through a multi-scale approach. In the BP-Dock method, we mimic the nature of binding-induced events as a first-order approximation by perturbing the residues along the protein chain with a small Brownian kick one at a time. The response fluctuation profile of the chain upon these perturbations is computed using the perturbation response scanning method. These response fluctuation profiles are then used to generate binding-induced multiple receptor conformations for ensemble docking. To evaluate the performance of BP-Dock, we applied our approach to a large and diverse data set using unbound structures as receptors. We also compared the BP-Dock results with bound and unbound docking, where overall receptor flexibility was not taken into account. Our results highlight the importance of modeling backbone flexibility in docking for recapitulating the experimental binding affinities, especially when an unbound structure is used. With BP-Dock, we can generate a wide range of binding site conformations realized in nature even in the absence of a ligand, which can help improve the accuracy of unbound docking. We expect that our fast and efficient flexible docking approach may further aid our understanding of protein–ligand interactions as well as virtual screening of novel targets for rational drug design. PMID:24380381
Minimal string theories and integrable hierarchies
NASA Astrophysics Data System (ADS)
Iyer, Ramakrishnan
Well-defined, non-perturbative formulations of the physics of string theories in specific minimal or superminimal model backgrounds can be obtained by solving matrix models in the double scaling limit. They provide us with the first examples of completely solvable string theories. Despite being relatively simple compared to higher dimensional critical string theories, they furnish non-perturbative descriptions of interesting physical phenomena such as geometrical transitions between D-branes and fluxes, tachyon condensation and holography. The physics of these theories in the minimal model backgrounds is succinctly encoded in a non-linear differential equation known as the string equation, along with an associated hierarchy of integrable partial differential equations (PDEs). The bosonic string in (2,2m-1) conformal minimal model backgrounds and the type 0A string in (2,4m) superconformal minimal model backgrounds have the Korteweg-de Vries system, while type 0B in (2,4m) backgrounds has the Zakharov-Shabat system. The integrable PDE hierarchy governs flows between backgrounds with different m. In this thesis, we explore this interesting connection between minimal string theories and integrable hierarchies further. We uncover the remarkable role that an infinite hierarchy of non-linear differential equations plays in organizing and connecting certain minimal string theories non-perturbatively. We are able to embed the type 0A and 0B (A,A) minimal string theories into this single framework. The string theories arise as special limits of a rich system of equations underpinned by an integrable system known as the dispersive water wave hierarchy. We find that there are several other string-like limits of the system, and conjecture that some of them are type IIA and IIB (A,D) minimal string backgrounds. We explain how these and several other string-like special points arise and are connected.
In some cases, the framework endows the theories with a non-perturbative definition for the first time. Notably, we discover that the Painlevé IV equation plays a key role in organizing the string theory physics, joining its siblings, Painlevé I and II, whose roles have previously been identified in this minimal string context. We then present evidence that the conjectured type II theories have smooth non-perturbative solutions, connecting two perturbative asymptotic regimes, in a 't Hooft limit. Our technique also demonstrates evidence for new minimal string theories that are not apparent in a perturbative analysis.
NASA Astrophysics Data System (ADS)
Vazquez, Justin; Ali, Halima; Punjabi, Alkesh
2009-11-01
The Ciraolo-Vittot-Chandre method of building invariant manifolds inside chaos in Hamiltonian systems [Ali, H. and Punjabi, A., Plasma Phys. Control. Fusion 49, 1565-1582 (2007)] is applied to the ASDEX UG tokamak. In this method, a second-order perturbation is added to the perturbed Hamiltonian [op. cit.]. It creates an invariant torus inside the chaos and reduces the plasma transport. The perturbation added to the equilibrium Hamiltonian is at least an order of magnitude smaller than the perturbation that causes chaos. This additional term has a finite, limited number of Fourier modes. Resonant magnetic perturbations (m,n) = (3,2) + (4,3) are added to the field line Hamiltonian for the ASDEX UG. An area-preserving map for the field line trajectories in the ASDEX UG is used. The common amplitude δ of these modes that gives complete chaos between the resonant surfaces ψ43 and ψ32 is determined. A magnetic barrier is built at a surface with noble q very nearly equal to the q at the physical midpoint between the two resonant surfaces. The maximum amplitude of magnetic perturbation for which this barrier can be sustained is determined. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624, and DE-FG02-04ER54793.
A review of multimodel superensemble forecasting for weather, seasonal climate, and hurricanes
NASA Astrophysics Data System (ADS)
Krishnamurti, T. N.; Kumar, V.; Simon, A.; Bhardwaj, A.; Ghosh, T.; Ross, R.
2016-06-01
This review provides a summary of work in the area of ensemble forecasts for weather, climate, oceans, and hurricanes. This includes a combination of multiple forecast model results that does not dwell on the ensemble mean but uses a unique collective bias reduction procedure. A theoretical framework for this procedure is provided, utilizing a suite of models constructed from the well-known Lorenz low-order nonlinear system. A tutorial, including a walk-through table, illustrates the inner workings of the multimodel superensemble's principle. Systematic errors in a single deterministic model arise from a host of features that range from the model's initial state (data assimilation), resolution, representation of physics, dynamics, and ocean processes, to local aspects of orography, water bodies, and details of the land surface. Models, in their diversity of representation of such features, end up leaving unique signatures of systematic errors. The multimodel superensemble utilizes as many as 10 million weights to take into account the bias errors arising from these diverse features of the member models. The design of a single deterministic forecast model that utilizes multiple features derived from this large volume of weights is also provided here. This has led to a better understanding of error growth and of the collective bias reductions for several of the physical parameterizations within diverse models, such as cumulus convection, planetary boundary layer physics, and radiative transfer. A number of examples of weather, seasonal climate, hurricane, and subsurface oceanic forecast skills of member models, the ensemble mean, and the superensemble are provided.
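At its core, the superensemble's collective bias reduction regresses observations on the member forecasts over a training period and applies the fitted weights thereafter; the operational scheme does this per grid point and variable, which is where the millions of weights come from. A simplified single-gridpoint sketch (illustrative synthetic data, not the FSU implementation):

```python
import numpy as np

# Multimodel superensemble at one grid point: member forecasts are
# combined with least-squares weights (plus an intercept absorbing the
# mean bias) fitted against observations over a training period.
def superensemble(train_members, train_obs, test_members):
    X = np.column_stack([train_members, np.ones(len(train_members))])
    w, *_ = np.linalg.lstsq(X, train_obs, rcond=None)
    return np.column_stack([test_members, np.ones(len(test_members))]) @ w

rng = np.random.default_rng(1)
truth = rng.normal(size=200)
# Three hypothetical member models, each with its own scale and mean bias.
members = np.stack(
    [truth * s + b + rng.normal(scale=0.3, size=200)
     for s, b in ((0.8, 1.0), (1.1, -0.5), (0.9, 2.0))], axis=1)

pred = superensemble(members[:150], truth[:150], members[150:])
err_se = np.mean((pred - truth[150:]) ** 2)
err_mean = np.mean((members[150:].mean(axis=1) - truth[150:]) ** 2)
# On held-out data, the bias-corrected superensemble beats the plain
# ensemble mean, whose error is dominated by the uncorrected biases.
```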
Singular perturbation of smoothly evolving Hele-Shaw solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siegel, M.; Tanveer, S.
1996-01-01
We present analytical scaling results, confirmed by accurate numerics, to show that there exists a class of smoothly evolving zero surface tension solutions to the Hele-Shaw problem that are significantly perturbed by an arbitrarily small amount of surface tension in order one time. © 1996 The American Physical Society.
NASA Astrophysics Data System (ADS)
Carmelo, J. M. P.; Prosen, T.
2017-01-01
Whether the spin stiffness of the spin-1/2 XXX Heisenberg chain is finite or vanishes within the grand-canonical ensemble, in the thermodynamic limit at vanishing magnetic field h → 0 and nonzero temperature, remains an unsolved and controversial issue, as different approaches yield contradictory results. Here we provide an upper bound on the stiffness and show that within that ensemble it vanishes for h → 0 in the thermodynamic limit of chain length L → ∞, at high temperatures T → ∞. Our approach uses a representation in terms of the L physical spins 1/2. Each configuration that generates an exact spin-S energy and momentum eigenstate involves a number 2S of unpaired spins 1/2 in multiplet configurations and L - 2S spins 1/2 that are paired within M_sp = L/2 - S spin-singlet pairs. The Bethe-ansatz strings of length n = 1 and n > 1 describe a single unbound spin-singlet pair and a configuration within which n pairs are bound, respectively. In the case of n > 1 pairs this holds both for ideal and deformed strings associated with n complex rapidities with the same real part. The use of such a spin-1/2 representation provides useful physical information on the problem under investigation, in contrast to often less controllable numerical studies. Our results provide strong evidence for the absence of ballistic transport in the spin-1/2 XXX Heisenberg chain in the thermodynamic limit, at high temperatures T → ∞, vanishing magnetic field h → 0, and within the grand-canonical ensemble.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby
The safety of large and aging water infrastructures is gaining attention in water management given the accelerated rate of change in landscape, climate, and society. In current engineering practice, such safety is ensured by designing infrastructure for the Probable Maximum Precipitation (PMP). Recently, several physics-based numerical modeling approaches have been proposed to modernize the conventional and ad hoc PMP estimation approach. However, the underlying physics has not been investigated, and thus differing PMP estimates are obtained without clarity on their interpretation. In this study, we present a hybrid approach that takes advantage of both traditional engineering wisdom and modern climate science to estimate PMP for current and future climate conditions. The traditional PMP approach is improved and applied to outputs from an ensemble of five CMIP5 models. This hybrid approach is applied in the Pacific Northwest (PNW) to produce ensemble PMP estimates for the historical (1970-2016) and future (2050-2099) time periods. The new historical PMP estimates are verified by comparison with the traditional estimates. PMP in the PNW will increase by 50% of the current level by 2099 under the RCP8.5 scenario. Most of the increase is caused by warming, which mainly affects moisture availability, with minor contributions from changes in storm efficiency in the future. Moisture track change tends to reduce the future PMP. Compared with extreme precipitation, ensemble PMP exhibits higher internal variation. Thus, high-quality data for both precipitation and the related meteorological fields (temperature, wind fields) are required to reduce uncertainties in the ensemble PMP estimates.
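The "traditional engineering wisdom" invoked here centers on moisture maximization: each major observed storm is scaled by the ratio of climatologically maximum to observed precipitable water, and PMP is taken as the envelope over storms. A sketch of that standard step (all numbers hypothetical; the hybrid method applies this to CMIP5 model output rather than station data):

```python
# Moisture-maximization step of traditional PMP estimation: scale each
# storm's precipitation depth by (max precipitable water / observed
# precipitable water), then take the envelope (maximum) over storms.
def pmp_moisture_maximization(storm_precip, storm_pw, max_pw):
    maximized = [p * (wmax / w)
                 for p, w, wmax in zip(storm_precip, storm_pw, max_pw)]
    return max(maximized)

# Two hypothetical storms: 24-h depths (mm), observed precipitable
# water (mm), and climatological maximum precipitable water (mm).
pmp = pmp_moisture_maximization([200.0, 250.0], [40.0, 50.0], [60.0, 60.0])
# 200 * 1.5 = 300 and 250 * 1.2 = 300 -> envelope of 300 mm
```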
Effects of perturbation relative phase on transverse mode instability gain
NASA Astrophysics Data System (ADS)
Zervas, Michalis N.
2018-02-01
We have shown that the relative phase between the fundamental fiber mode and the transverse perturbation significantly affects the local transverse mode instability (TMI) gain. The gain variation is more pronounced as the core diameter increases. This finding can be used in conjunction with other proposed approaches to develop efficient strategies for mitigating TMI in high power fiber amplifiers and lasers. It also provides some insight into the physical origin of the observed large differences in the TMI threshold dependence on core diameter for narrow and broad linewidth operation.