Sample records for large ensemble size

  1. Decadal climate prediction in the large ensemble limit

    NASA Astrophysics Data System (ADS)

    Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.

    2017-12-01

    In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.

  2. A comparison of resampling schemes for estimating model observer performance with small ensembles

    NASA Astrophysics Data System (ADS)

    Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.

    2017-09-01

    In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer's performance depended on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] gave more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination deteriorated seriously, as opposed to the [CLD, LOO] combination. Thus, it may be desirable to use the CLD with the LOO scheme when only a small ensemble is available.
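
    As a rough illustration of the trade-offs described above, the sketch below compares HT/HT and LOO resampling for a channelized Hotelling observer on simulated Gaussian channel outputs. The data model, function names, and parameter values are illustrative assumptions, not the authors' code.

        import numpy as np

        rng = np.random.default_rng(0)

        def auc(s0, s1):
            # Mann-Whitney estimate of the area under the ROC curve
            return (np.mean(s0[None, :] < s1[:, None])
                    + 0.5 * np.mean(s0[None, :] == s1[:, None]))

        def cho_template(x0, x1):
            # channelized Hotelling observer: w = S^-1 (mu1 - mu0)
            S = 0.5 * (np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False))
            return np.linalg.solve(S, x1.mean(0) - x0.mean(0))

        def auc_htht(x0, x1):
            # half-train/half-test: estimate the template on one half,
            # score and compute the AUC on the other half
            n = len(x0) // 2
            w = cho_template(x0[:n], x1[:n])
            return auc(x0[n:] @ w, x1[n:] @ w)

        def auc_loo(x0, x1):
            # leave-one-out: retrain with one image held out, score that image
            s0 = np.array([x0[i] @ cho_template(np.delete(x0, i, 0), x1)
                           for i in range(len(x0))])
            s1 = np.array([x1[i] @ cho_template(x0, np.delete(x1, i, 0))
                           for i in range(len(x1))])
            return auc(s0, s1)

        # 10-channel Gaussian data; class 1 has a small mean shift
        mu0 = np.zeros(10)
        mu1 = mu0 + 0.3
        for n in (20, 50, 2000):   # per-class ensemble size
            x0 = rng.normal(mu0, 1.0, (n, 10))
            x1 = rng.normal(mu1, 1.0, (n, 10))
            print(n, auc_htht(x0, x1), auc_loo(x0, x1))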

  3. Water dynamics in large and small reverse micelles: From two ensembles to collective behavior

    PubMed Central

    Moilanen, David E.; Fenn, Emily E.; Wong, Daryl; Fayer, Michael D.

    2009-01-01

    The dynamics of water in Aerosol-OT reverse micelles are investigated with ultrafast infrared spectroscopy of the hydroxyl stretch. In large reverse micelles, the dynamics of water are separable into two ensembles: slow interfacial water and bulklike core water. As the reverse micelle size decreases, the slowing effect of the interface and the collective nature of water reorientation begin to slow the dynamics of the core water molecules. In the smallest reverse micelles, these effects dominate and all water molecules have the same long-time reorientational dynamics. To understand and characterize the transition in the water dynamics from two ensembles to collective reorientation, polarization- and frequency-selective infrared pump-probe experiments are conducted on the complete range of reverse micelle sizes, from 1.6 to 20 nm in diameter. The crossover between two-ensemble and collective reorientation occurs near a reverse micelle diameter of 4 nm. Below this size, the small number of confined water molecules and structural changes in the reverse micelle interface lead to homogeneous long-time reorientation. PMID:19586114

  4. Ensemble coding remains accurate under object and spatial visual working memory load.

    PubMed

    Epstein, Michael L; Emmanouil, Tatiana A

    2017-10-01

    A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However, the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall, our results reveal that ensemble coding can proceed unimpeded and with high accuracy under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.

  5. Wind and wave extremes over the world oceans from very large ensembles

    NASA Astrophysics Data System (ADS)

    Breivik, Øyvind; Aarnes, Ole Johan; Abdalla, Saleh; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.

    2014-07-01

    Global return values of marine wind speed and significant wave height are estimated from very large aggregates of archived ensemble forecasts at +240 h lead time. Long lead time ensures that the forecasts represent independent draws from the model climate. Compared with ERA-Interim, a reanalysis, the ensemble yields higher return estimates for both wind speed and significant wave height. Confidence intervals are much tighter due to the large size of the data set. The period (9 years) is short enough to be considered stationary even with climate change. Furthermore, the ensemble is large enough for nonparametric 100 year return estimates to be made from order statistics. These direct return estimates compare well with extreme value estimates outside areas with tropical cyclones. Like any method employing modeled fields, it is sensitive to tail biases in the numerical model, but we find that the biases are moderate outside areas with tropical cyclones.
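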
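
    A minimal sketch of the order-statistics idea: if a pooled sample represents N member-years of annual maxima, the m-th largest value estimates the (N/m)-year return level. The Gumbel stand-in data and the pool size below are assumptions for illustration, not the study's forecast archive.

        import numpy as np

        def direct_return_value(annual_maxima, return_period_years):
            # the m-th largest of N annual maxima is exceeded on average
            # m times in N years, i.e. it estimates the (N/m)-year level
            x = np.sort(annual_maxima)
            m = int(round(len(x) / return_period_years))
            assert m >= 1, "record too short for this return period"
            return x[-m]

        # stand-in pool: 50 independent members x 9 years of annual maxima
        # (the study instead pools +240 h forecasts as independent draws)
        rng = np.random.default_rng(1)
        pool = rng.gumbel(loc=10.0, scale=2.0, size=50 * 9)
        print(direct_return_value(pool, 100.0))  # nonparametric 100 yr estimate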

  6. Robustness of the far-field response of nonlocal plasmonic ensembles.

    PubMed

    Tserkezis, Christos; Maack, Johan R; Liu, Zhaowei; Wubs, Martijn; Mortensen, N Asger

    2016-06-22

    Contrary to classical predictions, the optical response of few-nm plasmonic particles depends on particle size due to effects such as nonlocality and electron spill-out. Ensembles of such nanoparticles are therefore expected to exhibit a nonclassical inhomogeneous spectral broadening due to size distribution. For a normal distribution of free-electron nanoparticles, and within the simple nonlocal hydrodynamic Drude model, both the nonlocal blueshift and the plasmon linewidth are shown to be considerably affected by ensemble averaging. Size-variance effects tend however to conceal nonlocality to a lesser extent when the homogeneous size-dependent broadening of individual nanoparticles is taken into account, either through a local size-dependent damping model or through the Generalized Nonlocal Optical Response theory. The role of ensemble averaging is further explored in realistic distributions of isolated or weakly-interacting noble-metal nanoparticles, as encountered in experiments, while an analytical expression to evaluate the importance of inhomogeneous broadening through measurable quantities is developed. Our findings are independent of the specific nonclassical theory used, thus providing important insight into a large range of experiments on nanoscale and quantum plasmonics.

  7. Critical diversity: Divided or united states of social coordination

    PubMed Central

    Kelso, J. A. Scott; Tognoli, Emmanuelle

    2018-01-01

    Much of our knowledge of coordination comes from studies of simple, dyadic systems or systems containing large numbers of components. The huge gap ‘in between’ is seldom addressed, empirically or theoretically. We introduce a new paradigm to study the coordination dynamics of such intermediate-sized ensembles with the goal of identifying key mechanisms of interaction. Rhythmic coordination was studied in ensembles of eight people, with differences in movement frequency (‘diversity’) manipulated within the ensemble. Quantitative change in diversity led to qualitative changes in coordination, a critical value separating régimes of integration and segregation between groups. Metastable and multifrequency coordination between participants enabled communication across segregated groups within the ensemble, without destroying overall order. These novel findings reveal key factors underlying coordination in ensemble sizes previously considered too complicated or 'messy' for systematic study and supply future theoretical/computational models with new empirical checkpoints. PMID:29617371

  8. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    NASA Astrophysics Data System (ADS)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate at virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated on both saturated and unsaturated flow problems. Without sacrificing estimation accuracy, GPIES achieves roughly an order-of-magnitude speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can equally be applied to other hydrological models.
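
    The sketch below illustrates the general surrogate-inside-a-smoother idea on a two-parameter toy problem with scikit-learn's GaussianProcessRegressor: only a few "base point" runs of the (stand-in) forward model are needed, while the smoother updates a large ensemble scored on the surrogate. This is a schematic of the approach, not the paper's GPIES algorithm; all names and settings are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(2)

        def forward(theta):
            # stand-in for an expensive flow solver: parameters -> data
            return np.column_stack([np.sin(theta[:, 0]) + theta[:, 1] ** 2,
                                    theta[:, 0] * theta[:, 1]])

        d_obs = forward(np.array([[0.6, -0.3]]))[0] + rng.normal(0.0, 0.01, 2)

        ens = rng.normal(0.0, 1.0, (1000, 2))                # large ensemble
        base = ens[rng.choice(len(ens), 20, replace=False)]  # few true runs
        gp = GaussianProcessRegressor(kernel=RBF(1.0)).fit(base, forward(base))

        for it in range(4):
            d_pred = gp.predict(ens)                 # surrogate: nearly free
            C = np.cov(np.hstack([ens, d_pred]).T)   # joint covariance
            K = C[:2, 2:] @ np.linalg.inv(C[2:, 2:] + 0.01 ** 2 * np.eye(2))
            noise = rng.normal(0.0, 0.01, d_pred.shape)
            ens = ens + (d_obs + noise - d_pred) @ K.T   # smoother update
            # adaptive refinement: a few updated realizations become base points
            base = np.vstack([base, ens[rng.choice(len(ens), 5, replace=False)]])
            gp = GaussianProcessRegressor(kernel=RBF(1.0)).fit(base, forward(base))

        print(ens.mean(0))   # should drift toward the "true" (0.6, -0.3)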

  9. Individual differences in ensemble perception reveal multiple, independent levels of ensemble representation.

    PubMed

    Haberman, Jason; Brady, Timothy F; Alvarez, George A

    2015-04-01

    Ensemble perception, including the ability to "see the average" from a group of items, operates in numerous feature domains (size, orientation, speed, facial expression, etc.). Although the ubiquity of ensemble representations is well established, the large-scale cognitive architecture of this process remains poorly defined. We address this using an individual differences approach. In a series of experiments, observers saw groups of objects and reported either a single item from the group or the average of the entire group. High-level ensemble representations (e.g., average facial expression) showed complete independence from low-level ensemble representations (e.g., average orientation). In contrast, low-level ensemble representations (e.g., orientation and color) were correlated with each other, but not with high-level ensemble representations (e.g., facial expression and person identity). These results suggest that there is not a single domain-general ensemble mechanism, and that the relationship among various ensemble representations depends on how proximal they are in representational space.

  10. Unveiling Inherent Degeneracies in Determining Population-weighted Ensembles of Inter-domain Orientational Distributions Using NMR Residual Dipolar Couplings: Application to RNA Helix Junction Helix Motifs

    PubMed Central

    Yang, Shan; Al-Hashimi, Hashim M.

    2016-01-01

    A growing number of studies employ time-averaged experimental data to determine dynamic ensembles of biomolecules. While it is well known that different ensembles can satisfy experimental data to within error, the extent and nature of these degeneracies, and their impact on the accuracy of the ensemble determination, remain poorly understood. Here, we use simulations and a recently introduced metric for assessing ensemble similarity to explore degeneracies in determining ensembles using NMR residual dipolar couplings (RDCs), with specific application to A-form helices in RNA. Various target ensembles were constructed representing different domain-domain orientational distributions confined to a topologically restricted (<10%) conformational space. Five independent sets of ensemble-averaged RDCs were then computed for each target ensemble, and a 'sample and select' scheme was used to identify degenerate ensembles that satisfy the RDCs to within experimental uncertainty. We find that ensembles that differ in size, and that can deviate significantly from the target ensemble (by as much as ΣΩ ~ 0.4, where ΣΩ varies between 0 and 1 for maximum and minimum ensemble similarity, respectively), can satisfy the ensemble-averaged RDCs. These deviations increase with the number of unique conformers and the breadth of the target distribution, and result in significant uncertainty in determining conformational entropy (as large as 5 kcal/mol at T = 298 K). Nevertheless, the RDC-degenerate ensembles are biased towards populated regions of the target ensemble and capture other essential features of the distribution, including its shape. Our results identify ensemble size as a major source of uncertainty in determining ensembles and suggest that NMR interactions such as RDCs and spin relaxation, on their own, do not carry the information needed to determine conformational entropy at a useful level of precision. The framework introduced here provides a general approach for exploring degeneracies in ensemble determination for different types of experimental data. PMID:26131693

  11. Ensemble Bayesian forecasting system Part I: Theory and algorithms

    NASA Astrophysics Data System (ADS)

    Herr, Henry D.; Krzysztofowicz, Roman

    2015-05-01

    The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of the predictand, possesses a Bayesian coherence property, constitutes a random sample of the predictand, and has an acceptable sampling error, which makes it suitable for rational decision making under uncertainty.
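
    A schematic of the randomization idea in Python, assuming (for illustration only) a gamma climatic distribution and meta-Gaussian dependence with correlation rho: each deterministic model run is expanded into k predictand members, so a small input ensemble still yields a large Bayesian ensemble. This is not the operational EBFSR code.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        clim = stats.gamma(a=2.0, scale=3.0)   # assumed climatic distribution

        def hup_members(s_model, k, rho=0.9):
            # meta-Gaussian HUP, schematically: move the deterministic model
            # output into standard-normal space via the normal quantile
            # transform, draw k conditional samples, and map them back
            z = stats.norm.ppf(clim.cdf(s_model))
            zz = rho * z + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(k)
            return clim.ppf(stats.norm.cdf(zz))

        # each hydrologic-model run is randomized into 50 predictand members
        members = np.concatenate([hup_members(s, 50) for s in (4.2, 5.0, 3.1)])
        print(members.mean(), members.std())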

  12. Spectral statistics of the uni-modular ensemble

    NASA Astrophysics Data System (ADS)

    Joyner, Christopher H.; Smilansky, Uzy; Weidenmüller, Hans A.

    2017-09-01

    We investigate the spectral statistics of Hermitian matrices in which the elements are chosen uniformly from U(1), called the uni-modular ensemble (UME), in the limit of large matrix size. Using three complementary methods (a supersymmetric integration method, a combinatorial graph-theoretical analysis and a Brownian motion approach), we derive expressions for the 1/N corrections to the mean spectral moments and also analyse the fluctuations about this mean. By addressing the same ensemble from three different points of view, we can critically compare their relative advantages and derive some new results.
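
    For readers who want to experiment numerically, the snippet below samples UME matrices (unit-modulus off-diagonal entries, with Hermiticity forcing the diagonal to +/-1, one common convention) and estimates mean spectral moments; it is an illustrative numerical companion, not the analytical methods of the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        def ume_sample(n):
            # Hermitian matrix with unit-modulus entries: H_ij = exp(i*phi_ij)
            # above the diagonal, H_ji = conj(H_ij); the diagonal is +/-1
            phi = rng.uniform(0.0, 2.0 * np.pi, (n, n))
            upper = np.triu(np.exp(1j * phi), 1)
            h = upper + upper.conj().T
            np.fill_diagonal(h, rng.choice([-1.0, 1.0], n))
            return h

        # even moments of the scaled spectrum, averaged over the ensemble
        n, reps = 200, 50
        eigs = np.concatenate([np.linalg.eigvalsh(ume_sample(n)) / np.sqrt(n)
                               for _ in range(reps)])
        print([round(float(np.mean(eigs ** k)), 4) for k in (2, 4, 6)])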

  13. Selecting a climate model subset to optimise key ensemble properties

    NASA Astrophysics Data System (ADS)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
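
    A greedy toy version of such a subset search is sketched below, optimising only mean performance (present-day bias); the paper's tool handles richer cost functions (spread, model interdependence) and exact optimisation, so this is merely a schematic with illustrative names and data.

        import numpy as np

        def select_subset(errors, k):
            # greedy: at each step add the model whose inclusion gives the
            # sub-ensemble mean with the smallest RMSE vs. observations;
            # errors has shape (n_models, n_gridpoints) = model minus obs
            chosen = []
            for _ in range(k):
                cands = [i for i in range(len(errors)) if i not in chosen]
                rmse = [np.sqrt(np.mean(errors[chosen + [i]].mean(axis=0) ** 2))
                        for i in cands]
                chosen.append(cands[int(np.argmin(rmse))])
            return chosen

        rng = np.random.default_rng(10)
        errors = rng.normal(0.5, 1.0, (30, 500))   # 30 biased toy models
        print(select_subset(errors, k=5))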

  14. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    PubMed Central

    DelSole, T.; Tippett, M.K.; Pegion, K.

    2018-01-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles is found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities. PMID:29937973
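
    The extrapolation idea can be illustrated with a toy parametric error model: assume the error variance grows with lag, and that errors at different lags decorrelate geometrically; the MSE of the equal-weight lagged mean is then the average of the implied covariance matrix and can be evaluated for any ensemble size. The functional forms and numbers below are placeholders, not the covariance model fitted in the paper.

        import numpy as np

        def lagged_mse(L, sigma2=1.0, rho=0.6, growth=0.05):
            # toy model: a member initialized l days earlier has variance
            # sigma2*(1 + growth*l); errors at lags i, j correlate as rho**|i-j|
            lags = np.arange(L)
            sig = np.sqrt(sigma2 * (1 + growth * lags))
            C = np.outer(sig, sig) * rho ** np.abs(np.subtract.outer(lags, lags))
            return C.mean()   # variance of the equal-weight lagged-ensemble mean

        mse = [lagged_mse(L) for L in range(1, 41)]
        print("best lag window:", 1 + int(np.argmin(mse)), "days")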

  15. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    NASA Astrophysics Data System (ADS)

    Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.

    2018-04-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles is found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.

  16. Ecomorphology of the African felid ensemble: the role of the skull and postcranium in determining species segregation and assembling history.

    PubMed

    Morales, M M; Giannini, N P

    2013-05-01

    Morphology of extant felids is regarded as highly conservative. Most previous studies have focussed on skull morphology, so a vacuum exists about morphofunctional variation in postcranium and its role in structuring ensembles of felids in different continents. The African felid ensemble is particularly rich in ecologically specialized felids. We studied the ecomorphology of this ensemble using 31 cranial and 93 postcranial morphometric variables measured in 49 specimens of all 10 African species. We took a multivariate approach controlling for phylogeny, with and without body size correction. Postcranial and skull + postcranial analyses (but not skull-only analyses) allowed for a complete segregation of species in morphospace. Morphofunctional factors segregating species included body size, bite force, zeugopodial lengths and osteological features related to parasagittal leg movement. A general gradient of bodily proportions was recovered: lightly built, long-legged felids with small heads and weak bite forces vs. the opposite. Three loose groups were recognized: small terrestrial felids, mid-to-large sized scansorial felids and specialized Acinonyx jubatus and Leptailurus serval. As predicted from a previous study, the assembling of the African felid ensemble during the Plio-Pleistocene occurred by the arrival of distinct felid lineages that occupied then vacant areas of morphospace, later diversifying in the continent.

  17. Exactly solvable random graph ensemble with extensively many short cycles

    NASA Astrophysics Data System (ADS)

    Aguirre López, Fabián; Barucca, Paolo; Fekom, Mathilde; Coolen, Anthony C. C.

    2018-02-01

    We introduce and analyse ensembles of 2-regular random graphs with a tuneable distribution of short cycles. The phenomenology of these graphs depends critically on the scaling of the ensembles' control parameters relative to the number of nodes. A phase diagram is presented, showing a second-order phase transition from a connected to a disconnected phase. We study both the canonical formulation, where the size is large but fixed, and the grand canonical formulation, where the size is sampled from a discrete distribution, and show their equivalence in the thermodynamic limit. We also compute analytically the spectral density, which consists of a discrete set of isolated eigenvalues representing short cycles and a continuous part representing cycles of diverging size.

  18. Multi-model ensembles for assessment of flood losses and associated uncertainty

    NASA Astrophysics Data System (ADS)

    Figueiredo, Rui; Schröter, Kai; Weiss-Motz, Alexander; Martina, Mario L. V.; Kreibich, Heidi

    2018-05-01

    Flood loss modelling is a crucial part of risk assessments. However, it is subject to large uncertainty that is often neglected. Most models available in the literature are deterministic, providing only single point estimates of flood loss, and large disparities tend to exist among them. Adopting any one such model in a risk assessment context is likely to lead to inaccurate loss estimates and sub-optimal decision-making. In this paper, we propose the use of multi-model ensembles to address these issues. This approach, which has been applied successfully in other scientific fields, is based on the combination of different model outputs with the aim of improving the skill and usefulness of predictions. We first propose a model rating framework to support ensemble construction, based on a probability tree of model properties, which establishes relative degrees of belief between candidate models. Using 20 flood loss models in two test cases, we then construct numerous multi-model ensembles, based both on the rating framework and on a stochastic method, differing in terms of participating members, ensemble size and model weights. We evaluate the performance of ensemble means, as well as their probabilistic skill and reliability. Our results demonstrate that well-designed multi-model ensembles represent a pragmatic approach to consistently obtain more accurate flood loss estimates and reliable probability distributions of model uncertainty.

  19. Asteroid Impact Risk: Ground Hazard versus Impactor Size

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan; Wheeler, Lorien; Dotson, Jessie; Aftosmis, Michael; Tarano, Ana

    2017-01-01

    We utilized a probabilistic asteroid impact risk (PAIR) model to stochastically assess the impact risk due to an ensemble population of Near-Earth Objects (NEOs). Concretely, we present the variation of risk with impactor size. Results suggest that large impactors dominate the average risk, even when only considering the subset of undiscovered NEOs.

  20. Analyzing the impact of changing size and composition of a crop model ensemble

    NASA Astrophysics Data System (ADS)

    Rodríguez, Alfredo

    2017-04-01

    The use of an ensemble of crop growth simulation models is a practice recently adopted to quantify aspects of uncertainty in model simulations. Yet, while the climate modelling community has extensively investigated the properties of model ensembles and their implications, this has hardly been investigated for crop model ensembles (Wallach et al., 2016). In their ensemble of 27 wheat models, Martre et al. (2015) found that the accuracy of the multi-model ensemble average only increases up to an ensemble size of ca. 10 and does not improve when more models are included in the analysis. However, even when this number of members is reached, questions arise about the impact of adding a member to, or removing one from, the ensemble. When selecting ensemble members, identifying members with poor performance or implausible results can make a large difference to the outcome. The objective of this study is to set up a methodology that defines indicators showing the effects of changing the ensemble composition and size on simulation results when a selection procedure for ensemble members is applied. Ensemble mean or median and variance are among the indicators used to depict ensemble results. We use simulations from an ensemble of wheat models that have been used to construct impact response surfaces (IRSs; Pirttioja et al., 2015). These show the response of an impact variable (e.g., crop yield) to systematic changes in two explanatory variables (e.g., precipitation and temperature). Using these, we compare different sub-ensembles in terms of the mean, median and spread, and also by comparing IRSs. The methodology developed here allows an ensemble to be compared before and after applying any procedure that changes its composition and size, by measuring the impact of this decision on the ensemble's central tendency measures. The methodology could also be further developed to compare the effect of changing ensemble composition and size on IRS features.

    References:

    Martre, P., Wallach, D., Asseng, S., Ewert, F., Jones, J.W., Rötter, R.P., Boote, K.J., Ruane, A.C., Thorburn, P.J., Cammarano, D., Hatfield, J.L., Rosenzweig, C., Aggarwal, P.K., Angulo, C., Basso, B., Bertuzzi, P., Biernath, C., Brisson, N., Challinor, A.J., Doltra, J., Gayler, S., Goldberg, R., Grant, R.F., Heng, L., Hooker, J., Hunt, L.A., Ingwersen, J., Izaurralde, R.C., Kersebaum, K.C., Muller, C., Kumar, S.N., Nendel, C., O'Leary, G., Olesen, J.E., Osborne, T.M., Palosuo, T., Priesack, E., Ripoche, D., Semenov, M.A., Shcherbak, I., Steduto, P., Stockle, C.O., Stratonovitch, P., Streck, T., Supit, I., Tao, F.L., Travasso, M., Waha, K., White, J.W., Wolf, J., 2015. Multimodel ensembles of wheat growth: many models are better than one. Glob. Change Biol. 21, 911-925.

    Pirttioja, N., Carter, T., Fronzek, S., Bindi, M., Hoffmann, H., Palosuo, T., Ruiz-Ramos, M., Tao, F., Trnka, M., Acutis, M., Asseng, S., Baranowski, P., Basso, B., Bodin, P., Buis, S., Cammarano, D., Deligios, P., Destain, M.-F., Doro, L., Dumont, B., Ewert, F., Ferrise, R., Francois, L., Gaiser, T., Hlavinka, P., Jacquemin, I., Kersebaum, K.-C., Kollas, C., Krzyszczak, J., Lorite, I.J., Minet, J., Minguez, M.I., Montesion, M., Moriondo, M., Müller, C., Nendel, C., Öztürk, I., Perego, A., Rodriguez, A., Ruane, A.C., Ruget, F., Sanna, M., Semenov, M., Slawinski, C., Stratonovitch, P., Supit, I., Waha, K., Wang, E., Wu, L., Zhao, Z., Rötter, R.P., 2015. A crop model ensemble analysis of temperature and precipitation effects on wheat yield across a European transect using impact response surfaces. Clim. Res. 65, 87-105, doi:10.3354/cr01322.

    Wallach, D., Mearns, L.O., Ruane, A.C., Rötter, R.P., Asseng, S., 2016. Lessons from climate modeling on the design and use of ensembles for crop modeling. Climatic Change (in press), doi:10.1007/s10584-016-1803-1.
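
    The kind of indicator described above can be sketched in a few lines; the array layout and function names are illustrative assumptions, not the study's implementation.

        import numpy as np

        def ensemble_indicators(yields):
            # yields: (n_models, n_scenarios), e.g. one column per point of an
            # impact response surface (temperature/precipitation perturbation)
            return {"mean": yields.mean(axis=0),
                    "median": np.median(yields, axis=0),
                    "spread": yields.std(axis=0, ddof=1)}

        def composition_impact(yields, keep):
            # average absolute change in each indicator when the ensemble
            # is reduced to the members listed in `keep`
            full = ensemble_indicators(yields)
            sub = ensemble_indicators(yields[keep])
            return {k: float(np.abs(full[k] - sub[k]).mean()) for k in full}

        rng = np.random.default_rng(11)
        yields = rng.normal(6.0, 1.0, (27, 100))     # toy wheat-model ensemble
        print(composition_impact(yields, keep=list(range(10))))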

  1. Single Aerosol Particle Studies Using Optical Trapping Raman And Cavity Ringdown Spectroscopy

    NASA Astrophysics Data System (ADS)

    Gong, Z.; Wang, C.; Pan, Y. L.; Videen, G.

    2017-12-01

    Due to the physical and chemical complexity of aerosol particles and the interdisciplinary nature of aerosol science, which involves physics, chemistry, and biology, our knowledge of aerosol particles is rather incomplete; our current understanding is limited by averaged (over size, composition, shape, and orientation) and/or ensemble (over time, size, and multiple particles) measurements. Physically, single aerosol particles are the fundamental units of any large aerosol ensemble. Chemically, single aerosol particles carry individual chemical components (properties and constituents) in particle ensemble processes. Therefore, the study of single aerosol particles can bridge the gap between aerosol ensembles and bulk/surface properties, and provide a hierarchical progression from a simple benchmark single-component system to a mixed-phase multicomponent system. A single aerosol particle can be an effective reactor for studying heterogeneous surface chemistry in multiple phases. The latest technological advances provide exciting new opportunities to study single aerosol particles and to further develop single-aerosol-particle instrumentation. We present updates on our recent studies of single aerosol particles optically trapped in air using optical-trapping Raman and cavity ringdown spectroscopy.

  2. CELES: CUDA-accelerated simulation of electromagnetic scattering by large ensembles of spheres

    NASA Astrophysics Data System (ADS)

    Egel, Amos; Pattelli, Lorenzo; Mazzamuto, Giacomo; Wiersma, Diederik S.; Lemmer, Uli

    2017-09-01

    CELES is a freely available MATLAB toolbox to simulate light scattering by many spherical particles. Aiming at high computational performance, CELES leverages block-diagonal preconditioning, a lookup-table approach to evaluate costly functions, and massively parallel execution on NVIDIA graphics processing units using the CUDA computing platform. The combination of these techniques makes it possible to efficiently address large electrodynamic problems (>10^4 scatterers) on inexpensive consumer hardware. In this paper, we validate near- and far-field distributions against the well-established multi-sphere T-matrix (MSTM) code and discuss the convergence behavior for ensembles of different sizes, including an exemplary system comprising 10^5 particles.

  3. Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Gilbreth, C. N.; Alhassid, Y.

    2015-03-01

    Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
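
    The following sketch outlines the standard stabilization trick: accumulate the long matrix product in a factored Q * diag(d) * R form so that widely separated scales are kept in an explicit diagonal. It is a generic illustration of the technique, not the improved canonical-ensemble method of the paper.

        import numpy as np

        def qdr(m):
            # QR with the scales pulled out into a diagonal factor d
            q, r = np.linalg.qr(m)
            d = np.abs(np.diag(r))
            d[d == 0] = 1.0
            return q, d, r / d[:, None]

        def stabilized_product(matrices):
            # accumulate M_k ... M_1 as Q * diag(d) * R; naive products of
            # imaginary-time propagators lose the small scales to round-off
            q, d, r = qdr(matrices[0])
            for a in matrices[1:]:
                q, d, r_new = qdr((a @ q) * d)   # (A Q) D: scale columns by d
                r = r_new @ r
            return q, d, r

        # factors with widely separated scales, like exp(-dBeta * h)
        mats = [np.diag(np.exp(-np.array([0.001, 1.5, 3.0]))) for _ in range(20)]
        q, d, r = stabilized_product(mats)   # product = q @ np.diag(d) @ r
        print(d)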

  4. Muscle activation described with a differential equation model for large ensembles of locally coupled molecular motors.

    PubMed

    Walcott, Sam

    2014-10-01

    Molecular motors, by turning chemical energy into mechanical work, are responsible for active cellular processes. Often groups of these motors work together to perform their biological role. Motors in an ensemble are coupled and exhibit complex emergent behavior. Although large motor ensembles can be modeled with partial differential equations (PDEs) by assuming that molecules function independently of their neighbors, this assumption is violated when motors are coupled locally. It is therefore unclear how to describe the ensemble behavior of the locally coupled motors responsible for biological processes such as calcium-dependent skeletal muscle activation. Here we develop a theory to describe locally coupled motor ensembles and apply the theory to skeletal muscle activation. The central idea is that a muscle filament can be divided into two phases: an active and an inactive phase. Dynamic changes in the relative size of these phases are described by a set of linear ordinary differential equations (ODEs). As the dynamics of the active phase are described by PDEs, muscle activation is governed by a set of coupled ODEs and PDEs, building on previous PDE models. With comparison to Monte Carlo simulations, we demonstrate that the theory captures the behavior of locally coupled ensembles. The theory also plausibly describes and predicts muscle experiments from molecular to whole muscle scales, suggesting that a micro- to macroscale muscle model is within reach.

  5. Dependence of high density nitrogen-vacancy center ensemble coherence on electron irradiation doses and annealing time

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Yuan, H.; Zhang, N.; Xu, L. X.; Li, B.; Cheng, G. D.; Wang, Y.; Gui, Q.; Fang, J. C.

    2017-12-01

    Negatively charged nitrogen-vacancy (NV-) center ensembles in diamond have proved to have great potential for use in highly sensitive, small-package solid-state quantum sensors. One way to improve sensitivity is to produce a high-density NV- center ensemble on a large scale with a long coherence lifetime. In this work, the NV- center ensemble was prepared in type-Ib diamond using high-energy electron irradiation and annealing, and the transverse relaxation time T2 of the ensemble was systematically investigated as a function of the irradiation electron dose and annealing time. Dynamical decoupling sequences were used to characterize T2. To overcome the problem of a low signal-to-noise ratio in the T2 measurement, a coupled strip-lines waveguide was used to synchronously manipulate NV- centers along three directions and improve the fluorescence signal contrast. Finally, NV- center ensembles with a high concentration of roughly 10^15 mm^-3 were manipulated within a ~10 µs coherence time. By applying a multi-coupled strip-lines waveguide to increase the effective volume of the diamond, sub-femtotesla sensitivity for AC field magnetometry can be achieved. The long-coherence, high-density, large-scale NV- center ensemble in diamond means that room-temperature, micro-sized solid-state quantum sensors with ultra-high sensitivity can be developed in the near future.

  6. Toward an Accurate Theoretical Framework for Describing Ensembles for Proteins under Strongly Denaturing Conditions

    PubMed Central

    Tran, Hoang T.; Pappu, Rohit V.

    2006-01-01

    Our focus is on an appropriate theoretical framework for describing highly denatured proteins. In high concentrations of denaturants, proteins behave like polymers in a good solvent and ensembles for denatured proteins can be modeled by ignoring all interactions except excluded volume (EV) effects. To assay conformational preferences of highly denatured proteins, we quantify a variety of properties for EV-limit ensembles of 23 two-state proteins. We find that modeled denatured proteins can be best described as follows. Average shapes are consistent with prolate ellipsoids. Ensembles are characterized by large correlated fluctuations. Sequence-specific conformational preferences are restricted to local length scales that span five to nine residues. Beyond local length scales, chain properties follow well-defined power laws that are expected for generic polymers in the EV limit. The average available volume is filled inefficiently, and cavities of all sizes are found within the interiors of denatured proteins. All properties characterized from simulated ensembles match predictions from rigorous field theories. We use our results to resolve between conflicting proposals for structure in ensembles for highly denatured states. PMID:16766618

  7. Assessing an ensemble Kalman filter inference of Manning's n coefficient of an idealized tidal inlet against a polynomial chaos-based MCMC

    NASA Astrophysics Data System (ADS)

    Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim

    2017-08-01

    Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large-dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from their ability to algorithmically accommodate a large number of uncertain quantities without a significant increase in computational requirements. However, only approximate estimates are generally obtained by this approach, due to the restrictive Gaussian prior and noise assumptions generally imposed in these methods. This contribution aims at evaluating the effectiveness of an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of coastal ocean modeling, suggests that an ensemble Kalman filter with an appropriate ensemble size and well-tuned inflation provides reliable mean estimates and uncertainties of Manning's n coefficients compared to the full posterior distributions inferred by MCMC.
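
    For orientation, a generic stochastic EnKF parameter-update step looks like the sketch below; the toy linear forward model, the inflation treatment, and all names are illustrative assumptions, while the actual study uses ADCIRC within an OSSE framework.

        import numpy as np

        rng = np.random.default_rng(5)

        def enkf_update(theta, d_pred, d_obs, obs_var, inflation=1.05):
            # stochastic EnKF analysis step for an (Ne, Np) parameter ensemble
            # given (Ne, Nd) forward-model predictions and an observation vector
            theta = theta.mean(0) + inflation * (theta - theta.mean(0))
            n_p = theta.shape[1]
            C = np.cov(np.hstack([theta, d_pred]).T)       # joint covariance
            C_td, C_dd = C[:n_p, n_p:], C[n_p:, n_p:]
            K = C_td @ np.linalg.inv(C_dd + obs_var * np.eye(d_pred.shape[1]))
            perturbed = d_obs + rng.normal(0.0, np.sqrt(obs_var), d_pred.shape)
            return theta + (perturbed - d_pred) @ K.T

        # toy usage: recover two "Manning-like" parameters of a linear model G
        G = rng.normal(size=(3, 2))
        theta_true = np.array([0.03, 0.05])
        d_obs = G @ theta_true + rng.normal(0.0, 0.01, 3)
        theta = rng.normal(0.04, 0.02, (100, 2))           # prior ensemble
        for _ in range(3):
            theta = enkf_update(theta, theta @ G.T, d_obs, obs_var=1e-4)
        print(theta.mean(0))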

  8. Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.

    PubMed

    Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S

    2017-01-05

    The magnitude and complexity of the structural and functional data available on nanomaterials require data analytics, statistical analysis and information technology to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data and significantly reduce the workload needed to simulate experimentally relevant virtual samples.
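
    As a rough stand-in for the analysis pipeline: prototypes can be approximated by cluster centres, and archetypes sought on the boundary of the data cloud. The snippet below uses k-means plus a convex-hull approximation in a 2-D principal-component projection; proper archetypal analysis solves a constrained matrix factorization, so this is only a schematic on synthetic stand-in data.

        import numpy as np
        from scipy.spatial import ConvexHull
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        # rows = nanostructures, columns = descriptors (size, shape, ...)
        rng = np.random.default_rng(6)
        X = rng.normal(0.0, 1.0, (500, 8)) * rng.gamma(2.0, 1.0, (500, 1))

        # prototypes: typical structures near the centres of the clusters
        prototypes = KMeans(n_clusters=5, n_init=10,
                            random_state=0).fit(X).cluster_centers_

        # archetypes: extreme structures, approximated here by the hull
        # vertices of a 2-D principal-component projection
        Z = PCA(n_components=2).fit_transform(X)
        archetype_idx = ConvexHull(Z).vertices
        print(len(archetype_idx), "approximate archetypes")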

  9. Classification of large-sized hyperspectral imagery using fast machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira

    2017-07-01

    We present a framework of fast machine learning algorithms for the classification of large-sized hyperspectral images, from a theoretical to a practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), and extreme learning machine (ELM), as well as ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. To provide a quantitative analysis, we pay particular attention to comparing these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as computational cost and robustness against noise are also discussed.
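
    A minimal example of this kind of comparison on synthetic stand-in data is sketched below with scikit-learn, which provides RF and SVM baselines; rotation forest and ELM are not part of scikit-learn and are omitted here.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split
        from sklearn.svm import LinearSVC

        # stand-in for a large hyperspectral scene: many pixels, ~200 bands
        rng = np.random.default_rng(7)
        X = rng.normal(0.0, 1.0, (20000, 200))
        y = (X[:, :10].sum(1) + rng.normal(0.0, 1.0, 20000) > 0).astype(int)

        # limited training set, as in the high-dimension/small-sample case
        Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=0.05,
                                              random_state=0)

        for clf in (RandomForestClassifier(n_estimators=200, n_jobs=-1,
                                           random_state=0),
                    LinearSVC(C=1.0, max_iter=5000)):
            clf.fit(Xtr, ytr)
            print(type(clf).__name__, accuracy_score(yte, clf.predict(Xte)))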

  10. Combination of Cation Exchange and Quantized Ostwald Ripening for Controlling Size Distribution of Lead Chalcogenide Quantum Dots

    DOE PAGES

    Zhang, Changwang; Xia, Yong; Zhang, Zhiming; ...

    2017-03-22

    A new strategy for narrowing the size distribution of colloidal quantum dots (QDs) was developed by combining cation exchange and quantized Ostwald ripening. Medium-sized reactant CdS(e) QDs were subjected to cation exchange to form the target PbS(e) QDs, and then small reactant CdS(e) QDs were added, which were converted to small PbS(e) dots via cation exchange. The small-sized ensemble of PbS(e) QDs dissolved rapidly and completely, releasing a large amount of monomers and promoting the growth and size-focusing of the medium-sized ensemble of PbS(e) QDs. The addition of small reactant QDs can be repeated to continuously reduce the size distribution. The new method was applied to synthesize PbSe and PbS QDs with extremely narrow size distributions and, as a bonus, hybrid surface passivation. The size distributions of the prepared PbSe and PbS QDs are as low as 3.6% and 4.3%, respectively, leading to hexagonal close packing in monolayers and highly ordered three-dimensional superlattices.

  11. Combination of Cation Exchange and Quantized Ostwald Ripening for Controlling Size Distribution of Lead Chalcogenide Quantum Dots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Changwang; Xia, Yong; Zhang, Zhiming

    A new strategy for narrowing the size distribution of colloidal quantum dots (QDs) was developed by combining cation exchange and quantized Ostwald ripening. Medium-sized reactant CdS(e) QDs were subjected to cation exchange to form the target PbS(e) QDs, and then small reactant CdS(e) QDs were added, which were converted to small PbS(e) dots via cation exchange. The small-sized ensemble of PbS(e) QDs dissolved rapidly and completely, releasing a large amount of monomers and promoting the growth and size-focusing of the medium-sized ensemble of PbS(e) QDs. The addition of small reactant QDs can be repeated to continuously reduce the size distribution. The new method was applied to synthesize PbSe and PbS QDs with extremely narrow size distributions and, as a bonus, hybrid surface passivation. The size distributions of the prepared PbSe and PbS QDs are as low as 3.6% and 4.3%, respectively, leading to hexagonal close packing in monolayers and highly ordered three-dimensional superlattices.

  12. Organization and scaling in water supply networks

    NASA Astrophysics Data System (ADS)

    Cheng, Likwan; Karney, Bryan W.

    2017-12-01

    Public water supply is one of society's most vital resources and most costly infrastructures. Traditional concepts of these networks capture their engineering identity as isolated, deterministic hydraulic units but overlook their physics identity as related entities in a probabilistic, geographic ensemble, characterized by size organization and property scaling. Although discoveries of allometric scaling in natural supply networks (organisms and rivers) raised the prospect of similar findings in anthropogenic supplies, so far no such finding has been reported for public water or related civic resource supplies. Examining an empirical ensemble of large number and wide size range, we show that water supply networks possess self-organized size abundance and theory-explained allometric scaling in spatial, infrastructural, and resource- and emission-flow properties. These discoveries establish a scaling physics for water supply networks and may lead to novel applications in resource- and jurisdiction-scale water governance.

  13. Application of an Ensemble Smoother to Precipitation Assimilation

    NASA Technical Reports Server (NTRS)

    Zhang, Sara; Zupanski, Dusanka; Hou, Arthur; Zupanski, Milija

    2008-01-01

    Assimilation of precipitation in a global modeling system poses a special challenge in that the observation operators for precipitation processes are highly nonlinear. In the variational approach, substantial development work and model simplifications are required to include precipitation-related physical processes in the tangent linear model and its adjoint. An ensemble-based data assimilation algorithm, the Maximum Likelihood Ensemble Smoother (MLES), has been developed to explore the ensemble representation of the precipitation observation operator with nonlinear convection and large-scale moist physics. An ensemble assimilation system based on the NASA GEOS-5 GCM has been constructed to assimilate satellite precipitation data within the MLES framework. The configuration of the smoother takes the time dimension into account for the relationship between state variables and observable rainfall. The full nonlinear forward-model ensembles are used to represent components involving the observation operator and its transpose. Several assimilation experiments using satellite precipitation observations have been carried out to investigate the effectiveness of the ensemble representation of the nonlinear observation operator and the impact of assimilating rain retrievals from the TMI and SSM/I sensors. Preliminary results show that this ensemble assimilation approach is capable of extracting information from nonlinear observations to improve the analysis and forecast, provided the ensemble size is adequate and a suitable localization scheme is applied. In addition to a dynamically consistent precipitation analysis, the assimilation system produces a statistical estimate of the analysis uncertainty.

  14. Changing precipitation in western Europe, climate change or natural variability?

    NASA Astrophysics Data System (ADS)

    Aalbers, Emma; Lenderink, Geert; van Meijgaard, Erik; van den Hurk, Bart

    2017-04-01

    Multi-model RCM-GCM ensembles provide high-resolution climate projections, valuable for, among others, climate impact assessment studies. While the application of multiple models (both GCMs and RCMs) provides a certain robustness with respect to model uncertainty, the interpretation of differences between ensemble members - the combined result of model uncertainty and natural variability of the climate system - is not straightforward. Natural variability is intrinsic to the climate system and a potentially large source of uncertainty in climate change projections, especially for projections on the local to regional scale. To quantify the natural variability and obtain a robust estimate of the forced climate change response (given a certain model and forcing scenario), large ensembles of climate model simulations from the same model provide essential information. While for global climate models (GCMs) a number of such large single-model ensembles exist and have been analyzed, for regional climate models (RCMs) the number and size of single-model ensembles are limited, and the predictability of the forced climate response at the local to regional scale is still rather uncertain. We present a regional downscaling of a 16-member single-model ensemble over western Europe and the Alps at a resolution of 0.11 degrees (~12 km), similar to the highest-resolution EURO-CORDEX simulations. This 16-member ensemble was generated with the GCM EC-EARTH and downscaled with the RCM RACMO for the period 1951-2100. The single-model ensemble has been investigated in terms of the ensemble mean response (our estimate of the forced climate response) as well as the differences between ensemble members, which measure natural variability. We focus on the response in seasonal mean and extreme precipitation (seasonal maxima and extremes with a return period of up to 20 years) for the near to far future. For most precipitation indices we can reliably determine the climate change signal, given the applied model chain and forcing scenario. However, the analysis also shows how limited the information in single ensemble members is about the local-scale forced climate response, even for high levels of global warming when the forced response has emerged from natural variability. Analysis and application of multi-model ensembles like EURO-CORDEX should go hand in hand with single-model ensembles, like the one presented here, to correctly interpret the fine-scale information in terms of a forced signal and random noise due to natural variability.

  15. Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romano, Paul K.; Lund, Amanda L.; Siegel, Andrew R.

    2017-06-19

    The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of this spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(N^-1). In this work, we consider how the true variance can be minimized, given a total amount of work available, as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of discarded generations through a source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with a very large generation size.
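
    The discard penalty can be quantified with a toy AR(1) stand-in for the correlated generation-to-generation tallies: splitting a fixed generation budget across s independent simulations averages the correlated-mean variance down by 1/s, but each split pays the discard cost again. The numbers below are illustrative, not the reactor problems of the paper.

        import numpy as np

        def var_of_mean(n_active, corr):
            # exact variance of the mean of n_active unit-variance AR(1)
            # samples with lag-k autocorrelation corr**k
            k = np.arange(1, n_active)
            return (1 + 2 * np.sum((1 - k / n_active) * corr ** k)) / n_active

        total_gens, discard, corr = 2000, 50, 0.95
        for s in (1, 2, 4, 8, 16):
            active = total_gens // s - discard   # generations kept per run
            # s independent runs average down by 1/s, but each pays the
            # discard cost, so too many splits eventually raise the variance
            print(s, var_of_mean(active, corr) / s)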

  16. Subsurface characterization with localized ensemble Kalman filter employing adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Delijani, Ebrahim Biniaz; Pishvaie, Mahmoud Reza; Boozarjomehry, Ramin Bozorgmehry

    2014-07-01

    Ensemble Kalman filter, EnKF, as a Monte Carlo sequential data assimilation method, has emerged as promising for subsurface media characterization during the past decade. Due to the high computational cost of a large ensemble, EnKF is limited to small ensemble sets in practice. This results in the appearance of spurious correlations in the covariance structure, leading to incorrect updates or probable divergence of the updated realizations. In this paper, a universal/adaptive thresholding method is presented to remove and/or mitigate the spurious correlation problem in the forecast covariance matrix. This method is then extended to regularize the Kalman gain directly. Four different thresholding functions have been considered to threshold the forecast covariance and gain matrices: hard, soft, lasso and Smoothly Clipped Absolute Deviation (SCAD) functions. Three benchmarks are used to evaluate the performance of these methods: a small 1D linear model and two 2D water flooding cases (in petroleum reservoirs) whose levels of heterogeneity/nonlinearity differ. It should be noted that besides adaptive thresholding, the standard distance-dependent localization and bootstrap Kalman gain are also implemented for comparison purposes. We assessed each setup with different ensemble sets to investigate the sensitivity of each method to ensemble size. The results indicate that thresholding the forecast covariance yields more reliable performance than thresholding the Kalman gain. Among the thresholding functions, SCAD is the most robust for both covariance and gain estimation. Our analyses emphasize that not all assimilation cycles require thresholding and that it should be performed judiciously during the early assimilation cycles. The proposed scheme of adaptive thresholding outperforms other methods for subsurface characterization of the underlying benchmarks.
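
    The thresholding functions named above have simple closed forms; a sketch is given below, with a placeholder SCAD constant a = 3.7 and the diagonal variances left untouched. It is a generic illustration, not the paper's adaptive threshold-selection procedure.

        import numpy as np

        def hard(c, t):
            return np.where(np.abs(c) >= t, c, 0.0)

        def soft(c, t):
            return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

        def scad(c, t, a=3.7):
            # smoothly clipped absolute deviation: soft-thresholding near t,
            # entries beyond a*t left untouched, linear interpolation between
            out = soft(c, t)
            mid = (np.abs(c) > 2 * t) & (np.abs(c) <= a * t)
            out[mid] = ((a - 1) * c[mid] - np.sign(c[mid]) * a * t) / (a - 2)
            big = np.abs(c) > a * t
            out[big] = c[big]
            return out

        def threshold_covariance(C, func, t):
            # screen small (likely spurious) off-diagonal entries of a
            # small-ensemble forecast covariance; keep the variances intact
            out = func(np.asarray(C, dtype=float), t)
            np.fill_diagonal(out, np.diag(C))
            return out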

  17. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often relies on faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to state-of-the-art systems.
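
    The classifier-selection step can be framed as a standard 0-1 knapsack. The paper tailors the algorithm to the diversity/accuracy dilemma; the sketch below (Python, with hypothetical per-classifier scores and budgets) shows only the generic dynamic-programming core such a selection would build on:

```python
def select_classifiers(values, weights, capacity):
    """Standard 0-1 knapsack DP: pick a subset of base classifiers
    maximizing total 'value' (e.g., validation accuracy gain) under a
    total 'weight' budget (e.g., an integer-scaled redundancy cost)."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i][w], dp[i - 1][w - weights[i - 1]] + values[i - 1])
    # Backtrack to recover the chosen subset.
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            chosen.append(i - 1)
            w -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)

# Hypothetical per-classifier accuracy gains and redundancy costs.
print(select_classifiers([60, 100, 120, 75], [10, 20, 30, 15], capacity=50))
```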

  18. Impacts of a Stochastic Ice Mass-Size Relationship on Squall Line Ensemble Simulations

    NASA Astrophysics Data System (ADS)

    Stanford, M.; Varble, A.; Morrison, H.; Grabowski, W.; McFarquhar, G. M.; Wu, W.

    2017-12-01

    Cloud and precipitation structure, evolution, and cloud radiative forcing of simulated mesoscale convective systems (MCSs) are significantly impacted by ice microphysics parameterizations. Most microphysics schemes assume power-law relationships with constant parameters for ice particle mass, area, and terminal fall speed as a function of size, despite observations showing that these relationships vary in both time and space. To account for such natural variability, a stochastic representation of ice microphysical parameters was developed using the Predicted Particle Properties (P3) microphysics scheme in the Weather Research and Forecasting model, guided by in situ aircraft measurements from a number of field campaigns. Here, the stochastic framework is applied to the "a" and "b" parameters of the unrimed ice mass-size (m-D) relationship (m = aD^b), with co-varying "a" and "b" values constrained by observational distributions and tested over a range of spatiotemporal autocorrelation scales. Diagnostically altering a-b pairs in three-dimensional (3D) simulations of the 20 May 2011 Midlatitude Continental Convective Clouds Experiment (MC3E) squall line suggests that these parameters impact many important characteristics of the simulated squall line, including reflectivity structure (particularly in the anvil region), surface rain rates, surface and top-of-atmosphere radiative fluxes, buoyancy and latent cooling distributions, and system propagation speed. The stochastic a-b P3 scheme is tested using two frameworks: (1) a large ensemble of two-dimensional idealized squall line simulations and (2) a smaller ensemble of 3D simulations of the 20 May 2011 squall line, for which simulations are evaluated using observed radar reflectivity and radial velocity at multiple wavelengths, surface meteorology, and surface- and satellite-measured longwave and shortwave radiative fluxes. Ensemble spreads are characterized and compared against initial-condition ensemble spreads for a range of variables.
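
    A minimal way to realize such a stochastic parameter pair is to draw co-varying (ln a, b) values from a bivariate normal distribution and evolve them with an AR(1) autocorrelation in time. The Python sketch below does exactly that; the means, variances, correlation, and autocorrelation time are illustrative assumptions, not the observationally constrained values used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative statistics for (ln a, b) in m = a * D**b (SI units).
# These numbers are assumptions, not the campaign-constrained values.
mu = np.array([np.log(0.0121), 1.9])
sd_lna, sd_b, corr = 0.5, 0.1, -0.4
C = np.array([[sd_lna**2, corr * sd_lna * sd_b],
              [corr * sd_lna * sd_b, sd_b**2]])
L = np.linalg.cholesky(C)

def ab_series(n_steps, tau=10.0):
    """AR(1) sequence of co-varying (a, b) pairs with autocorrelation
    time tau (in model steps), mimicking a stochastic parameter pair."""
    phi = np.exp(-1.0 / tau)
    z = rng.standard_normal(2)
    pairs = []
    for _ in range(n_steps):
        z = phi * z + np.sqrt(1 - phi**2) * rng.standard_normal(2)
        ln_a, b = mu + L @ z
        pairs.append((np.exp(ln_a), b))
    return pairs

for a, b in ab_series(3):
    D = 1e-3                             # a 1 mm particle
    print(f"a={a:.3e}, b={b:.2f}, m(1 mm)={a * D**b:.3e} kg")
```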

  19. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    NASA Astrophysics Data System (ADS)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to the conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud tracking algorithm, followed by conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of the shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does this affect the results of the stochastic model? Is the memory provided by the finite lifetime of individual clouds important for the ensemble statistics? We also test for the minimal information that, given as input to the stochastic model, is able to reproduce the ensemble-mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity: the smaller the grid size, the broader the compound distribution of the sub-grid states.
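
    The compound random process is easy to state concretely: the number of clouds in a grid box is Poisson-distributed, and each cloud's cloud-base mass flux is drawn from the per-cloud distribution (here, the simple exponential/Boltzmann case). The Python sketch below, in arbitrary flux units, also reproduces the scale-adaptivity: shrinking the grid box (fewer expected clouds) broadens the normalized sub-grid distribution:

```python
import numpy as np

rng = np.random.default_rng(2)

def subgrid_mass_flux(mean_M, mean_m, n_samples):
    """Compound Poisson model: N clouds ~ Poisson(<M>/<m>), each cloud's
    cloud-base mass flux ~ exponential (Boltzmann) with mean <m>;
    the grid-box flux is their sum."""
    lam = mean_M / mean_m                       # expected number of clouds
    N = rng.poisson(lam, size=n_samples)
    return np.array([rng.exponential(mean_m, size=n).sum() for n in N])

# Scale adaptivity: smaller grid boxes hold fewer clouds, so the
# normalized distribution of the sub-grid flux broadens (rel. std ~ sqrt(2/lam)).
for mean_M in (100.0, 10.0):                    # total flux scales with box area
    M = subgrid_mass_flux(mean_M, mean_m=1.0, n_samples=20000)
    print(f"<M>={mean_M:6.1f}: relative std = {M.std() / M.mean():.2f}")
```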

  20. Thermodynamics of phase-separating nanoalloys: Single particles and particle assemblies

    NASA Astrophysics Data System (ADS)

    Fèvre, Mathieu; Le Bouar, Yann; Finel, Alphonse

    2018-05-01

    The aim of this paper is to investigate the consequences of finite-size effects on the thermodynamics of nanoparticle assemblies and isolated particles. We consider a binary phase-separating alloy with a negligible atomic size mismatch, and equilibrium states are computed using off-lattice Monte Carlo simulations in several thermodynamic ensembles. First, a semi-grand-canonical ensemble is used to describe infinite assemblies of particles with the same size. When decreasing the particle size, we obtain a significant decrease of the solid/liquid transition temperatures as well as a growing asymmetry of the solid-state miscibility gap related to surface segregation effects. Second, a canonical ensemble is used to analyze the thermodynamic equilibrium of finite monodisperse particle assemblies. Using a general thermodynamic formulation, we show that a particle assembly may split into two subassemblies of identical particles. Moreover, if the overall average canonical concentration belongs to a discrete spectrum, the subassembly concentrations are equal to the semi-grand-canonical equilibrium ones. We also show that the equilibrium of a particle assembly with a prescribed size distribution combines a size effect and the fact that a given particle size assembly can adopt two configurations. Finally, we have considered the thermodynamics of an isolated particle to analyze whether a phase separation can be defined within a particle. When studying rather large nanoparticles, we found that the region in which a two-phase domain can be identified inside a particle is well below the bulk phase diagram, but the concentration of the homogeneous core remains very close to the bulk solubility limit.

  1. Effect of Data Assimilation Parameters on The Optimized Surface CO2 Flux in Asia

    NASA Astrophysics Data System (ADS)

    Kim, Hyunjung; Kim, Hyun Mee; Kim, Jinwoong; Cho, Chun-Ho

    2018-02-01

    In this study, CarbonTracker, an inverse modeling system based on the ensemble Kalman filter, was used to evaluate the effects of data assimilation parameters (assimilation window length and ensemble size) on the estimation of surface CO2 fluxes in Asia. Several experiments with different parameters were conducted, and the results were verified using CO2 concentration observations. The assimilation window lengths tested were 3, 5, 7, and 10 weeks, and the ensemble sizes were 100, 150, and 300; a total of 12 experiments using combinations of these parameters were therefore conducted. The experimental period was from January 2006 to December 2009. Differences between the optimized surface CO2 fluxes of the experiments were largest in the Eurasian Boreal (EB) area, followed by Eurasian Temperate (ET) and Tropical Asia (TA), and were larger in boreal summer than in boreal winter. The effect of ensemble size on the optimized biosphere flux is larger than the effect of the assimilation window length in Asia as a whole, but their relative importance varies by region. The optimized biosphere flux was more sensitive to the assimilation window length in EB, whereas it was sensitive to the ensemble size as well as the assimilation window length in ET. The larger the ensemble size and the shorter the assimilation window length, the larger the uncertainty (i.e., ensemble spread) of the optimized surface CO2 fluxes. A 10-week assimilation window and an ensemble size of 300 were the optimal configuration for CarbonTracker in the Asian region, based on several verifications using CO2 concentration measurements.

  2. Scalable and balanced dynamic hybrid data assimilation

    NASA Astrophysics Data System (ADS)

    Kauranne, Tuomo; Amour, Idrissa; Gunia, Martin; Kallio, Kari; Lepistö, Ahti; Koponen, Sampsa

    2017-04-01

    Scalability of complex weather forecasting suites is dependent on the technical tools available for implementing highly parallel computational kernels, but to an equally large extent also on the dependence patterns between various components of the suite, such as observation processing, data assimilation and the forecast model. Scalability is a particular challenge for 4D variational assimilation methods that necessarily couple the forecast model into the assimilation process and subject this combination to an inherently serial quasi-Newton minimization process. Ensemble-based assimilation methods are naturally more parallel, but large models force ensemble sizes to be small, which results in poor assimilation accuracy, somewhat akin to shooting with a shotgun in a million-dimensional space. The Variational Ensemble Kalman Filter (VEnKF) is an ensemble method that can attain the accuracy of 4D variational data assimilation with a small ensemble size. It achieves this by processing a Gaussian approximation of the current error covariance distribution, instead of a set of ensemble members, analogously to the Extended Kalman Filter (EKF). Ensemble members are re-sampled from a new approximation of that Gaussian distribution every time a new set of observations is processed, which makes VEnKF a dynamic assimilation method. After this a smoothing step is applied that turns VEnKF into a dynamic Variational Ensemble Kalman Smoother (VEnKS). In this smoothing step, the same process is iterated with frequent re-sampling of the ensemble, but now using past iterations as surrogate observations, until the end result is a smooth and balanced model trajectory. In principle, VEnKF could suffer from similar scalability issues as 4D-Var. However, this can be avoided by isolating the forecast model completely from the minimization process: the latter is implemented as a wrapper code whose only link to the model is a call for many totally independent model runs, each of them implemented as a parallel model run itself. The only bottleneck in the process is the gathering and scattering of initial and final model state snapshots before and after the parallel runs, which requires a very efficient and low-latency communication network. However, the volume of data communicated is small, and the intervening minimization steps are only 3D-Var, so their computational load is negligible compared with the fully parallel model runs. We present example results of the scalable VEnKF with the 4D lake and shallow-sea model COHERENS, simultaneously assimilating continuous in situ measurements at a single point and infrequent satellite images that cover a whole lake.
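
    A single VEnKF-style cycle can be sketched compactly: re-sample an ensemble from the current Gaussian approximation, run the members as fully independent model integrations (the embarrassingly parallel step emphasized above), and update the Gaussian from the results. The Python toy below substitutes a plain Kalman analysis for the paper's variational minimization and smoothing, so it is a structural sketch only:

```python
import numpy as np

rng = np.random.default_rng(3)

def venkf_cycle(mean, cov, members, model, H, y, R):
    """One VEnKF-style cycle (sketch): re-sample the ensemble from the
    current Gaussian approximation, propagate each member independently,
    then update mean/covariance with a standard Kalman analysis."""
    ens = rng.multivariate_normal(mean, cov, size=members).T   # (n, m)
    ens = np.apply_along_axis(model, 0, ens)                   # parallel in practice
    xf = ens.mean(axis=1)
    Pf = np.cov(ens)
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)
    xa = xf + K @ (y - H @ xf)
    Pa = (np.eye(len(xf)) - K @ H) @ Pf
    return xa, Pa

# Toy 2-variable rotation model, observing only the first component.
model = lambda x: np.array([[0.99, 0.1], [-0.1, 0.99]]) @ x
H = np.array([[1.0, 0.0]]); R = np.array([[0.05]])
mean, cov = np.zeros(2), np.eye(2)
mean, cov = venkf_cycle(mean, cov, members=50, model=model, H=H, y=np.array([1.0]), R=R)
print(mean.round(3), np.diag(cov).round(3))
```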

  3. Ensemble representations: effects of set size and item heterogeneity on average size perception.

    PubMed

    Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W

    2013-02-01

    Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean set size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
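
    The limited-capacity sampling account is straightforward to simulate: an observer who averages only a few randomly attended items makes errors whose spread grows with both item heterogeneity and set size. A short Python sketch (the capacity, set sizes, and size distributions are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_size_judgment(sizes, capacity=4):
    """Limited-capacity account: the observer averages only a small random
    subsample of the display rather than all items in parallel."""
    k = min(capacity, len(sizes))
    return rng.choice(sizes, size=k, replace=False).mean()

def judgment_error(set_size, heterogeneity, trials=20000):
    errs = []
    for _ in range(trials):
        sizes = rng.normal(1.0, heterogeneity, size=set_size)
        errs.append(mean_size_judgment(sizes) - sizes.mean())
    return np.std(errs)

# Error grows with heterogeneity and, for heterogeneous displays,
# with set size -- the qualitative pattern reported above.
for n in (4, 8, 16):
    for s in (0.05, 0.3):
        print(f"set size {n:2d}, item sd {s:.2f}: judgment sd = {judgment_error(n, s):.3f}")
```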

  4. Order and disorder in coupled metronome systems

    NASA Astrophysics Data System (ADS)

    Boda, Sz.; Davidova, L.; Néda, Z.

    2014-04-01

    Metronomes placed on a smoothly rotating disk are used to exemplify order-disorder type phase transitions. The ordered phase corresponds to spontaneously synchronized beats, while the disordered state is when the metronomes swing in an unsynchronized manner. Using a given metronome ensemble, we propose several methods for switching between ordered and disordered states. The system is studied by controlled experiments and a realistic model. The model reproduces the experimental results and allows the study of large ensembles with good statistics. Finite-size effects and the increased fluctuations in the vicinity of the phase-transition point are also successfully reproduced.

  5. Global Weirding? - Using Very Large Ensembles and Extreme Value Theory to assess Changes in Extreme Weather Events Today

    NASA Astrophysics Data System (ADS)

    Otto, F. E. L.; Mitchell, D.; Sippel, S.; Black, M. T.; Dittus, A. J.; Harrington, L. J.; Mohd Saleh, N. H.

    2014-12-01

    A shift in the distribution of socially relevant climate variables, such as daily minimum winter temperatures and daily precipitation extremes, has been attributed to anthropogenic climate change for various mid-latitude regions. However, while many process-based arguments also suggest a change in the shape of these distributions, attribution studies demonstrating this have not yet been undertaken. Here we use a very large initial-condition ensemble of ~40,000 members simulating the European winter 2013/2014 with the distributed computing infrastructure of the weather@home project. Two separate scenarios are used: (1) current climate conditions, and (2) a counterfactual "world that might have been" without anthropogenic forcing. Focusing specifically on extreme events, we assess how the estimated parameters of the Generalized Extreme Value (GEV) distribution vary depending on variable type, sampling frequency (daily, monthly, …), and geographical region. We find that the location parameter changes for most variables but, depending on the region and variable, we also find significant changes in the scale and shape parameters. The very large ensemble furthermore allows us to assess whether such findings in the fitted GEV distributions are consistent with an empirical analysis of the model data, and whether the most extreme data still follow a known underlying distribution that in a small sample might otherwise be dismissed as an outlier. The ~40,000-member ensemble is simulated using 12 different SST patterns (1 'observed', and 11 best guesses of SSTs with no anthropogenic warming). The range in SSTs, along with the corresponding changes in the NAO and high-latitude blocking, informs on the dynamics governing some of these extreme events. While strong teleconnection patterns are not found in this particular experiment, the high number of simulated extreme events allows for a more thorough analysis of the dynamics than has been performed before. Combining extreme value theory with very large ensemble simulations therefore allows us to understand the dynamics of changes in extreme events, which is not possible using the former alone, and also shows in which cases statistics combined with smaller ensembles give results as valid as very large initial-condition ensembles.
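
    With ensembles of this size, GEV parameters can be fitted separately to the factual and counterfactual samples, and the tails can additionally be checked empirically. The Python sketch below uses synthetic stand-ins for per-member seasonal maxima; the distribution parameters are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Stand-ins for per-member seasonal maxima from the two scenarios
# (synthetic here; the real study uses ~40,000 members).
factual = stats.genextreme.rvs(c=-0.1, loc=10.5, scale=2.2, size=20000, random_state=rng)
natural = stats.genextreme.rvs(c=-0.1, loc=10.0, scale=2.0, size=20000, random_state=rng)

for name, sample in (("factual", factual), ("natural", natural)):
    c, loc, scale = stats.genextreme.fit(sample)
    # scipy's shape c is the negative of the climatological GEV shape xi.
    print(f"{name}: location={loc:.2f}, scale={scale:.2f}, shape xi={-c:.2f}")

# With very large ensembles, return levels can also be read off empirically:
p = 1.0 / 100                            # a 1-in-100 event
print("empirical 100-season level (factual):", np.quantile(factual, 1 - p).round(2))
```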

  6. Improving precipitation forecast with hybrid 3DVar and time-lagged ensembles in a heavy rainfall event

    NASA Astrophysics Data System (ADS)

    Wang, Yuanbing; Min, Jinzhong; Chen, Yaodeng; Huang, Xiang-Yu; Zeng, Mingjian; Li, Xin

    2017-01-01

    This study evaluates the performance of a three-dimensional variational (3DVar) system and a hybrid data assimilation system using time-lagged ensembles in a heavy rainfall event. The time-lagged ensembles are constructed by sampling from a moving time window of 3 h along a model trajectory, which is economical and easy to implement. The proposed hybrid data assimilation system introduces flow-dependent error covariance derived from the time-lagged ensemble into the variational cost function without significantly increasing computational cost. Single-observation tests are performed to document the characteristics of the hybrid system. The sensitivity of precipitation forecasts to the ensemble covariance weight and localization scale is investigated. Additionally, the time-lagged-ensemble hybrid (TLEn-Var) is evaluated and compared to an ETKF (ensemble transform Kalman filter)-based hybrid assimilation within a continuously cycling framework, through which new hybrid analyses are produced every 3 h over 10 days. The 24 h accumulated precipitation, moisture, and wind are compared between 3DVar and the hybrid assimilation using time-lagged ensembles. Results show that model states and precipitation forecast skill are improved by the hybrid assimilation using time-lagged ensembles compared with 3DVar. Simulation of precipitable water and the structure of the wind are also improved. Cyclonic wind increments are generated near the rainfall center, leading to an improved precipitation forecast. This study indicates that hybrid data assimilation using time-lagged ensembles is a viable alternative or supplement for weather services that have limited computing resources for running large ensembles with complex models.
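
    The construction of the time-lagged ensemble and the hybrid covariance is simple enough to sketch directly: snapshots from a moving window along one trajectory serve as members, and their sample covariance is blended with a static background term. A minimal Python illustration (the window length, blending weight, and toy trajectory are assumptions):

```python
import numpy as np

def time_lagged_ensemble(trajectory, window=7):
    """Build an ensemble from snapshots in a moving time window along a
    single model trajectory (no extra forecasts needed), and form the
    flow-dependent background-error covariance from its perturbations."""
    ens = np.asarray(trajectory[-window:]).T          # (n_state, n_members)
    pert = ens - ens.mean(axis=1, keepdims=True)
    B_ens = pert @ pert.T / (window - 1)
    return ens, B_ens

def hybrid_B(B_static, B_ens, beta=0.5):
    """Hybrid covariance: a weighted blend of static and ensemble parts,
    as used in hybrid 3DVar cost functions."""
    return (1 - beta) * B_static + beta * B_ens

# Toy trajectory of a 3-variable model sampled every 30 min over 3 h.
traj = [np.sin(np.arange(3) + 0.5 * t) for t in range(7)]
ens, B_ens = time_lagged_ensemble(traj)
print(hybrid_B(np.eye(3) * 0.1, B_ens).round(3))
```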

  7. Estimating the Temperature Experienced by Biomass Particles during Fast Pyrolysis Using Microscopic Analysis of Biochars

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, Logan C.; Ciesielski, Peter N.; Jarvis, Mark W.

    Here, biomass particles can experience variable thermal conditions during fast pyrolysis due to differences in their size and morphology, and from local temperature variations within a reactor. These differences lead to increased heterogeneity of the chemical products obtained in the pyrolysis vapors and bio-oil. Here we present a simple, high-throughput method to investigate the thermal history experienced by large ensembles of particles during fast pyrolysis by imaging and quantitative image analysis. We present a correlation between the surface luminance (darkness) of the biochar particle and the highest temperature that it experienced during pyrolysis. Next, we apply this correlation to large, heterogeneous ensembles of char particles produced in a laminar entrained flow reactor (LEFR). The results are used to interpret the actual temperature distributions delivered by the reactor over a range of operating conditions.

  8. A simple new filter for nonlinear high-dimensional data assimilation

    NASA Astrophysics Data System (ADS)

    Tödter, Julian; Kirchgessner, Paul; Ahrens, Bodo

    2015-04-01

    The ensemble Kalman filter (EnKF) and its deterministic variants, mostly square root filters such as the ensemble transform Kalman filter (ETKF), represent a popular alternative to variational data assimilation schemes and are applied in a wide range of operational and research activities. Their forecast step employs an ensemble integration that fully respects the nonlinear nature of the analyzed system. In the analysis step, they implicitly assume the prior state and observation errors to be Gaussian. Consequently, in nonlinear systems, the analysis mean and covariance are biased, and these filters remain suboptimal. In contrast, the fully nonlinear, non-Gaussian particle filter (PF) only relies on Bayes' theorem, which guarantees an exact asymptotic behavior, but because of the so-called curse of dimensionality it is exposed to weight collapse. This work shows how to obtain a new analysis ensemble whose mean and covariance exactly match the Bayesian estimates. This is achieved by a deterministic matrix square root transformation of the forecast ensemble, and subsequently a suitable random rotation that significantly contributes to filter stability while preserving the required second-order statistics. The forecast step remains as in the ETKF. The proposed algorithm, which is fairly easy to implement and computationally efficient, is referred to as the nonlinear ensemble transform filter (NETF). The properties and performance of the proposed algorithm are investigated via a set of Lorenz experiments. They indicate that such a filter formulation can increase the analysis quality, even for relatively small ensemble sizes, compared to other ensemble filters in nonlinear, non-Gaussian scenarios. Furthermore, localization enhances the potential applicability of this PF-inspired scheme in larger-dimensional systems. Finally, the novel algorithm is coupled to a large-scale ocean general circulation model. The NETF is stable, behaves reasonably and shows a good performance with a realistic ensemble size. The results confirm that, in principle, it can be applied successfully and as simply as the ETKF in high-dimensional problems without further modifications of the algorithm, even though it is only based on the particle weights. This proves that the suggested method constitutes a useful filter for nonlinear, high-dimensional data assimilation, and is able to overcome the curse of dimensionality even in deterministic systems.
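
    The core of the NETF analysis step is a deterministic square-root transform built from the particle-filter weights. The Python sketch below implements that transform; the mean-preserving random rotation described above is omitted for brevity, and the toy dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)

def netf_analysis(X, y, H, R_inv):
    """NETF analysis step (sketch): particle-filter weights give the
    Bayesian mean/covariance, imposed on the ensemble by a deterministic
    square-root transform. (The paper additionally applies a
    mean-preserving random rotation for stability, omitted here.)"""
    n, m = X.shape
    d = y[:, None] - H @ X                       # innovations per member
    logw = -0.5 * np.einsum("ij,ij->j", d, R_inv @ d)
    w = np.exp(logw - logw.max()); w /= w.sum()
    xa = X @ w                                   # Bayesian posterior mean
    A = np.diag(w) - np.outer(w, w)              # annihilates the 1-vector
    vals, vecs = np.linalg.eigh(A)
    # sqrt(m) matches the 1/m ensemble-covariance convention.
    T = np.sqrt(m) * vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.T
    Xp = X - X.mean(axis=1, keepdims=True)
    return xa[:, None] + Xp @ T                  # analysis ensemble

# Toy example: 3 state variables, first two observed, 10 members.
X = rng.normal(size=(3, 10))
H = np.eye(2, 3); R_inv = np.eye(2) / 0.1
Xa = netf_analysis(X, rng.normal(size=2), H, R_inv)
print("analysis mean:", Xa.mean(axis=1).round(3))
```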

  9. Stochastic dynamics and mechanosensitivity of myosin II minifilaments

    NASA Astrophysics Data System (ADS)

    Albert, Philipp J.; Erdmann, Thorsten; Schwarz, Ulrich S.

    2014-09-01

    Tissue cells are in a state of permanent mechanical tension that is maintained mainly by myosin II minifilaments, which are bipolar assemblies of tens of myosin II molecular motors contracting actin networks and bundles. Here we introduce a stochastic model for myosin II minifilaments as two small myosin II motor ensembles engaging in a stochastic tug-of-war. Each of the two ensembles is described by the parallel cluster model that allows us to use exact stochastic simulations and at the same time to keep important molecular details of the myosin II cross-bridge cycle. Our simulation and analytical results reveal a strong dependence of myosin II minifilament dynamics on environmental stiffness that is reminiscent of the cellular response to substrate stiffness. For small stiffness, minifilaments form transient crosslinks exerting short spikes of force with negligible mean. For large stiffness, minifilaments form near permanent crosslinks exerting a mean force which hardly depends on environmental elasticity. This functional switch arises because dissociation after the power stroke is suppressed by force (catch bonding) and because ensembles can no longer perform the power stroke at large forces. Symmetric myosin II minifilaments perform a random walk with an effective diffusion constant which decreases with increasing ensemble size, as demonstrated for rigid substrates with an analytical treatment.

  10. Framework for cascade size calculations on random networks

    NASA Astrophysics Data System (ADS)

    Burkholz, Rebekka; Schweitzer, Frank

    2018-04-01

    We present a framework to calculate the cascade size evolution for a large class of cascade models on random network ensembles in the limit of infinite network size. Our method is exact and applies to network ensembles with almost arbitrary degree distribution, degree-degree correlations, and, in the case of threshold models, arbitrary threshold distribution. With our approach, we shift the perspective from the known branching process approximations to the iterative update of suitable probability distributions. Such distributions are key to capturing cascade dynamics that involve possibly continuous quantities and that depend on the cascade history, e.g., if load is accumulated over time. As a proof of concept, we provide two examples: (a) constant-load models, which cover many of the analytically tractable cascade models, and, as a highlight, (b) a fiber bundle model that was not tractable by branching process approximations before. Our derivations cover the whole cascade dynamics, not only their steady state. This allows us to include interventions in time or further model complexity in the analysis.

  11. Advanced ensemble modelling of flexible macromolecules using X-ray solution scattering.

    PubMed

    Tria, Giancarlo; Mertens, Haydyn D T; Kachala, Michael; Svergun, Dmitri I

    2015-03-01

    Dynamic ensembles of macromolecules mediate essential processes in biology. Understanding the mechanisms driving the function and molecular interactions of 'unstructured' and flexible molecules requires alternative approaches to those traditionally employed in structural biology. Small-angle X-ray scattering (SAXS) is an established method for structural characterization of biological macromolecules in solution, and is directly applicable to the study of flexible systems such as intrinsically disordered proteins and multi-domain proteins with unstructured regions. The Ensemble Optimization Method (EOM) [Bernadó et al. (2007). J. Am. Chem. Soc. 129, 5656-5664] was the first approach introducing the concept of ensemble fitting of the SAXS data from flexible systems. In this approach, a large pool of macromolecules covering the available conformational space is generated and a sub-ensemble of conformers coexisting in solution is selected guided by the fit to the experimental SAXS data. This paper presents a series of new developments and advancements to the method, including significantly enhanced functionality and also quantitative metrics for the characterization of the results. Building on the original concept of ensemble optimization, the algorithms for pool generation have been redesigned to allow for the construction of partially or completely symmetric oligomeric models, and the selection procedure was improved to refine the size of the ensemble. Quantitative measures of the flexibility of the system studied, based on the characteristic integral parameters of the selected ensemble, are introduced. These improvements are implemented in the new EOM version 2.0, and the capabilities as well as inherent limitations of the ensemble approach in SAXS, and of EOM 2.0 in particular, are discussed.

  12. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.

  13. A New Look into the Effect of Large Drops on Radiative Transfer Process

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander

    2003-01-01

    Recent studies indicate that a cloudy atmosphere absorbs more solar radiation than any current 1D or 3D radiation model can predict. The excess absorption is not large, perhaps 10-15 W/sq m or less, but any such systematic bias is of concern since radiative transfer models are assumed to be sufficiently accurate for remote sensing applications and climate modeling. The most natural explanation would be that models do not capture real 3D cloud structure and, as a consequence, their photon path lengths are too short. However, extensive calculations, using increasingly realistic 3D cloud structures, failed to produce photon paths long enough to explain the excess absorption. Other possible explanations have also been unsuccessful so, at this point, conventional models seem to offer no solution to this puzzle. The weakest link in conventional models is the way a size distribution of cloud particles is mathematically handled. Basically, real particles are replaced with a single average particle. This "ensemble assumption" assumes that all particle sizes are well represented in any given elementary volume. But the concentration of larger particles can be so low that this assumption is significantly violated. We show how a different mathematical route, using the concept of a cumulative distribution, avoids the ensemble assumption. The cumulative distribution has jumps, or steps, corresponding to the rarer sizes. These jumps result in an additional term, a kind of Green's function, in the solution of the radiative transfer equation. Solving the cloud radiative transfer equation with the measured particle distributions, described in a cumulative rather than an ensemble fashion, may lead to increased cloud absorption of the magnitude observed.

  14. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected-graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  15. A conditional approach to determining the effect of anthropogenic climate change on very rare events.

    NASA Astrophysics Data System (ADS)

    Wehner, Michael; Pall, Pardeep; Zarzycki, Colin; Stone, Daithi

    2016-04-01

    Probabilistic extreme event attribution is especially difficult for weather events that are caused by extremely rare large-scale meteorological patterns. Traditional modeling techniques have involved using ensembles of climate models, either fully coupled or with prescribed ocean and sea ice. Ensemble sizes for the latter case range from several hundred to tens of thousands. However, even if the simulations are constrained by the observed ocean state, the requisite large-scale meteorological pattern may not occur frequently enough, or even at all, in free-running climate model simulations. We present a method to ensure that simulated events similar to the observed event are modeled with enough fidelity that robust statistics can be determined given the large-scale meteorological conditions. By initializing suitably constrained short-term ensemble hindcasts of both the actual weather system and a counterfactual weather system where the human interference in the climate system is removed, the human contribution to the magnitude of the event can be determined. However, the change (if any) in the probability of an event of the observed magnitude is conditional not only on the state of the ocean/sea ice system but also on the prescribed initial conditions determined by the causal large-scale meteorological pattern. We will discuss the implications of this technique through two examples: the 2013 Colorado flood and 2013 Typhoon Haiyan.

  16. Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldenson, N.; Mauger, G.; Leung, L. R.

    Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
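
    The basic variance decomposition behind such a comparison can be sketched in a few lines: within-model spread across members estimates internal variability, while the spread of the single-model means estimates model uncertainty. The abstract does not specify its statistical framework, so the one-way decomposition below, with synthetic numbers, is only a generic illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic projections: 4 models x 5 members each (an "ensemble of
# opportunity" built from small single-model ensembles).
model_means = rng.normal(2.0, 0.6, size=4)          # forced responses differ
data = {m: model_means[m] + rng.normal(0, 0.4, size=5) for m in range(4)}

within = np.mean([np.var(v, ddof=1) for v in data.values()])   # internal variability
between = np.var([v.mean() for v in data.values()], ddof=1)    # model uncertainty
print(f"internal variability ~ {within:.2f}, model spread ~ {between:.2f}")
```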

  17. RACORO continental boundary layer cloud investigations. Part I: Case study development and ensemble large-scale forcings

    DOE PAGES

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; ...

    2015-06-19

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60-hour case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in-situ measurements from the RACORO field campaign and remote-sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be ~0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing datasets are derived from the ARM variational analysis, ECMWF forecasts, and a multi-scale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in 'trial' large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.
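
    Fitting lognormal functions to measured size distributions, as described above, is a one-liner with standard tools. A minimal Python sketch with synthetic stand-in diameters (a single mode with hypothetical parameters, not the RACORO data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Stand-in for aircraft-measured particle diameters (nm); one lognormal
# mode with hypothetical parameters.
diam = rng.lognormal(mean=np.log(120.0), sigma=0.45, size=5000)

# Fit a lognormal with the location pinned at zero, the usual convention
# for aerosol size distributions.
shape, loc, scale = stats.lognorm.fit(diam, floc=0)
print(f"median diameter = {scale:.1f} nm, geometric std = {np.exp(shape):.2f}")
```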

  18. RACORO Continental Boundary Layer Cloud Investigations: 1. Case Study Development and Ensemble Large-Scale Forcings

    NASA Technical Reports Server (NTRS)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; hide

    2015-01-01

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, kappa, are derived from observations to be approximately 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  19. RACORO continental boundary layer cloud investigations: 1. Case study development and ensemble large-scale forcings

    NASA Astrophysics Data System (ADS)

    Vogelmann, Andrew M.; Fridlind, Ann M.; Toto, Tami; Endo, Satoshi; Lin, Wuyin; Wang, Jian; Feng, Sha; Zhang, Yunyan; Turner, David D.; Liu, Yangang; Li, Zhijin; Xie, Shaocheng; Ackerman, Andrew S.; Zhang, Minghua; Khairoutdinov, Marat

    2015-06-01

    Observation-based modeling case studies of continental boundary layer clouds have been developed to study cloudy boundary layers, aerosol influences upon them, and their representation in cloud- and global-scale models. Three 60 h case study periods span the temporal evolution of cumulus, stratiform, and drizzling boundary layer cloud systems, representing mixed and transitional states rather than idealized or canonical cases. Based on in situ measurements from the Routine AAF (Atmospheric Radiation Measurement (ARM) Aerial Facility) CLOWD (Clouds with Low Optical Water Depth) Optical Radiative Observations (RACORO) field campaign and remote sensing observations, the cases are designed with a modular configuration to simplify use in large-eddy simulations (LES) and single-column models. Aircraft measurements of aerosol number size distribution are fit to lognormal functions for concise representation in models. Values of the aerosol hygroscopicity parameter, κ, are derived from observations to be 0.10, which are lower than the 0.3 typical over continents and suggestive of a large aerosol organic fraction. Ensemble large-scale forcing data sets are derived from the ARM variational analysis, European Centre for Medium-Range Weather Forecasts, and a multiscale data assimilation system. The forcings are assessed through comparison of measured bulk atmospheric and cloud properties to those computed in "trial" large-eddy simulations, where more efficient run times are enabled through modest reductions in grid resolution and domain size compared to the full-sized LES grid. Simulations capture many of the general features observed, but the state-of-the-art forcings were limited at representing details of cloud onset, and tight gradients and high-resolution transients of importance. Methods for improving the initial conditions and forcings are discussed. The cases developed are available to the general modeling community for studying continental boundary clouds.

  20. Number statistics for β-ensembles of random matrices: Applications to trapped fermions at zero temperature.

    PubMed

    Marino, Ricardo; Majumdar, Satya N; Schehr, Grégory; Vivo, Pierpaolo

    2016-09-01

    Let P_{β}^{(V)}(N_{I}) be the probability that an N×N β-ensemble of random matrices with confining potential V(x) has N_{I} eigenvalues inside an interval I=[a,b] on the real line. We introduce a general formalism, based on the Coulomb gas technique and the resolvent method, to compute analytically P_{β}^{(V)}(N_{I}) for large N. We show that this probability scales for large N as P_{β}^{(V)}(N_{I})≈exp[-βN^{2}ψ^{(V)}(N_{I}/N)], where β is the Dyson index of the ensemble. The rate function ψ^{(V)}(k_{I}), independent of β, is computed in terms of single integrals that can be easily evaluated numerically. The general formalism is then applied to the classical β-Gaussian (I=[-L,L]), β-Wishart (I=[1,L]), and β-Cauchy (I=[-L,L]) ensembles. Expanding the rate function around its minimum, we find that generically the number variance var(N_{I}) exhibits a nonmonotonic behavior as a function of the size of the interval, with a maximum that can be precisely characterized. These analytical results, corroborated by numerical simulations, provide the full counting statistics of many systems where random matrix models apply. In particular, we present results for the full counting statistics of zero-temperature one-dimensional spinless fermions in a harmonic trap.
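
    For modest N, the counting statistics can also be checked by brute force: sample matrices from the ensemble, diagonalize, and count eigenvalues in the interval. A Python sketch for the β = 2 (GUE) case (the matrix size and interval choices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)

def gue_eigs(N):
    """Eigenvalues of an N x N GUE matrix (beta = 2). With this
    normalization the spectrum spans roughly [-2*sqrt(N), 2*sqrt(N)]."""
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (A + A.conj().T) / 2
    return np.linalg.eigvalsh(H)

# Number statistics in a symmetric interval I = [-L, L]; the variance is
# nonmonotonic, shrinking again as I approaches the full spectrum.
N, trials = 100, 400
eigs = np.array([gue_eigs(N) for _ in range(trials)])
for L in (5.0, 12.0, 19.0):
    counts = ((eigs > -L) & (eigs < L)).sum(axis=1)
    print(f"I=[-{L},{L}]: mean N_I = {counts.mean():6.1f}, var(N_I) = {counts.var():.2f}")
```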

  1. Assessing the Impact of Pre-gpm Microwave Precipitation Observations in the Goddard WRF Ensemble Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Chambon, Philippe; Zhang, Sara Q.; Hou, Arthur Y.; Zupanski, Milija; Cheung, Samson

    2013-01-01

    The forthcoming Global Precipitation Measurement (GPM) Mission will provide next-generation precipitation observations from a constellation of satellites. Since precipitation by nature has large variability and low predictability at cloud-resolving scales, the impact of precipitation data on the skill of mesoscale numerical weather prediction (NWP) is largely affected by the characterization of background and observation errors and the representation of nonlinear cloud/precipitation physics in an NWP data assimilation system. We present a data impact study on the assimilation of precipitation-affected microwave (MW) radiances from a pre-GPM satellite constellation using the Goddard WRF Ensemble Data Assimilation System (Goddard WRF-EDAS). A series of assimilation experiments is carried out in a Weather Research and Forecasting (WRF) model domain of 9 km resolution in western Europe. Sensitivities to observation error specifications, background error covariance estimated from ensemble forecasts with different ensemble sizes, and MW channel selections are examined through single-observation assimilation experiments. An empirical bias correction for precipitation-affected MW radiances is developed based on the statistics of radiance innovations in rainy areas. The data impact is assessed by full data assimilation cycling experiments for a storm event that occurred in France in September 2010. Results show that the assimilation of MW precipitation observations from a satellite constellation mimicking GPM has a positive impact on the accumulated rain forecasts verified with surface radar rain estimates. The case study of a convective storm also reveals that the accuracy of ensemble-based background error covariance is limited by sampling errors and model errors such as precipitation displacement and unresolved convective-scale instability.

  2. Estimating the Temperature Experienced by Biomass Particles during Fast Pyrolysis Using Microscopic Analysis of Biochars

    DOE PAGES

    Thompson, Logan C.; Ciesielski, Peter N.; Jarvis, Mark W.; ...

    2017-07-12

    Here, biomass particles can experience variable thermal conditions during fast pyrolysis due to differences in their size and morphology, and from local temperature variations within a reactor. These differences lead to increased heterogeneity of the chemical products obtained in the pyrolysis vapors and bio-oil. Here we present a simple, high-throughput method to investigate the thermal history experienced by large ensembles of particles during fast pyrolysis by imaging and quantitative image analysis. We present a correlation between the surface luminance (darkness) of the biochar particle and the highest temperature that it experienced during pyrolysis. Next, we apply this correlation to large, heterogeneous ensembles of char particles produced in a laminar entrained flow reactor (LEFR). The results are used to interpret the actual temperature distributions delivered by the reactor over a range of operating conditions.

  3. Efficient and Unbiased Sampling of Biomolecular Systems in the Canonical Ensemble: A Review of Self-Guided Langevin Dynamics

    PubMed Central

    Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R.

    2013-01-01

    This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and the self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on optimal application of these methods in simulation studies. SGMD/SGLD has enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at the rate at which molecular dynamics (MD) or Langevin dynamics (LD) cross 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low-frequency motion “borrows” energy from high-frequency degrees of freedom when a barrier is approached and then returns that excess energy after the barrier is crossed. This self-guiding effect also results in accelerated diffusion that enhances conformational sampling efficiency. The resulting ensemble with SGLD deviates in a small way from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post-processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size extensive and can be used to accelerate low-frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling methods or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. PMID:23913991
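
    The essence of SGLD, a guiding force built from a local average of the momentum, fits in a few lines. The Python sketch below runs a 1-D double-well toy and compares barrier crossings with and without self-guiding; the potential, parameters, and crossing-count diagnostic are illustrative choices, not the reference implementation:

```python
import numpy as np

rng = np.random.default_rng(10)

def sgld_trajectory(steps=200000, dt=0.01, gamma=1.0, kT=0.5,
                    t_avg=10.0, lam=1.0):
    """Self-guided Langevin dynamics (sketch) on the double well
    U(x) = (x^2 - 1)^2, unit mass. The guiding force is lam * gamma *
    p_avg, with p_avg a local (exponential moving) average of the
    momentum; lam = 0 recovers ordinary Langevin dynamics."""
    force = lambda x: -4 * x * (x**2 - 1)
    x, p, p_avg = -1.0, 0.0, 0.0
    sigma = np.sqrt(2 * gamma * kT * dt)
    crossings, side = 0, -1
    for _ in range(steps):
        p_avg += (dt / t_avg) * (p - p_avg)      # local momentum average
        p += (force(x) + lam * gamma * p_avg - gamma * p) * dt + sigma * rng.normal()
        x += p * dt
        if x * side < 0:                         # barrier crossing
            crossings += 1; side = -side
    return crossings

# Self-guiding typically enhances barrier crossing relative to plain LD.
for lam in (0.0, 1.0):
    print(f"lambda={lam}: barrier crossings = {sgld_trajectory(lam=lam)}")
```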

  4. Perception of ensemble statistics requires attention.

    PubMed

    Jackson-Nielsen, Molly; Cohen, Michael A; Pitts, Michael A

    2017-02-01

    To overcome inherent limitations in perceptual bandwidth, many aspects of the visual world are represented as summary statistics (e.g., average size, orientation, or density of objects). Here, we investigated the relationship between summary (ensemble) statistics and visual attention. Recently, it was claimed that one ensemble statistic in particular, color diversity, can be perceived without focal attention. However, a broader debate exists over the attentional requirements of conscious perception, and it is possible that some form of attention is necessary for ensemble perception. To test this idea, we employed a modified inattentional blindness paradigm and found that multiple types of summary statistics (color and size) often go unnoticed without attention. In addition, we found attentional costs in dual-task situations, further implicating a role for attention in statistical perception. Overall, we conclude that while visual ensembles may be processed efficiently, some amount of attention is necessary for conscious perception of ensemble statistics. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Revisiting the synoptic-scale predictability of severe European winter storms using ECMWF ensemble reforecasts

    NASA Astrophysics Data System (ADS)

    Pantillon, Florian; Knippertz, Peter; Corsmeier, Ulrich

    2017-10-01

    New insights into the synoptic-scale predictability of 25 severe European winter storms of the 1995-2015 period are obtained using the homogeneous ensemble reforecast dataset from the European Centre for Medium-Range Weather Forecasts. The predictability of the storms is assessed with different metrics including (a) the track and intensity to investigate the storms' dynamics and (b) the Storm Severity Index to estimate the impact of the associated wind gusts. The storms are well predicted by the whole ensemble up to 2-4 days ahead. At longer lead times, the number of members predicting the observed storms decreases and the ensemble average is not clearly defined for the track and intensity. The Extreme Forecast Index and Shift of Tails are therefore computed from the deviation of the ensemble from the model climate. Based on these indices, the model has some skill in forecasting the area covered by extreme wind gusts up to 10 days, which indicates a clear potential for early warnings. However, large variability is found between the individual storms. The poor predictability of outliers appears related to their physical characteristics such as explosive intensification or small size. Longer datasets with more cases would be needed to further substantiate these points.
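
    The Extreme Forecast Index compares the forecast ensemble's CDF with the model-climate CDF, weighting the tails. A minimal Python implementation of the standard integral form (the gust climatology and forecast samples below are synthetic stand-ins):

```python
import numpy as np

def extreme_forecast_index(ens, climate):
    """ECMWF-style Extreme Forecast Index: integrates the departure of the
    forecast CDF from the model-climate CDF, weighted toward the tails.
    EFI ~ 0 for a climatological forecast, approaching +1 (-1) when the
    whole ensemble sits in the upper (lower) extreme of the climate."""
    p = np.linspace(0.005, 0.995, 199)        # avoid the endpoint singularity
    q = np.quantile(climate, p)               # model-climate quantiles
    Ff = (ens[:, None] <= q).mean(axis=0)     # forecast CDF at those quantiles
    integrand = (p - Ff) / np.sqrt(p * (1 - p))
    return (2 / np.pi) * np.sum(np.diff(p) * (integrand[1:] + integrand[:-1]) / 2)

rng = np.random.default_rng(11)
climate = rng.gamma(2.0, 5.0, size=20000)     # hypothetical gust climatology
print("near-climatology :", round(extreme_forecast_index(rng.gamma(2.0, 5.0, 500), climate), 2))
print("severe-storm-like:", round(extreme_forecast_index(rng.gamma(2.0, 9.0, 500), climate), 2))
```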

  6. Formation and evolution of multimodal size distributions of InAs/GaAs quantum dots

    NASA Astrophysics Data System (ADS)

    Pohl, U. W.; Pötschke, K.; Schliwa, A.; Lifshits, M. B.; Shchukin, V. A.; Jesson, D. E.; Bimberg, D.

    2006-05-01

    Self-organized formation and evolution of quantum dot (QD) ensembles with a multimodal size distribution is reported. Such ensembles form after fast deposition near the critical thickness during a growth interruption (GRI) prior to cap layer growth and consist of pure InAs truncated pyramids with heights varying in steps of complete InAs monolayers, thereby creating well-distinguishable sub-ensembles. Ripening during GRI manifests itself by an increase of sub-ensembles of larger QDs at the expense of sub-ensembles of smaller ones, leaving the wetting layer unchanged. The dynamics of the multimodal QD size distribution is theoretically described using a kinetic approach. Starting from a broad distribution of flat QDs, a predominantly vertical growth is found due to strain-induced barriers for nucleation of a next atomic layer on different facets. QDs having initially a shorter base length attain a smaller height, accounting for the experimentally observed sub-ensemble structure. The evolution of the distribution is described by a master equation, which accounts for growth or dissolution of the QDs by mass exchange between the QDs and the adatom sea. The numerical solution is in good agreement with the measured dynamics.

  7. Self-narrowing of size distributions of nanostructures by nucleation antibunching

    NASA Astrophysics Data System (ADS)

    Glas, Frank; Dubrovskii, Vladimir G.

    2017-08-01

    We study theoretically the size distributions of ensembles of nanostructures fed from a nanosize mother phase or a nanocatalyst that contains a limited number of the growth species that form each nanostructure. In such systems, the nucleation probability decreases exponentially after each nucleation event, leading to the so-called nucleation antibunching. Specifically, this effect has been observed in individual nanowires grown in the vapor-liquid-solid mode and greatly affects their properties. By performing numerical simulations over large ensembles of nanostructures as well as developing two different analytical schemes (a discrete and a continuum approach), we show that nucleation antibunching completely suppresses fluctuation-induced broadening of the size distribution. As a result, the variance of the distribution saturates to a time-independent value instead of growing infinitely with time. The size distribution widths and shapes primarily depend on the two parameters describing the degree of antibunching and the nucleation delay required to initiate the growth. The resulting sub-Poissonian distributions are highly desirable for improving size homogeneity of nanowires. On a more general level, this unique self-narrowing effect is expected whenever the growth rate is regulated by a nanophase which is able to nucleate an island much faster than it is refilled from a surrounding macroscopic phase.
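
    The self-narrowing effect can be reproduced with a toy Monte Carlo: each nucleation event depletes the nanocatalyst, suppressing the next event until it refills, in contrast to a memoryless (Poisson) process. The Python sketch below shows the resulting sub-Poissonian variance; the refill rate, consumption cost, and sigmoid nucleation law are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(12)

def grow(n_wires, steps, antibunching=True):
    """Toy nucleation model: each step refills the droplet by one unit;
    a nucleation event consumes 'cost' units. With antibunching, the
    nucleation probability rises steeply with the current filling, so an
    event strongly suppresses the next one until the droplet refills."""
    cost = 20
    fill = np.zeros(n_wires)
    layers = np.zeros(n_wires, dtype=int)
    for _ in range(steps):
        fill += 1.0
        if antibunching:
            prob = 1.0 / (1.0 + np.exp(-(fill - cost)))   # sharp threshold
        else:
            prob = np.full(n_wires, 1.0 / cost)           # memoryless
        nucleate = rng.random(n_wires) < prob
        layers += nucleate
        fill[nucleate] -= cost
    return layers

# Antibunching suppresses the variance of the number of nucleated layers
# across the ensemble (sub-Poissonian), narrowing the size distribution.
for label, anti in (("antibunched ", True), ("Poisson-like", False)):
    L = grow(5000, 2000, anti)
    print(f"{label}: mean layers = {L.mean():6.1f}, variance = {L.var():6.1f}")
```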

  8. Analysis of microstructure-dependent shock dissipation and hot-spot formation in granular metalized explosive

    NASA Astrophysics Data System (ADS)

    Chakravarthy, Sunada; Gonthier, Keith A.

    2016-07-01

    Variations in the microstructure of granular explosives (i.e., particle packing density, size, shape, and composition) can affect their shock sensitivity by altering thermomechanical fields at the particle scale during pore collapse within shocks. If the deformation rate is fast, hot-spots can form, ignite, and interact, resulting in burn at the macro-scale. In this study, a two-dimensional finite and discrete element technique is used to simulate and examine shock-induced dissipation and hot-spot formation within low-density explosives (68%-84% theoretical maximum density (TMD)) consisting of large ensembles of HMX (C4H8N8O8) and aluminum (Al) particles (sizes ~60-360 μm). Emphasis is placed on identifying how the inclusion of Al influences effective shock dissipation and hot-spot fields relative to equivalent ensembles of neat/pure HMX for shocks that are sufficiently strong to eliminate porosity. Spatially distributed hot-spot fields are characterized by their number density and area fraction, enabling their dynamics to be described in terms of nucleation-, growth-, and agglomeration-dominated phases with increasing shock strength. For fixed shock particle speed, predictions indicate that decreasing packing density enhances shock dissipation and hot-spot formation, and that the inclusion of Al increases dissipation relative to neat HMX by pressure-enhanced compaction, resulting in fewer but larger HMX hot-spots. Ensembles having bimodal particle sizes are shown to significantly affect hot-spot dynamics by altering the spatial distribution of hot-spots behind shocks.

  9. The role of internal variability for decadal carbon uptake anomalies in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Spring, Aaron; Hi, Hongmei; Ilyina, Tatiana

    2017-04-01

    The Southern Ocean is a major sink for anthropogenic CO2 emissions and hence plays an essential role in modulating the global carbon cycle and climate change. Previous studies based on observations (e.g., Landschützer et al. 2015) show pronounced decadal variations of carbon uptake in the Southern Ocean in recent decades, and this variability is largely driven by internal climate variability. However, owing to the limited ensemble size of available simulations, the variability of this important ocean sink is still poorly assessed by state-of-the-art Earth system models (ESMs). To assess the internal variability of the carbon sink in the Southern Ocean, we use a large ensemble of 100 simulations based on the Max Planck Institute ESM (MPI-ESM). The large ensemble is generated via perturbed initial conditions in the ocean and atmosphere. Each ensemble member comprises a historical simulation from 1850 to 2005 with an extension to 2100 under the Representative Concentration Pathway (RCP) 4.5 projection. Here we use model output from 1980-2015 for comparison with the available observation-based dataset. We find several ensemble members showing decadal decreasing trends in the carbon sink similar to the trend seen in observations. This result suggests that the MPI-ESM large ensemble is able to reproduce decadal variations of the carbon sink in the Southern Ocean. Moreover, the decreasing trends of the Southern Ocean carbon sink in MPI-ESM arise mainly from the region between 50°S and 60°S. To understand the internal variability of the air-sea carbon fluxes in the Southern Ocean, we further investigate the variability of the underlying processes, such as physical climate variability and ocean biological processes. Our results indicate two main drivers of the decadal decreasing trend of the carbon sink: (i) intensified winds enhance upwelling of old, carbon-rich waters, which increases ocean-surface pCO2; (ii) primary production is reduced in the 50-60°S band, probably induced by reduced euphotic water-column stability, so the biological drawdown of ocean-surface pCO2 is weakened and the ocean is pushed toward carbon outgassing. Landschützer, et al. (2015): The reinvigoration of the Southern Ocean carbon sink, Science, 349, 1221-1224.
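
    The variance decomposition implied above, forced response as the ensemble mean and internal variability as the member-wise residual, can be sketched as follows (shapes and values are synthetic stand-ins for MPI-ESM output):

```python
import numpy as np

rng = np.random.default_rng(2)
members, years = 100, 36                  # e.g. annual means, 1980-2015
# flux: air-sea CO2 flux anomaly, shape (members, years); synthetic data
forced_true = np.linspace(0.0, -0.5, years)
flux = forced_true + rng.normal(0.0, 0.3, size=(members, years))

forced = flux.mean(axis=0)                # ensemble mean isolates the forced signal
internal = flux - forced                  # residual is internal variability

# decadal (10-year) linear trends for each member, in flux units per year
window = 10
t = np.arange(window)
trends = np.array([
    [np.polyfit(t, m[i:i + window], 1)[0] for i in range(years - window + 1)]
    for m in flux
])
print("fraction of members with a negative trend in the first window:",
      (trends[:, 0] < 0).mean())
```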

  10. Practice Makes Perfect?: Effective Practice Instruction in Large Ensembles

    ERIC Educational Resources Information Center

    Prichard, Stephanie

    2012-01-01

    Helping young musicians learn how to practice effectively is a challenge faced by all music educators. This article presents a system of individual music practice instruction that can be seamlessly integrated within large-ensemble rehearsals. Using a step-by-step approach, large-ensemble conductors can teach students to identify and isolate…

  11. Thermal induced carrier's transfer in bimodal size distribution InAs/GaAs quantum dots

    NASA Astrophysics Data System (ADS)

    Ilahi, B.; Alshehri, K.; Madhar, N. A.; Sfaxi, L.; Maaref, H.

    2018-06-01

    This work reports on the investigation of the thermally induced carrier-transfer mechanism in vertically stacked InAs/GaAs quantum dots (QDs) with a bimodal size distribution. A model treating the QDs as a localized-state ensemble (LSE) has been employed to fit the atypical temperature dependence of the photoluminescence (PL) emission energies and linewidth. The results suggest that thermally activated carrier transfer within the large-size QD family occurs through the neighboring smaller-size QDs as an intermediate channel before direct carrier redistribution. The obtained activation energy also suggests a possible contribution of the wetting-layer (WL) continuum states as a second mediating channel for carrier transfer.

  12. Evolution of Precipitation Extremes in Three Large Ensembles of Climate Simulations - Impact of Spatial and Temporal Resolutions

    NASA Astrophysics Data System (ADS)

    Martel, J. L.; Brissette, F.; Mailhot, A.; Wood, R. R.; Ludwig, R.; Frigon, A.; Leduc, M.; Turcotte, R.

    2017-12-01

    Recent studies indicate that the frequency and intensity of extreme precipitation will increase in a future climate due to global warming. In this study, we compare annual maximum precipitation series from three large ensembles of climate simulations at various spatial and temporal resolutions. The first two are at the global scale: the Canadian Earth System Model (CanESM2) 50-member large ensemble (CanESM2-LE) at 2.8° resolution and the Community Earth System Model (CESM1) 40-member large ensemble (CESM1-LE) at 1° resolution. The third ensemble is at the regional scale over both eastern North America and Europe: the Canadian Regional Climate Model (CRCM5) 50-member large ensemble (CRCM5-LE) at 0.11° resolution, driven at its boundaries by the CanESM2-LE. The CRCM5-LE is a new ensemble produced by the ClimEx project (http://www.climex-project.org), a Québec-Bavaria collaboration. Using these three large ensembles, changes in extreme precipitation between the historical (1980-2010) and future (2070-2100) periods are investigated. This yields 1500 simulated years (30 years x 50 members for CanESM2-LE and CRCM5-LE) and 1200 simulated years (30 years x 40 members for CESM1-LE) over each of the historical and future periods. Using these large datasets, the empirical daily (and, for CRCM5-LE, sub-daily) extreme precipitation quantiles for large return periods ranging from 2 to 100 years are computed. Results indicate that daily extreme precipitation will generally increase over most land grid points of both domains according to all three large ensembles. For the CRCM5-LE, the increase in sub-daily extreme precipitation is even larger than that found for daily extreme precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for the service levels of water infrastructure and for public safety.
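
    With member-years pooled as described above, empirical return levels follow directly from the quantiles of the pooled annual maxima; a sketch with synthetic data standing in for the ensemble output:

```python
import numpy as np

rng = np.random.default_rng(3)
# annual precipitation maxima pooled across members, 50 members x 30 years;
# Gumbel samples are a synthetic stand-in for model output (mm/day)
annual_max = rng.gumbel(loc=40.0, scale=10.0, size=50 * 30)

def empirical_return_level(maxima, T):
    """Level exceeded on average once every T years: the (1 - 1/T)
    empirical quantile of the pooled annual maxima."""
    return np.quantile(maxima, 1.0 - 1.0 / T)

for T in (2, 10, 20, 50, 100):
    print(f"{T:>3}-year return level: "
          f"{empirical_return_level(annual_max, T):6.1f} mm/day")
```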

  13. Signal to noise quantification of regional climate projections

    NASA Astrophysics Data System (ADS)

    Li, S.; Rupp, D. E.; Mote, P.

    2016-12-01

    One of the biggest challenges in interpreting climate model outputs for impacts studies and adaptation planning is understanding the sources of disagreement among models (which is often used, imperfectly, as a stand-in for system uncertainty). Internal variability is a primary source of uncertainty in climate projections, especially for precipitation, for which models disagree about even the sign of changes in large areas like the continental US. Taking advantage of a large initial-condition ensemble of regional climate simulations, this study quantifies the magnitude of changes forced by increasing greenhouse gas concentrations relative to internal variability. Results come from a large initial-condition ensemble of regional climate model simulations generated by weather@home, a citizen-science computing platform, in which the climate of the western United States was simulated for the recent past (1985-2014) and future (2030-2059) using a 25-km horizontal resolution regional climate model (HadRM3P) nested in the global atmospheric model HadAM3P. We quantify grid-point-level signal-to-noise ratios not just for the temperature and precipitation responses, but also for the energy and moisture flux terms related to those responses, to provide important insights regarding uncertainty in climate change projections at local and regional scales. These results will aid modelers in determining appropriate ensemble sizes for different climate variables and help users of climate model output interpret climate model projections.
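
    A sketch of a grid-point signal-to-noise calculation of the kind described above, taking the signal as the ensemble-mean change and the noise as the across-member spread of the change (all names and shapes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
members, ny, nx = 200, 40, 60
# seasonal-mean precipitation for two periods, shape (members, ny, nx); synthetic
past = rng.normal(2.0, 0.5, size=(members, ny, nx))
future = past + rng.normal(0.1, 0.5, size=(members, ny, nx))

delta = future - past                  # per-member change
signal = delta.mean(axis=0)            # forced response: ensemble-mean change
noise = delta.std(axis=0, ddof=1)      # internal variability across members
snr = signal / noise

# Where |SNR| is small, even the sign of the projected change is uncertain.
print("fraction of grid points with |SNR| > 1:", (np.abs(snr) > 1).mean())
```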

  14. Ensemble Learning Method for Hidden Markov Models

    DTIC Science & Technology

    2014-12-01

    Ensemble HMM landmine detector: mine signatures vary according to the mine type, mine size, and burial depth. Similarly, clutter signatures vary with soil … We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum …

  15. Functional form for the leading correction to the distribution of the largest eigenvalue in the GUE and LUE

    NASA Astrophysics Data System (ADS)

    Forrester, Peter J.; Trinh, Allan K.

    2018-05-01

    The neighbourhood of the largest eigenvalue λmax in the Gaussian unitary ensemble (GUE) and Laguerre unitary ensemble (LUE) is referred to as the soft edge. It is known that there exists a particular centring and scaling such that the distribution of λmax tends to a universal form, with an error term bounded by 1/N^(2/3). We take up the problem of computing the exact functional form of the leading error term in a large N asymptotic expansion for both the GUE and LUE; two versions of the LUE are considered, one with the parameter a fixed and the other with a proportional to N. Both settings in the LUE case allow for an interpretation in terms of the distribution of a particular weighted path length in a model involving exponential variables on a rectangular grid, as the grid size gets large. We give operator theoretic forms of the corrections, which are corollaries of knowledge of the first two terms in the large N expansion of the scaled kernel and are readily computed using a method due to Bornemann. We also give expressions in terms of the solutions of particular systems of coupled differential equations, which provide an alternative method of computation. Both characterisations are well suited to a thinned generalisation of the original ensemble, whereby each eigenvalue is deleted independently with probability (1 - ξ). In Sec. V, we investigate using simulation the question of whether, upon an appropriate centring and scaling, a wider class of complex Hermitian random matrix ensembles have their leading correction to the distribution of λmax proportional to 1/N^(2/3).
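
    The soft-edge centring and scaling can be checked numerically by sampling GUE matrices; a sketch using the normalization convention stated in the comments (finite-N samples only approximate the limiting Tracy-Widom law):

```python
import numpy as np

rng = np.random.default_rng(5)
N, trials = 200, 500
scaled = np.empty(trials)

for t in range(trials):
    # GUE: H = (X + X^dagger)/2 with X having iid standard complex-normal
    # entries gives N(0,1) diagonals and complex off-diagonals of variance 1.
    X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (X + X.conj().T) / 2
    lmax = np.linalg.eigvalsh(H)[-1]          # eigvalsh sorts ascending
    # soft-edge centring and scaling: (lmax - 2*sqrt(N)) * N^(1/6)
    scaled[t] = (lmax - 2.0 * np.sqrt(N)) * N ** (1.0 / 6.0)

# The limiting law is Tracy-Widom (beta = 2), whose mean is about -1.77.
print("sample mean:", scaled.mean(), " sample std:", scaled.std())
```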

  16. Molecular dynamics simulations using temperature-enhanced essential dynamics replica exchange.

    PubMed

    Kubitzki, Marcus B; de Groot, Bert L

    2007-06-15

    Today's standard molecular dynamics simulations of moderately sized biomolecular systems at full atomic resolution are typically limited to the nanosecond timescale and therefore suffer from limited conformational sampling. Efficient ensemble-preserving algorithms like replica exchange (REX) may alleviate this problem somewhat but are still computationally prohibitive due to the large number of degrees of freedom involved. Aiming at increased sampling efficiency, we present a novel simulation method combining the ideas of essential dynamics and REX. Unlike standard REX, in each replica only a selection of essential collective modes of a subsystem of interest (essential subspace) is coupled to a higher temperature, with the remainder of the system staying at a reference temperature T0. This selective excitation, along with the replica framework, permits efficient approximate ensemble-preserving conformational sampling and allows much larger temperature differences between replicas, thereby considerably enhancing sampling efficiency. Ensemble properties and sampling performance of the method are discussed using dialanine and guanylin test systems, with multi-microsecond molecular dynamics simulations of these test systems serving as references.

  17. Boiling point determination using adiabatic Gibbs ensemble Monte Carlo simulations: Application to metals described by embedded-atom potentials

    NASA Astrophysics Data System (ADS)

    Gelb, Lev D.; Chakraborty, Somendra Nath

    2011-12-01

    The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase.

  18. Transition-State Ensembles Navigate the Pathways of Enzyme Catalysis.

    PubMed

    Mickert, Matthias J; Gorris, Hans H

    2018-06-07

    Transition-state theory (TST) provides an important framework for analyzing and explaining the reaction rates of enzymes. TST, however, needs to account for protein dynamic effects and heterogeneities in enzyme catalysis. We have analyzed the reaction rates of β-galactosidase and β-glucuronidase at the single-molecule level by using large arrays of femtoliter-sized chambers. Heterogeneities in individual reaction rates yield information on the intrinsic distribution of the free energy of activation (ΔG‡) in an enzyme ensemble. The broader distribution of ΔG‡ in β-galactosidase compared to β-glucuronidase is attributed to β-galactosidase's multiple catalytic functions as a hydrolase and a transglycosylase. Based on the catalytic mechanism of β-galactosidase, we show that transition-state ensembles not only contribute to enzyme catalysis but can also channel the catalytic pathway to the formation of different products. We conclude that β-galactosidase is an example of natural evolution, where a new catalytic pathway branches off from an established enzyme function. The functional division of work between enzymatic substates explains why the conformational space represented by the enzyme ensemble is larger than the conformational space that can be sampled by any given enzyme molecule during catalysis.

  19. Design and Implementation of a Parallel Multivariate Ensemble Kalman Filter for the Poseidon Ocean General Circulation Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Koblinsky, Chester (Technical Monitor)

    2001-01-01

    A multivariate ensemble Kalman filter (MvEnKF) implemented on a massively parallel computer architecture has been developed for the Poseidon ocean circulation model and tested with a Pacific Basin model configuration. There are about two million prognostic state-vector variables. Parallelism for the data assimilation step is achieved by regionalization of the background-error covariances that are calculated from the phase-space distribution of the ensemble. Each processing element (PE) collects elements of a matrix measurement functional from nearby PEs. To avoid the introduction of spurious long-range covariances associated with finite ensemble sizes, the background-error covariances are given compact support by means of a Hadamard (element-by-element) product with a three-dimensional canonical correlation function. The methodology and the MvEnKF configuration are discussed. It is shown that the regionalization of the background covariances has a negligible impact on the quality of the analyses. The parallel algorithm is very efficient for large numbers of observations but does not scale well beyond 100 PEs at the current model resolution. On a platform with distributed memory, memory rather than speed is the limiting factor.
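
    A sketch of the compact-support idea described above: multiplying the ensemble covariance element-wise (Hadamard product) with a compactly supported correlation function zeroes out the spurious long-range covariances of a finite ensemble. The Gaspari-Cohn function below is a standard choice in the EnKF literature, not necessarily the exact correlation function used for Poseidon.

```python
import numpy as np

def gaspari_cohn(d, c):
    """Gaspari-Cohn (1999) compactly supported correlation; zero beyond 2c."""
    r = np.abs(d) / c
    rho = np.zeros_like(r)
    m = r <= 1.0
    rho[m] = (-0.25 * r[m]**5 + 0.5 * r[m]**4 + 0.625 * r[m]**3
              - 5.0 / 3.0 * r[m]**2 + 1.0)
    m = (r > 1.0) & (r <= 2.0)
    rho[m] = (r[m]**5 / 12.0 - 0.5 * r[m]**4 + 0.625 * r[m]**3
              + 5.0 / 3.0 * r[m]**2 - 5.0 * r[m] + 4.0 - 2.0 / (3.0 * r[m]))
    return rho

rng = np.random.default_rng(6)
n, members = 200, 20                      # 1-D state and a small ensemble
ens = rng.normal(size=(members, n))       # ensemble anomalies, synthetic
P = np.cov(ens, rowvar=False)             # raw sample covariance (rank-deficient)

x = np.arange(n, dtype=float)
dist = np.abs(x[:, None] - x[None, :])    # pairwise grid distances
P_loc = gaspari_cohn(dist, c=10.0) * P    # Hadamard-product localization

print("max |cov| at distance > 20 grid points, before/after:",
      np.abs(P[dist > 20]).max(), np.abs(P_loc[dist > 20]).max())
```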

  20. Extreme Value Analysis of hydro meteorological extremes in the ClimEx Large-Ensemble

    NASA Astrophysics Data System (ADS)

    Wood, R. R.; Martel, J. L.; Willkofer, F.; von Trentini, F.; Schmid, F. J.; Leduc, M.; Frigon, A.; Ludwig, R.

    2017-12-01

    Many studies show an increase in the magnitude and frequency of hydrological extreme events in the course of climate change. However, the contribution of natural variability to the magnitude and frequency of hydrological extreme events is not yet settled. A reliable estimate of extreme events is of great interest for water management and public safety. In the course of the ClimEx Project (www.climex-project.org), a new single-model large ensemble was created by dynamically downscaling the CanESM2 large ensemble with the Canadian Regional Climate Model version 5 (CRCM5) for a European domain and a northeastern North American domain. The ClimEx 50-member large ensemble (CRCM5 driven by the CanESM2 large ensemble) makes a thorough analysis of natural variability in extreme events possible. Are current extreme-value statistical methods able to account for natural variability? How large is the natural variability of, e.g., a 1/100-year return period derived from a 50-member large ensemble for Europe and northeastern North America? We address these questions by fitting various generalized extreme value (GEV) distributions to the ClimEx large ensemble. Various return levels (5-, 10-, 20-, 30-, 60- and 100-year), based on time series of various lengths (20, 30, 50, 100 and 1500 years), are analyzed for the maximum one-day precipitation (RX1d), the maximum three-hourly precipitation (RX3h), and streamflow for selected catchments in Europe. The long combined record of the ClimEx ensemble (7500 years) allows a first reliable estimate of the magnitude and frequency of such extreme events.
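
    A sketch of the GEV-fitting step named above using scipy.stats.genextreme (note that SciPy's shape parameter c corresponds to minus the usual GEV shape ξ); the data are synthetic stand-ins for RX1d annual maxima:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# annual maxima of one-day precipitation (RX1d), synthetic stand-in data
rx1d = stats.genextreme.rvs(c=-0.1, loc=35.0, scale=8.0,
                            size=1500, random_state=rng)

c, loc, scale = stats.genextreme.fit(rx1d)   # maximum-likelihood GEV fit

def return_level(T):
    # level exceeded with probability 1/T in any given year
    return stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

for T in (5, 10, 20, 30, 60, 100):
    print(f"{T:>3}-year return level: {return_level(T):6.1f} mm/day")
```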

  1. A Statistical Description of Neural Ensemble Dynamics

    PubMed Central

    Long, John D.; Carmena, Jose M.

    2011-01-01

    The growing use of multi-channel neural recording techniques in behaving animals has produced rich datasets that hold immense potential for advancing our understanding of how the brain mediates behavior. One limitation of these techniques is that they do not provide important information about the underlying anatomical connections among the recorded neurons within an ensemble. Inferring these connections is often intractable because the set of possible interactions grows exponentially with ensemble size. This is a fundamental challenge one confronts when interpreting these data. Unfortunately, the combination of expert knowledge and ensemble data is often insufficient for selecting a unique model of these interactions. Our approach shifts away from modeling the network diagram of the ensemble toward analyzing changes in the dynamics of the ensemble as they relate to behavior. Our contribution consists of adapting techniques from signal processing and Bayesian statistics to track the dynamics of ensemble data on time-scales comparable with behavior. We employ a Bayesian estimator to weigh prior information against the available ensemble data, and use an adaptive quantization technique to aggregate poorly estimated regions of the ensemble data space. Importantly, our method is capable of detecting changes in both the magnitude and structure of correlations among neurons missed by firing rate metrics. We show that this method is scalable across a wide range of time-scales and ensemble sizes. Lastly, the performance of this method on both simulated and real ensemble data is used to demonstrate its utility. PMID:22319486

  2. An adaptive incremental approach to constructing ensemble classifiers: application in an information-theoretic computer-aided decision system for detection of masses in mammograms.

    PubMed

    Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D

    2009-07-01

    Ensemble classifiers have been shown to be efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC = 0.905 +/- 0.024) in performance as compared to the original IT-CAD system (AUC = 0.865 +/- 0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters.
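
    The adaptive incremental idea, growing the ensemble until held-out performance stops improving, can be sketched generically as below; the base classifier, bootstrap resampling, and plateau rule are placeholders, not the authors' exact IT-CAD procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(8)
members = []
best_auc, patience, stall = 0.0, 5, 0

# Add one subclassifier at a time; stop once validation AUC has plateaued.
while stall < patience and len(members) < 100:
    idx = rng.integers(0, len(X_dev), len(X_dev))        # bootstrap resample
    clf = DecisionTreeClassifier(max_depth=5).fit(X_dev[idx], y_dev[idx])
    members.append(clf)
    prob = np.mean([m.predict_proba(X_val)[:, 1] for m in members], axis=0)
    auc = roc_auc_score(y_val, prob)
    stall = 0 if auc > best_auc + 1e-4 else stall + 1
    best_auc = max(best_auc, auc)

print(f"ensemble size: {len(members)}, validation AUC = {best_auc:.3f}")
```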

  3. Growing into and out of the bouncing barrier in planetesimal formation

    NASA Astrophysics Data System (ADS)

    Kruss, Maximilian; Teiser, Jens; Wurm, Gerhard

    2017-04-01

    Recent laboratory studies examined the robustness of the bouncing barrier in planetesimal formation using an ensemble of pre-formed compact mm-sized aggregates. Here we show that a bouncing barrier indeed evolves self-consistently by hit-and-stick growth from an ensemble of smaller dust aggregates. In addition, we feed small aggregates to an ensemble of larger bouncing aggregates. The stickiness temporarily increases, but the final number of aggregates still bouncing remains the same. However, feeding on the small-particle supply, the bouncing aggregates grow in size. This suggests that in the presence of a dust reservoir, aggregates grow into but also out of a bouncing barrier at larger size.

  4. The Classification of Universes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bjorken, J

    2004-04-09

    We define a universe as the contents of a spacetime box with comoving walls, large enough to contain essentially all phenomena that can be conceivably measured. The initial time is taken as the epoch when the lowest CMB modes undergo horizon crossing, and the final time taken when the wavelengths of CMB photons are comparable with the Hubble scale, i.e. with the nominal size of the universe. This allows the definition of a local ensemble of similarly constructed universes, using only modest extrapolations of the observed behavior of the cosmos. We then assume that further out in spacetime, similar universes can be constructed but containing different standard model parameters. Within this multiverse ensemble, it is assumed that the standard model parameters are strongly correlated with size, i.e. with the value of the inverse Hubble parameter at the final time, in a manner as previously suggested. This allows an estimate of the range of sizes which allow life as we know it, and invites a speculation regarding the most natural distribution of sizes. If small sizes are favored, this in turn allows some understanding of the hierarchy problems of particle physics. Subsequent sections of the paper explore other possible implications. In all cases, the approach is as bottom-up and as phenomenological as possible, and suggests that theories of the multiverse so constructed may in fact lay some claim to being scientific.

  5. Size-Dependent Coherent-Phonon Plasmon Modulation and Deformation Characterization in Gold Bipyramids and Nanojavelins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirschner, Matthew S.; Lethiec, Clotilde M.; Lin, Xiao-Min

    2016-04-04

    Localized surface plasmon resonances (LSPRs) arising from metallic nanoparticles offer an array of prospective applications that range from chemical sensing to biotherapies. Bipyramidal particles exhibit particularly narrow ensemble LSPR resonances that reflect small dispersity of size and shape but until recently were only synthetically accessible over a limited range of sizes with corresponding aspect ratios. Narrow size dispersion offers the opportunity to examine ensemble dynamical phenomena such as coherent phonons that induce periodic oscillations of the LSPR energy. Here, we characterize transient optical behavior of a large range of gold bipyramid sizes, as well as higher-aspect-ratio nanojavelin ensembles, with specific attention to the lowest-order acoustic phonon mode of these nanoparticles. We report coherent phonon-driven oscillations of the LSPR position for particles with resonances spanning 670 to 1330 nm. Nanojavelins were shown to behave similarly to bipyramids but offer the prospect of separate control over LSPR energy and coherent phonon oscillation period. We develop a new methodology for quantitatively measuring mechanical expansion caused by photogenerated coherent phonons. Using this method, we find an elongation of approximately 1% per photon absorbed per unit cell and that particle expansion along the lowest frequency acoustic phonon mode is linearly proportional to excitation fluence for the fluence range studied. These characterizations provide insight regarding means to manipulate phonon period and transient mechanical deformation.

  6. Ensembl 2004.

    PubMed

    Birney, E; Andrews, D; Bevan, P; Caccamo, M; Cameron, G; Chen, Y; Clarke, L; Coates, G; Cox, T; Cuff, J; Curwen, V; Cutts, T; Down, T; Durbin, R; Eyras, E; Fernandez-Suarez, X M; Gane, P; Gibbins, B; Gilbert, J; Hammond, M; Hotz, H; Iyer, V; Kahari, A; Jekosch, K; Kasprzyk, A; Keefe, D; Keenan, S; Lehvaslaiho, H; McVicker, G; Melsopp, C; Meidl, P; Mongin, E; Pettett, R; Potter, S; Proctor, G; Rae, M; Searle, S; Slater, G; Smedley, D; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Storey, R; Ureta-Vidal, A; Woodwark, C; Clamp, M; Hubbard, T

    2004-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organize biology around the sequences of large genomes. It is a comprehensive and integrated source of annotation of large genome sequences, available via interactive website, web services or flat files. As well as being one of the leading sources of genome annotation, Ensembl is an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements. The facilities of the system range from sequence analysis to data storage and visualization and installations exist around the world both in companies and at academic sites. With a total of nine genome sequences available from Ensembl and more genomes to follow, recent developments have focused mainly on closer integration between genomes and external data.

  7. Idea Bank: Chamber Music within the Large Ensemble

    ERIC Educational Resources Information Center

    Neidlinger, Erica

    2011-01-01

    Many music educators incorporate chamber music in their ensemble programs--an excellent way to promote musical independence. However, they rarely think of the large ensemble as myriad chamber interactions. Rehearsals become more productive when greater responsibility for music-making is placed on the individual student. This article presents some…

  8. High Performance Nuclear Magnetic Resonance Imaging Using Magnetic Resonance Force Microscopy

    DTIC Science & Technology

    2013-12-12

    Micron-Size Ferromagnet. Physical Review Letters, 92(3) 037205 (2004). [22] A. Z. Genack and A. G. Redfield. Theory of nuclear spin diffusion in a … perform spatially resolved scanned-probe studies of spin dynamics in nanoscale ensembles of few electron spins of varying size. Our research culminated …

  9. Stimuli Reduce the Dimensionality of Cortical Activity

    PubMed Central

    Mazzucato, Luca; Fontanini, Alfredo; La Camera, Giancarlo

    2016-01-01

    The activity of ensembles of simultaneously recorded neurons can be represented as a set of points in the space of firing rates. Even though the dimension of this space is equal to the ensemble size, neural activity can be effectively localized on smaller subspaces. The dimensionality of the neural space is an important determinant of the computational tasks supported by the neural activity. Here, we investigate the dimensionality of neural ensembles from the sensory cortex of alert rats during periods of ongoing (inter-trial) and stimulus-evoked activity. We find that dimensionality grows linearly with ensemble size, and grows significantly faster during ongoing activity compared to evoked activity. We explain these results using a spiking network model based on a clustered architecture. The model captures the difference in growth rate between ongoing and evoked activity and predicts a characteristic scaling with ensemble size that could be tested in high-density multi-electrode recordings. Moreover, we present a simple theory that predicts the existence of an upper bound on dimensionality. This upper bound is inversely proportional to the amount of pair-wise correlations and, compared to a homogeneous network without clusters, it is larger by a factor equal to the number of clusters. The empirical estimation of such bounds depends on the number and duration of trials and is well predicted by the theory. Together, these results provide a framework to analyze neural dimensionality in alert animals, its behavior under stimulus presentation, and its theoretical dependence on ensemble size, number of clusters, and correlations in spiking network models. PMID:26924968
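
    A common estimator for the dimensionality of ensemble activity discussed above is the participation ratio of the covariance eigenvalues; the sketch below uses that estimator on synthetic firing rates, and it is not necessarily the paper's exact measure.

```python
import numpy as np

def participation_ratio(rates):
    """Dimensionality of neural activity, rates shaped (time_bins, neurons):
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2 over covariance eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(rates, rowvar=False))
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(9)
T, n = 2000, 64
# a few shared latent factors induce pairwise correlations across the ensemble
shared = rng.normal(size=(T, 4))
rates = shared @ rng.normal(size=(4, n)) + 0.5 * rng.normal(size=(T, n))

print("ensemble size:", n, " estimated dimensionality:",
      round(participation_ratio(rates), 2))
```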

  10. Stimuli Reduce the Dimensionality of Cortical Activity.

    PubMed

    Mazzucato, Luca; Fontanini, Alfredo; La Camera, Giancarlo

    2016-01-01

    The activity of ensembles of simultaneously recorded neurons can be represented as a set of points in the space of firing rates. Even though the dimension of this space is equal to the ensemble size, neural activity can be effectively localized on smaller subspaces. The dimensionality of the neural space is an important determinant of the computational tasks supported by the neural activity. Here, we investigate the dimensionality of neural ensembles from the sensory cortex of alert rats during periods of ongoing (inter-trial) and stimulus-evoked activity. We find that dimensionality grows linearly with ensemble size, and grows significantly faster during ongoing activity compared to evoked activity. We explain these results using a spiking network model based on a clustered architecture. The model captures the difference in growth rate between ongoing and evoked activity and predicts a characteristic scaling with ensemble size that could be tested in high-density multi-electrode recordings. Moreover, we present a simple theory that predicts the existence of an upper bound on dimensionality. This upper bound is inversely proportional to the amount of pair-wise correlations and, compared to a homogeneous network without clusters, it is larger by a factor equal to the number of clusters. The empirical estimation of such bounds depends on the number and duration of trials and is well predicted by the theory. Together, these results provide a framework to analyze neural dimensionality in alert animals, its behavior under stimulus presentation, and its theoretical dependence on ensemble size, number of clusters, and correlations in spiking network models.

  11. Angular-domain scattering interferometry.

    PubMed

    Shipp, Dustin W; Qian, Ruobing; Berger, Andrew J

    2013-11-15

    We present an angular-scattering optical method that is capable of measuring the mean size of scatterers in static ensembles within a field of view less than 20 μm in diameter. Using interferometry, the method overcomes the inability of intensity-based models to tolerate the large speckle grains associated with such small illumination areas. By first estimating each scatterer's location, the method can model between-scatterer interference as well as traditional single-particle Mie scattering. Direct angular-domain measurements provide finer angular resolution than digitally transformed image-plane recordings. This increases sensitivity to size-dependent scattering features, enabling more robust size estimates. The sensitivity of these angular-scattering measurements to various sizes of polystyrene beads is demonstrated. Interferometry also allows recovery of the full complex scattered field, including a size-dependent phase profile in the angular-scattering pattern.

  12. A comparison of breeding and ensemble transform vectors for global ensemble generation

    NASA Astrophysics Data System (ADS)

    Deng, Guo; Tian, Hua; Li, Xiaoli; Chen, Jing; Gong, Jiandong; Jiao, Meiyan

    2012-02-01

    To compare the initial-perturbation techniques using breeding vectors and ensemble transform vectors, three ensemble prediction systems using both initial-perturbation methods but with different ensemble sizes, based on the spectral model T213/L31, are constructed at the National Meteorological Center, China Meteorological Administration (NMC/CMA). A series of ensemble verification scores, such as forecast skill of the ensemble mean, ensemble resolution, and ensemble reliability, are introduced to identify the most important attributes of ensemble forecast systems. The results indicate that the ensemble transform technique is superior to the breeding vector method in terms of the anomaly correlation coefficient (ACC) of the ensemble mean, a deterministic attribute; the root-mean-square error (RMSE) and spread, which are probabilistic attributes; and the continuous ranked probability score (CRPS) and its decomposition. The advantage of the ensemble transform approach is attributed to the orthogonality of its ensemble perturbations as well as its consistency with the data assimilation system. This study may therefore serve as a reference for configuring the best ensemble prediction system for operational use.
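
    Two of the verification scores named above can be sketched directly; the CRPS estimator below is the standard sample-based form, and the data are synthetic (for a reliable ensemble, the RMSE of the ensemble mean and the spread should be comparable).

```python
import numpy as np

rng = np.random.default_rng(10)
members, cases = 15, 500
obs = rng.normal(size=cases)
fcst = obs + rng.normal(0.0, 0.8, size=(members, cases))   # synthetic ensemble

mean = fcst.mean(axis=0)
rmse = np.sqrt(np.mean((mean - obs) ** 2))                 # ensemble-mean RMSE
spread = np.sqrt(fcst.var(axis=0, ddof=1).mean())          # average spread

def crps(ens, y):
    """Sample-based CRPS: E|X - y| - 0.5 E|X - X'| for one observation y."""
    ens = np.asarray(ens)
    term1 = np.abs(ens - y).mean()
    term2 = np.abs(ens[:, None] - ens[None, :]).mean()
    return term1 - 0.5 * term2

mean_crps = np.mean([crps(fcst[:, i], obs[i]) for i in range(cases)])
print(f"RMSE {rmse:.3f}  spread {spread:.3f}  CRPS {mean_crps:.3f}")
```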

  13. An adaptive incremental approach to constructing ensemble classifiers: Application in an information-theoretic computer-aided decision system for detection of masses in mammograms

    PubMed Central

    Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.

    2009-01-01

    Ensemble classifiers have been shown to be efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling of the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC=0.905±0.024) in performance as compared to the original IT-CAD system (AUC=0.865±0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters. PMID:19673196

  14. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    PubMed Central

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350

  15. Force-momentum-based self-guided Langevin dynamics: A rapid sampling method that approaches the canonical ensemble

    NASA Astrophysics Data System (ADS)

    Wu, Xiongwu; Brooks, Bernard R.

    2011-11-01

    The self-guided Langevin dynamics (SGLD) method accelerates conformational searching. It is unique in that it selectively enhances and suppresses molecular motions based on their frequency to accelerate conformational searching without modifying energy surfaces or raising temperatures, and it has been applied to studies of many long-timescale events, such as protein folding. Recent progress in the understanding of the conformational distribution in SGLD simulations makes SGLD also an accurate method for quantitative studies. The SGLD partition function provides a way to convert the SGLD conformational distribution to the canonical ensemble distribution and to calculate ensemble-average properties through reweighting. Based on the SGLD partition function, this work presents a force-momentum-based self-guided Langevin dynamics (SGLDfp) simulation method to directly sample the canonical ensemble. This method includes interaction forces in its guiding force to compensate for the perturbation caused by the momentum-based guiding force so that it can approximately sample the canonical ensemble. Using several example systems, we demonstrate that SGLDfp simulations can approximately maintain the canonical ensemble distribution and significantly accelerate conformational searching. With optimal parameters, SGLDfp and SGLD simulations can cross energy barriers of more than 15 kT and 20 kT, respectively, at rates similar to those at which LD simulations cross energy barriers of 10 kT. The SGLDfp method is size extensive and works well for large systems. For studies where preserving accessible conformational space is critical, such as free energy calculations and protein folding studies, SGLDfp is an efficient approach to search and sample the conformational space.

  16. Ensemble brightening and enhanced quantum yield in size-purified silicon nanocrystals

    DOE PAGES

    Miller, Joseph B.; Van Sickle, Austin R.; Anthony, Rebecca J.; ...

    2012-07-18

    Here, we report on the quantum yield, photoluminescence (PL) lifetime and ensemble photoluminescent stability of highly monodisperse plasma-synthesized silicon nanocrystals (SiNCs) prepared through density-gradient ultracentrifugation in mixed organic solvents. Improved size uniformity leads to a reduction in PL line width and the emergence of entropic order in dry nanocrystal films. We find excellent agreement with the anticipated trends of quantum confinement in nanocrystalline silicon, with a solution quantum yield that is independent of nanocrystal size for the larger fractions but decreases dramatically with size for the smaller fractions. We also find a significant PL enhancement in films assembled from the fractions, and we use a combination of measurement, simulation and modeling to link this ‘brightening’ to a temporally enhanced quantum yield arising from SiNC interactions in ordered ensembles of monodisperse nanocrystals. Using an appropriate excitation scheme, we exploit this enhancement to achieve photostable emission.

  17. Ensemble Kalman filter inference of spatially-varying Manning's n coefficients in the coastal ocean

    NASA Astrophysics Data System (ADS)

    Siripatana, Adil; Mayo, Talea; Knio, Omar; Dawson, Clint; Maître, Olivier Le; Hoteit, Ibrahim

    2018-07-01

    Ensemble Kalman filtering (EnKF) is an established framework for large-scale state estimation problems. EnKFs can also be used for state-parameter estimation via the so-called "Joint-EnKF" approach: the state vector is simply augmented with the parameters to be estimated, and invariant dynamics are assigned for the time evolution of the parameters. In this contribution, we investigate the efficiency of the Joint-EnKF for estimating spatially varying Manning's n coefficients used to define the bottom roughness in the shallow water equations (SWEs) of a coastal ocean model. Observing System Simulation Experiments (OSSEs) are conducted using the ADvanced CIRCulation (ADCIRC) model, which solves a modified form of the shallow water equations. A deterministic EnKF, the Singular Evolutive Interpolated Kalman (SEIK) filter, is used to estimate a vector of Manning's n coefficients defined at the model nodal points by assimilating synthetic water-elevation data. It is found that with a reasonable ensemble size, O(10), the filter's estimate converges to the reference Manning's field. To enhance performance, we further reduced the dimension of the parameter search space through a Karhunen-Loève (KL) expansion. We also iterated on the filter update step to better account for the nonlinearity of the parameter estimation problem. We study the sensitivity of the system to the ensemble size, localization scale, number of retained KL modes, and number of iterations. The performance of the proposed framework in terms of estimation accuracy suggests that a well-tuned Joint-EnKF provides a promising, robust approach to inferring spatially varying seabed roughness parameters in the context of coastal ocean modeling.
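
    A minimal sketch of the Joint-EnKF augmentation described above, using a generic stochastic EnKF update on a toy problem (the paper uses the SEIK filter with ADCIRC; the dimensions and observation operator here are invented):

```python
import numpy as np

rng = np.random.default_rng(11)
n_state, n_param, members = 20, 3, 40
obs_err = 0.1

# Augmented ensemble: rows 0..n_state-1 are state, the rest are parameters.
# Parameters have invariant dynamics, i.e. only the update step changes them.
ens = np.vstack([rng.normal(size=(n_state, members)),
                 rng.normal(1.0, 0.5, size=(n_param, members))])

H = np.zeros((5, n_state + n_param))       # observe 5 state components only
H[np.arange(5), np.arange(0, n_state, 4)] = 1.0
y = rng.normal(size=5)                     # synthetic observations

# Stochastic EnKF update applied to the augmented ensemble
A = ens - ens.mean(axis=1, keepdims=True)
P = A @ A.T / (members - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_err**2 * np.eye(5))
for j in range(members):
    y_pert = y + obs_err * rng.normal(size=5)   # perturbed observations
    ens[:, j] += K @ (y_pert - H @ ens[:, j])

# Cross-covariances between observed state and parameters drive the
# parameter update even though the parameters are never observed directly.
print("posterior parameter means:", ens[n_state:].mean(axis=1).round(3))
```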

  18. Ensemble modeling of very small ZnO nanoparticles.

    PubMed

    Niederdraenk, Franziska; Seufert, Knud; Stahl, Andreas; Bhalerao-Panajkar, Rohini S; Marathe, Sonali; Kulkarni, Sulabha K; Neder, Reinhard B; Kumpf, Christian

    2011-01-14

    The detailed structural characterization of nanoparticles is a very important issue since it enables a precise understanding of their electronic, optical and magnetic properties. Here we introduce a new method for modeling the structure of very small particles by means of powder X-ray diffraction. Using thioglycerol-capped ZnO nanoparticles with a diameter of less than 3 nm as an example, we demonstrate that our ensemble modeling method is superior to standard XRD methods such as Rietveld refinement. Besides fundamental properties (size, anisotropic shape and atomic structure), more sophisticated properties such as imperfections in the lattice, a size distribution, and strain and relaxation effects in the particles and, in particular, at their surface (surface relaxation effects) can be obtained. Ensemble properties, i.e., distributions of the particle size and other properties, can also be investigated, which makes this method superior to imaging techniques such as (high-resolution) transmission electron microscopy or atomic force microscopy, in particular for very small nanoparticles. For the particles under study, an excellent agreement of calculated and experimental X-ray diffraction patterns could be obtained with an ensemble of anisotropic polyhedral particles of three dominant sizes, wurtzite structure and a significant relaxation of Zn atoms close to the surface.

  19. Boiling point determination using adiabatic Gibbs ensemble Monte Carlo simulations: application to metals described by embedded-atom potentials.

    PubMed

    Gelb, Lev D; Chakraborty, Somendra Nath

    2011-12-14

    The normal boiling points are obtained for a series of metals as described by the "quantum-corrected Sutton Chen" (qSC) potentials [S.-N. Luo, T. J. Ahrens, T. Çağın, A. Strachan, W. A. Goddard III, and D. C. Swift, Phys. Rev. B 68, 134206 (2003)]. Instead of conventional Monte Carlo simulations in an isothermal or expanded ensemble, simulations were done in the constant-NPH adiabatic variant of the Gibbs ensemble technique as proposed by Kristóf and Liszi [Chem. Phys. Lett. 261, 620 (1996)]. This simulation technique is shown to be a precise tool for direct calculation of boiling temperatures in high-boiling fluids, with results that are almost completely insensitive to system size or other arbitrary parameters as long as the potential truncation is handled correctly. Results obtained were validated using conventional NVT-Gibbs ensemble Monte Carlo simulations. The qSC predictions for boiling temperatures are found to be reasonably accurate, but substantially underestimate the enthalpies of vaporization in all cases. This appears to be largely due to the systematic overestimation of dimer binding energies by this family of potentials, which leads to an unsatisfactory description of the vapor phase. © 2011 American Institute of Physics

  20. Ensemble Generation and the Influence of Protein Flexibility on Geometric Tunnel Prediction in Cytochrome P450 Enzymes

    PubMed Central

    Kingsley, Laura J.; Lill, Markus A.

    2014-01-01

    Computational prediction of ligand entry and egress paths in proteins has become an emerging topic in computational biology and has proven useful in fields such as protein engineering and drug design. Geometric tunnel prediction programs, such as Caver3.0 and MolAxis, are computationally efficient methods to identify potential ligand entry and egress routes in proteins. Although many geometric tunnel programs are designed to accommodate a single input structure, the increasingly recognized importance of protein flexibility in tunnel formation and behavior has led to the more widespread use of protein ensembles in tunnel prediction. However, there has not yet been an attempt to directly investigate the influence of ensemble size and composition on geometric tunnel prediction. In this study, we compared tunnels found in a single crystal structure to ensembles of various sizes generated using different methods on both the apo and holo forms of cytochrome P450 enzymes CYP119, CYP2C9, and CYP3A4. Several protein structure clustering methods were tested in an attempt to generate smaller ensembles that were capable of reproducing the data from larger ensembles. Ultimately, we found that by including members from both the apo and holo data sets, we could produce ensembles containing fewer than 15 members that were comparable to apo or holo ensembles containing over 100 members. Furthermore, we found that, in the absence of either apo or holo crystal structure data, pseudo-apo or pseudo-holo ensembles (e.g., adding a ligand to the apo protein throughout MD simulations) could be used to approximate the structural ensembles of the corresponding apo and holo forms, respectively. Our findings not only further highlight the importance of including protein flexibility in geometric tunnel prediction, but also suggest that smaller ensembles can be as capable as larger ensembles at capturing many of the protein motions important for tunnel prediction at a lower computational cost. PMID:24956479

  1. The Ensembl REST API: Ensembl Data for Any Language.

    PubMed

    Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R S; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul

    2015-01-01

    We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. © The Author 2014. Published by Oxford University Press.
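
    A minimal call against the public server, using the lookup-by-symbol endpoint as documented at http://rest.ensembl.org (requires network access; the endpoint and response fields may change over time):

```python
import json
from urllib.request import Request, urlopen

# Look up a gene symbol and print its genomic coordinates from the JSON
# response. Endpoint per the public rest.ensembl.org documentation.
url = "https://rest.ensembl.org/lookup/symbol/homo_sapiens/BRCA2"
req = Request(url, headers={"Content-Type": "application/json"})
with urlopen(req) as resp:
    gene = json.load(resp)

print(gene["id"], gene["seq_region_name"], gene["start"], gene["end"])
```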

  2. Forecasting European cold waves based on subsampling strategies of CMIP5 and Euro-CORDEX ensembles

    NASA Astrophysics Data System (ADS)

    Cordero-Llana, Laura; Braconnot, Pascale; Vautard, Robert; Vrac, Mathieu; Jezequel, Aglae

    2016-04-01

    Forecasting future extreme events in the present changing climate is a difficult task. There is currently a large number of ensembles of climate-projection simulations spanning different models and scenarios. However, there is a need to reduce the size of such an ensemble to make the interpretation of the simulations more manageable for impact studies or climate risk assessment. This can be achieved by developing subsampling strategies that identify a limited number of simulations that best represent the full ensemble. In this study, cold waves are chosen to test different approaches for subsampling the available simulations. The definition of cold waves depends on the criteria used, but they are generally defined using a minimum-temperature threshold, the duration of the cold spell, and their geographical extent. These climate indicators are not universal, highlighting the difficulty of directly comparing different studies. As part of the CLIPC European project, we use daily surface temperature data from CMIP5 outputs as well as Euro-CORDEX simulations to predict future cold-wave events in Europe. A clustering method is then applied to these simulations to minimise the number of ensemble members required. Furthermore, we analyse the uncertainties that arise from the different model characteristics and definitions of the climate indicators. Finally, we test whether the same subsampling strategy can be used for different climate indicators, which would facilitate the use of the subsampling results for a wide range of impact-assessment studies.
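
    One generic way to implement such a subsampling strategy is to summarize each simulation with a feature vector, cluster the vectors, and keep the member nearest each cluster centre; a sketch (not necessarily the authors' specific clustering method):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(12)
n_members, n_features = 60, 12     # e.g. 12 regional cold-wave indices per run
features = rng.normal(size=(n_members, n_features))  # synthetic member summaries

k = 8                              # size of the reduced ensemble
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)

# Representative subset: the member nearest each cluster centroid
selected = [int(np.argmin(((features - c) ** 2).sum(axis=1)))
            for c in km.cluster_centers_]
print("members retained:", sorted(selected))
```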

  3. Internal Spin Control, Squeezing and Decoherence in Ensembles of Alkali Atomic Spins

    NASA Astrophysics Data System (ADS)

    Norris, Leigh Morgan

    Large atomic ensembles interacting with light are one of the most promising platforms for quantum information processing. In the past decade, novel applications for these systems have emerged in quantum communication, quantum computing, and metrology. Essential to all of these applications is the controllability of the atomic ensemble, which is facilitated by a strong coupling between the atoms and light. Non-classical spin squeezed states are a crucial step in attaining greater ensemble control. The degree of entanglement present in these states, furthermore, serves as a benchmark for the strength of the atom-light interaction. Outside the broader context of quantum information processing with atomic ensembles, spin squeezed states have applications in metrology, where their quantum correlations can be harnessed to improve the precision of magnetometers and atomic clocks. This dissertation focuses upon the production of spin squeezed states in large ensembles of cold trapped alkali atoms interacting with optical fields. While most treatments of spin squeezing consider only the case in which the ensemble is composed of two level systems or qubits, we utilize the entire ground manifold of an alkali atom with hyperfine spin f greater than or equal to 1/2, a qudit. Spin squeezing requires non-classical correlations between the constituent atomic spins, which are generated through the atoms' collective coupling to the light. Either through measurement or multiple interactions with the atoms, the light mediates an entangling interaction that produces quantum correlations. Because the spin squeezing treated in this dissertation ultimately originates from the coupling between the light and atoms, conventional approaches of improving this squeezing have focused on increasing the optical density of the ensemble. The greater number of internal degrees of freedom and the controllability of the spin-f ground hyperfine manifold enable novel methods of enhancing squeezing. In particular, we find that state preparation using control of the internal hyperfine spin increases the entangling power of squeezing protocols when f > 1/2. Post-processing of the ensemble using additional internal spin control converts this entanglement into metrologically useful spin squeezing. By employing a variation of the Holstein-Primakoff approximation, in which the collective spin observables of the atomic ensemble are treated as quadratures of a bosonic mode, we model entanglement generation, spin squeezing and the effects of internal spin control. The Holstein-Primakoff formalism also enables us to take into account the decoherence of the ensemble due to optical pumping. While most works ignore or treat optical pumping phenomenologically, we employ a master equation derived from first principles. Our analysis shows that state preparation and the hyperfine spin size have a substantial impact upon both the generation of spin squeezing and the decoherence of the ensemble. Through a numerical search, we determine state preparations that enhance squeezing protocols while remaining robust to optical pumping. Finally, most work on spin squeezing in atomic ensembles has treated the light as a plane wave that couples identically to all atoms. In the final part of this dissertation, we go beyond the customary plane wave approximation on the light and employ focused paraxial beams, which are more efficiently mode matched to the radiation pattern of the atomic ensemble. The mathematical formalism and the internal spin control techniques that we applied in the plane wave case are generalized to accommodate the non-homogeneous paraxial probe. We find the optimal geometries of the atomic ensemble and the probe for mode matching and generation of spin squeezing.

  4. Internal Spin Control, Squeezing and Decoherence in Ensembles of Alkali Atomic Spins

    NASA Astrophysics Data System (ADS)

    Norris, Leigh Morgan

    Large atomic ensembles interacting with light are one of the most promising platforms for quantum information processing. In the past decade, novel applications for these systems have emerged in quantum communication, quantum computing, and metrology. Essential to all of these applications is the controllability of the atomic ensemble, which is facilitated by a strong coupling between the atoms and light. Non-classical spin squeezed states are a crucial step in attaining greater ensemble control. The degree of entanglement present in these states, furthermore, serves as a benchmark for the strength of the atom-light interaction. Outside the broader context of quantum information processing with atomic ensembles, spin squeezed states have applications in metrology, where their quantum correlations can be harnessed to improve the precision of magnetometers and atomic clocks. This dissertation focuses on the production of spin squeezed states in large ensembles of cold trapped alkali atoms interacting with optical fields. While most treatments of spin squeezing consider only the case in which the ensemble is composed of two-level systems, or qubits, we utilize the entire ground manifold of an alkali atom with hyperfine spin f greater than or equal to 1/2, a qudit. Spin squeezing requires non-classical correlations between the constituent atomic spins, which are generated through the atoms' collective coupling to the light. Either through measurement or through multiple interactions with the atoms, the light mediates an entangling interaction that produces quantum correlations. Because the spin squeezing treated in this dissertation ultimately originates from the coupling between the light and atoms, conventional approaches to improving this squeezing have focused on increasing the optical density of the ensemble. The greater number of internal degrees of freedom and the controllability of the spin-f ground hyperfine manifold enable novel methods of enhancing squeezing. In particular, we find that state preparation using control of the internal hyperfine spin increases the entangling power of squeezing protocols when f > 1/2. Post-processing of the ensemble using additional internal spin control converts this entanglement into metrologically useful spin squeezing. By employing a variation of the Holstein-Primakoff approximation, in which the collective spin observables of the atomic ensemble are treated as quadratures of a bosonic mode, we model entanglement generation, spin squeezing, and the effects of internal spin control. The Holstein-Primakoff formalism also enables us to take into account the decoherence of the ensemble due to optical pumping. While most works ignore or treat optical pumping phenomenologically, we employ a master equation derived from first principles. Our analysis shows that state preparation and the hyperfine spin size have a substantial impact upon both the generation of spin squeezing and the decoherence of the ensemble. Through a numerical search, we determine state preparations that enhance squeezing protocols while remaining robust to optical pumping. Finally, most work on spin squeezing in atomic ensembles has treated the light as a plane wave that couples identically to all atoms. In the final part of this dissertation, we go beyond the customary plane wave approximation and employ focused paraxial beams, which are more efficiently mode matched to the radiation pattern of the atomic ensemble. The mathematical formalism and the internal spin control techniques that we applied in the plane wave case are generalized to accommodate the non-homogeneous paraxial probe. We find the optimal geometries of the atomic ensemble and the probe for mode matching and generation of spin squeezing.
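    The dissertation's qudit protocols are beyond a short sketch, but the underlying figure of merit, a metrological spin squeezing parameter, can be illustrated in the simplest setting. The toy example below (our own, not from the dissertation) computes a Wineland-style squeezing parameter for N spin-1/2 atoms under Kitagawa-Ueda one-axis twisting in the symmetric Dicke subspace; N and the twisting strength mu are arbitrary demo values.

```python
# Toy illustration: one-axis-twisting spin squeezing for N spin-1/2 atoms
# in the symmetric (Dicke) subspace of dimension 2J+1 with J = N/2.
import numpy as np
from scipy.linalg import expm

N = 50                        # number of atoms (demo value)
J = N / 2
m = np.arange(J, -J - 1, -1)  # Dicke basis |J, m>, m = J..-J

Jz = np.diag(m)
jp = np.sqrt(J * (J + 1) - m[1:] * (m[1:] + 1))  # <J,m+1|J+|J,m>
Jp = np.diag(jp, 1)           # raising operator
Jx = (Jp + Jp.T) / 2
Jy = (Jp - Jp.T) / 2j

# Coherent spin state along +x: top eigenvector of Jx
_, v = np.linalg.eigh(Jx)
psi0 = v[:, -1]

# Evolve under the twisting Hamiltonian H = chi * Jz^2 for time t (mu = chi*t)
mu = 0.05                     # demo twisting strength
psi = expm(-1j * mu * Jz @ Jz) @ psi0

def var_perp(theta):
    """Variance of the spin component at angle theta in the y-z plane."""
    Jq = np.cos(theta) * Jy + np.sin(theta) * Jz
    e1 = np.real(psi.conj() @ (Jq @ psi))
    e2 = np.real(psi.conj() @ (Jq @ (Jq @ psi)))
    return e2 - e1**2

vmin = min(var_perp(t) for t in np.linspace(0, np.pi, 2000))
jx = np.real(psi.conj() @ (Jx @ psi))
xi2 = N * vmin / jx**2        # Wineland squeezing parameter
print(f"xi^2 = {xi2:.3f} (xi^2 < 1 indicates metrologically useful squeezing)")
```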

  5. "Playing It Like a Professional": Approaches to Ensemble Direction in Tertiary Institutions

    ERIC Educational Resources Information Center

    Harrison, Scott; O'Bryan, Jessica; Lebler, Don

    2013-01-01

    This article reports on a case study of three directors of large ensembles within a large conservatoire and the ways in which they attempted to scaffold their students into professional music careers. The core aim in this article is to respond to the question "What is the role and function of the ensemble experience on the training of the…

  6. Mathematical foundations of hybrid data assimilation from a synchronization perspective

    NASA Astrophysics Data System (ADS)

    Penny, Stephen G.

    2017-12-01

    The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.
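    As a concrete illustration of the hybrid idea, the sketch below blends a static climatological covariance with a small-ensemble sample covariance before forming the Kalman gain. All dimensions, the observation operator, and the blending weight alpha are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a hybrid background covariance in a Kalman update:
# B_hybrid = alpha * B_clim + (1 - alpha) * B_ens.
import numpy as np

rng = np.random.default_rng(0)
n, k, p = 40, 10, 8                          # state size, ensemble size, obs count

X = rng.standard_normal((n, k))              # toy background ensemble
Xp = X - X.mean(axis=1, keepdims=True)       # ensemble perturbations
B_ens = Xp @ Xp.T / (k - 1)                  # rank-deficient for small k
B_clim = np.eye(n)                           # static (climatological) estimate

alpha = 0.5                                  # blending weight (tunable)
B = alpha * B_clim + (1 - alpha) * B_ens     # hybrid background covariance

obs_idx = np.arange(0, n, n // p)[:p]        # observe every 5th state variable
H = np.zeros((p, n)); H[np.arange(p), obs_idx] = 1
R = 0.5 * np.eye(p)                          # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R) # hybrid Kalman gain
y = rng.standard_normal(p)                   # synthetic observations
xb = X.mean(axis=1)
xa = xb + K @ (y - H @ xb)                   # analysis mean update
print("analysis increment norm:", np.linalg.norm(xa - xb))
```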

  7. Mathematical foundations of hybrid data assimilation from a synchronization perspective.

    PubMed

    Penny, Stephen G

    2017-12-01

    The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.

  8. A WRF/Chem sensitivity study using ensemble modelling for a high ozone episode in Slovenia and the Northern Adriatic area

    NASA Astrophysics Data System (ADS)

    Žabkar, Rahela; Koračin, Darko; Rakovec, Jože

    2013-10-01

A high ozone (O3) concentration episode during a heat wave event in the Northeastern Mediterranean was investigated using the WRF/Chem model. To understand the major model uncertainties and errors as well as the impacts of model inputs on the model accuracy, an ensemble modelling experiment was conducted. The 51-member ensemble was designed by varying model physics parameterization options (PBL schemes with different surface layer and land-surface modules, and radiation schemes); chemical initial and boundary conditions; anthropogenic and biogenic emission inputs; and model domain setup and resolution. The main impacts of the geographical and emission characteristics of three distinct regions (suburban Mediterranean, continental urban, and continental rural) on the model accuracy and O3 predictions were investigated. In spite of the large ensemble set size, the model generally failed to simulate the extremes; however, as expected from probabilistic forecasting, the ensemble spread improved results for the extremes compared to the reference run. Noticeable model nighttime overestimations at the Mediterranean and some urban and rural sites can be explained by too strong simulated winds, which reduce the impact of dry deposition and O3 titration in the near-surface layers during the nighttime. Another possible explanation could be inaccuracies in the chemical mechanisms, as also suggested by the model's insensitivity to variations in the nitrogen oxides (NOx) and volatile organic compounds (VOC) emissions. Major impact factors for underestimations of the daytime O3 maxima at the Mediterranean and some rural sites include overestimation of the PBL depths, a lack of information on forest fires, too strong surface winds, and also possible inaccuracies in biogenic emissions. This numerical experiment with the ensemble runs also provided guidance on an optimum model setup and input data.

  9. ClustENM: ENM-Based Sampling of Essential Conformational Space at Full Atomic Resolution

    PubMed Central

    Kurkcuoglu, Zeynep; Bahar, Ivet; Doruker, Pemra

    2016-01-01

Accurate sampling of conformational space and, in particular, the transitions between functional substates has been a challenge in molecular dynamics (MD) simulations of large biomolecular systems. We developed an Elastic Network Model (ENM)-based computational method, ClustENM, for sampling large conformational changes of biomolecules with various sizes and oligomerization states. ClustENM is an iterative method that combines ENM with energy minimization and clustering steps. It is an unbiased technique, which requires only an initial structure as input, and no information about the target conformation. To test the performance of ClustENM, we applied it to six biomolecular systems: adenylate kinase (AK), calmodulin, p38 MAP kinase, HIV-1 reverse transcriptase (RT), triosephosphate isomerase (TIM), and the 70S ribosomal complex. The generated ensembles of conformers determined at atomic resolution show good agreement with experimental data (979 structures resolved by X-ray and/or NMR) and encompass the subspaces covered in independent MD simulations for TIM, p38, and RT. ClustENM emerges as a computationally efficient tool for characterizing the conformational space of large systems at atomic detail, in addition to generating a representative ensemble of conformers that can be advantageously used in simulating substrate/ligand-binding events. PMID:27494296

  10. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE PAGES

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; ...

    2016-12-07

High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.
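    The acceptance criterion described here, that compression effects should not be statistically distinguishable from natural variability, can be sketched as a simple per-grid-point test. The function below is our illustration; the field shapes and the z-score threshold are assumptions, not the paper's actual metrics.

```python
# Hedged sketch: does the reconstruction error stay within the ensemble's
# natural (internal) variability at every grid point?
import numpy as np

def within_natural_variability(original, reconstructed, ensemble, z_max=2.0):
    """original/reconstructed: (nlat, nlon); ensemble: (nmember, nlat, nlon)."""
    sigma = ensemble.std(axis=0, ddof=1)          # internal variability per cell
    z = np.abs(reconstructed - original) / np.where(sigma > 0, sigma, np.inf)
    return float(z.mean()), bool((z < z_max).all())

rng = np.random.default_rng(1)
ens = rng.standard_normal((40, 96, 144))          # toy 40-member ensemble
orig = ens[0]
recon = orig + 0.01 * rng.standard_normal(orig.shape)  # mimic lossy error
print(within_natural_variability(orig, recon, ens))
```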

  11. Evaluating lossy data compression on climate simulation data within a large ensemble

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.; Xu, Haiying; Stolpe, Martin B.; Naveau, Phillipe; Sanderson, Ben; Ebert-Uphoff, Imme; Samarasinghe, Savini; De Simone, Francesco; Carbone, Francesco; Gencarelli, Christian N.; Dennis, John M.; Kay, Jennifer E.; Lindstrom, Peter

    2016-12-01

    High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

  12. Evaluating lossy data compression on climate simulation data within a large ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Allison H.; Hammerling, Dorit M.; Mickelson, Sheri A.

High-resolution Earth system model simulations generate enormous data volumes, and retaining the data from these simulations often strains institutional storage resources. Further, these exceedingly large storage requirements negatively impact science objectives, for example, by forcing reductions in data output frequency, simulation length, or ensemble size. To lessen data volumes from the Community Earth System Model (CESM), we advocate the use of lossy data compression techniques. While lossy data compression does not exactly preserve the original data (as lossless compression does), lossy techniques have an advantage in terms of smaller storage requirements. To preserve the integrity of the scientific simulation data, the effects of lossy data compression on the original data should, at a minimum, not be statistically distinguishable from the natural variability of the climate system, and previous preliminary work with data from CESM has shown this goal to be attainable. However, to ultimately convince climate scientists that it is acceptable to use lossy data compression, we provide climate scientists with access to publicly available climate data that have undergone lossy data compression. In particular, we report on the results of a lossy data compression experiment with output from the CESM Large Ensemble (CESM-LE) Community Project, in which we challenge climate scientists to examine features of the data relevant to their interests, and attempt to identify which of the ensemble members have been compressed and reconstructed. We find that while detecting distinguishing features is certainly possible, the compression effects noticeable in these features are often unimportant or disappear in post-processing analyses. In addition, we perform several analyses that directly compare the original data to the reconstructed data to investigate the preservation, or lack thereof, of specific features critical to climate science. Overall, we conclude that applying lossy data compression to climate simulation data is both advantageous in terms of data reduction and generally acceptable in terms of effects on scientific results.

  13. Sensitivity tests and ensemble hazard assessment for tephra fallout at Campi Flegrei, Italy

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Costa, Antonio; De Natale, Giuseppe; Di Vito, Mauro; Isaia, Roberto; Macedonio, Giovanni

    2017-04-01

We present the results of a statistical study on tephra dispersion in the case of reactivation of the Campi Flegrei volcano. We considered the full spectrum of possible eruptions, in terms of size and position of eruptive vents. To represent the spectrum of possible eruptive sizes, four classes of eruptions were considered, of which only three are explosive (small, medium, and large) and can produce a significant quantity of volcanic ash. Hazard assessments are made through dispersion simulations of ash and lapilli, considering the full variability of winds, eruptive vents, and eruptive sizes. The results are presented in the form of four families of hazard curves conditioned on the occurrence of an eruption: 1) small eruptive size from any vent; 2) medium eruptive size from any vent; 3) large eruptive size from any vent; 4) any size from any vent. The epistemic uncertainty (i.e. that associated with the level of scientific knowledge of the phenomena) in the estimation of hazard curves was quantified by making use of alternative, scientifically acceptable approaches. The choice of such alternative models was made after a comprehensive sensitivity analysis which considered different weather databases, alternative modelling of the possible opening of eruptive vents, tephra total grain-size distributions (TGSD), the relative mass of fine particles, and the effect of aggregation. The results of these sensitivity analyses show that the dominant uncertainty is related to the choice of TGSD, the mass of fine ash, and the potential effects of ash aggregation. The latter is particularly relevant in the case of magma-water interaction during an eruptive phase, when most of the fine ash can form accretionary lapilli that could contribute significantly to increasing the tephra load in the proximal region. The variability induced by the use of different weather databases is relatively insignificant. The hazard curves, together with the quantification of epistemic uncertainty, were finally calculated through a statistical model based on ensemble mixing of selected alternative models, e.g. different choices for the estimate of the total erupted mass, the mass of fine ash, the effects of aggregation, etc. Hazard and probability maps were produced at different confidence levels with respect to the epistemic uncertainty (mean, median, 16th percentile, and 84th percentile).
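    A minimal sketch of the final aggregation step, assuming equal model credibility: given exceedance-probability curves from alternative models, ensemble mixing reduces to a weighted mean plus percentile curves for the epistemic uncertainty. The curves, weights, and thresholds below are synthetic placeholders, not the study's values.

```python
# Illustrative ensemble mixing of hazard curves from alternative models,
# reporting the mean and the 16th/50th/84th percentile curves.
import numpy as np

loads = np.logspace(-1, 3, 50)                  # tephra load thresholds, kg/m^2
rng = np.random.default_rng(2)
n_models = 12
# Each row: P(load > threshold | eruption) for one alternative model (toy).
curves = np.exp(-loads[None, :] / rng.uniform(5, 50, size=(n_models, 1)))
weights = np.full(n_models, 1 / n_models)       # equal credibility (assumption)

mean_curve = weights @ curves                   # weighted ensemble mean
p16, p50, p84 = np.percentile(curves, [16, 50, 84], axis=0)
print("mean curve head:", mean_curve[:3], "median head:", p50[:3])
```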

  14. A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, Laura; Jakob, Christian; Cheung, K.

    2013-06-27

Single column models (SCM) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. This data is then used to carry out simulations with 11 SCM and 2 cloud-resolving models (CRM). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCM and CRM. Systematic differences are also apparent in the ensemble mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRM and SCM. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models using the more traditional single best-estimate simulation.

  15. Lateral and feedforward inhibition suppress asynchronous activity in a large, biophysically-detailed computational model of the striatal network

    PubMed Central

    Moyer, Jason T.; Halterman, Benjamin L.; Finkel, Leif H.; Wolf, John A.

    2014-01-01

    Striatal medium spiny neurons (MSNs) receive lateral inhibitory projections from other MSNs and feedforward inhibitory projections from fast-spiking, parvalbumin-containing striatal interneurons (FSIs). The functional roles of these connections are unknown, and difficult to study in an experimental preparation. We therefore investigated the functionality of both lateral (MSN-MSN) and feedforward (FSI-MSN) inhibition using a large-scale computational model of the striatal network. The model consists of 2744 MSNs comprised of 189 compartments each and 121 FSIs comprised of 148 compartments each, with dendrites explicitly represented and almost all known ionic currents included and strictly constrained by biological data as appropriate. Our analysis of the model indicates that both lateral inhibition and feedforward inhibition function at the population level to limit non-ensemble MSN spiking while preserving ensemble MSN spiking. Specifically, lateral inhibition enables large ensembles of MSNs firing synchronously to strongly suppress non-ensemble MSNs over a short time-scale (10–30 ms). Feedforward inhibition enables FSIs to strongly inhibit weakly activated, non-ensemble MSNs while moderately inhibiting activated ensemble MSNs. Importantly, FSIs appear to more effectively inhibit MSNs when FSIs fire asynchronously. Both types of inhibition would increase the signal-to-noise ratio of responding MSN ensembles and contribute to the formation and dissolution of MSN ensembles in the striatal network. PMID:25505406

  16. The Ensembl REST API: Ensembl Data for Any Language

    PubMed Central

    Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R. S.; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul

    2015-01-01

    Motivation: We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. Availability and implementation: The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. Contact: ayates@ebi.ac.uk or flicek@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25236461
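    A minimal usage example in Python: the /lookup/id endpoint and the BRAF gene identifier below follow Ensembl's public documentation, but should be checked against the current docs at rest.ensembl.org before relying on them.

```python
# Query the Ensembl REST API for basic information about a gene.
import requests

server = "https://rest.ensembl.org"
gene_id = "ENSG00000157764"   # human BRAF, used in Ensembl's own examples

r = requests.get(f"{server}/lookup/id/{gene_id}",
                 headers={"Content-Type": "application/json"},
                 timeout=30)
r.raise_for_status()
info = r.json()
# Field names follow the documented JSON response for /lookup/id.
print(info["display_name"], info["seq_region_name"], info["start"], info["end"])
```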

  17. On ozone trend detection: using coupled chemistry-climate simulations to investigate early signs of total column ozone recovery

    NASA Astrophysics Data System (ADS)

    Keeble, James; Brown, Hannah; Abraham, N. Luke; Harris, Neil R. P.; Pyle, John A.

    2018-06-01

Total column ozone values from an ensemble of UM-UKCA model simulations are examined to investigate different definitions of progress on the road to ozone recovery. The impacts of modelled internal atmospheric variability are accounted for by applying a multiple linear regression model to modelled total column ozone values, and ozone trend analysis is performed on the resulting ozone residuals. Three definitions of recovery are investigated: (i) a slowed rate of decline and the date of minimum column ozone, (ii) the identification of significant positive trends and (iii) a return to historic values. A return to past thresholds is the last state to be achieved. Minimum column ozone values, averaged from 60° S to 60° N, occur between 1990 and 1995 for each ensemble member, driven in part by the solar minimum conditions during the 1990s. When natural cycles are accounted for, identification of the year of minimum ozone in the resulting ozone residuals is uncertain, with minimum values for each ensemble member occurring at different times between 1992 and 2000. As a result of this large variability, identification of the date of minimum ozone constitutes a poor measure of ozone recovery. Trends for the 2000-2017 period are positive at most latitudes and are statistically significant in the mid-latitudes in both hemispheres when natural cycles are accounted for. This significance results largely from the large sample size of the multi-member ensemble. Significant trends cannot be identified by 2017 at the highest latitudes, due to the large interannual variability in the data, nor in the tropics, due to the small trend magnitude, although it is projected that significant trends may be identified in these regions soon thereafter. While significant positive trends in total column ozone could be identified at all latitudes by ∼2030, column ozone values which are lower than the 1980 annual mean can occur in the mid-latitudes until ∼2050, and in the tropics and high latitudes deep into the second half of the 21st century.
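    The regression-then-trend procedure can be sketched in a few lines: regress column ozone on proxies for the natural cycles, then fit a linear trend to the residuals. The proxies and coefficients below are synthetic stand-ins; real analyses use observed solar, ENSO, and QBO indices and a proper significance test.

```python
# Hedged sketch: remove natural-cycle variability by multiple linear
# regression, then estimate the trend in the ozone residuals.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1980, 2018)
solar = np.sin(2 * np.pi * (years - 1980) / 11)      # toy 11-year solar proxy
enso = rng.standard_normal(years.size)               # toy ENSO index
# Synthetic "observed" ozone: trend + solar signal + noise.
ozone = 300 - 0.5 * (years - 1980) + 4 * solar + rng.standard_normal(years.size)

X = np.column_stack([np.ones_like(years, dtype=float), solar, enso])
beta, *_ = np.linalg.lstsq(X, ozone, rcond=None)
residuals = ozone - X @ beta                         # the "ozone residuals"

trend = np.polyfit(years, residuals, 1)[0]
print(f"residual trend: {trend:.2f} DU/yr")
```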

  18. LETTER TO THE EDITOR: Constant-time solution to the global optimization problem using Brüschweiler's ensemble search algorithm

    NASA Astrophysics Data System (ADS)

    Protopopescu, V.; D'Helon, C.; Barhen, J.

    2003-06-01

    A constant-time solution of the continuous global optimization problem (GOP) is obtained by using an ensemble algorithm. We show that under certain assumptions, the solution can be guaranteed by mapping the GOP onto a discrete unsorted search problem, whereupon Brüschweiler's ensemble search algorithm is applied. For adequate sensitivities of the measurement technique, the query complexity of the ensemble search algorithm depends linearly on the size of the function's domain. Advantages and limitations of an eventual NMR implementation are discussed.

  19. A stochastic simulator of birth-death master equations with application to phylodynamics.

    PubMed

    Vaughan, Timothy G; Drummond, Alexei J

    2013-06-01

    In this article, we present a versatile new software tool for the simulation and analysis of stochastic models of population phylodynamics and chemical kinetics. Models are specified via an expressive and human-readable XML format and can be used as the basis for generating either single population histories or large ensembles of such histories. Importantly, phylogenetic trees or networks can be generated alongside the histories they correspond to, enabling investigations into the interplay between genealogies and population dynamics. Summary statistics such as means and variances can be recorded in place of the full ensemble, allowing for a reduction in the amount of memory used--an important consideration for models including large numbers of individual subpopulations or demes. In the case of population size histories, the resulting simulation output is written to disk in the flexible JSON format, which is easily read into numerical analysis environments such as R for visualization or further processing. Simulated phylogenetic trees can be recorded using the standard Newick or NEXUS formats, with extensions to these formats used for non-tree-like inheritance relationships.
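    The core of such a simulator is a stochastic simulation algorithm for the master equation. As a minimal sketch (not the tool itself, which is driven by XML model specifications), Gillespie's direct method for a linear birth-death process looks like this:

```python
# Gillespie direct method for a linear birth-death master equation.
import numpy as np

def gillespie_birth_death(n0, birth, death, t_max, rng):
    """Simulate one population history; returns event times and sizes."""
    t, n = 0.0, n0
    times, sizes = [t], [n]
    while t < t_max and n > 0:
        rates = np.array([birth * n, death * n])   # per-event propensities
        total = rates.sum()
        t += rng.exponential(1 / total)            # waiting time to next event
        n += 1 if rng.random() < rates[0] / total else -1
        times.append(t); sizes.append(n)
    return np.array(times), np.array(sizes)

rng = np.random.default_rng(4)
ts, ns = gillespie_birth_death(n0=10, birth=1.1, death=1.0, t_max=5.0, rng=rng)
print(f"{len(ts)} events, final population {ns[-1]}")
```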

  20. A Stochastic Simulator of Birth–Death Master Equations with Application to Phylodynamics

    PubMed Central

    Vaughan, Timothy G.; Drummond, Alexei J.

    2013-01-01

    In this article, we present a versatile new software tool for the simulation and analysis of stochastic models of population phylodynamics and chemical kinetics. Models are specified via an expressive and human-readable XML format and can be used as the basis for generating either single population histories or large ensembles of such histories. Importantly, phylogenetic trees or networks can be generated alongside the histories they correspond to, enabling investigations into the interplay between genealogies and population dynamics. Summary statistics such as means and variances can be recorded in place of the full ensemble, allowing for a reduction in the amount of memory used—an important consideration for models including large numbers of individual subpopulations or demes. In the case of population size histories, the resulting simulation output is written to disk in the flexible JSON format, which is easily read into numerical analysis environments such as R for visualization or further processing. Simulated phylogenetic trees can be recorded using the standard Newick or NEXUS formats, with extensions to these formats used for non-tree-like inheritance relationships. PMID:23505043

  1. A Probabilistic Collocation Based Iterative Kalman Filter for Landfill Data Assimilation

    NASA Astrophysics Data System (ADS)

    Qiang, Z.; Zeng, L.; Wu, L.

    2016-12-01

Due to the strong spatial heterogeneity of landfills, uncertainty is ubiquitous in the gas transport process within them. To accurately characterize landfill properties, the ensemble Kalman filter (EnKF) has been employed to assimilate measurements, e.g., the gas pressure. As a Monte Carlo (MC) based method, the EnKF usually requires a large ensemble size, which poses a high computational cost for large-scale problems. In this work, we propose a probabilistic collocation based iterative Kalman filter (PCIKF) to estimate permeability in a coupled liquid-gas model. This method employs polynomial chaos expansion (PCE) to represent and propagate the uncertainties of model parameters and states, and an iterative form of the Kalman filter to assimilate the current gas pressure data. To further reduce the computational cost, a functional ANOVA (analysis of variance) decomposition is conducted, and only the first-order ANOVA components are retained for the PCE. Illustrated with numerical case studies, the proposed method shows significantly superior computational efficiency compared with the traditional MC-based iterative EnKF. The developed method has promising potential for reliable prediction and management of landfill gas production.
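    To see why PCE can replace a large Monte Carlo ensemble, the sketch below (our illustration, with a toy scalar model in place of the landfill simulator) propagates a standard-normal parameter through a nonlinearity by spectral projection onto probabilists' Hermite polynomials, recovering the mean and variance that MC needs many samples to estimate.

```python
# Hermite polynomial chaos expansion of a toy nonlinear model of one
# standard-normal input, via Gauss-Hermite quadrature.
import math
import numpy as np
from numpy.polynomial import hermite_e as He

model = lambda x: np.exp(0.3 * x)        # toy forward model (assumption)
order, nq = 4, 20
xq, wq = He.hermegauss(nq)               # nodes/weights for weight exp(-x^2/2)
norm = np.sqrt(2.0 * np.pi)              # rescales the weight to the N(0,1) pdf

# Spectral projection: c_k = E[model(X) He_k(X)] / k!  for X ~ N(0, 1)
coeffs = []
for k in range(order + 1):
    basis_k = np.zeros(k + 1)
    basis_k[k] = 1.0                     # coefficient vector selecting He_k
    ck = np.sum(wq * model(xq) * He.hermeval(xq, basis_k)) / (norm * math.factorial(k))
    coeffs.append(ck)

mean_pce = coeffs[0]
var_pce = sum(math.factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))

mc = model(np.random.default_rng(5).standard_normal(200_000))
print(f"mean: PCE {mean_pce:.4f} vs MC {mc.mean():.4f}")
print(f"var:  PCE {var_pce:.4f} vs MC {mc.var():.4f}")
```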

  2. A study of regional-scale aerosol assimilation using a Stretch-NICAM

    NASA Astrophysics Data System (ADS)

    Misawa, S.; Dai, T.; Schutgens, N.; Nakajima, T.

    2013-12-01

Although aerosols are considered harmful to human health and have become a social issue, aerosol models and emission inventories include large uncertainties. In recent studies, data assimilation has been applied to aerosol simulation to obtain more accurate aerosol fields and emission inventories. Most of these studies, however, are carried out only on the global scale, and there is little research on regional-scale aerosol assimilation. In this study, we have created and verified an aerosol assimilation system on the regional scale, in the hope of reducing the error associated with the aerosol emission inventory. Our aerosol assimilation system has been developed using an atmospheric climate model, NICAM (Non-hydrostatic ICosahedral Atmospheric Model; Satoh et al., 2008), with a stretch grid system and coupled with an aerosol transport model, SPRINTARS (Takemura et al., 2000). The assimilation system is based on the local ensemble transform Kalman filter (LETKF). To validate this system, we used simulated observational data created by adding artificial errors to the surface aerosol fields constructed by Stretch-NICAM-SPRINTARS. We also included a small perturbation in the original emission inventory. This assimilation with modified observational data and emission inventory was performed in the Kanto-plain region around Tokyo, Japan, and the result indicates that the system reduces the relative error of aerosol concentration by 20%. Furthermore, we examined the sensitivity of the aerosol assimilation system by varying the total ensemble size (5, 10, and 15 members) and the local patch (domain) size (radius of 50 km, 100 km, and 200 km), both of which are tuning parameters in LETKF. The assimilation results with ensemble sizes of 5, 10, and 15 show that the larger the ensemble, the smaller the relative error becomes. This is consistent with ensemble Kalman filter theory and implies that the assimilation system works properly. We also found that the assimilation system does not work well with a 200 km radius, while a 50 km radius domain is less efficient than a 100 km radius domain. We therefore expect that the optimal size lies somewhere between 50 km and 200 km. We will show an analysis of real data from the suspended particulate matter (SPM) network in the Kanto-plain region.

  3. Rapid sampling of local minima in protein energy surface and effective reduction through a multi-objective filter

    PubMed Central

    2013-01-01

    Background Many problems in protein modeling require obtaining a discrete representation of the protein conformational space as an ensemble of conformations. In ab-initio structure prediction, in particular, where the goal is to predict the native structure of a protein chain given its amino-acid sequence, the ensemble needs to satisfy energetic constraints. Given the thermodynamic hypothesis, an effective ensemble contains low-energy conformations which are similar to the native structure. The high-dimensionality of the conformational space and the ruggedness of the underlying energy surface currently make it very difficult to obtain such an ensemble. Recent studies have proposed that Basin Hopping is a promising probabilistic search framework to obtain a discrete representation of the protein energy surface in terms of local minima. Basin Hopping performs a series of structural perturbations followed by energy minimizations with the goal of hopping between nearby energy minima. This approach has been shown to be effective in obtaining conformations near the native structure for small systems. Recent work by us has extended this framework to larger systems through employment of the molecular fragment replacement technique, resulting in rapid sampling of large ensembles. Methods This paper investigates the algorithmic components in Basin Hopping to both understand and control their effect on the sampling of near-native minima. Realizing that such an ensemble is reduced before further refinement in full ab-initio protocols, we take an additional step and analyze the quality of the ensemble retained by ensemble reduction techniques. We propose a novel multi-objective technique based on the Pareto front to filter the ensemble of sampled local minima. Results and conclusions We show that controlling the magnitude of the perturbation allows directly controlling the distance between consecutively-sampled local minima and, in turn, steering the exploration towards conformations near the native structure. For the minimization step, we show that the addition of Metropolis Monte Carlo-based minimization is no more effective than a simple greedy search. Finally, we show that the size of the ensemble of sampled local minima can be effectively and efficiently reduced by a multi-objective filter to obtain a simpler representation of the probed energy surface. PMID:24564970
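    The multi-objective filtering step can be sketched with a generic non-dominated-set routine. Below, the two objectives (energy and distance to the ensemble centroid) are stand-ins chosen for illustration; the paper's actual objectives may differ.

```python
# Pareto-front filter: keep only conformations that are non-dominated
# when jointly minimizing all objectives.
import numpy as np

def pareto_front(objectives):
    """objectives: (n, m) array, all minimized. Returns a boolean keep-mask."""
    n = objectives.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # i is dominated if some j is <= in every objective and < in one.
        dominated = (np.all(objectives <= objectives[i], axis=1) &
                     np.any(objectives < objectives[i], axis=1))
        keep[i] = not dominated.any()
    return keep

rng = np.random.default_rng(6)
energy = rng.normal(size=500)            # toy energies of local minima
spread = rng.uniform(size=500)           # toy distance-to-centroid objective
mask = pareto_front(np.column_stack([energy, spread]))
print(f"kept {mask.sum()} of 500 local minima")
```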

  4. Future changes in peak river flows across northern Eurasia as inferred from an ensemble of regional climate projections under the IPCC RCP8.5 scenario

    NASA Astrophysics Data System (ADS)

    Shkolnik, Igor; Pavlova, Tatiana; Efimov, Sergey; Zhuravlev, Sergey

    2018-01-01

Climate change simulations based on a 30-member ensemble of the Voeikov Main Geophysical Observatory RCM (25 km resolution) for northern Eurasia are used to drive the hydrological model CaMa-Flood. Using this modeling framework, we evaluate the uncertainties in the future projection of peak river discharge and flood hazard by 2050-2059 relative to 1990-1999 under the IPCC RCP8.5 scenario. The large ensemble size, along with reasonably high modeling resolution, allows one to efficiently sample natural climate variability and increases our ability to predict future changes in hydrological extremes. It is shown that the annual maximum river discharge can almost double by the mid-21st century in the outlets of major Siberian rivers. In the western regions, there is a weak signal in the river discharge and flood hazard, hardly discernible above climate variability. The annual maximum flood area is projected to increase across Siberia, mostly by 2-5% relative to the baseline period. The contribution of natural climate variability at different temporal scales to the uncertainty of the ensemble prediction is discussed. The analysis shows that considerable changes in the extreme river discharge probability are expected at the locations of key hydropower facilities. This suggests that extensive impact studies are required to develop recommendations for maintaining regional energy security.

  5. Ensembl 2002: accommodating comparative genomics.

    PubMed

    Clamp, M; Andrews, D; Barker, D; Bevan, P; Cameron, G; Chen, Y; Clark, L; Cox, T; Cuff, J; Curwen, V; Down, T; Durbin, R; Eyras, E; Gilbert, J; Hammond, M; Hubbard, T; Kasprzyk, A; Keefe, D; Lehvaslaiho, H; Iyer, V; Melsopp, C; Mongin, E; Pettett, R; Potter, S; Rust, A; Schmidt, E; Searle, S; Slater, G; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Stupka, E; Ureta-Vidal, A; Vastrik, I; Birney, E

    2003-01-01

The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organise biology around the sequences of large genomes. It is a comprehensive source of stable automatic annotation of human, mouse and other genome sequences, available as either an interactive web site or as flat files. Ensembl also integrates manually annotated gene structures from external sources where available. As well as being one of the leading sources of genome annotation, Ensembl is an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements. These range from sequence analysis to data storage and visualisation and installations exist around the world in both companies and at academic sites. With both human and mouse genome sequences available and more vertebrate sequences to follow, many of the recent developments in Ensembl have focused on developing automatic comparative genome analysis and visualisation.

  6. Quantum ensembles of quantum classifiers.

    PubMed

    Schuld, Maria; Petruccione, Francesco

    2018-02-09

Quantum machine learning witnesses an increasing amount of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which, similar to Bayesian learning, the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighted according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.

  7. Algorithms that Defy the Gravity of Learning Curve

    DTIC Science & Technology

    2017-04-28

three nearest neighbour-based anomaly detectors, i.e., an ensemble of nearest neighbours, a recent nearest neighbour-based ensemble method called iNNE...streams. Note that the change in sample size does not alter the geometrical data characteristics discussed here. 3.1 Experimental Methodology ...need to be answered. 3.6 Comparison with conventional ensemble methods Given the theoretical results, the third aim of this project (i.e., identify the

  8. Assessing a local ensemble Kalman filter: perfect model experiments with the National Centers for Environmental Prediction global model

    NASA Astrophysics Data System (ADS)

    Szunyogh, Istvan; Kostelich, Eric J.; Gyarmati, G.; Patil, D. J.; Hunt, Brian R.; Kalnay, Eugenia; Ott, Edward; Yorke, James A.

    2005-08-01

The accuracy and computational efficiency of the recently proposed local ensemble Kalman filter (LEKF) data assimilation scheme is investigated on a state-of-the-art operational numerical weather prediction model using simulated observations. The model selected for this purpose is the T62 horizontal- and 28-level vertical-resolution version of the Global Forecast System (GFS) of the National Centers for Environmental Prediction. The performance of the data assimilation system is assessed for different configurations of the LEKF scheme. It is shown that a modest-size (40-member) ensemble is sufficient to track the evolution of the atmospheric state with high accuracy. For this ensemble size, the computational time per analysis is less than 9 min on a cluster of PCs. The analyses are extremely accurate in the mid-latitude storm track regions. The largest analysis errors, which are typically much smaller than the observational errors, occur where parametrized physical processes play important roles. Because these are also the regions where model errors are expected to be the largest, limitations of a real-data implementation of the ensemble-based Kalman filter may be easily mistaken for model errors. In light of these results, the importance of testing the ensemble-based Kalman filter data assimilation systems on simulated observations is stressed.

  9. Making Music or Gaining Grades? Assessment Practices in Tertiary Music Ensembles

    ERIC Educational Resources Information Center

    Harrison, Scott D.; Lebler, Don; Carey, Gemma; Hitchcock, Matt; O'Bryan, Jessica

    2013-01-01

    Participation in an ensemble is a significant aspect of tertiary music experience. Learning and assessment practices within ensembles have rarely been investigated in Australia and the perceptions of staff and students as to how they learn and are assessed within ensembles remain largely unexplored. This paper reports on part of a larger project…

  10. Finite-size anomalies of the Drude weight: Role of symmetries and ensembles

    NASA Astrophysics Data System (ADS)

    Sánchez, R. J.; Varma, V. K.

    2017-12-01

We revisit the numerical problem of computing the high temperature spin stiffness, or Drude weight, D of the spin-1/2 XXZ chain using exact diagonalization to systematically analyze its dependence on system symmetries and ensemble. Within the canonical ensemble and for states with zero total magnetization, we find D vanishes exactly due to spin-inversion symmetry for all but the anisotropies Δ̃_MN = cos(πM/N) with N, M ∈ Z+ coprime and N > M, provided system sizes L ≥ 2N, for which states with different spin-inversion signature become degenerate due to the underlying sl(2) loop algebra symmetry. All these loop-algebra degenerate states carry finite currents which we conjecture [based on data from the system sizes and anisotropies Δ̃_MN (with N…

  11. Intrinsic Size Effect in Scaffolded Porous Calcium Silicate Particles and Mechanical Behavior of Their Self-Assembled Ensembles.

    PubMed

    Hwang, Sung Hoon; Shahsavari, Rouzbeh

    2018-01-10

Scaffolded porous submicron particles with well-defined diameter, shape, and pore size have profound impacts on drug delivery, bone-tissue replacement, catalysis, sensors, photonic crystals, and self-healing materials. However, understanding the interplay between pore size, particle size, and mechanical properties of such ultrafine particles, especially at the level of individual particles and their ensemble states, is a challenge. Herein, we focus on porous calcium-silicate submicron particles with various diameters, as a model system, and perform extensive 900+ nanoindentations to completely map out their mechanical properties across three distinct structural forms, from individual submicron particles to self-assembled ensembles to pressure-induced assembled arrays. Our results demonstrate a notable "intrinsic size effect" for individual porous submicron particles around ∼200-500 nm, induced by the ratio of particle characteristic diameter to pore characteristic size distribution. Increasing this ratio results in a brittle-to-ductile transition where the toughness of the submicron particles increases by 120%. This size effect becomes negligible as the porous particles form superstructures. Nevertheless, the self-assembled arrays collectively exhibit increasing elastic modulus as a function of applied forces, while pressure-induced compacted arrays exhibit no size effect. This study will impact tuning properties of individual scaffolded porous particles and can have implications on self-assembled superstructures exploiting porosity and particle size to impart new functionalities.

  12. TECA: A Parallel Toolkit for Extreme Climate Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabhat, Mr; Ruebel, Oliver; Byna, Surendra

    2012-03-12

    We present TECA, a parallel toolkit for detecting extreme events in large climate datasets. Modern climate datasets expose parallelism across a number of dimensions: spatial locations, timesteps and ensemble members. We design TECA to exploit these modes of parallelism and demonstrate a prototype implementation for detecting and tracking three classes of extreme events: tropical cyclones, extra-tropical cyclones and atmospheric rivers. We process a modern TB-sized CAM5 simulation dataset with TECA, and demonstrate good runtime performance for the three case studies.

  13. Uncertainty, Sensitivity Analysis, and Causal Identification in the Arctic using a Perturbed Parameter Ensemble of the HiLAT Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark

Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
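    A simplified radial variant of the elementary-effects screening reads as follows; the toy function stands in for the climate model, and the step size and trajectory count are arbitrary demo choices.

```python
# Elementary effects (Morris-style screening), radial one-at-a-time variant:
# EE_i = (f(x + delta * e_i) - f(x)) / delta, averaged over random base points.
import numpy as np

def elementary_effects(f, n_params, n_trajectories, delta=0.1, rng=None):
    rng = rng or np.random.default_rng()
    effects = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.uniform(0, 1 - delta, size=n_params)   # random base point in [0,1)^d
        fx = f(x)
        for i in range(n_params):                      # one-at-a-time perturbations
            xi = x.copy(); xi[i] += delta
            effects[t, i] = (f(xi) - fx) / delta
    return effects

# Toy model: parameter 0 dominant, 4 irrelevant.
toy_model = lambda x: 3 * x[0] + x[1] ** 2 + 0.1 * x[2] * x[3] + 0 * x[4]
ee = elementary_effects(toy_model, n_params=5, n_trajectories=50,
                        rng=np.random.default_rng(7))
mu_star = np.abs(ee).mean(axis=0)      # mean |EE|: screening importance measure
print("parameters ranked by influence:", np.argsort(mu_star)[::-1])
```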

  14. Transition from Poissonian to Gaussian-orthogonal-ensemble level statistics in a modified Artin's billiard

    NASA Astrophysics Data System (ADS)

    Csordás, A.; Graham, R.; Szépfalusy, P.; Vattay, G.

    1994-01-01

One wall of an Artin's billiard on the Poincaré half-plane is replaced by a one-parameter (cp) family of nongeodetic walls. A brief description of the classical phase space of this system is given. In the quantum domain, the continuous and gradual transition from the Poisson-like to Gaussian-orthogonal-ensemble (GOE) level statistics due to the small perturbations breaking the symmetry responsible for the "arithmetic chaos" at cp=1 is studied. Another GOE → Poisson transition due to the mixed phase space for large perturbations is also investigated. A satisfactory description of the intermediate level statistics by the Brody distribution was found in both cases. The study supports the existence of a scaling region around cp=1. A finite-size scaling relation for the Brody parameter as a function of 1-cp and the number of levels considered can be established.

  15. Potential predictability and forecast skill in ensemble climate forecast: the skill-persistence rule

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Rong, X.; Liu, Z.

    2017-12-01

    This study investigates the factors that impact the forecast skill for the real world (actual skill) and perfect model (perfect skill) in ensemble climate model forecast with a series of fully coupled general circulation model forecast experiments. It is found that the actual skill of sea surface temperature (SST) in seasonal forecast is substantially higher than the perfect skill on a large part of the tropical oceans, especially the tropical Indian Ocean and the central-eastern Pacific Ocean. The higher actual skill is found to be related to the higher observational SST persistence, suggesting a skill-persistence rule: a higher SST persistence in the real world than in the model could overwhelm the model bias to produce a higher forecast skill for the real world than for the perfect model. The relation between forecast skill and persistence is further examined using a first-order autoregressive model (AR1) analytically for theoretical solutions and numerically for analogue experiments. The AR1 model study shows that the skill-persistence rule is strictly valid in the case of infinite ensemble size, but can be distorted by the sampling error and non-AR1 processes.
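    The essence of the skill-persistence rule can be reproduced with a toy AR(1) experiment: the correlation skill of a persistence-type forecast at lead tau scales as r^tau, so higher observed persistence than model persistence yields higher "actual" than "perfect" skill. The sketch below is our illustration with arbitrary parameters, not the paper's experimental design.

```python
# Toy AR(1) analogue of the skill-persistence rule.
import numpy as np

def ar1(r, n, rng):
    """Generate an AR(1) series with lag-1 autocorrelation r, unit variance."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = r * x[t - 1] + np.sqrt(1 - r**2) * rng.standard_normal()
    return x

rng = np.random.default_rng(8)
r_obs, r_mod, tau, n = 0.9, 0.7, 3, 200_000
obs = ar1(r_obs, n, rng)        # "real world" with high persistence
mod = ar1(r_mod, n, rng)        # "model world" with lower persistence

actual = np.corrcoef(obs[:-tau], obs[tau:])[0, 1]   # persistence skill vs obs  ~ r_obs**tau
perfect = np.corrcoef(mod[:-tau], mod[tau:])[0, 1]  # persistence skill in model ~ r_mod**tau
print(f"actual ~ {actual:.3f} (theory {r_obs**tau:.3f}), "
      f"perfect ~ {perfect:.3f} (theory {r_mod**tau:.3f})")
```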

  16. Investigating energy-based pool structure selection in the structure ensemble modeling with experimental distance constraints: The example from a multidomain protein Pub1.

    PubMed

    Zhu, Guanhua; Liu, Wei; Bao, Chenglong; Tong, Dudu; Ji, Hui; Shen, Zuowei; Yang, Daiwen; Lu, Lanyuan

    2018-05-01

    The structural variations of multidomain proteins with flexible parts mediate many biological processes, and a structure ensemble can be determined by selecting a weighted combination of representative structures from a simulated structure pool, producing the best fit to experimental constraints such as interatomic distance. In this study, a hybrid structure-based and physics-based atomistic force field with an efficient sampling strategy is adopted to simulate a model di-domain protein against experimental paramagnetic relaxation enhancement (PRE) data that correspond to distance constraints. The molecular dynamics simulations produce a wide range of conformations depicted on a protein energy landscape. Subsequently, a conformational ensemble recovered with low-energy structures and the minimum-size restraint is identified in good agreement with experimental PRE rates, and the result is also supported by chemical shift perturbations and small-angle X-ray scattering data. It is illustrated that the regularizations of energy and ensemble-size prevent an arbitrary interpretation of protein conformations. Moreover, energy is found to serve as a critical control to refine the structure pool and prevent data overfitting, because the absence of energy regularization exposes ensemble construction to the noise from high-energy structures and causes a more ambiguous representation of protein conformations. Finally, we perform structure-ensemble optimizations with a topology-based structure pool, to enhance the understanding on the ensemble results from different sources of pool candidates. © 2018 Wiley Periodicals, Inc.

  17. Particle shape inhomogeneity and plasmon-band broadening of solar-control LaB6 nanoparticles

    NASA Astrophysics Data System (ADS)

    Machida, Keisuke; Adachi, Kenji

    2015-07-01

An ensemble inhomogeneity of a non-spherical LaB6 nanoparticle dispersion has been analyzed with Mie theory to account for the observed broad plasmon band. The LaB6 particle shape has been characterized using small-angle X-ray scattering (SAXS) and electron tomography (ET). The SAXS scattering intensity is found to follow a power law with exponent -3.10, indicating a particle shape between disk and sphere. ET analysis disclosed a dually grouped distribution of the nanoparticle dispersion; one group is large-sized with small aspect ratio and the other is small-sized with scattered high aspect ratios, reflecting the dual fragmentation modes during the milling process. Mie extinction calculations have been integrated for 100 000 particles of varying aspect ratio, produced randomly using the Box-Muller method. The Mie integration method produced a broad and smooth absorption band extended towards low energy, in remarkable agreement with experimental profiles when assuming a SAXS- and ET-derived shape distribution, i.e., a majority of disks with a small admixture of rods and spheres in the ensemble. The analysis envisages a high potential of LaB6 with further increased visible transparency and plasmon peak upon control of the particle shape and its distribution.
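    The Box-Muller step mentioned above is easy to reproduce; the sketch below draws Gaussian deviates from uniform random numbers and maps them to a toy aspect-ratio distribution (the mean and spread are illustrative, not the paper's fitted values).

```python
# Box-Muller transform: two uniforms -> standard normal deviates.
import numpy as np

def box_muller(n, rng):
    u1 = 1.0 - rng.random(n)          # shift away from 0 to keep log finite
    u2 = rng.random(n)
    return np.sqrt(-2 * np.log(u1)) * np.cos(2 * np.pi * u2)

rng = np.random.default_rng(9)
z = box_muller(100_000, rng)
# Toy aspect-ratio ensemble for the Mie integration, clipped to stay positive.
aspect_ratios = np.clip(1.0 + 0.4 * z, 0.2, None)
print(aspect_ratios.mean(), aspect_ratios.std())
```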

  18. Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Alba, Vincenzo

    By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.

  19. Response of ENSO amplitude to global warming in CESM large ensemble: uncertainty due to internal variability

    NASA Astrophysics Data System (ADS)

    Zheng, Xiao-Tong; Hui, Chang; Yeh, Sang-Wook

    2018-06-01

    El Niño-Southern Oscillation (ENSO) is the dominant mode of variability in the coupled ocean-atmospheric system. Future projections of ENSO change under global warming are highly uncertain among models. In this study, the effect of internal variability on ENSO amplitude change in future climate projections is investigated based on a 40-member ensemble from the Community Earth System Model Large Ensemble (CESM-LE) project. A large uncertainty is identified among ensemble members due to internal variability. The inter-member diversity is associated with a zonal dipole pattern of sea surface temperature (SST) change in the mean along the equator, which is similar to the second empirical orthogonal function (EOF) mode of tropical Pacific decadal variability (TPDV) in the unforced control simulation. The uncertainty in CESM-LE is comparable in magnitude to that among models of the Coupled Model Intercomparison Project phase 5 (CMIP5), suggesting the contribution of internal variability to the intermodel uncertainty in ENSO amplitude change. However, the causations between changes in ENSO amplitude and the mean state are distinct between CESM-LE and CMIP5 ensemble. The CESM-LE results indicate that a large ensemble of 15 members is needed to separate the relative contributions to ENSO amplitude change over the twenty-first century between forced response and internal variability.

  20. Relation Between Pore Size and the Compressibility of a Confined Fluid

    PubMed Central

    Gor, Gennady Y.; Siderius, Daniel W.; Rasmussen, Christopher J.; Krekelberg, William P.; Shen, Vincent K.; Bernstein, Noam

    2015-01-01

    When a fluid is confined to a nanopore, its thermodynamic properties differ from the properties of a bulk fluid, so measuring such properties of the confined fluid can provide information about the pore sizes. Here we report a simple relation between the pore size and isothermal compressibility of argon confined in these pores. Compressibility is calculated from the fluctuations of the number of particles in the grand canonical ensemble using two different simulation techniques: conventional grand-canonical Monte Carlo and grand-canonical ensemble transition-matrix Monte Carlo. Our results provide a theoretical framework for extracting the information on the pore sizes of fluid-saturated samples by measuring the compressibility from ultrasonic experiments. PMID:26590541
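    The fluctuation formula behind this approach is κ_T = V·Var(N)/(k_B·T·⟨N⟩²) in the grand canonical ensemble. A hedged sketch of the post-processing step, with synthetic stand-ins for GCMC samples of N (real samples would come from the simulation and are not Poisson-distributed in general):

```python
# Isothermal compressibility from grand-canonical particle-number fluctuations.
import numpy as np

kB = 1.380649e-23          # Boltzmann constant, J/K
T = 87.3                   # K, near argon's boiling point (illustrative)
V = (10e-9) ** 3           # m^3, a toy 10 nm cubic pore

rng = np.random.default_rng(10)
N_samples = rng.poisson(lam=5000, size=100_000)   # stand-in for GCMC N samples

N_mean = N_samples.mean()
N_var = N_samples.var(ddof=1)
kappa_T = V * N_var / (kB * T * N_mean**2)        # units: m^3/J = 1/Pa
print(f"kappa_T ~ {kappa_T:.3e} 1/Pa")
```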

  1. Fluctuating observation time ensembles in the thermodynamics of trajectories

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.; Turner, Robert M.; Garrahan, Juan P.

    2014-03-01

    The dynamics of stochastic systems, both classical and quantum, can be studied by analysing the statistical properties of dynamical trajectories. The properties of ensembles of such trajectories for long, but fixed, times are described by large-deviation (LD) rate functions. These LD functions play the role of dynamical free energies: they are cumulant generating functions for time-integrated observables, and their analytic structure encodes dynamical phase behaviour. This ‘thermodynamics of trajectories’ approach is to trajectories and dynamics what the equilibrium ensemble method of statistical mechanics is to configurations and statics. Here we show that, just like in the static case, there are a variety of alternative ensembles of trajectories, each defined by their global constraints, with that of trajectories of fixed total time being just one of these. We show how the LD functions that describe an ensemble of trajectories where some time-extensive quantity is constant (and large) but where total observation time fluctuates can be mapped to those of the fixed-time ensemble. We discuss how the correspondence between generalized ensembles can be exploited in path sampling schemes for generating rare dynamical trajectories.

  2. Exploring diversity in ensemble classification: Applications in large area land cover mapping

    NASA Astrophysics Data System (ADS)

    Mellor, Andrew; Boukir, Samia

    2017-07-01

    Ensemble classifiers, such as random forests, are now commonly applied in the field of remote sensing and have been shown to perform better than single-classifier systems, resulting in reduced generalisation error. Diversity across the members of ensemble classifiers is known to have a strong influence on classification performance: in a diverse ensemble, classifier errors are uncorrelated and more uniformly distributed across ensemble members. The relationship between ensemble diversity and classification performance has not yet been fully explored in the fields of information science and machine learning, and has never been examined in the field of remote sensing. This study is a novel exploration of ensemble diversity and its link to classification performance, applied to a multi-class canopy cover classification problem using random forests and multisource remote sensing and ancillary GIS data, across seven million hectares of diverse dry-sclerophyll-dominated public forests in Victoria, Australia. A particular emphasis is placed on analysing the relationship between ensemble diversity and ensemble margin, two key concepts in ensemble learning. The main novelty of our work lies in boosting diversity by emphasizing the contribution of lower-margin instances used in the learning process. Exploring the influence of tree pruning on diversity is also a new empirical analysis that contributes to a better understanding of ensemble performance. Results reveal insights into the trade-off between ensemble classification accuracy and diversity and, through the ensemble margin, demonstrate how inducing diversity by targeting lower-margin training samples is a means of achieving better classifier performance for more difficult or rarer classes and of reducing information redundancy in classification problems. Our findings inform strategies for collecting training data and for designing and parameterising ensemble classifiers, such as random forests. This is particularly important in large-area remote sensing applications, for which training data are costly and resource-intensive to collect.
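
    One common definition of the ensemble margin mentioned above is the per-instance vote margin of the forest: the fraction of trees voting for the true class minus the largest fraction voting for any other class. The sketch below uses scikit-learn on synthetic data; the paper's exact margin formulation may differ.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier

      X, y = make_classification(n_samples=500, n_classes=3, n_informative=5,
                                 random_state=0)
      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

      # Per-tree class votes: (n_trees, n_samples)
      votes = np.stack([tree.predict(X) for tree in rf.estimators_])

      def ensemble_margin(votes, y):
          """Margin in [-1, 1]; low-margin instances are the 'difficult' ones."""
          n_trees = votes.shape[0]
          classes = np.unique(y)
          margins = np.empty(len(y))
          for i, yi in enumerate(y):
              v_true = np.sum(votes[:, i] == yi)
              v_best_other = max(np.sum(votes[:, i] == c) for c in classes if c != yi)
              margins[i] = (v_true - v_best_other) / n_trees
          return margins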

  3. Pauci ex tanto numero: reducing redundancy in multi-model ensembles

    NASA Astrophysics Data System (ADS)

    Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.

    2013-02-01

    We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community, despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared biases among models produce a biased ensemble; it is therefore essential that the errors of the ensemble members be independent so that biases can cancel out. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble, with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.
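
    A minimal diagnostic in the spirit of this record (an illustration, not the authors' actual method): estimate the shared bias and the redundancy of an ensemble from inter-member error correlations, summarizing redundancy as an effective number of independent models via the participation ratio of the correlation spectrum.

      import numpy as np

      # Hypothetical model errors (model minus observation): (n_models, n_times)
      rng = np.random.default_rng(1)
      errors = rng.standard_normal((10, 1000)) + 0.5   # common bias of 0.5 built in

      common_bias = errors.mean()          # bias shared across the ensemble
      corr = np.corrcoef(errors)           # high off-diagonals indicate redundancy

      eigvals = np.linalg.eigvalsh(corr)
      n_eff = eigvals.sum() ** 2 / (eigvals ** 2).sum()   # effective member count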

  4. Early sinkhole detection using a drone-based thermal camera and image processing

    NASA Astrophysics Data System (ADS)

    Lee, Eun Ju; Shin, Sang Young; Ko, Byoung Chul; Chang, Chunho

    2016-09-01

    Accurate advance detection of sinkholes, which are now occurring more frequently, is an important way of preventing human fatalities and property damage. Unlike naturally occurring sinkholes, human-induced ones in urban areas are typically due to groundwater disturbances and leaks of water and sewage caused by large-scale construction. Although many sinkhole detection methods have been developed, it is still difficult to predict sinkholes that form deep underground. In addition, conventional methods are inappropriate for scanning a large area because of their high cost. Therefore, this paper uses a drone combined with a thermal far-infrared (FIR) camera to detect potential sinkholes over a large area based on computer vision and pattern classification techniques. To make a standard dataset, we dug eight holes of depths 0.5-2 m in increments of 0.5 m and with a maximum width of 1 m, and filmed these using the drone-based FIR camera at a height of 50 m. We first detect candidate regions by analysing cold spots in the thermal images, based on the fact that a sinkhole typically has a lower thermal energy than its background. Then, these regions are classified into sinkhole and non-sinkhole classes using a pattern classifier. In this study, we ensemble the classification results of a light convolutional neural network (CNN) with those of a Boosted Random Forest (BRF) trained on handcrafted features. We apply the proposed ensemble method successfully to sinkhole data of various sizes and depths in different environments, and show that the ensemble of the CNN and the BRF with handcrafted features detects sinkholes better than other classifiers or a standalone CNN.
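
    The fusion step can be as simple as a weighted average of the two classifiers' scores. The sketch below shows one plausible late-fusion rule; the abstract does not specify the combination actually used, and the weight `w` is hypothetical.

      import numpy as np

      def fuse_scores(p_cnn, p_brf, w=0.5, threshold=0.5):
          """Average the CNN and BRF sinkhole probabilities and threshold;
          w would be tuned on validation data."""
          fused = w * np.asarray(p_cnn) + (1 - w) * np.asarray(p_brf)
          return fused > threshold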

  5. Ensemble Simulations with Coupled Atmospheric Dynamic and Dispersion Models: Illustrating Uncertainties in Dosage Simulations.

    NASA Astrophysics Data System (ADS)

    Warner, Thomas T.; Sheu, Rong-Shyang; Bowers, James F.; Sykes, R. Ian; Dodd, Gregory C.; Henn, Douglas S.

    2002-05-01

    Ensemble simulations made using a coupled atmospheric dynamic model and a probabilistic Lagrangian puff dispersion model were employed in a forensic analysis of the transport and dispersion of a toxic gas that may have been released near Al Muthanna, Iraq, during the Gulf War. The ensemble study had two objectives, the first of which was to determine the sensitivity of the calculated dosage fields to the choices that must be made about the configuration of the atmospheric dynamic model. In this test, various choices were used for model physics representations and for the large-scale analyses that were used to construct the model initial and boundary conditions. The second study objective was to examine the dispersion model's ability to use ensemble inputs to predict dosage probability distributions. Here, the dispersion model was used with the ensemble mean fields from the individual atmospheric dynamic model runs, including the variability in the individual wind fields, to generate dosage probabilities. These are compared with the explicit dosage probabilities derived from the individual runs of the coupled modeling system. The results demonstrate that the specific choices made about the dynamic-model configuration and the large-scale analyses can have a large impact on the simulated dosages. For example, the area near the source that is exposed to a selected dosage threshold varies by up to a factor of 4 among members of the ensemble. The agreement between the explicit and ensemble dosage probabilities is relatively good for both low and high dosage levels. Although only one ensemble was considered in this study, the encouraging results suggest that a probabilistic dispersion model may be of value in quantifying the effects of uncertainties in a dynamic-model ensemble on dispersion model predictions of atmospheric transport and dispersion.

  6. Sensory processing patterns predict the integration of information held in visual working memory.

    PubMed

    Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne

    2016-02-01

    Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek out sensory stimulation, fundamentally altering their perceptual experience. Here, we report that such processing styles affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds, who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation, are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. We therefore propose that the study of ensemble processing should extend beyond the statistics of the display and should also consider the statistics of the observer.

  7. Ocean state and uncertainty forecasts using HYCOM with the Local Ensemble Transform Kalman Filter (LETKF)

    NASA Astrophysics Data System (ADS)

    Wei, Mozheng; Hogan, Pat; Rowley, Clark; Smedstad, Ole-Martin; Wallcraft, Alan; Penny, Steve

    2017-04-01

    An ensemble forecast system based on the US Navy's operational HYCOM using Local Ensemble Transform Kalman Filter (LETKF) technology has been developed for ocean state and uncertainty forecasts. One of the advantages is that the best possible initial analysis states for the HYCOM forecasts are provided by the LETKF, which assimilates the operational observations using an ensemble method. The background covariance during this assimilation process is supplied by the ensemble, which avoids the difficulty of developing tangent-linear and adjoint models for 4D-Var from the complicated hybrid isopycnal vertical coordinate in HYCOM. Another advantage is that the ensemble system provides a valuable uncertainty estimate corresponding to every state forecast from HYCOM. Uncertainty forecasts have proven critical for downstream users and managers making scientifically sound decisions in the numerical prediction community. In addition, the ensemble mean is generally more accurate and skilful than a single traditional deterministic forecast at the same resolution. We will introduce the ensemble system design and setup, present results from a 30-member ensemble experiment, and discuss scientific, technical and computational issues and challenges, such as covariance localization, inflation, model-related uncertainties and sensitivity to the ensemble size.
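
    For reference, the core analysis step of the ensemble transform formulation that the LETKF localizes can be written compactly; this is a generic sketch, not the Navy system's implementation.

      import numpy as np

      def etkf_update(X, y, H, R):
          """X: (n_state, k) forecast ensemble; y: (n_obs,) observations;
          H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error cov.
          Returns the (n_state, k) analysis ensemble."""
          n, k = X.shape
          xb = X.mean(axis=1, keepdims=True)
          Xp = X - xb                                 # state-space perturbations
          Yb = H @ X
          yb = Yb.mean(axis=1, keepdims=True)
          Yp = Yb - yb                                # obs-space perturbations
          Rinv = np.linalg.inv(R)
          Pa = np.linalg.inv((k - 1) * np.eye(k) + Yp.T @ Rinv @ Yp)
          w_mean = Pa @ Yp.T @ Rinv @ (y[:, None] - yb)
          evals, evecs = np.linalg.eigh((k - 1) * Pa)   # symmetric square root
          W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
          return xb + Xp @ (w_mean + W)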

  8. The Effects of Theta Precession on Spatial Learning and Simplicial Complex Dynamics in a Topological Model of the Hippocampal Spatial Map

    PubMed Central

    Arai, Mamiko; Brandt, Vicky; Dabaghian, Yuri

    2014-01-01

    Learning arises through the activity of large ensembles of cells, yet most of the data neuroscientists accumulate is at the level of individual neurons; we need models that can bridge this gap. We have taken spatial learning as our starting point, computationally modeling the activity of place cells using methods derived from algebraic topology, especially persistent homology. We previously showed that ensembles of hundreds of place cells could accurately encode topological information about different environments (“learn” the space) within certain values of place cell firing rate, place field size, and cell population; we called this parameter space the learning region. Here we advance the model both technically and conceptually. To make the model more physiological, we explored the effects of theta precession on spatial learning in our virtual ensembles. Theta precession, which is believed to influence learning and memory, did in fact enhance learning in our model, increasing both speed and the size of the learning region. Interestingly, theta precession also increased the number of spurious loops during simplicial complex formation. We next explored how downstream readout neurons might define co-firing by grouping together cells within different windows of time and thereby capturing different degrees of temporal overlap between spike trains. Our model's optimum coactivity window correlates well with experimental data, ranging from ∼150–200 msec. We further studied the relationship between learning time, window width, and theta precession. Our results validate our topological model for spatial learning and open new avenues for connecting data at the level of individual neurons to behavioral outcomes at the neuronal ensemble level. Finally, we analyzed the dynamics of simplicial complex formation and loop transience to propose that the simplicial complex provides a useful working description of the spatial learning process. PMID:24945927

  9. Dynamic clustering threshold reduces conformer ensemble size while maintaining a biologically relevant ensemble

    NASA Astrophysics Data System (ADS)

    Yongye, Austin B.; Bender, Andreas; Martínez-Mayorga, Karina

    2010-08-01

    Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining the biological relevance of the ensemble. It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (RMSD < 0.5 Å and 0.5 < RMSD < 1.0 Å, respectively). Furthermore, we propose thresholds for OMEGA root-mean-square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 Å for structures with low (1-4), medium (5-9) and high (10-15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and to alleviate the complexity of downstream data processing in virtual screening experiments.
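
    OMEGA and NMRCLUST are standalone tools; purely to illustrate the RMSD-filtering idea, the sketch below uses RDKit (an assumed substitute, not the paper's toolchain) to generate conformers and then greedily discard any conformer within a rotor-dependent threshold of one already kept.

      from rdkit import Chem
      from rdkit.Chem import AllChem

      mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))  # example ligand
      cids = AllChem.EmbedMultipleConfs(mol, numConfs=100, randomSeed=42)
      AllChem.MMFFOptimizeMoleculeConfs(mol)

      threshold = 0.8   # Angstroms; e.g. for 1-4 rotatable bonds, per the abstract
      kept = []
      for cid in cids:
          if all(AllChem.GetConformerRMS(mol, cid, ref) > threshold for ref in kept):
              kept.append(cid)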

  10. Probabilistic flood warning using grand ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    He, Y.; Wetterhall, F.; Cloke, H.; Pappenberger, F.; Wilson, M.; Freer, J.; McGregor, G.

    2009-04-01

    As the severity of floods increases, possibly due to climate and land-use change, there is an urgent need for more effective and reliable warning systems. The incorporation of numerical weather predictions (NWP) into a flood warning system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient as it involves considerable non-predictable uncertainties and can lead to a high number of false or missed warnings. An ensemble of weather forecasts from one Ensemble Prediction System (EPS), when used to force catchment hydrological models, can provide improved early flood warning because some of the uncertainties can be quantified. EPS forecasts from a single weather centre, however, only account for part of the uncertainties, namely those originating from initial conditions and stochastic physics. Other sources of uncertainty, including numerical implementations and/or data assimilation, can only be assessed if a grand ensemble of EPSs from different weather centres is used. When various EPS-producing models from different weather centres are aggregated, the probabilistic nature of the ensemble precipitation forecasts can be better retained and accounted for. The availability of twelve global EPSs through the 'THORPEX Interactive Grand Global Ensemble' (TIGGE) offers a new opportunity for the design of an improved probabilistic flood forecasting framework. This work presents a case study using the TIGGE database for flood warning on a meso-scale catchment. The upper reach of the River Severn catchment, located in the Midlands Region of England, is selected due to its abundant data for investigation and its relatively small size (4062 km2) compared with the resolution of the NWPs. This choice was deliberate, as we hypothesize that the uncertainty in the forcing of smaller catchments cannot be represented by a single EPS with a very limited number of ensemble members, but only through the variance given by a large number of ensemble members and ensemble systems. A coupled atmospheric-hydrologic-hydraulic cascade system driven by the TIGGE ensemble forecasts is set up to study the potential benefits of using the TIGGE database in early flood warning. The physically based and fully distributed LISFLOOD suite of models is selected to simulate discharge and flood inundation in sequence. The results show that the TIGGE database is a promising tool for producing forecasts of discharge and flood inundation that are comparable with the observed discharge and with inundation simulations driven by the observed discharge. The spread of discharge forecasts varies from centre to centre, but it is generally large, implying a significant level of uncertainty. Precipitation input uncertainties dominate and propagate through the cascade chain. The current NWPs fall short of representing the spatial variability of precipitation over a comparatively small catchment. This perhaps indicates the need to improve NWP resolution and/or disaggregation techniques to narrow the spatial gap between meteorology and hydrology. It is not necessarily true that early flood warning becomes more reliable when more ensemble forecasts are employed. It is difficult to identify the best forecast centre(s), but in general the chance of detecting floods is increased by using the TIGGE database. Only one flood event was studied because most of the TIGGE data became available after October 2007. It is necessary to test the TIGGE ensemble forecasts with other flood events in other catchments with different hydrological and climatic regimes before general conclusions can be drawn about their robustness and applicability.

  11. Ensemble training to improve recognition using 2D ear

    NASA Astrophysics Data System (ADS)

    Middendorff, Christopher; Bowyer, Kevin W.

    2009-05-01

    The ear has gained popularity as a biometric feature due to the robustness of the shape over time and across emotional expression. Popular methods of ear biometrics analyze the ear as a whole, leaving these methods vulnerable to error due to occlusion. Many researchers explore ear recognition using an ensemble, but none present a method for designing the individual parts that comprise the ensemble. In this work, we introduce a method of modifying the ensemble shapes to improve performance. We determine how different properties of an ensemble training system can affect overall performance. We show that ensembles built from small parts will outperform ensembles built with larger parts, and that incorporating a large number of parts improves the performance of the ensemble.

  12. The Ensembl genome database project.

    PubMed

    Hubbard, T; Barker, D; Birney, E; Cameron, G; Chen, Y; Clark, L; Cox, T; Cuff, J; Curwen, V; Down, T; Durbin, R; Eyras, E; Gilbert, J; Hammond, M; Huminiecki, L; Kasprzyk, A; Lehvaslaiho, H; Lijnzaad, P; Melsopp, C; Mongin, E; Pettett, R; Pocock, M; Potter, S; Rust, A; Schmidt, E; Searle, S; Slater, G; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Stupka, E; Ureta-Vidal, A; Vastrik, I; Clamp, M

    2002-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organise biology around the sequences of large genomes. It is a comprehensive source of stable automatic annotation of the human genome sequence, with confirmed gene predictions that have been integrated with external data sources, and is available as either an interactive web site or as flat files. It is also an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements from sequence analysis to data storage and visualisation. The Ensembl site is one of the leading sources of human genome sequence annotation and provided much of the analysis for publication by the international human genome project of the draft genome. The Ensembl system is being installed around the world in both companies and academic sites on machines ranging from supercomputers to laptops.

  13. Multi-model analysis in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions thus obtained therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new combined predictive hydrograph is added to the ensemble, creating a large ensemble that may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed on different periods (2 weeks, 1 month, 3 months and 6 months) using a PIT histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model combination or for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been largely corrected on short-term predictions. For the longer term, the addition of the multi-model member has been beneficial to the quality of the predictions, although it is too early to determine whether the gain is related merely to the addition of a member or whether the multi-model member itself adds value.
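
    The PIT value of each observation is simply the fraction of ensemble members at or below it; a flat histogram of these values indicates a reliable ensemble, while a U shape indicates under-dispersion. A minimal sketch:

      import numpy as np

      def pit_values(ensemble, obs):
          """ensemble: (n_times, n_members) forecast volumes;
          obs: (n_times,) observed volumes. Returns one PIT value per time."""
          return (ensemble <= obs[:, None]).mean(axis=1)

      # counts, edges = np.histogram(pit_values(ens, obs), bins=10, range=(0, 1))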

  14. Products of random matrices from fixed trace and induced Ginibre ensembles

    NASA Astrophysics Data System (ADS)

    Akemann, Gernot; Cikovic, Milan

    2018-05-01

    We investigate the microcanonical version of the complex induced Ginibre ensemble, by introducing a fixed trace constraint for its second moment. As for the canonical Ginibre ensemble, its complex eigenvalues can be interpreted as a two-dimensional Coulomb gas, which is now subject to a constraint and a modified, collective confining potential. Despite the lack of determinantal structure in this fixed trace ensemble, we compute all its density correlation functions at finite matrix size and compare to a fixed trace ensemble of normal matrices, representing a different Coulomb gas. Our main tool of investigation is the Laplace transform, which maps the fixed trace ensemble back to the induced Ginibre ensemble. Products of random matrices have been used to study the Lyapunov and stability exponents for chaotic dynamical systems, where the latter are based on the complex eigenvalues of the product matrix. Because little is known about the universality of the eigenvalue distribution of such product matrices, we then study the product of m induced Ginibre matrices with a fixed trace constraint—which are clearly non-Gaussian—and M − m such Ginibre matrices without constraint. Using an m-fold inverse Laplace transform, we obtain a concise result for the spectral density of such a mixed product matrix at finite matrix size, for arbitrary fixed m and M. Very recently local and global universality was proven by the authors and their coworker for a more general, single elliptic fixed trace ensemble in the bulk of the spectrum. Here, we argue that the spectral density of mixed products is in the same universality class as the product of M independent induced Ginibre ensembles.

  15. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling the EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can equally be applied to any other hydrological problem.
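
    Among the metrics listed, the relative entropy has a closed form under the Gaussian assumption that typically accompanies the EnKF; the sketch below estimates it from prior and posterior parameter ensembles (a generic illustration, not the SEOD code).

      import numpy as np

      def relative_entropy(prior_ens, post_ens):
          """KL divergence D(posterior || prior) with Gaussian statistics
          estimated from two ensembles of shape (n_members, n_params)."""
          mu0, mu1 = prior_ens.mean(axis=0), post_ens.mean(axis=0)
          S0 = np.atleast_2d(np.cov(prior_ens, rowvar=False))
          S1 = np.atleast_2d(np.cov(post_ens, rowvar=False))
          S0inv = np.linalg.inv(S0)
          dmu = mu1 - mu0
          return 0.5 * (np.trace(S0inv @ S1) + dmu @ S0inv @ dmu
                        - mu0.size + np.log(np.linalg.det(S0) / np.linalg.det(S1)))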

  16. On averaging aspect ratios and distortion parameters over ice crystal population ensembles for estimating effective scattering asymmetry parameters

    PubMed Central

    van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian

    2017-01-01

    The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. When a definition of aspect ratio is used that limits the aspect ratio to below unity (α≤1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α≤1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture’s total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. PMID:28983127

  17. Pauci ex tanto numero: reduce redundancy in multi-model ensembles

    NASA Astrophysics Data System (ADS)

    Solazzo, E.; Riccio, A.; Kioutsioukis, I.; Galmarini, S.

    2013-08-01

    We explicitly address the fundamental issue of member diversity in multi-model ensembles. To date, no attempts in this direction have been documented within the air quality (AQ) community despite the extensive use of ensembles in this field. Common biases and redundancy are the two issues directly deriving from lack of independence, undermining the significance of a multi-model ensemble, and are the subject of this study. Shared, dependent biases among models do not cancel out but will instead determine a biased ensemble. Redundancy derives from having too large a portion of common variance among the members of the ensemble, producing overconfidence in the predictions and underestimation of the uncertainty. The two issues of common biases and redundancy are analysed in detail using the AQMEII ensemble of AQ model results for four air pollutants in two European regions. We show that models share large portions of bias and variance, extending well beyond those induced by common inputs. We make use of several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble with the advantage of being poorly correlated. Selecting the members for generating skilful, non-redundant ensembles from such subsets proved, however, non-trivial. We propose and discuss various methods of member selection and rate the ensemble performance they produce. In most cases, the full ensemble is outscored by the reduced ones. We conclude that, although independence of outputs may not always guarantee enhancement of scores (but this depends upon the skill being investigated), we discourage selecting the members of the ensemble simply on the basis of scores; that is, independence and skills need to be considered disjointly.

  18. A Nonlinear Model for Interactive Data Analysis and Visualization and an Implementation Using Progressive Computation for Massive Remote Climate Data Ensembles

    NASA Astrophysics Data System (ADS)

    Christensen, C.; Liu, S.; Scorzelli, G.; Lee, J. W.; Bremer, P. T.; Summa, B.; Pascucci, V.

    2017-12-01

    The creation, distribution, analysis, and visualization of large spatiotemporal datasets is a growing challenge for the study of climate and weather phenomena in which increasingly massive domains are utilized to resolve finer features, resulting in datasets that are simply too large to be effectively shared. Existing workflows typically consist of pipelines of independent processes that preclude many possible optimizations. As data sizes increase, these pipelines are difficult or impossible to execute interactively and instead simply run as large offline batch processes. Rather than limiting our conceptualization of such systems to pipelines (or dataflows), we propose a new model for interactive data analysis and visualization systems in which we comprehensively consider the processes involved from data inception through analysis and visualization in order to describe systems composed of these processes in a manner that facilitates interactive implementations of the entire system rather than of only a particular component. We demonstrate the application of this new model with the implementation of an interactive system that supports progressive execution of arbitrary user scripts for the analysis and visualization of massive, disparately located climate data ensembles. It is currently in operation as part of the Earth System Grid Federation server running at Lawrence Livermore National Lab, and accessible through both web-based and desktop clients. Our system facilitates interactive analysis and visualization of massive remote datasets up to petabytes in size, such as the 3.5 PB 7km NASA GEOS-5 Nature Run simulation, previously only possible offline or at reduced resolution. To support the community, we have enabled general distribution of our application using public frameworks including Docker and Anaconda.

  19. Interactions between terrestrial mammals and the fruits of two neotropical rainforest tree species

    NASA Astrophysics Data System (ADS)

    Camargo-Sanabria, Angela A.; Mendoza, Eduardo

    2016-05-01

    Mammalian frugivory is a distinctive biotic interaction of tropical forests; however, most efforts in the Neotropics have focused on cases of animals foraging in the forest canopy, in particular primates and bats. In contrast, much less is known about this interaction when it involves fruits deposited on the forest floor and terrestrial mammals. We conducted a camera-trapping survey to analyze the characteristics of the mammalian ensembles visiting fruits of Licania platypus and Pouteria sapota deposited on the forest floor in a well-preserved tropical rainforest of Mexico. Both tree species produce large fruits but contrast in their population densities and fruit chemical composition. In particular, we expected that more species of terrestrial mammals would consume P. sapota fruits due to its higher pulp:seed ratio, lower availability and greater carbohydrate content. We monitored fruits at the base of 13 trees (P. sapota, n = 4 and L. platypus, n = 9) using camera-traps. We recorded 13 mammal species, of which eight were observed consuming or removing fruits. These eight species accounted for 70% of the species of mammalian frugivores active on the forest floor of our study area. The ensemble of frugivores associated with L. platypus (6 spp.) was a subset of that associated with P. sapota (8 spp.). Large-bodied species such as Tapirus bairdii, Pecari tajacu and Cuniculus paca were the mammals most frequently interacting with fruits of the focal species. Our results further our understanding of the characteristics of the interaction between terrestrial mammalian frugivores and large-sized fruits, helping to build a more balanced view of its importance across different tropical forests and providing a baseline for comparison against defaunated forests.

  20. An 'Observational Large Ensemble' to compare observed and modeled temperature trend uncertainty due to internal variability.

    NASA Astrophysics Data System (ADS)

    Poppick, A. N.; McKinnon, K. A.; Dunn-Sigouin, E.; Deser, C.

    2017-12-01

    Initial condition climate model ensembles suggest that regional temperature trends can be highly variable on decadal timescales due to characteristics of internal climate variability. Accounting for trend uncertainty due to internal variability is therefore necessary to contextualize recent observed temperature changes. However, while the variability of trends in a climate model ensemble can be evaluated directly (as the spread across ensemble members), internal variability simulated by a climate model may be inconsistent with observations. Observation-based methods for assessing the role of internal variability in trend uncertainty are therefore required. Here, we use a statistical resampling approach to assess trend uncertainty due to internal variability in historical 50-year (1966-2015) winter near-surface air temperature trends over North America. We compare this estimate of trend uncertainty to simulated trend variability in the NCAR CESM1 Large Ensemble (LENS), finding that uncertainty in wintertime temperature trends over North America due to internal variability is substantially overestimated by CESM1, by 32% on average. Our observation-based resampling approach is combined with the forced signal from LENS to produce an 'Observational Large Ensemble' (OLENS). The members of OLENS indicate a range of spatially coherent fields of temperature trends resulting from different sequences of internal variability consistent with observations. The smaller trend variability in OLENS suggests that uncertainty in the historical climate change signal in observations due to internal variability is less than suggested by LENS.
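
    A moving-block bootstrap of detrended residuals is one simple stand-in for such a resampling approach (the paper's method differs in detail): it preserves short-range temporal autocorrelation while generating synthetic realizations of the 50-year trend.

      import numpy as np

      def bootstrap_trends(y, block=5, n_boot=1000, seed=0):
          """Spread of least-squares trends under block-resampled residuals."""
          rng = np.random.default_rng(seed)
          t = np.arange(len(y))
          slope, intercept = np.polyfit(t, y, 1)
          resid = y - (slope * t + intercept)
          n_blocks = int(np.ceil(len(y) / block))
          trends = np.empty(n_boot)
          for b in range(n_boot):
              starts = rng.integers(0, len(y) - block + 1, n_blocks)
              boot = np.concatenate([resid[s:s + block] for s in starts])[:len(y)]
              trends[b] = np.polyfit(t, slope * t + intercept + boot, 1)[0]
          return trends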

  1. Programming in the Zone: Repertoire Selection for the Large Ensemble

    ERIC Educational Resources Information Center

    Hopkins, Michael

    2013-01-01

    One of the great challenges ensemble directors face is selecting high-quality repertoire that matches the musical and technical levels of their ensembles. Thoughtful repertoire selection can lead to increased student motivation as well as greater enthusiasm for the music program from parents, administrators, teachers, and community members. Common…

  2. Task switching in a hierarchical task structure: evidence for the fragility of the task repetition benefit.

    PubMed

    Lien, Mei-Ching; Ruthruff, Eric

    2004-05-01

    This study examined how task switching is affected by hierarchical task organization. Traditional task-switching studies, which use a constant temporal and spatial distance between each task element (defined as a stimulus requiring a response), promote a flat task structure. Using this approach, Experiment 1 revealed a large switch cost of 238 ms. In Experiments 2-5, adjacent task elements were grouped temporally and/or spatially (forming an ensemble) to create a hierarchical task organization. Results indicate that the effect of switching at the ensemble level dominated the effect of switching at the element level. Experiments 6 and 7, using an ensemble of 3 task elements, revealed that the element-level switch cost was virtually absent between ensembles but was large within an ensemble. The authors conclude that the element-level task repetition benefit is fragile and can be eliminated in a hierarchical task organization.

  3. Task switching in a hierarchical task structure: evidence for the fragility of the task repetition benefit

    NASA Technical Reports Server (NTRS)

    Lien, Mei-Ching; Ruthruff, Eric

    2004-01-01

    This study examined how task switching is affected by hierarchical task organization. Traditional task-switching studies, which use a constant temporal and spatial distance between each task element (defined as a stimulus requiring a response), promote a flat task structure. Using this approach, Experiment 1 revealed a large switch cost of 238 ms. In Experiments 2-5, adjacent task elements were grouped temporally and/or spatially (forming an ensemble) to create a hierarchical task organization. Results indicate that the effect of switching at the ensemble level dominated the effect of switching at the element level. Experiments 6 and 7, using an ensemble of 3 task elements, revealed that the element-level switch cost was virtually absent between ensembles but was large within an ensemble. The authors conclude that the element-level task repetition benefit is fragile and can be eliminated in a hierarchical task organization.

  4. A new Method for the Estimation of Initial Condition Uncertainty Structures in Mesoscale Models

    NASA Astrophysics Data System (ADS)

    Keller, J. D.; Bach, L.; Hense, A.

    2012-12-01

    The estimation of fast-growing error modes of a system is a key interest of ensemble data assimilation when assessing uncertainty in initial conditions. Over the last two decades, three methods (and variations of these methods) have evolved for global numerical weather prediction models: the ensemble Kalman filter, singular vectors and breeding of growing modes (now ensemble transform). While the former incorporates a priori model error information and observation error estimates to determine ensemble initial conditions, the latter two techniques directly address the error structures associated with Lyapunov vectors. However, in global models these structures are mainly associated with transient global wave patterns. When assessing initial condition uncertainty in mesoscale limited-area models, several problems regarding the aforementioned techniques arise: (a) additional sources of uncertainty on the smaller scales contribute to the error, and (b) error structures from the global scale may quickly move through the model domain (depending on the size of the domain). To address the latter problem, perturbation structures from global models are often included in mesoscale predictions as perturbed boundary conditions. However, the initial perturbations (when used) are often generated with a variant of an ensemble Kalman filter, which does not necessarily focus on the large-scale error patterns. In the framework of the European regional reanalysis project of the Hans-Ertel-Center for Weather Research, we use a mesoscale model with an implemented nudging data assimilation scheme that does not support ensemble data assimilation at all. In preparation for an ensemble-based regional reanalysis and for the estimation of three-dimensional atmospheric covariance structures, we implemented a new method for the assessment of fast-growing error modes in mesoscale limited-area models. The so-called self-breeding method is a development of the breeding of growing modes technique. Initial perturbations are integrated forward for a short time period and then rescaled and added to the initial state again. Iterating this rapid breeding cycle provides estimates of the initial uncertainty structure (or local Lyapunov vectors) for a given norm. To prevent all ensemble perturbations from converging towards the leading local Lyapunov vector, we apply an ensemble transform variant to orthogonalize the perturbations in the sub-space spanned by the ensemble. By choosing different kinds of norms to measure perturbation growth, this technique allows uncertainty patterns to be targeted at specific sources of error (e.g. convection, turbulence). With case study experiments we show applications of the self-breeding method for different sources of uncertainty and different horizontal scales.
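
    A toy version of the breeding cycle that self-breeding extends, run here on the Lorenz-63 system rather than a mesoscale model: integrate control and perturbed runs, rescale the difference back to the initial amplitude, and repeat. The orthogonalizing ensemble-transform step described above is omitted for brevity.

      import numpy as np

      def lorenz63(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
          return np.array([sigma * (x[1] - x[0]),
                           x[0] * (rho - x[2]) - x[1],
                           x[0] * x[1] - beta * x[2]])

      def step(x, dt=0.01):
          k1 = lorenz63(x)                      # fourth-order Runge-Kutta
          k2 = lorenz63(x + 0.5 * dt * k1)
          k3 = lorenz63(x + 0.5 * dt * k2)
          k4 = lorenz63(x + dt * k3)
          return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      def breed(x0, amp=1e-3, cycles=50, steps_per_cycle=8, seed=0):
          rng = np.random.default_rng(seed)
          ctrl = np.asarray(x0, dtype=float)
          pert = rng.standard_normal(3)
          pert *= amp / np.linalg.norm(pert)
          for _ in range(cycles):
              xp = ctrl + pert
              for _ in range(steps_per_cycle):
                  ctrl, xp = step(ctrl), step(xp)
              diff = xp - ctrl
              pert = diff * (amp / np.linalg.norm(diff))   # rescale, re-add
          return pert / np.linalg.norm(pert)               # bred-vector estimate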

  5. Ensemble-based docking: From hit discovery to metabolism and toxicity predictions

    DOE PAGES

    Evangelista, Wilfredo; Weir, Rebecca; Ellingson, Sally; ...

    2016-07-29

    The use of ensemble-based docking for the exploration of biochemical pathways and toxicity prediction of drug candidates is described. We describe the computational engineering work necessary to enable large ensemble docking campaigns on supercomputers. We show examples where ensemble-based docking has significantly increased the number and the diversity of validated drug candidates. Finally, we illustrate how ensemble-based docking can be extended beyond hit discovery and toward providing a structural basis for the prediction of metabolism and off-target binding relevant to pre-clinical and clinical trials.

  6. JEnsembl: a version-aware Java API to Ensembl data systems.

    PubMed

    Paterson, Trevor; Law, Andy

    2012-11-01

    The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing 'through time' comparative analyses to be performed. Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net).

  7. Adaptive probabilistic collocation based Kalman filter for unsaturated flow problem

    NASA Astrophysics Data System (ADS)

    Man, J.; Li, W.; Zeng, L.; Wu, L.

    2015-12-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, it usually requires a relatively large ensemble size to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs a Polynomial Chaos expansion to approximate the original system, which reduces the sampling error. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF is even more computationally expensive than EnKF. Motivated by recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to alleviate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow. It is shown that RAPCKF outperforms EnKF at the same computational cost. Compared with the traditional PCKF, the RAPCKF is more applicable to strongly nonlinear and high-dimensional problems.

  8. Spatio-temporal Eigenvector Filtering: Application on Bioenergy Crop Impacts

    NASA Astrophysics Data System (ADS)

    Wang, M.; Kamarianakis, Y.; Georgescu, M.

    2017-12-01

    A suite of 10-year ensemble-based simulations was conducted to investigate the hydroclimatic impacts due to large-scale deployment of perennial bioenergy crops across the continental United States. Given the large size of the simulated dataset (about 60 TB), traditional hierarchical spatio-temporal statistical modelling cannot be implemented for the evaluation of physics parameterizations and biofuel impacts. In this work, we propose a filtering algorithm that takes into account the spatio-temporal autocorrelation structure of the data while avoiding spatial confounding. This method is used to quantify the robustness of simulated hydroclimatic impacts associated with bioenergy crops to alternative physics parameterizations and observational datasets. Results are evaluated against those obtained from three alternative Bayesian spatio-temporal specifications.

  9. Assessing the impact of land use change on hydrology by ensemble modelling (LUCHEM) II: Ensemble combinations and predictions

    USGS Publications Warehouse

    Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.

    2009-01-01

    This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of the models is shown to contribute useful information to the ensembles it is part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows), generally yield little improvement over the weighted mean ensemble. However, a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in multi-model ensembles. The reasons behind these observations may relate to the effects of the weighting schemes, non-stationarity of the climate series and possible cross-correlations between models. Crown Copyright © 2008.
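
    The two best-performing combination methods are easy to state precisely; the sketch below implements a daily trimmed mean and a calibration-weighted mean under simple assumptions (performance scores clipped at zero serve as the hypothetical weights).

      import numpy as np

      def trimmed_mean(preds, keep=4):
          """Average the central `keep` of the sorted daily predictions.
          preds: (n_models, n_days)."""
          s = np.sort(preds, axis=0)
          drop = (preds.shape[0] - keep) // 2
          return s[drop:drop + keep].mean(axis=0)

      def weighted_mean(preds, calib_scores):
          """Weights derived from calibration performance (e.g. NSE), clipped
          at zero so better-performing models receive larger weights."""
          w = np.clip(np.asarray(calib_scores, dtype=float), 0, None)
          return (w / w.sum()) @ preds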

  10. Benefits of an ultra large and multiresolution ensemble for estimating available wind power

    NASA Astrophysics Data System (ADS)

    Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik

    2016-04-01

    In this study, we investigate the benefits of an ultra-large ensemble of up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble is to be used as a basis for detecting events of extreme error in wind power forecasting. The forecast quantity is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions have served as input so far, with limited spatial resolution (~10-80 km) and member number (~50). Perturbations tailored to the specific requirements of wind power production are still missing. As a result, single extreme error events occur, albeit infrequently, that are not detected by such ensemble power forecasts. The numerical forecast model used in this study is the Weather Research and Forecasting (WRF) model. Model uncertainties are represented by stochastic parametrization of sub-grid processes, via stochastically perturbed parametrization tendencies in conjunction with the complementary stochastic kinetic-energy backscatter scheme, both already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter, which improves model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations connected to extreme error events are located, and corresponding perturbation techniques are applied. The demanding computational effort is handled by utilising the supercomputer JUQUEEN at Forschungszentrum Juelich.

  11. Genuine non-self-averaging and ultraslow convergence in gelation.

    PubMed

    Cho, Y S; Mazza, M G; Kahng, B; Nagler, J

    2016-08-01

    In irreversible aggregation processes, droplets or polymers of microscopic size successively coalesce until a large cluster of macroscopic scale forms. This gelation transition is widely believed to be self-averaging, meaning that the order parameter (the relative size of the largest connected cluster) attains well-defined values upon ensemble averaging, with no sample-to-sample fluctuations in the thermodynamic limit. Here, we report on anomalous gelation transition types. Depending on the growth rate of the largest clusters, the gelation transition can show very diverse patterns as a function of the control parameter, including multiple stochastic discontinuous transitions, genuine non-self-averaging and ultraslow convergence of the transition point. Our framework may be helpful in understanding and controlling gelation.

  12. How accurately can the microcanonical ensemble describe small isolated quantum systems?

    NASA Astrophysics Data System (ADS)

    Ikeda, Tatsuhiko N.; Ueda, Masahito

    2015-08-01

    We numerically investigate quantum quenches of a nonintegrable hard-core Bose-Hubbard model to test the accuracy of the microcanonical ensemble in small isolated quantum systems. We show that, in a certain range of system size, the accuracy increases with the dimension of the Hilbert space D as 1/D. We ascribe this rapid improvement to the absence of correlations between many-body energy eigenstates. Outside of that range, the accuracy is found to scale either as 1/√D or algebraically with the system size.

  13. Examining dynamic interactions among experimental factors influencing hydrologic data assimilation with the ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Cai, X. M.; Ancell, B. C.; Fan, Y. R.

    2017-11-01

    The ensemble Kalman filter (EnKF) is recognized as a powerful data assimilation technique that generates an ensemble of model variables through stochastic perturbations of forcing data and observations. However, relatively little guidance exists with regard to the proper specification of the magnitude of the perturbation and the ensemble size, posing a significant challenge in optimally implementing the EnKF. This paper presents a robust data assimilation system (RDAS), in which a multi-factorial design of the EnKF experiments is first proposed for hydrologic ensemble predictions. A multi-way analysis of variance is then used to examine potential interactions among factors affecting the EnKF experiments, achieving optimality of the RDAS with maximized performance of hydrologic predictions. The RDAS is applied to the Xiangxi River watershed which is the most representative watershed in China's Three Gorges Reservoir region to demonstrate its validity and applicability. Results reveal that the pairwise interaction between perturbed precipitation and streamflow observations has the most significant impact on the performance of the EnKF system, and their interactions vary dynamically across different settings of the ensemble size and the evapotranspiration perturbation. In addition, the interactions among experimental factors vary greatly in magnitude and direction depending on different statistical metrics for model evaluation including the Nash-Sutcliffe efficiency and the Box-Cox transformed root-mean-square error. It is thus necessary to test various evaluation metrics in order to enhance the robustness of hydrologic prediction systems.

  14. Ensemble Perception of Size in 4-5-Year-Old Children

    ERIC Educational Resources Information Center

    Sweeny, Timothy D.; Wurnitsch, Nicole; Gopnik, Alison; Whitney, David

    2015-01-01

    Groups of objects are nearly everywhere we look. Adults can perceive and understand the "gist" of multiple objects at once, engaging ensemble-coding mechanisms that summarize a group's overall appearance. Are these group-perception mechanisms in place early in childhood? Here, we provide the first evidence that 4-5-year-old children use…

  15. Four-Wave Mixing Spectroscopy of Quantum Dot Molecules

    NASA Astrophysics Data System (ADS)

    Sitek, A.; Machnikowski, P.

    2007-08-01

    We study theoretically the nonlinear four-wave mixing response of an ensemble of coupled pairs of quantum dots (quantum dot molecules). We discuss the shape of the echo signal depending on the parameters of the ensemble: the statistics of transition energies and the degree of size correlations between the dots forming the molecules.

  16. Analysis of the regional MiKlip decadal prediction system over Europe: skill, added value of regionalization, and ensemble size dependency

    NASA Astrophysics Data System (ADS)

    Reyers, Mark; Moemken, Julia; Pinto, Joaquim; Feldmann, Hendrik; Kottmeier, Christoph; MiKlip Module-C Team

    2017-04-01

    Decadal climate predictions can provide a useful basis for decision-making support systems for the public and private sectors. Several generations of decadal hindcasts and predictions have been generated throughout the German research program MiKlip. Together with the global climate predictions computed with MPI-ESM, the regional climate model (RCM) COSMO-CLM is used for regional downscaling by MiKlip Module-C. The RCMs provide climate information on spatial and temporal scales closer to the needs of potential users. In this study, two downscaled hindcast generations are analysed (named b0 and b1). The respective global generations are both initialized by nudging them towards different reanalysis anomaly fields. An ensemble of five starting years (1961, 1971, 1981, 1991, and 2001), each comprising ten ensemble members, is used for both generations in order to quantify the regional decadal prediction skill for precipitation, near-surface temperature, and wind speed over Europe. All datasets (including hindcasts, observations, reanalysis, and historical MPI-ESM runs) are pre-processed in an analogous manner by (i) removing the long-term trend and (ii) re-gridding to a common grid. Our analysis shows that there is potential for skillful decadal predictions over Europe in the regional MiKlip ensemble, but the skill is not systematic and depends on the PRUDENCE region and the variable. Further, the differences between the two hindcast generations are mostly small. As we used detrended time series, the predictive skill found in our study can probably be attributed to reasonable predictions of anomalies associated with natural climate variability. In a sensitivity study, it is shown that the results may change strongly when the long-term trend is kept in the datasets, as the skill of predicting the long-term trend (e.g. for temperature) then also plays a major role. The regionalization of the global ensemble provides an added value for decadal predictions for some complex regions like the Mediterranean and the Iberian Peninsula, while for other regions no systematic improvement is found. A clear dependence of the performance of the regional MiKlip system on the ensemble size is detected: for all variables in both hindcast generations, the skill increases when the ensemble is enlarged. The results indicate that ten members is an appropriate ensemble size for decadal predictions over Europe.

  17. Stratospheric ozone levels and their role for the dynamic response to volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Muthers, Stefan; Anet, Julien G.; Raible, Christoph C.; Brönnimann, Stefan; Arfeuille, Florian; Peter, Tom; Rozanov, Eugene; Shapiro, Alexander; Beer, Juerg; Steinhilber, Friedhelm; Brugnara, Yuri; Schmutz, Werner

    2013-04-01

    The role of different background ozone climatologies in the dynamic response to tropical volcanic eruptions is analyzed using an ensemble of simulations with the atmosphere-chemistry-ocean model SOCOL/MPIOM. In this sensitivity study, a single Tambora-sized tropical eruption is applied to ensembles with either pre-industrial or present-day ozone concentrations. The analysis focuses on the characteristics of the Northern European winter warming pattern following the eruption, which has been identified after several eruptions in observations and in proxy data. The sensitivity study reveals a higher probability of a large and significant winter warming pattern with pre-industrial ozone levels when the dynamic response of the chemistry to the eruption is disabled in the model. The positive temperature anomaly is driven by a positive NAO-like pressure pattern that leads to the advection of warm Atlantic air towards Northern Europe. With present-day concentrations, winter warming is also found in some ensemble members, but overall the probability is strongly reduced. It is shown that, with pre-industrial ozone concentrations, the coupling between positive anomalies of the polar vortex and the zonal wind in the troposphere is more effective, which could explain the higher likelihood of positive NAO-like pressure patterns and positive temperature anomalies in Northern Europe.

  18. Extending Correlation Filter-Based Visual Tracking by Tree-Structured Ensemble and Spatial Windowing.

    PubMed

    Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin

    2017-11-01

    Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm of complexity O(D) based on a large ensemble of CFB trackers. The ensemble, of size O(2^D), is organized over a binary tree of depth D, and learns the target appearance subspaces such that each constituent tracker becomes an expert of a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters, and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.

  19. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~20,000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.

  20. Striking Up the Band: Music Education through a Foucaultian Lens

    ERIC Educational Resources Information Center

    Mantie, Roger

    2012-01-01

    Large ensembles (e.g., choirs-orchestras-bands) have become prominent fixtures in most secondary schools and university schools/faculties of music in Canada and the United States. At the secondary school level, large ensembles have become, in effect, practically synonymous with the words "music education." This article derives from the…

  1. Influence of blocking on Northern European and Western Russian heatwaves in large climate model ensembles

    NASA Astrophysics Data System (ADS)

    Schaller, N.; Sillmann, J.; Anstey, J.; Fischer, E. M.; Grams, C. M.; Russo, S.

    2018-05-01

    Better preparedness for summer heatwaves could mitigate their adverse effects on society. This can potentially be attained through an increased understanding of the relationship between heatwaves and one of their main dynamical drivers, atmospheric blocking. In the 1979–2015 period, we find that there is a significant correlation between summer heatwave magnitudes and the number of days influenced by atmospheric blocking in Northern Europe and Western Russia. Using three large global climate model ensembles, we find similar correlations, indicating that these three models are able to represent the relationship between extreme temperature and atmospheric blocking, despite having biases in their simulation of individual climate variables such as temperature or geopotential height. Our results emphasize the need to use large ensembles of different global climate models as single realizations do not always capture this relationship. The three large ensembles further suggest that the relationship between summer heatwaves and atmospheric blocking will not change in the future. This could be used to statistically model heatwaves with atmospheric blocking as a covariate and aid decision-makers in planning disaster risk reduction and adaptation to climate change.

  2. A probabilistic verification score for contours demonstrated with idealized ice-edge forecasts

    NASA Astrophysics Data System (ADS)

    Goessling, Helge; Jung, Thomas

    2017-04-01

    We introduce a probabilistic verification score for ensemble-based forecasts of contours: the Spatial Probability Score (SPS). Defined as the spatial integral of local (half) Brier scores, the SPS can be considered the spatial analog of the Continuous Ranked Probability Score (CRPS). Applying the SPS to idealized seasonal ensemble forecasts of the Arctic sea-ice edge in a global coupled climate model, we demonstrate that the SPS responds properly to ensemble size, bias, and spread. When applied to individual forecasts or ensemble means (or quantiles), the SPS reduces to the 'volume' of mismatch, which in the case of the ice edge corresponds to the Integrated Ice Edge Error (IIEE).
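
    A minimal sketch of the SPS computation, assuming binary member fields (e.g. ice present/absent) on a regular grid with known cell areas:

    ```python
    import numpy as np

    def sps(ensemble_fields, obs_field, cell_area):
        """Spatial Probability Score: spatial integral of local (half) Brier
        scores. ensemble_fields: (m, ny, nx) binary fields, one per member;
        obs_field: (ny, nx) observed binary field; cell_area: (ny, nx)."""
        p = ensemble_fields.mean(axis=0)   # local ensemble probability
        brier = (p - obs_field) ** 2       # local half Brier score
        return float(np.sum(brier * cell_area))
    ```

    For a deterministic forecast the local probability is 0 or 1, so the score collapses to the total area of mismatch, which for the ice edge is the IIEE, consistent with the reduction described above.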

  3. Optical equivalence of isotropic ensembles of ellipsoidal particles in the Rayleigh-Gans-Debye and anomalous diffraction approximations and its consequences

    NASA Astrophysics Data System (ADS)

    Paramonov, L. E.

    2012-05-01

    Light scattering by isotropic ensembles of ellipsoidal particles is considered in the Rayleigh-Gans-Debye approximation. It is proved that randomly oriented ellipsoidal particles are optically equivalent to polydisperse randomly oriented spheroidal particles and polydisperse spherical particles. Density functions of the shape and size distributions for equivalent ensembles of spheroidal and spherical particles are presented. In the anomalous diffraction approximation, equivalent ensembles of particles are shown to also have equal extinction, scattering, and absorption coefficients. Consequences of optical equivalence are considered. The results are illustrated by numerical calculations of the angular dependence of the scattering phase function using the T-matrix method and the Mie theory.

  4. Sensitivity test and ensemble hazard assessment for tephra fallout at Campi Flegrei, Italy

    NASA Astrophysics Data System (ADS)

    Selva, J.; Costa, A.; De Natale, G.; Di Vito, M. A.; Isaia, R.; Macedonio, G.

    2018-02-01

    We present the results of a statistical study of tephra dispersal in the case of a reactivation of the Campi Flegrei volcano. To represent the spectrum of possible eruptive sizes, four classes of eruptions were considered. Excluding the lava emission, three classes are explosive (Small, Medium, and Large) and can produce a significant quantity of volcanic ash. Hazard assessments were made through simulations of atmospheric dispersion of ash and lapilli, considering the full variability of winds and eruptive vents. The results are presented in the form of conditional hazard curves given the occurrence of specific eruptive sizes, representative members of each size class, which are then combined to quantify the conditional hazard given an eruption of any size. The main focus of this analysis was to constrain the epistemic uncertainty (i.e. that associated with the level of scientific knowledge of the phenomena), in order to provide unbiased hazard estimations. The epistemic uncertainty in the estimation of hazard curves was quantified by making use of scientifically acceptable alternatives aggregated in the final results. The choice of such alternative models was made after a comprehensive sensitivity analysis which considered different weather databases, alternative modelling of submarine eruptive vents, tephra total grain-size distributions (TGSD) with different relative mass fractions of fine ash, and the effect of ash aggregation. The results showed that the dominant uncertainty is related to the combined effect of the uncertainty in the fraction of fine particles relative to the total mass and in how ash aggregation is modelled. The latter is particularly relevant in the case of magma-water interactions during explosive eruptive phases, when a large fraction of fine ash can form accretionary lapilli that may contribute significantly to increasing the tephra load in the proximal areas. The variability induced by the use of different meteorological databases and by the selected approach to modelling offshore eruptions was relatively insignificant. The uncertainty arising from the alternative implementations, which would have been neglected in standard (Bayesian) quantifications, was finally quantified by ensemble modelling and represented by hazard and probability maps produced at different confidence levels.

  5. Extinction spectra of suspensions of microspheres: determination of the spectral refractive index and particle size distribution with nanometer accuracy.

    PubMed

    Gienger, Jonas; Bär, Markus; Neukammer, Jörg

    2018-01-10

    A method is presented to infer simultaneously the wavelength-dependent real refractive index (RI) of the material of microspheres and their size distribution from extinction measurements of particle suspensions. To derive the averaged spectral optical extinction cross section of the microspheres from such ensemble measurements, we determined the particle concentration by flow cytometry to an accuracy of typically 2% and adjusted the particle concentration to ensure that perturbations due to multiple scattering are negligible. For analysis of the extinction spectra, we employ Mie theory, a series-expansion representation of the refractive index, and nonlinear numerical optimization. In contrast to other approaches, our method offers the advantage of simultaneously determining the size, size distribution, and spectral refractive index of ensembles of microparticles, including uncertainty estimation.

  6. A Kolmogorov-Smirnov test for the molecular clock based on Bayesian ensembles of phylogenies

    PubMed Central

    Antoneli, Fernando; Passos, Fernando M.; Lopes, Luciano R.

    2018-01-01

    Divergence date estimates are central to understanding evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and the well-known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit tests. In the strict clock case, the method consists in using the one-sample KS test to directly test whether the phylogeny is clock-like, in other words, whether it follows a Poisson law. The ECD is computed from the discretized branch lengths, and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees. To compensate for the auto-correlation in the ensemble of trees and for pseudo-replication, we take advantage of thinning and effective sample size, two features provided by Bayesian inference MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures, and in this case we propose the use of the two-sample KS test with samples from two continuous branch length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. In this second form the test can also be applied to test for relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch length ECD, instead of one consensus tree, yields a considerable reduction of the effects of small sample size and provides a gain of power. PMID:29300759
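
    A sketch of the strict-clock test described above, using SciPy; the bin width used to discretize branch lengths is an assumed input. Note that the classical KS test presumes a continuous null distribution, so applying it to a discrete Poisson makes it conservative:

    ```python
    import numpy as np
    from scipy import stats

    def strict_clock_ks(branch_lengths, bin_width):
        """One-sample KS test of the strict clock: discretize branch lengths
        into counts and compare their empirical CDF with a Poisson law whose
        rate is the average (discretized) branch length over the tree ensemble."""
        counts = np.floor(np.asarray(branch_lengths) / bin_width)
        lam = counts.mean()
        return stats.kstest(counts, stats.poisson(lam).cdf)

    # Relaxed-clock (two-sample) form: pool branch lengths from ensembles of
    # clock-constrained and unconstrained trees and compare their ECDs, e.g.
    # stats.ks_2samp(constrained_lengths, unconstrained_lengths)
    ```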

  7. The role of protein homochirality in shaping the energy landscape of folding

    PubMed Central

    Nanda, Vikas; Andrianarijaona, Aina; Narayanan, Chitra

    2007-01-01

    The homochirality, or isotacticity, of the natural amino acids facilitates the formation of regular secondary structures such as α-helices and β-sheets. However, many examples exist in nature where novel polypeptide topologies use both l- and d-amino acids. In this study, we explore how the stereochemistry of the polypeptide backbone influences basic properties such as compactness and the size of fold space by simulating both lattice and all-atom polypeptide chains. We formulate a rectangular lattice chain model in both two and three dimensions, where monomers are chiral, which has the effect of restricting local conformation. Syndiotactic chains, with alternating chirality of adjacent monomers, have a very large ensemble of accessible conformations characterized predominantly by extended structures. Isotactic chains, on the other hand, have far fewer possible conformations, and a significant fraction of these are compact. Syndiotactic chains are often unable to access the maximally compact states available to their isotactic counterparts of the same length. Similar features are observed in all-atom models of isotactic versus syndiotactic polyalanine. Our results suggest that protein isotacticity has evolved to increase the enthalpy of chain collapse by facilitating compact helical states and to reduce the entropic cost of folding by restricting the size of the unfolded ensemble of competing states. PMID:17600146
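
    The contrast between isotactic and syndiotactic lattice chains can be illustrated with a toy enumeration. The turn rule below (an 'L' monomer may go straight or turn left, a 'D' monomer straight or right) is an illustrative assumption, not necessarily the paper's exact model:

    ```python
    def enumerate_chains(n, pattern):
        """Enumerate self-avoiding n-monomer chains on the square lattice,
        with each bend constrained by the chirality of the monomer at the bend."""
        dirs = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # turning left = +1 mod 4
        chains = []

        def grow(path, d):
            if len(path) == n:
                chains.append(tuple(path))
                return
            handed = pattern[(len(path) - 1) % len(pattern)]
            for turn in ((0, 1) if handed == 'L' else (0, -1)):
                nd = (d + turn) % 4
                nxt = (path[-1][0] + dirs[nd][0], path[-1][1] + dirs[nd][1])
                if nxt not in path:                  # enforce self-avoidance
                    path.append(nxt)
                    grow(path, nd)
                    path.pop()

        grow([(0, 0), (1, 0)], 0)   # fix the first bond to factor out symmetry
        return chains

    def extent(chain):
        xs, ys = zip(*chain)
        return max(max(xs) - min(xs), max(ys) - min(ys))

    for name, pattern in [("isotactic", "L"), ("syndiotactic", "LD")]:
        chains = enumerate_chains(12, pattern)
        mean_ext = sum(map(extent, chains)) / len(chains)
        print(f"{name}: {len(chains)} conformations, mean extent {mean_ext:.2f}")
    ```

    Under this rule, all-left (isotactic) chains curl into spirals and lose conformations to self-collision, while alternating (syndiotactic) chains zigzag outward, qualitatively reproducing the smaller, more compact isotactic ensemble described above.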

  8. Weather extremes in very large, high-resolution ensembles: the weatherathome experiment

    NASA Astrophysics Data System (ADS)

    Allen, M. R.; Rosier, S.; Massey, N.; Rye, C.; Bowery, A.; Miller, J.; Otto, F.; Jones, R.; Wilson, S.; Mote, P.; Stone, D. A.; Yamazaki, Y. H.; Carrington, D.

    2011-12-01

    Resolution and ensemble size are often seen as alternatives in climate modelling. Models with sufficient resolution to simulate many classes of extreme weather cannot normally be run often enough to assess the statistics of rare events, still less how these statistics may be changing. As a result, assessments of the impact of external forcing on regional climate extremes must be based either on statistical downscaling from relatively coarse-resolution models or on statistical extrapolation from 10-year to 100-year events. Under the weatherathome experiment, part of the climateprediction.net initiative, we have compiled the Met Office Regional Climate Model HadRM3P to run at 25 and 50 km resolution on personal computers volunteered by the general public, embedded within the HadAM3P global atmosphere model. With a global network of about 50,000 volunteers, this allows us to run time-slice ensembles of essentially unlimited size, exploring the statistics of extreme weather under a range of scenarios for surface forcing and atmospheric composition, allowing for uncertainty in both boundary conditions and model parameters. Current experiments, developed with the support of Microsoft Research, focus on three regions: the Western USA, Europe and Southern Africa. We initially simulate the period 1959-2010 to establish which variables are realistically simulated by the model and on what scales. Our next experiments focus on the Event Attribution problem, exploring how the probability of various types of extreme weather would have been different over the recent past in a world unaffected by human influence, following the design of Pall et al (2011) but extended to a longer period and higher spatial resolution. We will present the first results of this unique, global, participatory experiment and discuss the implications for the attribution of recent weather events to anthropogenic influence on climate.

  9. Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF): A data assimilation scheme for memory intensive, high performance computing

    NASA Astrophysics Data System (ADS)

    Hut, Rolf; Amisigo, Barnabas A.; Steele-Dunne, Susan; van de Giesen, Nick

    2015-12-01

    Reduction of Used Memory Ensemble Kalman Filtering (RumEnKF) is introduced as a variant of the Ensemble Kalman Filter (EnKF). RumEnKF differs from the EnKF in that it does not store the entire ensemble, but rather saves only the first two moments of the ensemble distribution. In this way, the number of ensemble members that can be calculated depends less on available memory and mainly on available computing power (CPU). RumEnKF is developed to make optimal use of current-generation supercomputer architecture, where the number of available floating point operations (flops) increases more rapidly than the available memory and where inter-node communication can quickly become a bottleneck. RumEnKF reduces the memory used compared to the EnKF when the number of ensemble members is greater than half the number of state variables. In this paper, three simple models are used (auto-regressive, low-dimensional Lorenz and high-dimensional Lorenz) to show that RumEnKF performs similarly to the EnKF. Furthermore, it is also shown that increasing the ensemble size has a similar impact on the estimation error from the three algorithms.
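
    A compact sketch of the central idea, accumulating the first two moments with a Welford-style update so that no full ensemble is ever held in memory (illustrative, not the published implementation):

    ```python
    import numpy as np

    def streaming_moments(members, nstate):
        """Accumulate the ensemble mean and covariance one member at a time;
        `members` may be a generator producing model realizations on the fly."""
        mean = np.zeros(nstate)
        M2 = np.zeros((nstate, nstate))   # running sum of deviation outer products
        k = 0
        for x in members:
            k += 1
            delta = x - mean
            mean += delta / k
            M2 += np.outer(delta, x - mean)
        return mean, M2 / (k - 1)         # sample mean and covariance

    rng = np.random.default_rng(0)
    mean, cov = streaming_moments((rng.standard_normal(4) for _ in range(200)), 4)
    ```

    Storing the mean plus a symmetric covariance costs about N + N(N+1)/2 numbers versus m·N for a full ensemble, which is why the memory saving sets in once the member count m exceeds roughly half the number of state variables N, as stated above.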

  10. Acoustic vibrations of metal nano-objects: Time-domain investigations

    NASA Astrophysics Data System (ADS)

    Crut, Aurélien; Maioli, Paolo; Del Fatti, Natalia; Vallée, Fabrice

    2015-01-01

    Theoretical and time-domain experimental investigations of the vibrational acoustic response of nano-objects are described, focusing on metallic ones. Acoustic vibrations are modeled using a macroscopic-like approach based on continuum mechanics with the proper boundary conditions, a model which yields results in excellent agreement with experiment and with atomistic calculations, down to the nanometric scale. Vibrational mode excitation and detection mechanisms and the associated mode selection in ultrafast pump-probe spectroscopy are discussed, and the measured time-dependent signals in single nanoparticles and in ensembles of nanoparticles are modeled. The launched modes, their periods and their damping rates are compared to experimental results obtained on ensembles of nano-objects with different compositions, morphologies and environments, and with sizes ranging from one to hundreds of nanometers. The recent extension of time-domain spectroscopy to individual nano-objects has shed new light on the vibrational responses of isolated nanoparticles, in particular on their damping, but also raises questions about the origin of its large particle-to-particle dispersion.

  11. The memory loophole

    NASA Astrophysics Data System (ADS)

    Shanahan, Daniel

    2008-05-01

    The memory loophole supposes that the measurement of an entangled pair is influenced by the measurements of earlier pairs in the same run of measurements. To assert the memory loophole is thus to deny that measurement is intrinsically random. It is argued that measurement might instead involve a process of recovery and equilibrium in the measuring apparatus akin to that described in thermodynamics by Le Chatelier's principle. The predictions of quantum mechanics would then arise from conservation of the measured property in the combined system of apparatus and measured ensemble. Measurement would be consistent with classical laws of conservation, not simply in the classical limit of large numbers, but whatever the size of the ensemble. However, variances from quantum mechanical predictions would be self-correcting and centripetal, rather than Markovian and increasing as under the standard theory. Entanglement correlations would persist, not because the entangled particles act in concert (which would entail nonlocality), but because the measurements of the particles were influenced by the one fluctuating state of imbalance in the process of measurement.

  12. Peptidic Macrocycles - Conformational Sampling and Thermodynamic Characterization

    PubMed Central

    2018-01-01

    Macrocycles are of considerable interest as highly specific drug candidates, yet they challenge standard conformer generators with their large number of rotatable bonds and conformational restrictions. Here, we present a molecular dynamics-based routine that bypasses current limitations in conformational sampling and extensively profiles the free energy landscape of peptidic macrocycles in solution. We perform accelerated molecular dynamics simulations to capture a diverse conformational ensemble. By applying an energetic cutoff, followed by geometric clustering, we demonstrate the striking robustness and efficiency of the approach in identifying highly populated conformational states of cyclic peptides. The resulting structural and thermodynamic information is benchmarked against interproton distances from NMR experiments and conformational states identified by X-ray crystallography. Using three different model systems of varying size and flexibility, we show that the method reliably reproduces experimentally determined structural ensembles and is capable of identifying key conformational states that include the bioactive conformation. Thus, the described approach is a robust method to generate conformations of peptidic macrocycles and holds promise for structure-based drug design. PMID:29652495

  13. Squids old and young: Scale-free design for a simple billboard

    NASA Astrophysics Data System (ADS)

    Packard, Andrew

    2011-03-01

    Squids employ a large range of brightness-contrast spatial frequencies in their camouflage and signalling displays. The 'billboard' of coloured elements ('spots' = chromatophore organs) in the skin is built autopoietically (probably by lateral inhibitory processes) and enlarges as much as 10,000-fold during development. The resulting two-dimensional array is a fractal-like colour/size hierarchy lying in several layers of a multilayered network. Dynamic control of the array by muscles and nerves produces patterns that recall 'half-tone' processing (cf. ink-jet printer). In the more sophisticated (loliginid) squids, patterns also combine 'continuous tones' (cf. dye-sublimation printer). Physiologists and engineers can exploit the natural colour-coding of the integument to understand nerve and muscle system dynamics, examined here at the level of the ensemble. Integrative functions of the whole (H) are analysed in terms of the power spectrum within and between ensembles and of spontaneous waves travelling through the billboard. Video material may be obtained from the author at the above address.

  14. Peptidic Macrocycles - Conformational Sampling and Thermodynamic Characterization.

    PubMed

    Kamenik, Anna S; Lessel, Uta; Fuchs, Julian E; Fox, Thomas; Liedl, Klaus R

    2018-05-29

    Macrocycles are of considerable interest as highly specific drug candidates, yet they challenge standard conformer generators with their large number of rotatable bonds and conformational restrictions. Here, we present a molecular dynamics-based routine that bypasses current limitations in conformational sampling and extensively profiles the free energy landscape of peptidic macrocycles in solution. We perform accelerated molecular dynamics simulations to capture a diverse conformational ensemble. By applying an energetic cutoff, followed by geometric clustering, we demonstrate the striking robustness and efficiency of the approach in identifying highly populated conformational states of cyclic peptides. The resulting structural and thermodynamic information is benchmarked against interproton distances from NMR experiments and conformational states identified by X-ray crystallography. Using three different model systems of varying size and flexibility, we show that the method reliably reproduces experimentally determined structural ensembles and is capable of identifying key conformational states that include the bioactive conformation. Thus, the described approach is a robust method to generate conformations of peptidic macrocycles and holds promise for structure-based drug design.

  15. Spherical cluster ensembles with fractal structure in LaSrMnO: New form of self-organization in solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okunev, V. D.; Samoilenko, Z. A.; Burkhovetski, V. V.

    The growth of La0.7Sr0.3MnO3 films in magnetron plasma, under special conditions, leads to the appearance of ensembles of micron-sized spherical crystalline clusters with fractal structure, which we consider to be a new form of self-organization in solids. Each ensemble contains 10^5-10^6 elementary clusters, 100-250 Å in diameter. Interaction of the clusters in the ensemble is realized through the interatomic chemical bonds intrinsic to the manganites. Integration of the peripheral areas of interacting clusters results in the formation of a common intercluster medium in the ensemble. We argue that the ensembles with fractal structure built into the paramagnetic disordered matrix have ferromagnetic properties. The absence of sharp borders between elementary clusters and the presence of a common intercluster medium inside each ensemble make it possible to rearrange the magnetic order and to change the volume of the ferromagnetic phase, automatically providing a high sensitivity of the material to an external field.

  16. Large unbalanced credit scoring using Lasso-logistic regression ensemble.

    PubMed

    Wang, Hong; Xu, Qingsong; Zhou, Lifeng

    2015-01-01

    Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data is first balanced and diversified by clustering and bagging algorithms. Then we apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision tree, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data.
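
    A minimal sketch of the core classifier (without the paper's clustering-based balancing step), using scikit-learn; note that the `estimator` keyword of BaggingClassifier requires scikit-learn 1.2 or later (older releases call it `base_estimator`):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    # Imbalanced synthetic data standing in for a credit-scoring set.
    X, y = make_classification(n_samples=20000, n_features=20,
                               weights=[0.95], random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

    # L1 (Lasso) penalized logistic regression as the base classifier,
    # bagged over bootstrap subsamples for diversity.
    base = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    ensemble = BaggingClassifier(estimator=base, n_estimators=50,
                                 max_samples=0.5, random_state=0)
    ensemble.fit(Xtr, ytr)
    scores = ensemble.predict_proba(Xte)[:, 1]
    print("AUC:", roc_auc_score(yte, scores))
    print("F1 :", f1_score(yte, ensemble.predict(Xte)))
    ```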

  17. From climate model ensembles to climate change impacts and adaptation: A case study of water resource management in the southwest of England

    NASA Astrophysics Data System (ADS)

    Lopez, Ana; Fung, Fai; New, Mark; Watts, Glenn; Weston, Alan; Wilby, Robert L.

    2009-08-01

    The majority of climate change impacts and adaptation studies so far have been based on at most a few deterministic realizations of future climate, usually representing different emissions scenarios. Large ensembles of climate models are increasingly available, either as ensembles of opportunity or as perturbed physics ensembles, providing a wealth of additional data that is potentially useful for improving adaptation strategies to climate change. Because of the novelty of this ensemble information, there is little previous experience of practical applications or of the added value of this information for impacts and adaptation decision making. This paper evaluates the value of perturbed physics ensembles of climate models for understanding and planning public water supply under climate change. We deliberately select water resource models that are already used by water supply companies and regulators, on the assumption that uptake of information from large ensembles of climate models will be more likely if it does not involve significant investment in new modeling tools and methods. We illustrate the methods with a case study of the Wimbleball water resource zone in the southwest of England. This zone is sufficiently simple to demonstrate the utility of the approach but has enough complexity to allow a variety of different decisions to be made. Our research shows that the additional information contained in the climate model ensemble provides a better understanding of the possible ranges of future conditions, compared to the use of single-model scenarios. Furthermore, with careful presentation, decision makers will find the results from large ensembles of models more accessible and will be able to compare more easily the merits of different management options and the timing of different adaptations. The overhead in additional time and expertise for carrying out the impacts analysis will be justified by the increased quality of the decision-making process. We remark that even though we have focused our study on a water resource system in the United Kingdom, our conclusions about the added value of climate model ensembles in guiding adaptation decisions can be generalized to other sectors and geographical regions.

  18. On the Local Equivalence Between the Canonical and the Microcanonical Ensembles for Quantum Spin Systems

    NASA Astrophysics Data System (ADS)

    Tasaki, Hal

    2018-06-01

    We study a quantum spin system on the d-dimensional hypercubic lattice Λ with N = L^d sites and periodic boundary conditions. We take an arbitrary translation-invariant short-ranged Hamiltonian. For this system, we consider both the canonical ensemble with inverse temperature β₀ and the microcanonical ensemble with the corresponding energy U_N(β₀). For an arbitrary self-adjoint operator Â whose support is contained in a hypercubic block B inside Λ, we prove that the expectation values of Â with respect to these two ensembles are close to each other for large N, provided that β₀ is sufficiently small and the number of sites in B is o(N^{1/2}). This establishes the equivalence of ensembles at the level of local states in a large but finite system. The result is essentially that of Brandão and Cramer (here restricted to the case of the canonical and the microcanonical ensembles), but we prove improved estimates in an elementary manner. We also review and prove standard results on the thermodynamic limits of thermodynamic functions and the equivalence of ensembles in terms of thermodynamic functions. The present paper assumes only elementary knowledge of quantum statistical mechanics and quantum spin systems.
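
    In symbols, the local equivalence can be paraphrased as follows (a restatement of the abstract, with the notation assumed there):

    ```latex
    \lim_{N\to\infty}
      \Bigl|\, \langle \hat{A} \rangle^{\mathrm{can}}_{\beta_0}
             - \langle \hat{A} \rangle^{\mathrm{mc}}_{U_N(\beta_0)} \Bigr| = 0,
    \qquad
    \operatorname{supp}\hat{A} \subset B,\quad |B| = o\bigl(N^{1/2}\bigr),
    ```

    for sufficiently small β₀.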

  19. JEnsembl: a version-aware Java API to Ensembl data systems

    PubMed Central

    Paterson, Trevor; Law, Andy

    2012-01-01

    Motivation: The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schemas. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications or for embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. Results: The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive), thus facilitating better analysis repeatability and allowing ‘through time’ comparative analyses to be performed. Availability: Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net). Contact: jensembl-develop@lists.sf.net, andy.law@roslin.ed.ac.uk, trevor.paterson@roslin.ed.ac.uk PMID:22945789

  20. Ensemble data assimilation in the Red Sea: sensitivity to ensemble selection and atmospheric forcing

    NASA Astrophysics Data System (ADS)

    Toye, Habib; Zhan, Peng; Gopalakrishnan, Ganesh; Kartadikaria, Aditya R.; Huang, Huang; Knio, Omar; Hoteit, Ibrahim

    2017-07-01

    We present our efforts to build an ensemble data assimilation and forecasting system for the Red Sea. The system consists of the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm) to simulate ocean circulation and of the Data Assimilation Research Testbed (DART) for ensemble data assimilation. DART has been configured to integrate all members of an ensemble adjustment Kalman filter (EAKF) in parallel; based on this, we adapted the ensemble operations in DART to use an invariant ensemble, i.e., an ensemble Optimal Interpolation (EnOI) algorithm. This approach requires only a single forward model integration in the forecast step and therefore saves substantial computational cost. To deal with the strong seasonal variability of the Red Sea, the EnOI ensemble is seasonally selected from a climatology of long-term model outputs. Observations of remote-sensing sea surface height (SSH) and sea surface temperature (SST) are assimilated every 3 days. Real-time atmospheric fields from the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF) are used as forcing in different assimilation experiments. We investigate the behaviors of the EAKF and (seasonal) EnOI and compare their performances for assimilating and forecasting the circulation of the Red Sea. We further assess the sensitivity of the assimilation system to various filtering parameters (ensemble size, inflation) and atmospheric forcing.
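
    A minimal sketch of one EnOI analysis step with an invariant (here, seasonally selected) ensemble; matrix shapes and variable names are illustrative assumptions:

    ```python
    import numpy as np

    def enoi_analysis(xf, X_clim, H, y, R):
        """xf: forecast state (n,); X_clim: invariant ensemble of climatological
        states (n, m); H: linearized observation operator (p, n);
        y: observations (p,); R: observation error covariance (p, p)."""
        m = X_clim.shape[1]
        A = X_clim - X_clim.mean(axis=1, keepdims=True)   # static anomalies
        Pf = A @ A.T / (m - 1)                            # static background covariance
        S = H @ Pf @ H.T + R                              # innovation covariance
        innovation = y - H @ xf
        return xf + Pf @ H.T @ np.linalg.solve(S, innovation)
    ```

    Because the anomalies never change, only the single state xf has to be propagated by the model between analyses, which is the source of the computational saving over the EAKF noted above.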

  1. Challenges in Visual Analysis of Ensembles

    DOE PAGES

    Crossno, Patricia

    2018-04-12

    Modeling physical phenomena through computational simulation increasingly relies on generating a collection of related runs, known as an ensemble. In this paper, we explore the challenges we face in developing analysis and visualization systems for large and complex ensemble data sets, which we seek to understand without having to view the results of every simulation run. Implementing approaches and ideas developed in response to this goal, we demonstrate the analysis of a 15K run material fracturing study using Slycat, our ensemble analysis system.

  2. Challenges in Visual Analysis of Ensembles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crossno, Patricia

    Modeling physical phenomena through computational simulation increasingly relies on generating a collection of related runs, known as an ensemble. In this paper, we explore the challenges we face in developing analysis and visualization systems for large and complex ensemble data sets, which we seek to understand without having to view the results of every simulation run. Implementing approaches and ideas developed in response to this goal, we demonstrate the analysis of a 15K run material fracturing study using Slycat, our ensemble analysis system.

  3. Western North Pacific Tropical Cyclone Model Tracks in Present and Future Climates

    NASA Astrophysics Data System (ADS)

    Nakamura, Jennifer; Camargo, Suzana J.; Sobel, Adam H.; Henderson, Naomi; Emanuel, Kerry A.; Kumar, Arun; LaRow, Timothy E.; Murakami, Hiroyuki; Roberts, Malcolm J.; Scoccimarro, Enrico; Vidale, Pier Luigi; Wang, Hui; Wehner, Michael F.; Zhao, Ming

    2017-09-01

    Western North Pacific tropical cyclone (TC) model tracks are analyzed in two large multimodel ensembles, spanning a large variety of models and multiple future climate scenarios. Two methodologies are used to synthesize the properties of TC tracks in this large data set: cluster analysis and mass moment ellipses. First, the models' TC tracks are compared to the characteristics of observed TC tracks, and a subset of the models is chosen for analysis based on the tracks' similarity to observations and sample size. Potential changes in track types in a warming climate are identified by comparing the kernel-smoothed probability distributions of various track variables in historical and future scenarios using a Kolmogorov-Smirnov significance test. Two track changes are identified. The first is a statistically significant increase in the north-south expansion, which can also be viewed as a poleward shift, as TC tracks are prevented from expanding equatorward by the weak Coriolis force near the equator. The second change is an eastward shift of the storm tracks near the central Pacific in one of the multimodel ensembles, indicating a possible increase in the occurrence of storms near Hawaii in a warming climate. The dependence of the results on which model and future scenario are considered emphasizes the necessity of including multiple models and scenarios when considering future changes in TC characteristics.
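
    A minimal sketch of the two-sample comparison; the latitude samples below are synthetic stand-ins for per-track statistics pooled from historical and future-scenario runs:

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    lat_hist = rng.normal(18.0, 6.0, size=800)     # hypothetical historical tracks
    lat_future = rng.normal(19.5, 6.5, size=800)   # hypothetical poleward-shifted tracks

    statistic, p_value = ks_2samp(lat_hist, lat_future)
    print(f"KS statistic {statistic:.3f}, p = {p_value:.3g}")
    ```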

  4. Electrical coupling in ensembles of nonexcitable cells: modeling the spatial map of single cell potentials.

    PubMed

    Cervera, Javier; Manzanares, Jose Antonio; Mafe, Salvador

    2015-02-19

    We analyze the coupling of model nonexcitable (non-neural) cells assuming that the cell membrane potential is the basic individual property. We obtain this potential on the basis of the inward and outward rectifying voltage-gated channels characteristic of cell membranes. We concentrate on the electrical coupling of a cell ensemble rather than on the biochemical and mechanical characteristics of the individual cells, obtain the map of single cell potentials using simple assumptions, and suggest procedures to collectively modify this spatial map. The response of the cell ensemble to an external perturbation and the consequences of cell isolation, heterogeneity, and ensemble size are also analyzed. The results suggest that simple coupling mechanisms can be significant for the biophysical chemistry of model biomolecular ensembles. In particular, the spatiotemporal map of single cell potentials should be relevant for the uptake and distribution of charged nanoparticles over model cell ensembles and the collective properties of droplet networks incorporating protein ion channels inserted in lipid bilayers.

  5. Ensemble-marginalized Kalman filter for linear time-dependent PDEs with noisy boundary conditions: application to heat transfer in building walls

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco; Sawlan, Zaid; Scavino, Marco; Tempone, Raúl; Wood, Christopher

    2018-07-01

    In this work, we present the ensemble-marginalized Kalman filter (EnMKF), a sequential algorithm analogous to our previously proposed approach (Ruggeri et al 2017 Bayesian Anal. 12 407–33, Iglesias et al 2018 Int. J. Heat Mass Transfer 116 417–31), for estimating the state and parameters of linear parabolic partial differential equations in initial-boundary value problems when the boundary data are noisy. We apply the EnMKF to infer the thermal properties of building walls and to estimate the corresponding heat flux from real and synthetic data. Compared with a modified ensemble Kalman filter (EnKF) that is not marginalized, the EnMKF reduces the bias error, avoids the collapse of the ensemble without needing to add inflation, and converges to the mean-field posterior using a fraction of the ensemble size required by the EnKF. According to our results, the marginalization technique in the EnMKF is key to the performance improvement with smaller ensembles at any fixed time.

  6. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to the static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and an operational variational data assimilation system to provide a basis for understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions and to allow more observations to be assimilated without the need for strict background checks that would eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow contents are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. In this presentation we provide details that explain the apparent benefit of using ensembles for cloudy radiance assimilation in an EnVar context.
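
    A sketch of an observation-space cost term built on the Huber norm: quadratic for small normalized innovations, linear beyond a threshold, so heavy-tailed innovations are downweighted rather than rejected outright. The threshold k = 1.345 is a common default from robust statistics, assumed here:

    ```python
    import numpy as np

    def huber_obs_cost(innovations, sigma_o, k=1.345):
        """Sum of Huber losses over normalized innovations (y - Hx) / sigma_o."""
        z = np.abs(np.asarray(innovations)) / sigma_o
        return float(np.sum(np.where(z <= k, 0.5 * z**2, k * z - 0.5 * k**2)))
    ```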

  7. Current and Future Decadal Trends in the Oceanic Carbon Uptake Are Dominated by Internal Variability

    NASA Astrophysics Data System (ADS)

    Li, Hongmei; Ilyina, Tatiana

    2018-01-01

    We investigate the internal decadal variability of the ocean carbon uptake using 100 ensemble simulations based on the Max Planck Institute Earth system model (MPI-ESM). We find that on decadal time scales, internal variability (ensemble spread) is as large as the forced temporal variability (ensemble mean), and the largest internal variability is found in major carbon sink regions, that is, the 50-65°S band of the Southern Ocean, the North Pacific, and the North Atlantic. The MPI-ESM ensemble produces both positive and negative 10 year trends in the ocean carbon uptake in agreement with observational estimates. Negative decadal trends are projected to occur in the future under RCP4.5 scenario. Due to the large internal variability, the Southern Ocean and the North Pacific require the most ensemble members (more than 53 and 46, respectively) to reproduce the forced decadal trends. This number increases up to 79 in future decades as CO2 emission trajectory changes.

  8. Competition versus Cooperation: Implications for Music Teachers Following Students Feedback from Participation in a Large-Scale Cooperative Music Festival

    ERIC Educational Resources Information Center

    Lowe, Geoffrey M.

    2018-01-01

    Competition is reported in the general education literature as having a largely detrimental impact upon student engagement and long-term motivation, yet competition has long been an accepted part of the music education ensemble landscape. Adjudicated ensemble competitions and competition-festivals are commonplace in most Australian states, as…

  9. Increased Sparsity of Hippocampal CA1 Neuronal Ensembles in a Mouse Model of Down Syndrome Assayed by Arc Expression

    PubMed Central

    Smith-Hicks, Constance L.; Cai, Peiling; Savonenko, Alena V.; Reeves, Roger H.; Worley, Paul F.

    2017-01-01

    Down syndrome (DS) is the leading chromosomal cause of intellectual disability, yet the neural substrates of its learning and memory deficits remain poorly understood. Here, we interrogate neural networks linked to learning and memory in a well-characterized model of DS, the Ts65Dn mouse. We report that Ts65Dn mice exhibit exploratory behavior that is not different from that of littermate wild-type (WT) controls, yet behavioral activation of Arc mRNA transcription in pyramidal neurons of the CA1 region of the hippocampus is altered in Ts65Dn mice. In WT mice, a 5 min period of exploration of a novel environment resulted in Arc mRNA transcription in 39% of CA1 neurons. By contrast, the same period of exploration resulted in only ~20% of CA1 neurons transcribing Arc mRNA in Ts65Dn mice, indicating increased sparsity of the behaviorally induced ensemble. Like those of WT mice, the CA1 pyramidal neurons of Ts65Dn mice reactivated Arc transcription during a second exposure to the same environment 20 min after the first experience, but the size of the reactivated ensemble was only ~60% of that in WT mice. After repeated daily exposures, there was a further decline in the size of the reactivated ensemble in Ts65Dn mice and a disruption of reactivation. Together, these data demonstrate a reduction in the size of the behaviorally induced network that expresses Arc in Ts65Dn mice and a disruption of the long-term stability of the ensemble. We propose that these deficits in network formation and stability contribute to cognitive symptoms in DS. PMID:28217086

  10. Random forests ensemble classifier trained with data resampling strategy to improve cardiac arrhythmia diagnosis.

    PubMed

    Ozçift, Akin

    2011-05-01

    Supervised classification algorithms are commonly used in the design of computer-aided diagnosis systems. In this study, we present a resampling-strategy-based Random Forests (RF) ensemble classifier to improve the diagnosis of cardiac arrhythmia. Random Forests is an ensemble classifier that consists of many decision trees and outputs the class that is the mode of the classes output by the individual trees. In this way, an RF ensemble classifier performs better than a single tree in terms of classification performance. In general, multiclass datasets with an unbalanced distribution of sample sizes are difficult to analyze in terms of class discrimination. Cardiac arrhythmia is such a dataset, with multiple classes of small sample size, and it is therefore a suitable test of our resampling-based training strategy. The dataset contains 452 samples in fourteen types of arrhythmia, and eleven of these classes have sample sizes of less than 15. Our diagnosis strategy consists of two parts: (i) a correlation-based feature selection algorithm is used to select relevant features from the cardiac arrhythmia dataset; (ii) the RF machine learning algorithm is used to evaluate the performance of the selected features with and without simple random sampling, to evaluate the efficiency of the proposed training strategy. The resulting accuracy of the classifier is found to be 90.0%, which is very high diagnostic performance for cardiac arrhythmia. Furthermore, three case studies, i.e., thyroid, cardiotocography and audiology, are used to benchmark the effectiveness of the proposed method. The results of the experiments demonstrate the efficiency of the random sampling strategy in training the RF ensemble classification algorithm.
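
    A sketch of the two-part strategy, reading 'simple random sampling' as oversampling every class to the majority-class size (an assumption); the feature-selection step is indicated but left to the user's data:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.utils import resample

    def oversample_to_majority(X, y, seed=0):
        """Random oversampling so every class reaches the majority-class size."""
        classes, counts = np.unique(y, return_counts=True)
        n_max = counts.max()
        X_parts, y_parts = [], []
        for c in classes:
            X_parts.append(resample(X[y == c], n_samples=n_max,
                                    replace=True, random_state=seed))
            y_parts.append(np.full(n_max, c))
        return np.vstack(X_parts), np.concatenate(y_parts)

    # X_sel: features kept by a correlation-based filter; y: arrhythmia labels.
    # X_bal, y_bal = oversample_to_majority(X_sel, y)
    # clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_bal, y_bal)
    ```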

  11. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.

  12. Ensembles of physical states and random quantum circuits on graphs

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo

    2012-11-01

    In this paper we continue and extend the investigations of the ensembles of random physical states introduced in Hamma et al. [Phys. Rev. Lett. 109, 040502 (2012)]. These ensembles are constructed by finite-length random quantum circuits (RQC) acting on the (hyper)edges of an underlying (hyper)graph structure. The latter encodes the locality structure associated with finite-time quantum evolutions generated by physical, i.e., local, Hamiltonians. Our goal is to analyze physical properties of typical states in these ensembles; in particular, here we focus on proxies of quantum entanglement such as purity and α-Rényi entropies. The problem is formulated in terms of matrix elements of superoperators which depend on the graph structure, the choice of probability measure over the local unitaries, and the circuit length. In the α = 2 case these superoperators act on a restricted multiqubit space generated by permutation operators associated with the subsets of vertices of the graph. For permutationally invariant interactions the dynamics can be further restricted to an exponentially smaller subspace. We consider different families of RQCs and study their typical entanglement properties for finite time as well as their asymptotic behavior. We find that the area law holds on average and that the volume law is a typical property of physical states (that is, it holds on average, and the fluctuations around the average vanish for large systems). The area law arises when the evolution time is O(1) with respect to the size L of the system, while the volume law typically arises when the evolution time scales like O(L).

  13. Investigation of short-term effective radiative forcing of fire aerosols over North America using nudged hindcast ensembles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yawen; Zhang, Kai; Qian, Yun

    Aerosols from fire emissions can potentially have a large impact on clouds and radiation. However, fire aerosol sources are often intermittent, and their effect on weather and climate is difficult to quantify. Here we investigated the short-term effective radiative forcing of fire aerosols using the global aerosol–climate model Community Atmosphere Model version 5 (CAM5). Unlike previous studies, we used nudged hindcast ensembles to quantify the forcing uncertainty due to the chaotic response to small perturbations in the atmosphere state. Daily mean emissions from three fire inventories were used to account for the uncertainty in emission strength and injection heights. The simulated aerosol optical depth (AOD) and mass concentrations were evaluated against in situ measurements and reanalysis data. Overall, the results show the model has reasonably good predictive skill. Short (10-day) nudged ensemble simulations were then performed with and without fire emissions to estimate the effective radiative forcing. Results show fire aerosols have large effects on both liquid and ice clouds over the two selected regions in April 2009. Ensemble mean results show a strong negative shortwave cloud radiative effect (SCRE) over almost the entirety of southern Mexico, with a 10-day regional mean value of –3.0 W m^-2. Over the central US, the SCRE is positive in the north but negative in the south, and the regional mean SCRE is small (–0.56 W m^-2). For the 10-day average, we found a large ensemble spread of the regional mean shortwave cloud radiative effect over southern Mexico (15.6% of the corresponding ensemble mean) and the central US (64.3%), despite the regional mean AOD time series being almost indistinguishable during the 10-day period. Moreover, the ensemble spread is much larger when using daily averages instead of 10-day averages. In conclusion, this demonstrates the importance of using a large ensemble of simulations to estimate the short-term aerosol effective radiative forcing.
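
    A small sketch of the spread diagnostic quoted above: the across-member variation of the regional-mean SCRE expressed as a percentage of the ensemble mean, for 10-day versus daily averages. The array shapes are illustrative, and spread is taken here as the across-member standard deviation; the abstract does not state its exact definition.

```python
# Ensemble mean and spread of a regional-mean quantity, numpy sketch.
import numpy as np

rng = np.random.default_rng(1)
# scre[member, day]: regional-mean shortwave cloud radiative effect
# (W m^-2) per ensemble member and day; synthetic stand-in values.
scre = -3.0 + 0.5 * rng.normal(size=(8, 10))

member_means = scre.mean(axis=1)            # 10-day mean per member
ens_mean = member_means.mean()
spread = member_means.std()                 # spread across members
print(f"ensemble mean {ens_mean:.2f} W m^-2; "
      f"spread {100 * spread / abs(ens_mean):.1f}% of |mean|")

# Daily averages spread much more across members than 10-day averages.
daily_spread = scre.std(axis=0)
print(f"mean daily spread: {daily_spread.mean():.2f} W m^-2")
```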

  14. Investigation of short-term effective radiative forcing of fire aerosols over North America using nudged hindcast ensembles

    DOE PAGES

    Liu, Yawen; Zhang, Kai; Qian, Yun; ...

    2018-01-03

    Aerosols from fire emissions can potentially have a large impact on clouds and radiation. However, fire aerosol sources are often intermittent, and their effect on weather and climate is difficult to quantify. Here we investigated the short-term effective radiative forcing of fire aerosols using the global aerosol–climate model Community Atmosphere Model version 5 (CAM5). Unlike previous studies, we used nudged hindcast ensembles to quantify the forcing uncertainty due to the chaotic response to small perturbations in the atmosphere state. Daily mean emissions from three fire inventories were used to account for the uncertainty in emission strength and injection heights. The simulated aerosol optical depth (AOD) and mass concentrations were evaluated against in situ measurements and reanalysis data. Overall, the results show the model has reasonably good predictive skill. Short (10-day) nudged ensemble simulations were then performed with and without fire emissions to estimate the effective radiative forcing. Results show fire aerosols have large effects on both liquid and ice clouds over the two selected regions in April 2009. Ensemble mean results show a strong negative shortwave cloud radiative effect (SCRE) over almost the entirety of southern Mexico, with a 10-day regional mean value of –3.0 W m^-2. Over the central US, the SCRE is positive in the north but negative in the south, and the regional mean SCRE is small (–0.56 W m^-2). For the 10-day average, we found a large ensemble spread of the regional mean shortwave cloud radiative effect over southern Mexico (15.6% of the corresponding ensemble mean) and the central US (64.3%), despite the regional mean AOD time series being almost indistinguishable during the 10-day period. Moreover, the ensemble spread is much larger when using daily averages instead of 10-day averages. In conclusion, this demonstrates the importance of using a large ensemble of simulations to estimate the short-term aerosol effective radiative forcing.

  15. Optical and structural properties of ensembles of colloidal Ag₂S quantum dots in gelatin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikov, O. V., E-mail: Ovchinnikov-O-V@rambler.ru; Smirnov, M. S.; Shapiro, B. I.

    2015-03-15

    The size dependences of the absorption and luminescence spectra of ensembles of hydrophilic colloidal Ag₂S quantum dots produced by the sol-gel method and dispersed in gelatin are analyzed. X-ray diffraction analysis and transmission electron microscopy reveal the formation of core/shell nanoparticles. The characteristic feature of the nanoparticles is the formation of crystalline cores, 1.5–2.0 nm in size, and shells of gelatin and its complexes with the components of synthesis. The observed slight size dependence of the position of the infrared photoluminescence bands (in the range 1000–1400 nm) in the ensembles of hydrophilic colloidal Ag₂S quantum dots is explained within the context of a model of the radiative recombination of electrons localized at structural and impurity defects with free holes.

  16. Effects of ensemble and summary displays on interpretations of geospatial uncertainty data.

    PubMed

    Padilla, Lace M; Ruginski, Ian T; Creem-Regehr, Sarah H

    2017-01-01

    Ensemble and summary displays are two widely used methods to represent visual-spatial uncertainty; however, there is disagreement about which is the most effective technique to communicate uncertainty to the general public. Visualization scientists create ensemble displays by plotting multiple data points on the same Cartesian coordinate plane. Despite their use in scientific practice, it is more common in public presentations to use visualizations of summary displays, which scientists create by plotting statistical parameters of the ensemble members. While prior work has demonstrated that viewers make different decisions when viewing summary and ensemble displays, it is unclear what components of the displays lead to diverging judgments. This study aims to compare the salience of visual features - or visual elements that attract bottom-up attention - as one possible source of diverging judgments made with ensemble and summary displays in the context of hurricane track forecasts. We report that salient visual features of both ensemble and summary displays influence participant judgment. Specifically, we find that salient features of summary displays of geospatial uncertainty can be misunderstood as displaying size information. Further, salient features of ensemble displays evoke judgments that are indicative of accurate interpretations of the underlying probability distribution of the ensemble data. However, when participants use ensemble displays to make point-based judgments, they may overweight individual ensemble members in their decision-making process. We propose that ensemble displays are a promising alternative to summary displays in a geospatial context but that decisions about visualization methods should be informed by the viewer's task.

  17. The ensemble switch method for computing interfacial tensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmitz, Fabian; Virnau, Peter

    2015-04-14

    We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.

  18. Multiple-instance ensemble learning for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Ergul, Ugur; Bilgin, Gokhan

    2017-10-01

    An ensemble framework for multiple-instance (MI) learning (MIL) is introduced for use with hyperspectral images (HSIs), inspired by the bagging (bootstrap aggregation) method in ensemble learning. Ensemble-based bagging is performed with a small percentage of the training samples, and MI bags are formed by a local windowing process with variable window sizes on selected instances. In addition to bootstrap aggregation, random subspace selection is used to further diversify the base classifiers. The proposed method is implemented using four MIL classification algorithms. The classifier model learning phase is carried out with MI bags, and the estimation phase is performed on single test instances. In the experimental part of the study, two different HSIs that have ground-truth information are used, and comparative results are reported against state-of-the-art classification methods. In general, the MI ensemble approach produces more compact results in terms of both diversity and error compared to equivalent non-MIL algorithms.
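
    A rough sketch of the bag-building scheme as we read it from this abstract, not the authors' code: bootstrap-sampled pixels seed MI bags built by local windowing, and each base learner sees a random subspace of bands. A mean-of-bag feature reduction and logistic regression stand in for the four true MIL classifiers.

```python
# Sketch: MI bag construction with variable windows + random subspace.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
H, W, B = 64, 64, 100                    # hyperspectral cube dimensions
cube = rng.normal(size=(H, W, B))        # synthetic stand-in data
labels = rng.integers(0, 4, size=(H, W))

def make_bag(r, c, half):
    """MI bag = all pixels in a (2*half+1)^2 window around (r, c)."""
    win = cube[max(r - half, 0):r + half + 1, max(c - half, 0):c + half + 1]
    return win.reshape(-1, B)

ensemble = []
for _ in range(10):                                  # ten base learners
    bands = rng.choice(B, size=30, replace=False)    # random subspace
    seeds = rng.integers(0, H, size=(50, 2))         # bootstrap-style seeds
    half = int(rng.integers(1, 4))                   # variable window size
    X = np.array([make_bag(r, c, half)[:, bands].mean(axis=0) for r, c in seeds])
    y = np.array([labels[r, c] for r, c in seeds])
    ensemble.append((bands, LogisticRegression(max_iter=500).fit(X, y)))

# Estimation on a single test instance: majority vote over base learners.
pixel = cube[10, 10]
votes = [int(clf.predict(pixel[bands].reshape(1, -1))[0]) for bands, clf in ensemble]
print("predicted class:", np.bincount(votes).argmax())
```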

  19. Non-covalent nanodiamond-polymer dispersions and electrostatic immobilization of bovine serum albumin protein

    NASA Astrophysics Data System (ADS)

    Skaltsas, T.; Pispas, S.; Tagmatarchis, N.

    2015-11-01

    Nanodiamonds (NDs) disperse poorly, not only in solvents but also in aqueous media. The latter is of great importance, considering the inherent biocompatibility of NDs and the plethora of suitable strategies for immobilizing functional biomolecules. In this work, a series of polymers was non-covalently combined with NDs, forming ND-polymer ensembles, and their dispersibility and stability were examined. Dynamic light scattering gave valuable information on the size of the ensembles in the liquid phase, while their morphology was further examined by high-resolution transmission electron microscopy imaging. In addition, thermal analysis measurements were used to collect information on the thermal behavior of NDs and their ensembles, to calculate the amount of polymer interacting with the NDs, and to determine the dispersibility values of the ND-polymer ensembles. Finally, bovine serum albumin protein was electrostatically bound to a ND-polymer ensemble in which the polymeric moiety carried quaternized pyridine units.

  20. Size dependence of single-photon superradiance of cold and dilute atomic ensembles

    NASA Astrophysics Data System (ADS)

    Kuraptsev, A. S.; Sokolov, I. M.

    2017-11-01

    We report a theoretical investigation of the angular distribution of single-photon superradiance from cold and dilute atomic clouds. In the present work we focus on the dependence of superradiance on the size and shape of the cloud. We analyze the dynamics of the afterglow of an atomic ensemble excited by pulsed radiation. Two theoretical approaches are used: the first is a quantum microscopic approach based on a coupled-dipole model; the second is the random-walk approximation. We show that the results obtained in the two approaches coincide with good accuracy for incoherent fluorescence excited by short resonant pulses. We also show that the superradiance decay rate changes with size differently for radiation emitted in different directions.

  1. Large Unbalanced Credit Scoring Using Lasso-Logistic Regression Ensemble

    PubMed Central

    Wang, Hong; Xu, Qingsong; Zhou, Lifeng

    2015-01-01

    Recently, various ensemble learning methods with different base classifiers have been proposed for credit scoring problems. However, for various reasons, there has been little research using logistic regression as the base classifier. In this paper, given large unbalanced data, we consider the plausibility of ensemble learning using regularized logistic regression as the base classifier to deal with credit scoring problems. In this research, the data are first balanced and diversified by clustering and bagging algorithms. We then apply a Lasso-logistic regression learning ensemble to evaluate the credit risks. We show that the proposed algorithm outperforms popular credit scoring models such as decision trees, Lasso-logistic regression and random forests in terms of AUC and F-measure. We also provide two importance measures for the proposed model to identify important variables in the data. PMID:25706988
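
    A minimal sketch of a Lasso-logistic ensemble for unbalanced data. The balanced bootstrap below is a simplified stand-in for the paper's clustering-plus-bagging diversification, and all sizes and hyperparameters are illustrative assumptions.

```python
# Sketch: balance each bootstrap sample, fit an L1-penalized logistic
# regression, and average the members' predicted probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 20))
y = (rng.random(5000) < 0.05).astype(int)      # ~5% defaults: unbalanced

pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
models = []
for _ in range(25):
    # Balanced bootstrap: all positives plus an equal-size draw of negatives.
    idx = np.concatenate([pos, rng.choice(neg, size=len(pos), replace=True)])
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    models.append(clf.fit(X[idx], y[idx]))

# Ensemble score = mean of member probabilities.
proba = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
print("mean predicted default probability:", proba.mean())
```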

  2. Can limited area NWP and/or RCM models improve on large scales inside their domain?

    NASA Astrophysics Data System (ADS)

    Mesinger, Fedor; Veljovic, Katarina

    2017-04-01

    In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in NWP or RCM environment, to improve on large scales inside its domain. First, NWP/RCM model needs to be run on a relatively large domain. Note that domain size in quite inexpensive compared to resolution. Second, NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable development of large scales improved compared to those of the driver model. This would typically include higher resolution, but obviously does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. Ensemble members referred to are run using the Eta model, and are driven by ECMWF 32-day ensemble members, initialized 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of the bell-shaped topography, seemed to many as suggesting the eta coordinate to be ill suited for high resolution models. The "sloping steps" in fact represent a simple version of the cut cell scheme. Accuracy of forecasting the position of jet stream winds, chosen to be those of speeds greater than 45 m/s at 250 hPa, expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa) was taken to show the skill at large scales. Average rms wind difference at 250 hPa compared to ECMWF analyses was used as another verification measure. With 21 members run, at about the same resolution of the driver global and the nested Eta during the first 10 days of the experiment, both verification measures generally demonstrate advantage of the Eta, in particular during and after the time of a deep upper tropospheric trough crossing the Rockies at the first 2-6 days of the experiment. Rerunning the Eta ensemble switched to use sigma (Eta/sigma) showed this advantage of the Eta to come to a considerable degree, but not entirely, from its use of the eta coordinate. Compared to cumulative scores of the ensembles run, this is demonstrated to even a greater degree by the number of "wins" of one model vs. another. Thus, at 4.5 day time when the trough just about crossed the Rockies, all 21 Eta/eta members have better ETSa scores than their ECMWF driver members. Eta/sigma has 19 members improving upon ECMWF, but loses to Eta/eta by a score of as much as 20 to 1. ECMWF members do better with rms scores, losing to Eta/eta by 18 vs. 3, but winning over Eta/sigma by 12 to 9. Examples of wind plots behind these results are shown, and additional reasons possibly helping or not helping the results summarized are discussed.

  3. Climateprediction.com: Public Involvement, Multi-Million Member Ensembles and Systematic Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Stainforth, D. A.; Allen, M.; Kettleborough, J.; Collins, M.; Heaps, A.; Stott, P.; Wehner, M.

    2001-12-01

    The climateprediction.com project is preparing to carry out the first systematic uncertainty analysis of climate forecasts using large ensembles of GCM climate simulations. This will be done by involving schools, businesses and members of the public, and utilizing the novel technology of distributed computing. Each participant will be asked to run one member of the ensemble on their PC. The model used will initially be the UK Met Office's Unified Model (UM). It will be run under Windows and software will be provided to enable those involved to view their model output as it develops. The project will use this method to carry out large perturbed physics GCM ensembles and thereby analyse the uncertainty in the forecasts from such models. Each participant/ensemble member will therefore have a version of the UM in which certain aspects of the model physics have been perturbed from their default values. Of course the non-linear nature of the system means that it will be necessary to look not just at perturbations to individual parameters in specific schemes, such as the cloud parameterization, but also to the many combinations of perturbations. This rapidly leads to the need for very large, perhaps multi-million member ensembles, which could only be undertaken using the distributed computing methodology. The status of the project will be presented and the Windows client will be demonstrated. In addition, initial results will be presented from beta test runs using a demo release for Linux PCs and Alpha workstations. Although small by comparison to the whole project, these pilot results constitute a 20-50 member perturbed physics climate ensemble with results indicating how climate sensitivity can be substantially affected by individual parameter values in the cloud scheme.

  4. A new large initial condition ensemble to assess avoided impacts in a climate mitigation scenario

    NASA Astrophysics Data System (ADS)

    Sanderson, B. M.; Tebaldi, C.; Knutti, R.; Oleson, K. W.

    2014-12-01

    It has recently been demonstrated that, when considering timescales of up to 50 years, natural variability may play a role equal to anthropogenic forcing in subcontinental trends for a variety of climate indicators. Thus, for many questions assessing climate impacts on such time and spatial scales, it has become clear that a significant number of ensemble members may be required to produce robust statistics (especially for extreme events). However, large ensemble experiments to date have considered the role of variability in a single scenario, leaving uncertain the relationship between the forced climate trajectory and the variability about that path. To address this issue, we present a new, publicly available, 15-member initial-condition ensemble of 21st century climate projections for the RCP4.5 scenario using the CESM1.1 Earth System Model, which we propose as a companion project to the existing 40-member CESM large ensemble, which uses the higher greenhouse gas emission future of RCP8.5. This provides a valuable data set for assessing what societal and ecological impacts might be avoided through a moderate mitigation strategy, in contrast to a fossil fuel intensive future. We present some early analyses of these combined ensembles to assess to what degree climate variability can be considered to combine linearly with the underlying forced response. In regions where there is no detectable relationship between the mean state and the variability about the mean trajectory, linear assumptions can be trivially exploited to use a single ensemble or control simulation to characterize the variability in any scenario of interest. We highlight regions where there is a detectable nonlinearity in extreme event frequency, how far in the future it will be manifested, and propose mechanisms to account for these effects.

  5. Dynamic clustering threshold reduces conformer ensemble size while maintaining a biologically relevant ensemble

    PubMed Central

    Yongye, Austin B.; Bender, Andreas

    2010-01-01

    Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining biological relevance of the ensemble. It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1–4), medium (5–9) and high (10–15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments. Electronic supplementary material: The online version of this article (doi:10.1007/s10822-010-9365-1) contains supplementary material, which is available to authorized users. PMID:20499135
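
    The rotor-dependent RMSD thresholds proposed in the abstract translate directly into a small lookup helper; the function name and the out-of-range behavior are our own additions.

```python
# Rotor-dependent OMEGA RMS filtering thresholds from the abstract.
def omega_rms_threshold(n_rotatable_bonds: int) -> float:
    """Return the suggested RMS filtering threshold in Angstroms."""
    if 1 <= n_rotatable_bonds <= 4:      # low flexibility
        return 0.8
    if 5 <= n_rotatable_bonds <= 9:      # medium flexibility
        return 1.0
    if 10 <= n_rotatable_bonds <= 15:    # high flexibility
        return 1.4
    raise ValueError("no threshold proposed outside 1-15 rotatable bonds")

print(omega_rms_threshold(7))  # -> 1.0
```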

  6. Entanglement with negative Wigner function of three thousand atoms heralded by one photon

    NASA Astrophysics Data System (ADS)

    McConnell, Robert; Zhang, Hao; Hu, Jiazhong; Ćuk, Senka; Vuletić, Vladan

    2016-06-01

    Quantum-mechanically correlated (entangled) states of many particles are of interest in quantum information, quantum computing and quantum metrology. Metrologically useful entangled states of large atomic ensembles have been experimentally realized [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], but these states display Gaussian spin distribution functions with a non-negative Wigner function. Non-Gaussian entangled states have been produced in small ensembles of ions [11, 12], and very recently in large atomic ensembles [13, 14, 15]. Here, we generate entanglement in a large atomic ensemble via the interaction with a very weak laser pulse; remarkably, the detection of a single photon prepares several thousand atoms in an entangled state. We reconstruct a negative-valued Wigner function, an important hallmark of nonclassicality, and verify an entanglement depth (minimum number of mutually entangled atoms) of 2910 ± 190 out of 3100 atoms. Attaining such a negative Wigner function and the mutual entanglement of virtually all atoms is unprecedented for an ensemble containing more than a few particles. While the achieved purity of the state is slightly below the threshold for entanglement-induced metrological gain, further technical improvement should allow the generation of states that surpass this threshold, and of more complex Schrödinger cat states for quantum metrology and information processing.

  7. Fast Constrained Spectral Clustering and Cluster Ensemble with Random Projection

    PubMed Central

    Liu, Wenfen

    2017-01-01

    Constrained spectral clustering (CSC) methods can greatly improve clustering accuracy by incorporating constraint information into spectral clustering, and have therefore received wide academic attention. In this paper, we propose a fast CSC algorithm that encodes landmark-based graph construction into a new CSC model and applies random sampling to decrease the data size after spectral embedding. Compared with the original model, the new algorithm yields similar results as its model size increases asymptotically; compared with the most efficient CSC algorithm known, the new algorithm runs faster and suits a wider range of data sets. Meanwhile, a scalable semisupervised cluster ensemble algorithm is also proposed by combining our fast CSC algorithm with random-projection dimensionality reduction in the process of spectral ensemble clustering. We demonstrate through theoretical analysis and empirical results that the new cluster ensemble algorithm has advantages in terms of efficiency and effectiveness. Furthermore, the approximate preservation of clustering accuracy under random projection, proved in the consensus clustering stage, also holds for weighted k-means clustering and thus gives a theoretical guarantee for this special kind of k-means clustering in which each point has a corresponding weight. PMID:29312447

  8. SVM and SVM Ensembles in Breast Cancer Prediction.

    PubMed

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making its effective prediction an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct an SVM classifier, it is first necessary to choose the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies examining the prediction performance of SVM with different kernel functions. Moreover, it is unknown whether SVM classifier ensembles, which have been proposed to improve the performance of single classifiers, can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small- and large-scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear-kernel SVM ensembles based on bagging and RBF-kernel SVM ensembles based on boosting are the better choices for a small-scale dataset, where feature selection should be performed in the data pre-processing stage. For a large-scale dataset, RBF-kernel SVM ensembles based on boosting perform better than the other classifiers.
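
    A sketch of the two ensemble recipes the abstract recommends, with scikit-learn (parameter names assume a recent version, >= 1.2). The dataset, member counts, and hyperparameters are illustrative, not the study's configuration.

```python
# Bagged linear-kernel SVMs vs. boosted RBF-kernel SVMs.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

bagged_linear = BaggingClassifier(
    estimator=SVC(kernel="linear"), n_estimators=10, random_state=0)

boosted_rbf = AdaBoostClassifier(
    estimator=SVC(kernel="rbf"), n_estimators=10,
    algorithm="SAMME", random_state=0)

for name, model in [("bagging + linear", bagged_linear),
                    ("boosting + rbf", boosted_rbf)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```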

  9. SVM and SVM Ensembles in Breast Cancer Prediction

    PubMed Central

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making its effective prediction an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct an SVM classifier, it is first necessary to choose the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies examining the prediction performance of SVM with different kernel functions. Moreover, it is unknown whether SVM classifier ensembles, which have been proposed to improve the performance of single classifiers, can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small- and large-scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear-kernel SVM ensembles based on bagging and RBF-kernel SVM ensembles based on boosting are the better choices for a small-scale dataset, where feature selection should be performed in the data pre-processing stage. For a large-scale dataset, RBF-kernel SVM ensembles based on boosting perform better than the other classifiers. PMID:28060807

  10. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets.

    PubMed

    Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A

    2015-01-15

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
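
    A sketch of ensemble-learned stabilized weights. Scikit-learn's StackingClassifier is used here as a rough stand-in for the paper's SL/EL estimators, and the simulated covariates and treatment model are our own illustration.

```python
# Stacked ensemble estimates the treatment propensity; stabilized
# weights divide the marginal treatment probability by the fitted one.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 6))                  # measured covariates
p_true = 1 / (1 + np.exp(-X[:, 0]))             # true propensity
A = (rng.random(2000) < p_true).astype(int)     # treatment indicator

# Library of candidate learners combined by a logistic meta-learner
# with internal cross-validation (cf. super learning).
sl = StackingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=500)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    final_estimator=LogisticRegression(), cv=5)
sl.fit(X, A)

p_hat = sl.predict_proba(X)[:, 1]
p_marg = A.mean()
# Stabilized weight: Pr(A = a) / Pr(A = a | X).
sw = np.where(A == 1, p_marg / p_hat, (1 - p_marg) / (1 - p_hat))
print("weight range:", sw.min(), sw.max())
```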

  11. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets

    PubMed Central

    Gruber, Susan; Logan, Roger W.; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A.

    2014-01-01

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152

  12. Distinct contributions of attention and working memory to visual statistical learning and ensemble processing.

    PubMed

    Hall, Michelle G; Mattingley, Jason B; Dux, Paul E

    2015-08-01

    The brain exploits redundancies in the environment to efficiently represent the complexity of the visual world. One example of this is ensemble processing, which provides a statistical summary of elements within a set (e.g., mean size). Another is statistical learning, which involves the encoding of stable spatial or temporal relationships between objects. It has been suggested that ensemble processing over arrays of oriented lines disrupts statistical learning of structure within the arrays (Zhao, Ngo, McKendrick, & Turk-Browne, 2011). Here we asked whether ensemble processing and statistical learning are mutually incompatible, or whether this disruption might occur because ensemble processing encourages participants to process the stimulus arrays in a way that impedes statistical learning. In Experiment 1, we replicated Zhao and colleagues' finding that ensemble processing disrupts statistical learning. In Experiments 2 and 3, we found that statistical learning was unimpaired by ensemble processing when task demands necessitated (a) focal attention to individual items within the stimulus arrays and (b) the retention of individual items in working memory. Together, these results are consistent with an account suggesting that ensemble processing and statistical learning can operate over the same stimuli given appropriate stimulus processing demands during exposure to regularities. (c) 2015 APA, all rights reserved.

  13. Encoding of Spatial Attention by Primate Prefrontal Cortex Neuronal Ensembles

    PubMed Central

    Treue, Stefan

    2018-01-01

    Single neurons in the primate lateral prefrontal cortex (LPFC) encode information about the allocation of visual attention and the features of visual stimuli. However, how this compares to the performance of neuronal ensembles at encoding the same information is poorly understood. Here, we recorded the responses of neuronal ensembles in the LPFC of two macaque monkeys while they performed a task that required attending to one of two moving random dot patterns positioned in different hemifields and ignoring the other pattern. We found single units selective for the location of the attended stimulus as well as for its motion direction. To determine the coding of both variables in the population of recorded units, we used a linear classifier and progressively built neuronal ensembles by iteratively adding units according to their individual performance (best single units), or by iteratively adding units based on their contribution to the ensemble performance (best ensemble). For both methods, ensembles of relatively small sizes (n < 60) yielded substantially higher decoding performance relative to individual single units. However, the decoder reached similar performance using fewer neurons with the best ensemble building method compared with the best single units method. Our results indicate that neuronal ensembles within the LPFC encode more information about the attended spatial and nonspatial features of visual stimuli than individual neurons. They further suggest that efficient coding of attention can be achieved by relatively small neuronal ensembles characterized by a certain relationship between signal and noise correlation structures. PMID:29568798
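
    A sketch of the two ensemble-building strategies described above, reconstructed with synthetic "spike count" data and a logistic-regression decoder; the data model and all sizes are our assumptions, not the recorded dataset.

```python
# "Best single units" vs. greedy "best ensemble" decoder construction.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_units = 300, 40
attend_left = rng.integers(0, 2, size=n_trials)       # attended hemifield
rates = rng.poisson(5, size=(n_trials, n_units)).astype(float)
rates[:, :10] += 2.0 * attend_left[:, None]           # 10 informative units

def score(unit_idx):
    clf = LogisticRegression(max_iter=500)
    return cross_val_score(clf, rates[:, unit_idx], attend_left, cv=5).mean()

# Best single units: add units in order of individual decoding accuracy.
order = np.argsort([score([u]) for u in range(n_units)])[::-1]
best_single = [score(list(order[:k])) for k in (1, 5, 10)]

# Best ensemble: greedily add whichever unit helps the ensemble most.
chosen = []
for _ in range(10):
    gains = [(score(chosen + [u]), u) for u in range(n_units) if u not in chosen]
    chosen.append(max(gains)[1])

print("best-single curve:", best_single)
print("greedy 10-unit ensemble:", score(chosen))
```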

  14. Surface and top-of-atmosphere radiative feedback kernels for CESM-CAM5

    DOE PAGES

    Pendergrass, Angeline G.; Conley, Andrew; Vitt, Francis M.

    2018-02-21

    Radiative kernels at the top of the atmosphere are useful for decomposing changes in atmospheric radiative fluxes due to feedbacks from atmosphere and surface temperature, water vapor, and surface albedo. Here we describe and validate radiative kernels calculated with the large-ensemble version of CAM5, CESM1.1.2, at the top of the atmosphere and the surface. Estimates of the radiative forcing from greenhouse gases and aerosols in RCP8.5 in the CESM large-ensemble simulations are also diagnosed. As an application, feedbacks are calculated for the CESM large ensemble. The kernels are freely available at https://doi.org/10.5065/D6F47MT6, and accompanying software can be downloaded from https://github.com/apendergrass/cam5-kernels.
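
    For context, a sketch of how such kernels are applied in the standard kernel method (our illustration; the real kernels and their exact units come from the DOI above): the feedback term is the kernel multiplied by the climate-field change, integrated over pressure and averaged over the globe.

```python
# Apply a temperature kernel to a temperature-change field, numpy sketch.
import numpy as np

rng = np.random.default_rng(6)
nlev, nlat, nlon = 30, 96, 144
# K_T[k, i, j]: TOA flux change per K of warming at level k, in
# W m^-2 K^-1 per 100 hPa; dT[k, i, j]: temperature change (K).
# Both are synthetic stand-in arrays here.
K_T = -0.1 * rng.random(size=(nlev, nlat, nlon))
dT = 2.0 + 0.5 * rng.normal(size=(nlev, nlat, nlon))
dp = np.full(nlev, 100.0 / nlev)         # layer thickness in units of 100 hPa

# Vertical integral of kernel * response gives a flux-change map (W m^-2).
flux_map = np.einsum("kij,kij,k->ij", K_T, dT, dp)

# Area-weighted global mean; dividing by the global-mean surface warming
# would convert this into a feedback in W m^-2 K^-1.
w = np.cos(np.deg2rad(np.linspace(-89, 89, nlat)))[:, None]
print(f"global-mean flux change: {(flux_map * w).sum() / (w.sum() * nlon):.2f} W m^-2")
```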

  15. Surface and top-of-atmosphere radiative feedback kernels for CESM-CAM5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pendergrass, Angeline G.; Conley, Andrew; Vitt, Francis M.

    Radiative kernels at the top of the atmosphere are useful for decomposing changes in atmospheric radiative fluxes due to feedbacks from atmosphere and surface temperature, water vapor, and surface albedo. Here we describe and validate radiative kernels calculated with the large-ensemble version of CAM5, CESM1.1.2, at the top of the atmosphere and the surface. Estimates of the radiative forcing from greenhouse gases and aerosols in RCP8.5 in the CESM large-ensemble simulations are also diagnosed. As an application, feedbacks are calculated for the CESM large ensemble. The kernels are freely available at https://doi.org/10.5065/D6F47MT6, and accompanying software can be downloaded from https://github.com/apendergrass/cam5-kernels.

  16. Entanglement with negative Wigner function of almost 3,000 atoms heralded by one photon.

    PubMed

    McConnell, Robert; Zhang, Hao; Hu, Jiazhong; Ćuk, Senka; Vuletić, Vladan

    2015-03-26

    Quantum-mechanically correlated (entangled) states of many particles are of interest in quantum information, quantum computing and quantum metrology. Metrologically useful entangled states of large atomic ensembles have been experimentally realized, but these states display Gaussian spin distribution functions with a non-negative Wigner quasiprobability distribution function. Non-Gaussian entangled states have been produced in small ensembles of ions, and very recently in large atomic ensembles. Here we generate entanglement in a large atomic ensemble via an interaction with a very weak laser pulse; remarkably, the detection of a single photon prepares several thousand atoms in an entangled state. We reconstruct a negative-valued Wigner function, an important hallmark of non-classicality, and verify an entanglement depth (the minimum number of mutually entangled atoms) of 2,910 ± 190 out of 3,100 atoms. Attaining such a negative Wigner function and the mutual entanglement of virtually all atoms is unprecedented for an ensemble containing more than a few particles. Although the achieved purity of the state is slightly below the threshold for entanglement-induced metrological gain, further technical improvement should allow the generation of states that surpass this threshold, and of more complex Schrödinger cat states for quantum metrology and information processing. More generally, our results demonstrate the power of heralded methods for entanglement generation, and illustrate how the information contained in a single photon can drastically alter the quantum state of a large system.

  17. Data assimilation for unsaturated flow models with restart adaptive probabilistic collocation based Kalman filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Li, Weixuan; Zeng, Lingzao

    2016-06-01

    The ensemble Kalman filter (EnKF) has gained popularity in hydrological data assimilation problems. As a Monte Carlo based method, a relatively large ensemble size is usually required to guarantee accuracy. As an alternative approach, the probabilistic collocation based Kalman filter (PCKF) employs polynomial chaos to approximate the original system. In this way, the sampling error can be reduced. However, PCKF suffers from the so-called "curse of dimensionality": when the system nonlinearity is strong and the number of parameters is large, PCKF can be even more computationally expensive than EnKF. Motivated by the most recent developments in uncertainty quantification, we propose a restart adaptive probabilistic collocation based Kalman filter (RAPCKF) for data assimilation in unsaturated flow problems. During the implementation of RAPCKF, the important parameters are identified and active PCE basis functions are adaptively selected. The "restart" technique is used to eliminate the inconsistency between model parameters and states. The performance of RAPCKF is tested on numerical cases of unsaturated flow models. It is shown that RAPCKF is more efficient than EnKF at the same computational cost. Compared with the traditional PCKF, RAPCKF is more applicable to strongly nonlinear and high dimensional problems.
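
    For context on why ensemble size matters here, a minimal textbook-style stochastic EnKF analysis step (not the RAPCKF algorithm itself); all dimensions and the observation operator are illustrative.

```python
# Minimal perturbed-observation EnKF analysis step.
import numpy as np

rng = np.random.default_rng(7)
n_state, n_obs, n_ens = 50, 10, 20

ens = rng.normal(size=(n_state, n_ens))        # prior ensemble (columns)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(n_obs) * 5] = 1.0   # observe every 5th state
R = 0.1 * np.eye(n_obs)                        # obs-error covariance
y = rng.normal(size=n_obs)                     # observations

# Sample covariances from the ensemble: the source of the sampling
# error that PCKF-type methods try to reduce.
A = ens - ens.mean(axis=1, keepdims=True)
HX = H @ ens
HA = HX - HX.mean(axis=1, keepdims=True)
P_hh = HA @ HA.T / (n_ens - 1) + R
P_xh = A @ HA.T / (n_ens - 1)
K = P_xh @ np.linalg.inv(P_hh)                 # Kalman gain

# Update each member against its own perturbed observation.
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
ens_a = ens + K @ (Y - HX)
print("posterior spread:", ens_a.std(axis=1).mean())
```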

  18. A statistical analysis of the dependency of closure assumptions in cumulus parameterization on the horizontal resolution

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    1994-01-01

    Simulated data from the UCLA cumulus ensemble model are used to investigate the quasi-universal validity of closure assumptions used in existing cumulus parameterizations. A closure assumption is quasi-universally valid if it is sensitive neither to convective cloud regimes nor to horizontal resolutions of large-scale/mesoscale models. The dependency of three types of closure assumptions, as classified by Arakawa and Chen, on the horizontal resolution is addressed in this study. Type I is the constraint on the coupling of the time tendencies of large-scale temperature and water vapor mixing ratio. Type II is the constraint on the coupling of cumulus heating and cumulus drying. Type III is a direct constraint on the intensity of a cumulus ensemble. The macroscopic behavior of simulated cumulus convection is first compared with the observed behavior in view of Type I and Type II closure assumptions using 'quick-look' and canonical correlation analyses. It is found that they are statistically similar to each other. The three types of closure assumptions are further examined with simulated data averaged over selected subdomain sizes ranging from 64 to 512 km. It is found that the dependency of Type I and Type II closure assumptions on the horizontal resolution is very weak and that Type III closure assumption is somewhat dependent upon the horizontal resolution. The influences of convective and mesoscale processes on the closure assumptions are also addressed by comparing the structures of canonical components with the corresponding vertical profiles in the convective and stratiform regions of cumulus ensembles analyzed directly from simulated data. The implication of these results for cumulus parameterization is discussed.

  19. Continental Shallow Convection Cloud-Base Mass Flux from Doppler Lidar and LASSO Ensemble Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Vogelmann, A. M.; Zhang, D.; Kollias, P.; Endo, S.; Lamer, K.; Gustafson, W. I., Jr.; Romps, D. M.

    2017-12-01

    Continental boundary layer clouds are important to simulations of weather and climate because of their impact on surface budgets and vertical transports of energy and moisture; however, model-parameterized boundary layer clouds do not agree well with observations in part because small-scale turbulence and convection are not properly represented. To advance parameterization development and evaluation, observational constraints are needed on critical parameters such as cloud-base mass flux and its relationship to cloud cover and the sub-cloud boundary layer structure including vertical velocity variance and skewness. In this study, these constraints are derived from Doppler lidar observations and ensemble large-eddy simulations (LES) from the U.S. Department of Energy Atmospheric Radiation Measurement (ARM) Facility Southern Great Plains (SGP) site in Oklahoma. The Doppler lidar analysis will extend the single-site, long-term analysis of Lamer and Kollias [2015] and augment this information with the short-term but unique 1-2 year period since five Doppler lidars began operation at the SGP, providing critical information on regional variability. These observations will be compared to the statistics obtained from ensemble, routine LES conducted by the LES ARM Symbiotic Simulation and Observation (LASSO) project (https://www.arm.gov/capabilities/modeling/lasso). An Observation System Simulation Experiment (OSSE) will be presented that uses the LASSO LES fields to determine criteria for which relationships from Doppler lidar observations are adequately sampled to yield convergence. Any systematic differences between the observed and simulated relationships will be examined to understand factors contributing to the differences. Lamer, K., and P. Kollias (2015), Observations of fair-weather cumuli over land: Dynamical factors controlling cloud size and cover, Geophys. Res. Lett., 42, 8693-8701, doi:10.1002/2015GL064534

  20. Cyclone Activity in the Arctic From an Ensemble of Regional Climate Models (Arctic CORDEX)

    NASA Astrophysics Data System (ADS)

    Akperov, Mirseid; Rinke, Annette; Mokhov, Igor I.; Matthes, Heidrun; Semenov, Vladimir A.; Adakudlu, Muralidhar; Cassano, John; Christensen, Jens H.; Dembitskaya, Mariya A.; Dethloff, Klaus; Fettweis, Xavier; Glisan, Justin; Gutjahr, Oliver; Heinemann, Günther; Koenigk, Torben; Koldunov, Nikolay V.; Laprise, René; Mottram, Ruth; Nikiéma, Oumarou; Scinocca, John F.; Sein, Dmitry; Sobolowski, Stefan; Winger, Katja; Zhang, Wenxin

    2018-03-01

    The ability of state-of-the-art regional climate models to simulate cyclone activity in the Arctic is assessed based on an ensemble of 13 simulations from 11 models from the Arctic-CORDEX initiative. Some models employ large-scale spectral nudging techniques. Cyclone characteristics simulated by the ensemble are compared with the results forced by four reanalyses (ERA-Interim, National Centers for Environmental Prediction-Climate Forecast System Reanalysis, National Aeronautics and Space Administration-Modern-Era Retrospective analysis for Research and Applications Version 2, and Japan Meteorological Agency-Japanese 55-year reanalysis) in winter and summer for the 1981-2010 period. In addition, we compare cyclone statistics between the ERA-Interim and Arctic System Reanalysis products for 2000-2010. Biases in cyclone frequency, intensity, and size over the Arctic are also quantified. Variations in cyclone frequency across the models are partly attributed to the differences in cyclone frequency over land. The variations across the models are largest for small and shallow cyclones for both seasons. A connection between biases in the zonal wind at 200 hPa and cyclone characteristics is found for both seasons. Most models underestimate zonal wind speed in both seasons, which likely leads to underestimation of cyclone mean depth and deep cyclone frequency in the Arctic. In general, the regional climate models are able to represent the spatial distribution of cyclone characteristics in the Arctic, but models that employ large-scale spectral nudging show better agreement with the ERA-Interim reanalysis than the rest of the models. Trends also exhibit the benefits of nudging. Models with spectral nudging are able to reproduce the cyclone trends, whereas most of the nonnudged models fail to do so. However, the cyclone characteristics and trends are sensitive to the choice of nudged variables.

  1. A Single-column Model Ensemble Approach Applied to the TWP-ICE Experiment

    NASA Technical Reports Server (NTRS)

    Davies, L.; Jakob, C.; Cheung, K.; DelGenio, A.; Hill, A.; Hume, T.; Keane, R. J.; Komori, T.; Larson, V. E.; Lin, Y.; hide

    2013-01-01

    Single-column models (SCMs) are useful test beds for investigating the parameterization schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale observations prescribed. Errors in estimating the observations will result in uncertainty in the modeled simulations. One method to address this modeled uncertainty is to simulate an ensemble whose members span the observational uncertainty. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and two cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the surface evaporation term of the moisture budget between the SCMs and CRMs. Differences are also apparent between the models in the ensemble-mean vertical structure of cloud variables, while for each model, cloud properties are relatively insensitive to forcing. The ensemble is further used to investigate cloud variables and precipitation, and identifies differences between CRMs and SCMs, particularly in relationships involving ice. This study highlights the additional analysis that can be performed using ensemble simulations and hence enables a more complete model investigation compared to using only the more traditional single best-estimate simulation.

  2. Effects of ensembles on methane hydrate nucleation kinetics.

    PubMed

    Zhang, Zhengcai; Liu, Chan-Juan; Walsh, Matthew R; Guo, Guang-Jun

    2016-06-21

    By performing molecular dynamics simulations to form a hydrate with a methane nano-bubble in liquid water at 250 K and 50 MPa, we report how different ensembles, such as the NPT, NVT, and NVE ensembles, affect the nucleation kinetics of the methane hydrate. The nucleation trajectories are monitored using the face-saturated incomplete cage analysis (FSICA) and the mutually coordinated guest (MCG) order parameter (OP). The nucleation rate and the critical nucleus are obtained using the mean first-passage time (MFPT) method based on the FS cages and the MCG-1 OPs, respectively. The fitting results of the MFPT show that hydrate nucleation and growth are coupled together, consistent with the cage adsorption hypothesis, which emphasizes that the cage adsorption of methane is a mechanism for both hydrate nucleation and growth. For the three different ensembles, the hydrate nucleation rate is quantitatively ordered as follows: NPT > NVT > NVE, while the sequence of hydrate crystallinity is exactly reversed. However, the largest critical nucleus appears in the NVT ensemble, rather than in the NVE ensemble. These results are helpful for choosing a suitable ensemble when studying hydrate formation via computer simulations, and they emphasize the importance of the degree of order of the critical nucleus.
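
    A sketch of the MFPT analysis named in the abstract, following the standard Wedekind-style fit; the data below are synthetic stand-ins, and the paper's exact fitting form may differ. The sigmoid's plateau gives the nucleation time and its inflection point gives the critical nucleus size.

```python
# Fit MFPT(n) ~ (tau_J / 2) * [1 + erf(c * (n - n_star))] to extract the
# nucleation time tau_J and the critical nucleus size n_star.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def mfpt(n, tau_j, c, n_star):
    return 0.5 * tau_j * (1.0 + erf(c * (n - n_star)))

# Synthetic MFPT curve vs. largest-nucleus size n.
n = np.arange(1, 120)
t_obs = mfpt(n, 80.0, 0.08, 45.0) + np.random.default_rng(10).normal(0, 1, n.size)

(tau_j, c, n_star), _ = curve_fit(mfpt, n, t_obs, p0=(50.0, 0.1, 30.0))
print(f"nucleation time ~ {tau_j:.0f} ns, critical nucleus ~ {n_star:.0f} cages")
print(f"nucleation rate ~ {1.0 / tau_j:.3g} per ns per simulation volume")
```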

  3. Deep learning ensemble with asymptotic techniques for oscillometric blood pressure estimation.

    PubMed

    Lee, Soojeong; Chang, Joon-Hyuk

    2017-11-01

    This paper proposes a deep learning based ensemble regression estimator with asymptotic techniques, and offers a method that can decrease uncertainty in oscillometric blood pressure (BP) measurements using the bootstrap and Monte-Carlo approaches. The former is used to estimate the SBP and DBP, while the latter determines confidence intervals (CIs) for the SBP and DBP based on the oscillometric BP measurements. This work employs deep belief networks (DBN)-deep neural networks (DNN) to effectively estimate BPs based on oscillometric measurements. However, there are some inherent problems with these methods. First, it is not easy to determine the best DBN-DNN estimator, and valuable information might be lost when selecting one DBN-DNN estimator and discarding the others. Additionally, our input feature vectors, obtained from only five measurements per subject, represent a very small sample size; this is a critical weakness when using the DBN-DNN technique and can cause overfitting or underfitting, depending on the structure of the algorithm. To address these problems, an ensemble with an asymptotic approach (based on combining the bootstrap with the DBN-DNN technique) is utilized to generate the pseudo features needed to estimate the SBP and DBP. In the first stage, the bootstrap-aggregation technique is used to create ensemble parameters. Afterward, the AdaBoost approach is employed for the second-stage SBP and DBP estimation. We then use the bootstrap and Monte-Carlo techniques to determine the CIs based on the target BP estimated in the third stage using the DBN-DNN ensemble regression estimator with the asymptotic technique. The proposed method can mitigate estimation uncertainty such as a large standard deviation of error (SDE): comparing the proposed DBN-DNN ensemble regression estimator with the DBN-DNN single regression estimator, we find that the SDEs of the SBP and DBP are reduced by 0.58 and 0.57 mmHg, respectively. These results indicate that the proposed method enhances performance by 9.18% and 10.88% relative to the DBN-DNN single estimator. The proposed methodology improves the accuracy of BP estimation and reduces the uncertainty of BP estimation. Copyright © 2017 Elsevier B.V. All rights reserved.
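
    A sketch of the bootstrap-ensemble idea behind this abstract, with a generic neural regressor standing in for the DBN-DNN estimators and synthetic features standing in for the oscillometric measurements: train members on bootstrap resamples, average for the BP estimate, and read a confidence interval off the bootstrap distribution.

```python
# Bootstrap ensemble of regressors with a percentile-based CI.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 8))                  # oscillometric-style features
sbp = 120 + 10 * X[:, 0] + rng.normal(scale=5, size=200)

preds = []
for b in range(30):                            # bootstrap ensemble members
    idx = rng.integers(0, len(X), size=len(X))
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=b)
    m.fit(X[idx], sbp[idx])
    preds.append(m.predict(X[:1]))             # estimate for one subject

preds = np.concatenate(preds)
est = preds.mean()                             # ensemble BP estimate
lo, hi = np.percentile(preds, [2.5, 97.5])     # 95% CI from the ensemble
print(f"SBP estimate {est:.1f} mmHg, CI [{lo:.1f}, {hi:.1f}]")
```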

  4. Ensemble-based docking: From hit discovery to metabolism and toxicity predictions.

    PubMed

    Evangelista, Wilfredo; Weir, Rebecca L; Ellingson, Sally R; Harris, Jason B; Kapoor, Karan; Smith, Jeremy C; Baudry, Jerome

    2016-10-15

    This paper describes and illustrates the use of ensemble-based docking, i.e., using a collection of protein structures in docking calculations for hit discovery, the exploration of biochemical pathways and toxicity prediction of drug candidates. We describe the computational engineering work necessary to enable large ensemble docking campaigns on supercomputers. We show examples where ensemble-based docking has significantly increased the number and the diversity of validated drug candidates. Finally, we illustrate how ensemble-based docking can be extended beyond hit discovery and toward providing a structural basis for the prediction of metabolism and off-target binding relevant to pre-clinical and clinical trials. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Post-processing Seasonal Precipitation Forecasts via Integrating Climate Indices and the Analog Approach

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zhang, Y.; Wood, A.; Lee, H. S.; Wu, L.; Schaake, J. C.

    2016-12-01

    Seasonal precipitation forecasts are a primary driver for seasonal streamflow prediction, which is critical for a range of water resources applications, such as reservoir operations and drought management. However, it is well known that seasonal precipitation forecasts from climate models are often biased and too coarse in spatial resolution for hydrologic applications. Therefore, post-processing procedures such as downscaling and bias correction are often needed. In this presentation, we discuss results from a recent study that applies a two-step methodology to downscale and correct the ensemble mean precipitation forecasts from the Climate Forecast System (CFS). First, CFS forecasts are downscaled and bias corrected using monthly reforecast analogs: we identify past precipitation forecasts that are similar to the current forecast, and then use the finer-scale observational analysis fields from the corresponding dates to represent the post-processed ensemble forecasts. Second, we construct the posterior distribution of forecast precipitation from the post-processed ensemble by integrating climate indices: a correlation analysis is performed to identify dominant climate indices for the study region, which are then used to weight the analysis analogs selected in the first step using a Bayesian approach. The methodology is applied to the California Nevada River Forecast Center (CNRFC) and the Middle Atlantic River Forecast Center (MARFC) regions for 1982-2015, using the North American Land Data Assimilation System (NLDAS-2) precipitation as the analysis. The cross-validation results show that the post-processed CFS precipitation forecasts are considerably more skillful than the raw CFS, even with the analog approach only. Integrating climate indices can further improve the skill if the number of ensemble members considered is large enough; however, the improvement is generally limited to the first couple of months when compared against climatology. Impacts of various factors such as ensemble size, lead time, and choice of climate indices will also be discussed.
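
    A rough sketch of the two-step post-processing as we read it from this abstract: find past forecasts most similar to the current one and use their verifying analyses as the ensemble, then reweight those analogs by climate-index similarity. All arrays are synthetic stand-ins, and the Gaussian kernel is our assumption for the Bayesian weighting form, not the study's.

```python
# Analog selection by forecast-space RMSE + index-based reweighting.
import numpy as np

rng = np.random.default_rng(9)
n_hist = 400
hist_fcst = rng.normal(size=(n_hist, 10))      # coarse reforecasts
hist_anal = hist_fcst + rng.normal(scale=0.3, size=(n_hist, 10))  # analyses
hist_index = rng.normal(size=n_hist)           # e.g. an ENSO-type index

fcst_now, index_now = rng.normal(size=10), 0.8

# Step 1: pick the 25 closest analogs by RMSE in forecast space.
d = np.sqrt(((hist_fcst - fcst_now) ** 2).mean(axis=1))
analogs = np.argsort(d)[:25]

# Step 2: reweight analogs by climate-index similarity (Gaussian kernel).
w = np.exp(-0.5 * ((hist_index[analogs] - index_now) / 0.5) ** 2)
w /= w.sum()
post_mean = (w[:, None] * hist_anal[analogs]).sum(axis=0)
print("posterior ensemble mean (first 3 cells):", post_mean[:3])
```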

  6. Slycat™ User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crossno, Patricia J.; Gittinger, Jaxon; Hunt, Warren L.

    Slycat™ is a web-based system for performing data analysis and visualization of potentially large quantities of remote, high-dimensional data. Slycat™ specializes in working with ensemble data. An ensemble is a group of related data sets, which typically consists of a set of simulation runs exploring the same problem space. An ensemble can be thought of as a set of samples within a multi-variate domain, where each sample is a vector whose value defines a point in high-dimensional space. To understand and describe the underlying problem being modeled in the simulations, ensemble analysis looks for shared behaviors and common features across the group of runs. Additionally, ensemble analysis tries to quantify differences found in any members that deviate from the rest of the group. The Slycat™ system integrates data management, scalable analysis, and visualization. Results are viewed remotely on a user’s desktop via commodity web clients using a multi-tiered hierarchy of computation and data storage, as shown in Figure 1. Our goal is to operate on data as close to the source as possible, thereby reducing time and storage costs associated with data movement. Consequently, we are working to develop parallel analysis capabilities that operate on High Performance Computing (HPC) platforms, to explore approaches for reducing data size, and to implement strategies for staging computation across the Slycat™ hierarchy. Within Slycat™, data and visual analysis are organized around projects, which are shared by a project team. Project members are explicitly added, each with a designated set of permissions. Although users sign in to access Slycat™, individual accounts are not maintained. Instead, authentication is used to determine project access. Within projects, Slycat™ models capture analysis results and enable data exploration through various visual representations. Although for scientists each simulation run is a model of real-world phenomena given certain conditions, we use the term model to refer to our modeling of the ensemble data, not the physics. Different model types often provide complementary perspectives on data features when analyzing the same data set. Each model visualizes data at several levels of abstraction, allowing the user to range from viewing the ensemble holistically to accessing numeric parameter values for a single run. Bookmarks provide a mechanism for sharing results, enabling interesting model states to be labeled and saved.

  7. Coupling an Ensemble of Electrons on Superfluid Helium to a Superconducting Circuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ge; Fragner, A.; Koolstra, G.

    2016-03-01

    The quantized lateral motional states and the spin states of electrons trapped on the surface of superfluid helium have been proposed as basic building blocks of a scalable quantum computer. Circuit quantum electrodynamics allows strong dipole coupling between electrons and a high-Q superconducting microwave resonator, enabling sensitive detection and manipulation of electron degrees of freedom. Here, we present the first realization of a hybrid circuit in which a large number of electrons are trapped on the surface of superfluid helium inside a coplanar waveguide resonator. The high finesse of the resonator allows us to observe large dispersive shifts that are many times the linewidth and to make fast and sensitive measurements on the collective vibrational modes of the electron ensemble, as well as the superfluid helium film underneath. Furthermore, a large ensemble coupling is observed in the dispersive regime during the experiment, and it shows excellent agreement with our numerical model. The coupling strength of the ensemble to the cavity is found to be approximately 1 MHz per electron, indicating the feasibility of achieving single-electron strong coupling.

  8. I = 1 and I = 2 π-π scattering phase shifts from Nf = 2 + 1 lattice QCD

    NASA Astrophysics Data System (ADS)

    Bulava, John; Fahy, Brendan; Hörz, Ben; Juge, Keisuke J.; Morningstar, Colin; Wong, Chik Him

    2016-09-01

    The I = 1 p-wave and I = 2 s-wave elastic π-π scattering amplitudes are calculated from a first-principles lattice QCD simulation using a single ensemble of gauge field configurations with Nf = 2 + 1 dynamical flavors of anisotropic clover-improved Wilson fermions. This ensemble has a large spatial volume V = (3.7 fm)^3, pion mass mπ = 230 MeV, and spatial lattice spacing a_s = 0.11 fm. Calculation of the necessary temporal correlation matrices is efficiently performed using the stochastic LapH method, while the large volume enables an improved energy resolution compared to previous work. For this single ensemble we obtain mρ/mπ = 3.350(24), gρππ = 5.99(26), and a clear signal for the I = 2 s-wave. The success of the stochastic LapH method in this proof-of-principle large-volume calculation paves the way for quantitative study of the lattice spacing effects and quark mass dependence of scattering amplitudes using state-of-the-art ensembles.

  9. Ensemble coding of face identity is not independent of the coding of individual identity.

    PubMed

    Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina

    2018-06-01

    Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation (access to ensemble information in the absence of detailed exemplar information) that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.

  10. Multi-Conformer Ensemble Docking to Difficult Protein Targets

    DOE PAGES

    Ellingson, Sally R.; Miao, Yinglong; Baudry, Jerome; ...

    2014-09-08

    We investigate large-scale ensemble docking using five proteins from the Directory of Useful Decoys (DUD, dud.docking.org) for which docking to crystal structures has proven difficult. Molecular dynamics trajectories are produced for each protein and an ensemble of representative conformational structures extracted from the trajectories. Docking calculations are performed on these selected simulation structures and ensemble-based enrichment factors compared with those obtained using docking in crystal structures of the same protein targets or random selection of compounds. We also found simulation-derived snapshots with improved enrichment factors that increased the chemical diversity of docking hits for four of the five selected proteins. A combination of all the docking results obtained from molecular dynamics simulation followed by selection of top-ranking compounds appears to be an effective strategy for increasing the number and diversity of hits when using docking to screen large libraries of chemicals against difficult protein targets.

  11. Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles

    PubMed Central

    2016-01-01

    Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intensive method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
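
    The sketch below illustrates the flavor of these methods under stated assumptions: synthetic docking scores, a best-score-over-members fusion rule, and greedy forward selection standing in for the O(N^2)-style recursive approximation; it is not the authors' exact algorithm.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)

    # Hypothetical docking scores: rows = receptor conformations, cols = ligands.
    # Lower (more negative) docking scores are better, so we negate for the AUC.
    n_conf, n_lig = 12, 300
    labels = rng.integers(0, 2, size=n_lig)            # 1 = active, 0 = decoy
    scores = rng.normal(size=(n_conf, n_lig)) - labels * rng.uniform(0.0, 1.5, size=(n_conf, n_lig))

    def ensemble_auc(members):
        # A ligand's ensemble score is its best (minimum) score over the members.
        best = scores[members].min(axis=0)
        return roc_auc_score(labels, -best)

    # Greedy forward selection: add the conformation that most improves the AUC,
    # stopping when no remaining conformation helps (O(N^2) evaluations).
    chosen, remaining = [], list(range(n_conf))
    while remaining:
        auc, best_c = max((ensemble_auc(chosen + [c]), c) for c in remaining)
        if chosen and auc <= ensemble_auc(chosen):
            break
        chosen.append(best_c)
        remaining.remove(best_c)

    print("selected conformations:", chosen, "AUC:", round(ensemble_auc(chosen), 3))
    ```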

  12. Ensemble Kalman Filter Data Assimilation in a Solar Dynamo Model

    NASA Astrophysics Data System (ADS)

    Dikpati, M.

    2017-12-01

    Despite great advancement in solar dynamo models since the first model by Parker in 1955, there remain many challenges in the quest to build a dynamo-based prediction scheme that can accurately predict solar cycle features. One of these challenges is to implement modern data assimilation techniques, which have been used in oceanic and atmospheric prediction models. Development of data assimilation in solar models is in the early stages. Recently, observing system simulation experiments (OSSEs) have been performed using Ensemble Kalman Filter data assimilation, in the framework of the Data Assimilation Research Testbed of NCAR (NCAR-DART), for estimating parameters in a solar dynamo model. I will demonstrate how the selection of ensemble size, the number of observations, the amount of error in observations and the choice of assimilation interval play an important role in parameter estimation. I will also show how the results of parameter reconstruction improve when accuracy in low-latitude observations is increased, despite large error in polar-region data. I will then describe how implementation of data assimilation in a solar dynamo model can bring more accuracy to the prediction of polar fields in the North and South hemispheres during the declining phase of cycle 24. Recent evidence indicates that the strength of the Sun's polar field during the cycle minima might be a reliable predictor for the next sunspot cycle's amplitude; therefore it is crucial to accurately predict the polar field strength and pattern.

  13. Potential predictability and forecast skill in ensemble climate forecast: a skill-persistence rule

    NASA Astrophysics Data System (ADS)

    Jin, Yishuai; Rong, Xinyao; Liu, Zhengyu

    2017-12-01

    This study investigates the relationship between the forecast skill for the real world (actual skill) and for the perfect model (perfect skill) in ensemble climate model forecasts, using a series of fully coupled general circulation model forecast experiments. It is found that the actual skill for sea surface temperature (SST) in seasonal forecasts is substantially higher than the perfect skill over a large part of the tropical oceans, especially the tropical Indian Ocean and the central-eastern Pacific Ocean. The higher actual skill is found to be related to the higher observational SST persistence, suggesting a skill-persistence rule: a higher SST persistence in the real world than in the model could overwhelm the model bias to produce a higher forecast skill for the real world than for the perfect model. The relation between forecast skill and persistence is further proved using a first-order autoregressive (AR1) model, analytically for theoretical solutions and numerically for analogue experiments. The AR1 model study shows that the skill-persistence rule is strictly valid in the case of infinite ensemble size, but could be distorted by sampling errors and non-AR1 processes. This study suggests that the so-called "perfect skill" is model dependent and cannot serve as an accurate estimate of the true upper limit of real-world prediction skill, unless the model can capture at least the persistence property of the observations.
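
    The AR1 argument is easy to check numerically. For a stationary AR1 process the lag-tau autocorrelation is rho**tau, so damped-persistence skill at lead tau is governed by the persistence rho; if the real world is more persistent than the model (rho_obs > rho_mod), actual skill can exceed perfect-model skill. A minimal sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def ar1(rho, n):
        """Generate a stationary AR1 series with lag-1 autocorrelation rho."""
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
        return x

    rho_obs, rho_mod, tau, n = 0.9, 0.7, 3, 100_000
    for name, rho in [("actual (obs persistence)", rho_obs),
                      ("perfect-model", rho_mod)]:
        x = ar1(rho, n)
        # Empirical lag-tau correlation vs the theoretical value rho**tau.
        skill = np.corrcoef(x[:-tau], x[tau:])[0, 1]
        print(f"{name}: empirical {skill:.3f} vs theoretical {rho**tau:.3f}")
    ```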

  14. Conservation of Mass and Preservation of Positivity with Ensemble-Type Kalman Filter Algorithms

    NASA Technical Reports Server (NTRS)

    Janjic, Tijana; Mclaughlin, Dennis; Cohn, Stephen E.; Verlaan, Martin

    2014-01-01

    This paper considers the incorporation of constraints to enforce physically based conservation laws in the ensemble Kalman filter. In particular, constraints are used to ensure that the ensemble members and the ensemble mean conserve mass and remain nonnegative through measurement updates. In certain situations filtering algorithms such as the ensemble Kalman filter (EnKF) and ensemble transform Kalman filter (ETKF) yield updated ensembles that conserve mass but contain negative values, even though the actual states must be nonnegative. In such situations, if negative values are set to zero, or a log transform is introduced, the total mass will not be conserved. In this study, mass and positivity are both preserved by formulating the filter update as a set of quadratic programming problems that incorporate non-negativity constraints. Simple numerical experiments indicate that this approach can have a significant positive impact on the posterior ensemble distribution, giving results that are more physically plausible both for individual ensemble members and for the ensemble mean. In two examples, an update that includes a non-negativity constraint is able to properly describe the transport of a sharp feature (e.g., a triangle or cone). A number of implementation questions still need to be addressed, particularly the need to develop a computationally efficient quadratic programming update for large ensembles.
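
    A minimal sketch of the per-member quadratic program using SciPy's SLSQP solver: project the unconstrained Kalman analysis onto the set of nonnegative states with fixed total mass. The paper's full formulation is richer; this shows only the projection step.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def constrained_update(x_a, mass):
        """Project an EnKF analysis member onto {x >= 0, sum(x) = mass},
        i.e. a small quadratic program in the spirit of the paper."""
        n = len(x_a)
        res = minimize(
            lambda x: 0.5 * np.sum((x - x_a) ** 2),   # stay close to the Kalman update
            x0=np.clip(x_a, 0, None),
            jac=lambda x: x - x_a,
            constraints=[{"type": "eq", "fun": lambda x: np.sum(x) - mass}],
            bounds=[(0.0, None)] * n,
            method="SLSQP",
        )
        return res.x

    # An analysis member with a spurious negative value; total mass must stay 1.0.
    x_a = np.array([0.5, 0.7, -0.2])
    print(constrained_update(x_a, mass=1.0))   # -> approx [0.4, 0.6, 0.0]
    ```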

  15. Training set extension for SVM ensemble in P300-speller with familiar face paradigm.

    PubMed

    Li, Qi; Shi, Kaiyang; Gao, Ning; Li, Jian; Bai, Ou

    2018-03-27

    P300-spellers are brain-computer interface (BCI)-based character input systems. Support vector machine (SVM) ensembles are trained with large-scale training sets and used as classifiers in these systems. However, the required large-scale training data necessitate a prolonged collection time for each subject, which results in data collected toward the end of the period being contaminated by the subject's fatigue. This study aimed to develop a method for acquiring more training data based on a small collected training set. A new method was developed in which two corresponding training datasets in two sequences are superposed and averaged to extend the training set. The proposed method was tested offline on a P300-speller with the familiar face paradigm. The SVM ensemble with extended training set achieved 85% classification accuracy for the averaged results of four sequences, and 100% for 11 sequences in the P300-speller. In contrast, the conventional SVM ensemble with non-extended training set achieved only 65% accuracy for four sequences, and 92% for 11 sequences. The SVM ensemble with extended training set achieves higher classification accuracies than the conventional SVM ensemble, which verifies that the proposed method effectively improves the classification performance of BCI P300-spellers, thus enhancing their practicality.
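
    A minimal sketch of the extension step under assumed array shapes (all names hypothetical): corresponding target epochs from two stimulation sequences are superposed and averaged, and the averages are appended as additional, less noisy training examples.

    ```python
    import numpy as np

    def extend_training_set(seq_a, seq_b):
        """Superpose and average corresponding epochs from two sequences to
        synthesize extra training examples (loose sketch of the abstract's
        scheme). Inputs: (n_epochs, n_channels, n_samples) arrays."""
        averaged = 0.5 * (seq_a + seq_b)
        return np.concatenate([seq_a, seq_b, averaged], axis=0)

    rng = np.random.default_rng(4)
    a = rng.normal(size=(60, 8, 200))    # hypothetical epochs: 8 channels, 200 samples
    b = rng.normal(size=(60, 8, 200))
    print(extend_training_set(a, b).shape)   # (180, 8, 200): 50% more training data
    ```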

  16. The Potential Observation Network Design with Mesoscale Ensemble Sensitivities in Complex Terrain

    DTIC Science & Technology

    2012-03-01

    Ensemble sensitivities can be used successfully to diagnose predictors of forecast error in synoptic storms (Torn and Hakim 2008), extratropical transition, and developing hurricanes. Because they rely on lagged covariances from a finite-sized ensemble, they ...

  17. Collective Dynamics of Specific Gene Ensembles Crucial for Neutrophil Differentiation: The Existence of Genome Vehicles Revealed

    PubMed Central

    Giuliani, Alessandro; Tomita, Masaru

    2010-01-01

    Cell fate decision remarkably generates a specific cell differentiation path among the multiple possibilities that can arise through the complex interplay of high-dimensional genome activities. The coordinated action of thousands of genes to switch cell fate decisions has indicated the existence of stable attractors guiding the process. However, the origins of the intracellular mechanisms that create the “cellular attractor” remain unknown. Here, we examined the collective behavior of genome-wide expression for neutrophil differentiation through two different stimuli, dimethyl sulfoxide (DMSO) and all-trans-retinoic acid (atRA). To overcome the difficulties of dealing with single-gene expression noise, we grouped genes into ensembles and analyzed their expression dynamics in a correlation space defined by Pearson correlation and mutual information. The standard deviation of the correlation distributions of gene ensembles decreases as the ensemble size is increased, following an inverse square root law, both for ensembles chosen randomly from the whole genome and for ensembles ranked according to expression variance across time. Choosing an ensemble size of 200 genes, we show that the two probability distributions of correlations of randomly selected genes for the atRA and DMSO responses overlap after 48 hours, defining the neutrophil attractor. Next, tracking the ranked ensembles' trajectories, we noticed that only certain ones, not all, fall into the attractor in a fractal-like manner. The removal of these genome elements from the whole genome, for both the atRA and DMSO responses, destroys the attractor, providing evidence for the existence of specific genome elements (named the “genome vehicle”) responsible for the neutrophil attractor. Notably, within the genome vehicles, genes with low or moderate expression changes, which are often considered noisy and insignificant, are essential components for the creation of the neutrophil attractor. Further investigations along with our findings might provide a comprehensive mechanistic view of cell fate decision. PMID:20725638
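
    The noise-reduction mechanism behind grouping genes into ensembles is the generic inverse square root law: averaging n independent noisy values shrinks the standard deviation of the mean by 1/sqrt(n). A quick numerical check with synthetic data (not the paper's expression sets):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Averaging n values drawn from a noisy pool: std(mean) * sqrt(n) stays
    # roughly constant, i.e. the spread follows the inverse square root law.
    noise = rng.normal(size=100_000)
    for n in (50, 200, 800):
        means = rng.choice(noise, size=(5000, n)).mean(axis=1)
        print(n, round(means.std() * np.sqrt(n), 3))   # ~1.0 for every n
    ```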

  18. Rethinking the Default Construction of Multimodel Climate Ensembles

    DOE PAGES

    Rauser, Florian; Gleckler, Peter; Marotzke, Jochem

    2015-07-21

    Here, we discuss the current code of practice in the climate sciences to routinely create climate model ensembles as ensembles of opportunity from the newest phase of the Coupled Model Intercomparison Project (CMIP). We give a two-step argument to rethink this process. First, the differences between generations of ensembles corresponding to different CMIP phases in key climate quantities are not large enough to warrant an automatic separation into generational ensembles for CMIP3 and CMIP5. Second, we suggest that climate model ensembles cannot continue to be mere ensembles of opportunity but should always be based on a transparent scientific decision process. If ensembles can be constrained by observation, then they should be constructed as target ensembles that are specifically tailored to a physical question. If model ensembles cannot be constrained by observation, then they should be constructed as cross-generational ensembles, including all available model data to enhance structural model diversity and to better sample the underlying uncertainties. To facilitate this, CMIP should guide the necessarily ongoing process of updating experimental protocols for the evaluation and documentation of coupled models. Finally, with an emphasis on easy access to model data and facilitating the filtering of climate model data across all CMIP generations and experiments, our community could return to the underlying idea of using model data ensembles to improve uncertainty quantification, evaluation, and cross-institutional exchange.

  19. Formulation, General Features and Global Calibration of a Bioenergetically-Constrained Fishery Model

    PubMed Central

    Bianchi, Daniele; Galbraith, Eric D.

    2017-01-01

    Human exploitation of marine resources is profoundly altering marine ecosystems, while climate change is expected to further impact commercially-harvested fish and other species. Although the global fishery is a highly complex system with many unpredictable aspects, the bioenergetic limits on fish production and the response of fishing effort to profit are both relatively tractable, and are sure to play important roles. Here we describe a generalized, coupled biological-economic model of the global marine fishery that represents both of these aspects in a unified framework, the BiOeconomic mArine Trophic Size-spectrum (BOATS) model. BOATS predicts fish production according to size spectra as a function of net primary production and temperature, and dynamically determines harvest spectra from the biomass density and interactive, prognostic fishing effort. Within this framework, the equilibrium fish biomass is determined by the economic forcings of catchability, ex-vessel price and cost per unit effort, while the peak harvest depends on the ecosystem parameters. Comparison of a large ensemble of idealized simulations with observational databases, focusing on historical biomass and peak harvests, allows us to narrow the range of several uncertain ecosystem parameters, rule out most parameter combinations, and select an optimal ensemble of model variants. Compared to the prior distributions, model variants with lower values of the mortality rate, trophic efficiency, and allometric constant agree better with observations. For most acceptable parameter combinations, natural mortality rates are more strongly affected by temperature than growth rates, suggesting different sensitivities of these processes to climate change. These results highlight the utility of adopting large-scale, aggregated data constraints to reduce model parameter uncertainties and to better predict the response of fisheries to human behaviour and climate change. PMID:28103280

  20. Formulation, General Features and Global Calibration of a Bioenergetically-Constrained Fishery Model.

    PubMed

    Carozza, David A; Bianchi, Daniele; Galbraith, Eric D

    2017-01-01

    Human exploitation of marine resources is profoundly altering marine ecosystems, while climate change is expected to further impact commercially-harvested fish and other species. Although the global fishery is a highly complex system with many unpredictable aspects, the bioenergetic limits on fish production and the response of fishing effort to profit are both relatively tractable, and are sure to play important roles. Here we describe a generalized, coupled biological-economic model of the global marine fishery that represents both of these aspects in a unified framework, the BiOeconomic mArine Trophic Size-spectrum (BOATS) model. BOATS predicts fish production according to size spectra as a function of net primary production and temperature, and dynamically determines harvest spectra from the biomass density and interactive, prognostic fishing effort. Within this framework, the equilibrium fish biomass is determined by the economic forcings of catchability, ex-vessel price and cost per unit effort, while the peak harvest depends on the ecosystem parameters. Comparison of a large ensemble of idealized simulations with observational databases, focusing on historical biomass and peak harvests, allows us to narrow the range of several uncertain ecosystem parameters, rule out most parameter combinations, and select an optimal ensemble of model variants. Compared to the prior distributions, model variants with lower values of the mortality rate, trophic efficiency, and allometric constant agree better with observations. For most acceptable parameter combinations, natural mortality rates are more strongly affected by temperature than growth rates, suggesting different sensitivities of these processes to climate change. These results highlight the utility of adopting large-scale, aggregated data constraints to reduce model parameter uncertainties and to better predict the response of fisheries to human behaviour and climate change.

  1. Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.

    2015-11-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well-defined parametric uncertainty bounds.
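
    A minimal sketch of the simple score-weighted averaging, with all numbers synthetic and the misfit-to-weight mapping an assumption (a Gaussian likelihood-style weight): compute a weighted mean and a 5-95% envelope from the weighted empirical CDF.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical large ensemble: one model-data misfit score and one projected
    # sea-level-rise value per run (stand-ins for the paper's 625-run ensemble).
    misfit = rng.uniform(0.5, 3.0, size=625)
    slr = rng.normal(loc=1.0 + 0.1 * misfit, scale=0.3)

    # Turn misfit into a weight (assumed form) and normalize.
    w = np.exp(-0.5 * misfit**2)
    w /= w.sum()

    mean = np.sum(w * slr)
    # Weighted 5-95% envelope from the weighted empirical CDF.
    order = np.argsort(slr)
    cdf = np.cumsum(w[order])
    lo, hi = np.interp([0.05, 0.95], cdf, slr[order])
    print(f"weighted mean {mean:.2f} m, 5-95% envelope [{lo:.2f}, {hi:.2f}] m")
    ```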

  2. A first line of stress defense: small heat shock proteins and their function in protein homeostasis.

    PubMed

    Haslbeck, Martin; Vierling, Elizabeth

    2015-04-10

    Small heat shock proteins (sHsps) are virtually ubiquitous molecular chaperones that can prevent the irreversible aggregation of denaturing proteins. sHsps complex with a variety of non-native proteins in an ATP-independent manner and, in the context of the stress response, form a first line of defense against protein aggregation in order to maintain protein homeostasis. In vertebrates, they act to maintain the clarity of the eye lens, and in humans, sHsp mutations are linked to myopathies and neuropathies. Although found in all domains of life, sHsps are quite diverse and have evolved independently in metazoans, plants and fungi. sHsp monomers range in size from approximately 12 to 42 kDa and are defined by a conserved β-sandwich α-crystallin domain, flanked by variable N- and C-terminal sequences. Most sHsps form large oligomeric ensembles with a broad distribution of different, sphere- or barrel-like oligomers, with the size and structure of the oligomers dictated by features of the N- and C-termini. The activity of sHsps is regulated by mechanisms that change the equilibrium distribution in tertiary features and/or quaternary structure of the sHsp ensembles. Cooperation and/or co-assembly between different sHsps in the same cellular compartment add an underexplored level of complexity to sHsp structure and function. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Finding structure in data using multivariate tree boosting

    PubMed Central

    Miller, Patrick J.; Lubke, Gitta H.; McArtor, Daniel B.; Bergeman, C. S.

    2016-01-01

    Technology and collaboration enable dramatic increases in the size of psychological and psychiatric data collections, but finding structure in these large data sets with many collected variables is challenging. Decision tree ensembles such as random forests (Strobl, Malley, & Tutz, 2009) are a useful tool for finding structure, but are difficult to interpret with multiple outcome variables which are often of interest in psychology. To find and interpret structure in data sets with multiple outcomes and many predictors (possibly exceeding the sample size), we introduce a multivariate extension to a decision tree ensemble method called gradient boosted regression trees (Friedman, 2001). Our extension, multivariate tree boosting, is a method for nonparametric regression that is useful for identifying important predictors, detecting predictors with nonlinear effects and interactions without specification of such effects, and for identifying predictors that cause two or more outcome variables to covary. We provide the R package ‘mvtboost’ to estimate, tune, and interpret the resulting model, which extends the implementation of univariate boosting in the R package ‘gbm’ (Ridgeway et al., 2015) to continuous, multivariate outcomes. To illustrate the approach, we analyze predictors of psychological well-being (Ryff & Keyes, 1995). Simulations verify that our approach identifies predictors with nonlinear effects and achieves high prediction accuracy, exceeding or matching the performance of (penalized) multivariate multiple regression and multivariate decision trees over a wide range of conditions. PMID:27918183
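
    The reference implementation is the R package 'mvtboost'; as a language-neutral illustration, one plausible minimal form of multivariate tree boosting fits, at each round, a single multi-output regression tree to the joint residuals of all outcomes, so splits are shared across outcomes. A sketch with synthetic data:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def multivariate_tree_boost(X, Y, n_rounds=200, lr=0.1, depth=2):
        """Each round fits one multi-output tree to the residuals of all
        outcomes jointly (a loose sketch, not the mvtboost algorithm)."""
        pred = np.tile(Y.mean(axis=0), (len(Y), 1))
        trees = []
        for _ in range(n_rounds):
            tree = DecisionTreeRegressor(max_depth=depth).fit(X, Y - pred)
            pred += lr * tree.predict(X)
            trees.append(tree)
        return trees, pred

    rng = np.random.default_rng(10)
    X = rng.normal(size=(300, 6))
    Y = np.c_[np.sin(X[:, 0]) + 0.1 * rng.normal(size=300),      # nonlinear effect
              X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=300)]    # interaction
    _, fitted = multivariate_tree_boost(X, Y)
    print("training R^2 per outcome:", 1 - ((Y - fitted) ** 2).mean(0) / Y.var(0))
    ```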

  4. In Vitro and In Vivo Single Myosin Step-Sizes in Striated Muscle

    PubMed Central

    Burghardt, Thomas P.; Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin

    2016-01-01

    Myosin in muscle transduces ATP free energy into the mechanical work of moving actin. It has a motor domain transducer containing ATP and actin binding sites, and mechanical elements that couple the motor impulse to the myosin filament backbone, providing transduction and mechanical coupling. The mechanical coupler is a lever-arm stabilized by bound essential and regulatory light chains. The lever-arm rotates cyclically to impel bound filamentous actin. The linear actin displacement due to lever-arm rotation is the myosin step-size. A high-throughput quantum-dot-labeled actin in vitro motility assay (Qdot assay) measures motor step-size in the context of an ensemble of actomyosin interactions. The ensemble context imposes a constant-velocity constraint for myosins interacting with one actin filament. In a cardiac myosin producing multiple step-sizes, a “second characterization” is step-frequency, which pairs a longer step-size with a lower frequency, maintaining a linear actin velocity identical to that from a shorter step-size, higher-frequency actomyosin cycle. The step-frequency characteristic involves and integrates myosin enzyme kinetics, mechanical strain, and other ensemble-affected characteristics. The high-throughput Qdot assay suits a new paradigm calling for wide surveillance of the vast number of disease- or aging-relevant myosin isoforms, in contrast with the alternative model calling for exhaustive research on a tiny subset of myosin forms. The zebrafish embryo assay (Z assay) performs single-myosin step-size and step-frequency assaying in vivo, combining single-myosin mechanical and whole-muscle physiological characterizations in one model organism. The Qdot and Z assays cover “bottom-up” and “top-down” assaying of myosin characteristics. PMID:26728749

  5. A 12-year (1987-1998) Ensemble Simulation of the US Climate with a Variable Resolution Stretched Grid GCM

    NASA Technical Reports Server (NTRS)

    Fox-Rabinovitz, Michael S.; Takacs, Lawrence L.; Govindaraju, Ravi C.

    2002-01-01

    The variable-resolution stretched-grid (SG) GEOS (Goddard Earth Observing System) GCM has been used for limited ensemble integrations with a relatively coarse, 60 to 100 km, regional resolution over the U.S. The experiments have been run for the 12-year period, 1987-1998, which includes the recent ENSO cycles. Initial conditions 1-2 days apart are used for ensemble members. The goal of the experiments is to analyze the long-term SG-GCM ensemble integrations in terms of their potential for reducing the uncertainties of regional climate simulation while producing realistic mesoscales. The ensemble integration results are analyzed for both prognostic and diagnostic fields. Special attention is devoted to analyzing the variability of precipitation over the U.S. The internal variability of the SG-GCM has been assessed. The ensemble means appear to be closer to the verifying analyses than the individual ensemble members. The ensemble means capture realistic mesoscale patterns, especially those induced by orography. Two ENSO cycles have been analyzed in terms of their impact on the U.S. climate, especially on precipitation. The ability of the SG-GCM simulations to produce regional climate anomalies has been confirmed. However, the optimal ensemble size, which depends on the fine regional resolution used, is still to be determined. The SG-GCM ensemble simulations are performed as a preliminary stage for the international SGMIP (Stretched-Grid Model Intercomparison Project) that is under way with participation of the major centers and groups employing the SG approach for regional climate modeling.

  6. Ensemble perception of color in autistic adults.

    PubMed

    Maule, John; Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna

    2017-05-01

    Dominant accounts of visual processing in autism posit that autistic individuals have an enhanced access to details of scenes [e.g., weak central coherence] which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements. Ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839-851. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  7. Ensemble perception of color in autistic adults

    PubMed Central

    Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna

    2016-01-01

    Dominant accounts of visual processing in autism posit that autistic individuals have an enhanced access to details of scenes [e.g., weak central coherence] which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements. Ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839–851. © 2016 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research PMID:27874263

  8. epiDMS: Data Management and Analytics for Decision-Making From Epidemic Spread Simulation Ensembles.

    PubMed

    Liu, Sicong; Poccia, Silvestro; Candan, K Selçuk; Chowell, Gerardo; Sapino, Maria Luisa

    2016-12-01

    Carefully calibrated large-scale computational models of epidemic spread represent a powerful tool to support the decision-making process during epidemic emergencies. Epidemic models are being increasingly used for generating forecasts of the spatial-temporal progression of epidemics at different spatial scales and for assessing the likely impact of different intervention strategies. However, the management and analysis of simulation ensembles stemming from large-scale computational models pose challenges, particularly when dealing with multiple interdependent parameters, spanning multiple layers and geospatial frames, affected by complex dynamic processes operating at different resolutions. We describe and illustrate with examples a novel epidemic simulation data management system, epiDMS, that was developed to address the challenges that arise from the need to generate, search, visualize, and analyze, in a scalable manner, large volumes of epidemic simulation ensembles and observations during the progression of an epidemic. epiDMS is a publicly available system that facilitates management and analysis of large epidemic simulation ensembles. epiDMS aims to fill an important hole in decision-making during healthcare emergencies by enabling critical services with significant economic and health impact. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  9. Interactive Scripting for Analysis and Visualization of Arbitrarily Large, Disparately Located Climate Data Ensembles Using a Progressive Runtime Server

    NASA Astrophysics Data System (ADS)

    Christensen, C.; Summa, B.; Scorzelli, G.; Lee, J. W.; Venkat, A.; Bremer, P. T.; Pascucci, V.

    2017-12-01

    Massive datasets are becoming more common due to increasingly detailed simulations and higher resolution acquisition devices. Yet accessing and processing these huge data collections for scientific analysis is still a significant challenge. Solutions that rely on extensive data transfers are increasingly untenable and often impossible due to lack of sufficient storage at the client side as well as insufficient bandwidth to conduct such large transfers, which in some cases could entail petabytes of data. Large-scale remote computing resources can be useful, but utilizing such systems typically entails some form of offline batch processing with long delays, data replications, and substantial cost for any mistakes. Both types of workflows can severely limit the flexible exploration and rapid evaluation of new hypotheses that are crucial to the scientific process and thereby impede scientific discovery. In order to facilitate interactivity in both analysis and visualization of these massive data ensembles, we introduce a dynamic runtime system suitable for progressive computation and interactive visualization of arbitrarily large, disparately located spatiotemporal datasets. Our system includes an embedded domain-specific language (EDSL) that allows users to express a wide range of data analysis operations in a simple and abstract manner. The underlying runtime system transparently resolves issues such as remote data access and resampling while at the same time maintaining interactivity through progressive and interruptible processing. Computations involving large amounts of data can be performed remotely in an incremental fashion that dramatically reduces data movement, while the client receives updates progressively thereby remaining robust to fluctuating network latency or limited bandwidth. This system facilitates interactive, incremental analysis and visualization of massive remote datasets up to petabytes in size. Our system is now available for general use in the community through both Docker and Anaconda.

  10. Turbulent Dynamics of Epithelial Cell Cultures

    NASA Astrophysics Data System (ADS)

    Blanch-Mercader, C.; Yashunsky, V.; Garcia, S.; Duclos, G.; Giomi, L.; Silberzan, P.

    2018-05-01

    We investigate the collective flows and structural rearrangements at large length and long time scales within in vitro human bronchial epithelial cell (HBEC) cultures. Activity-driven collective flows result in ensembles of vortices randomly positioned in space. By analyzing a large population of vortices, we show that their area follows an exponential law with a constant mean value and their rotational frequency is size independent, both being characteristic features of the chaotic dynamics of active nematic suspensions. Indeed, we find that HBECs self-organize in nematic domains of several cell lengths. Nematic defects are found at the interface between domains, with a total number that remains constant due to the dynamical balance of nucleation and annihilation events. The mean velocity fields in the vicinity of defects are well described by a hydrodynamic theory of extensile active nematics.

  11. Decadal prediction skill in the ocean with surface nudging in the IPSL-CM5A-LR climate model

    NASA Astrophysics Data System (ADS)

    Mignot, Juliette; García-Serrano, Javier; Swingedouw, Didier; Germe, Agathe; Nguyen, Sébastien; Ortega, Pablo; Guilyardi, Eric; Ray, Sulagna

    2016-08-01

    Two decadal prediction ensembles, based on the same climate model (IPSL-CM5A-LR) and the same surface nudging initialization strategy, are analyzed and compared with a focus on upper-ocean variables in different regions of the globe. One ensemble consists of 3-member hindcasts launched every year since 1961, while the other ensemble benefits from 9 members but with start dates only every 5 years. Analysis includes anomaly correlation coefficients and root mean square errors computed against several reanalysis and gridded observational fields, as well as against the nudged simulation used to produce the hindcasts' initial conditions. The last skill measure gives an upper limit of the predictability horizon one can expect in the forecast system, while the comparison with different datasets highlights uncertainty when assessing the actual skill. Results provide a potential prediction skill (verification against the nudged simulation) beyond the linear trend of the order of 10 years ahead at the global scale, but essentially associated with non-linear radiative forcings, in particular from volcanoes. At regional scale, we obtain 1 year in the tropical band, 10 years at midlatitudes in the North Atlantic and North Pacific, and 5 years at tropical latitudes in the North Atlantic, for both sea surface temperature (SST) and upper-ocean heat content. Actual prediction skill (verified against observational or reanalysis data) is overall more limited and less robust. Even so, large actual skill is found in the extratropical North Atlantic for SST and in the tropical to subtropical North Pacific for upper-ocean heat content. Results are analyzed with respect to the specific dynamics of the model and the way it is influenced by the nudging. The interplay between initialization and internal modes of variability is also analyzed for sea surface salinity. The study illustrates the importance of two key ingredients necessary for the success of future coordinated decadal prediction exercises: a high frequency of start dates to achieve robust statistical significance, and a large ensemble size to increase the signal-to-noise ratio.

  12. Numerical study of influence of different dispersed components of crystal cloud on transmission of radiant energy

    NASA Astrophysics Data System (ADS)

    Shefer, Olga

    2017-11-01

    The calculated results of the transmission of visible and infrared radiation by an atmospheric layer containing ensembles of large preferentially oriented crystals and spherical particles are presented. To calculate extinction characteristics, the physical optics method and the Mie theory are applied. Among all atmospheric particles, the small particles commensurate with the wavelength of the incident radiation and the large plates and columns show the most pronounced spectral dependence of the transmission of radiant energy. The work illustrates how the parameters of the particle size distribution, particle aspect ratios, orientation, particle refractive index, and the polarization state of the incident radiation influence the transmission. The predominant effect of the plates on the wavelength dependence of the transmission is shown. The separate and combined contributions of the large plates and the small volume-shaped particles to the overall transmission of the medium are considered.

  13. The probability of occurrence of high-loss windstorms

    NASA Astrophysics Data System (ADS)

    Massey, Neil

    2016-04-01

    Windstorms are one of the largest meteorological risks to life and property in Europe. High-loss windstorms, in terms of insured losses, are a result of not only the windspeed of the storm but also the position and track of the storm. The two highest-loss storms on record, Daria (1990) and Lothar (1999), caused so much damage because they tracked across highly populated areas of Europe. Although the frequency and intensity of high-loss windstorms in the observed record are known, there are not enough samples, due to the short observed record, to truly know the distribution of the frequency and intensity of windstorms over Europe and, by extension, the distribution of losses which could occur if the atmosphere had been in a different state due to the internal variability of the atmosphere. Risk and loss modelling exercises carried out by and for the reinsurance industry have typically stochastically perturbed the historical record of high-loss windstorms to produce distributions of potential windstorms with greater sample sizes than the observations. This poster presents a new method of generating many samples of potential windstorms and analyses the frequency of occurrence, intensity and potential losses of these windstorms. The large-ensemble regional climate modelling project weather@home is used to generate many regional climate model representations (800 per year) of the weather over Europe between 1985 and 2010. The regional climate model is driven at the boundaries by a free-running global climate model and so each ensemble member represents a potential state of the atmosphere, rather than an observed state. The winter storm season of October to March is analysed by applying an objective cyclone identification and tracking algorithm to each ensemble member. From the resulting tracks, the windspeed within a 1000 km radius of the cyclone centre is extracted and the maximum windspeed over a 72 hour period is derived as the storm windspeed footprint. This footprint is fed into a population-based loss model to estimate the losses for the storm. Additionally, the same analysis is performed on data from the same regional climate model, driven at the boundaries by ERA-Interim. This allows the tracks and losses of the storms in the observed record to be recovered using the same tracking method and loss model. A storm-track matching function is applied to the storm tracks in the large ensemble and so analogues of the observed storms can be recovered. The frequency of occurrence of the high-loss storms in the large ensemble can then be determined, and used as a proxy for the frequency of occurrence in the observations.
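
    A sketch of the footprint-plus-loss chain under stated assumptions: hypothetical grids and track, an equirectangular distance approximation, and a power-law damage function (the poster's actual loss model is population based but otherwise unspecified).

    ```python
    import numpy as np

    def storm_footprint(wind, lat, lon, track, radius_km=1000.0):
        """Max wind speed per grid cell, counting only times when the cell lies
        within radius_km of the cyclone centre (sketch of the 72 h footprint)."""
        glat, glon = np.meshgrid(lat, lon, indexing="ij")
        footprint = np.zeros_like(wind[0])
        for t, (clat, clon) in enumerate(track):
            dist = 111.0 * np.hypot(glat - clat,
                                    (glon - clon) * np.cos(np.radians(clat)))
            inside = dist <= radius_km
            footprint[inside] = np.maximum(footprint[inside], wind[t][inside])
        return footprint

    def storm_loss(footprint, population, threshold=25.0, exponent=3.0):
        """Population-weighted loss proxy: damage grows as a power of the wind
        speed excess over a threshold (functional form is an assumption)."""
        excess = np.clip(footprint - threshold, 0.0, None)
        return float(np.sum(population * excess ** exponent))

    rng = np.random.default_rng(11)
    lat, lon = np.linspace(45, 55, 20), np.linspace(-5, 15, 40)
    wind = rng.uniform(10, 40, size=(72, 20, 40))          # hourly wind, m/s
    track = np.c_[np.linspace(48, 53, 72), np.linspace(-5, 15, 72)]
    fp = storm_footprint(wind, lat, lon, track)
    print("loss proxy:", storm_loss(fp, population=np.ones_like(fp)))
    ```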

  14. Total probabilities of ensemble runoff forecasts

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian

    2017-04-01

    Ensemble forecasting has a long history in meteorological modelling, as an indication of the uncertainty of the forecasts. However, it is necessary to calibrate and post-process the ensembles as they often exhibit both bias and dispersion errors. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters varying in space and time, while giving a spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, which makes it unsuitable for our purpose. Our post-processing method for the ensembles is developed in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu), where we are making forecasts for the whole of Europe, based on observations from around 700 catchments. As the target is flood forecasting, we are also more interested in improving the forecast skill for high flows than in a good prediction of the entire flow regime. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different meteorological forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to estimate the total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but we add a spatial penalty in the calibration process to force a spatial correlation of the parameters. The penalty takes distance, stream connectivity and size of the catchment areas into account. This can in some cases have a slight negative impact on the calibration error, but avoids large differences between parameters of nearby locations, whether stream-connected or not. The spatial calibration also makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skill in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.
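
    Of the post-processing methods cited above, EMOS (Gneiting et al., 2005) is compact enough to sketch: fit a Gaussian predictive distribution N(a + b*mean, c + d*var) by minimum CRPS estimation. The data here are synthetic, and the spatial penalty described above is omitted.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize

    def crps_gaussian(y, mu, sigma):
        """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at observation y."""
        z = (y - mu) / sigma
        return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

    def fit_emos(ens_mean, ens_var, obs):
        """Minimal EMOS: predictive N(a + b*mean, c + d*var), parameters chosen
        by minimum CRPS estimation (Gneiting et al., 2005)."""
        def cost(p):
            a, b, c, d = p
            sigma = np.sqrt(np.clip(c + d * ens_var, 1e-6, None))
            return crps_gaussian(obs, a + b * ens_mean, sigma).mean()
        return minimize(cost, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

    rng = np.random.default_rng(7)
    truth = rng.normal(size=500)
    ens = truth[:, None] + 0.5 + rng.normal(scale=0.8, size=(500, 20))  # biased, dispersed
    params = fit_emos(ens.mean(axis=1), ens.var(axis=1), truth)
    print("a, b, c, d =", np.round(params, 2))   # a should absorb the +0.5 bias
    ```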

  15. Dielectrophoretic immobilisation of nanoparticles as isolated singles in regular arrays

    NASA Astrophysics Data System (ADS)

    Knigge, Xenia; Wenger, Christian; Bier, Frank F.; Hölzel, Ralph

    2018-02-01

    We demonstrate the immobilisation of polystyrene nanoparticles on vertical nano-electrodes by means of dielectrophoresis. The electrodes have diameters of 500 nm or 50 nm, respectively, and are arranged in arrays of several thousand electrodes, allowing many thousands of experiments in parallel. At a frequency of 15 kHz, which is found favourable for polystyrene, several occupation patterns are observed, and both temporary and permanent immobilisation is achieved. In addition, a histogram method is applied, which allows us to determine the number of particles occupying the electrodes. These results are validated with scanning electron microscopy images. Immobilising exactly one particle at each electrode tip is achieved for electrode tip diameters of half the particle size. Extension of this system down to the level of single molecules is envisaged, which will avoid ensemble averaging at still statistically large sample sizes.

  16. GaAs/GaP quantum dots: Ensemble of direct and indirect heterostructures with room temperature optical emission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dadgostar, S.; Mogilatenko, A.; Masselink, W. T.

    2016-03-07

    We describe the optical emission and the carrier dynamics of an ensemble of self-assembled GaAs quantum dots embedded in GaP(001). The QD formation is driven by the 3.6% lattice mismatch between GaAs and GaP in the Stranski-Krastanow mode after deposition of more than 1.2 monolayers of GaAs. The quantum dots have an areal density between 6 × 10^10 and 7.6 × 10^10 cm^-2 and a multimodal size distribution. The luminescence spectra show two peaks in the range of 1.7 and 2.1 eV. The samples with larger quantum dots show red emission and less thermal quenching compared with the samples with smaller QDs. The large QDs luminesce up to room temperature. We attribute the high-energy emission to indirect carrier recombination in the thin quantum wells or small strained quantum dots, whereas the low-energy red emission is due to direct electron-hole recombination in the relaxed quantum dots.

  17. Fast Adapting Ensemble: A New Algorithm for Mining Data Streams with Concept Drift

    PubMed Central

    Ortíz Díaz, Agustín; Ramos-Jiménez, Gonzalo; Frías Blanco, Isvani; Caballero Mota, Yailé; Morales-Bueno, Rafael

    2015-01-01

    The treatment of large data streams in the presence of concept drifts is one of the main challenges in the field of data mining, particularly when the algorithms have to deal with concepts that disappear and then reappear. This paper presents a new algorithm, called Fast Adapting Ensemble (FAE), which adapts very quickly to both abrupt and gradual concept drifts, and has been specifically designed to deal with recurring concepts. FAE processes the learning examples in blocks of the same size, but it does not have to wait for the batch to be complete in order to adapt its base classification mechanism. FAE incorporates a drift detector to improve the handling of abrupt concept drifts and stores a set of inactive classifiers that represent old concepts, which are activated very quickly when these concepts reappear. We compare our new algorithm with various well-known learning algorithms on common benchmark datasets. The experiments show promising results for the proposed algorithm (regarding accuracy and runtime) in handling different types of concept drift. PMID:25879051
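
    A loose sketch of FAE's main ingredients, with scikit-learn trees as base learners: learn a member per fixed-size block, keep old members as stored concepts, and let the members that score best on the newest block vote, so recurring concepts reactivate quickly. The drift detector and FAE's within-block adaptation are omitted here.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    class FAESketch:
        """Simplified block-based ensemble for recurring concepts (not FAE itself)."""

        def __init__(self, block_size=100, n_active=3, max_stored=15):
            self.block_size, self.n_active, self.max_stored = block_size, n_active, max_stored
            self.members, self.buffer = [], []

        def observe(self, x, y):
            self.buffer.append((x, y))
            if len(self.buffer) == self.block_size:
                X = np.array([v for v, _ in self.buffer])
                Y = np.array([c for _, c in self.buffer])
                self.members.append(DecisionTreeClassifier(max_depth=5).fit(X, Y))
                # Rank all stored members (old concepts included) on the new block,
                # so a member matching a reappearing concept moves to the front.
                self.members.sort(key=lambda m: m.score(X, Y), reverse=True)
                del self.members[self.max_stored:]      # cap the concept store
                self.buffer = []

        def predict(self, x):
            votes = [int(m.predict([x])[0]) for m in self.members[: self.n_active]]
            return max(set(votes), key=votes.count) if votes else 0

    rng = np.random.default_rng(12)
    fae = FAESketch()
    for t in range(1000):
        x = rng.normal(size=3)
        concept = (t // 500) % 2                        # abrupt drift at t = 500
        y = int(x[0] > 0) if concept == 0 else int(x[0] < 0)
        fae.observe(x, y)
    print("members stored:", len(fae.members))
    ```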

  18. Impact of hindcast length on estimates of seasonal climate predictability.

    PubMed

    Shi, W; Schaller, N; MacLeod, D; Palmer, T N; Weisheimer, A

    2015-03-16

    It has recently been argued that single-model seasonal forecast ensembles are overdispersive, implying that the real world is more predictable than indicated by estimates of so-called perfect-model predictability, particularly over the North Atlantic. However, such estimates are based on relatively short forecast data sets comprising just 20 years of seasonal predictions. Here we study longer, 40-year seasonal forecast data sets from multimodel seasonal forecast ensemble projects and show that sampling uncertainty due to the length of the hindcast periods is large. The skill of forecasting the North Atlantic Oscillation during winter varies within the 40-year data sets, with high levels of skill found for some subperiods. It is demonstrated that while 20-year estimates of seasonal reliability can show evidence of overdispersive behavior, the 40-year estimates are more stable and show no evidence of overdispersion. Instead, the predominant feature on these longer time scales is underdispersion, particularly in the tropics. Key points: predictions can appear overdispersive due to hindcast-length sampling error; longer hindcasts are more robust and underdispersive, especially in the tropics; and twenty hindcasts are an inadequate sample size for assessing seasonal forecast skill.

  19. An ensemble-of-classifiers based approach for early diagnosis of Alzheimer's disease: classification using structural features of brain images.

    PubMed

    Farhan, Saima; Fahiem, Muhammad Abuzar; Tauseef, Huma

    2014-01-01

    Structural brain imaging is playing a vital role in the identification of changes that occur in the brain in association with Alzheimer's disease (AD). This paper proposes an automated image-processing approach for the identification of AD from MRI of the brain. The proposed approach is novel in the sense that it achieves higher specificity/accuracy values despite using a smaller feature set than existing approaches. Moreover, the proposed approach is capable of identifying AD patients in the early stages of the disease. The dataset consists of 85 age- and gender-matched individuals from the OASIS database. The selected features are the volumes of grey matter (GM), white matter (WM), and cerebrospinal fluid (CSF), and the size of the hippocampus. Three different classification models (SVM, MLP, and J48) are used for the identification of patients and controls. In addition, an ensemble of classifiers, based on majority voting, is adopted to overcome errors caused by an independent base classifier. A ten-fold cross-validation strategy is applied for the evaluation of our scheme. Moreover, to evaluate the performance of the proposed approach, individual features and combinations of features are fed to the individual classifiers and to the ensemble-based classifier. Using the size of the left hippocampus as the feature, the accuracy achieved with the ensemble of classifiers is 93.75%, with 100% specificity and 87.5% sensitivity.
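
    A majority-voting ensemble of this kind is straightforward to reproduce with scikit-learn; the sketch below substitutes DecisionTreeClassifier for Weka's J48 (both are C4.5-style decision trees) and synthetic data for the OASIS features, so the numbers it prints are illustrative only.

```python
# Majority-voting ensemble over SVM, MLP, and a decision tree, loosely
# mirroring the paper's setup; data here is synthetic, not OASIS.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the 4 structural features (GM, WM, CSF volumes, hippocampus size).
X, y = make_classification(n_samples=85, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", make_pipeline(StandardScaler(), SVC())),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(max_iter=2000, random_state=0))),
        ("tree", DecisionTreeClassifier(random_state=0)),  # J48 analogue
    ],
    voting="hard",  # majority vote, as in the paper
)

scores = cross_val_score(ensemble, X, y, cv=10)  # ten-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```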

  20. Ensemble gene function prediction database reveals genes important for complex I formation in Arabidopsis thaliana.

    PubMed

    Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek

    2018-03-01

    Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists.

  1. Active classifier selection for RGB-D object categorization using a Markov random field ensemble method

    NASA Astrophysics Data System (ADS)

    Durner, Maximilian; Márton, Zoltán; Hillenbrand, Ulrich; Ali, Haider; Kleinsteuber, Martin

    2017-03-01

    In this work, a new ensemble method for the task of category recognition in different environments is presented. The focus is on service robotic perception in an open environment, where the robot's task is to recognize previously unseen objects of predefined categories, based on training on a public dataset. We propose an ensemble learning approach, based on a Markov Random Field (MRF), that can flexibly combine complementary sources of information (different state-of-the-art descriptors computed on color and depth images). By exploiting its specific characteristics, the MRF ensemble method can also be executed as a Dynamic Classifier Selection (DCS) system. In the experiments, the committee- and topology-dependent performance boost of our ensemble is shown. Despite reduced computational cost and the use of less information, our strategy performs on a par with common ensemble approaches. Finally, the impact of large differences between datasets is analyzed.

  2. Evolution of precipitation extremes in two large ensembles of climate simulations

    NASA Astrophysics Data System (ADS)

    Martel, Jean-Luc; Mailhot, Alain; Talbot, Guillaume; Brissette, François; Ludwig, Ralf; Frigon, Anne; Leduc, Martin; Turcotte, Richard

    2017-04-01

    Recent studies project significant changes in the future distribution of precipitation extremes due to global warming. It is likely that extreme precipitation intensity will increase in a future climate and that extreme events will be more frequent. In this work, annual maximum daily precipitation series from the Canadian Earth System Model (CanESM2) 50-member large ensemble (spatial resolution of 2.8° x 2.8°) and the Community Earth System Model (CESM1) 40-member large ensemble (spatial resolution of 1° x 1°) are used to investigate extreme precipitation over the historical (1980-2010) and future (2070-2100) periods. The use of these ensembles results in 1500 (30 years x 50 members) and 1200 (30 years x 40 members) simulated years, respectively, over both the historical and future periods. These large datasets allow the computation of empirical daily extreme precipitation quantiles for large return periods. Using the CanESM2 and CESM1 large ensembles, extreme daily precipitation with return periods ranging from 2 to 100 years is computed for the historical and future periods to assess the impact of climate change. Results indicate that daily precipitation extremes generally increase in the future over most land grid points and that these increases also affect the 100-year extreme daily precipitation. Considering that many public infrastructures have lifespans exceeding 75 years, the increase in extremes has important implications for the service levels of water infrastructure and for public safety. Estimated increases in very extreme precipitation events (e.g. 100-year events) will drastically change the likelihood and extent of flooding in a future climate. These results, although interesting, need to be extended to sub-daily durations, which are relevant for urban flood protection and urban infrastructure design (e.g. sewer networks, culverts). Models and simulations at finer spatial and temporal resolution are therefore needed.
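
    The pooling step is simple enough to show concretely: with all members concatenated, the empirical T-year return level is just the (1 - 1/T) quantile of the pooled annual maxima. The sketch below uses synthetic Gumbel-distributed maxima in place of model output.

```python
# Empirical return levels from pooled large-ensemble annual maxima:
# 50 members x 30 years = 1500 samples, so even the 100-year daily
# precipitation can be read off as an empirical quantile.
import numpy as np

rng = np.random.default_rng(0)
annual_maxima = rng.gumbel(loc=40.0, scale=10.0, size=(50, 30))  # mm/day, synthetic

pooled = annual_maxima.ravel()                    # 1500 simulated years
for T in (2, 10, 50, 100):
    level = np.quantile(pooled, 1.0 - 1.0 / T)    # annual exceedance prob. 1/T
    print(f"{T:>3}-year daily precipitation: {level:.1f} mm")
```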

  3. Post-processing ECMWF precipitation and temperature ensemble reforecasts for operational hydrologic forecasting at various spatial scales

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Reggiani, P.; Weerts, A. H.

    2013-09-01

    The ECMWF temperature and precipitation ensemble reforecasts are evaluated for biases in the mean, spread and forecast probabilities, and for how these biases propagate to streamflow ensemble forecasts. The forcing ensembles are subsequently post-processed to reduce bias and increase skill, and to investigate whether this leads to improved streamflow ensemble forecasts. Multiple post-processing techniques are used: the quantile-to-quantile transform, linear regression with an assumption of bivariate normality, and logistic regression. Both the raw and post-processed ensembles are run through a hydrologic model of the river Rhine to create streamflow ensembles. The results are compared using multiple verification metrics and skill scores: relative mean error, the Brier skill score and its decompositions, the mean continuous ranked probability skill score and its decomposition, and the ROC score. Verification of the streamflow ensembles is performed at multiple spatial scales: relatively small headwater basins, large tributaries and the Rhine outlet at Lobith. The streamflow ensembles are verified against simulated streamflow in order to isolate the effects of biases in the forcing ensembles and any improvements therein. The results indicate that the forcing ensembles contain significant biases, and that these cascade to the streamflow ensembles. Some of the bias in the forcing ensembles is unconditional in nature; this was resolved by a simple quantile-to-quantile transform. Improvements in conditional bias and skill of the forcing ensembles vary with forecast lead time, precipitation amount, and spatial scale, but are generally moderate. The translation to streamflow forecast skill is further muted, and several explanations are considered, including limitations in the modelling of the space-time covariability of the forcing ensembles and the presence of storages.
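
    Of the post-processing techniques listed, the quantile-to-quantile transform is the easiest to make concrete: each raw value is mapped to the observed value with the same climatological non-exceedance probability. A minimal empirical version, with invented names and synthetic climatologies, might look as follows.

```python
# Empirical quantile-to-quantile (quantile mapping) transform: a forecast
# value is replaced by the observed climatological value at the same rank.
import numpy as np

def quantile_map(forecast, fcst_clim, obs_clim):
    """Map forecast values through the forecast-to-observation quantile relation."""
    fcst_sorted = np.sort(fcst_clim)
    obs_sorted = np.sort(obs_clim)
    # empirical non-exceedance probability of each raw forecast value
    p = np.searchsorted(fcst_sorted, forecast, side="right") / len(fcst_sorted)
    p = np.clip(p, 0.0, 1.0)
    return np.quantile(obs_sorted, p)

rng = np.random.default_rng(1)
obs_clim = rng.gamma(2.0, 3.0, size=5000)          # "observed" precipitation climate
fcst_clim = 0.8 * rng.gamma(2.0, 3.0, size=5000)   # low-biased model climate
print(quantile_map(np.array([5.0, 20.0]), fcst_clim, obs_clim))
```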

  4. Does internal variability change in response to global warming? A large ensemble modelling study of tropical rainfall

    NASA Astrophysics Data System (ADS)

    Milinski, S.; Bader, J.; Jungclaus, J. H.; Marotzke, J.

    2017-12-01

    There is some consensus on mean-state changes of rainfall under global warming; changes in the internal variability, on the other hand, are more difficult to analyse and have received less attention despite their importance for understanding changes in extreme events, such as droughts or floods. We analyse changes in rainfall variability in the tropical Atlantic region. We use a 100-member ensemble of historical (1850-2005) model simulations with the Max Planck Institute for Meteorology Earth System Model (MPI-ESM1) to identify changes in internal rainfall variability. To investigate the effects of global warming on the internal variability, we employ an additional 68-member ensemble of model simulations with stronger external forcing (1% CO2 increase per year, same integration length as the historical simulations). The focus of our study is on the oceanic Atlantic ITCZ. We find that the internal variability of rainfall over the tropical Atlantic does change due to global warming and that these changes in variability are larger than changes in the mean state in some regions. Splitting the total variance into patterns of variability shows that the variability on the southern flank of the ITCZ becomes more dominant, i.e. it explains a larger fraction of the total variance in a warmer climate. In agreement with previous studies, we find that changes in the mean state show an increase and narrowing of the ITCZ. The large ensembles allow a statistically robust separation of the changes in variability that can be explained by internal variability from those that can be attributed to the external forcing. Furthermore, we argue that internal variability in a transient climate is only well defined in the ensemble domain and not in the temporal domain, which requires the use of a large ensemble.
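
    The ensemble-domain definition of internal variability has a direct numerical counterpart: averaging across members isolates the forced signal, and deviations of individual members from that mean estimate the internal component. A toy decomposition, with synthetic data standing in for rainfall, is sketched below.

```python
# Decomposing ensemble variability: the ensemble mean isolates the forced
# component; deviations from it estimate the internal variability.
import numpy as np

rng = np.random.default_rng(2)
members, years = 100, 156                      # e.g. 1850-2005 historical runs
forced = 1.0 + 0.2 * np.sin(np.linspace(0, 6 * np.pi, years))   # shared signal
data = forced + 0.5 * rng.standard_normal((members, years))     # + internal noise

ens_mean = data.mean(axis=0)                  # forced estimate (noise averages out)
forced_var = ens_mean.var()                   # temporal variance of forced signal
internal_var = (data - ens_mean).var()        # spread about the mean: internal
print(f"forced variance   ~ {forced_var:.3f}")
print(f"internal variance ~ {internal_var:.3f}")
```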

  5. Surface and top-of-atmosphere radiative feedback kernels for CESM-CAM5

    NASA Astrophysics Data System (ADS)

    Pendergrass, Angeline G.; Conley, Andrew; Vitt, Francis M.

    2018-02-01

    Radiative kernels at the top of the atmosphere are useful for decomposing changes in atmospheric radiative fluxes due to feedbacks from atmosphere and surface temperature, water vapor, and surface albedo. Here we describe and validate radiative kernels calculated with the large-ensemble version of CAM5, CESM1.1.2, at the top of the atmosphere and the surface. Estimates of the radiative forcing from greenhouse gases and aerosols in RCP8.5 in the CESM large-ensemble simulations are also diagnosed. As an application, feedbacks are calculated for the CESM large ensemble. The kernels are freely available at https://doi.org/10.5065/D6F47MT6, and accompanying software can be downloaded from https://github.com/apendergrass/cam5-kernels.

  6. Large-scale recording of neuronal ensembles.

    PubMed

    Buzsáki, György

    2004-05-01

    How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron-electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.

  7. Forced synchronization of large-scale circulation to increase predictability of surface states

    NASA Astrophysics Data System (ADS)

    Shen, Mao-Lin; Keenlyside, Noel; Selten, Frank; Wiegerinck, Wim; Duane, Gregory

    2016-04-01

    Numerical models are key tools for projecting future climate change. The lack of perfect initial conditions and of perfect knowledge of the laws of physics, as well as inherently chaotic behavior, limits predictions. Conceptually, the atmospheric variables can be decomposed into a predictable component (signal) and an unpredictable component (noise). In ensemble prediction, the anomaly of the ensemble mean is regarded as the signal and the ensemble spread as the noise. Naturally, the prediction skill will be higher if the signal-to-noise ratio (SNR) is larger in the initial conditions. We run two ensemble experiments to explore a way to increase the SNR of surface winds and temperature. One experiment is an AGCM with prescribed sea surface temperature (SST); in the other, the AGCM additionally has its upper-level temperature and winds nudged to ERA-Interim. Each ensemble has 30 members. A larger SNR is expected and found over the tropical ocean in the first experiment, because tropical convection and the associated surface wind convergence are to a large extent driven by the SST. However, a small SNR is found over the high-latitude oceans and over land due to chaotic, non-synchronized atmospheric states. In the second experiment the upper-level temperature and winds are forced to be synchronized (nudged to reanalysis), and hence a larger SNR of surface winds and temperature is expected. Furthermore, different nudging coefficients are tested in order to understand the limits of synchronizing both the large-scale circulation and the surface states. These experiments will be useful for developing strategies to synchronize the 3-D states of atmospheric models that can later be used to build a super model.

  8. An Ensemble Recentering Kalman Filter with an Application to Argo Temperature Data Assimilation into the NASA GEOS-5 Coupled Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.

    2013-01-01

    A two-step ensemble recentering Kalman filter (ERKF) analysis scheme is introduced. The algorithm consists of a recentering step followed by an ensemble Kalman filter (EnKF) analysis step. The recentering step adjusts the prior distribution of an ensemble of model states so that the deviations of individual samples from the sample mean are unchanged but the original sample mean is shifted to the prior position of the most likely particle, where the likelihood of each particle is measured in terms of closeness to a chosen subset of the observations. The computational cost of the ERKF is essentially the same as that of an EnKF of the same ensemble size. The ERKF is applied to the assimilation of Argo temperature profiles into the OGCM component of an ensemble of NASA GEOS-5 coupled models. Unassimilated Argo salinity data are used for validation. A surprisingly small number (16) of model trajectories is sufficient to significantly improve model estimates of salinity over estimates from an ensemble run without assimilation. The two-step algorithm also performs better than the EnKF, although its performance is degraded in poorly observed regions.
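
    The recentering step itself reduces to a one-line array operation once the most likely member has been identified. The sketch below assumes a linear observation operator H and uses distance to the observations as the likelihood proxy; all names and dimensions are illustrative.

```python
# Recentering sketch: shift the prior ensemble mean onto the most likely
# member while leaving every member's deviation from the mean unchanged.
import numpy as np

def recenter(ensemble, obs, H):
    """ensemble: (n_members, n_state); obs: (n_obs,); H: (n_obs, n_state)."""
    mean = ensemble.mean(axis=0)
    # likelihood proxy: distance of each member from the observations
    misfit = np.linalg.norm(ensemble @ H.T - obs, axis=1)
    best = ensemble[np.argmin(misfit)]
    return (ensemble - mean) + best   # deviations preserved, mean moved

rng = np.random.default_rng(3)
ens = rng.standard_normal((16, 8))    # 16 members, as in the paper
H = np.eye(3, 8)                      # observe the first 3 state variables
obs = np.array([1.0, -0.5, 0.2])
shifted = recenter(ens, obs, H)
print(shifted.mean(axis=0)[:3])       # mean now sits at the best member
```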

  9. Ensemble Deep Learning for Biomedical Time Series Classification

    PubMed Central

    2016-01-01

    Ensemble learning has been shown, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on ensemble learning. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared with well-known ensemble methods such as Bagging and AdaBoost. PMID:27725828

  10. Ensemble Data Mining Methods

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2004-01-01

    Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary---any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
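
    The arithmetic behind that claim is worth making explicit: if members err independently with probability p, the chance that a majority of n members errs simultaneously is a binomial tail that shrinks quickly with n. A small check (independence is the idealizing assumption here):

```python
# P(majority wrong) for n independent members, each wrong with probability p.
from math import comb

def majority_error(n, p):
    k_needed = n // 2 + 1                      # votes needed for a wrong majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_needed, n + 1))

for n in (1, 5, 21):
    print(n, f"{majority_error(n, 0.3):.4f}")  # 0.3000, 0.1631, 0.0264
# The committee beats any single member only because the errors were assumed
# independent, which is exactly what "complementary" members provide.
```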

  11. Plasticity of the Binding Site of Renin: Optimized Selection of Protein Structures for Ensemble Docking.

    PubMed

    Strecker, Claas; Meyer, Bernd

    2018-05-29

    Protein flexibility poses a major challenge for the docking of potential ligands in that the binding site can adopt different shapes. Docking algorithms usually keep the protein rigid and only treat the ligand as flexible. However, a wrong assessment of the shape of the binding pocket can prevent a ligand from adopting a correct pose. Ensemble docking is a simple yet promising method to address this problem: ligands are docked into multiple structures, and the results are subsequently merged. The selection of protein structures is a significant factor for this approach. In this work we perform a comprehensive comparative study evaluating the impact of structure selection on ensemble docking. We perform ensemble docking with several crystal structures and with structures derived from molecular dynamics simulations of renin, an attractive target for antihypertensive drugs. Here, 500 ns of MD simulations revealed binding-site shapes not found in any available crystal structure. We evaluate the importance of structure selection for ensemble docking by comparing binding pose prediction, the ability to rank actives above nonactives (screening utility), and scoring accuracy. For ensemble definition, k-means clustering appears to be better suited than hierarchical clustering with average linkage. The best-performing ensemble consists of four crystal structures and is able to reproduce the native ligand poses better than any individual crystal structure. Moreover, this ensemble outperforms 88% of all individual crystal structures in terms of screening utility as well as scoring accuracy. Similarly, ensembles of MD-derived structures perform on average better than 75% of the individual crystal structures in terms of scoring accuracy at all inspected ensemble sizes.
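
    Ensemble definition by k-means typically amounts to clustering snapshots and keeping one representative structure per cluster. The sketch below is one plausible reading of that workflow, with random features standing in for aligned binding-site coordinates and the cluster count set to four, the best-performing ensemble size reported above.

```python
# Picking an ensemble of receptor structures from an MD trajectory by k-means:
# cluster binding-site coordinates and keep the frame nearest each centroid.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
frames = rng.standard_normal((500, 60))   # 500 snapshots x flattened coordinates

k = 4                                      # illustrative ensemble size
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)
ensemble_ids = [
    int(np.argmin(np.linalg.norm(frames - c, axis=1)))  # medoid frame per cluster
    for c in km.cluster_centers_
]
print("frames selected for ensemble docking:", sorted(ensemble_ids))
```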

  12. Comparison of different assimilation schemes in an operational assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre

    2016-04-01

    In this paper, four assimilation schemes, comprising an intermittent assimilation scheme (INT) and three incremental assimilation schemes (IAU 0, IAU 50 and IAU 100), are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The three IAU schemes differ from each other in the position of the increment update window, which has the same size as the assimilation window; the suffixes 0, 50 and 100 denote the percentage overlap of the increment update window with the current assimilation window. Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to temperature. The ensemble is diagnosed and validated before the assimilation experiments by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms. The relevance of each assimilation scheme is evaluated through analyses of thermohaline variables and current velocities. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system.
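
    The CRPS used for the probabilistic validation has a convenient kernel form for a finite ensemble, CRPS = E|X - y| - (1/2)E|X - X'|, which can be evaluated directly from the members. A minimal implementation, with a synthetic 60-member ensemble matching the ensemble size above:

```python
# Sample CRPS for an ensemble forecast (lower is better), kernel form.
import numpy as np

def crps_ensemble(members, y):
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - y))                           # E|X - y|
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))  # E|X - X'|/2
    return term1 - term2

rng = np.random.default_rng(5)
ens = rng.normal(0.3, 1.0, size=60)   # sixty members, as in the experiment
print(f"CRPS vs observation 0.0: {crps_ensemble(ens, 0.0):.3f}")
```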

  13. A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles

    NASA Astrophysics Data System (ADS)

    Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.

    2016-12-01

    Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally weighted multi-model mean, subscribing to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure can lead to a false model consensus and to incorrect estimates of uncertainty and of the effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and that we are not fitting noise, thus providing improved observationally constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, in which all model runs are considered. Moreover, the spread is decreased well beyond what would be expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
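
    Subset selection of this kind is often implemented greedily: starting from the empty set, repeatedly add the run that most reduces the RMSE of the subset mean against the observations. The sketch below illustrates that idea on synthetic fields; the paper's exact search strategy may differ.

```python
# Greedy forward selection of an ensemble subset whose mean minimizes RMSE
# against an observation field (a simple stand-in for the selection idea).
import numpy as np

def select_subset(models, obs, k):
    """models: (n_models, n_grid) climatologies; returns indices of chosen runs."""
    chosen = []
    for _ in range(k):
        best_i, best_rmse = None, np.inf
        for i in range(len(models)):
            if i in chosen:
                continue
            mean = models[chosen + [i]].mean(axis=0)
            rmse = np.sqrt(np.mean((mean - obs) ** 2))
            if rmse < best_rmse:
                best_i, best_rmse = i, rmse
        chosen.append(best_i)   # errors of chosen runs partially cancel
    return chosen

rng = np.random.default_rng(6)
obs = rng.standard_normal(1000)
models = obs + 0.8 * rng.standard_normal((38, 1000))  # biased CMIP5-like runs
print(select_subset(models, obs, k=7))
```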

  14. Entanglement of 3000 atoms by detecting one photon

    NASA Astrophysics Data System (ADS)

    Vuletic, Vladan

    2016-05-01

    Quantum-mechanically correlated (entangled) states of many particles are of interest in quantum information, quantum computing and quantum metrology. In particular, entangled states of many particles can be used to overcome limits on measurements performed with ensembles of independent atoms (the standard quantum limit). Metrologically useful entangled states of large atomic ensembles (spin-squeezed states) have been experimentally realized. These states display Gaussian spin distribution functions with a non-negative Wigner quasiprobability distribution function. We report the generation of entanglement in a large atomic ensemble via an interaction with a very weak laser pulse; remarkably, the detection of a single photon prepares several thousand atoms in an entangled state. We reconstruct a negative-valued Wigner function, and verify an entanglement depth (the minimum number of mutually entangled atoms) that comprises 90% of the atomic ensemble containing 3100 atoms. Further technical improvement should allow the generation of more complex Schrödinger cat states, and of states that overcome the standard quantum limit.

  15. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that mesoscale circulations resulting from stationary forcing at larger scales may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties to error growth at the mesoscale remain largely unknown. Here, 100-member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalent, total liquid water, and 850 hPa temperatures, representing mesoscale features; and the sea level pressure field, representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial-condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may be more important in limiting predictability than errors in the unresolved, small-scale initial conditions.

  16. Comparison of initial perturbation methods for the mesoscale ensemble prediction system of the Meteorological Research Institute for the WWRP Beijing 2008 Olympics Research and Development Project (B08RDP)

    NASA Astrophysics Data System (ADS)

    Saito, Kazuo; Hara, Masahiro; Kunii, Masaru; Seko, Hiromu; Yamaguchi, Munehiko

    2011-05-01

    Different initial perturbation methods for the mesoscale ensemble prediction were compared by the Meteorological Research Institute (MRI) as a part of the intercomparison of mesoscale ensemble prediction systems (EPSs) of the World Weather Research Programme (WWRP) Beijing 2008 Olympics Research and Development Project (B08RDP). Five initial perturbation methods for mesoscale ensemble prediction were developed for B08RDP and compared at MRI: (1) a downscaling method of the Japan Meteorological Agency (JMA)'s operational one-week EPS (WEP), (2) a targeted global model singular vector (GSV) method, (3) a mesoscale model singular vector (MSV) method based on the adjoint model of the JMA non-hydrostatic model (NHM), (4) a mesoscale breeding growing mode (MBD) method based on the NHM forecast and (5) a local ensemble transform (LET) method based on the local ensemble transform Kalman filter (LETKF) using NHM. These perturbation methods were applied to the preliminary experiments of the B08RDP Tier-1 mesoscale ensemble prediction with a horizontal resolution of 15 km. To make the comparison easier, the same horizontal resolution (40 km) was employed for the three mesoscale model-based initial perturbation methods (MSV, MBD and LET). The GSV method completely outperformed the WEP method, confirming the advantage of targeting in mesoscale EPS. The GSV method generally performed well with regard to root mean square errors of the ensemble mean, large growth rates of ensemble spreads throughout the 36-h forecast period, and high detection rates and high Brier skill scores (BSSs) for weak rains. On the other hand, the mesoscale model-based initial perturbation methods showed good detection rates and BSSs for intense rains. The MSV method showed a rapid growth in the ensemble spread of precipitation up to a forecast time of 6 h, which suggests suitability of the mesoscale SV for short-range EPSs, but the initial large growth of the perturbation did not last long. The performance of the MBD method was good for ensemble prediction of intense rain with a relatively small computing cost. The LET method showed similar characteristics to the MBD method, but the spread and growth rate were slightly smaller and the relative operating characteristic area skill score and BSS did not surpass those of MBD. These characteristic features of the five methods were confirmed by checking the evolution of the total energy norms and their growth rates. Characteristics of the initial perturbations obtained by four methods (GSV, MSV, MBD and LET) were examined for the case of a synoptic low-pressure system passing over eastern China. With GSV and MSV, the regions of large spread were near the low-pressure system, but with MSV, the distribution was more concentrated on the mesoscale disturbance. On the other hand, large-spread areas were observed southwest of the disturbance in MBD and LET. The horizontal pattern of LET perturbation was similar to that of MBD, but the amplitude of the LET perturbation reflected the observation density.

  17. Probabilistic versus deterministic skill in predicting the western North Pacific-East Asian summer monsoon variability with multimodel ensembles

    NASA Astrophysics Data System (ADS)

    Yang, Xiu-Qun; Yang, Dejian; Xie, Qian; Zhang, Yaocun; Ren, Xuejuan; Tang, Youmin

    2017-04-01

    Based on historical forecasts from three quasi-operational multi-model ensemble (MME) systems, this study assesses the superiority of a coupled MME over its contributing single-model ensembles (SMEs) and over an uncoupled atmospheric MME in predicting the western North Pacific-East Asian summer monsoon variability. The probabilistic and deterministic forecast skills are measured by the Brier skill score (BSS) and the anomaly correlation (AC), respectively. A forecast-format-dependent MME superiority over the SMEs is found. The probabilistic forecast skill of the MME is always significantly better than that of each SME, while the deterministic forecast skill of the MME can be lower than that of some SMEs. The MME superiority arises from both the model diversity and the ensemble size increase in the tropics, and primarily from the ensemble size increase in the subtropics. The BSS is composed of reliability and resolution, two attributes characterizing probabilistic forecast skill. The probabilistic skill increase of the MME is dominated by the dramatic improvement in reliability, while resolution is not always improved, similar to AC. A monotonic resolution-AC relationship is further found and qualitatively explained, whereas little relationship can be identified between reliability and AC. It is argued that the MME's success in improving reliability arises from an effective reduction of the overconfidence in forecast distributions. Moreover, seasonal predictions with the coupled MME are shown to be more skillful than those with the uncoupled atmospheric MME forced by persisted sea surface temperature (SST) anomalies, since the coupled MME better predicts the SST anomaly evolution in three key regions.

  18. Watching single molecules dance

    NASA Astrophysics Data System (ADS)

    Mehta, Amit Dinesh

    Molecular motors convert chemical energy, from ATP hydrolysis or ion flow, into mechanical motion. A variety of increasingly precise mechanical probes have been developed to monitor and perturb these motors at the single-molecule level. Several outstanding questions can best be approached at the single-molecule level. These include: how far does a motor progress per energy quantum consumed? How does its reaction cycle respond to load? How many productive catalytic cycles can it undergo per diffusional encounter with its track? And what is the mechanical stiffness of a single-molecule connection? A dual-beam optical trap, in conjunction with in vitro ensemble motility assays, has been used to characterize two members of the myosin superfamily: muscle myosin II and chick brain myosin V. Both move the helical polymer actin, but myosin II acts in large ensembles to drive muscle contraction or cytokinesis, while myosin V acts in small numbers to transport vesicles. An optical trapping apparatus was rendered sufficiently precise to identify a myosin working stroke to within 1 nm or so, barring systematic errors such as those perhaps due to random protein orientations. This and other light-microscopic motility assays were used to characterize myosin V: unlike myosin II, this vesicle transport protein moves through many increments of travel while remaining strongly bound to a single actin filament. The step size, stall force, and travel distance of myosin V reveal a remarkably efficient motor capable of moving along a helical track for over a micrometer without significantly spiraling around it. Such properties are fully consistent with the putative role of an organelle transport motor, present in small numbers, maintaining movement over ranges that are long relative to cellular size scales. The contrast between myosin II and myosin V resembles that between a human running on the moon and one walking on earth: the former allows for faster motion in larger ensembles but for less travel distance in smaller ones.

  19. Variability of North Atlantic Hurricane Frequency in a Large Ensemble of High-Resolution Climate Simulations

    NASA Astrophysics Data System (ADS)

    Mei, W.; Kamae, Y.; Xie, S. P.

    2017-12-01

    Forced and internal variability of North Atlantic hurricane frequency during 1951-2010 is studied using a large ensemble of climate simulations by a 60-km atmospheric general circulation model forced by observed sea surface temperatures (SSTs). The simulations capture well the interannual-to-decadal variability of hurricane frequency in best track data, and further suggest a possible underestimate of hurricane counts in the current best track data prior to 1966, when satellite measurements were unavailable. A genesis potential index (GPI) averaged over the Main Development Region (MDR) accounts for more than 80% of the forced variations in hurricane frequency, with potential intensity and vertical wind shear being the dominant factors. In line with previous studies, the difference between MDR SST and tropical mean SST is a simple but useful predictor; a one-degree increase in this SST difference produces 7.1±1.4 more hurricanes. The hurricane frequency also exhibits internal variability that is comparable in magnitude to the interannual variability. The 100-member ensemble allows us to address the following important questions: (1) Are the observations equivalent to one realization of such a large ensemble? (2) How many ensemble members are needed to reproduce the variability in observations and in the forced component of the simulations? The sources of the internal variability in hurricane frequency will be identified and discussed. The results provide an explanation for the relatively weak correlation (~0.6) between MDR GPI and hurricane frequency on interannual timescales in observations.

  20. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local-error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local-error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.

  1. Emergent patterns in interacting neuronal sub-populations

    NASA Astrophysics Data System (ADS)

    Kamal, Neeraj Kumar; Sinha, Sudeshna

    2015-05-01

    We investigate an ensemble of coupled model neurons, consisting of groups of varying sizes and intrinsic dynamics, ranging from periodic to chaotic, where the inter-group coupling interaction acts effectively like a dynamic signal from a different sub-population. We observe that the minority group can significantly influence the majority group. For instance, when a small chaotic group is coupled to a large periodic group, the chaotic group de-synchronizes. However, counter-intuitively, when a small periodic group couples strongly to a large chaotic group, it leads to complete synchronization of the majority chaotic population, which also spikes at the frequency of the small periodic group. The small group of periodic neurons thus appears to act like a pacemaker for the whole network. Further, we report the existence of varied clustering patterns, ranging from sets of synchronized clusters to anti-phase clusters, governed by the interplay of the relative sizes and dynamics of the sub-populations. These results are relevant for understanding how one group can influence the synchrony of another group of dynamically different elements, reminiscent of event-related synchronization/de-synchronization in complex networks.

  2. Methane adsorption in nanoporous carbon: the numerical estimation of optimal storage conditions

    NASA Astrophysics Data System (ADS)

    Ortiz, L.; Kuchta, B.; Firlej, L.; Roth, M. W.; Wexler, C.

    2016-05-01

    The efficient storage and transportation of natural gas is one of the most important enabling technologies for energy applications. Adsorption in porous systems, which allows high-density fuel to be transported at low pressure, is one possible solution. We present and discuss extensive grand canonical Monte Carlo (GCMC) simulation results for the adsorption of methane into slit-shaped graphitic pores of various widths (between 7 Å and 50 Å) at pressures between 0 bar and 360 bar. Our results shed light on the dependence of film structure on pore width and pressure. For large widths, we observe multi-layer adsorption at supercritical conditions, with excess amounts even at large distances from the pore walls, originating from the attractive interaction exerted by a very high-density film in the first layer. We are also able to successfully model the experimental adsorption isotherms of heterogeneous activated carbon samples by means of an ensemble average over pore widths, based exclusively on the pore-size distributions (PSD) calculated from subcritical nitrogen adsorption isotherms. Finally, we propose a new formula, based on the PSD ensemble averages, to calculate the isosteric heat of adsorption of heterogeneous systems from single-pore-width calculations. The methods proposed here will contribute to the rational design and optimization of future adsorption-based storage tanks.
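
    The PSD-weighted ensemble average is the part that translates most directly into code: the isotherm of a heterogeneous sample is modeled as a weighted sum of single-pore isotherms, with weights taken from the pore-size distribution. The sketch below uses Langmuir-like placeholder isotherms and invented weights rather than actual GCMC output.

```python
# PSD-weighted ensemble average of single-pore isotherms (toy numbers).
import numpy as np

widths = np.array([7.0, 10.0, 15.0, 30.0])       # pore widths, Angstrom
psd_weights = np.array([0.4, 0.3, 0.2, 0.1])      # from the N2-derived PSD
pressures = np.linspace(1, 360, 8)                # bar

# placeholder single-pore isotherms (Langmuir-like), one row per width;
# in the paper these would come from GCMC runs at each slit width
n_single = (widths[:, None] * pressures) / (50.0 + pressures)

n_total = psd_weights @ n_single                  # ensemble-averaged isotherm
print(np.round(n_total, 2))
```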

  3. Parameter estimation and prediction for the course of a single epidemic outbreak of a plant disease.

    PubMed

    Kleczkowski, A; Gilligan, C A

    2007-10-22

    Many epidemics of plant diseases are characterized by large variability among individual outbreaks. However, individual epidemics often follow a well-defined trajectory which is much more predictable in the short term than the ensemble (collection) of potential epidemics. In this paper, we introduce a modelling framework that allows us to deal with individual replicated outbreaks, based upon a Bayesian hierarchical analysis. Information about 'similar' replicate epidemics can be incorporated into a hierarchical model, allowing both ensemble and individual parameters to be estimated. The model is used to analyse the data from a replicated experiment involving spread of Rhizoctonia solani on radish in the presence or absence of a biocontrol agent, Trichoderma viride. The rate of primary (soil-to-plant) infection is found to be the most variable factor determining the final size of epidemics. Breakdown of biological control in some replicates results in high levels of primary infection and increased variability. The model can be used to predict new outbreaks of disease based upon knowledge from a 'library' of previous epidemics and partial information about the current outbreak. We show that forecasting improves significantly with knowledge about the history of a particular epidemic, whereas the precision of hindcasting to identify the past course of the epidemic is largely independent of detailed knowledge of the epidemic trajectory. The results have important consequences for parameter estimation, inference and prediction for emerging epidemic outbreaks.

  4. Fractal morphology, imaging and mass spectrometry of single aerosol particles in flight.

    PubMed

    Loh, N D; Hampton, C Y; Martin, A V; Starodub, D; Sierra, R G; Barty, A; Aquila, A; Schulz, J; Lomb, L; Steinbrener, J; Shoeman, R L; Kassemeyer, S; Bostedt, C; Bozek, J; Epp, S W; Erk, B; Hartmann, R; Rolles, D; Rudenko, A; Rudek, B; Foucar, L; Kimmel, N; Weidenspointner, G; Hauser, G; Holl, P; Pedersoli, E; Liang, M; Hunter, M S; Hunter, M M; Gumprecht, L; Coppola, N; Wunderer, C; Graafsma, H; Maia, F R N C; Ekeberg, T; Hantke, M; Fleckenstein, H; Hirsemann, H; Nass, K; White, T A; Tobias, H J; Farquar, G R; Benner, W H; Hau-Riege, S P; Reich, C; Hartmann, A; Soltau, H; Marchesini, S; Bajt, S; Barthelmess, M; Bucksbaum, P; Hodgson, K O; Strüder, L; Ullrich, J; Frank, M; Schlichting, I; Chapman, H N; Bogan, M J

    2012-06-27

    The morphology of micrometre-size particulate matter is of critical importance in fields ranging from toxicology to climate science, yet these properties are surprisingly difficult to measure in the particles' native environment. Electron microscopy requires collection of particles on a substrate; visible light scattering provides insufficient resolution; and X-ray synchrotron studies have been limited to ensembles of particles. Here we demonstrate an in situ method for imaging individual sub-micrometre particles to nanometre resolution in their native environment, using intense, coherent X-ray pulses from the Linac Coherent Light Source free-electron laser. We introduced individual aerosol particles into the pulsed X-ray beam, which is sufficiently intense that diffraction from individual particles can be measured for morphological analysis. At the same time, ion fragments ejected from the beam were analysed using mass spectrometry, to determine the composition of single aerosol particles. Our results show the extent of internal dilation symmetry of individual soot particles subject to non-equilibrium aggregation, and the surprisingly large variability in their fractal dimensions. More broadly, our methods can be extended to resolve both static and dynamic morphology of general ensembles of disordered particles. Such general morphology has implications in topics such as solvent accessibilities in proteins, vibrational energy transfer by the hydrodynamic interaction of amino acids, and large-scale production of nanoscale structures by flame synthesis.

  5. Spectral partitioning in equitable graphs.

    PubMed

    Barucca, Paolo

    2017-06-01

    Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of equitable graphs, i.e., random graphs with a block-regular structure, is studied, for which analytical results can be obtained. In particular, the spectral density of this ensemble is computed exactly for a modular and bipartite structure. Kesten-McKay's law for random regular graphs is found analytically to apply also for modular and bipartite structures when blocks are homogeneous. An exact solution to graph partitioning for two equal-sized communities is proposed and verified numerically, and a conjecture on the absence of an efficient recovery detectability transition in equitable graphs is suggested. A final discussion summarizes results and outlines their relevance for the solution of graph partitioning problems in other graph ensembles, in particular for the study of detectability thresholds and resolution limits in stochastic block models.


  7. Generalized thermalization for integrable system under quantum quench.

    PubMed

    Muralidharan, Sushruth; Lochan, Kinjalk; Shankaranarayanan, S

    2018-01-01

    We investigate equilibration and generalized thermalization of a quantum harmonic chain under a local quantum quench. The quench action we consider connects two disjoint harmonic chains of different sizes, so the system jumps between two integrable settings. We verify the validity of the generalized Gibbs ensemble (GGE) description for this infinite-dimensional Hilbert space system and also identify equilibration between the subsystems, as in classical systems. Using Bogoliubov transformations, we show that the eigenstates of the system prior to the quench evolve toward the GGE description. Eigenstates that are more delocalized (in the sense of the inverse participation ratio) prior to the quench tend to equilibrate more rapidly. Further, through the phase-space properties of the GGE and the strength of stimulated emission, we identify the necessary criterion on the initial states for such relaxation at late times and also identify the states that would not be well described by the GGE.
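
    For reference, the GGE invoked here has the standard maximum-entropy form, with one Lagrange multiplier per conserved charge of the integrable system (this is the textbook definition, not a formula specific to this paper):

```latex
\hat{\rho}_{\mathrm{GGE}}
  = \frac{1}{Z}\exp\Big(-\sum_{k}\lambda_{k}\hat{I}_{k}\Big),
\qquad
Z = \operatorname{Tr}\exp\Big(-\sum_{k}\lambda_{k}\hat{I}_{k}\Big),
```

    where the $\hat{I}_{k}$ are the conserved charges and each $\lambda_{k}$ is fixed by requiring $\langle\hat{I}_{k}\rangle$ to match its value in the initial state.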

  8. Perceived Average Orientation Reflects Effective Gist of the Surface.

    PubMed

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  9. The Impact of Density and Ratio on Object-Ensemble Representation in Human Anterior-Medial Ventral Visual Cortex

    PubMed Central

    Cant, Jonathan S.; Xu, Yaoda

    2015-01-01

    Behavioral research has demonstrated that observers can extract summary statistics from ensembles of multiple objects. We recently showed that a region of anterior-medial ventral visual cortex, overlapping largely with the scene-sensitive parahippocampal place area (PPA), participates in object-ensemble representation. Here we investigated the encoding of ensemble density in this brain region using fMRI-adaptation. In Experiment 1, we varied density by changing the spacing between objects and found no sensitivity in PPA to such density changes. Thus, density may not be encoded in PPA, possibly because object spacing is not perceived as an intrinsic ensemble property. In Experiment 2, we varied relative density by changing the ratio of 2 types of objects comprising an ensemble, and observed significant sensitivity in PPA to such ratio change. Although colorful ensembles were shown in Experiment 2, Experiment 3 demonstrated that sensitivity to object ratio change was not driven mainly by a change in the ratio of colors. Thus, while anterior-medial ventral visual cortex is insensitive to density (object spacing) changes, it does code relative density (object ratio) within an ensemble. Object-ensemble processing in this region may thus depend on high-level visual information, such as object ratio, rather than low-level information, such as spacing/spatial frequency. PMID:24964917

  10. Skill of Global Raw and Postprocessed Ensemble Predictions of Rainfall over Northern Tropical Africa

    NASA Astrophysics Data System (ADS)

    Vogel, Peter; Knippertz, Peter; Fink, Andreas H.; Schlueter, Andreas; Gneiting, Tilmann

    2018-04-01

    Accumulated precipitation forecasts are of high socioeconomic importance for agriculturally dominated societies in northern tropical Africa. In this study, we analyze the performance of nine operational global ensemble prediction systems (EPSs) relative to climatology-based forecasts for 1- to 5-day accumulated precipitation, based on the monsoon seasons 2007-2014, for three regions within northern tropical Africa. To assess the full potential of raw ensemble forecasts across spatial scales, we apply state-of-the-art statistical postprocessing methods in the form of Bayesian Model Averaging (BMA) and Ensemble Model Output Statistics (EMOS), and verify against station and spatially aggregated, satellite-based gridded observations. Raw ensemble forecasts are uncalibrated, unreliable, and underperform relative to climatology, independently of region, accumulation time, monsoon season, and ensemble. Differences between raw ensemble and climatological forecasts are large, and partly stem from poor prediction of low precipitation amounts. BMA and EMOS postprocessed forecasts are calibrated, reliable, and improve strongly on the raw ensembles, but, somewhat disappointingly, typically do not outperform climatology. Most EPSs exhibit slight improvements over the period 2007-2014, but overall have little added value compared to climatology. We suspect that the parametrization of convection is a potential cause of the sobering lack of ensemble forecast skill in a region dominated by mesoscale convective systems.
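
    To make the EMOS idea concrete: in its simplest Gaussian form, the predictive distribution is N(a + b*m, c + d*s^2), where m and s^2 are the ensemble mean and variance, and the four coefficients are fit by minimizing the mean CRPS, which has a closed form for the normal distribution. The sketch below implements that Gaussian variant on synthetic data; for precipitation, the study's EMOS variants would use distributions better suited to a non-negative, skewed variable.

```python
# Gaussian EMOS sketch: predictive N(a + b*ensmean, c + d*ensvar), with
# coefficients found by minimizing the mean closed-form CRPS on training data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def crps_normal(mu, sigma, y):
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    def loss(theta):
        a, b, c, d = theta
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))  # keep variance positive
        return np.mean(crps_normal(a + b * ens_mean, sigma, obs))
    return minimize(loss, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

rng = np.random.default_rng(7)
truth = rng.gamma(2.0, 2.0, size=2000)
ens = truth[:, None] + rng.normal(1.0, 2.0, size=(2000, 20))  # biased members
a, b, c, d = fit_emos(ens.mean(axis=1), ens.var(axis=1), truth)
print(f"a={a:.2f} b={b:.2f} c={c:.2f} d={d:.2f}")  # a should absorb the +1 bias
```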

  11. Multiple electrokinetic actuators for feedback control of colloidal crystal size.

    PubMed

    Juárez, Jaime J; Mathai, Pramod P; Liddle, J Alexander; Bevan, Michael A

    2012-10-21

    We report a feedback control method to precisely target the number of colloidal particles in quasi-2D ensembles and their subsequent assembly into crystals in a quadrupole electrode. Our approach relies on tracking the number of particles within the quadrupole electrode, which is used in a real-time feedback control algorithm to dynamically actuate competing electrokinetic transport mechanisms. Particles are removed from the quadrupole using DC-field-mediated electrophoretic-electroosmotic transport, while high-frequency AC-field-mediated dielectrophoretic transport is used to concentrate and assemble colloidal crystals. Our results show successful control of the size of crystals containing 20 to 250 colloidal particles with less than 10% error. Assembled crystals are characterized by their radius of gyration, crystallinity, and number of edge particles, and demonstrate the expected size-dependent properties. Our findings demonstrate successful ensemble feedback control of the assembly of different-sized colloidal crystals using multiple actuators, which has broad implications for control over nano- and microscale assembly processes involving colloidal components.
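
    The control logic is essentially a switch between two competing actuators based on the measured count. The toy simulation below captures only that logic; the rates, noise, and single-step dynamics are invented stand-ins for the real electrokinetic transport.

```python
# Toy count-feedback loop: concentrate (AC mode) when below target, remove
# (DC mode) when above, with random arrival/departure noise.
import random

def control_particle_count(target, steps=300, seed=0):
    rng = random.Random(seed)
    count = 0
    for _ in range(steps):
        if count < target:
            count += rng.choice((0, 1, 1))   # AC field concentrates particles
        elif count > target:
            count -= rng.choice((0, 1))      # DC field ejects particles
        count += rng.choice((-1, 0, 0, 1))   # thermal/arrival noise
        count = max(count, 0)
    return count

final = control_particle_count(target=100)
print(f"final count: {final} (target 100, error {abs(final - 100) / 100:.1%})")
```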

  12. Achieving Greater Musical Independence in Ensembles through Cognitive Apprenticeship

    ERIC Educational Resources Information Center

    Weidner, Brian N.

    2018-01-01

    Musical independence is a common objective for large-ensemble classes, but traditional, teacher-centric instructional practices for these groups may discourage rather than promote students' critical thinking and decision making in music. Cognitive apprenticeship provides an instructional approach through which student musicians can develop skills…

  13. Ensemble learning with trees and rules: supervised, semi-supervised, unsupervised

    USDA-ARS's Scientific Manuscript database

    In this article, we propose several new approaches for post-processing a large ensemble of conjunctive rules for supervised and semi-supervised learning problems. We show with various examples that for high-dimensional regression problems the models constructed by post-processing the rules with ...

  14. A Comparison of Perturbed Initial Conditions and Multiphysics Ensembles in a Severe Weather Episode in Spain

    NASA Technical Reports Server (NTRS)

    Tapiador, Francisco; Tao, Wei-Kuo; Angelis, Carlos F.; Martinez, Miguel A.; Marcos, Cecilia; Rodriguez, Antonio; Hou, Arthur; Shi, Jainn-Jong

    2012-01-01

    Ensembles of numerical model forecasts are of interest to operational early warning forecasters as the spread of the ensemble provides an indication of the uncertainty of the alerts, and the mean value is deemed to outperform the forecasts of the individual models. This paper explores two ensembles on a severe weather episode in Spain, aiming to ascertain the relative usefulness of each one. One ensemble uses sensible choices of physical parameterizations (precipitation microphysics, land surface physics, and cumulus physics) while the other follows a perturbed initial conditions approach. The results show that, depending on the parameterizations, large differences can be expected in terms of storm location, spatial structure of the precipitation field, and rain intensity. It is also found that the spread of the perturbed initial conditions ensemble is smaller than the dispersion due to physical parameterizations. This confirms that in severe weather situations operational forecasts should address moist physics deficiencies to realize the full benefits of the ensemble approach, in addition to optimizing initial conditions. The results also provide insights into differences in simulations arising from ensembles of weather models using several combinations of different physical parameterizations.

  15. Motor-motor interactions in ensembles of muscle myosin: using theory to connect single molecule to ensemble measurements

    NASA Astrophysics Data System (ADS)

    Walcott, Sam

    2013-03-01

    Interactions between the proteins actin and myosin drive muscle contraction. Properties of a single myosin interacting with an actin filament are largely known, but a trillion myosins work together in muscle. We are interested in how single-molecule properties relate to ensemble function. Myosin's reaction rates depend on force, so ensemble models keep track of both molecular state and force on each molecule. These models make subtle predictions, e.g. that myosin, when part of an ensemble, moves actin faster than when isolated. This acceleration arises because forces between molecules speed reaction kinetics. Experiments support this prediction and allow parameter estimates. A model based on this analysis describes experiments from single molecule to ensemble. In vivo, actin is regulated by proteins that, when present, cause the binding of one myosin to speed the binding of its neighbors; binding becomes cooperative. Although such interactions preclude the mean field approximation, a set of linear ODEs describes these ensembles under simplified experimental conditions. In these experiments cooperativity is strong, with the binding of one molecule affecting ten neighbors on either side. We progress toward a description of myosin ensembles under physiological conditions.

  16. Long-range energy transfer in self-assembled quantum dot-DNA cascades

    NASA Astrophysics Data System (ADS)

    Goodman, Samuel M.; Siu, Albert; Singh, Vivek; Nagpal, Prashant

    2015-11-01

    The size-dependent energy bandgaps of semiconductor nanocrystals or quantum dots (QDs) can be utilized in converting broadband incident radiation efficiently into electric current by cascade energy transfer (ET) between layers of different sized quantum dots, followed by charge dissociation and transport in the bottom layer. Self-assembling such cascade structures with angstrom-scale spatial precision is important for building realistic devices, and DNA-based QD self-assembly can provide an important alternative. Here we show long-range Dexter energy transfer in QD-DNA self-assembled single constructs and ensemble devices. Using photoluminescence, scanning tunneling spectroscopy, current-sensing AFM measurements in single QD-DNA cascade constructs, and temperature-dependent ensemble devices using TiO2 nanotubes, we show that Dexter energy transfer, likely mediated by the exciton-shelves formed in these QD-DNA self-assembled structures, can be used for efficient transport of energy across QD-DNA thin films. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr04778a

  17. Field-theoretic simulations of block copolymer nanocomposites in a constant interfacial tension ensemble.

    PubMed

    Koski, Jason P; Riggleman, Robert A

    2017-04-28

    Block copolymers, due to their ability to self-assemble into periodic structures with long range order, are appealing candidates to control the ordering of functionalized nanoparticles where it is well-accepted that the spatial distribution of nanoparticles in a polymer matrix dictates the resulting material properties. The large parameter space associated with block copolymer nanocomposites makes theory and simulation tools appealing to guide experiments and effectively isolate parameters of interest. We demonstrate a method for performing field-theoretic simulations in a constant volume-constant interfacial tension ensemble (nVγT) that enables the determination of the equilibrium properties of block copolymer nanocomposites, including when the composites are placed under tensile or compressive loads. Our approach is compatible with the complex Langevin simulation framework, which allows us to go beyond the mean-field approximation. We validate our approach by comparing our nVγT approach with free energy calculations to determine the ideal domain spacing and modulus of a symmetric block copolymer melt. We analyze the effect of numerical and thermodynamic parameters on the efficiency of the nVγT ensemble and subsequently use our method to investigate the ideal domain spacing, modulus, and nanoparticle distribution of a lamellar forming block copolymer nanocomposite. We find that the nanoparticle distribution is directly linked to the resultant domain spacing and is dependent on polymer chain density, nanoparticle size, and nanoparticle chemistry. Furthermore, placing the system under tension or compression can qualitatively alter the nanoparticle distribution within the block copolymer.

  18. High-Temperature unfolding of a trp-Cage mini-protein: a molecular dynamics simulation study

    PubMed Central

    Seshasayee, Aswin Sai Narain

    2005-01-01

    Background Trp cage is a recently constructed fast-folding miniprotein. It consists of a short alpha helix, a 3(10) helix and a C-terminal poly-proline segment that packs against a Trp in the alpha helix. It is known to fold within 4 ns. Results High-temperature unfolding molecular dynamics simulations of the Trp cage miniprotein have been carried out in explicit water using the OPLS-AA force field incorporated in the program GROMACS. The radius of gyration (Rg) and root mean square deviation (RMSD) have been used as order parameters to follow the unfolding process. Distributions of Rg were used to identify ensembles. Conclusion Three ensembles could be identified. While the native-state ensemble shows an Rg distribution that is slightly skewed, the second ensemble, which is presumably the transition state ensemble (TSE), shows an excellent fit to a Gaussian. The denatured ensemble shows large fluctuations, but a Gaussian curve could still be fitted. This suggests that the unfolding process is two-state. Representative structures from each of these ensembles are presented here. PMID:15760474
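
    The order-parameter analysis sketched in this abstract (fitting Gaussians to Rg distributions and assessing the fit) is easy to reproduce in outline. The snippet below uses a synthetic Rg sample; in the study the values would come from the MD trajectory, and a KS test against parameters estimated from the same sample is only approximate.

      # Fit a Gaussian to a radius-of-gyration sample and assess the fit.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      rg = rng.normal(0.95, 0.08, 2000)      # synthetic Rg values (nm)

      mu, sigma = stats.norm.fit(rg)         # maximum-likelihood Gaussian fit
      ks_stat, p_value = stats.kstest(rg, "norm", args=(mu, sigma))
      print(f"mu={mu:.3f}, sigma={sigma:.3f}, KS statistic={ks_stat:.3f}, p={p_value:.3f}")
      print(f"skewness={stats.skew(rg):.3f}")  # non-zero values flag skewed ensembles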

  19. A new method for determining the optimal lagged ensemble

    PubMed Central

    DelSole, T.; Tippett, M. K.; Pegion, K.

    2017-01-01

    Abstract We propose a general methodology for determining the lagged ensemble that minimizes the mean square forecast error (MSE). The MSE of a lagged ensemble is shown to depend only on a quantity called the cross-lead error covariance matrix, which can be estimated from a short hindcast data set and parameterized in terms of analytic functions of time. The resulting parameterization allows the skill of forecasts to be evaluated for an arbitrary ensemble size and initialization frequency. Remarkably, the parameterization also can estimate the MSE of a burst ensemble simply by taking the limit of an infinitely small interval between initialization times. This methodology is applied to forecasts of the Madden-Julian Oscillation (MJO) from version 2 of the Climate Forecast System (CFSv2). For leads greater than a week, little improvement in MJO forecast skill is found when the lagged window exceeds 5 days or the initialization frequency exceeds 4 times per day. We find that if initialization is too infrequent, important structures of the lagged error covariance matrix are lost. Lastly, we demonstrate that the forecast error at leads ≥10 days can be reduced by optimally weighting the lagged ensemble members. The weights are shown to depend only on the cross-lead error covariance matrix. While the methodology developed here is applied to CFSv2, the technique can be easily adapted to other forecast systems. PMID:28580050
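
    The final result quoted above (weights that depend only on the cross-lead error covariance matrix) corresponds to a standard constrained minimization: minimizing w^T C w under a sum-to-one constraint on the weights gives w = C^{-1}1 / (1^T C^{-1}1). The sketch below applies this formula to an illustrative covariance matrix; the values are invented, not taken from CFSv2.

      # Optimal weighting of lagged ensemble members from the cross-lead
      # error covariance matrix C (toy values).
      import numpy as np

      C = np.array([[1.0, 0.6, 0.4],
                    [0.6, 1.2, 0.7],
                    [0.4, 0.7, 1.5]])       # cross-lead error covariances

      ones = np.ones(C.shape[0])
      w = np.linalg.solve(C, ones)
      w /= w.sum()                          # minimum-MSE weights, summing to one
      e = ones / len(ones)                  # equal weights, for comparison
      print("weights:", w)
      print("weighted MSE:", w @ C @ w, "equal-weight MSE:", e @ C @ e)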

  20. Using a Guided Machine Learning Ensemble Model to Predict Discharge Disposition following Meningioma Resection.

    PubMed

    Muhlestein, Whitney E; Akagi, Dallin S; Kallos, Justiss A; Morone, Peter J; Weaver, Kyle D; Thompson, Reid C; Chambless, Lola B

    2018-04-01

    Objective  Machine learning (ML) algorithms are powerful tools for predicting patient outcomes. This study pilots a novel approach to algorithm selection and model creation using prediction of discharge disposition following meningioma resection as a proof of concept. Materials and Methods  A diversity of ML algorithms were trained on a single-institution database of meningioma patients to predict discharge disposition. Algorithms were ranked by predictive power and top performers were combined to create an ensemble model. The final ensemble was internally validated on never-before-seen data to demonstrate generalizability. The predictive power of the ensemble was compared with a logistic regression. Further analyses were performed to identify how important variables impact the ensemble. Results  Our ensemble model predicted disposition significantly better than a logistic regression (area under the curve of 0.78 and 0.71, respectively, p  = 0.01). Tumor size, presentation at the emergency department, body mass index, convexity location, and preoperative motor deficit most strongly influence the model, though the independent impact of individual variables is nuanced. Conclusion  Using a novel ML technique, we built a guided ML ensemble model that predicts discharge destination following meningioma resection with greater predictive power than a logistic regression, and that provides greater clinical insight than a univariate analysis. These techniques can be extended to predict many other patient outcomes of interest.
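
    The workflow in this abstract (rank candidate algorithms, combine the top performers, compare to logistic regression via AUC) can be sketched generically with scikit-learn. The data below are synthetic stand-ins for the meningioma cohort, and the model roster is illustrative rather than the authors' exact selection.

      # Rank candidate learners by cross-validated AUC, combine the best in a
      # soft-voting ensemble, and compare against a logistic regression baseline.
      from sklearn.datasets import make_classification
      from sklearn.ensemble import (GradientBoostingClassifier,
                                    RandomForestClassifier, VotingClassifier)
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_score, train_test_split

      X, y = make_classification(n_samples=600, n_features=20, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      candidates = [("rf", RandomForestClassifier(random_state=0)),
                    ("gb", GradientBoostingClassifier(random_state=0))]
      for name, model in candidates:
          print(name, "CV AUC:", cross_val_score(model, X_tr, y_tr, scoring="roc_auc").mean())

      ensemble = VotingClassifier(candidates, voting="soft").fit(X_tr, y_tr)
      baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("ensemble AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
      print("logistic AUC:", roc_auc_score(y_te, baseline.predict_proba(X_te)[:, 1]))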

  1. Measurement of photoemission and secondary emission from laboratory dust grains

    NASA Technical Reports Server (NTRS)

    Hazelton, Robert C.; Yadlowsky, Edward J.; Settersten, Thomas B.; Spanjers, Gregory G.; Moschella, John J.

    1995-01-01

    The overall goal of this project is to experimentally determine the emission properties of dust grains in order to provide theorists and modelers with an accurate data base to use in codes that predict the charging of grains in various plasma environments encountered in the magnetospheres of the planets. In general these modelers use values which have been measured on planar, bulk samples of the materials in question. The large enhancements expected due to the small size of grains can have a dramatic impact upon the predictions and their ultimate utility. The first experimental measurement of energy-resolved profiles of the secondary electron emission coefficient, δ, of sub-micron diameter particles has been accomplished. Bismuth particles in the size range of 0.022 to 0.165 micrometers were generated in a moderate-pressure vacuum oven (average size is a function of oven temperature and pressure) and introduced into a high-vacuum chamber where they interacted with a high-energy electron beam (0.4 to 20 keV). Large enhancements in emission were observed, with a peak value, δ_max = 4.5, measured for the ensemble of particles with a mean size of 0.022 micrometers. This is in contrast to the published value, δ_max = 1.2, for bulk bismuth. The observed profiles are in general agreement with recent theoretical predictions made by Chow et al. at UCSD.

  2. Precise Control of Quantum Confinement in Cesium Lead Halide Perovskite Quantum Dots via Thermodynamic Equilibrium.

    PubMed

    Dong, Yitong; Qiao, Tian; Kim, Doyun; Parobek, David; Rossi, Daniel; Son, Dong Hee

    2018-05-09

    Cesium lead halide (CsPbX3) nanocrystals have emerged as a new family of materials that can outperform the existing semiconductor nanocrystals due to their superb optical and charge-transport properties. However, the lack of a robust method for producing quantum dots with controlled size and high ensemble uniformity has been one of the major obstacles in exploring the useful properties of excitons in zero-dimensional nanostructures of CsPbX3. Here, we report a new synthesis approach that enables the precise control of the size based on equilibrium rather than kinetics, producing CsPbX3 quantum dots nearly free of heterogeneous broadening in their exciton luminescence. The high level of size control and ensemble uniformity achieved here will open the door to harnessing the benefits of excitons in CsPbX3 quantum dots for photonic and energy-harvesting applications.

  3. Decorrelation correction for nanoparticle tracking analysis of dilute polydisperse suspensions in bulk flow

    NASA Astrophysics Data System (ADS)

    Hartman, John; Kirby, Brian

    2017-03-01

    Nanoparticle tracking analysis, a multiprobe single particle tracking technique, is a widely used method to quickly determine the concentration and size distribution of colloidal particle suspensions. Many popular tools remove non-Brownian components of particle motion by subtracting the ensemble-average displacement at each time step, which is termed dedrifting. Though critical for accurate size measurements, dedrifting is shown here to introduce significant biasing error and can fundamentally limit the dynamic range of particle size that can be measured for dilute heterogeneous suspensions such as biological extracellular vesicles. We report a more accurate estimate of particle mean-square displacement, which we call decorrelation analysis, that accounts for correlations between individual and ensemble particle motion, which are spuriously introduced by dedrifting. Particle tracking simulation and experimental results show that this approach more accurately determines particle diameters for low-concentration polydisperse suspensions when compared with standard dedrifting techniques.
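
    The dedrifting bias described above is easy to demonstrate: with a finite number of tracked particles, the ensemble-average displacement subtracted at each step contains a share of each particle's own Brownian motion, deflating the mean-square displacement by roughly a factor (1 - 1/N). The simulation below illustrates the effect; it is a toy version of the problem, not the paper's decorrelation estimator.

      # Show the low bias that dedrifting introduces in the MSD for small N.
      import numpy as np

      rng = np.random.default_rng(2)
      n_particles, n_steps, D, dt = 10, 200, 1.0, 0.05
      brownian = rng.normal(0.0, np.sqrt(2 * D * dt), (n_particles, n_steps))
      observed = brownian + 0.3 * dt          # add a common drift (bulk flow)

      dedrifted = observed - observed.mean(axis=0)   # standard dedrifting
      msd_true = (np.cumsum(brownian, axis=1) ** 2).mean(axis=0)
      msd_dedrifted = (np.cumsum(dedrifted, axis=1) ** 2).mean(axis=0)
      # the ratio approaches 1 - 1/n_particles = 0.9 instead of 1
      print("MSD ratio (dedrifted / true):", msd_dedrifted[-1] / msd_true[-1])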

  4. Ability of an ensemble of regional climate models to reproduce weather regimes over Europe-Atlantic during the period 1961-2000

    NASA Astrophysics Data System (ADS)

    Sanchez-Gomez, Emilia; Somot, S.; Déqué, M.

    2009-10-01

    One of the main concerns in regional climate modeling is the extent to which limited-area regional climate models (RCMs) reproduce the large-scale atmospheric conditions of their driving general circulation model (GCM). In this work we investigate the ability of a multi-model ensemble of regional climate simulations to reproduce the large-scale weather regimes of the driving conditions. The ensemble consists of a set of 13 RCMs on a European domain, driven at their lateral boundaries by the ERA40 reanalysis for the time period 1961-2000. Two sets of experiments have been completed, with horizontal resolutions of 50 and 25 km, respectively. The spectral nudging technique has been applied to one of the models within the ensemble. The RCMs reproduce the weather regime behavior reasonably well in terms of composite pattern, mean frequency of occurrence and persistence. The models also simulate well the long-term trends and the inter-annual variability of the frequency of occurrence. However, there is a non-negligible spread among the models, which is stronger in summer than in winter. This spread has two causes: (1) we are dealing with different models and (2) each RCM produces its own internal variability. As far as the day-to-day weather regime history is concerned, the ensemble shows large discrepancies. At the daily time scale, the model spread also has a seasonal dependence, being stronger in summer than in winter. Results also show that the spectral nudging technique improves the model performance in reproducing the large-scale circulation of the driving field. In addition, the impact of increasing the number of grid points has been addressed by comparing the 25 and 50 km experiments. We show that the horizontal resolution does not significantly affect the model performance for large-scale circulation.

  5. Conceptualizing Conceptual Teaching: Practical Strategies for Large Instrumental Ensembles

    ERIC Educational Resources Information Center

    Tan, Leonard

    2016-01-01

    Half a century ago, calls had already been made for instrumental ensemble directors to move beyond performance to include the teaching of musical concepts in the rehearsal hall. Relatively recent research, however, suggests that conceptual teaching remains relatively infrequent during rehearsals. Given the importance of teaching for long-term…

  6. A statistical analysis of three ensembles of crop model responses to temperature and CO2 concentration

    USDA-ARS's Scientific Manuscript database

    Ensembles of process-based crop models are now commonly used to simulate crop growth and development for climate scenarios of temperature and/or precipitation changes corresponding to different projections of atmospheric CO2 concentrations. This approach generates large datasets with thousands of de...

  7. Tiered Evaluation in Large Ensemble Settings.

    ERIC Educational Resources Information Center

    Scott, David

    1998-01-01

    Discusses the use of a tiered evaluation system (TES) that allows students to work at different levels, enables teachers to assess progress objectively, and presents students with appropriate challenges in the music ensembles. Focuses on how TES works and its advantages, considers the challenges and flexibility of TES, and provides samples. (CMK)

  8. Entanglement between two spatially separated atomic modes

    NASA Astrophysics Data System (ADS)

    Lange, Karsten; Peise, Jan; Lücke, Bernd; Kruse, Ilka; Vitagliano, Giuseppe; Apellaniz, Iagoba; Kleinmann, Matthias; Tóth, Géza; Klempt, Carsten

    2018-04-01

    Modern quantum technologies in the fields of quantum computing, quantum simulation, and quantum metrology require the creation and control of large ensembles of entangled particles. In ultracold ensembles of neutral atoms, nonclassical states have been generated with mutual entanglement among thousands of particles. The entanglement generation relies on the fundamental particle-exchange symmetry in ensembles of identical particles, a setting that lacks the standard notion of entanglement between clearly definable subsystems. Here, we present the generation of entanglement between two spatially separated clouds by splitting an ensemble of ultracold identical particles prepared in a twin Fock state. Because the clouds can be addressed individually, our experiments open a path to exploiting the available entangled states of indistinguishable particles for quantum information applications.

  9. Thermodynamic limitations on the temperature sensitivity of cell-membrane ion channels: Trouble with enthalpy uncertainty

    NASA Astrophysics Data System (ADS)

    Zheltikov, A. M.

    2018-06-01

    Energy exchange between a thermodynamic ensemble of heat- and cold-activated cell-membrane ion channels and the surrounding heat reservoir is shown to impose fundamental limitations on the performance of such channels as temperature-controlled gates for thermal cell activation. Analysis of unavoidable thermodynamic internal-energy fluctuations caused by energy exchange between the ion channels and the heat bath suggests that the resulting enthalpy uncertainty is too high for a robust ion-current gating by a single ion channel, implying that large ensembles of ion channels are needed for thermal cell activation. We argue, based on this thermodynamic analysis, that, had thermosensitive cell-membrane ion channels operated individually, rather than as large ensembles, robust thermal cell activation would have been impossible because of thermodynamic fluctuations.

  10. A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling.

    PubMed

    Hart, Emma; Sim, Kevin

    2016-01-01

    We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.

  11. On the Sensitivity of Atmospheric Ensembles to Cloud Microphysics in Long-Term Cloud-Resolving Model Simulations

    NASA Technical Reports Server (NTRS)

    Zeng, Xiping; Tao, Wei-Kuo; Lang, Stephen; Hou, Arthur Y.; Zhang, Minghua; Simpson, Joanne

    2008-01-01

    Month-long large-scale forcing data from two field campaigns are used to drive a cloud-resolving model (CRM) and produce ensemble simulations of clouds and precipitation. Observational data are then used to evaluate the model results. To improve the model results, a new parameterization of the Bergeron process is proposed that incorporates the number concentration of ice nuclei (IN). Numerical simulations reveal that atmospheric ensembles are sensitive to IN concentration and ice crystal multiplication. Two- (2D) and three-dimensional (3D) simulations are carried out to address the sensitivity of atmospheric ensembles to model dimensionality. It is found that the ensembles with high IN concentration are more sensitive to dimensionality than those with low IN concentration. Both the analytic solutions of linear dry models and the CRM output show that there are more convective cores with stronger updrafts in 3D simulations than in 2D, which explains the differing sensitivity of the ensembles to dimensionality at different IN concentrations.

  12. Weighted projected networks: mapping hypergraphs to networks.

    PubMed

    López, Eduardo

    2013-05-01

    Many natural, technological, and social systems incorporate multiway interactions, yet are characterized and measured on the basis of weighted pairwise interactions. In this article, I propose a family of models in which pairwise interactions originate from multiway interactions, by starting from ensembles of hypergraphs and applying projections that generate ensembles of weighted projected networks. I calculate analytically the statistical properties of weighted projected networks, and suggest ways these could be used beyond theoretical studies. Weighted projected networks typically exhibit weight disorder along links even for very simple generating hypergraph ensembles. Also, as the size of a hypergraph changes, a signature of multiway interaction emerges on the link weights of weighted projected networks that distinguishes them from fundamentally weighted pairwise networks. This signature could be used to search for hidden multiway interactions in weighted network data. I find the percolation threshold and size of the largest component for hypergraphs of arbitrary uniform rank, translate the results into projected networks, and show that the transition is second order. This general approach to network formation has the potential to shed new light on our understanding of weighted networks.
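
    A minimal version of the projection studied in this article maps every hyperedge onto a clique and accumulates link weights, so that pairwise weights record how many multiway interactions each pair of nodes shares. The hyperedges below are illustrative.

      # Project a hypergraph to a weighted pairwise network: the weight of a
      # link counts the hyperedges shared by the two endpoints.
      from collections import Counter
      from itertools import combinations

      hyperedges = [{"a", "b", "c"}, {"b", "c", "d"}, {"a", "d"}, {"b", "c"}]

      weights = Counter()
      for edge in hyperedges:
          for u, v in combinations(sorted(edge), 2):
              weights[(u, v)] += 1       # multiway membership leaves weight disorder

      for (u, v), w in sorted(weights.items()):
          print(f"{u}-{v}: weight {w}")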

  13. Role of Forcing Uncertainty and Background Model Error Characterization in Snow Data Assimilation

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.; Dong, Jiarui; Peters-Lidard, Christa D.; Mocko, David; Gomez, Breogan

    2017-01-01

    Accurate specification of the model error covariances in data assimilation systems is a challenging issue. Ensemble land data assimilation methods rely on stochastic perturbations of input forcing and model prognostic fields for developing representations of input model error covariances. This article examines the limitations of using a single forcing dataset for specifying forcing uncertainty inputs for assimilating snow depth retrievals. Using an idealized data assimilation experiment, the article demonstrates that the use of hybrid forcing input strategies (either through the use of an ensemble of forcing products or through the added use of the forcing climatology) provide a better characterization of the background model error, which leads to improved data assimilation results, especially during the snow accumulation and melt-time periods. The use of hybrid forcing ensembles is then employed for assimilating snow depth retrievals from the AMSR2 (Advanced Microwave Scanning Radiometer 2) instrument over two domains in the continental USA with different snow evolution characteristics. Over a region near the Great Lakes, where the snow evolution tends to be ephemeral, the use of hybrid forcing ensembles provides significant improvements relative to the use of a single forcing dataset. Over the Colorado headwaters characterized by large snow accumulation, the impact of using the forcing ensemble is less prominent and is largely limited to the snow transition time periods. The results of the article demonstrate that improving the background model error through the use of a forcing ensemble enables the assimilation system to better incorporate the observational information.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shiogama, Hideo; Imada, Yukiko; Mori, Masato

    Here, we describe two unprecedentedly large (100-member), long-term (61-year) ensembles based on MRI-AGCM3.2, which were driven by historical and non-warming climate forcing. These ensembles comprise the "Database for Policy Decision making for Future climate change (d4PDF)". We compare these ensembles to large ensembles based on another climate model, as well as to observed data, to investigate the influence of anthropogenic activities on historical changes in the numbers of record-breaking events, including: the annual coldest daily minimum temperature (TNn), the annual warmest daily maximum temperature (TXx) and the annual most intense daily precipitation event (Rx1day). These two climate model ensembles indicate that human activity has already had statistically significant impacts on the number of record-breaking extreme events worldwide, mainly over Northern Hemisphere land. Specifically, human activities have altered the likelihood that a wider area globally would suffer record-breaking TNn, TXx and Rx1day events than that observed over the 2001-2010 period by factors of at least 0.6, 5.4 and 1.3, respectively. However, we also find that the estimated spatial patterns and amplitudes of anthropogenic impacts on the probabilities of record-breaking events are sensitive to the climate model and/or natural-world boundary conditions used in the attribution studies.

  15. Finite-size analysis of the detectability limit of the stochastic block model

    NASA Astrophysics Data System (ADS)

    Young, Jean-Gabriel; Desrosiers, Patrick; Hébert-Dufresne, Laurent; Laurence, Edward; Dubé, Louis J.

    2017-06-01

    It has been shown in recent years that the stochastic block model (SBM) is sometimes undetectable in the sparse limit, i.e., that no algorithm can identify a partition correlated with the partition used to generate an instance if the instance is sparse enough and infinitely large. In this contribution, we treat the finite case explicitly, using arguments drawn from information theory and statistics. We give a necessary condition for finite-size detectability in the general SBM. We then distinguish the concept of average detectability from the concept of instance-by-instance detectability and give explicit formulas for both definitions. Using these formulas, we prove that there exist large equivalence classes of parameters, where widely different network ensembles are equally detectable with respect to our definitions of detectability. In an extensive case study, we investigate the finite-size detectability of a simplified variant of the SBM, which encompasses a number of important models as special cases. These models include the symmetric SBM, the planted coloring model, and more exotic SBMs not previously studied. We conclude with three appendices, where we study the interplay of noise and detectability, establish a connection between our information-theoretic approach and random matrix theory, and provide proofs of some of the more technical results.
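
    For orientation, the asymptotic (infinite-size) detectability condition this literature builds on, due to Decelle and co-workers for the symmetric SBM with q equal-sized groups, is |c_in - c_out| > q * sqrt(c), with c the mean degree; the finite-size analysis in the paper refines this picture. A quick check of that condition, with illustrative parameter values:

      # Kesten-Stigum detectability check for the symmetric SBM (asymptotic).
      import math

      def ks_detectable(c_in: float, c_out: float, q: int) -> bool:
          c_mean = (c_in + (q - 1) * c_out) / q   # mean degree of the symmetric SBM
          return abs(c_in - c_out) > q * math.sqrt(c_mean)

      print(ks_detectable(c_in=8.0, c_out=2.0, q=2))   # True: well above threshold
      print(ks_detectable(c_in=5.0, c_out=4.0, q=2))   # False: groups too similar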

  16. Can Atmospheric Reanalysis Data Sets Be Used to Reproduce Flooding Over Large Scales?

    NASA Astrophysics Data System (ADS)

    Andreadis, Konstantinos M.; Schumann, Guy J.-P.; Stampoulis, Dimitrios; Bates, Paul D.; Brakenridge, G. Robert; Kettner, Albert J.

    2017-10-01

    Floods are costly to global economies and can be exceptionally lethal. The ability to produce consistent flood hazard maps over large areas could provide a significant contribution to reducing such losses, as the lack of knowledge concerning flood risk is a major factor in the transformation of river floods into flood disasters. In order to accurately reproduce flooding in river channels and floodplains, high spatial resolution hydrodynamic models are needed. Despite being computationally expensive, recent advances have made their continental to global implementation feasible, although inputs for long-term simulations may require the use of reanalysis meteorological products, especially in data-poor regions. We employ a coupled hydrologic/hydrodynamic model cascade forced by the 20CRv2 reanalysis data set and evaluate its ability to reproduce flood inundation area and volume for Australia during the 1973-2012 period. Ensemble simulations using the reanalysis data were performed to account for uncertainty in the meteorology and compared with a validated benchmark simulation. Results show that the reanalysis ensemble captures the inundated areas and volumes relatively well, with correlations for the ensemble mean of 0.82 and 0.85 for area and volume, respectively, although the meteorological ensemble spread propagates into large uncertainty in the simulated flood characteristics.

  17. Ensemble Methods

    NASA Astrophysics Data System (ADS)

    Re, Matteo; Valentini, Giorgio

    2012-03-01

    Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision is rooted in our culture at least since the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensemble has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, most notably the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers in this area [14,62,85,149,173]. Several theories have been proposed to explain the characteristics and the successful application of ensembles to different application domains. For instance, Allwein, Schapire, and Singer interpreted the improved generalization capabilities of ensembles of learning machines in the framework of large margin classifiers [4,177], Kleinberg in the context of stochastic discrimination theory [112], and Breiman and Friedman in the light of the bias-variance analysis borrowed from classical statistics [21,70]. Empirical studies have shown that in both classification and regression problems ensembles improve on single learning machines, and moreover large experimental studies have compared the effectiveness of different ensemble methods on benchmark data sets [10,11,49,188]. The interest in this research area is also motivated by the availability of very fast computers and networks of workstations at a relatively low cost that allow the implementation and experimentation of complex ensemble methods using off-the-shelf computer platforms. However, as explained in Section 26.2, there are deeper reasons to use ensembles of learning machines, motivated by the intrinsic characteristics of the ensemble methods. The main aim of this chapter is to introduce ensemble methods and to provide an overview and a bibliography of the main areas of research, without pretending to be exhaustive or to explain the detailed characteristics of each ensemble method. The paper is organized as follows. In the next section, the main theoretical and practical reasons for combining multiple learners are introduced. Section 26.3 describes the main taxonomies of ensemble methods proposed in the literature. In Sections 26.4 and 26.5, we present an overview of the main supervised ensemble methods reported in the literature, adopting a simple taxonomy originally proposed in Ref. [201]. Applications of ensemble methods are only marginally considered, but a specific section on some relevant applications of ensemble methods in astronomy and astrophysics has been added (Section 26.6). The conclusion (Section 26.7) ends this paper and lists some issues not covered in this work.
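
    The majority-vote ensemble used as the running example above takes only a few lines of code; the base learners here are arbitrary callables invented for the sketch.

      # Majority-vote ensemble: each base learner casts a vote and the class
      # with the most votes is the ensemble prediction.
      from collections import Counter

      def majority_vote(classifiers, x):
          votes = [clf(x) for clf in classifiers]
          return Counter(votes).most_common(1)[0][0]

      learners = [lambda x: "spam", lambda x: "ham", lambda x: "spam"]
      print(majority_vote(learners, "any input"))   # -> "spam"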

  18. Simulation studies of the fidelity of biomolecular structure ensemble recreation

    NASA Astrophysics Data System (ADS)

    Lätzer, Joachim; Eastwood, Michael P.; Wolynes, Peter G.

    2006-12-01

    We examine the ability of Bayesian methods to recreate structural ensembles for partially folded molecules from averaged data. Specifically, we test the ability of various algorithms to recreate different transition state ensembles for folding proteins using a multiple-replica simulation algorithm, with input from "gold standard" reference ensembles that were first generated with a Gō-like Hamiltonian having nonpairwise additive terms. A set of low-resolution data, which function as the "experimental" ϕ values, was first constructed from this reference ensemble. The resulting ϕ values were then treated as one would treat laboratory experimental data and were used as input to the replica reconstruction algorithm. The resulting ensembles of structures obtained by the replica algorithm were compared to the gold standard reference ensemble from which those "data" were, in fact, obtained. It is found that for a unimodal transition state ensemble with a low barrier, the multiple-replica algorithm does recreate the reference ensemble fairly successfully when no experimental error is assumed. The Kolmogorov-Smirnov test as well as principal component analysis show that the overlap of the recovered and reference ensembles is significantly enhanced when multiple replicas are used. Reduction of the multiple-replica ensembles by clustering successfully yields subensembles with close similarity to the reference ensembles. On the other hand, for a high-barrier transition state with two distinct transition state ensembles, the single-replica algorithm samples only a few structures of one of the reference ensemble basins. This is due to the fact that the ϕ values are intrinsically ensemble-averaged quantities. The replica algorithm with multiple copies does sample both reference ensemble basins. In contrast to the single-replica case, the multiple replicas are constrained to reproduce the average ϕ values but allow fluctuations in ϕ for each individual copy. These fluctuations facilitate a more faithful sampling of the reference ensemble basins. Finally, we test how robustly the reconstruction algorithm can function by introducing errors in ϕ comparable in magnitude to those suggested by some authors. In this circumstance we observe that the chances of ensemble recovery with the replica algorithm are poor using a single replica, but are improved when multiple copies are used. A multimodal transition state ensemble, however, turns out to be more sensitive to large errors in ϕ (if appropriately gauged) and attempts at successful recreation of the reference ensemble with simple replica algorithms can fall short.

  19. The Contribution of Object Shape and Surface Properties to Object Ensemble Representation in Anterior-medial Ventral Visual Cortex.

    PubMed

    Cant, Jonathan S; Xu, Yaoda

    2017-02-01

    Our visual system can extract summary statistics from large collections of objects without forming detailed representations of the individual objects in the ensemble. In a region in ventral visual cortex encompassing the collateral sulcus and the parahippocampal gyrus and overlapping extensively with the scene-selective parahippocampal place area (PPA), we have previously reported fMRI adaptation to object ensembles when ensemble statistics repeated, even when local image features differed across images (e.g., two different images of the same strawberry pile). We additionally showed that this ensemble representation is similar to (but still distinct from) how visual texture patterns are processed in this region and is not explained by appealing to differences in the color of the elements that make up the ensemble. To further explore the nature of ensemble representation in this brain region, here we used PPA as our ROI and investigated in detail how the shape and surface properties (i.e., both texture and color) of the individual objects constituting an ensemble affect the ensemble representation in anterior-medial ventral visual cortex. We photographed object ensembles of stone beads that varied in shape and surface properties. A given ensemble always contained beads of the same shape and surface properties (e.g., an ensemble of star-shaped rose quartz beads). A change to the shape and/or surface properties of all the beads in an ensemble resulted in a significant release from adaptation in PPA compared with conditions in which no ensemble feature changed. In contrast, in the object-sensitive lateral occipital area (LO), we only observed a significant release from adaptation when the shape of the ensemble elements varied, and found no significant results in additional scene-sensitive regions, namely, the retrosplenial complex and occipital place area. Together, these results demonstrate that the shape and surface properties of the individual objects comprising an ensemble both contribute significantly to object ensemble representation in anterior-medial ventral visual cortex and further demonstrate a functional dissociation between object- (LO) and scene-selective (PPA) visual cortical regions and within the broader scene-processing network itself.

  20. Consistency of climate change projections from multiple global and regional model intercomparison projects

    NASA Astrophysics Data System (ADS)

    Fernández, J.; Frías, M. D.; Cabos, W. D.; Cofiño, A. S.; Domínguez, M.; Fita, L.; Gaertner, M. A.; García-Díez, M.; Gutiérrez, J. M.; Jiménez-Guerrero, P.; Liguori, G.; Montávez, J. P.; Romera, R.; Sánchez, E.

    2018-03-01

    We present an unprecedented ensemble of 196 future climate projections arising from different global and regional model intercomparison projects (MIPs): CMIP3, CMIP5, ENSEMBLES, ESCENA, EURO- and Med-CORDEX. This multi-MIP ensemble includes all regional climate model (RCM) projections publicly available to date, along with their driving global climate models (GCMs). We illustrate consistent and conflicting messages using continental Spain and the Balearic Islands as the target region. The study considers near-future (2021-2050) changes and their dependence on several uncertainty sources sampled in the multi-MIP ensemble: GCM, future scenario, internal variability, RCM, and spatial resolution. This initial work focuses on mean seasonal precipitation and temperature changes. The results show that the potential GCM-RCM combinations have been explored very unevenly, with favoured GCMs and large ensembles of a few RCMs that do not follow any coordinated ensemble design. Therefore, the grand ensemble is weighted towards a few models. The selection of a balanced, credible sub-ensemble is challenged in this study by illustrating several conflicting responses between the RCM and its driving GCM and among different RCMs. Sub-ensembles from different initiatives are dominated by different uncertainty sources, with the driving GCM being the main contributor to uncertainty in the grand ensemble. For this analysis of near-future changes, the emission scenario does not contribute strongly to the uncertainty. Despite the extra computational effort, for mean seasonal changes, the increase in resolution does not lead to important changes.

  1. As-Grown Gallium Nitride Nanowire Electromechanical Resonators

    NASA Astrophysics Data System (ADS)

    Montague, Joshua R.

    Technological development in recent years has led to a ubiquity of micro- and nano-scale electromechanical devices. Sensors for monitoring temperature, pressure, mass, etc., are now found in nearly all electronic devices at both the industrial and consumer levels. As has been true for integrated circuit electronics, these electromechanical devices have continued to be scaled down in size. For many nanometer-scale structures with large surface-to-volume ratio, dissipation (energy loss) becomes prohibitively large, causing a decreasing sensitivity with decreasing sensor size. In this work, gallium nitride (GaN) nanowires are investigated as singly-clamped (cantilever) mechanical resonators with typical mechanical quality factors, Q (the ratio of resonance frequency to the full width at half maximum power of the resonance peak), at or above 30,000 and resonance frequencies near 1 MHz. These Q values, measured in vacuum at room temperature, indicate very low levels of dissipation; they are essentially the same as those for bulk quartz crystal resonators that form the basis of simple clocks and mass sensors. The GaN nanowires have lengths of approximately 15 micrometers and diameters of hundreds of nanometers. As-grown GaN nanowire Q values are larger than those of other similarly sized, bottom-up cantilever resonators, and this property makes them very attractive for use as resonant sensors. We demonstrate the capability of detecting sub-monolayer levels of atomic-layer-deposited (ALD) films, and the robust nature of the GaN nanowire structure that allows for their 'reuse' after removal of such layers. In addition to electron-microscope-based measurement techniques, we demonstrate the successful capacitive detection of a single nanowire using microwave homodyne reflectometry. This technique is then extended to allow for simultaneous measurements of large ensembles of GaN nanowires on a single sample, providing statistical information about the distribution of individual nanowire properties. We observe nanowire-to-nanowire variations in the temperature dependence of GaN nanowire resonance frequency and in the observed mechanical dissipation. We also use this ensemble measurement technique to demonstrate unique, very low-loss resonance behavior at low temperatures. The low dissipation (and corresponding large Q values) observed in as-grown GaN nanowires also provides a unique opportunity for studying fundamental energy-loss mechanisms in nano-scale objects. With estimated mass sensitivities at the level of zeptograms (10^-21 g) in a one-second averaging time, GaN nanowires may be a significant addition to the field of resonant sensors and worthy of future research and device integration.

  2. North Atlantic winter eddy-driven jet and atmospheric blocking variability in the Community Earth System Model version 1 Large Ensemble simulations

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Oh; Camacho, Alicia; Martinez, Carlos; Seo, Hyodae

    2018-01-01

    The atmospheric jet and blocking distributions, especially in the North Atlantic sector, have been challenging features for a climate model to realistically reproduce. This study examines climatological distributions of winter (December-February) daily jet latitude and blocking in the North Atlantic from the 40-member Community Earth System Model version 1 Large Ensemble (CESM1LE) simulations. This analysis aims at examining whether a broad range of internal climate variability encompassed by a large ensemble of simulations results in an improved representation of the jet latitude distributions and blocking days in CESM1LE. In the historical runs (1951-2005), the daily zonal wind at 850 hPa exhibits three distinct preferred latitudes for the eddy-driven jet position as seen in the reanalysis datasets, which represents a significant improvement from the previous version of the same model. However, the meridional separations between the three jet latitudes are much smaller than those in the reanalyses. In particular, the jet rarely migrates to the observed southernmost position around 37°N. This leads to the bias in blocking frequency that is too low over Greenland and too high over the Azores. These features are shown to be remarkably stable across the 40 ensemble members with negligible member-to-member spread. This result implies the range of internal variability of winter jet latitude and blocking frequency within the 55-year segment from each ensemble member is comparable to that represented by the full large ensemble. Comparison with 2046-2100 from the RCP8.5 future projection runs suggests that the daily jet position is projected to maintain the same three preferred latitudes, with a slightly higher frequency of occurrence over the central latitude around 50°N, instead of shifting poleward in the future as documented in some previous studies. In addition, the daily jet speed is projected not to change significantly between 1951-2005 and 2046-2100. On the other hand, the climatological mean jet is projected to become slightly more elongated and stronger on its southern flank, and the blocking frequency over the Azores is projected to decrease.

  3. Model Independence in Downscaled Climate Projections: a Case Study in the Southeast United States

    NASA Astrophysics Data System (ADS)

    Gray, G. M. E.; Boyles, R.

    2016-12-01

    Downscaled climate projections are used to deduce how the climate will change in future decades at local and regional scales. It is important to use multiple models to characterize part of the future uncertainty, given its impact on adaptation decision making. This is traditionally done through an equally weighted ensemble of multiple GCMs downscaled using one technique. Newer practices include several downscaling techniques in an effort to increase the ensemble's representation of future uncertainty. However, this practice may add statistically dependent models to the ensemble. Previous research has shown a dependence problem in multiple generations of the GCM ensemble, but such dependence has not been shown in the downscaled ensemble. In this case study, seven downscaled climate projection datasets on the daily time scale are considered: CLAREnCE10, SERAP, BCCA (CMIP5 and CMIP3 versions), Hostetler, CCR, and MACA-LIVNEH. These data represent 83 ensemble members, 44 GCMs, and two generations of GCMs. Baseline periods are compared against the University of Idaho's METDATA gridded observation dataset. Hierarchical agglomerative clustering is applied to the correlated errors to determine dependent clusters. Redundant GCMs across different downscaling techniques show the most dependence, while smaller dependence signals are detected within downscaling datasets and across generations of GCMs. These results indicate that using additional downscaled projections to increase the ensemble size must be done with care to avoid redundant GCMs, and that the process of downscaling itself may increase the dependence of the downscaled GCMs. Climate model generations do not appear dissimilar enough to be treated as two separate statistical populations for ensemble building at the local and regional scales.

  4. Real-time demonstration and evaluation of over-the-loop short to medium-range ensemble streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Wood, A. W.; Clark, E.; Newman, A. J.; Nijssen, B.; Clark, M. P.; Gangopadhyay, S.; Arnold, J. R.

    2015-12-01

    The US National Weather Service River Forecast Centers are beginning to operationalize short-range to medium-range ensemble predictions that have been in development for several years. This practice contrasts with the traditional single-value forecast practice at these lead times not only because the ensemble forecasts offer a basis for quantifying forecast uncertainty, but also because the use of ensembles requires a greater degree of automation in the forecast workflow than is currently used. For instance, individual ensemble member forcings cannot (practically) be manually adjusted, a step not uncommon in the current single-value paradigm, thus the forecaster is required to adopt a more 'over-the-loop' role than before. The relative lack of experience among operational forecasters and forecast users (e.g., water managers) in the US with over-the-loop approaches motivates the creation of a real-time demonstration and evaluation platform for exploring the potential of over-the-loop workflows to produce usable ensemble short-to-medium-range forecasts, as well as long-range predictions. We describe the development and early results of such an effort by a collaboration between NCAR and two water agencies, the US Army Corps of Engineers and the US Bureau of Reclamation. Focusing on small to medium-sized headwater basins around the US, and using multi-decade series of ensemble streamflow hindcasts, we assess the skill of daily-updating, over-the-loop forecasts driven by a set of ensemble atmospheric outputs from the NCEP GEFS for lead times of 1-15 days.

  5. Estimating the soil moisture profile by assimilating near-surface observations with the ensemble Kalman filter (EnKF)

    NASA Astrophysics Data System (ADS)

    Zhang, Shuwen; Li, Haorui; Zhang, Weidong; Qiu, Chongjian; Li, Xin

    2005-11-01

    The paper investigates the ability to retrieve the true soil moisture profile by assimilating near-surface soil moisture into a soil moisture model with an ensemble Kalman filter (EnKF) assimilation scheme, including the effect of ensemble size, update interval and nonlinearities on the profile retrieval, the required time for full retrieval of the soil moisture profiles, and the possible influence of the depth of the soil moisture observation. These questions are addressed by a desktop study using synthetic data. The "true" soil moisture profiles are generated from the soil moisture model under the boundary condition of 0.5 cm d⁻¹ evaporation. To test the assimilation schemes, the model is initialized with a poor initial guess of the soil moisture profile, and different ensemble sizes are tested, showing that an ensemble of 40 members is enough to represent the covariance of the model forecasts. The results are also compared with those from the direct-insertion assimilation scheme, showing that the EnKF is superior to direct insertion: for hourly observations, retrieval of the soil moisture profile is achieved in 16 h, as compared to 12 days or more. For daily observations, the true soil moisture profile is achieved in about 15 days with the EnKF, but it is impossible to approximate the true moisture within 18 days by using direct insertion. It is also found that observation depth does not have a significant effect on profile retrieval time for the EnKF. The nonlinearities have some negative influence on the optimal estimates of the soil moisture profile, but not a serious one.
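
    The analysis step of a stochastic EnKF of the kind described above can be written compactly: the ensemble supplies the background covariance, a surface-layer observation operator picks out the observed state element, and perturbed observations update every member. Sizes and values below are synthetic, chosen only to echo the 40-member setup.

      # One stochastic EnKF analysis step: assimilate a near-surface soil
      # moisture observation into an ensemble of soil moisture profiles.
      import numpy as np

      rng = np.random.default_rng(3)
      n_layers, n_members = 10, 40
      ens = 0.25 + 0.05 * rng.standard_normal((n_layers, n_members))  # prior ensemble

      H = np.zeros((1, n_layers)); H[0, 0] = 1.0   # observe the surface layer only
      obs, obs_err = np.array([0.30]), 0.02

      X = ens - ens.mean(axis=1, keepdims=True)
      P = X @ X.T / (n_members - 1)                # ensemble background covariance
      S = H @ P @ H.T + obs_err ** 2               # innovation covariance (1x1 here)
      K = P @ H.T / S                              # Kalman gain (n_layers x 1)

      perturbed = obs[:, None] + obs_err * rng.standard_normal((1, n_members))
      analysis = ens + K @ (perturbed - H @ ens)   # update every ensemble member
      print("surface mean:", analysis[0].mean(), "bottom-layer mean:", analysis[-1].mean())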

  6. Impact of natural variability on the perception of climate change for the upcoming decades: Analysis of the CanESM2-LE and CESM-LE large ensembles

    NASA Astrophysics Data System (ADS)

    Rondeau-Genesse, G.; Braun, M.; Chaumont, D.

    2017-12-01

    The pace of climate change can have a direct impact on the efforts required to adapt. However, on relatively short time scales, this pace can be masked by natural variability (NV). In some cases, this variability might cause climate change, for a few decades, to exceed what would be expected from greenhouse gas (GHG) emissions alone or, on the contrary, it might cause slowdowns or even hiatuses. This phenomenon is difficult to explore using ensembles such as CMIP5, which are composed of multiple climate models and thus combine both NV and inter-model differences. This study analyses CanESM2-LE and CESM-LE, two state-of-the-art large ensembles (LEs) composed of multiple realizations from a single climate model and a single GHG emission scenario. We explore the relationship between NV and climate change over the next few decades in Canada and the United States. Temperature indices, namely the mean annual temperature and the 3-day maximum and minimum temperatures, are assessed. Results indicate that under RCP8.5, temperatures within most of the individual large ensemble members will increase in a roughly linear manner between 2021 and 2060. Nevertheless, in some regions such as parts of Canada and Alaska, there is a 20 to 35% probability that the temperature increase will slow down between 2021 and 2040. Such a slowdown in warming would provide some leeway for adaptation projects, but this phenomenon is caused by NV alone and, as such, is only temporary. Indeed, members of the large ensembles in which a slowdown of warming is found during the 2021-2040 period are two to five times more likely to experience a period of very fast warming in the following decades. The opposite scenario, where the changes expected by 2050 occur early because of NV, remains fairly uncommon for the mean annual temperature. For the extreme temperature indices, however, this early warming still occurs in 5 to 20% of the large ensemble members. As such, while our results indicate that the dominant pattern in Canada and the United States is fairly linear warming, the chance of other patterns is non-negligible for the upcoming decades. This reinforces the need for constant, uninterrupted efforts towards climate change adaptation.

  7. Laser ektacytometry and evaluation of statistical characteristics of inhomogeneous ensembles of red blood cells

    NASA Astrophysics Data System (ADS)

    Nikitin, S. Yu.; Priezzhev, A. V.; Lugovtsov, A. E.; Ustinov, V. D.; Razgulin, A. V.

    2014-10-01

    The paper is devoted to the development of the laser ektacytometry technique for evaluating the statistical characteristics of inhomogeneous ensembles of red blood cells (RBCs). We have theoretically analyzed laser beam scattering by inhomogeneous ensembles of elliptical discs, which model red blood cells in the ektacytometer. The analysis shows that laser ektacytometry allows for quantitative evaluation of such population characteristics of RBCs as the mean cell shape, the variance of cell deformability, and the asymmetry of the cell distribution in deformability. Moreover, we show that the deformability distribution itself can be retrieved by solving a specific Fredholm integral equation of the first kind. At this stage we do not take into account the scatter in RBC sizes.
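
    The retrieval step described above amounts to a first-kind Fredholm inversion, which is ill-posed and typically stabilized by regularization. The sketch below discretizes g = K f and recovers f with Tikhonov regularization; the Gaussian kernel is a stand-in, not the actual ektacytometry kernel.

```python
import numpy as np

# Discretize a Fredholm equation of the first kind, g = K f, and recover f
# with Tikhonov regularization: f = argmin ||K f - g||^2 + lam ||f||^2.
s = np.linspace(0.0, 1.0, 80)          # deformability variable
x = np.linspace(0.0, 1.0, 80)          # measurement variable
ds = s[1] - s[0]
K = np.exp(-((x[:, None] - s[None, :]) ** 2) / 0.005) * ds   # placeholder kernel

# A bimodal "true" deformability distribution and its noisy measurement.
f_true = np.exp(-((s - 0.4) ** 2) / 0.01) + 0.5 * np.exp(-((s - 0.7) ** 2) / 0.02)
g = K @ f_true + 1e-3 * np.random.default_rng(2).standard_normal(x.size)

lam = 1e-4                              # regularization strength (tuned by hand)
f_rec = np.linalg.solve(K.T @ K + lam * np.eye(s.size), K.T @ g)
print("relative error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))
```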

  8. The Fukushima-137Cs deposition case study: properties of the multi-model ensemble.

    PubMed

    Solazzo, E; Galmarini, S

    2015-01-01

    In this paper we analyse the properties of an eighteen-member ensemble generated by the combination of five atmospheric dispersion modelling systems and six meteorological data sets. The models have been applied to the total deposition of ¹³⁷Cs following the nuclear accident at the Fukushima power plant in March 2011. The analysis is carried out with the aim of determining whether the ensemble is reliable and sufficiently diverse, and whether its accuracy and precision can be improved. Although ensemble practice is becoming increasingly popular in many geophysical applications, good-practice guidelines are missing as to how models should be combined for the ensembles to offer an improvement over single-model realisations. We show that the models in the ensemble share large portions of bias and variance, and we use several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble mean, with the advantage of being poorly correlated, allowing computational resources to be saved and noise to be reduced (thus improving accuracy). We further propose and discuss two methods for selecting subsets of skilful and diverse members, and show that, for the present analysis, their mean outperforms the full ensemble mean in terms of both accuracy (error) and precision (variance).
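
    One way to implement the kind of subset selection discussed above is a greedy search that trades individual skill against redundancy with already-selected members. The scoring rule below (RMSE plus a mean-correlation penalty) is an illustrative choice, not the authors' method.

```python
import numpy as np

def select_subset(preds, obs, k, alpha=0.5):
    """Greedily pick k members, trading off skill (RMSE vs obs) against
    redundancy (mean |correlation| with members already chosen).
    preds : (n_members, n_points); obs : (n_points,)
    alpha weights the redundancy penalty; both choices are illustrative."""
    rmse = np.sqrt(((preds - obs) ** 2).mean(axis=1))
    chosen = [int(np.argmin(rmse))]                 # start from the best member
    while len(chosen) < k:
        best, best_cost = None, np.inf
        for i in range(len(preds)):
            if i in chosen:
                continue
            corr = np.mean([abs(np.corrcoef(preds[i], preds[j])[0, 1])
                            for j in chosen])
            cost = rmse[i] + alpha * corr           # skill + diversity criterion
            if cost < best_cost:
                best, best_cost = i, cost
        chosen.append(best)
    return chosen

rng = np.random.default_rng(3)
obs = rng.standard_normal(200)
preds = obs + 0.5 * rng.standard_normal((18, 200))  # toy 18-member ensemble
print(select_subset(preds, obs, k=4))
```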

  9. Improving database enrichment through ensemble docking

    NASA Astrophysics Data System (ADS)

    Rao, Shashidhar; Sanschagrin, Paul C.; Greenwood, Jeremy R.; Repasky, Matthew P.; Sherman, Woody; Farid, Ramy

    2008-09-01

    While it may seem intuitive that using an ensemble of multiple conformations of a receptor in structure-based virtual screening experiments would necessarily yield improved enrichment of actives relative to using just a single receptor, it turns out that at least in the p38 MAP kinase model system studied here, a very large majority of all possible ensembles do not yield improved enrichment of actives. However, there are combinations of receptor structures that do lead to improved enrichment results. We present here a method to select the ensembles that produce the best enrichments that does not rely on knowledge of active compounds or sophisticated analyses of the 3D receptor structures. In the system studied here, the small fraction of ensembles of up to 3 receptors that do yield good enrichments of actives were identified by selecting ensembles that have the best mean GlideScore for the top 1% of the docked ligands in a database screen of actives and drug-like "decoy" ligands. Ensembles of two receptors identified using this mean GlideScore metric generally outperform single receptors, while ensembles of three receptors identified using this metric consistently give optimal enrichment factors in which, for example, 40% of the known actives outrank all the other ligands in the database.
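
    A sketch of the selection metric as described: every ligand takes its best (lowest) score across the ensemble members, and candidate ensembles are ranked by the mean score of the top 1% of the docked database. The score matrix below is a random stand-in for GlideScore output, not real docking data.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)

# Stand-in docking scores: rows = 8 receptor conformations, cols = 5000
# database ligands; lower (more negative) is better, as with GlideScore.
scores = -8 + 2 * rng.standard_normal((8, 5000))

def ensemble_metric(member_idx, scores, top_frac=0.01):
    """Mean docking score over the top `top_frac` of the database, with each
    ligand taking its best score across the ensemble members."""
    best_per_ligand = scores[list(member_idx)].min(axis=0)
    n_top = max(1, int(top_frac * best_per_ligand.size))
    return np.sort(best_per_ligand)[:n_top].mean()

# Rank all 3-receptor ensembles by the metric (most negative = best).
ranked = sorted(combinations(range(8), 3),
                key=lambda idx: ensemble_metric(idx, scores))
print("best 3-member ensembles:", ranked[:3])
```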

  10. Optimization of Coronal Mass Ejection Ensemble Forecasting Using WSA-ENLIL with Coned Model

    DTIC Science & Technology

    2013-03-01

    The ensemble forecasting system outperformed previous versions by a large margin: the mean absolute forecast error of the median ensemble results was improved by over 43% relative to the original Coned model.

  11. Peer-Teaching in the Secondary Music Ensemble

    ERIC Educational Resources Information Center

    Johnson, Erik

    2015-01-01

    Peer-teaching is an instructional technique that has been used by teachers world-wide to successfully engage, exercise and deepen student learning. Yet, in some instances, teachers find the application of peer-teaching in large music ensembles at the secondary level to be daunting. This article is meant to be a practical resource for secondary…

  12. Memory for Multiple Visual Ensembles in Infancy

    ERIC Educational Resources Information Center

    Zosh, Jennifer M.; Halberda, Justin; Feigenson, Lisa

    2011-01-01

    The number of individual items that can be maintained in working memory is limited. One solution to this problem is to store representations of ensembles that contain summary information about large numbers of items (e.g., the approximate number or cumulative area of a group of many items). Here we explored the developmental origins of ensemble…

  13. Phase-selective entrainment of nonlinear oscillator ensembles

    DOE PAGES

    Zlotnik, Anatoly V.; Nagao, Raphael; Kiss, Istvan Z.; ...

    2016-03-18

    The ability to organize and finely manipulate the hierarchy and timing of dynamic processes is important for understanding and influencing brain functions, sleep and metabolic cycles, and many other natural phenomena. However, establishing spatiotemporal structures in biological oscillator ensembles is a challenging task that requires controlling large collections of complex nonlinear dynamical units. In this report, we present a method to design entrainment signals that create stable phase patterns in ensembles of heterogeneous nonlinear oscillators without using state feedback information. We demonstrate the approach using experiments with electrochemical reactions on multielectrode arrays, in which we selectively assign ensemble subgroups into spatiotemporal patterns with multiple phase clusters. The experimentally confirmed mechanism elucidates the connection between the phases and natural frequencies of a collection of dynamical elements, the spatial and temporal information that is encoded within this ensemble, and how external signals can be used to retrieve this information.

  14. Phase-selective entrainment of nonlinear oscillator ensembles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zlotnik, Anatoly V.; Nagao, Raphael; Kiss, Istvan Z.

    The ability to organize and finely manipulate the hierarchy and timing of dynamic processes is important for understanding and influencing brain functions, sleep and metabolic cycles, and many other natural phenomena. However, establishing spatiotemporal structures in biological oscillator ensembles is a challenging task that requires controlling large collections of complex nonlinear dynamical units. In this report, we present a method to design entrainment signals that create stable phase patterns in ensembles of heterogeneous nonlinear oscillators without using state feedback information. We demonstrate the approach using experiments with electrochemical reactions on multielectrode arrays, in which we selectively assign ensemble subgroups into spatiotemporal patterns with multiple phase clusters. The experimentally confirmed mechanism elucidates the connection between the phases and natural frequencies of a collection of dynamical elements, the spatial and temporal information that is encoded within this ensemble, and how external signals can be used to retrieve this information.

  15. AUC-Maximizing Ensembles through Metalearning.

    PubMed

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
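
    A minimal sketch of AUC maximization as a nonlinear optimization over metalearner weights, assuming cross-validated base-learner predictions are already in hand. The softmax parametrization and the derivative-free optimizer are illustrative choices, not the Super Learner implementation.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)

# Stand-ins for cross-validated predictions of 4 base learners on 500 samples;
# learners with smaller noise are more informative.
y = rng.integers(0, 2, 500)
Z = np.column_stack([y + s * rng.standard_normal(500) for s in (0.8, 1.0, 1.2, 2.0)])

def neg_auc(theta):
    w = np.exp(theta) / np.exp(theta).sum()     # softmax -> convex weights
    return -roc_auc_score(y, Z @ w)

# AUC is non-smooth in the weights, so use a derivative-free optimizer.
res = minimize(neg_auc, np.zeros(Z.shape[1]), method="Nelder-Mead")
w = np.exp(res.x) / np.exp(res.x).sum()
print("weights:", np.round(w, 3), "ensemble AUC:", -res.fun)
```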

  16. AUC-Maximizing Ensembles through Metalearning

    PubMed Central

    LeDell, Erin; van der Laan, Mark J.; Petersen, Maya

    2016-01-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721

  17. Phase-selective entrainment of nonlinear oscillator ensembles

    NASA Astrophysics Data System (ADS)

    Zlotnik, Anatoly; Nagao, Raphael; Kiss, István Z.; Li, Jr-Shin

    2016-03-01

    The ability to organize and finely manipulate the hierarchy and timing of dynamic processes is important for understanding and influencing brain functions, sleep and metabolic cycles, and many other natural phenomena. However, establishing spatiotemporal structures in biological oscillator ensembles is a challenging task that requires controlling large collections of complex nonlinear dynamical units. In this report, we present a method to design entrainment signals that create stable phase patterns in ensembles of heterogeneous nonlinear oscillators without using state feedback information. We demonstrate the approach using experiments with electrochemical reactions on multielectrode arrays, in which we selectively assign ensemble subgroups into spatiotemporal patterns with multiple phase clusters. The experimentally confirmed mechanism elucidates the connection between the phases and natural frequencies of a collection of dynamical elements, the spatial and temporal information that is encoded within this ensemble, and how external signals can be used to retrieve this information.

  18. Coherent Spin Control at the Quantum Level in an Ensemble-Based Optical Memory.

    PubMed

    Jobez, Pierre; Laplane, Cyril; Timoney, Nuala; Gisin, Nicolas; Ferrier, Alban; Goldner, Philippe; Afzelius, Mikael

    2015-06-12

    Long-lived quantum memories are essential components of a long-standing goal of remote distribution of entanglement in quantum networks. These can be realized by storing the quantum states of light as single-spin excitations in atomic ensembles. However, spin states are often subject to different dephasing processes that limit the storage time, which in principle could be overcome using spin-echo techniques. Theoretical studies suggest this to be challenging due to unavoidable spontaneous-emission noise in ensemble-based quantum memories. Here, we demonstrate spin-echo manipulation of a mean spin excitation of 1 in a large solid-state ensemble, generated through storage of a weak optical pulse. After a storage time of about 1 ms we optically read out the spin excitation with a high signal-to-noise ratio. Our results pave the way for long-duration optical quantum storage using spin-echo techniques for any ensemble-based memory.

  19. Deterministically Entangling Two Remote Atomic Ensembles via Light-Atom Mixed Entanglement Swapping

    PubMed Central

    Liu, Yanhong; Yan, Zhihui; Jia, Xiaojun; Xie, Changde

    2016-01-01

    Entanglement of two distant macroscopic objects is a key element for implementing large-scale quantum networks consisting of quantum channels and quantum nodes. Entanglement swapping can entangle two spatially separated quantum systems without direct interaction. Here we propose a scheme for deterministically entangling two remote atomic ensembles via continuous-variable entanglement swapping between two independent quantum systems involving light and atoms. Each of two stationary atomic ensembles placed at two remote nodes in a quantum network is prepared in a mixed entangled state of light and atoms. Then, entanglement swapping is unconditionally implemented between the two prepared quantum systems by means of balanced homodyne detection of light and feedback of the measured results. Finally, the established entanglement between the two macroscopic atomic ensembles is verified by the inseparability criterion of correlation variances between two anti-Stokes optical beams coming respectively from the two atomic ensembles. PMID:27165122

  20. Vertically grown nanowire crystals of dibenzotetrathienocoronene (DBTTC) on large-area graphene

    DOE PAGES

    Kim, B.; Chiu, C. -Y.; Kang, S. J.; ...

    2016-06-01

    Here we demonstrate controlled growth of vertical organic crystal nanowires on single-layer graphene. Using scanning electron microscopy (SEM), high-resolution transmission electron microscopy (TEM), and grazing-incidence X-ray diffraction (GIXD), we probe the microstructure and morphology of dibenzotetrathienocoronene (DBTTC) nanowires epitaxially grown on graphene. The investigation is performed at both the ensemble and single-nanowire level, and as a function of growth parameters, providing insight into and control over the formation mechanism. Finally, the size, density and height of the nanowires can be tuned via growth conditions, opening new avenues for tailoring three-dimensional (3-D) nanostructured architectures for organic electronics with improved functional performance.

  1. A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers

    PubMed Central

    2012-01-01

    Background: Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? Results: The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Conclusion: Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway. PMID:23216969

  2. A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers.

    PubMed

    Günther, Oliver P; Chen, Virginia; Freue, Gabriela Cohen; Balshaw, Robert F; Tebbutt, Scott J; Hollander, Zsuzsanna; Takhar, Mandeep; McMaster, W Robert; McManus, Bruce M; Keown, Paul A; Ng, Raymond T

    2012-12-08

    Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway.
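
    A minimal sketch of the two aggregation rules named above, assuming each classifier in the ensemble outputs a probability of acute rejection per patient; the cutoffs and vote counts are illustrative.

```python
import numpy as np

def average_probability(probs, threshold=0.5):
    """Average the per-classifier probabilities of acute rejection, then
    call a rejection where the mean exceeds the threshold."""
    return probs.mean(axis=0) >= threshold

def vote_threshold(probs, prob_cut=0.5, votes_needed=2):
    """Each classifier votes if its own probability exceeds prob_cut; call a
    rejection once at least `votes_needed` classifiers vote."""
    return (probs >= prob_cut).sum(axis=0) >= votes_needed

# probs: rows = classifiers in the ensemble, cols = patients (toy values).
probs = np.array([[0.9, 0.4, 0.2],
                  [0.7, 0.6, 0.1],
                  [0.8, 0.3, 0.3]])
print(average_probability(probs))
print(vote_threshold(probs))
```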

  3. Ensemble Simulation of the Atmospheric Radionuclides Discharged by the Fukushima Nuclear Accident

    NASA Astrophysics Data System (ADS)

    Sekiyama, Thomas; Kajino, Mizuo; Kunii, Masaru

    2013-04-01

    Enormous amounts of radionuclides were discharged into the atmosphere by the nuclear accident at the Fukushima Daiichi nuclear power plant (FDNPP) after the earthquake and tsunami on 11 March 2011. The radionuclides were dispersed from the power plant and deposited mainly over eastern Japan and the North Pacific Ocean. Many numerical simulations of the radionuclide dispersion and deposition have been attempted since the nuclear accident. However, none of them were able to perfectly simulate the distribution of dose rates observed after the accident over eastern Japan. This was partly due to errors in the wind vectors and precipitation fields used in the numerical simulations; unfortunately, these deterministic simulations could not provide the probability distribution of the simulation results and errors. Therefore, an ensemble simulation of the atmospheric radionuclides was performed using the ensemble Kalman filter (EnKF) data assimilation system coupled with the Japan Meteorological Agency (JMA) non-hydrostatic mesoscale model (NHM); this mesoscale model has been used operationally for daily weather forecasts by JMA. Meteorological observations were provided to the EnKF data assimilation system from the JMA operational weather forecast dataset. Through this ensemble data assimilation, twenty members of the meteorological analysis over eastern Japan from 11 to 31 March 2011 were successfully obtained. Using these meteorological ensemble analysis members, radionuclide behavior in the atmosphere, such as advection, convection, diffusion, dry deposition, and wet deposition, was simulated. This ensemble simulation provided multiple realizations of the radionuclide dispersion and distribution. Because a large ensemble deviation indicates low accuracy of the numerical simulation, probabilistic information is obtainable from the ensemble simulation results. For example, the uncertainty of precipitation triggered the uncertainty of wet deposition, which in turn triggered the uncertainty of atmospheric radionuclide amounts. The remaining radionuclides were then transported downwind; consequently, the uncertainty signal of the radionuclide amounts propagated downwind. This signal propagation was seen in the ensemble simulation by tracking the areas of large deviation in radionuclide concentration and deposition. These statistics can provide information useful for the probabilistic prediction of radionuclides.

  4. Wigner Functions for Arbitrary Quantum Systems.

    PubMed

    Tilma, Todd; Everitt, Mark J; Samson, John H; Munro, William J; Nemoto, Kae

    2016-10-28

    The possibility of constructing a complete, continuous Wigner function for any quantum system has been a subject of investigation for over 50 years. A key system that has served to illustrate the difficulties of this problem has been an ensemble of spins. Here we present a general and consistent framework for constructing Wigner functions exploiting the underlying symmetries in the physical system at hand. The Wigner function can be used to fully describe any quantum system of arbitrary dimension or ensemble size.

  5. Minimal ensemble based on subset selection using ECG to diagnose categories of CAN.

    PubMed

    Abawajy, Jemal; Kelarev, Andrei; Yi, Xun; Jelinek, Herbert F

    2018-07-01

    Early diagnosis of cardiac autonomic neuropathy (CAN) is critical for reversing or decreasing its progression and preventing complications. Diagnostic accuracy or precision is one of the core requirements of CAN detection. As the standard Ewing battery tests suffer from a number of shortcomings, research in automating and improving the early detection of CAN has recently received serious attention, both in identifying additional clinical variables and in designing advanced ensembles of classifiers to improve the accuracy or precision of CAN diagnostics. Although large ensembles are commonly proposed for the automated diagnosis of CAN, they are characterized by slow processing speed and computational complexity. This paper applies ECG features and proposes a new ensemble-based approach for diagnosing CAN progression. We introduce a Minimal Ensemble Based On Subset Selection (MEBOSS) for the diagnosis of all categories of CAN, including early, definite and atypical CAN. MEBOSS is based on a novel multi-tier architecture applying classifier subset selection as well as training subset selection during several steps of its operation. Our experiments determined the diagnostic accuracy or precision obtained in 5 × 2 cross-validation for various options employed in MEBOSS and other classification systems. The experiments demonstrate the operation of the MEBOSS procedure invoking the most effective classifiers available in the open-source software environment SageMath. The results show that for the large DiabHealth database of CAN-related parameters, MEBOSS outperformed other classification systems available in SageMath and achieved 94% to 97% precision in 5 × 2 cross-validation, correctly distinguishing any two of up to five CAN categories: control, early, definite, severe and atypical CAN. These results show that the MEBOSS architecture is effective and can be recommended for practical implementation in systems for the diagnosis of CAN progression.

  6. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence

    PubMed Central

    Kelly, David; Majda, Andrew J.; Tong, Xin T.

    2015-01-01

    The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature. PMID:26261335

  7. Overlapped Partitioning for Ensemble Classifiers of P300-Based Brain-Computer Interfaces

    PubMed Central

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance. PMID:24695550

  8. Overlapped partitioning for ensemble classifiers of P300-based brain-computer interfaces.

    PubMed

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

    A P300-based brain-computer interface (BCI) enables a wide range of people to control devices that improve their quality of life. Ensemble classifiers with naive partitioning were recently applied to the P300-based BCI and these classification performances were assessed. However, they were usually trained on a large amount of training data (e.g., 15300). In this study, we evaluated ensemble linear discriminant analysis (LDA) classifiers with a newly proposed overlapped partitioning method using 900 training data. In addition, the classification performances of the ensemble classifier with naive partitioning and a single LDA classifier were compared. One of three conditions for dimension reduction was applied: the stepwise method, principal component analysis (PCA), or none. The results show that an ensemble stepwise LDA (SWLDA) classifier with overlapped partitioning achieved a better performance than the commonly used single SWLDA classifier and an ensemble SWLDA classifier with naive partitioning. This result implies that the performance of the SWLDA is improved by overlapped partitioning and the ensemble classifier with overlapped partitioning requires less training data than that with naive partitioning. This study contributes towards reducing the required amount of training data and achieving better classification performance.

  9. Concrete ensemble Kalman filters with rigorous catastrophic filter divergence.

    PubMed

    Kelly, David; Majda, Andrew J; Tong, Xin T

    2015-08-25

    The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature.

  10. Correlated variability modifies working memory fidelity in primate prefrontal neuronal ensembles

    PubMed Central

    Leavitt, Matthew L.; Pieper, Florian; Sachs, Adam J.; Martinez-Trujillo, Julio C.

    2017-01-01

    Neurons in the primate lateral prefrontal cortex (LPFC) encode working memory (WM) representations via sustained firing, a phenomenon hypothesized to arise from recurrent dynamics within ensembles of interconnected neurons. Here, we tested this hypothesis by using microelectrode arrays to examine spike count correlations (r_sc) in LPFC neuronal ensembles during a spatial WM task. We found a pattern of pairwise r_sc during WM maintenance indicative of stronger coupling between similarly tuned neurons and increased inhibition between dissimilarly tuned neurons. We then used a linear decoder to quantify the effects of the high-dimensional r_sc structure on information coding in the neuronal ensembles. We found that the r_sc structure could facilitate or impair coding, depending on the size of the ensemble and the tuning properties of its constituent neurons. A simple optimization procedure demonstrated that near-maximum decoding performance could be achieved using a relatively small number of neurons. These WM-optimized subensembles were more diverse in signal correlation (r_signal) and more anatomically dispersed than predicted by the statistics of the full recorded population of neurons, and they often contained neurons that were poorly WM-selective, yet enhanced coding fidelity by shaping the ensemble's r_sc structure. We observed a pattern of r_sc between LPFC neurons indicative of recurrent dynamics as a mechanism for WM-related activity, and found that the r_sc structure can increase the fidelity of WM representations. Thus, WM coding in LPFC neuronal ensembles arises from a complex synergy between single-neuron coding properties and multidimensional, ensemble-level phenomena. PMID:28275096

  11. Ensemble forecasting of short-term system scale irrigation demands using real-time flow data and numerical weather predictions

    NASA Astrophysics Data System (ADS)

    Perera, Kushan C.; Western, Andrew W.; Robertson, David E.; George, Biju; Nawarathna, Bandara

    2016-06-01

    Irrigation demands fluctuate in response to weather variations and a range of irrigation management decisions, which creates challenges for water supply system operators. This paper develops a method for real-time ensemble forecasting of irrigation demand and applies it to irrigation command areas of various sizes for lead times of 1 to 5 days. The ensemble forecasts are based on a deterministic time series model coupled with ensemble representations of the various inputs to that model. Forecast inputs include past flow, precipitation, and potential evapotranspiration. These inputs are variously derived from flow observations from a modernized irrigation delivery system; short-term weather forecasts derived from numerical weather prediction models and observed weather data available from automatic weather stations. The predictive performance for the ensemble spread of irrigation demand was quantified using rank histograms, the mean continuous rank probability score (CRPS), the mean CRPS reliability and the temporal mean of the ensemble root mean squared error (MRMSE). The mean forecast was evaluated using root mean squared error (RMSE), Nash-Sutcliffe model efficiency (NSE) and bias. The NSE values for evaluation periods ranged between 0.96 (1 day lead time, whole study area) and 0.42 (5 days lead time, smallest command area). Rank histograms and comparison of MRMSE, mean CRPS, mean CRPS reliability and RMSE indicated that the ensemble spread is generally a reliable representation of the forecast uncertainty for short lead times but underestimates the uncertainty for long lead times.
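
    Two of the verification scores used above are straightforward to compute from an ensemble. The sketch below evaluates the Nash-Sutcliffe efficiency of the ensemble mean and the standard sample-based CRPS estimator on synthetic data; the arrays are illustrative, not the study's forecasts.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of a deterministic (e.g. ensemble-mean) forecast."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def crps_sample(ens, ob):
    """Sample-based CRPS for one time step: E|X - y| - 0.5 E|X - X'|."""
    term1 = np.abs(ens - ob).mean()
    term2 = 0.5 * np.abs(ens[:, None] - ens[None, :]).mean()
    return term1 - term2

rng = np.random.default_rng(6)
obs = 10 + np.sin(np.linspace(0, 6, 100)) + 0.1 * rng.standard_normal(100)
ens = obs + 0.5 * rng.standard_normal((50, 100))     # 50-member toy forecast

print("NSE of ensemble mean:", round(nse(ens.mean(axis=0), obs), 3))
print("mean CRPS:",
      round(np.mean([crps_sample(ens[:, t], obs[t]) for t in range(100)]), 3))
```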

  12. Statistical characterization of planar two-dimensional Rayleigh-Taylor mixing layers

    NASA Astrophysics Data System (ADS)

    Sendersky, Dmitry

    2000-10-01

    The statistical evolution of a planar, randomly perturbed fluid interface subject to Rayleigh-Taylor instability is explored through numerical simulation in two space dimensions. The data set, generated by the front-tracking code FronTier, is highly resolved and covers a large ensemble of initial perturbations, allowing a more refined analysis of closure issues pertinent to the stochastic modeling of chaotic fluid mixing. We closely approach a two-fold convergence of the mean two-phase flow: convergence of the numerical solution under computational mesh refinement, and statistical convergence under increasing ensemble size. Quantities that appear in the two-phase averaged Euler equations are computed directly and analyzed for numerical and statistical convergence. Bulk averages show a high degree of convergence, while interfacial averages are convergent only in the outer portions of the mixing zone, where there is a coherent array of bubble and spike tips. Comparison with the familiar bubble/spike penetration law h = αAgt² is complicated by the lack of scale invariance, inability to carry the simulations to late time, the increasing Mach numbers of the bubble/spike tips, and sensitivity to the method of data analysis. Finally, we use the simulation data to analyze some constitutive properties of the mixing process.
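
    Given measurements of the mixing-zone height h(t), the growth coefficient in h = αAgt² can be estimated by a no-intercept least-squares fit, as in this sketch on synthetic data; the Atwood number and noise level are illustrative assumptions.

```python
import numpy as np

# Fit the growth coefficient alpha in h = alpha * A * g * t**2 by linear
# least squares in the regressor x = A * g * t**2.
A, g = 0.5, 9.81                    # Atwood number (illustrative), gravity
t = np.linspace(0.1, 2.0, 40)
rng = np.random.default_rng(7)
h = 0.06 * A * g * t**2 * (1 + 0.05 * rng.standard_normal(t.size))  # synthetic data

x = A * g * t**2
alpha = np.sum(x * h) / np.sum(x * x)   # slope of the no-intercept regression
print(f"alpha = {alpha:.3f}")
```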

  13. Using ensembles in water management: forecasting dry and wet episodes

    NASA Astrophysics Data System (ADS)

    van het Schip-Haverkamp, Tessa; van den Berg, Wim; van de Beek, Remco

    2015-04-01

    Extreme weather situations such as droughts and extensive precipitation are becoming more frequent, which makes it more important to obtain accurate weather forecasts for the short and long term. Ensembles can provide a solution in the form of scenario forecasts. MeteoGroup uses ensembles in a new forecasting technique which presents a number of weather scenarios for a dynamical water management project, called Water-Rijk, in which water storage and water retention play a large role. The Water-Rijk is part of Park Lingezegen, which is located between Arnhem and Nijmegen in the Netherlands. In collaboration with the University of Wageningen, Alterra and Eijkelkamp, a forecasting system is being developed for this area which can provide water boards with a number of weather and hydrology scenarios in order to assist in the decision whether or not water retention or water storage is necessary in the near future. In order to forecast drought and extensive precipitation, the difference 'precipitation − evaporation' is used as a measure of drought in the weather forecasts. In case of an upcoming drought this difference will take larger negative values; in case of a wet episode, it will be positive. The Makkink potential evaporation is used, which gives the most accurate potential evaporation values during the summer, when evaporation plays an important role in the availability of surface water. Scenarios are determined by reducing the large number of forecasts in the ensemble to a number of averaged members, each with its own likelihood of occurrence. For the Water-Rijk project five scenario forecasts are calculated: extreme dry, dry, normal, wet and extreme wet. These scenarios are constructed for two forecasting periods, each using its own ensemble technique: up to 48 hours ahead and up to 15 days ahead. The 48-hour forecast uses an ensemble constructed from forecasts of multiple high-resolution regional models: UKMO's Euro4 model, the ECMWF model, WRF and Hirlam. Using multiple model runs and additional post-processing, an ensemble can be created from non-ensemble models. The 15-day forecast uses the ECMWF Ensemble Prediction System forecast, from which scenarios can be deduced directly. A combination of the ensembles from the two forecasting periods is used in order to have the highest possible resolution for the first 48 hours, followed by the lower-resolution long-term forecast.
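
    A hedged sketch of one simple way to reduce an ensemble to five weighted scenarios of the kind described: rank members by cumulative precipitation minus evaporation and average within quantile groups. The grouping rule and the toy ensemble are illustrative assumptions, not MeteoGroup's procedure.

```python
import numpy as np

rng = np.random.default_rng(8)
# Toy ensemble: 51 members of daily precipitation-minus-evaporation (mm) over
# a 15-day horizon, standing in for EPS output.
pme = rng.normal(loc=-0.5, scale=2.0, size=(51, 15))

# Rank members by cumulative P-E over the horizon and split them into five
# equally populated groups; each scenario is a group mean, with the group's
# relative size as its likelihood of occurrence.
totals = pme.sum(axis=1)
order = np.argsort(totals)
groups = np.array_split(order, 5)
labels = ["extreme dry", "dry", "normal", "wet", "extreme wet"]
for lab, idx in zip(labels, groups):
    scen = pme[idx].mean(axis=0)
    print(f"{lab:11s} p={len(idx)/len(order):.2f} cum P-E={scen.sum():6.1f} mm")
```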

  14. Active cells for redundant and configurable articulated structures

    NASA Astrophysics Data System (ADS)

    Swensen, John P.; Nawroj, Ahsan I.; Pounds, Paul E. I.; Dollar, Aaron M.

    2014-10-01

    The proposed research effort explores the development of active cells—simple contractile electro-mechanical units that can be used as the material basis for larger articulable structures. Each cell, which might be considered a ‘muscle unit,’ consists of a contractile Nitinol Shape Memory Alloy (SMA) core with conductive terminals. Large numbers of these cells might be combined and externally powered to change phase, contracting to either articulate with a large strain or increase the stiffness of the ensemble, depending on the cell design. Unlike traditional work in modular robotics, the approach presented here focuses on cells that have a simplistic design and function, are inexpensive to fabricate, and are eventually scalable to sub-millimeter sizes, working toward our vision of articulated and robotic structures that can be custom-fabricated from large numbers of general cell units, similar to biological structures. In this paper, we present the design of the active cells and demonstrate their usage with three articulated structures built with them.

  15. Data Assimilation in the ADAPT Photospheric Flux Transport Model

    DOE PAGES

    Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; ...

    2015-03-17

    Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.
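
    For reference, the following sketch implements the ensemble transform Kalman filter analysis in the Hunt et al. (2007) formulation that an LETKF applies locally; the localization that gives the LETKF its name is omitted, and all dimensions are illustrative.

```python
import numpy as np

def etkf_analysis(X, y, H, R_inv):
    """ETKF analysis step (Hunt et al. 2007 formulation, no localization).
    X : (n, m) forecast ensemble; y : (p,) observations;
    H : (p, n) linear observation operator; R_inv : (p, p)."""
    n, m = X.shape
    xm = X.mean(axis=1)
    Xp = X - xm[:, None]                       # state perturbations
    Y = H @ X
    ym = Y.mean(axis=1)
    Yp = Y - ym[:, None]                       # observation-space perturbations
    C = Yp.T @ R_inv                           # (m, p)
    Pa = np.linalg.inv((m - 1) * np.eye(m) + C @ Yp)   # weight-space covariance
    w = Pa @ C @ (y - ym)                      # mean-update weights
    vals, vecs = np.linalg.eigh((m - 1) * Pa)
    W = vecs @ np.diag(np.sqrt(vals)) @ vecs.T # symmetric square root transform
    return xm[:, None] + Xp @ (w[:, None] + W) # analysis ensemble

rng = np.random.default_rng(9)
n, m, p = 20, 16, 5                            # toy sizes
X = rng.standard_normal((n, m))
H = np.eye(p, n)                               # observe the first p components
Xa = etkf_analysis(X, rng.standard_normal(p), H, np.eye(p))
print(Xa.shape)
```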

  16. Rapid acquisition of novel interface control by small ensembles of arbitrarily selected primary motor cortex neurons

    PubMed Central

    Law, Andrew J.; Rivlis, Gil

    2014-01-01

    Pioneering studies demonstrated that novel degrees of freedom could be controlled individually by directly encoding the firing rate of single motor cortex neurons, without regard to each neuron's role in controlling movement of the native limb. In contrast, recent brain-computer interface work has emphasized decoding outputs from large ensembles that include substantially more neurons than the number of degrees of freedom being controlled. To bridge the gap between direct encoding by single neurons and decoding output from large ensembles, we studied monkeys controlling one degree of freedom by comodulating up to four arbitrarily selected motor cortex neurons. Performance typically exceeded random quite early in single sessions and then continued to improve to different degrees in different sessions. We therefore examined factors that might affect performance. Performance improved with larger ensembles. In contrast, other factors that might have reflected preexisting synaptic architecture—such as the similarity of preferred directions—had little if any effect on performance. Patterns of comodulation among ensemble neurons became more consistent across trials as performance improved over single sessions. Compared with the ensemble neurons, other simultaneously recorded neurons showed less modulation. Patterns of voluntarily comodulated firing among small numbers of arbitrarily selected primary motor cortex (M1) neurons thus can be found and improved rapidly, with little constraint based on the normal relationships of the individual neurons to native limb movement. This rapid flexibility in relationships among M1 neurons may in part underlie our ability to learn new movements and improve motor skill. PMID:24920030

  17. The Principle of Energetic Consistency

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.

    2009-01-01

    A basic result in estimation theory is that the minimum variance estimate of the dynamical state, given the observations, is the conditional mean estimate. This result holds independently of the specifics of any dynamical or observation nonlinearity or stochasticity, requiring only that the probability density function of the state, conditioned on the observations, has two moments. For nonlinear dynamics that conserve a total energy, this general result implies the principle of energetic consistency: if the dynamical variables are taken to be the natural energy variables, then the sum of the total energy of the conditional mean and the trace of the conditional covariance matrix (the total variance) is constant between observations. Ensemble Kalman filtering methods are designed to approximate the evolution of the conditional mean and covariance matrix. For them the principle of energetic consistency holds independently of ensemble size, even with covariance localization. However, full Kalman filter experiments with advection dynamics have shown that a small amount of numerical dissipation can cause a large, state-dependent loss of total variance, to the detriment of filter performance. The principle of energetic consistency offers a simple way to test whether this spurious loss of variance limits ensemble filter performance in full-blown applications. The classical second-moment closure (third-moment discard) equations also satisfy the principle of energetic consistency, independently of the rank of the conditional covariance matrix. Low-rank approximation of these equations offers an energetically consistent, computationally viable alternative to ensemble filtering. Current formulations of long-window, weak-constraint, four-dimensional variational methods are designed to approximate the conditional mode rather than the conditional mean. Thus they neglect the nonlinear bias term in the second-moment closure equation for the conditional mean. The principle of energetic consistency implies that, to precisely the extent that growing modes are important in data assimilation, this term is also important.
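
    The bookkeeping implied by the principle is easy to check for an ensemble: with energy variables, the energy of the ensemble mean plus the total variance (trace of the covariance) should be constant between observations. The sketch below verifies this for a toy energy-conserving (orthogonal) propagator; the dynamics and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)

# Energy-conserving toy dynamics: a fixed rotation of the state vector
# preserves ||x||^2 (the "total energy") for every ensemble member.
n, m = 4, 100
theta = 0.3
Q = np.eye(n)
Q[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]

X = rng.standard_normal((n, m))                 # initial ensemble
for step in range(3):
    xm = X.mean(axis=1)
    P = np.cov(X)                               # sample covariance, (n, n)
    # Energy of the mean plus total variance stays constant between observations,
    # since the orthogonal propagator preserves both terms individually.
    print(step, xm @ xm + np.trace(P))
    X = Q @ X                                   # propagate each member
```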

  18. Forces and stress in second order Møller-Plesset perturbation theory for condensed phase systems within the resolution-of-identity Gaussian and plane waves approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Ben, Mauro, E-mail: mauro.delben@chem.uzh.ch; Hutter, Jürg, E-mail: hutter@chem.uzh.ch; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    The forces acting on the atoms as well as the stress tensor are crucial ingredients for calculating the structural and dynamical properties of systems in the condensed phase. Here, these derivatives of the total energy are evaluated for the second-order Møller-Plesset perturbation energy (MP2) in the framework of the resolution-of-identity Gaussian and plane waves method, in a way that is fully consistent with how the total energy is computed. This consistency is non-trivial, given the different ways employed to compute Coulomb, exchange, and canonical four-center integrals, and allows, for example, for energy-conserving dynamics in various ensembles. Based on this formalism, a massively parallel algorithm has been developed for finite and extended systems. The designed parallel algorithm displays, with respect to the system size, cubic, quartic, and quintic requirements, respectively, for the memory, communication, and computation. All these requirements are reduced with an increasing number of processes, and the measured performance shows excellent parallel scalability and efficiency up to thousands of nodes. Additionally, the computationally more demanding quintic-scaling steps can be accelerated by employing graphics processing units (GPUs), showing, for large systems, a gain of almost a factor of two compared to the standard CPU-only case. In this way, the evaluation of the derivatives of the RI-MP2 energy can be performed within a few minutes for systems containing hundreds of atoms and thousands of basis functions. With good time to solution, the implementation thus opens the possibility to perform molecular dynamics (MD) simulations in various ensembles (microcanonical and isobaric-isothermal) at the MP2 level of theory. Geometry optimization, full cell relaxation, and energy-conserving MD simulations have been performed for a variety of molecular crystals including NH3, CO2, formic acid, and benzene.

  19. An Effective Approach for Clustering InhA Molecular Dynamics Trajectory Using Substrate-Binding Cavity Features

    PubMed Central

    De Paris, Renata; Quevedo, Christian V.; Ruiz, Duncan D. A.; Norberto de Souza, Osmar

    2015-01-01

    Protein receptor conformations obtained from molecular dynamics (MD) simulations have become a promising way to treat receptor flexibility explicitly in molecular docking experiments applied to drug discovery and development. However, incorporating the entire ensemble of MD conformations in docking experiments to screen large candidate compound libraries is currently an unfeasible task. Clustering algorithms have been widely used as a means to reduce such ensembles to a manageable size. Most studies investigate different algorithms using pairwise Root-Mean-Square Deviation (RMSD) values for all, or part of, the MD conformations. Nevertheless, the RMSD alone may not be the most appropriate gauge for clustering conformations when the target receptor has a plastic active site, since RMSD values are influenced by changes in other parts of the structure. Hence, we have applied two partitioning methods (k-means and k-medoids) and four agglomerative hierarchical methods (Complete linkage, Ward's, Unweighted Pair Group Method and Weighted Pair Group Method) to analyze and compare the quality of partitions between a data set composed of properties from an enzyme receptor substrate-binding cavity and two data sets created using different RMSD approaches. Ensembles of representative MD conformations were generated by selecting a medoid of each group from all partitions analyzed. We investigated the performance of our new method for evaluating binding conformations of drug candidates to the InhA enzyme by cross-docking experiments between a 20 ns MD trajectory and 20 different ligands. Statistical analyses showed that the novel ensemble, which is represented by only 0.48% of the MD conformations, was able to reproduce 75% of all dynamic behaviors within the binding cavity for the docking experiments performed. Moreover, this new approach not only outperforms the other two RMSD-clustering solutions, but also shows promise as a strategy to distill biologically relevant information from MD trajectories, especially for docking purposes. PMID:26218832

  20. An Effective Approach for Clustering InhA Molecular Dynamics Trajectory Using Substrate-Binding Cavity Features.

    PubMed

    De Paris, Renata; Quevedo, Christian V; Ruiz, Duncan D A; Norberto de Souza, Osmar

    2015-01-01

    Protein receptor conformations obtained from molecular dynamics (MD) simulations have become a promising way to treat receptor flexibility explicitly in molecular docking experiments applied to drug discovery and development. However, incorporating the entire ensemble of MD conformations in docking experiments to screen large candidate compound libraries is currently an unfeasible task. Clustering algorithms have been widely used as a means to reduce such ensembles to a manageable size. Most studies investigate different algorithms using pairwise Root-Mean-Square Deviation (RMSD) values for all, or part of, the MD conformations. Nevertheless, the RMSD alone may not be the most appropriate gauge for clustering conformations when the target receptor has a plastic active site, since RMSD values are influenced by changes in other parts of the structure. Hence, we have applied two partitioning methods (k-means and k-medoids) and four agglomerative hierarchical methods (Complete linkage, Ward's, Unweighted Pair Group Method and Weighted Pair Group Method) to analyze and compare the quality of partitions between a data set composed of properties from an enzyme receptor substrate-binding cavity and two data sets created using different RMSD approaches. Ensembles of representative MD conformations were generated by selecting a medoid of each group from all partitions analyzed. We investigated the performance of our new method for evaluating binding conformations of drug candidates to the InhA enzyme by cross-docking experiments between a 20 ns MD trajectory and 20 different ligands. Statistical analyses showed that the novel ensemble, which is represented by only 0.48% of the MD conformations, was able to reproduce 75% of all dynamic behaviors within the binding cavity for the docking experiments performed. Moreover, this new approach not only outperforms the other two RMSD-clustering solutions, but also shows promise as a strategy to distill biologically relevant information from MD trajectories, especially for docking purposes.
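
    A sketch of the medoid-selection idea used above: plain alternating k-medoids on a precomputed distance matrix picks one representative snapshot per cluster. The feature vectors below are random stand-ins for the substrate-binding-cavity properties, and the simple algorithm is an illustrative choice.

```python
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    """Plain alternating k-medoids on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)   # assign to nearest medoid
        new = medoids.copy()
        for c in range(k):
            cluster = np.where(labels == c)[0]
            if cluster.size:                        # keep old medoid if cluster empty
                within = D[np.ix_(cluster, cluster)].sum(axis=1)
                new[c] = cluster[np.argmin(within)] # most central member
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# Toy stand-in for per-snapshot cavity features from a 20 ns trajectory.
rng = np.random.default_rng(11)
feats = rng.standard_normal((500, 6))
D = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
medoids, labels = k_medoids(D, k=5)
print("representative snapshots:", medoids)
```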

  1. Optical Rabi Oscillations in a Quantum Dot Ensemble

    NASA Astrophysics Data System (ADS)

    Kujiraoka, Mamiko; Ishi-Hayase, Junko; Akahane, Kouichi; Yamamoto, Naokatsu; Ema, Kazuhiro; Sasaki, Masahide

    2010-09-01

    We have investigated Rabi oscillations of exciton polarization in a self-assembled InAs quantum dot ensemble. The four-wave mixing signals, measured as a function of the average pulse area, showed large in-plane anisotropy and nonharmonic oscillations. The experimental results can be well reproduced by a two-level model calculation including three types of inhomogeneities without any fitting parameter. The large anisotropy can be well explained by the anisotropic dipole moments. We also find that the nonharmonic behaviors partly originate from the polarization interference.

  2. Storm Surge Simulation and Ensemble Forecast for Hurricane Irene (2011)

    NASA Astrophysics Data System (ADS)

    Lin, N.; Emanuel, K.

    2012-12-01

    Hurricane Irene, raking the U.S. East Coast during the period of 26-30 August 2011, caused widespread damage estimated at $15.8 billion and was responsible for 49 direct deaths (Avila and Cangialosi, 2011). Although the most severe impact in the northeastern U.S. was catastrophic inland flooding, with its unusually large size Irene also generated high waves and storm surges and caused moderate to major coastal flooding. The most severe surge damage occurred between Oregon Inlet and Cape Hatteras in North Carolina (NC). Significant storm surge damage also occurred along southern Chesapeake Bay, and moderate and high surges were observed along the coast from New Jersey (NJ) northward. A storm surge of 0.9-1.8 m caused hundreds of millions of dollars in property damage in New York City (NYC) and Long Island, despite the fact that the storm made landfall to the west of NYC with peak winds of no more than tropical storm strength. Making three U.S. landfalls (in NC, NJ, and NY), Hurricane Irene provides a unique case for studying storm surge along the eastern U.S. coastline. We apply the hydrodynamic model ADCIRC (Luettich et al. 1992) to conduct surge simulations for Pamlico Sound, Chesapeake Bay, and NYC, using best track data and parametric wind and pressure models. The results agree well with tidal-gauge observations. We then explore a new methodology for storm surge ensemble forecasting and apply it to Irene. This method applies a statistical/deterministic hurricane model (Emanuel et al. 2006) to generate large ensembles of storms under the storm environment described by the 51 ECMWF ensemble members. The associated surge ensembles are then generated with the ADCIRC model. The numerical simulation is computationally efficient, making the method applicable to real-time storm surge ensemble forecasting. We report the results for NYC in this presentation. The ADCIRC simulation using the best track data generates a storm surge of 1.3 m and a storm tide of 2.1 m at the Battery, NYC, which agree well with the observed storm surge of 1.33 m and storm tide of 2.12 m, although the simulated surge arrives about 2 hours earlier than observed. Based on the surge climatology estimated by Lin et al. (2012), Hurricane Irene's storm surge is approximately a 60-year event for NYC, but its storm tide, with the surge happening right at the high astronomical tide, is a 100-year event. Lin et al. (2012) also projected that such 100-year storm tide events might occur on average every 3-20 years by the end of the century, under the IPCC A1B emission scenario and a 1-m sea level rise. The ensemble forecasting, starting from two days and one day (each with 1000 ensemble members) before Irene's first landfall in NC, shows that Irene's actual storm surge at the Battery had a chance of about 9% and 10%, respectively, of being exceeded. The largest surges among the two ensemble sets are 2.28 m and 2.05 m, respectively. If happening at high tide, as with Hurricane Irene, the worst-case storm tides would be about 3-3.2 m, similar to the highest historical water level at the Battery due to a hurricane in 1821. Lin et al. (2012) estimated that such a storm tide of about 3.1 m had a return period of about 500 years under current climate conditions, but the return period might become 25-240 years by the end of the century, under the IPCC A1B emission scenario and a 1-m sea level rise.
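
    A sketch of how such exceedance probabilities are read off an ensemble: compare the observed peak surge against the empirical distribution of simulated peaks. The surge sample below is synthetic, not the ECMWF-driven ensemble from the study.

```python
import numpy as np

rng = np.random.default_rng(12)

# Synthetic stand-in for 1000 simulated peak storm surges at a gauge (m).
surges = rng.gamma(shape=4.0, scale=0.3, size=1000)

observed = 1.33                       # observed peak surge at the Battery (m)
p_exceed = np.mean(surges >= observed)   # empirical exceedance probability
worst = surges.max()                     # ensemble worst case
print(f"P(surge >= {observed} m) = {p_exceed:.1%}; worst case = {worst:.2f} m")
```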

  3. Continuous Easy-Plane Deconfined Phase Transition on the Kagome Lattice

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-Feng; He, Yin-Chen; Eggert, Sebastian; Moessner, Roderich; Pollmann, Frank

    2018-03-01

    We use large scale quantum Monte Carlo simulations to study an extended Hubbard model of hard-core bosons on the kagome lattice. In the limit of strong nearest-neighbor interactions at 1/3 filling, the interplay between frustration and quantum fluctuations leads to a valence bond solid ground state. The system undergoes a quantum phase transition to a superfluid phase as the interaction strength is decreased. It is still under debate whether the transition is weakly first order or represents an unconventional continuous phase transition. We present a theory in terms of an easy-plane noncompact CP^1 gauge theory describing the phase transition at 1/3 filling. Utilizing large scale quantum Monte Carlo simulations with parallel tempering in the canonical ensemble up to 15552 spins, we provide evidence that the phase transition is continuous at exactly 1/3 filling. A careful finite size scaling analysis reveals an unconventional scaling behavior hinting at deconfined quantum criticality.
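
    A finite-size scaling analysis of the kind mentioned rests on collapsing curves for different lattice sizes onto a single scaling function. A generic sketch, with a toy observable and made-up exponents rather than the paper's quantum Monte Carlo data:

    ```python
    import numpy as np

    def collapse(g, L, O, gc, nu, beta_over_nu):
        """Rescale curves O(g; L) for a finite-size scaling collapse:
        x = (g - gc) * L**(1/nu), y = O * L**(beta/nu).
        At a continuous transition, curves for different L overlap."""
        x = (np.asarray(g) - gc) * L ** (1.0 / nu)
        y = np.asarray(O) * L ** beta_over_nu
        return x, y

    # Hypothetical order-parameter data for two system sizes (toy scaling form)
    g = np.linspace(0.9, 1.1, 5)
    for L in (24, 48):
        O = 0.5 / L**0.5 * (1 - 2 * (g - 1.0) * L)
        print(L, collapse(g, L, O, gc=1.0, nu=1.0, beta_over_nu=0.5))
    ```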

  4. End-to-end distance and contour length distribution functions of DNA helices

    NASA Astrophysics Data System (ADS)

    Zoli, Marco

    2018-06-01

    I present a computational method to evaluate the end-to-end and contour length distribution functions of short DNA molecules described by a mesoscopic Hamiltonian. The method generates a large statistical ensemble of possible configurations for each dimer in the sequence, selects the global equilibrium twist conformation for the molecule, and determines the average base pair distances along the molecule backbone. Integrating over the base pair radial and angular fluctuations, I derive the room temperature distribution functions as a function of sequence length. The obtained values for the most probable end-to-end distance and contour length, which provide a measure of the global molecule size, are used to examine DNA flexibility at short length scales. It is found that, even in molecules with fewer than ~60 base pairs, coiled configurations maintain a large statistical weight and, consistently, the persistence lengths may be much smaller than in kilo-base DNA.
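
    For intuition, an end-to-end distance distribution can be sampled from a much cruder model than the paper's mesoscopic Hamiltonian; the freely jointed chain below is only a stand-in, with a hypothetical bond length:

    ```python
    import numpy as np

    def fjc_end_to_end(n_bonds, bond_length=0.34, n_samples=20000, seed=1):
        """End-to-end distances (nm) of a freely jointed chain: each bond
        is an independent random unit vector. A stand-in for a real DNA
        configuration ensemble."""
        rng = np.random.default_rng(seed)
        v = rng.normal(size=(n_samples, n_bonds, 3))
        v /= np.linalg.norm(v, axis=2, keepdims=True)
        ends = bond_length * v.sum(axis=1)
        return np.linalg.norm(ends, axis=1)

    r = fjc_end_to_end(n_bonds=60)
    hist, edges = np.histogram(r, bins=50)
    print(f"mean end-to-end: {r.mean():.2f} nm; "
          f"most probable distance ~ {edges[hist.argmax()]:.2f} nm")
    ```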

  5. The Local Ensemble Transform Kalman Filter with the Weather Research and Forecasting Model: Experiments with Real Observations

    NASA Astrophysics Data System (ADS)

    Miyoshi, Takemasa; Kunii, Masaru

    2012-03-01

    The local ensemble transform Kalman filter (LETKF) is implemented with the Weather Research and Forecasting (WRF) model, and real observations are assimilated to assess the newly developed WRF-LETKF system. The WRF model is a widely used mesoscale numerical weather prediction model, and the LETKF is an ensemble Kalman filter (EnKF) algorithm that is particularly efficient on parallel computer architectures. This study aims to provide the basis for future research on mesoscale data assimilation using the WRF-LETKF system, an additional testbed to the existing WRF-based EnKF systems used in previous studies. The particular LETKF system adopted in this study is based on a system initially developed in 2004 and has been continuously improved through theoretical studies and wide application to many kinds of dynamical models, including realistic geophysical models. The most recent and important improvements include an adaptive covariance inflation scheme which considers the spatial and temporal inhomogeneity of inflation parameters. Experiments show that the LETKF successfully assimilates real observations and that adaptive inflation is advantageous. Additional experiments with various ensemble sizes show that using more ensemble members improves the analyses consistently.
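
    The analysis step at the heart of an (L)ETKF can be written compactly in ensemble space. The sketch below implements the global transform update without localization or any WRF interface, so it illustrates the algorithm class, not the WRF-LETKF system itself:

    ```python
    import numpy as np

    def etkf_update(X, y, H, R, inflation=1.0):
        """Minimal ensemble transform Kalman filter analysis (no localization).
        X: (n, m) forecast ensemble of m members; y: (p,) observations;
        H: (p, n) observation operator; R: (p, p) obs-error covariance."""
        n, m = X.shape
        xb = X.mean(axis=1, keepdims=True)
        Xp = (X - xb) * np.sqrt(inflation)            # inflated perturbations
        Y = H @ Xp                                    # obs-space perturbations
        Rinv = np.linalg.inv(R)
        Pa = np.linalg.inv((m - 1) * np.eye(m) + Y.T @ Rinv @ Y)
        wbar = Pa @ Y.T @ Rinv @ (y - (H @ xb).ravel())
        evals, evecs = np.linalg.eigh((m - 1) * Pa)   # symmetric square root
        W = evecs @ np.diag(np.sqrt(np.maximum(evals, 0))) @ evecs.T
        return xb + Xp @ (wbar[:, None] + W)          # analysis ensemble

    rng = np.random.default_rng(2)
    X = rng.normal(size=(3, 10))                      # 3 state vars, 10 members
    H = np.eye(1, 3)                                  # observe the first variable
    Xa = etkf_update(X, y=np.array([0.5]), H=H, R=np.array([[0.1]]))
    print(Xa.mean(axis=1))
    ```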

  6. On the incidence of meteorological and hydrological processors: Effect of resolution, sharpness and reliability of hydrological ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Abaza, Mabrouk; Anctil, François; Fortin, Vincent; Perreault, Luc

    2017-12-01

    Meteorological and hydrological ensemble prediction systems are imperfect. Their outputs could often be improved through the use of a statistical processor, opening up the question of the necessity of using both processors (meteorological and hydrological), only one of them, or none. This experiment compares the predictive distributions from four hydrological ensemble prediction systems (H-EPS) utilising the Ensemble Kalman filter (EnKF) probabilistic sequential data assimilation scheme. They differ in the inclusion or not of the Distribution Based Scaling (DBS) method for post-processing meteorological forecasts and the ensemble Bayesian Model Averaging (ensemble BMA) method for hydrological forecast post-processing. The experiment is implemented on three large watersheds and relies on the combination of two meteorological reforecast products: the 4-member Canadian reforecasts from the Canadian Centre for Meteorological and Environmental Prediction (CCMEP) and the 10-member American reforecasts from the National Oceanic and Atmospheric Administration (NOAA), leading to 14 members at each time step. Results show that all four tested H-EPS lead to resolution and sharpness values that are quite similar, with an advantage to DBS + EnKF. The ensemble BMA is unable to compensate for any bias left in the precipitation ensemble forecasts. On the other hand, it succeeds in calibrating ensemble members that are otherwise under-dispersed. If reliability is preferred over resolution and sharpness, DBS + EnKF + ensemble BMA performs best, making use of both processors in the H-EPS system. Conversely, for enhanced resolution and sharpness, DBS is the preferred method.
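
    Reliability of the kind weighed here is commonly diagnosed with a rank histogram: a flat histogram suggests a well-dispersed ensemble, a U-shape under-dispersion. A sketch on synthetic data (the 14-member shape mirrors the study's combined reforecasts; the numbers are illustrative):

    ```python
    import numpy as np

    def rank_histogram(ensembles, observations):
        """Rank of each observation within its sorted ensemble."""
        ensembles = np.asarray(ensembles)             # (n_cases, n_members)
        obs = np.asarray(observations)[:, None]
        ranks = (ensembles < obs).sum(axis=1)
        return np.bincount(ranks, minlength=ensembles.shape[1] + 1)

    rng = np.random.default_rng(3)
    ens = rng.normal(size=(500, 14))                  # 14 members, as in the study
    obs = rng.normal(size=500)                        # perfectly calibrated toy case
    print(rank_histogram(ens, obs))                   # roughly flat counts
    ```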

  7. Climate, wildfire, and erosion ensemble foretells more sediment in western USA watersheds

    Treesearch

    Joel B. Sankey; Jason Kreitler; Todd J. Hawbaker; Jason L. McVay; Mary Ellen Miller; Erich R. Mueller; Nicole M. Vaillant; Scott E. Lowe; Temuulen T. Sankey

    2017-01-01

    The area burned annually by wildfires is expected to increase worldwide due to climate change. Burned areas increase soil erosion rates within watersheds, which can increase sedimentation in downstream rivers and reservoirs. However, which watersheds will be impacted by future wildfires is largely unknown. Using an ensemble of climate, fire, and erosion models, we show...

  8. Nonlocal continuous models for forced vibration analysis of two- and three-dimensional ensembles of single-walled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Kiani, Keivan

    2014-06-01

    Novel nonlocal discrete and continuous models are proposed for the dynamic analysis of two- and three-dimensional ensembles of single-walled carbon nanotubes (SWCNTs). The extra van der Waals forces generated between adjacent SWCNTs by their lateral motions are evaluated via the Lennard-Jones potential function. Using a nonlocal Rayleigh beam model, the discrete and continuous models are developed for both two- and three-dimensional ensembles of SWCNTs acted upon by transverse dynamic loads. The capabilities of the proposed continuous models in capturing the vibration behavior of SWCNT ensembles are then examined through various numerical simulations. Reasonably good agreement between the results of the continuous models and those of the discrete ones is also reported. The effects of the applied load frequency, intertube spacing, and the small-scale parameter on the transverse dynamic responses of both two- and three-dimensional ensembles of SWCNTs are explained. The proposed continuous models would be very useful for dynamic analyses of highly populated ensembles of SWCNTs, whose discrete models demand substantial computational effort and modeling labor.

  9. How uncertain are climate model projections of water availability indicators across the Middle East?

    PubMed

    Hemming, Debbie; Buontempo, Carlo; Burke, Eleanor; Collins, Mat; Kaye, Neil

    2010-11-28

    The projection of robust regional climate changes over the next 50 years presents a considerable challenge for the current generation of climate models. Water cycle changes are particularly difficult to model in this area because major uncertainties exist in the representation of processes such as large-scale and convective rainfall and their feedback with surface conditions. We present climate model projections and uncertainties in water availability indicators (precipitation, run-off and drought index) for the 1961-1990 and 2021-2050 periods. Ensembles from two global climate models (GCMs) and one regional climate model (RCM) are used to examine different elements of uncertainty. Although all three ensembles capture the general distribution of observed annual precipitation across the Middle East, the RCM is consistently wetter than observations, especially over the mountainous areas. All future projections show decreasing precipitation (ensemble median between -5 and -25%) in coastal Turkey and parts of Lebanon, Syria and Israel and consistent run-off and drought index changes. The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) GCM ensemble exhibits drying across the north of the region, whereas the Met Office Hadley Centre Quantifying Uncertainties in Model Projections-Atmospheric (QUMP-A) GCM and RCM ensembles show slight drying in the north and significant wetting in the south. RCM projections also show greater sensitivity (both wetter and drier) and a wider uncertainty range than QUMP-A. The nature of these uncertainties suggests that both large-scale circulation patterns, which influence region-wide drying/wetting patterns, and regional-scale processes, which affect localized water availability, are important sources of uncertainty in these projections. To reduce large uncertainties in water availability projections, it is suggested that efforts would be well placed to focus on the understanding and modelling of both large-scale processes and their teleconnections with Middle East climate and localized processes involved in orographic precipitation.

  10. Large-scale online semantic indexing of biomedical articles via an ensemble of multi-label classification models.

    PubMed

    Papanikolaou, Yannis; Tsoumakas, Grigorios; Laliotis, Manos; Markantonatos, Nikos; Vlahavas, Ioannis

    2017-09-22

    In this paper we present the approach that we employed to deal with large scale multi-label semantic indexing of biomedical papers. This work was mainly implemented within the context of the BioASQ challenge (2013-2017), a challenge concerned with biomedical semantic indexing and question answering. Our main contribution is a MUlti-Label Ensemble method (MULE) that incorporates a McNemar statistical significance test in order to validate the combination of the constituent machine learning algorithms. Some secondary contributions include a study on the temporal aspects of the BioASQ corpus (observations apply also to BioASQ's super-set, the PubMed articles collection) and the proper parametrization of the algorithms used to deal with this challenging classification task. The ensemble method that we developed is compared to other approaches in experimental scenarios with subsets of the BioASQ corpus, giving positive results. In our participation in the BioASQ challenge we obtained first place in 2013 and second place in the four following years, steadily outperforming MTI, the indexing system of the National Library of Medicine (NLM). The results of our experimental comparisons suggest that employing a statistical significance test to validate the ensemble method's choices is the optimal approach for ensembling multi-label classifiers, especially in contexts with many rare labels.
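
    The McNemar test used by MULE to validate classifier combinations compares two classifiers on the cases where they disagree. A sketch of the continuity-corrected version on hypothetical paired outcomes (not the BioASQ data):

    ```python
    import numpy as np
    from scipy.stats import chi2

    def mcnemar(correct_a, correct_b):
        """McNemar test on paired outcomes (True = correct prediction).
        Tests whether two classifiers have the same error rate."""
        a = np.asarray(correct_a, bool)
        b = np.asarray(correct_b, bool)
        n01 = np.sum(~a & b)                    # A wrong, B right
        n10 = np.sum(a & ~b)                    # A right, B wrong
        stat = (abs(n10 - n01) - 1) ** 2 / (n10 + n01)  # continuity-corrected
        return stat, chi2.sf(stat, df=1)

    rng = np.random.default_rng(4)
    a = rng.random(1000) < 0.85                 # hypothetical per-document hits
    b = rng.random(1000) < 0.82
    print(mcnemar(a, b))
    ```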

  11. Projecting malaria hazard from climate change in eastern Africa using large ensembles to estimate uncertainty.

    PubMed

    Leedale, Joseph; Tompkins, Adrian M; Caminade, Cyril; Jones, Anne E; Nikulin, Grigory; Morse, Andrew P

    2016-03-31

    The effect of climate change on the spatiotemporal dynamics of malaria transmission is studied using an unprecedented ensemble of climate projections, employing three diverse bias correction and downscaling techniques, in order to partially account for uncertainty in climate-driven malaria projections. These large climate ensembles drive two dynamical and spatially explicit epidemiological malaria models to provide future hazard projections for the focus region of eastern Africa. While the two malaria models produce very distinct transmission patterns for the recent climate, their response to future climate change is similar in terms of sign and spatial distribution, with malaria transmission moving to higher altitudes in the East African Community (EAC) region, while transmission reduces in lowland, marginal transmission zones such as South Sudan. The climate model ensemble generally projects warmer and wetter conditions over EAC. The simulated malaria response appears to be driven by temperature rather than precipitation effects. This reduces the uncertainty due to the climate models, as precipitation trends in tropical regions are very diverse, projecting both drier and wetter conditions with the current state-of-the-art climate model ensemble. The magnitude of the projected changes differed considerably between the two dynamical malaria models, with one much more sensitive to climate change, highlighting that uncertainty in the malaria projections is also associated with the disease modelling approach.

  12. Attributing Historical Changes in Probabilities of Record-Breaking Daily Temperature and Precipitation Extreme Events

    DOE PAGES

    Shiogama, Hideo; Imada, Yukiko; Mori, Masato; ...

    2016-08-07

    Here, we describe two unprecedentedly large (100-member), long-term (61-year) ensembles based on MRI-AGCM3.2, which were driven by historical and non-warming climate forcings. These ensembles comprise the "Database for Policy Decision making for Future climate change (d4PDF)". We compare these ensembles to large ensembles based on another climate model, as well as to observed data, to investigate the influence of anthropogenic activities on historical changes in the numbers of record-breaking events, including: the annual coldest daily minimum temperature (TNn), the annual warmest daily maximum temperature (TXx) and the annual most intense daily precipitation event (Rx1day). These two climate model ensembles indicate that human activity has already had statistically significant impacts on the number of record-breaking extreme events worldwide, mainly over Northern Hemisphere land. Specifically, human activities have altered the likelihood that a wider area globally would suffer record-breaking TNn, TXx and Rx1day events than that observed over the 2001-2010 period by a factor of at least 0.6, 5.4 and 1.3, respectively. However, we also find that the estimated spatial patterns and amplitudes of anthropogenic impacts on the probabilities of record-breaking events are sensitive to the climate model and/or natural-world boundary conditions used in the attribution studies.
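
    Counting record-breaking events in an ensemble reduces to tracking running maxima per member. A sketch with hypothetical annual-extreme series shaped like d4PDF (100 members, 61 years):

    ```python
    import numpy as np

    def record_breaking_count(series):
        """Number of record-breaking values after the first year of a time
        series (e.g., annual TXx at one grid point for one member)."""
        s = np.asarray(series)
        running_max = np.maximum.accumulate(s)
        return int(np.sum(s[1:] > running_max[:-1]))

    rng = np.random.default_rng(5)
    members = rng.normal(size=(100, 61))        # 100 members x 61 years
    counts = [record_breaking_count(m) for m in members]
    print(f"mean records per member: {np.mean(counts):.2f}")
    ```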

  13. The Impact of Density and Ratio on Object-Ensemble Representation in Human Anterior-Medial Ventral Visual Cortex.

    PubMed

    Cant, Jonathan S; Xu, Yaoda

    2015-11-01

    Behavioral research has demonstrated that observers can extract summary statistics from ensembles of multiple objects. We recently showed that a region of anterior-medial ventral visual cortex, overlapping largely with the scene-sensitive parahippocampal place area (PPA), participates in object-ensemble representation. Here we investigated the encoding of ensemble density in this brain region using fMRI-adaptation. In Experiment 1, we varied density by changing the spacing between objects and found no sensitivity in PPA to such density changes. Thus, density may not be encoded in PPA, possibly because object spacing is not perceived as an intrinsic ensemble property. In Experiment 2, we varied relative density by changing the ratio of 2 types of objects comprising an ensemble, and observed significant sensitivity in PPA to such ratio change. Although colorful ensembles were shown in Experiment 2, Experiment 3 demonstrated that sensitivity to object ratio change was not driven mainly by a change in the ratio of colors. Thus, while anterior-medial ventral visual cortex is insensitive to density (object spacing) changes, it does code relative density (object ratio) within an ensemble. Object-ensemble processing in this region may thus depend on high-level visual information, such as object ratio, rather than low-level information, such as spacing/spatial frequency. © The Author 2014. Published by Oxford University Press. All rights reserved.

  14. Crossover ensembles of random matrices and skew-orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Santosh; Pandey, Akhilesh

    2011-08-15

    Highlights: We study crossover ensembles of the Jacobi family of random matrices. We consider correlations for orthogonal-unitary and symplectic-unitary crossovers. We use the method of skew-orthogonal polynomials and quaternion determinants. We prove universality of spectral correlations in crossover ensembles. We discuss applications to quantum conductance and communication theory problems. - Abstract: In a recent paper (S. Kumar, A. Pandey, Phys. Rev. E 79 (2009) 026211) we considered the Jacobi family (including Laguerre and Gaussian cases) of random matrix ensembles and reported exact solutions of crossover problems involving time-reversal symmetry breaking. In the present paper we give details of the work. We start with Dyson's Brownian motion description of random matrix ensembles and obtain universal hierarchic relations among the unfolded correlation functions. For arbitrary dimensions we derive the joint probability density (jpd) of eigenvalues for all transitions leading to unitary ensembles as equilibrium ensembles. We focus on the orthogonal-unitary and symplectic-unitary crossovers and give generic expressions for the jpd of eigenvalues, two-point kernels and n-level correlation functions. This involves generalization of the theory of skew-orthogonal polynomials to crossover ensembles. We also consider crossovers in the circular ensembles to show the generality of our method. In the large dimensionality limit, correlations in spectra with arbitrary initial density are shown to be universal when expressed in terms of a rescaled symmetry breaking parameter. Applications of our crossover results to communication theory and quantum conductance problems are also briefly discussed.
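
    The orthogonal-unitary crossover can be illustrated numerically with the interpolating Hamiltonian H = (S + i*alpha*A)/sqrt(1 + alpha^2), in the spirit of Brownian-motion crossover models; the sketch below samples nearest-neighbor spacings and is not the paper's skew-orthogonal-polynomial machinery:

    ```python
    import numpy as np

    def crossover_spacings(n=200, alpha=0.0, samples=50, seed=6):
        """Nearest-neighbor spacing samples for H = (S + i*a*A)/sqrt(1+a^2):
        alpha=0 gives GOE statistics, alpha=1 gives GUE."""
        rng = np.random.default_rng(seed)
        out = []
        for _ in range(samples):
            M = rng.normal(size=(n, n))
            S = (M + M.T) / 2                       # real symmetric part
            B = rng.normal(size=(n, n))
            A = (B - B.T) / 2                       # real antisymmetric part
            H = (S + 1j * alpha * A) / np.sqrt(1 + alpha**2)  # Hermitian
            ev = np.linalg.eigvalsh(H)
            bulk = ev[n // 4: 3 * n // 4]           # avoid spectrum edges
            s = np.diff(bulk)
            out.append(s / s.mean())                # crudely unfolded spacings
        return np.concatenate(out)

    print(crossover_spacings(alpha=0.3)[:5])
    ```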

  15. Characterization and visualization of RNA secondary structure Boltzmann ensemble via information theory.

    PubMed

    Lin, Luan; McKerrow, Wilson H; Richards, Bryce; Phonsom, Chukiat; Lawrence, Charles E

    2018-03-05

    The nearest neighbor model and associated dynamic programming algorithms allow for the efficient estimation of the RNA secondary structure Boltzmann ensemble. However, because a given RNA secondary structure only contains a fraction of the possible helices that could form from a given sequence, the Boltzmann ensemble is multimodal. Several methods exist for clustering structures and finding those modes. However, less focus is given to exploring the underlying reasons for this multimodality: the presence of conflicting basepairs. Information theory, or more specifically mutual information, provides a method to identify those basepairs that are key to the secondary structure. To this end, we find the most informative basepairs and visualize the effect of these basepairs on the secondary structure. Knowing whether a most informative basepair is present tells us not only the status of that particular pair but also provides a large amount of information about which other pairs are present or not present. We find that a few basepairs account for a large amount of the structural uncertainty. The identification of these pairs indicates small changes to sequence or stability that will have a large effect on structure. We provide a novel algorithm that uses mutual information to identify the key basepairs that lead to a multimodal Boltzmann distribution. We then visualize the effect of these pairs on the overall Boltzmann ensemble.
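
    Mutual information between base-pair indicator variables over a sampled ensemble is the core quantity here. A sketch on synthetic indicators (the correlation structure below is invented for illustration):

    ```python
    import numpy as np

    def pair_mutual_information(presence_i, presence_j):
        """Mutual information (bits) between two base-pair indicators
        sampled over a structure ensemble."""
        xi = np.asarray(presence_i, int)
        xj = np.asarray(presence_j, int)
        mi = 0.0
        for a in (0, 1):
            for b in (0, 1):
                p_ab = np.mean((xi == a) & (xj == b))
                p_a, p_b = np.mean(xi == a), np.mean(xj == b)
                if p_ab > 0:
                    mi += p_ab * np.log2(p_ab / (p_a * p_b))
        return mi

    rng = np.random.default_rng(7)
    helix = rng.random(5000) < 0.5                  # hypothetical helix on/off
    pair1 = helix & (rng.random(5000) < 0.90)       # pairs riding on that helix
    pair2 = helix & (rng.random(5000) < 0.85)
    print(f"MI = {pair_mutual_information(pair1, pair2):.3f} bits")
    ```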

  16. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors, which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
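
    The Differential Evolution Monte Carlo stage can be sketched in a few lines: each chain proposes a move along the difference of two other chains. This is a toy serial version of the idea, not SAChES itself (which runs the chains in parallel and follows with Adaptive Metropolis):

    ```python
    import numpy as np

    def demc_step(chains, log_post, gamma=None, eps=1e-6):
        """One sweep of Differential Evolution Monte Carlo: each chain
        proposes x' = x + gamma*(x_a - x_b) + noise using two other chains,
        then accepts with the Metropolis rule."""
        rng = np.random.default_rng()
        n_chains, d = chains.shape
        gamma = gamma or 2.38 / np.sqrt(2 * d)      # standard DE-MC scale
        for i in range(n_chains):
            others = [j for j in range(n_chains) if j != i]
            a, b = rng.choice(others, 2, replace=False)
            prop = chains[i] + gamma * (chains[a] - chains[b]) \
                   + eps * rng.normal(size=d)
            if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
                chains[i] = prop
        return chains

    log_post = lambda x: -0.5 * np.sum(x**2)        # toy Gaussian posterior
    chains = np.random.default_rng(8).normal(size=(16, 4)) * 3
    for _ in range(200):
        chains = demc_step(chains, log_post)
    print(chains.mean(axis=0))                      # should approach zero
    ```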

  17. Set size manipulations reveal the boundary conditions of perceptual ensemble learning.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-11-01

    Recent evidence suggests that observers can grasp patterns of feature variations in the environment with surprising efficiency. During visual search tasks where all distractors are randomly drawn from a certain distribution rather than all being homogeneous, observers are capable of learning highly complex statistical properties of distractor sets. After only a few trials (learning phase), the statistical properties of distributions - mean, variance and crucially, shape - can be learned, and these representations affect search during a subsequent test phase (Chetverikov, Campana, & Kristjánsson, 2016). To assess the limits of such distribution learning, we varied the information available to observers about the underlying distractor distributions by manipulating set size during the learning phase in two experiments. We found that robust distribution learning only occurred for large set sizes. We also used set size to assess whether the learning of distribution properties makes search more efficient. The results reveal how a certain minimum of information is required for learning to occur, thereby delineating the boundary conditions of learning of statistical variation in the environment. However, the benefits of distribution learning for search efficiency remain unclear. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Study of static and dynamic magnetic properties of Fe nanoparticles composited with activated carbon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pal, Satyendra Prakash; Kaur, Guratinder

    2016-05-23

    A nanocomposite of Fe nanoparticles with activated carbon has been synthesized to alter the magnetic spin-spin interaction and hence study the dilution effect on the static and dynamic magnetic properties of the Fe nanoparticle system. Transmission electron microscopy (TEM) images show spherical Fe nanoparticles of 13.8 nm size dispersed in the carbon matrix. Temperature dependent magnetization measurements do not show any blocking temperature right up to room temperature. The magnetic hysteresis curve, taken at 300 K, shows a small value of the coercivity, and this small hysteresis indicates the presence of an energy barrier and inherent magnetization dynamics. Langevin function fitting of the hysteresis curve gives almost the same particle size as obtained from the TEM analysis. Magnetic relaxation data, taken at a temperature of 100 K, were fitted with a combination of two exponentially decaying functions. This diluted form of the nanoparticle system, which has particle sizes in the superparamagnetic limit, behaves like a dilute ensemble of superspins with a large magnetic anisotropy barrier.
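
    Langevin-function fitting of a superparamagnetic magnetization curve, as used above to cross-check the TEM size, can be sketched as a standard curve fit; the M(B) data, temperature, and bulk Fe magnetization below are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def langevin_m(B, m_s, mu, T=300.0):
        """Superparamagnetic magnetization M = Ms*[coth(x) - 1/x],
        x = mu*B/kT, with B in tesla and moment mu in J/T."""
        x = mu * B / (1.380649e-23 * T)
        return m_s * (1.0 / np.tanh(x) - 1.0 / x)

    B = np.linspace(0.005, 1.0, 60)                 # applied field (T)
    true = langevin_m(B, 1.0, 1.2e-19)              # synthetic "measurement"
    M = true + np.random.default_rng(9).normal(0, 0.01, B.size)
    (m_s, mu), _ = curve_fit(langevin_m, B, M, p0=(0.8, 1e-19))
    d = (6 * mu / (np.pi * 1.7e6)) ** (1 / 3)       # bulk Fe Ms ~ 1.7e6 A/m
    print(f"fitted moment {mu:.2e} J/T -> magnetic diameter ~ {d*1e9:.1f} nm")
    ```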

  19. An ensemble constrained variational analysis of atmospheric forcing data and its application to evaluate clouds in CAM5: Ensemble 3DCVA and Its Application

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2016-01-05

    Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.

  1. Ensemble Downscaling of Winter Seasonal Forecasts: The MRED Project

    NASA Astrophysics Data System (ADS)

    Arritt, R. W.; Mred Team

    2010-12-01

    The Multi-Regional climate model Ensemble Downscaling (MRED) project is a multi-institutional project that is producing large ensembles of downscaled winter seasonal forecasts from coupled atmosphere-ocean seasonal prediction models. Eight regional climate models are each downscaling 15-member ensembles from the National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) and the new NASA seasonal forecast system based on the GEOS5 atmospheric model coupled with the MOM4 ocean model. This produces 240-member ensembles, i.e., 8 regional models x 15 global ensemble members x 2 global models, for each winter season (December-April) of 1982-2003. Results to date show that the combined global-regional downscaled forecasts have the greatest skill for seasonal precipitation anomalies during strong El Niño events such as 1982-83 and 1997-98. Ensemble means of area-averaged seasonal precipitation for the regional models generally track the corresponding results for the global model, though there is considerable inter-model variability amongst the regional models. For seasons and regions where area-mean precipitation is accurately simulated, the regional models bring added value by extracting greater spatial detail from the global forecasts, mainly due to better resolution of terrain in the regional models. Our results also emphasize that an ensemble approach is essential to realizing the added value from the combined global-regional modeling system.

  2. Size and Velocity Distributions of Particles and Droplets in Spray Combustion Systems.

    DTIC Science & Technology

    1984-11-01

    constructed, calibrated, and successfully applied. Our efforts to verify the performance and accuracy of this diagnostic led to a parallel research...array will not be an acceptable detection system for size distribution measurements by this method. VI. Conclusions This study has led to the following...radiation is also useful for particle size analysis by ensemble multiangle scattering. One problem for all multiwavelength or multiparticle diagnostics for

  3. Quantifying the Influence of Dynamics Across Scales on Regional Climate Uncertainty in Western North America

    NASA Astrophysics Data System (ADS)

    Goldenson, Naomi L.

    Uncertainties in climate projections at the regional scale are inevitably larger than those for global mean quantities. Here, focusing on western North American regional climate, several approaches are taken to quantifying uncertainties starting with the output of global climate model projections. Internal variance is found to be an important component of the projection uncertainty up and down the west coast. To quantify internal variance and other projection uncertainties in existing climate models, we evaluate different ensemble configurations. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter offers the advantage of also producing estimates of uncertainty due to model differences. We conclude that climate projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible. We then conduct a small single-model ensemble of simulations using the Model for Prediction Across Scales with physics from the Community Atmosphere Model Version 5 (MPAS-CAM5) and prescribed historical sea surface temperatures. In the global variable resolution domain, the finest resolution (at 30 km) is in our region of interest over western North America and upwind over the northeast Pacific. In the finer-scale region, extreme precipitation from atmospheric rivers (ARs) is connected to tendencies in seasonal snowpack in the mountains of the Northwest United States and California. In most of the Cascade Mountains, winters with more AR days are associated with less snowpack, in contrast to the northern Rockies and California's Sierra Nevada. In snowpack observations and reanalysis of the atmospheric circulation, we find similar relationships between the frequency of AR events and winter season snowpack in the western United States. In spring, however, there is not a clear relationship between the number of AR days and seasonal mean snowpack across the model ensemble, so caution is urged in interpreting the historical record in the spring season. Finally, the representation of the El Niño-Southern Oscillation (ENSO), an important source of interannual climate predictability in some regions, is explored in a large single-model ensemble using ensemble Empirical Orthogonal Functions (EOFs) to find modes of variance across the entire ensemble at once. The leading EOF is ENSO. The principal components (PCs) of the next three EOFs exhibit a lead-lag relationship with the ENSO signal captured in the first PC. The second PC, with most of its variance in the summer season, is the most strongly cross-correlated with the first. This approach offers insight into how the model considered represents this important atmosphere-ocean interaction. Taken together, these varied approaches quantify the implications of climate projections regionally, identify processes that make snowpack water resources vulnerable, and seek insight into how to better simulate the large-scale climate modes controlling regional variability.
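
    Ensemble EOFs of the kind described, computed across all members at once, reduce to an SVD of the stacked anomaly matrix. A sketch on random data (dimensions loosely modeled on a large single-model ensemble; nothing here is the dissertation's output):

    ```python
    import numpy as np

    def ensemble_eofs(field, n_modes=3):
        """EOFs across a concatenated ensemble: rows are (member, time)
        samples, columns are spatial points. Returns spatial patterns,
        principal components, and explained-variance fractions."""
        X = field - field.mean(axis=0)          # remove the ensemble-mean state
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        explained = s**2 / np.sum(s**2)
        return Vt[:n_modes], U[:, :n_modes] * s[:n_modes], explained[:n_modes]

    rng = np.random.default_rng(10)
    # Hypothetical: 40 members x 120 months stacked, 500 spatial points
    data = rng.normal(size=(40 * 120, 500))
    patterns, pcs, frac = ensemble_eofs(data)
    print(frac)
    ```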

  4. Classifying injury narratives of large administrative databases for surveillance-A practical approach combining machine learning ensembles and human review.

    PubMed

    Marucci-Wellman, Helen R; Corns, Helen L; Lehto, Mark R

    2017-01-01

    Injury narratives are now available in real time and include useful information for injury surveillance and prevention. However, manual classification of the cause or events leading to injury found in large batches of narratives, such as workers compensation claims databases, can be prohibitive. In this study we compare the utility of four machine learning algorithms (Naïve Bayes single-word and bi-gram models, Support Vector Machine, and Logistic Regression) for classifying narratives into Bureau of Labor Statistics Occupational Injury and Illness event-leading-to-injury classifications for a large workers compensation database. These algorithms are known to do well classifying narrative text and are fairly easy to implement with off-the-shelf software packages such as Python. We propose human-machine learning ensemble approaches which maximize the power and accuracy of the algorithms for machine-assigned codes and allow for strategic filtering of rare, emerging or ambiguous narratives for manual review. We compare human-machine approaches based on filtering on the prediction strength of the classifier vs. agreement between algorithms. Regularized Logistic Regression (LR) was the best-performing algorithm alone. Using this algorithm and filtering out the bottom 30% of predictions for manual review resulted in high accuracy (overall sensitivity/positive predictive value of 0.89) of the final machine-human coded dataset. The best pairings of algorithms included Naïve Bayes with Support Vector Machine, whereby the triple ensemble NB-SW + NB-BIGRAM + SVM had very high performance (0.93 overall sensitivity/positive predictive value, i.e., high accuracy) across both large and small categories, leaving 41% of the narratives for manual review. Integrating LR into this ensemble mix improved performance only slightly. For large administrative datasets we propose incorporation of methods based on human-machine pairings such as we have done here, utilizing readily available off-the-shelf machine learning techniques and resulting in only a fraction of narratives that require manual review. Human-machine ensemble methods are likely to improve performance over total manual coding. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
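
    One of the human-machine strategies compared, filtering by agreement between algorithms, can be sketched directly; the three prediction vectors below are hypothetical stand-ins for NB, SVM, and LR outputs:

    ```python
    import numpy as np

    def route_for_review(pred_a, pred_b, pred_c):
        """Keep machine codes where all classifiers agree; flag the
        disagreements for manual review."""
        pred_a, pred_b, pred_c = map(np.asarray, (pred_a, pred_b, pred_c))
        agree = (pred_a == pred_b) & (pred_b == pred_c)
        machine_codes = np.where(agree, pred_a, -1)   # -1 marks "needs human"
        return machine_codes, ~agree

    a = np.array([3, 7, 7, 1])
    b = np.array([3, 7, 2, 1])
    c = np.array([3, 7, 2, 1])
    codes, review = route_for_review(a, b, c)
    print(codes, f"{review.mean():.0%} to manual review")
    ```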

  5. Climate Modeling with a Million CPUs

    NASA Astrophysics Data System (ADS)

    Tobis, M.; Jackson, C. S.

    2010-12-01

    Meteorological, oceanographic, and climatological applications have been at the forefront of scientific computing since its inception. The trend toward ever larger and more capable computing installations is unabated. However, much of the increase in capacity is accompanied by an increase in parallelism and a concomitant increase in complexity. An increase of at least four additional orders of magnitude in the computational power of scientific platforms is anticipated. It is unclear how individual climate simulations can continue to make effective use of the largest platforms. Conversion of existing community codes to higher resolution, or to more complex phenomenology, or both, presents daunting design and validation challenges. Our alternative approach is to use the expected resources to run very large ensembles of simulations of modest size, rather than to await the emergence of very large simulations. We are already doing this in exploring the parameter space of existing models using the Multiple Very Fast Simulated Annealing algorithm, which was developed for seismic imaging. Our experiments have the dual intentions of tuning the model and identifying ranges of parameter uncertainty. Our approach is less strongly constrained by the dimensionality of the parameter space than are competing methods. Nevertheless, scaling up remains costly. Much could be achieved by increasing the dimensionality of the search and adding complexity to the search algorithms. Such ensemble approaches scale naturally to very large platforms. Extensions of the approach are anticipated. For example, structurally different models can be tuned to comparable effectiveness. This can provide an objective test for which there is no realistic precedent with smaller computations. We find ourselves inventing new code to manage our ensembles. Component computations involve tens to hundreds of CPUs and tens to hundreds of hours. The results of these moderately large parallel jobs influence the scheduling of subsequent jobs, and complex algorithms may be easily contemplated for this. The operating system concept of a "thread" re-emerges at a very coarse level, where each thread manages atomic computations of thousands of CPU-hours. That is, rather than multiple threads operating on a processor, at this level, multiple processors operate within a single thread. In collaboration with the Texas Advanced Computing Center, we are developing a software library at the system level, which should facilitate the development of computations involving complex strategies which invoke large numbers of moderately large multi-processor jobs. While this may have applications in other sciences, our key intent is to better characterize the coupled behavior of a very large set of climate model configurations.

  6. Stable statistical representations facilitate visual search.

    PubMed

    Corbett, Jennifer E; Melcher, David

    2014-10-01

    Observers represent the average properties of object ensembles even when they cannot identify individual elements. To investigate the functional role of ensemble statistics, we examined how modulating statistical stability affects visual search. We varied the mean and/or individual sizes of an array of Gabor patches while observers searched for a tilted target. In "stable" blocks, the mean and/or local sizes of the Gabors were constant over successive displays, whereas in "unstable" baseline blocks they changed from trial to trial. Although there was no relationship between the context and the spatial location of the target, observers found targets faster (as indexed by faster correct responses and fewer saccades) as the global mean size became stable over several displays. Building statistical stability also facilitated scanning the scene, as measured by larger saccadic amplitudes, faster saccadic reaction times, and shorter fixation durations. These findings suggest a central role for peripheral visual information, creating context to free resources for detailed processing of salient targets and maintaining the illusion of visual stability.

  7. Multi-Parameter Scattering Sensor and Methods

    NASA Technical Reports Server (NTRS)

    Greenberg, Paul S. (Inventor); Fischer, David G. (Inventor)

    2016-01-01

    Methods, detectors and systems detect particles and/or measure particle properties. According to one embodiment, a detector for detecting particles comprises: a sensor for receiving radiation scattered by an ensemble of particles; and a processor for determining a physical parameter for the detector, or an optimal detection angle or a bound for an optimal detection angle, for measuring at least one moment or integrated moment of the ensemble of particles, the physical parameter, or detection angle, or detection angle bound being determined based on one or more of properties (a) and/or (b) and/or (c) and/or (d) or ranges for one or more of properties (a) and/or (b) and/or (c) and/or (d), wherein (a)-(d) are the following: (a) is a wavelength of light incident on the particles, (b) is a count median diameter or other characteristic size parameter of the particle size distribution, (c) is a standard deviation or other characteristic width parameter of the particle size distribution, and (d) is a refractive index of particles.

  8. Fluctuation instability of the Dirac Sea in quark models of strong interactions

    NASA Astrophysics Data System (ADS)

    Zinovjev, G. M.; Molodtsov, S. V.

    2016-03-01

    A number of exactly integrable (quark) models of quantum field theory that feature an infinite correlation length are considered. An instability of the standard vacuum quark ensemble, a Dirac sea (in spacetimes of dimension higher than three), is highlighted. It is due to a strong ground-state degeneracy, which, in turn, stems from a special character of the energy distribution. In the case where the momentum cutoff parameter tends to infinity, this distribution becomes infinitely narrow and leads to large (unlimited) fluctuations. A comparison of the results for various vacuum ensembles, including a Dirac sea, a neutral ensemble, a color superconductor, and a Bardeen-Cooper-Schrieffer (BCS) state, was performed. In the presence of color quark interaction, a BCS state is unambiguously chosen as the ground state of the quark ensemble.

  9. Fast adaptive flat-histogram ensemble to enhance the sampling in large systems

    NASA Astrophysics Data System (ADS)

    Xu, Shun; Zhou, Xin; Jiang, Yi; Wang, YanTing

    2015-09-01

    An efficient novel algorithm was developed to estimate the density of states (DOS) for large systems by calculating the ensemble means of an extensive physical variable, such as the potential energy U, in generalized canonical ensembles in order to interpolate the interior reverse temperature curve β(U) = dS(U)/dU, where S(U) is the logarithm of the DOS. This curve is computed with different accuracies in different energy regions to capture the dependence of the reverse temperature on U without setting a prior grid in the U space. By combining with a U-compression transformation, we decrease the computational complexity from O(N^{3/2}) in the normal Wang-Landau type method to O(N^{1/2}) in the current algorithm, where N is the number of degrees of freedom of the system. The efficiency of the algorithm is demonstrated by applying it to Lennard-Jones fluids with various N, along with its ability to find different macroscopic states, including metastable states.
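
    For contrast with the paper's approach, the "normal Wang-Landau type" baseline it improves upon can be sketched for a small 2D Ising model (all parameters illustrative; the paper's algorithm instead interpolates β(U) from generalized-canonical ensemble means):

    ```python
    import numpy as np

    def wang_landau(L=4, f_final=1e-6, flat=0.8, seed=11):
        """Plain Wang-Landau estimate of log g(E) for an LxL periodic Ising
        model. Small lattice keeps this toy version fast."""
        rng = np.random.default_rng(seed)
        N = L * L
        spins = rng.choice([-1, 1], size=(L, L))
        def energy(s):
            return -int(np.sum(s * np.roll(s, 1, 0)) + np.sum(s * np.roll(s, 1, 1)))
        logg = np.zeros(N + 1)                  # energy bins: -2N..2N step 4
        hist = np.zeros(N + 1, int)
        E, lnf = energy(spins), 1.0
        while lnf > f_final:
            for _ in range(10000):
                i, j = rng.integers(L, size=2)
                dE = 2 * spins[i, j] * (spins[(i+1) % L, j] + spins[(i-1) % L, j]
                                        + spins[i, (j+1) % L] + spins[i, (j-1) % L])
                k, knew = (E + 2 * N) // 4, (E + dE + 2 * N) // 4
                if np.log(rng.random()) < logg[k] - logg[knew]:   # accept flip
                    spins[i, j] *= -1
                    E += dE
                    k = knew
                logg[k] += lnf
                hist[k] += 1
            visited = hist[hist > 0]
            if visited.min() > flat * visited.mean():   # histogram flat enough
                hist[:] = 0
                lnf /= 2                                 # refine modification factor
        return logg - logg[0] + np.log(2)                # normalize: g(ground) = 2

    print(np.round(wang_landau(), 2))
    ```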

  10. Scaling laws and fluctuations in the statistics of word frequencies

    NASA Astrophysics Data System (ADS)

    Gerlach, Martin; Altmann, Eduardo G.

    2014-11-01

    In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps' law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps' and Taylor's) by modeling the usage of words using a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations leads to quenched averages, turns the vocabulary size into a non-self-averaging quantity, and explains the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of lexical richness of texts with different lengths.
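
    The "simple sampling argument" that the paper contrasts with its topic-model explanation can be reproduced directly: draw fixed-length texts i.i.d. from a Zipfian vocabulary and measure the mean and standard deviation of vocabulary size. Under this null model the relative fluctuations decay, unlike the Taylor's-law behavior found in the data (all sizes below are illustrative):

    ```python
    import numpy as np

    def vocabulary_sizes(text_length, zipf_exponent=1.0, n_words=50000,
                         n_texts=200, seed=12):
        """Vocabulary size (distinct words) of synthetic fixed-length texts
        drawn i.i.d. from a Zipfian word-frequency distribution."""
        rng = np.random.default_rng(seed)
        p = 1.0 / np.arange(1, n_words + 1) ** zipf_exponent
        p /= p.sum()
        sizes = []
        for _ in range(n_texts):
            counts = rng.multinomial(text_length, p)
            sizes.append(int(np.count_nonzero(counts)))
        return np.array(sizes)

    for M in (10_000, 40_000):
        v = vocabulary_sizes(M)
        print(f"M={M}: mean V={v.mean():.0f} (Heaps), std={v.std():.1f} (Taylor)")
    ```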

  11. Ensemble-based diagnosis of the large-scale processes associated with multiple high-impact weather events over North America during late October 2007

    NASA Astrophysics Data System (ADS)

    Moore, B. J.; Bosart, L. F.; Keyser, D.

    2013-12-01

    During late October 2007, the interaction between a deep polar trough and Tropical Cyclone (TC) Kajiki off the eastern Asian coast perturbed the North Pacific jet stream and resulted in the development of a high-amplitude Rossby wave train extending into North America, contributing to three concurrent high-impact weather events: wildfires in southern California associated with strong Santa Ana winds, a cold surge into eastern Mexico, and widespread heavy rainfall (~150 mm) in the south-central United States. Observational analysis indicates that these high-impact weather events were all dynamically linked with the development of a major high-latitude ridge over the eastern North Pacific and western North America and a deep trough over central North America. In this study, global operational ensemble forecasts from the European Centre for Medium-Range Weather Forecasts (ECMWF) obtained from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) archive are used to characterize the medium-range predictability of the large-scale flow pattern associated with the three events and to diagnose the large-scale atmospheric processes favorable, or unfavorable, for the occurrence of the three events. Examination of the ECMWF forecasts leading up to the time period of the three high-impact weather events (~23-25 October 2007) indicates that ensemble spread (i.e., uncertainty) in the 500-hPa geopotential height field develops in connection with downstream baroclinic development (DBD) across the North Pacific, associated with the interaction between TC Kajiki and the polar trough along the eastern Asian coast, and subsequently moves downstream into North America, yielding considerable uncertainty with respect to the structure, amplitude, and position of the ridge-trough pattern over North America. Ensemble sensitivity analysis conducted for key sensible weather parameters corresponding to the three high-impact weather events, including relative humidity, temperature, and precipitation, demonstrates quantitatively that all three high-impact weather events are closely linked with the development of the ridge-trough pattern over North America. Moreover, results of this analysis indicate that the development of the ridge-trough pattern is modulated by DBD and cyclogenesis upstream over the central and eastern North Pacific. Specifically, ensemble members exhibiting less intense cyclogenesis and a more poleward cyclone track over the central and eastern North Pacific feature the development of a poleward-displaced ridge over the eastern North Pacific and western North America and a cut-off low over the Intermountain West, an unfavorable scenario for the occurrence of the three high-impact weather events. Conversely, ensemble members exhibiting more intense cyclogenesis and a less poleward cyclone track feature persistent ridging along the western coast of North America and trough development over central North America, establishing a favorable flow pattern for the three high-impact weather events. Results demonstrate that relatively small initial differences in the large-scale flow pattern over the North Pacific among ensemble members can result in large uncertainty in the forecast downstream flow response over North America.
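
    Ensemble sensitivity analysis of the kind applied here regresses a forecast metric on an earlier state variable across members. A sketch with invented member values:

    ```python
    import numpy as np

    def ensemble_sensitivity(J, x):
        """Ensemble sensitivity of forecast metric J to an earlier state
        variable x: regression slope cov(J, x)/var(x) across members."""
        J = np.asarray(J, float)
        x = np.asarray(x, float)
        return np.cov(J, x)[0, 1] / np.var(x, ddof=1)

    rng = np.random.default_rng(13)
    z500 = rng.normal(size=50)                       # earlier 500-hPa height at a point
    rain = 2.0 * z500 + rng.normal(scale=0.5, size=50)  # hypothetical rainfall metric
    print(f"sensitivity: {ensemble_sensitivity(rain, z500):.2f} per unit z500")
    ```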

  12. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. In recent years, more precise initialization techniques of coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean. Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role for such climate forecasts. The ocean memory, due to its heat capacity, holds large potential skill. In recent years, more precise initialization techniques of coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect. Applying slightly perturbed predictions to trigger the famous butterfly effect results in an ensemble. Evaluating the whole ensemble with its ensemble average, rather than a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, which applies the averaging during the model run and is called the ensemble dispersion filter, yields more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show an increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.
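
    The ensemble dispersion filter itself, shifting each member's ocean state toward the ensemble mean at fixed intervals, is simple to express; the sketch below uses a toy state vector and a hypothetical relaxation-strength parameter:

    ```python
    import numpy as np

    def dispersion_filter(ocean_states, strength=1.0):
        """Shift each member's ocean state toward the ensemble mean
        (strength=1 sets every member exactly to the mean; 0 leaves
        members untouched). Applied at seasonal intervals in the study."""
        mean = ocean_states.mean(axis=0, keepdims=True)
        return (1 - strength) * ocean_states + strength * mean

    rng = np.random.default_rng(14)
    states = rng.normal(size=(10, 1000))        # 10 members, toy state vector
    print(states.std(axis=0).mean(),            # spread before filtering
          dispersion_filter(states, 0.5).std(axis=0).mean())  # reduced spread
    ```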
Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29475799','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29475799"><span>Reprint of "Investigating ensemble perception of emotions in autistic and typical children and adolescents".</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth</p> <p>2018-01-01</p> <p>Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4718788','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4718788"><span>Predicting protein function and other biomedical characteristics with heterogeneous ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Whalen, Sean; Pandey, Om Prakash</p> <p>2015-01-01</p> <p>Prediction problems in biomedical sciences, including protein function prediction (PFP), are generally quite difficult. This is due in part to incomplete knowledge of the cellular phenomenon of interest, the appropriateness and data quality of the variables and measurements used for prediction, as well as a lack of consensus regarding the ideal predictor for specific problems. In such scenarios, a powerful approach to improving prediction performance is to construct heterogeneous ensemble predictors that combine the output of diverse individual predictors that capture complementary aspects of the problems and/or datasets. 
In this paper, we demonstrate the potential of such heterogeneous ensembles, derived from stacking and ensemble selection methods, for addressing PFP and other similar biomedical prediction problems. Deeper analysis of these results shows that the superior predictive ability of these methods, especially stacking, can be attributed to their attention to the following aspects of the ensemble learning process: (i) better balance of diversity and performance, (ii) more effective calibration of outputs and (iii) more robust incorporation of additional base predictors. Finally, to make the effective application of heterogeneous ensembles to large complex datasets (big data) feasible, we present DataSink, a distributed ensemble learning framework, and demonstrate its sound scalability using the examined datasets. DataSink is publicly available from https://github.com/shwhalen/datasink. PMID:26342255</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28160619','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28160619"><span>Ensemble perception of emotions in autistic and typical children and adolescents.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth</p> <p>2017-04-01</p> <p>Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Authors. Published by Elsevier Ltd.. 
415. Ensemble perception of emotions in autistic and typical children and adolescents.

    PubMed

    Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth

    2017-04-01

    Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

416. Statistical Compression of Wind Speed Data

    NASA Astrophysics Data System (ADS)

    Tagle, F.; Castruccio, S.; Crippa, P.; Genton, M.

    2017-12-01

    In this work we introduce a lossy compression approach that utilizes a stochastic wind generator based on a non-Gaussian distribution to reproduce the internal climate variability of daily wind speed as represented by the CESM Large Ensemble over Saudi Arabia. Stochastic wind generators, and stochastic weather generators more generally, are statistical models that aim to match certain statistical properties of the data on which they are trained. They have been used extensively in applications ranging from agricultural models to climate impact studies. In this novel context, the parameters of the fitted model can be interpreted as encoding the information contained in the original uncompressed data. The statistical model is fit to only 3 of the 30 ensemble members, and it adequately captures the variability of the ensemble in terms of the seasonal and interannual variability of daily wind speed. To deal with such a large spatial domain, it is partitioned into 9 regions, and the model is fit independently to each of these. We further discuss a recent refinement of the model, which relaxes this assumption of regional independence by introducing a large-scale component that interacts with the fine-scale regional effects.
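    As a toy illustration of the parameters-as-compression idea in this record: fit a simple parametric distribution per region, keep only the fitted parameters, and sample from them to regenerate the variability. The paper's space-time non-Gaussian generator is far richer than this; the Weibull choice and all numbers below are assumptions for the sketch:

```python
# "Compression by statistical model": per-region Weibull fits stand in for the
# full stochastic wind generator; only the fitted parameters are stored.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_regions, n_days = 9, 3650
wind = rng.weibull(2.0, size=(n_regions, n_days)) * 6.0  # synthetic daily wind speed

# The "compressed representation" is just (shape, loc, scale) per region.
params = [stats.weibull_min.fit(wind[r], floc=0) for r in range(n_regions)]
print("compressed representation:\n", np.round(params, 2))

# "Decompression": draw synthetic days reproducing each region's climatology.
synthetic = np.stack([stats.weibull_min.rvs(*p, size=n_days, random_state=1)
                      for p in params])
print("observed vs synthetic mean, region 0:",
      wind[0].mean(), synthetic[0].mean())
```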
417. Two takes on the ecosystem impacts of climate change and fishing: Comparing a size-based and a species-based ecosystem model in the central North Pacific

    NASA Astrophysics Data System (ADS)

    Woodworth-Jefcoats, Phoebe A.; Polovina, Jeffrey J.; Howell, Evan A.; Blanchard, Julia L.

    2015-11-01

    We compare two ecosystem model projections of 21st century climate change and fishing impacts in the central North Pacific. Both a species-based and a size-based ecosystem modeling approach are examined. While both models project a decline in biomass across all sizes in response to climate change and a decline in large fish biomass in response to increased fishing mortality, the models vary significantly in their handling of climate and fishing scenarios. For example, based on the same climate forcing, the species-based model projects a 15% decline in catch by the end of the century while the size-based model projects a 30% decline. Disparities in the models' output highlight the limitations of each approach by showing the influence model structure can have on model output. The aspects of bottom-up change to which each model is most sensitive appear linked to model structure, as does the propagation of interannual variability through the food web and the relative impact of combined top-down and bottom-up change. Incorporating integrated size- and species-based ecosystem modeling approaches into future ensemble studies may help separate the influence of model structure from robust projections of ecosystem change.

418. Ensemble support vector machine classification of dementia using structural MRI and mini-mental state examination.

    PubMed

    Sørensen, Lauge; Nielsen, Mads

    2018-05-15

    The International Challenge for Automated Prediction of MCI from MRI data offered independent, standardized comparison of machine learning algorithms for multi-class classification of normal control (NC), mild cognitive impairment (MCI), converting MCI (cMCI), and Alzheimer's disease (AD) using brain imaging and general cognition. We proposed to use an ensemble of support vector machines (SVMs) that combined bagging without replacement and feature selection. SVM is the most commonly used algorithm in multivariate classification of dementia, and it was therefore valuable to evaluate the potential benefit of ensembling this type of classifier. The ensemble SVM, using either a linear or a radial basis function (RBF) kernel, achieved multi-class classification accuracies of 55.6% and 55.0% in the challenge test set (60 NC, 60 MCI, 60 cMCI, 60 AD), resulting in a third place in the challenge. Similar feature subset sizes were obtained for both kernels, and the most frequently selected MRI features were the volumes of the two hippocampal subregions left presubiculum and right subiculum. Post-challenge analysis revealed that enforcing a minimum number of selected features and increasing the number of ensemble classifiers improved classification accuracy up to 59.1%. The ensemble SVM outperformed single SVM classifications consistently in the challenge test set. Ensemble methods using bagging and feature selection can improve the performance of the commonly applied SVM classifier in dementia classification. This resulted in competitive classification accuracies in the International Challenge for Automated Prediction of MCI from MRI data. Copyright © 2018 Elsevier B.V. All rights reserved.
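    A hedged sketch of the core construction in this record, an SVM ensemble combining bagging without replacement and random feature selection, is below. Details such as the subsample fraction and feature fraction are illustrative assumptions, not the challenge entry's settings:

```python
# SVM ensemble via bagging *without* replacement plus random feature subsets.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic 4-class problem standing in for NC / MCI / cMCI / AD.
X, y = make_classification(n_samples=240, n_features=30, n_classes=4,
                           n_informative=12, random_state=0)

ensemble = BaggingClassifier(
    SVC(kernel="rbf", C=1.0),
    n_estimators=50,
    max_samples=0.63, bootstrap=False,           # sampling without replacement
    max_features=0.5, bootstrap_features=False,  # random feature selection
    random_state=0,
)
print(cross_val_score(ensemble, X, y, cv=5).mean())
```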
419. The Role of the Conductor's Goal Orientation and Use of Shared Performance Cues on Collegiate Instrumentalists' Motivational Beliefs and Performance in Large Musical Ensembles

    ERIC Educational Resources Information Center

    Matthews, Wendy K.; Kitsantas, Anastasia

    2013-01-01

    This study examined the effects of the conductor's goal orientation (mastery vs. performance) and use of shared performance cues (basic vs. interpretive vs. expressive) on instrumentalists' self-efficacy, collective efficacy, attributions, and performance. Eighty-one college instrumentalists from two musical ensembles participated in the study. It…

420. Jazz Style and Articulation: How to Get Your Band or Choir to Swing

    ERIC Educational Resources Information Center

    Tolson, Jerry

    2012-01-01

    The interpretation of jazz style is crucial to the element of swing in any jazz ensemble performance. Today, many charts for both large and small instrumental and vocal jazz ensembles are well marked with articulations and expression markings. However, in some cases, there is nothing to guide the musician. This article addresses some common jazz…

421. Quantum teleportation between remote atomic-ensemble quantum memories.

    PubMed

    Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

    2012-12-11

    Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel," quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895-1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieve the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ∼10^8 rubidium atoms and connected by a 150-m optical fiber. The spin wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell state measurements with another single photon that is entangled with the spin wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as a teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing.

422. Data assimilation for groundwater flow modelling using Unbiased Ensemble Square Root Filter: Case study in Guantao, North China Plain

    NASA Astrophysics Data System (ADS)

    Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.

    2017-12-01

    Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties. This provides critical information for decision makers in water resources management. This study aims to evaluate a data assimilation system for the Guantao groundwater flow model, coupled with a one-dimensional soil column simulation (Hydrus 1D), using an Unbiased Ensemble Square Root Filter (UnEnSRF) originating from the Ensemble Kalman Filter (EnKF) to update parameters and states, separately or simultaneously. To simplify the coupling between the unsaturated and saturated zones, a linear relationship obtained from analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike the EnKF, the UnEnSRF updates the parameter ensemble mean and the ensemble perturbations separately. In order to keep the ensemble filter working well during the data assimilation, two factors are introduced in the study. One, a damping factor, dampens the update amplitude of the posterior ensemble mean to avoid unrealistic values. The other, an inflation factor, relaxes the posterior ensemble perturbations towards the prior to avoid filter inbreeding problems. The sensitivities of the two factors are studied and their favorable values for the Guantao model are determined. The appropriate observation error and ensemble size were also determined to facilitate the further analysis. This study demonstrated that the data assimilation of both model parameters and states gives a smaller model prediction error but with larger uncertainty, while the data assimilation of only model states provides a smaller predictive uncertainty but with a larger model prediction error. Data assimilation in a groundwater flow model will improve model prediction and at the same time make the model converge to the true parameters, which provides a successful basis for applications in real-time modelling or real-time control strategies in groundwater resources management.
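    The two tuning factors described in this record can be illustrated in a stylized ensemble square-root update. This is a generic sketch, not the Guantao UnEnSRF code: the observation operator is taken as the identity, the state is low-dimensional, and the particular scaling of the perturbations is an assumption chosen for simplicity:

```python
# Stylized ensemble square-root update with a damping factor on the mean
# update and an inflation factor on the posterior perturbations.
import numpy as np

def damped_inflated_update(ens, y_obs, obs_var, damping=0.5, inflation=1.1):
    """ens: (n_members, n_state) prior ensemble; y_obs: observed state (H = I)."""
    mean = ens.mean(axis=0)
    pert = ens - mean
    p = pert.var(axis=0, ddof=1)            # prior variance (diagonal approx.)
    k = p / (p + obs_var)                   # Kalman gain for identity H
    mean_post = mean + damping * k * (y_obs - mean)   # dampened mean update
    shrink = np.sqrt(1.0 - damping * k)     # square-root perturbation factor
    pert_post = inflation * shrink * pert   # relax perturbations toward prior
    return mean_post + pert_post

rng = np.random.default_rng(0)
ens = rng.normal(10.0, 2.0, size=(40, 3))  # 40 members, 3 state variables
post = damped_inflated_update(ens, y_obs=np.array([12.0, 11.0, 9.0]), obs_var=1.0)
print(post.mean(axis=0), post.std(axis=0, ddof=1))
```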
423. New insights into galaxy structure from GALPHAT - I. Motivation, methodology and benchmarks for Sérsic models

    NASA Astrophysics Data System (ADS)

    Yoon, Ilsang; Weinberg, Martin D.; Katz, Neal

    2011-06-01

    We introduce a new galaxy image decomposition tool, GALPHAT (GALaxy PHotometric ATtributes), which is a front-end application of the Bayesian Inference Engine (BIE), a parallel Markov chain Monte Carlo package, to provide full posterior probability distributions and reliable confidence intervals for all model parameters. The BIE relies on GALPHAT to compute the likelihood function. GALPHAT generates scale-free cumulative image tables for the desired model family with precise error control. Interpolation of this table yields accurate pixellated images with any centre, scale and inclination angle. GALPHAT then rotates the image by position angle using a Fourier shift theorem, yielding high-speed, accurate likelihood computation. We benchmark this approach using an ensemble of simulated Sérsic model galaxies over a wide range of observational conditions: the signal-to-noise ratio S/N, the ratio of galaxy size to the point spread function (PSF) and the image size, and errors in the assumed PSF; and a range of structural parameters: the half-light radius re and the Sérsic index n. We characterize the strength of parameter covariance in the Sérsic model, which increases with S/N and n, and the results strongly motivate the need for the full posterior probability distribution in galaxy morphology analyses and later inferences. The test results for simulated galaxies successfully demonstrate that, with a careful choice of Markov chain Monte Carlo algorithms and fast model image generation, GALPHAT is a powerful analysis tool for reliably inferring morphological parameters from a large ensemble of galaxies over a wide range of different observational conditions.

424. An Ensemble Approach to Understanding the ENSO Response to Climate Change

    NASA Astrophysics Data System (ADS)

    Stevenson, S.; Capotondi, A.; Fasullo, J.; Otto-Bliesner, B. L.

    2017-12-01

    The dynamics of the El Niño/Southern Oscillation (ENSO) are known to be sensitive to changes in background climate conditions, as well as atmosphere/ocean feedbacks. However, the degree to which shifts in ENSO characteristics can be robustly attributed to external climate forcings remains unknown. Efforts to assess these changes in a multi-model framework are subject to uncertainties due to both differing model physics and internal ENSO variability. New community ensembles created at the National Center for Atmospheric Research and the NOAA Geophysical Fluid Dynamics Laboratory are ideally suited to addressing this problem, providing many realizations of the climate of the 850-2100 period with a combination of both natural and anthropogenic climate forcing factors.
    Here we analyze the impacts of external forcing on El Niño and La Niña evolution using four sets of simulations: the CESM Last Millennium Ensemble (CESM-LME), which covers the 850-2005 period and provides long-term context for forced responses; the Large Ensemble (CESM-LE), which includes 20th century and 21st century (RCP8.5) projections; the Medium Ensemble (CESM-ME), which is composed of 21st century RCP4.5 projections; and a large ensemble with the GFDL ESM2M, which includes 20th century and RCP8.5 projections. In the CESM, ENSO variance increases slightly over the 20th century in all ensembles, with the effects becoming much larger during the 21st. The slower increase in variance over the 20th century is shown to arise from compensating influences from greenhouse gas (GHG) and anthropogenic aerosol emissions, which give way to GHG-dominated effects by 2100. However, the 21st-century variance increase is not robust: the CESM and the ESM2M differ drastically in their ENSO projections. The mechanisms for these inter-model differences are discussed, as are the implications for the design of future multi-model ENSO projection experiments.

425. The role of ensemble post-processing for modeling the ensemble tail

    NASA Astrophysics Data System (ADS)

    Van De Vyver, Hans; Van Schaeybroeck, Bert; Vannitsem, Stéphane

    2016-04-01

    Over the past decades, the numerical weather prediction community has witnessed a paradigm shift from deterministic to probabilistic forecasting and state estimation (Buizza and Leutbecher, 2015; Buizza et al., 2008), in an attempt to quantify the uncertainties associated with initial-condition and model errors. An important benefit of a probabilistic framework is the improved prediction of extreme events. However, one may ask to what extent such model estimates contain information on the occurrence probability of extreme events and how this information can be optimally extracted. Different approaches have been proposed and applied to real-world systems which, based on extreme value theory, allow the estimation of extreme-event probabilities conditional on forecasts and state estimates (Ferro, 2007; Friederichs, 2010). Using ensemble predictions generated with a model of low dimensionality, a thorough investigation is presented quantifying the change of predictability of extreme events associated with ensemble post-processing and other influencing factors, including the finite ensemble size, lead time and model assumption, and the use of different covariates (ensemble mean, maximum, spread...) for modeling the tail distribution. Tail modeling is performed by deriving extreme-quantile estimates using a peak-over-threshold representation (generalized Pareto distribution) or quantile regression. Common ensemble post-processing methods aim to improve mostly the ensemble mean and spread of a raw forecast (Van Schaeybroeck and Vannitsem, 2015). Conditional tail modeling, on the other hand, is a post-processing in itself, focusing on the tails only. Therefore, it is unclear how applying ensemble post-processing prior to conditional tail modeling impacts the skill of extreme-event predictions. This work investigates this question in detail.

    References:
    Buizza, Leutbecher and Isaksen, 2008: Potential use of an ensemble of analyses in the ECMWF Ensemble Prediction System. Q. J. R. Meteorol. Soc., 134, 2051-2066.
    Buizza and Leutbecher, 2015: The forecast skill horizon. Q. J. R. Meteorol. Soc., 141, 3366-3382.
    Ferro, 2007: A probability model for verifying deterministic forecasts of extreme events. Weather and Forecasting, 22(5), 1089-1100.
    Friederichs, 2010: Statistical downscaling of extreme precipitation events using extreme value theory. Extremes, 13, 109-132.
    Van Schaeybroeck and Vannitsem, 2015: Ensemble post-processing using member-by-member approaches: theoretical aspects. Q. J. R. Meteorol. Soc., 141, 807-818.
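    The peak-over-threshold construction named in this record can be shown in a few lines: fit a generalized Pareto distribution (GPD) to exceedances over a high threshold and derive an extreme quantile. The data and threshold below are assumptions for illustration; conditional tail modeling on ensemble covariates would additionally make the GPD parameters depend on the forecast:

```python
# Peaks-over-threshold: GPD fit to exceedances, then a return-level estimate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.gumbel(loc=20.0, scale=5.0, size=20000)  # synthetic daily maxima

u = np.quantile(x, 0.95)                 # high threshold
exc = x[x > u] - u                       # exceedances over the threshold
shape, _, scale = stats.genpareto.fit(exc, floc=0)

# Return level: value exceeded on average once every m observations, using
# P(X > x) = p_u * (1 - F_GPD(x - u)) with p_u the threshold exceedance rate.
m = 10000
p_u = exc.size / x.size
ret = u + stats.genpareto.ppf(1 - 1.0 / (m * p_u), shape, loc=0, scale=scale)
print(f"threshold={u:.1f}, GPD shape={shape:.2f}, {m}-obs return level={ret:.1f}")
```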
426. Using beta binomials to estimate classification uncertainty for ensemble models.

    PubMed

    Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin

    2014-01-01

    Quantitative structure-activity relationship (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model which have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprised of logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distributions of predictions and errors for large external validation sets, even when the numbers of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
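    The core idea of this record can be sketched as follows: model the distribution of positive vote tallies over all cases and over misclassified cases with beta-binomial distributions, then estimate P(error | tally) by Bayes' rule. All parameter values below are invented for illustration, not fitted values from the paper, and the Bayes step only yields valid probabilities when the fitted distributions are mutually consistent:

```python
# Beta-binomial model of ensemble vote tallies and of the tallies of errors,
# combined via Bayes' rule into a per-tally error probability.
import numpy as np
from scipy import stats

n = 25                                    # number of submodels voting
tallies = stats.betabinom(n, 0.8, 0.9)    # tally distribution, all predictions
errors = stats.betabinom(n, 1.2, 2.0)     # tally distribution among errors only
p_err = 0.08                              # overall ensemble error rate

k = np.arange(n + 1)
p_error_given_k = p_err * errors.pmf(k) / tallies.pmf(k)  # Bayes' rule
for votes in (1, 12, 24):
    print(f"votes={votes:2d}  P(error) ≈ {p_error_given_k[votes]:.3f}")
```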
427. Improving operational flood ensemble prediction by the assimilation of satellite soil moisture: comparison between lumped and semi-distributed schemes

    NASA Astrophysics Data System (ADS)

    Alvarez-Garreton, C.; Ryu, D.; Western, A. W.; Su, C.-H.; Crow, W. T.; Robertson, D. E.; Leahy, C.

    2014-09-01

    Assimilation of remotely sensed soil moisture data (SM-DA) to correct the soil water stores of rainfall-runoff models has shown skill in improving streamflow prediction. In the case of large and sparsely monitored catchments, SM-DA is a particularly attractive tool. Within this context, we assimilate active and passive satellite soil moisture (SSM) retrievals using an ensemble Kalman filter to improve operational flood prediction within a large semi-arid catchment in Australia (>40 000 km²). We assess the importance of accounting for channel routing and the spatial distribution of forcing data by applying SM-DA to a lumped and a semi-distributed scheme of the probability distributed model (PDM). Our scheme also accounts for model error representation and for seasonal biases and errors in the satellite data. Before assimilation, the semi-distributed model provided more accurate streamflow prediction (Nash-Sutcliffe efficiency, NS = 0.77) than the lumped model (NS = 0.67) at the catchment outlet. However, this did not ensure good performance at the "ungauged" inner catchments. After SM-DA, the streamflow ensemble prediction at the outlet was improved in both the lumped and the semi-distributed schemes: the root mean square error of the ensemble was reduced by 27 and 31%, respectively; the NS of the ensemble mean increased by 7 and 38%, respectively; the false alarm ratio was reduced by 15 and 25%, respectively; and the ensemble prediction spread was reduced while its reliability was maintained. Our findings imply that even when rainfall is the main driver of flooding in semi-arid catchments, adequately processed SSM can be used to reduce errors in the model soil moisture, which in turn provides better streamflow ensemble prediction. We demonstrate that SM-DA efficacy is enhanced when the spatial distribution of forcing data and routing processes are accounted for. At ungauged locations, SM-DA is effective at improving streamflow ensemble prediction; however, the updated prediction is still poor since SM-DA does not address systematic errors in the model.
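    For reference, the two headline verification metrics used in this record, the Nash-Sutcliffe efficiency and the RMSE of the ensemble mean, are small computations; the streamflow and ensemble below are synthetic stand-ins:

```python
# Nash-Sutcliffe efficiency (NS) and RMSE of an ensemble-mean prediction.
import numpy as np

def nash_sutcliffe(sim, obs):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is perfect."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 50.0, size=365)                     # synthetic daily flow
ens = obs + rng.normal(0.0, 30.0, size=(50, obs.size))   # 50-member prediction

mean_sim = ens.mean(axis=0)
print("NS of ensemble mean:  ", round(nash_sutcliffe(mean_sim, obs), 3))
print("RMSE of ensemble mean:", round(float(np.sqrt(((mean_sim - obs) ** 2).mean())), 1))
```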
428. Combining super-ensembles and statistical emulation to improve a regional climate and vegetation model

    NASA Astrophysics Data System (ADS)

    Hawkins, L. R.; Rupp, D. E.; Li, S.; Sarah, S.; McNeall, D. J.; Mote, P.; Betts, R. A.; Wallom, D.

    2017-12-01

    Changing regional patterns of surface temperature, precipitation, and humidity may cause ecosystem-scale changes in vegetation, altering the distribution of trees, shrubs, and grasses. A changing vegetation distribution, in turn, alters the albedo, latent heat flux, and carbon exchanged with the atmosphere, with resulting feedbacks onto the regional climate. However, a wide range of earth-system processes that affect the carbon, energy, and hydrologic cycles occur at sub-grid scales in climate models and must be parameterized. The appropriate parameter values in such parameterizations are often poorly constrained, leading to uncertainty in predictions of how the ecosystem will respond to changes in forcing. To better understand the sensitivity of regional climate to parameter selection and to improve regional climate and vegetation simulations, we used a large perturbed-physics ensemble and a suite of statistical emulators. We dynamically downscaled a super-ensemble (multiple parameter sets and multiple initial conditions) of global climate simulations using the 25-km resolution regional climate model HadRM3p with the land-surface scheme MOSES2 and the dynamic vegetation module TRIFFID. We simultaneously perturbed land surface parameters relating to the exchange of carbon, water, and energy between the land surface and the atmosphere in a large super-ensemble of regional climate simulations over the western US. Statistical emulation was used as a computationally cost-effective tool to explore uncertainties in interactions. Regions of parameter space that did not satisfy observational constraints were eliminated, and an ensemble of parameter sets that reduce regional biases and span a range of plausible interactions among earth system processes was selected. This study demonstrated that by combining super-ensemble simulations with statistical emulation, simulations of regional climate could be improved while simultaneously accounting for a range of plausible land-atmosphere feedback strengths.

429. SQUEEZE-E: The Optimal Solution for Molecular Simulations with Periodic Boundary Conditions.

    PubMed

    Wassenaar, Tsjerk A; de Vries, Sjoerd; Bonvin, Alexandre M J J; Bekker, Henk

    2012-10-09

    In molecular simulations of macromolecules, it is desirable to limit the amount of solvent in the system to avoid spending computational resources on uninteresting solvent-solvent interactions. As a consequence, periodic boundary conditions are commonly used, with a simulation box chosen as small as possible for a given minimal distance between images. Here, we describe how such a simulation cell can be set up for ensembles, taking into account a priori available or estimable information regarding conformational flexibility. Doing so ensures that any conformation present in the input ensemble will satisfy the distance criterion during the simulation. This helps avoid periodicity artifacts due to conformational changes. The method introduces three new approaches in computational geometry: (1) the derivation of an optimal packing of ensembles, for which the mathematical framework is described;
    (2) a new method for approximating the α-hull and the contact body for single bodies and ensembles, which is orders of magnitude faster than existing routines, allowing the calculation of packings of large ensembles and/or large bodies; and (3) a routine for searching a combination of three vectors on a discretized contact body forming a reduced base for a lattice with minimal cell volume. The new algorithms reduce the time required to calculate packings of single bodies from minutes or hours to seconds. The use and efficacy of the method is demonstrated for ensembles obtained from NMR, MD simulations, and elastic network modeling. An implementation of the method is available online at http://haddock.chem.uu.nl/services/SQUEEZE/ and as an option for running simulations through the weNMR GRID MD server at http://haddock.science.uu.nl/enmr/services/GROMACS/main.php .

430. Revealing Risks in Adaptation Planning: expanding Uncertainty Treatment and dealing with Large Projection Ensembles during Planning Scenario development

    NASA Astrophysics Data System (ADS)

    Brekke, L. D.; Clark, M. P.; Gutmann, E. D.; Wood, A.; Mizukami, N.; Mendoza, P. A.; Rasmussen, R.; Ikeda, K.; Pruitt, T.; Arnold, J. R.; Rajagopalan, B.

    2015-12-01

    Adaptation planning assessments often rely on single methods for climate projection downscaling and hydrologic analysis, do not reveal uncertainties from associated method choices, and thus likely produce overly confident decision-support information. Recent work by the authors has highlighted this issue by identifying strengths and weaknesses of widely applied methods for downscaling climate projections and assessing hydrologic impacts. This work has shown that many of the methodological choices made can alter the magnitude, and even the sign, of the climate change signal. Such results motivate consideration of both sources of method uncertainty within an impacts assessment. Consequently, the authors have pursued the development of improved downscaling techniques spanning a range of method classes (quasi-dynamical and circulation-based statistical methods) and developed approaches to better account for hydrologic analysis uncertainty (multi-model; regional parameter estimation under forcing uncertainty). This presentation summarizes progress in the development of these methods, as well as the implications of pursuing these developments. First, having access to these methods creates an opportunity to better reveal impacts uncertainty through multi-method ensembles, expanding on present-practice ensembles which are often based only on emissions scenarios and GCM choices. Second, such expansion of uncertainty treatment, combined with an ever-expanding wealth of global climate projection information, creates a challenge of how to use such a large ensemble for local adaptation planning. To address this challenge, the authors are evaluating methods for ensemble selection (considering the principles of fidelity, diversity and sensitivity) that are compatible with present-practice approaches for abstracting change scenarios from any "ensemble of opportunity".
    Early examples from this development will also be presented.

431. Changes in Seasonal and Extreme Hydrologic Conditions of the Georgia Basin/Puget Sound in an Ensemble Regional Climate Simulation for the Mid-Century

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung, Lai R.; Qian, Yun

    This study examines an ensemble of climate change projections simulated by a global climate model (GCM) and downscaled with a regional climate model (RCM) to 40 km spatial resolution for western North America. One control and three ensemble future climate simulations were produced by the GCM following a business-as-usual scenario for greenhouse gas and aerosol emissions from 1995 to 2100. The RCM was used to downscale the GCM control simulation (1995-2015) and each ensemble future GCM climate simulation (2040-2060). Analyses of the regional climate simulations for the Georgia Basin/Puget Sound showed a warming of 1.5-2 °C and statistically insignificant changes in precipitation by the mid-century. Climate change has large impacts on snowpack (about 50% reduction) but relatively smaller impacts on the total runoff for the basin as a whole. However, climate change can strongly affect small watersheds such as those located in the transient snow zone, causing a higher likelihood of winter flooding as a higher percentage of precipitation falls in the form of rain rather than snow, and reduced streamflow in early summer. In addition, there are large changes in the monthly total runoff above the upper 1% threshold (or flood volume) from October through May, and the December flood volume of the future climate is 60% above the maximum monthly flood volume of the control climate. Uncertainty of the climate change projections, as characterized by the spread among the ensemble future climate simulations, is relatively small for the basin mean snowpack and runoff, but increases in smaller watersheds, especially in the transient snow zone and in association with extreme events. This emphasizes the importance of characterizing uncertainty through ensemble simulations.

432. Balanced Electrostatic and Structural Forces Guide the Large Conformational Change Associated with Maturation of T = 4 Virus

    PubMed Central

    Matsui, Tsutomu; Tsuruta, Hiro; Johnson, John E.

    2010-01-01

    Nudaurelia capensis omega virus has a well-characterized T = 4 capsid that undergoes a pH-dependent large conformational change (LCC) and an associated auto-catalytic cleavage of the subunit.
    We previously examined the particle size at different pH values and showed that maturation occurred at pH 5.5. We have now characterized the LCC with time-resolved small-angle x-ray scattering and shown that there are three kinetic stages initiated with an incremental drop in pH: (1) a rapid (<10 ms) collapse to an incrementally smaller particle; (2) a continuous size reduction over the next 5 s; and (3) a smaller final transition occurring in 2–3 min. Equilibrium measurements similar to those reported previously, but now more precise, showed that the particle dimension between pH 5.5 and 5 requires the autocatalytic cleavage to achieve its final compact size. A balance of electrostatic and structural forces shapes the energy landscape of the LCC, with the latter requiring annealing of portions of the subunit. Equilibrium experiments showed that many intermediate states could be populated with a homogeneous ensemble of particles by carefully controlling the pH. A titration curve for the LCC was generated that showed that the virtual pKa (i.e., the composite of all titratable residues that contribute to the LCC) is 5.8. PMID:20371334

433. Complex structural dynamics of nanocatalysts revealed in operando conditions by correlated imaging and spectroscopy probes

    DOE PAGES

    Li, Y.; Zakharov, D.; Zhao, S.; ...

    2015-06-29

    Understanding how heterogeneous catalysts change size, shape and structure during chemical reactions is limited by the paucity of methods for studying catalytic ensembles in the working state, that is, in operando conditions. Here, by the correlated use of synchrotron X-ray absorption spectroscopy and scanning transmission electron microscopy in operando conditions, we quantitatively describe the complex structural dynamics of supported Pt catalysts exhibited during an exemplary catalytic reaction, ethylene hydrogenation. This work exploits a microfabricated catalytic reactor compatible with both probes. The results demonstrate dynamic transformations of the ensemble of Pt clusters that spans a broad size range throughout changing reaction conditions. Lastly, this method is generalizable to quantitative operando studies of complex systems using a wide variety of X-ray and electron-based experimental probes.

434. Long-term ensemble forecast of snowmelt inflow into the Cheboksary Reservoir under two different weather scenarios

    NASA Astrophysics Data System (ADS)

    Gelfan, Alexander; Moreydo, Vsevolod; Motovilov, Yury; Solomatine, Dimitri P.

    2018-04-01

    A long-term forecasting ensemble methodology, applied to water inflows into the Cheboksary Reservoir (Russia), is presented.
    The methodology is based on a version of the semi-distributed hydrological model ECOMAG (ECOlogical Model for Applied Geophysics) that allows for the calculation of an ensemble of inflow hydrographs using two different sets of weather ensembles for the lead-time period: observed weather data, constructed on the basis of the Ensemble Streamflow Prediction methodology (ESP-based forecast), and synthetic weather data, simulated by a multi-site weather generator (WG-based forecast). We have studied the following: (1) whether there is any advantage of the developed ensemble forecasts in comparison with the currently issued operational forecasts of water inflow into the Cheboksary Reservoir, and (2) whether there is any noticeable improvement in probabilistic forecasts when using the WG-simulated ensemble compared to the ESP-based ensemble. We have found that for a 35-year period beginning from the reservoir filling in 1982, both continuous and binary model-based ensemble forecasts (issued in the deterministic form) outperform the operational forecasts of the April-June inflow volume actually used and, additionally, provide acceptable forecasts of additional water regime characteristics besides the inflow volume. We have also demonstrated that the model performance measures (in the verification period) obtained from the WG-based probabilistic forecasts, which are based on a large number of possible weather scenarios, appeared to be more statistically reliable than the corresponding measures calculated from the ESP-based forecasts based on the observed weather scenarios.
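    The ESP construction in this record is simple to sketch: drive one hydrological model from the current initial state with each historical year's weather, producing one ensemble member per year. The degree-day model below is a toy stand-in for ECOMAG, and all parameters are invented for illustration:

```python
# Minimal Ensemble Streamflow Prediction (ESP) sketch: one inflow trace per
# historical weather year, all starting from the same initial state.
import numpy as np

def toy_snowmelt_model(state, precip, temp):
    """Degree-day toy model: returns daily inflow and the updated storages."""
    snow, soil = state
    melt = float(np.clip(2.5 * temp, 0.0, snow))        # degree-day melt
    snow = snow + (precip if temp < 0 else 0.0) - melt  # snow accumulates when cold
    soil = soil + (precip if temp >= 0 else 0.0) + melt
    inflow = 0.1 * soil                                 # linear-reservoir release
    return inflow, (snow, soil - inflow)

rng = np.random.default_rng(3)
years, days = 35, 90                                    # April-June lead time
hist_p = rng.gamma(1.5, 2.0, size=(years, days))        # historical precipitation
hist_t = rng.normal(5.0, 6.0, size=(years, days))       # historical temperature

volumes = []
for y in range(years):                                  # one member per year
    state, q = (80.0, 20.0), []
    for d in range(days):
        inflow, state = toy_snowmelt_model(state, hist_p[y, d], hist_t[y, d])
        q.append(inflow)
    volumes.append(sum(q))
print("ensemble median April-June volume:", round(float(np.median(volumes)), 1))
```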
435. A New Outer Galaxy Molecular Cloud Catalog: Applications to Galactic Structure

    NASA Astrophysics Data System (ADS)

    Kerton, C. R.; Brunt, C. M.; Pomerleau, C.

    2001-12-01

    We have generated a new molecular cloud catalog from a reprocessed version of the Five College Radio Astronomy Observatory (FCRAO) Outer Galaxy Survey (OGS) of 12CO (J = 1-0) emission. The catalog has been used to develop a technique that uses the observed angular size-linewidth relation (ASLWR) as a distance indicator to molecular cloud ensembles. The new technique is a promising means to map out the large-scale structure of our Galaxy using the new high spatial dynamic range CO surveys currently available. The catalog was created using a two-stage object-identification algorithm. We first identified contiguous emission structures of a specified minimum number of pixels above a specified temperature threshold. Each structure so defined was then examined, and localized emission enhancements within each structure were identified as separate objects. The resulting cloud catalog contains basic data on 14,595 objects. From the OGS we identified twenty-three cloud ensembles. For each, bisector fits to angular size vs. linewidth plots were made. The fits vary in a systematic way that allows a calibration of the fit parameters with distance to be made. Our derived distances to the ensembles are consistent with the distance to the Perseus Arm, and the accurate radial velocity measurements available from the same data are in accord with the known non-circular motions at the location of the Perseus Arm. The ASLWR method was also successfully applied to data from the Boston University/FCRAO Galactic Ring Survey (GRS) of 13CO (J = 1-0) emission. Based upon our experience with the GRS and OGS, the ASLWR technique should be usable in any data set with sufficient spatial dynamic range to allow it to be properly calibrated. C.P. participated in this study through the Women in Engineering and Science (WES) program of NRC Canada. The Dominion Radio Astrophysical Observatory is a National Facility operated by the National Research Council. The Canadian Galactic Plane Survey is a Canadian project with international partners, and is supported by the Natural Sciences and Engineering Research Council (NSERC).

436. Using ensemble of classifiers for predicting HIV protease cleavage sites in proteins.

    PubMed

    Nanni, Loris; Lumini, Alessandra

    2009-03-01

    The focus of this work is the use of ensembles of classifiers for predicting HIV protease cleavage sites in proteins. Due to the complex relationships in biological data, several recent works show that ensembles of learning algorithms often outperform stand-alone methods. We show that the fusion of approaches based on different encoding models can be useful for improving the performance of this classification problem. In particular, four different feature encodings for peptides are described and tested in this work. An extensive evaluation on a large dataset according to a blind testing protocol is reported, which demonstrates how different feature extraction methods and classifiers can be combined to obtain a robust and reliable system. The comparison with other stand-alone approaches allows quantifying the performance improvement obtained by the ensembles proposed in this work.

437. Fluctuation instability of the Dirac Sea in quark models of strong interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zinovjev, G. M.; Molodtsov, S. V.

    A number of exactly integrable (quark) models of quantum field theory that feature an infinite correlation length are considered. An instability of the standard vacuum quark ensemble, a Dirac sea (in spacetimes of dimension higher than three), is highlighted. It is due to a strong ground-state degeneracy, which, in turn, stems from a special character of the energy distribution. In the case where the momentum-cutoff parameter tends to infinity, this distribution becomes infinitely narrow and leads to large (unlimited) fluctuations. A comparison of the results for various vacuum ensembles, including a Dirac sea, a neutral ensemble, a color superconductor, and a Bardeen-Cooper-Schrieffer (BCS) state, was performed.
    In the presence of color quark interaction, a BCS state is unambiguously chosen as the ground state of the quark ensemble.

438. Nonuniform fluids in the grand canonical ensemble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Percus, J.K.

    1982-01-01

    Nonuniform simple classical fluids are considered quite generally. The grand canonical ensemble is particularly suitable, conceptually, in the leading approximation of local thermodynamics, which figuratively divides the system into approximately uniform spatial subsystems. The procedure is reviewed by which this approach is systematically corrected for slowly varying density profiles, and a model is suggested that carries the correction into the domain of local fluctuations. The latter is assessed for substrate-bounded fluids, as well as for two-phase interfaces. The peculiarities of the grand ensemble in a two-phase region stem from the inherent very large number fluctuations. A primitive model shows how these are quenched in the canonical ensemble. This is taken advantage of by applying the Kac-Siegert representation of the van der Waals decomposition with petit canonical corrections to the two-phase regime.

439. Quantum memory operations in a flux qubit - spin ensemble hybrid system

    NASA Astrophysics Data System (ADS)

    Saito, S.; Zhu, X.; Amsuss, R.; Matsuzaki, Y.; Kakuyanagi, K.; Shimo-Oka, T.; Mizuochi, N.; Nemoto, K.; Munro, W. J.; Semba, K.

    2014-03-01

    Superconducting quantum bits (qubits) are one of the most promising candidates for a future large-scale quantum processor. However, for larger-scale realizations, the currently reported coherence times of these macroscopic objects (superconducting qubits) have not yet reached those of microscopic systems (electron spins, nuclear spins, etc.). In this context, a superconductor-spin ensemble hybrid system has attracted considerable attention. The spin ensemble could operate as a quantum memory for superconducting qubits. We have experimentally demonstrated quantum memory operations in a superconductor-diamond hybrid system. An excited state and a superposition state prepared in the flux qubit can be transferred to, stored in and retrieved from the NV spin ensemble in diamond. From these experiments, we have found that the coherence time of the spin ensemble is limited by the inhomogeneous broadening of the electron spin (4.4 MHz) and by the hyperfine coupling to nitrogen nuclear spins (2.3 MHz). In the future, spin-echo techniques could eliminate these effects and extend the coherence time. Our results are a significant first step in utilizing the spin ensemble as a long-lived quantum memory for superconducting flux qubits.
    This work was supported by the FIRST program and NICT.

440. Collective coupling in hybrid superconducting circuits

    NASA Astrophysics Data System (ADS)

    Saito, Shiro

    Hybrid quantum systems utilizing superconducting circuits have attracted significant recent attention, not only for quantum information processing tasks but also as a way to explore fundamentally new physics regimes. In this talk, I will discuss two superconducting-circuit-based hybrid quantum system approaches. The first is a superconducting flux qubit - electron spin ensemble hybrid system in which quantum information manipulated in the flux qubit can be transferred to, stored in and retrieved from the ensemble. Although the coherence time of the ensemble is short, about 20 ns, this is a significant first step towards utilizing the spin ensemble as quantum memory for superconducting flux qubits. The second approach is a superconducting resonator - flux qubit ensemble hybrid system in which we fabricated a superconducting LC resonator coupled to a large ensemble of flux qubits. Here we observed a dispersive frequency shift of approximately 250 MHz in the resonator's transmission spectrum. This indicates that thousands of flux qubits are coupling to the resonator collectively. Although we need to improve the homogeneity of our qubits, our system has many potential uses, including the creation of new quantum metamaterials and novel applications in quantum metrology.
    This work was partially supported by JSPS KAKENHI Grant Number 25220601.

441. Clustering cancer gene expression data by projective clustering ensemble

    PubMed Central

    Yu, Xianxue; Yu, Guoxian

    2017-01-01

    Gene expression data analysis has paramount implications for gene treatments, cancer diagnosis and other domains. Clustering is an important and promising tool for analyzing gene expression data. Gene expression data are often characterized by a large number of genes but limited samples, so various projective clustering techniques and ensemble techniques have been suggested to combat these challenges. However, it is rather challenging to synergize these two kinds of techniques to avoid the curse-of-dimensionality problem and to boost the performance of gene expression data clustering. In this paper, we employ a projective clustering ensemble (PCE) to integrate the advantages of projective clustering and ensemble clustering, and to avoid the dilemma of combining multiple projective clusterings. Our experimental results on publicly available cancer gene expression data show that PCE improves the quality of clustering gene expression data by at least 4.5% (on average) compared with other related techniques, including dimensionality-reduction-based single clustering and ensemble approaches. The empirical study demonstrates that, to further boost the performance of clustering cancer gene expression data, it is necessary and promising to synergize projective clustering with ensemble clustering. PCE can serve as an effective alternative technique for clustering gene expression data. PMID:28234920
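    A generic cluster-ensemble sketch related to this record is shown below. It is not the PCE algorithm itself: k-means on random feature subsets is a crude stand-in for projective clustering, and the members are combined through a standard co-association consensus:

```python
# Cluster ensemble via co-association: multiple subspace k-means runs are
# merged by hierarchical clustering of the co-association distances.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=120, n_features=50, centers=3, random_state=0)
rng = np.random.default_rng(0)

n_runs, n = 20, X.shape[0]
coassoc = np.zeros((n, n))
for _ in range(n_runs):
    feats = rng.choice(X.shape[1], size=10, replace=False)  # random subspace
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(X[:, feats])
    coassoc += labels[:, None] == labels[None, :]           # co-membership count
coassoc /= n_runs

# Consensus partition: average-linkage clustering of 1 - co-association.
Z = linkage(1.0 - coassoc[np.triu_indices(n, k=1)], method="average")
consensus = fcluster(Z, t=3, criterion="maxclust")
print("consensus cluster sizes:", np.bincount(consensus)[1:])
```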
442. Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics

    NASA Technical Reports Server (NTRS)

    Zhu, Yanqui; Cohn, Stephen E.; Todling, Ricardo

    1999-01-01

    The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Extending the filter to nonlinear dynamics is nontrivial in the sense of appropriately representing the high-order moments of the statistics. Monte Carlo, ensemble-based methods have been advocated as the methodology for representing high-order moments without any questionable closure assumptions. Investigations along these lines have been conducted for highly idealized dynamics, such as the strongly nonlinear Lorenz model, as well as for more realistic models of the ocean and atmosphere. Two relevant issues in this context are the number of ensemble members needed to properly represent the error statistics, and the modifications needed in the usual filter equations to allow for a correct update of the ensemble members. The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem to be quite puzzling in that the resulting state estimates are worse than those of their filter analogue. In this study, we use concepts from probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of these techniques for large data assimilation problems will be given at the time of the conference.
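For readers unfamiliar with the ensemble filters compared in such studies, a minimal stochastic (perturbed-observation) ensemble Kalman filter on the Lorenz (1963) model fits in a few dozen lines. All parameter values here are illustrative, and this is not any of the specific implementations examined in the paper:

```python
import numpy as np

def lorenz63(x, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) model."""
    dx = sigma * (x[1] - x[0])
    dy = x[0] * (rho - x[2]) - x[1]
    dz = x[0] * x[1] - beta * x[2]
    return x + dt * np.array([dx, dy, dz])

def enkf_step(ensemble, y_obs, H, obs_var, rng):
    """Stochastic EnKF analysis step with perturbed observations."""
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)          # anomalies, (n_ens, 3)
    P = X.T @ X / (n_ens - 1)                     # sample covariance
    R = obs_var * np.eye(len(y_obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    perturbed = y_obs + rng.normal(0, np.sqrt(obs_var), (n_ens, len(y_obs)))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(42)
truth = np.array([1.0, 1.0, 1.0])
ens = truth + rng.normal(0, 1.0, (50, 3))         # 50-member ensemble
H = np.array([[1.0, 0.0, 0.0]])                   # observe x only
for step in range(1000):
    truth = lorenz63(truth)
    ens = np.apply_along_axis(lorenz63, 1, ens)
    if step % 25 == 0:                            # assimilate every 25 steps
        y = H @ truth + rng.normal(0, 1.0, 1)
        ens = enkf_step(ens, y, H, obs_var=1.0, rng=rng)
print("final ensemble-mean error:", np.abs(ens.mean(0) - truth))
```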
443. On the structure-bounded growth processes in plant populations.

    PubMed

    Kilian, H G; Kazda, M; Király, F; Kaufmann, D; Kemkemer, R; Bartkowiak, D

    2010-07-01

    If the growing cells of plants are considered to be composed of increments (ICs), an extended version of the law of mass action can be formulated. It shows that plant growth proceeds optimally if the reaction-entropy term (entropy times the absolute temperature) matches the contact energy of the ICs. Since these energies are small, thermal molecular movements facilitate, via relaxation, the removal of structural disturbances. Stem diameter distributions exhibit extra fluctuations, likely caused by permanent constraints. Since the signal-response system enables, in principle, perfect optimization only within finite-sized cell ensembles, plants comprising relatively large cell numbers form a network of size-limited subsystems. The maximal number of these constituents depends on both genetic and environmental factors. Accounting for logistical structure-dynamics interrelations, equations can be formulated that describe the bimodal growth curves of very different plants. The reproduction of the S-shaped growth curves verifies that the relaxation modes with a broad structure-controlled distribution freeze successively until growth is finally fully blocked, bringing about "continuous solidification".
444. Spatial and temporal variation in the abundance of Culicoides biting midges (Diptera: Ceratopogonidae) in nine European countries.

    PubMed

    Cuéllar, Ana Carolina; Kjær, Lene Jung; Kirkeby, Carsten; Skovgard, Henrik; Nielsen, Søren Achim; Stockmarr, Anders; Andersson, Gunnar; Lindstrom, Anders; Chirico, Jan; Lühken, Renke; Steinke, Sonja; Kiel, Ellen; Gethmann, Jörn; Conraths, Franz J; Larska, Magdalena; Hamnes, Inger; Sviland, Ståle; Hopp, Petter; Brugger, Katharina; Rubel, Franz; Balenghien, Thomas; Garros, Claire; Rakotoarivony, Ignace; Allène, Xavier; Lhoir, Jonathan; Chavernac, David; Delécolle, Jean-Claude; Mathieu, Bruno; Delécolle, Delphine; Setier-Rio, Marie-Laure; Venail, Roger; Scheid, Bethsabée; Chueca, Miguel Ángel Miranda; Barceló, Carlos; Lucientes, Javier; Estrada, Rosa; Mathis, Alexander; Tack, Wesley; Bødker, Rene

    2018-02-27

    Biting midges of the genus Culicoides (Diptera: Ceratopogonidae) are vectors of bluetongue virus (BTV), African horse sickness virus and Schmallenberg virus (SBV). Outbreaks of both BTV and SBV have affected large parts of Europe. The spread of these diseases depends largely on vector distribution and abundance. The aim of this analysis was to identify and quantify the major spatial patterns and temporal trends in the distribution and seasonal variation of observed Culicoides abundance in nine European countries. We gathered existing Culicoides data from Spain, France, Germany, Switzerland, Austria, Denmark, Sweden, Norway and Poland. In total, 31,429 Culicoides trap collections were available from 904 ruminant farms across these countries between 2007 and 2013. The Obsoletus ensemble was distributed widely in Europe and accounted for 83% of all 8,842,998 Culicoides specimens in the dataset, with the highest mean monthly abundance recorded in France, Germany and southern Norway. The Pulicaris ensemble accounted for only 12% of the specimens and had a relatively southerly and easterly spatial distribution compared to the Obsoletus ensemble. Culicoides imicola Kieffer was found only in Spain and the southernmost part of France. There was a clear spatial trend in the accumulated annual abundance from southern to northern Europe, with the Obsoletus ensemble steadily increasing from 4,000 per year in southern Europe to 500,000 in Scandinavia. The Pulicaris ensemble showed a very different pattern, with the accumulated annual abundance increasing from 1,600 in Spain to a peak of 41,000 in northern Germany and then decreasing again toward northern latitudes. For the two species ensembles and C. imicola, the season began between January and April, with later start dates and increasingly shorter vector seasons at more northerly latitudes. We present the first maps of seasonal Culicoides abundance in large parts of Europe, covering a gradient from southern Spain to northern Scandinavia. The identified temporal trends and spatial patterns are useful for planning the allocation of resources for international prevention and surveillance programmes in the European Union.

445. Leptonic decay constants for D-mesons from 3-flavour CLS ensembles

    NASA Astrophysics Data System (ADS)

    Collins, Sara; Eckert, Kevin; Heitger, Jochen; Hofmann, Stefan; Söldner, Wolfgang

    2018-03-01

    We report on the status of an ongoing effort by the RQCD and ALPHA Collaborations aimed at determining the leptonic decay constants of charmed mesons. Our analysis is based on large-volume ensembles generated within the CLS effort, employing Nf = 2 + 1 non-perturbatively O(a)-improved Wilson quarks, a tree-level Symanzik-improved gauge action, and open boundary conditions. The ensembles cover lattice spacings from a ≈ 0.09 fm to a ≈ 0.05 fm, with pion masses varying from 420 down to 200 MeV. To extrapolate to the physical masses, we follow both the (2ml + ms) = const. and the ms = const. lines in parameter space.
446. Independent Metrics for Protein Backbone and Side-Chain Flexibility: Time Scales and Effects of Ligand Binding.

    PubMed

    Fuchs, Julian E; Waldner, Birgit J; Huber, Roland G; von Grafenstein, Susanne; Kramer, Christian; Liedl, Klaus R

    2015-03-10

    Conformational dynamics are central to understanding biomolecular structure and function, since biological macromolecules are inherently flexible at room temperature and in solution. Computational methods are nowadays capable of providing valuable information on the conformational ensembles of biomolecules. However, analysis tools and intuitive metrics that capture dynamic information from in silico generated structural ensembles are limited. In standard workflows, flexibility in a conformational ensemble is represented through residue-wise root-mean-square fluctuations or B-factors following a global alignment. These approaches, because they rely on a single global alignment, discard valuable information on local dynamics, and their results inherently depend on global flexibility, residue size, and connectivity. In this study we present a novel approach for capturing positional fluctuations based on multiple local alignments instead of one global alignment. The method captures local dynamics within a structural ensemble independent of residue type by splitting the individual local and global degrees of freedom of the protein backbone and side-chains. The dependence on residue type and size in the side-chains is removed via normalization with the B-factors of the isolated residue. As a test case, we demonstrate its application to a molecular dynamics simulation of bovine pancreatic trypsin inhibitor (BPTI) on the millisecond time scale, which allows us to illustrate the different time scales of backbone and side-chain flexibility. Additionally, we demonstrate the effects of ligand binding on the side-chain flexibility of three serine proteases. We expect our new methodology for quantifying local flexibility to be helpful in unraveling local changes in biomolecular dynamics.
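A toy version of the multiple-local-alignments idea can be written directly in NumPy: per-residue fluctuations are computed after a Kabsch alignment of a small window of neighboring residues rather than after one global fit. The window size and the restriction to C-alpha coordinates are assumptions, and the B-factor normalization described above is omitted:

```python
import numpy as np

def kabsch_align(P, Q):
    """Rotate centered coordinate set P optimally onto centered set Q."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])            # guard against improper rotations
    return P @ (U @ D @ Vt)

def local_fluctuations(traj, window=7):
    """Per-residue fluctuation after *local* alignment: each residue is
    aligned within a sliding window of neighbors instead of via one global
    fit. traj holds C-alpha coordinates, shape (n_frames, n_residues, 3)."""
    n_frames, n_res, _ = traj.shape
    half = window // 2
    rmsf = np.zeros(n_res)
    ref = traj[0]
    for i in range(n_res):
        lo, hi = max(0, i - half), min(n_res, i + half + 1)
        ref_w = ref[lo:hi] - ref[lo:hi].mean(0)
        disp = []
        for f in range(n_frames):
            w = traj[f, lo:hi] - traj[f, lo:hi].mean(0)
            disp.append(kabsch_align(w, ref_w)[i - lo])
        disp = np.asarray(disp)
        rmsf[i] = np.sqrt(((disp - disp.mean(0)) ** 2).sum(1).mean())
    return rmsf
```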
447. Subseasonal prediction of the heat wave of December 2013 in Southern South America by the POAMA and BCC-CPS models

    NASA Astrophysics Data System (ADS)

    Osman, Marisol; Alvarez, Mariano S.

    2018-01-01

    The prediction skill of subseasonal forecast models is evaluated for a strong and long-lasting heat wave that occurred in December 2013 over Southern South America. Reforecasts from two models participating in the WCRP/WWRP Subseasonal to Seasonal project, the Bureau of Meteorology POAMA model and the Beijing Climate Center BCC-CPS model, were considered to evaluate their skill in forecasting the temperature and circulation anomalies during that event. The POAMA reforecast, with a 32-member ensemble initialized every five days, and the BCC-CPS reforecast, with a 4-member ensemble initialized on the same dates as POAMA plus the previous four days, were considered. Weekly ensemble-mean forecasts were computed at lead times from 2 up to 24 days, every 5 days. Weekly anomalies were calculated for observations from 13 to 31 December 2013; anomalies for both observations and reforecasts were calculated with respect to their own climatologies. Results show that the ensemble-mean warm anomalies forecast for weeks 1 and 2 of the heat wave were more similar to the observations for the POAMA model, especially at longer leads. The BCC-CPS performed better at leads shorter than 7 (14) days for week 1 (2). For week 3 the BCC-CPS outperformed the POAMA model, particularly at shorter leads, locating the anomaly maxima more accurately. In a probabilistic approach, POAMA predicted the exceedance of the upper tercile of temperature anomalies with a higher chance than BCC-CPS for almost every week and lead time. The forecast circulation anomalies over South America could be used to explain the location of the highest temperature anomalies. In summary, for this case, the models' skill in forecasting surface temperature in the context of a heat wave was moderate at lead times longer than a fortnight. However, this study is limited to a model-to-model analysis, and a multi-model ensemble strategy might increase the skill.

448. A non-parametric postprocessor for bias-correcting multi-model ensemble forecasts of hydrometeorological and hydrologic variables

    NASA Astrophysics Data System (ADS)

    Brown, James; Seo, Dong-Jun

    2010-05-01

    Operational forecasts of hydrometeorological and hydrologic variables often contain large uncertainties, for which ensemble techniques are increasingly used. However, the utility of ensemble forecasts depends on the unbiasedness of the forecast probabilities. We describe a technique for quantifying and removing biases from ensemble forecasts of hydrometeorological and hydrologic variables, intended for use in operational forecasting. The technique makes no a priori assumptions about the distributional form of the variables, which is often unknown or difficult to model parametrically. The aim is to estimate the conditional cumulative distribution function (ccdf) of the observed variable given a (possibly biased) real-time ensemble forecast from one or several forecasting systems (multi-model ensembles). The technique is based on Bayesian optimal linear estimation of indicator variables and is analogous to indicator cokriging (ICK) in geostatistics. By developing linear estimators for the conditional expectation of the observed variable at many thresholds, ICK provides a discrete approximation of the full ccdf. Since ICK minimizes the conditional error variance of the indicator expectation at each threshold, it effectively minimizes the Continuous Ranked Probability Score (CRPS) when infinitely many thresholds are employed. However, the ensemble members used as predictors in ICK, and in other bias-correction techniques, are often highly cross-correlated, both within and between models. We therefore propose an orthogonal transform of the predictors used in ICK, analogous to using their principal components in the linear system of equations. This leads to a well-posed problem in which a minimum number of predictors is used to provide maximum information content in terms of the total variance explained. The technique is used to bias-correct precipitation ensemble forecasts from the NCEP Global Ensemble Forecast System (GEFS), for which independent validation results are presented. An extension to multimodel ensembles from the NCEP GFS and Short Range Ensemble Forecast (SREF) systems is also proposed.
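The indicator-plus-orthogonal-transform idea can be sketched schematically: for each threshold, regress the observed exceedance indicator on PCA-transformed ensemble-member indicators. This is a stand-in for, not the operational implementation of, ICK, and assumes scikit-learn:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def indicator_ccdf(train_ens, train_obs, new_ens, thresholds, n_pc=5):
    """Estimate P(obs <= t) for a new ensemble forecast at each threshold t.
    train_ens: (n_days, n_members) past forecasts; train_obs: (n_days,)
    observations; thresholds assumed sorted in ascending order."""
    probs = []
    for t in thresholds:
        Z = (train_ens <= t).astype(float)        # member indicator variables
        pca = PCA(n_components=min(n_pc, Z.shape[1])).fit(Z)
        reg = LinearRegression().fit(pca.transform(Z),
                                     (train_obs <= t).astype(float))
        z_new = (new_ens <= t).astype(float).reshape(1, -1)
        probs.append(np.clip(reg.predict(pca.transform(z_new))[0], 0.0, 1.0))
    return np.maximum.accumulate(probs)           # enforce a monotone cdf
```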
449. An ensemble predictive modeling framework for breast cancer classification.

    PubMed

    Nagarajan, Radhakrishnan; Upreti, Meenakshi

    2017-12-01

    Molecular changes often precede the clinical presentation of diseases and can be useful surrogates with the potential to assist informed clinical decision making. Recent studies have demonstrated the usefulness of modeling approaches, such as classification, that predict clinical outcomes from molecular expression profiles. While useful, a majority of these approaches implicitly use all molecular markers as features in the classification process, often resulting in a sparse, high-dimensional projection of the samples whose dimensionality is comparable to the sample size. In this study, a variant of the recently proposed ensemble classification approach is used to predict good- and poor-prognosis breast cancer samples from their molecular expression profiles. In contrast to traditional single and ensemble classifiers, the proposed approach uses multiple base classifiers with varying feature sets, obtained from a two-dimensional projection of the samples, in conjunction with a majority-voting strategy to predict the class labels. In contrast to our earlier implementation, base classifiers in the ensembles are chosen for maximal sensitivity and minimal redundancy by selecting only those with low average cosine distance. The resulting ensemble sets are subsequently modeled as undirected graphs. The performance of four different classification algorithms is shown to be better within the proposed ensemble framework than when they are used as traditional single-classifier systems. The significance of a subset of genes with high-degree centrality in the network abstractions across the poor-prognosis samples is also discussed. Copyright © 2017 Elsevier Inc. All rights reserved.
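The core of such a framework, many base classifiers trained on low-dimensional feature projections and combined by majority vote, is easy to illustrate (scikit-learn assumed; the cosine-distance member selection and graph modeling described above are omitted):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Majority-vote ensemble whose base classifiers each see a different 2-D
# feature projection; a schematic analogue of the paper's framework.
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

votes = []
for _ in range(25):
    pair = rng.choice(X.shape[1], size=2, replace=False)  # one 2-D projection
    clf = LogisticRegression().fit(X_tr[:, pair], y_tr)
    votes.append(clf.predict(X_te[:, pair]))
majority = (np.mean(votes, axis=0) > 0.5).astype(int)     # majority vote
print("ensemble accuracy:", (majority == y_te).mean())
```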
450. Evaluation of an ensemble of genetic models for prediction of a quantitative trait.

    PubMed

    Milton, Jacqueline N; Steinberg, Martin H; Sebastiani, Paola

    2014-01-01

    Many genetic markers have been shown to be associated with common quantitative traits in genome-wide association studies. Typically these associated genetic markers have small to modest effect sizes, and individually they explain only a small amount of the variability of the phenotype. In order to build a genetic prediction model without fitting a multiple linear regression model with possibly hundreds of genetic markers as predictors, researchers often summarize the joint effect of the risk alleles into a genetic score that is used as a covariate in the genetic prediction model. However, the prediction accuracy can be highly variable, and selecting the optimal number of markers to include in the genetic score is challenging. In this manuscript we present a strategy for building an ensemble of genetic prediction models from data, and we show that the ensemble-based method makes the challenge of choosing the number of genetic markers more amenable. Using simulated data with varying heritability and numbers of genetic markers, we compare the predictive accuracy, and the inclusion of true-positive and false-positive markers, of a single genetic prediction model and our proposed ensemble method. The results show that the ensemble of genetic models tends to include a larger number of genetic variants than a single genetic model and is more likely to include all of the true genetic markers. This increased sensitivity comes at the price of a lower specificity, which appears to minimally affect the predictive accuracy of the ensemble.

451. Analysis of the interface variability in NMR structure ensembles of protein-protein complexes.

    PubMed

    Calvanese, Luisa; D'Auria, Gabriella; Vangone, Anna; Falcigno, Lucia; Oliva, Romina

    2016-06-01

    NMR structures consist of ensembles of conformers, all satisfying the experimental restraints, which exhibit a certain degree of structural variability. We analyzed the interfaces in NMR ensembles of protein-protein heterodimeric complexes and found that they span a wide range of conservation. The exhibited degree of conservation does not simply correlate with the size of the system or interface, and is most probably the result of an interplay between different factors, including the quality of the experimental data and the intrinsic flexibility of the complex. In any case, this information is not to be missed when NMR structures of protein-protein complexes are analyzed, especially considering that, as we also show here, the first NMR conformer is usually not the one that best reflects the overall interface. To quantify the interface conservation and to analyze it, we used an approach originally conceived for the analysis and ranking of ensembles of docking models, which has now been extended to deal directly with NMR ensembles. We propose this approach, based on the conservation of the inter-residue contacts at the interface, both for the analysis of the interface in whole ensembles of NMR complexes and for the possible selection of a single conformer as the best representative of the overall interface. To make the analyses automatic and fast, we made the protocol available as a web tool at: https://www.molnac.unisa.it/BioTools/consrank/consrank-nmr.html. Copyright © 2016 Elsevier Inc. All rights reserved.
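A minimal version of the contact-based conservation analysis, assuming the ensemble coordinates and per-chain atom indices are available as arrays; the conformer-scoring rule below is a plausible simplification, not the exact ConsRank criterion:

```python
import numpy as np

def contact_conservation(ensemble_xyz, chain_a_idx, chain_b_idx, cutoff=5.0):
    """Inter-chain contact conservation across an NMR ensemble, plus the
    conformer most representative of the consensus interface.
    ensemble_xyz: (n_conformers, n_atoms, 3); index arrays select chains."""
    maps = []
    for xyz in ensemble_xyz:
        d = np.linalg.norm(xyz[chain_a_idx][:, None, :] -
                           xyz[chain_b_idx][None, :, :], axis=-1)
        maps.append(d < cutoff)
    maps = np.asarray(maps, dtype=float)
    consensus = maps.mean(axis=0)                 # contact frequency in [0, 1]
    # Score each conformer by how well its contacts match the consensus map.
    scores = ((maps * consensus).sum(axis=(1, 2))
              / (maps.sum(axis=(1, 2)) + 1e-9))
    return consensus, int(np.argmax(scores))
```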
452. Low-wave number analysis of observations and ensemble forecasts to develop metrics for the selection of most realistic members to study multi-scale interactions between the environment and the convective organization of hurricanes: Focus on Rapid Intensification

    NASA Astrophysics Data System (ADS)

    Hristova-Veleva, S. M.; Chen, H.; Gopalakrishnan, S.; Haddad, Z. S.

    2017-12-01

    Tropical cyclones (TCs) are the product of complex multi-scale processes and interactions. The role of the environment has long been recognized. However, recent research has shown that convective-scale processes in the hurricane core might also play a crucial role in determining TC intensity and size. Several studies have linked Rapid Intensification (RI) to the characteristics of the convective clouds (shallow versus deep), their organization (isolated versus widespread), and their location with respect to dynamical controls (the vertical shear, the radius of maximum wind). Yet a third set of controls signifies the interaction between storm-scale and large-scale processes. Our goal is to use observations and models to advance the still-lacking understanding of these processes. Recently, hurricane models have improved significantly. However, deterministic forecasts have limitations due to uncertainty in the representation of the physical processes and the initial conditions. A crucial step forward is the use of high-resolution ensembles. We adopt the following approach: i) generate a high-resolution ensemble forecast using HWRF; ii) produce synthetic data (e.g., brightness temperature) from the model fields for direct comparison to satellite observations; iii) develop metrics that allow us to sub-select the realistic members of the ensemble, based on objective measures of the similarity between observed and forecast structures; iv) for these most realistic members, determine the skill in forecasting TCs to provide "guidance on guidance"; v) use the members with the best predictive skill to untangle the complex multi-scale interactions. We will report on the first three goals of our research, using forecasts and observations of hurricane Edouard (2014) and focusing on RI. We will describe the metrics for selecting the most appropriate ensemble members, based on applying low-wave number analysis (WNA; Hristova-Veleva et al., 2016) to the observed and forecast 2D fields to develop objective criteria for consistency. We investigate the WNA representations of environmental moisture, precipitation structure, and surface convergence. We will present the preliminary selection of the most skillful members and outline our future goals: analyzing the multi-scale interactions using these members.
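Low-wave number analysis of a storm-centered field amounts to truncating its azimuthal Fourier series. A minimal sketch, assuming the field has already been interpolated onto radius-azimuth rings around the storm center (the example ring is synthetic):

```python
import numpy as np

def low_wavenumber_field(field_polar, keep=2):
    """Retain only azimuthal wavenumbers 0..keep of a storm-centered field.
    field_polar: (n_radii, n_azimuths), e.g. brightness temperature on
    rings around the storm center (illustrative layout assumption)."""
    spec = np.fft.rfft(field_polar, axis=1)
    spec[:, keep + 1:] = 0.0                    # zero out high wavenumbers
    return np.fft.irfft(spec, n=field_polar.shape[1], axis=1)

# Example: wavenumber-0/1/2 reconstruction of a synthetic asymmetric ring.
theta = np.linspace(0, 2 * np.pi, 72, endpoint=False)
ring = 200 + 20 * np.cos(theta) + 5 * np.cos(4 * theta)  # wn-1 signal, wn-4 noise
smooth = low_wavenumber_field(ring[None, :], keep=2)
```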
453. Nullspace Sampling with Holonomic Constraints Reveals Molecular Mechanisms of Protein Gαs.

    PubMed

    Pachov, Dimitar V; van den Bedem, Henry

    2015-07-01

    Proteins perform their function or interact with partners by exchanging between conformational substates on a wide range of spatiotemporal scales. Structurally characterizing these exchanges is challenging, both experimentally and computationally. Large, diffusional motions are often on timescales that are difficult to access with molecular dynamics simulations, especially for large proteins and their complexes. The low-frequency modes of normal mode analysis (NMA) report on molecular fluctuations associated with biological activity. However, NMA is limited to a second-order expansion about a minimum of the potential energy function, which limits opportunities to observe diffusional motions. By contrast, kino-geometric conformational sampling (KGS) permits large perturbations while maintaining the exact geometry of explicit conformational constraints, such as hydrogen bonds. Here, we extend KGS and show that a conformational ensemble of the α subunit Gαs of the heterotrimeric stimulatory protein Gs exhibits structural features implicated in its activation pathway. Activation of protein Gs by G protein-coupled receptors (GPCRs) is associated with GDP release and large conformational changes of its α-helical domain. Our method reveals a coupled α-helical domain opening motion while, simultaneously, Gαs helix α5 samples an activated conformation. These motions are moderated in the activated state. The motion centers on a dynamic hub near the nucleotide-binding site of Gαs and radiates to helix α4. We find that comparative NMA-based ensembles underestimate the amplitudes of the motion. Additionally, the ensembles fall short in predicting the accepted direction of the full activation pathway. Taken together, our findings suggest that nullspace sampling with explicit, holonomic constraints yields ensembles that illuminate molecular mechanisms involved in GDP release and protein Gs activation, and further establish conformational coupling between key structural elements of Gαs.
454. Nullspace Sampling with Holonomic Constraints Reveals Molecular Mechanisms of Protein Gαs

    PubMed Central

    Pachov, Dimitar V.; van den Bedem, Henry

    2015-01-01

    Proteins perform their function or interact with partners by exchanging between conformational substates on a wide range of spatiotemporal scales. Structurally characterizing these exchanges is challenging, both experimentally and computationally. Large, diffusional motions are often on timescales that are difficult to access with molecular dynamics simulations, especially for large proteins and their complexes. The low-frequency modes of normal mode analysis (NMA) report on molecular fluctuations associated with biological activity. However, NMA is limited to a second-order expansion about a minimum of the potential energy function, which limits opportunities to observe diffusional motions. By contrast, kino-geometric conformational sampling (KGS) permits large perturbations while maintaining the exact geometry of explicit conformational constraints, such as hydrogen bonds. Here, we extend KGS and show that a conformational ensemble of the α subunit Gαs of the heterotrimeric stimulatory protein Gs exhibits structural features implicated in its activation pathway. Activation of protein Gs by G protein-coupled receptors (GPCRs) is associated with GDP release and large conformational changes of its α-helical domain. Our method reveals a coupled α-helical domain opening motion while, simultaneously, Gαs helix α5 samples an activated conformation. These motions are moderated in the activated state. The motion centers on a dynamic hub near the nucleotide-binding site of Gαs and radiates to helix α4. We find that comparative NMA-based ensembles underestimate the amplitudes of the motion. Additionally, the ensembles fall short in predicting the accepted direction of the full activation pathway. Taken together, our findings suggest that nullspace sampling with explicit, holonomic constraints yields ensembles that illuminate molecular mechanisms involved in GDP release and protein Gs activation, and further establish conformational coupling between key structural elements of Gαs.

    PMID:26218073
455. Quantifying rapid changes in cardiovascular state with a moving ensemble average.

    PubMed

    Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T

    2018-04-01

    MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multi-subject preprocessing and analysis of cardiovascular data, including the electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, pre-ejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B-point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinetogram) on three separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures varied considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.
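The contrast with fixed-window ensemble averaging is easy to state in code: each beat is averaged only with its temporal neighbors, so slow changes in cardiovascular state survive the averaging. A schematic NumPy version, with a beat-aligned input array assumed (this is the idea behind MEAP, not its actual implementation):

```python
import numpy as np

def moving_ensemble_average(beats, half_width=15):
    """Moving ensemble average of beat-aligned waveforms: beat i is averaged
    with up to half_width neighbors on each side, rather than with one
    fixed task-long window. beats: (n_beats, n_samples_per_beat)."""
    n = beats.shape[0]
    out = np.empty_like(beats, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        out[i] = beats[lo:hi].mean(axis=0)   # local, slowly moving ensemble
    return out
```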
456. Understanding the Central Equatorial African long-term drought using AMIP-type simulations

    NASA Astrophysics Data System (ADS)

    Hua, Wenjian; Zhou, Liming; Chen, Haishan; Nicholson, Sharon E.; Jiang, Yan; Raghavendra, Ajay

    2018-02-01

    Previous studies show that Indo-Pacific sea surface temperature (SST) variations may help to explain the observed long-term drought during April-May-June (AMJ) since the 1990s over Central equatorial Africa (CEA). However, the underlying physical mechanisms of this drought are still unclear due to observational limitations. Here we use AMIP-type simulations with 24 ensemble members, forced by observed SSTs, from the ECHAM4.5 model to explore the physical processes that likely determine the rainfall variations over CEA. We not only examine the ensemble mean (EM) but also compare the "good" and "poor" ensemble members to understand the intra-ensemble variability. In general, the EM and the "good" ensemble members simulate the drought and the associated reduced vertical velocity and anomalous anti-cyclonic circulation in the lower troposphere, whereas the "poor" ensemble members simulate neither the drought nor the associated circulation patterns. These contrasts indicate that the drought is tightly associated with the tropical Walker circulation and atmospheric teleconnection patterns: if the observed circulation patterns are not reproduced, the CEA drought is not captured. Despite the large intra-ensemble spread, the model simulations indicate an essential role of SST forcing in causing the drought. These results suggest that the long-term drought may result from tropical Indo-Pacific SST variations associated with the enhanced and westward-extended tropical Walker circulation.

457. Ensemble perception in autism spectrum disorder: Member-identification versus mean-discrimination.

    PubMed

    Van der Hallen, Ruth; Lemmens, Lisa; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2017-07-01

    To efficiently represent the outside world, our brain compresses sets of similar items into a summarized representation, a phenomenon known as ensemble perception. While most studies on ensemble perception investigate this perceptual mechanism in typically developing (TD) adults, more recently, researchers studying perceptual organization in individuals with autism spectrum disorder (ASD) have turned their attention toward ensemble perception. The current study is the first to investigate the use of ensemble perception for size in children with and without ASD (N = 42, 8-16 years). We administered a pair of tasks pioneered by Ariely [2001], evaluating both member-identification and mean-discrimination. In addition, we varied the distribution types of our sets to allow a more detailed evaluation of task performance. Results show that, overall, both groups performed similarly in the member-identification task, a test of "local perception," and in the mean-discrimination task, a test of "gist perception." However, in both tasks the performance of the TD group was more strongly affected by the degree of stimulus variability in the set than the performance of the ASD group. These findings indicate that both TD children and children with ASD use ensemble statistics to represent a set of similar items, illustrating the fundamental nature of ensemble coding in visual perception. Differences in sensitivity to stimulus variability between the groups are discussed in relation to recent theories of information processing in ASD (e.g., increased sampling, decreased priors, increased precision). Autism Res 2017, 10: 1291-1299. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
458. An operational mesoscale ensemble data assimilation and prediction system: E-RTFDDA

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Hopson, T.; Roux, G.; Hacker, J.; Xu, M.; Warner, T.; Swerdlin, S.

    2009-04-01

    Mesoscale (2-2000 km) meteorological processes differ from synoptic circulations in that mesoscale weather changes rapidly in space and time, and physical processes that are parameterized in NWP models play a large role. Complex interactions of synoptic circulations, regional and local terrain, land-surface heterogeneity and the associated physical properties, and the physical processes of radiative transfer, cloud and precipitation, and boundary-layer mixing are crucial in shaping regional weather and climate. Mesoscale ensemble analysis and prediction should sample the uncertainties of mesoscale modeling systems in representing these factors. An innovative mesoscale Ensemble Real-Time Four-Dimensional Data Assimilation (E-RTFDDA) and forecasting system has been developed at NCAR. E-RTFDDA contains diverse ensemble perturbation approaches that consider uncertainties in all major system components to produce multi-scale, continuously cycling, probabilistic data assimilation and forecasting. A 30-member E-RTFDDA system with three nested domains, with grid sizes of 30, 10 and 3.33 km, has been running on a Department of Defense high-performance computing platform since September 2007. It has been applied at two very different US geographical locations, one in the western inter-mountain area and the other in the northeastern states, producing 6-hour analyses and 48-hour forecasts, with four forecast cycles a day. The operational model outputs are analyzed to a) assess overall ensemble performance and properties, b) study terrain effects on mesoscale predictability, c) quantify the contribution of different ensemble perturbation approaches to the overall forecast skill, and d) assess the additional skill contributed by an ensemble calibration process based on a quantile-regression algorithm. The system and the results will be reported at the meeting.
459. Multi-categorical deep learning neural network to classify retinal images: A pilot study employing small database.

    PubMed

    Choi, Joon Yul; Yoo, Tae Keun; Seo, Jeong Gi; Kwak, Jiyong; Um, Terry Taewoong; Rim, Tyler Hyungtaek

    2017-01-01

    Deep learning has emerged as a powerful tool for analyzing medical images, and retinal disease detection by computer-aided diagnosis from fundus images has emerged as a new method. We applied a deep learning convolutional neural network, implemented in MatConvNet, for automated detection of multiple retinal diseases in fundus photographs from the STructured Analysis of the REtina (STARE) database. The dataset was built from data in 10 categories, comprising the normal retina and nine retinal diseases. The best outcomes were acquired using random-forest transfer learning based on the VGG-19 architecture. The classification results depended greatly on the number of categories: as the number of categories increased, the performance of the deep learning models diminished. When all 10 categories were included, we obtained an accuracy of 30.5%, a relative classifier information (RCI) of 0.052, and a Cohen's kappa of 0.224. When only three categories were considered (normal, background diabetic retinopathy, and dry age-related macular degeneration), the multi-categorical classifier showed an accuracy of 72.8%, an RCI of 0.283, and a kappa of 0.577. In addition, several ensemble classifiers enhanced the multi-categorical classification performance: transfer learning combined with an ensemble classifier using a clustering-and-voting approach gave the best performance on the 10-disease classification problem, with an accuracy of 36.7%, an RCI of 0.053, and a kappa of 0.225. First, owing to the small size of the datasets, the deep learning techniques in this study were not effective enough to be applied in clinics, where numerous patients suffering from various types of retinal disorders visit for diagnosis and treatment. Second, we found that transfer learning combined with ensemble classifiers can improve the classification performance for detecting multi-categorical retinal diseases. Further studies should confirm the effectiveness of these algorithms with large datasets obtained from hospitals.
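The transfer-learning setup described, frozen deep features feeding a conventional classifier, can be sketched with TensorFlow/Keras and scikit-learn; the study itself used MatConvNet, so this is an illustrative re-creation, and `images`/`labels` are assumed inputs (fundus photographs resized to 224x224 and their category indices):

```python
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

# Frozen VGG-19 feature extractor (ImageNet weights), global average pooling
# gives one feature vector per image; a random forest classifies on top.
base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                   pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) in 0-255 range."""
    x = tf.keras.applications.vgg19.preprocess_input(images.astype("float32"))
    return base.predict(x, verbose=0)

# Hypothetical usage, with `images` and `labels` supplied by the caller:
# features = extract_features(images)
# clf = RandomForestClassifier(n_estimators=500).fit(features, labels)
```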
460. Goddard Cumulus Ensemble (GCE) Model: Application for Understanding Precipitation Processes

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2002-01-01

    One of the most promising methods for testing the representation of cloud processes used in climate models is to use observations together with Cloud Resolving Models (CRMs). CRMs use more sophisticated and realistic representations of cloud microphysical processes, and they can resolve reasonably well the time evolution, structure, and life cycles of clouds and cloud systems (with sizes of about 2-200 km). CRMs also allow explicit interaction of outgoing longwave (cooling) and incoming solar (heating) radiation with clouds. Observations can provide the initial conditions and validation for CRM results. The Goddard Cumulus Ensemble (GCE) Model, a cloud-resolving model, has been developed and improved at NASA/Goddard Space Flight Center over the past two decades. Dr. Joanne Simpson played a central role in GCE modeling developments and applications, and was the lead author or a co-author on more than forty GCE modeling papers. In this paper, a brief discussion and review of the application of the GCE model to (1) cloud interactions and mergers, (2) convective and stratiform interaction, (3) mechanisms of cloud-radiation interaction, (4) latent heating profiles and TRMM, and (5) responses of cloud systems to large-scale processes is provided. Comparisons between the GCE model's results, other cloud-resolving model results, and observations are also examined.

461. Kernel parameter variation-based selective ensemble support vector data description for oil spill detection on the ocean via hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Uslu, Faruk Sukru

    2017-07-01

    Oil spills on the ocean surface cause serious environmental, political, and economic problems, and these catastrophic threats to marine ecosystems require detection and monitoring. Hyperspectral sensors are powerful optical sensors for oil spill detection, thanks to the detailed spectral information they provide about materials. However, the huge amounts of data in hyperspectral imaging (HSI) require fast and accurate computation methods for detection problems. Support vector data description (SVDD) is one of the most suitable methods for detection, especially for large data sets. Nevertheless, the selection of kernel parameters is one of the main problems in SVDD. This paper presents a method, inspired by ensemble learning, for improving the performance of SVDD without tuning its kernel parameters. Additionally, a classifier selection technique is proposed for further gains. The proposed approach also aims to address the small-sample-size problem, which is very important when processing high-dimensional data in HSI. The algorithm is applied to two HSI data sets: in the first, various targets are detected; in the second, oil spill detection in situ is performed. The experimental results demonstrate the feasibility and the performance improvement of the proposed algorithm for oil spill detection problems.
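One way to read "ensemble SVDD without kernel tuning" is to train one-class models over a grid of kernel widths and average their normalized decision scores. The sketch below does that with scikit-learn's OneClassSVM (closely related to SVDD with an RBF kernel) and omits the paper's selective step:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def svdd_ensemble_scores(X_train, X_test, gammas=np.logspace(-3, 1, 9)):
    """Average the normalized decision scores of one-class SVMs trained at
    several kernel widths, sidestepping the choice of a single gamma.
    Higher scores mean more 'background-like'; lows flag anomalies/oil."""
    scores = np.zeros(X_test.shape[0])
    for g in gammas:
        model = OneClassSVM(kernel="rbf", gamma=g, nu=0.1).fit(X_train)
        s = model.decision_function(X_test)
        scores += (s - s.mean()) / (s.std() + 1e-9)  # normalize before averaging
    return scores / len(gammas)
```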
462. Insights in time dependent cross compartment sensitivities from ensemble simulations with the fully coupled subsurface-land surface-atmosphere model TerrSysMP

    NASA Astrophysics Data System (ADS)

    Schalge, Bernd; Rihani, Jehan; Haese, Barbara; Baroni, Gabriele; Erdal, Daniel; Haefliger, Vincent; Lange, Natascha; Neuweiler, Insa; Hendricks-Franssen, Harrie-Jan; Geppert, Gernot; Ament, Felix; Kollet, Stefan; Cirpka, Olaf; Saavedra, Pablo; Han, Xujun; Attinger, Sabine; Kunstmann, Harald; Vereecken, Harry; Simmer, Clemens

    2017-04-01

    Currently, an integrated approach to simulating the earth system is evolving, in which several compartment models are coupled to achieve the most physically consistent representation possible. We used the model TerrSysMP, which fully couples the subsurface, land surface and atmosphere, in a synthetic study that mimicked the Neckar catchment in Southern Germany. A virtual-reality run was made at a high resolution of 400 m for the land surface and subsurface and 1.1 km for the atmosphere. Ensemble runs at a lower resolution (800 m for the land surface and subsurface) were also made. The ensemble was generated by systematically varying soil and vegetation parameters and the lateral atmospheric forcing among the ensemble members. For some variables and time periods, the ensemble runs deviated strongly from the virtual-reality reference run (the reference run was not covered by the ensemble), which could be related to the different model resolutions; this was the case, for example, for river discharge in summer. We also analyzed the spread of the model states as a function of time and found clear relations between the spread and the time of year and the weather conditions. For example, the ensemble spread of the latent heat flux related to uncertain soil parameters was larger under dry soil conditions than under wet soil conditions. As another example, the ensemble spread of the atmospheric states was influenced more strongly by uncertain soil and vegetation parameters under conditions of weak air-pressure gradients (in summer) than under the larger air-pressure gradients of winter. The analysis of this ensemble of fully coupled model simulations provided valuable insights into the dynamics of land-atmosphere feedbacks, which we will further highlight in the presentation.
463. Thermodynamic and kinetic theory of nucleation, deliquescence and efflorescence transitions in the ensemble of droplets on soluble particles.

    PubMed

    Shchekin, Alexander K; Shabaev, Ilya V; Hellmuth, Olaf

    2013-02-07

    Thermodynamic and kinetic peculiarities of the nucleation, deliquescence and efflorescence transitions in an ensemble of droplets formed on soluble condensation nuclei from a solvent vapor have been considered. The interplay of the effects of the solubility and the size of the condensation nuclei has been analyzed. Activation barriers for the deliquescence phase transition and for the reverse efflorescence transition have been determined as functions of the relative humidity of the vapor-gas atmosphere and of the initial size and solubility of the condensation nuclei. It has been demonstrated that, upon variations in the relative humidity of the atmosphere, a crossover in the thermodynamically stable and unstable variables of the droplet state takes place; the physical meaning of the stable and unstable variables has been clarified. The kinetic equations for establishing the equilibrium and steady distributions of binary droplets have been solved, and the characteristic times of relaxation and of the deliquescence and efflorescence transitions have been calculated.

464. In Silico Design of Smart Binders to Anthrax PA

    DTIC Science & Technology

    2012-09-01

    ... nanosecond (ns) molecular dynamics simulation in the NPT ensemble (constant particle number, pressure, and temperature) at 300 K, with the CHARMM force... protective antigen (PA). Before the docking runs, the DS23 peptide was simulated using molecular dynamics to generate an ensemble of structures... structure), we do not see a large amount of structural change when using molecular dynamics after Rosetta docking. We note that this RMSD does not take...

465. Using a Very Large Ensemble to Examine the Role of the Ocean in Recent Warming Trends.

    NASA Astrophysics Data System (ADS)

    Sparrow, S. N.; Millar, R.; Otto, A.; Yamazaki, K.; Allen, M. R.

    2014-12-01

    Results from a very large (~10,000-member) perturbed-physics and perturbed-initial-condition ensemble are presented for the period 1980 to present. A set of model versions that can shadow recent surface and upper-ocean observations is identified, and the range of uncertainty in the Atlantic Meridional Overturning Circulation (AMOC) is assessed. This experiment uses the Met Office Hadley Centre Coupled Model version 3 (HadCM3), a coupled model with fully dynamic atmosphere and ocean components, as part of the climateprediction.net distributed computing project. Parameters are selected so that the model has a good top-of-atmosphere radiative balance, and simulations are run without flux adjustments, which "nudge" the climate towards a realistic state but have an adverse effect on important ocean processes. This ensemble provides scientific insights into the possible role of the AMOC, among other factors, in climate trends, or the lack thereof, over the past 20 years. The ensemble is also used to explore how the occurrence of hiatus events of different durations varies for models with different transient climate response (TCR). We show that models with a higher TCR are less likely to produce a 15-year warming hiatus in global surface temperature than those with a lower TCR.
466. Quantum teleportation between remote atomic-ensemble quantum memories

    PubMed Central

    Bao, Xiao-Hui; Xu, Xiao-Fan; Li, Che-Ming; Yuan, Zhen-Sheng; Lu, Chao-Yang; Pan, Jian-Wei

    2012-01-01

    Quantum teleportation and quantum memory are two crucial elements for large-scale quantum networks. With the help of prior distributed entanglement as a "quantum channel," quantum teleportation provides an intriguing means to faithfully transfer quantum states among distant locations without actual transmission of the physical carriers [Bennett CH, et al. (1993) Phys Rev Lett 70(13):1895-1899]. Quantum memory enables controlled storage and retrieval of fast-flying photonic quantum bits with stationary matter systems, which is essential to achieving the scalability required for large-scale quantum networks. Combining these two capabilities, here we realize quantum teleportation between two remote atomic-ensemble quantum memory nodes, each composed of ~10^8 rubidium atoms and connected by a 150-m optical fiber. The spin-wave state of one atomic ensemble is mapped to a propagating photon and subjected to Bell-state measurements with another single photon that is entangled with the spin-wave state of the other ensemble. Two-photon detection events herald the success of teleportation with an average fidelity of 88(7)%. Besides its fundamental interest as teleportation between two remote macroscopic objects, our technique may be useful for quantum information transfer between different nodes in quantum networks and distributed quantum computing.

    PMID:23144222

467. Relative effects of statistical preprocessing and postprocessing on a regional hydrological ensemble prediction system

    NASA Astrophysics Data System (ADS)

    Sharma, Sanjib; Siddique, Ridwan; Reed, Seann; Ahnert, Peter; Mendoza, Pablo; Mejia, Alfonso

    2018-03-01

    The relative roles of statistical weather preprocessing and streamflow postprocessing in hydrological ensemble forecasting at short- to medium-range forecast lead times (days 1-7) are investigated. For this purpose, a regional hydrologic ensemble prediction system (RHEPS) is developed and implemented. The RHEPS comprises the following components: (i) hydrometeorological observations (multisensor precipitation estimates, gridded surface temperature, and gauged streamflow); (ii) weather ensemble forecasts (precipitation and near-surface temperature) from the National Centers for Environmental Prediction 11-member Global Ensemble Forecast System Reforecast version 2 (GEFSRv2); (iii) NOAA's Hydrology Laboratory-Research Distributed Hydrologic Model (HL-RDHM); (iv) heteroscedastic censored logistic regression (HCLR) as the statistical preprocessor; (v) two statistical postprocessors, an autoregressive model with a single exogenous variable (ARX(1,1)) and quantile regression (QR); and (vi) a comprehensive verification strategy. To implement the RHEPS, day 1-7 weather forecasts from the GEFSRv2 are used to force HL-RDHM and generate raw ensemble streamflow forecasts. Forecasting experiments are conducted in four nested basins in the US Middle Atlantic region, ranging in size from 381 to 12,362 km². Results show that the HCLR-preprocessed ensemble precipitation forecasts have greater skill than the raw forecasts. These improvements are more noticeable in the warm season at the longer lead times (> 3 days). Both postprocessors, ARX(1,1) and QR, show gains in skill relative to the raw ensemble streamflow forecasts, particularly in the cool season, but QR outperforms ARX(1,1). The scenarios that implement preprocessing and postprocessing separately tend to perform similarly, although the postprocessing-alone scenario is often more effective. The scenario involving both preprocessing and postprocessing consistently outperforms the other scenarios. In some cases, however, the differences between this scenario and the scenario with postprocessing alone are not significant. We conclude that implementing both preprocessing and postprocessing ensures the most skill improvements, but postprocessing alone can often be a competitive alternative.
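    Quantile-regression postprocessing of the kind listed as component (v) fits, for each quantile level, a regression of the observation on a raw forecast statistic; the fitted quantiles together form a calibrated predictive distribution. A hedged sketch using statsmodels (the data and the choice of the ensemble mean as predictor are illustrative, not the RHEPS configuration):

        import numpy as np
        import statsmodels.api as sm

        # Toy data standing in for pairs of (raw ensemble-mean streamflow, observed flow).
        rng = np.random.default_rng(1)
        raw_mean = rng.gamma(shape=2.0, scale=50.0, size=500)        # raw forecast (m^3/s)
        obs = 0.8 * raw_mean + rng.normal(0.0, 10.0 + 0.1 * raw_mean)  # synthetic obs

        X = sm.add_constant(raw_mean)                                # intercept + predictor
        quantiles = [0.05, 0.25, 0.5, 0.75, 0.95]

        # One linear quantile regression per level; together the fitted quantiles
        # sketch a calibrated predictive distribution for each new raw forecast.
        fits = {q: sm.QuantReg(obs, X).fit(q=q) for q in quantiles}

        new_X = np.array([[1.0, 120.0]])                             # a new raw ensemble mean
        calibrated = {q: float(f.predict(new_X)[0]) for q, f in fits.items()}
        print({q: round(v, 1) for q, v in calibrated.items()})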
468. Probabilistic precipitation nowcasting based on an extrapolation of radar reflectivity and an ensemble approach

    NASA Astrophysics Data System (ADS)

    Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch

    2017-09-01

    A new method for the probabilistic nowcasting of instantaneous rain rates (ENS), based on the ensemble technique and extrapolation along Lagrangian trajectories of the current radar reflectivity, is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold at a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by calibrating the forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: a combined method (COM) and a neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points of the point of interest as ensemble members, and the COM ensemble comprised the united ensemble members of ENS and NEI. The results showed that the calibration technique significantly reduces the bias of the probability forecasts by including additional uncertainties that correspond to the processes neglected during the extrapolation. In addition, the calibration can be used to find the maximum lead times for which the forecasting method remains useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable ensemble size is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
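    The core of such an ensemble nowcast is a few lines of array arithmetic: an exceedance probability per grid point and its Brier-score verification. A sketch on synthetic fields (grid size and rain-rate distribution invented; the 0.1/1.0/3.0 mm/h thresholds follow the record):

        import numpy as np

        def exceedance_probability(ensemble, threshold):
            """P(rain rate > threshold) per grid point, from an (n_members, ny, nx) ensemble."""
            return (ensemble > threshold).mean(axis=0)

        def brier_score(prob, occurred):
            """Mean squared difference between forecast probabilities and 0/1 outcomes."""
            return np.mean((prob - occurred.astype(float)) ** 2)

        rng = np.random.default_rng(2)
        members, ny, nx = 100, 40, 40                            # 100 members, toy 40x40 grid
        ensemble = rng.gamma(0.6, 1.5, size=(members, ny, nx))   # synthetic rain rates (mm/h)
        observed = rng.gamma(0.6, 1.5, size=(ny, nx))            # synthetic verifying field

        for thr in (0.1, 1.0, 3.0):
            p = exceedance_probability(ensemble, thr)
            print(f"threshold {thr} mm/h: BS = {brier_score(p, observed > thr):.4f}")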
469. Randomized central limit theorems: A unified theory.

    PubMed

    Eliazar, Iddo; Klafter, Joseph

    2010-08-01

    The central limit theorems (CLTs) characterize the macroscopic statistical behavior of large ensembles of independent and identically distributed random variables. The CLTs assert that the universal probability laws governing ensembles' aggregate statistics are either Gaussian or Lévy, and that the universal probability laws governing ensembles' extreme statistics are Fréchet, Weibull, or Gumbel. The scaling schemes underlying the CLTs are deterministic: all ensemble components are scaled by a common deterministic scale. However, there are "random environment" settings in which the underlying scaling schemes are stochastic: the ensemble components are scaled by different random scales. Examples of such settings include Holtsmark's law for gravitational fields and the stretched-exponential law for relaxation times. In this paper we establish a unified theory of randomized central limit theorems (RCLTs), in which the deterministic CLT scaling schemes are replaced with stochastic scaling schemes, and present "randomized counterparts" to the classic CLTs. The RCLT scaling schemes are shown to be governed by Poisson processes with power-law statistics, and the RCLTs are shown to universally yield the Lévy, Fréchet, and Weibull probability laws.

470. Bandgap Inhomogeneity of a PbSe Quantum Dot Ensemble from Two-Dimensional Spectroscopy and Comparison to Size Inhomogeneity from Electron Microscopy

    DOE PAGES

    Park, Samuel D.; Baranov, Dmitry; Ryu, Jisu; ...

    2017-01-03

    Femtosecond two-dimensional Fourier transform spectroscopy is used to determine the static bandgap inhomogeneity of a colloidal quantum dot ensemble. The excited states of quantum dots absorb light, so their absorptive two-dimensional (2D) spectra will typically have positive and negative peaks. We show that the absorption bandgap inhomogeneity is robustly determined by the slope of the nodal line separating positive and negative peaks in the 2D spectrum around the bandgap transition; this nodal line slope is independent of excited-state parameters not known from the absorption and emission spectra. The absorption bandgap inhomogeneity is compared to a size and shape distribution determined by electron microscopy. The electron microscopy images are analyzed using new 2D histograms that correlate major and minor image projections to reveal elongated nanocrystals, a conclusion supported by grazing-incidence small-angle X-ray scattering and high-resolution transmission electron microscopy.
Lastly, the absorption bandgap inhomogeneity quantitatively agrees with the bandgap variations calculated from the size and shape distribution, placing upper bounds on any surface contributions.

471. Impact of Damping Uncertainty on SEA Model Response Variance

    NASA Technical Reports Server (NTRS)

    Schiller, Noah; Cabell, Randolph; Grosveld, Ferdinand

    2010-01-01

    Statistical Energy Analysis (SEA) is commonly used to predict high-frequency vibroacoustic levels. This statistical approach provides the mean response over an ensemble of random subsystems that share the same gross system properties, such as density, size, and damping. Recently, techniques have been developed to predict the ensemble variance as well as the mean response. However, these techniques do not account for uncertainties in the system properties. In the present paper, uncertainty in the damping loss factor is propagated through SEA to obtain more realistic prediction bounds that account for both ensemble and damping variance. The analysis is performed on a floor-equipped cylindrical test article that resembles an aircraft fuselage. Realistic bounds on the damping loss factor are determined from measurements acquired on the sidewall of the test article. The analysis demonstrates that uncertainties in damping have the potential to significantly impact the mean and variance of the predicted response.
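    The propagation described here can be mimicked with a Monte Carlo loop over an SEA power-balance solve: sample the damping loss factor within measured-style bounds, solve for subsystem energies, and accumulate response statistics. A toy two-subsystem sketch (all loss-factor values and bounds are invented for illustration, not taken from the test article):

        import numpy as np

        def sea_energies(omega, eta1, eta2, eta12, eta21, power_in):
            """Solve the two-subsystem SEA power balance for subsystem energies.

            omega * [[eta1 + eta12, -eta21], [-eta12, eta2 + eta21]] @ E = [P_in, 0]
            """
            A = omega * np.array([[eta1 + eta12, -eta21],
                                  [-eta12,       eta2 + eta21]])
            return np.linalg.solve(A, np.array([power_in, 0.0]))

        rng = np.random.default_rng(3)
        omega, power_in = 2 * np.pi * 1000.0, 1.0    # 1 kHz band, unit input power
        eta12, eta21 = 1e-3, 2e-3                    # coupling loss factors (invented)

        # Sample the damping loss factors within assumed bounds and propagate.
        samples = np.array([sea_energies(omega,
                                         rng.uniform(5e-3, 2e-2),  # eta1 bounds (invented)
                                         rng.uniform(5e-3, 2e-2),  # eta2 bounds (invented)
                                         eta12, eta21, power_in)
                            for _ in range(10000)])
        mean, std = samples.mean(axis=0), samples.std(axis=0)
        print("mean subsystem energies:", mean, "damping-induced std:", std)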
472. Ensembl comparative genomics resources.

    PubMed

    Herrero, Javier; Muffato, Matthieu; Beal, Kathryn; Fitzgerald, Stephen; Gordon, Leo; Pignatelli, Miguel; Vilella, Albert J; Searle, Stephen M J; Amode, Ridwan; Brent, Simon; Spooner, William; Kulesha, Eugene; Yates, Andrew; Flicek, Paul

    2016-01-01

    Evolution provides the unifying framework with which to understand biology. The coherent investigation of genic and genomic data often requires comparative genomics analyses based on whole-genome alignments, sets of homologous genes and other relevant datasets in order to evaluate and answer evolutionary-related questions. However, the complexity and computational requirements of producing such data are substantial: this has led to only a small number of reference resources that are used for most comparative analyses. The Ensembl comparative genomics resources are one such reference set that facilitates comprehensive and reproducible analysis of chordate genome data. Ensembl computes pairwise and multiple whole-genome alignments from which large-scale synteny, per-base conservation scores and constrained elements are obtained. Gene alignments are used to define Ensembl Protein Families, GeneTrees and homologies for both protein-coding and non-coding RNA genes. These resources are updated frequently and have a consistent informatics infrastructure and data presentation across all supported species. Specialized web-based visualizations are also available, including synteny displays, collapsible gene tree plots, a gene family locator and different alignment views. The Ensembl comparative genomics infrastructure is extensively reused for the analysis of non-vertebrate species by other projects, including Ensembl Genomes and Gramene, and much of the information here is relevant to these projects. The consistency of the annotation across species and the focus on vertebrates makes Ensembl an ideal system to perform and support vertebrate comparative genomic analyses. We use robust software and pipelines to produce reference comparative data and make them freely available. Database URL: http://www.ensembl.org

473. The Behavior of Filters and Smoothers for Strongly Nonlinear Dynamics

    NASA Technical Reports Server (NTRS)

    Zhu, Yanqiu; Cohn, Stephen E.; Todling, Ricardo

    1999-01-01

    The Kalman filter is the optimal filter in the presence of known Gaussian error statistics and linear dynamics. Extending the filter to nonlinear dynamics is nontrivial in the sense of appropriately representing the high-order moments of the statistics. Monte Carlo, ensemble-based methods have been advocated as the methodology for representing high-order moments without any questionable closure assumptions (e.g., Miller 1994). Investigation along these lines has been conducted for highly idealized dynamics, such as the strongly nonlinear Lorenz (1963) model, as well as for more realistic models of the oceans (Evensen and van Leeuwen 1996) and atmosphere (Houtekamer and Mitchell 1998). A few relevant issues in this context are the number of ensemble members necessary to properly represent the error statistics, and the modifications of the usual filter equations needed to correctly update the ensemble members (Burgers 1998). The ensemble technique has also been applied to the problem of smoothing, for which similar questions apply. Ensemble smoother examples, however, seem quite puzzling in that the state estimates are worse than for their filter analogues (Evensen 1997). In this study, we use concepts from probability theory to revisit the ensemble methodology for filtering and smoothing in data assimilation. We use the Lorenz (1963) model to test and compare the behavior of a variety of implementations of ensemble filters. We also implement ensemble smoothers that are able to perform better than their filter counterparts. A discussion of the feasibility of applying these techniques to large data assimilation problems will be given at the conference.
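    The filter-equation modification cited above (Burgers 1998) is the perturbed-observation ensemble Kalman filter analysis step, which is compact enough to sketch directly. A minimal version for a three-variable state such as Lorenz (1963) with a scalar observation (the ensemble values here are synthetic):

        import numpy as np

        def enkf_update(X, y_obs, H, r_var, rng):
            """Perturbed-observation EnKF analysis step.

            X     : (n_state, n_members) forecast ensemble.
            y_obs : observed value (scalar here).
            H     : (1, n_state) observation operator.
            r_var : observation-error variance.
            """
            n_members = X.shape[1]
            Xp = X - X.mean(axis=1, keepdims=True)              # ensemble perturbations
            P = Xp @ Xp.T / (n_members - 1)                     # sample covariance
            K = P @ H.T / (H @ P @ H.T + r_var)                 # Kalman gain (scalar innovation)
            # Each member assimilates a perturbed copy of the observation
            # (Burgers 1998), which keeps the analysis spread statistically correct.
            y_pert = y_obs + rng.normal(0.0, np.sqrt(r_var), n_members)
            return X + K @ (y_pert - H @ X)

        rng = np.random.default_rng(4)
        X = rng.normal(0, 2, size=(3, 50)) + np.array([[1.0], [2.0], [20.0]])  # toy ensemble
        H = np.array([[1.0, 0.0, 0.0]])                                        # observe x only
        X_a = enkf_update(X, y_obs=1.5, H=H, r_var=0.5, rng=rng)
        print("prior mean:", X.mean(axis=1), "posterior mean:", X_a.mean(axis=1))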
475. A continuum theory of edge dislocations

    NASA Astrophysics Data System (ADS)

    Berdichevsky, V. L.

    2017-09-01

    Continuum theory of dislocations aims to describe the behavior of large ensembles of dislocations. This task is far from completion and, most likely, does not have a "universal solution" that is applicable to any dislocation ensemble.
In this regard it is important to have guidelines set by benchmark cases in which the transition from a discrete set of dislocations to a continuum description is made rigorously. Two such cases have been considered recently: equilibrium of dislocation walls, and screw dislocations in beams. In this paper one more case is studied: the equilibrium of a large set of 2D edge dislocations placed randomly in a bounded 2D region. The major characteristic of interest is the energy of the dislocation ensemble, because it determines the structure of the continuum equations. The homogenized energy functional is obtained for periodic dislocation ensembles with random contents of the periodic cell. Parameters of the periodic structure can change slowly over distances of the order of the size of the periodic cells. The energy functional is obtained by the variational-asymptotic method. Equilibrium positions are local minima of energy. The earlier assertion is confirmed that the energy density of the system is the sum of the elastic energy of the averaged elastic strains and the microstructure energy, which is the elastic energy of the neutralized dislocation system, i.e. the dislocation system placed in a constant dislocation density field that makes the averaged dislocation density zero. The computation of energy reduces to the solution of a variational cell problem, which is solved analytically. The solution is used to investigate the stability of simple dislocation arrays, i.e. arrays with one dislocation in the periodic cell. The relations obtained yield two outcomes. First, there is a state parameter of the system, the dislocation polarization; averaged stresses affect only the dislocation polarization and cannot change other characteristics of the system. Second, the structure of the dislocation phase space is strikingly simple: the phase space splits into a family of subspaces corresponding to constant values of the dislocation polarization. In each equipolarization subspace there are many local minima of energy; for zero external stress the system is stuck in a local minimum of energy, while for non-zero, slowly changing external stress the dislocation polarization evolves as the system moves over the local energy minima of the equipolarization subspaces. Such a simple picture of dislocation dynamics is due to the presence of two time scales: slow evolution of the dislocation polarization and fast motion of the system over local minima of energy. The existence of two time scales is justified for a neutral system of edge dislocations.

476. Effects of pore-scale dispersion, degree of heterogeneity, sampling size, and source volume on the concentration moments of conservative solutes in heterogeneous formations

    Treesearch

    Daniele Tonina; Alberto Bellin

    2008-01-01

    Pore-scale dispersion (PSD), aquifer heterogeneity, sampling volume, and source size influence solute concentrations of conservative tracers transported in heterogeneous porous formations.
In this work, we developed a new set of analytical solutions for the concentration ensemble mean, variance, and coefficient of variation (CV), which consider the effects of all these ...

477. A Sidekick for Membrane Simulations: Automated Ensemble Molecular Dynamics Simulations of Transmembrane Helices

    PubMed Central

    Hall, Benjamin A; Halim, Khairul Abd; Buyan, Amanda; Emmanouil, Beatrice; Sansom, Mark S P

    2016-01-01

    The interactions of transmembrane (TM) α-helices with the phospholipid membrane and with one another are central to understanding the structure and stability of integral membrane proteins. These interactions may be analysed via coarse-grained molecular dynamics (CGMD) simulations. To obtain statistically meaningful analysis of TM helix interactions, large (N ca. 100) ensembles of CGMD simulations are needed. To facilitate the running and analysis of such ensembles of simulations, we have developed Sidekick, an automated pipeline software for performing high-throughput CGMD simulations of α-helical peptides in lipid bilayer membranes. Through an end-to-end approach, which takes as input a helix sequence and outputs analytical metrics derived from CGMD simulations, we are able to predict the orientation and likelihood of insertion into a lipid bilayer of a given helix or family of helix sequences. We illustrate this software via analysis of the insertion into a membrane of short hydrophobic TM helices containing a single cationic arginine residue placed at different positions along the length of the helix. From analysis of these ensembles of simulations, we estimate apparent energy barriers to insertion which are comparable to experimentally determined values. In a second application, we use CGMD simulations to examine the self-assembly of dimers of TM helices from the ErbB1 receptor tyrosine kinase, and analyse the numbers of simulation repeats necessary to obtain convergence of simple descriptors of the mode of packing of the two helices within a dimer. Our approach offers a proof-of-principle platform for the further employment of automation in large-ensemble CGMD simulations of membrane proteins.

    PMID:26580541
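    The record's closing question, how many simulation repeats are needed before a descriptor converges, can be probed by bootstrap resampling the descriptor at increasing ensemble sizes. A schematic sketch (the per-simulation crossing-angle values are synthetic stand-ins, not Sidekick output):

        import numpy as np

        rng = np.random.default_rng(5)
        # Synthetic per-simulation descriptor, e.g. a helix-dimer crossing angle (degrees).
        angles = rng.normal(loc=-40.0, scale=12.0, size=200)   # 200 hypothetical repeats

        def bootstrap_sem(values, n, n_boot=2000, rng=rng):
            """Bootstrap standard error of the mean descriptor at ensemble size n."""
            idx = rng.integers(0, len(values), size=(n_boot, n))
            return values[idx].mean(axis=1).std()

        # The uncertainty of the ensemble-mean descriptor shrinks roughly as 1/sqrt(n),
        # which is one practical way to pick a sufficient number of repeats.
        for n in (10, 25, 50, 100, 200):
            print(f"n = {n:3d}: SEM ~ {bootstrap_sem(angles, n):.2f} deg")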
479. Statistical properties of a cloud ensemble - A numerical study

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Simpson, Joanne; Soong, Su-Tzai

    1987-01-01

    The statistical properties of cloud ensembles under a specified large-scale environment, such as the mass flux by cloud drafts and the vertical velocity, as well as the condensation and evaporation associated with these cloud drafts, are examined using the three-dimensional numerical cloud ensemble model described by Soong and Ogura (1980) and Tao and Soong (1986). The cloud drafts are classified as active and inactive, and the separate contributions to the cloud statistics in areas of different cloud activity are then evaluated. The model results compare well with results obtained from aircraft measurements of a well-organized ITCZ rainband that occurred on August 12, 1974, during the Global Atmospheric Research Program's Atlantic Tropical Experiment.

480. Quark ensembles with the infinite correlation length

    NASA Astrophysics Data System (ADS)

    Zinov'ev, G. M.; Molodtsov, S. V.

    2015-01-01

    A number of exactly integrable (quark) models of quantum field theory with an infinite correlation length have been considered.
It has been shown that the standard vacuum quark ensemble, the Dirac sea (in the case of a space-time dimension higher than three), is unstable because of the strong degeneracy of the state, which is due to the character of the energy distribution. When the momentum cutoff parameter tends to infinity, the distribution becomes infinitely narrow, leading to large (unlimited) fluctuations. Various vacuum ensembles (Dirac sea, neutral ensemble, color superconductor, and BCS state) have been compared. In the case of the color interaction between quarks, the BCS state is clearly selected as the ground state of the quark ensemble.

481. Ensemble of Thermostatically Controlled Loads: Statistical Physics Approach.

    PubMed

    Chertkov, Michael; Chernyak, Vladimir

    2017-08-17

    Thermostatically controlled loads, e.g., air conditioners and heaters, are by far the most widespread consumers of electricity. Normally the devices are calibrated to provide the so-called bang-bang control, changing from on to off, and vice versa, depending on temperature. We considered aggregation of a large group of similar devices into a statistical ensemble, where the devices operate following the same dynamics, subject to stochastic perturbations and a randomized, Poisson on/off switching policy. Using theoretical and computational tools of statistical physics, we analyzed how the ensemble relaxes to a stationary distribution and established a relationship between the relaxation and the statistics of the probability flux associated with the devices' cycling in the mixed (discrete, switch on/off, and continuous, temperature) phase space. This allowed us to derive the spectrum of the non-equilibrium (detailed balance broken) statistical system and uncover how the switching policy affects oscillatory trends and the speed of the relaxation. Relaxation of the ensemble is of practical interest because it describes how the ensemble recovers from significant perturbations, e.g., forced temporary switching off aimed at utilizing the flexibility of the ensemble to provide "demand response" services, changing consumption temporarily to balance a larger power grid. We discuss how the statistical analysis can guide further development of the emerging demand response technology.
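    The ensemble dynamics described in this record (continuous temperature drift plus randomized Poisson on/off switching) are straightforward to simulate directly and to watch relax toward a stationary distribution. A minimal sketch with invented device parameters:

        import numpy as np

        rng = np.random.default_rng(6)
        n, steps, dt = 5000, 2000, 1.0          # devices, time steps, seconds per step
        T_amb, T_set, band = 30.0, 22.0, 1.0    # ambient / set-point / deadband (invented)
        a, cool = 1e-3, 5e-3                    # heat-exchange and cooling rates (invented)
        switch_rate = 0.02                      # Poisson switching intensity at band edges

        T = rng.normal(T_set, 2.0, n)           # initial temperatures
        on = rng.random(n) < 0.5                # initial on/off states

        for _ in range(steps):
            # Continuous temperature dynamics with small stochastic perturbations.
            dT = a * (T_amb - T) - cool * on + rng.normal(0.0, 0.01, n)
            T += dT * dt
            # Randomized (Poisson) bang-bang policy: devices beyond the deadband
            # switch with probability switch_rate * dt per step, not deterministically.
            flip = rng.random(n) < switch_rate * dt
            on = np.where(flip & (T > T_set + band), True,
                 np.where(flip & (T < T_set - band), False, on))

        print(f"fraction on: {on.mean():.3f}, mean temperature: {T.mean():.2f} C")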
482. A Bayesian Ensemble Approach for Epidemiological Projections

    PubMed Central

    Lindström, Tom; Tildesley, Michael; Webb, Colleen

    2015-01-01

    Mathematical models are powerful tools for epidemiology and can be used to compare control actions. However, different models and model parameterizations may provide different predictions of outcomes. In other fields of research, ensemble modeling has been used to combine multiple projections. We explore the possibility of applying such methods to epidemiology by adapting Bayesian techniques developed for climate forecasting. We exemplify the implementation with single-model ensembles based on different parameterizations of the Warwick model, run for the 2001 United Kingdom foot-and-mouth disease outbreak, and compare the efficacy of different control actions. This allows us to investigate the effect that discrepancy among projections based on different modeling assumptions has on the ensemble prediction. A sensitivity analysis showed that the choice of prior can have a pronounced effect on the posterior estimates of quantities of interest, in particular for ensembles with large discrepancy among projections. However, by using a hierarchical extension of the method we show that prior sensitivity can be circumvented. We further extend the method to include a priori beliefs about different modeling assumptions and demonstrate that the effect of this can have different consequences depending on the discrepancy among projections. We propose that the method is a promising analytical tool for ensemble modeling of disease outbreaks.

    PMID:25927892

483. Skill of Ensemble Seasonal Probability Forecasts

    NASA Astrophysics Data System (ADS)

    Smith, Leonard A.; Binter, Roman; Du, Hailiang; Niehoerster, Falk

    2010-05-01

    In operational forecasting, the computational complexity of large simulation models is, ideally, justified by enhanced performance over simpler models. We will consider probability forecasts and contrast the skill of ENSEMBLES-based seasonal probability forecasts of interest to the finance sector (specifically, temperature forecasts for Nino 3.4 and the Atlantic Main Development Region (MDR)).
The ENSEMBLES model simulations will be contrasted against forecasts from statistical models based on the observations (climatological distributions) and against empirical dynamics based on the observations but conditioned on the current state (dynamical climatology). For some start dates, individual ENSEMBLES models yield significant skill even at a lead time of 14 months. The nature of this skill is discussed, and chances for application are noted. Questions surrounding the interpretation of probability forecasts based on these multi-model ensemble simulations are then considered; the distributions considered are formed by kernel dressing the ensemble and blending with the climatology. The sources of apparent (RMS) skill in distributions based on multi-model simulations are discussed, and it is demonstrated that the inclusion of "zero-skill" models in the long range can improve root-mean-square-error scores, casting some doubt on the common justification for the claim that all models should be included in forming an operational probability forecast. It is argued that the rational response varies with lead time.
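    Kernel dressing and climatology blending, as used in this record, turn a finite ensemble into a continuous forecast density: each member is replaced by a kernel, and the mixture is weighted against the climatological distribution. A hedged sketch with Gaussian kernels (the blend weight and bandwidth are free parameters that would normally be trained; the values here are invented):

        import numpy as np
        from scipy.stats import norm

        def blended_density(x, ensemble, clim_mean, clim_std, alpha=0.7, bandwidth=0.4):
            """Forecast density: alpha * kernel-dressed ensemble + (1 - alpha) * climatology."""
            kernels = norm.pdf(x[:, None], loc=ensemble[None, :], scale=bandwidth)
            dressed = kernels.mean(axis=1)                     # equal-weight Gaussian mixture
            clim = norm.pdf(x, loc=clim_mean, scale=clim_std)  # climatological density
            return alpha * dressed + (1 - alpha) * clim

        rng = np.random.default_rng(7)
        ensemble = rng.normal(27.5, 0.3, size=9)   # toy Nino 3.4 SST forecasts (deg C)
        x = np.linspace(25.0, 30.0, 501)
        pdf = blended_density(x, ensemble, clim_mean=26.8, clim_std=0.8)
        print("density integrates to ~1:", np.trapz(pdf, x).round(3))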
485. Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.

    PubMed

    Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang

    2017-01-01

    Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of the simulations are multi-resolution spatial-temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. The distinct parameter settings from different simulation runs at different resolutions constitute a multi-resolution, high-dimensional parameter space. Understanding the correlations between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs, are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to existing correlation visualization techniques. We present the Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plot that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and large spatial-temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, on real-world use cases from our collaborators in computational and predictive science.
487. Evaluating statistical consistency in the ocean model component of the Community Earth System Model (pyCECT v2.0)

    NASA Astrophysics Data System (ADS)

    Baker, Allison H.; Hu, Yong; Hammerling, Dorit M.; Tseng, Yu-heng; Xu, Haiying; Huang, Xiaomeng; Bryan, Frank O.; Yang, Guangwen

    2016-07-01

    The Parallel Ocean Program (POP), the ocean model component of the Community Earth System Model (CESM), is widely used in climate research. Most current work in CESM-POP focuses on improving the model's efficiency or accuracy, such as improving numerical methods, advancing parameterization, porting to new architectures, or increasing parallelism. Since ocean dynamics are chaotic in nature, achieving bit-for-bit (BFB) identical results in ocean solutions cannot be guaranteed for even tiny code modifications, and determining whether modifications are admissible (i.e., statistically consistent with the original results) is non-trivial. In recent work, an ensemble-based statistical approach was shown to work well for software verification (i.e., quality assurance) on atmospheric model data. The general idea of ensemble-based statistical consistency testing is to use a quantitative measurement of the variability of the ensemble of simulations as a metric with which to compare future simulations and make a determination of statistical distinguishability. The capability to determine consistency without BFB results boosts model confidence and provides the flexibility needed, for example, for more aggressive code optimizations and the use of heterogeneous execution environments. Since ocean and atmosphere models have differing characteristics in terms of dynamics, spatial variability, and timescales, we present a new statistical method to evaluate ocean model simulation data that requires the evaluation of ensemble means and deviations in a spatial manner. In particular, the statistical distribution from an ensemble of CESM-POP simulations is used to determine the standard score of any new model solution at each grid point. Then the percentage of points that have scores greater than a specified threshold indicates whether the new model simulation is statistically distinguishable from the ensemble simulations. Both ensemble size and composition are important. Our experiments indicate that the new POP ensemble consistency test (POP-ECT) tool is capable of distinguishing cases that should be statistically consistent with the ensemble from those that should not, as well as providing a simple, objective and systematic way to detect errors in CESM-POP due to the hardware or software stack, positively contributing to quality assurance for the CESM-POP code.
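    The test described here reduces to a per-grid-point standard score against the ensemble, followed by a global exceedance fraction. A schematic version (the threshold and allowed failure fraction are illustrative, not the calibrated POP-ECT values):

        import numpy as np

        def ensemble_consistency(ensemble, new_run, z_threshold=3.0, max_fail_fraction=0.05):
            """Flag a new simulation as inconsistent with an ensemble of accepted runs.

            ensemble : (n_members, ny, nx) fields from accepted simulations.
            new_run  : (ny, nx) field from the modified code/hardware to test.
            """
            mu = ensemble.mean(axis=0)
            sigma = ensemble.std(axis=0, ddof=1)
            z = np.abs(new_run - mu) / np.where(sigma > 0, sigma, np.inf)  # standard scores
            fail_fraction = (z > z_threshold).mean()
            return fail_fraction, fail_fraction <= max_fail_fraction

        rng = np.random.default_rng(8)
        ensemble = rng.normal(15.0, 0.5, size=(60, 80, 100))   # toy SST ensemble
        ok_run = rng.normal(15.0, 0.5, size=(80, 100))         # consistent run
        bad_run = ok_run + 0.8                                 # a biased modification
        for name, run in (("ok", ok_run), ("bad", bad_run)):
            frac, passed = ensemble_consistency(ensemble, run)
            print(f"{name}: fraction of points with |z| > 3 = {frac:.3f}, pass = {passed}")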
488. Collective phase response curves for heterogeneous coupled oscillators

    NASA Astrophysics Data System (ADS)

    Hannay, Kevin M.; Booth, Victoria; Forger, Daniel B.

    2015-08-01

    Phase response curves (PRCs) have become an indispensable tool in understanding the entrainment and synchronization of biological oscillators. However, biological oscillators are often found in large coupled heterogeneous systems, and the variable of physiological importance is the collective rhythm resulting from an aggregation of the individual oscillations. To study this phenomenon we consider phase resetting of the collective rhythm for large ensembles of globally coupled Sakaguchi-Kuramoto oscillators. Making use of Ott-Antonsen theory, we derive an asymptotically valid analytic formula for the collective PRC. A result of this analysis is a characteristic scaling for the change in the amplitude and entrainment points of the collective PRC compared with the individual oscillator PRC. We support the analytical findings with numerical evidence and demonstrate the applicability of the theory to large ensembles of coupled neuronal oscillators.
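    A brute-force numerical counterpart to the analytic result is to simulate a large Sakaguchi-Kuramoto ensemble, apply a phase reset to the individual oscillators, and measure the asymptotic shift of the collective phase against an unperturbed copy; sweeping the reset over the mean-field cycle traces out the collective PRC. A compressed sketch of one such measurement (the parameters and the individual resetting function are invented):

        import numpy as np

        rng = np.random.default_rng(9)
        N, K, alpha, dt = 2000, 2.0, 0.2, 0.01
        omega = rng.normal(1.0, 0.1, N)              # heterogeneous natural frequencies

        def step(theta):
            """One Euler step of the mean-field Sakaguchi-Kuramoto model."""
            z = np.exp(1j * theta).mean()            # order parameter r * exp(i * psi)
            r, psi = np.abs(z), np.angle(z)
            return theta + dt * (omega + K * r * np.sin(psi - theta + alpha))

        theta = rng.uniform(0, 2 * np.pi, N)
        for _ in range(5000):                        # relax to the collective rhythm
            theta = step(theta)

        control, kicked = theta.copy(), theta.copy()
        kicked += 0.3 * np.sin(kicked)               # individual resets Q(theta) = 0.3 sin(theta)
        for _ in range(5000):                        # let both copies settle again
            control, kicked = step(control), step(kicked)

        shift = np.angle(np.exp(1j * kicked).mean() / np.exp(1j * control).mean())
        print(f"collective phase shift from the reset: {shift:.4f} rad")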
489. Convergence in France facing Big Data era and Exascale challenges for Climate Sciences

    NASA Astrophysics Data System (ADS)

    Denvil, Sébastien; Dufresne, Jean-Louis; Salas, David; Meurdesoif, Yann; Valcke, Sophie; Caubel, Arnaud; Foujols, Marie-Alice; Servonnat, Jérôme; Sénési, Stéphane; Derouillat, Julien; Voury, Pascal

    2014-05-01

    The presentation will introduce CONVERGENCE, a French national project that has been funded for four years. This project will tackle the big data and computational challenges faced by the climate modelling community in the HPC context. Model simulations are central to the study of complex mechanisms and feedbacks in the climate system and to providing estimates of future and past climate changes. Recent trends in climate modelling are to add more physical components to the modelled system, to increase the resolution of each individual component, and to make more systematic use of large suites of simulations to address many scientific questions. Climate simulations may therefore differ in their initial state, parameter values, representation of physical processes, spatial resolution, model complexity, and degree of realism or idealisation. In addition, there is a strong need for evaluating, improving and monitoring the performance of climate models using a large ensemble of diagnostics, and for better integration of model outputs and observational data. High-performance computing is currently reaching the exascale and has the potential to produce this exponential increase in the size and number of simulations. However, post-processing, analysis, and exploration of the generated data have stalled, and there is a strong need for new tools to cope with the growing size and complexity of the underlying simulations and datasets. Exascale simulations require new scalable software tools to generate, manage and mine those simulations and data in order to extract the relevant information and take the correct decisions. The primary purpose of this project is to develop a platform capable of running large ensembles of simulations with a suite of models, handling the complex and voluminous datasets generated, facilitating the evaluation and validation of the models, and supporting the use of higher-resolution models. We propose to gather interdisciplinary skills to design, using a component-based approach, a specific programming environment for scalable scientific simulations and analytics, integrating new and efficient ways of deploying and analysing the applications on High Performance Computing (HPC) systems. CONVERGENCE, gathering HPC and informatics expertise that cuts across the individual partners and the broader HPC community, will allow the national climate community to leverage information technology (IT) innovations to address its specific needs. Our methodology consists of developing an ensemble of generic elements needed to run the French climate models with different grids and different resolutions, ensuring efficient and reliable execution of these models, managing a large volume and number of data files, and allowing analysis of the results and precise evaluation of the models. These elements include data structure definition and input-output (IO), code coupling and interpolation, as well as runtime and pre/post-processing environments. A common data and metadata structure will allow consistent information to be transferred between the various elements. All these generic elements will be open source and publicly available. The IPSL-CM and CNRM-CM climate models will make use of these elements, which will constitute a national platform for climate modelling. This platform will be used, in its entirety, to optimise and tune the next version of the IPSL-CM model and to develop a global coupled climate model with regional grid refinement. It will also be used, at least partially, to run ensembles of the CNRM-CM model at relatively high resolution and to run a very-high-resolution prototype of this model. The climate models we have developed are already involved in many international projects. For instance, we participate in the very demanding but highly visible CMIP (Coupled Model Intercomparison Project): its results are widely used and are, in particular, synthesised in the IPCC (Intergovernmental Panel on Climate Change) assessment reports.
The CONVERGENCE project will constitute an invaluable step for the French climate community in preparing for, and contributing more fully to, the next phase of the CMIP project.

490. The Influence of Internal Model Variability in GEOS-5 on Interhemispheric CO2 Exchange

    NASA Technical Reports Server (NTRS)

    Allen, Melissa; Erickson, David; Kendall, Wesley; Fu, Joshua; Ott, Leslie; Pawson, Steven

    2012-01-01

    An ensemble of eight atmospheric CO2 simulations was completed for the years 2000-2001 employing the National Aeronautics and Space Administration (NASA) Goddard Earth Observing System, Version 5 (GEOS-5), each member with initial meteorological conditions corresponding to a different day in January 2000, in order to examine internal model variability. Globally, the model runs show similar concentrations of CO2 for the two years, but in regions of high CO2 concentrations due to fossil fuel emissions, large differences among the simulations appear. The phasing and amplitude of the CO2 cycle at Northern Hemisphere locations in all of the ensemble members are similar to those of surface observations. In several Southern Hemisphere locations, however, some of the GEOS-5 model CO2 cycles are out of phase by as much as four months, and large variations occur between the ensemble members. This result indicates that there is large sensitivity to transport in these regions. The differences vary by latitude, with the most extreme differences in the tropics and the least at the South Pole. Examples of these differences among the ensemble members with regard to CO2 uptake and respiration by the terrestrial biosphere and CO2 emissions from fossil fuels are shown at Cape Grim, Tasmania. Integration-based flow analysis of the atmospheric circulation in the model runs shows widely varying paths of flow into the Tasmania region among the ensemble members, including sources from North America, South America, South Africa, South Asia and Indonesia. These results suggest that interhemispheric transport can be strongly influenced by internal model variability.

491. DART: A Community Facility Providing State-of-the-Art, Efficient Ensemble Data Assimilation for Large (Coupled) Geophysical Models

    NASA Astrophysics Data System (ADS)

    Hoar, T. J.; Anderson, J. L.; Collins, N.; Kershaw, H.; Hendricks, J.; Raeder, K.; Mizzi, A. P.; Barré, J.; Gaubert, B.; Madaus, L. E.; Aydogdu, A.; Raeder, J.; Arango, H.; Moore, A. M.; Edwards, C. A.; Curchitser, E. N.; Escudier, R.; Dussin, R.; Bitz, C. M.; Zhang, Y. F.; Shrestha, P.; Rosolem, R.; Rahman, M.

    2016-12-01

    Strongly coupled ensemble data assimilation with multiple high-resolution model components requires massive state vectors that need to be efficiently stored and accessed throughout the assimilation process.
Supercomputer architectures are tending towards more cores per node but the same or less memory per node. Recent advances in the Data Assimilation Research Testbed (DART), a freely available community ensemble data assimilation facility that works with dozens of large geophysical models, have addressed the need to run with a smaller memory footprint on higher node counts by using MPI-2 one-sided communication for non-blocking, asynchronous access to distributed data. DART runs efficiently on many computational platforms, ranging from laptops to thousands of cores on the newest supercomputers. Benefits of the new DART implementation will be shown. In addition, overviews of the most recently supported models will be presented: CAM-CHEM, WRF-CHEM, CM1, OpenGGCM, FESOM, ROMS, CICE5, TerrSysMP (COSMO, CLM, ParFlow), JULES, and CABLE. DART provides a comprehensive suite of software, documentation, and tutorials that can be used for ensemble data assimilation research, operations, and education. Scientists and software engineers at NCAR are available to support DART users who want to use existing DART products or develop their own applications. Current DART users range from university professors teaching data assimilation, to individual graduate students working with simple models, through national laboratories and state agencies doing operational prediction with large state-of-the-art models.

492. Universal shocks in the Wishart random-matrix ensemble.

    PubMed

    Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr

    2013-05-01

    We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large-N limit, this equation generalizes the simple inviscid Burgers equation obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite-N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (the so-called hard-edge singularities) familiar in random-matrix theory.
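    Schematically, the structure described in this abstract is that the logarithmic derivative of the averaged characteristic polynomial obeys a Burgers-type equation whose viscosity is set by 1/N. The following LaTeX sketch shows only that generic form (normalizations and exact prefactors are model-specific and not taken from the paper):

        % f is proportional to the logarithmic derivative of the averaged
        % characteristic polynomial of the diffusing matrix W(t).
        \[
          f(x,t) \propto \partial_x \log \left\langle \det\big(x - W(t)\big) \right\rangle,
          \qquad
          \partial_t f + f\,\partial_x f = \nu_N\,\partial_x^2 f,
          \qquad
          \nu_N \sim \frac{1}{N},
        \]
        % The inviscid equation, recovered as N goes to infinity, develops crossing
        % characteristics; the finite-N viscosity smooths these shocks, where the
        % universal hard-edge (Bessel) oscillations appear.
        \[
          N \to \infty: \quad \partial_t f + f\,\partial_x f = 0 .
        \]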
Universal shocks in the Wishart random-matrix ensemble

    PubMed

    Blaizot, Jean-Paul; Nowak, Maciej A.; Warchoł, Piotr

    2013-05-01

    We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large-N limit, this equation generalizes the simple inviscid Burgers equation obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite-N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (the so-called hard-edge singularities) familiar in random-matrix theory.

Evaluation of NMME temperature and precipitation bias and forecast skill for South Asia

    NASA Astrophysics Data System (ADS)

    Cash, Benjamin A.; Manganello, Julia V.; Kinter, James L.

    2017-08-01

    Systematic error and forecast skill for temperature and precipitation in two regions of southern Asia are investigated using hindcasts initialized on May 1 from the North American Multi-Model Ensemble. We focus on two contiguous but geographically and dynamically diverse regions: the Extended Indian Monsoon Rainfall region (70-100E, 10-30N) and the nearby mountainous area of Pakistan and Afghanistan (60-75E, 23-39N). Forecast skill is assessed using the sign-test framework, a rigorous statistical method that can be applied to non-Gaussian variables such as precipitation and to different ensemble sizes without introducing bias. We find that the models show significant systematic error in both precipitation and temperature for both regions. The multi-model ensemble mean (MMEM) consistently yields the lowest systematic error and the highest forecast skill for both regions and variables. However, we also find that the MMEM provides a statistically significant increase in skill over climatology only in the first month of the forecast. While the MMEM tends to provide higher overall skill than climatology later in the forecast, the differences are not significant at the 95% level. We also find that MMEMs constructed from a relatively small number of ensemble members per model can equal or exceed the skill of MMEMs constructed from more members. This suggests that some ensemble members contribute nothing to overall skill, or even detract from it.
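The appeal of the sign test here is that it only asks, for each paired case, which forecast was closer to the truth, so no Gaussian assumption is needed and unequal ensemble sizes introduce no bias. A minimal sketch with synthetic stand-in data (not NMME output; the paper's exact formulation may differ):

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)

# Synthetic verification series: stand-ins for observed anomalies,
# a forecast, and a climatological (zero-anomaly) reference.
obs = rng.normal(size=30)
forecast = obs + rng.normal(scale=0.8, size=30)   # somewhat skilful
climatology = np.zeros_like(obs)

# Sign test: in how many cases is the forecast closer to the truth
# than the reference?  Ties are discarded.
wins = np.abs(forecast - obs) < np.abs(climatology - obs)
losses = np.abs(forecast - obs) > np.abs(climatology - obs)
k, n = int(wins.sum()), int(wins.sum() + losses.sum())

# Under "no difference in skill" the win count is Binomial(n, 0.5),
# regardless of the distribution of the underlying variable.
result = binomtest(k, n, p=0.5, alternative='greater')
print(f"forecast beats climatology in {k}/{n} cases, p = {result.pvalue:.3f}")
```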
Representation of photon limited data in emission tomography using origin ensembles

    NASA Astrophysics Data System (ADS)

    Sitek, A.

    2008-06-01

    Representation and reconstruction of data obtained by emission tomography scanners are challenging due to high noise levels in the data. Typically, images obtained from tomographic measurements are represented using grids. In this work, we define images as sets of origins of events detected during tomographic measurements; we call these origin ensembles (OEs). A state in the ensemble is characterized by a vector of 3N parameters Y, where the parameters are the coordinates of the origins of detected events in three-dimensional space and N is the number of detected events. The 3N-dimensional probability density function (PDF) for that ensemble is derived, and we present an algorithm for OE image estimation from tomographic measurements. A displayable image (e.g. a grid-based image) is derived from the OE formulation by calculating ensemble expectations based on the PDF using the Markov chain Monte Carlo method. The approach was applied to computer-simulated 3D list-mode positron emission tomography data. The reconstruction errors for a simulated 10,000,000-event acquisition ranged from 0.1 to 34.8%, depending on object size and sampling density. The method was also applied to experimental data, and the results of the OE method were consistent with those obtained by a standard maximum-likelihood approach. The method is a new approach to the representation and reconstruction of data obtained by photon-limited emission tomography measurements.
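The real OE posterior couples all events through the scanner's system matrix, which is well beyond a few lines. The deliberately simplified 1-D caricature below keeps only the skeleton of the machinery: the state is one origin coordinate per detected event, origins are updated by Metropolis moves, and the displayed image is an ensemble expectation over the sampled states. The Gaussian detector response and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "scanner": two point sources; each detected event records
# the origin blurred by a Gaussian detector response (assumption).
truth = rng.choice([-2.0, 2.0], size=2000)
detected = truth + rng.normal(scale=1.0, size=truth.size)
sigma = 1.0                              # known detector blur

# State Y = one origin coordinate per detected event.
origins = detected.copy()

def log_like(x, d):
    # log p(detection d | origin x) for the Gaussian response
    return -0.5 * ((d - x) / sigma) ** 2

# Metropolis sweeps: propose a shift of every origin, accept with the
# usual ratio; the histogram of sampled origins, accumulated after
# burn-in, is the displayable (ensemble-expectation) image.
bins = np.linspace(-6, 6, 61)
image = np.zeros(bins.size - 1)
for sweep in range(200):
    prop = origins + rng.normal(scale=0.5, size=origins.size)
    accept = np.log(rng.random(origins.size)) < (
        log_like(prop, detected) - log_like(origins, detected))
    origins[accept] = prop[accept]
    if sweep >= 50:                      # discard burn-in
        image += np.histogram(origins, bins=bins)[0]

centers = 0.5 * (bins[:-1] + bins[1:])
print("two strongest image bins near:", centers[np.argsort(image)[-2:]])
```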
Quantum Entanglement in Random Physical States

    NASA Astrophysics Data System (ADS)

    Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo

    2012-07-01

    Most states in the Hilbert space are maximally entangled. This fact has proven useful for investigating, among other things, the foundations of statistical mechanics. Unfortunately, most states in the Hilbert space of a quantum many-body system are not physically accessible. We define physical ensembles of states by acting on random factorized states with a circuit of length k built from random, independent unitaries with local support. We study the typicality of entanglement by means of the purity of the reduced state. We find that for a time k=O(1), the typical purity obeys the area law. Thus, the upper bounds for the area law are actually saturated, on average, with a variance that goes to zero for large systems. Similarly, we prove that under local evolution a subsystem of linear dimension L is typically entangled with a volume law when the time scales with the size of the subsystem. Moreover, we show that for large values of k the reduced state becomes very close to the completely mixed state.

Uncovering low dimensional macroscopic chaotic dynamics of large finite size complex systems

    NASA Astrophysics Data System (ADS)

    Skardal, Per Sebastian; Restrepo, Juan G.; Ott, Edward

    2017-08-01

    In the last decade, it has been shown that a large class of phase oscillator models admit low dimensional descriptions of the macroscopic system dynamics in the limit of an infinite number N of oscillators. Whether the macroscopic dynamics of other, similar systems also have a low dimensional description in the infinite-N limit has, however, remained elusive. In this paper, we show how techniques originally designed to analyze noisy experimental chaotic time series can be used to identify effective low dimensional macroscopic descriptions from simulations with a finite number of elements. We illustrate and verify the effectiveness of our approach by applying it to the dynamics of an ensemble of globally coupled Landau-Stuart oscillators, for which we demonstrate low dimensional macroscopic chaotic behavior with an effective 4-dimensional description. Using this description, we show that one can calculate dynamical invariants such as Lyapunov exponents and attractor dimensions. One could also use the reconstruction to generate short-term predictions of the macroscopic dynamics.

Large near-term projected snowpack loss over the western United States

    NASA Astrophysics Data System (ADS)

    Fyfe, John C.; Derksen, Chris; Mudryk, Lawrence; Flato, Gregory M.; Santer, Benjamin D.; Swart, Neil C.; Molotch, Noah P.; Zhang, Xuebin; Wan, Hui; Arora, Vivek K.; Scinocca, John; Jiao, Yanjun

    2017-04-01

    Peak runoff in streams and rivers of the western United States is strongly influenced by the melting of accumulated mountain snowpack. A significant decline in this resource has a direct connection to streamflow, with substantial economic and societal impacts. Observations and reanalyses indicate that between the 1980s and 2000s there was a 10-20% loss in the annual maximum amount of water contained in the region's snowpack. Here we show that this loss is consistent with results from a large ensemble of climate simulations forced with natural and anthropogenic changes, but is inconsistent with simulations forced by natural changes alone. A further loss of up to 60% is projected within the next 30 years. Uncertainties in the loss estimates depend on the size and rate of response to continued anthropogenic forcing and on the magnitude and phasing of internal decadal variability. The projected losses have serious implications for the hydropower, municipal and agricultural sectors in the region.

Identifying Optimal Measurement Subspace for the Ensemble Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ning; Huang, Zhenyu; Welch, Greg

    2012-05-24

    To reduce the computational load of the ensemble Kalman filter while maintaining its efficacy, an optimization algorithm based on the generalized eigenvalue decomposition method is proposed for identifying the most informative measurement subspace. When the number of measurements is large, the proposed algorithm can be used to make an effective tradeoff between computational complexity and estimation accuracy. The algorithm can also be extended to other Kalman filters for measurement subspace selection.
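The abstract does not give the exact criterion, but a common GEVD-based construction of an informative measurement subspace solves (H P Hᵀ) v = λ R v and keeps the eigenvectors with the largest λ, i.e. the measurement combinations with the most ensemble-predicted signal relative to their noise. A sketch under those assumptions, with all dimensions and operators invented:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)

# Toy ensemble forecast: n-dimensional state, m candidate measurements.
n, m, members = 20, 50, 40
ensemble = rng.normal(size=(n, members))
H = rng.normal(size=(m, n))              # measurement operator (assumption)
R = np.eye(m)                            # measurement-error covariance

# Forecast-error covariance estimated from the ensemble perturbations.
A = ensemble - ensemble.mean(axis=1, keepdims=True)
P = A @ A.T / (members - 1)

# Generalized eigenproblem (H P H^T) v = lam R v: large lam marks
# linear combinations of measurements that are informative relative
# to their noise.
lam, V = eigh(H @ P @ H.T, R)
order = np.argsort(lam)[::-1]
k = 5                                    # retained subspace size
T = V[:, order[:k]].T                    # projection onto the subspace

# One would then assimilate T @ y with operator T @ H and noise
# covariance T @ R @ T.T instead of the full measurement vector y.
print("retained signal-to-noise ratios:", np.round(lam[order[:k]], 2))
```

The tradeoff the abstract describes is then just the choice of k: a smaller retained subspace cuts the filter's per-update cost at some loss of estimation accuracy.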
Variability of space climate and its extremes with successive solar cycles

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Hush, Phillip; Tindale, Elisabeth; Dunlop, Malcolm; Watkins, Nicholas

    2016-04-01

    Auroral geomagnetic indices coupled with in situ solar wind monitors provide a comprehensive data set spanning several solar cycles. Space climate can be considered as the distribution of space weather. We can then characterize these observations in terms of changing space climate by quantifying how the statistical properties of ensembles of the observed variables vary between different phases of the solar cycle. We first consider the AE index burst distribution. Bursts are constructed by thresholding the AE time series; the size of a burst is the sum of the excess in the time series over each time interval during which the threshold is exceeded. The distribution of burst sizes has two components, with a crossover in behaviour at thresholds of about 1000 nT. Above this threshold, we find [1] a range over which the mean burst size is almost constant with threshold for both solar maxima and minima. The burst size distribution of the largest events has an exponential functional form. The relative likelihood of these large events varies from one solar maximum and minimum to the next. If the relative overall activity of a solar maximum/minimum can be estimated, these results then constrain the likelihood of extreme events of a given size for that solar maximum/minimum. We next develop and apply a methodology to quantify how the full distribution of geomagnetic indices and upstream solar wind observables changes between and across different solar cycles. This methodology [2] estimates how different quantiles of the distribution, or equivalently the return times of events of a given size, are changing.

    [1] Hush, P., S. C. Chapman, M. W. Dunlop, and N. W. Watkins (2015), Robust statistical properties of the size of large burst events in AE, Geophys. Res. Lett., 42, doi:10.1002/2015GL066277.
    [2] Chapman, S. C., D. A. Stainforth, and N. W. Watkins (2013), On estimating long term local climate trends, Phil. Trans. R. Soc. A, 371, 20120287, doi:10.1098/rsta.2012.0287.
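The burst construction itself is mechanical: threshold the series, and for each contiguous excursion above the threshold sum the excess. A direct implementation of that definition on a synthetic stand-in series (not real AE data; the gamma-distributed values and thresholds are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for a 1-minute auroral index time series (nT).
ae = rng.standard_gamma(1.5, size=100_000) * 300

def burst_sizes(x, threshold):
    """Sum of the excess above `threshold` over each contiguous
    excursion; one number per burst, as defined in the abstract."""
    sizes, current = [], 0.0
    for xi in x:
        if xi > threshold:
            current += xi - threshold    # accumulate the excess
        elif current > 0.0:
            sizes.append(current)        # excursion ended: close the burst
            current = 0.0
    if current > 0.0:
        sizes.append(current)            # series ended mid-burst
    return np.asarray(sizes)

for thr in (500, 1000, 2000):
    s = burst_sizes(ae, thr)
    if s.size:
        print(f"threshold {thr} nT: {s.size} bursts, "
              f"mean excess {s.mean():.0f} nT")
```

Sweeping the threshold and tracking the mean burst size, as in the abstract, is then a matter of repeating the loop over a finer grid of thresholds.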
Scalable Metropolis Monte Carlo for simulation of hard shapes

    NASA Astrophysics Data System (ADS)

    Anderson, Joshua A.; Irrgang, M. Eric; Glotzer, Sharon C.

    2016-07-01

    We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC) and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 minutes on 96 CPU cores on XSEDE Comet; the same simulation would take 7.6 hours in serial. HPMC also scales to large system sizes: the same benchmark with 16.8 million particles runs in 1.4 hours on 2048 GPUs on OLCF Titan.
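HPMC's BVH trees and checkerboard GPU kernels are engineering on top of a very small core algorithm: for hard shapes the Metropolis acceptance rule degenerates to "accept a trial move iff it creates no overlap." A serial toy hard-disk NVT sketch of that core (all parameters invented; nothing like HPMC's actual implementation or API):

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal serial hard-disk NVT Monte Carlo in a periodic square box.
L, N, radius, dmax = 10.0, 50, 0.4, 0.3

# Start from a non-overlapping square lattice.
g = int(np.ceil(np.sqrt(N)))
pos = (np.mgrid[0:g, 0:g].reshape(2, -1).T[:N] + 0.5) * (L / g)

def overlaps(i, trial):
    """True if disk i placed at `trial` overlaps any other disk."""
    d = pos - trial
    d -= L * np.round(d / L)             # minimum-image convention
    r2 = np.einsum('ij,ij->i', d, d)
    r2[i] = np.inf                       # ignore self
    return bool(np.any(r2 < (2 * radius) ** 2))

accepted, sweeps = 0, 200
for _ in range(sweeps):
    for i in range(N):
        trial = (pos[i] + rng.uniform(-dmax, dmax, 2)) % L
        if not overlaps(i, trial):       # hard shapes: accept iff no overlap
            pos[i] = trial
            accepted += 1

print(f"acceptance ratio: {accepted / (sweeps * N):.2f}")
```

The O(N) overlap scan per move is exactly what cell lists and BVH trees replace with a local neighbor query, and the checkerboard decomposition lets many such moves proceed in parallel without conflicting.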